cvpr cvpr2013 cvpr2013-125 knowledge-graph by maker-knowledge-mining

125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data


Source: pdf

Author: Yi-Chen Chen, Vishal M. Patel, Jaishanker K. Pillai, Rama Chellappa, P. Jonathon Phillips

Abstract: We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. [sent-4, score-0.734]

2 The dictionary learning problem is solved using an iterative alternating algorithm. [sent-5, score-0.426]

3 At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. [sent-6, score-0.597]

4 The confidence of each sample is defined as the probability distribution on its ambiguous labels. [sent-7, score-0.258]

5 The dictionaries are updated using either soft (EM-based) or hard decision rules. [sent-8, score-0.336]

6 Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches. [sent-9, score-0.629]

7 Introduction: In many practical image and video applications, one has access only to ambiguously labeled data. [sent-11, score-0.59]

8 The problem of learning identities where each example is associated with multiple labels, only one of which is correct, is often known as ambiguously labeled learning. [sent-13, score-0.629]

9 A semi-supervised dictionary-based learning method was proposed in [18] under the formulation where there are either labeled samples or totally unlabeled samples available for training. [sent-16, score-0.291]

10 The method iteratively estimates the confidence of unlabeled samples belonging to each class. [sent-17, score-0.348]

11 We say a signal x is sparse in dictionary D if it can be approximated by x = Dt, where t is a sparse vector and D is a dictionary that contains atoms as its columns. [sent-32, score-0.472]
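
To make the sparse-representation idea above concrete, here is a small illustrative sketch (not code from the paper) that recovers a sparse code t with x ≈ Dt via orthogonal matching pursuit; the dictionary, sizes, and function name are all made up for the example:

```python
import numpy as np

def omp(D, x, sparsity):
    """Greedy orthogonal matching pursuit: find a sparse t with x ~= D @ t."""
    residual = x.copy()
    support = []
    t = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(sparsity):
        # Pick the atom (column of D) most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Refit the coefficients on the selected atoms by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    t[support] = coeffs
    return t

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
t_true = np.zeros(128)
t_true[[3, 40, 99]] = [1.0, -0.5, 2.0]  # a 3-sparse code
x = D @ t_true
print(np.linalg.norm(x - D @ omp(D, x, sparsity=3)))  # near zero
```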

12 The dictionary D can be analytic, such as a redundant Gabor dictionary, or it can be trained directly from data. [sent-33, score-0.714]

13 It has been observed that learning a dictionary directly from training data rather than using a predetermined dictionary usually leads to better representation. [sent-34, score-0.753]

14 Thus, learned dictionaries generally yield superior results in many practical image processing applications such as restoration and classification. [sent-35, score-0.195]

15 This has motivated researchers to develop dictionary learning algorithms for supervised [15], [11], [17], [14], [16], semi-supervised [18] and unsupervised [20], [4], [9] learning. [sent-36, score-0.396]

16 In this paper, we consider a dictionary learning problem where each training sample is provided with a set of possible labels and only one label among them is the true one. [sent-37, score-0.589]

17 We develop dictionary learning algorithms that process ambiguously labeled data. [sent-38, score-0.659]

18 Given ambiguously labeled training data (e.g., faces), the algorithm consists of two main steps: confidence update and dictionary update. [sent-43, score-0.567]

19 The confidence for each sample is defined as the probability distribution on its ambiguous labels. [sent-44, score-0.191]

20 In the confidence update phase, the confidence is updated for each sample according to its residuals when the sample is projected onto different class dictionaries. [sent-45, score-0.557]

21 Then, the dictionary is updated using a fixed confidence. [sent-46, score-0.401]
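
The alternation just described fits in a few lines; the sketch below is schematic, with the two update rules passed in as functions (they are placeholders for the rules detailed later, not the paper's code):

```python
def learn_from_ambiguous_labels(X, label_sets, init_dicts,
                                update_confidences, update_dictionaries,
                                n_iters=20):
    """Alternate confidence and dictionary updates (schematic sketch).

    X: training features; label_sets[i] is the candidate label set L_i;
    init_dicts: one initial dictionary per class (see initialization below).
    """
    D = init_dicts
    P = None
    for t in range(n_iters):
        # Step 1: update each sample's confidence over its label set L_i.
        P = update_confidences(X, label_sets, D)
        # Step 2: with confidences fixed, re-learn the class dictionaries
        # using either the hard (DLHD) or the soft (DLSD) decision rule.
        D = update_dictionaries(X, label_sets, P)
    return D, P
```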

22 (b) An illustration of how common label samples are collected to learn intermediate dictionaries, which are used to update the confidence for sample xi. [sent-70, score-0.429]

23 We propose a dictionary-based learning method when ambiguously labeled data are provided for training. [sent-72, score-0.629]

24 We show that our dictionary learning with the soft decision rule is an EM-based dictionary learning method. [sent-76, score-0.889]

25 We propose a weighted K-SVD [1] algorithm to weigh the importance of samples according to their confidences during the learning process. [sent-78, score-0.155]

26 The true label zi of the ith training sample is in the multi-label set Li. [sent-95, score-0.162]

27 For each feature vector xi and for each class j, we define a latent variable pi,j, which represents the confidence of xi belonging to the jth class. [sent-97, score-0.383]

28 Define Cj to be the collection of samples in class j represented as a matrix, and C = [C1, C2, · · · , CK] to be the concatenation of all samples from the different classes. [sent-107, score-0.22]

29 Similarly, let Dj be the dictionary that is learned from the data in Cj and D = [D1, D2 , · · · , DK] be the concatenation of all dictionaries. [sent-108, score-0.357]

30 Given this ambiguously labeled data, how can one learn dictionaries to represent each class? [sent-110, score-0.785]

31 We solve the dictionary learning problem using an iterative alternating algorithm. [sent-111, score-0.426]

32 At each iteration, two major steps are performed: confidence update and dictionary update. [sent-112, score-0.567]

33 Confidence Update: We use the notation D(t), P(t) to denote the dictionary matrix and the confidence matrix, respectively, in the tth iteration. [sent-118, score-0.494]

34 Keeping the dictionary D(t) fixed, the confidence of a feature vector belonging to classes outside its label set is fixed at 0 and is not updated. [sent-119, score-0.593]

35 To update the confidence of a sample belonging to classes in its label set, we first make the observation that a sample [sent-120, score-0.475]

36 which is well represented by the dictionary of class j should have high confidence (we refer to class matrices and clusters interchangeably). [sent-121, score-0.415]

37 In other words, the confidence of a sample xi belonging to a class j should be inversely proportional to the reconstruction error that results when xi is projected onto Dj. [sent-122, score-0.485]

38 This can be done by updating the confidence matrix P(t) as in (2): p(t)i,j is made inversely related to the residual of xi under Dj through a Gaussian density with mixture weight β(t)j and scale σ(t)j, normalized over the labels in Li. [sent-123, score-0.214]
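
A sketch of this update, assuming the Gaussian-density form just described; the exact exponent in the paper's (2) (for instance whether the residual is squared, or how β and σ are re-estimated) is not recoverable from this summary, so the density below is our assumption:

```python
import numpy as np

def update_confidences(residuals, label_sets, beta, sigma):
    """Residual-based confidence update (a sketch in the spirit of eq. (2)).

    residuals[i, j]: reconstruction error of x_i projected onto D_j.
    beta[j], sigma[j]: weight and scale of class j's density.
    Rows of P are distributions over each candidate set L_i; entries for
    classes outside L_i stay fixed at 0 and are never updated.
    """
    n, K = residuals.shape
    P = np.zeros((n, K))
    for i, Li in enumerate(label_sets):
        idx = np.array(sorted(Li))
        # A smaller residual gives a larger (unnormalized) confidence.
        scores = beta[idx] * np.exp(-residuals[i, idx] / sigma[idx])
        P[i, idx] = scores / scores.sum()
    return P

# Toy usage: 2 samples, 3 classes, candidate sets {0,1} and {1,2}.
res = np.array([[0.1, 0.9, 0.5],
                [0.4, 0.2, 0.3]])
print(update_confidences(res, [{0, 1}, {1, 2}],
                         beta=np.ones(3), sigma=np.ones(3)))
```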

39 In Section 3, we derive (2) under the assumption that the likelihood of each sample xi is a mixture of Gaussian densities, where β(t)j is the weight associated with the density of label j. [sent-134, score-0.185]

40 Cluster Update: Once the confidence matrix P(t) is updated, we use it to update the class matrix C(t+1). [sent-135, score-0.268]

41 Given a class matrix C(t+1)j, we seek a dictionary D(t+1)j that provides the sparsest representation for each example feature in this matrix, by solving the optimization problem (D(t+1)j, Γ(t+1)j) = argminD,Γ ||C(t+1)j − DΓ||F² subject to a sparsity constraint on each column of Γ. [sent-139, score-0.415]

42 The K-SVD algorithm alternates between sparse-coding and dictionary-update steps. [sent-152, score-0.43]

43 In the dictionary-update step, the dictionary is updated atom-by-atom in an efficient way. [sent-155, score-0.401]
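
For reference, a compact sketch of the standard K-SVD dictionary-update step [1] that the text refers to: each atom and its coefficient row are refit by the best rank-1 approximation (leading SVD pair) of the residual restricted to the samples that actually use the atom. This is the textbook update, not the paper's exact implementation:

```python
import numpy as np

def ksvd_dictionary_update(D, Gamma, C):
    """One K-SVD dictionary-update pass (atom by atom), so C ~= D @ Gamma.

    D: (d, K) dictionary, Gamma: (K, n) sparse codes, C: (d, n) data.
    """
    for k in range(D.shape[1]):
        users = np.nonzero(Gamma[k, :])[0]   # samples whose code uses atom k
        if users.size == 0:
            continue
        # Residual with atom k's contribution removed, restricted to users.
        E = C[:, users] - D @ Gamma[:, users] + np.outer(D[:, k], Gamma[k, users])
        # The leading singular pair gives the best rank-1 refit.
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        Gamma[k, users] = s[0] * Vt[0, :]
    return D, Gamma

# Toy usage on random data with sparse codes.
rng = np.random.default_rng(1)
C = rng.normal(size=(16, 50))
D = rng.normal(size=(16, 8)); D /= np.linalg.norm(D, axis=0)
Gamma = rng.normal(size=(8, 50)) * (rng.random((8, 50)) < 0.2)
D, Gamma = ksvd_dictionary_update(D, Gamma, C)
```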

44 The entire approach for learning dictionaries from ambiguously labeled data using hard decisions is summarized in Algorithm 1. [sent-156, score-0.878]

45 The Dictionary Learning Soft Decision Approach: The dictionary learning soft decision (DLSD) approach learns dictionaries that are used to update the confidence for each sample xi, based on the weighted distribution of other samples that share the same candidate label belonging to Li. [sent-159, score-1.194]

46 The weighted distribution of other samples sharing a given candidate label c is computed through the normalization of all pl,c’s with l ≠ i. [sent-160, score-0.17]

47 Confidence Update: In this step, given the intermediate dictionary D(t),i learned from the previous iteration for each sample xi, we calculate, for all jl in Li, the residuals e(t)jl,i of xi under the corresponding intermediate class dictionary. [sent-162, score-0.542]

48 We then use (2) to update the confidence, with the residuals replaced by e(t)jl,i. [sent-165, score-0.21]

49 Common ambiguous label samples are collected to learn the intermediate dictionaries; the cell marked at the (i, j) entry indicates a non-zero p(t)i,j. [sent-177, score-0.427]

50 To learn the intermediate dictionaries for xi, exclusion of xi (corresponding to red cells) is necessary to enhance discriminative learning. [sent-181, score-0.302]

51 The weight wm reflects the relative amount of contribution from xim when learning the dictionary. [sent-199, score-0.182]
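
One simple way to realize such sample weighting (our assumption; the paper's weighted K-SVD objective in (7) may differ in detail) is to scale each data column by the square root of its weight, since ||(C − DΓ)diag(√w)||F² = Σm wm ||cm − Dγm||², so standard K-SVD on the scaled matrix fits sample m with weight wm:

```python
import numpy as np

def weight_columns(C, w):
    """Scale columns so that standard K-SVD minimizes a weighted fit error.

    ||(C - D @ G) @ diag(sqrt(w))||_F^2
        == sum_m w[m] * ||C[:, m] - (D @ G)[:, m]||^2,
    so high-confidence samples (large w[m]) dominate the learned atoms.
    """
    return C * np.sqrt(np.asarray(w, dtype=float))[None, :]
```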

52 After Tc soft decision iterations, for each training sample, we assign the label with the maximum confidence. [sent-223, score-0.151]

53 The labeled class matrices are used to learn the final dictionary D∗ = D(Tc) via the K-SVD algorithm. [sent-224, score-0.532]
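
In code, this final hard assignment is simply a row-wise argmax over the confidence matrix (toy values for illustration):

```python
import numpy as np

# Confidence matrix after T_c soft-decision iterations; entries outside
# each candidate set L_i are zero by construction, so the argmax always
# returns one of that sample's candidate labels.
P = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.2, 0.8]])
final_labels = np.argmax(P, axis=1)   # -> array([0, 2])
```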

54 In (8), Zl is the random variable that corresponds to the true label zl of the observed sample xl. [sent-239, score-0.157]

55 Determining initial dictionaries: The performance of both DLSD and DLHD will depend on the initial dictionaries, as they determine how well the final dictionaries are learned through successive alternating iterations. [sent-317, score-0.615]

56 As a result, initializing our method with proper dictionaries is critical. [sent-318, score-0.195]

57 In this section, we propose an algorithm that uses both ambiguous labels and features to determine the initial dictionaries. [sent-319, score-0.172]

58 At iteration t = 0, we build dictionaries for the sample xi, denoted by D(0),i = [Dj1(0),i | Dj2(0),i | · · · | Dj|Li|(0),i], [sent-328, score-0.249]

59 where the intermediate dictionary Djk(0),i is learned from samples other than xi with ambiguous label jk ∈ Li. [sent-331, score-0.238]

60 Each initial dictionary is then learned from the corresponding cluster using the K-SVD algorithm [1]. [sent-339, score-0.357]
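
A schematic of this initialization (helper names are placeholders; ksvd stands for any dictionary learner such as K-SVD [1]): pool, for each class, the samples whose ambiguous label sets contain that class, then learn one dictionary per pool; passing exclude=i leaves out xi, as required for xi's own intermediate dictionaries D(0),i:

```python
def initial_dictionaries(X, label_sets, ksvd, exclude=None):
    """Learn one initial dictionary per class from ambiguous labels (sketch).

    X: (d, n) feature matrix; label_sets[m] is the candidate set L_m;
    ksvd: maps a data matrix to a learned dictionary. If exclude is a
    sample index, that sample is left out of every pool.
    """
    n_classes = max(max(Lm) for Lm in label_sets) + 1
    dicts = {}
    for j in range(n_classes):
        members = [m for m, Lm in enumerate(label_sets)
                   if j in Lm and m != exclude]
        if members:
            dicts[j] = ksvd(X[:, members])
    return dicts
```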

61 Note that our method is very different from the approach of learning dictionaries from partially labeled data [18]. [sent-341, score-0.312]

62 The work in [18] learns class discriminative dictionaries while our work learns class reconstructive dictionaries. [sent-342, score-0.365]

63 In addition, from the formulation in [18] we see that there are either labeled samples or totally unlabeled samples available for training. [sent-343, score-0.333]

64 In contrast, in our formulation, all samples are ambiguously labeled according to three controlled parameters. [sent-344, score-0.198]

65 In fact, the formulations in [18] and [20] (for totally unlabeled samples) are special cases of the ambiguously labeled formulation presented in this paper. [sent-345, score-0.644]

66 Experiments: To evaluate the performance of our dictionary method, we performed two sets of experiments defined in [5], [6]: inductive experiments and transductive experiments. [sent-347, score-0.451]

67 We report the average test error rates (for inductive experiments) and the average labeling error rates (for transductive experiments), which were computed over 5 trials. [sent-348, score-0.411]

68 In an inductive experiment, samples are split in half into a training set and a test set. [sent-349, score-0.149]

69 Each sample in the training set is ambiguously labeled according to controlled parameters, while each sample in the test set is unlabeled. [sent-350, score-0.698]

70 In each trial, using the learned dictionaries from the training set, the test error rate is calculated as the ratio of the number of test samples that are erroneously labeled, to the total number of test samples. [sent-351, score-0.313]

71 In a transductive experiment, all samples with ambiguous labels are used to train the dictionaries. [sent-352, score-0.293]

72 In each trial, the labeling error rate is calculated as the ratio of the number of training samples that are erroneously labeled, to the total number of training samples. [sent-353, score-0.167]

73 Following the notations in [6], the controlled parameters are: p (the proportion of ambiguously labeled samples), q (the number of extra labels for each ambiguously labeled sample) and ε [sent-354, score-1.269]

74 (the degree of ambiguity: the maximum probability of an extra label co-occurring with a true label, over all labels and inputs [6]). [sent-355, score-0.143]
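
For concreteness, a sketch of how such controlled ambiguity might be generated (our reading of the p/q protocol in [6]; the ε co-occurrence bias is omitted for simplicity), together with the labeling error rate used in the transductive setting:

```python
import numpy as np

def ambiguate_labels(true_labels, n_classes, p, q, seed=0):
    """Give a fraction p of samples q extra (incorrect) candidate labels."""
    rng = np.random.default_rng(seed)
    label_sets = []
    for z in true_labels:
        Li = {int(z)}                      # the true label is always present
        if q > 0 and rng.random() < p:
            extras = [c for c in range(n_classes) if c != z]
            Li.update(int(c) for c in rng.choice(extras, size=q, replace=False))
        label_sets.append(Li)
    return label_sets

def labeling_error_rate(predicted, true_labels):
    """Fraction of training samples whose assigned label is wrong."""
    predicted, true_labels = np.asarray(predicted), np.asarray(true_labels)
    return float(np.mean(predicted != true_labels))
```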

75 Figures 3(a) and (b) show the average test error rates (for inductive experiments) of the proposed dictionary methods (DLHD and DLSD) versus p and ε. [sent-365, score-0.266]

76 Both dictionary methods are comparable to the Convex Learning from Partial Labels (CLPL) method (denoted as ‘mean’) [6]. [sent-368, score-0.357]

77 Fig. 3(c) shows the average labeling error rates (for transductive experiments) versus q curves. [sent-370, score-0.243]

78 Figures 4(a) and (b) show the average labeling error rates versus p and q in transductive experiments. [sent-382, score-0.29]

79 Clearly, when either p or q is zero in transductive experiments, there exist no ambiguous labels and hence the labeling errors are zero. [sent-384, score-0.261]

80 When 95% of samples are ambiguously labeled, the lowest average labeling error rate, 0. [sent-387, score-0.64]

81 It is observed that when 95% of samples are ambiguously labeled, DLSD achieves the lowest labeling error rate, of 14. [sent-400, score-0.64]

82 Discussions: To explain the performance gain of our dictionary learning approach, in Fig. [sent-404, score-0.391]

83 4, we show curves of two additional baseline methods: ‘no dictionary learning (DL)’ and ‘equally-weighted K-SVD’. [sent-405, score-0.431]

84 The ‘no DL’ method utilizes features and ambiguous labels only, without learning dictionaries. [sent-406, score-0.157]

85 The ‘equally-weighted K-SVD’ method contrasts with the DLSD method by simply using equal weights among the possible samples of each label for dictionary learning. [sent-409, score-0.561]

86 In other words, it ignores the weight matrix W in (7) and learns dictionaries by the standard K-SVD algorithm. [sent-410, score-0.222]

87 Performance of the proposed dictionary methods and other baselines [5], [6] on the LFW dataset. [sent-425, score-0.357]

88 (a) Average test error rates versus the proportion of ambiguously labeled samples (p ∈ [0, 0. [sent-426, score-0.818]

89 (b) Average test error rates versus the degree of ambiguity for each ambiguously labeled sample (p = 1, q = 1, ε [sent-428, score-0.791]

90 (c) Average labeling error rates versus the number of extra labels for each ambiguously labeled sample (p = 1, q ∈ [0, 1, . [sent-430, score-0.929]

91 The proposed dictionary methods are comparable to the CLPL method (‘mean’). [sent-434, score-0.357]

92 Performance of the proposed dictionary methods, two baseline methods (no dictionary learning −‘no DL’, and standard K-SVD −‘equally-weighted K-SVD’), CLPL (‘mean’) and ‘naive’ methods [5], [6] in transductive experiments: labeling error rates versus the proportion of ambiguously labeled samples (p ∈ [sent-436, score-1.663]

93 Despite the variations (Fig. 2(b), (c)) and noise, the learned dictionary atoms in our method are able to account for these variations to some degree. [sent-447, score-0.39]

94 In Fig. 5, we further show the initial (at t = 0) and updated (using DLSD at t = 20) confidence matrices corresponding to this experiment, where samples and labels are indexed vertically and horizontally, respectively. [sent-450, score-0.313]

95 Without any prior knowledge, ambiguously labeled samples have equally probable initial confidences. [sent-451, score-0.706]

96 Initial and updated confidence matrices on the TV series ‘LOST’ (12-class) dataset. [sent-458, score-0.21]

97 Conclusion: We have extended dictionary learning to the case of ambiguously labeled learning, where each example is supplied with multiple labels, only one of which is correct. [sent-462, score-0.986]

98 The proposed method iteratively estimates the confidence of samples belonging to each of the classes and uses it to refine the learned dictionaries. [sent-463, score-0.263]

99 Experiments using three publicly available datasets demonstrate the improved accuracy of the proposed method compared to state-of-the-art ambiguously labeled learning techniques. [sent-464, score-0.629]

100 The K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. [sent-473, score-0.249]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ambiguously', 0.473), ('dictionary', 0.357), ('dlsd', 0.328), ('dlhd', 0.226), ('dictionaries', 0.195), ('dj', 0.168), ('svd', 0.152), ('jt', 0.14), ('confidence', 0.137), ('clpl', 0.127), ('jtl', 0.123), ('labeled', 0.117), ('xl', 0.112), ('ipl', 0.109), ('jl', 0.101), ('dji', 0.101), ('transductive', 0.094), ('pie', 0.084), ('dictio', 0.082), ('samples', 0.081), ('ym', 0.079), ('xi', 0.077), ('update', 0.073), ('algo', 0.073), ('inductive', 0.068), ('ambiguous', 0.067), ('wm', 0.065), ('rates', 0.063), ('class', 0.058), ('patel', 0.056), ('sam', 0.056), ('dic', 0.055), ('label', 0.054), ('sample', 0.054), ('rithm', 0.054), ('decision', 0.054), ('labels', 0.051), ('tionary', 0.051), ('labeling', 0.049), ('zl', 0.049), ('lfw', 0.049), ('represen', 0.048), ('itj', 0.048), ('versus', 0.047), ('pillai', 0.045), ('rama', 0.045), ('belonging', 0.045), ('naive', 0.045), ('updated', 0.044), ('xim', 0.044), ('soft', 0.043), ('subject', 0.042), ('addi', 0.041), ('ardgm', 0.041), ('argdmijaxl', 0.041), ('fiw', 0.041), ('jlt', 0.041), ('lost', 0.041), ('jk', 0.04), ('resized', 0.04), ('learning', 0.039), ('phillips', 0.038), ('extra', 0.038), ('cmu', 0.038), ('dl', 0.038), ('error', 0.037), ('applica', 0.036), ('fidence', 0.036), ('jonathon', 0.036), ('vishal', 0.036), ('tc', 0.036), ('weighted', 0.035), ('equally', 0.035), ('respec', 0.034), ('bution', 0.034), ('jaishanker', 0.034), ('ing', 0.034), ('cour', 0.033), ('faces', 0.033), ('atoms', 0.033), ('pj', 0.032), ('standards', 0.032), ('equalized', 0.032), ('kt', 0.031), ('signal', 0.031), ('alternating', 0.03), ('nary', 0.03), ('tv', 0.03), ('tl', 0.03), ('intermediate', 0.03), ('con', 0.029), ('series', 0.029), ('semi', 0.029), ('mairal', 0.029), ('log', 0.028), ('em', 0.028), ('pi', 0.028), ('unlabeled', 0.027), ('learns', 0.027), ('totally', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data

Author: Yi-Chen Chen, Vishal M. Patel, Jaishanker K. Pillai, Rama Chellappa, P. Jonathon Phillips

Abstract: We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.

2 0.3363649 296 cvpr-2013-Multi-level Discriminative Dictionary Learning towards Hierarchical Visual Categorization

Author: Li Shen, Shuhui Wang, Gang Sun, Shuqiang Jiang, Qingming Huang

Abstract: For the task of visual categorization, the learning model is expected to be endowed with discriminative visual feature representation and flexibilities in processing many categories. Many existing approaches are designed based on a flat category structure, or rely on a set of pre-computed visual features, hence may not be appreciated for dealing with large numbers of categories. In this paper, we propose a novel dictionary learning method by taking advantage of hierarchical category correlation. For each internode of the hierarchical category structure, a discriminative dictionary and a set of classification models are learnt for visual categorization, and the dictionaries in different layers are learnt to exploit the discriminative visual properties of different granularity. Moreover, the dictionaries in lower levels also inherit the dictionary of ancestor nodes, so that categories in lower levels are described with multi-scale visual information using our dictionary learning approach. Experiments on ImageNet object data subset and SUN397 scene dataset demonstrate that our approach achieves promising performance on data with large numbers of classes compared with some state-of-the-art methods, and is more efficient in processing large numbers of categories.

3 0.32042238 392 cvpr-2013-Separable Dictionary Learning

Author: Simon Hawe, Matthias Seibert, Martin Kleinsteuber

Abstract: Many techniques in computer vision, machine learning, and statistics rely on the fact that a signal of interest admits a sparse representation over some dictionary. Dictionaries are either available analytically, or can be learned from a suitable training set. While analytic dictionaries permit to capture the global structure of a signal and allow a fast implementation, learned dictionaries often perform better in applications as they are more adapted to the considered class of signals. In imagery, unfortunately, the numerical burden for (i) learning a dictionary and for (ii) employing the dictionary for reconstruction tasks only allows to deal with relatively small image patches that only capture local image information. The approach presented in this paper aims at overcoming these drawbacks by allowing a separable structure on the dictionary throughout the learning process. On the one hand, this permits larger patch-sizes for the learning phase, on the other hand, the dictionary is applied efficiently in reconstruction tasks. The learning procedure is based on optimizing over a product of spheres which updates the dictionary as a whole, thus enforces basic dictionary properties such as mutual coherence explicitly during the learning procedure. In the special case where no separable structure is enforced, our method competes with state-of-the-art dictionary learning methods like K-SVD.

4 0.30019444 58 cvpr-2013-Beta Process Joint Dictionary Learning for Coupled Feature Spaces with Application to Single Image Super-Resolution

Author: Li He, Hairong Qi, Russell Zaretzki

Abstract: This paper addresses the problem of learning overcomplete dictionaries for the coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous couple feature spaces dictionary learning algorithms, our algorithm not only provides dictionaries that are customized to each feature space, but also adds more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed to values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus bringing consistent and accurate mapping between coupled feature spaces. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying this method to single image super-resolution. The experimental results show that dictionaries learned by our method produce the best super-resolution results compared to other state-of-the-art methods.

5 0.29695183 185 cvpr-2013-Generalized Domain-Adaptive Dictionaries

Author: Sumit Shekhar, Vishal M. Patel, Hien V. Nguyen, Rama Chellappa

Abstract: Data-driven dictionaries have produced state-of-the-art results in various classification tasks. However, when the target data has a different distribution than the source data, the learned sparse representation may not be optimal. In this paper, we investigate if it is possible to optimally represent both source and target by a common dictionary. Specifically, we describe a technique which jointly learns projections of data in the two domains, and a latent dictionary which can succinctly represent both the domains in the projected low-dimensional space. An efficient optimization technique is presented, which can be easily kernelized and extended to multiple domains. The algorithm is modified to learn a common discriminative dictionary, which can be further used for classification. The proposed approach does not require any explicit correspondence between the source and target domains, and shows good results even when there are only a few labels available in the target domain. Various recognition experiments show that the method performs on par or better than competitive state-of-the-art methods.

6 0.27722743 257 cvpr-2013-Learning Structured Low-Rank Representations for Image Classification

7 0.27535522 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images

8 0.25376928 66 cvpr-2013-Block and Group Regularized Sparse Modeling for Dictionary Learning

9 0.24839342 315 cvpr-2013-Online Robust Dictionary Learning

10 0.18537287 422 cvpr-2013-Tag Taxonomy Aware Dictionary Learning for Region Tagging

11 0.17189175 302 cvpr-2013-Multi-task Sparse Learning with Beta Process Prior for Action Recognition

12 0.17182995 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

13 0.16830944 5 cvpr-2013-A Bayesian Approach to Multimodal Visual Dictionary Learning

14 0.13030916 204 cvpr-2013-Histograms of Sparse Codes for Object Detection

15 0.11935861 399 cvpr-2013-Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

16 0.11006773 220 cvpr-2013-In Defense of Sparsity Based Face Recognition

17 0.076277643 147 cvpr-2013-Ensemble Learning for Confidence Measures in Stereo Vision

18 0.075060174 442 cvpr-2013-Transfer Sparse Coding for Robust Image Representation

19 0.073925748 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition

20 0.071382657 142 cvpr-2013-Efficient Detector Adaptation for Object Detection in a Video


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.175), (1, -0.162), (2, -0.254), (3, 0.272), (4, -0.066), (5, -0.103), (6, 0.093), (7, 0.098), (8, 0.008), (9, 0.077), (10, 0.027), (11, 0.044), (12, 0.021), (13, 0.029), (14, -0.019), (15, -0.002), (16, -0.007), (17, -0.019), (18, -0.027), (19, -0.015), (20, -0.044), (21, 0.005), (22, -0.034), (23, 0.07), (24, 0.027), (25, -0.064), (26, -0.029), (27, 0.012), (28, -0.031), (29, 0.041), (30, -0.02), (31, 0.049), (32, -0.077), (33, 0.026), (34, 0.04), (35, 0.008), (36, -0.039), (37, -0.063), (38, 0.024), (39, -0.047), (40, 0.065), (41, -0.003), (42, 0.021), (43, 0.051), (44, 0.016), (45, 0.042), (46, 0.04), (47, 0.016), (48, 0.018), (49, 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95403588 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data

Author: Yi-Chen Chen, Vishal M. Patel, Jaishanker K. Pillai, Rama Chellappa, P. Jonathon Phillips

Abstract: We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.

2 0.90711808 315 cvpr-2013-Online Robust Dictionary Learning

Author: Cewu Lu, Jiaping Shi, Jiaya Jia

Abstract: Online dictionary learning is particularly useful for processing large-scale and dynamic data in computer vision. It, however, faces the major difficulty to incorporate robust functions, rather than the square data fitting term, to handle outliers in training data. In this paper, we propose a new online framework enabling the use of the ℓ1 sparse data fitting term in robust dictionary learning, notably enhancing the usability and practicality of this important technique. Extensive experiments have been carried out to validate our new framework.

3 0.9032228 392 cvpr-2013-Separable Dictionary Learning

Author: Simon Hawe, Matthias Seibert, Martin Kleinsteuber

Abstract: Many techniques in computer vision, machine learning, and statistics rely on the fact that a signal of interest admits a sparse representation over some dictionary. Dictionaries are either available analytically, or can be learned from a suitable training set. While analytic dictionaries permit to capture the global structure of a signal and allow a fast implementation, learned dictionaries often perform better in applications as they are more adapted to the considered class of signals. In imagery, unfortunately, the numerical burden for (i) learning a dictionary and for (ii) employing the dictionary for reconstruction tasks only allows to deal with relatively small image patches that only capture local image information. The approach presented in this paper aims at overcoming these drawbacks by allowing a separable structure on the dictionary throughout the learning process. On the one hand, this permits larger patch-sizes for the learning phase, on the other hand, the dictionary is applied efficiently in reconstruction tasks. The learning procedure is based on optimizing over a product of spheres which updates the dictionary as a whole, thus enforces basic dictionary properties such as mutual coherence explicitly during the learning procedure. In the special case where no separable structure is enforced, our method competes with state-of-the-art dictionary learning methods like K-SVD.

4 0.89856565 66 cvpr-2013-Block and Group Regularized Sparse Modeling for Dictionary Learning

Author: Yu-Tseh Chi, Mohsen Ali, Ajit Rajwade, Jeffrey Ho

Abstract: This paper proposes a dictionary learning framework that combines the proposed block/group (BGSC) or reconstructed block/group (R-BGSC) sparse coding schemes with the novel Intra-block Coherence Suppression Dictionary Learning (ICS-DL) algorithm. An important and distinguishing feature of the proposed framework is that all dictionary blocks are trained simultaneously with respect to each data group while the intra-block coherence being explicitly minimized as an important objective. We provide both empirical evidence and heuristic support for this feature that can be considered as a direct consequence of incorporating both the group structure for the input data and the block structure for the dictionary in the learning process. The optimization problems for both the dictionary learning and sparse coding can be solved efficiently using block-gradient descent, and the details of the optimization algorithms are presented. We evaluate the proposed methods using well-known datasets, and favorable comparisons with state-of-the-art dictionary learning methods demonstrate the viability and validity of the proposed framework.

5 0.89800429 58 cvpr-2013-Beta Process Joint Dictionary Learning for Coupled Feature Spaces with Application to Single Image Super-Resolution

Author: Li He, Hairong Qi, Russell Zaretzki

Abstract: This paper addresses the problem of learning overcomplete dictionaries for the coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous couple feature spaces dictionary learning algorithms, our algorithm not only provides dictionaries that are customized to each feature space, but also adds more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed to values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus bringing consistent and accurate mapping between coupled feature spaces. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying this method to single image super-resolution. The experimental results show that dictionaries learned by our method produce the best super-resolution results compared to other state-of-the-art methods.

6 0.88064247 257 cvpr-2013-Learning Structured Low-Rank Representations for Image Classification

7 0.85037196 296 cvpr-2013-Multi-level Discriminative Dictionary Learning towards Hierarchical Visual Categorization

8 0.7958988 185 cvpr-2013-Generalized Domain-Adaptive Dictionaries

9 0.68791705 5 cvpr-2013-A Bayesian Approach to Multimodal Visual Dictionary Learning

10 0.66496015 220 cvpr-2013-In Defense of Sparsity Based Face Recognition

11 0.64387292 422 cvpr-2013-Tag Taxonomy Aware Dictionary Learning for Region Tagging

12 0.61744881 302 cvpr-2013-Multi-task Sparse Learning with Beta Process Prior for Action Recognition

13 0.56994581 442 cvpr-2013-Transfer Sparse Coding for Robust Image Representation

14 0.53967619 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images

15 0.50307471 204 cvpr-2013-Histograms of Sparse Codes for Object Detection

16 0.4796935 83 cvpr-2013-Classification of Tumor Histology via Morphometric Context

17 0.47307977 399 cvpr-2013-Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

18 0.46313646 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

19 0.38962322 403 cvpr-2013-Sparse Output Coding for Large-Scale Visual Recognition

20 0.38284108 15 cvpr-2013-A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.063), (16, 0.021), (19, 0.013), (26, 0.022), (28, 0.012), (33, 0.205), (67, 0.054), (69, 0.036), (87, 0.485)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88502002 274 cvpr-2013-Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

Author: Marcus A. Brubaker, Andreas Geiger, Raquel Urtasun

Abstract: In this paper we propose an affordable solution to selflocalization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic model as well as an efficient approximate inference algorithm, which is able to utilize distributed computation to meet the real-time requirements of autonomous systems. Because of the probabilistic nature of the model we are able to cope with uncertainty due to noisy visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community developed maps and visual odometry measurements, we are able to localize a vehicle up to 3m after only a few seconds of driving on maps which contain more than 2,150km of drivable roads.

2 0.87696254 230 cvpr-2013-Joint 3D Scene Reconstruction and Class Segmentation

Author: Christian Häne, Christopher Zach, Andrea Cohen, Roland Angst, Marc Pollefeys

Abstract: Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being ’too noisy’. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other’s task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.

3 0.87012291 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction

Author: Eno Töppe, Claudia Nieuwenhuis, Daniel Cremers

Abstract: We introduce the concept of relative volume constraints in order to account for insufficient information in the reconstruction of 3D objects from a single image. The key idea is to formulate a variational reconstruction approach with shape priors in form of relative depth profiles or volume ratios relating object parts. Such shape priors can easily be derived either from a user sketch or from the object’s shading profile in the image. They can handle textured or shadowed object regions by propagating information. We propose a convex relaxation of the constrained optimization problem which can be solved optimally in a few seconds on graphics hardware. In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette.

4 0.83839142 209 cvpr-2013-Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking

Author: Martin Hofmann, Daniel Wolf, Gerhard Rigoll

Abstract: We generalize the network flow formulation for multiobject tracking to multi-camera setups. In the past, reconstruction of multi-camera data was done as a separate extension. In this work, we present a combined maximum a posteriori (MAP) formulation, which jointly models multicamera reconstruction as well as global temporal data association. A flow graph is constructed, which tracks objects in 3D world space. The multi-camera reconstruction can be efficiently incorporated as additional constraints on the flow graph without making the graph unnecessarily large. The final graph is efficiently solved using binary linear programming. On the PETS 2009 dataset we achieve results that significantly exceed the current state of the art.

same-paper 5 0.83053446 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data

Author: Yi-Chen Chen, Vishal M. Patel, Jaishanker K. Pillai, Rama Chellappa, P. Jonathon Phillips

Abstract: We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.

6 0.81477648 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

7 0.80946279 39 cvpr-2013-Alternating Decision Forests

8 0.80475157 107 cvpr-2013-Deformable Spatial Pyramid Matching for Fast Dense Correspondences

9 0.77457446 222 cvpr-2013-Incorporating User Interaction and Topological Constraints within Contour Completion via Discrete Calculus

10 0.74725688 396 cvpr-2013-Simultaneous Active Learning of Classifiers & Attributes via Relative Feedback

11 0.72832716 298 cvpr-2013-Multi-scale Curve Detection on Surfaces

12 0.68950248 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery

13 0.68151069 155 cvpr-2013-Exploiting the Power of Stereo Confidences

14 0.66564631 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

15 0.65673757 147 cvpr-2013-Ensemble Learning for Confidence Measures in Stereo Vision

16 0.654531 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging

17 0.65372169 373 cvpr-2013-SWIGS: A Swift Guided Sampling Method

18 0.65230107 72 cvpr-2013-Boundary Detection Benchmarking: Beyond F-Measures

19 0.65000653 467 cvpr-2013-Wide-Baseline Hair Capture Using Strand-Based Refinement

20 0.64944458 289 cvpr-2013-Monocular Template-Based 3D Reconstruction of Extensible Surfaces with Local Linear Elasticity