cvpr cvpr2013 cvpr2013-257 knowledge-graph by maker-knowledge-mining

257 cvpr-2013-Learning Structured Low-Rank Representations for Image Classification


Source: pdf

Author: Yangmuzi Zhang, Zhuolin Jiang, Larry S. Davis

Abstract: An approach to learn a structured low-rank representation for image classification is presented. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is good for classification tasks even using a simple linear multi-classifier. Experimental results demonstrate the effectiveness of our approach.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. [sent-4, score-0.407]

2 A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. [sent-5, score-0.725]

3 Introduction Recent research has demonstrated that sparse coding (or sparse representation) is a powerful image representation model. [sent-9, score-0.384]

4 The idea is to represent an input signal as a linear combination of a few items from an over-complete dictionary D. [sent-10, score-0.665]
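
To make this idea concrete, here is a minimal, self-contained sketch of greedy sparse coding (orthogonal matching pursuit) over an over-complete dictionary D; the dictionary, signal, and sparsity level below are synthetic placeholders for illustration, not data or code from the paper.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k atoms of D."""
    n_atoms = D.shape[1]
    support, residual = [], x.copy()
    z = np.zeros(n_atoms)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected atoms by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        z[:] = 0.0
        z[support] = coeffs
        residual = x - D @ z
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))          # over-complete dictionary: 64-dim signals, 256 atoms
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x = D[:, [3, 42, 100]] @ np.array([1.0, -0.5, 2.0])   # signal built from 3 atoms
z = omp(D, x, k=3)
print(np.nonzero(z)[0])                     # indices of the atoms selected by OMP
```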

5 The sparse representation-based coding (SRC) algorithm [27] takes the entire training set as the dictionary. [sent-13, score-0.272]

6 However, sparse coding with a large dictionary is computationally expensive. [sent-14, score-0.805]

7 The performance of tasks such as image classification improves dramatically with a well-constructed dictionary, and the encoding step is efficient when the dictionary is compact. [sent-16, score-0.652]

8 Low-rank matrix recovery, which determines a low-rank data matrix from corrupted input data, has been successfully applied to applications including salient object detection [24], segmentation and grouping [35, 13, 6], background subtraction [7], tracking [34], and 3D visual recovery [13, 31]. [sent-24, score-0.282]

9 Prior work applies low-rank matrix recovery to remove noise from the training data class by class. [sent-28, score-0.425]

10 [19] presents discriminative low-rank dictionary learning for sparse representation (DLRD SR), which learns a low-rank dictionary for sparse representation-based face recognition. [sent-32, score-1.697]

11 A sub-dictionary Di is learned for each class independently; these dictionaries are then combined to form a dictionary D = [D1, D2, ...]. [sent-33, score-0.763]

12 Label information from training data is incorporated into the dictionary learning process by adding an ideal-code regularization term to the objective function of dictionary learning. [sent-40, score-1.333]

13 Unlike [19], the dictionary learned by our approach has good reconstruction and discrimination capabilities. [sent-41, score-0.632]

14 With this high-quality dictionary, we are able to learn a sparse and structural representation by adding a sparseness criterion to the low-rank objective function. [sent-42, score-0.326]

15 In contrast to the prior work [5, 19] on classification that performs low-rank recovery class by class during training, our method processes all training data simultaneously. [sent-45, score-0.398]

16 Compared to other dictionary learning methods [12, 33, 27, 25] that are very sensitive to noise in training images, our dictionary learning algorithm is robust. [sent-46, score-1.384]

17 Contaminated images can be recovered during our dictionary learning process. [sent-47, score-0.628]

18 The main contributions of this paper are: • We present an approach to learn a structural low-rank and sparse image representation. [sent-48, score-0.265]

19 • We present a supervised training algorithm to construct a discriminative and reconstructive dictionary, which is used to obtain a low-rank and sparse representation for images. [sent-51, score-0.395]

20 [26] has shown that sparse representation achieves impressive results on face recognition. [sent-58, score-0.249]

21 One of the most commonly used dictionary learning methods is K-SVD [1]. [sent-64, score-0.628]

22 Several algorithms have been developed to make the dictionary more discriminative for sparse coding. [sent-66, score-0.786]

23 In [23], a dictionary is updated iteratively based on the results of a linear predictive classifier to include structure information. [sent-67, score-0.604]

24 [12] presents a Label Consistent K-SVD (LC-KSVD) algorithm to learn a compact and discriminative dictionary for sparse coding. [sent-68, score-0.82]

25 [31] presents an image classification framework using non-negative sparse coding and low-rank and sparse matrix decomposition. [sent-82, score-0.326]

26 Compared to previous work, our approach effectively constructs a reconstructive and discriminative dictionary from corrupted training data. [sent-84, score-0.889]

27 Based on this dictionary, structured low-rank and sparse representations are learned for classification. [sent-85, score-0.283]

28 [5] uses this technique to remove noise from training samples class by class; this process is computationally expensive for large numbers of classes. [sent-105, score-0.284]

29 X = DZ + E, where D is a dictionary that linearly spans the data space. [sent-119, score-0.604]

30 [18] employs the whole training set as the dictionary, but this might not be efficient for finding a discriminative representation in classification problems. [sent-121, score-0.24]

31 [19] tries to learn a structured dictionary by minimizing the rank of each sub-dictionary. [sent-122, score-0.761]

32 By associating label information with the training process, a discriminative dictionary can be learned from all training samples simultaneously. [sent-125, score-0.937]

33 The learned dictionary encourages images from the same class to have similar representations. [sent-126, score-0.817]

34 Learning Structured Sparse and Low-rank Representation To better classify images even when training and testing images have been corrupted, we propose a robust supervised algorithm to learn a structured sparse and low-rank representation for images. [sent-131, score-0.433]

35 We construct a discriminative dictionary via explicit utilization of label information from the training data. [sent-132, score-0.765]

36 Based on the dictionary, we learn low-rank and sparse representations for images. [sent-133, score-0.3]

37 As discussed before, low-rank matrix recovery helps to decompose a corrupted matrix X into a low-rank component DZ and a sparse noise component E, i.e., X = DZ + E. [sent-143, score-0.549]
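
Solvers for such low-rank plus sparse decompositions (e.g., inexact ALM) are typically built from two proximal operators: singular value thresholding for the nuclear norm and element-wise soft-thresholding for the ℓ1 term. Below is a minimal numpy sketch of these standard building blocks, not the paper's full routine; the toy matrix is synthetic.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(A, tau):
    """Element-wise soft-thresholding: proximal operator of tau * (l1 norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

# toy low-rank matrix plus sparse corruption
rng = np.random.default_rng(1)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))            # rank-5 part
S = (rng.random((50, 80)) < 0.05) * rng.standard_normal((50, 80)) * 10.0   # sparse noise
X = L + S
# thresholding shrinks small singular values / small entries toward zero
print(np.linalg.matrix_rank(svt(X, 5.0)), np.count_nonzero(soft_threshold(X, 5.0)))
```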

38 With respect to a semantic dictionary D, the optimal representation matrix Z* for X should be block-diagonal [18]: each class's samples are represented only over the atoms of its own sub-dictionary, so the nonzeros of Z* form diagonal blocks. [sent-146, score-0.699]

39 Given a dictionary D, the objective function is formulated as: min_{Z,E} ||Z||∗ + λ||E||1 + β||Z||1   (4)   s.t. X = DZ + E. [sent-151, score-0.272]

40 To obtain a low-rank and sparse data representation, D should have discriminative and reconstructive power. [sent-168, score-0.235]

41 Although this decomposition might not result in minimal reconstruction error, low-rank and sparse Q is an optimal representation for classification. [sent-187, score-0.238]

42 With the above definition, we propose to learn a semantic structured dictionary by supervised learning. [sent-188, score-0.722]

43 We add an ideal-code regularization term to the dictionary learning process. [sent-191, score-0.628]

44 A dictionary that encourages Z to be close to Q is preferred. [sent-192, score-0.629]
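
A minimal sketch of one plausible way to lay out such a block-diagonal ideal code Q, assuming class labels are available for both training samples and dictionary atoms; filling each block with ones is a placeholder choice for illustration, not necessarily the exact values used in the paper.

```python
import numpy as np

def ideal_code(sample_labels, atom_labels):
    """Block-diagonal ideal code Q: Q[k, i] = 1 iff atom k and sample i share a class."""
    sample_labels = np.asarray(sample_labels)
    atom_labels = np.asarray(atom_labels)
    return (atom_labels[:, None] == sample_labels[None, :]).astype(float)

# toy layout: 3 classes, 2 atoms per class, 4 training samples per class
atom_labels = np.repeat([0, 1, 2], 2)
sample_labels = np.repeat([0, 1, 2], 4)
Q = ideal_code(sample_labels, atom_labels)
print(Q.shape)  # (6, 12); the nonzeros form three diagonal blocks
```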

45 The objective function for dictionary learning is defined as follows: min_{Z,E,D} ||Z||∗ + λ||E||1 + β||Z||1 + α||Z − Q||²F   (5)   s.t. X = DZ + E. [sent-193, score-0.628]

46 The first subproblem is to compute the optimal Z, E for a given dictionary D. [sent-201, score-0.628]

47 The second subproblem is to solve for the dictionary D given the Z and E computed in the first subproblem. [sent-203, score-0.628]

48 Equation (12) is quadratic in the variable D, so the optimal dictionary update Dupdate can be derived in closed form. [sent-222, score-0.626]
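
The exact form of equation (12) is not reproduced in this summary; assuming the D-subproblem reduces to min_D ||X − E − DZ||²F + γ||D||²F (with γ playing the role of a ridge penalty, a guess based on its being listed as an input parameter), the closed-form update is a regularized least-squares solve, sketched below.

```python
import numpy as np

def update_dictionary(X, Z, E, gamma=1e-3):
    """Closed-form D minimizing ||X - E - D Z||_F^2 + gamma * ||D||_F^2 (assumed objective)."""
    k = Z.shape[0]
    # normal equations: D (Z Z^T + gamma I) = (X - E) Z^T
    D = (X - E) @ Z.T @ np.linalg.inv(Z @ Z.T + gamma * np.eye(k))
    # re-normalizing atoms to unit length is a common post-step in dictionary learning
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)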

49 The dictionary construction process is summarized in Algorithm 2. [sent-224, score-0.604]

50 The input dictionary D0 is initialized by combining all the individual class dictionaries, i.e., D0 = [D1, D2, ...]. [sent-229, score-0.687]
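
A sketch of one plausible initialization in this spirit: learn a small sub-dictionary per class and concatenate the atoms. The use of scikit-learn's DictionaryLearning here is an assumption for illustration; the paper may initialize the class dictionaries differently (e.g., with K-SVD or the method of [19]). The 5 atoms per class follow the experiments reported below.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def init_dictionary(X, labels, atoms_per_class=5, seed=0):
    """Initialize D0 = [D1, D2, ...] by learning a small sub-dictionary per class.

    X: (n_features, n_samples) data matrix, one image descriptor per column.
    """
    labels = np.asarray(labels)
    blocks = []
    for c in np.unique(labels):
        Xc = X[:, labels == c].T                      # scikit-learn expects samples in rows
        learner = DictionaryLearning(n_components=atoms_per_class,
                                     transform_algorithm="lasso_lars",
                                     random_state=seed, max_iter=200)
        learner.fit(Xc)
        blocks.append(learner.components_.T)          # back to atoms-as-columns
    D0 = np.hstack(blocks)
    return D0 / np.maximum(np.linalg.norm(D0, axis=0, keepdims=True), 1e-12)
```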

51 After the dictionary is learned, the low-rank and sparse representations Z of images are obtained with respect to it (Algorithm 2, Dictionary Learning via Inexact ALM: input data X and parameters λ, β, α, γ; output D, Z; initialized with the initial dictionary D0). [sent-238, score-0.803]
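
Once the codes Z are available, the abstract notes that a simple linear multi-class classifier suffices. A sketch with a linear SVM, treating each column of Z as a feature vector; the specific classifier used here is an assumption, not necessarily the one in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_and_score(Z_train, y_train, Z_test, y_test):
    """Columns of Z are per-image codes; a linear multi-class SVM is trained on them."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(Z_train.T, y_train)          # samples in rows for scikit-learn
    return clf.score(Z_test.T, y_test)   # classification accuracy
```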

52 Our approach is compared with several other algorithms including the locality-constrained linear coding method (LLC) [25], SRC [27], LR [5], LR with structural incoherence from [5], DLRD SR [19], and our method without the regularization term ||Z − Q|| (our method without Q). [sent-248, score-0.445]

53 Our trained dictionary has 5 items for each class. [sent-262, score-0.665]

54 We repeat our experiments starting with 32 randomly selected training images and 20 dictionary items per class. [sent-263, score-0.736]

55 We compare our approach with LLC [25], SRC [27], LR [5], and LR with structural incoherence [5]. [sent-273, score-0.336]

56 We evaluate the performance of the SRC algorithm using a full-size dictionary (all training samples). [sent-274, score-0.675]

57 Our method, by taking advantage of structure information, achieves better performance than LLC, LR, LR with structural incoherence and our method without Q. [sent-279, score-0.336]

58 The dictionary contains 50 items (5 for each category). [sent-283, score-0.665]

59 The first line shows the testing images' representation based on LR and LR with structural incoherence [5]. [sent-284, score-0.397]

60 Figures 3(a) and 3(c) are representations with the full-size dictionary (all training samples). [sent-285, score-0.752]

61 Comparison of representations for testing samples from the first ten classes on the Extended YaleB. [sent-287, score-0.277]

62 (a) LR with full-size dictionary; (b) LR with dictionary size 50; (c) LR with structural incoherence with full-size dictionary; (d) LR with structural incoherence with dictionary size 50; (e) SRC; (f) LLC; (g) Our method without Q; (h) Our method. [sent-289, score-1.88]

63 (a) original faces; (b) the low-rank component DZ; (c) the sparse noise component E. [sent-292, score-0.267]

64 Figures 3(e), 3(f) and 3(g) are the representations based on SRC, LLC with the same dictionary size and our method without Q. [sent-295, score-0.681]

65 We also evaluate the computation time of our approach and LR with structural incoherence [5] that trains a model class by class (Figure 5(a)) and uses SRC for classification. [sent-302, score-0.502]

66 Clearly, training over all classes simultaneously is faster than class by class if discriminativeness is preserved for different classes. [sent-305, score-0.311]

67 Our training is twice as fast as LR with structural incoherence, and our testing is three times faster. [sent-306, score-0.241]

68 We use seven unobscured images from session 1 and one image with sunglasses as training samples for each person, and the rest for testing. [sent-328, score-0.546]

69 We use seven unobscured images from session 1 and one image with a scarf as training samples for each person, the remainder as testing. [sent-331, score-0.572]

70 We use seven unobscured images from session 1, one image with sunglasses, and one with a scarf as training samples for each person. [sent-334, score-0.572]

71 Our methods are compared with LLC [25], SRC [27], LR [5], and LR with structural incoherence [5]. [sent-339, score-0.336]

72 Comparison of representations for testing samples from the first ten classes on the AR for the sunglass scenario. [sent-341, score-0.384]

73 (a) LR with full-size dictionary; (b) LR with dictionary size 50; (c) LR with structural incoherence with full-size dictionary; (d) LR with structural incoherence with dictionary size 50; (e) SRC; (f) LLC; (g) Our method without Q; (h) Our method. [sent-343, score-1.88]

74 (a) original gray images; (b) the low-rank component DZ; (c) the sparse noise component E (Figure 7). [sent-344, score-0.267]

75 Examples of image decomposition for testing samples from classes 4 and 10 on the AR. [sent-345, score-0.272]

76 Our approach achieves the best results and outperforms other approaches with the same dictionary size by more than 3% for the sunglass scenario, 7% for the scarf scenario, and 2% for the mixed scenario. [sent-347, score-0.844]

77 We visualize the representation Z for the first ten classes under the sunglasses scenario. [sent-348, score-0.241]

78 We use 50 as our dictionary size, i.e., 5 × 10. [sent-351, score-0.604]

79 Figures 6(a) and 6(c) show the representations of LR and LR with structural incoherence with a full-size dictionary. [sent-354, score-0.413]

80 In Figures 6(b) and 6(d), we randomly pick 5 dictionary items for each class, and use this reduced dictionary to learn sparse codes. [sent-355, score-1.425]

81 For comparison purposes, we also choose 50 as the dictionary size in LLC and SRC* to learn the representations shown in Figures 6(e) and 6(f). [sent-356, score-0.715]

82 In addition, we compare our results with LC-KSVD [12] using the same training samples under the sunglasses and scarf scenarios, using unobscured images for testing. [sent-367, score-0.448]

83 Examples of image decomposition for testing samples from class 95 on the AR with 20% uniform noise. [sent-373, score-0.272]

84 (a) corrupted faces; (b) the low-rank component DZ; (c) the sparse noise component E. [sent-374, score-0.368]

85 Seven images with illumination and expression variations from session 1 are used as training images, and the other seven images from session 2 are used as testing images. [sent-375, score-0.408]

86 Figure 9 shows the representations of 15 testing samples which are randomly selected from classes 4 ∼ 8. [sent-388, score-0.25]

87 Comparison of representations for testing samples from class 4 to 8 on the Caltech101 . [sent-391, score-0.294]

88 (a) LR with full-size dictionary; (b) LR with dictionary size 55; (c) LR with structural incoherence with full-size dictionary; (d) LR with structural incoherence with dictionary size 55; (e) LLC; (f) Our method without Q; (g) Our method. [sent-393, score-1.88]

89 Discriminative representations are learned via low-rank recovery even for corrupted datasets. [sent-412, score-0.319]

90 The learned representations reveal structural information automatically and can be used for classification directly. [sent-413, score-0.262]

91 Low-rank matrix recovery with structural incoherence for robust face recognition, 2012. [sent-444, score-0.549]

92 Image denoising via sparse and redundant representations over learned dictionaries. [sent-464, score-0.251]

93 Learning a discriminative dictionary for sparse coding via label consistent k-svd, 2011. [sent-491, score-0.895]

94 Sparse representation for face recognition based on discriminative low-rank dictionary learning, 2012. [sent-545, score-0.791]

95 Joint learning and dictionary construction for pattern recognition, 2008. [sent-571, score-0.628]

96 Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. [sent-593, score-0.301]

97 Gabor feature based sparse representation for face recognition with gabor occlusion dictionary, 2010. [sent-611, score-0.273]

98 Image classification by non-negative sparse coding, low-rank and sparse decomposition, 2011. [sent-627, score-0.292]

99 Online semi-supervised discriminative dictionary learning for sparse representation, 2012. [sent-633, score-0.81]

100 Discriminative k-svd for dictionary learning in face recognition, 2010. [sent-638, score-0.694]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('dictionary', 0.604), ('lr', 0.268), ('incoherence', 0.227), ('dz', 0.22), ('src', 0.187), ('unobscured', 0.171), ('dzj', 0.15), ('zj', 0.137), ('scarf', 0.133), ('sparse', 0.122), ('llc', 0.117), ('sunglasses', 0.114), ('recovery', 0.113), ('structural', 0.109), ('sunglass', 0.107), ('corrupted', 0.101), ('ar', 0.087), ('dupdate', 0.086), ('ej', 0.084), ('class', 0.083), ('acc', 0.08), ('session', 0.08), ('coding', 0.079), ('representations', 0.077), ('wj', 0.077), ('samples', 0.073), ('training', 0.071), ('yaleb', 0.07), ('rank', 0.067), ('face', 0.066), ('dlrd', 0.064), ('scarves', 0.064), ('representation', 0.061), ('testing', 0.061), ('items', 0.061), ('alm', 0.061), ('discriminative', 0.06), ('inexact', 0.058), ('noise', 0.057), ('argmzin', 0.057), ('structured', 0.056), ('decomposition', 0.055), ('reconstructive', 0.053), ('di', 0.053), ('contaminated', 0.05), ('dictionaries', 0.048), ('classification', 0.048), ('component', 0.044), ('seven', 0.044), ('argmein', 0.043), ('pricipal', 0.043), ('wright', 0.041), ('classes', 0.039), ('database', 0.038), ('vmax', 0.038), ('argmwin', 0.038), ('zzt', 0.038), ('figures', 0.037), ('discriminativeness', 0.035), ('zh', 0.035), ('matrix', 0.034), ('learn', 0.034), ('scenario', 0.034), ('lin', 0.033), ('sr', 0.032), ('updating', 0.031), ('regularization', 0.03), ('label', 0.03), ('rates', 0.03), ('zhang', 0.029), ('learned', 0.028), ('supervised', 0.028), ('illumination', 0.028), ('controls', 0.028), ('corruption', 0.027), ('ten', 0.027), ('rao', 0.026), ('linearized', 0.025), ('replaced', 0.025), ('update', 0.025), ('matsushita', 0.025), ('encourages', 0.025), ('person', 0.025), ('subproblem', 0.024), ('extended', 0.024), ('occlusion', 0.024), ('denoising', 0.024), ('learning', 0.024), ('associating', 0.023), ('rewritten', 0.023), ('elad', 0.023), ('zi', 0.023), ('jiang', 0.023), ('iis', 0.023), ('variable', 0.022), ('ganesh', 0.022), ('yang', 0.021), ('conditions', 0.021), ('sapiro', 0.021), ('multipliers', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999952 257 cvpr-2013-Learning Structured Low-Rank Representations for Image Classification

Author: Yangmuzi Zhang, Zhuolin Jiang, Larry S. Davis

Abstract: An approach to learn a structured low-rank representation for image classification is presented. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is good for classification tasks even using a simple linear multi-classifier. Experimental results demonstrate the effectiveness of our approach.

2 0.46041566 296 cvpr-2013-Multi-level Discriminative Dictionary Learning towards Hierarchical Visual Categorization

Author: Li Shen, Shuhui Wang, Gang Sun, Shuqiang Jiang, Qingming Huang

Abstract: For the task of visual categorization, the learning model is expected to be endowed with discriminative visual feature representation and flexibilities in processing many categories. Many existing approaches are designed based on a flat category structure, or rely on a set of pre-computed visual features, hence may not be appreciated for dealing with large numbers of categories. In this paper, we propose a novel dictionary learning method by taking advantage of hierarchical category correlation. For each internode of the hierarchical category structure, a discriminative dictionary and a set of classification models are learnt for visual categorization, and the dictionaries in different layers are learnt to exploit the discriminative visual properties of different granularity. Moreover, the dictionaries in lower levels also inherit the dictionary of ancestor nodes, so that categories in lower levels are described with multi-scale visual information using our dictionary learning approach. Experiments on ImageNet object data subset and SUN397 scene dataset demonstrate that our approach achieves promising performance on data with large numbers of classes compared with some state-of-the-art methods, and is more efficient in processing large numbers of categories.

3 0.4321999 392 cvpr-2013-Separable Dictionary Learning

Author: Simon Hawe, Matthias Seibert, Martin Kleinsteuber

Abstract: Many techniques in computer vision, machine learning, and statistics rely on the fact that a signal of interest admits a sparse representation over some dictionary. Dictionaries are either available analytically, or can be learned from a suitable training set. While analytic dictionaries permit to capture the global structure of a signal and allow a fast implementation, learned dictionaries often perform better in applications as they are more adapted to the considered class of signals. In imagery, unfortunately, the numerical burden for (i) learning a dictionary and for (ii) employing the dictionary for reconstruction tasks only allows to deal with relatively small image patches that only capture local image information. The approach presented in this paper aims at overcoming these drawbacks by allowing a separable structure on the dictionary throughout the learning process. On the one hand, this permits larger patch-sizes for the learning phase, on the other hand, the dictionary is applied efficiently in reconstruction tasks. The learning procedure is based on optimizing over a product of spheres which updates the dictionary as a whole, thus enforces basic dictionary properties such as mutual coherence explicitly during the learning procedure. In the special case where no separable structure is enforced, our method competes with state-of-the-art dictionary learning methods like K-SVD.

4 0.41409272 185 cvpr-2013-Generalized Domain-Adaptive Dictionaries

Author: Sumit Shekhar, Vishal M. Patel, Hien V. Nguyen, Rama Chellappa

Abstract: Data-driven dictionaries have produced state-of-the-art results in various classification tasks. However, when the target data has a different distribution than the source data, the learned sparse representation may not be optimal. In this paper, we investigate if it is possible to optimally represent both source and target by a common dictionary. Specifically, we describe a technique which jointly learns projections of data in the two domains, and a latent dictionary which can succinctly represent both the domains in the projected low-dimensional space. An efficient optimization technique is presented, which can be easily kernelized and extended to multiple domains. The algorithm is modified to learn a common discriminative dictionary, which can be further used for classification. The proposed approach does not require any explicit correspondence between the source and target domains, and shows good results even when there are only a few labels available in the target domain. Various recognition experiments show that the method performs on par or better than competitive state-of-the-art methods.

5 0.40931466 58 cvpr-2013-Beta Process Joint Dictionary Learning for Coupled Feature Spaces with Application to Single Image Super-Resolution

Author: Li He, Hairong Qi, Russell Zaretzki

Abstract: This paper addresses the problem of learning overcomplete dictionaries for the coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous couple feature spaces dictionary learning algorithms, our algorithm not only provides dictionaries that customized to each feature space, but also adds more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed to values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus bringing consistent and accurate mapping between coupled feature spaces. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying this method to single image super-resolution. The experimental results show that dictionaries learned by our method produce the best super-resolution results compared to other state-of-the-art methods.

6 0.39832661 315 cvpr-2013-Online Robust Dictionary Learning

7 0.39724949 66 cvpr-2013-Block and Group Regularized Sparse Modeling for Dictionary Learning

8 0.34922314 220 cvpr-2013-In Defense of Sparsity Based Face Recognition

9 0.28608418 422 cvpr-2013-Tag Taxonomy Aware Dictionary Learning for Region Tagging

10 0.28300208 399 cvpr-2013-Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

11 0.27722743 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data

12 0.26711231 5 cvpr-2013-A Bayesian Approach to Multimodal Visual Dictionary Learning

13 0.23052131 302 cvpr-2013-Multi-task Sparse Learning with Beta Process Prior for Action Recognition

14 0.18943323 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

15 0.18197119 204 cvpr-2013-Histograms of Sparse Codes for Object Detection

16 0.14078362 415 cvpr-2013-Structured Face Hallucination

17 0.13882211 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification

18 0.13603584 442 cvpr-2013-Transfer Sparse Coding for Robust Image Representation

19 0.12586679 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition

20 0.12466412 160 cvpr-2013-Face Recognition in Movie Trailers via Mean Sequence Sparse Representation-Based Classification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.211), (1, -0.224), (2, -0.357), (3, 0.404), (4, -0.129), (5, -0.151), (6, 0.13), (7, 0.134), (8, 0.032), (9, 0.074), (10, 0.031), (11, 0.049), (12, 0.03), (13, 0.055), (14, 0.043), (15, 0.025), (16, 0.046), (17, 0.032), (18, -0.035), (19, 0.02), (20, 0.008), (21, 0.044), (22, 0.024), (23, 0.022), (24, 0.027), (25, -0.038), (26, -0.051), (27, 0.015), (28, -0.047), (29, 0.024), (30, 0.043), (31, 0.014), (32, -0.006), (33, -0.033), (34, 0.001), (35, -0.006), (36, 0.0), (37, -0.05), (38, -0.024), (39, 0.001), (40, -0.037), (41, -0.022), (42, -0.002), (43, 0.018), (44, 0.035), (45, 0.013), (46, -0.039), (47, -0.001), (48, -0.029), (49, 0.043)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.96629602 392 cvpr-2013-Separable Dictionary Learning

Author: Simon Hawe, Matthias Seibert, Martin Kleinsteuber

Abstract: Many techniques in computer vision, machine learning, and statistics rely on the fact that a signal of interest admits a sparse representation over some dictionary. Dictionaries are either available analytically, or can be learned from a suitable training set. While analytic dictionaries permit to capture the global structure of a signal and allow a fast implementation, learned dictionaries often perform better in applications as they are more adapted to the considered class of signals. In imagery, unfortunately, the numerical burden for (i) learning a dictionary and for (ii) employing the dictionary for reconstruction tasks only allows to deal with relatively small image patches that only capture local image information. The approach presented in this paper aims at overcoming these drawbacks by allowing a separable structure on the dictionary throughout the learning process. On the one hand, this permits larger patch-sizes for the learning phase, on the other hand, the dictionary is applied efficiently in reconstruction tasks. The learning procedure is based on optimizing over a product of spheres which updates the dictionary as a whole, thus enforces basic dictionary properties such as mutual coherence explicitly during the learning procedure. In the special case where no separable structure is enforced, our method competes with state-of-the-art dictionary learning methods like K-SVD.

same-paper 2 0.95903522 257 cvpr-2013-Learning Structured Low-Rank Representations for Image Classification

Author: Yangmuzi Zhang, Zhuolin Jiang, Larry S. Davis

Abstract: An approach to learn a structured low-rank representation for image classification is presented. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is good for classification tasks even using a simple linear multi-classifier. Experimental results demonstrate the effectiveness of our approach.

3 0.95799452 66 cvpr-2013-Block and Group Regularized Sparse Modeling for Dictionary Learning

Author: Yu-Tseh Chi, Mohsen Ali, Ajit Rajwade, Jeffrey Ho

Abstract: This paper proposes a dictionary learning framework that combines the proposed block/group (BGSC) or reconstructed block/group (R-BGSC) sparse coding schemes with the novel Intra-block Coherence Suppression Dictionary Learning (ICS-DL) algorithm. An important and distinguishing feature of the proposed framework is that all dictionary blocks are trained simultaneously with respect to each data group while the intra-block coherence being explicitly minimized as an important objective. We provide both empirical evidence and heuristic support for this feature that can be considered as a direct consequence of incorporating both the group structure for the input data and the block structure for the dictionary in the learning process. The optimization problems for both the dictionary learning and sparse coding can be solved efficiently using block-gradient descent, and the details of the optimization algorithms are presented. We evaluate the proposed methods using well-known datasets, and favorable comparisons with state-of-the-art dictionary learning methods demonstrate the viability and validity of the proposed framework.

4 0.9499681 58 cvpr-2013-Beta Process Joint Dictionary Learning for Coupled Feature Spaces with Application to Single Image Super-Resolution

Author: Li He, Hairong Qi, Russell Zaretzki

Abstract: This paper addresses the problem of learning overcomplete dictionaries for the coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous couple feature spaces dictionary learning algorithms, our algorithm not only provides dictionaries that customized to each feature space, but also adds more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed to values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus bringing consistent and accurate mapping between coupled feature spaces. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying this method to single image super-resolution. The experimental results show that dictionaries learned by our method produce the best super-resolution results compared to other state-of-the-art methods.

5 0.93868476 315 cvpr-2013-Online Robust Dictionary Learning

Author: Cewu Lu, Jiaping Shi, Jiaya Jia

Abstract: Online dictionary learning is particularly useful for processing large-scale and dynamic data in computer vision. It, however, faces the major difficulty to incorporate robust functions, rather than the square data fitting term, to handle outliers in training data. In this paper, we propose a new online framework enabling the use of the ℓ1 sparse data fitting term in robust dictionary learning, notably enhancing the usability and practicality of this important technique. Extensive experiments have been carried out to validate our new framework.

6 0.87480623 296 cvpr-2013-Multi-level Discriminative Dictionary Learning towards Hierarchical Visual Categorization

7 0.8519783 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data

8 0.79072368 185 cvpr-2013-Generalized Domain-Adaptive Dictionaries

9 0.76532567 220 cvpr-2013-In Defense of Sparsity Based Face Recognition

10 0.72307539 5 cvpr-2013-A Bayesian Approach to Multimodal Visual Dictionary Learning

11 0.68141568 422 cvpr-2013-Tag Taxonomy Aware Dictionary Learning for Region Tagging

12 0.63524669 302 cvpr-2013-Multi-task Sparse Learning with Beta Process Prior for Action Recognition

13 0.58170235 399 cvpr-2013-Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

14 0.56839025 204 cvpr-2013-Histograms of Sparse Codes for Object Detection

15 0.55801564 442 cvpr-2013-Transfer Sparse Coding for Robust Image Representation

16 0.54892623 83 cvpr-2013-Classification of Tumor Histology via Morphometric Context

17 0.4096221 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

18 0.40834764 164 cvpr-2013-Fast Convolutional Sparse Coding

19 0.40538204 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification

20 0.38993376 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.089), (16, 0.018), (22, 0.012), (26, 0.043), (33, 0.312), (39, 0.041), (49, 0.203), (67, 0.078), (69, 0.034), (77, 0.021), (87, 0.06)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.9175809 105 cvpr-2013-Deep Learning Shape Priors for Object Segmentation

Author: Fei Chen, Huimin Yu, Roland Hu, Xunxun Zeng

Abstract: In this paper we introduce a new shape-driven approach for object segmentation. Given a training set of shapes, we first use deep Boltzmann machine to learn the hierarchical architecture of shape priors. This learned hierarchical architecture is then used to model shape variations of global and local structures in an energetic form. Finally, it is applied to data-driven variational methods to perform object extraction of corrupted data based on shape probabilistic representation. Experiments demonstrate that our model can be applied to dataset of arbitrary prior shapes, and can cope with image noise and clutter, as well as partial occlusions.

2 0.90896267 12 cvpr-2013-A Global Approach for the Detection of Vanishing Points and Mutually Orthogonal Vanishing Directions

Author: Michel Antunes, João P. Barreto

Abstract: This article presents a new global approach for detecting vanishing points and groups of mutually orthogonal vanishing directions using lines detected in images of man-made environments. These two multi-model fitting problems are respectively cast as Uncapacited Facility Location (UFL) and Hierarchical Facility Location (HFL) instances that are efficiently solved using a message passing inference algorithm. We also propose new functions for measuring the consistency between an edge and a putative vanishing point, and for computing the vanishing point defined by a subset of edges. Extensive experiments in both synthetic and real images show that our algorithms outperform the state-of-the-art methods while keeping computation tractable. In addition, we show for the first time results in simultaneously detecting multiple Manhattan-world configurations that can either share one vanishing direction (Atlanta world) or be completely independent.

3 0.88714945 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

Author: Christian J. Schuler, Harold Christopher Burger, Stefan Harmeling, Bernhard Schölkopf

Abstract: Image deconvolution is the ill-posed problem of recovering a sharp image, given a blurry one generated by a convolution. In this work, we deal with space-invariant non-blind deconvolution. Currently, the most successful methods involve a regularized inversion of the blur in Fourier domain as a first step. This step amplifies and colors the noise, and corrupts the image information. In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm. However, the methods based on this two-step approach do not properly address the fact that the image information has been corrupted. In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network. We will show that this approach outperforms the current state-of-the-art on a large dataset of artificially blurred images. We demonstrate the practical applicability of our method in a real-world example with photographic out-of-focus blur.

same-paper 4 0.88044894 257 cvpr-2013-Learning Structured Low-Rank Representations for Image Classification

Author: Yangmuzi Zhang, Zhuolin Jiang, Larry S. Davis

Abstract: An approach to learn a structured low-rank representation for image classification is presented. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is good for classification tasks even using a simple linear multi-classifier. Experimental results demonstrate the effectiveness of our approach.

5 0.85623449 30 cvpr-2013-Accurate Localization of 3D Objects from RGB-D Data Using Segmentation Hypotheses

Author: Byung-soo Kim, Shili Xu, Silvio Savarese

Abstract: In this paper we focus on the problem of detecting objects in 3D from RGB-D images. We propose a novel framework that explores the compatibility between segmentation hypotheses of the object in the image and the corresponding 3D map. Our framework allows to discover the optimal location of the object using a generalization of the structural latent SVM formulation in 3D as well as the definition of a new loss function defined over the 3D space in training. We evaluate our method using two existing RGB-D datasets. Extensive quantitative and qualitative experimental results show that our proposed approach outperforms state-of-the-art methods as well as a number of baseline approaches for both 3D and 2D object recognition tasks.

6 0.85562068 359 cvpr-2013-Robust Discriminative Response Map Fitting with Constrained Local Models

7 0.85460746 399 cvpr-2013-Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

8 0.85330617 240 cvpr-2013-Keypoints from Symmetries by Wave Propagation

9 0.85317057 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

10 0.85190761 202 cvpr-2013-Hierarchical Saliency Detection

11 0.85154706 82 cvpr-2013-Class Generative Models Based on Feature Regression for Pose Estimation of Object Categories

12 0.85144532 43 cvpr-2013-Analyzing Semantic Segmentation Using Hybrid Human-Machine CRFs

13 0.85070515 168 cvpr-2013-Fast Object Detection with Entropy-Driven Evaluation

14 0.85052156 322 cvpr-2013-PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors

15 0.85041821 207 cvpr-2013-Human Pose Estimation Using a Joint Pixel-wise and Part-wise Formulation

16 0.85035926 119 cvpr-2013-Detecting and Aligning Faces by Image Retrieval

17 0.8502816 167 cvpr-2013-Fast Multiple-Part Based Object Detection Using KD-Ferns

18 0.85007757 318 cvpr-2013-Optimized Pedestrian Detection for Multiple and Occluded People

19 0.85000288 94 cvpr-2013-Context-Aware Modeling and Recognition of Activities in Video

20 0.84998727 173 cvpr-2013-Finding Things: Image Parsing with Regions and Per-Exemplar Detectors