iccv iccv2013 iccv2013-14 knowledge-graph by maker-knowledge-mining

14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding


Source: pdf

Author: Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang

Abstract: In many sparse coding based image restoration and image classification problems, using non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than the convex ℓ1-norm (p = 1) minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up table (LUT), have been proposed for non-convex ℓp-norm sparse coding, while some analytic solutions have been suggested for some specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In many sparse coding based image restoration and image classification problems, using non-convex ℓp… [sent-11, score-0.21]

2 , iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp… [sent-16, score-0.256]

3 ℓp-norm sparse coding, while some analytic solutions have been suggested for some specific values of p. [sent-18, score-0.134]

4 In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp… [sent-19, score-0.196]

5 Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. [sent-21, score-0.251]

6 Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA. [sent-25, score-0.291]

7 Introduction Sparse coding [7, 18, 31] is an effective tool in a myriad of applications such as compressed sensing [11], image restoration [24, 25], face recognition [38], etc. [sent-27, score-0.271]

8 Originally, it aims to solve the following minimization problem: min_x ½… [sent-28, score-0.06]

9 ℓp-norm non-convex sparse coding problems, and they have been applied to various vision and learning tasks, e. [sent-71, score-0.151]

10 , compressed sensing [10], image restoration [25], face recognition [29], and variable selection [33]. [sent-73, score-0.186]

11 Several typical algorithms include iteratively reweighted least squares (IRLS) [12, 14, 23, 24, 28], iteratively reweighted ? [sent-74, score-0.138]

12 LUT uses look-up tables to store the solutions w. [sent-81, score-0.053]

13 Other algorithms, such as the analytic solutions in [25, 39], can only be used for some specific values of p. [sent-88, score-0.068]

14 Inspired by the great success of soft thresholding [16] and iterative shrinkage/thresholding (IST) [15] methods, in this paper, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp… [sent-89, score-0.391]

15 ℓp-norm sparse coding problems with arbitrary p, λ and y values. [sent-92, score-0.174]

16 It is easy to implement and can be readily used to solve the many ℓp… [sent-95, score-0.052]

17 (4), IRLS and IRL1 sometimes cannot converge to the desired solutions. [sent-156, score-0.092]

18 3, by initializing x(0) = y, IRLS and IRL1 would converge to the same local minimum. [sent-160, score-0.092]

19 (4) is for 1D optimization, one can define a proper thresholding function [33] or construct look-up tables (LUTs) [25] in advance. [sent-162, score-0.162]

20 p converge to the same local minimum, but GISA can converge to a better solution. [sent-170, score-0.184]

21 , 1/2 or 2/3, the analytic solutions can be derived [25, 39]. [sent-173, score-0.068]

22 (11) cannot always be guaranteed to converge to the global solution. [sent-180, score-0.092]

23 For ℓp-norm non-convex sparse coding problems where the values of x, λ and p are unconstrained, LUT will not be an effective and efficient solution. [sent-185, score-0.174]

24 (13) Generally, if |y| ≤ λ, the soft-thresholding operator uses the thresholding rule to assign T1(y; λ) to 0; otherwise, it uses the shrinkage rule to assign T1(y; λ) to sgn(y)(|y| − λ). [sent-202, score-0.349]
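
As a concrete illustration, here is a minimal NumPy sketch of the soft-thresholding operator described in sentence 24 (the function name is ours):

```python
import numpy as np

def soft_threshold(y, lam):
    # Thresholding rule: T1(y; lam) = 0 when |y| <= lam;
    # shrinkage rule: T1(y; lam) = sgn(y) * (|y| - lam) otherwise.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```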

25 Generalization of soft-thresholding Inspired by soft-thresholding, we propose a generalized shrinkage/thresholding operator to solve the ℓp… [sent-205, score-0.108]

26 (4) by modifying the thresholding and the shrinkage rules. [sent-207, score-0.275]

27 Thus, to generalize soft thresholding for solving the problem in Eq. [sent-219, score-0.203]

28 (19) In ITM, She [33] extended the soft-thresholding with the thresholding function in Eq. [sent-235, score-0.162]

29 (11) actually is not a good generalization of the soft-thresholding operator for ℓp… [sent-242, score-0.061]

30 Thus, to generalize soft-thresholding, we should solve the following nonlinear equation system to determine a correct thresholding value τ_p^GST(λ) and its corresponding x_p^*: … [sent-246, score-0.207]

31 the range of (x_0(λ, p), +∞) can be obtained as x_p^* = (2λ(1 − p))^(1/(2−p)), (24) and the thresholding value τ_p^GST(λ) is τ_p^GST(λ) = (2λ(1 − p))^(1/(2−p)) + λp(2λ(1 − p))^((p−1)/(2−p)). [sent-270, score-0.162]
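
The two quantities in sentence 31 can be computed directly; the following sketch assumes the exponents 1/(2−p) and (p−1)/(2−p) reconstructed above are the intended ones:

```python
import numpy as np

def gst_threshold(lam, p):
    # x_p^* = (2*lam*(1-p))^(1/(2-p))                     (Eq. 24)
    # tau   = x_p^* + lam*p*(2*lam*(1-p))^((p-1)/(2-p))   (reconstructed)
    xp = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
    tau = xp + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    return tau, xp

# Sanity check: as p -> 1 the threshold approaches lam (cf. Eq. 29).
print(gst_threshold(0.5, 0.999)[0])  # close to 0.5
```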

32 Theorem 1 For any y ∈ (τ_p^GST(λ), +∞), f(x) has one unique minimum S_p^GST(y; λ) in the range (x_p^*, +∞), which can be obtained by solving the following equation: S_p^GST(y; λ) − y + λp(S_p^GST(y; λ))^(p−1) = 0. [sent-272, score-0.066]

33 In Algorithm 1, the output would converge to the correct solution when J → ∞. [sent-298, score-0.113]

34 Finally, we propose a generalized soft-thresholding (GST) function for solving the ℓp… [sent-301, score-0.083]

35 (28) Like the soft-thresholding function, the GST function also involves a thresholding rule, T_p^GST(y; λ) = 0 when |y| ≤ τ_p^GST(λ), and a shrinkage rule, T_p^GST(y; λ) = sgn(y)S_p^GST(y; λ) when |y| > τ_p^GST(λ). [sent-305, score-0.339]
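
Putting the thresholding and shrinkage rules together, here is a hedged NumPy sketch of the GST operator; the inner fixed-point update x ← |y| − λp·x^(p−1) follows our reading of Algorithm 1, and the small clamp is a numerical guard we added:

```python
import numpy as np

def gst(y, lam, p, J=3):
    # Generalized soft-thresholding T_p^GST(y; lam), elementwise in y.
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    y = np.asarray(y, dtype=float)
    x = np.abs(y)
    for _ in range(J):  # fixed-point iterations (Algorithm 1)
        x = np.abs(y) - lam * p * np.maximum(x, 1e-12) ** (p - 1.0)
    # Thresholding rule: zero out entries with |y| <= tau.
    return np.where(np.abs(y) > tau, np.sign(y) * np.maximum(x, 0.0), 0.0)
```

For p = 1 this reduces to soft-thresholding after one iteration, and for p = 0 it behaves as hard thresholding with threshold √(2λ), consistent with sentences 37–39.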

36 Compared with the thresholding function in [33], in GST we adopt a different thresholding value τ_p^GST(λ), and propose an algorithm, i.e. [sent-306, score-0.324]

37 When p = 1, GST will converge after one iteration. [sent-320, score-0.092]

38 Since lim_{p→1} τ_p^GST(λ) = λ lim_{p→1} p(2λ(1 − p))^((p−1)/(2−p)) = λ, (29) the thresholding value of GST will become λ, and the GST function becomes the soft-thresholding function T_1(y; λ). [sent-321, score-0.162]

39 When p = 0, GST will also converge after one iteration. [sent-324, score-0.092]

40 Generalized iterated shrinkage algorithm With the proposed GST in Eq. [sent-330, score-0.154]

41 (28), we can readily have a generalized iterated shrinkage algorithm (GISA) for solving the ? [sent-331, score-0.237]

42 GISA The proposed GISA is an iterative algorithm, and in each iteration it involves a gradient descent step based on A or y, followed by a generalized shrinkage/thresholding step: x^(k+1) = T_p^GST(x^(k) − … [sent-336, score-0.097]
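
A sketch of the full GISA loop, reusing the gst function above; the step size t = 1/‖A‖² and the scaling of λ by t follow the usual IST convention and are our assumptions here:

```python
import numpy as np

def gisa(A, y, lam, p, iters=100):
    # Minimize 0.5*||y - A x||_2^2 + lam*||x||_p^p.
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral-norm step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)           # gradient of the fidelity term
        x = gst(x - t * grad, t * lam, p)  # generalized shrinkage step
    return x
```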

43 Actually, GISA is a generalization of the iterative shrinkage/thresholding (IST) method [15], and an instance of the iterative thresholding method (ITM) [33]. [sent-360, score-0.247]

44 In [33], She proved that, for any thresholding function Θ(y; λ) defined for −∞ < y < +∞ and 0 ≤ λ < +∞, if Θ(y; λ) satisfies the following properties: i) Θ(−y; λ) = −Θ(y; λ), ii) Θ(y; λ) ≤ Θ(y′; λ) for y ≤ y′ [sent-361, score-0.162]

45 , iii) lim_{y→∞} Θ(y; λ) = ∞, iv) 0 ≤ Θ(y; λ) ≤ y for 0 ≤ y < ∞, the ITM method would converge to a stationary point. [sent-363, score-0.092]

46 ℓ2-norm, GISA can also converge to the optimal solution. [sent-369, score-0.092]

47 Moreover, if p = 1, GISA would degenerate to IST, and would converge to the global minimum. [sent-370, score-0.092]

48 Sparse gradient based deconvolution using GST One important application of sparse coding is image restoration. [sent-378, score-0.342]

49 A typical image deconvolution model usually includes a fidelity term and a regularization term, where the fidelity term is modeled based on the degradation process, and the regularization term is modeled based on image priors. [sent-382, score-0.245]

50 Recent studies on natural image statistics have shown that the marginal distributions of filtering responses can be modeled as hyper-Laplacian with 0 < p < 1 [25, 28, 35], which has been adopted in many low level vision problems [13, 36]. [sent-383, score-0.062]

51 By using the sparse gradient based image prior, the image deconvolution model can be formulated as min_x ½… [sent-384, score-0.257]

52 …‖Dx‖_p^p, (37) where λ is the regularization parameter, D = [Dh, Dv] denotes the gradient operator, and Dh and Dv are the horizontal and vertical gradient operators, respectively. [sent-388, score-0.063]

53 We adopt an alternating minimization strategy to solve the problem in Eq. [sent-399, score-0.06]

54 In each iteration, given a fixed d, x can be obtained by solving the following subproblem: min_x ½… [sent-401, score-0.068]

55 Given a fixed x, let d_ref = Dx, and d can be obtained by solving the following subproblem: min_d (η/2)‖d − d_ref‖² + λ‖d‖_p^p. [sent-413, score-0.108]

56 (42) Finally, we summarize the GST based image deconvolution algorithm in Algorithm 3. [sent-429, score-0.169]
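
Sentences 53–56 describe a half-quadratic splitting scheme; below is a hedged sketch of how the two updates could fit together with GST (J = 1) as the d-step, reusing the gst function above. The circular boundary handling, the gradient-filter definitions, and the continuation schedule on η are our assumptions, not necessarily the paper's exact Algorithm 3:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def deconv_gst(y, k, lam, p, eta=1.0, outer=20, eta_growth=2.0):
    K = fft2(k, s=y.shape)                   # blur transfer function
    dh = np.zeros(y.shape); dh[0, 0], dh[0, -1] = 1.0, -1.0  # horiz. diff
    dv = np.zeros(y.shape); dv[0, 0], dv[-1, 0] = 1.0, -1.0  # vert. diff
    Dh, Dv = fft2(dh), fft2(dv)
    Ky = np.conj(K) * fft2(y)
    x = y.copy()
    for _ in range(outer):
        # d-step: one GST pass (J = 1) on d_ref = Dx, threshold lam/eta.
        d_h = gst(np.real(ifft2(Dh * fft2(x))), lam / eta, p, J=1)
        d_v = gst(np.real(ifft2(Dv * fft2(x))), lam / eta, p, J=1)
        # x-step: quadratic subproblem, closed form in the Fourier domain.
        num = Ky + eta * (np.conj(Dh) * fft2(d_h) + np.conj(Dv) * fft2(d_v))
        den = np.abs(K) ** 2 + eta * (np.abs(Dh) ** 2 + np.abs(Dv) ** 2)
        x = np.real(ifft2(num / den))
        eta *= eta_growth                    # continuation on eta (assumed)
    return x
```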

57 Algorithm 3 is similar to the algorithms in [25, 37], but Wang and Yin [37] only studied the Laplacian prior (p = 1), and Krishnan and Fergus [25] used look-up table (LUT) to solve the subproblem in Eq. [sent-430, score-0.051]

58 Here we empirically choose J = 1, making our algorithm very efficient for sparse gradient based image deconvolution. [sent-432, score-0.088]

59 Experimental results In this section, we evaluate the proposed GISA on two representative vision applications: image deconvolution and face recognition. [sent-434, score-0.23]

60 In image deconvolution experiments, we compare GISA with four state-of-the-art algorithms of ℓp… [sent-435, score-0.169]

61 In face recognition, we use GISA to solve the sparse representation-based classification (SRC) model [38], and show that the performance of SRC can be improved by using GISA with p < 1. [sent-456, score-0.151]

62 Image deconvolution In image deconvolution, we followed the experiment setting in [25]. [sent-465, score-0.169]

63 5 shows the deconvolution results of GISA on a test image by using p = 1 and p = 0.7. [sent-485, score-0.169]

64 The result with p = 0.7 is much better than that with p = 1 in terms of suppressing noise and ringing effects and preserving edge details, which indicates that non-convex image deconvolution can greatly improve the deconvolution performance. [sent-488, score-0.338]

65–66 Image deconvolution with GISA: (a) original image, (b) blurry image, (c) deconvolution result (PSNR: 27.17) of GISA with p = 1, and (d) deconvolution result (PSNR: 28.…) with p = 0.7. [sent-509/510, score-0.021]

67 Face recognition via sparse coding Given a test sample y and the training data matrix X = [X1, X2, …, XK], where Xk, k = 1, 2, …, K, is the sample matrix of class k, Wright et al. [sent-536, score-0.171]

68 [38] proposed a sparse representation based classification (SRC) method for face recognition (FR). [sent-537, score-0.147]

69 SRC first seeks the solution of the following sparse coding problem: α̂ = arg min_α … [sent-538, score-0.172]

70 Then, by simply replacing the soft-thresholding operator in ALM by the proposed GST operator, we can embed the proposed GISA algorithm into the ALM method for solving the SRC model with arbitrary values of p and q. [sent-553, score-0.1]
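
A compact sketch of the SRC decision rule from sentences 67–70, using the gisa sketch above as the ℓp coder (the paper instead plugs the GST operator into an ALM solver):

```python
import numpy as np

def src_classify(X_list, y, lam, p):
    # Code y over X = [X1, ..., XK], then pick the class whose training
    # columns best reconstruct y from its part of the coefficient vector.
    X = np.hstack(X_list)
    alpha = gisa(X, y, lam, p)
    residuals, start = [], 0
    for Xk in X_list:
        stop = start + Xk.shape[1]
        residuals.append(np.linalg.norm(y - Xk @ alpha[start:stop]))
        start = stop
    return int(np.argmin(residuals))
```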

71 We use principal component analysis (PCA) to reduce the dimensionality of face images, and test the algorithms in both the original image space (1,024 dimensions) and the PCA subspace with feature dimensions 500, 300, and 100, respectively. [sent-559, score-0.061]

72 6 we show the recognition rates of SRC versus different p values. [sent-562, score-0.048]

73 Table 2 lists the recognition rates of SRC and SRC-p (p = 0. [sent-568, score-0.068]

74 One can see that, GISA based SRC-p can always achieve higher recognition rates than the original SRC with p = 1. [sent-570, score-0.048]

75 This is because when the face feature dimension is small, the matrix X tends to be redundant. [sent-572, score-0.061]

76 Thus, the coding solution should be sparser, and GISA is more likely to obtain the correct solution. [sent-573, score-0.106]

77 By choosing q = p = 1, the SRC method would become robust to face corruption/occlusion [38]. [sent-574, score-0.061]

78 The face recognition rate of SRC by varying the value of p with GISA. [sent-590, score-0.081]

79 …0 < q = p < 1, and embed the proposed GISA into ALM to implement SRC-p, q for robust face recognition. [sent-600, score-0.275]

80 Two types of face image corruption are considered: random pixel corruption and random block occlusion. [sent-601, score-0.181]

81 Table 3 lists the recognition rates of SRC and SRC-p, q under different ratios of random corruption. [sent-605, score-0.068]

82 One can see that SRC-p, q can always outperform SRC for recognizing face images with random corruption. [sent-606, score-0.061]

83 Table 4 lists the recognition rates of SRC and SRC-p, q under different ratios of block occlusion. [sent-608, score-0.096]

84 Again, SRC-p, q can obtain better recognition rates than SRC for face recognition with random block occlusion. [sent-609, score-0.157]

85 Recognition rate (%) on face images with random corruption. [sent-611, score-0.061]

86 Recognition rate (%) on face images with block occlusion. [sent-623, score-0.089]

87 We proposed a generalized shrinkage/thresholding (GST) function and the associated generalized iterated shrinkage algorithm (GISA) for ℓp… [sent-629, score-0.196]

88 Compared with the state-of-the-art methods, GISA is theoretically more solid, easier to understand and more efficient to implement, and it can converge to a more accurate solution. [sent-631, score-0.114]

89 Our experimental results on image deconvolution verify the effectiveness and efficiency of GISA, and our experiments on sparse coding based face recognition showed that ℓp… [sent-632, score-0.401]

90 A fast iterative shrinkage/thresholding algorithm for linear inverse problems. [sent-658, score-0.308]

91 …thresholding algorithms for image restoration. [sent-668, score-0.162]

92 An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. [sent-742, score-0.218]

93 For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. [sent-753, score-0.074]

94 Recovering sparse signals with a certain family of nonconvex penalties and DC programming. [sent-774, score-0.108]

95 From few to many: Illumination cone models for face recognition under variable lighting and pose. [sent-780, score-0.081]

96 ℓq minimization with 0 < q < 1 for sparse solution of under-determined linear systems. [sent-808, score-0.123]

97 Acquiring linear subspaces for face recognition under variable lighting. [sent-814, score-0.081]

98 An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors. [sent-857, score-0.117]

99 ℓ1/2 regularization: A thresholding representation theory and a fast solver. [sent-897, score-0.162]

100 A generalized accelerated proximal gradient approach for total-variation-based image restoration. [sent-912, score-0.064]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('gisa', 0.621), ('gst', 0.432), ('pgst', 0.286), ('src', 0.197), ('irls', 0.183), ('deconvolution', 0.169), ('lut', 0.165), ('thresholding', 0.162), ('shrinkage', 0.113), ('tpgst', 0.101), ('converge', 0.092), ('coding', 0.085), ('psnr', 0.075), ('dref', 0.067), ('sgpst', 0.067), ('sparse', 0.066), ('xp', 0.066), ('face', 0.061), ('restoration', 0.059), ('ax', 0.058), ('spgst', 0.05), ('tgpst', 0.05), ('corruption', 0.046), ('softthresholding', 0.045), ('reweighted', 0.044), ('dx', 0.042), ('generalized', 0.042), ('operator', 0.042), ('pp', 0.042), ('nonconvex', 0.042), ('iterated', 0.041), ('solving', 0.041), ('sgn', 0.04), ('alm', 0.038), ('minimization', 0.036), ('analytic', 0.036), ('gp', 0.035), ('deconv', 0.034), ('italian', 0.034), ('mail', 0.034), ('marjanovic', 0.034), ('pitm', 0.034), ('spitm', 0.034), ('iterative', 0.033), ('ist', 0.033), ('solutions', 0.032), ('rule', 0.032), ('blurry', 0.031), ('yale', 0.031), ('underdetermined', 0.03), ('candes', 0.029), ('joshi', 0.029), ('krishnan', 0.029), ('tes', 0.029), ('rates', 0.028), ('deblurring', 0.028), ('block', 0.028), ('mathematics', 0.028), ('implement', 0.028), ('chartrand', 0.028), ('subproblem', 0.027), ('zitnick', 0.027), ('minimum', 0.025), ('siam', 0.025), ('ieee', 0.025), ('iteratively', 0.025), ('szeliski', 0.025), ('solve', 0.024), ('beck', 0.024), ('deno', 0.024), ('iter', 0.024), ('itm', 0.024), ('twist', 0.024), ('problems', 0.023), ('compressed', 0.023), ('theorem', 0.023), ('sparsest', 0.023), ('sensing', 0.023), ('theoretically', 0.022), ('theorems', 0.022), ('gradient', 0.022), ('wavelets', 0.022), ('pure', 0.021), ('yin', 0.021), ('equation', 0.021), ('mxin', 0.021), ('solution', 0.021), ('store', 0.021), ('lists', 0.02), ('fr', 0.02), ('recognition', 0.02), ('statistics', 0.02), ('fergus', 0.02), ('generalization', 0.019), ('modeled', 0.019), ('kong', 0.019), ('regularization', 0.019), ('compressive', 0.019), ('sastry', 0.017), ('embed', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding

Author: Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang

Abstract: In many sparse coding based image restoration and image classification problems, using non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than the convex ℓ1-norm (p = 1) minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up table (LUT), have been proposed for non-convex ℓp-norm sparse coding, while some analytic solutions have been suggested for some specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.

2 0.15459621 20 iccv-2013-A Max-Margin Perspective on Sparse Representation-Based Classification

Author: Zhaowen Wang, Jianchao Yang, Nasser Nasrabadi, Thomas Huang

Abstract: Sparse Representation-based Classification (SRC) is a powerful tool in distinguishing signal categories which lie on different subspaces. Despite its wide application to visual recognition tasks, current understanding of SRC is solely based on a reconstructive perspective, which neither offers any guarantee on its classification performance nor provides any insight on how to design a discriminative dictionary for SRC. In this paper, we present a novel perspective towards SRC and interpret it as a margin classifier. The decision boundary and margin of SRC are analyzed in local regions where the support of the sparse code is stable. Based on the derived margin, we propose a hinge loss function as the gauge for the classification performance of SRC. A stochastic gradient descent algorithm is implemented to maximize the margin of SRC and obtain more discriminative dictionaries. Experiments validate the effectiveness of the proposed approach in predicting classification performance and improving dictionary quality over reconstructive ones. Classification results competitive with other state-of-the-art sparse coding methods are reported on several data sets.

3 0.1321547 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

Author: Yu-Tseh Chi, Mohsen Ali, Muhammad Rushdi, Jeffrey Ho

Abstract: This paper proposes a novel approach for sparse coding that further improves upon the sparse representation-based classification (SRC) framework. The proposed framework, Affine-Constrained Group Sparse Coding (ACGSC), extends the current SRC framework to classification problems with multiple input samples. Geometrically, the affine-constrained group sparse coding essentially searches for the vector in the convex hull spanned by the input vectors that can best be sparse coded using the given dictionary. The resulting objective function is still convex and can be efficiently optimized using an iterative block-coordinate descent scheme that is guaranteed to converge. Furthermore, we provide a form of sparse recovery result that guarantees, at least theoretically, that the classification performance of the constrained group sparse coding should be at least as good as the group sparse coding. We have evaluated the proposed approach using three different recognition experiments that involve illumination variation of faces and textures, and face recognition under occlusions. Preliminary experiments have demonstrated the effectiveness of the proposed approach, and in particular, the results from the recognition/occlusion experiment are surprisingly accurate and robust.

4 0.096916795 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity

Author: Mithun Das Gupta, Sanjeev Kumar

Abstract: In this paper, we investigate the properties of Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.

5 0.088891044 103 iccv-2013-Deblurring by Example Using Dense Correspondence

Author: Yoav Hacohen, Eli Shechtman, Dani Lischinski

Abstract: This paper presents a new method for deblurring photos using a sharp reference example that contains some shared content with the blurry photo. Most previous deblurring methods that exploit information from other photos require an accurately registered photo of the same static scene. In contrast, our method aims to exploit reference images where the shared content may have undergone substantial photometric and non-rigid geometric transformations, as these are the kind of reference images most likely to be found in personal photo albums. Our approach builds upon a recent method for example-based deblurring using non-rigid dense correspondence (NRDC) [11] and extends it in two ways. First, we suggest exploiting information from the reference image not only for blur kernel estimation, but also as a powerful local prior for the non-blind deconvolution step. Second, we introduce a simple yet robust technique for spatially varying blur estimation, rather than assuming spatially uniform blur. Unlike the above previous method, which has proven successful only with simple deblurring scenarios, we demonstrate that our method succeeds on a variety of real-world examples. We provide quantitative and qualitative evaluation of our method and show that it outperforms the state-of-the-art.

6 0.061694976 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

7 0.061543528 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

8 0.059135158 174 iccv-2013-Forward Motion Deblurring

9 0.058144733 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

10 0.055817392 129 iccv-2013-Dynamic Scene Deblurring

11 0.054205682 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition

12 0.052504126 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

13 0.052326187 258 iccv-2013-Low-Rank Sparse Coding for Image Classification

14 0.050909013 138 iccv-2013-Efficient and Robust Large-Scale Rotation Averaging

15 0.047644999 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising

16 0.045900065 357 iccv-2013-Robust Matrix Factorization with Unknown Noise

17 0.045134328 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision

18 0.04425453 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

19 0.0417739 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

20 0.041523661 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.094), (1, 0.007), (2, -0.057), (3, -0.026), (4, -0.111), (5, -0.004), (6, 0.001), (7, 0.005), (8, 0.02), (9, -0.046), (10, -0.015), (11, -0.034), (12, 0.016), (13, -0.037), (14, -0.027), (15, 0.036), (16, -0.014), (17, 0.003), (18, -0.028), (19, 0.007), (20, 0.0), (21, -0.032), (22, -0.0), (23, -0.066), (24, 0.022), (25, -0.01), (26, 0.041), (27, -0.019), (28, 0.047), (29, -0.011), (30, -0.001), (31, -0.018), (32, -0.051), (33, 0.008), (34, 0.033), (35, -0.005), (36, -0.008), (37, 0.005), (38, 0.052), (39, 0.033), (40, -0.003), (41, 0.018), (42, -0.003), (43, 0.086), (44, -0.04), (45, 0.066), (46, 0.02), (47, -0.035), (48, -0.014), (49, 0.04)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.90180695 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding

Author: Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang

Abstract: In many sparse coding based image restoration and image classification problems, using non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than the convex ℓ1-norm (p = 1) minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up table (LUT), have been proposed for non-convex ℓp-norm sparse coding, while some analytic solutions have been suggested for some specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.

2 0.71912205 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity

Author: Mithun Das Gupta, Sanjeev Kumar

Abstract: In this paper, we investigate the properties of Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.

3 0.71273226 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

Author: Yu-Tseh Chi, Mohsen Ali, Muhammad Rushdi, Jeffrey Ho

Abstract: This paper proposes a novel approach for sparse coding that further improves upon the sparse representation-based classification (SRC) framework. The proposed framework, Affine-Constrained Group Sparse Coding (ACGSC), extends the current SRC framework to classification problems with multiple input samples. Geometrically, the affine-constrained group sparse coding essentially searches for the vector in the convex hull spanned by the input vectors that can best be sparse coded using the given dictionary. The resulting objective function is still convex and can be efficiently optimized using an iterative block-coordinate descent scheme that is guaranteed to converge. Furthermore, we provide a form of sparse recovery result that guarantees, at least theoretically, that the classification performance of the constrained group sparse coding should be at least as good as the group sparse coding. We have evaluated the proposed approach using three different recognition experiments that involve illumination variation of faces and textures, and face recognition under occlusions. Preliminary experiments have demonstrated the effectiveness of the proposed approach, and in particular, the results from the recognition/occlusion experiment are surprisingly accurate and robust.

4 0.66293442 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision

Author: Tae-Hyun Oh, Hyeongwoo Kim, Yu-Wing Tai, Jean-Charles Bazin, In So Kweon

Abstract: Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only it is known that the underlying structure of clean data is low-rank, but the exact rank of clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g. high dynamic range imaging, photometric stereo and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.

5 0.62368268 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition

Author: Ricardo Cabral, Fernando De_La_Torre, João P. Costeira, Alexandre Bernardino

Abstract: Low rank models have been widely used for the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these low-rank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, their numerical solvers are slow and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization, that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.

6 0.60143918 20 iccv-2013-A Max-Margin Perspective on Sparse Representation-Based Classification

7 0.596883 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

8 0.59383619 357 iccv-2013-Robust Matrix Factorization with Unknown Noise

9 0.57942778 167 iccv-2013-Finding Causal Interactions in Video Sequences

10 0.57244062 408 iccv-2013-Super-resolution via Transform-Invariant Group-Sparse Regularization

11 0.57239956 258 iccv-2013-Low-Rank Sparse Coding for Image Classification

12 0.57046849 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects

13 0.56530148 60 iccv-2013-Bayesian Robust Matrix Factorization for Image and Video Processing

14 0.55431354 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

15 0.52701956 34 iccv-2013-Abnormal Event Detection at 150 FPS in MATLAB

16 0.5195353 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

17 0.51559222 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

18 0.50737613 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

19 0.5022909 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging

20 0.50141156 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.054), (7, 0.019), (26, 0.058), (27, 0.035), (31, 0.06), (42, 0.166), (48, 0.015), (64, 0.025), (66, 0.23), (73, 0.043), (78, 0.016), (89, 0.129), (95, 0.011), (97, 0.027), (98, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80928671 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding

Author: Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang

Abstract: In many sparse coding based image restoration and image classification problems, using non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than the convex ℓ1-norm (p = 1) minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up table (LUT), have been proposed for non-convex ℓp-norm sparse coding, while some analytic solutions have been suggested for some specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.

2 0.74862123 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

Author: Andy J. Ma, Pong C. Yuen, Jiawei Li

Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. And the top 30 rank accuracy can be improved by the proposed method up to 9.40% on publicly available person re-identification datasets.

3 0.69584614 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

Author: Hongteng Xu, Hongyuan Zha

Abstract: Data sparsity has been a thorny issue for manifold-based image synthesis, and in this paper we address this critical problem by leveraging ideas from transfer learning. Specifically, we propose methods based on generating auxiliary data in the form of synthetic samples using transformations of the original sparse samples. To incorporate the auxiliary data, we propose a weighted data synthesis method, which adaptively selects from the generated samples for inclusion during the manifold learning process via a weighted iterative algorithm. To demonstrate the feasibility of the proposed method, we apply it to the problem of face image synthesis from sparse samples. Compared with existing methods, the proposed method shows encouraging results with good performance improvements.

4 0.69384062 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

5 0.69172299 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

Author: Chenglong Bao, Jian-Feng Cai, Hui Ji

Abstract: In recent years, how to learn a dictionary from input images for sparse modelling has been one very active topic in image processing and recognition. Most existing dictionary learning methods consider an over-complete dictionary, e.g. the K-SVD method. Often they require solving some minimization problem that is very challenging in terms of computational feasibility and efficiency. However, if the correlations among dictionary atoms are not well constrained, the redundancy of the dictionary does not necessarily improve the performance of sparse coding. This paper proposed a fast orthogonal dictionary learning method for sparse image representation. With comparable performance on several image restoration tasks, the proposed method is much more computationally efficient than the over-complete dictionary based learning methods.

6 0.68871844 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

7 0.68768853 277 iccv-2013-Multi-channel Correlation Filters

8 0.68739438 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

9 0.68687373 362 iccv-2013-Robust Tucker Tensor Decomposition for Effective Image Representation

10 0.68682534 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

11 0.68506539 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search

12 0.68367171 44 iccv-2013-Adapting Classification Cascades to New Domains

13 0.68263149 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

14 0.68244851 106 iccv-2013-Deep Learning Identity-Preserving Face Space

15 0.68231165 392 iccv-2013-Similarity Metric Learning for Face Recognition

16 0.6811952 231 iccv-2013-Latent Multitask Learning for View-Invariant Action Recognition

17 0.68116766 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

18 0.68065631 149 iccv-2013-Exemplar-Based Graph Matching for Robust Facial Landmark Localization

19 0.68063426 184 iccv-2013-Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion

20 0.68056977 232 iccv-2013-Latent Space Sparse Subspace Clustering