iccv iccv2013 iccv2013-292 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Mithun Das Gupta, Sanjeev Kumar
Abstract: In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.
Reference: text
sentIndex sentText sentNum sentScore
1 In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. [sent-5, score-0.567]
2 We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. [sent-6, score-0.633]
3 Assumptions of Laplacian, Gaussian and Generalized Gaussian priors result in L1 (Lasso), L2 (Ridge regression) and Lp norm regularization. [sent-11, score-0.381]
4 Accurate modeling of such distributions leads to Lp norm regularization with p < 1. [sent-26, score-0.442]
5 The non-convexity of the Lp norm poses a major hindrance to efficiently solving the resulting optimization problem. [sent-27, score-0.47]
6 Solving an under-determined system of equations under a non-convex Lp norm prior has been known to be NP-hard. [sent-28, score-0.447]
7 Some problems, such as transform domain image denoising, can be cast as projection of a vector on a norm ball and do not involve solving an under-determined system of linear equations. [sent-32, score-0.681]
8 Such problems can directly benefit from efficient projection on non-convex Lp norm ball. [sent-33, score-0.567]
9 For other problems, iterative schemes such as gradient-projection can be devised with projection on the non-convex norm ball as a building block, which of course loses the guarantee of global optimality, but presents a potentially attractive alternative to IRLS and other competing methods. [sent-34, score-0.718]
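The gradient-projection scheme described here can be sketched as follows; this is a minimal illustration in which a convex L2 ball oracle stands in for the non-convex Lp projection (the names `project_l2_ball` and `gradient_projection` are illustrative, not from the paper):

```python
import numpy as np

def project_l2_ball(x, z):
    # Exact Euclidean projection onto the convex ball {x : ||x||_2 <= z}.
    n = np.linalg.norm(x)
    return x if n <= z else x * (z / n)

def gradient_projection(A, b, z, project, iters=200):
    # Gradient-projection for min ||Ax - b||^2 over a norm ball, with the
    # projection oracle as a pluggable building block, as the text suggests.
    step = 0.5 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)
        x = project(x - step * grad, z)
    return x
```

Swapping `project_l2_ball` for an Lp projection oracle gives the non-convex variant the sentence describes, at the cost of the global-optimality guarantee.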
10 f(x) ≤ z, where the norm prior term appears as a constraint. [sent-47, score-0.381]
11 where the norm prior term is part of the objective function. [sent-54, score-0.381]
12 Therefore, in principle, an algorithm for either formulation can be used to find the solution of the other formulation by bisection over the regularization parameter. [sent-57, score-0.258]
13 Such equivalence, however, breaks down in the case of non-convex norm prior. [sent-58, score-0.381]
14 Fig. 2 (left) shows that the norm of the optimal solution exhibits a vertical discontinuity at some values of the regularization parameter θ for the penalty function formulation. [sent-60, score-0.667]
15 The range of norm values corresponding to such discontinuities is never attained by the optimal solution of the penalty function formulation. [sent-61, score-0.606]
16 In contrast, the optimal solution of the projection formulation exhibits a continuous decrease of projection error with increasing z, as shown in Fig. [sent-62, score-0.603]
17 Consequently, for a certain range of values of z, the penalty function formulation cannot emulate the behavior of the projection formulation. [sent-64, score-0.325]
18 The projection formulation can be considered a more general problem, with the penalty function formulation as a special case. [sent-65, score-0.193]
19 KKT conditions and combinatorial explosion: consider an arbitrary algorithm A trying to solve min f(x) s.t. g_i(x) ≤ 0, h_i(x) = 0. [sent-68, score-0.16]
20 Assuming appropriate regularization conditions are met, necessary conditions for local optimality are given by the KKT first-order condition ∂L/∂x = 0, together with complementary slackness. [sent-82, score-0.178]
21 Even for convenient functional forms of f, gi, hi, there is an inherent combinatorial branching involved in complementary slackness condition Eq. [sent-110, score-0.28]
22 For ease of future reference we refer to it as complementary slackness condition (CSC) branching stage. [sent-112, score-0.229]
23 For A to be potentially polynomial time, it must avoid combinatorial explosion at CSC. [sent-114, score-0.168]
24 In the simplest of cases, this reduced system of equations could be a system of full rank linear equations and vacuously satisfied inequalities, immediately yielding a solution. [sent-121, score-0.132]
25 For A to be polynomial time, it must avoid combinatorial explosion at the ESE branching stage as well as at CSC. [sent-126, score-0.168]
26 Once A has branched over all ESE branching possibilities, e. [sent-127, score-0.173]
27 In the present work, we show that for the globally optimal non-convex Lp norm projection problem, the number of CSC and ESE branches can be restricted to a polynomial count. [sent-132, score-0.69]
28 Furthermore, we propose a conjecture which guarantees polynomial time complexity for the RSE branch as well, thus making the complete problem solvable in polynomial time. [sent-133, score-0.225]
29 Even if the conjecture fails for very stringent norms p ≤ 0. [sent-134, score-0.275]
2, the RSE stage, being a one-dimensional root-finding problem, is conducive to generic global optimization methods such as branch and bound [22, 3]. [sent-135, score-0.195]
31 This makes the complete problem (reducible to a polynomial number of RSE stages) efficiently solvable by branch and bound. [sent-136, score-0.166]
32 Lp Norm Projection for non-negative data: the Lp norm projection problem is given by min_x (1/2)||x − v||² s.t. Σ_{j=1}^n x_j^p ≤ z, x_j ≥ 0. [sent-138, score-0.567]
33 Except for the trivial case, when v lies inside the norm ball and the optimal solution is x = v, the optimal solution of the above problem will lie on the boundary of the norm ball. [sent-145, score-1.156]
34 This can be proved by contradiction: joining a putative solution inside the norm ball to v with a line segment, the intersection of this segment with the norm ball yields a better solution. [sent-146, score-1.079]
35 Based on Lemma 3 from [7], we can work with the absolute values of the elements of v and add the sign to the projection x later such that vixi > 0. [sent-147, score-0.186]
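The sign-handling step described in this sentence can be sketched as below; `project_nonneg` is a hypothetical stand-in for any non-negative projection oracle (the helper name is illustrative, not from the paper):

```python
import numpy as np

def project_with_signs(v, project_nonneg, z):
    # As in the text (Lemma 3 of [7]): project |v| with the non-negative
    # oracle, then restore the signs of v so that v_i * x_i >= 0 holds.
    return np.sign(v) * project_nonneg(np.abs(v), z)
```

Any element projected to zero keeps the product v_i * x_i = 0, which still satisfies the sign condition.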
36 Left: norm of the solution vs the regularization parameter θ in the penalty function formulation. [sent-150, score-0.616]
37 The decrease of the norm exhibits a vertical discontinuity, and some norm values are never attained by the regularization path. [sent-151, score-0.823]
38 Right: projection distance for the penalty function formulation (black bars) and along the projection formulation path. [sent-152, score-0.271]
39 The solution x for the projection formulation varies continuously (blue curve). [sent-153, score-0.329]
||x − v||² vs norm of the optimal solution: as the regularization parameter z increases, the optimal solution gets closer to the target v. [sent-156, score-0.633]
41 Our next proposition asserts that the ordering of the components of v determines the ordering of the components of the optimal solution x. [sent-177, score-0.229]
42 If vi > vj, then in optimal solution xi ≥ xj. [sent-180, score-0.442]
43 Proof: If xi < xj, then swapping xi and xj can be shown to result in a better solution. [sent-181, score-0.236]
44 If vi > vj, and in optimal solution xi = 0, then xj = 0. [sent-189, score-0.492]
45 2 is that the combinatorial explosion for CSC can be avoided. [sent-192, score-0.16]
46 When at least one of the xi's is zero, consideration of the local geometry of the constraint surfaces reveals that the surface normal vector for xi ≥ 0 and that for the p-norm constraint become linearly dependent, thus violating the linearly independent constraint qualification (LICQ) criterion [18]. [sent-201, score-0.234]
47 Hence, we adopt the more fundamental necessary condition that, for local optimality at a point, it must be impossible to decrease the objective function further by any local movement that remains feasible with respect to the constraints. [sent-202, score-0.154]
48 Whenever any xi is zero and the norm constraint is satisfied, any local movement with a non-zero component along xi will result in a constraint violation. [sent-203, score-0.82]
49 We plot the representative curves for different values of vi but fixed p and θ (Fig. [sent-207, score-0.209]
50 For all values of vi > vit (p, θ), there are two roots for (7) denoted as xil and xir. [sent-212, score-0.408]
51 The two possible solutions for vi = 8 are denoted in the figure as xil and xir. [sent-217, score-0.324]
52 The left root xkl corresponding to any vk is NOT part of the optimal solution of (5), except possibly for the smallest vi among those with a non-zero optimal projection xi. [sent-223, score-0.766]
53 In the first stage, we note that since xil moves towards the left with increasing vi, in the optimal solution two distinct entries vi and vj cannot have the same root xil = xjl, as this would violate Prop. [sent-225, score-0.732]
54 This means at most one vi can have corresponding left solution at xil. [sent-228, score-0.341]
55 In the second stage of the proof, we note that, irrespective of the ordering of the vi's, xir for any vi is larger than xjl for all other vj's, j ≠ i. [sent-229, score-0.32]
56 Hence, any vi with its corresponding solution at xil must be the smallest vi among those with non-zero xi; otherwise Prop. [sent-230, score-0.671]
57 3 is that the combinatorial explosion of ESE can be avoided, since the number of ESE branches is upper bounded by 2. [sent-236, score-0.232]
58 One branch corresponds to the solution where the smallest non-zero xi corresponds to the left root and all other xj's correspond to the right roots. [sent-237, score-0.562]
59 The other branch corresponds to the solution where all non-zero xi's correspond to the right root. [sent-238, score-0.239]
60 RSEL and RSER represent RSE stage operations corresponding to choice of which root (left or right) is used for different equations in KKT system. [sent-243, score-0.154]
61 3 (left), for all vi and θ, the curve has a unique minimum point, which can be shown to be ximin = [p(1 − p)θ]^(1/(2−p)) (8). Next we evaluate (7), but for different values of θ, as shown in Fig. [sent-245, score-0.263]
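A sketch of the two-root structure, under the assumption (reconstructed from context) that equation (7) is the scalar stationarity condition x + θp x^(p−1) = v_i, whose left-hand side attains its minimum at the point given by (8):

```python
import numpy as np

def g(x, p, theta):
    # LHS of the scalar KKT equation (7), reconstructed here as
    # g(x) = x + theta * p * x^(p-1)  (an assumption from context).
    return x + theta * p * x ** (p - 1.0)

def roots_of_kkt(v, p, theta):
    # g decreases on (0, x_min] and increases on [x_min, inf); eq. (8)
    # gives the minimum point.  For v above g(x_min) there are two roots,
    # x_i^l and x_i^r, found here by bisection on each monotone branch.
    x_min = (p * (1.0 - p) * theta) ** (1.0 / (2.0 - p))
    if g(x_min, p, theta) > v:
        return None  # v below the threshold v_t(p, theta): no real roots

    def bisect(lo, hi, increasing):
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if (g(mid, p, theta) > v) == increasing:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    left = bisect(1e-12, x_min, increasing=False)   # x_i^l
    right = bisect(x_min, v, increasing=True)       # x_i^r
    return left, right
```

With v_i = 8, p = 0.5 and θ = 1 (the figure's setting), the left root is tiny and the right root lies just below v, matching the plotted pair x_i^l, x_i^r.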
62 (Eq. 7, left root); else if second ESE branch /* all xi's come from the right root */ then XR ← RSER(v(1 : ρ), z, p) (Eq. [sent-248, score-0.174]
63 xopt ← x; end if; end for; reorder xopt according to the initial sorting of v; return xopt. (Caption fragment: with increasing θ, for constant vi and p.) [sent-256, score-0.524]
64 For every vi there exists a θ (and corresponding xi), given p, such that Eq. [sent-260, score-0.209]
65 To explore the behavior of the two ESE branches we chose a random v, and set a suitable norm such that none of the elements of its projection x can be zero. [sent-265, score-0.639]
66 But for more stringent sparsity requirements p < 0. [sent-272, score-0.164]
67 5 and higher-dimensional problems, the solution norm for xL can intersect the norm constraint line multiple times, as shown in Fig. [sent-273, score-0.898]
68 This observation leads to the following conjecture: Conjecture: For some integer r, the rth derivative of the norm curve for xL against θ (Fig. [sent-275, score-0.435]
69 Right: for higher dimensions and stringent p, the norm curve wrt xL touches the norm constraint line at multiple points. [sent-283, score-0.572]
70 Empirically, we found that the 2nd derivative has one zero crossing, leading to a maximum of 3 zero crossings for the solution norm, as shown in Fig. [sent-284, score-0.197]
71 We can still argue that our algorithm, based on the above conjecture, will give the globally optimal solution with very high probability. [sent-289, score-0.14]
72 4), where they claim that lp penalties converge to a solution whenever one of two theorems is applicable. [sent-292, score-0.372]
73 General sparse coding problem The above developments were all concentrated on solving a projection problem over separable variables. [sent-294, score-0.239]
74 Now we look into the more generic problem w̃ = argmin_w ||Y − Aw||² + β|φᵀw|^p (11). Using the half-quadratic penalty method [11, 12, 24], we now introduce the auxiliary vector x, such that w̃ = argmin_w ||Y − Aw||² + 2ζ||φᵀw − x||² [sent-295, score-0.191]
75 + β|x|^p (12). This equivalent formulation decouples the non-linear norm term from the principal variable, with the additional property that it is exactly the same as the previous formulation when ζ → ∞. [sent-297, score-0.054]
76 2ζ||φᵀw − x||² + β|x|^p (14). Alternating solution of the two subproblems leads to the solution of the original system (Eq. [sent-302, score-0.178]
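The alternating scheme can be sketched minimally as below, assuming φ = I and absorbing the 2ζ coefficient into a single `zeta` parameter; the scalar Lp step is approximated here by a dense candidate search rather than an exact solver, so this is an illustrative stand-in, not the paper's implementation:

```python
import numpy as np

def scalar_lp_prox(u, p, beta, zeta, grid=2000):
    # Approximate argmin_x  zeta*(x-u)^2 + beta*|x|^p  by candidate search
    # over {0} and a dense grid on (0, |u|]; the minimizer lies in [0, |u|]
    # and shares the sign of u, so searching the non-negative axis suffices.
    if u == 0.0:
        return 0.0
    cand = np.concatenate(([0.0], np.linspace(1e-8, abs(u), grid)))
    obj = zeta * (cand - abs(u)) ** 2 + beta * cand ** p
    return np.sign(u) * cand[np.argmin(obj)]

def half_quadratic(Y, A, p, beta, zeta=10.0, iters=20):
    # Alternate between the quadratic w-subproblem (closed form) and the
    # separable x-subproblem, with phi = I assumed for simplicity.
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        # w-step: (A^T A + zeta I) w = A^T Y + zeta x
        w = np.linalg.solve(A.T @ A + zeta * np.eye(n), A.T @ Y + zeta * x)
        # x-step: componentwise Lp proximal map
        x = np.array([scalar_lp_prox(wi, p, beta, zeta) for wi in w])
    return w, x
```

Because the candidate set always contains 0 and the w-step is exact, the coupled objective is (up to grid resolution) non-increasing across iterations.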
77 Comparison against L1 norm minimization techniques: in this section we describe comparative experiments with other algorithms, primarily algorithms which produce sparsity in otherwise more established methods, specifically projection onto convex sets (POCS) [2]. [sent-307, score-0.626]
78 We compare against the alternating projection algorithms proposed by Candès and Romberg [4] (L1POCS) and Lu Gan [8] (KPOCS), where the projection alternates between a POCS projection and a sparsifying projection which typically bounds the L1 norm of the projected vector. [sent-308, score-0.981]
79 The solution of sparse POCS lies in the intersection between the hyperplane P = {x : Φx = y} and the L1 ball B of radius ||x||1. [sent-312, score-0.256]
80 If the exact norm of the original signal is known, i. [sent-313, score-0.457]
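For comparison with the non-convex case, the convex L1 ball projection used by such schemes can be computed exactly with the standard sort-and-threshold method (a sketch, not the paper's code):

```python
import numpy as np

def project_l1_ball(v, z):
    # Euclidean projection onto {x : ||x||_1 <= z} (the L1 ball B above),
    # via sorting and soft-thresholding at a data-dependent level tau.
    if np.abs(v).sum() <= z:
        return v.copy()  # already inside the ball
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - z))[0][-1]
    tau = (css[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Note that this step requires the radius z up front, which is exactly the "true norm" knowledge the text says L1POCS depends on.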
81 In the absence of knowledge of the true norm of the unknown vector x, L1POCS generates a sparse approximation of the original vector. [sent-322, score-0.434]
82 A hard thresholding step, keeping the K largest components of a vector, is used as the second projection step. [sent-324, score-0.186]
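The hard thresholding step described here is straightforward to sketch:

```python
import numpy as np

def hard_threshold(v, K):
    # Keep the K largest-magnitude components of v and zero the rest,
    # i.e. the second projection step of the K-sparse scheme.
    x = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[::-1][:K]
    x[idx] = v[idx]
    return x
```

This is the projection that requires the sparsity level K as prior knowledge, in contrast to the L1 variant that requires the true norm.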
83 The principal difference between the two schemes is that in [8] the extra information is the sparsity of x (as in CoSaMP [17]), whereas in [4] the expected L1 norm of the original signal is required. [sent-325, score-0.516]
84 We replaced the norm projection scheme in these two techniques by our method (LpPOCS). [sent-327, score-0.567]
85 6 if not stated otherwise, and we keep the norm constraint to be (footnote 1: http://nuit-blanche. [sent-330, score-0.428]
86 The norm parameter p works analogously to the forced sparsity K. [sent-356, score-0.44]
87 1 our method almost always finds the true support of the solution vector x, except for very stringent norm requirements for p ≤ 0. [sent-374, score-0.575]
88 IRLS is [20]; the number next to it denotes the value of the norm constraint q; Lq denotes our proposed method. [sent-388, score-0.428]
89 [14] proposed an LUT based approach for general p and analytical approach for p = 1/2 and p = 2/3 to find optimal solution of (20). [sent-408, score-0.14]
90 The optimal solution will be located either at the boundary of the interval [0, v] or where the derivative is zero, i. [sent-410, score-0.143]
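Assuming the scalar subproblem (20) has the form f(x) = (1/2)(x − v)² + θx^p on [0, v] (an assumption reconstructed from context), the candidate comparison just described can be sketched as:

```python
import numpy as np

def scalar_lp_shrink(v, p, theta):
    # Global minimum of f(x) = 0.5*(x - v)^2 + theta*x^p over [0, v]:
    # compare the boundary points with interior stationary points, where
    # f'(x) = 0 reads x + theta*p*x^(p-1) = v.
    f = lambda x: 0.5 * (x - v) ** 2 + theta * x ** p
    g = lambda x: x + theta * p * x ** (p - 1.0)  # f'(x) + v
    candidates = [0.0, v]
    x_min = (p * (1.0 - p) * theta) ** (1.0 / (2.0 - p))  # cf. eq. (8)
    if x_min < v and g(x_min) <= v:
        # one stationary point on each side of x_min, found by bisection
        for lo, hi, inc in ((1e-12, x_min, False), (x_min, v, True)):
            for _ in range(200):
                mid = 0.5 * (lo + hi)
                if (g(mid) > v) == inc:
                    hi = mid
                else:
                    lo = mid
            candidates.append(0.5 * (lo + hi))
    return min(candidates, key=f)
```

For p = 1/2 and p = 2/3 the stationary points admit the analytic expressions of [14]; the bisection here is a generic numerical stand-in valid for any 0 < p < 1.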
91 Our algorithm does the norm minimization, which is the same for all q. [sent-415, score-0.381]
92 The images show the noisy image, the best L1 reconstruction, and the best Lp norm reconstruction respectively. [sent-432, score-0.381]
93 Sparse lp PCA: informative feature selection by sparse PCA has been proposed recently by Naikal et al. [sent-435, score-0.294]
94 We replace the final l1 norm constraint by the more generic lp norm constraint. [sent-451, score-1.05]
95 Note that with the lp norm formulation, fewer features are selected. [sent-455, score-0.622]
96 Conclusion In this paper we have looked into the projection onto the Lp norm ball and provided some insights into constructing an exhaustive search algorithm. [sent-469, score-0.681]
97 Top: images of 4 objects in the BMW database [16] with superimposed SURF features; Middle: informative features detected by the l1 sparse PCA approach from [15]; Bottom: informative features detected by lp sparse PCA. [sent-471, score-0.347]
98 The method of successive projection for finding a common point of convex sets. [sent-481, score-0.186]
99 Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. [sent-530, score-0.174]
100 Efficient euclidean projections onto the intersection of norm balls. [sent-613, score-0.423]
simIndex simValue paperId paperTitle
same-paper 1 1.0000014 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity
Author: Mithun Das Gupta, Sanjeev Kumar
Abstract: In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.
2 0.13739817 140 iccv-2013-Elastic Net Constraints for Shape Matching
Author: Emanuele Rodolà, Andrea Torsello, Tatsuya Harada, Yasuo Kuniyoshi, Daniel Cremers
Abstract: We consider a parametrized relaxation of the widely adopted quadratic assignment problem (QAP) formulation for minimum distortion correspondence between deformable shapes. In order to control the accuracy/sparsity trade-off we introduce a weighting parameter on the combination of two existing relaxations, namely spectral and game-theoretic. This leads to the introduction of the elastic net penalty function into shape matching problems. In combination with an efficient algorithm to project onto the elastic net ball, we obtain an approach for deformable shape matching with controllable sparsity. Experiments on a standard benchmark confirm the effectiveness of the approach.
3 0.13388266 104 iccv-2013-Decomposing Bag of Words Histograms
Author: Ankit Gandhi, Karteek Alahari, C.V. Jawahar
Abstract: We aim to decompose a global histogram representation of an image into histograms of its associated objects and regions. This task is formulated as an optimization problem, given a set of linear classifiers, which can effectively discriminate the object categories present in the image. Our decomposition bypasses harder problems associated with accurately localizing and segmenting objects. We evaluate our method on a wide variety of composite histograms, and also compare it with MRF-based solutions. In addition to merely measuring the accuracy of decomposition, we also show the utility of the estimated object and background histograms for the task of image classification on the PASCAL VOC 2007 dataset.
4 0.13230595 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables
Author: Daozheng Chen, Dhruv Batra, William T. Freeman
Abstract: Latent variables models have been applied to a number of computer vision problems. However, the complexity of the latent space is typically left as a free design choice. A larger latent space results in a more expressive model, but such models are prone to overfitting and are slower to perform inference with. The goal of this paper is to regularize the complexity of the latent space and learn which hidden states are really relevant for prediction. Specifically, we propose using group-sparsity-inducing regularizers such as ?1-?2 to estimate the parameters of Structured SVMs with unstructured latent variables. Our experiments on digit recognition and object detection show that our approach is indeed able to control the complexity of latent space without any significant loss in accuracy of the learnt model.
5 0.10978488 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition
Author: Ricardo Cabral, Fernando De_La_Torre, João P. Costeira, Alexandre Bernardino
Abstract: Low rank models have been widely used for the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these low-rank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, their numerical solvers are slow and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization, that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.
6 0.10066897 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching
7 0.10055158 250 iccv-2013-Lifting 3D Manhattan Lines from a Single Image
8 0.1000085 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold
9 0.096916795 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
10 0.093431756 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering
11 0.090621151 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration
12 0.089833342 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification
13 0.089011535 290 iccv-2013-New Graph Structured Sparsity Model for Multi-label Image Annotations
14 0.087114163 232 iccv-2013-Latent Space Sparse Subspace Clustering
15 0.083575115 346 iccv-2013-Rectangling Stereographic Projection for Wide-Angle Image Visualization
16 0.08258763 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization
17 0.079713061 81 iccv-2013-Combining the Right Features for Complex Event Recognition
18 0.078028254 357 iccv-2013-Robust Matrix Factorization with Unknown Noise
19 0.075643577 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications
20 0.073131867 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition
simIndex simValue paperId paperTitle
same-paper 1 0.9781934 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity
Author: Mithun Das Gupta, Sanjeev Kumar
Abstract: In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.
2 0.75186622 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
Author: Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang
Abstract: In many sparse coding based image restoration and image classification problems, using non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than the convex ℓ1-norm minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up tables (LUT), have been proposed for non-convex ℓp-norm sparse coding, while some analytic solutions have been suggested for specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.
3 0.73999721 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition
Author: Ricardo Cabral, Fernando De_La_Torre, João P. Costeira, Alexandre Bernardino
Abstract: Low rank models have been widely used for the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these low-rank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, their numerical solvers are slow and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization, that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.
4 0.73324406 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision
Author: Tae-Hyun Oh, Hyeongwoo Kim, Yu-Wing Tai, Jean-Charles Bazin, In So Kweon
Abstract: Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only it is known that the underlying structure of clean data is low-rank, but the exact rank of clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g. high dynamic range imaging, photometric stereo and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
5 0.73267406 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition
Author: Qiegen Liu, Jianbo Liu, Pei Dong, Dong Liang
Abstract: This paper presents a novel structure gradient and texture decorrelating regularization (SGTD) for image decomposition. The motivation of the idea is under the assumption that the structure gradient and texture components should be properly decorrelated for a successful decomposition. The proposed model consists of the data fidelity term, total variation regularization and the SGTD regularization. An augmented Lagrangian method is proposed to address this optimization issue, by first transforming the unconstrained problem to an equivalent constrained problem and then applying an alternating direction method to iteratively solve the subproblems. Experimental results demonstrate that the proposed method presents better or comparable performance as state-of-the-art methods do.
6 0.71887386 140 iccv-2013-Elastic Net Constraints for Shape Matching
7 0.70023948 167 iccv-2013-Finding Causal Interactions in Video Sequences
8 0.69191766 141 iccv-2013-Enhanced Continuous Tabu Search for Parameter Estimation in Multiview Geometry
9 0.6800279 60 iccv-2013-Bayesian Robust Matrix Factorization for Image and Video Processing
10 0.67619789 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold
11 0.66976976 357 iccv-2013-Robust Matrix Factorization with Unknown Noise
12 0.63995731 408 iccv-2013-Super-resolution via Transform-Invariant Group-Sparse Regularization
13 0.62021816 280 iccv-2013-Multi-view 3D Reconstruction from Uncalibrated Radially-Symmetric Cameras
14 0.61557794 55 iccv-2013-Automatic Kronecker Product Model Based Detection of Repeated Patterns in 2D Urban Images
15 0.61242676 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map
16 0.61066073 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching
17 0.60293686 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications
18 0.59786421 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects
19 0.59371322 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error
20 0.58370602 49 iccv-2013-An Enhanced Structure-from-Motion Paradigm Based on the Absolute Dual Quadric and Images of Circular Points
(topicId, topicWeight) pairs: [(2, 0.036), (7, 0.028), (12, 0.011), (26, 0.068), (27, 0.345), (31, 0.06), (42, 0.118), (64, 0.035), (73, 0.038), (89, 0.169), (98, 0.014)]
simIndex simValue paperId paperTitle
same-paper 1 0.79376745 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity
Author: Mithun Das Gupta, Sanjeev Kumar
Abstract: In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the constrained non-linear optimization problem and then use their key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as with hard-thresholding-based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.
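The L1 baseline the abstract compares against — Euclidean projection onto the simplex, i.e. the p = 1 case of the non-negative norm-ball projection — has a well-known sort-based form. The sketch below is that standard baseline, not the paper's Lp (p < 1) algorithm; the radius z = 1 is an illustrative default.

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = z}.

    Sort-based scheme: derive the threshold theta from the KKT conditions
    of the projection problem, then clip v - theta at zero.
    """
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * ks > css - z)[0][-1]  # last coordinate kept positive
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

Projecting v = (0.5, 0.5, 1.0) with z = 1 yields (1/6, 1/6, 2/3): the threshold theta = 1/3 is subtracted from every coordinate and the result clipped at zero, so the output is non-negative and sums to z.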
2 0.78417289 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length
Author: Zuzana Kukelova, Martin Bujnak, Tomas Pajdla
Abstract: The problem of determining the absolute position and orientation of a camera from a set of 2D-to-3D point correspondences is one of the most important problems in computer vision, with a broad range of applications. In this paper we present a new solution to the absolute pose problem for a camera with unknown radial distortion and unknown focal length from five 2D-to-3D point correspondences. Our new solver is numerically more stable, more accurate, and significantly faster than the existing state-of-the-art minimal four-point absolute pose solvers for this problem. Moreover, our solver produces fewer solutions and can handle larger radial distortions. The new solver is straightforward and uses only simple concepts from linear algebra; it is therefore simpler than the state-of-the-art Gröbner basis solvers. We compare our new solver with the existing state-of-the-art solvers and show its usefulness on synthetic and real datasets.
3 0.71886116 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision
Author: Tae-Hyun Oh, Hyeongwoo Kim, Yu-Wing Tai, Jean-Charles Bazin, In So Kweon
Abstract: Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of clean data is low-rank, but the exact rank of the clean data is also known. Yet, when applying conventional rank minimization to those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information. This observation motivates us to investigate whether there is a better alternative when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, photometric stereo, and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
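The change of objective the abstract describes is easy to state concretely: instead of the nuclear norm (the sum of all singular values), only the singular values beyond a known target rank are penalized. A minimal sketch of the two surrogates (function names and interface are ours, not from the paper):

```python
import numpy as np

def nuclear_norm(X):
    """Sum of all singular values: the conventional convex rank surrogate."""
    return np.linalg.svd(X, compute_uv=False).sum()

def partial_sum_sv(X, target_rank):
    """Sum of the singular values beyond the known target rank.

    Leaving the first `target_rank` singular values unpenalized implicitly
    encourages rank(X) == target_rank rather than the smallest possible rank.
    """
    s = np.linalg.svd(X, compute_uv=False)  # returned in descending order
    return s[target_rank:].sum()
```

For X = diag(3, 2, 1), the nuclear norm is 6, while the partial sum with target rank 1 is 2 + 1 = 3: the dominant singular value is exempt from the penalty.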
4 0.68169951 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval
Author: Shiliang Zhang, Ming Yang, Xiaoyu Wang, Yuanqing Lin, Qi Tian
Abstract: Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so their discriminative capacity largely determines retrieval performance. In this paper, for vocabulary-tree-based image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that robustly delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial set. Encoding these two distinct cues together effectively enhances the discriminative capability of the inverted indexes. These co-indexing operations are entirely off-line and introduce little computation overhead to an online query, because only local features, and no semantic attributes, are used for the query. Experiments and comparisons with recent retrieval methods on 3 datasets, i.e., UKbench, Holidays, and Oxford5K, with 1.3 million images from Flickr as distractors, manifest the competitive performance of our method.
5 0.67973828 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
Author: Zhenyu Guo, Z. Jane Wang
Abstract: Digital images nowadays show large appearance variability in picture style, in terms of color tone, contrast, vignetting, etc. These ‘picture styles’ are directly related to the scene radiance, the image pipeline of the camera, and post-processing functions (e.g., photography effect filters). Due to the complexity and nonlinearity of these factors, popular gradient-based image descriptors are generally not invariant to different picture styles, which can degrade object recognition performance. Given that images shared online or created by individual users are taken with a wide range of devices and may be processed by various post-processing functions, finding a robust object recognition system is both useful and challenging. In this paper, we investigate the influence of picture styles on object recognition by making a connection between image descriptors and a pixel mapping function g, and accordingly propose an adaptive approach based on a g-incorporated kernel descriptor and multiple kernel learning, without estimating or specifying the image styles used in training and testing. We conduct experiments on the Domain Adaptation data set, the Oxford Flower data set, and several variants of the Flower data set created by introducing popular photography effects through post-processing. The results demonstrate that the proposed method consistently yields recognition improvements over standard descriptors in all studied cases.
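The abstract's premise — that gradient-based descriptors are not invariant to a nonlinear pixel mapping g — is quick to check numerically. The sketch below applies a hypothetical gamma-style tone map (our illustrative choice, not the paper's mapping g or its kernel descriptor) and compares gradient magnitudes before and after.

```python
import numpy as np

def grad_mag(im):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(im)
    return np.hypot(gx, gy)

# A nonlinear tone map standing in for a camera pipeline / post-processing style.
g = lambda v: v ** 0.5

rng = np.random.default_rng(1)
img = rng.random((8, 8))        # toy image with values in [0, 1)

m_raw = grad_mag(img)
m_styled = grad_mag(g(img))     # chain rule: grad g(I) = g'(I) * grad I,
                                # so gradients are rescaled pixel-wise by g'(I)
```

Because g'(I) varies across the image, the two gradient maps differ in a spatially varying way, which is exactly why a fixed gradient-based descriptor changes with picture style.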
6 0.66289055 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
7 0.5903573 280 iccv-2013-Multi-view 3D Reconstruction from Uncalibrated Radially-Symmetric Cameras
8 0.58104724 353 iccv-2013-Revisiting the PnP Problem: A Fast, General and Optimal Solution
9 0.58076149 27 iccv-2013-A Robust Analytical Solution to Isometric Shape-from-Template with Focal Length Calibration
10 0.57935202 324 iccv-2013-Potts Model, Parametric Maxflow and K-Submodular Functions
11 0.57728511 362 iccv-2013-Robust Tucker Tensor Decomposition for Effective Image Representation
12 0.57456267 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification
13 0.5728395 436 iccv-2013-Unsupervised Intrinsic Calibration from a Single Frame Using a "Plumb-Line" Approach
14 0.57265794 277 iccv-2013-Multi-channel Correlation Filters
15 0.57209039 149 iccv-2013-Exemplar-Based Graph Matching for Robust Facial Landmark Localization
16 0.57109052 49 iccv-2013-An Enhanced Structure-from-Motion Paradigm Based on the Absolute Dual Quadric and Images of Circular Points
17 0.57026434 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution
18 0.56988174 232 iccv-2013-Latent Space Sparse Subspace Clustering
19 0.5679673 128 iccv-2013-Dynamic Probabilistic Volumetric Models
20 0.56433094 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding