cvpr cvpr2013 cvpr2013-405 knowledge-graph by maker-knowledge-mining

405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds


Source: pdf

Author: Bo Wang, Zhuowen Tu

Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. [sent-5, score-0.348]

2 For simple data such as well-aligned faces, PCA techniques are shown to be well applicable; for data of high complexity, a single linear subspace then may not be sufficient to capture the intrinsic data structure. [sent-12, score-0.376]

3 Mixtures of PCA [22] and products of linear subspaces (generalized principal component analysis [24]) are proposed. [sent-15, score-0.147]

4 In more general cases of manifold learning, nonlinearity is often assumed in order to preserve the lower-dimensional embedding [18, 21, 16, 3]. [sent-16, score-0.245]

5 Both the ISOMAP [21] and LLE [16] make an assumption of local Euclidean space and try to preserve similar neighborhood structures in the embedded spaces. [sent-17, score-0.091]

6 The spectral manifold learning approaches [3] compute the eigenmaps of the graph Laplacian to directly perform non-linear dimensionality reduction. [sent-18, score-0.38]

7 From a different angle, diffusion maps [8] define diffusion distances between data samples; an input similarity measurement is improved through a diffusion process. [sent-19, score-0.279]

8 Diffusion-based methods [8, 14] essentially perform denoising on the similarity metric on which spectral manifold learning methods can be applied. [sent-20, score-0.699]

9 The idea is to explicitly construct a new embedding space with a corresponding metric which is more faithful to the manifold structure and hence induces a better distance/similarity measure. [sent-21, score-0.245]

10 All the manifold learning methods above depend on the construction of a good neighborhood graph, either explicitly [21, 16] or implicitly [3, 8, 25, 26]. [sent-24, score-0.344]

11 Essentially, the neighborhood or k-nearest neighborhood definition forms a sparse sample assumption. [sent-26, score-0.2]

12 Recent seminal work in compressive sensing [4] demonstrates the significance of having the sparse assumption in reconstruction, if the data is indeed sparse. [sent-27, score-0.074]

13 The idea of local sparse data reconstruction [27] has achieved significant improvement in codebook-based image classification tasks. [sent-28, score-0.128]

14 Moreover, as discussed before, data manifolds are often composed of multiple subspaces. [sent-29, score-0.098]

15 The sparse subspace clustering algorithm (SSC) [9] provides a tractable way of learning subspaces with the in-class sparse self-reconstruction assumption. [sent-30, score-0.709]

16 Here, inspired by sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. [sent-31, score-1.899]

17 In the experiments, we show significant improvements over the existing manifold denoising algorithms. [sent-32, score-0.592]

18 Next, we review related work in manifold denoising and then give the detailed formulation of our algorithm. [sent-34, score-0.592]

19 Related Work. Although manifold learning is an important topic in machine learning, manifold denoising has received relatively little attention. [sent-36, score-0.873]

20 Unlike the traditional manifold learning and dimension reduction algorithms, the goal of a denoising algorithm is to obtain a cleaner output in the same dimensionality as the input data. [sent-37, score-0.702]

21 The outputs of a denoising algorithm can be further used to facilitate tasks such as feature selection, clustering, and classification. [sent-38, score-0.379]

22 The simplest denoising technique is PCA, which adopts a strategy of projection and backward mapping. [sent-39, score-0.347]

23 The idea behind them is that the noise in the samples follows a normal distribution, so averaging over local manifolds can reduce the noise level. [sent-43, score-0.305]

24 These methods obtain denoised outputs by reversing a diffusion process of which the graph Laplacian is the generator. [sent-48, score-0.434]

25 However, the problem associated with them is that the graph Laplacian only captures the global gradient distribution of the manifolds and fails to reduce the local noise inherent in sub-manifolds, which is important to pattern discovery tasks such as clustering and classification. [sent-49, score-0.255]

26 Lastly, both categories face the same challenge of choosing the number of neighbors when either inferring the local manifolds or selecting locally reliable clusters. [sent-50, score-0.202]

27 In this paper, we try to overcome these problems using two strategies: 1) We use a recent subspace learning algorithm to capture the subspace structures, which indicates good potential clusters and also a reliable similarity measure. [sent-51, score-0.791]

28 2) We design a local sparse regularization term which shares several common algebraic properties with graph Laplacian but with the difference in emphasizing the local subspace distributions. [sent-52, score-0.45]

29 Given data samples S = {x1, . . . , xn}, where each xi ∈ Rm and m indicates the input data dimension. [sent-60, score-0.087]

30 The objective of manifold denoising in this paper is therefore to compute a new denoised data matrix X̂ [sent-65, score-0.968]

31 X̂ ∈ Rm×n to facilitate clustering and classification in further tasks. [sent-66, score-0.112]

32 Each data sample xi here is associated with a label indicator vector yi ∈ {0, 1}C such that yi(k) = 1 if xi belongs to the k-th subspace, and yi(k) = 0 otherwise. [sent-70, score-0.22]

33 The matrix Y = [y1, . . . , yn]T ∈ {0, 1}n×C is used to represent a clustering scheme. [sent-74, score-0.08]

34 Given a graph G = (V, E), the vertices V are the data samples and the edges E are weighted by an n × n similarity matrix W, with Wij indicating the similarity between xi and xj. [sent-79, score-0.212]

35 Ni represents the set of xi's neighbors in graph G, excluding xi, and Ki = |Ni|. [sent-80, score-0.176]

36 Background: Sparse Subspace Clustering. Sparse subspace clustering (SSC) [9] builds clustering on sparse learning, with applications demonstrated in motion segmentation [9]. [sent-83, score-0.534]

37 The motivation of SSC is simple in that each data sample xi can be expressed as a [sent-84, score-0.087]

38 linear combination of all the data within the cluster, xi = Σj≠i aij xj. [sent-85, score-0.087]

39 The goal is to learn a sparse coefficient matrix A ∈ Rn×n such that aij is nonzero only when xi and xj lie in the same subspace. [sent-90, score-0.128]

40 The sparse matrix A has two important usages: 1) For each datum xi, its neighbors Ni can be easily inferred from the nonzero elements in the i-th column of A. [sent-103, score-0.284]

41 Therefore, we don't need to specify the number of nearest neighbors for each datum as most other methods do. [sent-104, score-0.098]

42 In addition, it can be theoretically guaranteed that all chosen neighbors of xi belong to the same subspace as xi. [sent-105, score-0.516]
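
The SSC step above can be sketched with an off-the-shelf ℓ1 solver. The following is a minimal illustration, assuming a Lasso relaxation of the exact constrained program in [9]; the penalty weight `alpha` and the function name are illustrative, not from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

def ssc_coefficients(X, alpha=0.01):
    """Express each sample x_i as a sparse combination of the other
    samples (a_ii = 0), one Lasso problem per column of X (m x n)."""
    m, n = X.shape
    A = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)   # exclude x_i from its own dictionary
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])      # min ||x_i - X_{-i} a||^2 + alpha ||a||_1
        A[others, i] = lasso.coef_            # column i holds x_i's coefficients
    return A

# The adaptive neighborhood N_i is read off the nonzero entries of column i:
# N_i = np.flatnonzero(np.abs(A[:, i]) > 1e-8)
```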

43 1 Motivations. A straightforward idea behind a good denoising algorithm is that, for each subspace, the newly denoised data still preserves the intrinsic allocation of subspaces and can be easily reconstructed by a simple PCA algorithm. [sent-111, score-0.912]

44 Hence, we propose an optimization framework that aims to minimize both the subspace coherence error and the reconstruction error. [sent-112, score-0.588]

45 A good denoised output should satisfy two criteria: 1) points belonging to the same subspaces should remain in the same subspaces; 2) they should stay close to where they were. [sent-117, score-0.423]

46 The first criterion can be achieved by minimizing the subspace coherence error, while the second is satisfied by minimizing the reconstruction error. [sent-118, score-0.56]

47 We can see that noisy points are moved towards the center of the subspace they belong to, while the positions of most of the points are preserved. [sent-121, score-0.344]

48 2 Subspace Coherence Error. Instead of using the partition matrix Y directly, we adopt a Scaled Partition Matrix G such that G = Y(YTY)−1/2 = [g1, g2, . . . ]. [sent-127, score-0.095]

49 gik = Yik/√nk, where nk is the size of the k-th subspace, can be regarded as the confidence that xi is assigned to the k-th subspace. [sent-134, score-0.168]
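
Since Y is a 0/1 indicator matrix, YTY = diag(n1, . . . , nC), so G can be computed column-wise. A small sketch, assuming every subspace is non-empty (function name illustrative):

```python
import numpy as np

def scaled_partition(labels, C):
    """Build Y in {0,1}^(n x C) from integer labels, then
    G = Y (Y^T Y)^(-1/2), i.e. g_ik = Y_ik / sqrt(n_k)."""
    n = len(labels)
    Y = np.zeros((n, C))
    Y[np.arange(n), labels] = 1.0
    nk = Y.sum(axis=0)          # subspace sizes n_1, ..., n_C (assumed > 0)
    G = Y / np.sqrt(nk)         # divides column k by sqrt(n_k)
    return Y, G
```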

50 The principle of local denoising is that the subspace assignments in the neighborhood of each point should be as smooth as possible. [sent-136, score-0.793]

51 Specifically, it assumes that the cluster indicator value at xi should be well inferred from the local neighbors Ni. [sent-137, score-0.287]

52 So given a similarity graph G = (V, E), we extract a subgraph Gi = (Vi, Ei) such that Vi = Ni ∪ {xi} and Ei = E(Vi). [sent-138, score-0.223]

53 The similarity matrix associated with the subgraph Gi is Wi = W(Ei). [sent-139, score-0.104]

54 Using the label diffusion algorithm [29], we can reconstruct a virtual label indicator vector pk such that pk = (1 − α)(I − αLi)−1 gk(Vi), 1 ≤ k ≤ C, (2) where α is a constant (0 < α < 1) and gk(Vi) is the scaled subspace indicator vector on the subgraph Vi. [sent-140, score-0.763]

55 Li represents the normalized transition matrix for the subgraph Wi. [sent-141, score-0.112]

56 g̃ik = pk(Ki + 1) is the estimated cluster assignment indicator, and … [sent-148, score-0.148]
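
Eqn. (2) is a closed-form label diffusion on the subgraph. A sketch, assuming Li is the symmetrically normalized similarity D−1/2 Wi D−1/2 as in the label propagation scheme of [29]; the extraction above only says "normalized transition matrix", so this choice is an assumption:

```python
import numpy as np

def diffuse_labels(Wi, G_sub, alpha=0.9):
    """p_k = (1 - alpha) (I - alpha L_i)^(-1) g_k for all k at once.
    Wi:    (1+K_i) x (1+K_i) similarity of the subgraph V_i = N_i u {x_i}
    G_sub: rows of the scaled partition matrix restricted to V_i."""
    d = Wi.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))       # guard isolated nodes
    Li = (Wi * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]  # D^(-1/2) W_i D^(-1/2), assumed
    n = Wi.shape[0]
    P = (1.0 - alpha) * np.linalg.solve(np.eye(n) - alpha * Li, G_sub)
    return P   # column k is the virtual indicator vector p_k of eqn. (2)
```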

57 Bij = 1 − βi if j = i; βi/(Ki + 1) if xj is the l-th element in Ni; 0 otherwise. (5) This sparse matrix B represents a linear relationship that G ≈ BG. [sent-158, score-0.128]

58 ‖G − BG‖2 = Trace(GT(I − B)T(I − B)G). (6) Denote Z = (I − B)T(I − B); then our subspace assignment results can be obtained by solving the following optimization problem: minG∈Rn×C Trace(GTZG) s.t. [sent-187, score-0.344]
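
With B in hand, Z is a simple matrix product. If the elided constraint is the usual orthonormality GTG = I (which the scaled partition matrix satisfies, but which is an assumption here), the relaxed minimizer of Trace(GTZG) is given by the C smallest eigenvectors of Z:

```python
import numpy as np

def subspace_assignment(B, C):
    """Z = (I - B)^T (I - B); relaxed solution of
    min Trace(G^T Z G) s.t. G^T G = I (assumed constraint)."""
    n = B.shape[0]
    IB = np.eye(n) - B
    Z = IB.T @ IB                      # symmetric positive semi-definite
    vals, vecs = np.linalg.eigh(Z)     # eigenvalues returned in ascending order
    G = vecs[:, :C]                    # C smallest eigenvectors
    return Z, G
```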

59 The idea behind the denoising principle is that the denoised features should be coherent with the subspace assignments. [sent-192, score-1.042]

60 So for each new feature f, we define the coherence error CE as follows: … [sent-193, score-0.162]

61 From (8), we can see that the coherence error (CE) evaluates two aspects: 1) the extent of match between the feature and the local structure of the image manifold; 2) the fitness between the feature and the subspace assignments. [sent-200, score-0.546]

62 The smaller CE is, the more relevant the feature is to the subspace structures. [sent-201, score-0.372]

63 Therefore, one objective can be defined as the coherence error over the whole denoised data X̂. [sent-220, score-0.484]

64 3 Subspace Reconstruction Error. Another direction of denoising is to make sure the new data points are not far away from the initial data manifolds, which are inferred by the traditional data reconstruction technique PCA. [sent-236, score-0.21]

65 We look at each data point xi and its neighbors Ni, and assume them to be random samples from a linear subspace approximating a local manifold around xi (to be exact, it is guaranteed that neighbors inferred by sparse subspace clustering (1) share the same subspace [9]). [sent-237, score-1.329]

66 For each local patch Xi consisting of xi and its neighbors Ni, we can approximate it with PCA: Ri = UiUiTXi(I − llT/(1 + ki)) + XillT/(1 + ki), (11) where Ui ∈ Rm×di denotes the di principal components from PCA, ki = |Ni|, and l is the all-ones vector. [sent-239, score-0.218]

67 The number of principal components di can be estimated. [sent-240, score-0.076]

68 We estimate di by requiring the variance of the chosen di principal components to account for at least 90% of the whole variance. [sent-242, score-0.076]
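
Eqn. (11) is an ordinary local PCA reconstruction: center the patch, project onto the top di directions, and add the mean back. A sketch of this step, with di picked by the 90% variance rule (function name illustrative):

```python
import numpy as np

def local_pca_reconstruction(Xi, var_ratio=0.90):
    """Xi is m x (1 + k_i): x_i and its neighbors N_i as columns.
    Returns R_i = U_i U_i^T X_i (I - l l^T/(1+k_i)) + X_i l l^T/(1+k_i)."""
    mean = Xi.mean(axis=1, keepdims=True)         # X_i l l^T / (1+k_i), one column
    Xc = Xi - mean                                # centered patch
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = (s ** 2) / max((s ** 2).sum(), 1e-12)   # variance fraction per component
    di = int(np.searchsorted(np.cumsum(var), var_ratio) + 1)  # 90% rule
    Ui = U[:, :di]
    Ri = Ui @ (Ui.T @ Xc) + mean                  # eqn. (11)
    return Ri, di
```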

69 … the reconstructed data for the j-th neighbor of xi in (11). [sent-255, score-0.087]

70 Let S = [S1, . . . , Sn] be a 1 × n block matrix where each block Si ∈ Rn×(1+ki) corresponds to the neighborhood indicator matrix. [sent-260, score-0.163]

71 Therefore we can express the reconstruction error in matrix form: LRE = … [sent-262, score-0.154]

72 Hence the overall error can be expressed as a combination of the subspace coherence error and the subspace reconstruction error: L(λ) = LCE + λLRE = Trace(ZX̂ … [sent-275, score-0.95]
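
The exact matrix forms of LCE and LRE are truncated in the extraction above, so the following is only a hedged sketch: it assumes LCE = Trace(X̂ZX̂T), the feature analogue of eqn. (6), and LRE = ‖X̂S − R‖F² with the per-point PCA targets Ri stacked into R:

```python
import numpy as np

def overall_loss(X_hat, Z, S, R, lam):
    """Assumed forms: L_CE = Trace(X_hat Z X_hat^T),
    L_RE = ||X_hat S - R||_F^2; L(lambda) = L_CE + lambda * L_RE."""
    lce = np.trace(X_hat @ Z @ X_hat.T)
    lre = np.linalg.norm(X_hat @ S - R, ord='fro') ** 2
    return lce + lam * lre
```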

73 Our algorithm depends on the neighborhood graphs, so it can potentially be generalized into an iterative scheme. [sent-286, score-0.09]

74 Given the denoised output X̂∗(λ) in (15), the sparse subspace matrix can be refined. [sent-288, score-0.472]

75 Analysis. In this section, we provide some preliminary analysis of our method and its relationship to some existing denoising algorithms. [sent-295, score-0.347]

76 (1); Step 2: Compute the subspace coherence error by constructing Z from eqn. [sent-308, score-0.506]

77 Another line of denoising algorithms tends to use the Laplace-Beltrami operator on smooth manifolds [13, 11]. [sent-311, score-0.484]

78 They often use the normalized Laplacian L = I − D−1/2WD−1/2, where D is the degree matrix of the similarity matrix W. [sent-313, score-0.093]
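
For reference, the normalized Laplacian used by this line of methods is straightforward to compute:

```python
import numpy as np

def normalized_laplacian(W):
    """L = I - D^(-1/2) W D^(-1/2), with D the degree matrix of W."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard isolated nodes
    return np.eye(W.shape[0]) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
```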

79 The key difference from ours is that Laplacian regularization only considers the global structure, while our algorithm focuses on the local subspace structure. [sent-314, score-0.344]

80 As pointed out by [11], pure Laplacian denoising tends to over-smooth, and local structure is more helpful for reducing the effect of noise in subsequent applications such as clustering and classification. [sent-315, score-0.511]

81 We can see that our SSD can successfully remove the added noise and therefore facilitate subsequent applications. [sent-329, score-0.077]

82 We can see that when the noise is small, one round of SSD is enough to clean the data, while iterative SSD obtains cleaner data under large noise. [sent-343, score-0.138]

83 We compare our method to a few existing manifold denoising algorithms: 1) Generalized Blurring Mean Shift (GBMS) [5]; 2) Manifold Blurring Mean Shift (MBMS) [28]; 3) Manifold Denoising (MD) [13]; 4) Locally Linear Denoising (LLD) [11]. [sent-348, score-0.592]

84 Firstly, we perform clustering on three benchmark face recognition datasets (ORL, Yale, Yale B). [sent-368, score-0.127]

85 We use Normalized Mutual Information (NMI) [19] to evaluate the performance of clustering results. [sent-369, score-0.08]
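
NMI is available off the shelf, e.g. in scikit-learn; a minimal usage sketch with hypothetical label vectors:

```python
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 1, 1, 2, 2]   # hypothetical ground-truth identities
labels_pred = [0, 0, 1, 2, 2, 2]   # hypothetical cluster assignments
nmi = normalized_mutual_info_score(labels_true, labels_pred)
print(f"NMI = {nmi:.3f}")          # 1.0 would mean perfect agreement
```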

86 We also show some illustrative examples of the denoised images from YaleB in Fig. [sent-452, score-0.371]

87 It is noticeable that SSD is capable of adjusting the face orientations and illuminations to make the face images clearer and more discriminative. [sent-454, score-0.094]

88 The upper panel shows the initial faces while the lower panel shows the denoised output. [sent-457, score-0.526]

89 Instead, our algorithm can infer the cluster distribution since we aim to minimize the subspace coherence error. [sent-471, score-0.527]

90 However, our method preserves the basic topology of the subspace distribution due to the objective of minimizing the reconstruction error. [sent-473, score-0.479]

91 With denoising algorithms, the classification accuracy can be improved. [sent-503, score-0.347]

92 In this case, other denoising algorithms failed to improve the data features. [sent-511, score-0.347]

93 In addition, we show several examples of denoised images compared with the original ones in Fig. [sent-513, score-0.322]

94 The most obvious change is that the denoised ones look smoother and easier to identify. [sent-515, score-0.353]

95 Conclusion. In this paper, we have introduced a general manifold learning algorithm which takes advantage of subspace … (Footnote 4: we downloaded the data from http://www.) [sent-518, score-0.625]

96 The upper panel shows the initial faces while the lower panel shows the denoised output. [sent-528, score-0.526]

97 It is observed that SSD preserves the distinctive style of each digit or letter while smoothing out a few minor outliers, e.g. [sent-529, score-0.119]

98 Our algorithm requires nearly no parameter tuning across different datasets, and we show significant improvement over competing methods on various applications such as clustering and classification. [sent-533, score-0.08]

99 The lower panel shows the results using SVM with linear kernels. [sent-594, score-0.086]

100 The baseline refers to the results on raw features without any denoising algorithm. [sent-595, score-0.347]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('denoising', 0.347), ('subspace', 0.344), ('denoised', 0.322), ('ssd', 0.322), ('manifold', 0.245), ('ki', 0.142), ('ssc', 0.12), ('coherence', 0.116), ('subspaces', 0.101), ('manifolds', 0.098), ('laplacian', 0.097), ('gbms', 0.091), ('kvi', 0.091), ('mbms', 0.091), ('xi', 0.087), ('panel', 0.086), ('gik', 0.081), ('reserves', 0.081), ('lre', 0.081), ('clustering', 0.08), ('diffusion', 0.08), ('gk', 0.078), ('blurring', 0.075), ('orl', 0.075), ('yty', 0.075), ('sparse', 0.074), ('ni', 0.074), ('pk', 0.071), ('noises', 0.071), ('subgraph', 0.065), ('yale', 0.064), ('neighborhood', 0.063), ('alphadigits', 0.061), ('bgk', 0.061), ('lld', 0.061), ('pca', 0.06), ('ce', 0.06), ('inferred', 0.058), ('neighbors', 0.057), ('reconstruction', 0.054), ('round', 0.054), ('matrix', 0.054), ('lgc', 0.054), ('trace', 0.051), ('nmi', 0.05), ('gtg', 0.05), ('yaleb', 0.05), ('lce', 0.05), ('generalised', 0.05), ('illustrative', 0.049), ('zhuowen', 0.047), ('ttrhiex', 0.047), ('face', 0.047), ('indicator', 0.046), ('toy', 0.046), ('error', 0.046), ('principal', 0.046), ('noise', 0.045), ('ift', 0.045), ('caltech', 0.044), ('datum', 0.041), ('partition', 0.041), ('gc', 0.04), ('fitness', 0.04), ('scaled', 0.04), ('smooth', 0.039), ('similarity', 0.039), ('cleaner', 0.039), ('cluster', 0.039), ('ik', 0.038), ('zx', 0.038), ('digit', 0.038), ('chapelle', 0.038), ('rm', 0.037), ('learning', 0.036), ('dimensionality', 0.035), ('rn', 0.034), ('columbia', 0.034), ('samples', 0.033), ('rounds', 0.033), ('spectral', 0.032), ('faces', 0.032), ('facilitate', 0.032), ('graph', 0.032), ('intrinsic', 0.032), ('ig', 0.032), ('easier', 0.031), ('shift', 0.03), ('neural', 0.03), ('di', 0.03), ('neighborhoods', 0.03), ('behind', 0.029), ('semisupervised', 0.029), ('award', 0.029), ('averaging', 0.029), ('ei', 0.028), ('minimize', 0.028), ('theoretically', 0.028), ('structures', 0.028), ('generalized', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds

Author: Bo Wang, Zhuowen Tu

Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.

2 0.24186374 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces

Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell

Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters to represent different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).

3 0.24083541 379 cvpr-2013-Scalable Sparse Subspace Clustering

Author: Xi Peng, Lei Zhang, Zhang Yi

Abstract: In this paper, we address two problems in Sparse Subspace Clustering algorithm (SSC), i.e., scalability issue and out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering by using ℓ1-minimization based coefficients, has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportion to the cubic of problem size such that it is inefficient to apply SSC into large scale setting. Moreover, SSC does not handle with out-of-sample data that are not used to construct the similarity graph. For each new datum, SSC needs recalculating the cluster membership of the whole data set, which makes SSC is not competitive in fast online clustering. To address the problems, this paper proposes out-of-sample extension of SSC, named as Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible to cluster large scale data sets. The solution of SSSC adopts a "sampling, clustering, coding, and classifying" strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method comparing with the state-of-the-art algorithms.

4 0.23116939 135 cvpr-2013-Discriminative Subspace Clustering

Author: Vasileios Zografos, Liam Ellis, Rudolf Mester

Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.

5 0.23108035 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

Author: Wangmeng Zuo, Lei Zhang, Chunwei Song, David Zhang

Abstract: Image denoising is a classical yet fundamental problem in low level vision, as well as an ideal test bed to evaluate various statistical image modeling methods. One of the most challenging problems in image denoising is how to preserve the fine scale texture structures while removing noise. Various natural image priors, such as gradient based prior, nonlocal self-similarity prior, and sparsity prior, have been extensively exploited for noise removal. The denoising algorithms based on these priors, however, tend to smooth the detailed image textures, degrading the image visual quality. To address this problem, in this paper we propose a texture enhanced image denoising (TEID) method by enforcing the gradient distribution of the denoised image to be close to the estimated gradient distribution of the original image. A novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Our experimental results demonstrate that the proposed GHP based TEID can well preserve the texture features of the denoised images, making them look more natural.

6 0.18094279 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features

7 0.1803131 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections

8 0.16984154 259 cvpr-2013-Learning a Manifold as an Atlas

9 0.1687396 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior

10 0.14390998 169 cvpr-2013-Fast Patch-Based Denoising Using Approximated Patch Geodesic Paths

11 0.14384413 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification

12 0.13974793 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering

13 0.13281408 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds

14 0.13242193 64 cvpr-2013-Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification

15 0.13030434 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds

16 0.12967317 126 cvpr-2013-Diffusion Processes for Retrieval Revisited

17 0.12315928 92 cvpr-2013-Constrained Clustering and Its Application to Face Clustering in Videos

18 0.12128076 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification

19 0.12094612 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

20 0.1148522 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.218), (1, 0.003), (2, -0.104), (3, 0.128), (4, 0.036), (5, 0.015), (6, -0.045), (7, -0.195), (8, -0.041), (9, -0.141), (10, 0.053), (11, -0.039), (12, -0.16), (13, -0.151), (14, -0.067), (15, 0.043), (16, -0.199), (17, -0.045), (18, -0.139), (19, 0.015), (20, 0.159), (21, 0.1), (22, 0.093), (23, -0.014), (24, -0.015), (25, -0.076), (26, 0.002), (27, -0.119), (28, 0.02), (29, -0.033), (30, -0.015), (31, 0.039), (32, 0.061), (33, -0.05), (34, -0.097), (35, 0.069), (36, -0.008), (37, -0.022), (38, -0.051), (39, -0.004), (40, -0.06), (41, -0.05), (42, -0.036), (43, 0.026), (44, 0.024), (45, 0.042), (46, -0.017), (47, 0.037), (48, 0.04), (49, -0.094)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95595127 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds

Author: Bo Wang, Zhuowen Tu

Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.

2 0.85416085 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces

Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell

Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters to represent different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).

3 0.83380973 135 cvpr-2013-Discriminative Subspace Clustering

Author: Vasileios Zografos, Liam Ellis, Rudolf Mester

Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.

4 0.82909709 191 cvpr-2013-Graph-Laplacian PCA: Closed-Form Solution and Robustness

Author: Bo Jiang, Chris Ding, Bio Luo, Jin Tang

Abstract: Principal Component Analysis (PCA) is a widely used to learn a low-dimensional representation. In many applications, both vector data X and graph data W are available. Laplacian embedding is widely used for embedding graph data. We propose a graph-Laplacian PCA (gLPCA) to learn a low dimensional representation of X that incorporates graph structures encoded in W. This model has several advantages: (1) It is a data representation model. (2) It has a compact closed-form solution and can be efficiently computed. (3) It is capable to remove corruptions. Extensive experiments on 8 datasets show promising results on image reconstruction and significant improvement on clustering and classification.

5 0.82297277 379 cvpr-2013-Scalable Sparse Subspace Clustering

Author: Xi Peng, Lei Zhang, Zhang Yi

Abstract: In this paper, we address two problems in Sparse Subspace Clustering algorithm (SSC), i.e., scalability issue and out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering by using ℓ1-minimization based coefficients, has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportion to the cubic of problem size such that it is inefficient to apply SSC into large scale setting. Moreover, SSC does not handle with out-of-sample data that are not used to construct the similarity graph. For each new datum, SSC needs recalculating the cluster membership of the whole data set, which makes SSC is not competitive in fast online clustering. To address the problems, this paper proposes out-of-sample extension of SSC, named as Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible to cluster large scale data sets. The solution of SSSC adopts a "sampling, clustering, coding, and classifying" strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method comparing with the state-of-the-art algorithms.

6 0.79990393 259 cvpr-2013-Learning a Manifold as an Atlas

7 0.74089658 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections

8 0.68927056 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering

9 0.67028773 93 cvpr-2013-Constraints as Features

10 0.650361 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification

11 0.59805393 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features

12 0.54177719 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

13 0.53994256 42 cvpr-2013-Analytic Bilinear Appearance Subspace Construction for Modeling Image Irradiance under Natural Illumination and Non-Lambertian Reflectance

14 0.53990549 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem

15 0.53640115 253 cvpr-2013-Learning Multiple Non-linear Sub-spaces Using K-RBMs

16 0.52999681 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds

17 0.50689071 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds

18 0.49515626 126 cvpr-2013-Diffusion Processes for Retrieval Revisited

19 0.49426866 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices

20 0.48680106 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.129), (10, 0.1), (16, 0.019), (19, 0.011), (26, 0.128), (33, 0.283), (39, 0.01), (67, 0.053), (69, 0.093), (87, 0.046), (99, 0.011)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92243546 440 cvpr-2013-Tracking People and Their Objects

Author: Tobias Baumgartner, Dennis Mitzel, Bastian Leibe

Abstract: Current pedestrian tracking approaches ignore important aspects of human behavior. Humans are not moving independently, but they closely interact with their environment, which includes not only other persons, but also different scene objects. Typical everyday scenarios include people moving in groups, pushing child strollers, or pulling luggage. In this paper, we propose a probabilistic approach for classifying such person-object interactions, associating objects to persons, and predicting how the interaction will most likely continue. Our approach relies on stereo depth information in order to track all scene objects in 3D, while simultaneously building up their 3D shape models. These models and their relative spatial arrangement are then fed into a probabilistic graphical model which jointly infers pairwise interactions and object classes. The inferred interactions can then be used to support tracking by recovering lost object tracks. We evaluate our approach on a novel dataset containing more than 15,000 frames of personobject interactions in 325 video sequences and demonstrate good performance in challenging real-world scenarios.

2 0.91901392 311 cvpr-2013-Occlusion Patterns for Object Class Detection

Author: Bojan Pepikj, Michael Stark, Peter Gehler, Bernt Schiele

Abstract: Despite the success of recent object class recognition systems, the long-standing problem of partial occlusion remains a major challenge, and a principled solution is yet to be found. In this paper we leave the beaten path of methods that treat occlusion as just another source of noise instead, we include the occluder itself into the modelling, by mining distinctive, reoccurring occlusion patterns from annotated training data. These patterns are then used as training data for dedicated detectors of varying sophistication. In particular, we evaluate and compare models that range from standard object class detectors to hierarchical, part-based representations of occluder/occludee pairs. In an extensive evaluation we derive insights that can aid further developments in tackling the occlusion challenge. –

same-paper 3 0.91460288 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds

Author: Bo Wang, Zhuowen Tu

Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.

4 0.91246176 152 cvpr-2013-Exemplar-Based Face Parsing

Author: Brandon M. Smith, Li Zhang, Jonathan Brandt, Zhe Lin, Jianchao Yang

Abstract: In this work, we propose an exemplar-based face image segmentation algorithm. We take inspiration from previous works on image parsing for general scenes. Our approach assumes a database of exemplar face images, each of which is associated with a hand-labeled segmentation map. Given a test image, our algorithm first selects a subset of exemplar images from the database, Our algorithm then computes a nonrigid warp for each exemplar image to align it with the test image. Finally, we propagate labels from the exemplar images to the test image in a pixel-wise manner, using trained weights to modulate and combine label maps from different exemplars. We evaluate our method on two challenging datasets and compare with two face parsing algorithms and a general scene parsing algorithm. We also compare our segmentation results with contour-based face alignment results; that is, we first run the alignment algorithms to extract contour points and then derive segments from the contours. Our algorithm compares favorably with all previous works on all datasets evaluated.

5 0.91019118 280 cvpr-2013-Maximum Cohesive Grid of Superpixels for Fast Object Localization

Author: Liang Li, Wei Feng, Liang Wan, Jiawan Zhang

Abstract: This paper addresses a challenging problem of regularizing arbitrary superpixels into an optimal grid structure, which may significantly extend current low-level vision algorithms by allowing them to use superpixels (SPs) conveniently as using pixels. For this purpose, we aim at constructing maximum cohesive SP-grid, which is composed of real nodes, i.e. SPs, and dummy nodes that are meaningless in the image with only position-taking function in the grid. For a given formation of image SPs and proper number of dummy nodes, we first dynamically align them into a grid based on the centroid localities of SPs. We then define the SP-grid coherence as the sum of edge weights, with SP locality and appearance encoded, along all direct paths connecting any two nearest neighboring real nodes in the grid. We finally maximize the SP-grid coherence via cascade dynamic programming. Our approach can take the regional objectness as an optional constraint to produce more semantically reliable SP-grids. Experiments on object localization show that our approach outperforms state-of-the-art methods in terms of both detection accuracy and speed. We also find that with the same searching strategy and features, object localization at SP-level is about 100-500 times faster than pixel-level, with usually better detection accuracy.

6 0.90172684 88 cvpr-2013-Compressible Motion Fields

7 0.89774704 353 cvpr-2013-Relative Hidden Markov Models for Evaluating Motion Skill

8 0.89767456 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration

9 0.89724064 281 cvpr-2013-Measures and Meta-Measures for the Supervised Evaluation of Image Segmentation

10 0.89556623 99 cvpr-2013-Cross-View Image Geolocalization

11 0.89050019 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection

12 0.8898077 424 cvpr-2013-Templateless Quasi-rigid Shape Modeling with Implicit Loop-Closure

13 0.88941485 445 cvpr-2013-Understanding Bayesian Rooms Using Composite 3D Object Models

14 0.88909727 292 cvpr-2013-Multi-agent Event Detection: Localization and Role Assignment

15 0.8890332 96 cvpr-2013-Correlation Filters for Object Alignment

16 0.88871133 70 cvpr-2013-Bottom-Up Segmentation for Top-Down Detection

17 0.88828808 101 cvpr-2013-Cumulative Attribute Space for Age and Crowd Density Estimation

18 0.8874889 61 cvpr-2013-Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics

19 0.88742399 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

20 0.88641083 77 cvpr-2013-Capturing Complex Spatio-temporal Relations among Facial Muscles for Facial Expression Recognition