iccv iccv2013 iccv2013-158 knowledge-graph by maker-knowledge-mining

158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition


Source: pdf

Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz

Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. [sent-6, score-0.325]

2 First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. [sent-7, score-0.144]

3 This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. [sent-10, score-0.285]

4 Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. [sent-11, score-0.368]

5 We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. [sent-12, score-0.399]

6 Introduction The Labeled Faces in the Wild (LFW) face recognition benchmark [1] is currently the most active research benchmark of its kind. [sent-15, score-0.279]

7 It is built around a simple binary decision task: given two face images, is the same person being photographed in both? [sent-16, score-0.166]

8 The comprehensive results tables show a large variety of methods which can be roughly divided into two categories: pair comparison methods and signature based methods. [sent-17, score-0.116]

9 In the signature based methods [5, 6, 7, 8, 9], each face image is represented by a single descriptor vector and is then discarded. [sent-19, score-0.261]

10 To compare two face images, their signatures are compared using predefined metric functions, which are sometimes learned based on the training data. [sent-20, score-0.204]

11 Furthermore, there is a practical value in signature based methods in which the signature is compact. [sent-29, score-0.126]

12 Such systems can store and retrieve face images using limited resources. [sent-30, score-0.165]

13 In this paper, we propose an efficient signature based method, in which the storage footprint of each signature is on the order of a hundred floating point numbers. [sent-31, score-0.126]

14 However, this added accuracy is hidden until dimensionality reduction is performed. [sent-36, score-0.244]

15 In Section 3, we propose the use of the WCCN [10] metric learning technique for face recognition. [sent-37, score-0.177]

16 In Section 5, we describe in detail our proposed recognition system, which is applicable for both supervised and unsupervised learning by utilizing the scheme described in Section 4. [sent-39, score-0.25]

17 This results in an extension of the WCCN metric learning to the unsupervised case. [sent-40, score-0.185]

18 In Section 6, the Diffusion Maps technique (DM) [11] is introduced as a non-linear dimensionality reduction method for face recognition. [sent-41, score-0.368]

19 In Section 7, we evaluate the proposed system on the LFW dataset under the restricted, unrestricted and unsupervised protocols and report state of the art results on these benchmarks. [sent-43, score-0.297]

20 1 Overview of the recognition pipeline A unified pipeline is used in order to solve the unsupervised case and the two supervised scenarios of the LFW benchmark: the restricted and the unrestricted protocols. [sent-46, score-0.515]

21 First, a representation is constructed from the face images. [sent-47, score-0.144]

22 This is either WPCA or the Diffusion Maps for the unsupervised case, or PCA-LDA or DM-LDA for the two supervised settings. [sent-50, score-0.213]

23 For the unsupervised case, our unsupervised WCCN variant is applied. [sent-53, score-0.285]

24 Over-complete representations Over-complete representations have been found to be useful for improving the robustness of classification systems by using richer descriptors [14, 15]. [sent-56, score-0.136]

25 In this work, we introduce two new adaptations of descriptors for the domain of face recognition. [sent-57, score-0.197]

26 In the experimental results section, we show that the improvement in the accuracy of using over-complete representations remains hidden until some dimensionality reduction is involved. [sent-59, score-0.264]

27 Specifically, a modified 'uniform' version [9] of the original LBP was found to be useful for the task of face recognition. [sent-64, score-0.144]

28 The standard LBP operator for face recognition is denoted as LBP^{u2}_{p,r}, where u2 stands for uniform patterns, p defines the number of points that are uniformly sampled over a circle with radius r. [sent-67, score-0.203]
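
As a concrete illustration of how such block-based LBP descriptors are built, the sketch below computes uniform LBP codes, pools them into per-block histograms, and concatenates histograms over several scales in the spirit of OCLBP. The block sizes and the (p, r) configurations are illustrative assumptions, not the paper's exact settings.

    # Minimal sketch of multi-scale block LBP histograms (OCLBP-like).
    # The (p, r, block_size) configurations below are assumptions for illustration.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def block_lbp_histograms(image, p, r, block_size):
        # Uniform LBP codes pooled into per-block, L1-normalized histograms.
        codes = local_binary_pattern(image, P=p, R=r, method='uniform')
        n_bins = p + 2  # uniform patterns plus one bin for all non-uniform codes
        h, w = codes.shape
        hists = []
        for y in range(0, h - block_size + 1, block_size):
            for x in range(0, w - block_size + 1, block_size):
                block = codes[y:y + block_size, x:x + block_size]
                hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
                hists.append(hist / max(hist.sum(), 1))
        return np.concatenate(hists)

    def oclbp_like_descriptor(image, configs=((8, 1, 10), (8, 2, 14), (8, 3, 18))):
        # Concatenate block histograms over several (p, r, block_size) scales.
        return np.concatenate([block_lbp_histograms(image, p, r, b) for p, r, b in configs])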

29 For an overview of the LBP operator for face recognition we refer the reader to [9]. [sent-69, score-0.206]

30 However, after applying dimensionality reduction, a significant gain in accuracy is achieved by the more elaborate scheme. [sent-87, score-0.152]

31 Scattering transform for face recognition The Scattering Transform was introduced by Mallat in [13]. [sent-90, score-0.181]

32 As an image representation, a scattering convolution network was proposed in [20]. [sent-92, score-0.285]

33 The output of the first layer of a scattering network can be considered as a SIFT-like descriptor while the second layer adds further complementary invariant information which improves discrimination quality. [sent-95, score-0.339]

34 In this work, we investigate the contribution of the Scattering descriptor to our face recognition framework. [sent-97, score-0.235]

35 In a similar manner to the OCLBP, we find that the Scattering descriptor is much more effective when combined with dimensionality reduction. [sent-98, score-0.206]

36 Within class covariance normalization Within Class Covariance Normalization (WCCN) was first introduced in [10] and has been used mostly in the speaker recognition community. [sent-101, score-0.218]

37 In WCCN, this effect is performed in a softer way without performing explicit dimensionality reduction: instead of discarding the directions that correspond to the top eigenvalues, WCCN reduces the effect of the within class directions by employing a normalization transform T. [sent-115, score-0.213]
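
A minimal sketch of such a normalization transform, assuming the within-class covariance is estimated as the average of per-class covariances of the (LDA-projected) training vectors; the small ridge term is an added assumption for numerical stability.

    # Minimal WCCN sketch: estimate the within-class covariance W and whiten
    # with T such that T W T^T = I. Per-class averaging and the ridge term are assumptions.
    import numpy as np

    def wccn_transform(X, labels, ridge=1e-6):
        # X: (n_samples, d) projected vectors; labels: class/identity ids.
        d = X.shape[1]
        W = np.zeros((d, d))
        classes = np.unique(labels)
        for c in classes:
            Xc = X[labels == c]
            Xc = Xc - Xc.mean(axis=0)
            W += Xc.T @ Xc / max(len(Xc), 1)
        W = W / len(classes) + ridge * np.eye(d)
        # T = B^T with B B^T = W^{-1}, obtained here via a Cholesky factor of the inverse.
        B = np.linalg.cholesky(np.linalg.inv(W))
        return B.T

    # Usage sketch: T = wccn_transform(X_lda, ids); x_normalized = T @ x_lda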

38 While to the best of our knowledge it was previously unused in face recognition, we show a clear improvement in performance over the state of the art by using the WCCN method when applied in the LDA subspace. [sent-118, score-0.197]

39 In this work, we also introduce an unsupervised version of WCCN, which is shown to be useful in case we lack the necessary labeled data. [sent-119, score-0.149]

40 Furthermore, we show that although the unsupervised WCCN algorithm does not make use of any label information, it is competitive with the original supervised WCCN in several scenarios. [sent-121, score-0.241]

41 Unsupervised labeling A common and challenging problem in machine learning is the beneficial utilization of successful supervised algorithms in the absence of labeled data. [sent-123, score-0.11]

42 In this section, we propose a simple unsupervised algorithm for generating valuable labels for the pair matching problem. [sent-124, score-0.194]

43 First, we assume that we are equipped with an unsupervised algorithm that is able to achieve some classification accuracy – we consider this algorithm as the baseline algorithm. [sent-126, score-0.195]

44 If our baseline algorithm manages to achieve a reasonable accuracy on the training set, we would expect to find many fewer classification mistakes on the tails, rather than in the area around the mean score. [sent-130, score-0.116]

45 In the case of the "same/not-same" classification, we would expect the majority of the scores in one tail to belong to pairs that are matched and the majority of the scores on the other tail to belong to pairs that are mismatched. [sent-131, score-0.218]
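
A sketch of this tail-based labeling, assuming a simple quantile threshold on each tail of the baseline score distribution; the tail fractions are placeholder knobs (the extract only mentions later that 15% of pairs receive 'same' labels).

    # Sketch of label generation from the tails of the baseline score distribution.
    # same_frac / notsame_frac are assumed parameters, not values from the paper.
    import numpy as np

    def labels_from_score_tails(scores, same_frac=0.15, notsame_frac=0.15):
        # scores: baseline similarity scores for training pairs (higher = more similar).
        hi = np.quantile(scores, 1.0 - same_frac)
        lo = np.quantile(scores, notsame_frac)
        same_idx = np.where(scores >= hi)[0]       # pairs labeled 'same'
        notsame_idx = np.where(scores <= lo)[0]    # pairs labeled 'not-same'
        return same_idx, notsame_idx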

46 Since the generated labels are used to train a new supervised model we can apply Algorithm 1 iteratively. [sent-145, score-0.113]

47 Another possible extension is to use a set of supervised algorithms instead of a single one and to determine the final labeling according to a voting scheme. [sent-146, score-0.113]

48 Fast supervised and unsupervised vector multiplication recognition system We now describe in detail our proposed recognition system, which we call VMRS for Vector Multiplication Recognition System. [sent-148, score-0.34]

49 Then, we perform an additional supervised dimensionality reduction by applying LDA. [sent-151, score-0.311]

50 Given two signatures representing two face images, the final score is defined. [sent-170, score-0.144]
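
To make the pipeline concrete, here is a hedged end-to-end sketch: unsupervised reduction (PCA here), LDA, the WCCN transform from the sketch above, and a normalized inner product between the two resulting signatures. Using identity labels for LDA, the chosen dimensions, and a cosine-style final score are assumptions of this sketch, not details confirmed by the extract.

    # Sketch of a VMRS-like supervised scoring pipeline: PCA -> LDA -> WCCN -> cosine score.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fit_vmrs_like(X, ids, n_pca=500, n_lda=100):
        # X: (n, d) raw descriptors; ids: identity labels (assumed available here).
        pca = PCA(n_components=n_pca).fit(X)
        lda = LinearDiscriminantAnalysis(n_components=n_lda).fit(pca.transform(X), ids)
        T = wccn_transform(lda.transform(pca.transform(X)), ids)  # from the WCCN sketch above
        return pca, lda, T

    def signature(x, pca, lda, T):
        v = T @ lda.transform(pca.transform(x[None, :]))[0]
        return v / np.linalg.norm(v)

    def pair_score(x1, x2, pca, lda, T):
        # Normalized inner product between the two final signatures (assumed scoring rule).
        return float(signature(x1, pca, lda, T) @ signature(x2, pca, lda, T))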

51 Unsupervised pipeline The pipeline described above is supervised and requires labeled data. [sent-183, score-0.226]

52 Specifically, we use the WPCA model as a baseline A and generate new labels according to the distribution of the scores of pairs in the training set. [sent-186, score-0.136]
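
A sketch of such a WPCA baseline for scoring training pairs, assuming whitened PCA followed by a cosine score; the dimension and the cosine score are assumptions of this sketch.

    # Sketch of a WPCA baseline producing scores for training pairs.
    import numpy as np
    from sklearn.decomposition import PCA

    def wpca_baseline_scores(X, pairs, n_components=500):
        # X: (n, d) raw descriptors; pairs: (m, 2) index pairs from the training set.
        wpca = PCA(n_components=n_components, whiten=True).fit(X)
        Z = wpca.transform(X)
        Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
        return np.sum(Z[pairs[:, 0]] * Z[pairs[:, 1]], axis=1)  # cosine score per pair

These scores can then be fed to the tail-based labeling sketch above to obtain pseudo-labels for the unsupervised WCCN.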

53 Since WCCN computation is based on pairs from the same class, we only choose scores from one of the two tails (the 'same' tail). [sent-188, score-0.121]

54 As already mentioned in Section 4, one can iterate between generating new labels, using them for training a new supervised model, and generating new scores. [sent-204, score-0.154]

55 With the introduction of this unsupervised variant of WCCN, the proposed system is suitable for both the supervised and the unsupervised scenarios. [sent-207, score-0.398]

56 Diffusion Maps Many of the state of the art face recognition systems incorporate a dimensionality reduction component. [sent-214, score-0.479]

57 Second, in some cases and especially when the high dimensionality stems from over-complete representations, there is a large amount of redundancy in the data. [sent-217, score-0.132]

58 Most of the work done so far in face recognition applied linear dimensionality reduction. [sent-219, score-0.313]

59 One of the problems with linear dimensionality reduction is the implicit assumption that the geometric structure of data points is well captured by a linear subspace. [sent-220, score-0.224]

60 We propose to use a non-linear dimensionality reduction technique called Diffusion Maps (DM). [sent-222, score-0.224]

61 Tables 1-3: Classification accuracy (± standard error) of various combinations of classifiers and descriptors in the unsupervised, restricted and unrestricted settings, respectively. [sent-308, score-0.222]

62 A diffusion distance after t steps is defined by D_t(x_i, x_j)^2 = \sum_k (p_t(x_i, x_k) - p_t(x_j, x_k))^2 / \phi_0(x_k), where p_t denotes the t-step transition probabilities of P and \phi_0 its stationary distribution. [sent-371, score-0.137]

63 Since the diffusion distance computation requires the evaluation of the distances over the entire training set, it results in an extremely complex operation. [sent-376, score-0.177]

64 Here the subscript k, i indicates the i-th element of the k-th eigenvector of P, and l is the dimension of the diffusion space V. [sent-407, score-0.109]
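
A minimal numpy sketch of the diffusion-map construction on the training descriptors: a Gaussian affinity, row-normalization into a transition matrix P, and coordinates from the leading non-trivial eigenvectors. The Gaussian kernel, the median-based bandwidth heuristic, and the lambda^t weighting follow the standard construction and are assumptions here, not necessarily the paper's exact choices.

    # Minimal diffusion-maps sketch: Gaussian affinity -> row-stochastic P ->
    # leading eigenvectors as embedding coordinates.
    import numpy as np

    def diffusion_map(X, l=100, t=1, eps=None):
        # X: (n, d) training descriptors. Returns l diffusion coordinates per sample.
        sq = np.sum(X ** 2, axis=1)
        D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T        # pairwise squared distances
        if eps is None:
            eps = np.median(D2)                                # assumed bandwidth heuristic
        K = np.exp(-D2 / eps)                                  # Gaussian affinity
        P = K / K.sum(axis=1, keepdims=True)                   # row-stochastic transition matrix
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        vals, vecs = vals.real[order], vecs.real[:, order]
        # Skip the trivial constant eigenvector (eigenvalue 1) and weight by lambda^t.
        coords = (vals[1:l + 1] ** t) * vecs[:, 1:l + 1]
        return coords, vals, eps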

65 This result justifies the use of squared Euclidean distance in the diffusion space. [sent-412, score-0.109]
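
For reference, the standard identity behind this statement (restated here from the diffusion-maps literature, not quoted from the paper) is

    D_t(x_i, x_j)^2 \;=\; \sum_{k \ge 1} \lambda_k^{2t}\,\big(\psi_k(i) - \psi_k(j)\big)^2 \;=\; \big\lVert \Psi_t(x_i) - \Psi_t(x_j) \big\rVert^2,
    \qquad \Psi_t(x_i) = \big(\lambda_1^t \psi_1(i), \dots, \lambda_l^t \psi_l(i)\big),

which holds exactly when all non-trivial eigenvectors are kept and approximately after truncating to the l leading coordinates.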

66 This decay is related to the complexity of the intrinsic dimensionality of the data and the choice of the kernel parameter. [sent-418, score-0.156]

67 Uniform scaling Inspired by WPCA, we propose to weigh all coordinates in the diffusion space uniformly. [sent-423, score-0.109]

68 We hypothesize that this improvement occurs because DM, as an unsupervised algorithm, holds little information in its eigenvalues regarding the actual discrimination capability. [sent-435, score-0.175]

69 Thus, we propose a simpler solution: Our approach assumes that the training data is sufficiently diverse in order to capture most of the variability of the face space. [sent-442, score-0.171]

70 In this case, we would expect the embedding of a new test sample to be approximated well by a linear combination of embeddings of the training samples in the low dimensional diffusion space. [sent-443, score-0.184]
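
One standard way to realize such a linear-combination extension is a Nystrom-style embedding of a new descriptor, sketched below under the same assumed Gaussian kernel as the diffusion-map sketch above; the exact extension weights used by the authors are not specified in this extract.

    # Nystrom-style out-of-sample sketch: embed a new descriptor as a
    # kernel-weighted combination of the training diffusion coordinates.
    import numpy as np

    def embed_new_sample(x_new, X_train, coords, vals, eps, t=1):
        # eps and t must match the values used when building the diffusion map above.
        l = coords.shape[1]
        d2 = np.sum((X_train - x_new) ** 2, axis=1)
        p = np.exp(-d2 / eps)
        p = p / p.sum()                           # transition probabilities from x_new to training points
        psi = coords / (vals[1:l + 1] ** t)       # recover the unweighted eigenvector entries
        new_psi = (p @ psi) / vals[1:l + 1]       # Nystrom: psi_k(x) ~ (1/lambda_k) sum_i p(x, x_i) psi_k(i)
        return (vals[1:l + 1] ** t) * new_psi     # back to diffusion coordinates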

71 The most popular supervised benchmark is the "image-restricted training''. [sent-479, score-0.136]

72 Last, the unsupervised benchmark uses the same training set. [sent-487, score-0.202]

73 The evaluation task remains the same as before: distinguish between matching ("same'') and non-matching ("not-same'') pairs of face images. [sent-489, score-0.179]

74 We set it to use the Gabor wavelet and the values suggested in [27]: a scattering order of 2, maximum scale of 3 and 6 different orientations. [sent-505, score-0.285]
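
For illustration, a comparable configuration can be reproduced with the kymatio library (scattering order 2, maximal scale J=3, L=6 orientations); the original work predates kymatio, so this is only an assumed stand-in for the scattering toolbox the authors used.

    # Illustrative 2D scattering descriptor with the parameters quoted above.
    import numpy as np
    from kymatio.numpy import Scattering2D

    def scattering_descriptor(image):
        # image: 2D float array (e.g., an aligned, cropped face crop).
        h, w = image.shape
        scattering = Scattering2D(J=3, shape=(h, w), L=6, max_order=2)
        coeffs = scattering(image.astype(np.float32))  # (channels, reduced h, reduced w)
        return coeffs.reshape(-1)                      # flatten into one descriptor vector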

75 In the unrestricted and restricted benchmarks, we used LDA dimensions of 100, 100, 100, 30 and 70 for the LBP, OCLBP, TPLBP, SIFT and Scattering descriptors, respectively. [sent-513, score-0.149]

76 As already mentioned in Section 6, we chose the threshold in the unsupervised WCCN algorithm such that the number of generated 'same' labels is 15% of all pairs. [sent-514, score-0.152]

77 Results We evaluate the proposed system for each feature and its square root version under the restricted, unrestricted and unsupervised protocols. [sent-517, score-0.244]

78 The unsupervised results for the individual face descriptors are depicted in Table 1. [sent-519, score-0.323]

79 The table shows the progression from the baseline "raw" descriptors, before any learning was applied, through the use of dimensionality reduction (WPCA or DM) to the results of applying unsupervised WCCN (Section 6). [sent-520, score-0.377]

80 As can be seen, the suggested pipeline improves the recognition quality of all descriptors significantly, in both the dimensionality reduction step and in the unsupervised WCCN step. [sent-522, score-0.498]

81 While our face description method is considerably simpler than I-LPQ* [28], which is currently the state of the art in this category, it outperforms it, even with the usage of a single descriptor. [sent-527, score-0.228]

82 In Table 2, we present four possibilities which differ by the dimensionality reduction algorithm used: PCA followed by LDA (PCALDA), DM followed by LDA (DMLDA), WPCA or DM. [sent-529, score-0.224]

83 As a usual trend, it seems that employing LDA in between the unsupervised dimensionality reduction (PCA or DM) and the WCCN method improves results. [sent-531, score-0.373]

84 It is important to clarify that both LDA and WCCN were applied in a restricted manner by using only pairs information, i. [sent-532, score-0.119]

85 The only exception is the "Tom-vs-Pete" [5] method which uses an external labeled dataset, which is much bigger than the LFW dataset, and employs a much more sophisticated face alignment method. [sent-539, score-0.167]

86 2), however, does not seem to improve over descriptors of lower dimensionality by a significant margin. [sent-553, score-0.185]

87 (± standard error) for various systems operating in the unsupervised setting. [sent-559, score-0.176]

88 (± standard error) for various systems operating in the unrestricted setting. [sent-561, score-0.142]

89 One can also notice that the unsupervised WCCN in some of the cases achieves an accuracy which is not far away from the accuracy obtained by the original supervised WCCN. [sent-562, score-0.253]

90 2% for the restricted case while the WPCA + unsupervised WCCN pipeline achieves an accuracy of 86. [sent-564, score-0.261]

91 The method is heavily based on dimensionality reduction algorithms, both supervised and unsupervised, in order to utilize high-dimensional representations. [sent-569, score-0.443]

92 Necessary adjustments are performed in order to adapt methods such as WCCN and DM to the requirements of face identification and of the various benchmark protocols. [sent-570, score-0.193]

93 The emergence of the new face recognition benchmarks has led to the abandonment of the classical algebraic methods such as Eigenfaces and Fisherfaces. [sent-572, score-0.218]

94 However, it is closely related to other algebraic dimensionality reduction methods. [sent-575, score-0.224]

95 In contrast to recent contributions such as CSML [6] or the Ensemble Metric Learning method [29], which are influenced by modern trends in metric learning, our method demonstrates that classical face recognition methods can still be relevant to contemporary research. [sent-576, score-0.214]

96 Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," University of Massachusetts, Amherst, 2007. [sent-582, score-0.181]

97 Pietikainen, "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. [sent-619, score-0.175]

98 Cox, "How far can you get with a modern face recognition test set using only simple features? [sent-648, score-0.181]

99 Mallat, "Combined scattering for rotation invariant texture analysis," in European Symposium on Artificial Neural Networks, 2012. [sent-685, score-0.285]

100 Wolf, "Leveraging billions of faces to overcome performance barriers in unconstrained face recognition," 2011 . [sent-705, score-0.144]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('wccn', 0.62), ('wpca', 0.329), ('oclbp', 0.31), ('scattering', 0.285), ('lbp', 0.169), ('dm', 0.147), ('face', 0.144), ('dimensionality', 0.132), ('unsupervised', 0.126), ('diffusion', 0.109), ('lda', 0.107), ('lfw', 0.1), ('unrestricted', 0.092), ('reduction', 0.092), ('supervised', 0.087), ('tplbp', 0.086), ('signature', 0.063), ('mallat', 0.06), ('pipeline', 0.058), ('restricted', 0.057), ('speaker', 0.055), ('descriptor', 0.054), ('descriptors', 0.053), ('aviv', 0.052), ('benchmark', 0.049), ('eigenvalues', 0.049), ('tails', 0.045), ('tail', 0.043), ('verification', 0.041), ('tel', 0.041), ('taigman', 0.04), ('conference', 0.038), ('wolf', 0.038), ('pca', 0.038), ('recognition', 0.037), ('benchmarks', 0.037), ('class', 0.035), ('pairs', 0.035), ('ht', 0.034), ('tau', 0.034), ('covariance', 0.034), ('patterns', 0.034), ('metric', 0.033), ('xj', 0.033), ('variant', 0.033), ('cosine', 0.032), ('ofw', 0.032), ('tables', 0.031), ('description', 0.031), ('british', 0.031), ('xij', 0.03), ('art', 0.03), ('nystrom', 0.03), ('operating', 0.029), ('pinto', 0.029), ('competitive', 0.028), ('dimensional', 0.028), ('xi', 0.028), ('clarify', 0.027), ('baseline', 0.027), ('training', 0.027), ('multiplication', 0.027), ('excluding', 0.026), ('labels', 0.026), ('extension', 0.026), ('cox', 0.026), ('whitened', 0.026), ('system', 0.026), ('reader', 0.025), ('pietikainen', 0.025), ('ibm', 0.025), ('wild', 0.025), ('decay', 0.024), ('state', 0.023), ('configurations', 0.023), ('employing', 0.023), ('normalization', 0.023), ('labeled', 0.023), ('toolbox', 0.022), ('hassner', 0.022), ('pair', 0.022), ('decision', 0.022), ('cao', 0.022), ('transition', 0.022), ('classification', 0.022), ('radius', 0.022), ('block', 0.021), ('systems', 0.021), ('affinity', 0.021), ('extremely', 0.021), ('scores', 0.021), ('accuracy', 0.02), ('combined', 0.02), ('representations', 0.02), ('blocks', 0.02), ('spectral', 0.02), ('expect', 0.02), ('european', 0.02), ('computation', 0.02), ('generating', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz

Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.

2 0.19411491 392 iccv-2013-Similarity Metric Learning for Face Recognition

Author: Qiong Cao, Yiming Ying, Peng Li

Abstract: Recently, there is a considerable amount of efforts devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging and difficult due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem which guarantees the existence of its global solution. Experiments show that our proposed method achieves the state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].

3 0.13030414 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

Author: Yizhe Zhang, Ming Shao, Edward K. Wong, Yun Fu

Abstract: One of the most challenging task in face recognition is to identify people with varied poses. Namely, the test faces have significantly different poses compared with the registered faces. In this paper, we propose a high-level feature learning scheme to extract pose-invariant identity feature for face recognition. First, we build a single-hidden-layer neural network with sparse constraint, to extract pose-invariant feature in a supervised fashion. Second, we further enhance the discriminative capability of the proposed feature by using multiple random faces as the target values for multiple encoders. By enforcing the target values to be unique for input faces over different poses, the learned high-level feature that is represented by the neurons in the hidden layer is pose free and only relevant to the identity information. Finally, we conduct face identification on CMU MultiPIE, and verification on Labeled Faces in the Wild (LFW) databases, where identification rank-1 accuracy and face verification accuracy with ROC curve are reported. These experiments demonstrate that our model is superior to other state-of-the-art approaches on handling pose variations.

4 0.12461054 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun

Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validate the utility of the proposed algorithm.

5 0.1167255 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

Author: Lingqiao Liu, Lei Wang

Abstract: To achieve a good trade-off between recognition accuracy and computational efficiency, it is often needed to reduce high-dimensional visual data to medium-dimensional ones. For this task, even applying a simple full-matrix-based linear projection causes significant computation and memory use. When the number of visual data is large, how to efficiently learn such a projection could even become a problem. The recent feature merging approach offers an efficient way to reduce the dimensionality, which only requires a single scan of features to perform reduction. However, existing merging algorithms do not scale well with high-dimensional data, especially in the unsupervised case. To address this problem, we formulate unsupervised feature merging as a PCA problem imposed with a special structure constraint. By exploiting its connection with kmeans, we transform this constrained PCA problem into a feature clustering problem. Moreover, we employ the hashing technique to improve its scalability. These produce a scalable feature merging algorithm for our dimensionality reduction task. In addition, we develop an extension of this method by leveraging the neighborhood structure in the data to further improve dimensionality reduction performance. In further, we explore the incorporation of bipolar merging, a variant of the merging function which allows the subtraction operation, into our algorithms. Through three applications in visual recognition, we demonstrate that our methods can not only achieve good dimensionality reduction performance with little computational cost but also help to create more powerful representation at both image level and local feature level.

6 0.11067541 106 iccv-2013-Deep Learning Identity-Preserving Face Space

7 0.10116347 157 iccv-2013-Fast Face Detector Training Using Tailored Views

8 0.10008313 206 iccv-2013-Hybrid Deep Learning for Face Verification

9 0.079265758 153 iccv-2013-Face Recognition Using Face Patch Networks

10 0.075747572 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

11 0.073628485 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

12 0.0715333 444 iccv-2013-Viewing Real-World Faces in 3D

13 0.071293451 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

14 0.070613988 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

15 0.068941139 73 iccv-2013-Class-Specific Simplex-Latent Dirichlet Allocation for Image Classification

16 0.068416864 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection

17 0.068128973 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

18 0.067897782 391 iccv-2013-Sieving Regression Forest Votes for Facial Feature Detection in the Wild

19 0.063543901 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition

20 0.060883351 426 iccv-2013-Training Deformable Part Models with Decorrelated Features


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.161), (1, 0.047), (2, -0.067), (3, -0.083), (4, -0.03), (5, -0.014), (6, 0.116), (7, 0.071), (8, 0.009), (9, -0.04), (10, -0.012), (11, 0.004), (12, 0.024), (13, -0.024), (14, 0.017), (15, -0.015), (16, -0.031), (17, -0.002), (18, -0.009), (19, 0.01), (20, -0.017), (21, -0.048), (22, 0.011), (23, -0.019), (24, 0.006), (25, 0.067), (26, -0.007), (27, 0.047), (28, 0.012), (29, 0.128), (30, 0.02), (31, -0.019), (32, -0.002), (33, -0.022), (34, -0.061), (35, -0.01), (36, -0.018), (37, -0.075), (38, -0.034), (39, -0.044), (40, 0.013), (41, 0.015), (42, -0.033), (43, -0.011), (44, 0.013), (45, -0.11), (46, -0.015), (47, 0.006), (48, -0.006), (49, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94849586 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz

Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.

2 0.87824452 392 iccv-2013-Similarity Metric Learning for Face Recognition

Author: Qiong Cao, Yiming Ying, Peng Li

Abstract: Recently, there is a considerable amount of efforts devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging and difficult due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem which guarantees the existence of its global solution. Experiments show that our proposed method achieves the state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].

3 0.85229146 154 iccv-2013-Face Recognition via Archetype Hull Ranking

Author: Yuanjun Xiong, Wei Liu, Deli Zhao, Xiaoou Tang

Abstract: The archetype hull model is playing an important role in large-scale data analytics and mining, but rarely applied to vision problems. In this paper, we migrate such a geometric model to address face recognition and verification together through proposing a unified archetype hull ranking framework. Upon a scalable graph characterized by a compact set of archetype exemplars whose convex hull encompasses most of the training images, the proposed framework explicitly captures the relevance between any query and the stored archetypes, yielding a rank vector over the archetype hull. The archetype hull ranking is then executed on every block of face images to generate a blockwise similarity measure that is achieved by comparing two different rank vectors with respect to the same archetype hull. After integrating blockwise similarity measurements with learned importance weights, we accomplish a sensible face similarity measure which can support robust and effective face recognition and verification. We evaluate the face similarity measure in terms of experiments performed on three benchmark face databases Multi-PIE, Pubfig83, and LFW, demonstrating its performance superior to the state-of-the-arts.

4 0.78835821 206 iccv-2013-Hybrid Deep Learning for Face Verification

Author: Yi Sun, Xiaogang Wang, Xiaoou Tang

Abstract: This paper proposes a hybrid convolutional network (ConvNet)-Restricted Boltzmann Machine (RBM) model for face verification in wild conditions. A key contribution of this work is to directly learn relational visual features, which indicate identity similarities, from raw pixels of face pairs with a hybrid deep network. The deep ConvNets in our model mimic the primary visual cortex to jointly extract local relational visual features from two face images compared with the learned filter pairs. These relational features are further processed through multiple layers to extract high-level and global features. Multiple groups of ConvNets are constructed in order to achieve robustness and characterize face similarities from different aspects. The top-layer RBM performs inference from complementary high-level features extracted from different ConvNet groups with a two-level average pooling hierarchy. The entire hybrid deep network is jointly fine-tuned to optimize for the task of face verification. Our model achieves competitive face verification performance on the LFW dataset.

5 0.77335066 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

Author: Dihong Gong, Zhifeng Li, Dahua Lin, Jianzhuang Liu, Xiaoou Tang

Abstract: Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.

6 0.76154864 106 iccv-2013-Deep Learning Identity-Preserving Face Space

7 0.75184053 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

8 0.71753639 212 iccv-2013-Image Set Classification Using Holistic Multiple Order Statistics Features and Localized Multi-kernel Metric Learning

9 0.71454722 222 iccv-2013-Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers

10 0.71145117 153 iccv-2013-Face Recognition Using Face Patch Networks

11 0.70530874 157 iccv-2013-Fast Face Detector Training Using Tailored Views

12 0.69419307 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

13 0.68811929 272 iccv-2013-Modifying the Memorability of Face Photographs

14 0.67711967 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

15 0.67168361 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

16 0.67004246 332 iccv-2013-Quadruplet-Wise Image Similarity Learning

17 0.66551471 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

18 0.64967239 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

19 0.64043391 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics

20 0.63907909 261 iccv-2013-Markov Network-Based Unified Classifier for Face Identification


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.082), (4, 0.241), (7, 0.033), (10, 0.011), (12, 0.01), (26, 0.09), (31, 0.046), (34, 0.011), (42, 0.133), (48, 0.011), (64, 0.038), (73, 0.023), (78, 0.01), (89, 0.156)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83479512 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

Author: Dihong Gong, Zhifeng Li, Dahua Lin, Jianzhuang Liu, Xiaoou Tang

Abstract: Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.

2 0.80899048 236 iccv-2013-Learning Discriminative Part Detectors for Image Classification and Cosegmentation

Author: Jian Sun, Jean Ponce

Abstract: In this paper, we address the problem of learning discriminative part detectors from image sets with category labels. We propose a novel latent SVM model regularized by group sparsity to learn these part detectors. Starting from a large set of initial parts, the group sparsity regularizer forces the model to jointly select and optimize a set of discriminative part detectors in a max-margin framework. We propose a stochastic version of a proximal algorithm to solve the corresponding optimization problem. We apply the proposed method to image classification and cosegmentation, and quantitative experiments with standard benchmarks show that it matches or improves upon the state of the art.

same-paper 3 0.80719298 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz

Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.

4 0.80604291 163 iccv-2013-Feature Weighting via Optimal Thresholding for Video Analysis

Author: Zhongwen Xu, Yi Yang, Ivor Tsang, Nicu Sebe, Alexander G. Hauptmann

Abstract: Fusion of multiple features can boost the performance of large-scale visual classification and detection tasks like TRECVID Multimedia Event Detection (MED) competition [1]. In this paper, we propose a novel feature fusion approach, namely Feature Weighting via Optimal Thresholding (FWOT) to effectively fuse various features. FWOT learns the weights, thresholding and smoothing parameters in a joint framework to combine the decision values obtained from all the individual features and the early fusion. To the best of our knowledge, this is the first work to consider the weight and threshold factors of fusion problem simultaneously. Compared to state-of-the-art fusion algorithms, our approach achieves promising improvements on HMDB [8] action recognition dataset and CCV [5] video classification dataset. In addition, experiments on two TRECVID MED 2011 collections show that our approach outperforms the state-of-the-art fusion methods for complex event detection.

5 0.76112449 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency does not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need of category information, and then enforce the consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density that emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores for the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

6 0.74002659 107 iccv-2013-Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction

7 0.7370159 106 iccv-2013-Deep Learning Identity-Preserving Face Space

8 0.72862315 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning

9 0.7202509 426 iccv-2013-Training Deformable Part Models with Decorrelated Features

10 0.7190451 383 iccv-2013-Semi-supervised Learning for Large Scale Image Cosegmentation

11 0.71795595 73 iccv-2013-Class-Specific Simplex-Latent Dirichlet Allocation for Image Classification

12 0.71784961 392 iccv-2013-Similarity Metric Learning for Face Recognition

13 0.70847452 180 iccv-2013-From Where and How to What We See

14 0.70786673 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

15 0.70612001 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization

16 0.70590067 203 iccv-2013-How Related Exemplars Help Complex Event Detection in Web Videos?

17 0.70572674 74 iccv-2013-Co-segmentation by Composition

18 0.70246053 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition

19 0.70203173 44 iccv-2013-Adapting Classification Cascades to New Domains

20 0.70135951 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation