iccv2013-392: Similarity Metric Learning for Face Recognition (knowledge graph by maker-knowledge-mining)
Source: pdf
Author: Qiong Cao, Yiming Ying, Peng Li
Abstract: Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether a pair of images shows the same person or not. This problem is challenging due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating both robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem, which guarantees the existence of its global solution. Experiments show that our proposed method achieves state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
Reference: text
Recently, considerable research effort has been devoted to the unconstrained face verification problem [8, 17, 18, 20, 23, 24]: the task of predicting whether two face images represent the same person or not. The face images are taken under unconstrained conditions and show significant variations in background, lighting, pose, and expression. In addition, the evaluation procedure for face verification typically assumes that the person identities in the training and test sets are disjoint, requiring prediction on never-seen-before faces. Similarity metric learning aims to learn an appropriate distance or similarity measure for comparing pairs of examples. This provides a natural solution for the verification task.
Metric learning [5, 7, 22, 25, 26] usually focuses on the (squared) Mahalanobis distance defined, for any x, t ∈ ℝ^d, by d_M(x, t) = (x − t)^T M (x − t), where M is a positive semi-definite (p.s.d.) matrix.
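As a concrete reference, the squared Mahalanobis distance above can be sketched in a few lines of NumPy; the helper name is illustrative, not from the paper:

```python
import numpy as np

def mahalanobis_sq(x, t, M):
    """Squared Mahalanobis distance d_M(x, t) = (x - t)^T M (x - t).

    M must be positive semi-definite for d_M to be a valid (pseudo-)metric;
    M = I recovers the squared Euclidean distance.
    """
    d = np.asarray(x) - np.asarray(t)
    return float(d @ M @ d)

x = np.array([1.0, 2.0])
t = np.array([0.0, 0.0])
print(mahalanobis_sq(x, t, np.eye(2)))  # squared Euclidean distance: 5.0
```

With a non-identity p.s.d. M, the metric stretches or shrinks directions of the feature space, which is exactly what metric learning optimizes.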
It has been observed that directly applying metric learning methods yields only modest performance for face verification. This may be partly because most such methods target the specific task of improving kNN classification, which is not necessarily suitable for face verification.
Similarity learning aims to learn the bilinear similarity function [3, 19] defined by s_M(x, t) = x^T M t, or the cosine similarity CS_M(x, t) = x^T M t / (√(x^T M x) √(t^T M t)) [14], which has been applied successfully in image search and face verification.
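A minimal sketch of these two similarity functions (hypothetical helper names):

```python
import numpy as np

def bilinear_sim(x, t, M):
    """Bilinear similarity s_M(x, t) = x^T M t."""
    return float(x @ M @ t)

def cosine_sim(x, t, M):
    """Cosine similarity CS_M(x, t) = x^T M t / (sqrt(x^T M x) sqrt(t^T M t))."""
    return bilinear_sim(x, t, M) / (
        np.sqrt(bilinear_sim(x, x, M)) * np.sqrt(bilinear_sim(t, t, M))
    )

# With M = I, CS_M reduces to the ordinary cosine similarity:
x, t = np.array([1.0, 0.0]), np.array([3.0, 0.0])
print(cosine_sim(x, t, np.eye(2)))  # parallel vectors: 1.0
```

Note that CS_M is scale-invariant in both arguments, while s_M is not; this is one reason the two behave differently in verification.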
To this end, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification, which is referred to as similarity metric learning over the intra-personal subspace. We formulate its objective function by considering both the robustness to large intra-personal variations and the discriminative power, a property that most metric learning methods lack. In addition, our formulation is a convex optimization problem, and hence a global solution can be efficiently found by existing algorithms. This is, for instance, not the case for the current similarity metric learning model [14]. We report experimental results on the Labeled Faces in the Wild (LFW) [10] dataset, a standard testbed for unconstrained face verification. The face images were collected directly from the Yahoo! News website. In the unrestricted setting, our method achieves a verification rate above 90%.
Similarity Metric Learning over the Intra-Personal Subspace. In this section, we develop a new method for learning a similarity metric for face verification, described step by step as follows. One challenging issue in face verification is to retain the robustness of the similarity metric to noise and to the large intra-personal variations in face images.
PCA computes the d eigenvectors with the largest eigenvalues of the covariance matrix C = Σ_{i=1}^{n} (x_i − m)(x_i − m)^T ∈ ℝ^{p×p}, where m is the mean of the data. The mapping of the Eigenfaces to the k-dimensional intra-personal subspace (k ≤ d) is defined by the whitening process: x̃ = diag(λ_1^{−1/2}, …, λ_k^{−1/2}) (v_1, …, v_k)^T x, (2) where λ_i and v_i denote the eigenvalues and eigenvectors of the intra-personal covariance matrix.
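The whitening step can be sketched as follows. The construction of the intra-personal covariance from differences of similar image-pairs and the small eps floor are assumptions of this illustration, not details stated in the text:

```python
import numpy as np

def intra_personal_whitening(similar_pairs, eps=1e-8):
    """Return W such that x_tilde = W @ x whitens the intra-personal
    covariance: W = diag(lambda_i^{-1/2}) V^T from the eigendecomposition
    of C_S, formed here from differences of similar image-pairs."""
    diffs = np.array([xi - xj for xi, xj in similar_pairs])
    C = diffs.T @ diffs / len(diffs)   # intra-personal covariance C_S
    lam, V = np.linalg.eigh(C)         # eigenvalues ascending, columns = v_i
    return np.diag(1.0 / np.sqrt(lam + eps)) @ V.T

# After whitening, the intra-personal covariance is (close to) the identity:
rng = np.random.default_rng(0)
pairs = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(500)]
W = intra_personal_whitening(pairs)
d = np.array([xi - xj for xi, xj in pairs]) @ W.T
print(np.round(d.T @ d / len(d), 3))  # approximately the identity matrix
```

This is the standard whitening transform; making the intra-personal covariance isotropic is what suppresses the dominant intra-personal variation directions.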
Throughout this paper, we only consider the special case where the dimension of the intra-personal subspace equals the PCA dimension, i.e., k = d. After the images are mapped to the intra-personal subspace, we consider discrimination using a similarity metric function, i.e., its ability to discriminate similar image-pairs from dissimilar image-pairs. To this end, one option is to use the cosine similarity function CS_M, which was observed to outperform the distance measure d_M in face verification [14]. Recent studies [3, 19] observed that the similarity function s_M performs promisingly on image similarity search.
Motivated by these observations, we combine the similarity function s_M and the distance d_M and propose a generalized similarity metric f_(M,G) to measure the similarity of an image pair (x̃_i, x̃_j): f_(M,G)(x̃_i, x̃_j) = s_G(x̃_i, x̃_j) − d_M(x̃_i, x̃_j).
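The generalized metric is then a one-liner; this is a sketch with hypothetical names:

```python
import numpy as np

def generalized_sim(xi, xj, M, G):
    """f_(M,G)(x_i, x_j) = s_G(x_i, x_j) - d_M(x_i, x_j): the bilinear
    similarity under G minus the squared Mahalanobis distance under M."""
    d = xi - xj
    return float(xi @ G @ xj) - float(d @ M @ d)

# Identical vectors: the distance term vanishes, only the similarity remains.
x = np.array([1.0, 2.0])
print(generalized_sim(x, x, np.eye(2), np.eye(2)))  # 5.0
```

Large positive scores indicate a similar pair; the distance term pushes dissimilar pairs toward negative scores.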
To better discriminate similar image-pairs from dissimilar ones, we should learn M and G from the available data such that f_(M,G)(x̃_i, x̃_j) reports a large score for y_ij = 1 and a small score otherwise.
Based on this rationale, we derive the formulation of the empirical discrimination using the hinge loss: E_emp(M, G) = Σ_{(i,j)∈P} (1 − y_ij f_(M,G)(x̃_i, x̃_j))_+ . (5) Minimizing this empirical error with respect to M and G will encourage the discrimination of similar image-pairs from dissimilar ones.
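The empirical error (5) can be sketched directly from its definition (illustrative names; y_ij ∈ {+1, −1}):

```python
import numpy as np

def empirical_error(pairs, labels, M, G):
    """E_emp(M, G) = sum over labelled pairs of (1 - y_ij f_(M,G))_+,
    with y_ij = +1 for similar pairs and -1 for dissimilar pairs."""
    total = 0.0
    for (xi, xj), y in zip(pairs, labels):
        d = xi - xj
        f = float(xi @ G @ xj) - float(d @ M @ d)  # generalized similarity
        total += max(0.0, 1.0 - y * f)             # hinge loss
    return total

# A similar pair scored confidently (f >= 1) contributes zero loss.
x = np.array([2.0, 0.0])
print(empirical_error([(x, x)], [1], np.eye(2), np.eye(2)))  # f = 4 -> loss 0.0
```

The hinge form is what keeps the overall problem convex in (M, G), since f_(M,G) is linear in both matrices.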
However, directly minimizing the functional E_emp neither guarantees a similarity metric f_(M,G) that is robust to large intra-personal variations nor avoids overfitting. Below, we propose a novel regularization framework which learns a robust and discriminative similarity metric. Based on the above discussion, our target is to learn matrices M and G such that f_(M,G) not only retains the robustness to large intra-personal variations but also preserves good discriminative power.
To this end, we propose a new method, referred to as similarity metric learning over the intra-personal subspace, which is given by min_{M,G ∈ 𝕊^d} E_emp(M, G) + (γ/2) (‖M − I‖_F² + ‖G − I‖_F²). (7)
The regularization term ‖M − I‖_F² + ‖G − I‖_F² in formulation (7) prevents the image vectors in the intra-personal subspace from being distorted too much, and hence retains most of the robustness of the intra-personal subspace. Minimizing the empirical term Σ_{(i,j)∈P} ξ_ij promotes the discriminative power of f_(M,G) for discriminating similar image-pairs from dissimilar ones. Hereafter, formulation (7) is referred to as Sub-SML, for similarity metric learning over the intra-personal subspace.
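Putting the pieces together, the value of objective (7) can be evaluated as below; γ and the inputs are illustrative, and this computes the objective only, not its minimizer:

```python
import numpy as np

def sub_sml_objective(pairs, labels, M, G, gamma):
    """Sub-SML objective (7): hinge-loss empirical error plus
    (gamma / 2) * (||M - I||_F^2 + ||G - I||_F^2)."""
    emp = 0.0
    for (xi, xj), y in zip(pairs, labels):
        d = xi - xj
        f = float(xi @ G @ xj) - float(d @ M @ d)
        emp += max(0.0, 1.0 - y * f)
    I = np.eye(M.shape[0])
    reg = 0.5 * gamma * (np.linalg.norm(M - I, "fro") ** 2
                         + np.linalg.norm(G - I, "fro") ** 2)
    return emp + reg

# At M = G = I the regularizer vanishes and only the hinge term remains.
x = np.array([0.5, 0.0])
print(sub_sml_objective([(x, x)], [1], np.eye(2), np.eye(2), 1.0))  # 0.75
```

Because the hinge loss and the squared Frobenius norms are both convex in (M, G), any off-the-shelf convex solver applied to this objective finds a global minimum, as the text claims.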
Related Work and Discussion. There is a large body of work on learning similarity metrics. Below we review the metric learning models [11, 22, 25, 27] that are most closely related to our proposed method Sub-SML, and show the inherent relationships among these models. Xing et al. [25] proposed to maximize the sum of distances between dissimilar pairs while maintaining an upper bound on the sum of squared distances between similar pairs. Weinberger et al. [22] developed the method called LMNN to learn a Mahalanobis distance metric in kNN classification settings. Kan et al. [11] proposed a side-information based linear discriminant analysis (SILD) approach for face verification.
We should mention that the image-vectors x_i and x_j in formulations (9), (10), (11), and (12) for face verification are PCA-reduced vectors (i.e., Eigenfaces). We can observe from their equivalent formulations (13), (14), (15), and (16) that they can also be regarded as metric learning over the intra-personal subspace.
In this sense, we can say that minimizing the average distance between similar images plays a similar role to mapping the images to the intra-personal subspace using the whitening process (2). The metric learned on the intra-personal subspace should best reflect the geometry induced by the similarity and dissimilarity of face images: the distance defined on the intra-personal subspace between similar image-pairs is small, while the distance between dissimilar image-pairs is large. The metric learning methods [11, 22, 25, 27] use different objective functions to achieve this goal. However, the above methods have two main limitations: (L1) although these methods can be regarded as metric learning over the intra-personal subspace, they mainly focus on the discrimination of the metric and do not explicitly take its robustness into account; hence, the learned metrics may not be robust to intra-personal variations. (L2) Despite the fact that the bilinear similarity function s_M and the cosine similarity CS_M outperform metric learning using d_M for face verification [14], the above methods use only the distance metric d_M. Our proposed method Sub-SML addresses the above limitations by introducing a new similarity metric and a novel regularization framework for learning similarity metrics.
There are 13,233 face images of 5,749 people in this database, and 1,680 of them appear in two or more images. It is commonly regarded as a challenging dataset for face verification, since the faces were detected in images taken from Yahoo! News. The images are provided in two forms: "aligned," produced using the commercial face-alignment software of [20], and "funneled," available on the LFW website [10]. Both the original values and the square roots of these descriptors are tested, as suggested in [8, 24].
58 0 0 465318 Table 1: Performance of Sub-SL,Sub-ML, Sub-SML across different PCA dimension 푑 : (a) SIFT descriptor and (b) LBP descriptor. [sent-222, score-0.107]
59 ing methods on the single descriptor in the restricted setting of LFW. [sent-223, score-0.239]
In the restricted setting, only 600 similar/dissimilar pairs per fold are available, and the identities of the images are unknown. In the unrestricted setting, the identity information of the images is provided. The performance is reported using the mean verification rate (with standard error) and the ROC curve.
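For completeness, the mean verification rate at a fixed threshold can be computed as below; this is a generic sketch, not the paper's evaluation code:

```python
import numpy as np

def verification_rate(scores, labels, threshold):
    """Fraction of pairs classified correctly: predict 'same person'
    (y = +1) when the similarity score meets the threshold, else -1."""
    preds = np.where(np.asarray(scores) >= threshold, 1, -1)
    return float(np.mean(preds == np.asarray(labels)))

scores = [2.1, -0.4, 0.9, -3.0]
labels = [1, -1, 1, -1]
print(verification_rate(scores, labels, threshold=0.0))  # 1.0
```

Sweeping the threshold over all values yields the (true positive rate, false positive rate) pairs that make up the ROC curve.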
In particular, on each test, for Sub-SML, PCA is applied to reduce the noise of the face images, and the resulting Eigenfaces are further mapped to the intra-personal subspace by x̃ = L^{−1} x, where L_S is given by equation (3). Also, similar image-pairs from the 9-fold training set are used to compute the intra-personal covariance matrix C_S. Interestingly, we observed in our experiments that L2-normalization (‖x̃‖ = 1) usually improves the performance of most metric learning methods.
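The L2-normalization mentioned here is simply scaling each mapped vector to unit length; a minimal sketch (the eps guard is an assumption of this illustration):

```python
import numpy as np

def l2_normalize(X, eps=1e-12):
    """Scale each row of X to unit Euclidean norm (||x|| = 1)."""
    X = np.asarray(X, dtype=float)
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

X = np.array([[3.0, 4.0], [0.0, 2.0]])
print(l2_normalize(X))  # rows become [0.6, 0.8] and [0.0, 1.0]
```

After this step, the squared distance and the bilinear similarity of a pair are related by d(x, t) = 2 − 2 x^T t, which is one plausible reason normalization helps distance-based methods.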
Image-Restricted setting. We first evaluate our method in the restricted setting of the LFW dataset. We conduct experiments to show that Sub-SML effectively addresses the limitations of existing metric learning methods listed as (L1) and (L2) at the end of Section 3. In particular, we show the effectiveness of Sub-SML in two main aspects: the generalized similarity metric f_(M,G) combining d_M and s_G, and Sub-SML as a metric learning method over the intra-personal subspace. Firstly, we compare Sub-SML with the following two formulations, in which only the distance metric d_M or the bilinear similarity s_G, respectively, is used as the similarity measure.
Table 3: Comparison of Sub-SML with other state-of-the-art methods in the restricted setting of LFW.

As baselines, PCA and Intra-PCA denote the methods using the Euclidean distance over the PCA-reduced subspace and over the intra-personal subspace, respectively. Hence, in this special case the verification rate is computed using the Euclidean distance.
We can observe from Table 1a that, across different PCA dimensions, Intra-PCA performs much better than PCA, which shows the effectiveness of removing intra-personal variations by mapping the Eigenfaces into the intra-personal subspace using the whitening process given by equation (2). These observations show the effectiveness of learning the generalized similarity metric f_(M,G), compared with learning only the distance metric d_M or the bilinear similarity metric s_G. Secondly, we compare with other metric learning methods: the method in [25] denoted by Xing, ITML [7], LDML [8], SILD [11], and DML-eig [27]. For fairness of comparison, we also compare with their variants in which the image-vectors are processed by PCA and further mapped to the intra-personal subspace before being fed into the metric learning methods.
From Table 2 we can see that, on the SIFT descriptor, Sub-SML significantly outperforms the other methods.

Figure 2: ROC curves of Sub-SML and other state-of-the-art methods in the restricted setting of the LFW database.

These are, to the best of our knowledge, the best results reported so far for SIFT and LBP in the restricted setting of the LFW dataset. This observation validates the effectiveness of Sub-SML as a similarity metric learning method over the intra-personal subspace.
In addition, we can observe that Sub-ITML and Sub-LDML improve on the performance of ITML and LDML, respectively, which shows the effectiveness of the mapping to the intra-personal subspace mentioned in Section 2. Overall, the above comparison results suggest that our proposed method Sub-SML effectively overcomes the limitations of existing metric learning methods listed as (L1) and (L2) at the end of Section 3. Specifically, we first generate similarity scores with Sub-SML from three descriptors (SIFT, LBP, and TPLBP) and their square roots, giving six scores in total.
Image-Unrestricted setting. Here, we evaluate Sub-SML in the unrestricted setting of LFW, where the label information allows us to generate more image-pairs during training. Table 4 shows the comparison results on the SIFT descriptor against state-of-the-art metric learning methods such as ITML [7] and LDML [8], and their variants Sub-ITML and Sub-LDML. We observe that, across the number of pairs per fold, the performance of Sub-SML is significantly better than that of the other methods, which shows its effectiveness as a similarity metric learning method over the intra-personal subspace. In addition, we observe that Sub-ITML and Sub-LDML respectively improve on ITML and LDML, which again verifies the effectiveness of removing intra-personal variations using the whitening process given by equation (2).
Secondly, we compare Sub-SML with existing state-of-the-art results in the unrestricted setting of LFW using single and multiple descriptors. By further combining the three descriptors and their square roots following the procedure of [8, 24], Sub-SML using 2000 image-pairs achieves a verification rate above 90%. Conclusion. In this paper we introduced a novel regularization framework for learning a similarity metric for unconstrained face verification. (Recently, Cui et al. reported a competitive result in their CVPR 2013 paper, which was achieved, however, by using spatial face region descriptors and a multiple metric learning method.)
Figure 3: ROC curves of Sub-SML and other state-of-the-art methods in the unrestricted setting of LFW.

We formulate its learning objective by incorporating the robustness to large intra-personal variations and the discriminative power of novel similarity metrics, a property that most existing metric learning methods lack. Our formulation is a convex optimization problem, which guarantees the existence of its global solution.
Large scale online learning of image similarity through ranking.
Learning a similarity metric discriminatively, with application to face verification.
Fusing robust face region descriptors via multiple metric learning for face recognition in the wild.

Table 5: Comparison of Sub-SML with other state-of-the-art results in the unrestricted setting of LFW: the top 7 rows are based on a single descriptor and the bottom 4 rows on multiple descriptors.

Beyond simple features: a large-scale feature search approach to unconstrained face recognition.
Distance metric learning for large margin nearest neighbor classification.
Distance metric learning with application to clustering with side-information.
simIndex simValue paperId paperTitle

same-paper 1 1.0000004 392 iccv-2013-Similarity Metric Learning for Face Recognition (this paper; authors and abstract as above)
2 0.25760642 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun
Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergencebased regularizer/prior with a robust likelihood function leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validate the utility of the proposed algorithm.
3 0.21548401 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
Author: Yizhe Zhang, Ming Shao, Edward K. Wong, Yun Fu
Abstract: One of the most challenging tasks in face recognition is to identify people with varied poses, i.e., where the test faces have significantly different poses from the registered faces. In this paper, we propose a high-level feature learning scheme to extract pose-invariant identity features for face recognition. First, we build a single-hidden-layer neural network with a sparse constraint to extract pose-invariant features in a supervised fashion. Second, we further enhance the discriminative capability of the proposed features by using multiple random faces as the target values for multiple encoders. By enforcing the target values to be unique for input faces over different poses, the learned high-level feature, represented by the neurons in the hidden layer, is pose-free and relevant only to the identity information. Finally, we conduct face identification on CMU Multi-PIE and verification on the Labeled Faces in the Wild (LFW) database, reporting identification rank-1 accuracy and face verification accuracy with ROC curves. These experiments demonstrate that our model is superior to other state-of-the-art approaches in handling pose variations.
4 0.19411491 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition
Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz
Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.
5 0.1807085 206 iccv-2013-Hybrid Deep Learning for Face Verification
Author: Yi Sun, Xiaogang Wang, Xiaoou Tang
Abstract: This paper proposes a hybrid convolutional network (ConvNet)-Restricted Boltzmann Machine (RBM) model for face verification in wild conditions. A key contribution of this work is to directly learn relational visual features, which indicate identity similarities, from raw pixels of face pairs with a hybrid deep network. The deep ConvNets in our model mimic the primary visual cortex to jointly extract local relational visual features from two face images compared with the learned filter pairs. These relational features are further processed through multiple layers to extract high-level and global features. Multiple groups of ConvNets are constructed in order to achieve robustness and characterize face similarities from different aspects. The top-layer RBM performs inference from complementary high-level features extracted from different ConvNet groups with a two-level average pooling hierarchy. The entire hybrid deep network is jointly fine-tuned to optimize for the task of face verification. Our model achieves competitive face verification performance on the LFW dataset.
6 0.15019898 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition
7 0.149169 157 iccv-2013-Fast Face Detector Training Using Tailored Views
8 0.14523397 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties
9 0.13293365 153 iccv-2013-Face Recognition Using Face Patch Networks
10 0.1317658 222 iccv-2013-Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers
11 0.12618798 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics
12 0.12045366 106 iccv-2013-Deep Learning Identity-Preserving Face Space
13 0.11805753 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization
14 0.1173353 444 iccv-2013-Viewing Real-World Faces in 3D
15 0.11662757 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity
17 0.11430088 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection
18 0.11415976 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition
19 0.11227466 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising
20 0.11019668 332 iccv-2013-Quadruplet-Wise Image Similarity Learning
simIndex simValue paperId paperTitle

same-paper 1 0.97878516 392 iccv-2013-Similarity Metric Learning for Face Recognition (this paper; authors and abstract as above)
2 0.86549759 154 iccv-2013-Face Recognition via Archetype Hull Ranking
Author: Yuanjun Xiong, Wei Liu, Deli Zhao, Xiaoou Tang
Abstract: The archetype hull model plays an important role in large-scale data analytics and mining, but is rarely applied to vision problems. In this paper, we migrate this geometric model to address face recognition and verification together through a unified archetype hull ranking framework. Upon a scalable graph characterized by a compact set of archetype exemplars whose convex hull encompasses most of the training images, the proposed framework explicitly captures the relevance between any query and the stored archetypes, yielding a rank vector over the archetype hull. The archetype hull ranking is then executed on every block of face images to generate a blockwise similarity measure, achieved by comparing two different rank vectors with respect to the same archetype hull. After integrating blockwise similarity measurements with learned importance weights, we obtain a sensible face similarity measure which supports robust and effective face recognition and verification. We evaluate this face similarity measure in experiments performed on three benchmark face databases (Multi-PIE, Pubfig83, and LFW), demonstrating its performance to be superior to the state of the art.
3 0.85367596 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition
Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz
Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.
4 0.76985681 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition
Author: Dihong Gong, Zhifeng Li, Dahua Lin, Jianzhuang Liu, Xiaoou Tang
Abstract: Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackling this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets, MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.
5 0.74620646 222 iccv-2013-Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers
Author: Martin Köstinger, Paul Wohlhart, Peter M. Roth, Horst Bischof
Abstract: In this paper, we raise important issues concerning the evaluation complexity of existing Mahalanobis metric learning methods. The complexity scales linearly with the size of the dataset. This is especially cumbersome on large scale or for real-time applications with limited time budget. To alleviate this problem we propose to represent the dataset by a fixed number of discriminative prototypes. In particular, we introduce a new method that jointly chooses the positioning of prototypes and also optimizes the Mahalanobis distance metric with respect to these. We show that choosing the positioning of the prototypes and learning the metric in parallel leads to a drastically reduced evaluation effort while maintaining the discriminative essence of the original dataset. Moreover, for most problems our method performing k-nearest prototype (k-NP) classification on the condensed dataset leads to even better generalization compared to k-NN classification using all data. Results on a variety of challenging benchmarks demonstrate the power of our method. These include standard machine learning datasets as well as the challenging Public Figures Face Database. On the competitive machine learning benchmarks we are comparable to the state-of-the-art while being more efficient. On the face benchmark we clearly outperform the state-of-the-art in Mahalanobis metric learning with drastically reduced evaluation effort.
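The evaluation-time behaviour described above, k-nearest-prototype (k-NP) classification under a learned Mahalanobis metric, can be sketched as follows. This is only an illustration of the classification step: the metric `M` and the prototypes are taken as given, and the joint learning procedure from the paper is omitted.

```python
import numpy as np

def mahalanobis_dists(X, prototypes, M):
    """Squared Mahalanobis distances d(x, p) = (x - p)^T M (x - p)
    from each row of X to each prototype (M assumed PSD)."""
    diffs = X[:, None, :] - prototypes[None, :, :]   # shape (n, k, d)
    return np.einsum('nkd,de,nke->nk', diffs, M, diffs)

def knp_classify(X, prototypes, proto_labels, M, k=1):
    """k-nearest-prototype classification: majority vote among the
    k closest prototypes under the learned metric M."""
    D = mahalanobis_dists(X, prototypes, M)
    nearest = np.argsort(D, axis=1)[:, :k]
    votes = proto_labels[nearest]                    # shape (n, k)
    return np.array([np.bincount(v).argmax() for v in votes])
```

The cost per query scales with the number of prototypes rather than the full dataset size, which is the efficiency gain the abstract emphasizes.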
7 0.73206407 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
8 0.72570145 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics
9 0.72483796 206 iccv-2013-Hybrid Deep Learning for Face Verification
10 0.71861553 106 iccv-2013-Deep Learning Identity-Preserving Face Space
11 0.71774852 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
12 0.69748139 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition
13 0.69414252 272 iccv-2013-Modifying the Memorability of Face Photographs
14 0.68240738 153 iccv-2013-Face Recognition Using Face Patch Networks
15 0.65646958 157 iccv-2013-Fast Face Detector Training Using Tailored Views
16 0.65320057 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning
17 0.6486671 332 iccv-2013-Quadruplet-Wise Image Similarity Learning
18 0.63969129 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties
19 0.6354714 25 iccv-2013-A Novel Earth Mover's Distance Methodology for Image Matching with Gaussian Mixture Models
20 0.6325525 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
topicId topicWeight
[(2, 0.066), (4, 0.026), (7, 0.047), (10, 0.194), (13, 0.012), (26, 0.076), (31, 0.051), (34, 0.01), (42, 0.174), (48, 0.02), (64, 0.049), (73, 0.025), (78, 0.012), (89, 0.132), (98, 0.032)]
simIndex simValue paperId paperTitle
same-paper 1 0.83641499 392 iccv-2013-Similarity Metric Learning for Face Recognition
Author: Qiong Cao, Yiming Ying, Peng Li
Abstract: Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem, which guarantees the existence of its global solution. Experiments show that our proposed method achieves state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
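One common way to realize such a learned similarity metric is to combine a bilinear similarity with a squared Mahalanobis distance, f(x, y) = x^T G y - (x - y)^T M (x - y). A minimal scoring sketch of that form (the convex training procedure is omitted, and the decision `threshold` is an assumption, not a value from the paper):

```python
import numpy as np

def similarity_score(x, y, M, G):
    """Combined similarity: bilinear term minus squared Mahalanobis
    distance, f(x, y) = x^T G y - (x - y)^T M (x - y)."""
    d = x - y
    return float(x @ G @ y - d @ M @ d)

def verify_pairs(pairs, M, G, threshold=0.0):
    """Predict 'same person' (True) when the score exceeds the threshold."""
    return [similarity_score(x, y, M, G) > threshold for x, y in pairs]
```

With M and G set to the identity, the score reduces to the inner product minus the squared Euclidean distance, so identical vectors score high and distant ones score low; training replaces the identity with learned matrices.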
2 0.76826596 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples
Author: Hongteng Xu, Hongyuan Zha
Abstract: Data sparsity has been a thorny issue for manifold-based image synthesis, and in this paper we address this critical problem by leveraging ideas from transfer learning. Specifically, we propose methods based on generating auxiliary data in the form of synthetic samples using transformations of the original sparse samples. To incorporate the auxiliary data, we propose a weighted data synthesis method, which adaptively selects from the generated samples for inclusion during the manifold learning process via a weighted iterative algorithm. To demonstrate the feasibility of the proposed method, we apply it to the problem of face image synthesis from sparse samples. Compared with existing methods, the proposed method shows encouraging results with good performance improvements.
3 0.76526517 44 iccv-2013-Adapting Classification Cascades to New Domains
Author: Vidit Jain, Sachin Sudhakar Farfade
Abstract: Classification cascades have been very effective for object detection. Such a cascade fails to perform well in data domains with variations in appearances that may not be captured in the training examples. This limited generalization severely restricts the domains for which they can be used effectively. A common approach to address this limitation is to train a new cascade of classifiers from scratch for each of the new domains. Building separate detectors for each of the different domains requires huge annotation and computational effort, making it not scalable to a large number of data domains. Here we present an algorithm for quickly adapting a pre-trained cascade of classifiers using a small number of labeled positive instances from a different yet similar data domain. In our experiments with images of human babies and human-like characters from movies, we demonstrate that the adapted cascade significantly outperforms both of the original cascade and the one trained from scratch using the given training examples.
4 0.76426291 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu
Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
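The marginal-distribution mismatch that JDA-style methods reduce is typically measured with the Maximum Mean Discrepancy (MMD). A small sketch of squared MMD between source and target samples (the RBF kernel and the `gamma` value are assumptions; the paper's dimensionality-reduction machinery is omitted):

```python
import numpy as np

def mmd2(Xs, Xt, gamma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel, a standard
    estimate of the distance between two marginal distributions."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    m, n = len(Xs), len(Xt)
    return (rbf(Xs, Xs).sum() / m**2
            + rbf(Xt, Xt).sum() / n**2
            - 2 * rbf(Xs, Xt).sum() / (m * n))
```

The value is zero when the two sample sets coincide and grows as the domains drift apart, which is what adaptation methods drive down in the learned feature space.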
5 0.76398188 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration
Author: Chenglong Bao, Jian-Feng Cai, Hui Ji
Abstract: In recent years, how to learn a dictionary from input images for sparse modelling has been a very active topic in image processing and recognition. Most existing dictionary learning methods consider an over-complete dictionary, e.g. the K-SVD method. Often they require solving some minimization problem that is very challenging in terms of computational feasibility and efficiency. However, if the correlations among dictionary atoms are not well constrained, the redundancy of the dictionary does not necessarily improve the performance of sparse coding. This paper proposes a fast orthogonal dictionary learning method for sparse image representation. With comparable performance on several image restoration tasks, the proposed method is much more computationally efficient than the over-complete dictionary based learning methods.
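The efficiency claim above comes from the fact that, for a square orthogonal dictionary, both alternating steps have closed forms: sparse coding reduces to hard thresholding of D^T X, and the dictionary update is an orthogonal Procrustes problem solved by one SVD. A hedged sketch of this alternation (not the paper's exact algorithm; the initialization and the threshold `lam` are assumptions):

```python
import numpy as np

def learn_orthogonal_dictionary(X, lam=0.1, iters=20, seed=0):
    """Alternating scheme for an orthogonal dictionary D (D^T D = I):
    - coding step:     C = hard_threshold(D^T X, lam), closed form
                       because D is orthogonal;
    - dictionary step: orthogonal Procrustes, D = U V^T where
                       U S V^T = svd(X C^T)."""
    d = X.shape[0]
    rng = np.random.default_rng(seed)
    # random orthogonal initialization via QR
    D, _ = np.linalg.qr(rng.normal(size=(d, d)))
    for _ in range(iters):
        C = D.T @ X
        C[np.abs(C) < lam] = 0.0          # hard thresholding
        U, _, Vt = np.linalg.svd(X @ C.T)
        D = U @ Vt
    return D, C
```

Each iteration costs one matrix product and one small d-by-d SVD, which is far cheaper than the pursuit steps required by over-complete methods such as K-SVD.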
6 0.76007748 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person
7 0.75902635 52 iccv-2013-Attribute Adaptation for Personalized Image Search
8 0.75899678 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search
9 0.75847363 106 iccv-2013-Deep Learning Identity-Preserving Face Space
10 0.75786304 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution
11 0.75692999 277 iccv-2013-Multi-channel Correlation Filters
12 0.75679386 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso
13 0.75663608 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
14 0.75500715 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications
15 0.75492102 123 iccv-2013-Domain Adaptive Classification
16 0.754794 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition
17 0.75439626 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
18 0.75413114 330 iccv-2013-Proportion Priors for Image Sequence Segmentation
19 0.75345254 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
20 0.7517935 396 iccv-2013-Space-Time Robust Representation for Action Recognition