iccv iccv2013 iccv2013-26 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun
Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function, leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem, leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validates the utility of the proposed algorithm.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Face verification involves determining whether a pair of facial images belongs to the same or different subjects. [sent-2, score-0.262]
2 Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. [sent-6, score-0.769]
3 Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function, leading to a scalable implementation via the EM algorithm. [sent-7, score-0.177]
4 As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem leading to a number of interesting insights related to solution structure and feature-transform invariance. [sent-8, score-0.392]
5 Introduction Numerous computer vision applications involve testing a pair of facial images to determine whether or not they belong to the same subject. [sent-12, score-0.138]
6 For example, this so-called face verification task is required by automatic PC or mobile logon using facial identity, or for grouping images of the same face for tagging purposes, etc. [sent-13, score-0.57]
7 Recently, several authors have demonstrated that simple, scalable generative Bayesian models are capable of achieving state-of-the-art performance on challenging face verification benchmarks [21, 18, 5]. [sent-15, score-0.365]
8 The former can be viewed as confounding, nuisance factors while the latter in isolation should determine successful face verification. [sent-17, score-0.191]
9 While these results are promising, many important practical scenarios involve cross-domain data drawn from potentially different facial appearance distributions. [sent-23, score-0.193]
10 Therefore a model trained using widely available web images may suffer a large performance drop in an application-specific target domain that cannot be viewed as i.i.d. image samples from the web. [sent-24, score-0.304]
11 This paper addresses these issues by deriving and analyzing a principled transfer learning algorithm for combining plentiful source-domain data (e.g., web images) with limited target-domain data. [sent-26, score-0.502]
12 Although conceptually we may address this problem by adapting any number of baseline face verification algorithms, we choose the Joint Bayesian algorithm as our starting point for two reasons. [sent-31, score-0.394]
13 First, despite its simplicity and underlying Gaussian assumptions (see below for details), this algorithm nonetheless achieves the highest published results on the most influential benchmark face verification datasets. [sent-32, score-0.431]
14 Secondly, the scalability and transparency of the Joint Bayesian cost function and update rules render principled transfer learning extensions and detailed analysis tractable. [sent-33, score-0.517]
15 Our basic strategy can be viewed from an information-theoretic perspective, where the idea is to penalize the Kullback-Leibler divergence between the distributions of source- and target-domain data to maximize the sharing of information. [sent-34, score-0.185]
16 For the zero-mean multivariate Gaussians used by PLDA and Joint Bayesian algorithms, this reduces to the Burg matrix divergence between the corresponding covariance matrices [8]. [sent-35, score-0.279]
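As a concrete illustration of this reduction, the KL divergence between two zero-mean Gaussians with covariances $S$ and $T$ equals half the Burg (LogDet) matrix divergence between $S$ and $T$. Below is a minimal NumPy sketch written for this summary; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def kl_zero_mean_gaussians(S, T):
    """KL( N(0, S) || N(0, T) ) for positive-definite covariances S, T.

    This equals 0.5 * Burg matrix divergence:
    0.5 * ( tr(T^{-1} S) - log det(T^{-1} S) - n ).
    """
    n = S.shape[0]
    Tinv_S = np.linalg.solve(T, S)           # T^{-1} S without forming T^{-1}
    sign, logdet = np.linalg.slogdet(Tinv_S)
    assert sign > 0, "S and T must be positive definite"
    return 0.5 * (np.trace(Tinv_S) - logdet - n)

# Sanity check: the divergence vanishes iff the two covariances coincide.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5 * np.eye(5)                  # random positive-definite matrix
print(kl_zero_mean_gaussians(S, S))          # ~0.0
print(kl_zero_mean_gaussians(S, np.eye(5)))  # strictly positive
```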
17 The main contributions herein can then be summarized as follows: ∙ Development of a simple, scalable transfer learning method for adapting existing generative face verification models to new domains where data is scarce. [sent-40, score-0.615]
18 This demonstrates several desirable properties related to robustness, feature-transform invariance, subspace learning, and computational efficiency, while further elucidating many existing Bayesian face verification algorithms as a natural byproduct. [sent-42, score-0.445]
19 Section 2 describes related work on transfer learning while Section 3 briefly reviews the Joint Bayesian face verification algorithm which serves as the basis of our approach. [sent-45, score-0.713]
20 The specifics of the proposed transfer learning algorithm are presented in Section 4, followed by theoretical analysis and motivation for our particular model in Section 5. [sent-46, score-0.397]
21 learn a Mahalanobis distance function, where the learned metric is "close" to the Euclidean distance in the sense of Kullback-Leibler divergence [7]. [sent-51, score-0.135]
22 This influential approach, termed ITML, has also been extended to domain adaptation problems [23, 16]. [sent-52, score-0.243]
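For context, the standard ITML objective from [7] keeps the learned Mahalanobis matrix $A$ close to a prior $A_0$ (the identity matrix recovers the "close to Euclidean" case) under the LogDet divergence:

$$\min_{A \succeq 0}\; D_{\ell d}(A, A_0) = \operatorname{tr}(A A_0^{-1}) - \log\det(A A_0^{-1}) - d,$$
$$\text{s.t.}\quad d_A(x_i, x_j) \le u \;\; \forall (i,j) \in \mathcal{S}, \qquad d_A(x_i, x_j) \ge \ell \;\; \forall (i,j) \in \mathcal{D},$$

where $d_A(x_i, x_j) = (x_i - x_j)^\top A (x_i - x_j)$, and $\mathcal{S}$ and $\mathcal{D}$ collect the similar and dissimilar training pairs respectively.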
23 Other regularizers, including maximum mean discrepancy [20, 9] and Bregman divergence [24], have also been studied for transfer learning. [sent-55, score-0.481]
24 Our algorithm differs from these discriminative approaches via the choice of our generative model and its subsequent interaction with the KL divergence regularizer. [sent-56, score-0.184]
25 Transfer learning algorithms have also been developed based upon recent rank minimization techniques [4, 14, 15]. [sent-57, score-0.289]
26 However, these methods apply to problems that are structurally very different from face verification and existing methods do not apply here. [sent-58, score-0.316]
27 Although our algorithm is not directly derived from a rank minimization perspective, as intimated above it can be interpreted as a particular minimization task that includes multiple concave penalties on matrix singular values that are combined in a novel way. [sent-59, score-0.466]
28 Review of the Joint Bayesian Method This section briefly reviews the Joint Bayesian method for face verification [5] which will serve as the basis for our transfer learning algorithm. [sent-61, score-0.713]
29 In this context we assume that the appearance of relevant facial features is influenced by two latent factors: identity and intra-personal variations. [sent-62, score-0.227]
30 The Joint Bayesian method then models both $\mu$ and $\epsilon$ as multivariate Gaussians with zero mean (after the appropriate centering operation) and covariance matrices $S_\mu$ and $S_\epsilon$ respectively. [sent-67, score-0.192]
31 During the testing phase, unlike previous Bayesian face recognition algorithms which discriminate based on the difference between a pair of faces [19, 27], the Joint Bayesian classifier is based upon the full joint distribution of face image pairs leading to a considerable performance boost. [sent-69, score-0.569]
32 Using (1), it is readily shown that the joint distributions $p(x_1, x_2 \mid H_I)$ and $p(x_1, x_2 \mid H_E)$ are zero-mean Gaussians with covariance matrices $\begin{bmatrix} S_\mu + S_\epsilon & S_\mu \\ S_\mu & S_\mu + S_\epsilon \end{bmatrix}$ and $\begin{bmatrix} S_\mu + S_\epsilon & 0 \\ 0 & S_\mu + S_\epsilon \end{bmatrix}$ respectively. [sent-71, score-0.217]
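To make the resulting likelihood-ratio test concrete, the following NumPy sketch reconstructs the score $\log p(x_1, x_2 \mid H_I) - \log p(x_1, x_2 \mid H_E)$ directly from these two block covariances. It is our own simplified reconstruction (with naive $O(d^3)$ solves; the published method uses more efficient closed-form expressions), not the authors' implementation.

```python
import numpy as np

def joint_bayesian_score(x1, x2, S_mu, S_eps):
    """log p(x1, x2 | H_I) - log p(x1, x2 | H_E); larger values favor
    the same-identity hypothesis."""
    d = len(x1)
    A = S_mu + S_eps
    Z = np.zeros((d, d))
    # Joint covariances of the stacked pair [x1; x2] under each hypothesis.
    Sigma_I = np.block([[A, S_mu], [S_mu, A]])  # same identity
    Sigma_E = np.block([[A, Z], [Z, A]])        # different identities
    z = np.concatenate([x1, x2])

    def log_gauss(Sigma):
        sign, logdet = np.linalg.slogdet(Sigma)
        quad = z @ np.linalg.solve(Sigma, z)
        return -0.5 * (logdet + quad)           # 2*pi constants cancel in the ratio

    return log_gauss(Sigma_I) - log_gauss(Sigma_E)
```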
33 Transfer Learning Algorithm We will now adapt the Joint Bayesian model to the transfer learning problem by first proposing an appropriate cost function followed by the development of a simple EM algorithm for training purposes. [sent-75, score-0.397]
34 The KL divergence, as well as alternative penalties based on Bregman divergences and maximum mean discrepancy, has been motivated for related transfer learning purposes [7, 23, 16, 20, 24, 9], although not in combination with a likelihood function as we have done here. [sent-84, score-0.594]
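Equation (3) itself is not reproduced in this extract. Based solely on the description above, an objective of this type would pair the target-domain Gaussian likelihood with KL penalties tying the target covariances $T_\mu, T_\epsilon$ to the source covariances $S_\mu, S_\epsilon$. The sketch below uses our own notation, and both the trade-off weight $\lambda$ and the direction of the divergence are assumptions, not the paper's verbatim cost:

$$\min_{T_\mu,\,T_\epsilon \succ 0}\; -\log p\big(\mathcal{X}_{\text{target}};\, T_\mu, T_\epsilon\big) + \lambda\Big[\mathrm{KL}\big(\mathcal{N}(0, S_\mu)\,\big\|\,\mathcal{N}(0, T_\mu)\big) + \mathrm{KL}\big(\mathcal{N}(0, S_\epsilon)\,\big\|\,\mathcal{N}(0, T_\epsilon)\big)\Big].$$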
35 In the absence of significant source-domain data, (3) reduces to a current state-of-the-art algorithm for face verification. [sent-86, score-0.192]
36 A completely independent justification of (3) and the associated EM update rules is possible using ideas from convex analysis (see Section 5). [sent-91, score-0.185]
37 Specifically, we decompose all of the samples of one subject $X$ into two latent parts based on (1): identity $\mu$, which is invariant for all images of the same subject, and intra-personal variations $\{\epsilon_1, \ldots, \epsilon_m\}$. [sent-105, score-0.178]
38 Since the identity and intra-personal variations are independent Gaussians, it is easy to show that $H$ follows a zero-mean Gaussian distribution with covariance $\Omega$. [sent-113, score-0.157]
39 E-step: Given the samples of one subject $X$, the expectation of the associated latent variable $H$ can be derived as $E(H \mid X) = \Omega P^{T} (P \Omega P^{T})^{-1} X$. [sent-118, score-0.162]
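A hedged NumPy sketch of this E-step for one subject, with $\Omega = \mathrm{blockdiag}(S_\mu, S_\epsilon, \ldots, S_\epsilon)$ and $P$ the block matrix encoding $x_i = \mu + \epsilon_i$. The construction and names are illustrative, and the naive solve below ignores the block structure a scalable implementation would exploit.

```python
import numpy as np

def e_step_posterior_mean(X, S_mu, S_eps):
    """E(H | X) = Omega P^T (P Omega P^T)^{-1} X for one subject.

    X is (m, d): m images of the same subject, one feature vector per row.
    H stacks [mu; eps_1; ...; eps_m], and the stacked samples satisfy X = P H.
    """
    m, d = X.shape
    I = np.eye(d)

    # Prior covariance of H: blockdiag(S_mu, S_eps, ..., S_eps).
    Omega = np.zeros(((m + 1) * d, (m + 1) * d))
    Omega[:d, :d] = S_mu
    for i in range(1, m + 1):
        Omega[i * d:(i + 1) * d, i * d:(i + 1) * d] = S_eps

    # P maps H to the stacked observations: x_i = mu + eps_i.
    P = np.zeros((m * d, (m + 1) * d))
    for i in range(m):
        P[i * d:(i + 1) * d, :d] = I
        P[i * d:(i + 1) * d, (i + 1) * d:(i + 2) * d] = I

    x = X.reshape(-1)
    EH = Omega @ P.T @ np.linalg.solve(P @ Omega @ P.T, x)
    return EH[:d], EH[d:].reshape(m, d)  # posterior means of mu and each eps_i
```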
40 Also, from an implementational standpoint, instead of adapting the mean face from the source to the target domain, we directly estimate the mean using only target-domain data. [sent-123, score-0.386]
41 This is because first-order statistics can be reliably estimated with relatively limited data, even though the second-order, high-dimensional covariances cannot be. [sent-124, score-0.124]
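A quick numerical illustration of this asymmetry (the dimensions are illustrative, not from the paper): with fewer samples than dimensions, the sample mean is already accurate while the sample covariance is not.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 100                       # high dimension, few samples
X = rng.standard_normal((n, d))       # true mean 0, true covariance I

# Error of the first-order statistic, relative to the scale of a typical vector.
mean_err = np.linalg.norm(X.mean(axis=0)) / np.sqrt(d)
# Relative Frobenius error of the second-order statistic.
cov_err = np.linalg.norm(np.cov(X.T) - np.eye(d)) / np.linalg.norm(np.eye(d))
print(f"mean error: {mean_err:.3f}")  # roughly 1/sqrt(n), small
print(f"cov error:  {cov_err:.3f}")   # order sqrt(d/n), larger than 1 here
```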
42 While the full E-step can actually be calculated using our model with limited additional computation (we merely need to compute a posterior covariance analogous to the mean from (5)), we choose not to include this extra term for several reasons. [sent-132, score-0.139]
43 The remainder of this section will argue that the optimization problem from (7) provides a compelling, complementary picture of the original transfer learning formulation from Section 4. [sent-144, score-0.439]
44 To begin, the penalty terms in (7) both rely on the log-det function, which represents a somewhat common surrogate for the matrix rank function. [sent-146, score-0.209]
45 For a given symmetric, positive semidefinite matrix $Z$, let $\boldsymbol{\sigma}$ denote the vector of all singular values of $Z$ (which will be non-negative) and let $\sigma_r$ denote its $r$-th element. [sent-148, score-0.147]
46 We then have $\log|Z| = \sum_r \log \sigma_r = \lim_{p \to 0} \frac{1}{p} \sum_r \left(\sigma_r^p - 1\right) \propto \|\boldsymbol{\sigma}\|_0 = \operatorname{rank}[Z]$ (8). In this context, $\log|Z|$ can be viewed as a scaled and translated version of $\operatorname{rank}[Z]$. [sent-149, score-0.184]
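The limit in (8) is easy to check numerically; this small illustration (our own, with arbitrary singular values) shows the scaled power penalty converging to $\sum_r \log \sigma_r = \log|Z|$ as $p \to 0$:

```python
import numpy as np

sigma = np.array([3.0, 1.0, 0.2, 0.01])        # singular values of some PSD Z
target = np.log(sigma).sum()                   # log |Z|
for p in (1.0, 0.1, 0.001):
    approx = ((sigma ** p) - 1.0).sum() / p    # (1/p) * sum(sigma_r^p - 1)
    print(f"p={p:<6} approx={approx: .4f}  log|Z|={target: .4f}")
```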
47 The objective function from (7) is basically attempting to find covariances $T_\mu$ and $T_\epsilon$ of (approximately) minimal rank, subject to the constraint that the latent variables $\mathbb{M}$ and $\mathbb{E}$, when confined to the subspaces prescribed by their respective covariances, satisfy the constraint $\mathbb{X} = \mathbb{E} + \mathbb{M}\Psi$. [sent-151, score-0.235]
48 Low-rank solutions can be highly desirable for regularization purposes, interpretability, and implementational efficiency. [sent-153, score-0.301]
49 The latter is especially crucial for many practical applications, where minimal rank implies fast evaluation on test data (see below). [sent-154, score-0.214]
50 However, in the absence of prior knowledge, and with limited training data, the associated subspace estimates may be unreliable or possibly associated with undesirable degenerate solutions. [sent-155, score-0.118]
51 Fortunately, when prior information is available in the form of nonzero covariances 푆휇 and 푆휖, the situation becomes much more appealing. [sent-156, score-0.124]
52 The log-det penalty now handles the subspace spanned by the prior source-domain information (meaning the span of the singular vectors of $\lambda S_\mu$ and $\lambda S_\epsilon$ that have significant singular values) very differently than the orthogonal complement. [sent-157, score-0.424]
53 In directions characterized by small (or zero) singular values, $T_\mu$ or $T_\epsilon$ will be penalized heavily, akin to the rank function per the analysis above. [sent-158, score-0.345]
54 In contrast, when source-domain singular values are relatively large, the associated penalty softens considerably, approaching a nearly-flat convex, $\ell_1$ norm-like regularizer (in the sense that $\log(\sigma + c)$ achieves a nearly constant gradient with respect to $\sigma$ as $c$ becomes large). [sent-159, score-0.265]
55 Moreover, it is readily shown that, because the likelihood ratio test (2) used to compare two new faces is invariant to an invertible transformation such as $W$, the solution pair $W T_\mu^*$ and $W T_\epsilon^*$ is for all practical purposes fully equivalent to $T_\mu^*$ and $T_\epsilon^*$. [sent-166, score-0.341]
56 This highly desirable invariance property is quite unlike other sparse or low-rank models that incorporate, for example, convex penalties such as the ℓ1 norm or the nuclear norm. [sent-167, score-0.253]
57 With these penalties an invertible feature transform would lead to an entirely different decision rule and therefore different classification results. [sent-168, score-0.15]
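A short verification sketch (ours, not from the paper) of why the ratio test survives an invertible feature transform $W$: under $x \mapsto Wx$ the stacked pair transforms by $I_2 \otimes W$, so each joint covariance becomes $\Sigma' = (I_2 \otimes W)\,\Sigma\,(I_2 \otimes W)^\top$ and, for $z' = (I_2 \otimes W)z$,

$$(z')^\top (\Sigma')^{-1} z' = z^\top \Sigma^{-1} z, \qquad \log\det \Sigma' = \log\det \Sigma + 4\log\lvert\det W\rvert.$$

The quadratic forms are unchanged, and the $4\log|\det W|$ offset appears under both $H_I$ and $H_E$, so it cancels in the log-likelihood ratio (2), leaving the decision unchanged.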
E.g., face log-on for smartphones, where fast, real-time computations are required. [sent-175, score-0.154]
59 We note that the convex nuclear norm substitution for the rank penalty does not shrink nearly as many singular values to exactly zero (experiments not shown), and thus does not produce nearly as parsimonious a representation. [sent-176, score-0.553]
60 Thus heuristic singular value thresholding is required for a practical, computationally-efficient implementation. [sent-177, score-0.147]
61 It densely samples multi-scale LBP descriptors centered at dense facial landmarks, and then concatenates them to form a high-dimensional feature. [sent-184, score-0.151]
62 Results with Similar Source/Target Domains A good transfer learning method should seamlessly perform well even with differing degrees of similarity between the source and target domains. [sent-192, score-0.497]
For practical purposes, and to avoid undefined solutions involving the log of zero in (7), both $S_\epsilon$ and $S_\mu$ can be chosen with no strictly zero-valued singular values. [sent-196, score-0.294]
64 This would then imply that $T_\epsilon$ and $T_\mu$, and therefore the derived model matrices, cannot be strictly low rank without some minimal level of thresholding. [sent-197, score-0.159]
65 However, this is a relatively minor implementational detail and does not affect the overall nature of the arguments made in this section. [sent-198, score-0.137]
66 We use the LFW dataset [13] for the target domain both because of its similarity with WDRef and because it represents a well-studied and challenging benchmark allowing us to place our results in the context of existing face verification systems [17, 26, 25, 2, 18, 6]. [sent-205, score-0.477]
67 Although WDRef and LFW da- ta are similar, this result shows that the proposed transfer learning method can still improve the accuracy even further to 96. [sent-209, score-0.397]
68 Moreover, given that our algorithm explicitly abides by all of the rules pertaining to the unrestricted LFW protocol, it now represents the best reported result on this important benchmark. [sent-212, score-0.149]
69 Results with Large Domain Differences Next we experimentally verify the proposed method in two common daily-life scenarios where, unlike the previous section, considerable domain differences exist relative to source-domain data collected from the Internet. [sent-216, score-0.138]
70 It contains 58 subjects and 1,948 images in total, typically around 40 samples per subject. [sent-219, score-0.213]
71 We study the accuracy of our model as a function of the number of subjects used in the target domain. [sent-221, score-0.223]
72 Due to significant domain differences between web images and the captured video camera frames, the error rate of the baseline source-domain model is 13. [sent-222, score-0.192]
73 Results from models trained with target-domain data only (TDO) and transfer learning (TL). [sent-234, score-0.438]
74 This dataset contains eight real family photo albums collected from personal contacts. [sent-238, score-0.149]
75 There is considerable diversity between the different albums in terms of the number of images, subjects, and time frame. [sent-239, score-0.126]
76 The smallest album contains 10 subjects and around 400 images taken over the past two years. [sent-240, score-0.301]
77 In contrast, the largest albums contain hundreds of subjects and around 10,000 images taken over the past eight years. [sent-241, score-0.25]
78 To mimic a practical scenario, we consider each family album as a target domain. [sent-242, score-0.316]
79 As shown in Figure 3, for most albums the error rate is reduced to less than half of the error rate achieved by the source-domain baseline model. [sent-245, score-0.17]
80 We expect that this could improve the user experience in personal album management on many platforms such as PC, phone, and social networks. [sent-246, score-0.187]
81 Comparisons with Existing TL methods Using the video camera dataset from the previous section, we now turn to comparisons with competing transfer learning methods. [sent-249, score-0.346]
82 Figure 3: X-axis labels show the number of images and subjects in the corresponding album. [sent-251, score-0.162]
83 Because metric and subspace learning represent influential, extensible approaches for face verification, we choose to conduct experiments with two popular representatives. [sent-253, score-0.285]
84 By using the source-domain data to obtain such a prior, ITML is naturally extended as a transfer learning method, referred to as T-ITML. [sent-255, score-0.397]
85 [24] proposed a framework for transductive transfer subspace learning based on Bregman divergences. [sent-258, score-0.477]
86 By applying their framework, we then have transfer LDA, or T-LDA, as a useful competing method. [sent-259, score-0.346]
87 For T-ITML, a large number of training pairs are generated to maximize its performance, while for T-LDA the optimal subspace dimensionality must be selected. [sent-261, score-0.124]
We generate 200,000 pairs for training the source-domain metric and 6,000 pairs for transfer learning. [sent-276, score-0.397]
89 Large Scale Data When provided with a small amount of target-domain data, ideally a good transfer learning algorithm will produce a new model which performs nearly as well as a model trained using a fully-representative, large-scale target-domain dataset. [sent-279, score-0.563]
90 First, our transfer learning model performs similarly to the model trained using the large-scale target-domain data, which is around 20 times larger than the data used for transfer learning. [sent-286, score-0.784]
91 The error rate of the models trained with target-domain data only (TDO) and transfer learning (TL). [sent-298, score-0.479]
92 To examine these effects we use the PCA dimension to represent the freedom of the transfer learning model, while WDAsian is used as the target-domain dataset. [sent-304, score-0.397]
93 The reason, of course, is that the source-domain data acts as a powerful regularizer, centering the solution space at a more reasonable baseline. [sent-315, score-0.118]
94 Right: the results obtained by transfer learning. [sent-316, score-0.346]
95 Conclusion This paper presents a generative Bayesian transfer learning algorithm particularly well-suited for the face verification problem. [sent-319, score-0.762]
96 Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. [sent-341, score-0.224]
97 Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification. [sent-365, score-0.154]
98 Another interpretation of the EM algorithm for mixture distributions. [sent-403, score-0.124]
99 Labeled faces in the wild: A database for studying face recognition in unconstrained environments. [sent-410, score-0.206]
100 Leveraging billions of faces to overcome performance barriers in unconstrained face recognition. [sent-494, score-0.206]
wordName wordTfidf (topN-words)
[('transfer', 0.346), ('bayesian', 0.24), ('subjects', 0.162), ('verification', 0.162), ('rank', 0.159), ('wdref', 0.156), ('face', 0.154), ('singular', 0.147), ('album', 0.139), ('divergence', 0.135), ('covariances', 0.124), ('em', 0.124), ('kl', 0.12), ('lfw', 0.109), ('scarce', 0.109), ('domain', 0.1), ('facial', 0.1), ('implementational', 0.093), ('plda', 0.093), ('bregman', 0.093), ('log', 0.092), ('joint', 0.092), ('albums', 0.088), ('covariance', 0.082), ('subspace', 0.08), ('penalties', 0.078), ('adapting', 0.078), ('rules', 0.077), ('identity', 0.075), ('itml', 0.075), ('adaptation', 0.073), ('invertible', 0.072), ('unrestricted', 0.072), ('purposes', 0.07), ('sourcedomain', 0.07), ('targetdomain', 0.07), ('wdasian', 0.07), ('influential', 0.07), ('regularizer', 0.068), ('kulis', 0.063), ('risk', 0.062), ('elucidate', 0.062), ('plentiful', 0.062), ('gaussians', 0.062), ('multivariate', 0.062), ('target', 0.061), ('family', 0.061), ('subject', 0.059), ('justification', 0.058), ('analogous', 0.057), ('practical', 0.055), ('nearly', 0.055), ('domains', 0.053), ('latent', 0.052), ('faces', 0.052), ('achievable', 0.052), ('penalization', 0.052), ('learning', 0.051), ('web', 0.051), ('samples', 0.051), ('penalty', 0.05), ('convex', 0.05), ('informationtheoretic', 0.05), ('likelihood', 0.049), ('desirable', 0.049), ('tl', 0.049), ('generative', 0.049), ('pca', 0.049), ('wen', 0.049), ('centering', 0.048), ('platforms', 0.048), ('taigman', 0.048), ('published', 0.045), ('dimensionality', 0.044), ('arguments', 0.044), ('reformulation', 0.043), ('insights', 0.043), ('readily', 0.043), ('principled', 0.043), ('remainder', 0.042), ('lbp', 0.042), ('minimization', 0.041), ('secondly', 0.041), ('rate', 0.041), ('trained', 0.041), ('leading', 0.041), ('cao', 0.04), ('penalized', 0.039), ('differing', 0.039), ('invariance', 0.039), ('protocol', 0.039), ('herein', 0.038), ('involve', 0.038), ('absence', 0.038), ('considerable', 0.038), ('file', 0.038), ('upon', 0.038), ('factors', 0.037), ('nuclear', 0.037), ('tdo', 0.037)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000007 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun
Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function, leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem, leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validates the utility of the proposed algorithm.
2 0.25760642 392 iccv-2013-Similarity Metric Learning for Face Recognition
Author: Qiong Cao, Yiming Ying, Peng Li
Abstract: Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging and difficult due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem which guarantees the existence of its global solution. Experiments show that our proposed method achieves the state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
3 0.19146252 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
Author: Yizhe Zhang, Ming Shao, Edward K. Wong, Yun Fu
Abstract: One of the most challenging tasks in face recognition is to identify people with varied poses. Namely, the test faces have significantly different poses compared with the registered faces. In this paper, we propose a high-level feature learning scheme to extract pose-invariant identity features for face recognition. First, we build a single-hidden-layer neural network with a sparse constraint, to extract pose-invariant features in a supervised fashion. Second, we further enhance the discriminative capability of the proposed feature by using multiple random faces as the target values for multiple encoders. By enforcing the target values to be unique for input faces over different poses, the learned high-level feature that is represented by the neurons in the hidden layer is pose free and only relevant to the identity information. Finally, we conduct face identification on CMU MultiPIE, and verification on Labeled Faces in the Wild (LFW) databases, where identification rank-1 accuracy and face verification accuracy with ROC curve are reported. These experiments demonstrate that our model is superior to other state-of-the-art approaches on handling pose variations.
4 0.17270029 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars
Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.
5 0.15645373 123 iccv-2013-Domain Adaptive Classification
Author: Fatemeh Mirrashed, Mohammad Rastegari
Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.
6 0.15252551 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition
7 0.15186091 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection
8 0.14817703 157 iccv-2013-Fast Face Detector Training Using Tailored Views
9 0.14070338 444 iccv-2013-Viewing Real-World Faces in 3D
10 0.12937124 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision
11 0.12930734 206 iccv-2013-Hybrid Deep Learning for Face Verification
12 0.12857485 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
13 0.12810844 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection
14 0.12648235 44 iccv-2013-Adapting Classification Cascades to New Domains
15 0.12461054 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition
16 0.12058202 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information
17 0.11904607 223 iccv-2013-Joint Noise Level Estimation from Personal Photo Collections
18 0.11850389 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
19 0.11209662 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition
20 0.10973215 219 iccv-2013-Internet Based Morphable Model
topicId topicWeight
[(0, 0.279), (1, 0.056), (2, -0.116), (3, -0.131), (4, -0.069), (5, -0.047), (6, 0.184), (7, 0.116), (8, 0.079), (9, -0.004), (10, -0.004), (11, -0.08), (12, -0.03), (13, -0.036), (14, 0.04), (15, -0.123), (16, -0.08), (17, -0.027), (18, -0.026), (19, -0.01), (20, 0.062), (21, -0.098), (22, 0.042), (23, -0.057), (24, 0.039), (25, 0.013), (26, 0.049), (27, -0.095), (28, 0.025), (29, 0.106), (30, 0.072), (31, 0.007), (32, -0.011), (33, 0.017), (34, -0.028), (35, -0.068), (36, 0.003), (37, 0.047), (38, 0.03), (39, -0.015), (40, -0.012), (41, 0.042), (42, -0.031), (43, 0.061), (44, 0.043), (45, 0.028), (46, -0.097), (47, -0.014), (48, 0.022), (49, 0.085)]
simIndex simValue paperId paperTitle
same-paper 1 0.9668926 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun
Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function, leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem, leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validates the utility of the proposed algorithm.
2 0.75552207 392 iccv-2013-Similarity Metric Learning for Face Recognition
Author: Qiong Cao, Yiming Ying, Peng Li
Abstract: Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging and difficult due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem which guarantees the existence of its global solution. Experiments show that our proposed method achieves the state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
3 0.74834961 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition
Author: Dihong Gong, Zhifeng Li, Dahua Lin, Jianzhuang Liu, Xiaoou Tang
Abstract: Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.
4 0.7430194 154 iccv-2013-Face Recognition via Archetype Hull Ranking
Author: Yuanjun Xiong, Wei Liu, Deli Zhao, Xiaoou Tang
Abstract: The archetype hull model is playing an important role in large-scale data analytics and mining, but rarely applied to vision problems. In this paper, we migrate such a geometric model to address face recognition and verification together through proposing a unified archetype hull ranking framework. Upon a scalable graph characterized by a compact set of archetype exemplars whose convex hull encompasses most of the training images, the proposed framework explicitly captures the relevance between any query and the stored archetypes, yielding a rank vector over the archetype hull. The archetype hull ranking is then executed on every block of face images to generate a blockwise similarity measure that is achieved by comparing two different rank vectors with respect to the same archetype hull. After integrating blockwise similarity measurements with learned importance weights, we accomplish a sensible face similarity measure which can support robust and effective face recognition and verification. We evaluate the face similarity measure in terms of experiments performed on three benchmark face databases: Multi-PIE, Pubfig83, and LFW, demonstrating its performance superior to the state of the art.
5 0.74240726 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition
Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz
Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.
6 0.72670561 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
7 0.71336943 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
8 0.71326888 44 iccv-2013-Adapting Classification Cascades to New Domains
9 0.69121552 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
10 0.67464238 157 iccv-2013-Fast Face Detector Training Using Tailored Views
11 0.67396867 357 iccv-2013-Robust Matrix Factorization with Unknown Noise
12 0.67385089 206 iccv-2013-Hybrid Deep Learning for Face Verification
13 0.66002536 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation
14 0.6592856 248 iccv-2013-Learning to Rank Using Privileged Information
15 0.65527314 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias
16 0.65210801 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity
17 0.64889359 272 iccv-2013-Modifying the Memorability of Face Photographs
18 0.63940692 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
19 0.63925868 106 iccv-2013-Deep Learning Identity-Preserving Face Space
20 0.61212546 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision
topicId topicWeight
[(2, 0.073), (3, 0.121), (4, 0.019), (7, 0.033), (10, 0.012), (12, 0.013), (26, 0.071), (27, 0.015), (31, 0.064), (34, 0.018), (42, 0.176), (64, 0.06), (73, 0.036), (89, 0.165), (98, 0.047)]
simIndex simValue paperId paperTitle
same-paper 1 0.91313308 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun
Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function, leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem, leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validates the utility of the proposed algorithm.
2 0.90330553 141 iccv-2013-Enhanced Continuous Tabu Search for Parameter Estimation in Multiview Geometry
Author: Guoqing Zhou, Qing Wang
Abstract: Optimization using the L∞ norm has become an effective way to solve parameter estimation problems in multiview geometry. But the computational cost increases rapidly with the size of measurement data. Although some strategies have been presented to improve the efficiency of L∞ optimization, it is still an open issue. In the paper, we propose a novel approach under the framework of enhanced continuous tabu search (ECTS) for generic parameter estimation in multiview geometry. ECTS is an optimization method in the domain of artificial intelligence, which has an interesting ability of covering a wide solution space by promoting the search far away from the current solution and consecutively decreasing the possibility of trapping in the local minima. Taking the triangulation as an example, we propose the corresponding ways in the key steps of ECTS, diversification and intensification. We also present theoretical proof to guarantee the global convergence of search with probability one. Experimental results have validated that the ECTS-based approach can obtain the global optimum efficiently, especially for large parameter dimensions. Potentially, the novel ECTS-based algorithm can be applied in many applications of multiview geometry.
3 0.89852339 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu
Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution differences. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
4 0.89063925 44 iccv-2013-Adapting Classification Cascades to New Domains
Author: Vidit Jain, Sachin Sudhakar Farfade
Abstract: Classification cascades have been very effective for object detection. Such a cascade fails to perform well in data domains with variations in appearances that may not be captured in the training examples. This limited generalization severely restricts the domains for which they can be used effectively. A common approach to address this limitation is to train a new cascade of classifiers from scratch for each of the new domains. Building separate detectors for each of the different domains requires huge annotation and computational effort, making it not scalable to a large number of data domains. Here we present an algorithm for quickly adapting a pre-trained cascade of classifiers using a small number of labeled positive instances from a different yet similar data domain. In our experiments with images of human babies and human-like characters from movies, we demonstrate that the adapted cascade significantly outperforms both the original cascade and the one trained from scratch using the given training examples.
5 0.89043033 392 iccv-2013-Similarity Metric Learning for Face Recognition
Author: Qiong Cao, Yiming Ying, Peng Li
Abstract: Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging and difficult due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem which guarantees the existence of its global solution. Experiments show that our proposed method achieves the state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
6 0.88865197 52 iccv-2013-Attribute Adaptation for Personalized Image Search
7 0.88791281 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
8 0.88761079 286 iccv-2013-NYC3DCars: A Dataset of 3D Vehicles in Geographic Context
9 0.88675219 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples
10 0.88545251 123 iccv-2013-Domain Adaptive Classification
11 0.88333738 277 iccv-2013-Multi-channel Correlation Filters
12 0.88039643 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition
13 0.87944418 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
14 0.87887496 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information
15 0.87885076 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person
16 0.87859112 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications
17 0.87830859 365 iccv-2013-SIFTpack: A Compact Representation for Efficient SIFT Matching
18 0.87808359 180 iccv-2013-From Where and How to What We See
19 0.87790406 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search
20 0.87656796 59 iccv-2013-Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation