iccv iccv2013 iccv2013-261 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Wonjun Hwang, Kyungshik Roh, Junmo Kim
Abstract: We propose a novel unifying framework using a Markov network to learn the relationship between multiple classifiers in face recognition. We assume that we have several complementary classifiers and assign observation nodes to the features of a query image and hidden nodes to the features of gallery images. We connect each hidden node to its corresponding observation node and to the hidden nodes of other neighboring classifiers. For each observation-hidden node pair, we collect a set of gallery candidates that are most similar to the observation instance, and the relationship between the hidden nodes is captured in terms of the similarity matrix between the collected gallery images. Posterior probabilities in the hidden nodes are computed by the belief-propagation algorithm. The novelty of the proposed framework is the method that takes into account the classifier dependency using the results of each neighboring classifier. We present extensive results on two different evaluation protocols, known and unknown image variation tests, using three different databases, which shows that the proposed framework always leads to good accuracy in face recognition.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We propose a novel unifying framework using a Markov network to learn the relationship between multiple classifiers in face recognition. [sent-9, score-0.445]
2 We assume that we have several complementary classifiers and assign observation nodes to the features of a query image and hidden nodes to the features of gallery images. [sent-10, score-0.937]
3 We connect each hidden node to its corresponding observation node and to the hidden nodes of other neighboring classifiers. [sent-11, score-0.545]
4 For each observation-hidden node pair, we collect a set of gallery candidates that are most similar to the observation instance, and the relationship between the hidden nodes is captured in terms of the similarity matrix between the collected gallery images. [sent-12, score-1.102]
5 Posterior probabilities in the hidden nodes are computed by the belief-propagation algorithm. [sent-13, score-0.197]
6 We present extensive results on two different evaluation protocols, known and unknown image variation tests, using three different databases, which shows that the proposed framework always leads to good accuracy in face recognition. [sent-15, score-0.199]
7 Introduction When we use a face image as a query, we can retrieve several desired face images from a large image database. [sent-17, score-0.24]
8 We calculate the similarities between the query image and the gallery images in the database, and the retrieved gallery images are ranked by these similarity scores. [sent-18, score-0.795]
9 It is a one-to-many identification problem [8] and has many applications such as searching similar face images in a database and face tagging in images and videos. [sent-19, score-0.336]
10 However, traditional face recognition algorithms [10][14] have been developed for one-to-one verification [8], in particular as a biometric task. [sent-20, score-0.274]
11 Recent successful face recognition methods have attempted to merge several classifiers using multiple feature sets of different characteristics, as in component-based methods, which extract features from separate spatial regions (Figure 1). [sent-21, score-0.395]
12 These methods use classifiers that are not only based on different feature sets but are also trained independently, and the similarity scores are merged using predefined parameters. [sent-27, score-0.272]
13 Note that these methods lead to good accuracy in face verification, but there is no specific framework for the one-to-many identification problem. [sent-29, score-0.211]
14 First, we assume that we have multiple classifiers that have complementary characteristics. [sent-31, score-0.217]
15 We can unify the multiple classifiers based not on the predefined weight values but on a Markov network, as summarized in Figure 2. [sent-32, score-0.217]
16 For this purpose, we assign one node of a Markov network to each classifier. [sent-33, score-0.161]
17 At its paired hidden node, we retrieve the first n similar gallery samples, as illustrated in Figure 2. [sent-36, score-0.439]
18 N multiple classifiers are deployed under (a) the traditional recognition framework and (b) the proposed recognition framework using a Markov network. [sent-37, score-0.363]
19 Observation node y is assigned the feature of a query image, and hidden node x is assigned the feature of a gallery image. [sent-39, score-0.721]
20 These samples come from the database, and their order is determined by their similarity scores with the query face image. [sent-42, score-0.214]
21 The multiple classifiers have their own lists of retrieved gallery images, which are not identical in general, thereby complementing the neighbor classifiers. [sent-43, score-0.633]
22 Because the hidden nodes are connected by network edges, the relationship between connected nodes is learned from the similarity scores between neighboring classifiers, and these scores are calculated by concatenating the gallery features of the two neighboring classifiers. [sent-44, score-0.867]
23 The posterior probability at each hidden node is easily computed by the belief-propagation algorithm. [sent-45, score-0.235]
24 Our main contribution is the novel recognition framework that successfully organizes the classifier relationship using the similarity between the top ranked gallery images of the corresponding classifiers. [sent-48, score-0.527]
25 Each node's characteristics are iteratively propagated to its neighboring nodes. [sent-49, score-0.178]
26 As a result, all nodes become correlated with one another, which improves the recognition results in a one-to-many identification task. [sent-50, score-0.197]
27 Related Work The performance of recent face recognition algorithms has gradually advanced, which is largely due to techniques that merge several classifiers of different characteristics [21][20][9]. [sent-56, score-0.469]
28 In this paper, we use Gabor feature-based classifiers as representative classifiers, but the proposed framework is applicable to any other classifiers. [sent-58, score-0.239]
29 [3] proposed a hybrid face recognition method that combines holistic and feature analysis-based approaches using a Markov random field (MRF) model. [sent-60, score-0.177]
30 On the other hand, we use the Markov network to apply the relationship between the multiple classifiers to the face recognition scheme and propose a similarity-based compatibility function between the neighbor classifiers for the identification task. [sent-68, score-0.908]
31 However, in this paper, given a query image, y, we would like to identify the most similar gallery image, x, from the enrolled image set in a one-to-many face identification scenario. [sent-73, score-0.591]
32 The belief-propagation algorithm iteratively updates the message mij from node i to node j as follows: mij(xj) = Σ_{xi} ψij(xi, xj) Φ(xi, yi) Π_{k∈N(i), k≠j} mki(xi), [sent-81, score-0.294]
33 where mij(xj) is the element of the vector mij corresponding to the gallery candidate xj, and N(i) denotes the neighbors of node i. [sent-86, score-0.465]
34 The marginal probability bi for gallery xi at node i is derived, up to a normalization constant, by bi(xi) = Φ(xi, yi) Π_{k∈N(i)} mki(xi). [sent-87, score-0.554]
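A minimal NumPy sketch of this message-passing scheme on the loop-free chain of N classifier nodes of Figure 2 (b), assuming the compatibility terms Φ and ψ have already been computed; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def belief_propagation(phi, psi, n_iters=10):
    """Sum-product belief propagation on a loop-free chain of N classifier nodes.

    phi : list of N arrays, phi[i][p] = Phi(x_i^p, y_i), the query-gallery compatibility.
    psi : dict with psi[(i, j)][p, q] = psi_ij(x_i^p, x_j^q), the gallery-gallery
          compatibility; both directions must be present (psi[(j, i)] = psi[(i, j)].T).
    Returns the normalized beliefs b_i(x_i) at every hidden node.
    """
    N, n = len(phi), len(phi[0])
    neighbors = {i: [k for k in (i - 1, i + 1) if 0 <= k < N] for i in range(N)}
    msgs = {(i, j): np.ones(n) for i in range(N) for j in neighbors[i]}  # uniform init

    for _ in range(n_iters):
        new_msgs = {}
        for i in range(N):
            for j in neighbors[i]:
                # prod(x_i) = Phi(x_i, y_i) * product of incoming messages, excluding the one from j
                prod = phi[i].copy()
                for k in neighbors[i]:
                    if k != j:
                        prod *= msgs[(k, i)]
                # m_ij(x_j) = sum_{x_i} psi_ij(x_i, x_j) * prod(x_i)
                m = psi[(i, j)].T @ prod
                new_msgs[(i, j)] = m / (m.sum() + 1e-12)  # normalize for numerical stability
        msgs = new_msgs

    beliefs = []
    for i in range(N):
        b = phi[i].copy()
        for k in neighbors[i]:
            b *= msgs[(k, i)]
        beliefs.append(b / (b.sum() + 1e-12))
    return beliefs
```

Because the chain contains no loops, a small fixed number of sweeps is enough for the messages to converge.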
35 Node In [3], an image was divided into several patches and nodes were assigned to these patches under the Markov assumption, but in this paper, we assign the nodes to the feature vectors of the multiple classifiers. [sent-93, score-0.184]
36 For example, the query feature and the gallery feature are assigned to the observation node, y, and the hidden node, x, respectively, and we design the parallel network without loops for simplicity, as shown in Figure 2 (b). [sent-94, score-0.59]
37 In this respect, each classifier is influenced by two neighboring classifiers under the Markov assumption, and we finally have N observation-hidden node pairs. [sent-95, score-0.543]
38 A pair of observation and hidden nodes alone could be regarded as traditional face recognition, but here the hidden nodes are additionally connected by network edges. [sent-96, score-0.648]
39 At each hidden node, we collect n gallery candidates that are most similar to the observation feature. [sent-97, score-0.511]
40 The hidden nodes have their own sets of gallery candidates retrieved from the database, and the n candidates of each node are not the same as the others because we assume that the multiple classifiers have different characteristics. [sent-98, score-0.978]
41 For example, when assuming that there are two classifiers, we can have two sets of the n first ranked gallery candidates, and the relationship between hidden node x1 and x2 can be evaluated for pairs of realizations, (x1p, x2q), 1 ≤ p ≤ n, 1 ≤ q ≤ n. [sent-99, score-0.6]
42 In detail, x11 is the first-ranked element of the gallery candidates retrieved at the first hidden node. [sent-100, score-0.334]
43 The similarity between instances x1p and x2q of different hidden nodes x1 and x2 is computed by comparing two concatenated features, [fx1p fxp] and [fxq fx2q], where x1p and x2q are from the retrieved gallery sets at the first and second hidden nodes, respectively. [sent-103, score-0.722]
44 We generate the neighbor features, fxp and fxq, using the neighbor classifier (red), and we add them to the main features, fx1p and fx2q , (blue), in order. [sent-104, score-0.18]
45 Compatibility Function For a given observation, yi, the query-gallery compatibility function, Φ(xi, yi), is evaluated for the n gallery candidates of xi, {xi1, ..., xin}. [sent-108, score-0.495]
46 The gallery-gallery compatibility function, Ψ(xi, xj), is evaluated for the n × n pairs of (xi, xj), where xi takes values in {xi1, ..., xin}. [sent-113, score-0.172]
47 Using the normalized correlation between two feature vectors as the measure of similarity, the n most similar gallery images are retrieved at each hidden node. [sent-122, score-0.476]
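As a sketch of this retrieval step (illustrative names; the normalized correlation is simply the cosine of the two feature vectors):

```python
import numpy as np

def retrieve_top_n(query_feat, gallery_feats, n):
    """Indices and scores of the n gallery features most similar to the query,
    using normalized correlation as the similarity measure.

    query_feat    : (d,)  feature of the query image under one classifier
    gallery_feats : (G, d) features of all enrolled gallery images under that classifier
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    scores = g @ q                    # normalized correlation with the query
    order = np.argsort(-scores)[:n]   # indices of the n highest scores
    return order, scores[order]
```

Each classifier runs this retrieval independently, so the N candidate lists generally differ.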
48 We define the compatibility function between the hidden nodes i and j as ψij(xi, xj) = exp(−|sij(xi, xj) − 1|² / 2σ²), (4) where σ is a noise parameter. [sent-123, score-0.34]
49 However, xip and xjq are from different face classifiers, and we cannot directly compare their corresponding features, fxip and fxjq. [sent-126, score-0.35]
50 To address this problem, we propose the concatenated features, which are an augmented version of fxip and fxjq. [sent-127, score-0.161]
51 The similarity between nodes i and j is measured by the normalized correlation coefficient between the concatenated features: sij(xip, xjq) = <f'xip, f'xjq> / (||f'xip|| ||f'xjq||), where f'xip and f'xjq denote the concatenated features. [sent-129, score-0.235]
52 Given a query image at node i, n gallery images that are most similar to the query image, yi, are retrieved. [sent-132, score-0.577]
53 The query-gallery compatibility function at node i is evaluated and represented by a column vector Φi. [sent-133, score-0.225]
54 The gallery-gallery compatibility function between the gallery node i and the neighbor node j is computed for each pair of the candidates and is represented by a matrix, Ψij. [sent-134, score-0.779]
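A sketch of how Φi and Ψij could be assembled for one pair of neighboring nodes, using the concatenated-feature similarity and Eq. (4); the Gaussian form of the query-gallery term Φ is an assumption here, since the mined text does not give its exact formula, and all names are illustrative:

```python
import numpy as np

def normalized_corr(a, b):
    """Normalized correlation coefficient between two feature vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def build_compatibilities(query_feat_i, cand_i, cand_i_by_j, cand_j, cand_j_by_i, sigma=0.1):
    """Build the query-gallery vector Phi_i and the gallery-gallery matrix Psi_ij.

    cand_i      : (n, d_i) node i's candidates described by classifier i (main features)
    cand_i_by_j : (n, d_j) the same candidates re-described by the neighbor classifier j
    cand_j      : (n, d_j) node j's candidates described by classifier j (main features)
    cand_j_by_i : (n, d_i) node j's candidates re-described by classifier i
    """
    n = cand_i.shape[0]

    # Query-gallery compatibility (assumed Gaussian of the similarity, mirroring Eq. (4)).
    phi_i = np.array([
        np.exp(-abs(normalized_corr(query_feat_i, cand_i[p]) - 1.0) ** 2 / (2 * sigma ** 2))
        for p in range(n)
    ])

    # Gallery-gallery compatibility via concatenated features and Eq. (4).
    psi_ij = np.zeros((n, n))
    for p in range(n):
        f_ip = np.concatenate([cand_i[p], cand_i_by_j[p]])      # [main_i | neighbor_j]
        for q in range(n):
            f_jq = np.concatenate([cand_j_by_i[q], cand_j[q]])  # [neighbor_i | main_j]
            s = normalized_corr(f_ip, f_jq)
            psi_ij[p, q] = np.exp(-abs(s - 1.0) ** 2 / (2 * sigma ** 2))

    return phi_i, psi_ij
```

The concatenation order keeps both halves in a common feature space, so the normalized correlation compares classifier-i features against classifier-i features and classifier-j features against classifier-j features.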
55 As the posterior probability p(xi | y1, ..., yN) is proportional to bi(xi) at the hidden node xi, we can use these N marginal probabilities, b1(x1), b2(x2), ..., bN(xN). [sent-156, score-0.187]
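One natural way to turn these N per-node beliefs into a single identification decision is to accumulate the belief mass that each gallery identity receives across all nodes and report the identity with the largest total; this is a hedged sketch, as the mined text does not spell out the exact decision rule:

```python
def identify(beliefs, candidate_ids):
    """Combine per-node beliefs into one identity decision.

    beliefs       : list of N arrays, beliefs[i][p] = b_i(x_i^p)
    candidate_ids : list of N arrays, candidate_ids[i][p] = gallery identity of
                    the p-th candidate retrieved at node i
    """
    votes = {}
    for b, ids in zip(beliefs, candidate_ids):
        for belief, gid in zip(b, ids):
            votes[gid] = votes.get(gid, 0.0) + float(belief)
    return max(votes, key=votes.get)  # identity with the largest accumulated belief
```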
56 The message that comes from the neighbor node i to the current node j corresponds to the column vector (Table 1). [sent-161, score-0.248]
57 Each element in the matrix (blue area) is a measure of similarity between a gallery image at one node and a gallery image at another node. [sent-169, score-0.694]
58 If the two gallery images are similar enough to result in a large term in the matrix, the corresponding term in the column vector (red area) receives more emphasis. [sent-170, score-0.334]
59 For example, suppose we would like to compare the query image and gallery image 1. [sent-171, score-0.402]
60 If gallery image 1 is somehow similar to another gallery image, say, image 2, then the idea is that we will use the similarity between image 2 and the query to update the similarity between image 1 and the query. [sent-172, score-0.788]
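This intuition can be checked on a tiny two-node, two-candidate instance with toy numbers (purely illustrative):

```python
import numpy as np

# Node 1 slightly prefers candidate B over candidate A from the query alone.
phi_1 = np.array([0.45, 0.55])            # Phi(A, y1), Phi(B, y1)
phi_2 = np.array([0.80, 0.20])            # node 2 strongly prefers its first candidate

# psi_21[p, q]: similarity-based compatibility between node 2's candidate p
# and node 1's candidate q; node 2's confident candidate matches node 1's candidate A.
psi_21 = np.array([[0.90, 0.10],
                   [0.10, 0.90]])

m_21 = psi_21.T @ phi_2                   # message m_21(x1) = sum_x2 psi_21(x2, x1) Phi(x2, y2)
b_1 = phi_1 * m_21
b_1 /= b_1.sum()
print(b_1)                                # ~[0.70, 0.30]: candidate A now wins at node 1
```

The confident, consistent evidence at node 2 is propagated through the message and overturns node 1's weak initial preference.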
61 In this protocol, the multiple classifiers are trained on the FRGC training image set, and the test sets consist of FRGC test images whose conditions are similar to those of the training images. [sent-179, score-0.271]
62 In the other protocol, as described in Table 2, we train the multiple classifiers using only the FRGC training set, and the two test sets are composed of XM2VTS (XvX) and BANCA (BvB) images, respectively, where we expect to observe performance changes under image variations not seen during training. [sent-185, score-0.244]
63 In the end, face recognition accuracy is calculated as the rank-1 point of the Cumulative Match Characteristic (CMC) curve, a measurement of one-to-many face identification [8]. [sent-187, score-0.345]
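For reference, a small sketch of how the rank-1 point of the CMC curve is obtained from a query-by-gallery score matrix (illustrative names, not the authors' evaluation code):

```python
import numpy as np

def rank1_accuracy(scores, query_ids, gallery_ids):
    """Rank-1 point of the CMC curve.

    scores      : (Q, G) similarity between each query and each gallery image
    query_ids   : (Q,) identity label of each query
    gallery_ids : (G,) identity label of each gallery image
    """
    best = np.argmax(scores, axis=1)            # top-ranked gallery image per query
    return float(np.mean(gallery_ids[best] == query_ids))
```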
64 Gabor LDA-based Classifier To evaluate the generalizability of the proposed recognition framework, we employ two different types of Gabor LDA-based classifiers: the RSG classifiers proposed in this paper and the ECG classifiers [4]. [sent-190, score-0.494]
65 In this paper, we build the ten RSG classifiers as the multiple classifiers (N = 10). [sent-198, score-0.479]
66 They finally merged the multiple classifiers and achieved the best result among the Gabor-based methods. The BGL and ECG classifiers and the ten RSG classifiers are shown under the traditional and the proposed frameworks in the (a) CvC and (b) UvC tests. [sent-201, score-0.564]
67 The average accuracy of combining different numbers of RSG classifiers under the traditional and proposed frameworks. [sent-204, score-0.247]
68 The score fusion rule is the sum rule and the test protocol is the UvC test. [sent-205, score-0.223]
69 In this paper, we employ twelve ECG classifiers (N = 12) to obtain the full performance, without considering the computational complexity. [sent-207, score-0.254]
70 As shown in Figure 6, the average recognition rates of the RSG classifiers range from 63. [sent-211, score-0.253]
71 75% (2nd classifier), but the BGL classifier and the ECG classifiers achieve an average of 72. [sent-213, score-0.272]
72 Note that all single RSG classifiers of the unified framework improve the average accuracy by approximately 12–13% with the UvC test compared to the original single RSG classifiers. [sent-218, score-0.266]
73 Moreover, all single RSG classifiers of the proposed method show better accuracy than the BGL and ECG classifiers, and similar improvement is also found in the CvC test. [sent-219, score-0.217]
74 RSG classifiers as a function of the size of the retrieval images, n, in the UvC test. [sent-220, score-0.217]
75 As shown in Figure 7, when the number of classifiers increases, the accuracy also increases in both the traditional and proposed methods. [sent-225, score-0.247]
76 In this paper, we use the sum rule [7] to combine RSG classifiers for simplicity, but more complex combination algorithms may further improve system performance. [sent-227, score-0.267]
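A sketch of this sum-rule baseline, assuming every classifier outputs a query-by-gallery score matrix; the per-classifier z-score normalization is a common implementation choice and an assumption here, since the mined text does not state how the scores are normalized:

```python
import numpy as np

def sum_rule_fusion(score_mats):
    """Combine per-classifier similarity matrices with the sum rule.

    score_mats : list of (Q, G) arrays, one per classifier.
    """
    fused = np.zeros_like(score_mats[0], dtype=float)
    for s in score_mats:
        # z-score normalization so classifiers with different score ranges contribute comparably
        fused += (s - s.mean()) / (s.std() + 1e-12)
    return fused
```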
77 The performance of the proposed method is compared with that of the Gabor-based classifiers, such as the BGL and ECG classifiers. [sent-235, score-0.265]
78 We also compare against score-fusion baselines: rule-based fusion (e.g., MIN, MAX, and Sum) [7], rank-based fusion [11], likelihood ratio (LR) based fusion [16], the Gaussian mixture model-based LR (GMLR) method [13], and the Fisher classifier (LDA) based score fusion method [18]. [sent-243, score-0.244]
79 When compared with the performances of the other Gabor-based methods such as BGL and ECG classifiers, as shown in Table 3, the twelve ECG classifiers fused by the LR method show an average of 85. [sent-254, score-0.302]
80 21% average of the ten RSG classifiers fused by the sum rule. [sent-257, score-0.286]
81 On the other hand, the proposed recognition framework works successfully for the RSG classifiers and ECG classifiers. [sent-263, score-0.275]
82 For example, using the proposed recognition framework, the recognition rate of the RSG classifiers is boosted from 82. [sent-264, score-0.313]
83 18% and the recognition rate of the ECG classifiers is boosted from 85. [sent-266, score-0.277]
84 From this result, we can conclude that the proposed method efficiently combines multiple classifiers using the classifier relationship in the known variation test. [sent-269, score-0.334]
85 CMC curves corresponding to the well-known recognition methods and the proposed framework using RSG classifiers and ECG classifiers in the (a) XvX test and (b) BvB test, respectively. [sent-275, score-0.519]
86 We compare the RSG classifiers merged by the sum rule, denoted by RSG (Sum), the twelve ECG classifiers merged by the LR method, denoted by ECG (LR), and the proposed frameworks with the RSG and ECG classifiers, respectively, in the XvX and BvB tests. [sent-277, score-0.362]
87 In detail, the RSG and ECG classifiers show similar recognition rates, approximately 82%, in the XvX test, but the ECG classifiers achieve a 7% better recognition rate than the RSG classifiers in the BvB test. [sent-280, score-0.723]
88 Note that compared to the performances of the proposed method in the known variation test protocol (i. [sent-285, score-0.162]
89 Conclusion We propose a novel face recognition framework, particularly for the one-to-many identification task, based on multiple classifiers connected by a Markov network. [sent-294, score-0.463]
90 The Markov network probabilistically models the relationships between a query and gallery images and between neighboring gallery images. [sent-295, score-0.79]
91 From the viewpoint of an observation-hidden node pair, we retrieve the most similar gallery images from the database using a query image face model. [sent-296, score-0.656]
92 The statistical dependency between the hidden nodes is calculated by the similarities between the retrieved gallery images. [sent-297, score-0.589]
93 We demonstrate the good performance of the proposed framework using the RSG classifiers and the ECG classifiers, respectively. [sent-299, score-0.287]
94 Moreover, we have confirmed the generality of the proposed method with the known variation test and unknown variation test protocols which consist of three different databases: the FRGC, XM2VTS, and BANCA databases. [sent-300, score-0.182]
95 A hybrid face recognition method using markov random fields. [sent-333, score-0.256]
96 Face recognition system using extended curvature gabor classifier bunch for low-resolution face image. [sent-341, score-0.338]
97 Face recognition system using multiple face model of hybrid fourier feature under uncontrolled illumination variation. [sent-350, score-0.201]
98 Unconstrained face recognition using mrf priors and manifold traversing. [sent-462, score-0.183]
99 Hierarchical ensemble of global and local classifiers for face recognition. [sent-484, score-0.337]
100 Fusing gabor and lbp feature set for kernel-based face recognition. [sent-490, score-0.247]
wordName wordTfidf (topN-words)
[('rsg', 0.553), ('ecg', 0.415), ('gallery', 0.334), ('classifiers', 0.217), ('bgl', 0.19), ('uvc', 0.156), ('frgc', 0.147), ('gabor', 0.127), ('bvb', 0.121), ('xvx', 0.121), ('face', 0.12), ('compatibility', 0.118), ('node', 0.107), ('hidden', 0.105), ('banca', 0.104), ('nodes', 0.092), ('markov', 0.079), ('identification', 0.069), ('fxip', 0.069), ('fxjq', 0.069), ('query', 0.068), ('fusion', 0.063), ('protocol', 0.057), ('biometric', 0.057), ('classifier', 0.055), ('network', 0.054), ('xi', 0.054), ('xjq', 0.052), ('korea', 0.051), ('cmc', 0.049), ('performances', 0.048), ('lda', 0.048), ('cvc', 0.047), ('ver', 0.047), ('lr', 0.047), ('mij', 0.046), ('neighbor', 0.045), ('ten', 0.045), ('candidates', 0.043), ('hwang', 0.042), ('protocols', 0.041), ('xip', 0.04), ('xj', 0.039), ('kittler', 0.038), ('twelve', 0.037), ('retrieved', 0.037), ('recognition', 0.036), ('marginal', 0.035), ('fxik', 0.035), ('fxp', 0.035), ('fxq', 0.035), ('localities', 0.035), ('messer', 0.035), ('mki', 0.035), ('protocoldatabaseindividualimage', 0.035), ('message', 0.034), ('relationship', 0.032), ('fyi', 0.031), ('authentication', 0.031), ('marcel', 0.031), ('verification', 0.031), ('sij', 0.03), ('variation', 0.03), ('traditional', 0.03), ('observation', 0.029), ('merged', 0.029), ('rodrigues', 0.028), ('unknown', 0.027), ('test', 0.027), ('international', 0.027), ('database', 0.027), ('mrf', 0.027), ('rule', 0.026), ('similarity', 0.026), ('characteristics', 0.026), ('frameworks', 0.026), ('iand', 0.025), ('kee', 0.024), ('unexpected', 0.024), ('boosted', 0.024), ('uncontrolled', 0.024), ('bi', 0.024), ('ij', 0.024), ('shan', 0.024), ('sum', 0.024), ('generalizability', 0.024), ('posterior', 0.023), ('controlled', 0.023), ('concatenated', 0.023), ('ranked', 0.022), ('biometrics', 0.022), ('merge', 0.022), ('framework', 0.022), ('gestures', 0.022), ('dependency', 0.021), ('hybrid', 0.021), ('bg', 0.021), ('vr', 0.021), ('tests', 0.021), ('connected', 0.021)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 261 iccv-2013-Markov Network-Based Unified Classifier for Face Identification
Author: Wonjun Hwang, Kyungshik Roh, Junmo Kim
Abstract: We propose a novel unifying framework using a Markov network to learn the relationship between multiple classifiers in face recognition. We assume that we have several complementary classifiers and assign observation nodes to the features of a query image and hidden nodes to the features of gallery images. We connect each hidden node to its corresponding observation node and to the hidden nodes of other neighboring classifiers. For each observation-hidden node pair, we collect a set of gallery candidates that are most similar to the observation instance, and the relationship between the hidden nodes is captured in terms of the similarity matrix between the collected gallery images. Posterior probabilities in the hidden nodes are computed by the belief-propagation algorithm. The novelty of the proposed framework is the method that takes into account the classifier dependency using the results of each neighboring classifier. We present extensive results on two different evaluation protocols, known and unknown image variation tests, using three different databases, which shows that the proposed framework always leads to good accuracy in face recognition.
2 0.20023799 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition
Author: Renliang Weng, Jiwen Lu, Junlin Hu, Gao Yang, Yap-Peng Tan
Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. Most of them use holistic face images to recognize people. However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. Given each gallery image and probe face patch, we first detect keypoints and extract their local features. Then, we propose a Metric Learned ExtendedRobust PointMatching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. Lastly, the similarity of two faces is converted as the distance between two feature sets. Experimental results on three public face databases are presented to show the effectiveness of the proposed approach.
3 0.17646554 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person
Author: Meng Yang, Luc Van_Gool, Lei Zhang
Abstract: Face recognition (FR) with a single training sample per person (STSPP) is a very challenging problem due to the lack of information to predict the variations in the query sample. Sparse representation based classification has shown interesting results in robust FR; however, its performance will deteriorate much for FR with STSPP. To address this issue, in this paper we learn a sparse variation dictionary from a generic training set to improve the query sample representation by STSPP. Instead of learning from the generic training set independently w.r.t. the gallery set, the proposed sparse variation dictionary learning (SVDL) method is adaptive to the gallery set by jointly learning a projection to connect the generic training set with the gallery set. The learnt sparse variation dictionary can be easily integrated into the framework of sparse representation based classification so that various variations in face images, including illumination, expression, occlusion, pose, etc., can be better handled. Experiments on the large-scale CMU Multi-PIE, FRGC and LFW databases demonstrate the promising performance of SVDL on FR with STSPP.
4 0.17168029 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition
Author: Zhiwu Huang, Xiaowei Zhao, Shiguang Shan, Ruiping Wang, Xilin Chen
Abstract: The Still-to-Video (S2V) face recognition systems typically need to match faces in low-quality videos captured under unconstrained conditions against high quality still face images, which is very challenging because of noise, image blur, lowface resolutions, varying headpose, complex lighting, and alignment difficulty. To address the problem, one solution is to select the frames of ‘best quality ’ from videos (hereinafter called quality alignment in this paper). Meanwhile, the faces in the selected frames should also be geometrically aligned to the still faces offline well-aligned in the gallery. In this paper, we discover that the interactions among the three tasks–quality alignment, geometric alignment and face recognition–can benefit from each other, thus should be performed jointly. With this in mind, we propose a Coupling Alignments with Recognition (CAR) method to tightly couple these tasks via low-rank regularized sparse representation in a unified framework. Our method makes the three tasks promote mutually by a joint optimization in an Augmented Lagrange Multiplier routine. Extensive , experiments on two challenging S2V datasets demonstrate that our method outperforms the state-of-the-art methods impressively.
5 0.15270856 305 iccv-2013-POP: Person Re-identification Post-rank Optimisation
Author: Chunxiao Liu, Chen Change Loy, Shaogang Gong, Guijin Wang
Abstract: Owing to visual ambiguities and disparities, person reidentification methods inevitably produce suboptimal ranklist, which still requires exhaustive human eyeballing to identify the correct target from hundreds of different likelycandidates. Existing re-identification studies focus on improving the ranking performance, but rarely look into the critical problem of optimising the time-consuming and error-prone post-rank visual search at the user end. In this study, we present a novel one-shot Post-rank OPtimisation (POP) method, which allows a user to quickly refine their search by either “one-shot” or a couple of sparse negative selections during a re-identification process. We conduct systematic behavioural studies to understand user’s searching behaviour and show that the proposed method allows correct re-identification to converge 2.6 times faster than the conventional exhaustive search. Importantly, through extensive evaluations we demonstrate that the method is capable of achieving significant improvement over the stateof-the-art distance metric learning based ranking models, even with just “one shot” feedback optimisation, by as much as over 30% performance improvement for rank 1reidentification on the VIPeR and i-LIDS datasets.
6 0.12691557 190 iccv-2013-Handling Occlusions with Franken-Classifiers
7 0.11853907 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
8 0.091943927 106 iccv-2013-Deep Learning Identity-Preserving Face Space
9 0.089239962 157 iccv-2013-Fast Face Detector Training Using Tailored Views
10 0.08628203 153 iccv-2013-Face Recognition Using Face Patch Networks
11 0.084846526 279 iccv-2013-Multi-stage Contextual Deep Learning for Pedestrian Detection
12 0.083218828 233 iccv-2013-Latent Task Adaptation with Large-Scale Hierarchies
13 0.074808232 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition
14 0.072568551 338 iccv-2013-Randomized Ensemble Tracking
15 0.068353213 165 iccv-2013-Find the Best Path: An Efficient and Accurate Classifier for Image Hierarchies
16 0.066594556 444 iccv-2013-Viewing Real-World Faces in 3D
17 0.065197125 44 iccv-2013-Adapting Classification Cascades to New Domains
18 0.063596196 392 iccv-2013-Similarity Metric Learning for Face Recognition
19 0.062649861 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation
20 0.061018109 237 iccv-2013-Learning Graph Matching: Oriented to Category Modeling from Cluttered Scenes
topicId topicWeight
[(0, 0.139), (1, 0.054), (2, -0.062), (3, -0.099), (4, 0.0), (5, -0.024), (6, 0.106), (7, 0.063), (8, 0.002), (9, -0.022), (10, -0.023), (11, -0.002), (12, 0.058), (13, 0.047), (14, 0.022), (15, 0.023), (16, -0.018), (17, -0.064), (18, 0.003), (19, 0.064), (20, -0.11), (21, -0.127), (22, -0.007), (23, -0.063), (24, 0.048), (25, 0.094), (26, -0.133), (27, 0.069), (28, -0.025), (29, -0.031), (30, -0.004), (31, -0.065), (32, -0.028), (33, 0.001), (34, -0.043), (35, -0.062), (36, -0.099), (37, 0.037), (38, 0.005), (39, -0.029), (40, 0.038), (41, -0.006), (42, 0.048), (43, -0.018), (44, -0.029), (45, 0.061), (46, 0.029), (47, 0.062), (48, 0.063), (49, -0.067)]
simIndex simValue paperId paperTitle
same-paper 1 0.90875518 261 iccv-2013-Markov Network-Based Unified Classifier for Face Identification
Author: Wonjun Hwang, Kyungshik Roh, Junmo Kim
Abstract: We propose a novel unifying framework using a Markov network to learn the relationship between multiple classifiers in face recognition. We assume that we have several complementary classifiers and assign observation nodes to the features of a query image and hidden nodes to the features of gallery images. We connect each hidden node to its corresponding observation node and to the hidden nodes of other neighboring classifiers. For each observation-hidden node pair, we collect a set of gallery candidates that are most similar to the observation instance, and the relationship between the hidden nodes is captured in terms of the similarity matrix between the collected gallery images. Posterior probabilities in the hidden nodes are computed by the belief-propagation algorithm. The novelty of the proposed framework is the method that takes into account the classifier dependency using the results of each neighboring classifier. We present extensive results on two different evaluation protocols, known and unknown image variation tests, using three different databases, which shows that the proposed framework always leads to good accuracy in face recognition.
2 0.78988177 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition
Author: Renliang Weng, Jiwen Lu, Junlin Hu, Gao Yang, Yap-Peng Tan
Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. Most of them use holistic face images to recognize people. However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. Given each gallery image and probe face patch, we first detect keypoints and extract their local features. Then, we propose a Metric Learned ExtendedRobust PointMatching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. Lastly, the similarity of two faces is converted as the distance between two feature sets. Experimental results on three public face databases are presented to show the effectiveness of the proposed approach.
3 0.74998492 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition
Author: Zhiwu Huang, Xiaowei Zhao, Shiguang Shan, Ruiping Wang, Xilin Chen
Abstract: The Still-to-Video (S2V) face recognition systems typically need to match faces in low-quality videos captured under unconstrained conditions against high quality still face images, which is very challenging because of noise, image blur, lowface resolutions, varying headpose, complex lighting, and alignment difficulty. To address the problem, one solution is to select the frames of ‘best quality ’ from videos (hereinafter called quality alignment in this paper). Meanwhile, the faces in the selected frames should also be geometrically aligned to the still faces offline well-aligned in the gallery. In this paper, we discover that the interactions among the three tasks–quality alignment, geometric alignment and face recognition–can benefit from each other, thus should be performed jointly. With this in mind, we propose a Coupling Alignments with Recognition (CAR) method to tightly couple these tasks via low-rank regularized sparse representation in a unified framework. Our method makes the three tasks promote mutually by a joint optimization in an Augmented Lagrange Multiplier routine. Extensive , experiments on two challenging S2V datasets demonstrate that our method outperforms the state-of-the-art methods impressively.
4 0.64684075 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition
Author: Dihong Gong, Zhifeng Li, Dahua Lin, Jianzhuang Liu, Xiaoou Tang
Abstract: Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizingfaces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, calledHidden FactorAnalysis (HFA). This methodcaptures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.
5 0.64536411 154 iccv-2013-Face Recognition via Archetype Hull Ranking
Author: Yuanjun Xiong, Wei Liu, Deli Zhao, Xiaoou Tang
Abstract: The archetype hull model is playing an important role in large-scale data analytics and mining, but rarely applied to vision problems. In this paper, we migrate such a geometric model to address face recognition and verification together through proposing a unified archetype hull ranking framework. Upon a scalable graph characterized by a compact set of archetype exemplars whose convex hull encompasses most of the training images, the proposed framework explicitly captures the relevance between any query and the stored archetypes, yielding a rank vector over the archetype hull. The archetype hull ranking is then executed on every block of face images to generate a blockwise similarity measure that is achieved by comparing two different rank vectors with respect to the same archetype hull. After integrating blockwise similarity measurements with learned importance weights, we accomplish a sensible face similarity measure which can support robust and effective face recognition and verification. We evaluate the face similarity measure in terms of experiments performed on three benchmark face databases Multi-PIE, Pubfig83, and LFW, demonstrat- ing its performance superior to the state-of-the-arts.
6 0.62087423 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person
7 0.61224639 153 iccv-2013-Face Recognition Using Face Patch Networks
8 0.60248357 106 iccv-2013-Deep Learning Identity-Preserving Face Space
9 0.59600043 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
10 0.59262198 305 iccv-2013-POP: Person Re-identification Post-rank Optimisation
11 0.58623379 267 iccv-2013-Model Recommendation with Virtual Probes for Egocentric Hand Detection
12 0.57435405 206 iccv-2013-Hybrid Deep Learning for Face Verification
13 0.55145854 157 iccv-2013-Fast Face Detector Training Using Tailored Views
14 0.52726889 84 iccv-2013-Complex 3D General Object Reconstruction from Line Drawings
15 0.52563363 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition
16 0.51317 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
17 0.48294213 313 iccv-2013-Person Re-identification by Salience Matching
18 0.43917203 391 iccv-2013-Sieving Regression Forest Votes for Facial Feature Detection in the Wild
19 0.43763313 392 iccv-2013-Similarity Metric Learning for Face Recognition
20 0.42401779 272 iccv-2013-Modifying the Memorability of Face Photographs
topicId topicWeight
[(2, 0.056), (4, 0.012), (6, 0.328), (7, 0.018), (26, 0.072), (31, 0.041), (42, 0.121), (48, 0.01), (64, 0.027), (73, 0.025), (89, 0.14), (98, 0.017)]
simIndex simValue paperId paperTitle
1 0.70591551 11 iccv-2013-A Fully Hierarchical Approach for Finding Correspondences in Non-rigid Shapes
Author: Ivan Sipiran, Benjamin Bustos
Abstract: This paper presents a hierarchical method for finding correspondences in non-rigid shapes. We propose a new representation for 3D meshes: the decomposition tree. This structure characterizes the recursive decomposition process of a mesh into regions of interest and keypoints. The internal nodes contain regions of interest (which may be recursively decomposed) and the leaf nodes contain the keypoints to be matched. We also propose a hierarchical matching algorithm that performs in a level-wise manner. The matching process is guided by the similarity between regions in high levels of the tree, until reaching the keypoints stored in the leaves. This allows us to reduce the search space of correspondences, making also the matching process efficient. We evaluate the effectiveness of our approach using the SHREC’2010 robust correspondence benchmark. In addition, we show that our results outperform the state of the art.
same-paper 2 0.6989699 261 iccv-2013-Markov Network-Based Unified Classifier for Face Identification
Author: Wonjun Hwang, Kyungshik Roh, Junmo Kim
Abstract: We propose a novel unifying framework using a Markov network to learn the relationship between multiple classifiers in face recognition. We assume that we have several complementary classifiers and assign observation nodes to the features of a query image and hidden nodes to the features of gallery images. We connect each hidden node to its corresponding observation node and to the hidden nodes of other neighboring classifiers. For each observation-hidden node pair, we collect a set of gallery candidates that are most similar to the observation instance, and the relationship between the hidden nodes is captured in terms of the similarity matrix between the collected gallery images. Posterior probabilities in the hidden nodes are computed by the belief-propagation algorithm. The novelty of the proposed framework is the method that takes into account the classifier dependency using the results of each neighboring classifier. We present extensive results on two different evaluation protocols, known and unknown image variation tests, using three different databases, which shows that the proposed framework always leads to good accuracy in face recognition.
3 0.64104688 121 iccv-2013-Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach
Author: Reyes Rios-Cabrera, Tinne Tuytelaars
Abstract: In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D/LINEMOD representation introduced recently by Hinterstoisser et al., yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, wepropose a challenging new dataset made of12 objects, for future competing methods on monocular color images.
4 0.62388563 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation
Author: Ali Elqursh, Ahmed Elgammal
Abstract: The vast majority of work on motion segmentation adopts the affine camera model due to its simplicity. Under the affine model, the motion segmentation problem becomes that of subspace separation. Due to this assumption, such methods are mainly offline and exhibit poor performance when the assumption is not satisfied. This is made evident in state-of-the-art methods that relax this assumption by using piecewise affine spaces and spectral clustering techniques to achieve better results. In this paper, we formulate the problem of motion segmentation as that of manifold separation. We then show how label propagation can be used in an online framework to achieve manifold separation. The performance of our framework is evaluated on a benchmark dataset and achieves competitive performance while being online.
5 0.58464539 256 iccv-2013-Locally Affine Sparse-to-Dense Matching for Motion and Occlusion Estimation
Author: Marius Leordeanu, Andrei Zanfir, Cristian Sminchisescu
Abstract: Estimating a dense correspondence field between successive video frames, under large displacement, is important in many visual learning and recognition tasks. We propose a novel sparse-to-dense matching method for motion field estimation and occlusion detection. As an alternative to the current coarse-to-fine approaches from the optical flow literature, we start from the higher level of sparse matching with rich appearance and geometric constraints collected over extended neighborhoods, using an occlusion aware, locally affine model. Then, we move towards the simpler, but denser classic flow field model, with an interpolation procedure that offers a natural transition between the sparse and the dense correspondence fields. We experimentally demonstrate that our appearance features and our complex geometric constraintspermit the correct motion estimation even in difficult cases of large displacements and significant appearance changes. We also propose a novel classification method for occlusion detection that works in conjunction with the sparse-to-dense matching model. We validate our approach on the newly released Sintel dataset and obtain state-of-the-art results.
6 0.54913634 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition
7 0.54344022 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples
8 0.54134196 44 iccv-2013-Adapting Classification Cascades to New Domains
9 0.54120505 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
10 0.5404886 330 iccv-2013-Proportion Priors for Image Sequence Segmentation
11 0.54025912 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
12 0.54014206 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications
13 0.5398587 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization
14 0.53983104 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
15 0.53962523 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition
16 0.53950739 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration
17 0.53914994 122 iccv-2013-Distributed Low-Rank Subspace Segmentation
18 0.53880423 392 iccv-2013-Similarity Metric Learning for Face Recognition
19 0.53876925 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps
20 0.5381189 277 iccv-2013-Multi-channel Correlation Filters