iccv iccv2013 iccv2013-356 knowledge-graph by maker-knowledge-mining

356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition


Source: pdf

Author: Renliang Weng, Jiwen Lu, Junlin Hu, Gao Yang, Yap-Peng Tan

Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. Most of them use holistic face images to recognize people. However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. Given each gallery image and probe face patch, we first detect keypoints and extract their local features. Then, we propose a Metric Learned Extended Robust Point Matching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. Lastly, the similarity of two faces is converted to the distance between two feature sets. Experimental results on three public face databases are presented to show the effectiveness of the proposed approach.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. [sent-7, score-0.301]

2 Most of them use holistic face images to recognize people. [sent-8, score-0.374]

3 However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. [sent-9, score-0.332]

4 In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. [sent-10, score-1.634]

5 Given each gallery image and probe face patch, we first detect keypoints and extract their local features. [sent-11, score-1.354]

6 Then, we propose a Metric Learned Extended Robust Point Matching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. [sent-12, score-1.01]

7 Experimental results on three public face databases are presented to show the effectiveness of the proposed approach. [sent-14, score-0.31]

8 Introduction A number of face recognition approaches have been proposed over the past two decades [22, 3, 1, 27, 18]. [sent-16, score-0.301]

9 Therefore, it is desirable to develop a practical face recognition system which is able to process partial faces directly without any alignment and is also robust to occlusions and variations of illumination and pose. [sent-20, score-0.640]

10 To make face recognition applicable in real-life scenarios … [sent-21, score-0.301]

11 (a) Three partial face patches in the red ellipse are from the LFW database [11], occluded by heads. [sent-45, score-0.474]

12 (b) Partial faces with scarf and sunglasses occlusion in the AR dataset [19]. [sent-46, score-0.298]

13 The objective of our study is to identify people from such partially occluded face images. [sent-47, score-0.319]

14 Several works have been presented to align probe facial images with training images automatically. [sent-48, score-0.62]

15 [13] developed an automatic face alignment method through minimizing a structured sparsity norm. [sent-51, score-0.299]

16 However, all these face alignment methods would fail to work if the probe image is an arbitrary face patch. [sent-52, score-1.054]

17 While these approaches can achieve encouraging recognition performance in case of occlusions, they would fail if the probe image is an arbitrary face patch. [sent-54, score-0.796]

18 In contrast to these methods, our approach processes partial faces directly without manual alignment, which is closer to practical applications. [sent-55, score-0.415]

19 [24] was the first work that used graph matching for face recognition. [sent-57, score-0.353]

20 Chui and Rangarajan [6] presented Robust Point Matching (RPM) to align two feature sets according to their geometry distribution by learning a non-affine transformation function through iterative updates. [sent-59, score-0.252]

21 The left image is the probe partial face image, and the right one is the gallery face image. [sent-71, score-1.589]

22 (b) Keypoint selection: correctly matched keypoints of these two images are connected by green lines, while two pairs of false matches are connected by red lines. [sent-72, score-0.252]

23 (c) MLERPM process: the point set of the probe image, marked as blue diamonds, is iteratively aligned to the red-marked point set of the gallery image from top left to bottom right. [sent-73, score-0.996]

24 (d) Matching result: the left one is the image warped using the transformation parameters learnt during the matching process; the right one is the gallery image. [sent-75, score-0.684]

25 Through MLERPM, the probe image is successfully aligned to the gallery image. [sent-76, score-0.914]

26 [15] utilized SRC to reconstruct the probe local feature set with gallery feature sets, and used the reconstruction error as the distance metric. [sent-79, score-1.024]

27 Based on the matching result, a point set distance metric is proposed to describe the similarity of two faces. [sent-82, score-0.228]

28 Our approach doesn’t require manual face alignment and is robust to occlusions as well as illumination changes. [sent-83, score-0.397]

29 Experimental results on three public face databases are presented to show the effectiveness of the proposed approach. [sent-84, score-0.31]

30 Proposed Approach We propose to use local features instead of holistic features for partial face representation. [sent-86, score-0.47]

31 With matched keypoint pairs at hand, we design a point set distance metric based on MLERPM to describe the difference between two faces, where the gallery face achieving the lowest matching distance is reckoned as the positive match. [sent-90, score-0.591]
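To make the decision stage concrete, a minimal sketch follows. The paper's exact point set distance is not reproduced on this page; the tf-idf vocabulary below ('hausdist', 'hausdorff') suggests a Hausdorff-style measure, so a modified Hausdorff distance over the MLERPM-aligned probe keypoints is used here as an assumed stand-in, followed by the nearest-neighbour decision rule.

```python
# A hedged sketch of the decision stage: an assumed modified Hausdorff
# distance between the MLERPM-aligned probe point set and each gallery
# point set, followed by nearest-neighbour classification.
import numpy as np

def modified_hausdorff(P, G):
    """P: (N_P, 2) aligned probe keypoints; G: (N_G, 2) gallery keypoints."""
    d = np.linalg.norm(P[:, None, :] - G[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())    # symmetric average

def identify(aligned_probe_sets, gallery_sets):
    """aligned_probe_sets[j] holds the probe points after alignment to gallery j."""
    dists = [modified_hausdorff(p, g)
             for p, g in zip(aligned_probe_sets, gallery_sets)]
    return int(np.argmin(dists))  # lowest point set distance = positive match
```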

32 The face matching process is illustrated in Figure 2. [sent-91, score-0.353]

33 Feature Extraction: Since there exist rotation, translation, scaling and even occlusions between the probe image and gallery images of the same identity, it is very difficult to normalize them to eye positions. [sent-96, score-0.972]

34 Without proper face alignment, holistic features would fail to work. [sent-97, score-0.339]

35 Normally, for a typical 128 × 128 face image, the number of SIFT features can run to hundreds. [sent-100, score-0.26]

36 To describe the texture features of these detected keypoints, we combined the strengths of the SIFT and SURF keypoint descriptors by simple concatenation. [sent-103, score-0.208]

37 The SURF keypoint descriptor was introduced as a complement to SIFT for its greater robustness against illumination variations [14]. [sent-104, score-0.214]
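As an illustration of sentences 36-37, here is a minimal Python/OpenCV sketch of the descriptor concatenation. It assumes opencv-contrib-python built with the non-free SURF module; the detector settings and any filtering used in the paper are not specified on this page.

```python
# A minimal sketch of the feature extraction step: detect keypoints, then
# describe each by concatenating SIFT (128-dim) and SURF (64-dim) descriptors.
import cv2
import numpy as np

def extract_features(gray):
    sift = cv2.SIFT_create()
    surf = cv2.xfeatures2d.SURF_create(extended=False)  # 64-dim SURF descriptors

    kps = sift.detect(gray, None)            # keypoint locations
    kps, d_sift = sift.compute(gray, kps)    # (N, 128)
    kps, d_surf = surf.compute(gray, kps)    # (N, 64)
    assert len(d_sift) == len(d_surf)        # compute() may drop keypoints

    geometry = np.float32([kp.pt for kp in kps])  # g_i: (x, y) positions
    texture = np.hstack([d_sift, d_surf])         # t_i: 192-dim descriptor
    return geometry, texture
```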

38 Keypoint Selection: As we have indicated previously, the number of keypoints in a facial image can be up to hundreds. [sent-108, score-0.236]

39 Moreover, irrelevant keypoints might hamper the point set matching process, for example by misleading it into a local minimum, especially when genuine matching pairs are few among all matched features. [sent-110, score-0.626]

40 We applied the idea of Lowe's matching scheme [17] for keypoint selection, which compares the ratio of the distance to the closest neighbour over the distance to the second-closest neighbour against a predefined threshold. [sent-112, score-0.349]

41 These coarsely matched keypoint pairs are then selected for our MLERPM for finer matching. [sent-115, score-0.223]
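A minimal sketch of this coarse selection step follows; the 0.8 ratio threshold is Lowe's original recommendation and is an assumption here, since the paper's value is not quoted on this page.

```python
# Lowe's ratio test: keep a probe feature only if its nearest gallery
# descriptor is clearly closer than the second-nearest one.
import numpy as np

def ratio_test(probe_desc, gallery_desc, ratio=0.8):
    """Return (probe_idx, gallery_idx) pairs passing the ratio test."""
    pairs = []
    for i, d in enumerate(probe_desc):
        dists = np.linalg.norm(gallery_desc - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]     # closest and second-closest neighbours
        if dists[j1] < ratio * dists[j2]:  # distinctive enough to keep
            pairs.append((i, int(j1)))
    return pairs
```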

42 Metric Learned Robust Point Matching: After feature extraction and keypoint selection, for the probe partial face image, its geometry feature set is $\{g_1^P, g_2^P, \ldots, g_{N_P}^P\}$ and its texture feature set is $\{t_1^P, t_2^P, \ldots, t_{N_P}^P\}$. [sent-118, score-1.172]

43 Here $N_P$ is the number of keypoints in the probe feature set. [sent-124, score-0.689]

44 Similarly, for the gallery image, we have the geometry feature set $\{g_1^G, g_2^G, \ldots, g_{N_G}^G\}$ and the texture feature set $\{t_1^G, t_2^G, \ldots, t_{N_G}^G\}$. [sent-125, score-0.443]

45 Likewise, not all keypoints in gallery images are ensured to be matched. [sent-133, score-0.646]

46 Hence, this point set matching is a subset point matching problem. [sent-134, score-0.268]

47 One-to-one point correspondence: this trait is obvious, as keypoints at different positions in the probe image shouldn't be matched to a single keypoint in the gallery image. [sent-135, score-1.358]

48 Non-affine transformation: the appearance of a face changes when the perspective or facial expression changes. [sent-136, score-0.316]

49 Hence we extended that framework to directly match textural features by introducing a metric-learned texture distance as a regularizing term. [sent-140, score-0.276]
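The exact form of the metric-learned texture distance is not reproduced on this page; a common choice, assumed in the sketch below, is a Mahalanobis-style distance $d_T(t^P, t^G) = (t^P - t^G)^\top W (t^P - t^G)$ with a positive semi-definite $W$ learned offline from labeled pairs.

```python
# A hedged sketch of a metric-learned texture distance, assuming a
# Mahalanobis-style form; the paper's learning objective is not quoted here.
import numpy as np

def texture_distances(T_probe, T_gallery, W):
    """Pairwise distances; T_probe: (N_P, d), T_gallery: (N_G, d), W: (d, d)."""
    diff = T_probe[:, None, :] - T_gallery[None, :, :]  # (N_P, N_G, d)
    return np.einsum('ijd,de,ije->ij', diff, W, diff)   # (N_P, N_G) matrix
```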

50 In order to match a non-smiling mouth in the probe face image to a smiling one in the gallery image, it would tilt the whole probe image to make its mouth part smile, which, however, would leave the rest of the image highly distorted. [sent-144, score-1.743]

51 $E(M,f)=\sum_{i=1}^{N_P}\sum_{j=1}^{N_G} m_{ij}\big(\|g_j^G-f(g_i^P)\|^2+\lambda_1\, d_T(t_i^P,t_j^G)\big)+\lambda_2\Psi(f)-\zeta\sum_{i=1}^{N_P}\sum_{j=1}^{N_G} m_{ij}+C\sum_{i=1}^{N_P}\sum_{j=1}^{N_G} m_{ij}\log m_{ij}$ (1), where $M$ is the correspondence matrix and $m_{ij}$ denotes the correspondence from keypoint $i$ of the probe image to keypoint $j$ of the gallery image. [sent-168, score-1.526]

52 In Eq. (1), f is the geometric non-affine transformation function and Ψ(f) calculates the energy of its non-affine portion, both of which are specified later. [sent-170, score-0.218]

53 In the above cost function, the first summation measures the total weighted cost of matching the probe keypoint set to the gallery keypoint set based on geometric and textural information. [sent-171, score-1.58]

54 The second summation penalizes the case where only few point correspondences are established, and the third summation makes the point correspondence fuzzy, that is, $m_{ij}$ can take any value between 0 and 1. [sent-172, score-0.309]

55 Parameter C controls the fuzziness of the correspondence matrix: as the value of C gradually decreases, $m_{ij}$ moves towards either 0 or 1, so that the correspondence between the two point sets becomes more definite. [sent-173, score-0.381]

56 Applying Chui's framework, we alternately update the correspondence matrix and the transformation parameters, embedded in an annealing process: Step 1. [sent-175, score-0.228]

57 Correspondence matrix update: the correspondence between probe feature point i and gallery feature point j is updated by $m_{ij}=\exp\big(-\big(\|g_j^G-f(g_i^P)\|^2+\lambda_1\, d_T(t_i^P,t_j^G)\big)/C\big)$. [sent-176, score-1.097]

58 Transformation update: the transformation takes the form $f(g_i)=A\,g_i+b+\sum_{l=1}^{k} w_l\,\exp\big(-\|g_i-f_l\|^2/\sigma^2\big)$ (4), in which $f_l$ is one of the k randomly selected anchor points from the probe keypoint set, and σ controls the influence of the anchor points: the larger σ is, the more global the transformation would be, meaning anchor points far away from point $g_i$ can have an impact on it as well. [sent-192, score-1.007]
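Assuming the Gaussian-kernel form reconstructed in Eq. (4) above, the transformation can be evaluated as below; the affine part (A, b) and the basis weights w_l are the quantities solved for in Step 2, whose solver is omitted.

```python
# Evaluating the assumed transformation of Eq. (4): an affine term plus
# Gaussian radial-basis bumps centred on the k anchor points.
import numpy as np

def warp(points, A, b, anchors, weights, sigma):
    """points: (N, 2); A: (2, 2); b: (2,); anchors: (k, 2); weights: (k, 2)."""
    affine = points @ A.T + b
    sq = ((points[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
    kernel = np.exp(-sq / sigma ** 2)  # larger sigma -> more global influence
    return affine + kernel @ weights   # add the non-affine displacement
```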

59 The error image records the absolute values of the pixel-wise difference between the gallery image and the warped image. [sent-206, score-0.525]

60 In the blue rectangle: each column indicates the status of one iteration; the first row shows the matching process of the geometric feature sets of the gallery and probe images, where blue diamonds denote probe keypoints and red crosses denote gallery keypoints. [sent-207, score-2.244]

61 Hence it would be prudent to set λ2 to a large value in the beginning and gradually decrease it during the iteration process, as it is beneficial to align the matching images with an affine transformation first before getting into detailed local warping (the non-affine transformation). [sent-212, score-0.337]

62 For notational clarity, the probe geometric feature set is grouped into one matrix X, whose ith column is $g_i^P$. [sent-213, score-0.607]

63 Similarly, the gallery geometric feature set is grouped into Y. [sent-214, score-0.554]

64 We alternate between Step 1 and Step 2 while gradually decreasing the values of C and λ2, so that the transformation parameters are gradually refined and the correspondences between the two point sets become more definite. [sent-225, score-0.316]
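Putting Steps 1 and 2 together, here is a hedged sketch of the alternating annealing loop. The decay rate, iteration counts, and the Sinkhorn-style row/column normalisation (standard in Chui and Rangarajan's RPM framework for enforcing the soft one-to-one constraint) are assumptions; fit_transformation is a hypothetical stand-in for the Step 2 solver, and the λ1-weighted texture term is folded into tex_dist.

```python
# A hedged sketch of the MLERPM annealing loop. X, Y: probe and gallery
# geometry matrices; tex_dist: (N_P, N_G) metric-learned texture distances.
import numpy as np

def mlerpm(X, Y, tex_dist, fit_transformation,
           C=1.0, lam2=10.0, decay=0.95, iters=100):
    f = lambda pts: pts  # start from the identity transformation
    for _ in range(iters):
        # Step 1: fuzzy correspondence update (cf. the exp(.) rule above)
        geo = ((f(X)[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
        M = np.exp(-(geo + tex_dist) / C)
        for _ in range(10):  # Sinkhorn-style normalisation of rows/columns
            M /= M.sum(axis=1, keepdims=True) + 1e-12
            M /= M.sum(axis=0, keepdims=True) + 1e-12
        # Step 2: refit the transformation to the current soft assignment
        f = fit_transformation(X, Y, M, lam2)
        C *= decay     # correspondences become more definite
        lam2 *= decay  # progressively allow more local (non-affine) warping
    return M, f
```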

65 Experiments: To verify the effectiveness of our partial face recognition approach, we conducted partial face recognition for arbitrary face patches on the LFW dataset [11]. [sent-232, score-1.172]

66 To comprehensively demonstrate the pros and cons of our approach, we conducted experiments on disguised and occluded partial face recognition on the AR [19] and Extended Yale B [10] datasets, respectively. [sent-233, score-0.515]

67 Data Sets. LFW Dataset: The Labeled Faces in the Wild (LFW) dataset [11] contains 13,233 labeled faces of 5,749 people, of which 1,680 people have two or more face images. [sent-236, score-0.342]

68 Second row: probe partial face images randomly generated from another image of the same subject. [sent-240, score-0.932]

69 For each subject, there are 26 face pictures taken in two different sessions (each session has 13 face images). [sent-244, score-0.564]

70 In each session, there are 3 images with different illumination conditions, 4 images with different expressions, and 6 images with different facial disguises (3 wearing sunglasses and 3 wearing a scarf). [sent-245, score-0.353]

71 Extended Yale B: There are 2,414 frontal face images of 38 identities photographed under varying controlled illumination in the Extended Yale B database. [sent-246, score-0.283]

72 For each subject, we randomly selected one image to synthetically produce a probe partial face image, while the other 9 images formed the gallery set. [sent-253, score-1.375]

73 For the gallery set, all images were normalized to 128 × 128 pixels according to the eye positions. [sent-254, score-0.466]

74 Figure 5 shows some example normalized gallery face images (the first row). [sent-255, score-0.726]

75 Note that our method is able to work on non-aligned gallery images as well. [sent-256, score-0.466]

76 Before extracting local features, we generated partial faces in a random way. [sent-257, score-0.237]

77 Some sample partial face images are shown in Figure 5 (the bottom row). [sent-262, score-0.438]

78 Second row: probe images occluded by sunglasses and scarf. [sent-266, score-0.639]

79 Sample probe images in the Extended Yale B dataset with random block occlusion are shown, with their corresponding occlusion levels listed underneath. [sent-268, score-0.671]

80 For fair comparison with existing holistic methods, all these probe images and gallery images were cropped to 128 × 128 pixels and properly aligned. [sent-271, score-1.047]

81 In our experiments, we synthesized contiguous-block-occluded images with occlusion levels ranging from 10% to 50% by superimposing a correspondingly sized unrelated image randomly on each probe image, as in Figure 7. [sent-275, score-0.571]
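As a concrete illustration of the occlusion synthesis just described, a small sketch follows; the square block shape and the numpy-only cropping of the occluder are assumptions, as the paper's exact procedure is not given on this page.

```python
# Synthesizing a contiguous-block-occluded probe: superimpose a
# correspondingly sized unrelated image at a random location.
import numpy as np

def occlude(img, occluder, occ_level, rng=None):
    """occ_level: fraction of image area to cover, e.g. 0.1 to 0.5."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    side = int(np.sqrt(occ_level * h * w))  # side length of a square block
    y = int(rng.integers(0, h - side + 1))
    x = int(rng.integers(0, w - side + 1))
    out = img.copy()
    out[y:y + side, x:x + side] = occluder[:side, :side]  # occluder must be
    return out                                            # at least side x side
```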

82 We conducted partial face recognition for arbitrary face patches on the LFW dataset. [sent-285, score-0.716]

83 The second group of compared algorithms works on either the geometric features or the textural features of the concatenated SIFT and SURF feature sets (SIFT-SURF). [sent-295, score-0.291]

84 The first one used Lowe's matching method to match the textural feature sets of gallery and probe images; the number of matched pairs was set as the similarity criterion. [sent-298, score-1.328]

85 The third method was Earth Mover's Distance (EMD) [20], which measures the minimum cost of transforming one distribution of a textural feature set into the other; we set the number of K-means clusters to 10, as this setting achieved the best recognition result. [sent-300, score-0.226]

86 This is because matching only on geometry features or only on texture features merely exploits partial information of the face image, whereas both the geometry and texture information of the feature sets were considered by ERPM and MLERPM, resulting in much more robust feature set matching. [sent-310, score-0.767]

87 Experiment 2: Partial Face Recognition under Disguise: The AR dataset was selected for our partial face recognition under disguise. [sent-316, score-0.456]

88 Only those matched keypoints in the facial area were selected for the point set distance calculation. [sent-319, score-0.36]

89 Experiment 3: Partial Face Recognition with Random Block Occlusion: The Extended Yale B dataset was selected for our partial face recognition under random block occlusion. [sent-320, score-0.456]

90 Up to a 40% occlusion level, our method performed comparably with SRC, but it degraded drastically when the occlusion percentage exceeded 40%, whereas on the AR dataset our method performed nearly perfectly even though the scarf disguise covers about 40% of the face. [sent-322, score-0.317]

91 As shown in Figure 7, when the occlusion percentage is 50%, most of the face area is occluded, making face matching extremely difficult. [sent-338, score-0.637]

92 Conclusion In this paper, we have proposed a partial face recognition method by using robust feature set matching. [sent-341, score-0.517]

93 We proposed to use local features instead of holistic features, and these local feature point sets were matched by our MLERPM approach, the outcomes of which were a point set correspondence matrix indicating matched keypoint pairs and a non-affine transformation function. [sent-342, score-0.721]

94 This transformation function could align the probe partial face to gallery face automatically. [sent-343, score-1.741]

95 Moreover, a point set distance metric was designed, based on which a simple nearest neighbor classifier can recognize input probe faces robustly even in the presence of occlusions. [sent-344, score-0.724]

96 Experimental results on three widely used face datasets were presented to show the efficacy and limitations of our proposed method, the latter of which pointed out the direction for our future work. [sent-345, score-0.26]

97 From few to many: Illumination cone models for face recognition under variable lighting and pose. [sent-413, score-0.301]

98 Labeled faces in the wild: A database for studying face recognition in unconstrained environments. [sent-421, score-0.383]

99 Discriminative multimanifold analysis for face recognition from a single training sample per person. [sent-469, score-0.301]

100 Sparse representation or collaborative representation: Which helps face recognition? [sent-545, score-0.26]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('probe', 0.471), ('gallery', 0.443), ('mlerpm', 0.312), ('face', 0.26), ('keypoints', 0.18), ('keypoint', 0.174), ('partial', 0.155), ('textural', 0.147), ('yale', 0.108), ('transformation', 0.105), ('lfw', 0.104), ('chui', 0.103), ('mij', 0.103), ('matching', 0.093), ('ar', 0.09), ('sunglasses', 0.086), ('faces', 0.082), ('gip', 0.079), ('rpm', 0.079), ('surf', 0.076), ('scarf', 0.076), ('correspondent', 0.069), ('correspondence', 0.068), ('gjg', 0.067), ('metric', 0.06), ('disguise', 0.059), ('occluded', 0.059), ('np', 0.059), ('facial', 0.056), ('holistic', 0.055), ('occlusion', 0.054), ('geometric', 0.05), ('src', 0.05), ('gq', 0.049), ('matched', 0.049), ('align', 0.047), ('lowe', 0.046), ('gradually', 0.045), ('pq', 0.045), ('erpm', 0.045), ('hausdist', 0.045), ('itmax', 0.045), ('tjg', 0.045), ('session', 0.044), ('sift', 0.044), ('warped', 0.043), ('recognition', 0.041), ('point', 0.041), ('anchor', 0.04), ('ranks', 0.04), ('illumination', 0.04), ('rangarajan', 0.039), ('jiwen', 0.039), ('pql', 0.039), ('records', 0.039), ('calculates', 0.039), ('alignment', 0.039), ('feature', 0.038), ('percent', 0.037), ('subjects', 0.037), ('recognize', 0.036), ('extended', 0.035), ('occlusions', 0.035), ('texture', 0.034), ('distance', 0.034), ('hence', 0.033), ('tps', 0.033), ('genuine', 0.033), ('sets', 0.032), ('cropped', 0.032), ('pami', 0.032), ('emd', 0.031), ('mover', 0.03), ('annealing', 0.03), ('liao', 0.03), ('geometry', 0.03), ('splines', 0.029), ('eng', 0.028), ('wearing', 0.028), ('summation', 0.028), ('speeded', 0.027), ('earth', 0.027), ('hausdorff', 0.027), ('public', 0.027), ('female', 0.026), ('match', 0.026), ('matrix', 0.025), ('male', 0.025), ('gi', 0.025), ('neighbour', 0.024), ('controls', 0.024), ('mouth', 0.024), ('would', 0.024), ('comparing', 0.024), ('grouped', 0.023), ('row', 0.023), ('robust', 0.023), ('randomly', 0.023), ('images', 0.023), ('databases', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

Author: Renliang Weng, Jiwen Lu, Junlin Hu, Gao Yang, Yap-Peng Tan

Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. Most of them use holistic face images to recognize people. However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. Given each gallery image and probe face patch, we first detect keypoints and extract their local features. Then, we propose a Metric Learned Extended Robust Point Matching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. Lastly, the similarity of two faces is converted to the distance between two feature sets. Experimental results on three public face databases are presented to show the effectiveness of the proposed approach.

2 0.34173107 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition

Author: Zhiwu Huang, Xiaowei Zhao, Shiguang Shan, Ruiping Wang, Xilin Chen

Abstract: The Still-to-Video (S2V) face recognition systems typically need to match faces in low-quality videos captured under unconstrained conditions against high-quality still face images, which is very challenging because of noise, image blur, low face resolutions, varying head pose, complex lighting, and alignment difficulty. To address the problem, one solution is to select the frames of ‘best quality’ from videos (hereinafter called quality alignment in this paper). Meanwhile, the faces in the selected frames should also be geometrically aligned to the still faces, which are well aligned offline in the gallery. In this paper, we discover that the interactions among the three tasks–quality alignment, geometric alignment and face recognition–can benefit from each other, thus should be performed jointly. With this in mind, we propose a Coupling Alignments with Recognition (CAR) method to tightly couple these tasks via low-rank regularized sparse representation in a unified framework. Our method makes the three tasks promote mutually by a joint optimization in an Augmented Lagrange Multiplier routine. Extensive experiments on two challenging S2V datasets demonstrate that our method outperforms the state-of-the-art methods impressively.

3 0.32754719 305 iccv-2013-POP: Person Re-identification Post-rank Optimisation

Author: Chunxiao Liu, Chen Change Loy, Shaogang Gong, Guijin Wang

Abstract: Owing to visual ambiguities and disparities, person re-identification methods inevitably produce a suboptimal rank list, which still requires exhaustive human eyeballing to identify the correct target from hundreds of likely candidates. Existing re-identification studies focus on improving the ranking performance, but rarely look into the critical problem of optimising the time-consuming and error-prone post-rank visual search at the user end. In this study, we present a novel one-shot Post-rank OPtimisation (POP) method, which allows a user to quickly refine their search by either “one-shot” or a couple of sparse negative selections during a re-identification process. We conduct systematic behavioural studies to understand users' searching behaviour and show that the proposed method allows correct re-identification to converge 2.6 times faster than the conventional exhaustive search. Importantly, through extensive evaluations we demonstrate that the method is capable of achieving significant improvement over the state-of-the-art distance metric learning based ranking models, even with just “one shot” feedback optimisation, by as much as over 30% performance improvement for rank-1 re-identification on the VIPeR and i-LIDS datasets.

4 0.31111553 267 iccv-2013-Model Recommendation with Virtual Probes for Egocentric Hand Detection

Author: Cheng Li, Kris M. Kitani

Abstract: Egocentric cameras can be used to benefit such tasks as analyzing fine motor skills, recognizing gestures and learning about hand-object manipulation. To enable such technology, we believe that the hands must be detected at the pixel level to gain important information about the shape of the hands and fingers. We show that the problem of pixel-wise hand detection can be effectively solved by posing the problem as a model recommendation task. As such, the goal of a recommendation system is to recommend the n-best hand detectors based on the probe set, a small amount of labeled data from the test distribution. This requirement of a probe set is a serious limitation in many applications, such as egocentric hand detection, where the test distribution may be continually changing. To address this limitation, we propose the use of virtual probes which can be automatically extracted from the test distribution. The key idea is that many features, such as the color distribution or relative performance between two detectors, can be used as a proxy to the probe set. In our experiments we show that the recommendation paradigm is well-equipped to handle complex changes in the appearance of the hands in first-person vision. In particular, we show how our system is able to generalize to new scenarios by testing our model across multiple users.

5 0.24287702 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

Author: Meng Yang, Luc Van_Gool, Lei Zhang

Abstract: Face recognition (FR) with a single training sample per person (STSPP) is a very challenging problem due to the lack of information to predict the variations in the query sample. Sparse representation based classification has shown interesting results in robust FR; however, its performance will deteriorate much for FR with STSPP. To address this issue, in this paper we learn a sparse variation dictionary from a generic training set to improve the query sample representation by STSPP. Instead of learning from the generic training set independently w.r.t. the gallery set, the proposed sparse variation dictionary learning (SVDL) method is adaptive to the gallery set by jointly learning a projection to connect the generic training set with the gallery set. The learnt sparse variation dictionary can be easily integrated into the framework of sparse representation based classification so that various variations in face images, including illumination, expression, occlusion, pose, etc., can be better handled. Experiments on the large-scale CMU Multi-PIE, FRGC and LFW databases demonstrate the promising performance of SVDL on FR with STSPP.

6 0.24188341 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

7 0.20023799 261 iccv-2013-Markov Network-Based Unified Classifier for Face Identification

8 0.16909496 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

9 0.16603072 157 iccv-2013-Fast Face Detector Training Using Tailored Views

10 0.16367085 106 iccv-2013-Deep Learning Identity-Preserving Face Space

11 0.15019898 392 iccv-2013-Similarity Metric Learning for Face Recognition

12 0.13078268 11 iccv-2013-A Fully Hierarchical Approach for Finding Correspondences in Non-rigid Shapes

13 0.12224633 444 iccv-2013-Viewing Real-World Faces in 3D

14 0.11551894 321 iccv-2013-Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

15 0.11379202 391 iccv-2013-Sieving Regression Forest Votes for Facial Feature Detection in the Wild

16 0.11036768 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

17 0.1096741 153 iccv-2013-Face Recognition Using Face Patch Networks

18 0.10863942 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection

19 0.10369006 206 iccv-2013-Hybrid Deep Learning for Face Verification

20 0.10130728 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.184), (1, 0.014), (2, -0.119), (3, -0.121), (4, -0.059), (5, -0.123), (6, 0.259), (7, 0.108), (8, 0.028), (9, 0.003), (10, -0.016), (11, 0.09), (12, 0.1), (13, 0.018), (14, -0.019), (15, 0.021), (16, -0.028), (17, -0.003), (18, 0.009), (19, 0.034), (20, -0.094), (21, -0.17), (22, -0.022), (23, -0.177), (24, 0.191), (25, 0.16), (26, -0.19), (27, 0.238), (28, -0.046), (29, -0.062), (30, 0.059), (31, -0.186), (32, -0.046), (33, -0.007), (34, 0.02), (35, -0.007), (36, -0.091), (37, -0.045), (38, -0.001), (39, 0.004), (40, 0.04), (41, 0.051), (42, 0.045), (43, -0.036), (44, -0.037), (45, 0.083), (46, 0.026), (47, 0.06), (48, 0.076), (49, -0.094)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93159586 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

Author: Renliang Weng, Jiwen Lu, Junlin Hu, Gao Yang, Yap-Peng Tan

Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. Most of them use holistic face images to recognize people. However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. Given each gallery image and probe face patch, we first detect keypoints and extract their local features. Then, we propose a Metric Learned Extended Robust Point Matching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. Lastly, the similarity of two faces is converted to the distance between two feature sets. Experimental results on three public face databases are presented to show the effectiveness of the proposed approach.

2 0.84166789 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition

Author: Zhiwu Huang, Xiaowei Zhao, Shiguang Shan, Ruiping Wang, Xilin Chen

Abstract: The Still-to-Video (S2V) face recognition systems typically need to match faces in low-quality videos captured under unconstrained conditions against high-quality still face images, which is very challenging because of noise, image blur, low face resolutions, varying head pose, complex lighting, and alignment difficulty. To address the problem, one solution is to select the frames of ‘best quality’ from videos (hereinafter called quality alignment in this paper). Meanwhile, the faces in the selected frames should also be geometrically aligned to the still faces, which are well aligned offline in the gallery. In this paper, we discover that the interactions among the three tasks–quality alignment, geometric alignment and face recognition–can benefit from each other, thus should be performed jointly. With this in mind, we propose a Coupling Alignments with Recognition (CAR) method to tightly couple these tasks via low-rank regularized sparse representation in a unified framework. Our method makes the three tasks promote mutually by a joint optimization in an Augmented Lagrange Multiplier routine. Extensive experiments on two challenging S2V datasets demonstrate that our method outperforms the state-of-the-art methods impressively.

3 0.72537118 261 iccv-2013-Markov Network-Based Unified Classifier for Face Identification

Author: Wonjun Hwang, Kyungshik Roh, Junmo Kim

Abstract: We propose a novel unifying framework using a Markov network to learn the relationship between multiple classifiers in face recognition. We assume that we have several complementary classifiers and assign observation nodes to the features of a query image and hidden nodes to the features of gallery images. We connect each hidden node to its corresponding observation node and to the hidden nodes of other neighboring classifiers. For each observation-hidden node pair, we collect a set of gallery candidates that are most similar to the observation instance, and the relationship between the hidden nodes is captured in terms of the similarity matrix between the collected gallery images. Posterior probabilities in the hidden nodes are computed by the belief-propagation algorithm. The novelty of the proposed framework is the method that takes into account the classifier dependency using the results of each neighboring classifier. We present extensive results on two different evaluation protocols, known and unknown image variation tests, using three different databases, which shows that the proposed framework always leads to good accuracy in face recognition.

4 0.67499876 154 iccv-2013-Face Recognition via Archetype Hull Ranking

Author: Yuanjun Xiong, Wei Liu, Deli Zhao, Xiaoou Tang

Abstract: The archetype hull model is playing an important role in large-scale data analytics and mining, but rarely applied to vision problems. In this paper, we migrate such a geometric model to address face recognition and verification together through proposing a unified archetype hull ranking framework. Upon a scalable graph characterized by a compact set of archetype exemplars whose convex hull encompasses most of the training images, the proposed framework explicitly captures the relevance between any query and the stored archetypes, yielding a rank vector over the archetype hull. The archetype hull ranking is then executed on every block of face images to generate a blockwise similarity measure that is achieved by comparing two different rank vectors with respect to the same archetype hull. After integrating blockwise similarity measurements with learned importance weights, we accomplish a sensible face similarity measure which can support robust and effective face recognition and verification. We evaluate the face similarity measure in terms of experiments performed on three benchmark face databases (Multi-PIE, Pubfig83, and LFW), demonstrating its performance superior to the state of the art.

5 0.67367768 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

Author: Dihong Gong, Zhifeng Li, Dahua Lin, Jianzhuang Liu, Xiaoou Tang

Abstract: Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.

6 0.65986586 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

7 0.65204597 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

8 0.63597143 106 iccv-2013-Deep Learning Identity-Preserving Face Space

9 0.62898791 267 iccv-2013-Model Recommendation with Virtual Probes for Egocentric Hand Detection

10 0.61895585 305 iccv-2013-POP: Person Re-identification Post-rank Optimisation

11 0.5400995 157 iccv-2013-Fast Face Detector Training Using Tailored Views

12 0.52832419 153 iccv-2013-Face Recognition Using Face Patch Networks

13 0.52393413 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

14 0.52344066 206 iccv-2013-Hybrid Deep Learning for Face Verification

15 0.51871252 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

16 0.51131463 392 iccv-2013-Similarity Metric Learning for Face Recognition

17 0.50547028 272 iccv-2013-Modifying the Memorability of Face Photographs

18 0.50474507 313 iccv-2013-Person Re-identification by Salience Matching

19 0.47757146 84 iccv-2013-Complex 3D General Object Reconstruction from Line Drawings

20 0.44104984 391 iccv-2013-Sieving Regression Forest Votes for Facial Feature Detection in the Wild


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.052), (7, 0.047), (26, 0.045), (31, 0.034), (34, 0.018), (42, 0.123), (62, 0.011), (64, 0.051), (73, 0.039), (78, 0.012), (83, 0.241), (89, 0.19), (95, 0.025), (98, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77971607 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

Author: Renliang Weng, Jiwen Lu, Junlin Hu, Gao Yang, Yap-Peng Tan

Abstract: Over the past two decades, a number of face recognition methods have been proposed in the literature. Most of them use holistic face images to recognize people. However, human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces. In this paper, we propose a new partial face recognition approach by using feature set matching, which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes. Given each gallery image and probe face patch, we first detect keypoints and extract their local features. Then, we propose a Metric Learned Extended Robust Point Matching (MLERPM) method to discriminatively match local feature sets of a pair of gallery and probe samples. Lastly, the similarity of two faces is converted to the distance between two feature sets. Experimental results on three public face databases are presented to show the effectiveness of the proposed approach.

2 0.71047437 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

Author: Jia Xu, Vamsi K. Ithapu, Lopamudra Mukherjee, James M. Rehg, Vikas Singh

Abstract: We study the problem of online subspace learning in the context of sequential observations involving structured perturbations. In online subspace learning, the observations are an unknown mixture of two components presented to the model sequentially: the main effect, which pertains to the subspace, and a residual/error term. If no additional requirement is imposed on the residual, it often corresponds to noise terms in the signal which were unaccounted for by the main effect. To remedy this, one may impose ‘structural’ contiguity, which has the intended effect of leveraging the secondary terms as a covariate that helps the estimation of the subspace itself, instead of merely serving as a noise residual. We show that the corresponding online estimation procedure can be written as an approximate optimization process on a Grassmannian. We propose an efficient numerical solution, GOSUS, Grassmannian Online Subspace Updates with Structured-sparsity, for this problem. GOSUS is expressive enough in modeling both homogeneous perturbations of the subspace and structural contiguities of outliers, and is solvable, after certain manipulations, via an alternating direction method of multipliers (ADMM). We evaluate the empirical performance of this algorithm on two problems of interest: online background subtraction and online multiple face tracking, and demonstrate that it achieves competitive performance with the state-of-the-art in near real time.

3 0.70858479 291 iccv-2013-No Matter Where You Are: Flexible Graph-Guided Multi-task Learning for Multi-view Head Pose Classification under Target Motion

Author: Yan Yan, Elisa Ricci, Ramanathan Subramanian, Oswald Lanz, Nicu Sebe

Abstract: We propose a novel Multi-Task Learning framework (FEGA-MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. As the target (person) moves, distortions in facial appearance owing to camera perspective and scale severely impede performance of traditional head pose classification methods. FEGA-MTL operates on a dense uniform spatial grid and learns appearance relationships across partitions as well as partition-specific appearance variations for a given head pose to build region-specific classifiers. Guided by two graphs which a-priori model appearance similarity among (i) grid partitions based on camera geometry and (ii) head pose classes, the learner efficiently clusters appearance-wise related grid partitions to derive the optimal partitioning. For pose classification, upon determining the target's position using a person tracker, the appropriate region-specific classifier is invoked. Experiments confirm that FEGA-MTL achieves state-of-the-art classification with few training data.

4 0.70794463 79 iccv-2013-Coherent Object Detection with 3D Geometric Context from a Single Image

Author: Jiyan Pan, Takeo Kanade

Abstract: Objects in a real world image cannot have arbitrary appearance, sizes and locations due to geometric constraints in 3D space. Such a 3D geometric context plays an important role in resolving visual ambiguities and achieving coherent object detection. In this paper, we develop a RANSAC-CRF framework to detect objects that are geometrically coherent in the 3D world. Different from existing methods, we propose a novel generalized RANSAC algorithm to generate global 3D geometry hypotheses from local entities such that outlier suppression and noise reduction are achieved simultaneously. In addition, we evaluate those hypotheses using a CRF which considers both the compatibility of individual objects under global 3D geometric context and the compatibility between adjacent objects under local 3D geometric context. Experiment results show that our approach compares favorably with the state of the art.

5 0.70708025 249 iccv-2013-Learning to Share Latent Tasks for Action Recognition

Author: Qiang Zhou, Gang Wang, Kui Jia, Qi Zhao

Abstract: Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion patterns are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing the discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.

6 0.70651007 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

7 0.70478553 57 iccv-2013-BOLD Features to Detect Texture-less Objects

8 0.70445246 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

9 0.7038067 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition

10 0.70372677 339 iccv-2013-Rank Minimization across Appearance and Shape for AAM Ensemble Fitting

11 0.70352393 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

12 0.70290875 157 iccv-2013-Fast Face Detector Training Using Tailored Views

13 0.70244765 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

14 0.70217317 115 iccv-2013-Direct Optimization of Frame-to-Frame Rotation

15 0.702052 65 iccv-2013-Breaking the Chain: Liberation from the Temporal Markov Assumption for Tracking Human Poses

16 0.70204413 321 iccv-2013-Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

17 0.70196706 449 iccv-2013-What Do You Do? Occupation Recognition in a Photo via Social Context

18 0.70194817 140 iccv-2013-Elastic Net Constraints for Shape Matching

19 0.70182556 349 iccv-2013-Regionlets for Generic Object Detection

20 0.70144004 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment