iccv iccv2013 iccv2013-272 knowledge-graph by maker-knowledge-mining

272 iccv-2013-Modifying the Memorability of Face Photographs


Source: pdf

Author: Aditya Khosla, Wilma A. Bainbridge, Antonio Torralba, Aude Oliva

Abstract: Contemporary life bombards us with many new images of faces every day, which poses non-trivial constraints on human memory. The vast majority of face photographs are intended to be remembered, either because of personal relevance, commercial interests or because the pictures were deliberately designed to be memorable. Can we make a portrait more memorable or more forgettable automatically? Here, we provide a method to modify the memorability of individual face photographs, while keeping the identity and other facial traits (e.g. age, attractiveness, and emotional magnitude) of the individual fixed. We show that face photographs manipulated to be more memorable (or more forgettable) are indeed more often remembered (or forgotten) in a crowd-sourcing experiment with an accuracy of 74%. Quantifying and modifying the ‘memorability’ of a face lends itself to many useful applications in computer vision and graphics, such as mnemonic aids for learning, photo editing applications for social networks and tools for designing memorable advertisements.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 The vast majority of face photographs are intended to be remembered, either because of personal relevance, commercial interests or because the pictures were deliberately designed to be memorable. [sent-5, score-0.143]

2 Here, we provide a method to modify the memorability of individual face photographs, while keeping the identity and other facial traits (e.g. age, attractiveness, and emotional magnitude) of the individual fixed. [sent-7, score-1.257]

3 We show that face photographs manipulated to be more memorable (or more forgettable) are indeed more often remembered (or forgotten) in a crowd-sourcing experiment with an accuracy of 74%. [sent-10, score-0.296]

4 Quantifying and modifying the ‘memorability’ of a face lends itself to many useful applications in computer vision and graphics, such as mnemonic aids for learning, photo editing applications for social networks and tools for designing memorable advertisements. [sent-11, score-0.311]

5 In fact, we automatically tag faces with personality, social, and emotional traits within a single glance: according to [28], an emotionally neutral face is judged in the instance it is seen, on traits such as level of attractiveness, likeability, and aggressiveness. [sent-19, score-0.282]

6 Face memorability is in fact a critical factor that dictates many of our social interactions. (Figure 1: Examples of modifying the memorability of faces while keeping identity and other attributes fixed.) [sent-22, score-1.069]

7 Despite subtle changes, there is a significant impact on the memorability of the modified images. [sent-23, score-0.984]

8 In this work, we show that it is indeed possible to change the memorability of a face photograph. [sent-28, score-1.043]

9 Figure 1 shows some photographs manipulated using our method (one individual per row) such that each face is more or less memorable than the original photograph, while maintaining the identity, gender, emotions and other traits of the person. [sent-29, score-0.304]

10 However, despite these subtle changes, when testing people’s visual memory of faces, the modification is successful: after glancing at hundreds of faces, observers remember better seeing the faces warped towards memorability, than the ones warped away from it. [sent-31, score-0.219]

11 Another line of work has discussed the use of geometric face space models of face distinctiveness to test memorability [4]. [sent-34, score-1.197]

12 Importantly, recent research has found that memorability is a trait intrinsic to images, regardless of the components that make up memorability [2, 12, 13]. [sent-35, score-1.884]

13 To overcome the complex combination of factors that determine the memorability of a face, we propose a data-driven approach to modify face memorability. [sent-37, score-1.114]

14 In our method, we combine the representational power of features based on Active Appearance Models (AAMs) with the predictive power of global features such as Histograms of Oriented Gradients (HOG) [6], to achieve desired effects on face memorability. [sent-38, score-0.194]
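
As a rough illustration of the global-feature side of this combination, the snippet below extracts a dense HOG descriptor from an aligned face crop using scikit-image; the crop resolution and the cell/block parameters are placeholders chosen for illustration, not values taken from the paper.

    # Sketch: dense HOG descriptor for an aligned face crop (illustrative parameters).
    from skimage import color, io
    from skimage.feature import hog
    from skimage.transform import resize

    def hog_descriptor(image_path, size=(128, 128)):
        img = io.imread(image_path)
        if img.ndim == 3:                      # convert colour crops to grayscale
            img = color.rgb2gray(img)
        img = resize(img, size, anti_aliasing=True)
        return hog(img, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys', feature_vector=True)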

15 Our experiments show that our method can accurately modify the memorability of faces with an accuracy of 74%. [sent-39, score-1.087]

16 Furthermore, image memorability and the objects and regions that make a picture more or less memorable can be estimated using state of the art computer vision approaches [12, 13, 14] . [sent-42, score-1.047]

17 While predicting image memorability lends itself to a wide variety of applications, no work so far has attempted to automatically predict and modify the memorability of individual face photographs. [sent-43, score-2.096]

18 Face modification: The major contribution of this work is modifying faces to make them more or less memorable. [sent-44, score-0.142]

19 There has been significant work in modifying faces along other axes or attributes, such as gender [15], age [17, 23], facial expressions [1] and attractiveness [18]. [sent-45, score-0.33]

20 Face caricatures: Work in computer vision and psychology has also looked at face caricatures [3, 21], where the distinctive (i.e. unusual) features of a face are exaggerated. [sent-48, score-0.147]

21 The distinctiveness of a face is known to affect its later recognition in humans [4], so increasing the memorability of a face may caricaturize it to some degree. [sent-51, score-1.197]

22 However, unlike face caricature work, the current study aims to maintain the realism of the faces, by preserving face identity. [sent-52, score-0.232]

23 Recent memorability work finds that distinctiveness is not the sole predictor of face memorability [2], so the algorithm presented in this paper is likely to change the faces in more subtle ways than simply enlarging distinctive physical traits. [sent-53, score-2.13]

24 Thus, in this section, we explore various features for predicting face memorability and propose a robust memorability metric to significantly improve face memorability prediction. [sent-56, score-3.052]

25 We also note that the task of automatically predicting the memorability of faces using computer vision features has not been explored in prior works. [sent-57, score-1.048]

26 In Sec. 3.1, we describe the dataset used in our experiments and the method used to measure memorability scores. [sent-60, score-0.934]

27 In Sec. 3.2, we describe our robust memorability metric that accounts for false alarms, leading to significantly improved prediction performance (Sec. 3.3). [sent-63, score-1.042]

28 Based on this, Bainbridge et al. [2] investigated two memorability scores. [sent-78, score-0.964]

29 Rather than being memorable (with high correct detections), these faces are in fact “familiar” [26] - people are more likely to report having seen them, leading to both correct detections and false alarms. [sent-96, score-0.227]

30 To account for this effect, we propose a slight modification to the method of computing the memorability score. [sent-97, score-1.0]

31 Thus, the new memorability score can be computed as (H − F)/N, unlike the plain hit rate H/N as done in [12] and [2]. [sent-99, score-0.956]
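
A minimal sketch of the two scores under this reading, where H is the number of hits, F the number of false alarms and N the number of participants shown the image; the variable names and the final rescaling step are illustrative rather than taken verbatim from the paper.

    # Hit-rate score used in [12] and [2] vs. the false-alarm-corrected score above.
    def hit_rate_score(H, N):
        return H / N

    def corrected_score(H, F, N):
        return (H - F) / N            # can be negative when false alarms exceed hits

    def rescale_to_unit_interval(score):
        return (score + 1.0) / 2.0    # one simple way to map [-1, 1] back onto [0, 1]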

32 The negative memorability scores can be easily adjusted to [0, 1]. [sent-101, score-0.964]

33 To show that our metric is robust, we apply it to both the face [2] and scene memorability [12] datasets. [sent-105, score-1.043]

34 By using our new metric, we have effectively decreased noise in the prediction labels (memorability scores) caused by inflated memorability scores of familiar images. [sent-111, score-1.01]

35 We note that the performance improvement is not as large in scenes because the human consistency of false alarms and the rate of false alarms is significantly lower, and effects of familiarity may function differently. [sent-113, score-0.203]

36 Figure 2: Attribute and landmark annotation: we collected facial landmarks of key geometric points on the face and 19 demographic and facial attributes for each image in the 10k US Adult Faces Database. [sent-114, score-0.302]

37 We use our proposed memorability score that takes false alarms into consideration for the remaining experiments in this paper. [sent-115, score-1.031]

38 2) for the prediction of memorability and other attributes (Sec. [sent-126, score-1.042]

39 Note that since we aim to modify faces instead of detect keypoints, we assume that landmark annotation is available at both train and test times. [sent-137, score-0.185]

40 To collect the facial attributes, we conducted a separate AMT survey similar to [16], where each of the 2222 face photographs was annotated by twelve different workers on 19 demographic and facial attributes of relevance for face memorability and face modification. [sent-140, score-1.545]

41 We collected a variety of attributes including demographics such as gender, race and age, physical attributes such as attractiveness, facial hair and make up, and social attributes such as emotional magnitude and friendliness. [sent-141, score-0.374]

42 These attributes are required when modifying a face so we can attempt to keep them constant or modify them jointly with memorability, as required by the user. [sent-142, score-0.315]

43 Setup. Dataset: In our experiments, we use the 10k US Adult Faces Database [2] that consists of 2222 face photographs annotated with memorability scores. [sent-148, score-1.077]

44 Table 2 summarizes the prediction performance of face memorability and other attributes when using various features. [sent-168, score-1.151]

45 This implies that it is essential to use these features in our face modification algorithm to robustly predict memorability after making modifications to a face. [sent-171, score-1.167]

46 2, shape is used in our algorithm to parametrize faces so it essentially has zero cost of extraction for modified faces. [sent-175, score-0.167]

47 Similar to memorability prediction, we find that dense global features tend to outperform shape features for most attributes. [sent-176, score-1.021]

48 For real-valued attributes and memorability, we report Spearman’s rank correlation (ρ), while for discrete valued attributes such as ‘male’, we report classification accuracy. [sent-178, score-0.15]
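
For concreteness, a small sketch of these two evaluation measures using SciPy and scikit-learn; the function and variable names are mine, not the paper's.

    # Spearman's rank correlation for real-valued targets, accuracy for discrete ones.
    from scipy.stats import spearmanr
    from sklearn.metrics import accuracy_score

    def rank_correlation(predicted_scores, true_scores):
        rho, _pvalue = spearmanr(predicted_scores, true_scores)
        return rho

    def classification_accuracy(predicted_labels, true_labels):
        return accuracy_score(true_labels, predicted_labels)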

49 This might suggest why, unlike our method, existing methods [17, 18] typically use landmark-based features instead of dense global features for the modification of facial attributes. [sent-182, score-0.196]

50 Modifying Face Memorability: In order to modify a face photograph, we must first define an expressive yet low-dimensional representation of a face. [sent-184, score-0.18]

51 We need to parametrize a face such that we can synthesize new, realistic-looking faces. [sent-185, score-0.14]

52 While the above parametrization is extremely powerful and allows us to modify a given face along various dimensions, we require a method to evaluate the modifications in order to make predictable changes to a face. [sent-192, score-0.245]

53 Our objective is to modify the memorability score of a face, while preserving the identity and other attributes such as age, gender, emotions, etc. of the individual. [sent-193, score-1.155]

54 Specifically, our cost function consists of three terms: (1) the cost of modifying the identity of the person, (2) the cost of not achieving the desired memorability score, and (3) the cost of modifying other attributes. [sent-197, score-1.203]

55 By minimizing this cost function, we can achieve the desired effect on the memorability of a face photograph. [sent-198, score-1.089]

56 3, it is crucial to use dense global features when predicting face memorability. [sent-202, score-0.165]

57 3) in the form of AAMs is a common method for representing faces for modification because it provides an expressive and low-dimensional feature space that is reversible, i.e. a face image can be synthesized back from its parameters. [sent-211, score-0.148]

58 As there could be components of appearance outside the face region, such as hair, that we would like to be able to modify, we use the entire image instead of just the face pixels (as is typically done). [sent-234, score-0.218]
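
A minimal sketch of such a reversible, PCA-based parametrization in the spirit of an AAM, assuming landmark shapes and warp-normalized appearances have already been extracted as flat arrays; the component counts and names are illustrative only.

    # x = [x_s, x_a]: low-dimensional shape and appearance coefficients of one face.
    import numpy as np
    from sklearn.decomposition import PCA

    N_SHAPE, N_APP = 20, 50
    shape_pca = PCA(n_components=N_SHAPE)
    app_pca = PCA(n_components=N_APP)

    def fit_parametrization(shapes, appearances):
        """shapes: (n_faces, 2 * n_landmarks); appearances: (n_faces, n_pixels)."""
        x_s = shape_pca.fit_transform(shapes)
        x_a = app_pca.fit_transform(appearances)
        return np.hstack([x_s, x_a])

    def reconstruct(x):
        """Invert the parametrization: recover shape and appearance from one vector x."""
        x_s = x[:N_SHAPE].reshape(1, -1)
        x_a = x[N_SHAPE:].reshape(1, -1)
        return shape_pca.inverse_transform(x_s)[0], app_pca.inverse_transform(x_a)[0]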

59 We denote by F the set of global features (e.g. HOG [6] or SIFT [19]) and by A the set of facial attributes (e.g. age, gender). [sent-242, score-0.153]

60 Then we define mi (x) as a function to predict the memorability score of an image represented by PCA coefficients x computed using feature i ∈ F. [sent-245, score-1.002]
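
A sketch of what such a predictor might look like with one support vector regressor per feature type; the paper does not spell out the regressor settings here, so the kernel and hyperparameters below are placeholders.

    # One memorability predictor m_i per feature i in F (e.g. HOG, SIFT, shape).
    from sklearn.svm import SVR

    def train_predictors(pca_coeffs_by_feature, memorability_scores):
        """pca_coeffs_by_feature: dict mapping feature name -> (n_images, n_components) array."""
        predictors = {}
        for name, X in pca_coeffs_by_feature.items():
            m_i = SVR(kernel='rbf', C=1.0)
            m_i.fit(X, memorability_scores)
            predictors[name] = m_i    # m_i.predict(x) estimates memorability from feature i
        return predictors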

61 Now, given an image that we want to modify, our goal is to synthesize a new image I that has a memorability score of M (specified by the user) and preserves the identity and other facial attributes of the original image Iˆ. [sent-255, score-0.147]

62 Since the performance of different features on memorability prediction varies significantly (Sec. [sent-262, score-0.981]

63 Overall, this function penalizes the memorability score of the new image x if it does not match the desired memorability score, M. [sent-265, score-1.91]

64 Additionally, a user could easily modify the relative importance of different attributes in the above cost function. [sent-268, score-0.172]

65 Overall, Cid and Cattr encourage the face to remain the same as the original, while Cmem encourages the face to be modified to have the desired memorability score of M. [sent-269, score-0.218]
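
Putting the pieces together, a schematic version of this three-term objective is sketched below. The squared penalties, the uniform averaging over feature-specific predictors and the weights w_id, w_mem, w_attr are illustrative choices rather than the paper's exact formulation; mem_predictors and attr_predictors are assumed to be callables mapping face parameters x to predicted scores.

    # C(x) = w_id*C_id + w_mem*C_mem + w_attr*C_attr, minimized over face parameters x.
    import numpy as np
    from scipy.optimize import minimize

    def total_cost(x, x_orig, M, mem_predictors, attr_predictors,
                   w_id=1.0, w_mem=1.0, w_attr=1.0):
        C_id = np.sum((x - x_orig) ** 2)                             # stay close to the original face
        C_mem = np.mean([(m(x) - M) ** 2 for m in mem_predictors])   # reach the desired memorability M
        C_attr = np.sum([(a(x) - a(x_orig)) ** 2 for a in attr_predictors])  # hold other attributes fixed
        return w_id * C_id + w_mem * C_mem + w_attr * C_attr

    def modify_face(x_orig, M, mem_predictors, attr_predictors):
        result = minimize(total_cost, x_orig,
                          args=(x_orig, M, mem_predictors, attr_predictors))
        return result.x               # parameters of the modified face

The optimized parameters would then be mapped back to an image (e.g. via the reversible parametrization sketched earlier), which is where the invertibility of the face representation matters.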

66 Note that the dense global features play a significant role in accurate memorability prediction (Sec. [sent-283, score-1.005]

67 Experiments In this section, we describe the experimental evaluation of our memorability modification algorithm. [sent-293, score-1.0]

68 Setup: Our goal is to evaluate whether our algorithm is able to modify the memorability of faces in a predictable way. [sent-304, score-1.105]

69 Then we compare the memorability scores of the modified images; if the mean memorability of the set of images whose memorability was increased is higher than the decreased set, we can conclude that our algorithm is accurately modifying memorability. [sent-306, score-2.916]
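
A small sketch of this check, given the memorability scores measured in the crowd-sourced memory games for the increased and decreased version of each target image; the array names are mine.

    # Did the intended change in memorability actually happen?
    import numpy as np

    def evaluate_modifications(scores_increase, scores_decrease):
        scores_increase = np.asarray(scores_increase, dtype=float)
        scores_decrease = np.asarray(scores_decrease, dtype=float)
        mean_gap = scores_increase.mean() - scores_decrease.mean()
        success_rate = np.mean(scores_increase - scores_decrease > 0)  # paper reports ~74% (chance: 50%)
        return mean_gap, success_rate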

70 The target images were modified to have a memorability score that differs by 0. [sent-316, score-0.993]

71 Fig. 3 summarizes the quantitative results from the memorability games described in Sec. [sent-324, score-0.934]

72 In Fig. 3(a), we show the overall memorability scores of all target images after the two types of modifications (i.e. increase and decrease). [sent-328, score-1.009]

73 We observe that the mean memorability score of the ‘memorability increase’ images is significantly higher than that of the ‘memorability decrease’ images. [sent-331, score-0.956]

74 Fig. 3(b) shows the difference in memorability scores of individual images; for a given image, we subtract the observed memorability of the version modified to have lower memorability from that of the version modified to have higher memorability. [sent-334, score-2.88]

75 We find that the expected change in memorability (> 0) occurs in about 74% of the images (chance is 50%). [sent-335, score-0.934]

76 This is a fairly high value given our limited understanding of face memorability and the factors affecting it. [sent-336, score-1.043]

77 We also observe that the increase in memorability scores is much larger in magnitude than the decrease. [sent-337, score-0.964]

78 Fig. 5 shows qualitative results of modifying images to have higher and lower memorability, together with the memorability scores obtained from our experiments. [sent-339, score-1.024]

79 While we observe that the more memorable faces tend to be more ‘interesting’, there is no single modification axis such as distinctiveness, age, etc., that leads to more or less memorable faces. [sent-340, score-0.368]

80 Essentially, our data-driven approach is effectively able to identify the subtle elements of a face that affect its memorability and apply those effects to novel faces. [sent-341, score-1.091]

81 Analysis: To investigate the contribution of shape and appearance features to face memorability, we conduct a second AMT study similar to the one described in Sec. [sent-344, score-0.14]

82 In addition, the changes in memorability scores were not as significant in this case as compared to the original setting. [sent-350, score-0.964]

83 This shows that a combination of shape and appearance features is important for modifying memorability; however, it is interesting to note that despite the limited degrees of freedom, our algorithm achieved a reasonable modification accuracy. [sent-351, score-0.157]

84 We find that having more clusters allows us to have better reconstructions without significant sacrifice in memorability prediction performance. [sent-356, score-0.967]

85 Lastly, since changes in memorability lead to unintuitive modifications to faces, in Fig. [sent-358, score-0.966]

86 In Fig. 6, we apply our algorithm to modify other attributes whose effects are better understood. [sent-359, score-0.168]

87 Figure 3: Quantitative results: (a) memorability scores of all images in the increase/decrease experimental settings, and (b) change in memorability scores of individual images (x-axis: sorted image index). [sent-366, score-1.958]

88 Figure 4: Analysis: (a) reconstruction error and (b) memorability prediction performance as we change the number of clusters in the AAM (x-axis: number of clusters). [sent-370, score-1.0]

89 For instance, for animated films, movies, or video games, one could imagine animators creating cartoon characters with different levels of memorability [10] or make-up artists making any actor a highly memorable protagonist surrounded by forgettable extras. [sent-374, score-1.063]

90 Importantly, the current results show that memorability is a trait that can be manipulated like a facial emotion, changing the whole face in subtle ways to make it look more distinctive and interesting. [sent-375, score-1.181]

91 These memorability transformations are subtle, like an imperceptible “memory face lift.” [sent-376, score-1.043]

92 These modified faces are either better remembered or forgotten after a glance, depending on our manipulation. [sent-377, score-0.168]

93 Examples of the modification are shown together with memorability scores from human experiments. [sent-391, score-1.03]

94 Arrow direction indicates which face is expected to have higher or lower memorability of the two while numbers indicate the actual memorability scores. [sent-392, score-1.977]

95 Figure 6: Modifying other attributes: we increase/decrease other attributes such as age, attractiveness and friendliness (panels: ↓ age / original / ↑ age, ↓ attractive / original / ↑ attractive, ↓ friendly / original / ↑ friendly). [sent-393, score-0.138]

96 Weight, sex, and facial expressions: On the manipulation of attributes in generative 3D face models. [sent-398, score-0.262]

97 Formal models of familiarity and memorability in face recognition. [sent-418, score-1.074]

98 The use of facial motion and facial form during the processing of identity. [sent-500, score-0.156]

99 Three-dimensional caricatures of human heads: distinctiveness and the perception of facial age. [sent-544, score-0.161]

100 A unified account of the effects of distinctiveness, inversion, and race in face recognition. [sent-561, score-0.144]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('memorability', 0.934), ('face', 0.109), ('memorable', 0.101), ('faces', 0.082), ('facial', 0.078), ('attributes', 0.075), ('modify', 0.071), ('modification', 0.066), ('modifying', 0.06), ('attractiveness', 0.05), ('aams', 0.049), ('distinctiveness', 0.045), ('alarms', 0.044), ('caricatures', 0.038), ('cattr', 0.038), ('cid', 0.038), ('cmem', 0.038), ('adult', 0.035), ('emotional', 0.035), ('photographs', 0.034), ('remembered', 0.034), ('isola', 0.034), ('prediction', 0.033), ('modifications', 0.032), ('landmark', 0.032), ('familiarity', 0.031), ('false', 0.031), ('scores', 0.03), ('gender', 0.03), ('age', 0.03), ('bainbridge', 0.028), ('forgettable', 0.028), ('forgotten', 0.028), ('remember', 0.028), ('traits', 0.028), ('cost', 0.026), ('subtle', 0.026), ('identity', 0.025), ('modified', 0.024), ('dense', 0.024), ('pca', 0.024), ('social', 0.023), ('hyperparameters', 0.023), ('effects', 0.022), ('friendly', 0.022), ('score', 0.022), ('landmarks', 0.021), ('xa', 0.02), ('desired', 0.02), ('spearman', 0.019), ('portrait', 0.019), ('demographic', 0.019), ('filler', 0.019), ('splits', 0.018), ('predictable', 0.018), ('amt', 0.018), ('emotion', 0.018), ('manipulated', 0.018), ('parametrize', 0.018), ('coefficients', 0.018), ('predicting', 0.018), ('svr', 0.018), ('lends', 0.018), ('tend', 0.018), ('xs', 0.017), ('hit', 0.017), ('candid', 0.017), ('memory', 0.017), ('shape', 0.017), ('attractive', 0.017), ('photograph', 0.017), ('mi', 0.016), ('supplemental', 0.016), ('participants', 0.016), ('aam', 0.016), ('attribute', 0.016), ('trait', 0.016), ('principal', 0.015), ('nh', 0.015), ('nf', 0.015), ('torralba', 0.015), ('parametrization', 0.015), ('warping', 0.015), ('representational', 0.015), ('hog', 0.014), ('features', 0.014), ('etc', 0.014), ('preserving', 0.014), ('emotions', 0.014), ('target', 0.013), ('sift', 0.013), ('hope', 0.013), ('people', 0.013), ('khosla', 0.013), ('synthesize', 0.013), ('race', 0.013), ('familiar', 0.013), ('predict', 0.012), ('keeping', 0.012), ('picture', 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999958 272 iccv-2013-Modifying the Memorability of Face Photographs

Author: Aditya Khosla, Wilma A. Bainbridge, Antonio Torralba, Aude Oliva

Abstract: Contemporary life bombards us with many new images of faces every day, which poses non-trivial constraints on human memory. The vast majority of face photographs are intended to be remembered, either because of personal relevance, commercial interests or because the pictures were deliberately designed to be memorable. Can we make a portrait more memorable or more forgettable automatically? Here, we provide a method to modify the memorability of individual face photographs, while keeping the identity and other facial traits (e.g. age, attractiveness, and emotional magnitude) of the individual fixed. We show that face photographs manipulated to be more memorable (or more forgettable) are indeed more often remembered (or forgotten) in a crowd-sourcing experiment with an accuracy of 74%. Quantifying and modifying the ‘memorability’ of a face lends itself to many useful applications in computer vision and graphics, such as mnemonic aids for learning, photo editing applications for social networks and tools for designing memorable advertisements.

2 0.20463938 416 iccv-2013-The Interestingness of Images

Author: Michael Gygli, Helmut Grabner, Hayko Riemenschneider, Fabian Nater, Luc Van_Gool

Abstract: We investigate human interest in photos. Based on our own and others’ psychological experiments, we identify various cues for “interestingness”, namely aesthetics, unusualness and general preferences. For the ranking of retrieved images, interestingness is more appropriate than cues proposed earlier. Interestingness is, for example, correlated with what people believe they will remember. This is opposed to actual memorability, which is uncorrelated to both of them. We introduce a set of features computationally capturing the three main aspects of visual interestingness that we propose and build an interestingness predictor from them. Its performance is shown on three datasets with varying context, reflecting diverse levels of prior knowledge of the viewers.

3 0.11182845 157 iccv-2013-Fast Face Detector Training Using Tailored Views

Author: Kristina Scherbaum, James Petterson, Rogerio S. Feris, Volker Blanz, Hans-Peter Seidel

Abstract: Face detection is an important task in computer vision and often serves as the first step for a variety of applications. State-of-the-art approaches use efficient learning algorithms and train on large amounts of manually labeled imagery. Acquiring appropriate training images, however, is very time-consuming and does not guarantee that the collected training data is representative in terms of data variability. Moreover, available data sets are often acquired under controlled settings, restricting, for example, scene illumination or 3D head pose to a narrow range. This paper takes a look into the automated generation of adaptive training samples from a 3D morphable face model. Using statistical insights, the tailored training data guarantees full data variability and is enriched by arbitrary facial attributes such as age or body weight. Moreover, it can automatically adapt to environmental constraints, such as illumination or viewing angle of recorded video footage from surveillance cameras. We use the tailored imagery to train a new many-core implementation of Viola-Jones’ AdaBoost object detection framework. The new implementation is not only faster but also enables the use of multiple feature channels such as color features at training time. In our experiments we trained seven view-dependent face detectors and evaluate these on the Face Detection Data Set and Benchmark (FDDB). Our experiments show that the use of tailored training imagery outperforms state-of-the-art approaches on this challenging dataset.

4 0.099444799 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

Author: Yizhe Zhang, Ming Shao, Edward K. Wong, Yun Fu

Abstract: One of the most challenging tasks in face recognition is to identify people with varied poses. Namely, the test faces have significantly different poses compared with the registered faces. In this paper, we propose a high-level feature learning scheme to extract pose-invariant identity features for face recognition. First, we build a single-hidden-layer neural network with a sparse constraint, to extract pose-invariant features in a supervised fashion. Second, we further enhance the discriminative capability of the proposed feature by using multiple random faces as the target values for multiple encoders. By enforcing the target values to be unique for input faces over different poses, the learned high-level feature that is represented by the neurons in the hidden layer is pose free and only relevant to the identity information. Finally, we conduct face identification on CMU MultiPIE, and verification on Labeled Faces in the Wild (LFW) databases, where identification rank-1 accuracy and face verification accuracy with ROC curve are reported. These experiments demonstrate that our model is superior to other state-of-the-art approaches on handling pose variations.

5 0.089393698 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection

Author: Xiaowei Zhao, Shiguang Shan, Xiujuan Chai, Xilin Chen

Abstract: In this paper, we propose a novel cascaded face shape space pruning algorithm for robust facial landmark detection. Through progressively excluding the incorrect candidate shapes, our algorithm can accurately and efficiently achieve the globally optimal shape configuration. Specifically, individual landmark detectors are firstly applied to eliminate wrong candidates for each landmark. Then, the candidate shape space is further pruned by jointly removing incorrect shape configurations. To achieve this purpose, a discriminative structure classifier is designed to assess the candidate shape configurations. Based on the learned discriminative structure classifier, an efficient shape space pruning strategy is proposed to quickly reject most incorrect candidate shapes while preserve the true shape. The proposed algorithm is carefully evaluated on a large set of real world face images. In addition, comparison results on the publicly available BioID and LFW face databases demonstrate that our algorithm outperforms some state-of-the-art algorithms.

6 0.071051948 219 iccv-2013-Internet Based Morphable Model

7 0.071027771 36 iccv-2013-Accurate and Robust 3D Facial Capture Using a Single RGBD Camera

8 0.068575539 321 iccv-2013-Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

9 0.068423748 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

10 0.068195023 444 iccv-2013-Viewing Real-World Faces in 3D

11 0.064284429 339 iccv-2013-Rank Minimization across Appearance and Shape for AAM Ensemble Fitting

12 0.06117719 391 iccv-2013-Sieving Regression Forest Votes for Facial Feature Detection in the Wild

13 0.05959814 302 iccv-2013-Optimization Problems for Fast AAM Fitting in-the-Wild

14 0.057647642 97 iccv-2013-Coupling Alignments with Recognition for Still-to-Video Face Recognition

15 0.055463839 52 iccv-2013-Attribute Adaptation for Personalized Image Search

16 0.053008914 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

17 0.052035034 392 iccv-2013-Similarity Metric Learning for Face Recognition

18 0.051714823 356 iccv-2013-Robust Feature Set Matching for Partial Face Recognition

19 0.049504235 31 iccv-2013-A Unified Probabilistic Approach Modeling Relationships between Attributes and Objects

20 0.048332993 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.09), (1, 0.027), (2, -0.044), (3, -0.079), (4, 0.016), (5, -0.066), (6, 0.11), (7, 0.012), (8, 0.038), (9, 0.027), (10, -0.021), (11, 0.045), (12, 0.037), (13, 0.019), (14, -0.037), (15, 0.015), (16, 0.005), (17, 0.017), (18, -0.021), (19, -0.023), (20, -0.016), (21, -0.013), (22, 0.009), (23, 0.027), (24, 0.004), (25, -0.009), (26, 0.004), (27, -0.029), (28, -0.006), (29, 0.008), (30, -0.011), (31, -0.012), (32, -0.007), (33, -0.02), (34, 0.007), (35, -0.004), (36, 0.043), (37, -0.055), (38, -0.031), (39, 0.026), (40, 0.012), (41, 0.009), (42, -0.037), (43, 0.063), (44, -0.019), (45, -0.017), (46, 0.008), (47, -0.038), (48, -0.085), (49, 0.006)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91693074 272 iccv-2013-Modifying the Memorability of Face Photographs

Author: Aditya Khosla, Wilma A. Bainbridge, Antonio Torralba, Aude Oliva

Abstract: Contemporary life bombards us with many new images of faces every day, which poses non-trivial constraints on human memory. The vast majority of face photographs are intended to be remembered, either because of personal relevance, commercial interests or because the pictures were deliberately designed to be memorable. Can we make a portrait more memorable or more forgettable automatically? Here, we provide a method to modify the memorability of individual face photographs, while keeping the identity and other facial traits (e.g. age, attractiveness, and emotional magnitude) of the individual fixed. We show that face photographs manipulated to be more memorable (or more forgettable) are indeed more often remembered (or forgotten) in a crowd-sourcing experiment with an accuracy of 74%. Quantifying and modifying the ‘memorability’ of a face lends itself to many useful applications in computer vision and graphics, such as mnemonic aids for learning, photo editing applications for social networks and tools for designing memorable advertisements.

2 0.74730927 157 iccv-2013-Fast Face Detector Training Using Tailored Views

Author: Kristina Scherbaum, James Petterson, Rogerio S. Feris, Volker Blanz, Hans-Peter Seidel

Abstract: Face detection is an important task in computer vision and often serves as the first step for a variety of applications. State-of-the-art approaches use efficient learning algorithms and train on large amounts of manually labeled imagery. Acquiring appropriate training images, however, is very time-consuming and does not guarantee that the collected training data is representative in terms of data variability. Moreover, available data sets are often acquired under controlled settings, restricting, for example, scene illumination or 3D head pose to a narrow range. This paper takes a look into the automated generation of adaptive training samples from a 3D morphable face model. Using statistical insights, the tailored training data guarantees full data variability and is enriched by arbitrary facial attributes such as age or body weight. Moreover, it can automatically adapt to environmental constraints, such as illumination or viewing angle of recorded video footage from surveillance cameras. We use the tailored imagery to train a new many-core implementation of Viola-Jones’ AdaBoost object detection framework. The new implementation is not only faster but also enables the use of multiple feature channels such as color features at training time. In our experiments we trained seven view-dependent face detectors and evaluate these on the Face Detection Data Set and Benchmark (FDDB). Our experiments show that the use of tailored training imagery outperforms state-of-the-art approaches on this challenging dataset.

3 0.69323856 355 iccv-2013-Robust Face Landmark Estimation under Occlusion

Author: Xavier P. Burgos-Artizzu, Pietro Perona, Piotr Dollár

Abstract: Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR’s performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.

4 0.69177175 321 iccv-2013-Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

Author: Xiang Yu, Junzhou Huang, Shaoting Zhang, Wang Yan, Dimitris N. Metaxas

Abstract: This paper addresses the problem of facial landmark localization and tracking from a single camera. We present a two-stage cascaded deformable shape model to effectively and efficiently localize facial landmarks with large head pose variations. For face detection, we propose a group sparse learning method to automatically select the most salient facial landmarks. By introducing 3D face shape model, we use procrustes analysis to achieve pose-free facial landmark initialization. For deformation, the first step uses mean-shift local search with constrained local model to rapidly approach the global optimum. The second step uses component-wise active contours to discriminatively refine the subtle shape variation. Our framework can simultaneously handle face detection, pose-free landmark localization and tracking in real time. Extensive experiments are conducted on both laboratory environmental face databases and face-in-the-wild databases. All results demonstrate that our approach has certain advantages over state-of-the-art methods in handling pose variations.

5 0.6746757 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection

Author: Xiaowei Zhao, Shiguang Shan, Xiujuan Chai, Xilin Chen

Abstract: In this paper, we propose a novel cascaded face shape space pruning algorithm for robust facial landmark detection. Through progressively excluding the incorrect candidate shapes, our algorithm can accurately and efficiently achieve the globally optimal shape configuration. Specifically, individual landmark detectors are firstly applied to eliminate wrong candidates for each landmark. Then, the candidate shape space is further pruned by jointly removing incorrect shape configurations. To achieve this purpose, a discriminative structure classifier is designed to assess the candidate shape configurations. Based on the learned discriminative structure classifier, an efficient shape space pruning strategy is proposed to quickly reject most incorrect candidate shapes while preserve the true shape. The proposed algorithm is carefully evaluated on a large set of real world face images. In addition, comparison results on the publicly available BioID and LFW face databases demonstrate that our algorithm outperforms some state-of-the-art algorithms.

6 0.67248571 391 iccv-2013-Sieving Regression Forest Votes for Facial Feature Detection in the Wild

7 0.67052418 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

8 0.65776253 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

9 0.6520139 251 iccv-2013-Like Father, Like Son: Facial Expression Dynamics for Kinship Verification

10 0.62721586 149 iccv-2013-Exemplar-Based Graph Matching for Robust Facial Landmark Localization

11 0.62640345 154 iccv-2013-Face Recognition via Archetype Hull Ranking

12 0.61909389 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

13 0.59426779 219 iccv-2013-Internet Based Morphable Model

14 0.56871581 339 iccv-2013-Rank Minimization across Appearance and Shape for AAM Ensemble Fitting

15 0.55044055 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

16 0.54860634 393 iccv-2013-Simultaneous Clustering and Tracklet Linking for Multi-face Tracking in Videos

17 0.5429098 206 iccv-2013-Hybrid Deep Learning for Face Verification

18 0.54210806 392 iccv-2013-Similarity Metric Learning for Face Recognition

19 0.52712905 302 iccv-2013-Optimization Problems for Fast AAM Fitting in-the-Wild

20 0.52001137 106 iccv-2013-Deep Learning Identity-Preserving Face Space


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.046), (4, 0.013), (7, 0.012), (8, 0.242), (26, 0.069), (31, 0.045), (34, 0.011), (41, 0.011), (42, 0.127), (48, 0.016), (64, 0.038), (73, 0.034), (78, 0.011), (89, 0.141), (95, 0.025), (98, 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.76502925 246 iccv-2013-Learning the Visual Interpretation of Sentences

Author: C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende

Abstract: Sentences that describe visual scenes contain a wide variety of information pertaining to the presence of objects, their attributes and their spatial relations. In this paper we learn the visual features that correspond to semantic phrases derived from sentences. Specifically, we extract predicate tuples that contain two nouns and a relation. The relation may take several forms, such as a verb, preposition, adjective or their combination. We model a scene using a Conditional Random Field (CRF) formulation where each node corresponds to an object, and the edges to their relations. We determine the potentials of the CRF using the tuples extracted from the sentences. We generate novel scenes depicting the sentences’ visual meaning by sampling from the CRF. The CRF is also used to score a set of scenes for a text-based image retrieval task. Our results show we can generate (retrieve) scenes that convey the desired semantic meaning, even when scenes (queries) are described by multiple sentences. Significant improvement is found over several baseline approaches.

same-paper 2 0.76014072 272 iccv-2013-Modifying the Memorability of Face Photographs

Author: Aditya Khosla, Wilma A. Bainbridge, Antonio Torralba, Aude Oliva

Abstract: Contemporary life bombards us with many new images of faces every day, which poses non-trivial constraints on human memory. The vast majority of face photographs are intended to be remembered, either because of personal relevance, commercial interests or because the pictures were deliberately designed to be memorable. Can we make a portrait more memorable or more forgettable automatically? Here, we provide a method to modify the memorability of individual face photographs, while keeping the identity and other facial traits (e.g. age, attractiveness, and emotional magnitude) of the individual fixed. We show that face photographs manipulated to be more memorable (or more forgettable) are indeed more often remembered (or forgotten) in a crowd-sourcing experiment with an accuracy of 74%. Quantifying and modifying the ‘memorability’ of a face lends itself to many useful applications in computer vision and graphics, such as mnemonic aids for learning, photo editing applications for social networks and tools for designing memorable advertisements.

3 0.74399632 3 iccv-2013-3D Sub-query Expansion for Improving Sketch-Based Multi-view Image Retrieval

Author: Yen-Liang Lin, Cheng-Yu Huang, Hao-Jeng Wang, Winston Hsu

Abstract: We propose a 3D sub-query expansion approach for boosting sketch-based multi-view image retrieval. The core idea of our method is to automatically convert two (guided) 2D sketches into an approximated 3D sketch model, and then generate multi-view sketches as expanded sub-queries to improve the retrieval performance. To learn the weights among synthesized views (sub-queries), we present a new multi-query feature to model the similarity between subqueries and dataset images, and formulate it into a convex optimization problem. Our approach shows superior performance compared with the state-of-the-art approach on a public multi-view image dataset. Moreover, we also conduct sensitivity tests to analyze the parameters of our approach based on the gathered user sketches.

4 0.7100246 186 iccv-2013-GrabCut in One Cut

Author: Meng Tang, Lena Gorelick, Olga Veksler, Yuri Boykov

Abstract: Among image segmentation algorithms there are two major groups: (a) methods assuming known appearance models and (b) methods estimating appearance models jointly with segmentation. Typically, the first group optimizes appearance log-likelihoods in combination with some spacial regularization. This problem is relatively simple and many methods guarantee globally optimal results. The second group treats model parameters as additional variables transforming simple segmentation energies into highorder NP-hard functionals (Zhu-Yuille, Chan-Vese, GrabCut, etc). It is known that such methods indirectly minimize the appearance overlap between the segments. We propose a new energy term explicitly measuring L1 distance between the object and background appearance models that can be globally maximized in one graph cut. We show that in many applications our simple term makes NP-hard segmentation functionals unnecessary. Our one cut algorithm effectively replaces approximate iterative optimization techniques based on block coordinate descent.

5 0.69207537 62 iccv-2013-Bird Part Localization Using Exemplar-Based Models with Enforced Pose and Subcategory Consistency

Author: Jiongxin Liu, Peter N. Belhumeur

Abstract: In this paper, we propose a novel approach for bird part localization, targeting fine-grained categories with wide variations in appearance due to different poses (including aspect and orientation) and subcategories. As it is challenging to represent such variations across a large set of diverse samples with tractable parametric models, we turn to individual exemplars. Specifically, we extend the exemplarbased models in [4] by enforcing pose and subcategory consistency at the parts. During training, we build posespecific detectors scoring part poses across subcategories, and subcategory-specific detectors scoring part appearance across poses. At the testing stage, likely exemplars are matched to the image, suggesting part locations whose pose and subcategory consistency are well-supported by the image cues. From these hypotheses, part configuration can be predicted with very high accuracy. Experimental results demonstrate significantperformance gainsfrom our method on an extensive dataset: CUB-200-2011 [30], for both localization and classification tasks.

6 0.65929055 428 iccv-2013-Translating Video Content to Natural Language Descriptions

7 0.65884638 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

8 0.65867275 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

9 0.6576857 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

10 0.65684247 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

11 0.65493387 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

12 0.65489858 79 iccv-2013-Coherent Object Detection with 3D Geometric Context from a Single Image

13 0.6542207 44 iccv-2013-Adapting Classification Cascades to New Domains

14 0.65399069 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

15 0.65391326 157 iccv-2013-Fast Face Detector Training Using Tailored Views

16 0.65368998 257 iccv-2013-Log-Euclidean Kernels for Sparse Representation and Dictionary Learning

17 0.65302157 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

18 0.65234029 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

19 0.65218043 277 iccv-2013-Multi-channel Correlation Filters

20 0.65212148 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition