cvpr cvpr2013 cvpr2013-36 knowledge-graph by maker-knowledge-mining

36 cvpr-2013-Adding Unlabeled Samples to Categories by Learned Attributes


Source: pdf

Author: Jonghyun Choi, Mohammad Rastegari, Ali Farhadi, Larry S. Davis

Abstract: We propose a method to expand the visual coverage of training sets that consist of a small number of labeled examples using learned attributes. Our optimization formulation discovers category specific attributes as well as the images that have high confidence in terms of the attributes. In addition, we propose a method to stably capture example-specific attributes for a small sized training set. Our method adds images to a category from a large unlabeled image pool, and leads to significant improvement in category recognition accuracy evaluated on a large-scale dataset, ImageNet.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Our optimization formulation discovers category specific attributes as well as the images that have high confidence in terms of the attributes. [sent-5, score-0.707]

2 In addition, we propose a method to stably capture example-specific attributes for a small sized training set. [sent-6, score-0.453]

3 Our method adds images to a category from a large unlabeled image pool, and leads to significant improvement in category recognition accuracy evaluated on a large-scale dataset, ImageNet. [sent-7, score-1.055]

4 Building a good training set with minimal supervision is a core problem in training visual category recognition algorithms [1]. [sent-10, score-0.449]

5 So, given a relatively small initial set of labeled samples from a category, we want to mine a large pool of unlabeled samples to identify visually different examples without human intervention. [sent-13, score-1.143]

6 Semi-supervised learning (SSL) aims at labeling unlabeled images based on their underlying distribution shared with a few labeled samples [5, 17, 21]. [sent-22, score-0.78]

7 In SSL, it is assumed that the unlabeled images that are distributed around the labeled samples are highly likely to be members of the labeled category. [sent-23, score-0.901]

8 However, if we need to dramatically change the decision boundary of a category to achieve good classification performance, it is unlikely that this can be done just by adding samples that are similar in the space in which the original classifier is constructed. [sent-24, score-0.501]

9 To expand the boundary of a category to an unseen region, we propose a method that selects unlabeled samples based on their attributes. [sent-25, score-0.961]

10 The selected unlabeled samples are not always instances from the same category, but they can still improve category recognition accuracy, similar to [7, 10]. [sent-26, score-0.933]

11 The category-wide attributes find samples that share a large number of discriminative attributes with the preponderance of the training data. [sent-28, score-1.039]

12 The example-specific attributes find samples that are highly predictive of the hard examples from a category, i.e., the ones poorly predicted by a leave-one-out protocol. [sent-29, score-0.919]

13 We demonstrate that our augmented training set can significantly improve the recognition accuracy over a very small initial labeled training set, where the unlabeled samples are selected from a very large unlabeled image pool (e.g., ImageNet). [sent-30, score-1.436]

14 We show the effectiveness of using attributes learned with auxiliary data to label unlabeled images without annotated attributes. [sent-34, score-0.939]

15 We propose a framework that jointly identifies the unlabeled images and category-wide attributes through an optimization that seeks high classification accuracy in both the original feature space and the attribute space. [sent-36, score-1.49]

16 We propose a method to learn example-specific attributes with a small-sized training set, to be used with the proposed framework. [sent-38, score-0.475]

17 We then combine the category-wide and the example-specific attributes to further improve the quality of image selection by diversifying the variations of the selected images (a rough sketch of this combination follows below). [sent-39, score-0.759]
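
One plausible way to realize this combination is a greedy union of the two ranked selections; the paper folds the combination into its optimization, so the sketch below is only an illustration, using the budget values γ = 20 (category-wide) and γ_i = 3 (per exemplar) reported in the experiments. All function and variable names here are hypothetical:

    def combine_selections(cat_ranking, exemplar_rankings, gamma=20, gamma_i=3):
        # cat_ranking: indices of unlabeled samples, best first, from the
        # category-wide criterion; exemplar_rankings: one such list per
        # labeled exemplar. Budgets follow the values used in Section 6.
        chosen = set(cat_ranking[:gamma])
        for ranking in exemplar_rankings:
            chosen.update(ranking[:gamma_i])
        return sorted(chosen)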

18 Section 4 describes our optimization framework for discovering category-wide attributes and the unlabeled images, as well as a method to capture exemplar-specific attributes. [sent-42, score-1.47]

19 A novel active learning framework was proposed based on interactive communication between learners and supervisors (teachers) via attributes [13]. [sent-51, score-0.481]

20 Semi-Supervised Learning. Semi-supervised learning (SSL) adds unlabeled examples to a training set by modeling the distribution of features without supervision. [sent-53, score-0.681]

21 An SSL-based scene category recognition framework using attributes, constrained by a category ontology, was proposed in [17]. [sent-59, score-0.568]

22 They leverage the inter-class relationships as constraints for SSL, using semantic attributes given a priori by a category ontology. [sent-60, score-0.692]

23 Our approach is similar to their work in terms of using attributes, but aims to discover attributes without any structured semantic prior. [sent-61, score-0.425]

24 They assume that the images in a category are not diverse and adding all images from some selected category will help to build a better model for the target category. [sent-65, score-0.673]

25 The authors of [14] propose discovering implicit attributes, which are not necessarily semantic, for category recognition. [sent-73, score-0.658]

26 The discovered attributes preserve category-specific traits as well as their visual similarity via an iterative algorithm that learns discriminative hyperplanes with max-margin and locality-sensitive hashing criteria. [sent-74, score-0.605]

27 Approach Overview. Given a handful of labeled training examples per category, it is difficult to build a generalizable visual model of a category even with sophisticated classifiers [20]. [sent-76, score-0.643]

28 To address the lack of variation among the few labeled examples, we expand the visual boundary of a category by adding unlabeled samples based on their attributes. [sent-77, score-1.116]

29 The attribute description allows us to find examples that are visually different but similar in traits or characteristics [4, 8, 9]. [sent-78, score-0.492]

30 Based on recent work on automatic discovery of attributes [14] and large-scale category-labeled image datasets [2], we discover a rich set of attributes. [sent-79, score-0.493]

31 These attributes are learned using an auxiliary category-labeled dataset to avoid biasing the attribute models towards the few labeled examples. [sent-80, score-0.841]

32 The motivation here is similar to what underlies the successful Classemes representation [18], which achieved good category recognition performance by representing samples via external data consisting of a large number of samples from various categories. [sent-81, score-0.543]
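
To make the idea concrete, here is a minimal sketch of such an external-data attribute representation in the spirit of Classemes. This is not the authors' discovery procedure from [14] (which learns discriminative hyperplanes with max-margin and hashing criteria); the scikit-learn calls and function names are illustrative assumptions:

    import numpy as np
    from sklearn.svm import LinearSVC

    def learn_attribute_classifiers(aux_features, aux_labels):
        # Fit one one-vs-rest linear classifier per auxiliary category.
        # Their decision values later serve as attribute-like features.
        classifiers = []
        for cat in np.unique(aux_labels):
            clf = LinearSVC(C=1.0)
            clf.fit(aux_features, (aux_labels == cat).astype(int))
            classifiers.append(clf)
        return classifiers

    def to_attribute_space(features, classifiers):
        # Map raw visual features to one response per learned "attribute".
        return np.column_stack(
            [clf.decision_function(features) for clf in classifiers])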

33 Across the original visual feature space and the attribute space, we propose a framework that jointly selects the unlabeled images to be assigned to each category and the discriminative attribute representations of the categories, based on either a category-wide or an exemplar-based ranking criterion. [sent-82, score-2.12]

34 Section 4.1 presents the optimization framework for category-wide addition of unlabeled samples to categories. [sent-85, score-0.916]

35 This adds samples that share many discriminative attributes among themselves and with the given labeled training data. [sent-86, score-0.801]

36 The same framework can be applied to identify relevant unlabeled samples based on their attribute similarity to specific instances of the training data. [sent-87, score-0.99]

37 This only involves a simple change to one term of the optimization, and is based on how the ranks of unlabeled samples change as labeled samples are left out, one at a time, from the attribute-based classifier. [sent-88, score-1.212]

38 We refer to the first as a categorical analysis and the second as an exemplar analysis. [sent-90, score-0.479]

39 Categorical Analysis. We simultaneously discover discriminative attributes and images from the unlabeled data set in a joint optimization framework, formulated in both the visual feature space and the attribute space with a max-margin criterion for discriminability. [sent-94, score-1.302]

40 Also unlike [10], we do not need to learn the distributions of the unlabeled images in the original feature space. [sent-96, score-0.514]

41 Unlabeled samples are added to a category based on identifying discriminative attribute models. [sent-123, score-0.597]

42 Since the problems of determining the discriminative attributes and selecting the subset of unlabeled data to assign to a category are coupled, we learn them jointly. [sent-124, score-1.213]

43 Additionally, we want to mitigate against unlabeled samples being assigned to multiple categories, so a term M(·) is added to the optimization criterion to enforce that. [sent-125, score-0.636]

44 I_c ∈ {0, 1}^n is the sample selection vector for category c, and indicates which unlabeled samples are selected for assignment to the training set of category c; I_{c,k} = 1 for the labeled samples k ∈ {1, …, l}, the budget constraint ∑_{k=l+1}^{n} I_{c,k} ≤ γ caps the number of unlabeled samples added per category, and the overlap term M(I) = ∑_{c1 ≠ c2} I_{c1} · I_{c2} (1) penalizes assigning the same unlabeled sample to more than one category. [sent-139, score-0.66]

45 TR (Eq. 2) essentially chooses the top γ responses of the attribute classifier from the unlabeled set, via the fifth constraint of Eq. (1). [sent-155, score-0.507]

46 At the first iteration, the initial value of I is determined by training the attribute classifier w^a_c on the given labeled training set. [sent-165, score-0.798]
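
A hedged sketch of one such round of category-wide selection, greedily approximating Eq. (1): each category's attribute classifier ranks the pool, the top γ responses are kept (the TR step, with γ = 50 as in the experiments), and multiply-claimed samples are dropped as a crude stand-in for the overlap penalty M(I). The scikit-learn usage and names are assumptions, not the authors' implementation:

    import numpy as np
    from sklearn.svm import LinearSVC

    def select_top_gamma(attr_labeled, labels, attr_unlabeled, categories, gamma=50):
        # Per category, train the attribute classifier w^a_c on the current
        # labeled set, rank the unlabeled pool, keep the top-gamma responses.
        n = attr_unlabeled.shape[0]
        picked = {c: np.zeros(n, dtype=bool) for c in categories}
        for c in categories:
            w_ca = LinearSVC(C=1.0).fit(attr_labeled, (labels == c).astype(int))
            scores = w_ca.decision_function(attr_unlabeled)
            picked[c][np.argsort(-scores)[:gamma]] = True
        # Crude stand-in for the overlap penalty M(I): drop any sample
        # claimed by more than one category.
        claims = sum(sel.astype(int) for sel in picked.values())
        for c in categories:
            picked[c] &= (claims == 1)
        return picked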

47 For our purposes, though, we can accomplish the same thing by analyzing how the ranks of unlabeled samples change when a single sample is eliminated from the training set of the attribute SVM. [sent-179, score-1.068]

48 If an unlabeled sample sees its rank drop sharply from its rank in the full-sample SVM, then the training sample dropped should have strong attribute similarity to the unlabeled sample. [sent-180, score-1.448]

49 The leftmost column shows unlabeled samples sorted by their rank in the attribute classifier learned from that set. [sent-183, score-1.036]

50 Then we construct leave-one-out attribute classifiers, and each column shows the new rankings of unlabeled samples when the image at the top of the column is eliminated from the training set. [sent-184, score-1.064]

51 Eliminating the half orange (second sample, top row) from the training set reduces the rank of the globally best unlabeled sample from 1 to 10. [sent-185, score-0.662]

52 First, let w^a_c be the attribute classifier for the current training set of category c (while the process is initialized from the labeled training set, after each iteration we use the additional unlabeled samples added to the category to construct a new attribute classifier). [sent-186, score-2.213]

53 Let w^a_{c,j̄} be the attribute classifier learned when the j-th sample is removed from the training set. [sent-187, score-0.459]

54 We next describe how we use the ranks of unlabeled samples under these two classifiers to modify TR in Eq. (2). [sent-188, score-0.682]

55 Basically, we are going to re-rank the unlabeled samples based on their rank changes from w^a_c to w^a_{c,j̄}. [sent-190, score-0.81]

56 This can be accomplished by computing the following score based on rank changes, and sorting the unlabeled samples by this score: e_j(x_i) = r_g(x_i) − r_j(x_i), (3) where x_i is a sample from the unlabeled pool, and r_g(·) and r_j(·) are the rank functions of w^a_c and w^a_{c,j̄}, respectively. [sent-192, score-1.399]
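
A minimal sketch of Eq. (3), assuming linear attribute classifiers; rank_of and the scikit-learn calls are illustrative, not the paper's code. Ranks are 1 = best, so a strongly negative e_j(x_i), as in the half-orange example above (rank 1 to rank 10 gives e = −9), flags unlabeled samples whose high rank depends on exemplar j:

    import numpy as np
    from sklearn.svm import LinearSVC

    def rank_of(scores):
        # rank 1 = highest score.
        order = np.argsort(-scores)
        ranks = np.empty(len(scores), dtype=int)
        ranks[order] = np.arange(1, len(scores) + 1)
        return ranks

    def rank_change_scores(X, y, X_unlabeled, target):
        # e_j(x_i) = r_g(x_i) - r_j(x_i): r_g ranks the unlabeled pool under
        # the full classifier w^a_c, r_j under the leave-one-out classifier
        # with training sample j removed.
        full = LinearSVC(C=1.0).fit(X, (y == target).astype(int))
        r_g = rank_of(full.decision_function(X_unlabeled))
        pos = np.flatnonzero(y == target)
        e = np.zeros((len(pos), X_unlabeled.shape[0]))
        for row, j in enumerate(pos):
            keep = np.ones(len(y), dtype=bool)
            keep[j] = False
            loo = LinearSVC(C=1.0).fit(X[keep], (y[keep] == target).astype(int))
            e[row] = r_g - rank_of(loo.decision_function(X_unlabeled))
        return e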

57 The leftmost column is a list of unlabeled images ordered by the confidence score of w^a_c; the rest of the columns are lists of unlabeled images ordered by each w^a_{c,j̄}. [sent-214, score-1.061]

58 The unlabeled image pool consists of images that are arbitrarily chosen from the entire 1,000 categories in the ILSVRC 2010 benchmark dataset, but includes at least 50 samples from each of the categories to be learned. [sent-219, score-0.879]

59 For learning the attribute space and the mapper, the attribute mapper is expected to capture some attributes of the categories of interest. [sent-221, score-1.097]

60 For this purpose, we use 50 labeled samples from 93 categories that are similar to the 11 categories, to learn the attribute space. [sent-222, score-0.667]

61 Experiments. The main goal of our method is to add unlabeled images to the initial training set in order to classify more test images correctly. [sent-224, score-0.63]

62 For the categorical attributes only, we mostly use γ = 50, except for the experiments in Section 6. [sent-246, score-0.499]

63 For combining exemplar and categorical attributes, we mostly use γ = 20 and γ_i = 3, except for Section 6. [sent-248, score-0.479]

64 Qualitative Results. Our method discovers examples that expand the visual coverage of a category by adding not only examples from the same category but also examples from other categories. [sent-256, score-0.963]

65 Figure 2 illustrates qualitative results on the category Dalmatian for both the categorical and exemplar attribute analyses. [sent-257, score-1.137]

66 The examples selected based on categorical attributes exhibit characteristics commonly found in the labeled examples, such as ‘dotted, four-legged animal’. [sent-258, score-1.037]

67 The exemplar attributes, on the other hand, select examples that exhibit the characteristics of individual labeled training examples. [sent-259, score-0.589]

68 Comparison with Other Selection Criteria. Given our goal of selecting examples from a large unlabeled pool with only a small number of labeled training samples, we do not compare with semi-supervised learning methods, because they need more labeled data to model the distribution. [sent-262, score-0.932]

69 ‘C’ refers to our method of selecting examples using categorical attributes only. [sent-269, score-0.751]

70 ‘E+C’ refers to addition using categorical and exemplar attributes. [sent-270, score-0.513]

71 The size of the unlabeled dataset is roughly 3,000 images from randomly chosen categories out of the 1,000 categories. [sent-271, score-0.554]

72 We compare to baseline algorithms which are applicable to the large unlabeled data scenario. [sent-272, score-0.491]

73 However, our method identifies useful images in the unlabeled image pool and significantly improves mAP by 7. [sent-278, score-0.571]

74 The added examples serve not only as positive samples for each category but also as negative samples for other categories. [sent-283, score-0.667]

75 In addition, the exemplar attributes further improve the recognition accuracy. [sent-291, score-0.669]

76 Note that the examples selected by categorical attributes display characteristics commonly found in the labeled training examples, such as ‘dotted’ and ‘four-legged animal’. [sent-297, score-1.099]

77 In contrast, the exemplar attributes select examples that display the characteristics of individual examples. [sent-298, score-0.794]

78 Mean average precision (mAP) over the 11 categories for our method, varying the number of unlabeled images selected. [sent-300, score-0.844]

79 (Initial Set), the augmented set by our method using category-wide attributes only (+ by C only), and categorical+exemplar attributes (+ by E+C), respectively. [sent-302, score-1.091]

80 Red bars denote the purity of selected images using category-wide attributes only (+ by C only), and the green bars are obtained from categorical+exemplar attributes (+ by E+C). [sent-313, score-1.48]

81 (The results using both exemplar and categorical attributes are similar, so they are omitted.) [sent-317, score-0.87]

82 Precision of Unlabeled Data. The unlabeled data can be composed of images from many categories. [sent-320, score-0.492]

83 The precision of the unlabeled data is defined as the ratio of the number of unlabeled images from the categories of interest (i.e., not from extraneous categories) to the size of the entire unlabeled image pool; for example, a 1,100-image pool in which 550 images come from the target categories has precision 0.5. [sent-321, score-1.636]

84 The larger the unlabeled data pool, the lower we expect its precision to be (imagine running a text-based image search); see Figure 5.

85 Even the similar examples alone improve the category recognition accuracy compared to just using the initial labeled set. [sent-328, score-0.539]

86 It is interesting to observe how robust our method is against the precision of unlabeled data. [sent-330, score-0.554]

87 We start with an unlabeled set (550 images, 50 from each of the 11 categories) of precision 1.0. [sent-331, score-0.554]

88 As shown in Figure 6, we observe that the accuracy improvement by our method using categorical attributes is quite stable even when precision is low. [sent-334, score-0.677]

89 Mean average precision (mAP) as a function of precision of unlabeled data. [sent-354, score-0.639]

90 Precision denotes the ratio of the number of unlabeled images from the categories of interest to the size of the entire unlabeled image pool (size = 50,000). [sent-355, score-1.082]

91 Comparison to Exemplar SVM. We also compare the effectiveness of our proposed exemplar attribute discovery method (Sec. [sent-362, score-0.737]

92 To stabilize the exemplar SVM scores, we employ 50,000 external negative samples to learn each exemplar SVM, while we use the small original training set for our method. [sent-367, score-0.778]
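
For reference, a minimal sketch of the exemplar-SVM baseline being compared against (one positive vs. a large negative pool); the class weighting and scikit-learn usage are assumptions standing in for the usual LIBLINEAR setup, and the calibration step of full exemplar SVMs is omitted:

    import numpy as np
    from sklearn.svm import LinearSVC

    def train_exemplar_svm(exemplar, negatives):
        # One positive (the exemplar) against a large negative pool; the
        # lone positive is upweighted so it is not swamped by the negatives.
        X = np.vstack([exemplar[None, :], negatives])
        y = np.concatenate(([1], np.zeros(len(negatives), dtype=int)))
        clf = LinearSVC(C=0.01, class_weight={1: float(len(negatives)), 0: 1.0})
        return clf.fit(X, y)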

93 Our method outperforms the exemplar SVM in terms of category recognition accuracy (APs) without the extra large negative example set (size = 50,000). [sent-372, score-0.545]

94 Figure 8 shows that our exemplar attribute discovery method outperforms the exemplar SVM by large margins, even without the large negative example set. [sent-373, score-0.922]

95 Conclusion. We proposed a method to select unlabeled images, based on learned attributes, for learning classifiers. [sent-375, score-0.602]

96 The unlabeled images selected by our method do not necessarily belong to the category of interest but are similar in attributes. [sent-376, score-0.818]

97 Our method does not require any annotated attribute set a priori; instead, it first builds an automatically learned attribute space. [sent-377, score-0.624]

98 We formulate a joint optimization framework to select both images and the attributes for a category and solve it iteratively. [sent-378, score-0.711]

99 In addition to the category-wide attributes, we identify example-specific attributes to diversify the selected images. [sent-379, score-0.782]

100 From a large unlabeled data pool, the selected images improve category recognition accuracy significantly over the accuracy obtained using the initial labeled training set alone. [sent-381, score-1.057]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('unlabeled', 0.469), ('attributes', 0.391), ('attribute', 0.298), ('exemplar', 0.278), ('category', 0.267), ('categorical', 0.201), ('wca', 0.161), ('samples', 0.138), ('ssl', 0.136), ('labeled', 0.124), ('purity', 0.107), ('bars', 0.1), ('examples', 0.095), ('jca', 0.092), ('mapper', 0.092), ('categories', 0.085), ('precision', 0.085), ('pool', 0.079), ('borrowing', 0.072), ('jcv', 0.069), ('wac', 0.069), ('discovery', 0.068), ('rastegari', 0.065), ('active', 0.064), ('training', 0.062), ('selected', 0.059), ('orange', 0.056), ('expand', 0.055), ('ilsvrc', 0.053), ('initial', 0.053), ('traits', 0.049), ('svm', 0.047), ('legged', 0.046), ('wcv', 0.046), ('xl', 0.046), ('ranks', 0.045), ('farhadi', 0.044), ('ic', 0.043), ('rank', 0.042), ('hyperplanes', 0.042), ('wide', 0.042), ('xn', 0.041), ('halved', 0.041), ('jx', 0.038), ('classifier', 0.038), ('intervention', 0.037), ('alc', 0.036), ('extraneous', 0.036), ('generalizable', 0.036), ('maxmargin', 0.036), ('discover', 0.034), ('transfer', 0.034), ('refers', 0.034), ('ontology', 0.034), ('imagenet', 0.034), ('adding', 0.034), ('sample', 0.033), ('parkash', 0.033), ('selects', 0.032), ('discriminative', 0.032), ('selecting', 0.032), ('vse', 0.031), ('borrow', 0.03), ('select', 0.03), ('ranked', 0.03), ('classifiers', 0.03), ('shrivastava', 0.03), ('adds', 0.029), ('visual', 0.029), ('supervision', 0.029), ('added', 0.029), ('aps', 0.028), ('leave', 0.028), ('learned', 0.028), ('map', 0.028), ('auxiliary', 0.028), ('ordered', 0.027), ('learning', 0.026), ('salakhutdinov', 0.026), ('characteristics', 0.026), ('margin', 0.026), ('discovers', 0.026), ('balancing', 0.026), ('hashing', 0.026), ('share', 0.025), ('dogs', 0.025), ('decision', 0.024), ('liblinear', 0.024), ('visually', 0.024), ('images', 0.023), ('xi', 0.023), ('identify', 0.023), ('column', 0.023), ('members', 0.023), ('eliminated', 0.023), ('tr', 0.022), ('baseline', 0.022), ('learn', 0.022), ('sharing', 0.022), ('rj', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 36 cvpr-2013-Adding Unlabeled Samples to Categories by Learned Attributes

Author: Jonghyun Choi, Mohammad Rastegari, Ali Farhadi, Larry S. Davis

Abstract: We propose a method to expand the visual coverage of training sets that consist of a small number of labeled examples using learned attributes. Our optimization formulation discovers category specific attributes as well as the images that have high confidence in terms of the attributes. In addition, we propose a method to stably capture example-specific attributes for a small sized training set. Our method adds images to a category from a large unlabeled image pool, and leads to significant improvement in category recognition accuracy evaluated on a large-scale dataset, ImageNet.

2 0.4639644 116 cvpr-2013-Designing Category-Level Attributes for Discriminative Visual Recognition

Author: Felix X. Yu, Liangliang Cao, Rogerio S. Feris, John R. Smith, Shih-Fu Chang

Abstract: Attribute-based representation has shown great promises for visual recognition due to its intuitive interpretation and cross-category generalization property. However, human efforts are usually involved in the attribute designing process, making the representation costly to obtain. In this paper, we propose a novel formulation to automatically design discriminative “category-level attributes ”, which can be efficiently encoded by a compact category-attribute matrix. The formulation allows us to achieve intuitive and critical design criteria (category-separability, learnability) in a principled way. The designed attributes can be used for tasks of cross-category knowledge transfer, achieving superior performance over well-known attribute dataset Animals with Attributes (AwA) and a large-scale ILSVRC2010 dataset (1.2M images). This approach also leads to state-ofthe-art performance on the zero-shot learning task on AwA.

3 0.28560367 229 cvpr-2013-It's Not Polite to Point: Describing People with Uncertain Attributes

Author: Amir Sadovnik, Andrew Gallagher, Tsuhan Chen

Abstract: Visual attributes are powerful features for many different applications in computer vision such as object detection and scene recognition. Visual attributes present another application that has not been examined as rigorously: verbal communication from a computer to a human. Since many attributes are nameable, the computer is able to communicate these concepts through language. However, this is not a trivial task. Given a set of attributes, selecting a subset to be communicated is task dependent. Moreover, because attribute classifiers are noisy, it is important to find ways to deal with this uncertainty. We address the issue of communication by examining the task of composing an automatic description of a person in a group photo that distinguishes him from the others. We introduce an efficient, principled methodfor choosing which attributes are included in a short description to maximize the likelihood that a third party will correctly guess to which person the description refers. We compare our algorithm to computer baselines and human describers, and show the strength of our method in creating effective descriptions.

4 0.28326738 34 cvpr-2013-Adaptive Active Learning for Image Classification

Author: Xin Li, Yuhong Guo

Abstract: Recently active learning has attracted a lot of attention in computer vision field, as it is time and cost consuming to prepare a good set of labeled images for vision data analysis. Most existing active learning approaches employed in computer vision adopt most uncertainty measures as instance selection criteria. Although most uncertainty query selection strategies are very effective in many circumstances, they fail to take information in the large amount of unlabeled instances into account and are prone to querying outliers. In this paper, we present a novel adaptive active learning approach that combines an information density measure and a most uncertainty measure together to select critical instances to label for image classifications. Our experiments on two essential tasks of computer vision, object recognition and scene recognition, demonstrate the efficacy of the proposed approach.

5 0.26881558 85 cvpr-2013-Complex Event Detection via Multi-source Video Attributes

Author: Zhigang Ma, Yi Yang, Zhongwen Xu, Shuicheng Yan, Nicu Sebe, Alexander G. Hauptmann

Abstract: Complex events essentially include human, scenes, objects and actions that can be summarized by visual attributes, so leveraging relevant attributes properly could be helpful for event detection. Many works have exploited attributes at image level for various applications. However, attributes at image level are possibly insufficient for complex event detection in videos due to their limited capability in characterizing the dynamic properties of video data. Hence, we propose to leverage attributes at video level (named as video attributes in this work), i.e., the semantic labels of external videos are used as attributes. Compared to complex event videos, these external videos contain simple contents such as objects, scenes and actions which are the basic elements of complex events. Specifically, building upon a correlation vector which correlates the attributes and the complex event, we incorporate video attributes latently as extra informative cues into the event detector learnt from complex event videos. Extensive experiments on a real-world large-scale dataset validate the efficacy of the proposed approach.

6 0.26708922 293 cvpr-2013-Multi-attribute Queries: To Merge or Not to Merge?

7 0.24896778 396 cvpr-2013-Simultaneous Active Learning of Classifiers & Attributes via Relative Feedback

8 0.23938777 461 cvpr-2013-Weakly Supervised Learning for Attribute Localization in Outdoor Scenes

9 0.2316308 101 cvpr-2013-Cumulative Attribute Space for Age and Crowd Density Estimation

10 0.22966821 241 cvpr-2013-Label-Embedding for Attribute-Based Classification

11 0.2232164 390 cvpr-2013-Semi-supervised Node Splitting for Random Forest Construction

12 0.21251012 387 cvpr-2013-Semi-supervised Domain Adaptation with Instance Constraints

13 0.19476843 310 cvpr-2013-Object-Centric Anomaly Detection by Attribute-Based Reasoning

14 0.18784209 48 cvpr-2013-Attribute-Based Detection of Unfamiliar Classes with Humans in the Loop

15 0.1824313 146 cvpr-2013-Enriching Texture Analysis with Semantic Data

16 0.1746887 348 cvpr-2013-Recognizing Activities via Bag of Words for Attribute Dynamics

17 0.16742828 459 cvpr-2013-Watching Unlabeled Video Helps Learn New Human Actions from Very Few Labeled Snapshots

18 0.14931652 80 cvpr-2013-Category Modeling from Just a Single Labeling: Use Depth Information to Guide the Learning of 2D Models

19 0.14872888 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

20 0.14792281 153 cvpr-2013-Expanded Parts Model for Human Attribute and Action Recognition in Still Images


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.221), (1, -0.202), (2, -0.076), (3, -0.04), (4, 0.181), (5, 0.152), (6, -0.358), (7, 0.068), (8, 0.127), (9, 0.23), (10, -0.046), (11, 0.07), (12, -0.058), (13, -0.019), (14, -0.012), (15, -0.036), (16, -0.067), (17, -0.168), (18, -0.053), (19, 0.109), (20, -0.089), (21, -0.09), (22, -0.063), (23, -0.025), (24, 0.081), (25, 0.032), (26, 0.018), (27, 0.007), (28, -0.012), (29, 0.019), (30, -0.081), (31, 0.027), (32, -0.029), (33, 0.037), (34, 0.008), (35, 0.02), (36, -0.006), (37, -0.109), (38, 0.046), (39, -0.125), (40, -0.062), (41, -0.026), (42, 0.079), (43, 0.078), (44, 0.064), (45, -0.026), (46, -0.066), (47, -0.123), (48, -0.053), (49, 0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96825564 36 cvpr-2013-Adding Unlabeled Samples to Categories by Learned Attributes

Author: Jonghyun Choi, Mohammad Rastegari, Ali Farhadi, Larry S. Davis

Abstract: We propose a method to expand the visual coverage of training sets that consist of a small number of labeled examples using learned attributes. Our optimization formulation discovers category specific attributes as well as the images that have high confidence in terms of the attributes. In addition, we propose a method to stably capture example-specific attributes for a small sized training set. Our method adds images to a category from a large unlabeled image pool, and leads to significant improvement in category recognition accuracy evaluated on a large-scale dataset, ImageNet.

2 0.85591209 396 cvpr-2013-Simultaneous Active Learning of Classifiers & Attributes via Relative Feedback

Author: Arijit Biswas, Devi Parikh

Abstract: Active learning provides useful tools to reduce annotation costs without compromising classifier performance. However it traditionally views the supervisor simply as a labeling machine. Recently a new interactive learning paradigm was introduced that allows the supervisor to additionally convey useful domain knowledge using attributes. The learner first conveys its belief about an actively chosen image e.g. “I think this is a forest, what do you think?”. If the learner is wrong, the supervisorprovides an explanation e.g. “No, this is too open to be a forest”. With access to a pre-trained set of relative attribute predictors, the learner fetches all unlabeled images more open than the query image, and uses them as negative examples of forests to update its classifier. This rich human-machine communication leads to better classification performance. In this work, we propose three improvements over this set-up. First, we incorporate a weighting scheme that instead of making a hard decision reasons about the likelihood of an image being a negative example. Second, we do away with pre-trained attributes and instead learn the attribute models on the fly, alleviating overhead and restrictions of a pre-determined attribute vocabulary. Finally, we propose an active learning framework that accounts for not just the label- but also the attributes-based feedback while selecting the next query image. We demonstrate significant improvement in classification accuracy on faces and shoes. We also collect and make available the largest relative attributes dataset containing 29 attributes of faces from 60 categories.

3 0.84001291 116 cvpr-2013-Designing Category-Level Attributes for Discriminative Visual Recognition

Author: Felix X. Yu, Liangliang Cao, Rogerio S. Feris, John R. Smith, Shih-Fu Chang

Abstract: Attribute-based representation has shown great promises for visual recognition due to its intuitive interpretation and cross-category generalization property. However, human efforts are usually involved in the attribute designing process, making the representation costly to obtain. In this paper, we propose a novel formulation to automatically design discriminative “category-level attributes ”, which can be efficiently encoded by a compact category-attribute matrix. The formulation allows us to achieve intuitive and critical design criteria (category-separability, learnability) in a principled way. The designed attributes can be used for tasks of cross-category knowledge transfer, achieving superior performance over well-known attribute dataset Animals with Attributes (AwA) and a large-scale ILSVRC2010 dataset (1.2M images). This approach also leads to state-ofthe-art performance on the zero-shot learning task on AwA.

4 0.82249701 241 cvpr-2013-Label-Embedding for Attribute-Based Classification

Author: Zeynep Akata, Florent Perronnin, Zaid Harchaoui, Cordelia Schmid

Abstract: Attributes are an intermediate representation, which enables parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function which measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. The label embedding framework offers other advantages such as the ability to leverage alternative sources of information in addition to attributes (e.g. class hierarchies) or to transition smoothly from zero-shot learning to learning with large quantities of data.

5 0.78529769 310 cvpr-2013-Object-Centric Anomaly Detection by Attribute-Based Reasoning

Author: Babak Saleh, Ali Farhadi, Ahmed Elgammal

Abstract: When describing images, humans tend not to talk about the obvious, but rather mention what they find interesting. We argue that abnormalities and deviations from typicalities are among the most important components that form what is worth mentioning. In this paper we introduce the abnormality detection as a recognition problem and show how to model typicalities and, consequently, meaningful deviations from prototypical properties of categories. Our model can recognize abnormalities and report the main reasons of any recognized abnormality. We also show that abnormality predictions can help image categorization. We introduce the abnormality detection dataset and show interesting results on how to reason about abnormalities.

6 0.76828212 48 cvpr-2013-Attribute-Based Detection of Unfamiliar Classes with Humans in the Loop

7 0.7668975 229 cvpr-2013-It's Not Polite to Point: Describing People with Uncertain Attributes

8 0.76653457 293 cvpr-2013-Multi-attribute Queries: To Merge or Not to Merge?

9 0.69064987 461 cvpr-2013-Weakly Supervised Learning for Attribute Localization in Outdoor Scenes

10 0.63292623 34 cvpr-2013-Adaptive Active Learning for Image Classification

11 0.61248785 101 cvpr-2013-Cumulative Attribute Space for Age and Crowd Density Estimation

12 0.57653064 85 cvpr-2013-Complex Event Detection via Multi-source Video Attributes

13 0.5671261 390 cvpr-2013-Semi-supervised Node Splitting for Random Forest Construction

14 0.52891594 348 cvpr-2013-Recognizing Activities via Bag of Words for Attribute Dynamics

15 0.52263999 99 cvpr-2013-Cross-View Image Geolocalization

16 0.51930171 463 cvpr-2013-What's in a Name? First Names as Facial Attributes

17 0.50415081 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images

18 0.4795773 179 cvpr-2013-From N to N+1: Multiclass Transfer Incremental Learning

19 0.47929922 323 cvpr-2013-POOF: Part-Based One-vs.-One Features for Fine-Grained Categorization, Face Verification, and Attribute Estimation

20 0.47168151 442 cvpr-2013-Transfer Sparse Coding for Robust Image Representation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.155), (16, 0.02), (26, 0.042), (28, 0.029), (33, 0.28), (36, 0.01), (67, 0.069), (69, 0.072), (77, 0.021), (82, 0.128), (87, 0.093)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.93161559 327 cvpr-2013-Pattern-Driven Colorization of 3D Surfaces

Author: George Leifman, Ayellet Tal

Abstract: Colorization refers to the process of adding color to black & white images or videos. This paper extends the term to handle surfaces in three dimensions. This is important for applications in which the colors of an object need to be restored and no relevant image exists for texturing it. We focus on surfaces with patterns and propose a novel algorithm for adding colors to these surfaces. The user needs only to scribble a few color strokes on one instance of each pattern, and the system proceeds to automatically colorize the whole surface. For this scheme to work, we address not only the problem of colorization, but also the problem of pattern detection on surfaces.

2 0.93004954 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds

Author: Yan Wang, Rongrong Ji, Shih-Fu Chang

Abstract: Recent years have witnessed a growing interest in understanding the semantics of point clouds in a wide variety of applications. However, point cloud labeling remains an open problem, due to the difficulty in acquiring sufficient 3D point labels towards training effective classifiers. In this paper, we overcome this challenge by utilizing the existing massive 2D semantic labeled datasets from decadelong community efforts, such as ImageNet and LabelMe, and a novel “cross-domain ” label propagation approach. Our proposed method consists of two major novel components, Exemplar SVM based label propagation, which effectively addresses the cross-domain issue, and a graphical model based contextual refinement incorporating 3D constraints. Most importantly, the entire process does not require any training data from the target scenes, also with good scalability towards large scale applications. We evaluate our approach on the well-known Cornell Point Cloud Dataset, achieving much greater efficiency and comparable accuracy even without any 3D training data. Our approach shows further major gains in accuracy when the training data from the target scenes is used, outperforming state-ofthe-art approaches with far better efficiency.

3 0.92721754 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

Author: Ian Endres, Kevin J. Shih, Johnston Jiaa, Derek Hoiem

Abstract: We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors ’ ability to discriminate and localize annotated keypoints. Our detection system is competitive with the best-existing systems, outperforming other HOG-based detectors on the more deformable categories.

4 0.91900992 414 cvpr-2013-Structure Preserving Object Tracking

Author: Lu Zhang, Laurens van_der_Maaten

Abstract: Model-free trackers can track arbitrary objects based on a single (bounding-box) annotation of the object. Whilst the performance of model-free trackers has recently improved significantly, simultaneously tracking multiple objects with similar appearance remains very hard. In this paper, we propose a new multi-object model-free tracker (based on tracking-by-detection) that resolves this problem by incorporating spatial constraints between the objects. The spatial constraints are learned along with the object detectors using an online structured SVM algorithm. The experimental evaluation ofour structure-preserving object tracker (SPOT) reveals significant performance improvements in multi-object tracking. We also show that SPOT can improve the performance of single-object trackers by simultaneously tracking different parts of the object.

5 0.91897261 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

Author: Horst Possegger, Sabine Sternig, Thomas Mauthner, Peter M. Roth, Horst Bischof

Abstract: Combining foreground images from multiple views by projecting them onto a common ground-plane has been recently applied within many multi-object tracking approaches. These planar projections introduce severe artifacts and constrain most approaches to objects moving on a common 2D ground-plane. To overcome these limitations, we introduce the concept of an occupancy volume exploiting the full geometry and the objects ’ center of mass and develop an efficient algorithm for 3D object tracking. Individual objects are tracked using the local mass density scores within a particle filter based approach, constrained by a Voronoi partitioning between nearby trackers. Our method benefits from the geometric knowledge given by the occupancy volume to robustly extract features and train classifiers on-demand, when volumetric information becomes unreliable. We evaluate our approach on several challenging real-world scenarios including the public APIDIS dataset. Experimental evaluations demonstrate significant improvements compared to state-of-theart methods, while achieving real-time performance. – –

6 0.91628766 408 cvpr-2013-Spatiotemporal Deformable Part Models for Action Detection

7 0.91624254 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation

8 0.91600335 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

9 0.91466731 325 cvpr-2013-Part Discovery from Partial Correspondence

same-paper 10 0.91436249 36 cvpr-2013-Adding Unlabeled Samples to Categories by Learned Attributes

11 0.91429496 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

12 0.91408753 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path

13 0.9140361 61 cvpr-2013-Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics

14 0.91316307 445 cvpr-2013-Understanding Bayesian Rooms Using Composite 3D Object Models

15 0.91258192 19 cvpr-2013-A Minimum Error Vanishing Point Detection Approach for Uncalibrated Monocular Images of Man-Made Environments

16 0.91188198 256 cvpr-2013-Learning Structured Hough Voting for Joint Object Detection and Occlusion Reasoning

17 0.91133189 372 cvpr-2013-SLAM++: Simultaneous Localisation and Mapping at the Level of Objects

18 0.91122037 143 cvpr-2013-Efficient Large-Scale Structured Learning

19 0.91118395 314 cvpr-2013-Online Object Tracking: A Benchmark

20 0.91115999 14 cvpr-2013-A Joint Model for 2D and 3D Pose Estimation from a Single Image