nips nips2008 nips2008-142 knowledge-graph by maker-knowledge-mining

142 nips-2008-Multi-Level Active Prediction of Useful Image Annotations for Recognition


Source: pdf

Author: Sudheendra Vijayanarasimhan, Kristen Grauman

Abstract: We introduce a framework for actively learning visual categories from a mixture of weakly and strongly labeled image examples. We propose to allow the category learner to strategically choose what annotations it receives—based on both the expected reduction in uncertainty as well as the relative costs of obtaining each annotation. We construct a multiple-instance discriminative classifier based on the initial training data. Then all remaining unlabeled and weakly labeled examples are surveyed to actively determine which annotation ought to be requested next. After each request, the current classifier is incrementally updated. Unlike previous work, our approach accounts for the fact that the optimal use of manual annotation may call for a combination of labels at multiple levels of granularity (e.g., a full segmentation on some images and a present/absent flag on others). As a result, it is possible to learn more accurate category models with a lower total expenditure of manual annotation effort.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We introduce a framework for actively learning visual categories from a mixture of weakly and strongly labeled image examples. [sent-3, score-0.595]

2 We propose to allow the category learner to strategically choose what annotations it receives—based on both the expected reduction in uncertainty as well as the relative costs of obtaining each annotation. [sent-4, score-0.333]

3 Then all remaining unlabeled and weakly labeled examples are surveyed to actively determine which annotation ought to be requested next. [sent-6, score-0.932]

4 Unlike previous work, our approach accounts for the fact that the optimal use of manual annotation may call for a combination of labels at multiple levels of granularity (e.g., a full segmentation on some images and a present/absent flag on others). [sent-8, score-0.612]

5 As a result, it is possible to learn more accurate category models with a lower total expenditure of manual annotation effort. [sent-11, score-0.524]

6 The extent of an image labeling can range from a flag telling whether the object of interest is present or absent, to a full segmentation specifying the object boundary. [sent-17, score-0.387]

7 Meanwhile, the learning algorithm must be able to accommodate the multiple levels of granularity that may occur in provided image annotations, and to compute which item at which of those levels appears to be most fruitful to have labeled next (see Figure 1). [sent-25, score-0.446]

8 Useful image annotations can occur at multiple levels of granularity. [sent-28, score-0.421]

9 Left: For example, a learner may only know whether the image contains a particular object or not (top row, dotted boxes denote object is present), or it may also have segmented foregrounds (middle row), or it may have detailed outlines of object parts (bottom row). [sent-29, score-0.428]

10 The learner may only be given the noisy groups and told that each includes at least one instance of the specified class (top), or, for some groups, the individual example images may be labeled as positive or negative (bottom). [sent-31, score-0.498]

11 We propose an active learning paradigm that directs manual annotation effort to the most informative examples and levels. [sent-32, score-0.793]

12 To address this challenge, we propose a method that actively targets the learner’s requests for supervision so as to maximize the expected benefit to the category models. [sent-33, score-0.35]

13 Our method constructs an initial classifier from limited labeled data, and then considers all remaining unlabeled and weakly labeled examples to determine what annotation seems most informative to obtain. [sent-34, score-0.836]

14 Since the varying levels of annotation demand varying degrees of manual effort, our active selection process weighs the value of the information gain against the cost of actually obtaining any given annotation. [sent-35, score-0.965]

15 Our approach accounts for the fact that image annotations can exist at multiple levels of granularity: both the classifier and active selection objectives are formulated to accommodate dual-layer labels. [sent-37, score-0.773]

16 To achieve this duality for the classifier, we express the problem in the multiple instance learning (MIL) setting [9], where training examples are specified as bags of the finer granularity instances, and positive bags may contain an arbitrary number of negatives. [sent-38, score-0.892]
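
As a minimal sketch of this two-level data layout (the class name, fields, and shapes below are illustrative, not from the paper):

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class Bag:
    """A coarse-granularity training example: a set of finer-granularity instances."""
    instances: np.ndarray                          # shape (num_instances, feature_dim)
    bag_label: Optional[int] = None                # +1 / -1 at the bag level; None if unlabeled
    instance_labels: Optional[np.ndarray] = None   # per-instance labels, usually unknown


rng = np.random.default_rng(0)
# A positive bag guarantees only that at least one instance is positive;
# it may contain an arbitrary number of negatives.
pos_bag = Bag(instances=rng.normal(size=(12, 64)), bag_label=+1)
# Every instance of a negative bag is guaranteed to be negative.
neg_bag = Bag(instances=rng.normal(size=(9, 64)), bag_label=-1,
              instance_labels=-np.ones(9, dtype=int))
```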

17 To achieve the duality for the active selection, we design a decision-theoretic criterion that balances the variable costs associated with each type of annotation with the expected gain in information. [sent-39, score-0.603]

18 Essentially this allows the learner to automatically predict when the extra effort of a more precise annotation is warranted. [sent-40, score-0.423]

19 The main contribution of this work is a unified framework to actively learn categories from a mixture of weakly and strongly labeled examples. [sent-41, score-0.454]

20 We are the first to identify and address the problem of active visual category learning with multi-level annotations. [sent-42, score-0.446]

21 Not only does our active strategy learn more quickly than a random selection baseline, but for a fixed amount of manual resources, it yields more accurate models than conventional single-layer active selection strategies. [sent-44, score-0.79]

22 Recent methods have shown the possibility of learning visual patterns from unlabeled [3, 2] image collections, while other techniques aim to share or re-use knowledge across categories [10, 4]. [sent-46, score-0.458]

23 Using weakly labeled images to learn categories was proposed in [1], and several researchers have shown that MIL can accommodate the weak or noisy supervision often available for image data [11–14]. [sent-48, score-0.595]

24 Working in the other direction, some research seeks to facilitate the manual labor of image annotation, tempting users with games or nice datasets [7, 8]. [sent-49, score-0.322]

25 However, when faced with a distribution of unlabeled images, almost all existing methods for visual category learning are essentially passive, selecting points at random to label. [sent-50, score-0.391]

26 Our active selection procedure is in part inspired by this work, as it also seeks to balance the cost and utility tradeoff. [sent-55, score-0.441]

27 Recent work has considered active learning with Gaussian Process classifiers [19], and relevance feedback for video annotations [20]. [sent-56, score-0.493]

28 In contrast, we show how to form active multiple-instance learners, where constraints or labels must be sought at multiple levels of granularity. [sent-57, score-0.442]

29 Further, we introduce the notion of predicting when to “invest” the labor of more expensive image annotations so as to ultimately yield bigger benefits to the classifier. [sent-58, score-0.412]

30 Unlike any previous work, our method continually guides the annotation process to the appropriate level of supervision. [sent-59, score-0.383]

31 While an active criterion for instance-level queries is suggested in [21] and applied within an MI learner, it cannot actively select positive bags or unlabeled bags, and does not consider the cost of obtaining the labels requested. [sent-60, score-1.328]

32 The key idea is to actively determine which annotations a user should be asked to provide, and in what order. [sent-64, score-0.379]

33 We consider image collections consisting of a variety of supervisory information: some images are labeled as containing the category of interest (or not), some have both a class label and a foreground segmentation, while others have no annotations at all. [sent-65, score-0.821]

34 We derive an active learning criterion function that predicts how informative further annotation on any particular unlabeled image or region would be, while accounting for the variable expense associated with different annotation types. [sent-66, score-1.146]

35 As long as the information expected from further annotations outweighs the cost of obtaining them, our algorithm will request the next valuable label, re-train the classifier, and repeat. [sent-67, score-0.529]

36 However, the fact that image annotations can exist at multiple levels of granularity demands a learning algorithm that can encode any known labels at the levels they occur, and so MIL [9] is more applicable. [sent-73, score-0.619]

37 In MIL, the learner is instead provided with sets (bags) of patterns rather than individual patterns, and is only told that at least one member of any positive bag is truly positive, while every member of any negative bag is guaranteed to be negative. [sent-74, score-0.615]

38 MIL is well-suited for the following two image classification scenarios: • Training images are labeled as to whether they contain the category of interest, but they also contain other objects and background clutter. [sent-76, score-0.524]

39 Every image is represented by a bag of regions, each of which is characterized by its color, texture, shape, etc. [sent-77, score-0.318]

40 The goal is to predict when new image regions contain the object—that is, to learn to label regions as foreground or background. [sent-80, score-0.348]

41 We integrate our active selection method with the SVM-based MIL approach given in [22], which uses a Normalized Set Kernel (NSK) to describe bags based on the average representation of instances within them. [sent-87, score-0.766]

42 Following [23], we use the NSK mapping for positive bags only; all instances in a negative bag are treated individually as negative. [sent-88, score-0.78]
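
A small sketch of the bag mapping this describes, assuming a linear-kernel view of the NSK; the exact normalization used in [22] may differ:

```python
import numpy as np


def nsk_feature(bag_instances: np.ndarray) -> np.ndarray:
    """Map a bag to a single vector: the average of its instance features,
    normalized so that bags of different sizes are comparable."""
    avg = bag_instances.mean(axis=0)
    norm = np.linalg.norm(avg)
    return avg / norm if norm > 0 else avg


def build_training_set(positive_bags, negative_bags):
    """Following [23]: positive bags go through the NSK mapping, while every
    instance of a negative bag is added individually as its own negative."""
    X, y = [], []
    for bag in positive_bags:
        X.append(nsk_feature(bag))
        y.append(+1)
    for bag in negative_bags:
        for instance in bag:          # each row is one instance's feature vector
            X.append(instance)
            y.append(-1)
    return np.asarray(X), np.asarray(y)
```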

43 Whereas active selection criteria for traditional supervised classifiers need only identify the best instance to label next, in the MIL domain we have a more complex choice. [sent-95, score-0.485]

44 There are three possible types of request: the system can ask for a label on an instance, a label on an unlabeled bag, or for a joint labeling of all instances within a positive bag. [sent-96, score-0.673]

45 So, we must design a selection criterion that simultaneously determines which type of annotation to request, and for which example to request it. [sent-97, score-0.435]

46 Adding to the challenge, the selection process must also account for the variable costs associated with each level of annotation (e. [sent-98, score-0.452]

47 We extend the value of information (VOI) strategy proposed in [18] to enable active MIL selection, and derive a generalized value function that can accept both instances and bags. [sent-101, score-0.411]

48 This allows us to predict the information gain in a joint labeling of multiple instances at once, and thereby actively choose when it is worthwhile to expend more or less manual effort in the training process. [sent-102, score-0.641]

49 Our method continually re-evaluates the expected significance of knowing more about any unlabeled or partially labeled example, as quantified by the predicted reduction in misclassification risk plus the cost of obtaining the label. [sent-103, score-0.696]

50 We consider a collection of unlabeled data $X_U$, and labeled data $X_L$ composed of a set of positive bags $\tilde{X}_p$ and a set of negative instances $X_n$. [sent-104, score-0.896]

51 Recall that the instance labels within positively labeled bags are unknown, since each such bag contains an unknown mix of positive and negative instances. [sent-105, score-0.841]

52 Let $r_p$ denote the user-specified risk associated with misclassifying a positive example as negative, and $r_n$ denote the risk of misclassifying a negative. [sent-106, score-0.492]

53 The risk associated with the labeled data is: $\mathrm{Risk}(X_L) = \sum_{X_i \in \tilde{X}_p} r_p\,(1 - p(X_i)) + \sum_{x_i \in X_n} r_n\,p(x_i)$, (1) where $x_i$ denotes an instance and $X_i$ denotes a bag. [sent-107, score-0.431]

54 The corresponding risk for unlabeled data is: $\mathrm{Risk}(X_U) = \sum_{x_i \in X_U} \big[ r_p\,(1 - p(x_i))\,\Pr(y_i = +1 \mid x_i) + r_n\,p(x_i)\,(1 - \Pr(y_i = +1 \mid x_i)) \big]$, (2) where $y_i$ is the true label for unlabeled example $x_i$. [sent-111, score-0.785]

55 This simplifies the risk for the unlabeled data to: $\mathrm{Risk}(X_U) = \sum_{x_i \in X_U} (r_p + r_n)(1 - p(x_i))\,p(x_i)$, where again we transform unlabeled bags according to the NSK before computing the posterior. [sent-113, score-0.93]
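
As a sketch, both risk terms reduce to a few lines once the classifier's posteriors are in hand; the function names below are illustrative, and the posteriors are assumed to come from the current classifier (with bags mapped through the NSK):

```python
import numpy as np


def labeled_risk(p_pos_bags, p_neg_instances, r_p, r_n):
    """Eq. (1): risk on the labeled set X_L. `p_pos_bags` holds the posterior
    p(X_i) for each positive bag (after the NSK mapping); `p_neg_instances`
    holds the posterior p(x_i) for each labeled negative instance."""
    p_pos_bags = np.asarray(p_pos_bags)
    p_neg_instances = np.asarray(p_neg_instances)
    return r_p * np.sum(1.0 - p_pos_bags) + r_n * np.sum(p_neg_instances)


def unlabeled_risk(p_unlabeled, r_p, r_n):
    """Eq. (2) after substituting the current posterior p(x_i) for the unknown
    Pr(y_i = +1 | x_i), which collapses each term to (r_p + r_n)(1 - p)p."""
    p = np.asarray(p_unlabeled)
    return np.sum((r_p + r_n) * (1.0 - p) * p)
```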

56 If the VOI is high for a given input, then the total cost would be decreased by adding its annotation; similarly, low values indicate minor gains, and negative values indicate an annotation that costs more to obtain than it is worth. [sent-117, score-0.471]

57 Thus at each iteration, the active learner surveys all remaining unlabeled and weakly labeled examples, computes their VOI, and requests the label for the example with the maximal value. [sent-118, score-0.867]
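
A runnable sketch of this VOI computation for the simplest request type, a single unlabeled instance, using a toy logistic-regression stand-in for the paper's SVM-based MIL classifier; the names and the retrain-per-hypothesis strategy are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def total_risk(clf, X_L, y_L, X_U, r_p, r_n):
    """Risk(X_L) + Risk(X_U) under the current classifier (eqs. (1)-(2), with
    the posterior substituted for the unknown true-label probability)."""
    p_L = clf.predict_proba(X_L)[:, 1]            # Pr(y = +1 | x)
    p_U = clf.predict_proba(X_U)[:, 1]
    risk_L = r_p * np.sum(1 - p_L[y_L == 1]) + r_n * np.sum(p_L[y_L == -1])
    risk_U = np.sum((r_p + r_n) * (1 - p_U) * p_U)
    return risk_L + risk_U


def voi_instance(i, clf, X_L, y_L, X_U, r_p, r_n, cost_i):
    """VOI of requesting the label of unlabeled instance X_U[i]: the expected
    reduction in total risk minus the annotation cost C(z)."""
    current = total_risk(clf, X_L, y_L, X_U, r_p, r_n)
    p_i = clf.predict_proba(X_U[i : i + 1])[0, 1]
    X_rest = np.delete(X_U, i, axis=0)
    expected = 0.0
    for label, prob in ((+1, p_i), (-1, 1 - p_i)):  # expectation over the answer
        X2, y2 = np.vstack([X_L, X_U[i]]), np.append(y_L, label)
        clf2 = LogisticRegression().fit(X2, y2)     # retrain under this hypothesis
        expected += prob * total_risk(clf2, X2, y2, X_rest, r_p, r_n)
    return current - expected - cost_i
```

Bag-level and joint-labeling requests are scored the same way after mapping the bag through the NSK; the loop stops when the maximal VOI turns negative or the budget is exhausted.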

58 Secondly, for active selection to proceed at multiple levels, the VOI must act as an overloaded function: we need to be able to evaluate the VOI when z is an unlabeled instance, an unlabeled bag, or a weakly labeled example. [sent-121, score-1.24]

59 A weakly labeled example here means a positive bag containing an unknown number of negative instances. [sent-123, score-0.337]

60 Similarly, if z is an unlabeled bag, the label assignment can only be positive or negative, and we compute the probability of either label via the NSK mapping. [sent-127, score-0.462]

61 For positive bag z, the expected total risk is then the average risk computed over all S generated samples: $E = \frac{1}{S} \sum_{k=1}^{S} \mathrm{Risk}\big(\{X_L \setminus z\} \cup \{z_1^{(a_1^k)}, \ldots, z_{|z|}^{(a_{|z|}^k)}\}\big)$. [sent-139, score-0.609]

62 To compute the risk on XL for each fixed sample we simply remove the weakly labeled positive bag z, and insert its instances as labeled positives and negatives, as dictated by the sample’s label assignment. [sent-146, score-1.008]
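
A minimal sketch of this sampling estimate, assuming a hypothetical `risk_given_assignment` callback that performs the remove-and-relabel retraining just described; the rejection sampler is a simplification, and the paper's sampling scheme may differ:

```python
import numpy as np


def sampled_assignments(p_instances, S, rng):
    """Draw S joint label assignments for the instances of a weakly labeled
    positive bag z, using the current per-instance posteriors and rejecting
    all-negative draws (a positive bag must hold at least one positive)."""
    assignments = []
    while len(assignments) < S:
        a = np.where(rng.random(len(p_instances)) < p_instances, +1, -1)
        if np.any(a == +1):
            assignments.append(a)
    return assignments


def expected_risk_full_labeling(p_instances, risk_given_assignment, S=25, seed=0):
    """E = (1/S) * sum_k Risk(...): for each sampled assignment a^k, remove bag z
    from X_L, insert its instances as labeled positives/negatives per a^k, and
    average the resulting risks. `risk_given_assignment` is a hypothetical
    callback that retrains the classifier and returns that risk."""
    rng = np.random.default_rng(seed)
    samples = sampled_assignments(np.asarray(p_instances), S, rng)
    return sum(risk_given_assignment(a) for a in samples) / S
```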

63 To complete our active selection function, we must define the cost function C(z), which maps an input to the amount of effort required to annotate it. [sent-148, score-0.532]

64 We can now actively select which examples and what type of annotation to request, so as to maximize the expected benefit to the category model relative to the manual effort expended. [sent-152, score-0.768]

65 After each annotation is added and the classifier is revised accordingly, the VOI is evaluated on the remaining unlabeled and weakly labeled data in order to choose the next annotation. [sent-153, score-0.671]

66 This process repeats either until the available amount of manual resources is exhausted, or, alternatively, until the maximum VOI is negative, indicating further annotations are not worth the effort. [sent-154, score-0.34]

67 We provide comparisons with single-level active learning (with both the method of [21], and where the same VOI function is used but is restricted to actively label only instances), as well as passive learning. [sent-158, score-0.567]

68 To determine how much more labeling a positive bag costs relative to labeling an instance, we performed user studies for both of the scenarios evaluated. [sent-160, score-0.508]

69 For the first scenario, users were shown oversegmented images and had to click on all the segments belonging to the object of interest. [sent-161, score-0.357]

70 In the second, users were shown a page of downloaded Web images and had to click on only those images containing the object of interest. [sent-162, score-0.426]

71 For segmentation, obtaining labels on all positive segments took users on average four times as much time as setting a flag. [sent-164, score-0.336]

72 For the Web images, it took users on average 6.3 times as long to identify all positives within bags of 25 noisy images. [sent-166, score-0.33]

73 Thus we set the cost of labeling a positive bag to 4 and 6.3 for the two scenarios, respectively. [sent-167, score-0.474]

74 These values agree with the average sparsity of the two datasets: the Google set contains about 30% true positive images while the SIVAL set contains 10% positive segments per image. [sent-169, score-0.324]
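
Taken together, these measurements suggest a concrete cost function C(z); in the sketch below, the function name and the unit cost assigned to flags and instance labels are assumptions based on the ratios reported above:

```python
def annotation_cost(request_type: str, scenario: str) -> float:
    """Illustrative cost function C(z): a present/absent flag or a single
    instance label costs 1 unit, while fully labeling a positive bag costs
    4 units (SIVAL segments) or 6.3 units (Google image bags)."""
    if request_type in ("instance_label", "bag_flag"):
        return 1.0
    if request_type == "full_bag_labeling":
        return {"sival": 4.0, "google": 6.3}[scenario]
    raise ValueError(f"unknown request type: {request_type}")
```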

75 Thus each image is a bag containing both positive and negative instances (segments). [sent-175, score-0.59]

76 Labels on the training data specify whether the object of interest is present or not, but the segments themselves are unlabeled (though the dataset does provide ground truth segment labels for evaluation purposes). [sent-176, score-0.434]

77 Our active learning method must choose its queries from among 10 positive bags (complete segmentations), 300 unlabeled instances (individual segments), and about 150 unlabeled bags (present/absent flag on the image). [sent-178, score-1.558]

78 All methods are given a fixed amount of manual effort (40 cost units) and are allowed to make a sequence of choices until that cost is used up. [sent-182, score-0.443]

79 Recall that a cost of 40 could correspond, for example, to obtaining labels on 40/1 = 40 instances or 40/4 = 10 positive bags, or some mixture thereof. [sent-183, score-0.494]

80 Figure 2(b) summarizes the learning curves for all categories, in terms of the average improvement at a fixed point midway through the active learning phase. [sent-184, score-0.311]

81 This is because single-level active selection can only make a sequence of greedy choices while our approach can jointly select bags of instances to query. [sent-190, score-0.766]

82 (b) Summary of the average improvement over all categories after half of the annotation cost is used. [sent-200, score-0.454]

83 For the same amount of annotation cost, our multi-level approach learns more quickly than both traditional single-level active selection as well as both forms of random selection. [sent-201, score-0.613]

84 Our method tends to request complete segmentations or image labels early on, followed by queries on unlabeled segments later on. [sent-218, score-0.62]

85 For both methods, the percent gains decrease with increasing cost; this makes sense, since eventually (for enough manual effort) a passive learner can begin to catch up to an active learner. [sent-228, score-0.566]

86 Actively learning visual categories from Web images: next we evaluate the scenario where each positive bag is a collection of images, among which only a portion are actually positive instances for the class of interest. [sent-230, score-0.535]

87 We show how to boost accuracy with these types of learners while leveraging minimal manual annotation effort. [sent-236, score-0.377]

88 To re-use the publicly available dataset from [5], we randomly group Google images into bags of size 25 to simulate multiple searches as in [11], yielding about 30 bags per category. [sent-237, score-0.752]

89 We randomly select 10 positive and 10 negative bags (from all other categories) to serve as the initial training data for each class. [sent-238, score-0.431]

90 The rest of the positive bags of a class are used to construct the test sets. [sent-239, score-0.407]

91 We represent each image as a bag of “visual words”, and compare examples with a linear kernel. [sent-241, score-0.344]

92 Our method makes active queries among 10 positive bags (complete labels) and about 250 unlabeled instances (images). [sent-242, score-1.043]

93 There are no unlabeled bags in this scenario, since every downloaded batch is associated with a keyword. [sent-243, score-0.555]

94 Our multi-level active approach outperforms both random selection strategies and traditional single-level active selection. [sent-247, score-0.621]

95 Figure 4 shows the learning curves and a summary of our active learner’s performance. [sent-248, score-0.311]

96 On this dataset, random selection with multi-level annotations actually outperforms random selection on single-level annotations (see the boxplots). [sent-250, score-0.556]

97 We attribute this to the distribution of bags/instances: on average more positive bags were randomly chosen, and each addition led to a larger increase in the AUROC. [sent-251, score-0.407]

98 In conclusion, our approach addresses a new problem: how to actively choose not only which instance to label, but also what type of image annotation to acquire in a cost-effective way. [sent-252, score-0.576]

99 Our method is general enough to accept other types of annotations or classifiers, as long as the cost and risk functions can be appropriately defined. [sent-253, score-0.503]

100 Comparisons with passive learning methods and single-level active learning show that our multi-level method is better-suited for building classifiers with minimal human intervention. [sent-254, score-0.328]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('bags', 0.301), ('active', 0.269), ('mil', 0.266), ('annotation', 0.261), ('annotations', 0.224), ('unlabeled', 0.214), ('multi', 0.213), ('bag', 0.207), ('xl', 0.172), ('voi', 0.166), ('risk', 0.161), ('actively', 0.155), ('instances', 0.142), ('request', 0.12), ('category', 0.119), ('cost', 0.118), ('sival', 0.116), ('manual', 0.116), ('images', 0.113), ('image', 0.111), ('labeled', 0.109), ('xu', 0.103), ('level', 0.095), ('effort', 0.091), ('labels', 0.087), ('weakly', 0.087), ('label', 0.084), ('nsk', 0.083), ('object', 0.082), ('positive', 0.08), ('classi', 0.076), ('categories', 0.075), ('er', 0.072), ('learner', 0.071), ('labeling', 0.069), ('obtaining', 0.067), ('auroc', 0.066), ('granularity', 0.062), ('passive', 0.059), ('visual', 0.058), ('ag', 0.056), ('selection', 0.054), ('ought', 0.053), ('segments', 0.051), ('gains', 0.051), ('users', 0.051), ('negative', 0.05), ('vijayanarasimhan', 0.05), ('grauman', 0.05), ('google', 0.05), ('roc', 0.05), ('levels', 0.049), ('instance', 0.049), ('pr', 0.049), ('web', 0.048), ('labor', 0.044), ('segmentation', 0.043), ('supervision', 0.043), ('curves', 0.042), ('costs', 0.042), ('scenarios', 0.041), ('downloaded', 0.04), ('selections', 0.04), ('xi', 0.04), ('multiple', 0.037), ('queries', 0.037), ('contain', 0.036), ('sgn', 0.036), ('foreground', 0.035), ('ers', 0.034), ('expensive', 0.033), ('craven', 0.033), ('finer', 0.033), ('kapoor', 0.033), ('oversegmented', 0.033), ('requests', 0.033), ('zmm', 0.033), ('rp', 0.032), ('gain', 0.031), ('cluttered', 0.03), ('informative', 0.03), ('misclassifying', 0.029), ('miu', 0.029), ('traditional', 0.029), ('accommodate', 0.029), ('positives', 0.029), ('learn', 0.028), ('area', 0.027), ('regions', 0.027), ('boxplots', 0.027), ('requested', 0.027), ('click', 0.027), ('continually', 0.027), ('class', 0.026), ('attribute', 0.026), ('examples', 0.026), ('freeman', 0.025), ('selective', 0.025), ('ray', 0.025), ('austin', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999905 142 nips-2008-Multi-Level Active Prediction of Useful Image Annotations for Recognition

Author: Sudheendra Vijayanarasimhan, Kristen Grauman

Abstract: We introduce a framework for actively learning visual categories from a mixture of weakly and strongly labeled image examples. We propose to allow the category learner to strategically choose what annotations it receives—based on both the expected reduction in uncertainty as well as the relative costs of obtaining each annotation. We construct a multiple-instance discriminative classifier based on the initial training data. Then all remaining unlabeled and weakly labeled examples are surveyed to actively determine which annotation ought to be requested next. After each request, the current classifier is incrementally updated. Unlike previous work, our approach accounts for the fact that the optimal use of manual annotation may call for a combination of labels at multiple levels of granularity (e.g., a full segmentation on some images and a present/absent flag on others). As a result, it is possible to learn more accurate category models with a lower total expenditure of manual annotation effort.

2 0.21546483 101 nips-2008-Human Active Learning

Author: Rui M. Castro, Charles Kalish, Robert Nowak, Ruichen Qian, Tim Rogers, Xiaojin Zhu

Abstract: We investigate a topic at the interface of machine learning and cognitive science. Human active learning, where learners can actively query the world for information, is contrasted with passive learning from random examples. Furthermore, we compare human active learning performance with predictions from statistical learning theory. We conduct a series of human category learning experiments inspired by a machine learning task for which active and passive learning error bounds are well understood, and dramatically distinct. Our results indicate that humans are capable of actively selecting informative queries, and in doing so learn better and faster than if they are given random training data, as predicted by learning theory. However, the improvement over passive learning is not as dramatic as that achieved by machine active learning algorithms. To the best of our knowledge, this is the first quantitative study comparing human category learning in active versus passive settings.

3 0.15683903 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data

Author: Xuming He, Richard S. Zemel

Abstract: Extensive labeled data for image annotation systems, which learn to assign class labels to image regions, is difficult to obtain. We explore a hybrid model framework for utilizing partially labeled data that integrates a generative topic model for image appearance with discriminative label prediction. We propose three alternative formulations for imposing a spatial smoothness prior on the image labels. Tests of the new models and some baseline approaches on three real image datasets demonstrate the effectiveness of incorporating the latent structure.

4 0.14808832 205 nips-2008-Semi-supervised Learning with Weakly-Related Unlabeled Data : Towards Better Text Categorization

Author: Liu Yang, Rong Jin, Rahul Sukthankar

Abstract: The cluster assumption is exploited by most semi-supervised learning (SSL) methods. However, if the unlabeled data is merely weakly related to the target classes, it becomes questionable whether driving the decision boundary to the low density regions of the unlabeled data will help the classification. In such case, the cluster assumption may not be valid; and consequently how to leverage this type of unlabeled data to enhance the classification accuracy becomes a challenge. We introduce “Semi-supervised Learning with Weakly-Related Unlabeled Data” (SSLW), an inductive method that builds upon the maximum-margin approach, towards a better usage of weakly-related unlabeled information. Although the SSLW could improve a wide range of classification tasks, in this paper, we focus on text categorization with a small training pool. The key assumption behind this work is that, even with different topics, the word usage patterns across different corpora tend to be consistent. To this end, SSLW estimates the optimal word-correlation matrix that is consistent with both the co-occurrence information derived from the weakly-related unlabeled documents and the labeled documents. For empirical evaluation, we present a direct comparison with a number of state-of-the-art methods for inductive semi-supervised learning and text categorization. We show that SSLW results in a significant improvement in categorization accuracy, equipped with a small training set and an unlabeled resource that is weakly related to the test domain.

5 0.14140566 245 nips-2008-Unlabeled data: Now it helps, now it doesn't

Author: Aarti Singh, Robert Nowak, Xiaojin Zhu

Abstract: Empirical evidence shows that in favorable situations semi-supervised learning (SSL) algorithms can capitalize on the abundance of unlabeled training data to improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations unlabeled data do not seem to help. Recent attempts at theoretically characterizing SSL gains only provide a partial and sometimes apparently conflicting explanations of whether, and to what extent, unlabeled data can help. In this paper, we attempt to bridge the gap between the practice and theory of semi-supervised learning. We develop a finite sample analysis that characterizes the value of unlabeled data and quantifies the performance improvement of SSL compared to supervised learning. We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.

6 0.13571751 246 nips-2008-Unsupervised Learning of Visual Sense Models for Polysemous Words

7 0.13530883 130 nips-2008-MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features

8 0.13158678 120 nips-2008-Learning the Semantic Correlation: An Alternative Way to Gain from Unlabeled Text

9 0.12846683 123 nips-2008-Linear Classification and Selective Sampling Under Low Noise Conditions

10 0.1283213 42 nips-2008-Cascaded Classification Models: Combining Models for Holistic Scene Understanding

11 0.12559442 6 nips-2008-A ``Shape Aware'' Model for semi-supervised Learning of Objects and its Context

12 0.12023581 191 nips-2008-Recursive Segmentation and Recognition Templates for 2D Parsing

13 0.11609368 208 nips-2008-Shared Segmentation of Natural Scenes Using Dependent Pitman-Yor Processes

14 0.10671686 241 nips-2008-Transfer Learning by Distribution Matching for Targeted Advertising

15 0.094284914 207 nips-2008-Shape-Based Object Localization for Descriptive Classification

16 0.091500856 56 nips-2008-Deep Learning with Kernel Regularization for Visual Recognition

17 0.087489888 242 nips-2008-Translated Learning: Transfer Learning across Different Feature Spaces

18 0.086499386 21 nips-2008-An Homotopy Algorithm for the Lasso with Online Observations

19 0.083908305 193 nips-2008-Regularized Co-Clustering with Dual Supervision

20 0.082332842 194 nips-2008-Regularized Learning with Networks of Features


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.241), (1, -0.168), (2, 0.038), (3, -0.19), (4, -0.105), (5, 0.069), (6, 0.007), (7, -0.103), (8, 0.034), (9, 0.102), (10, 0.064), (11, -0.001), (12, -0.071), (13, -0.273), (14, -0.009), (15, 0.007), (16, -0.084), (17, 0.074), (18, 0.069), (19, -0.02), (20, 0.05), (21, 0.007), (22, 0.084), (23, -0.032), (24, -0.03), (25, -0.09), (26, 0.007), (27, -0.08), (28, 0.02), (29, -0.056), (30, 0.013), (31, 0.116), (32, -0.154), (33, -0.135), (34, -0.017), (35, 0.062), (36, -0.046), (37, -0.006), (38, 0.072), (39, -0.057), (40, -0.066), (41, -0.016), (42, -0.062), (43, -0.018), (44, 0.123), (45, -0.022), (46, 0.03), (47, -0.057), (48, 0.034), (49, -0.046)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97160852 142 nips-2008-Multi-Level Active Prediction of Useful Image Annotations for Recognition

Author: Sudheendra Vijayanarasimhan, Kristen Grauman

Abstract: We introduce a framework for actively learning visual categories from a mixture of weakly and strongly labeled image examples. We propose to allow the category learner to strategically choose what annotations it receives—based on both the expected reduction in uncertainty as well as the relative costs of obtaining each annotation. We construct a multiple-instance discriminative classifier based on the initial training data. Then all remaining unlabeled and weakly labeled examples are surveyed to actively determine which annotation ought to be requested next. After each request, the current classifier is incrementally updated. Unlike previous work, our approach accounts for the fact that the optimal use of manual annotation may call for a combination of labels at multiple levels of granularity (e.g., a full segmentation on some images and a present/absent flag on others). As a result, it is possible to learn more accurate category models with a lower total expenditure of manual annotation effort.

2 0.70910406 101 nips-2008-Human Active Learning

Author: Rui M. Castro, Charles Kalish, Robert Nowak, Ruichen Qian, Tim Rogers, Xiaojin Zhu

Abstract: We investigate a topic at the interface of machine learning and cognitive science. Human active learning, where learners can actively query the world for information, is contrasted with passive learning from random examples. Furthermore, we compare human active learning performance with predictions from statistical learning theory. We conduct a series of human category learning experiments inspired by a machine learning task for which active and passive learning error bounds are well understood, and dramatically distinct. Our results indicate that humans are capable of actively selecting informative queries, and in doing so learn better and faster than if they are given random training data, as predicted by learning theory. However, the improvement over passive learning is not as dramatic as that achieved by machine active learning algorithms. To the best of our knowledge, this is the first quantitative study comparing human category learning in active versus passive settings.

3 0.62882113 205 nips-2008-Semi-supervised Learning with Weakly-Related Unlabeled Data : Towards Better Text Categorization

Author: Liu Yang, Rong Jin, Rahul Sukthankar

Abstract: The cluster assumption is exploited by most semi-supervised learning (SSL) methods. However, if the unlabeled data is merely weakly related to the target classes, it becomes questionable whether driving the decision boundary to the low density regions of the unlabeled data will help the classification. In such case, the cluster assumption may not be valid; and consequently how to leverage this type of unlabeled data to enhance the classification accuracy becomes a challenge. We introduce “Semi-supervised Learning with Weakly-Related Unlabeled Data” (SSLW), an inductive method that builds upon the maximum-margin approach, towards a better usage of weakly-related unlabeled information. Although the SSLW could improve a wide range of classification tasks, in this paper, we focus on text categorization with a small training pool. The key assumption behind this work is that, even with different topics, the word usage patterns across different corpora tend to be consistent. To this end, SSLW estimates the optimal word-correlation matrix that is consistent with both the co-occurrence information derived from the weakly-related unlabeled documents and the labeled documents. For empirical evaluation, we present a direct comparison with a number of state-of-the-art methods for inductive semi-supervised learning and text categorization. We show that SSLW results in a significant improvement in categorization accuracy, equipped with a small training set and an unlabeled resource that is weakly related to the test domain.

4 0.62397122 42 nips-2008-Cascaded Classification Models: Combining Models for Holistic Scene Understanding

Author: Geremy Heitz, Stephen Gould, Ashutosh Saxena, Daphne Koller

Abstract: One of the original goals of computer vision was to fully understand a natural scene. This requires solving several sub-problems simultaneously, including object detection, region labeling, and geometric reasoning. The last few decades have seen great progress in tackling each of these problems in isolation. Only recently have researchers returned to the difficult task of considering them jointly. In this work, we consider learning a set of related models such that they both solve their own problem and help each other. We develop a framework called Cascaded Classification Models (CCM), where repeated instantiations of these classifiers are coupled by their input/output variables in a cascade that improves performance at each level. Our method requires only a limited “black box” interface with the models, allowing us to use very sophisticated, state-of-the-art classifiers without having to look under the hood. We demonstrate the effectiveness of our method on a large set of natural images by combining the subtasks of scene categorization, object detection, multiclass image segmentation, and 3d reconstruction.

5 0.6194762 130 nips-2008-MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features

Author: Tae-kyun Kim, Roberto Cipolla

Abstract: We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and features to be co-clustered. Co-clustering is performed in a way that maximises discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint-clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each boosting classifier is an aggregation of weak-learners, i.e. simple visual features. The obtained classifiers are useful for object detection tasks which exhibit multimodalities, e.g. multi-category and multi-view object detection tasks. Experiments on a set of pedestrian images and a face data set demonstrate that the method yields intuitive image clusters with associated features and is much superior to conventional boosting classifiers in object detection tasks.

6 0.60902834 128 nips-2008-Look Ma, No Hands: Analyzing the Monotonic Feature Abstraction for Text Classification

7 0.59982622 5 nips-2008-A Transductive Bound for the Voted Classifier with an Application to Semi-supervised Learning

8 0.58844632 246 nips-2008-Unsupervised Learning of Visual Sense Models for Polysemous Words

9 0.54705727 245 nips-2008-Unlabeled data: Now it helps, now it doesn't

10 0.53745961 41 nips-2008-Breaking Audio CAPTCHAs

11 0.50085765 123 nips-2008-Linear Classification and Selective Sampling Under Low Noise Conditions

12 0.49277985 207 nips-2008-Shape-Based Object Localization for Descriptive Classification

13 0.48896018 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data

14 0.48843724 36 nips-2008-Beyond Novelty Detection: Incongruent Events, when General and Specific Classifiers Disagree

15 0.46317396 191 nips-2008-Recursive Segmentation and Recognition Templates for 2D Parsing

16 0.41296881 120 nips-2008-Learning the Semantic Correlation: An Alternative Way to Gain from Unlabeled Text

17 0.41225669 6 nips-2008-A ``Shape Aware'' Model for semi-supervised Learning of Objects and its Context

18 0.4103905 15 nips-2008-Adaptive Martingale Boosting

19 0.39750585 148 nips-2008-Natural Image Denoising with Convolutional Networks

20 0.39503649 242 nips-2008-Translated Learning: Transfer Learning across Different Feature Spaces


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.087), (7, 0.063), (12, 0.05), (15, 0.01), (28, 0.137), (57, 0.077), (59, 0.017), (63, 0.019), (65, 0.238), (71, 0.03), (77, 0.039), (78, 0.021), (83, 0.119)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81502783 142 nips-2008-Multi-Level Active Prediction of Useful Image Annotations for Recognition

Author: Sudheendra Vijayanarasimhan, Kristen Grauman

Abstract: We introduce a framework for actively learning visual categories from a mixture of weakly and strongly labeled image examples. We propose to allow the category learner to strategically choose what annotations it receives—based on both the expected reduction in uncertainty as well as the relative costs of obtaining each annotation. We construct a multiple-instance discriminative classifier based on the initial training data. Then all remaining unlabeled and weakly labeled examples are surveyed to actively determine which annotation ought to be requested next. After each request, the current classifier is incrementally updated. Unlike previous work, our approach accounts for the fact that the optimal use of manual annotation may call for a combination of labels at multiple levels of granularity (e.g., a full segmentation on some images and a present/absent flag on others). As a result, it is possible to learn more accurate category models with a lower total expenditure of manual annotation effort.

2 0.77230918 243 nips-2008-Understanding Brain Connectivity Patterns during Motor Imagery for Brain-Computer Interfacing

Author: Moritz Grosse-wentrup

Abstract: EEG connectivity measures could provide a new type of feature space for inferring a subject’s intention in Brain-Computer Interfaces (BCIs). However, very little is known on EEG connectivity patterns for BCIs. In this study, EEG connectivity during motor imagery (MI) of the left and right hand is investigated in a broad frequency range across the whole scalp by combining Beamforming with Transfer Entropy and taking into account possible volume conduction effects. Observed connectivity patterns indicate that modulation intentionally induced by MI is strongest in the γ-band, i.e., above 35 Hz. Furthermore, modulation between MI and rest is found to be more pronounced than between MI of different hands. This is in contrast to results on MI obtained with bandpower features, and might provide an explanation for the so far only moderate success of connectivity features in BCIs. It is concluded that future studies on connectivity based BCIs should focus on high frequency bands and consider experimental paradigms that maximally vary cognitive demands between conditions.

3 0.66948444 194 nips-2008-Regularized Learning with Networks of Features

Author: Ted Sandler, John Blitzer, Partha P. Talukdar, Lyle H. Ungar

Abstract: For many supervised learning problems, we possess prior knowledge about which features yield similar information about the target variable. In predicting the topic of a document, we might know that two words are synonyms, and when performing image recognition, we know which pixels are adjacent. Such synonymous or neighboring features are near-duplicates and should be expected to have similar weights in an accurate model. Here we present a framework for regularized learning when one has prior knowledge about which features are expected to have similar and dissimilar weights. The prior knowledge is encoded as a network whose vertices are features and whose edges represent similarities and dissimilarities between them. During learning, each feature’s weight is penalized by the amount it differs from the average weight of its neighbors. For text classification, regularization using networks of word co-occurrences outperforms manifold learning and compares favorably to other recently proposed semi-supervised learning methods. For sentiment analysis, feature networks constructed from declarative human knowledge significantly improve prediction accuracy.

4 0.6689921 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data

Author: Xuming He, Richard S. Zemel

Abstract: Extensive labeled data for image annotation systems, which learn to assign class labels to image regions, is difficult to obtain. We explore a hybrid model framework for utilizing partially labeled data that integrates a generative topic model for image appearance with discriminative label prediction. We propose three alternative formulations for imposing a spatial smoothness prior on the image labels. Tests of the new models and some baseline approaches on three real image datasets demonstrate the effectiveness of incorporating the latent structure.

5 0.6647144 95 nips-2008-Grouping Contours Via a Related Image

Author: Praveen Srinivasan, Liming Wang, Jianbo Shi

Abstract: Contours have been established in the biological and computer vision literature as a compact yet descriptive representation of object shape. While individual contours provide structure, they lack the large spatial support of region segments (which lack internal structure). We present a method for further grouping of contours in an image using their relationship to the contours of a second, related image. Stereo, motion, and similarity all provide cues that can aid this task; contours that have similar transformations relating them to their matching contours in the second image likely belong to a single group. To find matches for contours, we rely only on shape, which applies directly to all three modalities without modification, in contrast to the specialized approaches developed for each independently. Visually salient contours are extracted in each image, along with a set of candidate transformations for aligning subsets of them. For each transformation, groups of contours with matching shape across the two images are identified to provide a context for evaluating matches of individual contour points across the images. The resulting contexts of contours are used to perform a final grouping on contours in the original image while simultaneously finding matches in the related image, again by shape matching. We demonstrate grouping results on image pairs consisting of stereo, motion, and similar images. Our method also produces qualitatively better results against a baseline method that does not use the inferred contexts.

6 0.66341257 120 nips-2008-Learning the Semantic Correlation: An Alternative Way to Gain from Unlabeled Text

7 0.66261297 245 nips-2008-Unlabeled data: Now it helps, now it doesn't

8 0.6625011 42 nips-2008-Cascaded Classification Models: Combining Models for Holistic Scene Understanding

9 0.66232097 91 nips-2008-Generative and Discriminative Learning with Unknown Labeling Bias

10 0.66163075 32 nips-2008-Bayesian Kernel Shaping for Learning Control

11 0.6613512 205 nips-2008-Semi-supervised Learning with Weakly-Related Unlabeled Data : Towards Better Text Categorization

12 0.66054362 79 nips-2008-Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

13 0.65639079 14 nips-2008-Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models

14 0.65441835 130 nips-2008-MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features

15 0.65193117 62 nips-2008-Differentiable Sparse Coding

16 0.65156245 26 nips-2008-Analyzing human feature learning as nonparametric Bayesian inference

17 0.65051192 202 nips-2008-Robust Regression and Lasso

18 0.64999044 75 nips-2008-Estimating vector fields using sparse basis field expansions

19 0.64987427 176 nips-2008-Partially Observed Maximum Entropy Discrimination Markov Networks

20 0.64902568 143 nips-2008-Multi-label Multiple Kernel Learning