cvpr cvpr2013 cvpr2013-239 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Paul Bodesheim, Alexander Freytag, Erik Rodner, Michael Kemmler, Joachim Denzler
Abstract: Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and ImageNet. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.
Reference: text
sentIndex sentText sentNum sentScore
1 We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. [sent-4, score-1.513]
2 In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. [sent-6, score-0.667]
3 This subspace is called the null space of the training data. [sent-7, score-0.679]
4 To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. [sent-8, score-1.258]
5 Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. [sent-9, score-0.647]
6 Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and ImageNet. [sent-10, score-0.713]
7 The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods. [sent-11, score-1.327]
8 Novelty detection in the null space of a three-class example: training samples of the known classes bonsai, lemon, and vase are each projected to a single point (colored dots). [sent-17, score-1.003]
9 Despite its importance, however, novelty detection is an often neglected part in visual recognition systems. [sent-21, score-0.713]
10 The definition of novelty detection can be summarized as follows. [sent-22, score-0.713]
11 Based on a fixed set of training samples from a fixed number of categories, novelty detection is a binary decision task to determine for each test sample whether it belongs to one of the known categories or not. [sent-23, score-1.053]
12 A common assumption for novelty detection is that in feature space, samples occurring far away from the training data most likely belong to a new category. [sent-24, score-0.743]
13 However, we assume that objects of new categories occur far away from the training data in the null space. [sent-25, score-0.668]
14 As a consequence, a whole class is represented as a single point and we can directly use distances between the projection of a test sample and the class representations to obtain a novelty measure. [sent-27, score-1.001]
15 An example of our null space approach using three categories of the ImageNet dataset [3] is shown in Figure 1. [sent-28, score-0.652]
16 In the null space, test samples of known categories have small distances to the corresponding class representations. [sent-29, score-0.92]
17 Related work on novelty detection mainly focuses on modeling the distribution of a single class with arbitrarily complex models (see Sect. [sent-32, score-0.855]
18 With our proposed approach, we circumvent the estimation of complex class distributions by totally removing the intra-class variances using null space projections. [sent-34, score-0.709]
19 In contrast, our novelty detection approach yields a score obtained from a single subspace computed jointly for all known categories. [sent-37, score-0.895]
20 We provide a method for novelty detection where we build on the Null Foley-Sammon transform (NFST) [7] due to its inherent properties explained later in this paper. [sent-39, score-0.745]
21 With this transform, we are able to model all known training classes jointly and obtain a single novelty score for multiple classes allowing for joint multi-class novelty detection. [sent-40, score-1.737]
22 We are not aware of any existing method that is able to perform multiclass novelty detection with a single model. [sent-41, score-0.786]
23 Since our approach is based on the theory of null spaces, which is not widely used in our community, we give a detailed review of null space methods and a kernelization strategy in Sect. [sent-44, score-1.209]
24 Our multi-class novelty detection approach as well as the derived one-class classification method using null space methods is explained in Sect. [sent-46, score-1.334]
25 An overview of related work on novelty detection is given in Sect. [sent-48, score-0.713]
26 5 showing the suitability of null space methods for multi-class novelty detection. [sent-51, score-1.24]
27 Reviewing null space methods: In the following, we review NFST in detail, since it lies at the core of our approach and has not been widely used so far. [sent-54, score-0.593]
28 Our resulting novelty detection method based on null spaces is presented in Sect. [sent-55, score-1.3]
29 NFST is limited to problems with a small data matrix X ∈ IRD×N. (Figure: mapping from the input space (left) to the null space (right), adapted from [7].) [sent-68, score-0.626]
30 ϕᵀSwϕ = 0 (4) and ϕᵀSbϕ > 0 (5). In [7], such a ϕ is called a null projection direction. [sent-93, score-0.611]
31 To additionally guarantee (6), we first define the null spaces of the matrices St and Sw: Zt = {z ∈ IRD | St z = 0} (7) and Zw = {z ∈ IRD | Sw z = 0} (8), with orthogonal complements Zt⊥ and Zw⊥. [sent-105, score-0.63]
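To make the relationship explicit (a hedged restatement on our part, assuming the standard NFST construction of [7]), every nonzero direction in the intersection of Zw with the orthogonal complement of Zt satisfies both conditions:

```latex
\varphi \in Z_w \cap Z_t^{\perp},\ \varphi \neq 0
\;\Longrightarrow\;
\varphi^{\top} S_w \varphi = 0
\quad\text{and}\quad
\varphi^{\top} S_t \varphi > 0 .
```

With the usual decomposition St = Sw + Sb of the total scatter, the between-class condition ϕᵀSbϕ > 0 then follows immediately.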
32 From the solutions of problem (11), we can compute null projection directions ϕ(1), ..., ϕ(C−1). [sent-154, score-0.635]
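The following minimal numpy sketch illustrates how such directions can be obtained in the linear case; it is our illustration of the idea rather than the exact algorithm of [7], and the function name and tolerance handling are our own choices:

```python
import numpy as np

def nfst_directions(X, y, tol=1e-10):
    """Sketch: find directions phi with phi^T Sw phi = 0 and
    phi^T St phi > 0, i.e. phi in null(Sw) restricted to range(St).
    X: (N, D) data matrix, y: (N,) integer class labels."""
    N, D = X.shape

    # Within-class scatter Sw from per-class centered samples.
    Sw = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc

    # Orthonormal basis Q of range(St), the span of the globally
    # centered samples (rank-revealing via SVD).
    Xg = (X - X.mean(axis=0)).T                  # (D, N)
    U, s, _ = np.linalg.svd(Xg, full_matrices=False)
    Q = U[:, s > tol * s.max()]                  # (D, r)

    # Eigenvectors of Sw restricted to that basis with (near-)zero
    # eigenvalue have zero within-class scatter but positive total
    # scatter; generically there are C - 1 of them (needs D >= N).
    w, V = np.linalg.eigh(Q.T @ Sw @ Q)
    Phi = Q @ V[:, w < tol * max(w.max(), 1.0)]  # columns phi^(i)
    return Phi
```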
33 Similar to (10), we calculate C − 1 null projection directions ϕ(1), ..., ϕ(C−1), [sent-211, score-0.658]
34 but using coefficient vectors instead of directions; the coefficients for the null projection directions are then obtained analogously. [sent-214, score-0.635]
35 A test sample x∗ is then mapped to its projection t∗ in the null space, where k∗ contains values of the kernel function calculated between x∗ and all N training samples. [sent-224, score-0.66]
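A sketch of this test-time mapping follows; the coefficient matrix A and the helper name are our assumptions, reflecting that in the kernelized setting each null projection direction is an expansion over the training samples:

```python
import numpy as np

def knfst_project(A, kernel, X_train, x_star):
    """Map a test sample into the learned null space.
    A: assumed (N, C-1) matrix of expansion coefficients, one column
       per null projection direction.
    k_star: kernel values between x_star and all N training samples."""
    k_star = np.array([kernel(x_i, x_star) for x_i in X_train])
    return A.T @ k_star   # t_star, a point in the (C-1)-dim null space
```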
36 Related approaches: Beside NFST and KNFST, there exist further null space approaches. [sent-227, score-0.593]
37 However, it is not guaranteed that such principal components exist, especially in large-scale settings (which is in contrast to the null space of KNFST we use in our approach). [sent-230, score-0.619]
38 There also exists a metric learning approach [5] that is closely related to null space methods. [sent-232, score-0.615]
39 To bridge this gap, we propose a novelty detection method in the next section. [sent-240, score-0.713]
40 Novelty detection with null space methods: In the previous section, we have described NFST and its kernelization based on already existing work [7, 12, 26]. [sent-242, score-0.688]
41 This section explains how to adapt null space methods for novelty detection in both one-class and multi-class scenarios. [sent-243, score-1.306]
42 We therefore first show how to perform multi-class novelty detection and then apply this idea to the one-class case. [sent-246, score-0.713]
43 Additionally, we characterize the advantages of our novelty detection approach that come from the model properties. [sent-247, score-0.713]
44 Multi-class novelty detection using null spaces: In multi-class novelty detection, we want to calculate a novelty score indicating whether a test sample belongs to one of the C known classes, no matter to which one. [sent-250, score-2.762]
45 Throughout the rest of this paper, we refer to the classes known during training as known classes. Using projections in the joint null space, the novelty score of a test sample x∗ is the smallest distance between its projection t∗ and the class projections in the null space. [sent-251, score-2.295]
46 We calculate a null space of dimension C − 1 and determine target points, one point for each target class, corresponding to the projection of class samples in the null space (see Sect. [sent-253, score-1.654]
47 To obtain a single novelty score of a test sample x∗, we first map x∗ to t∗ by projecting x∗ into the null space. [sent-255, score-1.312]
48 Applying a pooling step directly in the joint null space of all C classes, we use the smallest distance between t∗ and the target points t1, ..., tC: [sent-256, score-0.729]
49 MultiClassNovelty(x∗) = min_{c ∈ {1,...,C}} ‖t∗ − tc‖ (16). The larger the score, and thus the minimum distance in the null space, the more novel the test sample is. [sent-263, score-0.612]
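A minimal sketch of this pooling step, assuming the projected training data T_train (one row per sample) and the projection t_star of the test sample are already available:

```python
import numpy as np

def class_targets(T_train, y):
    """Target points t_1, ..., t_C: all training samples of one class
    project to the same point, so the per-class mean recovers it."""
    return np.stack([T_train[y == c].mean(axis=0) for c in np.unique(y)])

def multi_class_novelty(t_star, targets):
    """Eq. (16): smallest distance between t_star and the C targets;
    a large value indicates a novel sample."""
    return np.min(np.linalg.norm(targets - t_star, axis=1))
```

Combined with the projection sketch above, a test sample would then be scored as multi_class_novelty(knfst_project(A, kernel, X_train, x_star), class_targets(T_train, y)).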
50 Training a binary SVM for each known class using the samples of the other classes as negatives only leads to separations from other known classes and not from currently unknown ones. [sent-267, score-0.688]
51 In contrast, the separation from every currently unknown class is possible with our approach due to the simple class representations in the null space. [sent-268, score-0.886]
52 Additionally, we are able to treat all classes jointly with their true class labels, while training a binary SVM for each known class treats the remaining known classes as a single negative class, which contradicts the idea of novelty detection. [sent-269, score-1.352]
53 The novelty detection formulation of SVM [20] is only derived for one-class settings (see Sect. [sent-270, score-0.713]
54 Our one-class classification approaches: all samples of the target class are mapped on a single point t in a one-dimensional subspace and the novelty score of a test sample x∗ is the distance of its projection t∗ to t. [sent-286, score-1.241]
55 One-class classification using null spaces: At first glance, one-class classification is not possible with null space methods, because we only have a single target class in a one-class setting. [sent-289, score-1.458]
56 This leads to zero null projection directions, since the number of these directions is C − 1 (see Sect. [sent-290, score-0.672]
57 Using this idea, we are able to compute a single null projection direction and all class samples are mapped on a single target value t along this direction. [sent-295, score-1.039]
58 To check whether a test sample x∗ belongs to the target class, we compute its projection onto the null projection direction and obtain the value t∗. [sent-296, score-0.787]
59 As a novelty score of x∗ we propose using the absolute difference between t and t∗: OneClassNovelty(x∗) = |t − t∗| (17), similar to the multi-class case, where a large score indicates novelty. [sent-297, score-0.715]
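The corresponding sketch is a one-liner; here t is the common target value of the training samples and t_star the projection of the test sample:

```python
def one_class_novelty(t_star, t):
    """Eq. (17): absolute difference between the target value t and
    the projection t_star; large values indicate novelty."""
    return abs(t - t_star)
```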
60 Our second one-class approach computes a single null projection direction by separating the class samples from “minus data”. [sent-301, score-0.855]
61 Again, all true class samples are mapped on a single target value t and we compute the novelty score similar to our first approach using Eq. [sent-303, score-1.061]
62 Only the target value t may differ, which is of no interest for computing the novelty score. [sent-306, score-0.647]
63 Computing the novelty score of a new sample can be done in linear time. [sent-311, score-0.708]
64 Advantages of our novelty detection approach: Our proposed novelty detection approach benefits from the null space, a joint subspace of all training samples where each known class is represented by a single point. [sent-316, score-2.381]
65 In contrast to other subspace methods such as Kernel PCA, additional density estimation or clustering within the obtained subspace can be avoided and a simple distance measure can be applied to get a novelty score. [sent-317, score-0.759]
66 In contrast to training one model per class, e.g., when applying the one-vs-rest SVM framework, null space methods offer the possibility to treat several classes in a joint manner with a single subspace model. [sent-320, score-0.819]
67 Additionally, our approach separates known classes from every currently unknown class without the necessity of negative samples by using simple representations of known classes in the null space. [sent-321, score-1.227]
68 This is in contrast to binary classifiers treating samples of one class as positives and samples of remaining known classes as negatives. [sent-322, score-0.593]
69 Using null space methods for novelty detection, we calculate a single feature for each class of the target data and thus compute features with zero intra-class variance. [sent-323, score-1.566]
70 Such features are computable with null space methods, even for multiple classes. [sent-326, score-0.593]
71 The transformed features obtained using null space methods can therefore be treated as class-specific features, since the transformation preserves the joint characteristics within each class. [sent-329, score-0.616]
72 As previously mentioned, such features are perfectly suited for novelty detection from a theoretical point of view [23]. [sent-330, score-0.734]
73 Hence, additional parameter tuning beyond kernel hyperparameters is not necessary for our proposed novelty detection method. [sent-336, score-0.803]
74 Related work on novelty detection: An overview of basic concepts for novelty detection in signal processing is provided by the review papers of Markou and Singh [15, 16]. [sent-338, score-1.426]
75 In visual object recognition, novelty detection should not be confused with the detection of unseen classes in zero-shot learning [9], where knowledge about new objects is used explicitly, e.g. [sent-339, score-0.963]
76 Generally, novelty detection problems can be divided into one-class and multi-class settings depending on the number of known classes during training. [sent-342, score-0.9]
77 Recent work on novelty detection focuses on one-class classification. [sent-343, score-0.713]
78 In the following, we give a short overview of related work for both one-class and multiclass novelty detection scenarios. [sent-345, score-0.738]
79 Artificial super-class: A simple way to perform multi-class novelty detection is to train a single one-class classifier using all available samples of all known categories [10]. [sent-364, score-0.99]
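One possible realization of this baseline, sketched with scikit-learn's one-class SVM rather than the exact setup of [10]; X_known and X_test are placeholder names for the pooled known-class training samples and the test samples:

```python
from sklearn.svm import OneClassSVM

# Pool all samples of all known classes into one artificial super-class
# and fit a single one-class model; low decision values indicate novelty.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(X_known)
novelty_scores = -clf.decision_function(X_test)  # larger = more novel
```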
80 Therefore, the rejection method used in [22] can be applied to novelty detection and we compare our approach to this strategy. [sent-371, score-0.732]
81 Multi-class classifiers: Reject strategies for multi-class classification are related to novelty detection but differ in how they treat the regions between classes. [sent-372, score-0.808]
82 Samples could also be rejected when being close to the decision boundaries between known classes, which obviously contradicts the idea of novelty detection and is a severe problem, especially when classes overlap in feature space. [sent-373, score-0.918]
83 Experiments: We evaluate our novelty detection approach in visual object recognition on two datasets, Caltech-256 [6] and ImageNet [3]. [sent-380, score-0.713]
84 In the experiments, we focus on multi-class novelty detection. [sent-384, score-0.647]
85 However, this is not the case when multiple classes are considered, and we show that our null space approach outperforms all other methods in this typical scenario, which is important for lifelong learning and automatic object discovery. [sent-387, score-0.749]
86 Performance in multi-class novelty detection on the Caltech-256 dataset. [sent-433, score-0.713]
87 The experiments on the ImageNet dataset are done with 100 samples per target class for training and 50 samples from each of the 1,000 classes (including the target classes) for testing. [sent-439, score-0.629]
88 Multi-class novelty detection results: The results on the Caltech-256 dataset and the ImageNet dataset are shown in Figure 5 and Figure 6, respectively. [sent-470, score-0.713]
89 Interestingly, the binary SVM approach seems to be more suitable for the task of multi-class novelty detection in terms of higher median AUC scores compared to most approaches based on one-class classifiers. [sent-472, score-0.77]
90 This highlights the capability and the relevance of our proposed null space approach for novelty detection. [sent-476, score-1.24]
91 Conclusions and future work: Multi-class novelty detection is a challenging problem that needs more attention from the research community. [sent-478, score-0.713]
92 This paper proposes a new novelty detection approach based on null space projections, which is perfectly suitable for tackling this problem. [sent-480, score-1.327]
93 The benefit of our proposed multi-class novelty detection approach is its ability of separating a set of known classes from currently unknown ones. [sent-481, score-0.806]
94 Performance in multi-class novelty detection on the ImageNet dataset. [sent-522, score-0.713]
95 The approach is able to decide about novelty in a single step using a single model, whereas other approaches need to train a model for each known class without considering them jointly. [sent-524, score-0.903]
96 Our experimental results clearly demonstrate the advantage of the joint learning approach using the null space leading to the best performance for multi-class novelty detection compared to all other methods. [sent-525, score-1.329]
97 Additionally, we have addressed one-class classification as a special case of novelty detection. [sent-526, score-0.675]
98 Future work will concentrate on novelty detection in large-scale scenarios, where hundreds or thousands of categories are known to the model. [sent-528, score-0.838]
99 Additionally, incorporating metric learning approaches related to null space methods such as [5, 17] is of special interest. [sent-529, score-0.615]
100 A small sphere and large margin approach for novelty detection using training data with outliers. [sent-735, score-0.743]
wordName wordTfidf (topN-words)
[('novelty', 0.647), ('null', 0.56), ('nfst', 0.176), ('classes', 0.121), ('class', 0.116), ('knfst', 0.105), ('samples', 0.101), ('scatter', 0.088), ('target', 0.08), ('zt', 0.072), ('ird', 0.07), ('kernel', 0.07), ('imagenet', 0.069), ('detection', 0.066), ('known', 0.066), ('fst', 0.062), ('hik', 0.062), ('categories', 0.059), ('mapped', 0.057), ('subspace', 0.056), ('tax', 0.054), ('eigenbasis', 0.053), ('svdd', 0.053), ('projection', 0.051), ('sw', 0.05), ('zw', 0.048), ('svm', 0.048), ('markou', 0.047), ('additionally', 0.043), ('treating', 0.038), ('currently', 0.038), ('unknown', 0.038), ('zero', 0.037), ('auc', 0.037), ('projections', 0.036), ('btswb', 0.035), ('icaadnmu', 0.035), ('lifelong', 0.035), ('tsb', 0.035), ('score', 0.034), ('nothing', 0.034), ('products', 0.033), ('space', 0.033), ('pooling', 0.033), ('transform', 0.032), ('acu', 0.031), ('reviewed', 0.031), ('inner', 0.031), ('training', 0.03), ('classifiers', 0.029), ('kernelization', 0.029), ('tsw', 0.029), ('fisher', 0.029), ('discriminant', 0.029), ('classification', 0.028), ('origin', 0.028), ('jena', 0.027), ('separating', 0.027), ('sample', 0.027), ('spaces', 0.027), ('principal', 0.026), ('single', 0.026), ('variance', 0.026), ('unseen', 0.026), ('beside', 0.025), ('multiclass', 0.025), ('directions', 0.024), ('sb', 0.023), ('calculate', 0.023), ('crosses', 0.023), ('joint', 0.023), ('kernels', 0.022), ('dealing', 0.022), ('able', 0.022), ('basis', 0.022), ('metric', 0.022), ('outlier', 0.022), ('binary', 0.021), ('eigenvalues', 0.021), ('perfectly', 0.021), ('hyperparameters', 0.02), ('eigenvectors', 0.02), ('ect', 0.02), ('artificial', 0.02), ('rejection', 0.019), ('pages', 0.019), ('kernelized', 0.019), ('occur', 0.019), ('separation', 0.018), ('decision', 0.018), ('predictive', 0.018), ('median', 0.018), ('test', 0.018), ('gaussian', 0.018), ('kt', 0.018), ('scores', 0.018), ('neural', 0.018), ('nf', 0.017), ('libsvm', 0.017), ('matrix', 0.017)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 239 cvpr-2013-Kernel Null Space Methods for Novelty Detection
Author: Paul Bodesheim, Alexander Freytag, Erik Rodner, Michael Kemmler, Joachim Denzler
Abstract: Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and ImageNet. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.
2 0.13412981 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images
Author: Zinan Zeng, Shijie Xiao, Kui Jia, Tsung-Han Chan, Shenghua Gao, Dong Xu, Yi Ma
Abstract: We study in this paper the problem of learning classifiers from ambiguously labeled images. For instance, in the collection of new images, each image contains some samples of interest (e.g., human faces), and its associated caption has labels with the true ones included, while the sample-label association is unknown. The task is to learn classifiers from these ambiguously labeled images and generalize to new images. An essential consideration here is how to make use of the information embedded in the relations between samples and labels, both within each image and across the image set. To this end, we propose a novel framework to address this problem. Our framework is motivated by the observation that samples from the same class repetitively appear in the collection of ambiguously labeled training images, while they are just ambiguously labeled in each image. If we can identify samples of the same class from each image and associate them across the image set, the matrix formed by the samples from the same class would be ideally low-rank. By leveraging such a low-rank assumption, we can simultaneously optimize a partial permutation matrix (PPM) for each image, which is formulated in order to exploit all information between samples and labels in a principled way. The obtained PPMs can be readily used to assign labels to samples in training images, and then a standard SVM classifier can be trained and used for unseen data. Experiments on benchmark datasets show the effectiveness of our proposed method.
3 0.082709305 82 cvpr-2013-Class Generative Models Based on Feature Regression for Pose Estimation of Object Categories
Author: Michele Fenzi, Laura Leal-Taixé, Bodo Rosenhahn, Jörn Ostermann
Abstract: In this paper, we propose a method for learning a class representation that can return a continuous value for the pose of an unknown class instance using only 2D data and weak 3D labelling information. Our method is based on generative feature models, i.e., regression functions learnt from local descriptors of the same patch collected under different viewpoints. The individual generative models are then clustered in order to create class generative models which form the class representation. At run-time, the pose of the query image is estimated in a maximum a posteriori fashion by combining the regression functions belonging to the matching clusters. We evaluate our approach on the EPFL car dataset [17] and the Pointing’04 face dataset [8]. Experimental results show that our method outperforms by 10% the state-of-the-art in the first dataset and by 9% in the second.
4 0.069203027 48 cvpr-2013-Attribute-Based Detection of Unfamiliar Classes with Humans in the Loop
Author: Catherine Wah, Serge Belongie
Abstract: Recent work in computer vision has addressed zero-shot learning or unseen class detection, which involves categorizing objects without observing any training examples. However, these problems assume that attributes or defining characteristics of these unobserved classes are known, leveraging this information at test time to detect an unseen class. We address the more realistic problem of detecting categories that do not appear in the dataset in any form. We denote such a category as an unfamiliar class; it is neither observed at train time, nor do we possess any knowledge regarding its relationships to attributes. This problem is one that has received limited attention within the computer vision community. In this work, we propose a novel approach to the unfamiliar class detection task that builds on attribute-based classification methods, and we empirically demonstrate how classification accuracy is impacted by attribute noise and dataset “difficulty,” as quantified by the separation of classes in the attribute space. We also present a method for incorporating human users to overcome deficiencies in attribute detection. We demonstrate results superior to existing methods on the challenging CUB-200-2011 dataset.
5 0.067192204 179 cvpr-2013-From N to N+1: Multiclass Transfer Incremental Learning
Author: Ilja Kuzborskij, Francesco Orabona, Barbara Caputo
Abstract: Since the seminal work of Thrun [17], the learning to learn paradigm has been defined as the ability of an agent to improve its performance at each task with experience, with the number of tasks. Within the object categorization domain, the visual learning community has actively declined this paradigm in the transfer learning setting. Almost all proposed methods focus on category detection problems, addressing how to learn a new target class from few samples by leveraging over the known source. But if one thinks of learning over multiple tasks, there is a need for multiclass transfer learning algorithms able to exploit previous source knowledge when learning a new class, while at the same time optimizing their overall performance. This is an open challenge for existing transfer learning algorithms. The contribution of this paper is a discriminative method that addresses this issue, based on a Least-Squares Support Vector Machine formulation. Our approach is designed to balance between transferring to the new class and preserving what has already been learned on the source models. Extensive experiments on subsets of publicly available datasets prove the effectiveness of our approach.
6 0.0642948 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
7 0.063332304 387 cvpr-2013-Semi-supervised Domain Adaptation with Instance Constraints
8 0.060280278 67 cvpr-2013-Blocks That Shout: Distinctive Parts for Scene Classification
9 0.058650378 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition
10 0.057452708 173 cvpr-2013-Finding Things: Image Parsing with Regions and Per-Exemplar Detectors
11 0.05708899 260 cvpr-2013-Learning and Calibrating Per-Location Classifiers for Visual Place Recognition
12 0.056786958 221 cvpr-2013-Incorporating Structural Alternatives and Sharing into Hierarchy for Multiclass Object Recognition and Detection
13 0.056512877 142 cvpr-2013-Efficient Detector Adaptation for Object Detection in a Video
14 0.054921705 144 cvpr-2013-Efficient Maximum Appearance Search for Large-Scale Object Detection
15 0.053179473 270 cvpr-2013-Local Fisher Discriminant Analysis for Pedestrian Re-identification
16 0.052010179 74 cvpr-2013-CLAM: Coupled Localization and Mapping with Efficient Outlier Handling
17 0.052008115 247 cvpr-2013-Learning Class-to-Image Distance with Object Matchings
18 0.051721405 264 cvpr-2013-Learning to Detect Partially Overlapping Instances
19 0.050975595 185 cvpr-2013-Generalized Domain-Adaptive Dictionaries
20 0.050301507 150 cvpr-2013-Event Recognition in Videos by Learning from Heterogeneous Web Sources
topicId topicWeight
[(0, 0.135), (1, -0.031), (2, -0.039), (3, 0.02), (4, 0.036), (5, 0.02), (6, -0.03), (7, -0.028), (8, -0.003), (9, -0.008), (10, -0.029), (11, -0.024), (12, -0.023), (13, -0.064), (14, -0.055), (15, -0.029), (16, -0.035), (17, -0.033), (18, -0.026), (19, -0.012), (20, -0.005), (21, -0.019), (22, -0.006), (23, -0.006), (24, 0.003), (25, 0.017), (26, -0.038), (27, 0.043), (28, -0.051), (29, -0.049), (30, -0.068), (31, 0.029), (32, -0.039), (33, -0.022), (34, -0.008), (35, 0.004), (36, -0.01), (37, -0.026), (38, 0.005), (39, 0.015), (40, 0.003), (41, -0.023), (42, -0.008), (43, 0.002), (44, 0.037), (45, -0.006), (46, 0.009), (47, -0.013), (48, 0.003), (49, 0.043)]
simIndex simValue paperId paperTitle
same-paper 1 0.94628251 239 cvpr-2013-Kernel Null Space Methods for Novelty Detection
Author: Paul Bodesheim, Alexander Freytag, Erik Rodner, Michael Kemmler, Joachim Denzler
Abstract: Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and ImageNet. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.
2 0.77705258 320 cvpr-2013-Optimizing 1-Nearest Prototype Classifiers
Author: Paul Wohlhart, Martin Köstinger, Michael Donoser, Peter M. Roth, Horst Bischof
Abstract: The development of complex, powerful classifiers and their constant improvement have contributed much to the progress in many fields of computer vision. However, the trend towards large scale datasets revived the interest in simpler classifiers to reduce runtime. Simple nearest neighbor classifiers have several beneficial properties, such as low complexity and inherent multi-class handling, however, they have a runtime linear in the size of the database. Recent related work represents data samples by assigning them to a set of prototypes that partition the input feature space and afterwards applies linear classifiers on top of this representation to approximate decision boundaries locally linear. In this paper, we go a step beyond these approaches and purely focus on 1-nearest prototype classification, where we propose a novel algorithm for deriving optimal prototypes in a discriminative manner from the training samples. Our method is implicitly multi-class capable, parameter free, avoids noise overfitting and, since during testing only comparisons to the derived prototypes are required, highly efficient. Experiments demonstrate that we are able to outperform related locally linear methods, while even getting close to the results of more complex classifiers.
3 0.75948584 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images
Author: Zinan Zeng, Shijie Xiao, Kui Jia, Tsung-Han Chan, Shenghua Gao, Dong Xu, Yi Ma
Abstract: We study in this paper the problem of learning classifiers from ambiguously labeled images. For instance, in the collection of new images, each image contains some samples of interest (e.g., human faces), and its associated caption has labels with the true ones included, while the sample-label association is unknown. The task is to learn classifiers from these ambiguously labeled images and generalize to new images. An essential consideration here is how to make use of the information embedded in the relations between samples and labels, both within each image and across the image set. To this end, we propose a novel framework to address this problem. Our framework is motivated by the observation that samples from the same class repetitively appear in the collection of ambiguously labeled training images, while they are just ambiguously labeled in each image. If we can identify samples of the same class from each image and associate them across the image set, the matrix formed by the samples from the same class would be ideally low-rank. By leveraging such a low-rank assumption, we can simultaneously optimize a partial permutation matrix (PPM) for each image, which is formulated in order to exploit all information between samples and labels in a principled way. The obtained PPMs can be readily used to assign labels to samples in training images, and then a standard SVM classifier can be trained and used for unseen data. Experiments on benchmark datasets show the effectiveness of our proposed method.
4 0.75325227 201 cvpr-2013-Heterogeneous Visual Features Fusion via Sparse Multimodal Machine
Author: Hua Wang, Feiping Nie, Heng Huang, Chris Ding
Abstract: To better understand, search, and classify image and video information, many visual feature descriptors have been proposed to describe elementary visual characteristics, such as the shape, the color, the texture, etc. How to integrate these heterogeneous visual features and identify the important ones from them for specific vision tasks has become an increasingly critical problem. In this paper, We propose a novel Sparse Multimodal Learning (SMML) approach to integrate such heterogeneous features by using the joint structured sparsity regularizations to learn the feature importance of for the vision tasks from both group-wise and individual point of views. A new optimization algorithm is also introduced to solve the non-smooth objective with rigorously proved global convergence. We applied our SMML method to five broadly used object categorization and scene understanding image data sets for both singlelabel and multi-label image classification tasks. For each data set we integrate six different types of popularly used image features. Compared to existing scene and object cat- egorization methods using either single modality or multimodalities of features, our approach always achieves better performances measured.
5 0.71168518 179 cvpr-2013-From N to N+1: Multiclass Transfer Incremental Learning
Author: Ilja Kuzborskij, Francesco Orabona, Barbara Caputo
Abstract: Since the seminal work of Thrun [17], the learning to learn paradigm has been defined as the ability of an agent to improve its performance at each task with experience, with the number of tasks. Within the object categorization domain, the visual learning community has actively declined this paradigm in the transfer learning setting. Almost all proposed methods focus on category detection problems, addressing how to learn a new target class from few samples by leveraging over the known source. But if one thinks of learning over multiple tasks, there is a need for multiclass transfer learning algorithms able to exploit previous source knowledge when learning a new class, while at the same time optimizing their overall performance. This is an open challenge for existing transfer learning algorithms. The contribution of this paper is a discriminative method that addresses this issue, based on a Least-Squares Support Vector Machine formulation. Our approach is designed to balance between transferring to the new class and preserving what has already been learned on the source models. Extensive experiments on subsets of publicly available datasets prove the effectiveness of our approach.
6 0.7018497 390 cvpr-2013-Semi-supervised Node Splitting for Random Forest Construction
7 0.69852018 377 cvpr-2013-Sample-Specific Late Fusion for Visual Category Recognition
8 0.6916694 168 cvpr-2013-Fast Object Detection with Entropy-Driven Evaluation
9 0.68596274 403 cvpr-2013-Sparse Output Coding for Large-Scale Visual Recognition
10 0.68224877 271 cvpr-2013-Locally Aligned Feature Transforms across Views
11 0.67516404 134 cvpr-2013-Discriminative Sub-categorization
12 0.67222315 270 cvpr-2013-Local Fisher Discriminant Analysis for Pedestrian Re-identification
13 0.66871232 417 cvpr-2013-Subcategory-Aware Object Classification
14 0.65989673 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
15 0.65360355 371 cvpr-2013-SCaLE: Supervised and Cascaded Laplacian Eigenmaps for Visual Object Recognition Based on Nearest Neighbors
16 0.64971644 442 cvpr-2013-Transfer Sparse Coding for Robust Image Representation
17 0.64816582 15 cvpr-2013-A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration
18 0.64410591 266 cvpr-2013-Learning without Human Scores for Blind Image Quality Assessment
19 0.64194047 142 cvpr-2013-Efficient Detector Adaptation for Object Detection in a Video
20 0.64135039 7 cvpr-2013-A Divide-and-Conquer Method for Scalable Low-Rank Latent Matrix Pursuit
topicId topicWeight
[(10, 0.099), (16, 0.022), (26, 0.044), (28, 0.019), (33, 0.265), (67, 0.065), (69, 0.06), (87, 0.067), (96, 0.251)]
simIndex simValue paperId paperTitle
Author: Stefan Harmeling, Michael Hirsch, Bernhard Schölkopf
Abstract: We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
2 0.87875825 228 cvpr-2013-Is There a Procedural Logic to Architecture?
Author: Julien Weissenberg, Hayko Riemenschneider, Mukta Prasad, Luc Van_Gool
Abstract: Urban models are key to navigation, architecture and entertainment. Apart from visualizing façades, a number of tedious tasks remain largely manual (e.g. compression, generating new façade designs and structurally comparing façades for classification, retrieval and clustering). We propose a novel procedural modelling method to automatically learn a grammar from a set of façades, generate new façade instances and compare façades. To deal with the difficulty of grammatical inference, we reformulate the problem. Instead of inferring a compromising, one-size-fits-all, single grammar for all tasks, we infer a model whose successive refinements are production rules tailored for each task. We demonstrate our automatic rule inference on datasets of two different architectural styles. Our method supersedes manual expert work and cuts the time required to build a procedural model of a façade from several days to a few milliseconds.
3 0.85006213 218 cvpr-2013-Improving the Visual Comprehension of Point Sets
Author: Sagi Katz, Ayellet Tal
Abstract: Point sets are the standard output of many 3D scanning systems and depth cameras. Presenting the set of points as is, might “hide ” the prominent features of the object from which the points are sampled. Our goal is to reduce the number of points in a point set, for improving the visual comprehension from a given viewpoint. This is done by controlling the density of the reduced point set, so as to create bright regions (low density) and dark regions (high density), producing an effect of shading. This data reduction is achieved by leveraging a limitation of a solution to the classical problem of determining visibility from a viewpoint. In addition, we introduce a new dual problem, for determining visibility of a point from infinity, and show how a limitation of its solution can be leveraged in a similar way.
4 0.8384521 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields
Author: Bastian Goldluecke, Sven Wanner
Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.
same-paper 5 0.81510472 239 cvpr-2013-Kernel Null Space Methods for Novelty Detection
Author: Paul Bodesheim, Alexander Freytag, Erik Rodner, Michael Kemmler, Joachim Denzler
Abstract: Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and ImageNet. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.
6 0.79119885 318 cvpr-2013-Optimized Pedestrian Detection for Multiple and Occluded People
7 0.78787887 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields
8 0.76958209 448 cvpr-2013-Universality of the Local Marginal Polytope
9 0.76824588 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases
10 0.76761276 248 cvpr-2013-Learning Collections of Part Models for Object Recognition
11 0.767564 203 cvpr-2013-Hierarchical Video Representation with Trajectory Binary Partition Tree
12 0.76575822 4 cvpr-2013-3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image
13 0.76502043 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities
15 0.76481175 256 cvpr-2013-Learning Structured Hough Voting for Joint Object Detection and Occlusion Reasoning
16 0.76478082 70 cvpr-2013-Bottom-Up Segmentation for Top-Down Detection
17 0.76474047 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds
18 0.76448858 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures
19 0.7644226 94 cvpr-2013-Context-Aware Modeling and Recognition of Activities in Video
20 0.76412439 372 cvpr-2013-SLAM++: Simultaneous Localisation and Mapping at the Level of Objects