iccv iccv2013 iccv2013-181 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Tatiana Tommasi, Barbara Caputo
Abstract: Over the last years, several authors have signaled that state-of-the-art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract. Over the last years, several authors have signaled that state-of-the-art categorization methods fail to perform well when trained and tested on data from different databases. [sent-3, score-0.098]
2 The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. [sent-4, score-0.418]
3 We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. [sent-8, score-0.487]
4 We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. [sent-9, score-0.619]
5 To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. [sent-10, score-0.446]
6 Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings. [sent-11, score-0.074]
7 A source domain (S) usually contains a large number of labeled images, while a target domain (T) refers broadly to a dataset that is assumed to have different characteristics from the source, and few or no labeled samples. [sent-25, score-0.624]
8 Methods addressing the distribution mismatch, regardless of its underlying reason, range from max-margin classifiers able to adapt their learning parameters [15] to methods attempting to learn how to project the data into a new intermediate space, where the features lose their specific bias [13]. [sent-29, score-0.115]
9 Even though the domain adaptation/dataset bias problem is clearly at its core a generalization problem, almost all of the approaches presented so far use image-to-image learning algorithms on top of BOW representations. [sent-32, score-0.285]
10 Here we turn the tables: instead of considering a descriptor and trying to amend the issues that it generates with image-to-image distance based learning methods, we show that the NBNN method is a priori more robust to visual domain shift. [sent-33, score-0.293]
11 Experiments on existing domain adaptation databases confirm our intuition: on all of them, the NBNN classifier obtains strong results, often achieving the state of the art. [sent-34, score-0.516]
12 Armed with this knowledge, we build an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. [sent-35, score-0.619]
13 Our algorithm is the first NBNN-based domain adaptation method proposed so far and performs consistently better than the original NBNN classifier, obtaining the state of the art in all experiments. [sent-36, score-0.479]
14 The rest of the paper is organized as follows: after reviewing previous work (section 2) we set the notation and show with a proof-of-concept experiment that the mere plug-and-play of the NBNN classifier leads to remarkable results on the domain adaptation problem (section 3). [sent-37, score-0.466]
15 Section 4 describes our NBNN-based domain adaptation algorithm, and section 5 reports the experiments showing the strength of the original NBNN classifier in two different settings, as well as the added value brought by our algorithm. [sent-38, score-0.466]
16 Related Work. Domain adaptation is a widely researched problem. [sent-41, score-0.202]
17 The field is in fact becoming increasingly aware of the dataset bias issue [25]: existing image collections used for object categorization present specific characteristics which prevent cross-dataset generalization. [sent-45, score-0.103]
18 They define several procedures to transform the input features such that different domains become similar and any classifier can be proficiently applied. [sent-48, score-0.119]
19 In [22] the key idea is to learn a regularized transformation using information-theoretic metric learning that maps source data to the target domain. [sent-49, score-0.252]
20 [13] proposed to project both the source and target domain samples onto a set of intermediate subspaces, while Gong et al. [sent-51, score-0.462]
21 Another stream of works proposes classifier adaptation approaches. [sent-53, score-0.25]
22 Here the authors learn a model composed of a general and a specific part, taking care of the dataset bias at training time. [sent-56, score-0.088]
23 This representation allows reaching state-of-the-art results (even combined with SPM [16]) over in-domain problems, but its use for cross-domain tasks may not be beneficial. [sent-58, score-0.093]
24 As pointed out in [20, 19], a visual word dictionary built on the source set may be a bad reference for local descriptors extracted from the target. [sent-59, score-0.135]
25 The NBNN classifier was introduced to overcome both these problems and it has shown top performance when applied to visual object categorization tasks. [sent-64, score-0.085]
26 Our work fits in this context: we focus on the suitability of NBNN for domain adaptation and we propose an algorithm that further exploits NBNN-specific features. [sent-66, score-0.439]
27 2) and we describe a proof-of-concept experiment illustrating the benefits of NBNN for the domain adaptation problem (section 3. [sent-70, score-0.439]
28 Each feature fim is quantized to a pre-defined vocabulary of w visual words and substituted by the index of the closest codebook element. [sent-86, score-0.109]
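For concreteness, a minimal sketch of this quantization step (our illustration, not the authors' code; the L1 normalization at the end is an assumption):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest visual word and build a
    bag-of-words histogram. descriptors: (M, d); codebook: (w, d)."""
    # squared Euclidean distance of every descriptor to every codebook element
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # index of the closest codebook element
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)  # L1-normalized (an assumption)
```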
29 Hence, for a maximum a posteriori classifier, each descriptor m votes for the most probable class in c = {1, . . . , C}. [sent-95, score-0.097]
30 In this setting the NBNN algorithm [6] defines the votes in terms of the local distance DLOC(im, c) = ||fim − ficm||^2 between each feature fim and its nearest neighbor ficm in class c. [sent-99, score-0.225]
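A minimal NBNN sketch following this definition (illustrative only; a real implementation would use approximate nearest-neighbor search, e.g. k-d trees, instead of the brute-force distances below):

```python
import numpy as np

def nbnn_classify(query_descriptors, class_features):
    """NBNN [6], sketched: pick the class with the smallest image-to-class
    distance, i.e. the sum over query descriptors of the squared distance
    to their nearest neighbor among that class's pooled descriptors.
    class_features maps class label -> (Nc, d) array."""
    totals = {}
    for c, feats in class_features.items():
        d2 = ((query_descriptors[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
        totals[c] = d2.min(axis=1).sum()  # summed per-descriptor NN distances
    return min(totals, key=totals.get)
```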
31 Figure 1 illustrates this point when training on the whole Pascal VOC07 dataset (20 classes), and testing on an image of class car extracted from ImageNet. [sent-107, score-0.091]
32 An NN classifier using the DMAT distance (which we call MATCH from now on) labels the query image correctly. [sent-111, score-0.078]
33 The table shows the accuracy results over 30 samples of class car. [sent-113, score-0.095]
34 We also repeated the experiment for 30 samples each of the classes chair and bird. [sent-114, score-0.118]
35 NBNN-based Domain Adaptation. Our main intuition is that the distribution of local features per class may be similar across two domains despite the variation between the respective image distributions. [sent-125, score-0.116]
36 This similarity can also be enhanced by a domain adaptation approach in the NBNN setting. [sent-126, score-0.418]
37 With this goal in mind, we start from the metric learning method proposed in [27] and we extend it to deal with two domains. [sent-127, score-0.081]
38 Inspired by [7] we propose a greedy algorithm which progressively selects an increasing number of target instances and combines them with a subset of the source data while iteratively learning a Mahalanobis metric per class. [sent-128, score-0.317]
39 When facing an unsupervised cross-domain problem, several labeled samples of a source set S : {Fl, yl}, l = 1, . . . , L, are available together with unlabeled samples of the target set T : {Fu}, u = 1, . . . , U. [sent-131, score-0.37]
40 By applying the NBNN algorithm with the source local features as training set (see footnote 1), we can estimate the label for each target sample according to (2). [sent-132, score-0.192]
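Reusing the nbnn_classify sketch above, this bootstrapping step could look as follows (a sketch; (2) refers to the paper's decision rule):

```python
def pseudo_label_target(target_images, source_class_features):
    """Label each unlabeled target image with plain NBNN trained on the
    source local features (sketch of the bootstrapping step)."""
    return [nbnn_classify(F_u, source_class_features) for F_u in target_images]
```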
41 The data subsets Sk and Tk extracted from the two domains can then be used to learn a specific Mahalanobis metric per class which induces for each sample a large margin separation between the image-to-class distance of the correct (or estimated) class (p) and all the other classes (n). [sent-133, score-0.423]
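A sketch of the resulting per-class distance (our parameterization; writing the metric as Wc^T Wc keeps it positive semi-definite, and Wc = I recovers the plain NBNN distance):

```python
import numpy as np

def mahalanobis_i2c(query_descriptors, class_feats, W_c):
    """Image-to-class distance under a per-class Mahalanobis metric
    M_c = W_c^T W_c (illustrative sketch)."""
    q = query_descriptors @ W_c.T  # project query descriptors
    f = class_feats @ W_c.T        # project the class's pooled descriptors
    d2 = ((q[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()    # summed nearest-neighbor distances
```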
42 The metrics are coded in the matrices Wc ∈ Rd×d, one per class. (Footnote 1: For NBNN (and NN) there is no real training phase, but we refer to the available labeled samples as the training set, generalizing from standard statistical learning terminology.) [sent-134, score-0.118]
43 (Figure caption:) Source samples are marked with their class label yl, while the red circles are the target samples with their assigned label yu. [sent-135, score-0.178]
44 At each of the k iterations, NBNN predicts on the unlabeled target samples in Uk. [sent-136, score-0.133]
45 Two samples from Uk and two samples from Sk are then respectively added to, and removed from, the training set, depending on the difference between the negative and positive distances. [sent-137, score-0.122]
46 We follow the formulation in [27], but we keep the source and target samples separated with different parameters λs, λt and relative weights Γs(k), Γt(k), such that the full objective reads O(W1, W2, . . . , WC). [sent-145, score-0.221]
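The equation above is truncated; purely for orientation, the separated source/target structure described in the text (our reading, not the paper's verbatim formula) can be written as

```latex
O(W_1,\dots,W_C) \;=\; \Gamma_s(k)\,\mathcal{L}_S\big(\{W_c\}_{c=1}^{C};\,\lambda_s\big)
\;+\; \Gamma_t(k)\,\mathcal{L}_T\big(\{W_c\}_{c=1}^{C};\,\lambda_t\big),
```

where L_S and L_T denote large-margin losses in the style of [27], evaluated on Sk and Tk respectively.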
47 At the first round the labels predicted for the target samples may not be fully reliable. [sent-161, score-0.171]
48 On the other hand, we want the source sample importance to decrease over time. [sent-168, score-0.109]
49 We clarify here how the samples are extracted from the two domains to define the sets Sk and Tk . [sent-171, score-0.145]
50 The distances D(Fl, p) and D(Fl, n) are calculated at each round by using nearest neighbors flcm ∈ (Sk ∪ Tk \ {l}), i.e. excluding the sample l itself. [sent-174, score-0.08]
51 For each class p, we sort the samples with assigned label yu = p. [sent-176, score-0.095]
52 In the considered max-margin framework, this corresponds to identifying a couple of images that fall into the margin band and which are closest to the margin bound. [sent-179, score-0.123]
53 This procedure is repeated at each round k and the described samples are progressively moved from Uk to Tk+1, thus helping to tune the metric at the following iteration and adapting it to solve the target problem. [sent-180, score-0.294]
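A toy version of this selection rule (the unit margin width and the two-samples-per-round choice are illustrative assumptions):

```python
def select_near_margin(gap_by_sample, n_pick=2, margin=1.0):
    """Pick the samples that fall inside the margin band and lie closest to
    the margin bound (sketch). gap_by_sample[i] = D(Fi, n) - D(Fi, p), the
    negative-minus-positive image-to-class distance for sample i."""
    in_band = {i: g for i, g in gap_by_sample.items() if 0.0 <= g < margin}
    return sorted(in_band, key=in_band.get)[:n_pick]  # smallest gap first
```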
54 While focusing on the target, the source information should become less and less relevant. [sent-181, score-0.088]
55 In practice we identify and delete the source samples lying far from the margin bound, which have a low probability of affecting its position. [sent-190, score-0.156]
56 The combination of metric learning and sample selection defines our DA-NBNN algorithm. [sent-192, score-0.102]
57 Each process helps the other: by tuning Wc we adjust the image-to-class distance, while by changing the training data we redefine the local feature set for each class, making it progressively more suitable for the target domain. [sent-193, score-0.148]
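Putting the pieces together, the alternation reads roughly as below; this is a skeleton under our assumptions, with the metric-learning, prediction, and selection steps injected as callables rather than spelled out:

```python
def da_nbnn(S, U, learn_metrics, predict, pick_targets, drop_sources, n_rounds=10):
    """DA-NBNN outer loop (sketch). S: list of labeled source samples;
    U: list of unlabeled target samples. The callables stand in for the
    per-class metric learning, NBNN pseudo-labeling, and margin-based
    sample exchange described above."""
    T = []  # target samples promoted to the training set
    W = None
    for k in range(n_rounds):
        W = learn_metrics(S + T)            # tune the per-class metrics Wc
        labels = predict(U, S + T, W)       # pseudo-label remaining targets
        moved = pick_targets(U, labels, W)  # near-margin target samples
        T += moved
        U = [u for u in U if u not in moved]
        S = drop_sources(S, W)              # delete sources far from the margin
        if not U:
            break
    return W, S, T
```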
58 At the beginning the first set may be large for large-scale source datasets, while in the following steps the dimension of both sets is regulated by the rate at which samples are added/removed from the training set. [sent-196, score-0.16]
59 This is the standard testbed used for visual domain adaptation methods. [sent-203, score-0.418]
60 Each domain consists of 31 classes of office-related objects (e. [sent-211, score-0.248]
61 Here, the number of classes is reduced to 10 and a new domain is added with images extracted from Caltech-256 [14]. [sent-217, score-0.272]
62 The obtained 64-dimensional descriptors are then used with a 1-Nearest-Neighbor classifier in two different ways, without including any spatial information. BOW: for each domain we constructed an 800-visual-word vocabulary by k-means on a random subset of the data. [sent-220, score-0.329]
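For reference, one way such a per-domain vocabulary could be built (the subsample size and the scikit-learn call are our assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, n_words=800, sample_size=100_000, seed=0):
    """Cluster a random subset of a domain's local descriptors into
    n_words visual words (sketch). all_descriptors: (N, d) numpy array."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(all_descriptors),
                     size=min(sample_size, len(all_descriptors)), replace=False)
    km = KMeans(n_clusters=n_words, n_init=4, random_state=seed)
    km.fit(all_descriptors[idx])
    return km.cluster_centers_  # the visual-word codebook
```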
63 We choose the reference domain to use, and the associated histogram descriptor, depending on the specific experiment (more details in the following sections). [sent-222, score-0.237]
64 We ran experiments always considering a pair of domains, one regarded as source and the other as target. [sent-225, score-0.117]
65 In the unsupervised setting the domains are disjoint, while in the semi-supervised setting three images per class of the target domain are added to the source. [sent-226, score-0.518]
66 We consider as baselines several state-of-the-art domain adaptation methods. PCAT: all the original features are projected onto the PCA subspace learned from the target domain [12]. [sent-230, score-0.717]
67 SGF [13]: the Sampling Geodesic Flow approach considers a set of intermediate subspaces between the source and target domains to model their shift. [sent-231, score-0.298]
68 GFK [12]: it extends the previous technique by considering an infinite number of intermediate subspaces integrated by the Geodesic Flow Kernel. [sent-232, score-0.076]
69 It uses the correspondence between source and target labeled data to learn a metric which maps the samples into a new feature space. [sent-234, score-0.278]
70 The underlined values are used to evaluate a measure of domain shift (see Webcam). [sent-239, score-0.25]
71 It runs large-margin metric learning over the source domain in the image-to-class setting and corresponds to a non-adaptive reference method. [sent-242, score-0.414]
72 We divided the data of each domain into a training and a test set. [sent-248, score-0.238]
73 The Dslr domain is quite similar to Webcam and contains fewer images, hence we omitted it from these experiments. [sent-251, score-0.216]
74 We chose the BOW representation depending on the domain considered as target. [sent-252, score-0.216]
75 For instance, Amazon images are used as source and the testing samples of Caltech are used as target, with all images represented by histograms over the Caltech visual vocabulary. [sent-255, score-0.138]
76 Average Source→Source (SS) and Source→Target (ST) results over the unsupervised setting are reported in Table 1. [sent-260, score-0.074]
77 This indicates that in the image-to-class setting the domain shift is intrinsically smaller. [sent-293, score-0.279]
78 This behavior replicates over all of the other domain couples. [sent-294, score-0.216]
79 Finally, let us compare NBNN with the GFK domain adaptation approach3. [sent-295, score-0.418]
80 NBNN performs significantly better (p < 0.01) than GFK over ten of the twelve possible domain couples. [sent-297, score-0.216]
81 Running NBNN on the source domain, without even considering any target information, provides extremely good results, indeed better than the state of the art established by GFK. [sent-304, score-0.232]
82 DA-NBNN performs significantly better (p < 0.01) in recognition rate over most of the domain couples (the two methods perform equally only over A → C and C → W). [sent-306, score-0.241]
83 It is worth paying attention also to the comparison between the DA-NBNN results and the corresponding in-domain NBNN results. [sent-307, score-0.077]
84 In previous work this kind of behavior has been interpreted as further evidence of the effectiveness of the proposed adaptation method. [sent-310, score-0.202]
85 This would imply that the adaptive methods, apart from the domain shift, end up solving some of the issues generated by the vector quantization. [sent-313, score-0.244]
86 The advantage of DA-NBNN remains significant over other Source → Target results regardless of the chosen vocabulary dimension for BOW and of the considered image-to-image-based classifier (see Figure 4). [sent-314, score-0.083]
87 Results: increasing the number of classes. Lastly, we repeated the experiments on the original Office dataset, which contains more classes than its Office+Caltech version. [sent-317, score-0.087]
88 We focused on the domain couples analyzed in previous work and we reproduced exactly the experimental setting of [22, 12]. [sent-318, score-0.27]
89 We report the results obtained in the unsupervised setting by implementing our own BOW method. [sent-323, score-0.074]
90 The recognition accuracies in both the unsupervised and semi-supervised settings are presented in Table 2. [sent-324, score-0.074]
91 Conclusion. In this paper we tested the generalization ability of the NBNN classifier [6] on the domain adaptation problem, looking into how the vector quantization step in BOW features, and the choice of an image-to-image based classifier, affect performance. [sent-328, score-0.501]
92 The results obtained are competitive with, and often superior to, the state of the art achieved by sophisticated image-to-image distance based learning methods. [sent-329, score-0.083]
93 Building on this, we proposed an NBNN-based domain adaptation algorithm that iteratively learns a class-specific Mahalanobis metric, while inducing for each sample a large margin separation among classes. [sent-330, score-0.65]
94 We tested our algorithm on two different settings, in the unsupervised and semi-supervised scenarios, obtaining the state of the art. [sent-331, score-0.074]
95 We believe that these results provide very strong evidence of the importance of casting the domain adaptation problem within the NBNN framework. [sent-332, score-0.446]
96 Our algorithm injects a learning component into it through metric learning, but several other options are possible, such as the development of NBNN kernel functions [26]. [sent-333, score-0.081]
97 Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. [sent-365, score-0.418]
98 Domain adaptation problems: A DASVM classification technique and a circular validation strategy. [sent-382, score-0.202]
99 Subspace interpolation via dictionary learning for unsupervised domain adaptation. [sent-466, score-0.285]
100 Distance metric learning for large margin nearest neighbor classification. [sent-526, score-0.158]
wordName wordTfidf (topN-words)
[('nbnn', 0.761), ('domain', 0.216), ('adaptation', 0.202), ('bow', 0.195), ('sk', 0.122), ('tk', 0.119), ('zp', 0.1), ('upn', 0.093), ('source', 0.088), ('target', 0.083), ('fl', 0.082), ('caltech', 0.08), ('gfk', 0.077), ('fim', 0.074), ('webcam', 0.071), ('domains', 0.071), ('ficm', 0.063), ('lpn', 0.063), ('wkc', 0.063), ('metric', 0.057), ('uk', 0.056), ('wc', 0.056), ('fu', 0.051), ('office', 0.051), ('samples', 0.05), ('classifier', 0.048), ('margin', 0.047), ('inducing', 0.046), ('fj', 0.046), ('unsupervised', 0.045), ('class', 0.045), ('bias', 0.045), ('triplets', 0.044), ('dslr', 0.043), ('progressively', 0.043), ('danbnn', 0.042), ('dloc', 0.042), ('dmat', 0.042), ('flcm', 0.042), ('indomain', 0.042), ('minn', 0.042), ('amazon', 0.04), ('round', 0.038), ('categorization', 0.037), ('nn', 0.037), ('cac', 0.037), ('fi', 0.037), ('vocabulary', 0.035), ('ps', 0.035), ('quantization', 0.035), ('frustratingly', 0.034), ('tommasi', 0.034), ('shift', 0.034), ('mahalanobis', 0.034), ('match', 0.032), ('art', 0.032), ('crossdomain', 0.032), ('pascal', 0.032), ('classes', 0.032), ('subspaces', 0.031), ('unaligned', 0.031), ('surf', 0.031), ('separation', 0.03), ('descriptors', 0.03), ('distance', 0.03), ('seminal', 0.03), ('neighbor', 0.03), ('caputo', 0.03), ('boiman', 0.03), ('votes', 0.029), ('setting', 0.029), ('state', 0.029), ('couple', 0.029), ('casting', 0.028), ('qiu', 0.028), ('adaptive', 0.028), ('adopted', 0.026), ('gopalan', 0.025), ('intermediate', 0.025), ('geodesic', 0.025), ('couples', 0.025), ('blitzer', 0.025), ('extracted', 0.024), ('learning', 0.024), ('pt', 0.024), ('descriptor', 0.023), ('repeated', 0.023), ('iteratively', 0.022), ('training', 0.022), ('noticed', 0.022), ('yl', 0.022), ('databases', 0.021), ('broadly', 0.021), ('sample', 0.021), ('generalize', 0.021), ('ss', 0.021), ('specific', 0.021), ('saenko', 0.021), ('concept', 0.021), ('infinite', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation
Author: Tatiana Tommasi, Barbara Caputo
Abstract: Over the last years, several authors have signaled that state-of-the-art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.
2 0.25301906 123 iccv-2013-Domain Adaptive Classification
Author: Fatemeh Mirrashed, Mohammad Rastegari
Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.
3 0.22195143 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection
Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann
Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
4 0.22116393 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars
Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.
Author: Andy J. Ma, Pong C. Yuen, Jiawei Li
Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. And the top 30 rank accuracy can be improved by the proposed method upto 9.40% on publicly available person re-identification datasets.
6 0.10375664 44 iccv-2013-Adapting Classification Cascades to New Domains
7 0.10360426 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
8 0.098088011 104 iccv-2013-Decomposing Bag of Words Histograms
9 0.097108349 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
10 0.09545254 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
11 0.090245992 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias
12 0.089838147 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions
13 0.088538587 303 iccv-2013-Orderless Tracking through Model-Averaged Posterior Estimation
14 0.076477833 233 iccv-2013-Latent Task Adaptation with Large-Scale Hierarchies
15 0.074933231 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces
16 0.0740017 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
17 0.073727362 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling
18 0.070688382 6 iccv-2013-A Convex Optimization Framework for Active Learning
19 0.063312411 52 iccv-2013-Attribute Adaptation for Personalized Image Search
20 0.062198319 197 iccv-2013-Hierarchical Joint Max-Margin Learning of Mid and Top Level Representations for Visual Recognition
topicId topicWeight
[(0, 0.16), (1, 0.076), (2, -0.053), (3, -0.074), (4, -0.017), (5, 0.029), (6, -0.015), (7, 0.037), (8, 0.019), (9, -0.012), (10, 0.011), (11, -0.129), (12, -0.025), (13, -0.061), (14, 0.143), (15, -0.196), (16, -0.058), (17, -0.032), (18, 0.031), (19, -0.037), (20, 0.169), (21, -0.107), (22, 0.076), (23, 0.088), (24, 0.012), (25, -0.046), (26, -0.08), (27, -0.027), (28, -0.014), (29, 0.009), (30, 0.023), (31, 0.009), (32, 0.015), (33, -0.002), (34, -0.037), (35, -0.031), (36, 0.006), (37, -0.059), (38, 0.058), (39, 0.024), (40, -0.007), (41, 0.089), (42, -0.006), (43, 0.034), (44, -0.046), (45, 0.03), (46, 0.017), (47, 0.011), (48, 0.027), (49, 0.037)]
simIndex simValue paperId paperTitle
same-paper 1 0.95943981 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation
Author: Tatiana Tommasi, Barbara Caputo
Abstract: Over the last years, several authors have signaled that state-of-the-art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.
2 0.91353005 123 iccv-2013-Domain Adaptive Classification
Author: Fatemeh Mirrashed, Mohammad Rastegari
Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.
3 0.90890515 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu
Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
4 0.90642565 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection
Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann
Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
5 0.90367389 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars
Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.
7 0.68792623 44 iccv-2013-Adapting Classification Cascades to New Domains
8 0.66319215 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces
9 0.63592589 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions
10 0.63045764 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
11 0.60748523 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias
12 0.58645856 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification
13 0.49899384 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds
14 0.4980343 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation
15 0.43727008 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling
16 0.43679774 248 iccv-2013-Learning to Rank Using Privileged Information
17 0.42603433 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition
18 0.42542395 170 iccv-2013-Fingerspelling Recognition with Semi-Markov Conditional Random Fields
19 0.42206362 125 iccv-2013-Drosophila Embryo Stage Annotation Using Label Propagation
20 0.42111692 253 iccv-2013-Linear Sequence Discriminant Analysis: A Model-Based Dimensionality Reduction Method for Vector Sequences
topicId topicWeight
[(2, 0.082), (7, 0.015), (12, 0.012), (26, 0.08), (31, 0.041), (35, 0.02), (40, 0.014), (42, 0.114), (48, 0.018), (64, 0.035), (73, 0.035), (77, 0.195), (89, 0.13), (95, 0.011), (98, 0.105)]
simIndex simValue paperId paperTitle
same-paper 1 0.79713607 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation
Author: Tatiana Tommasi, Barbara Caputo
Abstract: Over the last years, several authors have signaled that state-of-the-art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.
2 0.79631162 350 iccv-2013-Relative Attributes for Large-Scale Abandoned Object Detection
Author: Quanfu Fan, Prasad Gabbur, Sharath Pankanti
Abstract: Effective reduction of false alarms in large-scale video surveillance is rather challenging, especially for applications where abnormal events of interest rarely occur, such as abandoned object detection. We develop an approach to prioritize alerts by ranking them, and demonstrate its great effectiveness in reducing false positives while keeping good detection accuracy. Our approach benefits from a novel representation of abandoned object alerts by relative attributes, namely staticness, foregroundness and abandonment. The relative strengths of these attributes are quantified using a ranking function [19] learnt on suitably designed low-level spatial and temporal features. These attributes of varying strengths are not only powerful in distinguishing abandoned objects from false alarms such as people and light artifacts, but also computationally efficient for large-scale deployment. With these features, we apply a linear ranking algorithm to sort alerts according to their relevance to the end-user. We test the effectiveness of our approach on both public data sets and large ones collected from the real world.
3 0.77327323 83 iccv-2013-Complementary Projection Hashing
Author: Zhongming Jin, Yao Hu, Yue Lin, Debing Zhang, Shiding Lin, Deng Cai, Xuelong Li
Abstract: Recently, hashing techniques have been widely applied to solve the approximate nearest neighbors search problem in many vision applications. Generally, these hashing approaches generate 2c buckets, where c is the length of the hash code. A good hashing method should satisfy the following two requirements: 1) mapping the nearby data points into the same bucket or nearby (measured by the Hamming distance) buckets. 2) all the data points are evenly distributed among all the buckets. In this paper, we propose a novel algorithm named Complementary Projection Hashing (CPH) to find the optimal hashing functions which explicitly considers the above two requirements. Specifically, CPH aims at sequentially finding a series of hyperplanes (hashing functions) which cross the sparse region of the data. At the same time, the data points are evenly distributed in the hypercubes generated by these hyperplanes. The experiments comparing with the state-of-the-art hashing methods demonstrate the effectiveness of the proposed method.
4 0.76428032 142 iccv-2013-Ensemble Projection for Semi-supervised Image Classification
Author: Dengxin Dai, Luc Van_Gool
Abstract: This paper investigates the problem of semi-supervised classification. Unlike previous methods to regularize classifying boundaries with unlabeled data, our method learns a new image representation from all available data (labeled and unlabeled) and performs plain supervised learning with the new feature. In particular, an ensemble of image prototype sets are sampled automatically from the available data, to represent a rich set of visual categories/attributes. Discriminative functions are then learned on these prototype sets, and images are represented by the concatenation of their projected values onto the prototypes (similarities to them) for further classification. Experiments on four standard datasets show three interesting phenomena: (1) our method consistently outperforms previous methods for semi-supervised image classification; (2) our method lets itself combine well with these methods; and (3) our method works well for self-taught image classification where unlabeled data are not coming from the same distribution as labeled ones, but rather from a random collection of images.
5 0.75516427 180 iccv-2013-From Where and How to What We See
Author: S. Karthikeyan, Vignesh Jagadeesh, Renuka Shenoy, Miguel Ecksteinz, B.S. Manjunath
Abstract: Eye movement studies have confirmed that overt attention is highly biased towards faces and text regions in images. In this paper we explore a novel problem of predicting face and text regions in images using eye tracking data from multiple subjects. The problem is challenging as we aim to predict the semantics (face/text/background) only from eye tracking data without utilizing any image information. The proposed algorithm spatially clusters eye tracking data obtained in an image into different coherent groups and subsequently models the likelihood of the clusters containing faces and text using a fully connected Markov Random Field (MRF). Given the eye tracking data from a test image, it predicts potential face/head (humans, dogs and cats) and text locations reliably. Furthermore, the approach can be used to select regions of interest for further analysis by object detectors for faces and text. The hybrid eye position/object detector approach achieves better detection performance and reduced computation time compared to using only the object detection algorithm. We also present a new eye tracking dataset on 300 images selected from ICDAR, Street-view, Flickr and Oxford-IIIT Pet Dataset from 15 subjects.
6 0.75144923 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias
7 0.74476022 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection
8 0.73944747 141 iccv-2013-Enhanced Continuous Tabu Search for Parameter Estimation in Multiview Geometry
9 0.73872471 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting
10 0.73239535 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition
11 0.72213542 123 iccv-2013-Domain Adaptive Classification
12 0.72073191 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation
13 0.7192151 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
14 0.70973384 33 iccv-2013-A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis
15 0.70290786 44 iccv-2013-Adapting Classification Cascades to New Domains
16 0.69910681 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition
17 0.69774276 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
18 0.69561493 402 iccv-2013-Street View Motion-from-Structure-from-Motion
19 0.69478631 43 iccv-2013-Active Visual Recognition with Expertise Estimation in Crowdsourcing
20 0.69448566 85 iccv-2013-Compositional Models for Video Event Detection: A Multiple Kernel Learning Latent Variable Approach