nips nips2009 nips2009-149 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Kevin Briggman, Winfried Denk, Sebastian Seung, Moritz N. Helmstaedter, Srinivas C. Turaga
Abstract: Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Maximin affinity learning of image segmentation Srinivas C. [sent-1, score-0.359]
2 Sebastian Seung MIT, HHMI Abstract Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. [sent-4, score-0.565]
3 We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. [sent-7, score-0.454]
4 The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. [sent-8, score-0.591]
5 By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. [sent-9, score-0.766]
6 Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity. [sent-10, score-0.674]
7 1 Introduction Supervised learning has emerged as a serious contender in the field of image segmentation, ever since the creation of training sets of images with “ground truth” segmentations provided by humans, such as the Berkeley Segmentation Dataset [15]. [sent-11, score-0.282]
8 In the supervised learning method presented here, the segmentation algorithm consists of a parametrized classifier that predicts the weights of a nearest neighbor affinity graph over image pixels, followed by a graph partitioner that thresholds the affinity graph and finds its connected components. [sent-13, score-0.966]
9 Our objective function is the Rand index [18], which has recently been proposed as a quantitative measure of segmentation performance [23]. [sent-14, score-0.327]
10 [Figure 1 panel labels: hypothetical thresholded affinity graphs; segmentation algorithm; missing merge; affinity prediction; image; threshold, connected components; weighted affinity graph; segmentation; affinity graph #1; affinity graph #2] [sent-17, score-0.422]
11 Figure 1: (left) Our segmentation algorithm. [sent-18, score-1.233]
12 We first generate a nearest neighbor weighted affinity graph representing the degree to which nearest neighbor pixels should be grouped together. [sent-19, score-0.401]
13 The segmentation is generated by finding the connected components of the thresholded affinity graph. [sent-20, score-0.453]
14 (right) Affinity misclassification rates are a poor measure of segmentation performance. [sent-21, score-0.261]
15 Affinity graph #1 makes only 1 error (dashed edge) but results in poor segmentations, while graph #2 generates a perfect segmentation despite making many affinity misclassifications (dashed edges). [sent-22, score-0.511]
16 Because maximin edges of the affinity graph play a key role in our learning method, we call it maximin affinity learning of image segmentation, or MALIS. [sent-23, score-1.246]
17 The minimax path and edge are standard concepts in graph theory, and maximin is the opposite-sign sibling of minimax. [sent-24, score-0.725]
18 MALIS focuses on improving classifier output at maximin edges, because classifying these edges incorrectly leads to genuine segmentation errors, the splitting or merging of segments. [sent-26, score-0.822]
19 To the best of our knowledge, MALIS is the first supervised learning method that is based on optimizing a genuine measure of segmentation performance. [sent-27, score-0.307]
20 But these approaches do not optimize a genuine measure of segmentation performance such as the Rand index. [sent-32, score-0.283]
21 Our classifier can also be viewed as a boundary detector, since a nearest neighbor affinity graph is essentially the same as a boundary map, up to a sign inversion. [sent-37, score-0.254]
22 The classifier parameters are not trained to optimize performance at boundary detection, but to optimize performance at segmentation as measured by the Rand index. [sent-39, score-0.309]
23 But image labeling is more similar to multi-class pixel classification than to image segmentation, as the latter task may require distinguishing between multiple objects in a single image that all have the same label. [sent-41, score-0.386]
24 2 Partitioning a thresholded affinity graph by connected components Our class of segmentation algorithms is constructed by combining a classifier and a graph partitioner (see Figure 1). [sent-46, score-0.749]
25 The nodes of the graph are image pixels, and the edges are between nearest neighbor pairs of pixels. [sent-48, score-0.396]
26 The graph partitioner first thresholds the affinity graph by removing all edges with weights less than some threshold value θ. [sent-52, score-0.381]
27 The connected components of this thresholded affinity graph are the segments of the image. [sent-53, score-0.367]
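As a concrete illustration of the partitioning step, the following sketch (not the authors' implementation) assumes the nearest neighbor affinities of a 2D image are stored as two arrays, one for horizontal and one for vertical edges, and extracts the connected components of the thresholded graph with a plain union-find.

```python
import numpy as np

def connected_components_segmentation(aff_x, aff_y, theta):
    """Partition a 2D nearest-neighbor affinity graph by thresholding at
    `theta` and finding connected components with union-find.

    aff_x[i, j] : affinity between pixel (i, j) and (i, j+1), shape (H, W-1)
    aff_y[i, j] : affinity between pixel (i, j) and (i+1, j), shape (H-1, W)
    Returns an (H, W) array of integer segment labels.
    """
    H = aff_y.shape[0] + 1
    W = aff_x.shape[1] + 1
    parent = np.arange(H * W)

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    # keep only edges whose affinity exceeds the threshold
    for i in range(H):
        for j in range(W - 1):
            if aff_x[i, j] > theta:
                union(i * W + j, i * W + j + 1)
    for i in range(H - 1):
        for j in range(W):
            if aff_y[i, j] > theta:
                union(i * W + j, (i + 1) * W + j)

    roots = np.array([find(u) for u in range(H * W)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels.reshape(H, W)
```

Raising θ only removes edges, so segments can only split as the threshold increases; this monotonicity underlies the maximin characterization developed in section 4.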
28 For this class of segmentation algorithms, it’s obvious that a single misclassified edge of the affinity graph can dramatically alter the resulting segmentation by splitting or merging two segments (see Fig. 1). [sent-54, score-0.753]
29 This is why it is important to learn by optimizing a measure of segmentation performance rather than affinity prediction. [sent-56, score-0.261]
30 More sophisticated algorithms, such as spectral clustering [20] or graph cuts [3], might be more robust to misclassifications of one or a few edges of the affinity graph. [sent-58, score-0.246]
31 First, because of the simplicity of our graph partitioning, we can derive a simple and direct method of supervised learning that optimizes a true measure of image segmentation performance. [sent-61, score-0.508]
32 So far learning based on more sophisticated graph partitioning methods has fallen short of this goal [17, 2]. [sent-62, score-0.241]
33 Second, even if it were possible to properly learn the affinities used by more sophisticated graph partitioning methods, we would still prefer our simple connected components. [sent-63, score-0.318]
34 The classifier in our segmentation algorithm can also carry out sophisticated computations, if its representational power is sufficiently great. [sent-64, score-0.288]
35 The sophisticated partitioning methods clean up the affinity graph by using prior assumptions about the properties of image segmentations. [sent-66, score-0.339]
36 If the sophisticated partitioning methods are indeed the best way of achieving good segmentation performance, we suspect that our classifier will learn them from the training data. [sent-69, score-0.4]
37 3 The Rand index quantifies segmentation performance Image segmentation can be viewed as a special case of the general problem of clustering, as image segments are clusters of image pixels. [sent-71, score-0.834]
38 Recently it has been proposed that the Rand index be applied to image segmentations [23]. [sent-73, score-0.297]
39 Define a segmentation S as an assignment of a segment label si to each pixel i. [sent-74, score-0.432]
40 Given two segmentations S and Ŝ of an image with N pixels, define the function $1 - RI(S, \hat{S}) = \binom{N}{2}^{-1} \sum_{i<j} \left| \delta(s_i, s_j) - \delta(\hat{s}_i, \hat{s}_j) \right|$ (1), which is the fraction of image pixel pairs on which the two segmentations disagree. [sent-76, score-0.715]
41 We will refer to the function $1 - RI(S, \hat{S})$ as the Rand index, although strictly speaking the Rand index is $RI(S, \hat{S})$, the fraction of image pixel pairs on which the two segmentations agree. [sent-77, score-0.422]
42 In this paper, the Rand index is applied to compare the output Ŝ of a segmentation algorithm with a ground truth segmentation S, and will serve as an objective function for learning. [sent-79, score-0.588]
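To make Eq. (1) concrete, here is a brute-force version of the pairwise disagreement score, together with a toy case showing the merger penalty discussed around Figure 1; the arrays are invented purely for illustration.

```python
import numpy as np
from itertools import combinations

def rand_index_error(seg, gt):
    """1 - RI of Eq. (1): the fraction of pixel pairs on which the predicted
    segmentation `seg` and the ground truth `gt` disagree about connectivity
    (same segment vs. different segments).  Brute force O(N^2), intended only
    to make the definition concrete."""
    s, g = seg.ravel(), gt.ravel()
    n = s.size
    disagree = 0
    for i, j in combinations(range(n), 2):
        if (s[i] == s[j]) != (g[i] == g[j]):
            disagree += 1
    return disagree / (n * (n - 1) / 2)

# Toy example: one wrongly merged edge joins two ground-truth segments,
# so every cross-segment pixel pair becomes a disagreement.
gt  = np.array([[1, 1, 2, 2]])
bad = np.array([[1, 1, 1, 1]])    # merger: one affinity error, many pair errors
print(rand_index_error(bad, gt))  # 4 wrong pairs out of 6 -> ~0.667
```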
43 Figure 1 illustrates why the Rand index is a sensible measure of segmentation performance. [sent-80, score-0.327]
44 The segmentation of affinity graph #1 incurs a huge Rand index penalty relative to the ground truth. [sent-81, score-0.471]
45 A single wrongly classified edge of the affinity graph leads to an incorrect merger of two segments, causing many pairs of image pixels to be wrongly assigned to the same segment. [sent-82, score-0.504]
46 On the other hand, the segmentation corresponding to affinity graph #2 has a perfect Rand index, even though there are misclassifications in the affinity graph. [sent-83, score-0.386]
47 In short, the Rand index makes sense because it strongly penalizes errors in the affinity graph that lead to split and merger errors. [sent-84, score-0.237]
48 [Figure 2 panel labels: merger; split; groundtruth; test; rand index] Figure 2: The Rand index quantifies segmentation performance by comparing the difference in pixel pair connectivity between the groundtruth and test segmentations. [sent-85, score-0.951]
49 Each diagonal block corresponds to connected pixel pairs belonging to one of the image segments. [sent-87, score-0.3]
50 The Rand index incurs penalties when pixel pairs that must not be connected are connected, or vice versa. [sent-88, score-0.378]
51 4 Connectivity and maximin affinity Recall that our segmentation algorithm works by finding connected components of the thresholded affinity graph. [sent-92, score-0.937]
52 In other words, we would like a way of characterizing whether two pixels are connected in the thresholded affinity graph. [sent-95, score-0.275]
53 To do this, we introduce the concept of maximin affinity, which is defined for any pair of pixels in an affinity graph (the definition is generally applicable to any weighted graph). [sent-96, score-0.742]
54 Let P_ij be the set of all paths in the graph that connect pixels i and j. [sent-98, score-0.231]
55 A pair of pixels is connected in the thresholded affinity graph if and only if their maximin affinity exceeds the threshold value. [sent-103, score-0.941]
56 By definition, a pixel pair is connected in the thresholded affinity graph if and only if there exists a path between them. [sent-105, score-0.456]
57 Such a path is equivalent to a path in the unthresholded affinity graph for which the minimal affinity is above the threshold value. [sent-106, score-0.259]
58 This path in turn exists if and only if the maximin affinity is above the threshold value. [sent-107, score-0.557]
59 As a consequence of this theorem, pixel pairs can be classified as connected or disconnected by thresholding maximin affinities. [sent-108, score-0.74]
60 Let Ŝ be the segmentation produced by thresholding the affinity graph A_ij and then finding connected components. [sent-109, score-0.482]
61 Any path in a maximum spanning tree is a maximin path. [sent-113, score-0.578]
62 For our nearest neighbor affinity graphs, the maximin affinity of a pixel pair can be computed in O(|E| · α(|V|)), where |E| is the number of graph edges, |V| is the number of pixels, and α(·) is the inverse Ackermann function, which grows sub-logarithmically. [sent-114, score-0.974]
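The maximum spanning tree view can be sketched with a Kruskal-style union-find: adding edges in order of decreasing affinity, the first edge whose addition connects pixels i and j is their maximin edge, and its weight is their maximin affinity. This naive version sorts the edges and rebuilds the union-find for every query, so it does not attain the O(|E| · α(|V|)) bound quoted above; the edge-list representation is an assumption of the sketch.

```python
import numpy as np

def maximin_affinity(edges, affinities, i, j, n_nodes):
    """Return (maximin affinity, index of the maximin edge) for nodes i and j.
    edges: list of (u, v) node-index pairs; affinities: array of edge weights."""
    parent = list(range(n_nodes))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    # add edges from strongest to weakest; the edge that first joins i and j
    # is the maximin edge (equivalently, the weakest edge on the maximin path
    # of the maximum spanning tree)
    for e in np.argsort(-np.asarray(affinities)):
        u, v = edges[e]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
        if find(i) == find(j):
            return affinities[e], e
    return 0.0, None                        # i and j lie in different components
```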
63 Note that maximin affinities are required for training, but not testing. [sent-116, score-0.484]
64 For segmenting the image at test time, only a connected components computation need be performed, which takes time linear in the number of edges | E|. [sent-117, score-0.253]
65 5 Optimizing the Rand index by learning maximin affinities Since the affinities and maximin affinities are both functions of the image I and the classifier parameters W, we will write them as A_ij(I; W) and A*_ij(I; W), respectively. [sent-118, score-1.132]
66 Define (k, l ) = mm(i, j) to be the maximin edge for the pixel pair (i, j). [sent-125, score-0.659]
67 If there is a tie, choose between the maximin edges at random. [sent-126, score-0.539]
68 Note that a single edge can be the maximin edge for multiple pairs of pixels, so its affinity can appear multiple times in the MALIS cost function. [sent-129, score-0.651]
69 Roughly speaking, the MALIS cost function is similar to the standard cost function, except that each edge in the affinity graph is weighted by the number of pixel pairs that it causes to be incorrectly classified. [sent-130, score-0.35]
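A brute-force rendering of that weighting, reusing the maximin_affinity helper sketched above; the explicit pair list and the squared per-pair loss are placeholders rather than the paper's exact choices. The point is that each pair's term touches only its maximin edge, so an edge that is the maximin edge of many misclassified pairs accumulates a correspondingly large gradient.

```python
import numpy as np

def malis_cost(edges, affinities, pairs, pair_connected, n_nodes):
    """pairs: list of (i, j) pixel pairs; pair_connected: 1 if a pair shares a
    ground-truth segment, else 0.  Returns the total cost and its gradient
    with respect to the edge affinities."""
    cost = 0.0
    grad = np.zeros(len(affinities))
    for (i, j), target in zip(pairs, pair_connected):
        a_star, e_star = maximin_affinity(edges, affinities, i, j, n_nodes)
        if e_star is None:                 # i and j are in different components
            continue
        cost += 0.5 * (a_star - target) ** 2
        grad[e_star] += a_star - target    # gradient lands on the maximin edge only
    return cost, grad
```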
70 6 Online stochastic gradient descent Computing the cost function or its gradient requires finding the maximin edges for all pixel pairs. [sent-131, score-0.697]
71 Standard procedure: pick a random pair of nearest neighbor pixels i and j from a randomly drawn training image I. Maximin procedure: pick a random pair of (not necessarily nearest neighbor) pixels i and j from a randomly drawn training image I. [sent-137, score-0.566]
72 However, the standard method picks a nearest neighbor pixel pair and trains the affinity of the edge between them. [sent-144, score-0.283]
73 The maximin method picks a pixel pair of arbitrary separation and trains the minimal affinity on a maximin path between them. [sent-145, score-1.171]
74 Effectively, our connected components computation performs spatial integration over the nearest neighbor affinity graph to make connectivity decisions about pixel pairs at large distances. [sent-146, score-0.476]
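One online step of the maximin procedure might look like the following sketch; predict_affinities stands in for the affinity classifier and is assumed to return both the edge affinities and their per-edge parameter gradients, and the squared per-pair loss is again a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def maximin_sgd_step(predict_affinities, params, image, gt_labels,
                     edges, n_nodes, lr=0.01):
    """gt_labels: flat array of ground-truth segment labels, one per pixel."""
    affinities, d_aff_d_params = predict_affinities(params, image)

    # pick a random pixel pair of arbitrary separation
    i, j = rng.choice(n_nodes, size=2, replace=False)
    target = 1.0 if gt_labels[i] == gt_labels[j] else 0.0

    # train the minimal affinity on a maximin path between them,
    # i.e. the affinity of the maximin edge
    a_star, e_star = maximin_affinity(edges, affinities, i, j, n_nodes)
    if e_star is None:
        return params
    return params - lr * (a_star - target) * d_aff_d_params[e_star]
```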
75 The maximin computation requires that on each iteration the affinity graph be computed for the whole image. [sent-149, score-0.609]
76 Thus there is a computational price to be paid for the optimization of a true segmentation error. [sent-151, score-0.261]
77 Tracing axons and dendrites is a very large-scale image segmentation problem requiring high accuracy. [sent-157, score-0.414]
78 2 Training convolutional networks for affinity classification Any classifier that is a smooth function of its parameters can be used for maximin affinity learning. [sent-161, score-0.513]
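As an illustration of a classifier that is a smooth function of its parameters, here is a deliberately tiny stand-in for the convolutional network: a single learned filter whose responses at the two endpoints of each horizontal edge are turned into an affinity by a logistic function. The single filter, scalar bias, and the restriction to horizontal edges are simplifications of this sketch, not the architecture used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def toy_affinity_classifier(params, image):
    """Map an image to affinities of its horizontal nearest-neighbor edges.
    params: {"filter": 2D weight array, "bias": scalar}."""
    response = convolve2d(image, params["filter"], mode="same")
    logits = 0.5 * (response[:, :-1] + response[:, 1:]) + params["bias"]
    return 1.0 / (1.0 + np.exp(-logits))    # smooth in the parameters
```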
79 As noted earlier, maximin affinity learning can be significantly slower than standard affinity learning, due to the need for computing the entire affinity graph on each iteration, while standard affinity training need only predict the weight of a single edge in the graph. [sent-170, score-0.688]
80 A consequence of this approximation is that the maximum separation between image pixel pairs chosen for training is less than about 20 pixels. [sent-173, score-0.246]
81 A second means of speeding up the maximin procedure is by pretraining the maximin CN for 500,000 iterations using the fast standard affinity classification cost function. [sent-174, score-0.99]
82 Maximin learning leads to dramatic improvement in segmentation performance. [sent-176, score-0.261]
83 [Figure 3 axis label: Splits/object] Figure 3: Quantification of segmentation performance on 3d electron microscopic images of neural tissue. [sent-208, score-0.389]
84 We benchmarked the performance of the standard and maximin affinity classifiers by measuring the pixel-pair connectivity classification performance using the Rand index. [sent-212, score-0.525]
85 In images with large numbers of segments, most pixel pairs will be disconnected from one another, leading to a large imbalance between the number of connected and disconnected pixel pairs. [sent-218, score-0.392]
86 This is reflected in the fact that the Rand index is over 95% for both segmentation algorithms. [sent-219, score-0.327]
87 From these curves, we observe that our maximin affinity classifier dramatically outperforms the standard affinity classifier. [sent-222, score-0.484]
88 On the contrary, our experiments suggest that when trained properly, thresholded affinity classification followed by connected components can be an extremely competitive method of image segmentation. [sent-225, score-0.316]
89 8 Discussion In this paper, we have trained an affinity classifier to produce affinity graphs that result in excellent segmentations when partitioned by the simple graph partitioning algorithm of thresholding followed by connected components. [sent-226, score-0.492]
90 In contrast to classic graph-based segmentation algorithms where (sentence continues at 94) [Figure 4 labels: mergers; panels: image, groundtruth, maximin training, standard training] Figure 4: A 2d cross-section through a 3d segmentation of the test image. [sent-229, score-1.219]
91 The maximin segmentation correctly segments several objects which are merged in the standard segmentation, and even correctly segments objects which are missing in the groundtruth segmentation. [sent-230, score-0.913]
92 Not all segments merged in the standard segmentation are merged at locations visible in this cross section. [sent-231, score-0.355]
93 Pixels colored black in the machine segmentations correspond to pixels completely disconnected from their neighbors and represent boundary regions. [sent-232, score-0.296]
94 the partitioning phase dominates, our partitioning algorithm is simple and can partition graphs in time linearly proportional to the number of edges in the graph. [sent-233, score-0.256]
95 We also do not require any prior knowledge of the number of image segments or image segment sizes at test time, in contrast to other graph partitioning algorithms [7, 20]. [sent-234, score-0.475]
96 The formalism of maximin affinities used to derive our learning algorithm has connections to single-linkage hierarchical clustering, minimum spanning trees and ultrametric distances. [sent-235, score-0.554]
97 Felzenszwalb and Huttenlocher [7] describe a graph partitioning algorithm based on a minimum spanning tree computation which resembles our segmentation algorithm, in part. [sent-236, score-0.526]
98 The Ultrametric Contour Map algorithm [1] generates hierarchical segmentations nearly identical to those generated by varying the threshold of our graph partitioning algorithm. [sent-237, score-0.377]
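The connection can be seen by sweeping the threshold of the connected-components sketch given earlier: lowering θ only adds edges, so segments can merge but never split, and the resulting segmentations are nested into a hierarchy. The random affinities below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
aff_x = rng.random((32, 31))     # illustrative horizontal affinities
aff_y = rng.random((31, 32))     # illustrative vertical affinities

# connected_components_segmentation is the union-find sketch given earlier
hierarchy = [connected_components_segmentation(aff_x, aff_y, theta)
             for theta in np.linspace(0.9, 0.1, 9)]
n_segments = [len(np.unique(seg)) for seg in hierarchy]   # non-increasing as theta drops
```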
99 Neither of these methods incorporates a means for learning from labeled data, but our work shows how the performance of these algorithms can be improved by use of our maximin affinity learning. [sent-238, score-0.484]
100 A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. [sent-339, score-0.311]
wordName wordTfidf (topN-words)
[('maximin', 0.484), ('nity', 0.465), ('af', 0.439), ('segmentation', 0.261), ('rand', 0.26), ('segmentations', 0.133), ('graph', 0.125), ('pixels', 0.106), ('image', 0.098), ('thresholded', 0.092), ('malis', 0.092), ('pixel', 0.092), ('partitioning', 0.089), ('connected', 0.077), ('aij', 0.077), ('er', 0.074), ('nities', 0.074), ('index', 0.066), ('electron', 0.065), ('si', 0.064), ('classi', 0.061), ('akl', 0.061), ('edge', 0.056), ('edges', 0.055), ('misclassi', 0.051), ('segments', 0.05), ('groundtruth', 0.046), ('affinity', 0.046), ('merger', 0.046), ('microscopy', 0.046), ('partitioner', 0.046), ('tissue', 0.043), ('path', 0.043), ('nearest', 0.043), ('neighbor', 0.042), ('connectivity', 0.041), ('pij', 0.039), ('spanning', 0.035), ('disconnected', 0.035), ('fowlkes', 0.035), ('axons', 0.035), ('microscopic', 0.035), ('ultrametric', 0.035), ('pairs', 0.033), ('quanti', 0.031), ('briggman', 0.03), ('denk', 0.03), ('insitute', 0.03), ('threshold', 0.03), ('convolutional', 0.029), ('images', 0.028), ('sophisticated', 0.027), ('pair', 0.027), ('trained', 0.026), ('cns', 0.026), ('supervised', 0.024), ('medical', 0.024), ('tu', 0.023), ('training', 0.023), ('trains', 0.023), ('cn', 0.023), ('graphs', 0.023), ('mergers', 0.023), ('turaga', 0.023), ('ers', 0.023), ('components', 0.023), ('genuine', 0.022), ('merged', 0.022), ('boundary', 0.022), ('cost', 0.022), ('train', 0.022), ('clustering', 0.022), ('gradient', 0.022), ('segmented', 0.022), ('amm', 0.02), ('curr', 0.02), ('dendrites', 0.02), ('helmstaedter', 0.02), ('neurobiol', 0.02), ('opin', 0.02), ('wrongly', 0.02), ('thresholding', 0.019), ('incurs', 0.019), ('modules', 0.018), ('tracing', 0.018), ('minimal', 0.018), ('brain', 0.018), ('roc', 0.017), ('cuts', 0.017), ('minimax', 0.017), ('dw', 0.016), ('martin', 0.016), ('tree', 0.016), ('ri', 0.016), ('felzenszwalb', 0.015), ('vision', 0.015), ('segment', 0.015), ('serial', 0.014), ('cubic', 0.014), ('minimize', 0.014)]
simIndex simValue paperId paperTitle
same-paper 1 1.0 149 nips-2009-Maximin affinity learning of image segmentation
Author: Kevin Briggman, Winfried Denk, Sebastian Seung, Moritz N. Helmstaedter, Srinivas C. Turaga
Abstract: Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity. 1
2 0.11538112 211 nips-2009-Segmenting Scenes by Matching Image Composites
Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman
Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.
3 0.10163409 201 nips-2009-Region-based Segmentation and Object Detection
Author: Stephen Gould, Tianshi Gao, Daphne Koller
Abstract: Object detection and multi-class image segmentation are two closely related tasks that can be greatly improved when solved jointly by feeding information from one task to the other [10, 11]. However, current state-of-the-art models use a separate representation for each task making joint inference clumsy and leaving the classification of many parts of the scene ambiguous. In this work, we propose a hierarchical region-based approach to joint object detection and image segmentation. Our approach simultaneously reasons about pixels, regions and objects in a coherent probabilistic model. Pixel appearance features allow us to perform well on classifying amorphous background classes, while the explicit representation of regions facilitate the computation of more sophisticated features necessary for object detection. Importantly, our model gives a single unified description of the scene—we explain every pixel in the image and enforce global consistency between all random variables in our model. We run experiments on the challenging Street Scene dataset [2] and show significant improvement over state-of-the-art results for object detection accuracy. 1
4 0.081009932 102 nips-2009-Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models
Author: Jing Gao, Feng Liang, Wei Fan, Yizhou Sun, Jiawei Han
Abstract: Ensemble classifiers such as bagging, boosting and model averaging are known to have improved accuracy and robustness over a single model. Their potential, however, is limited in applications which have no access to raw data but to the meta-level model output. In this paper, we study ensemble learning with output from multiple supervised and unsupervised models, a topic where little work has been done. Although unsupervised models, such as clustering, do not directly generate label prediction for each individual, they provide useful constraints for the joint prediction of a set of related objects. We propose to consolidate a classification solution by maximizing the consensus among both supervised predictions and unsupervised constraints. We cast this ensemble task as an optimization problem on a bipartite graph, where the objective function favors the smoothness of the prediction over the graph, as well as penalizing deviations from the initial labeling provided by supervised models. We solve this problem through iterative propagation of probability estimates among neighboring nodes. Our method can also be interpreted as conducting a constrained embedding in a transformed space, or a ranking on the graph. Experimental results on three real applications demonstrate the benefits of the proposed method over existing alternatives1 . 1
5 0.072064742 243 nips-2009-The Ordered Residual Kernel for Robust Motion Subspace Clustering
Author: Tat-jun Chin, Hanzi Wang, David Suter
Abstract: We present a novel and highly effective approach for multi-body motion segmentation. Drawing inspiration from robust statistical model fitting, we estimate putative subspace hypotheses from the data. However, instead of ranking them we encapsulate the hypotheses in a novel Mercer kernel which elicits the potential of two point trajectories to have emerged from the same subspace. The kernel permits the application of well-established statistical learning methods for effective outlier rejection, automatic recovery of the number of motions and accurate segmentation of the point trajectories. The method operates well under severe outliers arising from spurious trajectories or mistracks. Detailed experiments on a recent benchmark dataset (Hopkins 155) show that our method is superior to other state-of-the-art approaches in terms of recovering the number of motions, segmentation accuracy, robustness against gross outliers and computational efficiency. 1
6 0.070851915 89 nips-2009-FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs
7 0.070811123 212 nips-2009-Semi-Supervised Learning in Gigantic Image Collections
8 0.05931776 82 nips-2009-Entropic Graph Regularization in Non-Parametric Semi-Supervised Classification
9 0.057258036 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation
10 0.051778108 141 nips-2009-Local Rules for Global MAP: When Do They Work ?
11 0.051144578 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection
12 0.048377257 133 nips-2009-Learning models of object structure
13 0.047220334 260 nips-2009-Zero-shot Learning with Semantic Output Codes
14 0.046610001 95 nips-2009-Fast subtree kernels on graphs
15 0.046427164 251 nips-2009-Unsupervised Detection of Regions of Interest Using Iterative Link Analysis
16 0.045765374 120 nips-2009-Kernels and learning curves for Gaussian process regression on random graphs
17 0.044947356 47 nips-2009-Boosting with Spatial Regularization
18 0.044822875 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis
19 0.044177756 122 nips-2009-Label Selection on Graphs
20 0.044054888 148 nips-2009-Matrix Completion from Power-Law Distributed Samples
topicId topicWeight
[(0, -0.139), (1, -0.047), (2, -0.085), (3, 0.013), (4, -0.053), (5, 0.09), (6, -0.003), (7, 0.011), (8, 0.023), (9, -0.036), (10, -0.061), (11, 0.046), (12, 0.052), (13, -0.066), (14, -0.041), (15, 0.032), (16, 0.031), (17, -0.057), (18, 0.058), (19, -0.088), (20, 0.036), (21, 0.019), (22, 0.029), (23, -0.037), (24, 0.018), (25, 0.023), (26, 0.05), (27, -0.049), (28, 0.086), (29, -0.073), (30, 0.04), (31, -0.031), (32, -0.024), (33, -0.006), (34, 0.055), (35, -0.049), (36, 0.11), (37, 0.132), (38, -0.052), (39, -0.01), (40, 0.039), (41, 0.031), (42, -0.053), (43, 0.024), (44, -0.03), (45, -0.057), (46, -0.032), (47, -0.076), (48, -0.009), (49, 0.066)]
simIndex simValue paperId paperTitle
same-paper 1 0.95580751 149 nips-2009-Maximin affinity learning of image segmentation
Author: Kevin Briggman, Winfried Denk, Sebastian Seung, Moritz N. Helmstaedter, Srinivas C. Turaga
Abstract: Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity. 1
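To make the partitioning step in the abstract above concrete, here is a small sketch of segmenting an image by thresholding a nearest-neighbour affinity graph, taking connected components, and scoring the result with the Rand index over pixel pairs. This is an illustrative sketch only: the affinity layout (one horizontal and one vertical affinity map), the strict threshold, and the use of SciPy's connected-components routine are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_by_thresholded_affinity(x_aff, y_aff, theta):
    """x_aff[r, c]: affinity between pixel (r, c) and (r, c+1);
    y_aff[r, c]: affinity between pixel (r, c) and (r+1, c).
    Thresholds the nearest-neighbour affinity graph and labels connected components."""
    H, W = x_aff.shape
    idx = np.arange(H * W).reshape(H, W)
    r, c = np.nonzero(x_aff[:, :-1] > theta)           # horizontal edges kept
    i_h, j_h = idx[r, c], idx[r, c + 1]
    r, c = np.nonzero(y_aff[:-1, :] > theta)           # vertical edges kept
    i_v, j_v = idx[r, c], idx[r + 1, c]
    i, j = np.concatenate([i_h, i_v]), np.concatenate([j_h, j_v])
    adj = coo_matrix((np.ones(i.size), (i, j)), shape=(H * W, H * W))
    _, labels = connected_components(adj, directed=False)
    return labels.reshape(H, W)

def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs whose connectivity (same segment vs. different
    segments) agrees between the two segmentations."""
    a = np.unique(seg_a.ravel(), return_inverse=True)[1]
    b = np.unique(seg_b.ravel(), return_inverse=True)[1]
    n = a.size
    nij = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(nij, (a, b), 1.0)                        # contingency table
    same_both = (nij * (nij - 1) / 2.0).sum()
    same_a = (nij.sum(1) * (nij.sum(1) - 1) / 2.0).sum()
    same_b = (nij.sum(0) * (nij.sum(0) - 1) / 2.0).sum()
    pairs = n * (n - 1) / 2.0
    return (pairs + 2.0 * same_both - same_a - same_b) / pairs
```

In this sketch, two pixels end up in the same component at threshold theta exactly when the maximin affinity between them exceeds theta, which is what ties the Rand index of the resulting segmentation back to the affinity classifier.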
2 0.61784273 102 nips-2009-Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models
Author: Jing Gao, Feng Liang, Wei Fan, Yizhou Sun, Jiawei Han
Abstract: Ensemble classifiers such as bagging, boosting and model averaging are known to have improved accuracy and robustness over a single model. Their potential, however, is limited in applications which have no access to raw data but to the meta-level model output. In this paper, we study ensemble learning with output from multiple supervised and unsupervised models, a topic where little work has been done. Although unsupervised models, such as clustering, do not directly generate label prediction for each individual, they provide useful constraints for the joint prediction of a set of related objects. We propose to consolidate a classification solution by maximizing the consensus among both supervised predictions and unsupervised constraints. We cast this ensemble task as an optimization problem on a bipartite graph, where the objective function favors the smoothness of the prediction over the graph, as well as penalizing deviations from the initial labeling provided by supervised models. We solve this problem through iterative propagation of probability estimates among neighboring nodes. Our method can also be interpreted as conducting a constrained embedding in a transformed space, or a ranking on the graph. Experimental results on three real applications demonstrate the benefits of the proposed method over existing alternatives1 . 1
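A toy sketch of the consensus step described above: objects and groups (the predicted classes of each supervised model and the clusters of each unsupervised model) form a bipartite graph, and label distributions are propagated back and forth over it. The specific update rule, the penalty weight alpha, and the one-hot encoding of supervised groups are assumptions made here for illustration, not the authors' exact formulation.

```python
import numpy as np

def consensus_maximization(memberships, group_priors, alpha=2.0, n_iters=50):
    """memberships: (n_objects, n_groups) 0/1 matrix linking each object to the groups
    it was assigned to by the base models (one group per predicted class of each
    supervised model, one group per cluster of each unsupervised model).
    group_priors: (n_groups, n_classes) one-hot rows for supervised groups and all-zero
    rows for cluster groups. Returns (n_objects, n_classes) consensus class probabilities."""
    A = np.asarray(memberships, dtype=float)
    P = np.asarray(group_priors, dtype=float)
    has_prior = (P.sum(axis=1, keepdims=True) > 0).astype(float)
    n_objects, n_classes = A.shape[0], P.shape[1]
    U = np.full((n_objects, n_classes), 1.0 / n_classes)      # object label estimates
    for _ in range(n_iters):
        # Group estimates: average of member objects, pulled toward the initial
        # labelling where one is available (the deviation penalty).
        Q = (A.T @ U + alpha * P) / np.maximum(A.sum(axis=0)[:, None] + alpha * has_prior, 1e-12)
        # Object estimates: average over the groups the object belongs to (smoothness).
        U = (A @ Q) / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    return U
```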
3 0.51647007 211 nips-2009-Segmenting Scenes by Matching Image Composites
Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman
Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.
4 0.51114613 122 nips-2009-Label Selection on Graphs
Author: Andrew Guillory, Jeff A. Bilmes
Abstract: We investigate methods for selecting sets of labeled vertices for use in predicting the labels of vertices on a graph. We specifically study methods which choose a single batch of labeled vertices (i.e. offline, non sequential methods). In this setting, we find common graph smoothness assumptions directly motivate simple label selection methods with interesting theoretical guarantees. These methods bound prediction error in terms of the smoothness of the true labels with respect to the graph. Some of these bounds give new motivations for previously proposed algorithms, and some suggest new algorithms which we evaluate. We show improved performance over baseline methods on several real world data sets. 1
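For context, the standard harmonic (Laplacian-smoothness) predictor that such a selected batch of labels would feed into can be sketched as below. The label-selection criteria themselves are not reproduced here; the symmetric weight matrix W and the chosen batch labeled_idx are assumed inputs, and the sketch assumes every connected component of the graph contains at least one labeled vertex.

```python
import numpy as np

def harmonic_prediction(W, labeled_idx, labels, n_classes):
    """W: symmetric non-negative graph weight matrix with zero diagonal.
    labeled_idx: indices of the selected batch of labeled vertices; labels: their classes.
    Predicts the remaining vertices with the harmonic solution f_u = L_uu^{-1} W_ul Y_l."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                      # combinatorial Laplacian
    labeled_idx = np.asarray(labeled_idx)
    labels = np.asarray(labels)
    u = np.setdiff1d(np.arange(n), labeled_idx)         # unlabeled vertices
    Y = np.zeros((labeled_idx.size, n_classes))
    Y[np.arange(labeled_idx.size), labels] = 1.0        # one-hot labels on the batch
    F_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ Y)
    pred = np.empty(n, dtype=int)
    pred[labeled_idx] = labels
    pred[u] = F_u.argmax(axis=1)
    return pred
```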
5 0.50826865 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection
Author: Roy Anati, Kostas Daniilidis
Abstract: We present a system which constructs a topological map of an environment given a sequence of images. This system includes a novel image similarity score which uses dynamic programming to match images using both the appearance and relative positions of local features simultaneously. Additionally, an MRF is constructed to model the probability of loop-closures. A locally optimal labeling is found using Loopy-BP. Finally we outline a method to generate a topological map from loop closure data. Results, presented on four urban sequences and one indoor sequence, outperform the state of the art. 1
6 0.50193292 89 nips-2009-FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs
7 0.4863897 201 nips-2009-Region-based Segmentation and Object Detection
8 0.47340316 258 nips-2009-Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise
9 0.45424074 141 nips-2009-Local Rules for Global MAP: When Do They Work ?
10 0.45224789 175 nips-2009-Occlusive Components Analysis
11 0.44933555 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis
12 0.43758833 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation
13 0.42387569 243 nips-2009-The Ordered Residual Kernel for Robust Motion Subspace Clustering
14 0.42069116 82 nips-2009-Entropic Graph Regularization in Non-Parametric Semi-Supervised Classification
15 0.41465658 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition
16 0.40381873 44 nips-2009-Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships
17 0.40340954 251 nips-2009-Unsupervised Detection of Regions of Interest Using Iterative Link Analysis
18 0.39268723 71 nips-2009-Distribution-Calibrated Hierarchical Classification
19 0.38428518 249 nips-2009-Tracking Dynamic Sources of Malicious Activity at Internet Scale
20 0.37879968 143 nips-2009-Localizing Bugs in Program Executions with Graphical Models
topicId topicWeight
[(11, 0.31), (24, 0.027), (25, 0.135), (35, 0.04), (36, 0.081), (39, 0.072), (58, 0.048), (71, 0.032), (81, 0.034), (86, 0.071), (91, 0.012)]
simIndex simValue paperId paperTitle
same-paper 1 0.8216117 149 nips-2009-Maximin affinity learning of image segmentation
Author: Kevin Briggman, Winfried Denk, Sebastian Seung, Moritz N. Helmstaedter, Srinivas C. Turaga
Abstract: Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity. 1
2 0.71008217 248 nips-2009-Toward Provably Correct Feature Selection in Arbitrary Domains
Author: Dimitris Margaritis
Abstract: In this paper we address the problem of provably correct feature selection in arbitrary domains. An optimal solution to the problem is a Markov boundary, which is a minimal set of features that make the probability distribution of a target variable conditionally invariant to the state of all other features in the domain. While numerous algorithms for this problem have been proposed, their theoretical correctness and practical behavior under arbitrary probability distributions is unclear. We address this by introducing the Markov Boundary Theorem that precisely characterizes the properties of an ideal Markov boundary, and use it to develop algorithms that learn a more general boundary that can capture complex interactions that only appear when the values of multiple features are considered together. We introduce two algorithms: an exact, provably correct one as well a more practical randomized anytime version, and show that they perform well on artificial as well as benchmark and real-world data sets. Throughout the paper we make minimal assumptions that consist of only a general set of axioms that hold for every probability distribution, which gives these algorithms universal applicability. 1 Introduction and Motivation The problem of feature selection has a long history due to its significance in a wide range of important problems, from early ones like pattern recognition to recent ones such as text categorization, gene expression analysis and others. In such domains, using all available features may be prohibitively expensive, unnecessarily wasteful, and may lead to poor generalization performance, especially in the presence of irrelevant or redundant features. Thus, selecting a subset of features of the domain for use in subsequent application of machine learning algorithms has become a standard preprocessing step. A typical task of these algorithms is learning a classifier: Given a number of input features and a quantity of interest, called the target variable, choose a member of a family of classifiers that can predict the target variable’s value as well as possible. Another task is understanding the domain and the quantities that interact with the target quantity. Many algorithms have been proposed for feature selection. Unfortunately, little attention has been paid to the issue of their behavior under a variety of application domains that can be encountered in practice. In particular, it is known that many can fail under certain probability distributions such as ones that contain a (near) parity function [1], which contain interactions that only appear when the values of multiple features are considered together. There is therefore an acute need for algorithms that are widely applicable and can be theoretically proven to work under any probability distribution. In this paper we present two such algorithms, an exact and a more practical randomized approximate one. We use the observation (first made in Koller and Sahami [2]) that an optimal solution to the problem is a Markov boundary, defined to be a minimal set of features that make the probability distribution of a target variable conditionally invariant to the state of all other features in the domain (a more precise definition is given later in Section 3) and present a family of algorithms for learning 1 the Markov boundary of a target variable in arbitrary domains. 
We first introduce a theorem that exactly characterizes the minimal set of features necessary for probabilistically isolating a variable, and then relax this definition to derive a family of algorithms that learn a parameterized approximation of the ideal boundary that are provably correct under a minimal set of assumptions, including a set of axioms that hold for any probability distribution. In the following section we present work on feature selection, followed by notation and definitions in Section 3. We subsequently introduce an important theorem and the aforementioned parameterized family of algorithms in Sections 4 and 5 respectively, including a practical anytime version. We evaluate these algorithms in Section 6 and conclude in Section 7. 2 Related Work Numerous algorithms have been proposed for feature selection. At the highest level algorithms can be classified as filter, wrapper, or embedded methods. Filter methods work without consulting the classifier (if any) that will make use of their output i.e., the resulting set of selected features. They therefore have typically wider applicability since they are not tied to any particular classifier family. In contrast, wrappers make the classifier an integral part of their operation, repeatedly invoking it to evaluate each of a sequence of feature subsets, and selecting the subset that results in minimum estimated classification error (for that particular classifier). Finally, embedded algorithms are classifier-learning algorithms that perform feature selection implicitly during their operation e.g., decision tree learners. Early work was motivated by the problem of pattern recognition which inherently contains a large number of features (pixels, regions, signal responses at multiple frequencies etc.). Narendra and Fukunaga [3] first cast feature selection as a problem of maximization of an objective function over the set of features to use, and proposed a number of search approaches including forward selection and backward elimination. Later work by machine learning researchers includes the FOCUS algorithm of Almuallim and Dietterich [4], which is a filter method for deterministic, noise-free domains. The RELIEF algorithm [5] instead uses a randomized selection of data points to update a weight assigned to each feature, selecting the features whose weight exceeds a given threshold. A large number of additional algorithms have appeared in the literature, too many to list here—good surveys are included in Dash and Liu [6]; Guyon and Elisseeff [1]; Liu and Motoda [7]. An important concept for feature subset selection is relevance. Several notions of relevance are discussed in a number of important papers such as Blum and Langley [8]; Kohavi and John [9]. The argument that the problem of feature selection can be cast as the problem of Markov blanket discovery was first made convincingly in Koller and Sahami [2], who also presented an algorithm for learning an approximate Markov blanket using mutual information. Other algorithms include the GS algorithm [10], originally developed for learning of the structure of a Bayesian network of a domain, and extensions to it [11] including the recent MMMB algorithm [12]. Meinshausen and B¨ hlmann [13] u recently proposed an optimal theoretical solution to the problem of learning the neighborhood of a Markov network when the distribution of the domain can be assumed to be a multidimensional Gaussian i.e., linear relations among features with Gaussian noise. 
This assumption implies that the Composition axiom holds in the domain (see Pearl [14] for a definition of Composition); the difference with our work is that we address here the problem in general domains where it may not necessarily hold. 3 Notation and Preliminaries In this section we present notation, fundamental definitions and axioms that will be subsequently used in the rest of the paper. We use the term “feature” and “variable” interchangeably, and denote variables by capital letters (X, Y etc.) and sets of variables by bold letters (S, T etc.). We denote the set of all variables/features in the domain (the “universe”) by U. All algorithms presented are independence-based, learning the Markov boundary of a given target variable using the truth value of a number of conditional independence statements. The use of conditional independence for feature selection subsumes many other criteria proposed in the literature. In particular, the use of classification accuracy of the target variable can be seen as a special case of testing for its conditional independence with some of its predictor variables (conditional on the subset selected at any given moment). A benefit of using conditional independence is that, while classification error estimates depend on the classifier family used, conditional independence does not. In addition, algorithms utilizing conditional independence for feature selection are applicable to all domain types, 2 e.g., discrete, ordinal, continuous with non-linear or arbitrary non-degenerate associations or mixed domains, as long as a reliable estimate of probabilistic independence is available. We denote probabilistic independence by the symbol “ ⊥ ” i.e., (X⊥ Y | Z) denotes the fact ⊥ ⊥ that the variables in set X are (jointly) conditionally independent from those in set Y given the values of the variables in set Z; (X⊥ Y | Z) denotes their conditional dependence. We assume ⊥ the existence of a probabilistic independence query oracle that is available to answer any query of the form (X, Y | Z), corresponding to the question “Is the set of variables in X independent of the variables in Y given the value of the variables in Z?” (This is similar to the approach of learning from statistical queries of Kearns and Vazirani [15].) In practice however, such an oracle does not exist, but can be approximated by a statistical independence test on a data set. Many tests of independence have appeared and studied extensively in the statistical literature over the last century; in this work we use the χ2 (chi-square) test of independence [16]. A Markov blanket of variable X is a set of variables such that, after fixing (by “knowing”) the value of all of its members, the set of remaining variables in the domain, taken together as a single setvalued variable, are statistically independent of X. More precisely, we have the following definition. Definition 1. A set of variables S ⊆ U is called a Markov blanket of variable X if and only if (X⊥ U − S − {X} | S). ⊥ Intuitively, a Markov blanket S of X captures all the information in the remaining domain variables U − S − {X} that can affect the probability distribution of X, making their value redundant as far as X is concerned (given S). 
The blanket therefore captures the essence of the feature selection problem for target variable X: By completely “shielding” X, a Markov blanket precludes the existence of any possible information about X that can come from variables not in the blanket, making it an ideal solution to the feature selection problem. A minimal Markov blanket is called a Markov boundary. Definition 2. A set of variables S ⊆ U − {X} is called a Markov boundary of variable X if it is a minimal Markov blanket of X i.e., none of its proper subsets is a Markov blanket. Pearl [14] proved that that the axioms of Symmetry, Decomposition, Weak Union, and Intersection are sufficient to guarantee a unique Markov boundary. These are shown below together with the axiom of Contraction. (Symmetry) (Decomposition) (Weak Union) (Contraction) (Intersection) (X⊥ Y | Z) =⇒ (Y⊥ X | Z) ⊥ ⊥ (X⊥ Y ∪ W | Z) =⇒ (X⊥ Y | Z) ∧ (X⊥ W | Z) ⊥ ⊥ ⊥ (X⊥ Y ∪ W | Z) =⇒ (X⊥ Y | Z ∪ W) ⊥ ⊥ (X⊥ Y | Z) ∧ (X⊥ W | Y ∪ Z) =⇒ (X⊥ Y ∪ W | Z) ⊥ ⊥ ⊥ (X⊥ Y | Z ∪ W) ∧ (X⊥ W | Z ∪ Y) =⇒ (X⊥ Y ∪ W | Z) ⊥ ⊥ ⊥ (1) The Symmetry, Decomposition, Contraction and Weak Union axioms are very general: they are necessary axioms for the probabilistic definition of independence i.e., they hold in every probability distribution, as their proofs are based on the axioms of probability theory. Intersection is not universal but it holds in distributions that are positive, i.e., any value combination of the domain variables has a non-zero probability of occurring. 4 The Markov Boundary Theorem According to Definition 2, a Markov boundary is a minimal Markov blanket. We first introduce a theorem that provides an alternative, equivalent definition of the concept of Markov boundary that we will relax later in the paper to produce a more general boundary definition. Theorem 1 (Markov Boundary Theorem). Assuming that the Decomposition and Contraction axioms hold, S ⊆ U − {X} is a Markov boundary of variable X ∈ U if and only if ∀ T ⊆ U − {X}, T ⊆ U − S ⇐⇒ (X⊥ T | S − T) . ⊥ (2) A detailed proof cannot be included here due to space constraints but a proof sketch appears in Appendix A. According to the above theorem, a Markov boundary S partitions the powerset of U − {X} into two parts: (a) set P1 that contains all subsets of U − S, and (b) set P2 containing the remaining subsets. All sets in P1 are conditionally independent of X, and all sets in P2 are conditionally dependent with X. Intuitively, the two directions of the logical equivalence relation of Eq. (2) correspond to the concept of Markov blanket and its minimality i.e., the equation ∀ T ⊆ U − {X}, T ⊆ U − S =⇒ (X⊥ T | S − T) ⊥ 3 Algorithm 1 The abstract GS(m) (X) algorithm. Returns an m-Markov boundary of X. 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: S←∅ /* Growing phase. */ for all Y ⊆ U − S − {X} such that 1 ≤ |Y| ≤ m do if (X ⊥ Y | S) then ⊥ S←S∪Y goto line 3 /* Restart loop. */ /* Shrinking phase. */ for all Y ∈ S do if (X⊥ Y | S − {Y }) then ⊥ S ← S − {Y } goto line 8 /* Restart loop. */ return S or, equivalently, (∀ T ⊆ U − S − {X}, (X⊥ T | S)) (as T and S are disjoint) corresponds to ⊥ the definition of Markov blanket, as it includes T = U − S − {X}. In the opposite direction, the contrapositive form is ∀ T ⊆ U − {X}, T ⊆ U − S =⇒ (X ⊥ T | S − T) . ⊥ This corresponds to the concept of minimality of the Markov boundary: It states that all sets that contain a part of S cannot be independent of X given the remainder of S. 
Informally, this is because if there existed some set T that contained a non-empty subset T′ of S such that (X⊥ T | S − T), ⊥ then one would be able to shrink S by T′ (by the property of Contraction) and therefore S would not be minimal (more details in Appendix A). 5 A Family of Algorithms for Arbitrary Domains Theorem 1 defines conditions that precisely characterize a Markov boundary and thus can be thought of as an alternative definition of a boundary. By relaxing these conditions we can produce a more general definition. In particular, an m-Markov boundary is defined as follows. Definition 3. A set of variables S ⊆ U − {X} of a domain U is called an m-Markov boundary of variable X ∈ U if and only if ∀ T ⊆ U − {X} such that |T| ≤ m, T ⊆ U − S ⇐⇒ (X⊥ T | S − T) . ⊥ We call the parameter m of an m-Markov boundary the Markov boundary margin. Intuitively, an m-boundary S guarantees that (a) all subsets of its complement (excluding X) of size m or smaller are independent of X given S, and (b) all sets T of size m or smaller that are not subsets of its complement are dependent of X given the part of S that is not contained in T. This definition is a special case of the properties of a boundary stated in Theorem 1, with each set T mentioned in the theorem now restricted to having size m or smaller. For m = n − 1, where n = |U |, the condition |T| ≤ m is always satisfied and can be omitted; in this case the definition of an (n − 1)-Markov boundary results in exactly Eq. (2) of Theorem 1. We now present an algorithm called GS(m) , shown in Algorithm 1, that provably correctly learns an m-boundary of a target variable X. GS(m) operates in two phases, a growing and a shrinking phase (hence the acronym). During the growing phase it examines sets of variables of size up to m, where m is a user-specified parameter. During the shrinking phase, single variables are examined for conditional independence and possible removal from S (examining sets in the shrinking phase is not necessary for provably correct operation—see Appendix B). The orders of examination of the sets for possible addition and deletion from the candidate boundary are left intentionally unspecified in Algorithm 1—one can therefore view it as an abstract representative of a family of algorithms, with each member specifying one such ordering. All members of this family are m-correct, as the proof of correctness does not depend on the ordering. In practice numerous choices for the ordering exist; one possibility is to examine the sets in the growing phase in order of increasing set size and, for each such size, in order of decreasing conditional mutual information I(X, Y, S) between X and Y given S. The rationale for this heuristic choice is that (usually) tests with smaller conditional sets tend to be more reliable, and sorting by mutual information tends to lessen the chance of adding false members of the Markov boundary. We used this implementation in all our experiments, presented later in Section 6. Intuitively, the margin represents a trade-off between sample and computational complexity and completeness: For m = n − 1 = |U| − 1, the algorithm returns a Markov boundary in unrestricted 4 Algorithm 2 The RGS(m,k) (X) algorithm, a randomized anytime version of the GS(m) algorithm, utilizing k random subsets for the growing phase. 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: S←∅ /* Growing phase. 
*/ repeat Schanged ← false Y ← subset of U − S − {X} of size 1 ≤ |Y| ≤ m of maximum dependence out of k random subsets if (X ⊥ Y | S) then ⊥ S←S∪Y Schanged ← true until Schanged = false /* Shrinking phase. */ for all Y ∈ S do if (X⊥ Y | S − {Y }) then ⊥ S ← S − {Y } goto line 11 /* Restart loop. */ return S (arbitrary) domains. For 1 ≤ m < n − 1, GS(m) may recover the correct boundary depending on characteristics of the domain. For example, it will recover the correct boundary in domains containing embedded parity functions such that the number of variables involved in every k-bit parity function is m + 1 or less i.e., if k ≤ m + 1 (parity functions are corner cases in the space of probability distributions that are known to be hard to learn [17]). The proof of m-correctness of GS(m) is included in Appendix B. Note that it is based on Theorem 1 and the universal axioms of Eqs. (1) only i.e., Intersection is not needed, and thus it is widely applicable (to any domain). A Practical Randomized Anytime Version While GS(m) is provably correct even in difficult domains such as those that contain parity functions, it may be impractical with a large number of features as its asymptotic complexity is O(nm ). We therefore also we here provide a more practical randomized version called RGS(m,k) (Randomized GS(m) ), shown in Algorithm 2. The RGS(m,k) algorithm has an additional parameter k that limits its computational requirements: instead of exhaustively examining all possible subsets of (U −S−{X}) (as GS(m) does), it instead samples k subsets from the set of all possible subsets of (U − S − {X}), where k is user-specified. It is therefore a randomized algorithm that becomes equivalent to GS(m) given a large enough k. Many possibilities for the method of random selection of the subsets exist; in our experiments we select a subset Y = {Yi } (1 ≤ |Y| ≤ m) with probability proportional |Y| to i=1 (1/p(X, Yi | S)), where p(X, Yi | S) is the p-value of the corresponding (univariate) test between X and Yi given S, which has a low computational cost. The RGS(m,k) algorithm is useful in situations where the amount of time to produce an answer may be limited and/or the limit unknown beforehand: it is easy to show that the growing phase of GS(m) produces an an upper-bound of the m-boundary of X. Therefore, the RGS(m,k) algorithm, if interrupted, will return an approximation of this upper bound. Moreover, if there exists time for the shrinking phase to be executed (which conducts a number of tests linear in n and is thus fast), extraneous variables will be removed and a minimal blanket (boundary) approximation will be returned. These features make it an anytime algorithm, which is a more appropriate choice for situations where critical events may occur that require the interruption of computation, e.g., during the planning phase of a robot, which may be interrupted at any time due to an urgent external event that requires a decision to be made based on the present state’s feature values. 6 Experiments We evaluated the GS(m) and the RGS(m,k) algorithms on synthetic as well as real-world and benchmark data sets. We first systematically examined the performance on the task of recovering near-parity functions, which are known to be hard to learn [17]. We compared GS(m) and RGS(m,k) with respect to accuracy of recovery of the original boundary as well as computational cost. 
We generated domains of sizes ranging from 10 to 100 variables, of which 4 variables (X1 to X4 ) were related through a near-parity relation with bit probability 0.60 and various degrees of noise. The remaining independent variables (X5 to Xn ) act as “dis5 GS(1) GS(3) RGS(1, 1000) 0 0.05 RGS(3, 1000) Relieved, threshold = 0.001 Relieved, threshold = 0.03 0.1 0.15 0.2 0.25 Noise probability 0.3 0.35 0.4 Probabilistic isolation performance of GS(m) and RELIEVED Probabilistic isolation performance of GS(m) and RGS(m ,k) Real-world and benchmark data sets 1 Data set Balance scale Balloons Car evaluation Credit screening Monks Nursery Tic-tac-toe Breast cancer Chess Audiology 0.8 0.6 0.4 0.2 0 0 0.2 0.4 (m = 3) GS 0.6 0.8 1 Real-world and benchmark data sets RGS(m = 3, k = 300) average isolation measure F1 measure 50 variables, true Markov boundary size = 3 Bernoulli probability = 0.6, 1000 data points RELIEVED(threshold = 0.03) average isolation measure F1 measure of GS(m ), RGS(m, k ) and RELIEVED vs. noise level 1.3 1.2 1.1 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 1 Data set Balance scale Balloons Car evaluation Credit screening Monks Nursery Tic-tac-toe Breast cancer Chess Audiology 0.8 0.6 0.4 0.2 0 0 0.2 0.4 (m = 3) average isolation measure GS 0.6 0.8 1 average isolation measure Figure 2: Left: F1 measure of GS(m) , RGS(m,k) and RELIEVED under increasing amounts of noise. Middle: Probabilistic isolation performance comparison between GS(3) and RELIEVED on real-world and benchmark data sets. Right: Same for GS(3) and RGS(3,1000) . tractors” and had randomly assigned probabilities i.e., the correct boundary of X1 is B1 = {X2 , X3 , X4 }. In such domains, learning the boundary of X1 is difficult because of the large number of distractors and because each Xi ∈ B1 is independent of X1 given any proper subset of B1 − {Xi } (they only become dependent when including all of them in the conditioning set). To measure an algorithm’s feature selection performance, acF -measure of GS and RGS vs. domain size curacy (fraction of variables correctly included or excluded) is inappropriate as the accuracy of trivial algorithms such as returning the empty set will tend to 1 as n increases. Precision and recall are therefore more appropriate, with precision defined as the fraction of features returned that are in the correct boundary (3 features for X1 ), and recall as the fraction of the features present in the correct boundary that are returned by the algorithm. A convenient and frequently used Number of domain variables measure that combines precision and recall is the F1 meaRunning time of GS and RGS vs. domain size sure, defined as the harmonic mean of precision and recall [18]. In Fig. 1 (top) we report 95% confidence intervals for the F1 measure and execution time of GS(m) (margins m = 1 to 3) and RGS(m,k) (margins 1 to 3 and k = 1000 random subsets), using 20 data sets containing 10 to 100 variables, with the target variable X1 was perturbed (inverted) by noise with 10% probability. As can be seen, the RGS(m,k) and GS(m) using the same value for margin perform comparably Number of domain variables with respect to F1 , up to their 95% confidence intervals. With Figure 1: GS(m) and RGS(m,k) per(m,k) respect to execution time however RGS exhibits much formance with respect to domain greater scalability (Fig. 1 bottom, log scale); for example, it size (number of variables). Top: F1 executes in about 10 seconds on average in domains containmeasure, reflecting accuracy. 
Bot(m) ing 100 variables, while GS executes in 1,000 seconds on tom: Execution time in seconds (log average for this domain size. scale). We also compared GS(m) and RGS(m,k) to RELIEF [5], a well-known algorithm for feature selection that is known to be able to recover parity functions in certain cases [5]. RELIEF learns a weight for each variable and compares it to a threshold τ to decide on its inclusion in the set of relevant variables. As it has been reported [9] that RELIEF can exhibit large variance due to randomization that is necessary only for very large data sets, we instead used a deterministic variant called RELIEVED [9], whose behavior corresponds to RELIEF at the limit of infinite execution time. We calculated the F1 measure for GS(m) , RGS(m,k) and RELIEVED in the presence of varying amounts of noise, with noise probability ranging from 0 (no noise) to 0.4. We used domains containing 50 variables, as GS(m) becomes computationally demanding in larger domains. In Figure 2 (left) we show the performance of GS(m) and RGS(m,k) for m equal to 1 and 3, k = 1000 and RELIEVED for thresholds τ = 0.01 and 0.03 for various amounts of noise on the target variable. Again, each experiment was repeated 20 times to generate 95% confidence intervals. We can observe that even though m = 1 (equivalent to the GS algorithm) performs poorly, increasing the margin m makes it more likely to recover the correct Markov boundary, and GS(3) (m = 3) recovers the exact blanket even with few (1,000) data points. RELIEVED does comparably to GS(3) for little noise and for a large threshold, (m ) (m, k ) 1 True Markov boundary size = 3, 1000 data points Bernoulli probability = 0.6, noise probability = 0.1 1 0.9 GS(1) GS(2) GS(3) RGS(1, 1000) (2, 1000) RGS (3, 1000) RGS 0.8 F1-measure 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 10 20 30 40 50 60 (m ) 70 80 90 100 90 100 (m, k ) True Markov boundary size = 3, 1000 data points Bernoulli probability = 0.6, noise probability = 0.1 10000 GS(1) GS(2) GS(3) Execution time (sec) 1000 RGS(1, 1000) RGS(2, 1000) RGS(3, 1000) 100 10 1 0.1 0.01 10 6 20 30 40 50 60 70 80 but appears to deteriorate for more noisy domains. As we can see it is difficult to choose the “right” threshold for RELIEVED—better performing τ at low noise can become worse in noisy environments; in particular, small τ tend to include irrelevant variables while large τ tend to miss actual members. We also evaluated GS(m) , RGS(m,k) , and RELIEVED on benchmark and real-world data sets from the UCI Machine Learning repository. As the true Markov boundary for these is impossible to know, we used as performance measure a measure of probabilistic isolation by the Markov boundary returned of subsets outside the boundary. For each domain variable X, we measured the independence of subsets Y of size 1, 2 and 3 given the blanket S of X returned by GS(3) and RELIEVED for τ = 0.03 (as this value seemed to do better in the previous set of experiments), as measured by the average p-value of the χ2 test between X and Y given S (with p-values of 0 and 1 indicating ideal dependence and independence, respectively). Due to the large number of subsets outside the boundary when the boundary is small, we limited the estimation of isolation performance to 2,000 subsets per variable. We plot the results in Figure 2 (middle and right). Each point represents a variable in the corresponding data set. 
Points under the diagonal indicate better probabilistic isolation performance for that variable for GS(3) compared to RELIEVED (middle plot) or to RGS(3,1000) (right plot). To obtain a statistically significant comparison, we used the non-parametric Wilcoxon paired signed-rank test, which indicated that GS(3) RGS(3,1000) are statistically equivalent to each other, while both outperformed RELIEVED at the 99.99% significance level (α < 10−7 ). 7 Conclusion In this paper we presented algorithms for the problem of feature selection in unrestricted (arbitrary distribution) domains that may contain complex interactions that only appear when the values of multiple features are considered together. We introduced two algorithms: an exact, provably correct one as well a more practical randomized anytime version, and evaluated them on on artificial, benchmark and real-world data, demonstrating that they perform well, even in the presence of noise. We also introduced the Markov Boundary Theorem that precisely characterizes the properties of a boundary, and used it to prove m-correctness of the exact family of algorithms presented. We made minimal assumptions that consist of only a general set of axioms that hold for every probability distribution, giving our algorithms universal applicability. Appendix A: Proof sketch of the Markov Boundary Theorem Proof sketch. (=⇒ direction) We need to prove that if S is a Markov boundary of X then (a) for every set T ⊆ U − S − {X}, (X⊥ T | S − T), and (b) for every set T′ ⊆ U − S that does not ⊥ contain X, (X ⊥ T′ | S − T′ ). Case (a) is immediate from the definition of the boundary and the ⊥ Decomposition theorem. Case (b) can be proven by contradiction: Assuming the independence of T′ that contains a non-empty part T′ in S and a part T′ in U − S, we get (from Decomposition) 1 2 (X⊥ T′ | S − T′ ). We can then use Contraction to show that the set S − T′ satisfies the inde⊥ 1 1 1 pendence property of a Markov boundary, i.e., that (X⊥ U − (S − T′ ) − {X} | S − T′ ), which ⊥ 1 1 contradicts the assumption that S is a boundary (and thus minimal). (⇐= direction) We need to prove that if Eq. (2) holds, then S is a minimal Markov blanket. The proof that S is a blanket is immediate. We can prove minimality by contradiction: Assume S = S1 ∪ S2 with S1 a blanket and S2 = ∅ i.e., S1 is a blanket strictly smaller than S. Then (X⊥ S2 | ⊥ S1 ) = (X⊥ S2 | S − S2 ). However, since S2 ⊆ U − S, from Eq. (2) we get (X ⊥ S2 | S − S2 ), ⊥ ⊥ which is a contradiction. Appendix B: Proof of m-Correctness of GS(m) Let the value of the set S at the end of the growing phase be SG , its value at the end of the shrinking phase SS , and their difference S∆ = SG − SS . The following two observations are immediate. Observation 1. For every Y ⊆ U − SG − {X} such that 1 ≤ |Y| ≤ m, (X⊥ Y | SG ). ⊥ Observation 2. For every Y ∈ SS , (X ⊥ Y | SS − {Y }). ⊥ Lemma 2. Consider variables Y1 , Y2 , . . . , Yt for some t ≥ 1 and let Y = {Yj }t . Assuming that j=1 Contraction holds, if (X⊥ Yi | S − {Yj }i ) for all i = 1, . . . , t, then (X⊥ Y | S − Y). ⊥ ⊥ j=1 Proof. By induction on Yj , j = 1, 2, . . . , t, using Contraction to decrease the conditioning set S t down to S − {Yj }i j=1 for all i = 1, 2, . . . , t. Since Y = {Yj }j=1 , we immediately obtain the desired relation (X⊥ Y | S − Y). ⊥ 7 Lemma 2 can be used to show that the variables found individually independent of X during the shrinking phase are actually jointly independent of X, given the final set SS . Let S∆ = {Y1 , Y2 , . . . 
, Yt } be the set of variables removed (in that order) from SG to form the final set SS i.e., S∆ = SG − SS . Using the above lemma, the following is immediate. Corollary 3. Assuming that the Contraction axiom holds, (X⊥ S∆ | SS ). ⊥ Lemma 4. If the Contraction, Decomposition and Weak Union axioms hold, then for every set T ⊆ U − SG − {X} such that (X⊥ T | SG ), ⊥ (X⊥ T ∪ (SG − SS ) | SS ). ⊥ (3) Furthermore SS is minimal i.e., there does not exist a subset of SS for which Eq. (3) is true. Proof. From Corollary 3, (X⊥ S∆ | SS ). Also, by the hypothesis, (X⊥ T | SG ) = (X⊥ T | ⊥ ⊥ ⊥ SS ∪ S∆ ), where S∆ = SG − SS as usual. From these two relations and Contraction we obtain (X⊥ T ∪ S∆ | SS ). ⊥ To prove minimality, let us assume that SS = ∅ (if SS = ∅ then it is already minimal). We prove by contradiction: Assume that there exists a set S′ ⊂ SS such that (X⊥ T ∪ (SG − S′ ) | S′ ). Let ⊥ W = SS − S′ = ∅. Note that W and S′ are disjoint. We have that SS ⊆ SS ∪ S∆ =⇒ SS − S′ ⊆ SS ∪ S∆ − S′ ⊆ T ∪ (SS ∪ S∆ − S′ ) =⇒ W ⊆ T ∪ (SS ∪ S∆ − S′ ) = T ∪ (SG − S′ ) • Since (X⊥ T ∪ (SG − S′ ) | S′ ) and W ⊆ T ∪ (SS ∪ S∆ − S′ ), from Decomposition we ⊥ get (X⊥ W | S′ ). ⊥ • From (X⊥ W | S′ ) and Weak Union we have that for every Y ∈ W, (X⊥ Y | S′ ∪ ⊥ ⊥ (W − {Y })). • Since S′ and W are disjoint and since Y ∈ W, Y ∈ S′ . Applying the set equality (A − B) ∪ C = (A ∪ B) − (A − C) to S′ ∪ (W − {Y }) we obtain S′ ∪ W − ({Y } − S′ ) = SS − {Y }. • Therefore, ∀ Y ∈ W, (X⊥ Y | SS − {Y }). ⊥ However, at the end of the shrinking phase, all variables Y in SS (and therefore in W, as W ⊆ SS ) have been evaluated for independence and found dependent (Observation 2). Thus, since W = ∅, there exists at least one Y such that (X ⊥ Y | SS − {Y }), producing a contradiction. ⊥ Theorem 5. Assuming that the Contraction, Decomposition, and Weak Union axioms hold, Algorithm 1 is m-correct with respect to X. Proof. We use the Markov Boundary Theorem. We first prove that ∀ T ⊆ U − {X} such that |T| ≤ m, T ⊆ U − SS =⇒ (X⊥ T | SS − T) ⊥ or, equivalently, ∀ T ⊆ U − SS − {X} such that |T| ≤ m, (X⊥ T | SS ). ⊥ Since U − SS − {X} = S∆ ∪ (U − SG − {X}), S∆ and U − SG − {X} are disjoint, there are three kinds of sets of size m or less to consider: (i) all sets T ⊆ S∆ , (ii) all sets T ⊆ U − SG − {X}, and (iii) all sets (if any) T = T′ ∪ T′′ , T′ ∩ T′′ = ∅, that have a non-empty part T′ ⊆ S∆ and a non-empty part T′′ ⊆ U − SG − {X}. (i) From Corollary 3, (X⊥ S∆ | SS ). Therefore, from Decomposition, for any set T ⊆ S∆ , ⊥ (X⊥ T | SS ). ⊥ (ii) By Observation 1, for every set T ⊆ U − SG − {X} such that |T| ≤ m, (X⊥ T | SG ). ⊥ By Lemma 4 we get (X⊥ T ∪ S∆ | SS ), from which we obtain (X⊥ T | SS ) by ⊥ ⊥ Decomposition. (iii) Since |T| ≤ m, we have that |T′′ | ≤ m. Since T′′ ⊆ U − SG − {X}, by Observation 1, (X⊥ T′′ | SG ). Therefore, by Lemma 4, (X⊥ T′′ ∪ S∆ | SS ). Since T′ ⊆ S∆ ⇒ ⊥ ⊥ T′′ ∪ T′ ⊆ T′′ ∪ S∆ , by Decomposition to obtain (X⊥ T′′ ∪ T′ | SS ) = (X⊥ T | SS ). ⊥ ⊥ To complete the proof we need to prove that ∀ T ⊆ U − {X} such that |T| ≤ m, T ⊆ U − SS =⇒ (X ⊥ T | SS − T) . ⊥ Let T = T1 ∪ T2 , with T1 ⊆ SS and T2 ⊆ U − SS . Since T ⊆ U − SS , T1 contains at least one variable Y ∈ SS . From Observation 2, (X ⊥ Y | SS − {Y }). From this and (the contrapositive of) ⊥ Weak Union, we get (X ⊥ {Y } ∪ (T1 − {Y }) | SS − {Y } − (T1 − {Y })) = (X ⊥ T1 | SS − T1 ). 
⊥ ⊥ From (the contrapositive of) Decomposition we get (X ⊥ T1 ∪ T2 | SS − T1 ) = (X ⊥ T | ⊥ ⊥ SS − T1 ), which is equal to (X ⊥ T | SS − T1 − T2 ) = (X ⊥ T | SS − T) as SS and T2 are ⊥ ⊥ disjoint. 8 References [1] Isabelle Guyon and Andr´ Elisseeff. An introduction to variable and feature selection. Journal e of Machine Learning Research, 3:1157–1182, 2003. [2] Daphne Koller and Mehran Sahami. Toward optimal feature selection. In Proceedings of the Tenth International Conference on Machine Learning (ICML), pages 284–292, 1996. [3] P. M. Narendra and K. Fukunaga. A branch and bound algorithm for feature subset selection. IEEE Transactions on Computers, C-26(9):917–922, 1977. [4] H. Almuallim and T. G. Dietterich. Learning with many irrelevant features. In Proceedings of the National Conference on the Americal Association for Artifical Intelligence (AAAI), 1991. [5] K. Kira and L. A. Rendell. The feature selection problem: Traditional methods and a new algorithm. In Proceedings of the National Conference on the Americal Association for Artifical Intelligence (AAAI), pages 129–134, 1992. [6] M. Dash and H. Liu. Feature selection for classification. Intelligent Data Analysis, 1(3): 131–156, 1997. [7] Huan Liu and Hiroshi Motoda, editors. Feature Extraction, Construction and Selection: A Data Mining Perspective, volume 453 of The Springer International Series in Engineering and Computer Science. 1998. [8] Avrim Blum and Pat Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245–271, 1997. [9] R. Kohavi and G. H. John. Wrappers for feature subset selection. Artificial Intelligence, 97 (1-2):273–324, 1997. [10] Dimitris Margaritis and Sebastian Thrun. Bayesian network induction via local neighborhoods. In Advances in Neural Information Processing Systems 12 (NIPS), 2000. [11] I. Tsamardinos, C. Aliferis, and A. Statnikov. Algorithms for large scale Markov blanket discovery. In Proceedings of the 16th International FLAIRS Conference, 2003. [12] I. Tsamardinos, C. Aliferis, and A. Statnikov. Time and sample efficient discovery of Markov blankets and direct causal relations. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 673–678, 2003. [13] N. Meinshausen and P. B¨ hlmann. High-dimensional graphs and variable selection with the u Lasso. Annals of Statistics, 34:1436–1462, 2006. [14] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. 1988. [15] Michael Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994. [16] A. Agresti. Categorical Data Analysis. John Wiley and Sons, 1990. [17] M. Kearns. Efficient noise-tolerant learning from statistical queries. J. ACM, 45(6):983–1006, 1998. [18] C. J. van Rijsbergen. Information Retrieval. Butterworth-Heinemann, London, 1979. 9
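The GS(m) grow-shrink search of Algorithm 1 above can be sketched in a few dozen lines. This is not the authors' implementation: the chi-square conditional-independence test below is a crude stratified stand-in for their independence oracle, and candidate subsets are examined by plain enumeration rather than the conditional mutual-information ordering they describe.

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def ci_test(data, x, ys, zs, alpha=0.05):
    """Crude chi-square test of (X independent of the joint variable Ys | Zs) on
    integer-coded discrete data, pooling the statistic over the strata defined by Zs.
    Returns True when independence is accepted."""
    strata = {}
    for row in data:
        strata.setdefault(tuple(row[z] for z in zs), []).append(
            (row[x], tuple(row[y] for y in ys)))
    stat, dof = 0.0, 0
    for pairs in strata.values():
        xv = sorted({p[0] for p in pairs})
        yv = sorted({p[1] for p in pairs})
        if len(xv) < 2 or len(yv) < 2:
            continue                                   # uninformative stratum
        counts = np.zeros((len(xv), len(yv)))
        for a, b in pairs:
            counts[xv.index(a), yv.index(b)] += 1
        expected = counts.sum(1, keepdims=True) * counts.sum(0, keepdims=True) / counts.sum()
        stat += ((counts - expected) ** 2 / np.maximum(expected, 1e-12)).sum()
        dof += (len(xv) - 1) * (len(yv) - 1)
    return dof == 0 or chi2.sf(stat, dof) > alpha

def gs_m(data, x, m=1, alpha=0.05):
    """Grow-shrink sketch of Algorithm 1 (GS(m)): grow S with dependent subsets of
    size up to m, then shrink by removing individually independent members."""
    variables = [v for v in range(data.shape[1]) if v != x]
    S, grown = set(), True
    while grown:                                       # growing phase
        grown = False
        rest = [v for v in variables if v not in S]
        for size in range(1, m + 1):
            for ys in combinations(rest, size):
                if not ci_test(data, x, list(ys), sorted(S), alpha):
                    S |= set(ys)
                    grown = True
                    break
            if grown:
                break
    shrunk = True
    while shrunk:                                      # shrinking phase
        shrunk = False
        for y in sorted(S):
            if ci_test(data, x, [y], sorted(S - {y}), alpha):
                S.remove(y)
                shrunk = True
                break
    return S
```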
3 0.56614983 260 nips-2009-Zero-shot Learning with Semantic Output Codes
Author: Mark Palatucci, Dean Pomerleau, Geoffrey E. Hinton, Tom M. Mitchell
Abstract: We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words. 1
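A minimal sketch of the two-stage semantic output code idea described above: map inputs into the semantic attribute space, then label by the nearest class attribute vector, including classes with no training examples. Ridge regression and cosine similarity are stand-ins chosen here for brevity, and the attribute encodings themselves (class_attributes) are an assumed input rather than part of the method.

```python
import numpy as np

def fit_soc(X_train, y_train, class_attributes, lam=1.0):
    """X_train: (n, d) features; y_train: length-n class names (seen classes only).
    class_attributes: dict mapping every class name, seen or unseen, to its semantic
    attribute vector. Returns a predict(X) function that may output unseen classes."""
    X = np.asarray(X_train, dtype=float)
    S = np.stack([class_attributes[c] for c in y_train]).astype(float)   # attribute targets
    # Ridge-regress from input features to the semantic attribute space.
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ S)
    names = list(class_attributes)
    P = np.stack([class_attributes[c] for c in names]).astype(float)
    P = P / np.maximum(np.linalg.norm(P, axis=1, keepdims=True), 1e-12)
    def predict(X_new):
        A = np.asarray(X_new, dtype=float) @ W                           # predicted attributes
        A = A / np.maximum(np.linalg.norm(A, axis=1, keepdims=True), 1e-12)
        return [names[i] for i in np.argmax(A @ P.T, axis=1)]            # nearest class vector
    return predict
```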
4 0.5428614 133 nips-2009-Learning models of object structure
Author: Joseph Schlecht, Kobus Barnard
Abstract: We present an approach for learning stochastic geometric models of object categories from single view images. We focus here on models expressible as a spatially contiguous assemblage of blocks. Model topologies are learned across groups of images, and one or more such topologies is linked to an object category (e.g. chairs). Fitting learned topologies to an image can be used to identify the object class, as well as detail its geometry. The latter goes beyond labeling objects, as it provides the geometric structure of particular instances. We learn the models using joint statistical inference over category parameters, camera parameters, and instance parameters. These produce an image likelihood through a statistical imaging model. We use trans-dimensional sampling to explore topology hypotheses, and alternate between Metropolis-Hastings and stochastic dynamics to explore instance parameters. Experiments on images of furniture objects such as tables and chairs suggest that this is an effective approach for learning models that encode simple representations of category geometry and the statistics thereof, and support inferring both category and geometry on held out single view images. 1
5 0.52982098 211 nips-2009-Segmenting Scenes by Matching Image Composites
Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman
Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.
7 0.51602668 25 nips-2009-Adaptive Design Optimization in Experiments with People
8 0.5157119 258 nips-2009-Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise
9 0.5103054 44 nips-2009-Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships
10 0.50771987 115 nips-2009-Individuation, Identification and Object Discovery
11 0.50709897 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition
13 0.50560999 201 nips-2009-Region-based Segmentation and Object Detection
14 0.50411063 175 nips-2009-Occlusive Components Analysis
15 0.5039804 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation
16 0.5016911 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes
17 0.50065601 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization
18 0.50011486 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction
19 0.49892294 17 nips-2009-A Sparse Non-Parametric Approach for Single Channel Separation of Known Sounds
20 0.49867043 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition