cvpr cvpr2013 cvpr2013-93 knowledge-graph by maker-knowledge-mining

93 cvpr-2013-Constraints as Features


Source: pdf

Author: Shmuel Asafi, Daniel Cohen-Or

Abstract: In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. Our method augments the original feature space with additional dimensions, each of which is derived from a given Cannot-link constraint. The specified Cannot-link pair gets extreme coordinate values, and the rest of the points get coordinate values that express the spatial influence of the specified constrained pair. After augmenting all the new features, a standard unconstrained clustering algorithm can be performed, like k-means or spectral clustering. We demonstrate the efficacy of our method for active semi-supervised learning applied to image segmentation and compare it to alternative methods. We also evaluate the performance of our method on the four most commonly evaluated datasets from the UCI machine learning repository.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. [sent-2, score-0.895]

2 The specified Cannot-link pair gets extreme coordinate values, and the rest of the points get coordinate values that express the spatial influence of the specified constrained pair. [sent-4, score-0.614]

3 After augmenting all the new features, a standard unconstrained clustering algorithm can be performed, like k-means or spectral clustering. [sent-5, score-0.724]

4 Unconstrained or unsupervised clustering problems, where none of the data is labelled, remain an active research field in many domains. [sent-10, score-0.384]

5 However, sometimes some supervision is given on a subset of the data in the form of labelled data, or as pairs of constraints; namely, Must-link and Cannot-link constraints that define a pair of data points that should or should not be associated with the same cluster [1]. [sent-11, score-0.508]

6 Must-links and Cannot-links impose direct constraints on the specified pair of data points; however, the main challenge is that they should have a non-local influence on the clustering result and should affect data points that are further away from the specified pairs. [sent-12, score-0.898]

7 Must-link constraints are usually considered an easier case than Cannot-link constraints, since Must-link constraints are transitive and can be represented as a shortening of the distance between the pair to zero or some other small constant [2, 3]. [sent-13, score-0.79]

8 On the other hand, Cannot-link constraints are non-transitive and have no obvious geometric interpretation [2, 6, 7]. [sent-14, score-0.296]

9 In this paper, we introduce a new approach for realizing constrained clustering, focusing on Cannot-link constraints. [sent-16, score-0.261]

10 The key idea is to embed the data points into a higher dimensional space, which augments the given space with additional dimensions derived from Cannot-link constraints. [sent-17, score-0.38]

11 The all-pairs distances among the data points in the augmented space combine both the original distances and the distances of points according to each Cannot-link constraint. [sent-19, score-0.639]

12 The actual clustering is then performed by some convenient clustering technique, like k-means or spectral clustering. [sent-20, score-0.956]

13 The technique that we introduce augments the space with additional coordinates that softly distribute the influence of the Cannot-link constraints, and defers the hard decisions to the actual clustering phase. [sent-22, score-0.61]

14 Background. The problem of constrained clustering has been a subject of research in the context of semi-supervised learning, where only a (typically small) subset of the data is labeled. [sent-27, score-0.637]

15 [5] first formulated the constraints in the form of pair-wise Must-Link and Cannot-Link constraints. [sent-29, score-0.296]

16 Early work on constrained clustering handled the constraints by directly modifying existing clustering methods to include explicit actions induced by the Must-Link or Cannot-link constraints. [sent-30, score-1.233]

17 The colouring is the ground-truth clustering of the data. [sent-35, score-0.338]

18 On the right is the high-dimensional space where an extra dimension was derived from the Cannot-link constraint and augmented to the original space. [sent-37, score-0.402]

19 In the middle are the corresponding distance matrices: the bottom distance matrix can much more easily be clustered to the two ground-truth clusters. [sent-38, score-0.28]

20 These methods considered only hard constraints, where any inconsistency in the given constraints led to poor results or to failure [8, 4]. [sent-40, score-0.296]

21 [2] interpret the Must-Link constraints geometrically by setting a zero distance between all such pairs, and then recalculate the distance matrix using an all-pairs-shortest-path algorithm, which enforces the triangle inequality and thereby restores a metric property. [sent-42, score-0.615]

22 In our work, we adopt their solution to the Must-Link constraints, and introduce a novel technique that also interprets the Cannot-Link constraints geometrically. [sent-44, score-0.296]

23 Another approach to consider the pair-wise constraints is to learn and define a distance metric that respects the given constraints. [sent-45, score-0.438]

24 The advantage of this approach is that the new distances can then be used with standard unconstrained clustering algorithms. [sent-47, score-0.576]

25 This decoupling of the clustering method from the embedding of the pair-wise constraints is also realized in our approach. [sent-48, score-0.664]

26 However, the metric learning methods do not embed the constraints in the dataset itself, thus they might not be able to warp or bend the space significantly enough to respect the given constraints. [sent-49, score-0.423]

27 A recent approach is to apply the constraints into spectral clustering methods. [sent-50, score-0.88]

28 Spectral clustering considers the affinity matrix A, which expresses the similarity of pairs of points (Ai,j), rather than the distance matrix directly. [sent-51, score-0.855]

29 The idea was first presented by Kamvar and Klein [12] by applying simple local changes to the affinity matrix: Must-Link pairs were set to Ai,j = 1, meaning they are very similar, and Cannot-Link pairs were set to Ai,j = 0, meaning they are very dissimilar. [sent-53, score-0.264]
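
As a minimal sketch of this local modification, assuming a precomputed symmetric affinity matrix A and constraint lists of index pairs (the function name and interface are illustrative, not taken from [12]):

```python
import numpy as np

def apply_constraints_to_affinity(A, must_links, cannot_links):
    """Local affinity edits in the style of Spectral Learning [12] (sketch).

    A: (K, K) symmetric affinity matrix with entries in [0, 1].
    must_links, cannot_links: iterables of (i, j) index pairs.
    """
    A = A.copy()
    for i, j in must_links:      # constrained pair treated as maximally similar
        A[i, j] = A[j, i] = 1.0
    for i, j in cannot_links:    # constrained pair treated as maximally dissimilar
        A[i, j] = A[j, i] = 0.0
    return A
```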

30 However, although elegant, such a naive realization of the concept leads to only limited local geometric impact, and requires many constraints to be placed before a significant effect takes place. [sent-55, score-0.367]

31 Some attempts have been made at improving this method by propagating the constraints themselves to nearby pairs [13, 26, 27]. [sent-56, score-0.386]

32 Other methods propagate the constrained entries in the affinity matrix to nearby entries [14]. [sent-57, score-0.581]

33 Since the affinity matrix is derived directly from the distance matrix, these methods are usually effective with Must-Link constraints, but remain limited when dealing with Cannot-Link constraints [15, 16, 17]. [sent-58, score-0.663]

34 Must-Link constraints attract points together and Cannot-Link constraints push points apart, while springs are attached to all pairs of points. [sent-61, score-0.85]

35 However, a Cannot-link constraint repulses the constrained pair together with their nearby points, and often creates cluttered regions and conflicts. [sent-63, score-0.624]

36 As demonstrated, the main caveat of this approach is that the displacements of the points occur within the original dimensions, leaving the relations among the data points in their original subspace untouched. [sent-65, score-0.286]

37 Overview. We present a constrained clustering approach, where the input points {X_i} (i = 1, ..., K) are M-dimensional vectors. [sent-68, score-0.549]

38 However, in most cases, the feature space is imperfect, so without any constraint, the clustering of the points is likely to be erroneous. [sent-70, score-0.473]

39 The key idea is to convert a given constrained clustering problem to an unconstrained one. [sent-71, score-0.739]

40 More specifically, we assume that we are given a set of points Xi and the constraints are given as Must-links and Cannot-links pairs. [sent-72, score-0.396]

41 The constraints are assumed to be sparse, constituting a semi-supervised clustering problem. [sent-73, score-0.666]

42 Once the constrained problem is converted to an unconstrained one, any clustering method can be used. [sent-74, score-0.739]

43 Some unconstrained clustering techniques, like spectral methods, do not require as input the embedding of the points in space, but only a distance matrix Di,j that encodes the distances between every pair of points in the feature space. [sent-75, score-1.203]
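
A distance matrix can be converted to an affinity matrix with a Gaussian kernel and handed to an off-the-shelf implementation; the sketch below uses scikit-learn, and the median-distance bandwidth heuristic is our assumption, not part of the paper:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_distances(D, n_clusters, sigma=None):
    """Spectral clustering given only a pairwise distance matrix D (sketch)."""
    if sigma is None:
        sigma = np.median(D)              # heuristic bandwidth (assumption)
    A = np.exp(-D**2 / (2.0 * sigma**2))  # Gaussian affinity from distances
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(A)
```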

44 The method that we present takes as input a distance matrix Di,j (extracted from a feature space of M dimensions) and N pairs of constraints and creates an altered distance matrix (representing distances measured in an augmented space of M+N dimensions). [sent-76, score-1.01]

45 The embedding of the points X in the augmented feature space is also available, if needed. [sent-77, score-0.267]

46 First, the Must-link constraints modify D by shortening constrained pairs and running the all-pairs-shortest-path algorithm as described by Kamvar et al. [sent-79, score-0.673]

47 Then, to respect the Cannot-link constraints, Dˆ is augmented by adding additional coordinates to yield D˜. The modified distance matrix Dˆi,j is first embedded in an M-dimensional feature space. [sent-81, score-0.343]

48 Then, to account for the Cannot-link constraints, new coordinates are introduced in additional dimensions, where each coordinate is derived directly from one Cannot-Link constraint. [sent-82, score-0.474]

49 Assuming N pairs of Cannot-link constraints, the augmented feature space is now in M + N dimensions, and the distance matrix D˜ can be defined accordingly. [sent-84, score-0.355]

50 In Section 6 we discuss the elementary requirements of an interactive semi-supervised clustering system, and in Section 7 we demonstrate such a system with a simple active semi-supervised image segmentation application. [sent-88, score-0.527]

51 Converting Constraints to Features. The first step is to modify the input distance matrix D, respecting all the Must-link constraints, to define Dˆ. [sent-91, score-0.507]

52 Initialize the updated matrix Dˆ = D. For every Must-Link constrained pair (i, j), shorten their distance to Dˆi,j = dmin. [sent-93, score-0.389]

53 For every pair (x, y), update Dˆx,y = min(Dˆx,y, Dˆx,i + Dˆi,j + Dˆj,y). The above algorithm shortens the entries in the distance matrix D between constrained pairs to a minimum constant and then updates all other distances in the matrix appropriately so the triangle inequality holds. [sent-95, score-0.684]

54 The implementation requires going through every pair of points for every Must-Link constraint; thereby its time complexity is O(N · K²), where N is the number of Must-Link constraints and K is the number of data points. [sent-99, score-0.3]
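
A compact NumPy sketch of this Must-Link step, relaxing all pairs through each shortened edge so that each constraint costs O(K²); the interface and the dmin default are illustrative:

```python
import numpy as np

def apply_must_links(D, must_links, d_min=0.0):
    """Shorten Must-Link distances, then restore the triangle inequality
    by relaxing every pair (x, y) through the shortened edge (i, j)."""
    D_hat = D.copy()
    for i, j in must_links:
        D_hat[i, j] = D_hat[j, i] = d_min
        # Paths x -> i -> j -> y and x -> j -> i -> y, vectorized over (x, y).
        via_ij = D_hat[:, i][:, None] + d_min + D_hat[j, :][None, :]
        via_ji = D_hat[:, j][:, None] + d_min + D_hat[i, :][None, :]
        D_hat = np.minimum(D_hat, np.minimum(via_ij, via_ji))
    return D_hat
```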

55 Once Dˆ is available, we need to account for the N Cannot-link constraints to define D˜: D˜i,j^p = Dˆi,j^p + Σ_{c=1}^{N} (α · D(c)i,j)^p. (1) [sent-100, score-0.296]

56 where D(c) are constrained distance matrices, N is the number of Cannot-Link constraints and α is a factor determining the degree of influence the Cannot-Link constraints have. [sent-107, score-0.971]

57 α can also be a function of N to account for cases where the number of constraints is relatively large with respect to M. [sent-108, score-0.357]

58 In our implementation we used the Euclidean norm (p = 2) and set α = dmax, where dmax is the maximum distance in Dˆ, so that constrained distances are normalized to the scale of distances in Dˆ. In the following we show how to define D(c) for a given Cannot-link constraint. [sent-110, score-0.621]
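
A sketch of Eq. (1) as reconstructed above; the equation was garbled in extraction, so the precise weighting across multiple constraints is partly inferred (the text only notes that α may be made a function of N):

```python
import numpy as np

def augment_distances(D_hat, constraint_mats, alpha=None, p=2):
    """Combine D_hat with the Cannot-Link matrices D^(c) via a p-norm (sketch).

    constraint_mats: list of N matrices D^(c), one per Cannot-Link constraint.
    alpha defaults to d_max, the maximum entry of D_hat, so that constraint
    distances are normalized to the scale of the original distances.
    """
    if alpha is None:
        alpha = D_hat.max()
    total = D_hat ** p
    for D_c in constraint_mats:  # each constraint contributes one new coordinate's gap
        total = total + (alpha * D_c) ** p
    return total ** (1.0 / p)
```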

59 In each row: (a) original feature space with ground-truth clustering and a Cannot-Link constraint in black. [sent-114, score-0.584]

60 (c) our method augments a dimension in which the desired points are distanced without damaging local structure or creating a clutter of points. [sent-116, score-0.28]

61 Converting a Cannot-link constraint to a feature. For each Cannot-Link constraint we derive a new feature dimension, encoded in D(c). [sent-118, score-0.379]

62 All other data points should be assigned a scalar that expresses their relation with the constrained pair, or their likelihood to be clustered with one of the constrained pair. [sent-120, score-0.446]

63 We have chosen to implement a distance derivation for the new dimension based on the Diffusion Maps, since the distances between points in the diffusion space better reflect their likelihood to be clustered together [20]. [sent-121, score-0.622]

64 Notice, however, that we are not using the Diffusion Map to make clustering decisions directly. [sent-122, score-0.37]
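
For completeness, one standard diffusion-map construction [20] that could supply the distances ϕ(x, y); the diffusion time t and the number of retained eigenvectors are free parameters, and the input affinity matrix A is assumed given:

```python
import numpy as np

def diffusion_map(A, n_components=10, t=1):
    """Standard diffusion-map embedding from a symmetric affinity matrix A.

    Diffusion distances phi(x, y) are (approximately, after truncation)
    Euclidean distances between rows of the returned embedding.
    """
    d = A.sum(axis=1)
    # Symmetric normalization shares its spectrum with P = D^{-1} A.
    S = A / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]          # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]        # right eigenvectors of P
    # Drop the trivial constant eigenvector, scale by eigenvalue^t.
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]
```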

65 All other points are given values according to the following equation: vi = (ϕ(i, c2) − ϕ(i, c1)) / (ϕ(i, c2) + ϕ(i, c1)), (2) where ϕ(x, y) is the distance between points x and y in the Diffusion Map, and vi is the value given to point i. [sent-125, score-0.32]

66 Points that are closer to c1 in the Diffusion Map get values closer to 1, points closer to c2 get values closer to −1, and points that are equally close to both get values closer to 0. [sent-126, score-0.368]

67 We denote the constrained distance matrix between points in the new dimension derived from constraint c by: D(c)i,j = |vi − vj|. (3) The distance matrix of the original dataset can be interpreted as an undirected graph that encodes "walking distances" between pairs of points. [sent-128, score-1.039]
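
Putting Eqs. (2) and (3) together, and assuming ϕ is the Euclidean distance between rows of a diffusion-map embedding psi as sketched above (the small eps guarding the denominator is our addition):

```python
import numpy as np

def cannot_link_feature(psi, c1, c2, eps=1e-12):
    """New coordinate per point from one Cannot-Link constraint (c1, c2)."""
    phi_c1 = np.linalg.norm(psi - psi[c1], axis=1)   # phi(i, c1)
    phi_c2 = np.linalg.norm(psi - psi[c2], axis=1)   # phi(i, c2)
    v = (phi_c2 - phi_c1) / (phi_c2 + phi_c1 + eps)  # Eq. (2)
    v[c1], v[c2] = 1.0, -1.0                         # extreme endpoint values
    D_c = np.abs(v[:, None] - v[None, :])            # Eq. (3)
    return v, D_c
```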

68 The two elements marked black are the constrained endpoints. [sent-140, score-0.301]

69 An interactive constrained clustering method should follow these guidelines in order to be effective: calculations should be relatively efficient in time complexity, since the system is interactive. [sent-146, score-0.69]

70 Early constraints should have relatively significant effects; later constraints should have relatively finer effects. [sent-148, score-0.326]

71 Adding any single Must-Link constraint is an O(K²) operation, since it only includes updating the all-pairs-shortest-path matrix D(SP); adding a single Cannot-Link constraint can also be achieved in O(K²), assuming the Diffusion Map is calculated in a preprocessing step. [sent-151, score-0.331]

72 Of course, the unconstrained clustering algorithm itself must also be time-efficient. [sent-153, score-0.478]

73 The first Cannot-Link constraints added by the user have an immediate and significant effect on the resulting distance matrix D˜. This effect is a result of α (in Eq. 1) having a constant value, which is determined by the maximal distance in the original distance matrix. [sent-155, score-0.724]

74 Thus, the first constrained distance matrix D(c) is given a weight which is equivalent to the weight of the original distance matrix. [sent-156, score-0.55]

75 Additional constraints all share the same total weight α, so the incremental addition of more constraints causes each constraint to become gradually less significant. [sent-157, score-0.703]

76 Notice that the order in which constraints are added does not affect the final result. [sent-158, score-0.296]

77 Constrained Image Segmentation. We have implemented a simple constrained image segmentation application to demonstrate the applicability of our method to interactive semi-supervised clustering problems. [sent-162, score-0.748]

78 An initial unconstrained segmentation is constructed using standard Spectral Clustering, taking the distances between the feature vectors in L2-norm. [sent-168, score-0.323]

79 The user can then either insert a manual constraint by clicking on the desired pixels (for either Must-Link or Cannot-Link constraints), or she can request the application to insert a random constraint respecting the ground-truth labelling. [sent-169, score-0.501]

80 The application constructs the constrained segmentation using 4 different methods: Constraints as Features, Spectral Learning (Kamvar 2003) [12], Affinity Propagation [14] and Constrained Spectral Clustering (CSC) [15]. [sent-170, score-0.349]

81 Only a 2-way clustering algorithm was provided for the latter method; thus only 2-segment examples (foreground-background) were tested. Figure 4. [sent-171, score-0.338]

82 Top (left-to-right): Original image with a total of 3 constraints (two Must-Links in blue and one Cannot-Link in black), oversegmentation to super-pixels, ground truth segmentation, 3D view of the Feature Space with constraints. [sent-173, score-0.296]

83 Left-to-right: Original image with a total of 3 constraints (one Must-Link in blue and two Cannot-Links in black), oversegmentation to superpixels, our result, other compared methods, and the unconstrained result. [sent-178, score-0.436]

84 Left-to-right: Original image with a total of 4 constraints (three Must-Links in blue and one Cannot-Link in black), our result, other compared methods, and the unconstrained result. [sent-182, score-0.436]

85 In Figure 7 we show the results for random constraints drawn according to the ground truth segmentation, for which we have also compared to Information Theory Metric Learning (ITML) [25]. [sent-188, score-0.296]

86 Our method reaches a good result with very few constraints, and is able to quickly improve when more constraints are added. [sent-189, score-0.296]

87 We have also compared our constrained algorithm to the other methods, with random constraints over the 117 images in the Berkeley Segmentation Dataset that have four or fewer ground-truth segments. [sent-190, score-0.557]

88 We take average, minimum and maximum results over 50 different random sets of constraints for each number of constraints. [sent-203, score-0.296]
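
A hypothetical harness for this protocol, using scikit-learn's Adjusted Rand Index; the cluster_fn callback and the way pairs are sampled from the ground truth are assumptions, not the authors' code:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def evaluate_random_constraints(cluster_fn, labels_true, n_constraints,
                                n_trials=50, seed=0):
    """Mean/min/max ARI over random constraint sets drawn from ground truth."""
    rng = np.random.default_rng(seed)
    K = len(labels_true)
    scores = []
    for _ in range(n_trials):
        pairs = rng.choice(K, size=(n_constraints, 2))
        constraints = [(i, j, labels_true[i] == labels_true[j])
                       for i, j in pairs if i != j]   # True = Must-Link
        labels_pred = cluster_fn(constraints)
        scores.append(adjusted_rand_score(labels_true, labels_pred))
    return np.mean(scores), np.min(scores), np.max(scores)
```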

89 The unconstrained results of each method are different, because each has a different implementation of the diffusion map or of K-means. [sent-206, score-0.422]

90 The Constrained Spectral Clustering (CSC) and Spectral Learning (Kamvar 2003) methods require relatively many constraints to be placed before they make a significant impact, and they usually exhibit a monotonically increasing function of clustering accuracy against the number of constraints. [sent-207, score-0.695]

91 Performance of four methods on two images, evaluated using the Adjusted Rand Index for 1 to 100 randomly chosen constraints (horizontal). [sent-210, score-0.34]

92 Our method does not guarantee such monotony: the addition of more constraints might sometimes cause less accurate clustering. [sent-217, score-0.296]

93 Conclusions and Discussion. We have presented a technique for constrained clustering in which new features derived from pair-wise constraints augment the original data feature space. [sent-220, score-1.134]

94 Our algorithm is decoupled from the actual clustering as it merely redefines or warps the distances among the points in the feature space. [sent-221, score-0.714]

95 Just as a constrained optimization problem min f(x) subject to c(x) = 0 is converted into the unconstrained problem min f(x) + μ·c(x), in our setting the hard Cannot-link constraints are incorporated into an unconstrained system where those constraints are not guaranteed to be satisfied. [sent-225, score-0.872]

96 While this is the key of our method, it also incurs two intricacies: first, if the number of constraints is high relative to the original space dimension, the complexity of the computation increases significantly. [sent-227, score-0.369]

97 All datasets were reduced to 2-way clustering problems by selecting the hardest two classes to discern. [sent-230, score-0.338]

98 The figure shows clustering results evaluated using the Adjusted Rand Index on the vertical scale, number of constraints on the horizontal scale. [sent-231, score-0.678]

99 The figure shows clustering results evaluated using the Adjusted Rand Index on the vertical scale, number of constraints on the horizontal scale. [sent-237, score-0.678]

100 From instance-level constraints to space-level constraints: Making the most of prior knowledge in data clustering, The Nineteenth International Conference on Machine Learning, 2002. [sent-249, score-0.296]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('clustering', 0.338), ('constraints', 0.296), ('kamvar', 0.262), ('constrained', 0.261), ('spectral', 0.246), ('diffusion', 0.217), ('davidson', 0.164), ('wagstaff', 0.164), ('affinity', 0.148), ('unconstrained', 0.14), ('uci', 0.14), ('cannotlink', 0.131), ('constraint', 0.111), ('augmented', 0.102), ('points', 0.1), ('cardie', 0.098), ('distances', 0.098), ('itml', 0.097), ('augments', 0.093), ('propagation', 0.089), ('dimension', 0.087), ('distance', 0.086), ('rand', 0.083), ('ravi', 0.081), ('adjusted', 0.076), ('matrix', 0.074), ('altered', 0.07), ('asafi', 0.066), ('iris', 0.066), ('redefines', 0.066), ('timecomplexity', 0.066), ('dimensions', 0.065), ('interactive', 0.061), ('derived', 0.059), ('shortening', 0.058), ('aviv', 0.058), ('seventeenth', 0.058), ('recalculate', 0.058), ('pairs', 0.058), ('groundtruth', 0.057), ('klein', 0.056), ('metric', 0.056), ('conference', 0.056), ('pair', 0.054), ('csc', 0.054), ('csp', 0.054), ('nadler', 0.054), ('user', 0.053), ('expresses', 0.051), ('coordinates', 0.051), ('lafon', 0.051), ('respecting', 0.051), ('segmentation', 0.05), ('dmin', 0.046), ('active', 0.046), ('thereby', 0.045), ('repository', 0.045), ('tel', 0.045), ('evaluated', 0.044), ('wine', 0.043), ('dmax', 0.043), ('warps', 0.043), ('original', 0.043), ('closer', 0.042), ('insert', 0.04), ('realization', 0.04), ('black', 0.04), ('agglomerative', 0.039), ('converting', 0.039), ('calculation', 0.039), ('walking', 0.039), ('specified', 0.039), ('international', 0.038), ('application', 0.038), ('coordinate', 0.038), ('learning', 0.038), ('path', 0.037), ('updating', 0.035), ('feature', 0.035), ('implementation', 0.035), ('vi', 0.034), ('euclidean', 0.034), ('actual', 0.034), ('index', 0.034), ('clustered', 0.034), ('entries', 0.033), ('embed', 0.033), ('machine', 0.033), ('nearby', 0.032), ('decisions', 0.032), ('influence', 0.032), ('semisupervised', 0.032), ('creates', 0.031), ('walk', 0.031), ('placed', 0.031), ('cases', 0.031), ('embedding', 0.03), ('additional', 0.03), ('relatively', 0.03), ('map', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000012 93 cvpr-2013-Constraints as Features

Author: Shmuel Asafi, Daniel Cohen-Or

Abstract: In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. Our method augments the original feature space with additional dimensions, each of which is derived from a given Cannot-link constraint. The specified Cannot-link pair gets extreme coordinate values, and the rest of the points get coordinate values that express the spatial influence of the specified constrained pair. After augmenting all the new features, a standard unconstrained clustering algorithm can be performed, like k-means or spectral clustering. We demonstrate the efficacy of our method for active semi-supervised learning applied to image segmentation and compare it to alternative methods. We also evaluate the performance of our method on the four most commonly evaluated datasets from the UCI machine learning repository.

2 0.22268742 126 cvpr-2013-Diffusion Processes for Retrieval Revisited

Author: Michael Donoser, Horst Bischof

Abstract: In this paper we revisit diffusion processes on affinity graphs for capturing the intrinsic manifold structure defined by pairwise affinity matrices. Such diffusion processes have already proved the ability to significantly improve subsequent applications like retrieval. We give a thorough overview of the state-of-the-art in this field and discuss obvious similarities and differences. Based on our observations, we are then able to derive a generic framework for diffusion processes in the scope of retrieval applications, where the related work represents specific instances of our generic formulation. We evaluate our framework on several retrieval tasks and are able to derive algorithms that e. g. achieve a 100% bullseye score on the popular MPEG7 shape retrieval data set.

3 0.18688735 92 cvpr-2013-Constrained Clustering and Its Application to Face Clustering in Videos

Author: Baoyuan Wu, Yifan Zhang, Bao-Gang Hu, Qiang Ji

Abstract: In this paper, we focus on face clustering in videos. Given the detected faces from real-world videos, we partition all faces into K disjoint clusters. Different from clustering on a collection of facial images, the faces from videos are organized as face tracks and the frame index of each face is also provided. As a result, many pairwise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks. These constraints can be effectively incorporated into a generative clustering model based on the Hidden Markov Random Fields (HMRFs). Within the HMRF model, the pairwise constraints are augmented by label-level and constraint-level local smoothness to guide the clustering process. The parameters for both the unary and the pairwise potential functions are learned by the simulated field algorithm, and the weights of constraints can be easily adjusted. We further introduce an efficient clustering framework specially for face clustering in videos, considering that faces in adjacent frames of the same face track are very similar. The framework is applicable to other clustering algorithms to significantly reduce the computational cost. Experiments on two face data sets from real-world videos demonstrate the significantly improved performance of our algorithm over state-of-theart algorithms.

4 0.18568176 379 cvpr-2013-Scalable Sparse Subspace Clustering

Author: Xi Peng, Lei Zhang, Zhang Yi

Abstract: In this paper, we address two problems in Sparse Subspace Clustering algorithm (SSC), i.e., scalability issue and out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering by using ?1-minimization based coefficients, has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportion to the cubic of problem size such that it is inefficient to apply SSC into large scale setting. Moreover, SSC does not handle with out-ofsample data that are not used to construct the similarity graph. For each new datum, SSC needs recalculating the cluster membership of the whole data set, which makes SSC is not competitive in fast online clustering. To address the problems, this paper proposes out-of-sample extension of SSC, named as Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible to cluster large scale data sets. The solution of SSSC adopts a ”sampling, clustering, coding, and classifying” strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method comparing with the state-of-the-art algorithms.

5 0.14994742 135 cvpr-2013-Discriminative Subspace Clustering

Author: Vasileios Zografos, Liam Ellis, Rudolf Mester

Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.

6 0.11535572 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds

7 0.10940615 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior

8 0.10929426 234 cvpr-2013-Joint Spectral Correspondence for Disparate Image Matching

9 0.10670295 460 cvpr-2013-Weakly-Supervised Dual Clustering for Image Semantic Segmentation

10 0.095166318 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds

11 0.08678405 134 cvpr-2013-Discriminative Sub-categorization

12 0.083749317 300 cvpr-2013-Multi-target Tracking by Lagrangian Relaxation to Min-cost Network Flow

13 0.08211714 233 cvpr-2013-Joint Sparsity-Based Representation and Analysis of Unconstrained Activities

14 0.079315722 437 cvpr-2013-Towards Fast and Accurate Segmentation

15 0.077696979 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections

16 0.077494822 33 cvpr-2013-Active Contours with Group Similarity

17 0.076430842 9 cvpr-2013-A Fast Semidefinite Approach to Solving Binary Quadratic Problems

18 0.073617548 366 cvpr-2013-Robust Region Grouping via Internal Patch Statistics

19 0.070007533 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

20 0.069962956 387 cvpr-2013-Semi-supervised Domain Adaptation with Instance Constraints


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.187), (1, 0.015), (2, -0.018), (3, 0.034), (4, 0.065), (5, -0.02), (6, -0.027), (7, -0.105), (8, -0.066), (9, -0.06), (10, 0.069), (11, -0.003), (12, -0.079), (13, -0.048), (14, -0.056), (15, -0.01), (16, -0.043), (17, -0.033), (18, -0.06), (19, 0.012), (20, 0.062), (21, 0.057), (22, -0.044), (23, 0.01), (24, 0.057), (25, -0.032), (26, 0.056), (27, -0.071), (28, 0.008), (29, -0.009), (30, 0.09), (31, 0.087), (32, 0.066), (33, -0.062), (34, 0.036), (35, 0.139), (36, -0.01), (37, 0.009), (38, 0.021), (39, -0.084), (40, -0.052), (41, -0.025), (42, -0.016), (43, 0.134), (44, 0.004), (45, -0.107), (46, -0.009), (47, 0.144), (48, 0.129), (49, -0.039)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97378445 93 cvpr-2013-Constraints as Features

Author: Shmuel Asafi, Daniel Cohen-Or

Abstract: In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. Our method augments the original feature space with additional dimensions, each of which is derived from a given Cannot-link constraint. The specified Cannot-link pair gets extreme coordinate values, and the rest of the points get coordinate values that express the spatial influence of the specified constrained pair. After augmenting all the new features, a standard unconstrained clustering algorithm can be performed, like k-means or spectral clustering. We demonstrate the efficacy of our method for active semi-supervised learning applied to image segmentation and compare it to alternative methods. We also evaluate the performance of our method on the four most commonly evaluated datasets from the UCI machine learning repository.

2 0.80263114 379 cvpr-2013-Scalable Sparse Subspace Clustering

Author: Xi Peng, Lei Zhang, Zhang Yi

Abstract: In this paper, we address two problems in Sparse Subspace Clustering algorithm (SSC), i.e., scalability issue and out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering by using ?1-minimization based coefficients, has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportion to the cubic of problem size such that it is inefficient to apply SSC into large scale setting. Moreover, SSC does not handle with out-ofsample data that are not used to construct the similarity graph. For each new datum, SSC needs recalculating the cluster membership of the whole data set, which makes SSC is not competitive in fast online clustering. To address the problems, this paper proposes out-of-sample extension of SSC, named as Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible to cluster large scale data sets. The solution of SSSC adopts a ”sampling, clustering, coding, and classifying” strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method comparing with the state-of-the-art algorithms.

3 0.78282946 191 cvpr-2013-Graph-Laplacian PCA: Closed-Form Solution and Robustness

Author: Bo Jiang, Chris Ding, Bio Luo, Jin Tang

Abstract: Principal Component Analysis (PCA) is a widely used to learn a low-dimensional representation. In many applications, both vector data X and graph data W are available. Laplacian embedding is widely used for embedding graph data. Wepropose a graph-Laplacian PCA (gLPCA) to learn a low dimensional representation of X that incorporates graph structures encoded in W. This model has several advantages: (1) It is a data representation model. (2) It has a compact closed-form solution and can be efficiently computed. (3) It is capable to remove corruptions. Extensive experiments on 8 datasets show promising results on image reconstruction and significant improvement on clustering and classification.

4 0.77542442 126 cvpr-2013-Diffusion Processes for Retrieval Revisited

Author: Michael Donoser, Horst Bischof

Abstract: In this paper we revisit diffusion processes on affinity graphs for capturing the intrinsic manifold structure defined by pairwise affinity matrices. Such diffusion processes have already proved the ability to significantly improve subsequent applications like retrieval. We give a thorough overview of the state-of-the-art in this field and discuss obvious similarities and differences. Based on our observations, we are then able to derive a generic framework for diffusion processes in the scope of retrieval applications, where the related work represents specific instances of our generic formulation. We evaluate our framework on several retrieval tasks and are able to derive algorithms that e. g. achieve a 100% bullseye score on the popular MPEG7 shape retrieval data set.

5 0.72016275 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds

Author: Vittal Premachandran, Ramakrishna Kakarala

Abstract: Propagating similarity information along the data manifold requires careful selection of local neighborhood. Selecting a “good” neighborhood in an unsupervised setting, given an affinity graph, has been a difficult task. The most common way to select a local neighborhood has been to use the k-nearest neighborhood (k-NN) selection criterion. However, it has the tendency to include noisy edges. In this paper, we propose a way to select a robust neighborhood using the consensus of multiple rounds of k-NNs. We explain how using consensus information can give better control over neighborhood selection. We also explain in detail the problems with another recently proposed neighborhood selection criteria, i.e., Dominant Neighbors, and show that our method is immune to those problems. Finally, we show the results from experiments in which we compare our method to other neighborhood selection approaches. The results corroborate our claims that consensus ofk-NNs does indeed help in selecting more robust and stable localities.

6 0.69822919 9 cvpr-2013-A Fast Semidefinite Approach to Solving Binary Quadratic Problems

7 0.66813803 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds

8 0.65925729 135 cvpr-2013-Discriminative Subspace Clustering

9 0.62636822 184 cvpr-2013-Gauging Association Patterns of Chromosome Territories via Chromatic Median

10 0.62049538 429 cvpr-2013-The Generalized Laplacian Distance and Its Applications for Visual Matching

11 0.61965698 106 cvpr-2013-Deformable Graph Matching

12 0.61276823 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections

13 0.60344261 193 cvpr-2013-Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation

14 0.56438392 134 cvpr-2013-Discriminative Sub-categorization

15 0.55285329 92 cvpr-2013-Constrained Clustering and Its Application to Face Clustering in Videos

16 0.54523963 234 cvpr-2013-Joint Spectral Correspondence for Disparate Image Matching

17 0.53663045 460 cvpr-2013-Weakly-Supervised Dual Clustering for Image Semantic Segmentation

18 0.52607036 437 cvpr-2013-Towards Fast and Accurate Segmentation

19 0.50354415 192 cvpr-2013-Graph Matching with Anchor Nodes: A Learning Approach

20 0.48630157 366 cvpr-2013-Robust Region Grouping via Internal Patch Statistics


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.072), (26, 0.02), (33, 0.755), (67, 0.03), (69, 0.012), (87, 0.048)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99990511 93 cvpr-2013-Constraints as Features

Author: Shmuel Asafi, Daniel Cohen-Or

Abstract: In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. Our method augments the original feature space with additional dimensions, each of which is derived from a given Cannot-link constraint. The specified Cannot-link pair gets extreme coordinate values, and the rest of the points get coordinate values that express the spatial influence of the specified constrained pair. After augmenting all the new features, a standard unconstrained clustering algorithm can be performed, like k-means or spectral clustering. We demonstrate the efficacy of our method for active semi-supervised learning applied to image segmentation and compare it to alternative methods. We also evaluate the performance of our method on the four most commonly evaluated datasets from the UCI machine learning repository.

2 0.99988616 180 cvpr-2013-Fully-Connected CRFs with Non-Parametric Pairwise Potential

Author: Neill D.F. Campbell, Kartic Subr, Jan Kautz

Abstract: Conditional Random Fields (CRFs) are used for diverse tasks, ranging from image denoising to object recognition. For images, they are commonly defined as a graph with nodes corresponding to individual pixels and pairwise links that connect nodes to their immediate neighbors. Recent work has shown that fully-connected CRFs, where each node is connected to every other node, can be solved efficiently under the restriction that the pairwise term is a Gaussian kernel over a Euclidean feature space. In this paper, we generalize the pairwise terms to a non-linear dissimilarity measure that is not required to be a distance metric. To this end, we propose a density estimation technique to derive conditional pairwise potentials in a nonparametric manner. We then use an efficient embedding technique to estimate an approximate Euclidean feature space for these potentials, in which the pairwise term can still be expressed as a Gaussian kernel. We demonstrate that the use of non-parametric models for the pairwise interactions, conditioned on the input data, greatly increases expressive power whilst maintaining efficient inference.

3 0.99975866 260 cvpr-2013-Learning and Calibrating Per-Location Classifiers for Visual Place Recognition

Author: Petr Gronát, Guillaume Obozinski, Josef Sivic, Tomáš Pajdla

Abstract: The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database in a similar manner to per-exemplar SVMs in object recognition. Second, as onlyfewpositive training examples are availablefor each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The calibration we propose relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate improved place recognition accuracy of the proposed approach over the previous work. 2Center for Machine Perception, Faculty of Electrical Engineering 3WILLOW project, Laboratoire d’Informatique de l’E´cole Normale Sup e´rieure, ENS/INRIA/CNRS UMR 8548. 4Universit Paris-Est, LIGM (UMR CNRS 8049), Center for Visual Computing, Ecole des Ponts - ParisTech, 77455 Marne-la-Valle, France

4 0.9995169 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification

Author: Amirreza Shaban, Hamid R. Rabiee, Mehrdad Farajtabar, Marjan Ghazvininejad

Abstract: Bag of words models for feature extraction have demonstrated top-notch performance in image classification. These representations are usually accompanied by a coding method. Recently, methods that code a descriptor giving regard to its nearby bases have proved efficacious. These methods take into account the nonlinear structure of descriptors, since local similarities are a good approximation of global similarities. However, they confine their usage of the global similarities to nearby bases. In this paper, we propose a coding scheme that brings into focus the manifold structure of descriptors, and devise a method to compute the global similarities of descriptors to the bases. Given a local similarity measure between bases, a global measure is computed. Exploiting the local similarity of a descriptor and its nearby bases, a global measure of association of a descriptor to all the bases is computed. Unlike the locality-based and sparse coding methods, the proposed coding varies smoothly with respect to the underlying manifold. Experiments on benchmark image classification datasets substantiate the superiority oftheproposed method over its locality and sparsity based rivals.

5 0.99949038 357 cvpr-2013-Revisiting Depth Layers from Occlusions

Author: Adarsh Kowdle, Andrew Gallagher, Tsuhan Chen

Abstract: In this work, we consider images of a scene with a moving object captured by a static camera. As the object (human or otherwise) moves about the scene, it reveals pairwise depth-ordering or occlusion cues. The goal of this work is to use these sparse occlusion cues along with monocular depth occlusion cues to densely segment the scene into depth layers. We cast the problem of depth-layer segmentation as a discrete labeling problem on a spatiotemporal Markov Random Field (MRF) that uses the motion occlusion cues along with monocular cues and a smooth motion prior for the moving object. We quantitatively show that depth ordering produced by the proposed combination of the depth cues from object motion and monocular occlusion cues are superior to using either feature independently, and using a na¨ ıve combination of the features.

6 0.99948692 55 cvpr-2013-Background Modeling Based on Bidirectional Analysis

7 0.99935734 137 cvpr-2013-Dynamic Scene Classification: Learning Motion Descriptors with Slow Features Analysis

8 0.99907798 346 cvpr-2013-Real-Time No-Reference Image Quality Assessment Based on Filter Learning

9 0.99907261 113 cvpr-2013-Dense Variational Reconstruction of Non-rigid Surfaces from Monocular Video

10 0.99895221 252 cvpr-2013-Learning Locally-Adaptive Decision Functions for Person Verification

11 0.99885821 165 cvpr-2013-Fast Energy Minimization Using Learned State Filters

12 0.99822265 59 cvpr-2013-Better Exploiting Motion for Better Action Recognition

13 0.99641484 301 cvpr-2013-Multi-target Tracking by Rank-1 Tensor Approximation

14 0.99521708 48 cvpr-2013-Attribute-Based Detection of Unfamiliar Classes with Humans in the Loop

15 0.99058747 189 cvpr-2013-Graph-Based Discriminative Learning for Location Recognition

16 0.98944068 379 cvpr-2013-Scalable Sparse Subspace Clustering

17 0.98922348 266 cvpr-2013-Learning without Human Scores for Blind Image Quality Assessment

18 0.98845333 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior

19 0.98839492 343 cvpr-2013-Query Adaptive Similarity for Large Scale Object Retrieval

20 0.98715532 148 cvpr-2013-Ensemble Video Object Cut in Highly Dynamic Scenes