nips nips2006 nips2006-174 knowledge-graph by maker-knowledge-mining

174 nips-2006-Similarity by Composition


Source: pdf

Author: Oren Boiman, Michal Irani

Abstract: We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is “similar” to a signal S2 if it is “easy” to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2. “Similarity by Composition” can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.). We show a few such examples.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 of Computer Science and Applied Math, The Weizmann Institute of Science, 76100 Rehovot, Israel. Abstract: We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. [sent-2, score-0.197]

2 We say that a signal S1 is “similar” to a signal S2 if it is “easy” to compose S1 from few large contiguous chunks of S2 . [sent-3, score-0.314]

3 This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. [sent-6, score-0.274]

4 These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2 . [sent-7, score-0.408]

5 1 Introduction: A good measure for similarity between signals is necessary in many machine learning problems. [sent-12, score-0.185]

6 In this paper we present a new notion of similarity between signals, and demonstrate its applicability to several machine learning problems and to several signal types. [sent-30, score-0.212]

7 We would like to employ this idea to indicate high similarity of Image-B to Image-A, and lower similarity of Image-C to Image-A. [sent-34, score-0.217]

8 In other words, regions in one signal (the “query” signal) which can be composed using large contiguous chunks of data from the other signal (the “reference” signal) are considered to have high local similarity. [sent-35, score-0.465]

9 On the other hand, regions in the query signal which can be composed only by using small fragmented pieces are considered locally dissimilar. [sent-36, score-0.395]

10 This induces a similarity score at every point in the signal based on the size of its largest surrounding region which can be found in the other signal (allowing for some distortions). [sent-37, score-0.496]

11 The large shared regions between B and A (indicated by colors) provide strong evidence of their similarity. [sent-42, score-0.287]

12 Note that the shared regions between similar signals are typically irregularly shaped, and therefore cannot be restricted to predefined regularly shaped partitioning of the signal. [sent-46, score-0.231]

13 Other attempts to maintain the benefits of local similarity while maintaining global structural information have recently been proposed [8]. [sent-53, score-0.226]

14 In our previous work [5] we presented an approach for detecting irregularities in images/video as regions that cannot be composed from large pieces of data from other images/video. [sent-55, score-0.315]

15 In this paper we extend this approach to a general principled theory of “Similarity by Composition”, from which we derive local and global similarity and dissimilarity measures between signals. [sent-57, score-0.295]

16 Using this model we derive information-theoretic measures for local and global similarities induced by shared regions. [sent-62, score-0.216]

17 The local similarities of shared regions (“local evidence scores”) are accumulated into a global similarity score (“global evidence score”) of the entire query signal relative to the reference signal. [sent-63, score-1.064]

18 We further prove upper and lower bounds on the global evidence score, which are computationally tractable. [sent-64, score-0.231]

19 It can also be applied to compute the similarity of a signal to a group of signals. [sent-67, score-0.281]

20 That is, one can compose a query signal from pieces extracted from multiple reference signals. [sent-69, score-0.361]

21 The importance of large shared regions between signals has been recognized by biologists for determining similarities between DNA sequences, amino acid chains, etc. [sent-74, score-0.234]

22 In principle, results of such tools can be fed into our theoretical framework, to obtain similarity scores between biological data sequences in a principled, information-theoretic way. [sent-78, score-0.193]

23 In Sec. 2 we derive information-theoretic measures for local and global “evidence” (similarity) induced by shared regions. [sent-80, score-0.195]

24 Sec. 4 demonstrates the applicability of the derived local and global similarity measures to various machine learning tasks and several signal types. [sent-84, score-0.27]

25 2 Similarity by Composition – Theoretical Framework: We derive principled information-theoretic measures for local and global similarity between a “query” Q (one or more signals) and a “reference” ref (one or more signals). [sent-85, score-0.637]

26 Large shared regions between Q and ref provide strong statistical evidence of their similarity. [sent-86, score-0.651]

27 We then show how these pieces of local evidence can be integrated to provide “global evidence” for the entire query Q. [sent-91, score-0.383]

28 To do so, we will compare the likelihood that R was generated by ref , versus the likelihood that it was generated by some random process. [sent-98, score-0.446]

29 More formally, we denote by Href the hypothesis that R was “generated” by ref , and by H0 the hypothesis that R was generated by a random process, or by any other application-dependent PDF (referred to as the “null hypothesis”). [sent-99, score-0.442]

30 Href assumes the following model for the “generation” of R: a region was taken from somewhere in ref, was globally transformed by some global transformation T, possibly followed by some small local distortions, and then put into Q to generate R. [sent-100, score-0.586]

31 In the simplest case (only shifts), T is the corresponding location in ref . [sent-102, score-0.364]

32 If there are multiple corresponding regions in ref (i.e., multiple candidate transformations T), the likelihood can marginalize over all of them, as sketched below. [sent-106, score-0.467]
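As a rough sketch of this generative model (the helper names `log_p_T` and `log_p_distortion` are our own placeholders, not the paper's code), the region likelihood under Href marginalizes over candidate transformations in the log domain:

```python
import math

def log_p_region_given_ref(candidates, log_p_T, log_p_distortion):
    """log P(R | Href): marginalize over candidate transformations T
    (e.g., locations in ref), each weighted by its prior log_p_T(t)
    and by how well R matches ref under t, log_p_distortion(t).
    Log-sum-exp keeps the computation numerically stable."""
    terms = [log_p_T(t) + log_p_distortion(t) for t in candidates]
    m = max(terms)
    return m + math.log(sum(math.exp(x - m) for x in terms))

# Toy usage: two candidate shifts, a uniform prior over |ref| = 1000
# shifts, and hypothetical per-shift match qualities.
uniform = lambda t: -math.log(1000.0)
fit = {3: -2.0, 7: -5.0}
print(log_p_region_given_ref([3, 7], uniform, fit.get))
```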

33 LES is referred to as a “local evidence score”, because the higher LES is, the smaller the probability that R was generated at random (H0). [sent-110, score-0.18]

34 That is, the probability of getting a score LES(R) > l for a randomly generated region R is smaller than 2^−l (this is due to LES being a log-likelihood ratio [3]). [sent-113, score-0.208]

35 High LES therefore provides higher statistical evidence that R was generated from ref . [sent-114, score-0.544]

36 Note that the larger the region R ⊆ Q is, the higher its evidence score LES(R|Href) (and therefore it also provides stronger statistical evidence for the hypothesis that Q was composed from ref). [sent-115, score-0.932]

37 For example, assume for simplicity that R has a single identical copy in ref, and that T is restricted to shifts with uniform probability. [sent-116, score-0.419]

38 Therefore, the likelihood ratio of R increases, and so does its evidence score LES. [sent-120, score-0.266]
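To make this concrete, here is a toy calculation under our own simplifying assumptions (one identical copy of R in ref, uniform shifts, and a null model that assigns probability p0 per point); the evidence grows roughly linearly with the region size:

```python
import math

def les_identical_copy(region_size, ref_size, p0_per_point=0.5):
    """LES in bits for the shift example: log2 P(R|Href) - log2 P(R|H0)
    = -log2(ref_size) - region_size * log2(p0_per_point)."""
    return -math.log2(ref_size) - region_size * math.log2(p0_per_point)

for n in (4, 16, 64):
    print(n, les_identical_copy(n, ref_size=10_000))
# 4 -> ~ -9.3 bits, 16 -> ~ 2.7, 64 -> ~ 50.7:
# larger regions carry more evidence
```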

39 LES can also be interpreted as the number of bits saved by describing the region R using ref, instead of describing it using H0: recall that the optimal average code length of a random variable y with probability function P(y) is length(y) = −log P(y). [sent-121, score-0.486]

40 Therefore we can write the evidence score as LES(R|Href) = length(R|H0) − length(R|Href). [sent-122, score-0.247]
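In code, this description-length view of LES is just a difference of code lengths; a minimal sketch (the function name is ours):

```python
def les_bits(code_len_null, code_len_ref):
    """LES(R|Href) = length(R|H0) - length(R|Href): the number of bits
    saved by describing R via ref instead of the null model. In practice
    both lengths come from -log2 of the corresponding likelihoods."""
    return code_len_null - code_len_ref

# A region costing 120 bits under H0 but 80 bits given ref carries
# 40 bits of evidence; by the bound above, the probability that such a
# score arises under H0 is below 2**-40.
assert les_bits(120.0, 80.0) == 40.0
```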

41 A region R induces an “average savings per point” for every point q ∈ R, namely LES(R|Href)/|R| (where |R| is the number of points in R). [sent-124, score-0.477]

42 However, a point q ∈ R may also be contained in other regions generated by ref , each with its own local evidence score. [sent-125, score-0.714]

43 The point-wise evidence score is thus PES(q|Href) = max_{R: q∈R} LES(R|Href)/|R|; for any point q ∈ Q we define R[q] to be the region which provides this maximal score for q. [sent-128, score-0.254]
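A direct sketch of these two definitions (regions as sets of point indices, `les_of` as any LES evaluator; points covered by no region default to a score of 0 here):

```python
def point_evidence(regions, les_of, num_points):
    """PES(q) = max over regions R containing q of LES(R|Href)/|R|,
    together with the arg-max region R[q]."""
    pes = [0.0] * num_points
    r_of_q = [None] * num_points
    for r in regions:
        saving = les_of(r) / len(r)
        for q in r:
            if saving > pes[q]:
                pes[q], r_of_q[q] = saving, r
    return pes, r_of_q
```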

44 Fig. 1 shows such maximal regions found in Image-B (the query Q) given Image-A (the reference ref). [sent-130, score-0.67]

45 Let R1, ..., Rk ⊆ Q be k disjoint regions in Q, which have been generated independently from the examples in ref. [sent-138, score-0.489]

46 The statistical significance of such accumulated evidence is expressed by: P(GES(Q|Href, S) > l | H0) = P(Σ_{i=1}^{k} LES(Ri|Href) > l | H0) < 2^−l. [sent-148, score-0.194]

47 Consequently, we can accumulate local evidence of non-overlapping regions within Q which have similar regions in ref, to obtain global evidence on the hypothesis that Q was generated from ref. [sent-149, score-1.447]

48 Thus, for example, if we found 5 regions within Q with similar copies in ref, each with probability less than 10% of being generated at random, then the probability that Q was generated at random is less than (10%)^5 = 10^−5. [sent-150, score-0.511]
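The arithmetic behind this example, spelled out (a sketch; LES values are log2 likelihood ratios, so independent pieces of evidence simply add):

```python
import math

les_each = -math.log2(0.10)   # each region: p < 10%  =>  LES > ~3.32 bits
ges = 5 * les_each            # accumulated global evidence, ~16.6 bits
print(2.0 ** -ges)            # global p-value bound: (10%)**5 = 1e-05
```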

49 The bound in Eq. (4) is achieved by the segmentation of Q with the best accumulated evidence score, Σ_{Ri∈S} LES(Ri|Href) = GES(Q|Href, S), penalized by the length of the segmentation description, log P(S|Href) = −length(S). [sent-168, score-0.371]

50 Obviously, every segmentation provides such a lower (albeit less tight) bound on the total evidence score. [sent-169, score-0.265]

51 Thus, suppose we find large enough contiguous regions in Q with supporting regions in ref (i.e., with high enough local evidence scores). [sent-170, score-0.599]

52 Defining R0 to be the remainder of Q then makes S = R0, R1, ..., Rk a segmentation that yields such a lower bound. [sent-172, score-0.225]

53 As for the upper bound on GES(Q|Href), it can be obtained by summing up the maximal point-wise evidence scores PES(q|Href). [sent-176, score-0.281]

54 Note that the upper bound is computed by finding the maximal evidence regions that pass through every point in the query, regardless of the region complexity length(R). [sent-179, score-0.442]
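Both bounds reduce to simple sums; a sketch under our reading of the equations (Eq. 4 for the lower bound, with `seg_description_bits` standing in for length(S), and `pes` from the earlier `point_evidence` sketch):

```python
def ges_lower_bound(segments, les_of, seg_description_bits):
    """Lower bound on GES(Q|Href) from any one segmentation S: the
    accumulated evidence of S's segments, penalized by the number of
    bits needed to describe S itself."""
    return sum(les_of(r) for r in segments) - seg_description_bits

def ges_upper_bound(pes):
    """Upper bound on GES(Q|Href): sum the maximal point-wise evidence
    PES(q) over all points of Q, ignoring region-description costs."""
    return sum(pes)
```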

55 3 Algorithmic Framework: The local and global evidence scores presented in Sec. 2 provide new local and global similarity measures for signal data. [sent-182, score-0.322]

56 These measures can be used for various learning and inference problems (see Sec. 4). [sent-183, score-0.366]

57 In this section we briefly describe the algorithmic framework used for computing PES, LES, and GES to obtain the local and global compositional similarity measures. [sent-185, score-0.251]

58 Assume we are given a large region R ⊂ Q and would like to estimate its evidence score LES(R|Href ). [sent-186, score-0.344]

59 We would like to find regions similar to R in ref that would provide large local evidence for R. [sent-187, score-0.692]

60 The search for a similar ensemble in ref is done using efficient inference on a star graphical model, while allowing for small local displacement of each local patch [5]. [sent-191, score-0.584]

61 For example, in images these would be small spatial patches around each pixel contained in the larger image region R, and the displacements would be small shifts in x and y. [sent-192, score-0.262]

62 In video data the region R would be a space-time volumetric region, and it would be broken into lots of small overlapping space-time volumetric patches. [sent-193, score-0.204]

63 In general, for any n-dimensional signal representation, the region R would be a large n-dimensional region within the signal, and the patches would be small n-dimensional overlapping regions within R. [sent-196, score-0.434]

64 The local patch descriptors are signal and application dependent, but can be very simple. [sent-197, score-0.256]

65 It is the simultaneous matching of all these simple local patch descriptors with their relative positions that provides the strong overall evidence score for the entire region R. [sent-201, score-0.52]

66 The composition hypothesis is parameterized by a global transformation T (e.g., a location in ref) and local patch displacements ∆li for each patch i in R (i = 1, 2, ...). [sent-204, score-0.612]

67 These patches restrict the possible locations of R in ref. [sent-219, score-0.405]

68 Each new patch further reduces this list of candidate positions of R in ref . [sent-223, score-0.435]
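A simplified sketch of this progressive elimination for a 1-D signal (integer locations; `ref_locations_of` is our placeholder for an index from patch descriptors to ref positions, and the small allowed displacements are ignored):

```python
def candidate_origins(patches, ref_locations_of):
    """Each (offset, descriptor) patch of R votes for the origins of R
    in ref consistent with its own matches; intersecting the vote sets
    shrinks the candidate list patch by patch."""
    candidates = None
    for offset, desc in patches:
        votes = {loc - offset for loc in ref_locations_of(desc)}
        candidates = votes if candidates is None else candidates & votes
        if not candidates:
            return set()  # no consistent placement of R in ref
    return candidates or set()
```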

69 For each point q ∈ Q we want to estimate its maximal region R[q] and its corresponding evidence score LES(R[q]|Href). [sent-228, score-0.412]

70 Using the single image (a) as a “reference” of good quality grapefruits, we can detect defects (irregularities) in an image (b) of different grapefruits at different arrangements. [sent-232, score-0.213]

71 The right sides of (a) and (b) show the detected defects in red (points with small intra-image evidence LES). [sent-236, score-0.306]

72 In order to perform this step efficiently, we start with a small surrounding region around q, break it into patches, and search only for that region in ref (using the same efficient search method described above). [sent-238, score-0.575]

73 Locations in ref where good initial matches were found are treated as candidates, and are gradually ‘grown’ to their maximal possible matching regions (allowing for local distortions in patch position and descriptor, as before). [sent-239, score-0.715]

74 The evidence score LES of each such maximally grown region is computed. [sent-240, score-0.369]
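A toy 1-D version of this grow-from-a-seed step (real signals compare patch descriptors, not raw samples, and tolerate small displacements; here we just extend an interval while corresponding samples agree):

```python
import numpy as np

def grow_match(query, ref, q, r0, tol=0.1):
    """Grow the largest contiguous interval around the seed
    correspondence query[q] <-> ref[r0] in both directions."""
    lo = hi = q
    while lo > 0 and r0 - (q - lo) > 0 and \
            abs(query[lo - 1] - ref[r0 - (q - lo) - 1]) <= tol:
        lo -= 1
    while hi + 1 < len(query) and r0 + (hi - q) + 1 < len(ref) and \
            abs(query[hi + 1] - ref[r0 + (hi - q) + 1]) <= tol:
        hi += 1
    return lo, hi  # the maximally grown matched interval around q

# Toy usage: the middle of `query` is a verbatim chunk of `ref`.
ref = np.sin(np.linspace(0, 10, 200))
query = np.concatenate([np.full(20, 5.0), ref[50:120], np.full(20, 5.0)])
print(grow_match(query, ref, q=50, r0=80))  # (20, 89)
```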

75 In practice, a region found maximal for one point is likely to be the maximal region for many other points in Q. [sent-242, score-0.346]

76 Thus the number of different maximal regions in Q will tend to be significantly smaller than the number of points in Q. [sent-243, score-0.187]

77 Given the maximal regions R1, ..., Rk, we can use these to induce a reasonable (although not optimal) segmentation using the following heuristic: we choose the first segment R̃1 to be the maximal region with the largest evidence score, R̃1 = argmax_{Ri} LES(Ri|Href). [sent-251, score-0.399]
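A sketch of this greedy selection (everything beyond the arg-max rule, such as the stopping test, is our own filler):

```python
def greedy_segments(maximal_regions, les_of):
    """Repeatedly pick the candidate region with the highest evidence
    score, remove its points from the remaining candidates, and
    re-score, until no positive-evidence region remains."""
    pool = [set(r) for r in maximal_regions]
    segments = []
    while pool:
        best = max(pool, key=les_of)
        if les_of(best) <= 0:
            break
        segments.append(best)
        pool = [r - best for r in pool if r is not best and r - best]
    return segments
```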

78 Re-evaluating the evidence scores LES(R̃i|Href) of these regions, we can obtain a reasonable lower bound on GES(Q|Href) using the left-hand side of Eq. (4). [sent-257, score-0.251]

79 4 Applications and Results: The global similarity measure GES(Q|Href) can be applied between individual signals, and/or between groups of signals (by setting Q and ref accordingly). [sent-267, score-0.607]

80 The local similarity measure LES(R|Href ) can be used for local inference problems, such as local classification, saliency, segmentation, etc. [sent-269, score-0.317]

81 Salient points are detected as points with low intra-image evidence scores LES (when measured relative to the rest of the image). [sent-274, score-0.229]

82 Each maximal region R[q] provides high evidence (translated to high affinity scores) that all the points within it should be grouped together (see text for more details). [sent-276, score-0.339]

83 For example, this can be done by setting Q to be one part of the signal, and ref to be the rest of the signal. [sent-279, score-0.364]

84 Such intra-signal evidence can be used for inference tasks like segmentation, while the absence of intra-signal evidence (local dissimilarity) can be used for detecting saliency/irregularities. [sent-280, score-0.363]

85 Detection of Saliency/Irregularities (in Images): Using our statistical framework, we define a point q ∈ Q to be irregular if its best local evidence score LES(R[q] |Href ) is below some threshold. [sent-288, score-0.314]
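In code this is a one-line threshold test; the sketch below reuses the per-point scores from the earlier `point_evidence` sketch (the paper thresholds LES(R[q]|Href) itself, so the score used here is our simplification):

```python
def irregular_points(pes, threshold):
    """Flag q as irregular when its best local evidence falls below a
    threshold, i.e., no large region around q composes well from ref."""
    return [q for q, score in enumerate(pes) if score < threshold]
```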

86 Using a single image of good-quality grapefruits (image (a), used as ref), we can detect defects (irregularities) in an image (b) of different grapefruits at different arrangements. [sent-294, score-0.552]

87 The algorithm tried to compose Q from pieces of ref that are as large as possible. [sent-297, score-0.494]

88 Points whose maximal regions were small were determined to be irregular. [sent-300, score-0.187]

89 Alternatively, local saliency within a query signal Q can also be measured relative to other portions of Q. [sent-304, score-0.337]

90 This can be done, for example, by trying to compose each region in Q using pieces from the rest of Q. [sent-306, score-0.227]

91 For each point q ∈ Q we compute its intra-signal evidence score LES(R[q] ) relative to the other (non-neighboring) parts of the image. [sent-307, score-0.263]

92 Points with low intra-signal evidence are detected as salient. [sent-308, score-0.181]

93 Examples of using intra-signal saliency to detect defects in fabric can be found in Fig. [sent-309, score-0.186]

94 We used a SIFT-like [9] patch descriptor, but computed densely for all local patches in the image. [sent-314, score-0.179]

95 Signal Segmentation (Images): For each point q ∈ Q we compute its maximal evidence region R[q] . [sent-319, score-0.323]

96 Every maximal region provides evidence that all points within the region should be clustered/segmented together. [sent-321, score-0.436]
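One way to turn this into a segmentation, sketched under our own assumptions (the exact affinity definition is not spelled out here): accumulate per-point evidence into a pairwise affinity matrix and hand it to any standard graph-based clustering, e.g., spectral clustering.

```python
import numpy as np

def region_affinity(regions, les_of, num_points):
    """Every maximal region raises the affinity between all pairs of
    points it covers, weighted by its per-point evidence LES(R)/|R|."""
    A = np.zeros((num_points, num_points))
    for r in regions:
        idx = np.fromiter(r, dtype=int)
        A[np.ix_(idx, idx)] += les_of(r) / len(r)
    return A
```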

97 Thus, large regions which appear also in ref (in the case of a single image – other regions in Q) are likely to be clustered together in Q. [sent-324, score-0.595]

98 Note that we have not explicitly used low-level similarity between neighboring points, as is customary in most image segmentation algorithms. [sent-329, score-0.202]

99 In our implementation, 3D space-time video regions were broken into small spatio-temporal video patches (7 × 7 × 4). [sent-359, score-0.232]

100 These approaches are useful for recognition, but are not applicable to non-class-based inference problems (such as similarity between pairs of signals with no prior data, clustering, etc.). [sent-366, score-0.2]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('href', 0.758), ('ref', 0.364), ('ges', 0.215), ('les', 0.181), ('evidence', 0.158), ('regions', 0.103), ('defects', 0.102), ('similarity', 0.101), ('region', 0.097), ('signal', 0.096), ('score', 0.089), ('query', 0.085), ('signals', 0.084), ('segmentation', 0.076), ('pieces', 0.073), ('patch', 0.071), ('irregularities', 0.069), ('maximal', 0.068), ('local', 0.067), ('global', 0.058), ('compose', 0.057), ('reference', 0.05), ('composition', 0.048), ('grapefruits', 0.045), ('saliency', 0.045), ('video', 0.044), ('ri', 0.043), ('distortions', 0.042), ('descriptor', 0.042), ('patches', 0.041), ('scores', 0.039), ('displacements', 0.039), ('composed', 0.038), ('shifts', 0.037), ('chunks', 0.036), ('accumulated', 0.036), ('sequences', 0.035), ('ballet', 0.034), ('detecting', 0.032), ('contiguous', 0.029), ('measures', 0.029), ('hypothesis', 0.028), ('audio', 0.028), ('portions', 0.028), ('shared', 0.026), ('algorithmic', 0.025), ('grown', 0.025), ('image', 0.025), ('nity', 0.025), ('gurations', 0.025), ('length', 0.025), ('marginalize', 0.024), ('side', 0.023), ('detected', 0.023), ('images', 0.023), ('boiman', 0.023), ('fabric', 0.023), ('lots', 0.023), ('segmentations', 0.022), ('descriptors', 0.022), ('accumulate', 0.022), ('generated', 0.022), ('database', 0.022), ('rk', 0.022), ('dissimilarity', 0.022), ('similarities', 0.021), ('lr', 0.021), ('bags', 0.021), ('af', 0.02), ('irani', 0.02), ('volumetric', 0.02), ('likelihood', 0.019), ('transformations', 0.019), ('restricted', 0.018), ('principled', 0.018), ('prede', 0.018), ('logp', 0.018), ('weizmann', 0.018), ('backgrounds', 0.018), ('recognition', 0.018), ('action', 0.018), ('retrieval', 0.018), ('occlusions', 0.017), ('surrounding', 0.017), ('bound', 0.016), ('classi', 0.016), ('walk', 0.016), ('speakers', 0.016), ('fragments', 0.016), ('ijcv', 0.016), ('points', 0.016), ('detect', 0.016), ('relative', 0.016), ('obviously', 0.015), ('inference', 0.015), ('induced', 0.015), ('alignment', 0.015), ('applicability', 0.015), ('lower', 0.015), ('repetitions', 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 174 nips-2006-Similarity by Composition

Author: Oren Boiman, Michal Irani

Abstract: We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is “similar” to a signal S2 if it is “easy” to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2. “Similarity by Composition” can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.). We show a few such examples.

2 0.071357548 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf

Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.

3 0.06211362 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests

Author: Frank Moosmann, Bill Triggs, Frederic Jurie

Abstract: Some of the most effective recent methods for content-based image classification work by extracting dense or sparse local image descriptors, quantizing them according to a coding rule such as k-means vector quantization, accumulating histograms of the resulting “visual word” codes over the image, and classifying these with a conventional classifier such as an SVM. Large numbers of descriptors and large codebooks are needed for good results and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests – ensembles of randomly created clustering trees – and show that these provide more accurate results, much faster training and testing and good resistance to background clutter in several state-of-the-art image classification tasks.

4 0.05734586 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

Author: Dong S. Cheng, Vittorio Murino, Mário Figueiredo

Abstract: This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes.

5 0.0543813 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

Author: Oliver Williams

Abstract: This paper describes a Gaussian process framework for inferring pixel-wise disparity and bi-layer segmentation of a scene given a stereo pair of images. The Gaussian process covariance is parameterized by a foreground-background-occlusion segmentation label to model both smooth regions and discontinuities. As such, we call our model a switched Gaussian process. We propose a greedy incremental algorithm for adding observations from the data and assigning segmentation labels. Two observation schedules are proposed: the first treats scanlines as independent, the second uses an active learning criterion to select a sparse subset of points to measure. We show that this probabilistic framework has comparable performance to the state-of-the-art.

6 0.053350911 55 nips-2006-Computation of Similarity Measures for Sequential Data using Generalized Suffix Trees

7 0.048580188 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

8 0.048195917 179 nips-2006-Sparse Representation for Signal Classification

9 0.045535803 122 nips-2006-Learning to parse images of articulated bodies

10 0.04493019 46 nips-2006-Blind source separation for over-determined delayed mixtures

11 0.044900991 66 nips-2006-Detecting Humans via Their Pose

12 0.042771176 52 nips-2006-Clustering appearance and shape by learning jigsaws

13 0.041998576 86 nips-2006-Graph-Based Visual Saliency

14 0.041703779 97 nips-2006-Inducing Metric Violations in Human Similarity Judgements

15 0.040798273 9 nips-2006-A Nonparametric Bayesian Method for Inferring Features From Similarity Judgments

16 0.040748693 80 nips-2006-Fundamental Limitations of Spectral Clustering

17 0.039345443 108 nips-2006-Large Scale Hidden Semi-Markov SVMs

18 0.036371674 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

19 0.033673506 114 nips-2006-Learning Time-Intensity Profiles of Human Activity using Non-Parametric Bayesian Models

20 0.032604273 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.117), (1, 0.001), (2, 0.082), (3, -0.031), (4, 0.02), (5, -0.045), (6, -0.054), (7, -0.044), (8, 0.023), (9, -0.013), (10, 0.046), (11, 0.033), (12, -0.017), (13, 0.001), (14, 0.007), (15, 0.014), (16, 0.036), (17, -0.018), (18, 0.028), (19, -0.026), (20, 0.026), (21, 0.018), (22, -0.033), (23, 0.029), (24, 0.062), (25, -0.051), (26, -0.073), (27, -0.019), (28, -0.047), (29, 0.018), (30, 0.018), (31, -0.055), (32, -0.04), (33, 0.001), (34, 0.011), (35, 0.055), (36, 0.0), (37, -0.005), (38, -0.023), (39, -0.001), (40, -0.041), (41, -0.083), (42, -0.058), (43, -0.058), (44, -0.025), (45, 0.034), (46, -0.058), (47, -0.107), (48, -0.072), (49, 0.109)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92032385 174 nips-2006-Similarity by Composition

Author: Oren Boiman, Michal Irani

Abstract: We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is “similar” to a signal S2 if it is “easy” to compose S1 from few large contiguous chunks of S2 . Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2 . This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2 . “Similarity by Composition” can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.) We show a few such examples. 1

2 0.48924729 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf

Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.

3 0.47875613 45 nips-2006-Blind Motion Deblurring Using Image Statistics

Author: Anat Levin

Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of of existing blind deconvolution research concentrates at recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture.

4 0.44071969 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests

Author: Frank Moosmann, Bill Triggs, Frederic Jurie

Abstract: Some of the most effective recent methods for content-based image classification work by extracting dense or sparse local image descriptors, quantizing them according to a coding rule such as k-means vector quantization, accumulating histograms of the resulting “visual word” codes over the image, and classifying these with a conventional classifier such as an SVM. Large numbers of descriptors and large codebooks are needed for good results and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests – ensembles of randomly created clustering trees – and show that these provide more accurate results, much faster training and testing and good resistance to background clutter in several state-of-the-art image classification tasks.

5 0.42507631 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

Author: Dong S. Cheng, Vittorio Murino, Mário Figueiredo

Abstract: This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes.

6 0.41380447 52 nips-2006-Clustering appearance and shape by learning jigsaws

7 0.40713415 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

8 0.405976 55 nips-2006-Computation of Similarity Measures for Sequential Data using Generalized Suffix Trees

9 0.40399799 86 nips-2006-Graph-Based Visual Saliency

10 0.39970097 66 nips-2006-Detecting Humans via Their Pose

11 0.39621279 53 nips-2006-Combining causal and similarity-based reasoning

12 0.38015032 89 nips-2006-Handling Advertisements of Unknown Quality in Search Advertising

13 0.37704164 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

14 0.36419445 73 nips-2006-Efficient Methods for Privacy Preserving Face Detection

15 0.3607671 42 nips-2006-Bayesian Image Super-resolution, Continued

16 0.35935017 122 nips-2006-Learning to parse images of articulated bodies

17 0.35879406 49 nips-2006-Causal inference in sensorimotor integration

18 0.34764326 40 nips-2006-Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

19 0.34578675 180 nips-2006-Speakers optimize information density through syntactic reduction

20 0.34395835 139 nips-2006-Multi-dynamic Bayesian Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.089), (2, 0.277), (3, 0.031), (7, 0.09), (8, 0.01), (9, 0.035), (12, 0.018), (20, 0.04), (21, 0.017), (22, 0.027), (44, 0.067), (57, 0.103), (65, 0.04), (69, 0.039), (90, 0.01)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83121622 27 nips-2006-An EM Algorithm for Localizing Multiple Sound Sources in Reverberant Environments

Author: Michael I. Mandel, Daniel P. Ellis, Tony Jebara

Abstract: We present a method for localizing and separating sound sources in stereo recordings that is robust to reverberation and does not make any assumptions about the source statistics. The method consists of a probabilistic model of binaural multisource recordings and an expectation maximization algorithm for finding the maximum likelihood parameters of that model. These parameters include distributions over delays and assignments of time-frequency regions to sources. We evaluate this method against two comparable algorithms on simulations of simultaneous speech from two or three sources. Our method outperforms the others in anechoic conditions and performs as well as the better of the two in the presence of reverberation.

2 0.80694121 125 nips-2006-Logarithmic Online Regret Bounds for Undiscounted Reinforcement Learning

Author: Peter Auer, Ronald Ortner

Abstract: We present a learning algorithm for undiscounted reinforcement learning. Our interest lies in bounds for the algorithm’s online performance after some finite number of steps. In the spirit of similar methods already successfully applied for the exploration-exploitation tradeoff in multi-armed bandit problems, we use upper confidence bounds to show that our UCRL algorithm achieves logarithmic online regret in the number of steps taken with respect to an optimal policy.

same-paper 3 0.76911384 174 nips-2006-Similarity by Composition

Author: Oren Boiman, Michal Irani

Abstract: We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is “similar” to a signal S2 if it is “easy” to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2. “Similarity by Composition” can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.). We show a few such examples.

4 0.73278743 85 nips-2006-Geometric entropy minimization (GEM) for anomaly detection and localization

Author: Alfred O. Hero

Abstract: We introduce a novel adaptive non-parametric anomaly detection approach, called GEM, that is based on the minimal covering properties of K-point entropic graphs when constructed on N training samples from a nominal probability distribution. Such graphs have the property that as N → ∞ their span recovers the entropy minimizing set that supports at least ρ = K/N (100)% of the mass of the Lebesgue part of the distribution. When a test sample falls outside of the entropy minimizing set an anomaly can be declared at a statistical level of significance α = 1 − ρ. A method for implementing this non-parametric anomaly detector is proposed that approximates this minimum entropy set by the influence region of a K-point entropic graph built on the training data. By implementing an incremental leave-one-out k-nearest neighbor graph on resampled subsets of the training data GEM can efficiently detect outliers at a given level of significance and compute their empirical p-values. We illustrate GEM for several simulated and real data sets in high dimensional feature spaces.

5 0.60247141 171 nips-2006-Sample Complexity of Policy Search with Known Dynamics

Author: Peter L. Bartlett, Ambuj Tewari

Abstract: We consider methods that try to find a good policy for a Markov decision process by choosing one from a given class. The policy is chosen based on its empirical performance in simulations. We are interested in conditions on the complexity of the policy class that ensure the success of such simulation based policy search methods. We show that under bounds on the amount of computation involved in computing policies, transition dynamics and rewards, uniform convergence of empirical estimates to true value functions occurs. Previously, such results were derived by assuming boundedness of pseudodimension and Lipschitz continuity. These assumptions and ours are both stronger than the usual combinatorial complexity measures. We show, via minimax inequalities, that this is essential: boundedness of pseudodimension or fat-shattering dimension alone is not sufficient.

6 0.56817716 38 nips-2006-Automated Hierarchy Discovery for Planning in Partially Observable Environments

7 0.55724728 34 nips-2006-Approximate Correspondences in High Dimensions

8 0.55589598 124 nips-2006-Linearly-solvable Markov decision problems

9 0.5532735 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

10 0.55281758 202 nips-2006-iLSTD: Eligibility Traces and Convergence Analysis

11 0.55134094 44 nips-2006-Bayesian Policy Gradient Algorithms

12 0.54890811 42 nips-2006-Bayesian Image Super-resolution, Continued

13 0.5487808 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

14 0.54853803 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields

15 0.54771042 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

16 0.54562199 110 nips-2006-Learning Dense 3D Correspondence

17 0.54530913 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

18 0.54475081 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

19 0.54253447 185 nips-2006-Subordinate class recognition using relational object models

20 0.54212445 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization