nips nips2006 nips2006-52 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Anitha Kannan, John Winn, Carsten Rother
Abstract: Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. 1
Reference: text
sentIndex sentText sentNum sentScore
1 In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. [sent-4, score-1.123]
2 By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. [sent-5, score-0.293]
3 When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. [sent-6, score-1.36]
4 This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. [sent-8, score-0.21]
5 However, a central problem with existing patch-based models is that there is no way to choose the shape and size of a patch; typically, a predefined set of patch sizes and shapes (often rectangles or circles) is used. [sent-12, score-0.256]
6 We believe that natural images provide enough cues to allow patches of varying shape and size to be discovered, corresponding to the shapes and sizes of the object parts present in the images. [sent-13, score-0.384]
7 Indeed, we will show that the patches discovered by the jigsaw model can become strongly associated with semantic object parts. [sent-14, score-1.067]
8 With this motivation, we introduce a generative model for a set of images that learns to extract irregularly shaped and sized patches from a latent image which are combined to generate each training image. [sent-15, score-0.363]
9 We call this latent image a jigsaw as it contains all the necessary ‘jigsaw pieces’ that can be used to generate the target image set. [sent-16, score-1.148]
10 We present an inference algorithm for learning the jigsaw and for finding the jigsaw pieces that make up each image. [sent-17, score-2.011]
11 As our proposed jigsaw model is a generative model for an image, it can be readily used as a component in many computer vision applications for both image understanding and image synthesis. [sent-18, score-1.18]
12 These include object recognition, detection, image segmentation and image classification, object synthesis, image de-noising, super-resolution, texture transfer between images and image in-painting. [sent-19, score-0.687]
13 In fact, the jigsaw model is likely to be usable as a direct replacement for a fixed patch model in any existing patch-based system. [sent-20, score-1.008]
14 2 Related work The closest work to ours is the epitome model of Jojic et al. [sent-21, score-0.208]
15 This is a generative model for image patches, or alternatively a model for images if patches that share coordinates in the image are averaged together (although this averaging often leads to a blurry result). [sent-23, score-0.449]
16 In contrast, in the jigsaw model, the inference process chooses appropriately shaped and sized pieces from the training images when learning the jigsaw. [sent-25, score-1.222]
17 Whilst this work does allow different patch shapes, it does not learn patch appearance since it works from a supplied texture image. [sent-38, score-0.288]
18 These models use hand-selected patch shapes (typically rectangles) which can lead to poor results given that different object parts have different sizes and shapes. [sent-41, score-0.237]
19 In fact, the use of fixed patches reduces accuracy when the object part is of a different size and shape from the chosen patch; in this case, the patch model has to cope with the variability outside the object part. [sent-42, score-0.362]
20 In addition, such models ignore the shape of the object part which is frequently much more discriminative than appearance alone. [sent-44, score-0.224]
21 3 Probabilistic model This section describes the probabilistic model that we use to learn a jigsaw from a set of training images. [sent-48, score-0.946]
22 Thus, while allowing the jigsaw pieces to have arbitrary shape, we ensure that such pieces are shared across the entire image set, exhaustively explain the input image set, and are also large enough to be discriminative. [sent-50, score-1.55]
23 By meeting these criteria, we can capture both the appearance and the shape of repeated image structures, for example, eyes, noses and mouths in a set of face images. [sent-51, score-0.351]
24 We define a jigsaw J to be an image such that each pixel z in J has an intensity value µ(z) and an associated variance λ−1 (z) (so λ is the inverse variance, also called the precision). [sent-52, score-1.08]
25 A set of spatially grouped jigsaw pixels forms a ‘jigsaw piece’. [sent-53, score-1.005]
26 Figure 1: Graphical model showing how the jigsaw J is used to generate a set of images I1 . . . IN by combining the jigsaw pieces in different ways. [sent-56, score-0.208]
27 Each image has a corresponding offset map L which defines the jigsaw pieces used to generate that image (see text for details). [sent-57, score-1.451]
28 Notice that several jigsaw pieces can overlap and hence share parts of their appearance. [sent-58, score-1.15]
29 We can combine many of these pieces to generate images, noting that pixels in the jigsaw can be re-used in multiple pieces. [sent-60, score-1.169]
30 Our probabilistic model is a generative image model which generates an image by joining together pieces of the jigsaw and then adding Gaussian noise of variance given by the jigsaw. [sent-61, score-1.385]
31 For each image I, we have an associated offset map L of the same size which determines the jigsaw pieces used to make that image. [sent-62, score-1.338]
32 This offset map defines a position in the jigsaw for each pixel in the image (more than one image pixel can map to the same jigsaw pixel). [sent-63, score-2.266]
33 Each entry in the offset map is a two-dimensional offset li = (lx , ly ), which maps a 2D point i in the image to a 2D point z in the jigsaw using z = (i − li ) mod |J|, where |J| = (width, height) are the dimensions of the jigsaw. [sent-64, score-1.269]
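To make the offset-map construction concrete, here is a minimal sketch of the generative mapping; this is an illustrative reimplementation, not the authors' code, and all function and variable names are ours. It maps every image pixel i to a jigsaw pixel z = (i − li) mod |J| and samples the image as jigsaw mean plus Gaussian noise, as described above.

```python
import numpy as np

def generate_image(mu, lam, offsets, rng=None):
    """Sample one image from a jigsaw (sketch of the generative process).

    mu, lam : (Jh, Jw) arrays holding the jigsaw mean and precision.
    offsets : (H, W, 2) integer offset l_i for every image pixel i.
    Each pixel i = (y, x) reads from jigsaw position z = (i - l_i) mod |J|.
    """
    rng = np.random.default_rng() if rng is None else rng
    Jh, Jw = mu.shape
    H, W = offsets.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    zy = (ys - offsets[..., 0]) % Jh      # modular wrap-around in y
    zx = (xs - offsets[..., 1]) % Jw      # modular wrap-around in x
    mean = mu[zy, zx]                     # copy appearance from the jigsaw
    std = lam[zy, zx] ** -0.5             # variance is the inverse precision
    return mean + rng.normal(0.0, std)    # add per-pixel Gaussian noise
```

Note how two adjacent pixels with equal offsets read from adjacent jigsaw pixels, which is what makes constant-offset regions copy coherent pieces.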
34 Notice that if two adjacent pixels in the image have the same offset label, then they map to adjacent pixels in the jigsaw. [sent-65, score-0.341]
35 Given this mapping and the jigsaw, the probability distribution of an image is assumed to be independent for each pixel and is given by P(I | J, L) = ∏i N(I(i); µ(i − li), λ(i − li)−1) (1), where the product is over image pixel positions and both subtractions are modulo |J|. [sent-67, score-0.376]
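Equation (1) translates directly into a log-likelihood computation; a sketch under the same assumptions as above (grayscale image, our own names):

```python
import numpy as np

def log_likelihood(image, mu, lam, offsets):
    """log P(I | J, L) of equation (1), summed over all pixels."""
    Jh, Jw = mu.shape
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    zy = (ys - offsets[..., 0]) % Jh
    zx = (xs - offsets[..., 1]) % Jw
    m, p = mu[zy, zx], lam[zy, zx]
    # log N(x; m, 1/p) = 0.5*log(p / (2*pi)) - 0.5*p*(x - m)^2
    return float(np.sum(0.5 * np.log(p / (2.0 * np.pi))
                        - 0.5 * p * (image - m) ** 2))
```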
36 We want the images to consist of coherent pieces of the jigsaw, and so we define a Markov random field on the offset map to encourage neighboring pixels to have the same offsets. [sent-68, score-0.449]
37 The interaction potential ψ defines a Potts model on the offsets: ψ(li, lj) = γ δ(li ≠ lj) (3), where γ is a parameter which influences the typical size of the learned jigsaw pieces. [sent-70, score-0.995]
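The pairwise term is then a standard Potts energy over neighbouring pairs; a minimal sketch, where 4-connectivity is our assumption since the extract does not state the clique structure:

```python
import numpy as np

def potts_energy(offsets, gamma):
    """Sum of gamma * [l_i != l_j] over 4-connected neighbour pairs."""
    diff_v = np.any(offsets[1:, :] != offsets[:-1, :], axis=-1)  # vertical pairs
    diff_h = np.any(offsets[:, 1:] != offsets[:, :-1], axis=-1)  # horizontal pairs
    return gamma * (int(diff_v.sum()) + int(diff_h.sum()))
```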
38 Currently, γ is set to give the largest pieces whilst maintaining reasonable quality when the image is reconstructed from the jigsaw. [sent-71, score-0.357]
39 When learning the jigsaw, it is possible for regions of the jigsaw to be unused, that is, to have no image pixels mapped to them. [sent-72, score-1.098]
40 To allow for this case, we define a Normal-Gamma prior on µ and λ for each jigsaw pixel z: p(µ(z), λ(z)) = N(µ(z); µ0, (βλ(z))−1) Gamma(λ(z); a, b). [sent-73, score-0.958]
41 Inference and learning: The model defines the joint probability distribution on a jigsaw J, a set of images I1 . . . IN, and their offset maps L1 . . . LN. [sent-78, score-0.977]
42 In other words, our goal is to find the jigsaw J and offset maps L1 . . . LN that maximise this joint probability. [sent-89, score-1.006]
43 First, the jigsaw is initialised by setting the precisions λ to the expected value under the prior b/a and the means µ to Gaussian noise with the same mean and variance as the data. [sent-94, score-0.926]
44 Given this initialisation, the offset maps are updated for each image by applying the alpha-expansion graph-cut algorithm of [11] (note that our energy is submodular, also known as regular). [sent-95, score-0.214]
45 Given the inferred offset maps, the jigsaw J that maximises P(J, {I, L}1..N) can be found analytically. [sent-97, score-0.991]
46 This is achieved for a jigsaw pixel z by setting the optimal mean µ∗ and precision λ∗ to µ∗ = (βµ0 + Σx∈X(z) I(x)) / (β + |X(z)|) (6) and (λ∗)−1 = (b + βµ0² − (β + |X(z)|)(µ∗)² + Σx∈X(z) I(x)²) / (a + |X(z)|) (7), where X(z) is the set of image pixels that are mapped to the jigsaw pixel z across all images. [sent-98, score-2.098]
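Equations (6) and (7) are the usual Normal-Gamma posterior-mode updates and can be computed independently per jigsaw pixel once the sufficient statistics have been accumulated; a sketch with hypothetical argument names:

```python
def update_jigsaw_pixel(sum_I, sum_I2, n, mu0, beta, a, b):
    """Closed-form update of (mu*, lambda*) for one jigsaw pixel z.

    sum_I, sum_I2 : sums of I(x) and I(x)^2 over X(z), the image pixels
                    mapped to z across all images; n = |X(z)|.
    """
    mu_star = (beta * mu0 + sum_I) / (beta + n)                   # eq. (6)
    inv_lam = (b + beta * mu0 ** 2
               - (beta + n) * mu_star ** 2 + sum_I2) / (a + n)    # eq. (7)
    return mu_star, 1.0 / inv_lam
```

A sanity check on the update: for an unused pixel (n = 0) it reduces to µ∗ = µ0 and (λ∗)−1 = b/a, i.e. the pixel falls back to its prior.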
47 We iterate between finding the offset maps, holding the jigsaw fixed, and updating the jigsaw using the updated offset maps. [sent-99, score-1.997]
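The overall scheme is therefore coordinate ascent on the joint probability. The sketch below substitutes a simple greedy ICM sweep for the alpha-expansion graph cut of [11]; ICM is our simplification to keep the sketch self-contained (a faithful implementation would use graph cuts), and all names are ours.

```python
import numpy as np

def icm_sweep(image, mu, lam, offsets, gamma):
    """One greedy update of the offset map; a stand-in for alpha-expansion."""
    Jh, Jw = mu.shape
    H, W = image.shape
    nbrs = lambda y, x: [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    for y in range(H):
        for x in range(W):
            # candidate offsets: the current one plus those of the 4 neighbours
            cands = {tuple(offsets[y, x])}
            for ny, nx in nbrs(y, x):
                if 0 <= ny < H and 0 <= nx < W:
                    cands.add(tuple(offsets[ny, nx]))
            best, best_e = None, np.inf
            for ly, lx in cands:
                zy, zx = (y - ly) % Jh, (x - lx) % Jw
                p = lam[zy, zx]
                # negative log-likelihood of this pixel under the jigsaw
                e = 0.5 * p * (image[y, x] - mu[zy, zx]) ** 2 - 0.5 * np.log(p)
                for ny, nx in nbrs(y, x):             # Potts disagreement cost
                    if 0 <= ny < H and 0 <= nx < W:
                        e += gamma * (tuple(offsets[ny, nx]) != (ly, lx))
                if e < best_e:
                    best, best_e = (ly, lx), e
            offsets[y, x] = best
    return offsets
```

Alternating such offset updates with the closed-form jigsaw update above gives the iteration described in the text.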
48 When inference has converged, we apply a clustering step to determine the jigsaw pieces (in future we plan to extend the model so that this clustering arises directly during learning). [sent-100, score-1.164]
49 The degree of overlap is measured as the ratio of the intersection to the union of the two jigsaw regions that the image regions map to. [sent-102, score-1.098]
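That overlap measure is intersection-over-union on jigsaw coordinates; a minimal sketch, assuming each image region is represented by the set of jigsaw pixels it maps to under its offset map:

```python
def region_overlap(region_a, region_b):
    """IoU of the jigsaw regions that two image regions map to.

    region_a, region_b : sets of (zy, zx) jigsaw coordinates.
    """
    union = len(region_a | region_b)
    return len(region_a & region_b) / union if union else 0.0
```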
50 This has the effect of clustering image regions by both appearance and shape. [sent-103, score-0.252]
51 Each cluster then corresponds to a region of the jigsaw with an (approximately) consistent shape that explains a large number of image regions. [sent-104, score-1.097]
52 This image was constructed by placing four distinct objects (star, triangle, square and circle) at random positions on a black background image, with the pixels from the more recently placed object replacing the previously drawn pixels. [sent-107, score-0.238]
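The toy input is straightforward to reproduce; below is a hedged sketch of such a construction. The exact shapes, sizes, colours and object counts used in the paper are not given here, so these are our guesses.

```python
import numpy as np

def make_toy_image(side=150, n_objects=20, seed=0):
    """Place colored shapes at random; later shapes overwrite earlier pixels."""
    rng = np.random.default_rng(seed)
    img = np.zeros((side, side, 3))                        # black background
    yy, xx = np.mgrid[0:side, 0:side]
    shapes = ['square', 'circle', 'triangle', 'star']
    colors = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]  # one colour per shape
    for _ in range(n_objects):
        k = int(rng.integers(len(shapes)))
        cy, cx = rng.integers(15, side - 15, size=2)
        dy, dx = yy - cy, xx - cx
        if shapes[k] == 'square':
            mask = (abs(dy) < 8) & (abs(dx) < 8)
        elif shapes[k] == 'circle':
            mask = dy ** 2 + dx ** 2 < 64
        elif shapes[k] == 'triangle':
            mask = (dy > -8) & (dy < 8) & (abs(dx) < (dy + 8) / 2)
        else:                                              # crude stand-in for the star
            mask = ((abs(dy) < 3) & (abs(dx) < 9)) | \
                   ((abs(dx) < 3) & (abs(dy) < 9))
        img[mask] = colors[k]                              # overwrite: occlusion order
    return img
```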
53 Using this image as the only input, we would like our model to automatically infer the appearances and shapes of the objects present in the image. [sent-109, score-0.236]
54 For instance, in [1], a separate shape epitome is learned in conjunction with the appearance epitome so that image patches can be explained as a two-layered composition of appearance patches using the shape patch. [sent-111, score-1.0]
55 This segmentation illustrates the differently shaped jigsaw pieces found when learning the jigsaw shown in (c)-(d). [sent-114, score-2.09]
56 (c) Jigsaw mean, with the four most-used jigsaw pieces outlined in white. [sent-115, score-1.134]
57 (d) The jigsaw variance summed across the RGB channels; white is high, black is low. [sent-116, score-0.914]
58 Fig. 2b-d shows the results of learning a jigsaw of this toy image. [sent-122, score-0.922]
59 In Fig. 2b, we show how the image decomposes into jigsaw pieces. [sent-124, score-1.014]
60 This is further illustrated in the 36 × 36 learned jigsaw whose mean and variance are shown in Fig. 2c-d. [sent-130, score-0.955]
61 The learned jigsaw has captured the shapes and appearances of the four objects and a black region for modelling the background. [sent-132, score-1.055]
62 Under our Bayesian model, pixels in the jigsaw that have never been used in explaining the observation are set to µ0 , which we have fixed to . [sent-133, score-0.951]
63 We can obtain jigsaw pieces by applying the clustering step outlined in Section 3. [sent-135, score-1.143]
64 In Fig. 2c, we also show the four most-used jigsaw pieces thus obtained by outlining them in white. [sent-138, score-1.108]
65 Comparison to epitome model: In this section, we compare the jigsaw model with the epitome model [1], as applied to the dog image in Fig. 3a. [sent-139, score-1.43]
66 The jigsaw (Fig. 3c) was learned such that the average patch area was 49 pixels, the same as in the epitome model. [sent-144, score-0.271]
67 Fig. 3b shows the segmentation of the image after learning. [Figure 3 caption: (a) input image, (b) image showing segmentation, (c) jigsaw mean; reconstructions from: (e) jigsaw; mean squared error values given in the figure.] [sent-147, score-0.343]
68 While this reconstruction has similar mean squared error to the jigsaw reconstruction, it is more blurry and less visually pleasing. [sent-152, score-1.001]
69 We can see that the pieces correspond to meaningful regions such as flowers, and also that patch boundaries tend to follow object boundaries. [sent-154, score-0.377]
70 Comparing Fig. 3c & d, we find that the jigsaw is much less blurred than the epitome and also doesn’t have the epitome’s artificial ‘block’ structure. [sent-156, score-1.097]
71 Instead, the boundaries between different textures are placed to allocate the appropriate amount of jigsaw space to each texture, for example, entire flowers are represented as one coherent region. [sent-157, score-0.948]
72 However, whilst this technique can also be applied to jigsaw learning, it has not been found to be necessary in order to obtain a good solution. [sent-159, score-0.93]
73 In Fig. 3e-g, we compare reconstructions of the input image from the learned jigsaw and epitome models. [sent-161, score-1.265]
74 Since the jigsaw is a generative model for an image, we can reconstruct the image by mapping pixel colors from the jigsaw according to the offset map. [sent-162, score-2.101]
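Reconstruction is just the deterministic half of the generative model: copy jigsaw means through the offset map. A short sketch continuing the earlier ones, with a mean-squared-error line matching the figure's error measure:

```python
import numpy as np

def reconstruct(mu, offsets):
    """Copy jigsaw means through the offset map (no noise)."""
    Jh, Jw = mu.shape[:2]
    H, W = offsets.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    return mu[(ys - offsets[..., 0]) % Jh, (xs - offsets[..., 1]) % Jw]

# e.g.: mse = np.mean((image - reconstruct(mu, offsets)) ** 2)
```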
75 The first approach is most comparable to the jigsaw reconstruction, as it requires only one offset per pixel. [sent-164, score-0.991]
76 The reconstruction from the jigsaw is noticeably less blurry and more visually pleasing, as there is no averaging in the generative process and patch boundaries tend to occur at actual object boundaries. [sent-170, score-1.151]
77 Modelling face images: We next applied the jigsaw model to a set of 100 face images from the Olivetti database at AT&T, consisting of 10 different images of each of 10 people. [sent-171, score-1.137]
78 We set the jigsaw size to 128×128 pixels so that the jigsaw has only 1/25 of the area of the input images combined. [sent-173, score-1.94]
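(As a quick arithmetic check of the stated ratio: 128 × 128 = 16,384 jigsaw pixels, and 25 × 16,384 = 409,600 input pixels in total, which is consistent with 100 face images of 64 × 64 pixels each; the working resolution is our inference, not stated in the text.)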
79 Figure 4a shows the inferred segmentation of the images into differently shaped and sized pieces (each row contains the images of one person). [sent-174, score-0.419]
80 When the faces depict the same person with similar pose, the resulting segmentations for these images are typically similar, showing that similar jigsaw pieces are being used to explain each image. [sent-175, score-1.198]
81 Figure 4b shows the mean of the learned jigsaw which can be seen to contain a number of face ‘elements’ such as eyes, noses etc. [sent-177, score-1.019]
82 To obtain the jigsaw pieces, we applied the clustering step outlined in Section 3. [sent-178, score-0.938]
83 With the jigsaw pieces known, we can now retrieve the regions from the image set that correspond to each jigsaw piece. [sent-183, score-2.147]
84 In Figure 5 (right), we show a random selection of image regions corresponding to several of the most common jigsaw pieces (shown color-coded). [sent-184, score-1.244]
85 What is surprising is that a particular jigsaw piece becomes very strongly associated with a particular face part (far more so than when clustering by appearance alone). [sent-185, score-1.128]
86 Thus, by learning the shape of each jigsaw piece, our model has effectively identified small and large face parts of widely different shapes and sizes. [sent-186, score-2.047]
87 Figure 5: Left: the learned face jigsaw of Fig. 4 with overlaid white outlines showing different overlapping jigsaw pieces. [sent-187, score-0.953]
88 Areas of the jigsaw not used by the remaining pieces have been blacked out. [sent-189, score-1.108]
89 Seven of the most frequently used jigsaw pieces are shown colored. [sent-190, score-1.108]
90 For each color-coded jigsaw piece in the left image, a column shows randomly chosen images from the image set for which that piece was selected. [sent-192, score-1.16]
91 Notice how these pieces are very strongly associated with different face parts: the model has achieved unsupervised discovery of two different nose shapes, eyes, eyebrows, cheeks, etc., despite their widely different shapes and sizes. [sent-193, score-0.414]
92 We can also see from that figure that certain jigsaw pieces are conserved across different people; for example, the nose piece shown in the first column. [sent-195, score-1.165]
93 5 Discussion We have presented a generative jigsaw model which is capable of learning the shape, size and appearance of repeated regions in a set of images. [sent-196, score-1.064]
94 We have also shown that, for a set of face images, the learned jigsaw pieces are strongly associated with particular face parts. [sent-197, score-1.253]
95 Currently, we apply a post-hoc clustering step to learn the jigsaw pieces. [sent-198, score-0.939]
96 This process can be incorporated into the model by extending the pixel offset to include a cluster label and learning the region of the jigsaw used by each cluster. [sent-199, score-1.073]
97 We can also extend the model to allow the jigsaw pieces to undergo deformation, by favoring neighboring offsets that are similar, not just identical, using a scheme similar to that of [12]. [sent-204, score-1.168]
98 For instance, for the toy example, it took about 30 minutes to learn a 36 × 36 jigsaw from a 150 × 150 image. [sent-207, score-0.937]
99 To account for this, we are investigating factored variants of the jigsaw which separate out different latent causes of appearance variability. [sent-212, score-1.008]
100 Despite this limitation, however, we are already achieving very promising results when using the jigsaw for image synthesis, motion segmentation and object recognition. [sent-213, score-1.101]
wordName wordTfidf (topN-words)
[('jigsaw', 0.903), ('pieces', 0.205), ('epitome', 0.194), ('image', 0.111), ('appearance', 0.095), ('offset', 0.088), ('patch', 0.077), ('patches', 0.071), ('shape', 0.07), ('shapes', 0.062), ('images', 0.06), ('pixel', 0.055), ('face', 0.05), ('pixels', 0.048), ('object', 0.047), ('piece', 0.043), ('segmentation', 0.04), ('blurry', 0.039), ('shaped', 0.039), ('synthesis', 0.037), ('reconstruction', 0.033), ('learned', 0.029), ('parts', 0.028), ('whilst', 0.027), ('cheeks', 0.025), ('jojic', 0.025), ('noses', 0.025), ('regions', 0.025), ('texture', 0.024), ('boundaries', 0.023), ('li', 0.022), ('eyes', 0.022), ('clustering', 0.021), ('occlusion', 0.02), ('overlapping', 0.02), ('objects', 0.02), ('map', 0.02), ('jigsaws', 0.019), ('lj', 0.019), ('cvpr', 0.019), ('toy', 0.019), ('neighboring', 0.018), ('eyebrows', 0.017), ('appearances', 0.017), ('borenstein', 0.017), ('owers', 0.017), ('rother', 0.017), ('winn', 0.017), ('eld', 0.016), ('strongly', 0.016), ('generative', 0.016), ('discovered', 0.016), ('sized', 0.015), ('person', 0.015), ('offsets', 0.015), ('layout', 0.015), ('overlaid', 0.015), ('unused', 0.015), ('maps', 0.015), ('input', 0.015), ('freeman', 0.015), ('learn', 0.015), ('showing', 0.015), ('irregularly', 0.014), ('efros', 0.014), ('model', 0.014), ('outlined', 0.014), ('variability', 0.014), ('overlap', 0.014), ('reconstructed', 0.014), ('squared', 0.014), ('nose', 0.014), ('region', 0.013), ('generate', 0.013), ('transfer', 0.013), ('deformation', 0.013), ('rgb', 0.013), ('rectangles', 0.013), ('reconstructions', 0.013), ('adjacent', 0.013), ('averaging', 0.013), ('infer', 0.012), ('placed', 0.012), ('mean', 0.012), ('super', 0.012), ('cliques', 0.012), ('models', 0.012), ('ln', 0.011), ('variance', 0.011), ('layers', 0.011), ('reconstruct', 0.011), ('mapped', 0.011), ('modelling', 0.011), ('cope', 0.011), ('vision', 0.011), ('recognition', 0.011), ('sizes', 0.011), ('size', 0.011), ('coherent', 0.01), ('latent', 0.01)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 52 nips-2006-Clustering appearance and shape by learning jigsaws
Author: Anitha Kannan, John Winn, Carsten Rother
Abstract: Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. 1
2 0.075662524 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions
Author: Andrea Frome, Yoram Singer, Jitendra Malik
Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang, et al. 1
3 0.071434855 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf
Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system. 1
4 0.05911788 122 nips-2006-Learning to parse images of articulated bodies
Author: Deva Ramanan
Abstract: We consider the machine vision task of pose estimation from static images, specifically for the case of articulated objects. This problem is hard because of the large number of degrees of freedom to be estimated. Following an established line of research, pose estimation is framed as inference in a probabilistic model. In our experience, however, the success of many approaches often lies in the power of the features. Our primary contribution is a novel casting of visual inference as an iterative parsing process, where one sequentially learns better and better features tuned to a particular image. We show quantitative results for human pose estimation on a database of over 300 images that suggest our algorithm is competitive with or surpasses the state-of-the-art. Since our procedure is quite general (it does not rely on face or skin detection), we also use it to estimate the poses of horses in the Weizmann database. 1
5 0.057508577 42 nips-2006-Bayesian Image Super-resolution, Continued
Author: Lyndsey C. Pickup, David P. Capel, Stephen J. Roberts, Andrew Zisserman
Abstract: This paper develops a multi-frame image super-resolution approach from a Bayesian view-point by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop’s Bayesian image super-resolution approach [16], the marginalization was over the superresolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.
6 0.055328798 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation
7 0.052980803 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests
8 0.051601067 73 nips-2006-Efficient Methods for Privacy Preserving Face Detection
9 0.04932984 185 nips-2006-Subordinate class recognition using relational object models
10 0.047685143 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo
11 0.046862695 66 nips-2006-Detecting Humans via Their Pose
12 0.046736896 45 nips-2006-Blind Motion Deblurring Using Image Statistics
13 0.045615356 50 nips-2006-Chained Boosting
14 0.043151148 110 nips-2006-Learning Dense 3D Correspondence
15 0.042771176 174 nips-2006-Similarity by Composition
16 0.042648435 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing
17 0.03913305 120 nips-2006-Learning to Traverse Image Manifolds
18 0.037777271 170 nips-2006-Robotic Grasping of Novel Objects
19 0.037618656 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints
20 0.034063883 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
topicId topicWeight
[(0, -0.09), (1, 0.003), (2, 0.1), (3, -0.036), (4, 0.043), (5, -0.046), (6, -0.102), (7, -0.072), (8, 0.002), (9, 0.026), (10, 0.101), (11, 0.028), (12, -0.01), (13, -0.007), (14, 0.038), (15, 0.096), (16, 0.023), (17, -0.012), (18, 0.036), (19, 0.025), (20, -0.01), (21, 0.01), (22, -0.013), (23, -0.039), (24, 0.038), (25, 0.014), (26, 0.021), (27, -0.005), (28, 0.03), (29, 0.001), (30, -0.011), (31, 0.053), (32, -0.015), (33, -0.007), (34, 0.029), (35, -0.016), (36, 0.002), (37, -0.118), (38, 0.064), (39, -0.006), (40, -0.03), (41, -0.01), (42, -0.033), (43, -0.018), (44, -0.037), (45, -0.078), (46, -0.01), (47, -0.066), (48, 0.045), (49, -0.061)]
simIndex simValue paperId paperTitle
same-paper 1 0.93714809 52 nips-2006-Clustering appearance and shape by learning jigsaws
Author: Anitha Kannan, John Winn, Carsten Rother
Abstract: Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. 1
2 0.67918694 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions
Author: Andrea Frome, Yoram Singer, Jitendra Malik
Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang, et al. 1
3 0.6719209 122 nips-2006-Learning to parse images of articulated bodies
Author: Deva Ramanan
Abstract: We consider the machine vision task of pose estimation from static images, specifically for the case of articulated objects. This problem is hard because of the large number of degrees of freedom to be estimated. Following an established line of research, pose estimation is framed as inference in a probabilistic model. In our experience, however, the success of many approaches often lies in the power of the features. Our primary contribution is a novel casting of visual inference as an iterative parsing process, where one sequentially learns better and better features tuned to a particular image. We show quantitative results for human pose estimation on a database of over 300 images that suggest our algorithm is competitive with or surpasses the state-of-the-art. Since our procedure is quite general (it does not rely on face or skin detection), we also use it to estimate the poses of horses in the Weizmann database. 1
4 0.64452147 170 nips-2006-Robotic Grasping of Novel Objects
Author: Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Andrew Y. Ng
Abstract: We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. 1
5 0.61133218 45 nips-2006-Blind Motion Deblurring Using Image Statistics
Author: Anat Levin
Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture.
6 0.58358103 73 nips-2006-Efficient Methods for Privacy Preserving Face Detection
7 0.56564975 42 nips-2006-Bayesian Image Super-resolution, Continued
8 0.53615147 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing
9 0.53096855 185 nips-2006-Subordinate class recognition using relational object models
10 0.51010305 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
11 0.50996828 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
12 0.50701839 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests
13 0.47942778 66 nips-2006-Detecting Humans via Their Pose
14 0.44166103 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation
15 0.43447092 50 nips-2006-Chained Boosting
16 0.43343067 174 nips-2006-Similarity by Composition
17 0.43065643 16 nips-2006-A Theory of Retinal Population Coding
18 0.41351846 110 nips-2006-Learning Dense 3D Correspondence
19 0.38967583 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo
20 0.36612365 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields
topicId topicWeight
[(1, 0.063), (3, 0.023), (7, 0.051), (8, 0.016), (9, 0.039), (12, 0.015), (20, 0.024), (21, 0.01), (22, 0.03), (44, 0.039), (57, 0.459), (65, 0.038), (69, 0.027), (90, 0.017)]
simIndex simValue paperId paperTitle
1 0.97007465 66 nips-2006-Detecting Humans via Their Pose
Author: Alessandro Bissacco, Ming-Hsuan Yang, Stefano Soatto
Abstract: We consider the problem of detecting humans and classifying their pose from a single image. Specifically, our goal is to devise a statistical model that simultaneously answers two questions: 1) is there a human in the image? and, if so, 2) what is a low-dimensional representation of her pose? We investigate models that can be learned in an unsupervised manner on unlabeled images of human poses, and provide information that can be used to match the pose of a new image to the ones present in the training set. Starting from a set of descriptors recently proposed for human detection, we apply the Latent Dirichlet Allocation framework to model the statistics of these features, and use the resulting model to answer the above questions. We show how our model can efficiently describe the space of images of humans with their pose, by providing an effective representation of poses for tasks such as classification and matching, while performing remarkably well in human/non human decision problems, thus enabling its use for human detection. We validate the model with extensive quantitative experiments and comparisons with other approaches on human detection and pose matching. 1
2 0.95588934 50 nips-2006-Chained Boosting
Author: Christian R. Shelton, Wesley Huie, Kin F. Kan
Abstract: We describe a method to learn to make sequential stopping decisions, such as those made along a processing pipeline. We envision a scenario in which a series of decisions must be made as to whether to continue to process. Further processing costs time and resources, but may add value. Our goal is to create, based on historic data, a series of decision rules (one at each stage in the pipeline) that decide, based on information gathered up to that point, whether to continue processing the part. We demonstrate how our framework encompasses problems from manufacturing to vision processing. We derive a quadratic (in the number of decisions) bound on testing performance and provide empirical results on object detection. 1 Pipelined Decisions In many decision problems, all of the data do not arrive at the same time. Often further data collection can be expensive and we would like to make a decision without accruing the added cost. Consider silicon wafer manufacturing. The wafer is processed in a series of stages. After each stage some tests are performed to judge the quality of the wafer. If the wafer fails (due to flaws), then the processing time, energy, and materials are wasted. So, we would like to detect such a failure as early as possible in the production pipeline. A similar problem can occur in vision processing. Consider the case of object detection in images. Often low-level pixel operations (such as downsampling an image) can be performed in parallel by dedicated hardware (on a video capture board, for example). However, searching each subimage patch of the whole image to test whether it is the object in question takes time that is proportional to the number of pixels. Therefore, we can imagine a image pipeline in which low resolution versions of the whole image are scanned first. Subimages which are extremely unlikely to contain the desired object are rejected and only those which pass are processed at higher resolution. In this way, we save on many pixel operations and can reduce the cost in time to process an image. Even if downsampling is not possible through dedicated hardware, for most object detection schemes, the image must be downsampled to form an image pyramid in order to search for the object at different scales. Therefore, we can run the early stages of such a pipelined detector at the low resolution versions of the image and throw out large regions of the high resolution versions. Most of the processing is spent searching for small faces (at the high resolutions), so this method can save a lot of processing. Such chained decisions also occur if there is a human in the decision process (to ask further clarifying questions in database search, for instance). We propose a framework that can model all of these scenarios and allow such decision rules to be learned from historic data. We give a learning algorithm based on the minimization of the exponential loss and conclude with some experimental results. 1.1 Problem Formulation Let there be s stages to the processing pipeline. We assume that there is a static distribution from which the parts, objects, or units to be processed are drawn. Let p(x, c) represent this distribution in which x is a vector of the features of this unit and c represents the costs associated with this unit. In particular, let xi (1 ≤ i ≤ s) be the set of measurements (features) available to the decision maker immediately following stage i. 
Let c i (1 ≤ i ≤ s) be the cost of rejecting (or stopping the processing of) this unit immediately following stage i. Finally, let c s+1 be the cost of allowing the part to pass through all processing stages. Note that ci need not be monotonic in i. To take our wafer manufacturing example, for wafers that are good we might let c i = i for 1 ≤ i ≤ s, indicating that if a wafer is rejected at any stage, one unit of work has been invested for each stage of processing. For the same good wafers, we might let cs+1 = s − 1000, indicating that the value of a completed wafer is 1000 units and therefore the total cost is the processing cost minus the resulting value. For a flawed wafer, the values might be the same, except for c s+1 which we would set to s, indicating that there is no value for a bad wafer. Note that the costs may be either positive or negative. However, only their relative values are important. Once a part has been drawn from the distribution, there is no way of affecting the “base level” for the value of the part. Therefore, we assume for the remainder of this paper that c i ≥ 0 for 1 ≤ i ≤ s + 1 and that ci = 0 for some value of i (between 1 and s + 1). Our goal is to produce a series of decision rules f i (xi ) for 1 ≤ i ≤ s. We let fi have a range of {0, 1} and let 0 indicate that processing should continue and 1 indicate that processing should be halted. We let f denote the collection of these s decision rules and augment the collection with an additional rule f s+1 which is identically 1 (for ease of notation). The cost of using these rules to halt processing an example is therefore s+1 i−1 ci fi (xi ) L(f (x), c) = i=1 (1 − fj (xj )) . j=1 We would like to find a set of decision rules that minimize E p [L(f (x), c)]. While p(x, c) is not known, we do have a series of samples (training set) D = {(x1 , c1 ), (x2 , c2 ), . . . , (xn , cn )} of n examples drawn from the distribution p. We use superscripts to denote the example index and subscripts to denote the stage index. 2 Boosting Solution For this paper, we consider constructing the rules f i from simpler decision rules, much as in the Adaboost algorithm [1, 2]. We assume that each decision f i (xi ) is computed as the threshold of another function g i (xi ): fi (xi ) = I(gi (xi ) > 0).1 We bound the empirical risk: n n s+1 L(f (xk ), ck ) = i−1 ck I(gi (xk ) > 0) i i k=1 i=1 k=1 n s+1 j=1 k i−1 ck egi (xi ) i ≤ k=1 i=1 I(gj (xk ) ≤ 0) j k n s+1 j=1 k ck egi (xi )− i e−gj (xj ) = Pi−1 j=1 gj (xk ) j . (1) k=1 i=1 Our decision to make all costs positive ensures that the bounds hold. Our decision to make the optimal cost zero helps to ensure that the bound is reasonably tight. As in boosting, we restrict g i (xi ) to take the form mi αi,l hi,l (xi ), the weighted sum of m i subl=1 classifiers, each of which returns either −1 or +1. We will construct these weighted sums incrementally and greedily, adding one additional subclassifier and associated weight at each step. We will pick the stage, weight, and function of the subclassifier in order to make the largest negative change in the exponential bound to the empirical risk. The subclassifiers, h i,l will be drawn from a small class of hypotheses, H. 1 I is the indicator function that equals 1 if the argument is true and 0 otherwise. 1. Initialize gi (x) = 0 for all stages i k 2. Initialize wi = ck for all stages i and examples k. i 3. For each stage i: (a) Calculate targets for each training example, as shown in equation 5. 
(b) Let h be the result of running the base learner on this set. (c) Calculate the corresponding α as per equation 3. (d) Score this classification as per equation 4 ¯ 4. Select the stage ¯ with the best (highest) score. Let h and α be the classifier and ı ¯ weight found at that stage. ¯¯ 5. Let g¯(x) ← g¯(x) + αh(x). ı ı 6. Update the weights (see equation 2): ¯ k k ¯ ı • ∀1 ≤ k ≤ n, multiply w¯ by eαh(x¯ ) . ı ¯ k k ¯ ı • ∀1 ≤ k ≤ n, j > ¯, multiply wj by e−αh(x¯ ) . ı 7. Repeat from step 3 Figure 1: Chained Boosting Algorithm 2.1 Weight Optimization We first assume that the stage at which to add a new subclassifier and the subclassifier to add have ¯ ¯ already been chosen: ¯ and h, respectively. That is, h will become h¯,m¯+1 but we simplify it for ı ı ı ease of expression. Our goal is to find α ¯,m¯+1 which we similarly abbreviate to α. We first define ¯ ı ı k k wi = ck egi (xi )− i Pi−1 j=1 gj (xk ) j (2) + as the weight of example k at stage i, or its current contribution to our risk bound. If we let D h be ¯ − ¯ the set of indexes of the members of D for which h returns +1, and let D h be similarly defined for ¯ ¯ returns −1, we can further define those for which h s+1 + W¯ = ı k w¯ + ı + k∈Dh ¯ s+1 k wi − W¯ = ı − ı k∈Dh i=¯+1 ¯ k w¯ + ı − k∈Dh ¯ k wi . + ı k∈Dh i=¯+1 ¯ + ¯ We interpret W¯ to be the sum of the weights which h will emphasize. That is, it corresponds to ı ¯ ¯ the weights along the path that h selects: For those examples for which h recommends termination, we add the current weight (related to the cost of stopping the processing at this stage). For those ¯ examples for which h recommends continued processing, we add in all future weights (related to all − future costs associated with this example). W¯ can be similarly interpreted to be the weights (or ı ¯ costs) that h recommends skipping. If we optimize the loss bound of Equation 1 with respect to α, we obtain ¯ α= ¯ 1 Wı− log ¯ . + 2 W¯ ı (3) The more weight (cost) that the rule recommends to skip, the higher its α coefficient. 2.2 Full Optimization Using Equation 3 it is straight forward to show that the reduction in Equation 1 due to the addition of this new subclassifier will be + ¯ − ¯ W¯ (1 − eα ) + W¯ (1 − e−α ) . ı ı (4) We know of no efficient method for determining ¯, the stage at which to add a subclassifier, except ı by exhaustive search. However, within a stage, the choice of which subclassifier to use becomes one of maximizing n s+1 k¯ k k z¯ h(x¯ ) , where z¯ = ı ı ı k k wi − w¯ ı (5) i=¯+1 ı k=1 ¯ with respect to h. This is equivalent to an weighted empirical risk minimization where the training 1 2 n k k set is {x¯ , x¯ , . . . , x¯ }. The label of x¯ is the sign of z¯ , and the weight of the same example is the ı ı ı ı ı k magnitude of z¯ . ı 2.3 Algorithm The resulting algorithm is only slightly more complex than standard Adaboost. Instead of a weight vector (one weight for each data example), we now have a weight matrix (one weight for each data example for each stage). We initialize each weight to be the cost associated with halting the corresponding example at the corresponding stage. We start with all g i (x) = 0. The complete algorithm is as in Figure 1. Each time through steps 3 through 7, we complete one “round” and add one additional rule to one stage of the processing. We stop executing this loop when α ≤ 0 or when an iteration counter ¯ exceeds a preset threshold. 
Bottom-Up Variation In situations where information is only gained after each stage (such as in section 4), we can also train the classifiers “bottom-up.” That is, we can start by only adding classifiers to the last stage. Once finished with it, we proceed to the previous stage, and so on. Thus instead of selecting the best stage, i, in each round, we systematically work our way backward through the stages, never revisiting previously set stages. 3 Performance Bounds Using the bounds in [3] we can provide a risk bound for this problem. We let E denote the expectaˆ tion with respect to the true distribution p(x, c) and En denote the empirical average with respect to the n training samples. We first bound the indicator function with a piece-wise linear function, b θ , 1 with a maximum slope of θ : z ,0 . I(z > 0) ≤ bθ (z) = max min 1, 1 + θ We then bound the loss: L(f (x), c) ≤ φ θ (f (x), c) where s+1 φθ (f (x), c) = ci min{bθ (gi (xi )), bθ (−gi−1 (xi−1 )), bθ (−gi−2 (xi−2 )), . . . , bθ (−g1 (x1 ))} i=1 s+1 i ci Bθ (gi (xi ), gi−1 (xi−1 ), . . . , g1 (x1 )) = i=1 We replaced the product of indicator functions with a minimization and then bounded each indicator i with bθ . Bθ is just a more compact presentation of the composition of the function b θ and the minimization. We assume that the weights α at each stage have been scaled to sum to 1. This has no affect on the resulting classifications, but is necessary for the derivation below. Before stating the theorem, for clarity, we state two standard definition: Definition 1. Let p(x) be a probability distribution on the set X and let {x 1 , x2 , . . . , xn } be n independent samples from p(x). Let σ 1 , σ 2 , . . . , σ n be n independent samples from a Rademacher random variable (a binary variable that takes on either +1 or −1 with equal probability). Let F be a class of functions mapping X to . Define the Rademacher Complexity of F to be Rn (F ) = E sup f ∈F n 1 n σ i f (xi ) i=1 1 where the expectation is over the random draws of x through xn and σ 1 through σ n . Definition 2. Let p(x), {x1 , x2 , . . . , xn }, and F be as above. Let g 1 , g 2 , . . . , g n be n independent samples from a Gaussian distribution with mean 0 and variance 1. Analogous to the above definition, define the Gaussian Complexity of G to be Gn (F ) = E sup f ∈F 1 n n g i f (xi ) . i=1 We can now state our theorem, bounding the true risk by a function of the empirical risk: Theorem 3. Let H1 , H2 , . . . , Hs be the sequence of the sets of functions from which the base classifier draws for chain boosting. If H i is closed under negation for all i, all costs are bounded between 0 and 1, and the weights for the classifiers at each stage sum to 1, then with probability 1 − δ, k ˆ E [L(f (x), c)] ≤ En [φθ (f (x), c)] + θ s (i + 1)Gn (Hi ) + i=1 8 ln 2 δ n for some constant k. Proof. Theorem 8 of [3] states ˆ E [L(x, c)] ≤ En (φθ (f (x), c)) + 2Rn (φθ ◦ F ) + 8 ln 2 δ n and therefore we need only bound the R n (φθ ◦ F ) term to demonstrate our theorem. For our case, we have 1 Rn (φθ ◦ F ) = E sup n f ∈F = E sup f ∈F 1 n s+1 ≤ E sup j=1 f ∈F n n σ i φθ (f (xi ), ci ) i=1 s+1 σi i=1 1 n s ci Bθ (gj (xi ), gj−1 (xi ), . . . , g1 (xi )) j j j−1 1 j=1 n s+1 s σ i Bθ (gj (xi ), gj−1 (xi ), . . . , g1 (xi )) = j j−1 1 i=1 s Rn (Bθ ◦ G j ) j=1 where Gi is the space of convex combinations of functions from H i and G i is the cross product of G1 through Gi . The inequality comes from switching the expectation and the maximization and then from dropping the c i (see [4], lemma 5). 
j s s Lemma 4 of [3] states that there exists a k such that R n (Bθ ◦ G j ) ≤ kGn (Bθ ◦ G j ). Theorem 14 j 2 s j s of the same paper allows us to conclude that G n (Bθ ◦ G ) ≤ θ i=1 Gn (Gi ). (Because Bθ is the 1 s minimum over a set of functions with maximum slope of θ , the maximum slope of B θ is also 1 .) θ Theorem 12, part 2 states G n (Gi ) = Gn (Hi ). Taken together, this proves our result. Note that this bound has only quadratic dependence on s, the length of the chain and does not explicitly depend on the number of rounds of boosting (the number of rounds affects φ θ which, in turn, affects the bound). 4 Application We tested our algorithm on the MIT face database [5]. This database contains 19-by-19 gray-scale images of faces and non-faces. The training set has 2429 face images and 4548 non-face images. The testing set has 472 faces and 23573 non-faces. We weighted the training set images so that the ratio of the weight of face images to non-face images matched the ratio in the testing set. 0.4 0.4 CB Global CB Bottom−up SVM Boosting 0.3 0.25 0.2 0.15 0.1 150 0.2 100 0.1 False positive rate 0.3 50 Average number of pixels 0.35 average cost/error per example 200 training cost training error testing cost testing error 0.05 0 100 200 300 400 500 number of rounds 700 1000 (a) 0 0 0.2 0.4 0.6 False negative rate 0.8 0 1 (b) Figure 2: (a) Accuracy verses the number of rounds for a typical run, (b) Error rates and average costs for a variety of cost settings. 4.1 Object Detection as Chained Boosting Our goal is to produce a classifier that can identify non-face images at very low resolutions, thereby allowing for quick processing of large images (as explained later). Most image patches (or subwindows) do not contain faces. We, therefore, built a multi-stage detection system where any early rejection is labeled as a non-face. The first stage looks at image patches of size 3-by-3 (i.e. a lowerresolution version of the 19-by-19 original image). The next stage looks at the same image, but at a resolution of 6-by-6. The third stage considers the image at 12-by-12. We did not present the full 19-by-19 images as the classification did not significantly improve over the 12-by-12 versions. We employ a simple base classifier: the set of all functions that look at a single pixel and predict the class by thresholding the pixel’s value. The total classifier at any stage is a linear combination of these simple classifiers. For a given stage, all of the base classifiers that target a particular pixel are added together producing a complex function of the value of the pixel. Yet, this pixel can only take on a finite number of values (256 in this case). Therefore, we can compile this set of base classifiers into a single look-up function that maps the brightness of the pixel into a real number. The total classifier for the whole stage is merely the sum of these look-up functions. Therefore, the total work necessary to compute the classification at a stage is proportional to the number of pixels in the image considered at that stage, regardless of the number of base classifiers used. We therefore assign a cost to each stage of processing proportional to the number of pixels at that stage. If the image is a face, we add a negative cost (i.e. bonus) if the image is allowed to pass through all of the processing stages (and is therefore “accepted” as a face). If the image is a nonface, we add a bonus if the image is rejected at any stage before completion (i.e. correctly labelled). 
While this dataset has only segmented image patches, in a real application, the classifier would be run on all sub-windows of an image. More importantly, it would also be run at multiple resolutions in order to detect faces of different sizes (or at different distances from the camera). The classifier chain could be run simultaneously at each of these resolutions. To wit, while running the final 12-by12 stage at one resolution of the image, the 6-by-6 (previous) stage could be run at the same image resolution. This 6-by-6 processing would be the necessary pre-processing step to running the 12-by12 stage at a higher resolution. As we run our final scan for big faces (at a low resolution), we can already (at the same image resolution) be performing initial tests to throw out portions of the image as not worthy of testing for smaller faces (at a higher resolution). Most of the work of detecting objects must be done at the high resolutions because there are many more overlapping subwindows. This chained method allows the culling of most of this high-resolution image processing. 4.2 Experiments For each example, we construct a vector of stage costs as above. We add a constant to this vector to ensure that the minimal element is zero, as per section 1.1. We scale all vectors by the same amount to ensure that the maximal value is 1.This means that the number of misclassifications is an upper bound on the total cost that the learning algorithm is trying to minimize. There are three flexible quantities in this problem formulation: the cost of a pixel evaluation, the bonus for a correct face classification, and the bonus for a correct non-face classification. Changing these quantities will control the trade-off between false positives and true positives, and between classification error and speed. Figure 2(a) shows the result of a typical run of the algorithm. As a function of the number of rounds, it plots the cost (that which the algorithm is trying to minimize) and the error (number of misclassified image patches), for both the training and testing sets (where the training set has been reweighted to have the same proportion of faces to non-faces as the testing set). We compared our algorithm’s performance to the performance of support vector machines (SVM) [6] and Adaboost [1] trained and tested on the highest resolution, 12-by-12, image patches. We employed SVM-light [7] with a linear kernels. Figure 2(b) compares the error rates for the methods (solid lines, read against the left vertical axis). Note that the error rates are almost identical for the methods. The dashed lines (read against the right vertical axis) show the average number of pixels evaluated (or total processing cost) for each of the methods. The SVM and Adaboost algorithms have a constant processing cost. Our method (by either training scheme) produces lower processing cost for most error rates. 5 Related Work Cascade detectors for vision processing (see [8] or [9] for example) may appear to be similar to the work in this paper. Especially at first glance for the area of object detection, they appear almost the same. However, cascade detection and this work (chained detection) are quite different. Cascade detectors are built one at a time. A coarse detector is first trained. The examples which pass that detector are then passed to a finer detector for training, and so on. A series of targets for false-positive rates define the increasing accuracy of the detector cascade. 
By contrast, our chain detectors are trained as an ensemble. This is necessary because of two differences in the problem formulation. First, we assume that the information available at each stage changes. Second, we assume there is an explicit cost model that dictates the cost of proceeding from stage to stage and the cost of rejection (or acceptance) at any particular stage. Cascade detectors, by contrast, seek to minimize the computational power necessary for a fixed decision; the information available to all of the stages is the same, and there are no fixed costs associated with each stage.

The ability to train all of the classifiers at the same time is crucial to good performance in our framework. The first classifier in the chain cannot determine whether it is advantageous to send an example further along unless it knows how the later stages will process the example. Conversely, the later stages cannot construct optimal classifications until they know the distribution of examples that they will see.

Section 4.1 may further confuse the matter. We demonstrated how chained boosting can be used to reduce the computational costs of object detection in images. Cascade detectors are often used for the same purpose. However, the reductions in computational time come from two different sources. In cascade detectors, the time taken to evaluate a given image patch is reduced. In our chained detector formulation, image patches are ignored completely based on analysis of lower-resolution patches in the image pyramid. To further illustrate the difference, cascade detectors can always be used to speed up asymmetric classification tasks (and are often applied to image detection). By contrast, in Section 4.1 we exploited the fact that object detection in images is typically performed at multiple scales to turn the problem into a pipeline and apply our framework. Cascade detectors address situations in which prior class probabilities are not equal, while chained detectors address situations in which information is gained at a cost. Both are valid (and separate) ways of tackling image processing (and other tasks as well). In many ways, they are complementary approaches.

Classic sequential analysis [10, 11] also addresses the problem of optimal stopping. However, it assumes that the samples are drawn i.i.d. from a (usually) known distribution. Our problem is quite different in that each consecutive sample is drawn from a different (and related) distribution, and our goal is to find a decision rule without producing a generative model. WaldBoost [12] is a boosting algorithm based on this line of work. It builds a series of features and a ratio-comparison test in order to decide when to stop. For WaldBoost, however, the available features (information) do not change between stages; rather, any feature is available for selection at any point in the chain. Again, this is a different problem than the one considered in this paper.

6 Conclusions

We feel this framework of staged decision making is useful in a wide variety of areas. This paper demonstrated how the framework applies to one vision-processing task. It obviously also applies to manufacturing pipelines where errors can be introduced at different stages, and it should be applicable to scenarios where information gathering is costly. Our current formulation only allows for early negative detection: in the face detection example above, this means that in order to report "face," the classifier must process each stage, even if the result is assured earlier.
In Figure 2(b), the upper-left corner (100% false positives and 0% false negatives) is clearly reachable with little effort: classify everything as positive without looking at any features. We would like to extend this framework to cover such two-sided early decisions. While perhaps not useful in manufacturing (or even in face detection, where the interesting part of the ROC curve is far from the upper-left), it would make the framework more applicable to information-gathering applications.

Acknowledgements

This research was supported through the grant "Adaptive Decision Making for Silicon Wafer Testing" from Intel Research and UC MICRO.

References

[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, pages 23–37, 1995.
[2] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 2:463–482, 2002.
[4] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 4:839–860, 2003.
[5] MIT. CBCL face database #1, 2000. http://cbcl.mit.edu/cbcl/softwaredatasets/FaceData2.html.
[6] Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for optimal margin classifiers. In COLT, pages 144–152, 1992.
[7] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods — Support Vector Learning. MIT Press, 1999.
[8] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, pages 511–518, 2001.
[9] Jianxin Wu, Matthew D. Mullin, and James M. Rehg. Linear asymmetric classifier for cascade detectors. In ICML, pages 988–995, 2005.
[10] Abraham Wald. Sequential Analysis. Chapman & Hall, Ltd., 1947.
[11] K. S. Fu. Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, 1968.
[12] Jan Šochman and Jiří Matas. WaldBoost — learning for time constrained sequential detection. In CVPR, pages 150–156, 2005.
same-paper 3 0.95491207 52 nips-2006-Clustering appearance and shape by learning jigsaws
Author: Anitha Kannan, John Winn, Carsten Rother
Abstract: Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. 1
4 0.94857979 74 nips-2006-Efficient Structure Learning of Markov Networks using $L_1$-Regularization
Author: Su-in Lee, Varun Ganapathi, Daphne Koller
Abstract: Markov networks are commonly used in a wide variety of applications, ranging from computer vision, to natural language, to computational biology. In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. In this paper, we provide a computationally efficient method for learning Markov network structure from data. Our method is based on the use of L1 regularization on the weights of the log-linear model, which has the effect of biasing the model towards solutions where many of the parameters are zero. This formulation converts the Markov network learning problem into a convex optimization problem in a continuous space, which can be solved using efficient gradient methods. A key issue in this setting is the (unavoidable) use of approximate inference, which can lead to errors in the gradient computation when the network structure is dense. Thus, we explore the use of different feature introduction schemes and compare their performance. We provide results for our method on synthetic data, and on two real-world data sets: pixel values in the MNIST data, and genetic sequence variations in the human HapMap data. We show that our L1-based method achieves considerably higher generalization performance than the more standard L2-based method (a Gaussian parameter prior) or pure maximum-likelihood learning. We also show that we can learn MRF network structure at a computational cost that is not much greater than learning parameters alone, demonstrating the existence of a feasible method for this important problem.
5 0.76894563 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf
Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system. 1
6 0.75107139 42 nips-2006-Bayesian Image Super-resolution, Continued
7 0.74999678 34 nips-2006-Approximate Correspondences in High Dimensions
8 0.73863542 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions
9 0.72839308 110 nips-2006-Learning Dense 3D Correspondence
10 0.72402507 122 nips-2006-Learning to parse images of articulated bodies
11 0.72337371 47 nips-2006-Boosting Structured Prediction for Imitation Learning
12 0.71977371 185 nips-2006-Subordinate class recognition using relational object models
13 0.71248984 45 nips-2006-Blind Motion Deblurring Using Image Statistics
14 0.67900175 154 nips-2006-Optimal Change-Detection and Spiking Neurons
15 0.67677897 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
16 0.67222524 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields
17 0.66879499 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests
18 0.66690749 130 nips-2006-Max-margin classification of incomplete data
19 0.66566366 188 nips-2006-Temporal and Cross-Subject Probabilistic Models for fMRI Prediction Tasks
20 0.6584478 134 nips-2006-Modeling Human Motion Using Binary Latent Variables