17 nips-2003-A Sampled Texture Prior for Image Super-Resolution


Source: pdf

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. [sent-6, score-1.02]

2 Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. [sent-7, score-1.08]

3 Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. [sent-8, score-0.024]

4 Here we present a domain-specific image prior in the form of a p.d.f. [sent-9, score-0.534]

5 based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques. [sent-12, score-0.265]

6 1 Introduction: The aim of super-resolution is to take a set of one or more low-resolution input images of a scene, and estimate a higher-resolution image. [sent-13, score-0.373]

7 If there are several low-resolution images available with sub-pixel displacements, then the high-frequency information of the super-resolution image can be increased. [sent-14, score-0.759]

8 A second approach uses an unsupervised technique where latent variables are introduced to model the mean intensity of groups of surrounding pixels [2]. [sent-16, score-0.159]

9 In cases where the high-frequency detail is recovered from image displacements, the models tend to assume that each low-resolution image is a subsample from a true high-resolution image or continuous scene. [sent-17, score-1.127]

10 The generation of the low-resolution inputs can then be expressed as a degradation of the super-resolution image, usually by applying an image homography, convolving with blurring functions, and subsampling [3, 4, 5, 6, 7, 8, 9]. [sent-18, score-0.35]

11 Unfortunately, the ML (maximum likelihood) super-resolution images obtained by reversing the generative process above tend to be poorly conditioned and susceptible to high-frequency noise. [sent-19, score-0.408]

12 Most approaches to multiple-image super-resolution use a MAP (maximum a posteriori) approach to regularize the solution with a prior distribution over the high-resolution space. [sent-20, score-0.226]

13 Gaussian process priors [4], Gaussian MRFs (Markov Random Fields) and Huber MRFs [3] have all been proposed as suitable candidates. [sent-21, score-0.032]

14 In this paper, we consider an image prior based upon samples taken from other images, inspired by the use of non-parametric sampling methods in texture synthesis [10]. [sent-22, score-0.841]

15 This texture synthesis method outperformed many other complex parametric models for texture representation, and produces perceptually correct-looking areas of texture given a sample texture seed. [sent-23, score-0.994]

16 It works by finding texture patches similar to the area around a pixel of interest, and estimating the intensity of the central pixel from a histogram built up from similar samples. [sent-24, score-0.811]
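
As a concrete illustration of this non-parametric idea (a minimal sketch, not the authors' code), the following Python/NumPy fragment estimates a central pixel from its surround. The arrays `query_ring`, `sample_rings` (one flattened patch surround per row) and `sample_centres` are hypothetical precomputed inputs, and the near-best acceptance rule with `eps` is one common variant of the method in [10]:

```python
import numpy as np

def synth_centre(query_ring, sample_rings, sample_centres, eps=0.1, rng=None):
    """Estimate a central pixel from its surround, in the spirit of [10]:
    keep every sample patch whose surround is within (1 + eps) of the best
    match, then draw the centre intensity from that empirical histogram."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.sum((sample_rings - query_ring) ** 2, axis=1)  # SSD to each sample
    keep = d <= (1.0 + eps) * d.min()                     # near-best matches
    return rng.choice(sample_centres[keep])               # sample the histogram
```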

17 We turn this approach around to produce an image prior by finding areas in our sample set that are similar to patches in our super-resolution image, and evaluate how well they match, building up a p.d.f. [sent-25, score-0.788]

18 In short, given a set of low-resolution images and example images of textures in the same class at the higher resolution, our objective is to construct a super-resolution image using a prior that is sampled from the example images. [sent-29, score-1.402]

19 We develop our model in section 2, and expand upon some of the implementation details in section 3, as well as introducing the Huber prior model against which most of the comparisons in this paper are made. [sent-31, score-0.261]

20 The main contribution of this work is in the construction of the prior over the super-resolution image, but first we will consider the generative model for the low-resolution image generation, which closely follows the approaches of [3] and [4]. [sent-34, score-0.563]

21 We have K low-resolution images y^(k), which we assume are generated from the super-resolution image x by y^(k) = W^(k) x + G^(k), (1) where G^(k) is a vector of i.i.d. [sent-35, score-0.661]

22 Gaussians, G^(k) ∼ N(0, β_G^{-1}), and β_G is the noise precision. [sent-38, score-0.059]
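
As a sketch of this generative model (assuming W^(k) amounts to a Gaussian blur followed by decimation, and ignoring the homography step; all names here are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, zoom=2, psf_sigma=1.0, beta_G=(256.0 / 6) ** 2, rng=None):
    """Toy version of y = W x + G: blur the high-resolution image x with a
    Gaussian point-spread function, subsample by `zoom`, and add i.i.d.
    Gaussian noise of precision beta_G (i.e. variance 1 / beta_G)."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(x, psf_sigma)   # convolution with the PSF
    y = blurred[::zoom, ::zoom]               # subsampling
    return y + rng.normal(0.0, beta_G ** -0.5, size=y.shape)
```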

23 The construction of W involves mapping each low-resolution pixel into the space of the super-resolution image, and performing a convolution with a point spread function. [sent-39, score-0.131]

24 We begin by assuming that the image registration parameters may be determined a priori, so each input image has a corresponding set of registration parameters θ^(k). [sent-41, score-0.849]

25 We may now construct the likelihood function p(y^(k) | x, θ^(k)) = (β_G / 2π)^{M/2} exp( −(β_G / 2) ||y^(k) − W^(k) x||² ), (2) where each input image is assumed to have M pixels (and the super-resolution image N pixels). [sent-42, score-0.8]
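
The corresponding data term (the negative log of equation 2 summed over the K inputs, dropping constants independent of x) might look like the following sketch, where `ys` and `Ws` are hypothetical lists of the flattened low-resolution images and their M×N system matrices:

```python
import numpy as np

def neg_log_likelihood(x, ys, Ws, beta_G):
    """Sum of -log p(y^(k) | x) over the K inputs (eq. 2), omitting the
    (M/2) log(beta_G / 2 pi) terms, which do not depend on x."""
    return sum(0.5 * beta_G * np.sum((y - W @ x) ** 2)
               for y, W in zip(ys, Ws))
```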

26 To address this problem, a prior over the super-resolution image is often used. [sent-44, score-0.534]

27 In [4], the authors restricted themselves to Gaussian process priors, which made their estimation of the registration parameters θ tractable, but encouraged smoothness across x without any special treatment to allow for edges. [sent-45, score-0.097]

28 The Huber prior was used successfully in [3] to penalize image gradients while being less harsh on large image discontinuities than a Gaussian prior. [sent-46, score-0.689]

29 Details of the Huber prior are given in section 3. [sent-47, score-0.205]

30 If we assume a uniform prior over the input images, the posterior distribution over x is of the form p(x | {y^(k), θ^(k)}) ∝ p(x) ∏_{k=1}^{K} p(y^(k) | x, θ^(k)). (4) [sent-48, score-0.246]

31 To build our expression for p(x), we adopt the philosophy of [10], and sample from other example images rather than developing a parametric model. [sent-49, score-0.411]

32 A similar philosophy was used in [11] for image-based rendering. [sent-50, score-0.043]

33 Given a small image patch around any particular pixel, we can learn a distribution for the central pixel’s intensity value by examining the values at the centres of similar patches from other images. [sent-51, score-0.839]

34 Each pixel x_i has a neighbourhood region R(x_i) consisting of the pixels around it, but not including x_i itself. [sent-52, score-0.405]

35 For each R(x_i), we find the closest neighbourhood patch in the set of sampled patches, and find the central pixel associated with this nearest neighbour, L_R(x_i). [sent-53, score-0.426]

36 The intensity of our original pixel is then assumed to be Gaussian distributed with mean equal to the intensity of this central pixel, and with some precision β_T: x_i ∼ N(L_R(x_i), β_T^{-1}), (5) leading us to a prior of the form p(x) = (β_T / 2π)^{N/2} exp( −(β_T / 2) ||x − L_R(x)||² ). (6) [sent-54, score-0.532]
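
A brute-force sketch of L_R and of the resulting prior term (equations 5 and 6, constants dropped); `rings` holds one flattened neighbourhood per super-resolution pixel, and the sample arrays are assumed precomputed from the example images:

```python
import numpy as np

def L_R(rings, sample_rings, sample_centres):
    """For each pixel's neighbourhood ring (centre excluded), find the
    nearest sample ring and return that sample's centre intensity (eq. 5)."""
    # rings: (N, P); sample_rings: (S, P); exhaustive nearest neighbour
    d = ((rings[:, None, :] - sample_rings[None, :, :]) ** 2).sum(axis=2)
    return sample_centres[np.argmin(d, axis=1)]

def neg_log_prior(x, lr_values, beta_T):
    """-log p(x) from eq. 6, up to the constant (N/2) log(beta_T / 2 pi)."""
    return 0.5 * beta_T * np.sum((x - lr_values) ** 2)
```

The (N, S, P) intermediate makes this memory-hungry; a k-d tree or chunked distance computation would be the practical choice.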

37 Our super-resolution image is then just arg min_x L, where L = β ||x − L_R(x)||² + Σ_{k=1}^{K} ||y^(k) − W^(k) x||². (8) [sent-56, score-0.352]

38 3 Implementation details: We optimize the objective function of equation 8 using scaled conjugate gradients (SCG) to obtain an approximation to our super-resolution image. [sent-57, score-0.063]

39 For speed, we approximate this by dL/dx = 2β (x − L_R(x)) − 2 Σ_{k=1}^{K} W^(k)T (y^(k) − W^(k) x), (9) which assumes that small perturbations in the neighbours of x will not change the value returned by L_R(x). [sent-59, score-0.023]
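
A minimal sketch of equations 8 and 9 wired into a conjugate-gradient optimizer (scipy's CG used here as a stand-in for SCG); `lr_values` is the frozen L_R(x), which a faithful implementation would refresh from the current x between optimization blocks:

```python
import numpy as np
from scipy.optimize import minimize

def objective_and_grad(x, lr_values, ys, Ws, beta):
    """Eq. 8 and the approximate gradient of eq. 9, treating L_R(x) as
    constant so that its derivative with respect to x is zero."""
    resids = [y - W @ x for y, W in zip(ys, Ws)]
    f = beta * np.sum((x - lr_values) ** 2) + sum(np.sum(r ** 2) for r in resids)
    g = 2.0 * beta * (x - lr_values) - 2.0 * sum(W.T @ r for W, r in zip(Ws, resids))
    return f, g

# e.g. roughly 20 iterations, matching the paper's stopping rule:
# res = minimize(objective_and_grad, x0, args=(lr_values, ys, Ws, beta),
#                jac=True, method='CG', options={'maxiter': 20})
```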

40 Our image patch regions R(x_i) are square windows centred on x_i, and pixels near the edge of the image are supported using the average image of [3] extending beyond the edge of the super-resolution image. [sent-62, score-1.311]

41 To compute the nearest region in the example images, patches are normalized to sum to unity, and centre-weighted as in [10] by a 2-dimensional Gaussian. [sent-63, score-0.227]

42 The width of the image patches used, and of the Gaussian weights, depends very much upon the scales of the textures present in the image. [sent-64, score-0.637]
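
A sketch of this patch preprocessing (normalize to unit sum, then weight by a 2-D Gaussian centred on the patch); `weight_sigma` is a hypothetical parameter tied to the texture scale discussed above:

```python
import numpy as np

def prep_patch(patch, weight_sigma):
    """Normalize a square patch to sum to unity, then apply a 2-D Gaussian
    centre weighting before nearest-neighbour comparison (after [10])."""
    w = patch.shape[0]
    r = np.arange(w) - (w - 1) / 2.0                    # offsets from centre
    g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2.0 * weight_sigma ** 2))
    return (patch / patch.sum()) * g
```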

43 Our image intensities were in the range [0, 1], and all the work so far has been with grey-scale images. [sent-65, score-0.332]

44 Most of our results with this sample-based prior are compared to super-resolution images obtained using the Huber prior of [3]. [sent-66, score-0.742]

45 Other edge-preserving functions are discussed in [12], though the Huber function performed better than these as a prior in this case. [sent-67, score-0.205]

46 Plugging this into the posterior distribution of equation 4 leads to a Huber MAP image x_H which minimizes the negative log probability L_H = β Σ_{i=1}^{4N} ρ((Gx)_i) + Σ_{k=1}^{K} ||y^(k) − W^(k) x||², (12) where again the r.h.s. [sent-70, score-0.329]

47 has been scaled so that β is the single unknown ratio parameter. [sent-73, score-0.032]
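
The Huber function ρ (its defining equations are omitted from this extract, but the standard form is quadratic below a threshold α and linear above it) and the objective of equation 12 might be sketched as follows; the exact gradient operator G of [3] is not reproduced here, so the four first-difference directions below are an assumption:

```python
import numpy as np

def huber_rho(z, alpha):
    """Huber penalty: quadratic for |z| <= alpha, linear beyond, so large
    discontinuities are penalized less harshly than under a Gaussian."""
    a = np.abs(z)
    return np.where(a <= alpha, a ** 2, 2.0 * alpha * a - alpha ** 2)

def huber_neg_log_post(x_img, ys, Ws, beta, alpha):
    """Eq. 12, with Gx sketched as the 4N first differences of the image
    (horizontal, vertical and both diagonals, wrapping at the borders)."""
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]          # assumed stencil
    Gx = np.concatenate([(np.roll(np.roll(x_img, dy, 0), dx, 1) - x_img).ravel()
                         for dy, dx in shifts])
    data = sum(np.sum((y - W @ x_img.ravel()) ** 2) for y, W in zip(ys, Ws))
    return beta * huber_rho(Gx, alpha).sum() + data
```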

48 We added varying amounts of Gaussian noise (2/256, 6/256 and 12/256 grey levels) and took varying numbers of these images (2, 5, 10) to produce nine separate sets of low-resolution inputs from each of our initial “ground-truth” high resolution images. [sent-76, score-0.747]
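
This setup could be reproduced with the `degrade` sketch above (the names and seeding are illustrative; `truth` stands for one of the ground-truth images):

```python
import numpy as np

noise_levels = [2 / 256, 6 / 256, 12 / 256]   # noise std in grey levels
counts = [2, 5, 10]                           # number of low-res inputs

rng = np.random.default_rng(0)
datasets = {(s, n): [degrade(truth, beta_G=s ** -2, rng=rng) for _ in range(n)]
            for s in noise_levels for n in counts}   # the nine input sets
```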

49 Figure 1 shows three 100 × 100 pixel ground truth images, each accompanied by corresponding 40 × 40 pixel low-resolution images generated from the ground truth images at half the resolution, with 6/256 levels of noise. [sent-77, score-1.476]

50 Our aim was to reconstruct the central 50 × 50 pixel section of the original ground truth image. [sent-78, score-0.414]

51 Figure 2 shows the example images from which our texture sample patches were taken – note that these do not overlap with the sections used to generate the low-resolution images. [sent-79, score-0.756]

52 Figure 1: Left to right: ground truth text, ground truth brick, ground truth beads, low-res text, low-res brick and low-res beads. [sent-80, score-0.872]

53 Figure 3 shows the difference in super-resolution image quality that can be obtained using the sample-based prior over the Huber prior, with identical input sets as described above. [sent-84, score-0.78]

54 For each Huber super-resolution image, we ran a set of reconstructions, varying the Huber parameter α and the prior strength parameter β. [sent-85, score-0.265]

55 The image shown for each input number/noise level pair is the one which gave the minimum RMS error when compared to the ground-truth image; these are very close to the “best” images chosen from the same sets by a human subject. [sent-86, score-0.702]

56 The images shown for the sample-based prior are again the best (in the sense of having minimal RMS error) of several runs per image. [sent-87, score-0.537]

57 We varied the size of the sample patches from 5 to 13 pixels in edge length – computational cost meant that larger patches were not considered. [sent-88, score-0.578]

58 Compared to the Huber images, we tried relatively few different patch size and β-value combinations for our sample-based prior; again, this was due to our method taking longer to execute than the Huber method. [sent-89, score-0.142]

59 Consequently, the Huber parameters are more likely to lie close to their own optimal values than our sample-based prior parameters are. [sent-90, score-0.205]

60 We also present images recovered using a “wrong” texture. [sent-91, score-0.369]

61 We generated ten low-resolution images from a picture of a leaf, and used texture samples from a small black-and-white spiral in our reconstruction (Figure 4). [sent-92, score-0.683]

62 A selection of results is shown in Figure 5, where we varied the β parameter governing the prior’s contribution to the output image. [sent-93, score-0.026]

63 The text and brick datasets contained 2, 6, 12 grey levels of noise, while the beads dataset used 2, 12 and 32 grey levels. [sent-100, score-1.041]

64 Each image shown is the best of several attempts with varying prior strengths, Huber parameter (for the Huber MRF prior images) and patch neighbourhood sizes (for the texture-based prior images). [sent-101, score-1.199]

65 Figure 4: The original 120×120 high-resolution image (left), and the 80×80 pixel “wrong” texture sample image (right). [sent-103, score-1.053]

66 Figure 5: Four 120×120 super-resolution images are shown on the lower row, reconstructed using different values of the prior strength parameter β: 0. [sent-108, score-0.565]

67 5 Discussion and further considerations: The images of Figure 3 show that our prior offers a qualitative improvement over the generic prior, especially when few input images are available. [sent-113, score-0.932]

68 Figure 6 plots the RMS errors from the Huber and sample-based priors against each other. [sent-115, score-0.056]
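
The plotted quantity is presumably a plain RMS difference against the ground truth; a one-line sketch (the 255 grey-level scaling is an assumption, given intensities in [0, 1]):

```python
import numpy as np

def rms_grey_levels(x_est, x_true):
    """Root-mean-square error in 8-bit grey levels, as plotted in Fig. 6."""
    return 255.0 * np.sqrt(np.mean((x_est - x_true) ** 2))
```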

69 In all cases, the sample-based method fares better, with the difference most notable in the text example. [sent-116, score-0.111]

70 In general, larger patch sizes (11 × 11 pixels) give smaller errors for the noisy inputs, while small patches (5 × 5) are better for the less noisy images. [sent-117, score-0.338]

71 Computational costs meant we limited the patch size to no more than 13 × 13, and terminated the SCG optimization algorithm after approximately 20 iterations. [sent-118, score-0.142]

72 Since in general the textures for the prior will not be invariant to rotation and scaling, consideration of the registration of the input images will be necessary. [sent-120, score-0.758]

73 The optimal patch size will be a function of the image textures, so learning this as a parameter of an extended model, in a similar way to how [4] learns the point-spread function for a set of input images, is another direction of interest. [sent-121, score-0.512]

74 Figure 6: Comparison of RMS errors (in grey levels) in reconstructing the text, brick and bead images using the Huber and sample-based priors; the plot shows texture-based RMS against Huber RMS, with an equal-error line. [sent-122, score-1.012]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('huber', 0.457), ('images', 0.332), ('image', 0.329), ('brick', 0.242), ('texture', 0.228), ('grey', 0.22), ('prior', 0.205), ('patches', 0.196), ('beads', 0.162), ('lr', 0.149), ('patch', 0.142), ('truth', 0.137), ('pixel', 0.131), ('rms', 0.114), ('levels', 0.109), ('pixels', 0.101), ('text', 0.088), ('beta', 0.085), ('hmap', 0.081), ('lowresolution', 0.081), ('neighbourhood', 0.081), ('textures', 0.079), ('registration', 0.075), ('ground', 0.073), ('resolution', 0.072), ('scg', 0.07), ('displacements', 0.064), ('gx', 0.064), ('noise', 0.059), ('intensity', 0.058), ('bead', 0.054), ('highresolution', 0.054), ('synthesis', 0.046), ('central', 0.045), ('irani', 0.043), ('mrf', 0.043), ('mrfs', 0.043), ('philosophy', 0.043), ('res', 0.042), ('input', 0.041), ('oxford', 0.04), ('ml', 0.039), ('recovered', 0.037), ('sample', 0.036), ('xi', 0.035), ('upon', 0.033), ('varying', 0.032), ('scaled', 0.032), ('priors', 0.032), ('gradients', 0.031), ('centre', 0.031), ('recovering', 0.03), ('wrong', 0.03), ('generative', 0.029), ('strength', 0.028), ('reconstruct', 0.028), ('sampled', 0.027), ('obermayer', 0.026), ('low', 0.026), ('tend', 0.026), ('varied', 0.026), ('rotation', 0.026), ('plots', 0.024), ('learn', 0.024), ('unity', 0.023), ('centres', 0.023), ('cvgip', 0.023), ('fares', 0.023), ('glenn', 0.023), ('lh', 0.023), ('minx', 0.023), ('neighbour', 0.023), ('netherlands', 0.023), ('stephen', 0.023), ('subsample', 0.023), ('swamped', 0.023), ('zoom', 0.023), ('dx', 0.023), ('column', 0.023), ('edge', 0.023), ('implementation', 0.023), ('gaussian', 0.023), ('matches', 0.022), ('generic', 0.022), ('becker', 0.022), ('around', 0.022), ('smoothness', 0.022), ('thrun', 0.021), ('spiral', 0.021), ('accompanied', 0.021), ('anisotropic', 0.021), ('dissimilar', 0.021), ('greg', 0.021), ('pickup', 0.021), ('regularize', 0.021), ('rendering', 0.021), ('rmse', 0.021), ('susceptible', 0.021), ('generation', 0.021), ('reconstruction', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques.

2 0.22362526 88 nips-2003-Image Reconstruction by Linear Programming

Author: Koji Tsuda, Gunnar Rätsch

Abstract: A common way of image denoising is to project a noisy image to the subspace of admissible images made for instance by PCA. However, a major drawback of this method is that all pixels are updated by the projection, even when only a few pixels are corrupted by noise or occlusion. We propose a new method to identify the noisy pixels by 1 -norm penalization and update the identified pixels only. The identification and updating of noisy pixels are formulated as one linear program which can be solved efficiently. Especially, one can apply the ν-trick to directly specify the fraction of pixels to be reconstructed. Moreover, we extend the linear program to be able to exploit prior knowledge that occlusions often appear in contiguous blocks (e.g. sunglasses on faces). The basic idea is to penalize boundary points and interior points of the occluded area differently. We are able to show the ν-property also for this extended LP leading a method which is easy to use. Experimental results impressively demonstrate the power of our approach.

3 0.19750996 12 nips-2003-A Model for Learning the Semantics of Pictures

Author: Victor Lavrenko, R. Manmatha, Jiwoon Jeon

Abstract: We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.

4 0.13505131 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

Author: Kevin P. Murphy, Antonio Torralba, William T. Freeman

Abstract: Standard approaches to object detection focus on local patches of the image, and try to classify them as background or not. We propose to use the scene context (image as a whole) as an extra source of (global) information, to help resolve local ambiguities. We present a conditional random field for jointly solving the tasks of object detection and scene classification.

5 0.12248046 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

Author: Roland Vollgraf, Michael Scholz, Ian A. Meinertzhagen, Klaus Obermayer

Abstract: Nonlinear filtering can solve very complex problems, but typically involve very time consuming calculations. Here we show that for filters that are constructed as a RBF network with Gaussian basis functions, a decomposition into linear filters exists, which can be computed efficiently in the frequency domain, yielding dramatic improvement in speed. We present an application of this idea to image processing. In electron micrograph images of photoreceptor terminals of the fruit fly, Drosophila, synaptic vesicles containing neurotransmitter should be detected and labeled automatically. We use hand labels, provided by human experts, to learn a RBF filter using Support Vector Regression with Gaussian kernels. We will show that the resulting nonlinear filter solves the task to a degree of accuracy, which is close to what can be achieved by human experts. This allows the very time consuming task of data evaluation to be done efficiently.

6 0.11393899 73 nips-2003-Feature Selection in Clustering Problems

7 0.1114402 119 nips-2003-Local Phase Coherence and the Perception of Blur

8 0.11133597 190 nips-2003-Unsupervised Color Decomposition Of Histologically Stained Tissue Samples

9 0.10854274 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

10 0.08531446 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

11 0.076419346 69 nips-2003-Factorization with Uncertainty and Missing Data: Exploiting Temporal Coherence

12 0.072047569 133 nips-2003-Mutual Boosting for Contextual Inference

13 0.070267498 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors

14 0.068508632 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

15 0.065211222 77 nips-2003-Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data

16 0.064685963 117 nips-2003-Linear Response for Approximate Inference

17 0.064092502 186 nips-2003-Towards Social Robots: Automatic Evaluation of Human-Robot Interaction by Facial Expression Classification

18 0.063677691 152 nips-2003-Pairwise Clustering and Graphical Models

19 0.062544882 85 nips-2003-Human and Ideal Observers for Detecting Image Curves

20 0.062232167 9 nips-2003-A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.184), (1, -0.077), (2, 0.057), (3, -0.044), (4, -0.231), (5, -0.075), (6, 0.052), (7, -0.051), (8, 0.033), (9, -0.137), (10, -0.109), (11, 0.03), (12, -0.233), (13, 0.161), (14, -0.068), (15, -0.106), (16, -0.089), (17, 0.077), (18, -0.142), (19, 0.193), (20, 0.006), (21, 0.141), (22, -0.03), (23, -0.021), (24, -0.036), (25, 0.162), (26, 0.03), (27, -0.02), (28, 0.006), (29, -0.026), (30, 0.007), (31, -0.048), (32, 0.209), (33, -0.055), (34, 0.001), (35, 0.053), (36, -0.036), (37, -0.056), (38, 0.016), (39, 0.077), (40, 0.055), (41, 0.025), (42, -0.059), (43, 0.013), (44, -0.039), (45, -0.02), (46, 0.079), (47, -0.006), (48, 0.081), (49, 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98659807 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques.

2 0.8233217 12 nips-2003-A Model for Learning the Semantics of Pictures

Author: Victor Lavrenko, R. Manmatha, Jiwoon Jeon

Abstract: We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.

3 0.77779609 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

Author: Charles Rosenberg, Alok Ladsariya, Tom Minka

Abstract: We present a Bayesian approach to color constancy which utilizes a nonGaussian probabilistic model of the image formation process. The parameters of this model are estimated directly from an uncalibrated image set and a small number of additional algorithmic parameters are chosen using cross validation. The algorithm is empirically shown to exhibit RMS error lower than other color constancy algorithms based on the Lambertian surface reflectance model when estimating the illuminants of a set of test images. This is demonstrated via a direct performance comparison utilizing a publicly available set of real world test images and code base.

4 0.76733667 88 nips-2003-Image Reconstruction by Linear Programming

Author: Koji Tsuda, Gunnar Rätsch

Abstract: A common way of image denoising is to project a noisy image to the subspace of admissible images made for instance by PCA. However, a major drawback of this method is that all pixels are updated by the projection, even when only a few pixels are corrupted by noise or occlusion. We propose a new method to identify the noisy pixels by ℓ1-norm penalization and update the identified pixels only. The identification and updating of noisy pixels are formulated as one linear program which can be solved efficiently. Especially, one can apply the ν-trick to directly specify the fraction of pixels to be reconstructed. Moreover, we extend the linear program to be able to exploit prior knowledge that occlusions often appear in contiguous blocks (e.g. sunglasses on faces). The basic idea is to penalize boundary points and interior points of the occluded area differently. We are able to show the ν-property also for this extended LP, leading to a method which is easy to use. Experimental results impressively demonstrate the power of our approach.

5 0.56643498 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

Author: Roland Vollgraf, Michael Scholz, Ian A. Meinertzhagen, Klaus Obermayer

Abstract: Nonlinear filtering can solve very complex problems, but typically involve very time consuming calculations. Here we show that for filters that are constructed as a RBF network with Gaussian basis functions, a decomposition into linear filters exists, which can be computed efficiently in the frequency domain, yielding dramatic improvement in speed. We present an application of this idea to image processing. In electron micrograph images of photoreceptor terminals of the fruit fly, Drosophila, synaptic vesicles containing neurotransmitter should be detected and labeled automatically. We use hand labels, provided by human experts, to learn a RBF filter using Support Vector Regression with Gaussian kernels. We will show that the resulting nonlinear filter solves the task to a degree of accuracy, which is close to what can be achieved by human experts. This allows the very time consuming task of data evaluation to be done efficiently.

6 0.56222045 190 nips-2003-Unsupervised Color Decomposition Of Histologically Stained Tissue Samples

7 0.5590896 119 nips-2003-Local Phase Coherence and the Perception of Blur

8 0.5166803 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

9 0.48216051 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

10 0.4423618 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors

11 0.41516805 73 nips-2003-Feature Selection in Clustering Problems

12 0.35528806 133 nips-2003-Mutual Boosting for Contextual Inference

13 0.34852141 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

14 0.33414015 120 nips-2003-Locality Preserving Projections

15 0.33212173 69 nips-2003-Factorization with Uncertainty and Missing Data: Exploiting Temporal Coherence

16 0.32654351 195 nips-2003-When Does Non-Negative Matrix Factorization Give a Correct Decomposition into Parts?

17 0.31050968 85 nips-2003-Human and Ideal Observers for Detecting Image Curves

18 0.30174294 77 nips-2003-Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data

19 0.29763612 152 nips-2003-Pairwise Clustering and Graphical Models

20 0.26629212 168 nips-2003-Salient Boundary Detection using Ratio Contour


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.032), (11, 0.111), (29, 0.025), (30, 0.025), (35, 0.033), (53, 0.108), (66, 0.013), (68, 0.285), (71, 0.067), (76, 0.052), (85, 0.06), (91, 0.085)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81658405 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques.

2 0.80031902 25 nips-2003-An MCMC-Based Method of Comparing Connectionist Models in Cognitive Science

Author: Woojae Kim, Daniel J. Navarro, Mark A. Pitt, In J. Myung

Abstract: Despite the popularity of connectionist models in cognitive science, their performance can often be difficult to evaluate. Inspired by the geometric approach to statistical model selection, we introduce a conceptually similar method to examine the global behavior of a connectionist model, by counting the number and types of response patterns it can simulate. The Markov Chain Monte Carlo-based algorithm that we constructed finds these patterns efficiently. We demonstrate the approach using two localist network models of speech perception.

3 0.78038162 152 nips-2003-Pairwise Clustering and Graphical Models

Author: Noam Shental, Assaf Zomet, Tomer Hertz, Yair Weiss

Abstract: Significant progress in clustering has been achieved by algorithms that are based on pairwise affinities between the datapoints. In particular, spectral clustering methods have the advantage of being able to divide arbitrarily shaped clusters and are based on efficient eigenvector calculations. However, spectral methods lack a straightforward probabilistic interpretation which makes it difficult to automatically set parameters using training data. In this paper we use the previously proposed typical cut framework for pairwise clustering. We show an equivalence between calculating the typical cut and inference in an undirected graphical model. We show that for clustering problems with hundreds of datapoints exact inference may still be possible. For more complicated datasets, we show that loopy belief propagation (BP) and generalized belief propagation (GBP) can give excellent results on challenging clustering problems. We also use graphical models to derive a learning algorithm for affinity matrices based on labeled data.

4 0.58529574 173 nips-2003-Semi-supervised Protein Classification Using Cluster Kernels

Author: Jason Weston, Dengyong Zhou, André Elisseeff, William S. Noble, Christina S. Leslie

Abstract: A key issue in supervised protein classification is the representation of input sequences of amino acids. Recent work using string kernels for protein data has achieved state-of-the-art classification performance. However, such representations are based only on labeled data — examples with known 3D structures, organized into structural classes — while in practice, unlabeled data is far more plentiful. In this work, we develop simple and scalable cluster kernel techniques for incorporating unlabeled data into the representation of protein sequences. We show that our methods greatly improve the classification performance of string kernels and outperform standard approaches for using unlabeled data, such as adding close homologs of the positive examples to the training data. We achieve equal or superior performance to previously presented cluster kernel methods while achieving far greater computational efficiency.

5 0.56843001 88 nips-2003-Image Reconstruction by Linear Programming

Author: Koji Tsuda, Gunnar Rätsch

Abstract: A common way of image denoising is to project a noisy image to the subspace of admissible images made for instance by PCA. However, a major drawback of this method is that all pixels are updated by the projection, even when only a few pixels are corrupted by noise or occlusion. We propose a new method to identify the noisy pixels by ℓ1-norm penalization and update the identified pixels only. The identification and updating of noisy pixels are formulated as one linear program which can be solved efficiently. Especially, one can apply the ν-trick to directly specify the fraction of pixels to be reconstructed. Moreover, we extend the linear program to be able to exploit prior knowledge that occlusions often appear in contiguous blocks (e.g. sunglasses on faces). The basic idea is to penalize boundary points and interior points of the occluded area differently. We are able to show the ν-property also for this extended LP, leading to a method which is easy to use. Experimental results impressively demonstrate the power of our approach.

6 0.56214857 12 nips-2003-A Model for Learning the Semantics of Pictures

7 0.56207454 37 nips-2003-Automatic Annotation of Everyday Movements

8 0.54786205 107 nips-2003-Learning Spectral Clustering

9 0.52755088 168 nips-2003-Salient Boundary Detection using Ratio Contour

10 0.52566034 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

11 0.52458745 112 nips-2003-Learning to Find Pre-Images

12 0.52190006 9 nips-2003-A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications

13 0.52052385 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

14 0.5159446 120 nips-2003-Locality Preserving Projections

15 0.51583552 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons

16 0.51344907 113 nips-2003-Learning with Local and Global Consistency

17 0.51308346 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

18 0.5130024 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

19 0.51149261 50 nips-2003-Denoising and Untangling Graphs Using Degree Priors

20 0.50973988 126 nips-2003-Measure Based Regularization