nips nips2006 nips2006-45 knowledge-graph by maker-knowledge-mining

45 nips-2006-Blind Motion Deblurring Using Image Statistics


Source: pdf

Author: Anat Levin

Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of the existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant-velocity motion, we can limit the search to one-dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real-world images with rich texture.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Blind Motion Deblurring Using Image Statistics. Anat Levin, School of Computer Science and Engineering, The Hebrew University of Jerusalem. Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. [sent-1, score-0.539]

2 In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. [sent-2, score-0.603]

3 Most of the existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. [sent-3, score-0.876]

4 However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. [sent-4, score-0.913]

5 Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. [sent-5, score-0.605]

6 Assuming the blur results from a constant-velocity motion, we can limit the search to one-dimensional box filter blurs. [sent-7, score-0.666]

7 This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. [sent-8, score-0.752]

8 1 Introduction. Motion blur is the result of the relative motion between the camera and the scene during the image exposure time. [sent-11, score-0.972]

9 As blurring can significantly degrade the visual quality of images, photographers and camera manufacturers are frequently searching for methods to limit the phenomenon. [sent-13, score-0.441]

10 One solution that reduces the degree of blur is to capture images using shorter exposure intervals. [sent-14, score-0.653]

11 An alternative approach is to try to remove the blur off-line. [sent-16, score-0.602]

12 Blur is usually modeled as a linear convolution of an image with a blurring kernel, also known as the point spread function (or PSF). [sent-17, score-0.613]

13 Image deconvolution is the process of recovering the unknown image from its blurred version, given a blurring kernel. [sent-18, score-1.053]

14 In most situations, however, the blurring kernel is unknown as well, and the task also requires the estimation of the underlying blurring kernel. [sent-19, score-0.877]

15 Most of the existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. [sent-21, score-0.876]

16 While the uniform blur assumption is valid for a restricted set of camera motions, it is usually far from satisfactory when the scene contains several objects moving independently. [sent-22, score-0.699]

17 In this work, however, we would like to address blind multiple-motion deblurring using a single frame. [sent-24, score-0.481]

18 The first assumption is that the image consists of a small number of blurring layers with the same blurring kernel within each layer. [sent-26, score-1.17]

19 Most of the examples in this paper include a single blurred object and an unblurred background. [sent-27, score-0.446]

20 As a result, within each blurred layer, the blurring kernel is a simple one-dimensional box filter, so that the only unknown parameters are the blur direction and the width of the blur kernel. [sent-31, score-1.996]

21 Deblurring different motions requires the segmentation of the image into layers with different blurs as well as the reconstruction of the blurring kernel in each layer. [sent-32, score-1.136]

22 While image segmentation is an active and challenging research area which utilizes various low level and high level cues, the only segmentation cue used in this work is the degree of blur. [sent-33, score-0.485]

23 In order to discriminate different degrees of blur we use the statistics of natural images. [sent-34, score-0.624]

24 Our observation is that the statistics of derivative responses in images are significantly changed as a result of blur, and that the expected statistics under different blurring kernels can be modeled. [sent-35, score-0.728]

25 Given a model of the derivatives statistics under different blurring kernels our algorithm searches for a mixture model that will best describe the distribution observed in the input image. [sent-36, score-0.67]

26 This results in a set of 2 (or some other small number) blurring kernels that were used in the image. [sent-37, score-0.44]
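To make the mixture-model search concrete, here is a minimal sketch under stated assumptions (this is not the paper's code): given the observed derivative histogram and a bank of per-kernel model histograms p_k, it fits nonnegative mixture weights and keeps the few dominant kernels. The function name, the NNLS formulation, and the n_keep parameter are illustrative assumptions; the paper's exact fitting procedure may differ.

```python
# Hedged sketch: fit the observed derivative histogram as a nonnegative
# mixture of per-kernel model histograms p_k, then keep the dominant
# kernels. The NNLS formulation is an assumption, not the paper's method.
import numpy as np
from scipy.optimize import nnls

def dominant_blur_kernels(observed_hist, model_hists, n_keep=2):
    # model_hists: list of 1-D histograms, one per blur width k = 1, 2, ...
    A = np.column_stack(model_hists)              # one column per candidate kernel
    weights, _ = nnls(A, observed_hist)           # nonnegative mixture weights
    order = np.argsort(weights)[::-1]             # sort kernels by mixture weight
    return [int(k) + 1 for k in order[:n_keep]]   # widths of the top kernels
```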

27 In order to segment the image into blurring layers we measure the likelihood of the derivatives in small image windows, under each model. [sent-38, score-1.124]

28 Research on blind deconvolution given a single image usually concentrates on cases in which the image is uniformly blurred. [sent-42, score-0.503]

29 Early deblurring methods treated blurs that can be characterized by a regular pattern of zeros in the frequency domain such as box filter blurs [26]. [sent-44, score-0.713]

30 Even in the noise-free case, box filter blurs cannot be identified in the frequency domain if different blurs are present. [sent-46, score-0.439]

31 In creative recent work which inspired our approach, Fergus et al. [12] use the statistics of natural images to estimate the blurring kernel (again, assuming a uniform blur). [sent-49, score-0.559]

32 Their approach searches for the max-marginal blurring kernel and a deblurred image, using a prior on derivatives distribution in an unblurred image. [sent-50, score-0.862]

33 They address more than box filters, and present impressive reconstructions of complex blurring kernels. [sent-51, score-0.52]

34 As the edge’s scale provides some measure of blur, this is used for segmenting an image into in-focus and out-of-focus layers. [sent-55, score-0.799]

35 In [4], blind restoration of spatially-varying blur was studied in the case of astronomical images, which have statistics quite different from the natural scenes addressed in this paper. [sent-57, score-0.794]

36 As a scene point’s focus is a function of its depth, the relative blur is used to estimate depth information. [sent-62, score-0.665]

37 2 Image statistics and blurring. Figure 1(a) presents an image of an outdoor scene, with a passing bus. [sent-65, score-0.65]

38 The bus is blurred horizontally as a result of the bus motion. [sent-66, score-0.484]

39 In fig. 1(b) we plot the log histogram of the vertical derivatives of this image, and of the horizontal derivatives within the blurred area (marked with a rectangle). [sent-67, score-0.831]

40 As can be [figure 1: log histograms of vertical vs. horizontal derivatives, with curves for the input image and for 5-tap and 21-tap blurs] [sent-68, score-1.769]

41 (b) Horizontal derivatives within the blurred region versus vertical derivatives in the entire image. [sent-100, score-0.728]

42 (d) Horizontal derivatives within the blurred region matched with blurred verticals (4 tap blur). [sent-102, score-0.728]

43 seen, the blur changes the shape of the histogram significantly. [sent-103, score-0.619]

44 This suggests that the statistics of derivative filter responses can be used for detecting blurred image areas. [sent-104, score-0.509]

45 How does the degree of blur affect the derivatives histogram? [sent-105, score-0.752]

46 We convolve the image with the kernels f_k (where k runs from 1 to 30) and compute the vertical derivatives distributions: p_k ∝ hist(d_y ∗ f_k ∗ I) (1), where d_y = [1, −1]^T. [sent-108, score-0.624]

47 As the size of the blurring kernel changes the derivatives distribution, we would also like to use the histograms for determining the degree of blur. [sent-110, score-0.706]

48 For example, as illustrated in fig. 1(d), we can match the distribution of horizontal derivatives in the blurred area with p_4, the distribution of vertical derivatives after blurring with a 4-tap kernel. [sent-111, score-1.191]
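As an illustration of eq. (1), the following minimal sketch builds the model histograms p_k by blurring an image with k-tap horizontal box filters and histogramming its vertical derivatives. The function name, bin choices, and the small histogram floor are assumptions of this sketch, not the paper's code.

```python
# Hedged sketch of eq. (1): p_k ∝ hist(d_y * f_k * I) for k-tap
# horizontal box filters f_k. Names and bin choices are assumptions.
import numpy as np
from scipy.ndimage import convolve1d

def blur_model_histograms(I, max_width=30, bins=np.linspace(-1.0, 1.0, 201)):
    models = {}
    for k in range(1, max_width + 1):
        f_k = np.ones(k) / k                  # 1-D box filter (constant-velocity blur)
        blurred = convolve1d(I, f_k, axis=1)  # blur horizontally
        d_y = np.diff(blurred, axis=0)        # vertical derivative, d_y = [1, -1]^T
        hist, _ = np.histogram(d_y, bins=bins, density=True)
        models[k] = hist + 1e-8               # small floor keeps log p_k finite later
    return models
```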

49 2.1 Identifying blur using image statistics. Given an image, the direction of motion blur can be selected as the direction with minimal derivatives variation, as in [28]. [sent-113, score-1.705]
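A hedged sketch of that direction-selection heuristic follows: evaluate directional derivatives over a set of orientations and pick the one with minimal variation, since motion blur smooths intensity along the motion direction. The idea follows [28], but the angle sampling and the variance criterion as written here are assumptions of this sketch.

```python
# Hedged sketch: pick the blur direction as the orientation whose
# directional derivatives vary least. The angle sampling is an assumption.
import numpy as np

def estimate_blur_direction(I, n_angles=36):
    Iy, Ix = np.gradient(I)                   # gradients along rows (y) and columns (x)
    best_angle, best_var = 0.0, np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        d_theta = np.cos(theta) * Ix + np.sin(theta) * Iy   # directional derivative
        v = float(np.var(d_theta))
        if v < best_var:
            best_angle, best_var = theta, v
    return best_angle                         # radians; 0 corresponds to horizontal
```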

50 For simplicity of the derivation, we will assume here that the motion direction is horizontal, and that the image contains a single blurred object plus an unblurred background. [sent-114, score-0.762]

51 Our goal is to determine the size of the blur kernel. [sent-115, score-0.574]

52 That is, to recover the filter f_k which is responsible for the blur observed in the image. [sent-116, score-0.574]

53 Therefore, without segmenting the blurred areas there is no single blurring model p_k that will describe the observed histogram. [sent-119, score-0.724]

54 We define the log-likelihood of the derivatives in a window with respect to each of the blurring models as: ℓ_k(i) = Σ_{j ∈ W_i} log p_k(I_x(j)) (2), where I_x(j) is the horizontal derivative at pixel j, and W_i is a window around pixel i. [sent-121, score-0.811]

55 On the other hand, uniform areas receive the highest likelihoods from wide blur kernels (since the derivatives distribution for wide kernels is more concentrated around zero, as can be observed in figure 1(c)). [sent-124, score-0.883]

56 When the image consists of large uniform areas, this biases the likelihood toward wider blur kernels. [sent-125, score-0.816]

57 In order to make our model consistent, when building the blurred distribution models p_k (eq. 1), we also take into account only pixels within a window around a vertical edge. [sent-127, score-0.441]
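For illustration, here is a minimal sketch of eq. (2): the per-pixel log-likelihood of horizontal derivatives under each model p_k, summed over a square window W_i via a uniform filter. The `models` and `bins` arguments are assumed to match the histogram sketch above, and the edge-window restriction just described is omitted for brevity; both the function name and window size are assumptions.

```python
# Hedged sketch of eq. (2): l_k(i) = sum over j in W_i of log p_k(I_x(j)).
# `models` and `bins` are assumed to match the histogram sketch above;
# the restriction to windows around vertical edges is omitted here.
import numpy as np
from scipy.ndimage import uniform_filter

def window_loglik(I, models, bins, window=15):
    Ix = np.diff(I, axis=1)                   # horizontal derivatives I_x
    idx = np.clip(np.digitize(Ix, bins) - 1, 0, len(bins) - 2)
    logliks = {}
    for k, hist in models.items():
        logp = np.log(hist)[idx]              # log p_k(I_x(j)) at every pixel
        # a uniform filter averages over the window; multiply back by its area
        logliks[k] = uniform_filter(logp, size=window) * window ** 2
    return logliks                            # one log-likelihood map per width k
```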

58 2.2 Segmenting blur layers. Once the blurring kernel f_k has been found, we can use it to deconvolve the image, as in fig. 2(b). [sent-130, score-1.18]

59 While this significantly improves the image in the blurred areas, serious artifacts are observed in the background. [sent-131, score-0.49]

60 Therefore, in addition to recovering the blurring kernel, we need to segment the image into blurred and unblurred layers. [sent-132, score-1.126]

61 The final restored image is computed as: R(i) = x(i) · I^{−f_k}(i) + (1 − x(i)) · I(i) (7). 3 Results. To compute a deconvolved image I^{−f_k} given the blurring kernel, we follow [12] in using the Matlab implementation (deconvlucy) of the Richardson-Lucy deconvolution algorithm [23, 18]. [sent-143, score-1.035]
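The paper uses Matlab's deconvlucy; purely as an illustration, the sketch below uses the scikit-image Richardson-Lucy implementation instead, then forms the composite of eq. (7). The soft segmentation mask x (1 inside the blurred layer) is assumed given here; the paper derives it from the blur-layer segmentation.

```python
# Hedged sketch of the restoration step: Richardson-Lucy deconvolution
# with the recovered k-tap box kernel, then the composite of eq. (7).
# The segmentation mask `x` is assumed given in this sketch.
import numpy as np
from skimage.restoration import richardson_lucy

def restore(I, k, x, iterations=30):
    psf = np.ones((1, k)) / k                       # horizontal 1 x k box kernel f_k
    I_deconv = richardson_lucy(I, psf, iterations)  # I^{-f_k}
    return x * I_deconv + (1.0 - x) * I             # R(i) = x(i) I^{-f_k}(i) + (1 - x(i)) I(i)
```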

62 For the doll example the image was segmented into 3 blurring layers. [sent-145, score-0.679]

63 To determine the blur direction in those images we select the direction with minimal derivatives variation, as in [28]. [sent-148, score-0.872]

64 For each image we show what happens if the segmentation is ignored and the entire image is deconvolved with the selected kernel (for the doll case the wider kernel is shown). [sent-150, score-0.862]

65 In comparison, the third row presents the restored images computed from eq. 7 using the blurring layers segmentation. [sent-152, score-0.641]

66 (e) Segmentation contour. The recovered blur sizes for those examples were 12 pixels for the bicycles image and 4 pixels for the bus. [sent-165, score-0.978]

67 For the doll image a 9-pixel blur was identified in the skirt segment and a 2-pixel blur in the doll head. [sent-166, score-1.611]

68 We note that while recovering big degrees of blur as in the bicycles example is visually more impressive, discriminating small degrees of blur as in the bus example is more challenging from a statistical standpoint. [sent-167, score-1.409]

69 This is because the derivatives distributions in the case of small blurs are much more similar to the distributions of unblurred images. [sent-168, score-0.544]

70 For the bus image the size of the blur kernel found by our algorithm was 4 pixels. [sent-169, score-0.945]

71 Segmentation: As demonstrated in fig. 2(b), deconvolving the entire image with the same kernel damages the unblurred parts. [sent-175, score-0.522]

72 One obvious solution is to divide the image into regions and match a separate blur kernel to each region. [sent-176, score-0.848]

73 While a likelihood measure based on a big window is more reliable, such a window might cover regions from different blurring layers. [sent-180, score-0.53]

74 Another alternative is to break the image into segments using an unsupervised segmentation algorithm, and match a kernel to each segment. [sent-181, score-0.432]

75 The fact that blur changes the derivatives distributions also suggests that it might be captured as a kind of texture cue. [sent-182, score-0.814]

76 However, as this is an unsupervised segmentation process which does not take into account the grouping goal, it’s hard to expect it to yield exactly the blurred layers. [sent-186, score-0.421]

77 The output over-segments blur layers, while merging parts of blurred and unblurred objects. [sent-188, score-1.02]

78 To do that independently of the segmentation, we manually segmented the bus and applied the Matlab blind deconvolution function (deconvblind), initialized with a 1 × 7 box kernel. [sent-195, score-0.495]

79 Yet, the histogram structure of different images is not identical, and we found that trying to deblur one image using the statistics of a different image doesn’t work that well. [sent-199, score-0.516]

80 For example, figure 5 shows the result of deblurring the bus image using the bicycles image statistics. [sent-200, score-0.814]

81 The selected blur in this case was a 6-tap kernel, but deblurring the image with this kernel introduces artifacts. [sent-201, score-1.122]

82 Our solution was to work on each image using the vertical derivatives histograms from the same image. [sent-203, score-0.497]

83 This isn’t an optimal solution, as when the image is blurred horizontally, some of the vertical derivatives are degraded as well. [sent-204, score-0.737]

84 Figure 5 (panels a–d): Deblurring the bus image using the bicycles image statistics. [sent-206, score-0.54]

85 One failure source is blurs that can’t be described by a box filter, or failures in identifying the blur direction. [sent-213, score-0.848]

86 Even when this isn’t the case, the algorithm may fail to identify the correct blur size or it may not infer the correct segmentation. [sent-214, score-0.574]

87 The bushes area consists of many small derivatives which are explained better by a small blur model than by a no-blur model. [sent-217, score-0.816]

88 As a result the algorithm selected a 6-pixel blur model. [sent-219, score-0.574]

89 This model might increase the likelihood of the bushes texture and the noise on the road, but it doesn’t remove the blur of the car. [sent-220, score-0.687]

90 4 Discussion. This paper addresses the problem of blind motion deconvolution without assuming that the entire image has undergone the same blur. [sent-227, score-0.606]

91 Thus, in addition to recovering an unknown blur kernel, we segment the image into layers with different blurs. [sent-228, score-0.953]

92 We treat this highly challenging task using a surprisingly simple approach, relying on the derivatives distribution in blurred images. [sent-229, score-0.441]

93 We model the expected derivatives distributions under different degrees of blur, and those distributions are used for detecting different blurs in image windows. [sent-230, score-0.577]

94 The box-filter model used in this work is definitely limiting, and as pointed out by [12, 6], many blurring patterns observed in real images are more complex. [sent-231, score-0.524]

95 Stronger models might enable us to identify a wider class of blurring kernels rather than just box filters. [sent-233, score-0.537]

96 Particularly, they could provide a better strategy for identifying the blur direction. [sent-234, score-0.592]

97 In future work, it will also be interesting to try to detect different blurs without assuming a small number of blurring layers. [sent-236, score-0.61]

98 This will require estimating the blurs in the image in a continuous way, and might also provide a depth-from-focus algorithm that will work on a single image. [sent-237, score-0.422]

99 Local scale control for edge detection and blur estimation. [sent-294, score-0.574]

100 Simultaneous image formation and motion blur restoration via multiple capture. [sent-334, score-0.904]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('blur', 0.574), ('blurring', 0.399), ('deblurring', 0.274), ('blurred', 0.263), ('image', 0.195), ('blurs', 0.183), ('unblurred', 0.183), ('derivatives', 0.178), ('deconvolution', 0.146), ('blind', 0.143), ('segmentation', 0.136), ('layers', 0.098), ('bus', 0.097), ('motion', 0.087), ('kernel', 0.079), ('horizontal', 0.075), ('vertical', 0.074), ('box', 0.073), ('doll', 0.066), ('fk', 0.058), ('window', 0.054), ('deconvolved', 0.053), ('bicycles', 0.053), ('images', 0.052), ('pixels', 0.05), ('lter', 0.05), ('recovering', 0.05), ('histograms', 0.05), ('restoration', 0.048), ('scene', 0.047), ('motions', 0.046), ('bushes', 0.046), ('histogram', 0.045), ('texture', 0.044), ('depth', 0.044), ('camera', 0.042), ('kernels', 0.041), ('levin', 0.04), ('segment', 0.036), ('entire', 0.035), ('eq', 0.035), ('direction', 0.034), ('areas', 0.032), ('artifacts', 0.032), ('recovered', 0.031), ('convolve', 0.03), ('deconvolve', 0.03), ('deconvolving', 0.03), ('elder', 0.03), ('impressing', 0.03), ('matting', 0.03), ('restorated', 0.03), ('taps', 0.03), ('segmenting', 0.03), ('pk', 0.029), ('statistics', 0.029), ('try', 0.028), ('presents', 0.027), ('windows', 0.027), ('horizontally', 0.027), ('optics', 0.027), ('exposure', 0.027), ('isn', 0.027), ('stronger', 0.025), ('contour', 0.025), ('eccv', 0.025), ('wider', 0.024), ('pami', 0.024), ('tap', 0.024), ('concentrates', 0.024), ('ections', 0.024), ('ix', 0.024), ('seam', 0.024), ('cvpr', 0.024), ('lters', 0.024), ('likelihood', 0.023), ('searches', 0.023), ('unsupervised', 0.022), ('derivative', 0.022), ('degrees', 0.021), ('scanning', 0.02), ('eij', 0.02), ('energy', 0.02), ('discriminating', 0.019), ('dy', 0.019), ('usually', 0.019), ('segmented', 0.019), ('velocity', 0.019), ('fergus', 0.019), ('identifying', 0.018), ('smoothness', 0.018), ('captured', 0.018), ('align', 0.018), ('doesn', 0.018), ('siggraph', 0.018), ('address', 0.018), ('area', 0.018), ('moving', 0.017), ('matlab', 0.017), ('likelihoods', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 45 nips-2006-Blind Motion Deblurring Using Image Statistics

Author: Anat Levin

Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of the existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant-velocity motion, we can limit the search to one-dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real-world images with rich texture.

2 0.20125026 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

Author: Andrea Frome, Yoram Singer, Jitendra Malik

Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang et al.

3 0.1813284 16 nips-2006-A Theory of Retinal Population Coding

Author: Eizaburo Doi, Michael S. Lewicki

Abstract: Efficient coding models predict that the optimal code for natural images is a population of oriented Gabor receptive fields. These results match response properties of neurons in primary visual cortex, but not those in the retina. Does the retina use an optimal code, and if so, what is it optimized for? Previous theories of retinal coding have assumed that the goal is to encode the maximal amount of information about the sensory signal. However, the image sampled by retinal photoreceptors is degraded both by the optics of the eye and by the photoreceptor noise. Therefore, de-blurring and de-noising of the retinal signal should be important aspects of retinal coding. Furthermore, the ideal retinal code should be robust to neural noise and make optimal use of all available neurons. Here we present a theoretical framework to derive codes that simultaneously satisfy all of these desiderata. When optimized for natural images, the model yields filters that show strong similarities to retinal ganglion cell (RGC) receptive fields. Importantly, the characteristics of receptive fields vary with retinal eccentricities where the optical blur and the number of RGCs are significantly different. The proposed model provides a unified account of retinal coding, and more generally, it may be viewed as an extension of the Wiener filter with an arbitrary number of noisy units.

4 0.11461881 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

Author: Dong S. Cheng, Vittorio Murino, Mário Figueiredo

Abstract: This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes.

5 0.11016071 42 nips-2006-Bayesian Image Super-resolution, Continued

Author: Lyndsey C. Pickup, David P. Capel, Stephen J. Roberts, Andrew Zisserman

Abstract: This paper develops a multi-frame image super-resolution approach from a Bayesian view-point by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop’s Bayesian image super-resolution approach [16], the marginalization was over the superresolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.

6 0.10235995 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

7 0.071065664 103 nips-2006-Kernels on Structured Objects Through Nested Histograms

8 0.068514608 153 nips-2006-Online Clustering of Moving Hyperplanes

9 0.064277694 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

10 0.063361056 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests

11 0.063220538 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

12 0.05839505 31 nips-2006-Analysis of Contour Motions

13 0.055889603 66 nips-2006-Detecting Humans via Their Pose

14 0.055325158 65 nips-2006-Denoising and Dimension Reduction in Feature Space

15 0.053437091 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

16 0.052747324 122 nips-2006-Learning to parse images of articulated bodies

17 0.052056193 46 nips-2006-Blind source separation for over-determined delayed mixtures

18 0.046747185 167 nips-2006-Recursive ICA

19 0.046736896 52 nips-2006-Clustering appearance and shape by learning jigsaws

20 0.046574254 50 nips-2006-Chained Boosting


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.146), (1, 0.01), (2, 0.151), (3, -0.02), (4, 0.015), (5, -0.076), (6, -0.142), (7, -0.114), (8, 0.016), (9, 0.113), (10, 0.13), (11, 0.008), (12, -0.042), (13, -0.039), (14, 0.081), (15, 0.123), (16, -0.03), (17, -0.036), (18, 0.007), (19, 0.08), (20, -0.16), (21, 0.025), (22, -0.023), (23, -0.052), (24, 0.127), (25, 0.099), (26, 0.0), (27, 0.011), (28, 0.092), (29, 0.179), (30, -0.041), (31, -0.012), (32, 0.008), (33, -0.017), (34, 0.072), (35, -0.006), (36, -0.066), (37, 0.006), (38, -0.005), (39, -0.045), (40, -0.099), (41, -0.098), (42, 0.138), (43, -0.025), (44, 0.09), (45, 0.077), (46, 0.123), (47, -0.262), (48, -0.02), (49, 0.126)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94240195 45 nips-2006-Blind Motion Deblurring Using Image Statistics

Author: Anat Levin

Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of the existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant-velocity motion, we can limit the search to one-dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real-world images with rich texture.

2 0.6457817 16 nips-2006-A Theory of Retinal Population Coding

Author: Eizaburo Doi, Michael S. Lewicki

Abstract: Efficient coding models predict that the optimal code for natural images is a population of oriented Gabor receptive fields. These results match response properties of neurons in primary visual cortex, but not those in the retina. Does the retina use an optimal code, and if so, what is it optimized for? Previous theories of retinal coding have assumed that the goal is to encode the maximal amount of information about the sensory signal. However, the image sampled by retinal photoreceptors is degraded both by the optics of the eye and by the photoreceptor noise. Therefore, de-blurring and de-noising of the retinal signal should be important aspects of retinal coding. Furthermore, the ideal retinal code should be robust to neural noise and make optimal use of all available neurons. Here we present a theoretical framework to derive codes that simultaneously satisfy all of these desiderata. When optimized for natural images, the model yields filters that show strong similarities to retinal ganglion cell (RGC) receptive fields. Importantly, the characteristics of receptive fields vary with retinal eccentricities where the optical blur and the number of RGCs are significantly different. The proposed model provides a unified account of retinal coding, and more generally, it may be viewed as an extension of the Wiener filter with an arbitrary number of noisy units.

3 0.58996183 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

Author: Andrea Frome, Yoram Singer, Jitendra Malik

Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang et al.

4 0.57600451 42 nips-2006-Bayesian Image Super-resolution, Continued

Author: Lyndsey C. Pickup, David P. Capel, Stephen J. Roberts, Andrew Zisserman

Abstract: This paper develops a multi-frame image super-resolution approach from a Bayesian view-point by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop’s Bayesian image super-resolution approach [16], the marginalization was over the superresolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.

5 0.57081842 52 nips-2006-Clustering appearance and shape by learning jigsaws

Author: Anitha Kannan, John Winn, Carsten Rother

Abstract: Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. 1

6 0.47941953 182 nips-2006-Statistical Modeling of Images with Fields of Gaussian Scale Mixtures

7 0.4732106 174 nips-2006-Similarity by Composition

8 0.42249638 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

9 0.39200264 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests

10 0.38399464 170 nips-2006-Robotic Grasping of Novel Objects

11 0.38226452 103 nips-2006-Kernels on Structured Objects Through Nested Histograms

12 0.35897765 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

13 0.32560763 122 nips-2006-Learning to parse images of articulated bodies

14 0.28973439 73 nips-2006-Efficient Methods for Privacy Preserving Face Detection

15 0.28973341 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

16 0.27733487 153 nips-2006-Online Clustering of Moving Hyperplanes

17 0.27665037 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model

18 0.27626741 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

19 0.26602891 40 nips-2006-Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

20 0.25374356 31 nips-2006-Analysis of Contour Motions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.09), (3, 0.038), (7, 0.052), (8, 0.031), (9, 0.027), (15, 0.298), (20, 0.03), (21, 0.013), (22, 0.029), (44, 0.052), (57, 0.101), (65, 0.056), (69, 0.063), (71, 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77292907 45 nips-2006-Blind Motion Deblurring Using Image Statistics

Author: Anat Levin

Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of of existing blind deconvolution research concentrates at recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture.

2 0.63552219 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein

Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles.

3 0.5058974 34 nips-2006-Approximate Correspondences in High Dimensions

Author: Kristen Grauman, Trevor Darrell

Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel.

4 0.50588238 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

Author: Graham W. Taylor, Geoffrey E. Hinton, Sam T. Roweis

Abstract: We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued “visible” variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/∼gwtaylor/publications/nips2006mhmublv/

5 0.50067896 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf

Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.

6 0.49720839 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

7 0.49272421 42 nips-2006-Bayesian Image Super-resolution, Continued

8 0.48932388 167 nips-2006-Recursive ICA

9 0.48932093 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields

10 0.488848 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

11 0.48615387 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

12 0.48595279 110 nips-2006-Learning Dense 3D Correspondence

13 0.48522612 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model

14 0.48496726 202 nips-2006-iLSTD: Eligibility Traces and Convergence Analysis

15 0.48366603 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

16 0.48303258 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization

17 0.48239014 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

18 0.47992966 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

19 0.47908813 130 nips-2006-Max-margin classification of incomplete data

20 0.47854683 43 nips-2006-Bayesian Model Scoring in Markov Random Fields