nips nips2003 nips2003-39 knowledge-graph by maker-knowledge-mining

39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models


Source: pdf

Author: Charles Rosenberg, Alok Ladsariya, Tom Minka

Abstract: We present a Bayesian approach to color constancy which utilizes a non-Gaussian probabilistic model of the image formation process. The parameters of this model are estimated directly from an uncalibrated image set and a small number of additional algorithmic parameters are chosen using cross validation. The algorithm is empirically shown to exhibit RMS error lower than other color constancy algorithms based on the Lambertian surface reflectance model when estimating the illuminants of a set of test images. This is demonstrated via a direct performance comparison utilizing a publicly available set of real-world test images and code base.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract We present a Bayesian approach to color constancy which utilizes a non-Gaussian probabilistic model of the image formation process. [sent-7, score-0.769]

2 The parameters of this model are estimated directly from an uncalibrated image set and a small number of additional algorithmic parameters are chosen using cross validation. [sent-8, score-0.131]

3 The algorithm is empirically shown to exhibit RMS error lower than other color constancy algorithms based on the Lambertian surface reflectance model when estimating the illuminants of a set of test images. [sent-9, score-1.016]

4 This is demonstrated via a direct performance comparison utilizing a publicly available set of real world test images and code base. [sent-10, score-0.191]

5 Because the illuminants in the world have varying colors, the measured color of an object will change under different light sources. [sent-12, score-0.656]

6 We propose an algorithm for color constancy which, given an image, will automatically estimate the color of the illuminant (assumed constant over the image), allowing the image to be color corrected. [sent-13, score-1.95]

7 This color constancy problem is ill-posed, because object color and illuminant color are not uniquely separable. [sent-14, score-1.864]

8 Historically, algorithms for color constancy have fallen into two groups. [sent-15, score-0.683]

9 The second group uses a statistical model to quantify the probability of each illuminant and then makes an estimate from these probabilities. [sent-17, score-0.469]

10 But as shown by [3, 1], currently the best performance on real images is achieved by gamut mapping, a constraint-based algorithm. [sent-19, score-0.479]

11 And, in the words of some leading researchers, even gamut mapping is not “good enough” for object recognition [8]. [sent-20, score-0.487]

12 In this paper, we show that it is possible to outperform gamut mapping with a statistical approach, by using appropriate probability models with the appropriate statistical framework. [sent-21, score-0.538]

13 We use the principled Bayesian color constancy framework of [4], but combine it with rich, nonparametric image models, such as used by Color by Correlation [1]. [sent-22, score-0.769]

14 Even though our algorithm outperforms gamut mapping on average, there are cases in which gamut mapping provides better estimates, and, in fact, the errors of the two methods are surprisingly uncorrelated. [sent-25, score-0.97]

15 This is an interesting result, because it suggests that gamut mapping exploits image properties which are different from what is learned by our algorithm, and probably other statistical algorithms. [sent-26, score-0.575]

16 2 The imaging model Our approach is to model the observed image pixels with a probabilistic generative model, decomposing them as the product of unknown surface reflectances with an unknown illuminant. [sent-28, score-0.253]

17 Let y be an image pixel with three color channels: (y_r, y_g, y_b). [sent-32, score-0.64]

18 The pixel is assumed to be the result of light reflecting off of a surface under the Lambertian reflectance model. [sent-33, score-0.139]

19 Denote the power of the light in each channel by ℓ = (ℓ_r, ℓ_g, ℓ_b), with each channel ranging from zero to infinity. [sent-34, score-0.188]

20 Denote this reflectance by x = (x_r, x_g, x_b), with each channel ranging from zero to one. [sent-36, score-0.128]

21 The model for the pixel is the well-known diagonal lighting model: y_r = ℓ_r x_r, y_g = ℓ_g x_g, y_b = ℓ_b x_b (1). To simplify the equations below, we write this in matrix form as L = diag(ℓ) (2), y = Lx (3). This specifies the conditional distribution p(y|ℓ, x). [sent-37, score-0.342]
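
A minimal sketch of the diagonal lighting model of Eqs. (1)–(3) and the corresponding color correction; the function names and array shapes are illustrative assumptions, not from the paper.

```python
import numpy as np

def render(X, ell):
    """Diagonal lighting model: each observed pixel is y = diag(ell) x = Lx.

    X   : (n, 3) surface reflectances in [0, 1]
    ell : (3,)   illuminant power per channel (r, g, b)
    """
    return X * ell                          # broadcasting gives y_c = ell_c * x_c

def color_correct(Y, ell):
    """Invert the model: x = L^{-1} y, i.e. the color-corrected image."""
    return Y / ell

# toy usage with made-up values
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 3))      # five surface reflectances
ell = np.array([1.2, 1.0, 0.7])             # a reddish illuminant
Y = render(X, ell)
assert np.allclose(color_correct(Y, ell), X)
```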

22 The prior distribution for the illuminant (p(ℓ)) will be uniform over a constraint set, described later in section 5. [sent-40, score-0.463]

23 The most difficult step is to construct a model for the surface reflectances in an image containing many pixels: Y = (y(1), …, y(n)), X = (x(1), …, x(n)). [sent-42, score-0.141]

24 If m_k is the probability of a surface having a reflectance value in bin k, so that Σ_k m_k = 1, then independence says f(n_1, … [sent-63, score-0.279]

25 f(n_1, …, n_K) ≈ (1/(s Γ(n))) Π_k (s m_k Γ(n_k))^clip(n_k) (9), where clip(n_k) = 0 if n_k = 0 and 1 if n_k > 0 (10). This resembles a multinomial distribution on clipped counts. [sent-75, score-0.443]

26 Unfortunately, this distribution strongly prefers that the image contains a small number of different reflectances, which biases the light source estimate. [sent-76, score-0.157]

27 The counts are replaced by modified counts ν_k = n clip(n_k) / Σ_k clip(n_k) (11)–(12). The modified counts ν_k sum to n just like the original counts n_k, but are distributed equally over all reflectances present in the image. [sent-80, score-0.254]
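
A small sketch of the clipped counts of Eq. (10) and the modified counts ν_k of Eq. (12) as reconstructed above; the function names are ours.

```python
import numpy as np

def clip_counts(n_k):
    """clip(n_k): 1 for occupied reflectance bins, 0 for empty ones (Eq. 10)."""
    return (np.asarray(n_k) > 0).astype(float)

def modified_counts(n_k):
    """nu_k: redistribute the total count n equally over the occupied bins (Eq. 12)."""
    n_k = np.asarray(n_k, dtype=float)
    c = clip_counts(n_k)
    return n_k.sum() * c / c.sum()

n_k = np.array([0, 4, 1, 0, 3])   # toy histogram over K = 5 reflectance bins
print(clip_counts(n_k))           # [0. 1. 1. 0. 1.]
print(modified_counts(n_k))       # [0. 2.667 2.667 0. 2.667], sums to 8 like n_k
```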

28 3 The color constancy algorithm The algorithm for estimating the illuminant has two parts: (1) discretize the set of all illuminants on a fine grid and compute their likelihood and (2) pick the illuminant which minimizes the risk. [sent-81, score-1.865]

29 The likelihood of the observed image data Y for a given illuminant ℓ is p(Y|ℓ) = ∫_X Π_i p(y(i)|ℓ, x(i)) p(X) dX (13) = |L|^{-n} p(X = L^{-1}Y) (14). The quantity L^{-1}Y can be understood as the color-corrected image. [sent-82, score-0.555]
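
A sketch of how Eqs. (13)–(14) can be evaluated for one candidate illuminant: color-correct the image, histogram the corrected pixels into the learned reflectance bins, and add the |L|^{-n} determinant term. Here p(X) is approximated by Π_k m_k^{ν_k} with the modified counts, which simplifies the normalization of the reflectance model above; the binning scheme and names are assumptions.

```python
import numpy as np

def log_likelihood(Y, ell, log_m, bins=32):
    """Approximate log p(Y | ell) = -n * sum_c log(ell_c) + log p(X = L^{-1} Y).

    Y     : (n, 3) observed pixels in [0, 1]
    ell   : (3,)   candidate illuminant
    log_m : (bins, bins, bins) log-frequencies of quantized reflectances
    """
    X = Y / ell                                   # color-corrected image L^{-1} Y
    n = len(Y)
    det_term = -n * np.sum(np.log(ell))           # |L|^{-n}: prefers dimmer illuminants
    idx = np.clip((X * bins).astype(int), 0, bins - 1)
    n_k = np.zeros(log_m.shape)
    np.add.at(n_k, tuple(idx.T), 1)               # histogram of the corrected pixels
    occupied = n_k > 0
    nu = n * occupied / occupied.sum()            # modified counts (Eq. 12)
    return det_term + np.sum(nu[occupied] * log_m[occupied])
```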

30 The determinant term, 1/(ℓ_r ℓ_g ℓ_b)^n, makes this a valid distribution over Y and has the effect of introducing a preference for dimmer illuminants independently of the prior on reflectances. [sent-83, score-0.224]

31 An answer that the illuminant is ℓ*, when it is really ℓ, incurs some cost, denoted R(ℓ*|ℓ). [sent-85, score-0.444]

32 Let this function be quadratic in some transformation g of the illuminant vector ℓ: R(ℓ*|ℓ) = ||g(ℓ*) − g(ℓ)||² (18). This occurs, for example, when the cost function is squared error in chromaticity. [sent-86, score-0.469]
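
Under the quadratic risk of Eq. (18), the optimal estimate is the posterior mean of g(ℓ). A minimal sketch over a discrete illuminant grid with a uniform prior; the grid and names are placeholders.

```python
import numpy as np

def bayes_estimate(log_liks, grid, g):
    """Minimize E[||g(ell*) - g(ell)||^2] by taking the posterior mean of g(ell).

    log_liks : (G,)   log p(Y | ell) for each grid illuminant
    grid     : (G, 3) candidate illuminants
    g        : map from an illuminant to the space in which the risk is measured
    """
    w = np.exp(log_liks - np.max(log_liks))
    w /= w.sum()                                  # posterior over the grid (uniform prior)
    return w @ np.array([g(ell) for ell in grid])
```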

33 4 Relation to other algorithms In this section we describe related color constancy algorithms using the framework of the imaging model introduced in section 2. [sent-88, score-0.744]

34 Scale by max The scale by max algorithm (as tested e.g. [sent-95, score-0.126]

35 in [3]) estimates the illuminant by the simple formula ℓ_r = max_i y_r(i), ℓ_g = max_i y_g(i), ℓ_b = max_i y_b(i) (20), which is the dimmest illuminant in the valid set (15). [sent-97, score-1.224]

36 Then p(X) is constant and the maximum-likelihood illuminant is (20). [sent-99, score-0.444]
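
A one-line sketch of the scale-by-max estimate of Eq. (20).

```python
import numpy as np

def scale_by_max(Y):
    """Estimate each illuminant channel as the maximum pixel value in that channel (Eq. 20)."""
    return Y.max(axis=0)   # (ell_r, ell_g, ell_b) for Y of shape (n, 3)
```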

37 Gray-world The gray-world algorithm [5] chooses the illuminant such that the average value in each channel of the corrected image is a constant, e.g. [sent-101, score-0.624]

38 Let the reflectances be independent for each pixel and each channel, with distribution p(x_c) ∝ exp(−2x_c) in each channel c. [sent-106, score-0.109]

39 (21), whose maximum is (as desired) ℓ_c = (2/n) Σ_i y_c(i) (22). Figure 1: Plots of slices of the three dimensional color surface reflectance distribution along a single dimension. [sent-108, score-0.507]
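
A sketch of the gray-world estimate of Eq. (22), which under the exp(−2x_c) reflectance prior is just twice the per-channel mean of the image.

```python
import numpy as np

def gray_world(Y):
    """ell_c = (2 / n) * sum_i y_c(i)   (Eq. 22), for Y of shape (n, 3)."""
    return 2.0 * Y.mean(axis=0)
```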

40 Row one plots green versus blue with 0,0 at the upper left of each subplot and slices in red whose magnitude increases from left to right. [sent-109, score-0.149]

41 Row two plots red versus blue with slices in green. [sent-110, score-0.131]

42 Row three plots red versus green with slices in blue. [sent-111, score-0.131]

43 Instead, observed pixels are quantized into color bins, and the frequency of each bin is counted for each illuminant, in a finite set of illuminants. [sent-113, score-0.472]

44 ) Let m_k(ℓ) be the frequency of color bin k for illuminant ℓ, and let n_1 · · · n_K be the color histogram of the image; then the likelihood of ℓ is computed as p(Y|ℓ) = Π_k m_k(ℓ)^clip(n_k) (23). While theoretically this is very general, there are practical limitations. [sent-115, score-1.432]
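
A sketch of the Color by Correlation likelihood of Eq. (23), evaluated for a finite set of illuminants at once; the m_k(ℓ) table is a placeholder that would be learned or synthesized.

```python
import numpy as np

def cbc_log_likelihood(n_k, log_m_table):
    """log p(Y | ell) = sum_k clip(n_k) * log m_k(ell)   (Eq. 23), for every illuminant.

    n_k         : (K,)   color histogram of the image
    log_m_table : (G, K) log color-bin frequencies for each of G candidate illuminants
    """
    clip = (np.asarray(n_k) > 0).astype(float)
    return log_m_table @ clip

# the maximum-likelihood illuminant is then np.argmax(cbc_log_likelihood(n_k, log_m_table))
```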

45 One must learn the color frequencies for every possible illuminant. [sent-117, score-0.357]

46 Since collecting real-world data whose illuminant is known is difficult, m_k(ℓ) is typically trained synthetically with random surfaces, which may not represent the statistics of natural scenes. [sent-118, score-0.539]

47 The second issue is that colors and illuminants live in an unbounded 3D space [1], unlike reflectances which are bounded. [sent-119, score-0.227]

48 In order to store a color distribution for each illuminant, brightness variation needs to be artificially bounded. [sent-120, score-0.403]

49 To reduce the storage of the m_k(ℓ)'s, Barnard et al. [1] store the color distribution only for illuminants of a fixed brightness. [sent-122, score-0.676]

50 The other part of the bias is due to using clipped counts in the likelihood. [sent-124, score-0.107]

51 As explained in section 2, a multinomial likelihood with clipped counts is a special case of the Dirichlet-multinomial, and prefers images with a small number of different colors. [sent-125, score-0.258]

52 1 Reflectance Distribution To implement the Bayesian algorithm, we need to learn the real-world frequencies m_k of quantized reflectance vectors. [sent-128, score-0.125]

53 The direct approach to this would require a set of images with ground truth information regarding the associated illumination parameters or, alternately, a set of images captured under a canonical illuminant and camera. [sent-129, score-0.759]

54 Unfortunately, it is quite difficult to collect a large number of images under controlled conditions. [sent-130, score-0.103]

55 The estimates from some “base” color constancy algorithm are used as a proxy for the ground truth. [sent-132, score-0.737]

56 We used approximately 2300 randomly selected JPEG images from news sites on the web for bootstrapping, consisting mostly of outdoor scenes, indoor news conferences, and sporting event scenes. [sent-135, score-0.151]

57 A new image is formed in which each pixel is the mean of an m by m block of the original image. [sent-143, score-0.124]

58 The second pre-processing step removes dark pixels from the computation because, due to noise and quantization effects, they do not contain reliable color information. [sent-144, score-0.469]

59 Pixels whose channel sum y_r + y_g + y_b is less than a given threshold are excluded from the computation. [sent-145, score-0.293]
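
A sketch of the two preprocessing steps just described: averaging m×m blocks and discarding pixels whose channel sum falls below a threshold. The default values echo the tuned parameters reported below (m = 3, threshold 8/(3×255)), but the code itself is an illustrative assumption.

```python
import numpy as np

def preprocess(img, m=3, dark_thresh=8.0 / (3 * 255)):
    """img: (H, W, 3) image with values in [0, 1]; returns an (n, 3) list of usable pixels."""
    H, W, _ = img.shape
    H, W = (H // m) * m, (W // m) * m                 # crop so m x m blocks tile evenly
    blocks = img[:H, :W].reshape(H // m, m, W // m, m, 3)
    small = blocks.mean(axis=(1, 3))                  # each new pixel = mean of an m x m block
    pixels = small.reshape(-1, 3)
    keep = pixels.sum(axis=1) >= dark_thresh          # drop dark, unreliable pixels
    return pixels[keep]
```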

60 In addition to the reflectance prior, the parameters of our algorithm are: the number of reflectance histogram bins, the scale down factor, and the dark pixel threshold value. [sent-146, score-0.131]

61 (The “ball2” object was removed so that there was no overlap between the tuning and test sets. [sent-150, score-0.134]

62 ) For the purpose of speed, only images captured with the Philips Ultralume and the Macbeth Judge II fluorescent illuminants were included. [sent-151, score-0.329]

63 The best set of parameters was found to be: 32 × 32 × 32 reflectance bins, scale down by m = 3, and omit pixels with a channel sum less than 8/(3 × 255). [sent-152, score-0.149]

64 3 Illuminant prior To facilitate a direct comparison, we adopt the two illuminant priors from [3]. [sent-154, score-0.463]

65 The first prior, full set, discretizes the illuminants uniformly in polar coordinates. [sent-156, score-0.205]

66 The second prior, hull set, is a subset of full set restricted to be within the convex hull of the test set illuminants and other real world illuminants. [sent-157, score-0.385]

67 1 Evaluation Specifics To test the algorithms we use the publicly available real world image data set [2] used by Barnard, Martin, Coath and Funt in a comprehensive evaluation of color constancy algorithms in [3]. [sent-161, score-0.857]

68 The data set consists of images of 30 scenes captured under 11 light sources, for a total of 321 images (after the authors removed images which had collection problems) with ground truth illuminant information provided in the form of an RGB value. [sent-162, score-0.894]

69 As in the “rg error” measure of [3], illuminant error is measured in chromaticity space: ℓ_1 = ℓ_r/(ℓ_r + ℓ_g + ℓ_b), ℓ_2 = ℓ_g/(ℓ_r + ℓ_g + ℓ_b) (24), with R(ℓ*|ℓ) = (ℓ*_1 − ℓ_1)² + (ℓ*_2 − ℓ_2)² (25). The Bayesian algorithm is adapted to minimize this risk by computing the posterior mean in chromaticity space. [sent-163, score-0.615]
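
A sketch of the rg chromaticity error of Eqs. (24)–(25) used for evaluation; the function names are ours.

```python
import numpy as np

def chromaticity(ell):
    """(ell_1, ell_2) = (ell_r, ell_g) / (ell_r + ell_g + ell_b)   (Eq. 24)."""
    ell = np.asarray(ell, dtype=float)
    return ell[:2] / ell.sum()

def rg_error(ell_est, ell_true):
    """Squared chromaticity error R(ell* | ell)   (Eq. 25)."""
    d = chromaticity(ell_est) - chromaticity(ell_true)
    return float(d @ d)

# RMS error over a test set would then be np.sqrt(np.mean([rg_error(e, t) for e, t in pairs]))
```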

70 Table 1: The average error of several color constancy algorithms on the test set. [sent-165, score-0.733]

71 We compare two versions of our Bayesian method to the gamut mapping and scale by max algorithms. [sent-200, score-0.549]

72 The appropriate preprocessing for each algorithm was applied to the images to achieve the best possible performance. [sent-201, score-0.15]

73 (Note that we do not include results for color by correlation since the gamut mapping results were found to be significantly better in [3]. [sent-202, score-0.87]

74 In all configurations, our algorithm exhibits the lowest RMS error except in a single case where it is not statistically different from that of gamut mapping. [sent-203, score-0.424]

75 The tuning set, while composed of images separate from the test set, is very similar and has known illuminants and, accordingly, gives the best results. [sent-207, score-0.169]

76 1, is not that different, particularly when the illuminant search is constrained. [sent-209, score-0.444]

77 The gamut mapping algorithm (called CRULE and ECRULE in [3]) is also presented in two versions: with and without segmenting the images as a preprocessing step as described in [3]. [sent-210, score-0.614]

78 In the evaluation of color constancy algorithms in [3], gamut mapping was found on average to outperform all other algorithms when evaluated on real world images. [sent-212, score-1.196]

79 It is interesting to note that the gamut mapping algorithm is sensitive to segmentation. [sent-213, score-0.487]

80 Since, fundamentally, it should not be sensitive to the number of pixels of a particular color in the image, we must assume that this is because the segmentation is implementing some form of noise filtering. [sent-214, score-0.537]

81 Scale by max is also included as a reference point and still performs quite well given its simplicity, often beating out much more complex constancy algorithms [8, 3]. [sent-216, score-0.364]

82 Its performance is the same for both illuminant sets since it does not involve a search over illuminants. [sent-217, score-0.444]

83 edu/˜chuck/nips-2003/ Surprisingly, when the error of the Bayesian method is compared with the gamut mapping method on individual test images, the correlation coefficient is -0. [sent-221, score-0.563]

84 Thus the images which confuse the Bayesian method are quite different from the images which confuse gamut mapping. [sent-223, score-0.632]

85 This suggests that an algorithm which could jointly model the image properties exploited by both algorithms might give dramatic improvements. [sent-224, score-0.109]

86 As an example of the potential improvement, the RMS error of an ideal algorithm whose error is the minimum of Bayes and gamut on each image in the test set is only 0. [sent-225, score-0.56]

87 7 Conclusions and Future Work We have demonstrated empirically that Bayesian color constancy with the appropriate non-Gaussian models can outperform gamut mapping on a standard test set. [sent-227, score-1.196]

88 This is true regardless of whether a calibrated or uncalibrated training set is used, or whether the full set or a restricted set of illuminants is searched. [sent-228, score-0.25]

89 This should give new hope to the pursuit of statistical methods as a unifying framework for color constancy. [sent-229, score-0.401]

90 This is simply an image modeling problem which can be attacked using standard statistical methods. [sent-232, score-0.111]

91 A particularly promising direction is to pursue models which can enforce constraints like that in the gamut mapping algorithm, since the images where Bayes has the largest errors appear to be relatively easy for gamut mapping. [sent-233, score-0.943]

92 Acknowledgments We would like to thank Kobus Barnard for making his test images and code publicly available. [sent-234, score-0.166]

93 Funt, “Colour by correlation in a three dimensional colour space,” Proceedings of the 6th European Conference on Computer Vision, pp. [sent-240, score-0.144]

94 Funt, “A comparison of color constancy algorithms; Part Two. [sent-254, score-0.683]

95 Freeman, “Bayesian color constancy,” Journal of the Optical Society of America A, vol. [sent-265, score-0.357]

96 Buchsbaum, “A spatial processor model for object colour perception,” Journal of the Franklin Institute, vol. [sent-270, score-0.118]

97 Hubel, “Colour by correlation: a simple, unifying approach to colour constancy,” The Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. [sent-279, score-0.114]

98 Barnard, “Learning color constancy,” Proceedings of Imaging Science and Technology / Society for Information Display Fourth Color Imaging Conference. [sent-285, score-0.357]

99 “Bootstrapping color constancy,” Proceedings of SPIE: Electronic Imaging IV, 3644, 1999. [sent-297, score-0.357]

100 Vrhel, “Estimation of illumination for color correction,” Proc ICASSP, pp. [sent-302, score-0.395]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('illuminant', 0.444), ('gamut', 0.376), ('color', 0.357), ('constancy', 0.326), ('ectances', 0.256), ('ectance', 0.223), ('illuminants', 0.205), ('nk', 0.186), ('funt', 0.12), ('barnard', 0.113), ('images', 0.103), ('re', 0.1), ('colour', 0.095), ('mk', 0.095), ('mapping', 0.088), ('image', 0.086), ('clip', 0.085), ('yg', 0.085), ('yb', 0.074), ('channel', 0.071), ('hull', 0.065), ('rms', 0.063), ('bayesian', 0.063), ('yr', 0.063), ('imaging', 0.061), ('surface', 0.055), ('pixels', 0.051), ('chromaticity', 0.051), ('clipped', 0.051), ('coath', 0.051), ('smk', 0.051), ('slices', 0.05), ('correlation', 0.049), ('light', 0.046), ('uncalibrated', 0.045), ('yc', 0.045), ('bins', 0.043), ('martin', 0.043), ('segmentation', 0.043), ('tuning', 0.041), ('bayes', 0.039), ('pixel', 0.038), ('illumination', 0.038), ('publicly', 0.038), ('max', 0.038), ('bootstrapping', 0.036), ('counts', 0.034), ('lambertian', 0.034), ('xg', 0.034), ('bin', 0.034), ('bootstrap', 0.032), ('ground', 0.031), ('quantized', 0.03), ('brightness', 0.027), ('scale', 0.027), ('base', 0.026), ('quantization', 0.025), ('confuse', 0.025), ('prefers', 0.025), ('xr', 0.025), ('statistical', 0.025), ('histogram', 0.025), ('error', 0.025), ('test', 0.025), ('world', 0.025), ('likelihood', 0.025), ('outperform', 0.024), ('news', 0.024), ('nongaussian', 0.024), ('carnegie', 0.024), ('mellon', 0.024), ('pittsburgh', 0.024), ('removed', 0.024), ('red', 0.024), ('preprocessing', 0.024), ('object', 0.023), ('algorithm', 0.023), ('xb', 0.023), ('bias', 0.022), ('colors', 0.022), ('posterior', 0.021), ('captured', 0.021), ('overlap', 0.021), ('minka', 0.021), ('plots', 0.02), ('versions', 0.02), ('multinomial', 0.02), ('unifying', 0.019), ('truth', 0.019), ('store', 0.019), ('prior', 0.019), ('pa', 0.019), ('versus', 0.019), ('surprisingly', 0.019), ('vision', 0.018), ('removes', 0.018), ('green', 0.018), ('blue', 0.018), ('dark', 0.018), ('grid', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

Author: Charles Rosenberg, Alok Ladsariya, Tom Minka

Abstract: We present a Bayesian approach to color constancy which utilizes a nonGaussian probabilistic model of the image formation process. The parameters of this model are estimated directly from an uncalibrated image set and a small number of additional algorithmic parameters are chosen using cross validation. The algorithm is empirically shown to exhibit RMS error lower than other color constancy algorithms based on the Lambertian surface reflectance model when estimating the illuminants of a set of test images. This is demonstrated via a direct performance comparison utilizing a publicly available set of real world test images and code base.

2 0.11763185 190 nips-2003-Unsupervised Color Decomposition Of Histologically Stained Tissue Samples

Author: Andrew Rabinovich, Sameer Agarwal, Casey Laris, Jeffrey H. Price, Serge J. Belongie

Abstract: Accurate spectral decomposition is essential for the analysis and diagnosis of histologically stained tissue sections. In this paper we present the first automated system for performing this decomposition. We compare the performance of our system with ground truth data and report favorable results. 1

3 0.10854274 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several lowresolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques. 1

4 0.087939821 12 nips-2003-A Model for Learning the Semantics of Pictures

Author: Victor Lavrenko, R. Manmatha, Jiwoon Jeon

Abstract: We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval. 1

5 0.079203203 28 nips-2003-Application of SVMs for Colour Classification and Collision Detection with AIBO Robots

Author: Michael J. Quinlan, Stephan K. Chalup, Richard H. Middleton

Abstract: This article addresses the issues of colour classification and collision detection as they occur in the legged league robot soccer environment of RoboCup. We show how the method of one-class classification with support vector machines (SVMs) can be applied to solve these tasks satisfactorily using the limited hardware capacity of the prescribed Sony AIBO quadruped robots. The experimental evaluation shows an improvement over our previous methods of ellipse fitting for colour classification and the statistical approach used for collision detection.

6 0.062548704 88 nips-2003-Image Reconstruction by Linear Programming

7 0.053157602 152 nips-2003-Pairwise Clustering and Graphical Models

8 0.048233669 40 nips-2003-Bias-Corrected Bootstrap and Model Uncertainty

9 0.044154696 97 nips-2003-Iterative Scaled Trust-Region Learning in Krylov Subspaces via Pearlmutter's Implicit Sparse Hessian

10 0.042377286 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

11 0.04143266 73 nips-2003-Feature Selection in Clustering Problems

12 0.041303705 53 nips-2003-Discriminating Deformable Shape Classes

13 0.038283244 166 nips-2003-Reconstructing MEG Sources with Unknown Correlations

14 0.037820317 119 nips-2003-Local Phase Coherence and the Perception of Blur

15 0.037648018 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms

16 0.037608869 133 nips-2003-Mutual Boosting for Contextual Inference

17 0.036126133 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

18 0.035211489 74 nips-2003-Finding the M Most Probable Configurations using Loopy Belief Propagation

19 0.034992281 62 nips-2003-Envelope-based Planning in Relational MDPs

20 0.032996919 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.121), (1, -0.027), (2, 0.027), (3, -0.018), (4, -0.114), (5, -0.028), (6, 0.047), (7, -0.035), (8, -0.002), (9, -0.019), (10, -0.043), (11, 0.004), (12, -0.128), (13, 0.093), (14, -0.032), (15, -0.049), (16, -0.06), (17, -0.003), (18, -0.04), (19, 0.127), (20, 0.035), (21, 0.087), (22, -0.024), (23, -0.063), (24, -0.053), (25, 0.063), (26, -0.02), (27, 0.079), (28, 0.05), (29, -0.033), (30, 0.047), (31, -0.12), (32, 0.182), (33, -0.082), (34, 0.1), (35, 0.055), (36, -0.023), (37, 0.041), (38, 0.086), (39, -0.041), (40, 0.009), (41, 0.133), (42, -0.03), (43, -0.035), (44, 0.058), (45, 0.112), (46, 0.051), (47, -0.038), (48, -0.09), (49, 0.001)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93601221 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

Author: Charles Rosenberg, Alok Ladsariya, Tom Minka

Abstract: We present a Bayesian approach to color constancy which utilizes a nonGaussian probabilistic model of the image formation process. The parameters of this model are estimated directly from an uncalibrated image set and a small number of additional algorithmic parameters are chosen using cross validation. The algorithm is empirically shown to exhibit RMS error lower than other color constancy algorithms based on the Lambertian surface reflectance model when estimating the illuminants of a set of test images. This is demonstrated via a direct performance comparison utilizing a publicly available set of real world test images and code base.

2 0.7166155 190 nips-2003-Unsupervised Color Decomposition Of Histologically Stained Tissue Samples

Author: Andrew Rabinovich, Sameer Agarwal, Casey Laris, Jeffrey H. Price, Serge J. Belongie

Abstract: Accurate spectral decomposition is essential for the analysis and diagnosis of histologically stained tissue sections. In this paper we present the first automated system for performing this decomposition. We compare the performance of our system with ground truth data and report favorable results. 1

3 0.68428802 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several lowresolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques. 1

4 0.53072155 12 nips-2003-A Model for Learning the Semantics of Pictures

Author: Victor Lavrenko, R. Manmatha, Jiwoon Jeon

Abstract: We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval. 1

5 0.49932688 88 nips-2003-Image Reconstruction by Linear Programming

Author: Koji Tsuda, Gunnar Rätsch

Abstract: A common way of image denoising is to project a noisy image to the subspace of admissible images made for instance by PCA. However, a major drawback of this method is that all pixels are updated by the projection, even when only a few pixels are corrupted by noise or occlusion. We propose a new method to identify the noisy pixels by 1 -norm penalization and update the identified pixels only. The identification and updating of noisy pixels are formulated as one linear program which can be solved efficiently. Especially, one can apply the ν-trick to directly specify the fraction of pixels to be reconstructed. Moreover, we extend the linear program to be able to exploit prior knowledge that occlusions often appear in contiguous blocks (e.g. sunglasses on faces). The basic idea is to penalize boundary points and interior points of the occluded area differently. We are able to show the ν-property also for this extended LP leading a method which is easy to use. Experimental results impressively demonstrate the power of our approach.

6 0.43854862 195 nips-2003-When Does Non-Negative Matrix Factorization Give a Correct Decomposition into Parts?

7 0.38604867 28 nips-2003-Application of SVMs for Colour Classification and Collision Detection with AIBO Robots

8 0.38379329 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

9 0.3781637 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

10 0.36206582 97 nips-2003-Iterative Scaled Trust-Region Learning in Krylov Subspaces via Pearlmutter's Implicit Sparse Hessian

11 0.31112862 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

12 0.30581009 152 nips-2003-Pairwise Clustering and Graphical Models

13 0.30461895 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors

14 0.2868692 119 nips-2003-Local Phase Coherence and the Perception of Blur

15 0.28217101 40 nips-2003-Bias-Corrected Bootstrap and Model Uncertainty

16 0.281335 53 nips-2003-Discriminating Deformable Shape Classes

17 0.26566419 62 nips-2003-Envelope-based Planning in Relational MDPs

18 0.26399285 166 nips-2003-Reconstructing MEG Sources with Unknown Correlations

19 0.25282142 175 nips-2003-Sensory Modality Segregation

20 0.25093466 103 nips-2003-Learning Bounds for a Generalized Family of Bayesian Posterior Distributions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.036), (11, 0.045), (29, 0.016), (30, 0.014), (35, 0.045), (53, 0.101), (71, 0.058), (73, 0.348), (76, 0.043), (85, 0.083), (91, 0.093)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77300537 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

Author: Charles Rosenberg, Alok Ladsariya, Tom Minka

Abstract: We present a Bayesian approach to color constancy which utilizes a nonGaussian probabilistic model of the image formation process. The parameters of this model are estimated directly from an uncalibrated image set and a small number of additional algorithmic parameters are chosen using cross validation. The algorithm is empirically shown to exhibit RMS error lower than other color constancy algorithms based on the Lambertian surface reflectance model when estimating the illuminants of a set of test images. This is demonstrated via a direct performance comparison utilizing a publicly available set of real world test images and code base.

2 0.7550351 31 nips-2003-Approximate Analytical Bootstrap Averages for Support Vector Classifiers

Author: Dörthe Malzahn, Manfred Opper

Abstract: We compute approximate analytical bootstrap averages for support vector classification using a combination of the replica method of statistical physics and the TAP approach for approximate inference. We test our method on a few datasets and compare it with exact averages obtained by extensive Monte-Carlo sampling. 1

3 0.49055186 101 nips-2003-Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates

Author: Peter L. Bartlett, Michael I. Jordan, Jon D. Mcauliffe

Abstract: Many classification algorithms, including the support vector machine, boosting and logistic regression, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0-1 loss function. We characterize the statistical consequences of using such a surrogate by providing a general quantitative relationship between the risk as assessed using the 0-1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial bounds under the weakest possible condition on the loss function—that it satisfy a pointwise form of Fisher consistency for classification. The relationship is based on a variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise. Finally, we present applications of our results to the estimation of convergence rates in the general setting of function classes that are scaled hulls of a finite-dimensional base class.

4 0.46508652 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

Author: Sanjiv Kumar, Martial Hebert

Abstract: In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. The proposed model exploits local discriminative models and allows to relax the assumption of conditional independence of the observed data given the labels, commonly used in the Markov Random Field (MRF) framework. The parameters of the DRF model are learned using penalized maximum pseudo-likelihood method. Furthermore, the form of the DRF model allows the MAP inference for binary classification problems using the graph min-cut algorithms. The performance of the model was verified on the synthetic as well as the real-world images. The DRF model outperforms the MRF model in the experiments. 1

5 0.46087503 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games

Author: Yu-han Chang, Tracey Ho, Leslie P. Kaelbling

Abstract: In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and learn an effective policy. 1

6 0.45991009 113 nips-2003-Learning with Local and Global Consistency

7 0.45909321 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons

8 0.45868763 3 nips-2003-AUC Optimization vs. Error Rate Minimization

9 0.45786977 126 nips-2003-Measure Based Regularization

10 0.45665109 78 nips-2003-Gaussian Processes in Reinforcement Learning

11 0.45615923 107 nips-2003-Learning Spectral Clustering

12 0.45608512 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

13 0.45580804 57 nips-2003-Dynamical Modeling with Kernels for Nonlinear Time Series Prediction

14 0.45505199 147 nips-2003-Online Learning via Global Feedback for Phrase Recognition

15 0.45464155 112 nips-2003-Learning to Find Pre-Images

16 0.45426077 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

17 0.45330009 72 nips-2003-Fast Feature Selection from Microarray Expression Data via Multiplicative Large Margin Algorithms

18 0.45253733 47 nips-2003-Computing Gaussian Mixture Models with EM Using Equivalence Constraints

19 0.45221755 135 nips-2003-Necessary Intransitive Likelihood-Ratio Classifiers

20 0.4521341 143 nips-2003-On the Dynamics of Boosting