nips nips2003 nips2003-54 knowledge-graph by maker-knowledge-mining

54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images


Source: pdf

Author: Sanjiv Kumar, Martial Hebert

Abstract: In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. The proposed model exploits local discriminative models and allows us to relax the assumption of conditional independence of the observed data given the labels, commonly used in the Markov Random Field (MRF) framework. The parameters of the DRF model are learned using a penalized maximum pseudo-likelihood method. Furthermore, the form of the DRF model allows MAP inference for binary classification problems using graph min-cut algorithms. The performance of the model was verified on synthetic as well as real-world images. The DRF model outperforms the MRF model in the experiments. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. [sent-3, score-0.383]

2 The proposed model exploits local discriminative models and allows us to relax the assumption of conditional independence of the observed data given the labels, commonly used in the Markov Random Field (MRF) framework. [sent-4, score-0.226]

3 The parameters of the DRF model are learned using a penalized maximum pseudo-likelihood method. [sent-5, score-0.122]

4 … how to model arbitrarily complex dependencies in the observed image data, as well as in the labels, in a principled manner. [sent-13, score-0.201]

5 Let the corresponding labels at the image sites be given by x = {xi }i∈S . [sent-17, score-0.208]

6 In the MRF framework, the posterior over the labels given the data is expressed using Bayes' rule as p(x|y) ∝ p(x, y) = p(x)p(y|x), where the prior over labels, p(x), is modeled as an MRF. [sent-18, score-0.12]

7 The data belonging to such a class is highly dependent on its neighbors since the lines or edges at spatially adjoining sites follow some underlying organization rules rather than being random (See Fig. [sent-26, score-0.167]

8 Now considering a different point of view, for classification purposes we are interested in estimating the posterior over labels given the observations, i.e., p(x|y). [sent-31, score-0.095]

9 In a generative framework, one expends effort modeling the joint distribution p(x, y), which involves implicit modeling of the observations. [sent-34, score-0.113]

10 In a discriminative framework, one models the distribution p(x|y) directly. [sent-35, score-0.154]

11 As noted in [2], a potential advantage of using the discriminative approach is that the true underlying generative model may be quite complex even though the class posterior is simple. [sent-36, score-0.336]

12 This means that the generative approach may expend substantial resources on modeling aspects of the generative model that are not particularly relevant to the task of inferring the class labels. [sent-37, score-0.144]

13 This approach allows one to capture arbitrary dependencies between the observations without resorting to any model approximations. [sent-42, score-0.102]

14 Our model further enhances CRFs by proposing the use of local discriminative models to capture the class associations at individual sites, as well as the interactions with neighboring sites, on 2-D grid lattices. [sent-43, score-0.394]

15 The proposed model uses local discriminative models to achieve the site classification while permitting interactions in both the observed data and the label field in a principled manner. [sent-44, score-0.376]

16 With a slight abuse of notation, in the rest of the paper we will call Ai the association potential and Iij the interaction potential. [sent-57, score-0.344]
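
The two potentials combine in the log of the conditional model in (1). Below is a minimal sketch (not the authors' code) of the unnormalized log conditional, assuming hypothetical callables h(i, y) and mu(i, j, y) that return the site and pairwise feature vectors, and an adjacency map neighbors:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_p_unnormalized(x, y, w, v, neighbors, h, mu):
    """Unnormalized log of the DRF conditional in (1):
    sum_i A(x_i, y) + sum_i sum_{j in N_i} I(x_i, x_j, y),
    with labels x_i in {-1, +1}; h(i, y) returns h_i(y) (its leading 1
    absorbs the bias) and mu(i, j, y) returns the pairwise features."""
    n = len(x)
    assoc = sum(np.log(sigmoid(x[i] * (w @ h(i, y)))) for i in range(n))
    inter = sum(x[i] * x[j] * (v @ mu(i, j, y))
                for i in range(n) for j in neighbors[i])
    return assoc + inter
```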

17 In the DRFs, the association potential is seen as a local decision term which decides the association of a given site with a certain class, ignoring its neighbors. [sent-59, score-0.416]

18 The interaction potential is seen as a data dependent smoothing function. [sent-60, score-0.302]

19 Association potential. In the DRF framework, A(xi, y) is modeled using a local discriminative model that outputs the association of site i with class xi. [sent-67, score-0.591]

20 Generalized Linear Models (GLM) are used extensively in statistics to model the class posteriors given the observations [8]. [sent-68, score-0.117]

21 For each site i, let f i(y) be a function that maps the observations y to a feature vector, such that f i : y → ℝ^l. [sent-69, score-0.206]

22 Using a logistic function as the link, the local class posterior can be modeled as P(xi = 1|y) = 1 / (1 + e^−(w0 + wT f i(y))) = σ(w0 + wT f i(y)) (2), where w = {w0, w1} are the model parameters. [sent-70, score-0.216]

23 To extend the logistic model to induce a nonlinear decision boundary in the feature space, a transformed feature vector at each site i is defined as hi(y) = [1, φ1(f i(y)), …]T. [sent-71, score-0.413]

24 Further, since xi ∈ {−1, 1}, the probability in (2) can be compactly expressed as P(xi|y) = σ(xi wT hi(y)). [sent-77, score-0.135]

25 Finally, the association potential is defined as A(xi, y) = log(σ(xi wT hi(y))) (3). This transformation ensures that the DRF reduces to a standard logistic classifier if the interaction potential in (1) is set to zero. [sent-78, score-0.584]
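
As a quick numeric check of the compact form and the resulting association potential (a sketch; w and h_i here are arbitrary stand-in vectors):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
w, h_i = rng.normal(size=5), rng.normal(size=5)

# Since x_i is in {-1, +1} and 1 - sigma(t) = sigma(-t), the single
# expression sigma(x_i * w^T h_i) covers both class posteriors.
p_pos = sigmoid(w @ h_i)                              # P(x_i = +1 | y)
assert np.isclose(1.0 - p_pos, sigmoid(-(w @ h_i)))   # P(x_i = -1 | y)

def association(x_i):
    """A(x_i, y) = log sigma(x_i * w^T h_i(y)), equation (3)."""
    return np.log(sigmoid(x_i * (w @ h_i)))
```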

26 Note that the transformed feature vector at each site i, i.e. [sent-79, score-0.191]

27 hi(y), is a function of the whole set of observations y. [sent-81, score-0.109]

28 … y i to get the log-likelihood, which acts as the association potential. [sent-84, score-0.115]

29 [2] used the scaled likelihoods to approximate the actual likelihoods at each site required by the generative formulation. [sent-86, score-0.229]

30 These scaled likelihoods were obtained by scaling the local class posteriors learned using a neural network. [sent-87, score-0.116]

31 In contrast, in the DRF model, the local class posterior is an integral part of the full conditional model in (1). [sent-88, score-0.122]

32 Also, unlike [2], the parameters of the association and interaction potentials are learned simultaneously in the DRF framework. [sent-89, score-0.411]

33 Interaction potential. To model the interaction potential I, we first analyze the interaction potential commonly used in the MRF framework. [sent-91, score-0.598]

34 Note that the MRF framework does not permit the use of data in the interaction potential. [sent-92, score-0.229]

35 For a homogeneous and isotropic Ising model, the interaction potential is given as I = βxi xj , which penalizes every dissimilar pair of labels by the cost β [1]. [sent-93, score-0.408]

36 This form of interaction prefers piecewise constant smoothing without explicitly considering discontinuities in the data. [sent-94, score-0.25]

37 In the DRF formulation, the interaction potential is a function of all the observations y. [sent-95, score-0.289]

38 We would like to have similar labels at a pair of sites for which the observed data supports such a hypothesis. [sent-96, score-0.171]

39 In other words, we are interested in learning a pairwise discriminative model as the interaction potential. [sent-97, score-0.417]

40 For a pair of sites (i, j), let µij(ψ i(y), ψ j(y)) be a new feature vector such that µij : ℝ^γ × ℝ^γ → ℝ^q, where ψ k : y → ℝ^γ. [sent-98, score-0.132]

41 Denoting this feature vector as µij(y) for simplicity, the interaction potential is modeled as I(xi, xj, y) = xi xj vT µij(y) (4), where v are the model parameters. [sent-99, score-0.447]

42 This form of interaction potential is much simpler than the one proposed in [7], and makes the parameter learning a convex problem. [sent-101, score-0.252]

43 There are two interesting properties of the interaction potential given in (4). [sent-102, score-0.252]

44 First, if the association potential at each site and the interaction potentials of all the pairwise cliques except the pair (i, j) are set to zero in (1), the DRF acts as a logistic classifier which yields the probability of the site pair having the same label given the observed data. [sent-103, score-0.959]

45 Second, the proposed interaction potential is a generalization of the Ising model. [sent-104, score-0.252]
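
A sketch of the potential in (4) together with this Ising reduction (the one-dimensional mu_ij and the value of beta are illustrative):

```python
import numpy as np

def interaction(x_i, x_j, v, mu_ij):
    """Interaction potential (4): data-dependent smoothing that rewards
    label agreement only when the pairwise features support it."""
    return x_i * x_j * float(v @ mu_ij)

# With mu_ij(y) = [1] and v = [beta], (4) reduces to the Ising term beta*x_i*x_j.
beta = 0.7
assert np.isclose(interaction(+1, -1, np.array([beta]), np.array([1.0])), -beta)
assert np.isclose(interaction(+1, +1, np.array([beta]), np.array([1.0])), beta)
```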

46 Thus, the form in (4) acts as a data-dependent discontinuity adaptive model that will moderate smoothing when the data from the two sites is ’different’. [sent-106, score-0.187]

47 The data-dependent smoothing can be especially useful for absorbing errors in modeling the association potential. [sent-107, score-0.173]

48 Anisotropy can be easily included in the DRF model by parametrizing the interaction potentials of different directional pairwise cliques with different sets of parameters v. [sent-108, score-0.346]

49 The form of the DRF model resembles the posterior of the MRF framework assuming conditionally independent data. [sent-110, score-0.084]

50 However, in the MRF framework, the parameters of the class generative models, p(y i|xi), and the parameters of the prior random field over labels, p(x), are generally assumed to be independent and are learned separately [1]. [sent-111, score-0.196]

51 However, for the Ising model in MRFs, pseudo-likelihood tends to overestimate the interaction parameter β, causing the MAP estimates of the field to be very poor solutions [9]. [sent-119, score-0.245]

52 Our experiments in the previous work [7] and Section 4 of this paper verify these observations for the interaction parameters in DRFs too. [sent-120, score-0.267]

53 Similar to the concept of weight decay in the neural learning literature, we assume a Gaussian prior over the interaction parameters v such that p(v|τ) = N(v; 0, τ^2 I), where I is the identity matrix. [sent-122, score-0.23]
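
A hedged sketch of the resulting penalized log pseudo-likelihood objective, reusing the hypothetical h, mu, and neighbors callables from the earlier sketches (our reading of the method, not the authors' code):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def penalized_log_pl(x, y, w, v, tau, neighbors, h, mu):
    """Log pseudo-likelihood: each site is conditioned on the observed
    labels of its neighbors; the Gaussian prior N(v; 0, tau^2 I) on the
    interaction parameters appears as an L2 penalty on v."""
    def site_energy(i, xi):
        assoc = np.log(sigmoid(xi * (w @ h(i, y))))
        inter = sum(xi * x[j] * (v @ mu(i, j, y)) for j in neighbors[i])
        return assoc + inter

    ll = 0.0
    for i in range(len(x)):
        # local partition function over x_i in {-1, +1}
        log_z_i = np.logaddexp(site_energy(i, +1), site_energy(i, -1))
        ll += site_energy(i, x[i]) - log_z_i
    return ll - (v @ v) / (2.0 * tau ** 2)
```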

54 The problem of inference is to find the optimal label configuration x given an image y, where optimality is defined with respect to a cost function. [sent-133, score-0.091]

55 However, since these algorithms do not allow negative interactions between the sites, the data-dependent smoothing for each clique is set to vT µij(y) = max{0, vT µij(y)}, yielding an approximate MAP estimate. [sent-137, score-0.267]
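
In code, the clipping is a one-liner (a sketch; min-cut solvers require nonnegative, i.e. submodular, pairwise weights):

```python
import numpy as np

def clipped_smoothing(v, mu_ij):
    """Pairwise weight handed to the min-cut solver: negative
    data-dependent terms are clipped to zero, which switches the
    smoothing off at image discontinuities."""
    return max(0.0, float(v @ mu_ij))
```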

56 This is equivalent to switching the smoothing off at the image discontinuities. [sent-138, score-0.104]

57 Experiments and discussion. For comparison, an MRF framework was also learned, assuming a conditionally independent likelihood and a homogeneous and isotropic Ising interaction model. [sent-139, score-0.312]

58 So, the MRF posterior is p(x|y) = Zm^−1 exp( Σ_{i∈S} log p(si(y i)|xi) + Σ_{i∈S} Σ_{j∈Ni} β xi xj ), where β is the interaction parameter and si(y i) is a single-site feature vector at the ith site such that si : y i → ℝ^d. [sent-140, score-0.553]

59 Note that si(y i) does not take into account the influence of the data in the neighborhood of the ith site. [sent-141, score-0.113]

60 A first-order neighborhood (4 nearest neighbors) was used for label interaction in all the experiments. [sent-142, score-0.278]
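
For reference, such a first-order neighborhood on a grid of image sites could be built as follows (a small utility sketch, not from the paper):

```python
def grid_neighbors(rows, cols):
    """4-nearest-neighbor adjacency for sites indexed row-major
    on a rows x cols lattice."""
    nbrs = {}
    for r in range(rows):
        for c in range(cols):
            nbrs[r * cols + c] = [
                rr * cols + cc
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < rows and 0 <= cc < cols
            ]
    return nbrs
```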

61 Synthetic images. The aim of these experiments was to obtain correct labels from corrupted binary images. [sent-144, score-0.167]

62 For each noise model, 50 images were generated from each base image. [sent-148, score-0.131]

63 Each pixel was considered as an image site, and the feature vector si(y i) was simply chosen to be a scalar representing the intensity at the ith site. [sent-149, score-0.295]

64 In experiments with the synthetic data, no neighborhood data interaction was used for the DRFs (i.e. [sent-150, score-0.272]

65 f i(y) = si(y i)) in order to observe the gains due only to the use of discriminative models in the association and interaction potentials. [sent-152, score-0.517]

66 A linear discriminant was implemented in the association potential such that hi (y) = [1, f i (y)]T . [sent-153, score-0.216]

67 The pairwise data vector µij (y) was obtained by taking the absolute difference of si (y i ) and sj (y j ). [sent-154, score-0.092]
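
The feature maps for the synthetic experiments are therefore trivial; a sketch, where s is the flattened noisy intensity image:

```python
import numpy as np

def site_feature(s, i):
    return np.array([1.0, s[i]])            # h_i(y) = [1, f_i(y)]^T

def pairwise_feature(s, i, j):
    return np.array([abs(s[i] - s[j])])     # mu_ij(y) = |s_i(y_i) - s_j(y_j)|
```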

68 … 1 was used for training, while 150 noisy images from the remaining three base images were used for testing. [sent-157, score-0.162]

69 (i) The interaction parameters for the DRF (v) as well as for the MRF (β) were set to zero. [sent-159, score-0.23]

70 This reduces the DRF model to a logistic classifier and the MRF to a maximum likelihood (ML) classifier. [sent-160, score-0.124]

71 (iii) Finally, the DRF parameters were learned using penalized pseudo-likelihood, and the best β for the MRF was chosen by cross-validation. [sent-168, score-0.097]

72 The MAP estimates of the labels were obtained using graph cuts for both models. [sent-169, score-0.085]

73 Under the first noise model, each image pixel was corrupted with independent Gaussian noise of standard deviation 0. [sent-170, score-0.161]

74 The pixelwise classification error for this noise model is given in the top row of Table 1. [sent-174, score-0.119]

75 Since the form of the noise is the same as the likelihood model in the MRF, the MRF is … Table 1: Pixelwise classification errors (%) on 150 synthetic test images. [sent-175, score-0.092]

76 From top, first row: original images; second row: images corrupted with 'bimodal' noise; third row: MRF results; fourth row: DRF results. [sent-193, score-0.102]

77 The DRF model is affected more because all the parameters in DRFs are learned simultaneously, unlike in MRFs. [sent-198, score-0.092]

78 In the second noise model, each pixel was corrupted with independent mixture-of-Gaussians noise. [sent-199, score-0.114]

79 An interesting point to note is that the DRF yields a lower error than the MRF even when the logistic classifier has a higher error than the ML classifier on the test data. [sent-211, score-0.116]

80 The false positives (FP) for the logistic classifier were kept the same as for the DRF for detection-rate (DR) comparison. [sent-215, score-0.099]

81 Superscript − indicates that no neighborhood data interaction was used. [sent-216, score-0.241]

82 Real-world images. The proposed DRF model was applied to the task of detecting man-made structures in natural scenes. [sent-228, score-0.092]

83 The aim was to label each image site as structured or nonstructured. [sent-229, score-0.251]

84 The training and test sets contained 108 and 129 images, respectively, each of size 256×384 pixels, from the Corel image database. [sent-230, score-0.121]

85 For each image site i, a 5-dim single-site feature vector si(y i) and a 14-dim multiscale feature vector f i(y) are computed using orientation- and magnitude-based features, as described in [16]. [sent-232, score-0.334]

86 Note that f i (y) incorporates data interaction from neighboring sites. [sent-233, score-0.2]

87 For the association potentials, a transformed feature vector hi (y) was computed at each site i using quadratic transforms of vector f i (y). [sent-234, score-0.355]
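
One plausible reading of "quadratic transforms" is the standard quadratic basis expansion of f i(y) (bias, linear terms, and all pairwise products); the exact basis used in the paper is an assumption here:

```python
import numpy as np

def quadratic_transform(f):
    """Quadratic basis: [1, f_1..f_d, f_a*f_b for all a <= b]."""
    d = len(f)
    quad = [f[a] * f[b] for a in range(d) for b in range(a, d)]
    return np.concatenate(([1.0], f, quad))

f_i = np.random.default_rng(1).normal(size=14)  # stand-in 14-dim multiscale vector
h_i = quadratic_transform(f_i)                  # 1 + 14 + 105 = 120 dims
```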

88 For the MRF, each class conditional density was modeled as a mixture of five Gaussians. [sent-238, score-0.11]

89 For two typical images from the test set, the detection results for the MRF and the DRF models are given in Fig. [sent-240, score-0.111]

90 For a quantitative evaluation, we compared the detection rates and the number of false positives per image for different techniques. [sent-244, score-0.176]

91 For the comparison of detection rates, in all the experiments, the decision threshold of the logistic classifier was fixed such that it yields the same false positive rate as the DRF. [sent-245, score-0.2]

92 Thus, no neighborhood data interaction was used for both the logistic classifier and the DRF, i.e. [sent-247, score-0.03]

93 The detection rates of the MRF and the DRF are higher than that of the logistic classifier due to the label interaction. [sent-251, score-0.18]

94 However, the higher detection rate and lower false positives for the DRF in comparison to the MRF indicate the gains due to the use of discriminative models in the association and interaction potentials of the DRF. [sent-252, score-0.621]

95 In the next experiment, to take advantage of the power of the DRF framework, data interaction was allowed for both the logistic classifier and the DRF ('Logistic' and 'DRF' in Table 2). [sent-253, score-0.299]

96 The DRF detection rate increases substantially and the false positives decrease further, indicating the importance of allowing data interaction in addition to label interaction. [sent-254, score-0.359]

97 Conclusion and future work. We have presented discriminative random fields, which provide a principled approach for combining local discriminative classifiers, which allow the use of arbitrary overlapping features, with adaptive data-dependent smoothing over the label field. [sent-255, score-0.436]

98 A class of discrete multiresolution random fields and its application to image segmentation. [sent-293, score-0.115]

99 Discriminative random fields: A discriminative framework for contextual interaction in classification. [sent-319, score-0.437]

100 Man-made structure detection in natural images using a causal multiscale random field. [sent-374, score-0.166]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('drf', 0.762), ('mrf', 0.402), ('interaction', 0.2), ('discriminative', 0.154), ('site', 0.143), ('logistic', 0.099), ('association', 0.092), ('sites', 0.089), ('drfs', 0.075), ('hi', 0.072), ('ising', 0.071), ('images', 0.067), ('labels', 0.065), ('xi', 0.063), ('ij', 0.056), ('si', 0.054), ('image', 0.054), ('classi', 0.053), ('potential', 0.052), ('smoothing', 0.05), ('field', 0.047), ('detection', 0.044), ('lafferty', 0.042), ('wt', 0.041), ('neighborhood', 0.041), ('false', 0.04), ('dependencies', 0.04), ('iij', 0.039), ('crf', 0.039), ('kumar', 0.039), ('pairwise', 0.038), ('generative', 0.038), ('bimodal', 0.038), ('positives', 0.038), ('learned', 0.037), ('class', 0.037), ('observations', 0.037), ('ni', 0.037), ('label', 0.037), ('potentials', 0.036), ('noise', 0.036), ('er', 0.035), ('corrupted', 0.035), ('dr', 0.033), ('contrastive', 0.033), ('modeling', 0.031), ('multiscale', 0.031), ('synthetic', 0.031), ('conditional', 0.03), ('parameters', 0.03), ('pixelwise', 0.03), ('xni', 0.03), ('posterior', 0.03), ('penalized', 0.03), ('fields', 0.03), ('fp', 0.03), ('contextual', 0.03), ('framework', 0.029), ('accommodate', 0.028), ('xj', 0.028), ('base', 0.028), ('row', 0.028), ('map', 0.027), ('notations', 0.027), ('pl', 0.027), ('feng', 0.026), ('crfs', 0.026), ('feature', 0.026), ('eld', 0.026), ('modeled', 0.025), ('model', 0.025), ('likelihoods', 0.024), ('elds', 0.024), ('homogeneous', 0.024), ('random', 0.024), ('mrfs', 0.024), ('saddle', 0.024), ('acts', 0.023), ('transformed', 0.022), ('isotropic', 0.022), ('ml', 0.022), ('estimates', 0.02), ('efforts', 0.019), ('contrary', 0.019), ('cation', 0.018), ('posteriors', 0.018), ('mixture', 0.018), ('table', 0.018), ('ith', 0.018), ('commonly', 0.017), ('pair', 0.017), ('clique', 0.017), ('sparseness', 0.017), ('yields', 0.017), ('bayesian', 0.017), ('principled', 0.017), ('gains', 0.017), ('cliques', 0.017), ('neighbors', 0.017), ('structured', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

Author: Sanjiv Kumar, Martial Hebert

Abstract: In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. The proposed model exploits local discriminative models and allows us to relax the assumption of conditional independence of the observed data given the labels, commonly used in the Markov Random Field (MRF) framework. The parameters of the DRF model are learned using a penalized maximum pseudo-likelihood method. Furthermore, the form of the DRF model allows MAP inference for binary classification problems using graph min-cut algorithms. The performance of the model was verified on synthetic as well as real-world images. The DRF model outperforms the MRF model in the experiments. 1

2 0.085882455 21 nips-2003-An Autonomous Robotic System for Mapping Abandoned Mines

Author: David Ferguson, Aaron Morris, Dirk Hähnel, Christopher Baker, Zachary Omohundro, Carlos Reverte, Scott Thayer, Charles Whittaker, William Whittaker, Wolfram Burgard, Sebastian Thrun

Abstract: We present the software architecture of a robotic system for mapping abandoned mines. The software is capable of acquiring consistent 2D maps of large mines with many cycles, represented as Markov random fields. 3D C-space maps are acquired from local 3D range scans, which are used to identify navigable paths using A* search. Our system has been deployed in three abandoned mines, two of which were inaccessible to people, where it has acquired maps of unprecedented detail and accuracy. 1

3 0.08531446 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several lowresolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques. 1

4 0.068010464 124 nips-2003-Max-Margin Markov Networks

Author: Ben Taskar, Carlos Guestrin, Daphne Koller

Abstract: In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ability to use high-dimensional feature spaces, and from their strong theoretical guarantees. However, many real-world tasks involve sequential, spatial, or structured data, where multiple labels must be assigned. Existing kernel-based methods ignore structure in the problem, assigning labels independently to each object, losing much useful information. Conversely, probabilistic graphical models, such as Markov networks, can represent correlations between labels, by exploiting problem structure, but cannot handle high-dimensional feature spaces, and lack strong theoretical generalization guarantees. In this paper, we present a new framework that combines the advantages of both approaches: Maximum margin Markov (M3 ) networks incorporate both kernels, which efficiently deal with high-dimensional features, and the ability to capture correlations in structured data. We present an efficient algorithm for learning M3 networks based on a compact quadratic program formulation. We provide a new theoretical bound for generalization in structured domains. Experiments on the task of handwritten character recognition and collective hypertext classification demonstrate very significant gains over previous approaches. 1

5 0.065685429 73 nips-2003-Feature Selection in Clustering Problems

Author: Volker Roth, Tilman Lange

Abstract: A novel approach to combining clustering and feature selection is presented. It implements a wrapper strategy for feature selection, in the sense that the features are directly selected by optimizing the discriminative power of the used partitioning algorithm. On the technical side, we present an efficient optimization algorithm with guaranteed local convergence property. The only free parameter of this method is selected by a resampling-based stability analysis. Experiments with real-world datasets demonstrate that our method is able to infer both meaningful partitions and meaningful subsets of features. 1

6 0.062538557 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

7 0.062142394 117 nips-2003-Linear Response for Approximate Inference

8 0.060320005 193 nips-2003-Variational Linear Response

9 0.058222439 9 nips-2003-A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications

10 0.057554334 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

11 0.056154493 50 nips-2003-Denoising and Untangling Graphs Using Degree Priors

12 0.055530075 70 nips-2003-Fast Algorithms for Large-State-Space HMMs with Applications to Web Usage Analysis

13 0.050862283 12 nips-2003-A Model for Learning the Semantics of Pictures

14 0.048096165 133 nips-2003-Mutual Boosting for Contextual Inference

15 0.04775155 186 nips-2003-Towards Social Robots: Automatic Evaluation of Human-Robot Interaction by Facial Expression Classification

16 0.047465023 35 nips-2003-Attractive People: Assembling Loose-Limbed Models using Non-parametric Belief Propagation

17 0.047257029 152 nips-2003-Pairwise Clustering and Graphical Models

18 0.044903256 122 nips-2003-Margin Maximizing Loss Functions

19 0.043916982 115 nips-2003-Linear Dependent Dimensionality Reduction

20 0.043199711 113 nips-2003-Learning with Local and Global Consistency


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.16), (1, -0.062), (2, 0.004), (3, -0.024), (4, -0.022), (5, -0.084), (6, 0.021), (7, -0.051), (8, 0.003), (9, -0.037), (10, -0.016), (11, -0.01), (12, -0.097), (13, 0.006), (14, 0.02), (15, -0.015), (16, -0.055), (17, 0.013), (18, -0.067), (19, 0.089), (20, 0.035), (21, -0.017), (22, 0.063), (23, 0.003), (24, -0.03), (25, 0.012), (26, -0.006), (27, -0.049), (28, -0.011), (29, 0.022), (30, 0.031), (31, -0.118), (32, 0.006), (33, 0.026), (34, 0.046), (35, -0.007), (36, 0.028), (37, -0.039), (38, 0.037), (39, 0.123), (40, -0.036), (41, -0.016), (42, 0.025), (43, -0.006), (44, 0.057), (45, 0.065), (46, -0.108), (47, 0.112), (48, 0.115), (49, 0.008)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.89352715 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

Author: Sanjiv Kumar, Martial Hebert

Abstract: In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. The proposed model exploits local discriminative models and allows us to relax the assumption of conditional independence of the observed data given the labels, commonly used in the Markov Random Field (MRF) framework. The parameters of the DRF model are learned using a penalized maximum pseudo-likelihood method. Furthermore, the form of the DRF model allows MAP inference for binary classification problems using graph min-cut algorithms. The performance of the model was verified on synthetic as well as real-world images. The DRF model outperforms the MRF model in the experiments. 1

2 0.54639572 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

Author: Kevin P. Murphy, Antonio Torralba, William T. Freeman

Abstract: Standard approaches to object detection focus on local patches of the image, and try to classify them as background or not. We propose to use the scene context (image as a whole) as an extra source of (global) information, to help resolve local ambiguities. We present a conditional random field for jointly solving the tasks of object detection and scene classification. 1

3 0.54303116 21 nips-2003-An Autonomous Robotic System for Mapping Abandoned Mines

Author: David Ferguson, Aaron Morris, Dirk Hähnel, Christopher Baker, Zachary Omohundro, Carlos Reverte, Scott Thayer, Charles Whittaker, William Whittaker, Wolfram Burgard, Sebastian Thrun

Abstract: We present the software architecture of a robotic system for mapping abandoned mines. The software is capable of acquiring consistent 2D maps of large mines with many cycles, represented as Markov random fields. 3D C-space maps are acquired from local 3D range scans, which are used to identify navigable paths using A* search. Our system has been deployed in three abandoned mines, two of which were inaccessible to people, where it has acquired maps of unprecedented detail and accuracy. 1

4 0.48067695 17 nips-2003-A Sampled Texture Prior for Image Super-Resolution

Author: Lyndsey C. Pickup, Stephen J. Roberts, Andrew Zisserman

Abstract: Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several lowresolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques. 1

5 0.46567079 28 nips-2003-Application of SVMs for Colour Classification and Collision Detection with AIBO Robots

Author: Michael J. Quinlan, Stephan K. Chalup, Richard H. Middleton

Abstract: This article addresses the issues of colour classification and collision detection as they occur in the legged league robot soccer environment of RoboCup. We show how the method of one-class classification with support vector machines (SVMs) can be applied to solve these tasks satisfactorily using the limited hardware capacity of the prescribed Sony AIBO quadruped robots. The experimental evaluation shows an improvement over our previous methods of ellipse fitting for colour classification and the statistical approach used for collision detection.

6 0.44865978 135 nips-2003-Necessary Intransitive Likelihood-Ratio Classifiers

7 0.43007874 12 nips-2003-A Model for Learning the Semantics of Pictures

8 0.42645988 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

9 0.42531997 152 nips-2003-Pairwise Clustering and Graphical Models

10 0.42510515 100 nips-2003-Laplace Propagation

11 0.41914108 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

12 0.41195399 181 nips-2003-Statistical Debugging of Sampled Programs

13 0.4116973 153 nips-2003-Parameterized Novelty Detectors for Environmental Sensor Monitoring

14 0.38964665 193 nips-2003-Variational Linear Response

15 0.38772127 6 nips-2003-A Fast Multi-Resolution Method for Detection of Significant Spatial Disease Clusters

16 0.38304615 73 nips-2003-Feature Selection in Clustering Problems

17 0.3792755 133 nips-2003-Mutual Boosting for Contextual Inference

18 0.37559357 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

19 0.36623755 186 nips-2003-Towards Social Robots: Automatic Evaluation of Human-Robot Interaction by Facial Expression Classification

20 0.35291889 50 nips-2003-Denoising and Untangling Graphs Using Degree Priors


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.046), (11, 0.047), (26, 0.223), (29, 0.018), (30, 0.017), (35, 0.065), (53, 0.104), (69, 0.017), (71, 0.073), (76, 0.044), (85, 0.116), (91, 0.108), (99, 0.01)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.95639277 99 nips-2003-Kernels for Structured Natural Language Data

Author: Jun Suzuki, Yutaka Sasaki, Eisaku Maeda

Abstract: This paper devises a novel kernel function for structured natural language data. In the field of Natural Language Processing, feature extraction consists of the following two steps: (1) syntactically and semantically analyzing raw data, i.e., character strings, then representing the results as discrete structures, such as parse trees and dependency graphs with part-of-speech tags; (2) creating (possibly high-dimensional) numerical feature vectors from the discrete structures. The new kernels, called Hierarchical Directed Acyclic Graph (HDAG) kernels, directly accept DAGs whose nodes can contain DAGs. HDAG data structures are needed to fully reflect the syntactic and semantic structures that natural language data inherently have. In this paper, we define the kernel function and show how it permits efficient calculation. Experiments demonstrate that the proposed kernels are superior to existing kernel functions, e.g., sequence kernels, tree kernels, and bag-of-words kernels. 1

2 0.88802147 151 nips-2003-PAC-Bayesian Generic Chaining

Author: Jean-yves Audibert, Olivier Bousquet

Abstract: There exist many different generalization error bounds for classification. Each of these bounds contains an improvement over the others for certain situations. Our goal is to combine these different improvements into a single bound. In particular we combine the PAC-Bayes approach introduced by McAllester [1], which is interesting for averaging classifiers, with the optimal union bound provided by the generic chaining technique developed by Fernique and Talagrand [2]. This combination is quite natural since the generic chaining is based on the notion of majorizing measures, which can be considered as priors on the set of classifiers, and such priors also arise in the PAC-bayesian setting. 1

same-paper 3 0.84576267 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

Author: Sanjiv Kumar, Martial Hebert

Abstract: In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. The proposed model exploits local discriminative models and allows us to relax the assumption of conditional independence of the observed data given the labels, commonly used in the Markov Random Field (MRF) framework. The parameters of the DRF model are learned using a penalized maximum pseudo-likelihood method. Furthermore, the form of the DRF model allows MAP inference for binary classification problems using graph min-cut algorithms. The performance of the model was verified on synthetic as well as real-world images. The DRF model outperforms the MRF model in the experiments. 1

4 0.69092196 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games

Author: Yu-han Chang, Tracey Ho, Leslie P. Kaelbling

Abstract: In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and learn an effective policy. 1

5 0.6873197 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

Author: Jianxin Wu, James M. Rehg, Matthew D. Mullin

Abstract: Face detection is a canonical example of a rare event detection problem, in which target patterns occur with much lower frequency than nontargets. Out of millions of face-sized windows in an input image, for example, only a few will typically contain a face. Viola and Jones recently proposed a cascade architecture for face detection which successfully addresses the rare event nature of the task. A central part of their method is a feature selection algorithm based on AdaBoost. We present a novel cascade learning algorithm based on forward feature selection which is two orders of magnitude faster than the Viola-Jones approach and yields classifiers of equivalent quality. This faster method could be used for more demanding classification tasks, such as on-line learning. 1

6 0.68646759 3 nips-2003-AUC Optimization vs. Error Rate Minimization

7 0.68144566 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

8 0.68121308 101 nips-2003-Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates

9 0.68043613 78 nips-2003-Gaussian Processes in Reinforcement Learning

10 0.68038821 124 nips-2003-Max-Margin Markov Networks

11 0.68026966 113 nips-2003-Learning with Local and Global Consistency

12 0.67848754 50 nips-2003-Denoising and Untangling Graphs Using Degree Priors

13 0.6763773 57 nips-2003-Dynamical Modeling with Kernels for Nonlinear Time Series Prediction

14 0.67622298 126 nips-2003-Measure Based Regularization

15 0.67544883 147 nips-2003-Online Learning via Global Feedback for Phrase Recognition

16 0.67234039 172 nips-2003-Semi-Supervised Learning with Trees

17 0.67142081 143 nips-2003-On the Dynamics of Boosting

18 0.67011988 28 nips-2003-Application of SVMs for Colour Classification and Collision Detection with AIBO Robots

19 0.66849941 4 nips-2003-A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning

20 0.66819042 47 nips-2003-Computing Gaussian Mixture Models with EM Using Equivalence Constraints