nips nips2009 nips2009-201 knowledge-graph by maker-knowledge-mining

201 nips-2009-Region-based Segmentation and Object Detection


Source: pdf

Author: Stephen Gould, Tianshi Gao, Daphne Koller

Abstract: Object detection and multi-class image segmentation are two closely related tasks that can be greatly improved when solved jointly by feeding information from one task to the other [10, 11]. However, current state-of-the-art models use a separate representation for each task, making joint inference clumsy and leaving the classification of many parts of the scene ambiguous. In this work, we propose a hierarchical region-based approach to joint object detection and image segmentation. Our approach simultaneously reasons about pixels, regions and objects in a coherent probabilistic model. Pixel appearance features allow us to perform well on classifying amorphous background classes, while the explicit representation of regions facilitates the computation of more sophisticated features necessary for object detection. Importantly, our model gives a single unified description of the scene—we explain every pixel in the image and enforce global consistency between all random variables in our model. We run experiments on the challenging Street Scene dataset [2] and show significant improvement over state-of-the-art results for object detection accuracy.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In this work, we propose a hierarchical region-based approach to joint object detection and image segmentation. [sent-5, score-0.652]

2 Our approach simultaneously reasons about pixels, regions and objects in a coherent probabilistic model. [sent-6, score-0.481]

3 Pixel appearance features allow us to perform well on classifying amorphous background classes, while the explicit representation of regions facilitates the computation of more sophisticated features necessary for object detection. [sent-7, score-0.957]

4 We run experiments on the challenging Street Scene dataset [2] and show significant improvement over state-of-the-art results for object detection accuracy. [sent-9, score-0.547]

5 This is typified by the sliding-window object detection approach [22, 20, 4], but is also true of most other detection schemes (such as centroid-based methods [13] or boundary edge methods [5]). [sent-12, score-0.822]

6 The most successful approaches combine cues from inside the object boundary (local features) with cues from outside the object (contextual cues). [sent-13, score-0.904]

7 For example, a bounding-box based object detector includes many pixels within each candidate detection window that are not part of the object itself. [sent-19, score-1.319]

8 In this work, we propose a more integrated region-based approach that combines multi-class image segmentation with object detection. [sent-25, score-0.612]

9 At the region level we label pixels as belonging to one of a number of background classes (currently sky, tree, road, grass, water, building, mountain) or a single foreground class. [sent-27, score-0.737]

10 The foreground class is then further classified, at the object level, into one of our known object classes (currently car and pedestrian) or unknown. [sent-28, score-1.111]

11 [7] which aims to decompose an image into coherent regions by dynamically moving pixels between regions and evaluating these moves relative to a global energy objective. [sent-30, score-1.085]

12 These bottom-up pixel moves result in regions with coherent appearance. [sent-31, score-0.563]

13 Unfortunately, complex objects such as people or cars are composed of several dissimilar regions which will not be combined by this bottom-up approach. [sent-32, score-0.47]

14 For example, we can propose an entire object comprised of multiple regions and evaluate this joint move against our global objective. [sent-34, score-0.679]

15 Thus, our hierarchical model enjoys the best of both worlds: like multi-class image segmentation, our model uniquely explains every pixel in the image and groups them into semantically coherent regions. [sent-35, score-0.453]

16 Like object detection, our model uses sophisticated shape and appearance features computed over candidate object locations with precise boundaries. [sent-36, score-1.092]

17 Furthermore, our joint model over regions and objects allows context to be encoded through direct semantic relationships. [sent-37, score-0.534]

18 2 Background and Related Work: Our method inherits features from sliding-window object detector works, such as Torralba et al. [sent-40, score-0.555]

19 We further incorporate into our model many novel ideas for improving object detection via scene context. [sent-43, score-0.777]

20 The innovative works that inspire ours include predicting camera viewpoint for estimating the real world size of object candidates [12], relating “things” (objects) to nearby “stuff” (regions) [9], co-occurrence of object classes [15], and general scene “gist” [18]. [sent-44, score-1.123]

21 (e.g., object detection) to provide features for other related tasks. [sent-48, score-0.443]

22 (e.g., the pixels in a bounding box identified as “car” by the object detector may be labeled as “sky” by an image segmentation task). [sent-53, score-0.893]

23 Decomposing a scene into regions as the basis for vision tasks also appears in some scene parsing works. [sent-56, score-0.689]

24 Other works attempt to integrate tasks such as object detection and multi-class image segmentation into a single CRF model. [sent-66, score-0.798]

25 However, these models either use a different representation for object and non-object regions [23] or rely on a pixel-level representation [16]. [sent-67, score-0.621]

26 The former does not enforce label consistency between object bounding boxes and the underlying pixels while the latter does not distinguish between adjacent objects of the same class. [sent-68, score-0.866]

27 [8] also use regions for object detection instead of the traditional sliding-window approach. [sent-70, score-0.776]

28 However, unlike our method, they use a single over-segmentation of the image and make the strong assumption that each segment represents a (probabilistically) recognizable object part. [sent-71, score-0.497]

29 Furthermore, we incorporate background regions, which allows us to eliminate large portions of the image, thereby reducing the number of component regions that need to be considered for each object. [sent-74, score-0.625]

30 As a result, the approach performs poorly on most foreground object classes. [sent-82, score-0.598]

31 3 Region-based Model for Object Detection: We now present an overview of our joint object detection and scene segmentation model. [sent-83, score-0.892]

32 This model combines scene structure and semantics in a coherent energy function. [sent-84, score-0.497]

33 We address this deficiency by allowing an object to be composed of many regions (rather than trying to force dissimilar regions to merge). [sent-102, score-0.883]

34 The object to which a region belongs is denoted by its object-correspondence variable $O_r \in \{\emptyset, 1, \ldots\}$. [sent-103, score-0.588]

35 Like regions, the set of pixels that comprise the o-th object is denoted by $P_o = \bigcup_{r : O_r = o} P_r$. [sent-108, score-0.596]

36 Currently, we do not allow a single region or object to be composed of multiple disconnected components. [sent-109, score-0.588]

37 Each region has an appearance variable Ar that summarizes the appearance of the region as a whole, a semantic class label Sr (such as “road” or “foreground object”), and an object-correspondence variable Or . [sent-112, score-0.711]

38 We assume that the image was taken by a camera with horizontal axis parallel to the ground and model the horizon $v^{hz} \in [0, 1]$ as the normalized row in the image corresponding to its location. [sent-115, score-0.509]

39 The energy function includes terms for modeling the location of the horizon, region label preferences, region boundary quality, object labels, and contextual relationships between objects and regions. [sent-118, score-1.367]
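
Taken together, the terms listed above suggest that the global energy (Equation 1) decomposes additively over the scene variables R (pixel-to-region assignments), S (semantic labels), O (object correspondences), C (object classes), and the horizon. The following is a hedged reconstruction assembled from the terms named in this summary, not a verbatim copy of the paper's equation:

```latex
E(R, S, O, C, v^{hz} \mid I) =
    \psi^{hz}(v^{hz})
  + \sum_{r} \psi^{reg}_{r}(S_r, v^{hz})
  + \sum_{r \sim s} \psi^{bdry}_{rs}
  + \sum_{o} \psi^{obj}_{o}(C_o, v^{hz})
  + \sum_{o, r} \psi^{ctxt}(C_o, S_r)
```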

40 The $\psi^{hz}$ term captures the a priori location of the horizon in the scene and, in our model, is implemented as a log-Gaussian, $\psi^{hz}(v^{hz}) = -\log \mathcal{N}(v^{hz}; \mu, \sigma^2)$, with parameters $\mu$ and $\sigma$ learned from labeled training images. [sent-128, score-1.237]

41 Knowing the location of the horizon allows us to compute the world height of an object in the scene. [sent-129, score-0.527]

42 [12], it can be shown that the height $y_k$ of an object (or region) in the scene can be approximated as $y_k \approx h \frac{v_t - v_b}{v^{hz} - v_b}$, where $h$ is the height of the camera origin above the ground, and $v_t$ and $v_b$ are the rows of the top-most and bottom-most pixels in the object/region, respectively. [sent-131, score-0.974]

43 In our current work, we assume that all images were taken from the same height above the ground, allowing us to use $\frac{v_t - v_b}{v^{hz} - v_b}$ as a feature in our region and object terms. [sent-132, score-0.632]
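
Both the horizon potential and the world-height feature are cheap to evaluate. The sketch below is illustrative only: the function names and the default camera height h_camera are assumptions, not values from the paper.

```python
import numpy as np

def horizon_potential(v_hz, mu, sigma):
    # psi_hz(v_hz) = -log N(v_hz; mu, sigma^2): log-Gaussian prior on the
    # normalized horizon row, with mu and sigma learned from training data.
    return (0.5 * np.log(2.0 * np.pi * sigma ** 2)
            + (v_hz - mu) ** 2 / (2.0 * sigma ** 2))

def world_height(v_t, v_b, v_hz, h_camera=1.0):
    # y_k ~= h * (v_t - v_b) / (v_hz - v_b): approximate world height of an
    # object/region spanning normalized image rows v_t (top) to v_b (bottom).
    return h_camera * (v_t - v_b) / (v_hz - v_b)
```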

44 The region term $\psi^{reg}$ in our energy function captures the preference for a region to be assigned different semantic labels (currently sky, tree, road, grass, water, building, mountain, foreground). [sent-134, score-0.815]

45 If a region is associated with an object, then we constrain the assignment of its class label to foreground. [sent-136, score-0.436]

46 The term $\psi^{bdry}$ penalizes two adjacent regions with similar appearance or lack of boundary contrast. [sent-146, score-0.755]

47 Then the boundary term is $\psi^{bdry}_{rs} = \eta^{bdry}_A \cdot |E_{rs}| \cdot e^{-\frac{1}{2} d(A_r, A_s; \Sigma_A)^2} + \eta^{bdry}_\alpha \sum_{(p,q) \in E_{rs}} e^{-\frac{1}{2} d(\alpha_p, \alpha_q; \Sigma_\alpha)^2}$ (3), where $\Sigma_A$ and $\Sigma_\alpha$ are the image-specific pixel appearance covariance matrices computed over all pixels and neighboring pixels, respectively. [sent-150, score-1.136]

48 The parameters $\eta^{bdry}_A$ and $\eta^{bdry}_\alpha$ encode the trade-off between the region similarity and boundary contrast terms and weight them against the other terms in the energy function (Equation 1). [sent-153, score-0.504]

49 Note that the boundary term does not include semantic class or object information. [sent-154, score-0.605]
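
A minimal sketch of Equation 3 follows, assuming the cross-boundary pixel pairs and inverse covariance matrices are precomputed; the function and argument names, and the default values of eta_A and eta_alpha, are illustrative assumptions.

```python
import numpy as np

def mahalanobis_sq(x, y, cov_inv):
    # Squared Mahalanobis distance d(x, y; Sigma)^2, given Sigma^{-1}.
    d = np.asarray(x) - np.asarray(y)
    return float(d @ cov_inv @ d)

def boundary_term(A_r, A_s, boundary_pairs, Sigma_A_inv, Sigma_alpha_inv,
                  eta_A=1.0, eta_alpha=1.0):
    # Region-similarity term: adjacent regions with similar overall
    # appearance are penalized in proportion to the boundary length |E_rs|.
    similarity = (eta_A * len(boundary_pairs)
                  * np.exp(-0.5 * mahalanobis_sq(A_r, A_s, Sigma_A_inv)))
    # Contrast term: each cross-boundary pixel pair (alpha_p, alpha_q)
    # with low appearance contrast adds to the penalty.
    contrast = eta_alpha * sum(
        np.exp(-0.5 * mahalanobis_sq(a_p, a_q, Sigma_alpha_inv))
        for a_p, a_q in boundary_pairs)
    return similarity + contrast
```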

50 Going beyond the model in [7], we include object terms $\psi^{obj}$ in our energy function that score the likelihood of a group of regions being assigned a given object label. [sent-157, score-1.319]

51 Like the region term, the object term is defined by a logistic function that maps object features $\phi_o : (P_o, v^{hz}, I) \to \mathbb{R}^n$ to a probability for each object class. [sent-160, score-1.673]

52 However, since our region layer already identifies foreground regions, we would like our energy to improve only when we recognize known object classes. [sent-161, score-0.982]

53 We therefore bias the object term to give zero contribution to the energy for the class unknown. [sent-162, score-0.622]

54 Formally, we have $\psi^{obj}_o(C_o, v^{hz}) = -\eta^{obj} N_o \left[ \log \sigma(C_o \mid \phi_o; \theta^{obj}) - \log \sigma(\text{unknown} \mid \phi_o; \theta^{obj}) \right]$ (4), where $N_o$ is the number of pixels belonging to the object. [sent-163, score-0.681]
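
A sketch of Equation 4 follows, assuming the multi-class logistic outputs have already been computed from the object features; the names and the default eta_obj are assumptions. The region term is analogous, without the unknown-class offset.

```python
def object_term(class_log_probs, obj_class, n_pixels, eta_obj=1.0):
    # class_log_probs maps each class (including 'unknown') to
    # log sigma(c | phi_o; theta_obj), the log multi-class logistic output.
    # Subtracting the 'unknown' log-probability makes the term exactly zero
    # when obj_class == 'unknown', so the energy only improves when a known
    # class (car, pedestrian) is recognized.
    return -eta_obj * n_pixels * (class_log_probs[obj_class]
                                  - class_log_probs['unknown'])
```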

55 Intuitively, contextual information which relates objects to their local background can improve object detection. [sent-165, score-0.695]

56 …encoding such relationships through pairwise energy terms between objects $C_o$ and regions $S_r$. [sent-168, score-0.655]

57 Since the pairwise context term is between objects and (background) regions it grows linearly with the number of object classes. [sent-178, score-0.914]

58 3.2 Object Detectors: Performing well at object detection requires more than simple region appearance features. [sent-181, score-0.877]

59 Indeed, the power of state-of-the-art object detectors is their ability to model localized appearance and general shape characteristics of an object class. [sent-182, score-1.025]

60 Thus, in addition to raw appearance features, we append to our object feature vector φo features derived from such object detection models. [sent-183, score-1.124]

61 We discuss two methods for adapting state-of-the-art object detector technologies for this purpose. [sent-184, score-0.471]

62 In the first approach, we treat the object detector as a black-box that returns a score per (rectangular) candidate window. [sent-185, score-0.524]

63 However, recall that an object in our model is defined by a contiguous set of pixels Po , not a rectangular window. [sent-186, score-0.635]

64 Note that in the above black-box approach many of the pixels within the bounding box are not actually part of the object (consider, for example, an L-shaped region). [sent-190, score-0.673]

65 In our implementation, we use a soft mask that attenuates the intensity of pixels outside the object based on their distance to the object boundary (see Figure 2). [sent-192, score-1.199]
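
The extracted text does not give the exact attenuation function, so the sketch below uses an exponential fall-off with distance to the object as one plausible choice; the decay scale sigma is an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def soft_mask(object_mask, sigma=8.0):
    # object_mask: boolean image, True on pixels belonging to the object.
    # For pixels outside the object, distance_transform_edt returns the
    # Euclidean distance to the nearest object pixel; object pixels get
    # distance 0, so they keep full weight while outside pixels decay.
    dist_outside = distance_transform_edt(~object_mask)
    return np.exp(-dist_outside / sigma)
```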

66 For both approaches, we append the score (for each object) from the object detection classifiers (linear SVM or boosted decision trees) to the object feature vector $\phi_o$. [sent-199, score-0.978]

67 Figure 2 illustrates the soft mask for proposed object regions, comparing (a) the full window, (b) a hard region mask, (c) a hard window, (d) a soft region mask, and (e) a soft window. [sent-200, score-1.189]

68 That is, when we extract object-detector features we map the object pixels Po onto the original image and extract our features at the higher resolution. [sent-207, score-0.803]

69 We initialize the scene by segmenting the image using an off-the-shelf unsupervised segmentation algorithm (in our experiments we use meanshift [3]). [sent-215, score-0.45]

70 Thus we remove the object variables O and C from the model and artificially increase the boundary term weights ($\eta^{bdry}_\alpha$ and $\eta^{bdry}_A$) to promote merging. [sent-218, score-0.902]

71 In this phase, the algorithm behaves exactly as in [7], iteratively proposing re-assignments of pixels to regions (variables R) and recomputing the optimal assignment of the remaining variables (S and $v^{hz}$). [sent-219, score-0.675]

72 In the second phase, we anneal the boundary term weights and introduce object variables over all foreground regions. [sent-223, score-0.76]

73 (see Section 4.2 below) of new regions generated from sliding-window object candidates (affecting both R and O). [sent-225, score-0.663]

74 Since only part of the scene is changing during any iteration we only need to recompute the features and energy terms for the regions affected by a move. [sent-228, score-0.698]

75 This allows us to maximize each region term independently during each proposal step—we use an iterated conditional modes (ICM) update to optimize $v^{hz}$ after the region labels have been inferred. [sent-231, score-0.707]
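
A greedy hill-climbing skeleton of this inference procedure might look as follows; propose_moves and energy are placeholders for the move generators and energy evaluation described above, and the real system additionally anneals the boundary weights and recomputes only the terms affected by a move.

```python
def infer_scene(scene, propose_moves, energy):
    # Greedy hill-climbing: repeatedly evaluate all proposal moves from the
    # current scene decomposition, keep the best energy-lowering candidate,
    # and stop at a local optimum of the global energy.
    while True:
        best, best_energy = scene, energy(scene)
        for candidate in propose_moves(scene):
            e = energy(candidate)
            if e < best_energy:
                best, best_energy = candidate, e
        if best is scene:
            return scene
        scene = best
```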

76 4.2 Proposal Moves: We now describe the set of pixel and region proposal moves considered by our algorithm. [sent-235, score-0.516]

77 These moves are relative to the current best scene decomposition and are designed to take large steps in the energy space to avoid local minima. [sent-236, score-0.559]

78 During inference, these segments are used to propose a re-assignment of all pixels in the segment to a neighboring region or the creation of a new region. [sent-243, score-0.442]

79 These bottom-up proposal moves work well for background classes, but tend to result in over-segmented foreground classes, which have heterogeneous appearance; for example, one would not expect the wheels and body of a car to be grouped together by a bottom-up approach. [sent-244, score-0.595]

80 An analogous set of moves can be used for merging two adjacent objects or assigning regions to objects. [sent-245, score-0.599]

81 However, if an object is decomposed into multiple regions, this bottom-up approach is problematic as multiple such moves may be required to produce a complete object. [sent-246, score-0.533]

82 We get around this difficulty by introducing a new set of powerful top-down proposal moves based on object detection candidates. [sent-248, score-0.753]

83 Here we use pre-computed candidates from a sliding-window detector to propose new foreground regions with a corresponding object variable. [sent-249, score-0.948]
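
As a rough sketch of such a top-down move generator (the detection representation and the threshold here are assumptions, not the paper's):

```python
import numpy as np

def topdown_proposals(detections, threshold=0.5):
    # Each precomputed sliding-window candidate is represented here as a
    # (score_map, label) pair, where score_map holds a per-pixel mask
    # weight over the image. A candidate proposes a new foreground region
    # (the sufficiently weighted pixels) plus an object variable carrying
    # the detector's class label.
    for score_map, label in detections:
        region_pixels = np.argwhere(score_map > threshold)  # (row, col) pairs
        yield region_pixels, label
```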

84 Boosted pixel appearance features (see [7]) and object detectors are learned separately and their outputs are provided as input features to the combined model. [sent-271, score-0.817]

85 For both the base object detectors and the parameters of the region and object terms, we use a closed-loop learning technique where we first learn an initial set of parameters from training data. [sent-272, score-1.055]

86 We then run inference on our training set and record mistakes made by the algorithm (false-positives for object detection and incorrect moves for the full algorithm). [sent-273, score-0.758]

87 The dataset comes with hand-annotated region labels and object boundaries. [sent-279, score-0.588]

88 Since our algorithm produces a MAP estimate of the scene, we cannot simply generate a precision-recall curve by varying the object classifier threshold, as is usual for reporting object detection results. [sent-289, score-1.169]

89 Unlike the baselines, this forces only one candidate object per region. [sent-297, score-0.445]

90 However, by trading off the strength (and hence operating point) of the energy terms in our model, we can increase the maximum recall for a given object class. [sent-298, score-0.58]

91 For example, by increasing the weight of the object term by a factor of 30 we were able to increase pedestrian recall from 0. [sent-300, score-0.527]

92 The first row shows the original image (left) together with annotated regions and objects (middle-left), regions (middle-right) and predicted horizon (right). [sent-309, score-0.827]

93 6 Discussion: In this paper we have presented a hierarchical model for joint object detection and image segmentation. [sent-315, score-0.652]

94 One of the difficulties in our model is learning the trade-off between energy terms—too strong a boundary penalty and all regions will be merged together, while too weak a penalty and the scene will be split into too many segments. [sent-319, score-0.767]

95 A unified system for object detection, texture recognition, and context analysis based on the standard model feature set. [sent-339, score-0.439]

96 Combined object categorization and segmentation with an implicit shape model. [sent-404, score-0.539]

97 Nonparametric scene parsing: Label transfer via dense scene alignment. [sent-410, score-0.46]

98 TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. [sent-425, score-0.471]

99 Contextual models for object detection using boosted random fields. [sent-453, score-0.586]

100 A dynamic conditional random field model for joint labeling of object and scene classes. [sent-473, score-0.622]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('object', 0.392), ('scene', 0.23), ('regions', 0.229), ('hz', 0.208), ('foreground', 0.206), ('pixels', 0.204), ('region', 0.196), ('energy', 0.188), ('bdry', 0.174), ('objects', 0.173), ('detection', 0.155), ('moves', 0.141), ('appearance', 0.134), ('sr', 0.126), ('boundary', 0.12), ('obj', 0.118), ('ctxt', 0.116), ('segmentation', 0.115), ('pixel', 0.114), ('image', 0.105), ('co', 0.105), ('gould', 0.101), ('reg', 0.1), ('pedestrian', 0.093), ('horizon', 0.091), ('car', 0.085), ('coherent', 0.079), ('detector', 0.079), ('detectors', 0.075), ('rb', 0.072), ('hog', 0.069), ('contextual', 0.068), ('sky', 0.065), ('proposal', 0.065), ('background', 0.062), ('torralba', 0.061), ('vb', 0.06), ('po', 0.06), ('mask', 0.058), ('heitz', 0.058), ('move', 0.058), ('adjacent', 0.056), ('candidate', 0.053), ('road', 0.053), ('holistic', 0.052), ('semantic', 0.051), ('features', 0.051), ('stuff', 0.051), ('semantically', 0.05), ('rp', 0.05), ('context', 0.047), ('pr', 0.047), ('hoiem', 0.046), ('nr', 0.046), ('street', 0.045), ('height', 0.044), ('window', 0.044), ('shotton', 0.043), ('term', 0.042), ('captures', 0.042), ('segments', 0.042), ('candidates', 0.042), ('grass', 0.041), ('dalal', 0.041), ('bounding', 0.041), ('boosted', 0.039), ('rectangular', 0.039), ('cntxt', 0.039), ('polygons', 0.039), ('subtask', 0.039), ('vhz', 0.039), ('inference', 0.039), ('sophisticated', 0.038), ('eccv', 0.036), ('classes', 0.036), ('box', 0.036), ('currently', 0.035), ('rs', 0.035), ('cars', 0.035), ('assignment', 0.034), ('merge', 0.034), ('relationships', 0.034), ('urban', 0.034), ('bicycles', 0.034), ('et', 0.033), ('belonging', 0.033), ('dissimilar', 0.033), ('water', 0.033), ('soft', 0.033), ('shape', 0.032), ('ijcv', 0.032), ('dictionary', 0.032), ('overlapping', 0.031), ('pairwise', 0.031), ('cvpr', 0.031), ('mistakes', 0.031), ('logit', 0.031), ('masked', 0.031), ('triggs', 0.031), ('works', 0.031)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 201 nips-2009-Region-based Segmentation and Object Detection

Author: Stephen Gould, Tianshi Gao, Daphne Koller

Abstract: Object detection and multi-class image segmentation are two closely related tasks that can be greatly improved when solved jointly by feeding information from one task to the other [10, 11]. However, current state-of-the-art models use a separate representation for each task, making joint inference clumsy and leaving the classification of many parts of the scene ambiguous. In this work, we propose a hierarchical region-based approach to joint object detection and image segmentation. Our approach simultaneously reasons about pixels, regions and objects in a coherent probabilistic model. Pixel appearance features allow us to perform well on classifying amorphous background classes, while the explicit representation of regions facilitates the computation of more sophisticated features necessary for object detection. Importantly, our model gives a single unified description of the scene—we explain every pixel in the image and enforce global consistency between all random variables in our model. We run experiments on the challenging Street Scene dataset [2] and show significant improvement over state-of-the-art results for object detection accuracy.

2 0.36657584 211 nips-2009-Segmenting Scenes by Matching Image Composites

Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman

Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.

3 0.26710328 133 nips-2009-Learning models of object structure

Author: Joseph Schlecht, Kobus Barnard

Abstract: We present an approach for learning stochastic geometric models of object categories from single view images. We focus here on models expressible as a spatially contiguous assemblage of blocks. Model topologies are learned across groups of images, and one or more such topologies is linked to an object category (e.g. chairs). Fitting learned topologies to an image can be used to identify the object class, as well as detail its geometry. The latter goes beyond labeling objects, as it provides the geometric structure of particular instances. We learn the models using joint statistical inference over category parameters, camera parameters, and instance parameters. These produce an image likelihood through a statistical imaging model. We use trans-dimensional sampling to explore topology hypotheses, and alternate between Metropolis-Hastings and stochastic dynamics to explore instance parameters. Experiments on images of furniture objects such as tables and chairs suggest that this is an effective approach for learning models that encode simple representations of category geometry and the statistics thereof, and support inferring both category and geometry on held out single view images. 1

4 0.18824963 175 nips-2009-Occlusive Components Analysis

Author: Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

Abstract: We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods. 1

5 0.18105902 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

Author: Lan Du, Lu Ren, Lawrence Carin, David B. Dunson

Abstract: A non-parametric Bayesian model is proposed for processing multiple images. The analysis employs image features and, when present, the words associated with accompanying annotations. The model clusters the images into classes, and each image is segmented into a set of objects, also allowing the opportunity to assign a word to each object (localized labeling). Each object is assumed to be represented as a heterogeneous mix of components, with this realized via mixture models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models are inferred nonparametrically. To constitute spatially contiguous objects, a new logistic stick-breaking process is developed. Inference is performed efficiently via variational Bayesian analysis, with example results presented on two image databases.

6 0.16741021 44 nips-2009-Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

7 0.16575183 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

8 0.16564049 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection

9 0.16487153 236 nips-2009-Structured output regression for detection with partial truncation

10 0.13296694 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition

11 0.12399213 85 nips-2009-Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model

12 0.12396407 251 nips-2009-Unsupervised Detection of Regions of Interest Using Iterative Link Analysis

13 0.12162589 97 nips-2009-Free energy score space

14 0.1115891 96 nips-2009-Filtering Abstract Senses From Image Search Results

15 0.10163409 149 nips-2009-Maximin affinity learning of image segmentation

16 0.098671973 260 nips-2009-Zero-shot Learning with Semantic Output Codes

17 0.094289497 86 nips-2009-Exploring Functional Connectivities of the Human Brain using Multivariate Information Analysis

18 0.093057461 88 nips-2009-Extending Phase Mechanism to Differential Motion Opponency for Motion Pop-out

19 0.08816272 2 nips-2009-3D Object Recognition with Deep Belief Nets

20 0.085323296 102 nips-2009-Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.265), (1, -0.234), (2, -0.241), (3, -0.079), (4, -0.043), (5, 0.257), (6, -0.0), (7, 0.041), (8, 0.214), (9, -0.091), (10, 0.034), (11, -0.041), (12, 0.147), (13, -0.129), (14, -0.036), (15, 0.061), (16, -0.019), (17, -0.154), (18, 0.155), (19, -0.09), (20, 0.02), (21, 0.026), (22, -0.063), (23, -0.041), (24, -0.058), (25, -0.093), (26, 0.033), (27, 0.009), (28, -0.03), (29, -0.027), (30, -0.059), (31, 0.077), (32, 0.036), (33, -0.005), (34, -0.004), (35, -0.096), (36, -0.067), (37, 0.06), (38, -0.032), (39, 0.01), (40, -0.024), (41, -0.028), (42, 0.108), (43, -0.105), (44, 0.058), (45, -0.03), (46, 0.037), (47, 0.005), (48, -0.059), (49, 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98706162 201 nips-2009-Region-based Segmentation and Object Detection

Author: Stephen Gould, Tianshi Gao, Daphne Koller

Abstract: Object detection and multi-class image segmentation are two closely related tasks that can be greatly improved when solved jointly by feeding information from one task to the other [10, 11]. However, current state-of-the-art models use a separate representation for each task, making joint inference clumsy and leaving the classification of many parts of the scene ambiguous. In this work, we propose a hierarchical region-based approach to joint object detection and image segmentation. Our approach simultaneously reasons about pixels, regions and objects in a coherent probabilistic model. Pixel appearance features allow us to perform well on classifying amorphous background classes, while the explicit representation of regions facilitates the computation of more sophisticated features necessary for object detection. Importantly, our model gives a single unified description of the scene—we explain every pixel in the image and enforce global consistency between all random variables in our model. We run experiments on the challenging Street Scene dataset [2] and show significant improvement over state-of-the-art results for object detection accuracy.

2 0.85481542 211 nips-2009-Segmenting Scenes by Matching Image Composites

Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman

Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.

3 0.83190006 133 nips-2009-Learning models of object structure

Author: Joseph Schlecht, Kobus Barnard

Abstract: We present an approach for learning stochastic geometric models of object categories from single view images. We focus here on models expressible as a spatially contiguous assemblage of blocks. Model topologies are learned across groups of images, and one or more such topologies is linked to an object category (e.g. chairs). Fitting learned topologies to an image can be used to identify the object class, as well as detail its geometry. The latter goes beyond labeling objects, as it provides the geometric structure of particular instances. We learn the models using joint statistical inference over category parameters, camera parameters, and instance parameters. These produce an image likelihood through a statistical imaging model. We use trans-dimensional sampling to explore topology hypotheses, and alternate between Metropolis-Hastings and stochastic dynamics to explore instance parameters. Experiments on images of furniture objects such as tables and chairs suggest that this is an effective approach for learning models that encode simple representations of category geometry and the statistics thereof, and support inferring both category and geometry on held out single view images. 1

4 0.79474968 44 nips-2009-Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

Author: Tomasz Malisiewicz, Alyosha Efros

Abstract: The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object’s relationship to other elements of the scene (context). Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearancebased model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba’s proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling appears to be quite beneficial, and may be the critical missing ingredient in scene understanding systems. 1

5 0.76488942 175 nips-2009-Occlusive Components Analysis

Author: Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

Abstract: We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods. 1

6 0.75970399 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

7 0.74749321 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

8 0.66176385 236 nips-2009-Structured output regression for detection with partial truncation

9 0.66109842 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection

10 0.62721235 85 nips-2009-Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model

11 0.62062275 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition

12 0.60882193 251 nips-2009-Unsupervised Detection of Regions of Interest Using Iterative Link Analysis

13 0.55835569 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

14 0.52675557 149 nips-2009-Maximin affinity learning of image segmentation

15 0.50066555 96 nips-2009-Filtering Abstract Senses From Image Search Results

16 0.49653566 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection

17 0.447375 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification

18 0.43867257 235 nips-2009-Structural inference affects depth perception in the context of potential occlusion

19 0.42107296 97 nips-2009-Free energy score space

20 0.3815721 93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(24, 0.024), (25, 0.508), (35, 0.071), (36, 0.079), (39, 0.05), (58, 0.063), (71, 0.037), (81, 0.017), (86, 0.053)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.99540448 152 nips-2009-Measuring model complexity with the prior predictive

Author: Wolf Vanpaemel

Abstract: In the last few decades, model complexity has received a lot of press. While many methods have been proposed that jointly measure a model’s descriptive adequacy and its complexity, few measures exist that measure complexity in itself. Moreover, existing measures ignore the parameter prior, which is an inherent part of the model and affects the complexity. This paper presents a stand alone measure for model complexity, that takes the number of parameters, the functional form, the range of the parameters and the parameter prior into account. This Prior Predictive Complexity (PPC) is an intuitive and easy to compute measure. It starts from the observation that model complexity is the property of the model that enables it to fit a wide range of outcomes. The PPC then measures how wide this range exactly is. keywords: Model Selection & Structure Learning; Model Comparison Methods; Perception
One problem with this simulation-based approach is that it is very labor intensive. It requires generating a large amount of artificial data sets, and fitting the models to all these data sets. Further, it relies on choices that are often made in an arbitrary fashion that nonetheless bias the results. For example, in the semi-theoretical approach, a crucial choice is which functions to use. Similarly, in the theoretical approach, results are heavily influenced by the parameter values used in generating the data. If they are fixed, on what basis? If they are estimated from empirical data, from which data? If they are drawn randomly, from which distribution? Further, a simulation study only gives a rough idea of complexity differences but provides no direct measure reflecting the complexity. A number of proposals have been made to measure model complexity more directly. Consider a model M with k parameters, summarized in the parameter vector θ = (θ1 , θ2 , . . . , θk , ) which has a range indicated by Ω. Let d denote the data and p(d|θ, M ) the likelihood. The most straightforward measure of model complexity is the parametric complexity (PC), which simply counts the number of parameters: PC = k. (1) PC is attractive as a measure of model complexity since it is very easy to calculate. Further, it has a direct and well understood relation toward complexity: the more parameters, the more complex the model. It is included as the complexity term of several generalizability measures such as AIC [8] and BIC [9], and it is at the heart of the Likelihood Ratio Test. Despite this intuitive appeal, PC is not free from problems. One problem with PC is that it reflects only a single aspect of complexity. Also the parameter range and the functional form (the way the parameters are combined in the model equation) influence a model’s complexity, but these dimensions of complexity are ignored in PC [2, 6]. A complexity measure that takes these three dimensions into account is provided by the geometric complexity (GC) measure, which is inspired by differential geometry [10]. In GC, complexity is conceptualized as the number of distinguishable probability distributions a model can generate. It is defined by GC = k n ln + ln 2 2π det I(θ|M )dθ, (2) Ω where n indicates the size of the data sample and I(θ) is the Fisher Information Matrix: Iij (θ|M ) = −Eθ ∂ 2 ln p(d|θ, M ) . ∂θi ∂θj (3) Note that I(θ|M ) is determined by the likelihood function p(d|θ, M ), which is in turn determined by the model equation. Hence GC is sensitive to the number of parameters (through k), the functional form (through I), and the range (through Ω). Quite surprisingly, GC turns out to be equal to the complexity term used in one version of Minimum Description Length (MDL), a measure of generalizability developed within the domain of information theory [2, 11, 12, 13]. GC contrasts favorably with PC, in the sense that it takes three dimensions of complexity into account rather than a single one. A major drawback of GC is that, unlike PC, it requires considerable technical sophistication to be computed, as it relies on the second derivative of the likelihood. A more important limitation of both PC and GC is that these measures are insensitive to yet another important dimension contributing to model complexity: the prior distribution over the model parameters. The relation between the parameter prior distribution and model complexity is discussed next. 
2 2 Model complexity and the parameter prior The growing popularity of Bayesian methods in psychology has not only raised awareness that model complexity should be taken into account when testing models [6], it has also drawn attention to the fact that in many occasions, relevant prior information is available [14]. In Bayesian methods, there is room to incorporate this information in two different flavors: as a prior distribution over the models, or as a prior distribution over the parameters. Specifying a model prior is a daunting task, so almost invariably, the model prior is taken to be uniform (but see [15] for an exception). In contrast, information regarding the parameter is much easier to include, although still challenging (e.g., [16]). There are two ways to formalize prior information about a model’s parameters: using the parameter prior range (often referred to as simply the range) and using the parameter prior distribution (often referred to as simply the prior). The prior range indicates which parameter values are allowed and which are forbidden. The prior distribution indicates which parameter values are likely and which are unlikely. Models that share the same equation and the same range but differ in the prior distribution can be considered different models (or at least different model versions), just like models that share the same equation but differ in range are different model versions. Like the parameter prior range, the parameter prior distribution influences the model complexity. In general, a model with a vague parameter prior distribution is more complex than a model with a sharply peaked parameter prior distribution, much as a model with a broad-ranged parameter is more complex than the same model where the parameter is heavily restricted. To drive home the point that the parameter prior should be considered when model complexity is assessed, consider the following “fair coin” model Mf and a “biased coin” model Mb . There is a clear intuitive complexity difference between these models: Mb is more complex than Mf . The most straightforward way to formalize these models is as follows, where ph denotes the probability of observing heads: ph = 1/2, (4) ph = θ 0≤θ≤1 p(θ) = 1, (5) for model Mf and the triplet of equations jointly define model Mb . The range forbids values smaller than 0 or greater than 1 because ph is a proportion. As Mf and Mb have a different number of parameters, both PC and GC, being sensitive to the number of parameters, pick up the difference in model complexity between the models. Alternatively, model Mf could be defined as follows: ph = θ 0≤θ≤1 1 p(θ) = δ(θ − ), 2 (6) where δ(x) is the Dirac delta. Note that the model formalized in Equation 6 is exactly identical the model formalized in Equation 4. However, relying on the formulation of model Mf in Equation 6, PC and GC now judge Mf and Mb to be equally complex: both models share the same model equation (which implies they have the same number of parameters and the same functional form) and the same range for the parameter. Hence, PC and GC make an incorrect judgement of the complexity difference between both models. This misjudgement is a direct result of the insensitivity of these measures to the parameter prior. As models Mf and Mb have different prior distributions over their parameter, a measure sensitive to the prior would pick up the complexity difference between these models. Such a measure is introduced next. 
3 The Prior Predictive Complexity Model complexity refers to the property of the model that enables it to predict a wide range of data patterns [2]. The idea of the PPC is to measure how wide this range exactly is. A complex model 3 can predict many outcomes, and a simple model can predict a few outcomes only. Model simplicity, then, refers to the property of placing restrictions on the possible outcomes: the greater restrictions, the greater the simplicity. To understand how model complexity is measured in the PPC, it is useful to think about the universal interval (UI) and the predicted interval (PI). The universal interval is the range of outcomes that could potentially be observed, irrespective of any model. For example, in an experiment with n binomial trials, it is impossible to observe less that zero successes, or more than n successes, so the range of possible outcomes is [0, n] . Similarly, the universal interval for a proportion is [0, 1]. The predicted interval is the interval containing all outcomes the model predicts. An intuitive way to gauge model complexity is then the cardinality of the predicted interval, relative to the cardinality of the universal interval, averaged over all m conditions or stimuli: PPC = 1 m m i=1 |PIi | . |UIi | (7) A key aspect of the PPC is deriving the predicted interval. For a parameterized likelihood-based model, prediction takes the form of a distribution over all possible outcomes for some future, yet-tobe-observed data d under some model M . This distribution is called the prior predictive distribution (ppd) and can be calculated using the law of total probability: p(d|M ) = p(d|θ, M )p(θ|M )dθ. (8) Ω Predicting the probability of unseen future data d arising under the assumption that model M is true involves integrating the probability of the data for each of the possible parameter values, p(d|θ, M ), as weighted by the prior probability of each of these values, p(θ|M ). Note that the ppd relies on the number of parameters (through the number of integrals and the likelihood), the model equation (through the likelihood), and the parameter range (through Ω). Therefore, as GC, the PPC is sensitive to all these aspects. In contrast to GC, however, the ppd, and hence the PPC, also relies on the parameter prior. Since predictions are made probabilistically, virtually all outcomes will be assigned some prior weight. This implies that, in principle, the predicted interval equals the universal interval. However, for some outcomes the assigned weight will be extremely small. Therefore, it seems reasonable to restrict the predicted interval to the smallest interval that includes some predetermined amount of the prior mass. For example, the 95% predictive interval is defined by those outcomes with the highest prior mass that together make up 95% of the prior mass. Analytical solutions to the integral defining the ppd are rarely available. Instead, one should rely on approximations to the ppd by drawing samples from it. In the current study, sampling was performed using WinBUGS [17, 18], a highly versatile, user friendly, and freely available software package. It contains sophisticated and relatively general-purpose Markov Chain Monte Carlo (MCMC) algorithms to sample from any distribution of interest. 
4 An application example The PPC is illustrated by comparing the complexity of two popular models of information integration, which attempt to account for how people merge potentially ambiguous or conflicting information from various sensorial sources to create subjective experience. These models either assume that the sources of information are combined additively (the Linear Integration Model; LIM; [19]) or multiplicatively (the Fuzzy Logical Model of Perception; FLMP; [20, 21]). 4.1 Information integration tasks A typical information integration task exposes participants simultaneously to different sources of information and requires this combined experience to be identified in a forced-choice identification task. The presented stimuli are generated from a factorial manipulation of the sources of information by systematically varying the ambiguity of each of the sources. The relevant empirical data consist 4 of, for each of the presented stimuli, the counts km of the number of times the mth stimulus was identified as one of the response alternatives, out of the tm trials on which it was presented. For example, an experiment in phonemic identification could involve two phonemes to be identified, /ba/ and /da/ and two sources of information, auditory and visual. Stimuli are created by crossing different levels of audible speech, varying between /ba/ and /da/, with different levels of visible speech, also varying between these alternatives. The resulting set of stimuli spans a continuum between the two syllables. The participant is then asked to listen and to watch the speaker, and based on this combined audiovisual experience, to identify the syllable as being either /ba/ or /da/. In the so-called expanded factorial design, not only bimodal stimuli (containing both auditory and visual information) but also unimodal stimuli (providing only a single source of information) are presented. 4.2 Information integration models In what follows, the formal description of the LIM and the FLMP is outlined for a design with two response alternatives (/da/ or /ba/) and two sources (auditory and visual), with I and J levels, respectively. In such a two-choice identification task, the counts km follow a Binomial distribution: km ∼ Binomial(pm , tm ), (9) where pm indicates the probability that the mth stimulus is identified as /da/. 4.2.1 Model equation The probability for the stimulus constructed with the ith level of the first source and the jth level of the second being identified as /da/ is computed according to the choice rule: pij = s (ij, /da/) , s (ij, /da/) + s (ij, /ba/) (10) where s (ij, /da/) represents the overall degree of support for the stimulus to be /da/. The sources of information are assumed to be evaluated independently, implying that different parameters are used for the different modalities. In the present example, the degree of auditory support for /da/ is denoted by ai (i = 1, . . . , I) and the degree of visual support for /da/ by bj (j = 1, . . . , J). When a unimodal stimulus is presented, the overall degree of support for each alternative is given by s (i∗, /da/) = ai and s (∗j, /da/) = bj , where the asterisk (*) indicates the absence of information, implying that Equation 10 reduces to pi∗ = ai and p∗j = bj . (11) When a bimodal stimulus is presented, the overall degree of support for each alternative is based on the integration or blending of both these sources. Hence, for bimodal stimuli, s (ij, /da/) = ai bj , where the operator denotes the combination of both sources. 
Hence, Equation 10 reduces to ai bj . (12) pij = ai bj + (1 − ai ) (1 − bj ) = +, so Equation 12 becomes The LIM assumes an additive combination, i.e., pij = ai + bj . 2 (13) The FLMP, in contrast, assumes a multiplicative combination, i.e., = ×, so Equation 12 becomes ai bj . ai bj + (1 − ai )(1 − bj ) (14) pij = 5 4.2.2 Parameter prior range and distribution Each level of auditory and visual support for /da/ (i.e., ai and bj , respectively) is associated with a free parameter, which implies that the FLMP and the LIM have an equal number of free parameters, I + J. Each of these parameters is constrained to satisfy 0 ≤ ai , bj ≤ 1. The original formulations of the LIM and FLMP unfortunately left the parameter priors unspecified. However, an implicit assumption that has been commonly used is a uniform prior for each of the parameters. This assumption implicitly underlies classical and widely adopted methods for model evaluation using accounted percentage of variance or maximum likelihood. ai ∼ Uniform(0, 1) and bi ∼ Uniform(0, 1) for i = 1, . . . , I; j = 1, . . . , J. (15) The models relying on this set of uniform priors will be referred to as LIMu and FLMPu . Note that LIMu and FLMPu treat the different parameters as independent. This approach misses important information. In particular, the experimental design is such that the amount of support for each level i + 1 is always higher than for level i. Because parameter ai (or bi ) corresponds to the degree of auditory (or visual) support for a unimodal stimulus at the ith level, it seems reasonable to expect the following orderings among the parameters to hold (see also [6]): aj > ai and bj > bi for j > i. (16) The models relying on this set of ordered priors will be referred to as LIMo and FLMPo . 4.3 Complexity and experimental design It is tempting to consider model complexity as an inherent characteristic of a model. For some models and for some measures of complexity this is clearly the case. Consider, for example, model Mb . In any experimental design (i.e., a number of coin tosses), PCMb = 1. However, more generally, this is not the case. Focusing on the FLMP and the LIM, it is clear that even a simple measure as PC depends crucially on (some aspects of) the experimental design. In particular, every level corresponds to a new parameter, so PC = I + J . Similarly, GC is dependent on design choices. The PPC is not different in this respect. The design sensitivity implies that one can only make sensible conclusions about differences in model complexity by using different designs. In an information integration task, the design decisions include the type of design (expanded or not), the number of sources, the number of response alternatives, the number of levels for each source, and the number of observations for each stimulus (sample size). The present study focuses on the expanded factorial designs with two sources and two response alternatives. The additional design features were varied: both a 5 × 5 and a 8 × 2 design were considered, using three different sample sizes (20, 60 and 150, following [2]). 4.4 Results Figure 1 shows the 99% predicted interval in the 8×2 design with n = 150. Each panel corresponds to a different model. In each panel, each of the 26 stimuli is displayed on the x-axis. The first eight stimuli correspond to the stimuli with the lowest level of visual support, and are ordered in increasing order of auditory support. The next eight stimuli correspond to the stimuli with the highest level of visual support. 
4.4 Results

Figure 1 shows the 99% predicted interval in the 8 × 2 design with n = 150. Each panel corresponds to a different model, and in each panel each of the 26 stimuli is displayed on the x-axis. The first eight stimuli are those with the lowest level of visual support, ordered by increasing auditory support; the next eight are those with the highest level of visual support. The next eight stimuli correspond to the unimodal stimuli where only auditory information is provided (again ranked in increasing order), and the final two stimuli are the unimodal visual stimuli.

Panel A shows that the predicted interval of LIMu nearly equals the universal interval, ranging between 0 and 1. This indicates that almost all outcomes are given non-negligible prior mass by LIMu, making it almost maximally complex. FLMPu is even more complex: its predicted interval, shown in Panel B, virtually equals the universal interval, indicating that the model predicts virtually every possible outcome. Panels C and D show the dramatic effect of incorporating relevant prior information in the models: the predicted intervals of both LIMo and FLMPo are much smaller than those of their counterparts using the uniform priors.

[Figure 1: The 99% predicted interval (y-axis: proportion of /da/ responses) for each of the 26 stimuli (x-axis) according to LIMu (Panel A), FLMPu (Panel B), LIMo (Panel C), and FLMPo (Panel D).]

Focusing on the comparison between the LIM and the FLMP, the PPC indicates that the latter is more complex than the former. This observation holds irrespective of the model version (uniform vs. ordered priors). The smaller complexity of the LIM is in line with previous attempts to measure the relative complexities of the LIM and the FLMP, such as the atheoretical simulation-based approach ([4], but see [5]), the semi-theoretical simulation-based approach [4], the theoretical simulation-based approach [2, 6, 22], and a direct computation of the GC [2].

The PPCs for all six designs considered are displayed in Table 1.

Table 1: PPC, based on the 99% predicted interval, for four models across six different designs.

              5×5                      8×2
        n=20   n=60   n=150     n=20   n=60   n=150
LIMu    0.97   0.94   0.93      0.97   0.95   0.94
FLMPu   1      1      0.99      1      1      0.99
LIMo    0.75   0.67   0.64      0.77   0.69   0.66
FLMPo   0.83   0.80   0.78      0.86   0.82   0.81

It shows that the observations made for the 8 × 2, n = 150 design hold across the five remaining designs as well: the LIM is simpler than the FLMP, and models assuming ordered priors are simpler than models assuming uniform priors. Note that these conclusions would not have been possible based on PC or GC. For PC, all four models have the same complexity. GC, in contrast, would detect the complexity differences between the LIM and the FLMP (i.e., the first conclusion), but due to its insensitivity to the parameter prior, the complexity differences between LIMu and LIMo on the one hand, and FLMPu and FLMPo on the other (i.e., the second conclusion), would have gone unnoticed.

5 Discussion

A theorist defining a model should clearly and explicitly specify at least the three following pieces of information: the model equation, the parameter prior range, and the parameter prior distribution. If any of these pieces is missing, the model should be regarded as incomplete, and therefore untestable. Consequently, any measure of generalizability should be sensitive to all three aspects of the model definition. Many currently popular generalizability measures do not satisfy this criterion, including AIC, BIC and MDL. A measure of generalizability that does take these three aspects of a model into account is the marginal likelihood [6, 7, 14, 23].
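The prior sensitivity discussed next is easy to see computationally: a naive Monte Carlo estimate of the marginal likelihood simply averages the data likelihood over prior draws, so changing the prior range or distribution directly changes the estimate. The sketch below is illustrative only; it reuses the lim/flmp model equations from the earlier sketch (passed in as f), and the count data k and trial counts t are hypothetical inputs of my own invention.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

def log_marginal_likelihood(k, t, f, ordered, I=8, J=2, draws=20000):
    """Naive Monte Carlo estimate of log p(k) = log E_prior[ p(k | theta) ]
    for an expanded factorial design with I*J + I + J stimuli."""
    loglik = np.empty(draws)
    for s in range(draws):
        a, b = rng.uniform(size=I), rng.uniform(size=J)
        if ordered:                      # order-constrained prior (Eq. 16)
            a.sort()
            b.sort()
        p = np.concatenate([f(a[:, None], b[None, :]).ravel(), a, b])
        loglik[s] = binom.logpmf(k, t, p).sum()   # Binomial data (Eq. 9)
    m = loglik.max()                     # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(loglik - m)))
```

Because the expectation is taken over the prior, the ordered and uniform versions of the same model equation generally yield different marginal likelihoods for the same data, which is exactly the sensitivity at issue.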
Often, the marginal likelihood is criticized precisely for its sensitivity to the prior range and distribution (e.g., [24]). However, in light of the fact that the prior is part of the model definition, I see the sensitivity of the marginal likelihood to the prior as an asset rather than a nuisance. It is precisely the measures of generalizability that are insensitive to the prior that miss an important aspect of the model.

Similarly, any stand-alone measure of model complexity should be sensitive to all three aspects of the model definition, as all three contribute to the model's complexity (with the model equation contributing two factors: the number of parameters and the functional form). Existing measures of complexity do not satisfy this requirement and are therefore incomplete. PC takes only part of the model equation into account, whereas GC takes only the model equation and the range into account. In contrast, the PPC proposed here is sensitive to all three aspects. It assesses model complexity using the predicted interval, which contains all possible outcomes a model can generate: a narrow predicted interval (relative to the universal interval) indicates a simple model, whereas a complex model is characterized by a wide predicted interval.

There is a tight coupling between the notions of information, knowledge and uncertainty, and the notion of model complexity. As parameters correspond to unknown variables, having more information available leads to fewer parameters, and hence to a simpler model. Similarly, the more information there is available, the sharper the parameter prior, implying a simpler model. Put differently, the less uncertainty present in a model, the narrower its predicted interval, and the simpler the model. For example, in model Mb there is maximal uncertainty: nothing but the range is known about θ, so all values of θ are equally likely. In contrast, in model Mf there is minimal uncertainty: ph is known for sure, so only a single value of θ is possible. This difference in uncertainty translates into a difference in complexity. The same is true for the information integration models. Incorporating the order constraints in the priors reduces the uncertainty compared to the models without these constraints (it tells you, for example, that parameter a1 is smaller than a2). This reduction in uncertainty is reflected in a smaller complexity.

There are many different sources of prior information that can be translated into a range or distribution. The illustration using the information integration models highlighted that prior information can reflect meaningful information in the design. Alternatively, priors can be informed by previous applications of similar models in similar settings. Probably the purest form of priors are those that translate theoretical assumptions made by a model (see [16]). The fact that it is often difficult to formalize this prior information should not be used as an excuse to leave the prior unspecified. It is admittedly a challenging task, but so is translating theoretical assumptions into the model equation. Formalizing theory, intuitions, and information is what model building is all about.

References

[1] Myung, I. J. (2000) The importance of complexity in model selection. Journal of Mathematical Psychology, 44, 190–204.
[2] Pitt, M. A., Myung, I. J., and Zhang, S. (2002) Toward a method of selecting among computational models of cognition. Psychological Review, 109, 472–491.
[3] Shiffrin, R. M., Lee, M. D., Kim, W., and Wagenmakers, E. J. (2008) A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32, 1248–1284.
[4] Cutting, J. E., Bruno, N., Brady, N. P., and Moore, C. (1992) Selectivity, scope, and simplicity of models: A lesson from fitting judgments of perceived depth. Journal of Experimental Psychology: General, 121, 364–381.
[5] Dunn, J. (2000) Model complexity: The fit to random data reconsidered. Psychological Research, 63, 174–182.
[6] Myung, I. J. and Pitt, M. A. (1997) Applying Occam's razor in modeling cognition: A Bayesian approach. Psychonomic Bulletin & Review, 4, 79–95.
[7] Vanpaemel, W. and Storms, G. (in press) Abstraction and model evaluation in category learning. Behavior Research Methods.
[8] Akaike, H. (1973) Information theory and an extension of the maximum likelihood principle. In Petrov, B. and Csaki, B. (eds.), Second International Symposium on Information Theory, pp. 267–281, Academiai Kiado.
[9] Schwarz, G. (1978) Estimating the dimension of a model. Annals of Statistics, 6, 461–464.
[10] Myung, I. J., Balasubramanian, V., and Pitt, M. A. (2000) Counting probability distributions: Differential geometry and model selection. Proceedings of the National Academy of Sciences, 97, 11170–11175.
[11] Lee, M. D. (2002) Generating additive clustering models with minimal stochastic complexity. Journal of Classification, 19, 69–85.
[12] Rissanen, J. (1996) Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42, 40–47.
[13] Grünwald, P. (2000) Model selection based on minimum description length. Journal of Mathematical Psychology, 44, 133–152.
[14] Lee, M. D. and Wagenmakers, E. J. (2005) Bayesian statistical inference in psychology: Comment on Trafimow (2003). Psychological Review, 112, 662–668.
[15] Lee, M. D. and Vanpaemel, W. (2008) Exemplars, prototypes, similarities and rules in category representation: An example of hierarchical Bayesian analysis. Cognitive Science, 32, 1403–1424.
[16] Vanpaemel, W. and Lee, M. D. (submitted) Using priors to formalize theory: Optimal attention and the generalized context model.
[17] Lee, M. D. (2008) Three case studies in the Bayesian analysis of cognitive models. Psychonomic Bulletin & Review, 15, 1–15.
[18] Spiegelhalter, D., Thomas, A., Best, N., and Lunn, D. (2004) WinBUGS User Manual Version 2.0. Medical Research Council Biostatistics Unit, Institute of Public Health, Cambridge.
[19] Anderson, N. H. (1981) Foundations of Information Integration Theory. Academic Press.
[20] Oden, G. C. and Massaro, D. W. (1978) Integration of featural information in speech perception. Psychological Review, 85, 172–191.
[21] Massaro, D. W. (1998) Perceiving Talking Faces: From Speech Perception to a Behavioral Principle. MIT Press.
[22] Massaro, D. W., Cohen, M. M., Campbell, C. S., and Rodriguez, T. (2001) Bayes factor of model selection validates FLMP. Psychonomic Bulletin & Review, 8, 1–17.
[23] Kass, R. E. and Raftery, A. E. (1995) Bayes factors. Journal of the American Statistical Association, 90, 773–795.
[24] Liu, C. C. and Aitkin, M. (2008) Bayes factors: Prior sensitivity and model generalizability. Journal of Mathematical Psychology, 53, 362–375.

same-paper 2 0.98052746 201 nips-2009-Region-based Segmentation and Object Detection

3 0.9646458 258 nips-2009-Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise

Author: Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R. Movellan, Paul L. Ruvolo

Abstract: Modern machine learning-based approaches to computer vision require very large databases of hand-labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., the Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) the labelers may have wide-ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used “Majority Vote” heuristic for inferring image labels, and is robust to both noisy and adversarial labelers.

4 0.96347332 160 nips-2009-Multiple Incremental Decremental Learning of Support Vector Machines

Author: Masayuki Karasuyama, Ichiro Takeuchi

Abstract: We propose a multiple incremental decremental algorithm of Support Vector Machine (SVM). Conventional single incremental decremental SVM can update the trained model efficiently when single data point is added to or removed from the training set. When we add and/or remove multiple data points, this algorithm is time-consuming because we need to repeatedly apply it to each data point. The proposed algorithm is computationally more efficient when multiple data points are added and/or removed simultaneously. The single incremental decremental algorithm is built on an optimization technique called parametric programming. We extend the idea and introduce multi-parametric programming for developing the proposed algorithm. Experimental results on synthetic and real data sets indicate that the proposed algorithm can significantly reduce the computational cost of multiple incremental decremental operation. Our approach is especially useful for online SVM learning in which we need to remove old data points and add new data points in a short amount of time.

5 0.9447726 245 nips-2009-Thresholding Procedures for High Dimensional Variable Selection and Statistical Estimation

Author: Shuheng Zhou

Abstract: Given n noisy samples with p dimensions, where n ≪ p, we show that the multistep thresholding procedure can accurately estimate a sparse vector β ∈ Rp in a linear model, under the restricted eigenvalue conditions (Bickel-Ritov-Tsybakov 09). Thus our conditions for model selection consistency are considerably weaker than what has been achieved in previous works. More importantly, this method allows very significant values of s, which is the number of non-zero elements in the true parameter. For example, it works for cases where the ordinary Lasso would have failed. Finally, we show that if X obeys a uniform uncertainty principle and if the true parameter is sufficiently sparse, the Gauss-Dantzig selector (Candès-Tao 07) achieves the ℓ2 loss within a logarithmic factor of the ideal mean square error one would achieve with an oracle which would supply perfect information about which coordinates are non-zero and which are above the noise level, while selecting a sufficiently sparse model.

6 0.80873775 133 nips-2009-Learning models of object structure

7 0.80081528 214 nips-2009-Semi-supervised Regression using Hessian energy with an application to semi-supervised dimensionality reduction

8 0.79230458 211 nips-2009-Segmenting Scenes by Matching Image Composites

9 0.75321031 25 nips-2009-Adaptive Design Optimization in Experiments with People

10 0.71603912 115 nips-2009-Individuation, Identification and Object Discovery

11 0.70526272 85 nips-2009-Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model

12 0.7039097 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

13 0.70315504 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

14 0.7001518 1 nips-2009-$L 1$-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry

15 0.6969806 44 nips-2009-Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

16 0.69541174 175 nips-2009-Occlusive Components Analysis

17 0.69050342 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition

18 0.6857338 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

19 0.67438269 231 nips-2009-Statistical Models of Linear and Nonlinear Contextual Interactions in Early Visual Processing

20 0.6678648 168 nips-2009-Non-stationary continuous dynamic Bayesian networks