nips nips2001 nips2001-182 knowledge-graph by maker-knowledge-mining

182 nips-2001-The Fidelity of Local Ordinal Encoding


Source: pdf

Author: Javid Sadr, Sayan Mukherjee, Keith Thoresz, Pawan Sinha

Abstract: A key question in neuroscience is how to encode sensory stimuli such as images and sounds. Motivated by studies of response properties of neurons in the early cortical areas, we propose an encoding scheme that dispenses with absolute measures of signal intensity or contrast and uses, instead, only local ordinal measures. In this scheme, the structure of a signal is represented by a set of equalities and inequalities across adjacent regions. In this paper, we focus on characterizing the fidelity of this representation strategy. We develop a regularization approach for image reconstruction from ordinal measures and thereby demonstrate that the ordinal representation scheme can faithfully encode signal structure. We also present a neurally plausible implementation of this computation that uses only local update rules. The results highlight the robustness and generalization ability of local ordinal encodings for the task of pattern classification. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 A key question in neuroscience is how to encode sensory stimuli such as images and sounds. [sent-3, score-0.197]

2 Motivated by studies of response properties of neurons in the early cortical areas, we propose an encoding scheme that dispenses with absolute measures of signal intensity or contrast and uses, instead, only local ordinal measures. [sent-4, score-1.426]

3 In this scheme, the structure of a signal is represented by a set of equalities and inequalities across adjacent regions. [sent-5, score-0.069]

4 In this paper, we focus on characterizing the fidelity of this representation strategy. [sent-6, score-0.043]

5 We develop a regularization approach for image reconstruction from ordinal measures and thereby demonstrate that the ordinal representation scheme can faithfully encode signal structure. [sent-7, score-2.128]

6 We also present a neurally plausible implementation of this computation that uses only local update rules. [sent-8, score-0.135]

7 The results highlight the robustness and generalization ability of local ordinal encodings for the task of pattern classification. [sent-9, score-0.886]

8 1 Introduction Biological and artificial recognition systems face the challenge of grouping together differing proximal stimuli arising from the same underlying object. [sent-10, score-0.069]

9 How well the system succeeds in overcoming this challenge is critically dependent on the nature of the internal representations against which the observed inputs are matched. [sent-11, score-0.073]

10 The representation schemes should be capable of efficiently encoding object concepts while being tolerant to their appearance variations. [sent-12, score-0.208]

11 In this paper, we introduce and characterize a biologically plausible representation scheme for encoding signal structure. [sent-13, score-0.284]

12 The scheme employs a simple vocabulary of local ordinal relations, of the kind that early sensory neurons are capable of extracting. [sent-14, score-1.094]

13 Our results so far suggest that this scheme possesses several desirable characteristics, including tolerance to object appearance variations, computational simplicity, and low memory requirements. [sent-15, score-0.199]

14 We develop and demonstrate our ideas in the visual domain, but they are intended to be applicable to other sensory modalities as well. [sent-16, score-0.082]

15 The starting point for our proposal lies in studies of the response properties of neurons in the early sensory cortical areas. [sent-17, score-0.204]

16 Figure 1: (a) A schematic contrast response curve for a primary visual cortex neuron. [sent-18, score-0.171]

17 The response of the neuron saturates at low contrast values. [sent-19, score-0.093]

18 This unit can be thought of as an ordinal comparator, providing information only about contrast polarity but not its magnitude. [sent-21, score-0.844]

19 These response properties constrain the kinds of measurements that can plausibly be included in our representation scheme. [sent-22, score-0.043]

20 In the visual domain, many striate cortical neurons have rapidly saturating contrast response functions [1, 4]. [sent-23, score-0.31]

21 Their tendency to reach ceiling-level responses at low contrast values renders these neurons sensitive primarily to local ordinal, rather than metric, relations. [sent-24, score-0.17]

22 We propose to use an idealization of such units as the basic vocabulary of our representation scheme (figure 1). [sent-25, score-0.21]

23 In this scheme, objects are encoded as sets of local ordinal relations across image regions. [sent-26, score-1.301]

24 As discussed below, this very simple idea seems well suited to handling the photometric appearance variations that real-world objects exhibit. [sent-27, score-0.238]

25 Figure 2: The challenge for a representation scheme: to construct stable descriptions of objects despite radical changes in appearance. [sent-28, score-0.142]

26 As figure 2 shows, variations in illumination significantly alter the individual brightness of different parts of the face, such as the eyes, cheeks, and forehead. [sent-29, score-0.169]

27 Therefore, absolute image brightness distributions are unlikely to be adequate for classifying all of these images as depicting the same underlying object. [sent-30, score-0.352]

28 Even the contrast magnitudes across different parts of the face change greatly under different lighting conditions. [sent-31, score-0.25]

29 While the absolute luminance and contrast magnitude information is highly variable across these images, Thoresz and Sinha [9] have shown that one can identify some stable ordinal measurements. [sent-32, score-1.114]

30 Figure 3 shows several pairs of average brightness values over localized patches for each of the three images included in figure 2. [sent-33, score-0.181]

31 For instance, the average brightness of the left eye is always less than that of the forehead, irrespective of the lighting conditions. [sent-35, score-0.158]

32 The relative magnitudes of the two brightness values may change, but the sign of the inequality does not. [sent-36, score-0.174]

33 In other words, the ordinal relationship between the average brightnesses of the pair is invariant under lighting changes. [sent-37, score-0.895]

34 It seems, therefore, that local ordinal relations may encode the stable facial attributes across different illumination conditions. [sent-39, score-1.258]
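
This sign invariance is easy to check numerically. A toy sketch (the values and transforms are made up; a global monotone intensity map is only a crude stand-in for the lighting changes of figure 2):

    import numpy as np

    # Any strictly increasing transform of luminance (gain, offset, gamma)
    # changes the magnitude of a brightness difference but never its sign.
    eye, forehead = 0.25, 0.60
    for transform in (lambda v: 2.0 * v + 0.1,   # gain and offset
                      lambda v: v ** 0.4):       # gamma correction
        assert np.sign(transform(eye) - transform(forehead)) == np.sign(eye - forehead)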

35 An additional advantage to using ordinal relations is their natural robustness to sensor noise. [sent-40, score-1.031]

36 Figure 3: The absolute brightnesses and their relative magnitudes change under different lighting conditions, but several pair-wise ordinal relationships stay invariant. [sent-41, score-2.007]

37 Thus, it would seem that local ordinal representations may be well suited for devising compact representations, robust against large photometric variations, for at least some classes of objects. [sent-42, score-0.043]

38 Notably, for similar reasons, ordinal measures have also been shown to be a powerful tool for simple, efficient, and robust stereo image matching [3]. [sent-43, score-0.951]

39 In what follows, we address an important open question regarding the expressiveness of the ordinal representation scheme. [sent-44, score-0.854]

40 Given that this scheme ignores absolute luminance and contrast magnitude information, an obvious question that arises is whether such a crude representation strategy can encode object/image structure with any fidelity. [sent-45, score-0.515]

41 2 Information Content of Local Ordinal Encoding. Figure 4 shows how we define ordinal relations between an image region pa and its immediate neighbors pb = {pa1, . . . , pa8}. [sent-46, score-1.349]

42 In the conventional rectilinear grid, when all image regions pa are considered, four of the eight relations are redundant; we encode the remaining four as {1, 0, −1} based on the difference in luminance between two neighbors being positive, zero, or negative, respectively. [sent-50, score-0.651]
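
As a concrete illustration of this encoding, here is a minimal Python/NumPy sketch (our own; the paper gives no code, and names such as local_ordinal_encode are hypothetical) that computes the four non-redundant relations per pixel:

    import numpy as np

    def local_ordinal_encode(img):
        """Return the four non-redundant neighbor relations per pixel,
        each in {-1, 0, +1}: the sign of I(neighbor) - I(pixel) for the
        E, S, SE, and SW neighbors. The other four of the eight relations
        are the negations of a neighbor's stored relations."""
        img = np.asarray(img, dtype=np.float64)
        h, w = img.shape
        padded = np.pad(img, 1, mode='edge')  # border pixels compare to copies of themselves
        center = padded[1:h + 1, 1:w + 1]
        offsets = {'E': (0, 1), 'S': (1, 0), 'SE': (1, 1), 'SW': (1, -1)}
        return {name: np.sign(padded[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                              - center).astype(np.int8)
                for name, (dy, dx) in offsets.items()}

The choice of the E/S/SE/SW half of the neighborhood, like the edge padding at the border, is ours; any non-redundant half works, since the remaining relations follow by sign symmetry.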

43 To demonstrate the richness of information encoded by this scheme, we compare the original image to one produced by a function that reconstructs the image using local ordinal relationships as constraints. [sent-51, score-1.195]

44 Infinitely many reconstruction functions could satisfy the given ordinal constraints. [sent-53, score-0.888]

45 To make the problem well-posed, we regularize [10] the reconstruction function subject to the ordinal constraints, as done in ordinal regression for ranking documents. [sent-54, score-1.707]

46 Neighbors’ relations to the pixel of interest: I(pa) < I(pa1), I(pa) = I(pa2), I(pa) < I(pa3), I(pa) < I(pa4), I(pa) > I(pa5), I(pa) < I(pa6), I(pa) < I(pa7), I(pa) < I(pa8). (1) Figure 4: Ordinal relationships between an image region pa and its neighbors. [sent-57, score-0.497]

47 Our regularization term is a norm in a Reproducing Kernel Hilbert Space (RKHS) [2, 11]. [sent-59, score-0.037]

48 The reconstruction function, f(x), obtained from optimizing (4) subject to box constraints (5) has the following form: f(x) = Σp αp (K(x, xpa) − K(x, xpb)). [sent-63, score-0.383]

49 The remaining αp with absolute value less than C satisfy the inequality constraints in (3), whereas those with absolute value at C violate them. [sent-70, score-0.237]
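
The text reports only the form of the solution and the role of the bound C. The sketch below (ours, not the authors' code) assembles those pieces: a RankSVM-style dual with the difference kernel Kpq = K(xpa, xqa) - K(xpa, xqb) - K(xpb, xqa) + K(xpb, xqb), solved here by naive projected gradient ascent in place of whatever QP solver the authors used. Xa[p] and Xb[p] hold the coordinates of the two pixels in the p-th constraint I(pa) > I(pb):

    import numpy as np

    def rbf(X, Y, gamma=0.5):
        # Gaussian kernel matrix between rows of X and rows of Y.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_alphas(Xa, Xb, C=10.0, lr=1e-3, steps=5000):
        """Maximize sum(a) - 0.5 a^T Kpq a subject to 0 <= a_p <= C
        by projected gradient ascent on the dual variables."""
        Kpq = rbf(Xa, Xa) - rbf(Xa, Xb) - rbf(Xb, Xa) + rbf(Xb, Xb)
        a = np.zeros(len(Xa))
        for _ in range(steps):
            a = np.clip(a + lr * (1.0 - Kpq @ a), 0.0, C)  # box constraints
        return a

    def reconstruct(x, Xa, Xb, a):
        # f(x) = sum_p a_p (K(x, xpa) - K(x, xpb))
        return float((a * (rbf(x[None, :], Xa) - rbf(x[None, :], Xb))[0]).sum())

On convergence, the αp strictly inside the box correspond to satisfied inequality constraints and those pinned at the bound to violated ones, matching the description above (the text uses absolute values, suggesting the bound may be two-sided; the one-sided box here is a simplification).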

50 Figure 5 depicts two typical reconstructions performed by this algorithm. [sent-71, score-0.12]

51 The difference images and error histograms suggest that the reconstructions closely match the source images. [sent-72, score-0.178]

52 3 Discussion Our reconstruction results suggest that the local ordinal representation can faithfully encode image structure. [sent-73, score-1.225]

53 Thus, even though individual ordinal relations are insensitive to absolute luminance or contrast magnitude, a set of such relations implicitly encodes metric information. [sent-74, score-1.49]

54 In the context of the human visual system, this result suggests that the rapidly saturating contrast response functions of the early visual neurons do not significantly hinder their ability to convey accurate image information to subsequent cortical stages. [sent-75, score-0.449]

55 An important question that arises here is what the strengths and limitations of local ordinal encoding are. [sent-76, score-0.906]

56 The first key limitation is that for any choice of neighborhood size over which ordinal relations are extracted, there are classes of images for which the local ordinal representation will be unable to encode the metric structure. [sent-77, score-2.103]

57 For a neighborhood of size n, an image with regions of different luminance embedded in a uniform background and mutually separated by a distance greater than n would constitute such an image. [sent-78, score-0.225]
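
A concrete instance of this failure mode, reusing local_ordinal_encode from the sketch above (again illustrative code, not the authors'):

    import numpy as np

    # Isolated bright pixels of *different* intensity on a uniform
    # background, farther apart than the 3x3 neighborhood: the ordinal
    # code records only "brighter than my neighbors" at both spots, so
    # the two images are indistinguishable despite differing metric structure.
    a = np.zeros((9, 9)); a[2, 2] = 0.3; a[6, 6] = 0.9
    b = np.zeros((9, 9)); b[2, 2] = 0.9; b[6, 6] = 0.3
    code_a, code_b = local_ordinal_encode(a), local_ordinal_encode(b)
    assert all(np.array_equal(code_a[k], code_b[k]) for k in code_a)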

58 In general, sparse images present a problem for this representation scheme, as might foveal or cortical “magnification,” for example. [sent-79, score-0.157]

59 This issue could be addressed by using ordinal relations across multiple scales, perhaps in an adaptive way that varies with the smoothness or sparseness of the stimulus. [sent-80, score-1.077]

60 Our intent in using this approach for reconstructions was to show, via well-understood theoretical tools, the richness of information that local ordinal representations provide. Figure 6: Reconstruction results from the relaxation approach. [sent-82, score-1.104]

61 In order to address the neural plausibility requirement, we have developed a simple relaxation-based approach with purely local update rules of the kind that can easily be implemented by cortical circuitry. [sent-84, score-0.127]

62 Each unit communicates only with its immediate neighbors and modifies its value incrementally up or down (starting from an arbitrary state) depending on the number of ordinal relations in the positive or negative direction. [sent-85, score-1.093]
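
A minimal sketch of such a relaxation (ours; the paper does not spell out the exact rule, so the per-relation vote below is one plausible instantiation). Here constraints is a list of triples (p, q, s) demanding sign(I[q] - I[p]) = s with s in {-1, 0, +1}, for example produced by flattening the output of local_ordinal_encode above:

    import numpy as np

    def relax_reconstruct(constraints, n_pixels, steps=2000, eta=0.05, seed=0):
        """Start from an arbitrary state and let every pixel nudge its value
        up or down by the net vote of its currently violated relations,
        a purely local update: each pixel touches only its own neighbors."""
        I = np.random.default_rng(seed).random(n_pixels)
        for _ in range(steps):
            delta = np.zeros(n_pixels)
            for p, q, s in constraints:
                if s == 0:                       # equal pair: pull together
                    d = I[q] - I[p]
                    delta[q] -= d
                    delta[p] += d
                elif np.sign(I[q] - I[p]) != s:  # violated inequality: push apart
                    delta[q] += s
                    delta[p] -= s
            I += eta * delta
            I -= I.mean()                        # pin the free additive offset
        return I

Because ordinal relations say nothing about absolute level, the reconstruction is defined only up to a monotone rescaling; the re-centering step merely fixes the additive offset.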

63 Figure 6 shows four examples of image reconstructions performed using a relaxation-based approach. [sent-88, score-0.234]

64 A third potential limitation is that the scheme does not appear to constitute a compact code. [sent-89, score-0.147]

65 If each pixel must be encoded in terms of its relations with all of its eight neighbors, where each relation takes one of three values, {−1, 0, 1}, then what has been gained over the original image where each pixel is encoded by 8 bits? [sent-90, score-0.573]

66 Eight relations per pixel is highly redundant – four are sufficient. [sent-93, score-0.313]
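
Concretely (our arithmetic, not the paper's): four ternary relations carry at most 4 · log2(3) ≈ 6.3 bits per pixel, already below the 8 bits of the raw image, and the strong correlations among neighboring relations push the true entropy lower still.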

67 In fact, as shown in figure 7, the scheme can also tolerate several missing relations. [sent-94, score-0.1]

68 Figure 7: Five reconstructions, shown here to demonstrate the robustness of local ordinal encoding to missing inputs. [sent-95, score-0.952]

69 From left to right: reconstructions based on 100%, 80%, 60%, 40%, and 20% of the full set of immediate neighbor relations. [sent-96, score-0.149]
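
In the relaxation sketch above, this experiment amounts to subsampling the constraint list before running it, e.g. kept = [c for c in constraints if rng.random() < 0.6] with rng = np.random.default_rng(), for the 60% condition (our own illustrative line, not the authors' code).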

70 An advantage to using ordinal relations is that they can be extracted and transmitted much more reliably than metric ones. [sent-98, score-1.046]

71 Figure 8: (a) A small collection of ordinal relations, though insufficient for high fidelity reconstruction, is very effective for pattern classification despite significant appearance variations. [sent-99, score-1.295]

72 (b) Results of using a local ordinal relationship based template to detect face patterns. [sent-100, score-0.894]

73 The program places white dots at the centers of patches classified as faces. [sent-101, score-0.029]

74 These relations share the same spirit as loss functions used in robust statistics [6] and trimmed or Winsorized estimators. [sent-103, score-0.025]

75 The intent of the visual system is often not to encode/reconstruct images with perfect fidelity, but rather to encode the most stable characteristics that can aid in classification. [sent-105, score-0.273]

76 In this context, a few ordinal relations may suffice for encoding objects reliably. [sent-106, score-1.103]

77 Figure 8 shows the results of using fewer than 20 relations for detecting faces. [sent-107, score-0.216]
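
To make this concrete, here is a hedged sketch of ordinal template matching. The actual patch pairs used by Thoresz and Sinha are not listed in this paper; the two relations below (each eye darker than the forehead, echoing figure 3) and every name are purely illustrative:

    import numpy as np

    # Each template entry compares the mean brightness of two rectangles,
    # given as (top, left, height, width) inside a 20x20 window, plus the
    # sign the difference should have.
    TEMPLATE = [
        ((6, 2, 4, 6),  (1, 2, 3, 16), -1),   # left eye darker than forehead
        ((6, 12, 4, 6), (1, 2, 3, 16), -1),   # right eye darker than forehead
    ]

    def mean_patch(win, rect):
        t, l, h, w = rect
        return win[t:t + h, l:l + w].mean()

    def is_face(win, template=TEMPLATE):
        # A window matches when every ordinal relation in the template holds.
        return all(np.sign(mean_patch(win, a) - mean_patch(win, b)) == s
                   for a, b, s in template)

    def detect_faces(img, win=20, stride=4):
        """Slide a window over the image; return centers of matching patches."""
        hits = []
        for y in range(0, img.shape[0] - win + 1, stride):
            for x in range(0, img.shape[1] - win + 1, stride):
                if is_face(img[y:y + win, x:x + win]):
                    hits.append((y + win // 2, x + win // 2))
        return hits

Because only the signs of patch differences are tested, any window whose relations have the right polarity matches, regardless of lighting; this is the equivalence class of patterns the next sentence refers to.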

78 Its generalization arises because it defines an equivalence class of patterns. [sent-109, score-0.024]

79 In summary, the ordinal representation scheme provides a neurally plausible strategy for encoding signal structure. [sent-110, score-1.086]

80 While in this paper we focus on demonstrating the fidelity of this scheme, we believe that its true strength lies in defining equivalence classes of patterns, enabling generalization over appearance variations in objects. [sent-111, score-0.118]

81 Contrast coding by cells in the cat’s striate cortex: monocular vs. [sent-123, score-0.043]

82 Spatiotemporal organization of simple-cell receptive fields in the cat’s striate cortex. [sent-146, score-0.043]

83 MDS of nominal data: the recovery of metric information with ALSCAL. [sent-203, score-0.042]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ordinal', 0.788), ('relations', 0.216), ('reconstructions', 0.12), ('image', 0.114), ('xpa', 0.108), ('xpb', 0.108), ('reconstruction', 0.1), ('scheme', 0.1), ('brightness', 0.094), ('di', 0.091), ('delity', 0.086), ('rpa', 0.086), ('thoresz', 0.086), ('absolute', 0.086), ('luminance', 0.086), ('pb', 0.075), ('appearance', 0.075), ('encode', 0.075), ('local', 0.071), ('pixel', 0.07), ('pa', 0.067), ('erent', 0.067), ('encoding', 0.066), ('xqa', 0.065), ('xqb', 0.065), ('lighting', 0.064), ('neighbors', 0.06), ('images', 0.058), ('sinha', 0.056), ('cortical', 0.056), ('contrast', 0.056), ('magnitudes', 0.051), ('intensity', 0.047), ('across', 0.044), ('brightnesses', 0.043), ('idealization', 0.043), ('intent', 0.043), ('ipa', 0.043), ('javid', 0.043), ('kpq', 0.043), ('photometric', 0.043), ('richness', 0.043), ('sadr', 0.043), ('neurons', 0.043), ('variations', 0.043), ('representation', 0.043), ('erence', 0.043), ('striate', 0.043), ('metric', 0.042), ('sensory', 0.041), ('visual', 0.041), ('representations', 0.039), ('multiresolution', 0.038), ('neurally', 0.038), ('regularization', 0.037), ('response', 0.037), ('constraints', 0.036), ('su', 0.035), ('gure', 0.035), ('face', 0.035), ('encoded', 0.035), ('challenge', 0.034), ('saturating', 0.034), ('faithfully', 0.034), ('eight', 0.033), ('objects', 0.033), ('stable', 0.032), ('illumination', 0.032), ('rkhs', 0.032), ('subject', 0.031), ('relationships', 0.03), ('inequality', 0.029), ('immediate', 0.029), ('reproducing', 0.029), ('smoothness', 0.029), ('patches', 0.029), ('biological', 0.027), ('robustness', 0.027), ('early', 0.027), ('redundant', 0.027), ('classi', 0.027), ('plausible', 0.026), ('signal', 0.025), ('constitute', 0.025), ('robust', 0.025), ('characteristics', 0.024), ('reconstructed', 0.024), ('vocabulary', 0.024), ('measures', 0.024), ('cat', 0.024), ('biologically', 0.024), ('arises', 0.024), ('sciences', 0.024), ('object', 0.024), ('question', 0.023), ('limitation', 0.022), ('magnitude', 0.022), ('seems', 0.022), ('massachusetts', 0.022), ('suited', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 182 nips-2001-The Fidelity of Local Ordinal Encoding

Author: Javid Sadr, Sayan Mukherjee, Keith Thoresz, Pawan Sinha

Abstract: A key question in neuroscience is how to encode sensory stimuli such as images and sounds. Motivated by studies of response properties of neurons in the early cortical areas, we propose an encoding scheme that dispenses with absolute measures of signal intensity or contrast and uses, instead, only local ordinal measures. In this scheme, the structure of a signal is represented by a set of equalities and inequalities across adjacent regions. In this paper, we focus on characterizing the fidelity of this representation strategy. We develop a regularization approach for image reconstruction from ordinal measures and thereby demonstrate that the ordinal representation scheme can faithfully encode signal structure. We also present a neurally plausible implementation of this computation that uses only local update rules. The results highlight the robustness and generalization ability of local ordinal encodings for the task of pattern classification. 1

2 0.088057593 191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

Author: Chris Stauffer, Erik Miller, Kinh Tieu

Abstract: Recent work has shown impressive transform-invariant modeling and clustering for sets of images of objects with similar appearance. We seek to expand these capabilities to sets of images of an object class that show considerable variation across individual instances (e.g. pedestrian images) using a representation based on pixel-wise similarities, similarity templates. Because of its invariance to the colors of particular components of an object, this representation enables detection of instances of an object class and enables alignment of those instances. Further, this model implicitly represents the regions of color regularity in the class-specific image set enabling a decomposition of that object class into component regions. 1

3 0.071422465 46 nips-2001-Categorization by Learning and Combining Object Parts

Author: Bernd Heisele, Thomas Serre, Massimiliano Pontil, Thomas Vetter, Tomaso Poggio

Abstract: We describe an algorithm for automatically learning discriminative components of objects with SVM classifiers. It is based on growing image parts by minimizing theoretical bounds on the error probability of an SVM. Component-based face classifiers are then combined in a second stage to yield a hierarchical SVM classifier. Experimental results in face classification show considerable robustness against rotations in depth and suggest performance at significantly better level than other face detection systems. Novel aspects of our approach are: a) an algorithm to learn component-based classification experts and their combination, b) the use of 3-D morphable models for training, and c) a maximum operation on the output of each component classifier which may be relevant for biological models of visual recognition.

4 0.066730276 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

Author: Alberto Paccanaro, Geoffrey E. Hinton

Abstract: We present Linear Relational Embedding (LRE), a new method of learning a distributed representation of concepts from data consisting of instances of relations between given concepts. Its final goal is to be able to generalize, i.e. infer new instances of these relations among the concepts. On a task involving family relationships we show that LRE can generalize better than any previously published method. We then show how LRE can be used effectively to find compact distributed representations for variable-sized recursive data structures, such as trees and lists. 1 Linear Relational Embedding Our aim is to take a large set of facts about a domain expressed as tuples of arbitrary symbols in a simple and rigid syntactic format and to be able to infer other “common-sense” facts without having any prior knowledge about the domain. Let us imagine a situation in which we have a set of concepts and a set of relations among these concepts, and that our data consists of few instances of these relations that hold among the concepts. We want to be able to infer other instances of these relations. For example, if the concepts are the people in a certain family, the relations are kinship relations, and we are given the facts ”Alberto has-father Pietro” and ”Pietro has-brother Giovanni”, we would like to be able to infer ”Alberto has-uncle Giovanni”. Our approach is to learn appropriate distributed representations of the entities in the data, and then exploit the generalization properties of the distributed representations [2] to make the inferences. In this paper we present a method, which we have called Linear Relational Embedding (LRE), which learns a distributed representation for the concepts by embedding them in a space where the relations between concepts are linear transformations of their distributed representations. Let us consider the case in which all the relations are binary, i.e. involve two concepts. In this case our data consists of triplets, and the problem we are trying to solve is to infer missing triplets when we are given only few of them. Inferring a triplet is equivalent to being able to complete it, that is to come up with one of its elements, given the other two. Here we shall always try to complete the third element of the triplets. LRE will then represent each concept in the data as a learned vector in this space.

5 0.066411905 54 nips-2001-Contextual Modulation of Target Saliency

Author: Antonio Torralba

Abstract: The most popular algorithms for object detection require the use of exhaustive spatial and scale search procedures. In such approaches, an object is defined by means of local features. In this paper we show that including contextual information in object detection procedures provides an efficient way of cutting down the need for exhaustive search. We present results with real images showing that the proposed scheme is able to accurately predict likely object classes, locations and sizes. 1

6 0.06506557 74 nips-2001-Face Recognition Using Kernel Methods

7 0.063829847 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

8 0.058474198 153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

9 0.055243406 80 nips-2001-Generalizable Relational Binding from Coarse-coded Distributed Representations

10 0.05341889 84 nips-2001-Global Coordination of Local Linear Models

11 0.05265788 127 nips-2001-Multi Dimensional ICA to Separate Correlated Sources

12 0.049235057 89 nips-2001-Grouping with Bias

13 0.048692062 22 nips-2001-A kernel method for multi-labelled classification

14 0.048312768 21 nips-2001-A Variational Approach to Learning Curves

15 0.046945993 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth

16 0.046447225 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

17 0.045155555 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

18 0.045081209 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

19 0.044707932 189 nips-2001-The g Factor: Relating Distributions on Features to Distributions on Images

20 0.044199698 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.134), (1, -0.052), (2, -0.068), (3, 0.017), (4, -0.023), (5, 0.015), (6, -0.152), (7, -0.018), (8, 0.047), (9, -0.006), (10, 0.002), (11, 0.031), (12, 0.077), (13, -0.003), (14, -0.016), (15, 0.015), (16, 0.01), (17, 0.061), (18, -0.044), (19, 0.036), (20, 0.014), (21, -0.008), (22, 0.02), (23, -0.02), (24, 0.003), (25, -0.03), (26, 0.003), (27, 0.017), (28, 0.003), (29, -0.017), (30, -0.027), (31, -0.042), (32, -0.081), (33, 0.097), (34, 0.063), (35, -0.102), (36, 0.087), (37, -0.009), (38, 0.082), (39, 0.101), (40, 0.074), (41, 0.13), (42, 0.026), (43, -0.027), (44, 0.116), (45, 0.006), (46, -0.151), (47, 0.099), (48, 0.043), (49, -0.05)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95582157 182 nips-2001-The Fidelity of Local Ordinal Encoding

Author: Javid Sadr, Sayan Mukherjee, Keith Thoresz, Pawan Sinha

Abstract: A key question in neuroscience is how to encode sensory stimuli such as images and sounds. Motivated by studies of response properties of neurons in the early cortical areas, we propose an encoding scheme that dispenses with absolute measures of signal intensity or contrast and uses, instead, only local ordinal measures. In this scheme, the structure of a signal is represented by a set of equalities and inequalities across adjacent regions. In this paper, we focus on characterizing the fidelity of this representation strategy. We develop a regularization approach for image reconstruction from ordinal measures and thereby demonstrate that the ordinal representation scheme can faithfully encode signal structure. We also present a neurally plausible implementation of this computation that uses only local update rules. The results highlight the robustness and generalization ability of local ordinal encodings for the task of pattern classification. 1

2 0.68042469 191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

Author: Chris Stauffer, Erik Miller, Kinh Tieu

Abstract: Recent work has shown impressive transform-invariant modeling and clustering for sets of images of objects with similar appearance. We seek to expand these capabilities to sets of images of an object class that show considerable variation across individual instances (e.g. pedestrian images) using a representation based on pixel-wise similarities, similarity templates. Because of its invariance to the colors of particular components of an object, this representation enables detection of instances of an object class and enables alignment of those instances. Further, this model implicitly represents the regions of color regularity in the class-specific image set enabling a decomposition of that object class into component regions. 1

3 0.4110415 54 nips-2001-Contextual Modulation of Target Saliency

Author: Antonio Torralba

Abstract: The most popular algorithms for object detection require the use of exhaustive spatial and scale search procedures. In such approaches, an object is defined by means of local features. In this paper we show that including contextual information in object detection procedures provides an efficient way of cutting down the need for exhaustive search. We present results with real images showing that the proposed scheme is able to accurately predict likely object classes, locations and sizes. 1

4 0.40614849 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

Author: Alberto Paccanaro, Geoffrey E. Hinton

Abstract: We present Linear Relational Embedding (LRE), a new method of learning a distributed representation of concepts from data consisting of instances of relations between given concepts. Its final goal is to be able to generalize, i.e. infer new instances of these relations among the concepts. On a task involving family relationships we show that LRE can generalize better than any previously published method. We then show how LRE can be used effectively to find compact distributed representations for variable-sized recursive data structures, such as trees and lists. 1 Linear Relational Embedding Our aim is to take a large set of facts about a domain expressed as tuples of arbitrary symbols in a simple and rigid syntactic format and to be able to infer other “common-sense” facts without having any prior knowledge about the domain. Let us imagine a situation in which we have a set of concepts and a set of relations among these concepts, and that our data consists of few instances of these relations that hold among the concepts. We want to be able to infer other instances of these relations. For example, if the concepts are the people in a certain family, the relations are kinship relations, and we are given the facts ”Alberto has-father Pietro” and ”Pietro has-brother Giovanni”, we would like to be able to infer ”Alberto has-uncle Giovanni”. Our approach is to learn appropriate distributed representations of the entities in the data, and then exploit the generalization properties of the distributed representations [2] to make the inferences. In this paper we present a method, which we have called Linear Relational Embedding (LRE), which learns a distributed representation for the concepts by embedding them in a space where the relations between concepts are linear transformations of their distributed representations. Let us consider the case in which all the relations are binary, i.e. involve two concepts. In this case our data consists of triplets, and the problem we are trying to solve is to infer missing triplets when we are given only few of them. Inferring a triplet is equivalent to being able to complete it, that is to come up with one of its elements, given the other two. Here we shall always try to complete the third element of the triplets. LRE will then represent each concept in the data as a learned vector in this space.

5 0.36713013 74 nips-2001-Face Recognition Using Kernel Methods

Author: Ming-Hsuan Yang

Abstract: Principal Component Analysis and Fisher Linear Discriminant methods have demonstrated their success in face detection, recognition, and tracking. The representation in these subspace methods is based on second order statistics of the image set, and does not address higher order statistical dependencies such as the relationships among three or more pixels. Recently Higher Order Statistics and Independent Component Analysis (ICA) have been used as informative low dimensional representations for visual recognition. In this paper, we investigate the use of Kernel Principal Component Analysis and Kernel Fisher Linear Discriminant for learning low dimensional representations for face recognition, which we call Kernel Eigenface and Kernel Fisherface methods. While Eigenface and Fisherface methods aim to find projection directions based on the second order correlation of samples, Kernel Eigenface and Kernel Fisherface methods provide generalizations which take higher order correlations into account. We compare the performance of kernel methods with Eigenface, Fisherface and ICA-based methods for face recognition with variation in pose, scale, lighting and expression. Experimental results show that kernel methods provide better representations and achieve lower error rates for face recognition. 1 Motivation and Approach Subspace methods have been applied successfully in numerous visual recognition tasks such as face localization, face recognition, 3D object recognition, and tracking. In particular, Principal Component Analysis (PCA) [20] [13] ,and Fisher Linear Discriminant (FLD) methods [6] have been applied to face recognition with impressive results. While PCA aims to extract a subspace in which the variance is maximized (or the reconstruction error is minimized), some unwanted variations (due to lighting, facial expressions, viewing points, etc.) may be retained (See [8] for examples). It has been observed that in face recognition the variations between the images of the same face due to illumination and viewing direction are almost always larger than image variations due to the changes in face identity [1]. Therefore, while the PCA projections are optimal in a correlation sense (or for reconstruction

6 0.36446935 46 nips-2001-Categorization by Learning and Combining Object Parts

7 0.36101609 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

8 0.35850781 22 nips-2001-A kernel method for multi-labelled classification

9 0.35477814 153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

10 0.35475928 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

11 0.34168282 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

12 0.34107301 89 nips-2001-Grouping with Bias

13 0.33260059 80 nips-2001-Generalizable Relational Binding from Coarse-coded Distributed Representations

14 0.30636573 189 nips-2001-The g Factor: Relating Distributions on Features to Distributions on Images

15 0.29761931 34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

16 0.29691929 158 nips-2001-Receptive field structure of flow detectors for heading perception

17 0.28593168 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

18 0.28430992 64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique

19 0.27758622 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

20 0.27138129 126 nips-2001-Motivated Reinforcement Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.034), (19, 0.025), (20, 0.018), (27, 0.088), (30, 0.103), (38, 0.051), (49, 0.02), (59, 0.024), (64, 0.167), (72, 0.069), (78, 0.065), (79, 0.039), (83, 0.025), (91, 0.158)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9092561 182 nips-2001-The Fidelity of Local Ordinal Encoding

Author: Javid Sadr, Sayan Mukherjee, Keith Thoresz, Pawan Sinha

Abstract: A key question in neuroscience is how to encode sensory stimuli such as images and sounds. Motivated by studies of response properties of neurons in the early cortical areas, we propose an encoding scheme that dispenses with absolute measures of signal intensity or contrast and uses, instead, only local ordinal measures. In this scheme, the structure of a signal is represented by a set of equalities and inequalities across adjacent regions. In this paper, we focus on characterizing the fidelity of this representation strategy. We develop a regularization approach for image reconstruction from ordinal measures and thereby demonstrate that the ordinal representation scheme can faithfully encode signal structure. We also present a neurally plausible implementation of this computation that uses only local update rules. The results highlight the robustness and generalization ability of local ordinal encodings for the task of pattern classification. 1

2 0.82034194 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

Author: Olivier Chapelle, Bernhard Schölkopf

Abstract: The choice of an SVM kernel corresponds to the choice of a representation of the data in a feature space and, to improve performance, it should therefore incorporate prior knowledge such as known transformation invariances. We propose a technique which extends earlier work and aims at incorporating invariances in nonlinear kernels. We show on a digit recognition task that the proposed approach is superior to the Virtual Support Vector method, which previously had been the method of choice. 1

3 0.782444 186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning

Author: Mikio L. Braun, Joachim M. Buhmann

Abstract: We consider noisy Euclidean traveling salesman problems in the plane, which are random combinatorial problems with underlying structure. Gibbs sampling is used to compute average trajectories, which estimate the underlying structure common to all instances. This procedure requires identifying the exact relationship between permutations and tours. In a learning setting, the average trajectory is used as a model to construct solutions to new instances sampled from the same source. Experimental results show that the average trajectory can in fact estimate the underlying structure and that overfitting effects occur if the trajectory adapts too closely to a single instance. 1

4 0.76095593 100 nips-2001-Iterative Double Clustering for Unsupervised and Semi-Supervised Learning

Author: Ran El-Yaniv, Oren Souroujon

Abstract: We present a powerful meta-clustering technique called Iterative Double Clustering (IDC). The IDC method is a natural extension of the recent Double Clustering (DC) method of Slonim and Tishby that exhibited impressive performance on text categorization tasks [12]. Using synthetically generated data we empirically find that whenever the DC procedure is successful in recovering some of the structure hidden in the data, the extended IDC procedure can incrementally compute a significantly more accurate classification. IDC is especially advantageous when the data exhibits high attribute noise. Our simulation results also show the effectiveness of IDC in text categorization problems. Surprisingly, this unsupervised procedure can be competitive with a (supervised) SVM trained with a small training set. Finally, we propose a simple and natural extension of IDC for semi-supervised and transductive learning where we are given both labeled and unlabeled examples. 1

5 0.7551384 46 nips-2001-Categorization by Learning and Combining Object Parts

Author: Bernd Heisele, Thomas Serre, Massimiliano Pontil, Thomas Vetter, Tomaso Poggio

Abstract: We describe an algorithm for automatically learning discriminative components of objects with SVM classifiers. It is based on growing image parts by minimizing theoretical bounds on the error probability of an SVM. Component-based face classifiers are then combined in a second stage to yield a hierarchical SVM classifier. Experimental results in face classification show considerable robustness against rotations in depth and suggest performance at significantly better level than other face detection systems. Novel aspects of our approach are: a) an algorithm to learn component-based classification experts and their combination, b) the use of 3-D morphable models for training, and c) a maximum operation on the output of each component classifier which may be relevant for biological models of visual recognition.

6 0.75360286 161 nips-2001-Reinforcement Learning with Long Short-Term Memory

7 0.75152457 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

8 0.75109339 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

9 0.74979436 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

10 0.74795854 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

11 0.74780202 149 nips-2001-Probabilistic Abstraction Hierarchies

12 0.74641109 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

13 0.74589837 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

14 0.74523973 56 nips-2001-Convolution Kernels for Natural Language

15 0.74456286 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

16 0.74055773 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

17 0.73973107 57 nips-2001-Correlation Codes in Neuronal Populations

18 0.73926854 183 nips-2001-The Infinite Hidden Markov Model

19 0.73835206 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

20 0.73826432 123 nips-2001-Modeling Temporal Structure in Classical Conditioning