nips nips2009 nips2009-176 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio
Abstract: A goal of central importance in the study of hierarchical models for object recognition – and indeed the mammalian visual cortex – is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data. In this work we provide a general group-theoretic framework for characterizing and understanding invariance in a family of hierarchical models. We show that by taking an algebraic perspective, one can provide a concise set of conditions which must be met to establish invariance, as well as a constructive prescription for meeting those conditions. Analyses in specific cases of particular relevance to computer vision and text processing are given, yielding insight into how and when invariance can be achieved. We find that the minimal intrinsic properties of a hierarchical model needed to support a particular invariance can be clearly described, thereby encouraging efficient computational implementations. 1
Reference: text
sentIndex sentText sentNum sentScore
1 In this work we provide a general group-theoretic framework for characterizing and understanding invariance in a family of hierarchical models. [sent-5, score-0.573]
2 Analyses in specific cases of particular relevance to computer vision and text processing are given, yielding insight into how and when invariance can be achieved. [sent-7, score-0.462]
3 We find that the minimal intrinsic properties of a hierarchical model needed to support a particular invariance can be clearly described, thereby encouraging efficient computational implementations. [sent-8, score-0.544]
4 Translation invariance is then explicitly built into the processing pathway by way of complex units which pool locally over simple units. [sent-12, score-0.493]
5 Following the flow of processing in a hierarchy from the bottom upwards, the layerwise representations gain invariance while simultaneously becoming selective for more complex patterns. [sent-15, score-0.506]
6 In this paper, we focus on hierarchical models incorporating an explicit attempt to impose transformation invariance, and do not directly address the case of deep layered models without local transformation or pooling operations (e. [sent-17, score-0.319]
7 [11] have established a framework which makes possible a more precise characterization of the operation of hierarchical models via the study of invariance and discrimination properties. [sent-21, score-0.544]
8 study invariance in an implicit, rather than constructive, fashion. [sent-23, score-0.462]
9 In their work, two cases are studied: invariance with respect to image rotations and string reversals, and the analysis is tailored to the particular setting. [sent-24, score-0.615]
10 In this paper, we reinterpret and extend the invariance analysis of Smale et al. [sent-25, score-0.462]
11 using a group-theoretic language towards clarifying and unifying the general properties necessary for invariance in a family of hierarchical models. [sent-26, score-0.544]
12 We additionally find that when one imposes the mild requirement that the transformations of interest have group structure, a broad class of hierarchical models can only be invariant to orthogonal transformations. [sent-28, score-0.49]
13 This result suggests that common architectures found in the literature might need to be rethought and modified so as to allow for broader invariance possibilities. [sent-29, score-0.462]
14 to a more general setting allowing for general pooling functions, and give a proof for invariance of the corresponding family of hierarchical feature maps. [sent-34, score-0.713]
15 We then establish a group-theoretic framework for characterizing invariance in hierarchical models expressed in terms of the objects defined here. [sent-36, score-0.604]
16 Within this framework, we turn to the problem of invariance in two specific domains of practical relevance: images and text strings. [sent-37, score-0.462]
17 We will draw attention to the conditions needed for the neural response to be invariant with respect to a family of arbitrary transformations, and then generalize the neural response map to allow for arbitrary pooling functions. [sent-44, score-0.505]
18 The proof of invariance given in [11] is extended to this generalized setting. [sent-45, score-0.462]
19 Therefore the key step to establishing invariance is verification of this Assumption. [sent-47, score-0.462]
20 There we are able to describe, for a broad range of hierarchical models (including a class of convolutional neural networks [6]), the necessary conditions for invariance to a set of transformations. [sent-49, score-0.61]
21 These are abstract sets in general; however, here we will take them to be composed of translations, with h ∈ Hi defined by h : vi → vi+1. [sent-61, score-0.357]
22 We rule out the degenerate translations and transformations, h or r mapping their entire domain to a single point. [sent-81, score-0.3]
23 When it is necessary to identify transformations defined on a specific domain v, we will use the notation rv : v → v. [sent-82, score-0.517]
24 In order to state the invariance properties of a given feature map, a technical assumption is needed. [sent-86, score-0.545]
25 There exists a surjective map π : H → H satisfying rv ◦ h = π(h) ◦ ru (1) for all h ∈ H. [sent-89, score-0.803]
26 As we will describe below, establishing invariance will boil down to verifying Assumption 1. [sent-92, score-0.521]
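As a concrete illustration of what this verification involves, the sketch below brute-forces the check of Assumption 1 on a toy finite example. The function names, the choice of circular patches u = Z_2 inside v = Z_4, and the particular maps are assumptions of the sketch rather than constructions from the paper; the point is only that, once H, ru and rv are fixed, Assumption 1 is a finite, mechanically checkable condition.

    from itertools import product

    def satisfies_assumption_1(H, r_u, r_v, u_points):
        """Brute-force check of Assumption 1 on a finite patch u.

        H        : list of functions h : u -> v (the built-in translations)
        r_u, r_v : the candidate transformation restricted to u and to v
        Returns True if every h in H has some h' in H with r_v o h = h' o r_u on all
        of u, and the induced map pi : h -> h' can be chosen to be surjective.
        """
        def matches(h, hp):
            return all(r_v(h(x)) == hp(r_u(x)) for x in u_points)

        candidates = [[j for j, hp in enumerate(H) if matches(h, hp)] for h in H]
        if any(not c for c in candidates):
            return False                 # some h has no partner h'
        # pi must be onto H: some choice of one partner per h has to cover all of H
        # (exhaustive search, fine for tiny examples only)
        return any(set(choice) == set(range(len(H))) for choice in product(*candidates))

    # Toy example (an assumption of this sketch): circular patches u = Z_2 inside
    # v = Z_4, H = all cyclic translations, r acting as a reflection on each patch.
    H = [lambda x, a=a: (x + a) % 4 for a in range(4)]   # h_a : u -> v
    r_v = lambda x: (-x) % 4                             # reflection on v = Z_4
    r_u = lambda x: 1 - x                                # reflection on u = {0, 1}
    print(satisfies_assumption_1(H, r_u, r_v, u_points=range(2)))   # True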
27 2 Invariance and Generalized Pooling We next provide a generalized proof of invariance of a family of hierarchical feature maps, where the properties we derive do not depend on the choice of the pooling function. [sent-94, score-0.713]
28 Given the above assumption, invariance can be established for general pooling functions of which the max is only one particular choice. [sent-95, score-0.601]
29 The final step will then be to state an invariance result for the generalized feature map, given that Assumption 1 holds. [sent-97, score-0.492]
30 Given Assumption 1, we can now prove invariance of a neural response feature map built from the general pooling function Ψ. [sent-108, score-0.811]
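To make the role of the pooling function concrete, the following is a minimal one-layer sketch of a neural-response-style feature map in which Ψ enters as a pluggable argument. The normalized-inner-product base kernel, the 1-D patches, and the names neural_response, base_kernel and templates are illustrative assumptions of the sketch, not the paper's exact definitions.

    import numpy as np

    def base_kernel(x, y):
        # An illustrative initial kernel: normalized inner product between two patches.
        return float(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

    def neural_response(f, templates, patch_size, pooling=max):
        """One layer of a neural-response-style feature map with pluggable pooling Psi.

        f          : 1-D array, a function on the larger patch v
        templates  : list of 1-D arrays of length patch_size (functions on the patch u)
        pooling    : Psi, applied to the values {K(f o h, q) : h in H}; max is one choice
        """
        # H: the built-in translations h : u -> v, realized as all u-sized windows of f
        H = [f[i:i + patch_size] for i in range(len(f) - patch_size + 1)]
        return [pooling([base_kernel(f_h, q) for f_h in H]) for q in templates]

    f = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 1.0])
    templates = [np.array([1.0, 2.0, 1.0]), np.array([0.0, 1.0, 0.0])]
    print(neural_response(f, templates, 3, pooling=max))
    print(neural_response(f, templates, 3, pooling=lambda vals: sum(vals) / len(vals)))

Replacing pooling=max by an average, or any other choice of Ψ, leaves the structure of the invariance argument untouched, which is the point of the generalization.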
31 Proving invariance thus reduces to verifying that the Assumption actually holds. [sent-119, score-0.521]
32 If R is fixed, then the assumption can only be satisfied by placing requirements on the sets of built-in translations Hi , i = 1, . [sent-125, score-0.353]
33 Therefore, we will make quantitative, constructive statements about the minimal sets of translations associated to a layer required to support invariance to a set of transformations. [sent-129, score-0.851]
34 The restriction behavior of the translations in H prevents us from simply generating a group out of the elements of H. [sent-141, score-0.486]
35 To get around this difficulty, we will decompose the h ∈ H into a composition of two functions: a translation group action and an inclusion. [sent-142, score-0.318]
36 Let S generate a group of translations T by defining the injective map S → T, a → ta. (2) [sent-143, score-0.733]
37 That is, to every element a ∈ S we associate a member of the group T whose action corresponds to translation in S by a: Ata(x) = x + a for x, a ∈ S. [sent-144, score-0.346]
38 (Although we assume the specific case of translations throughout, the sets of intrinsic operations Hi may more generally contain other kinds of transformations. [sent-145, score-0.3]
39 ) Furthermore, because the translations H can be parameterized by an element of S, one can apply Equation (2) to define an injective map τ : H → T by ha → ta . [sent-147, score-0.695]
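A small sketch may help fix ideas: each translation is represented as a group element ta acting by Ata(x) = x + a, and the parameterization a → ta realizes the maps of Equation (2) and τ. The class and function names and the choice S = R^2 are assumptions of the illustration.

    import numpy as np

    class Translation:
        """The group element t_a of Equation (2), acting on S = R^2 by A_ta(x) = x + a."""
        def __init__(self, a):
            self.a = np.asarray(a, dtype=float)
        def __call__(self, x):            # the action A_ta
            return np.asarray(x, dtype=float) + self.a
        def compose(self, other):         # t_a o t_b = t_{a+b}: composition stays in T
            return Translation(self.a + other.a)
        def inverse(self):
            return Translation(-self.a)

    def tau(a):
        """The injective map tau : H -> T, h_a -> t_a, with a the translation parameter."""
        return Translation(a)

    ta, tb = tau([1.0, 0.0]), tau([0.0, 2.0])
    print(ta.compose(tb)([0.0, 0.0]))     # [1. 2.], i.e. translation by a + b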
40 To avoid confusion we denoted the former case by ru and the latter by rv . [sent-151, score-0.687]
41 Although ru and rv are the same “kind” of transformation, one cannot in general associate to each “kind” of transformation r ∈ R a single element of some group as we did in the case of translations above. [sent-152, score-1.253]
42 We will therefore consider ru and rv to be distinct transformations, loosely associated to r. [sent-154, score-0.687]
43 In our development, we will make the important assumption that the transformations ru , rv ∈ R can be expressed as actions of elements of some group, and denote this group by R. [sent-155, score-1.042]
44 More precisely, for every ru ∈ R, there is assumed to be a corresponding element ρu ∈ R whose action satisfies Aρu (x) = ru (x) for all x ∈ u, and similarly, for every rv ∈ R, there is assumed to be a corresponding element ρv ∈ R whose action satisfies Aρv (x) = rv (x) for all x ∈ v. [sent-156, score-1.592]
45 Assumption 1 requires that rv ◦ h = h′ ◦ ru for h, h′ ∈ H, with the map π : h → h′ onto. [sent-159, score-0.76]
46 Set h = ha, h′ = hb, and denote also by ru, rv the elements of the group R corresponding to the given transformation r ∈ R. [sent-162, score-0.97]
47 The Assumption says in part that rv ◦ h = h′ ◦ ru for some h′ ∈ H. [sent-163, score-0.722]
48 One can now see that if this condition is met, then verifying Equation (3) reduces to checking that Arv ◦ Ata = Atb ◦ Aru, (4) and that the map ta → tb is onto. [sent-169, score-0.454]
49 To do this, one needs R and T to be subgroups of the same group G so that the associativity property of group actions applies. [sent-171, score-0.361]
50 In our setting G is easily shown to be isomorphic to a group with normal subgroup T and subgroup R where each element may be written in the form g = tr for t ∈ T, r ∈ R. [sent-174, score-0.428]
51 Note that although this construction precludes R from containing the transformations in T , allowing R to contain translations is an uninteresting case. [sent-176, score-0.448]
52 Returning to Equation (4), we can apply the associativity property of actions and see that Equation (4) will hold as long as rv T̃ = T̃ ru (6) for every r ∈ R. [sent-178, score-0.687]
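A numerical spot-check of Equation (4) in the planar case, with ru = rv a rotation: conjugating a translation by a rotation yields another translation, so the required tb is translation by the rotated vector. The function names and the particular numbers are assumptions of this sketch.

    import numpy as np

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def check_equation_4(theta, a, x):
        """Check A_rv o A_ta = A_tb o A_ru for planar rotations (ru = rv = rotation by theta).

        Conjugating a translation by a rotation gives another translation, r t_a r^{-1} = t_{Ra},
        so the intertwining element t_b of Equation (4) is translation by b = R a.
        """
        R = rotation(theta)
        a, x = np.asarray(a, dtype=float), np.asarray(x, dtype=float)
        lhs = R @ (x + a)        # A_rv(A_ta(x)): translate by a, then rotate
        b = R @ a                # the translation promised by the compatibility condition
        rhs = R @ x + b          # A_tb(A_ru(x)): rotate, then translate by b = R a
        return np.allclose(lhs, rhs)

    print(check_equation_4(0.7, a=[1.0, 2.0], x=[-0.5, 3.0]))   # True
    # If the set of translations is closed under rotation (a union of circles), the map
    # ta -> t_{Ra} is onto that set, which is what the condition rv T̃ = T̃ ru asks for.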
53 The two invariance conditions we have described thus far combine to capture the content of Assumption 1, but in a manner that separates group related conditions from constraints due to restriction and the nested nature of an architecture’s patch domains. [sent-180, score-0.718]
54 We can summarize the invariance conditions in the form of a concise definition that can be applied to establish invariance of the neural response feature maps Ñm(f), 2 ≤ m ≤ n, with respect to a set of transformations. [sent-181, score-1.133]
55 When ru = rv for all r ∈ R, this means that the normalizer of T̃ in R̃ is R̃. [sent-186, score-0.687]
56 Left transformations rv never take a point in v outside of v, and right transformations ru never take a point in u (respectively v) outside of u (respectively v): im Arv ◦ ιv ⊆ v, im Aru ◦ ιu ⊆ u, im Aru ◦ ιv ⊆ v, for all r ∈ R̃. [sent-188, score-0.983]
57 The final condition above has been added to ensure that any set of translations T̃ we might construct satisfies the implicit assumption that the hierarchy’s translation functions h ∈ H are maps which respect the definition h : u → v. [sent-191, score-0.44]
58 If R̃ and T̃ are compatible, then for each ta ∈ T̃ Equation (3) holds for some tb ∈ T̃, and the map ta → tb is surjective from T̃ → T̃ (by Condition (1) above). [sent-192, score-0.364]
59 As will become clear in the following section, the tools available to us from group theory will provide insight into the structure of compatible sets. [sent-194, score-0.576]
60 We will show that the only way to satisfy Condition (1) in Definition 3 is to require that a compatible T̃ be a union of R̃-orbits, under the action (t, r) → rv t ru^{−1} (7) for t ∈ T̃, r ∈ R. [sent-197, score-0.472]
61 If rv = ru = r, then the action (7) is exactly conjugation, and the R̃-orbit of a translation t ∈ T is the conjugacy class CR̃(t) = {rtr^{−1} | r ∈ R̃}. Orbits of this form are also equivalence classes. [sent-200, score-0.914]
62 The following Proposition shows that, given a set of candidate translations in H, we can construct a set of translations compatible with R̃ by requiring T̃ to be a union of R̃-orbits under the action of conjugation. [sent-202, score-0.764]
63 Let Γ ⊆ T be a given set of translations, and assume the following: (1) G ≅ T ⋊ R, (2) for each r ∈ R, r = ru = rv, (3) R̃ is a subgroup of R. [sent-204, score-0.357]
64 1 Isometries of the Plane Consider the case where G is the group M of planar isometries, u ⊂ v ⊂ S = R2 , and H involves translations in the plane. [sent-210, score-0.486]
65 Let O2 be the group of orthogonal operators, and let ta ∈ T denote a translation represented by the vector a ∈ R2 . [sent-211, score-0.371]
66 We recall that the kernel of a group homomorphism π : G → G̃ is a normal subgroup of G, and that normal subgroups N of G are invariant under the operation of conjugation by elements g of G. [sent-216, score-0.461]
67 For each m ∈ M , ta ∈ T , mta = tb m for some unique element tb ∈ T . [sent-220, score-0.457]
68 Let H be the set of translations associated to an arbitrary layer of the hierarchical feature map and define the injective map τ : H → T by ha → ta , where a is a parameter characterizing the translation. [sent-223, score-0.925]
69 This proposition states that the hierarchical feature map may be made invariant to isometries; however, one might reasonably ask whether the feature map can be invariant to other transformations. [sent-227, score-0.574]
70 The following Proposition confirms that isometries are the only possible transformations, with group structure, to which the hierarchy may be made invariant in the exact sense of Definition 2. [sent-228, score-0.37]
71 Then at all layers, the group of orthogonal operators O2 is the only group of transformations to which the neural response can be invariant. [sent-231, score-0.532]
72 Figure 1: Example illustrating construction of an appropriate H. [sent-232, score-0.385]
73 Suppose H initially contains the translations Γ = {ha, hb, hc}. [sent-233, score-0.3]
74 Then to be invariant to rotations, the condition on H is that H must also include the translations defined by the R̃-orbits OR̃(ta), OR̃(tb) and OR̃(tc). [sent-234, score-0.437]
75 In this example R̃ = SO2, and the orbits are translations to points lying on a circle in the plane. [sent-235, score-0.401]
76 If we choose the group of rotations of the plane by setting R̃ = SO2 ≤ O2, then the orbits OR̃(ta) are circles of radius ‖a‖. [sent-240, score-0.352]
77 Therefore rotation invariance is possible as long as the set T̃ (and therefore H, since we can take H = τ^{−1}(T̃)) includes translations to all points along the circle of radius ‖a‖, for each element ta ∈ T̃. [sent-242, score-1.012]
78 In this case the construction of an appropriate set of translations is similar: we require that T̃ include at least the conjugacy classes with respect to the group Cn, CCn(t), for each t ∈ Γ = τ(H). [sent-247, score-0.486]
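A sketch of this construction for a finite rotation group Cn: starting from an initial set Γ of translation parameters, the compatible set is obtained by closing under conjugation, i.e. by adding the n rotated copies of each parameter, exactly as illustrated in Figure 1. The names cn_orbit, Gamma and T_tilde, and the choice n = 4, are assumptions of the sketch.

    import numpy as np

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def cn_orbit(a, n):
        """Conjugacy class C_Cn(t_a): since r t_a r^{-1} = t_{Ra}, the orbit of the
        translation t_a under the cyclic rotation group Cn consists of the translations
        by the n rotated copies of a."""
        return [rotation(2.0 * np.pi * k / n) @ np.asarray(a, dtype=float) for k in range(n)]

    # Minimal augmentation of an initial set Gamma of translation parameters so that the
    # layer can support invariance to rotations by multiples of 2*pi/n (here n = 4).
    Gamma = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
    T_tilde = [b for a in Gamma for b in cn_orbit(a, n=4)]
    print(len(T_tilde))   # 8: each initial translation contributes its orbit of 4 copies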
79 So we can conclude invariance of the initial kernel. [sent-253, score-0.462]
80 Clearly translations over a finite string do not constitute a group as the law of composition is not closed in this case. [sent-260, score-0.545]
81 Following the case of circular data where arbitrary translations are allowed, we will then consider the original setting described in Smale et al. [sent-262, score-0.338]
82 In particular, one can interpret the operation of the translations in H as a circular shift of a string followed by truncation outside of a fixed window. [sent-265, score-0.394]
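A minimal sketch of this shift-then-truncate reading of a built-in translation on strings; the function name and the particular window length are assumptions of the illustration.

    def shift_and_truncate(s, a, window):
        """One built-in translation h_a on strings, read as in the text: circularly shift
        the closed string s by a positions, then truncate outside a window of the given length."""
        n = len(s)
        shifted = ''.join(s[(i + a) % n] for i in range(n))
        return shifted[:window]

    print(shift_and_truncate("abcdefg", 2, 3))   # 'cde'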
83 The cyclic group of circular shifts of an n-string is readily seen to be isomorphic to the group of rotations of an n-sided regular polygon. [sent-266, score-0.552]
84 Similarly, reversal of an n-string is isomorphic to reflection of an n-sided polygon, and describes a cyclic group of order two. [sent-267, score-0.292]
85 As in Equation (5), we can combine rotation and reflection via a semidirect product Dn ≅ Cn ⋊ C2, (9) where Ck denotes the cyclic group of order k. [sent-268, score-0.291]
86 Then the group of symmetries of a closed n-string is described by the relations Dn = ⟨t, rv | t^n, rv^2, rv t rv t⟩. (10) [sent-271, score-0.97]
87 As manipulations of a closed n-string and an n-sided polygon are isomorphic, we will use geometric concepts and terminology to establish invariance of the neural response defined on strings with respect to reversal. [sent-276, score-0.715]
88 In the case of reflections of strings, ru is quite distinct from rv . [sent-278, score-0.687]
89 The latter reflection, rv, is the usual reflection of a v-sided regular polygon, whereas we would like ru to reflect a smaller u-sided polygon. [sent-279, score-0.687]
90 To build a group out of such operations, however, we will need to ensure that ru and rv both apply in the context of v-sided polygons. [sent-280, score-0.841]
91 This can be done by extending Aru to v by defining ru to be the composition of two operations: one which reflects the u portion of a string and leaves the rest fixed, and another which reflects the remaining (v − u)-substring while leaving the first u-substring fixed. [sent-281, score-0.409]
92 In this case, one will notice that ru can be written in terms of rotations and the usual reflection rv: ru = rv t^{−u} = t^u rv. (11) [sent-282, score-0.318]
93 This also implies that for any x ∈ T, {rxr^{−1} | r ∈ {rv}} = {rxr^{−1} | r ∈ {rv, ru}}, where we have used the fact that T is abelian, and applied the relations in Equation (10). [sent-283, score-1.087]
94 Given x ∈ T, a reasonable requirement is that there must exist an x′ ∈ T̃ such that rv x = x′ ru. [sent-285, score-0.687]
95 In this case x′ = rv x ru = rv x rv t^{−u} = x^{−1} rv rv t^{−u} = x^{−1} t^{−u}, (12) where the second equality follows from Equation (11), and the remaining equalities follow from the relations (10). [sent-286, score-1.507]
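The relation in Equation (11) can also be checked directly on a concrete closed string. The shift direction and the right-to-left reading of composition in the sketch below are conventions chosen for the illustration, so it verifies one consistent interpretation of ru = rv t^{−u} = t^u rv rather than reproducing the paper's exact notation.

    def t_shift(s, k=1):
        """Cyclic shift t^k of a closed n-string, with t(s)[i] = s[(i - 1) mod n]."""
        n = len(s)
        return ''.join(s[(i - k) % n] for i in range(n))

    def r_v(s):
        """Reversal of the whole closed n-string (reflection of the n-sided polygon)."""
        return s[::-1]

    def r_u(s, u):
        """The extended reflection: reverse the first u characters and, separately,
        the remaining (v - u)-substring, leaving each block in place."""
        return s[:u][::-1] + s[u:][::-1]

    s, u = "abcdefg", 3
    # Equation (11), ru = rv t^{-u} = t^u rv, under this choice of conventions:
    assert r_u(s, u) == t_shift(r_v(s), u) == r_v(t_shift(s, -u))
    print(r_u(s, u))    # 'cbagfed'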
96 Let H be the set of translations associated to an arbitrary layer of the hierarchical feature map and define the injective map τ : H → T by ha → ta , where a is a parameter characterizing the translation. [sent-289, score-0.925]
97 Proposition 4 in fact points to the minimum T for reversals in this scenario as well, noticing that the set of allowed translations is the same set above but with the illegal elements removed. [sent-295, score-0.329]
98 5 Conclusion We have shown that the tools offered by group theory can be profitably applied towards understanding invariance properties of a broad class of deep, hierarchical models. [sent-298, score-0.725]
99 If one knows in advance the transformations to which a model should be invariant, then the translations which must be built into the hierarchy can be described. [sent-299, score-0.523]
100 In the case of images, we showed that the only group to which a model in the class of interest can be invariant is the group of planar orthogonal operators. [sent-300, score-0.446]
wordName wordTfidf (topN-words)
[('invariance', 0.462), ('rv', 0.369), ('ru', 0.318), ('translations', 0.3), ('smale', 0.165), ('ta', 0.161), ('group', 0.154), ('transformations', 0.148), ('pooling', 0.139), ('tb', 0.13), ('im', 0.106), ('invariant', 0.106), ('strings', 0.103), ('nm', 0.102), ('orbits', 0.101), ('rotations', 0.097), ('ection', 0.086), ('aru', 0.082), ('hierarchical', 0.082), ('subgroup', 0.082), ('ha', 0.08), ('response', 0.076), ('isomorphic', 0.074), ('proposition', 0.074), ('map', 0.073), ('action', 0.073), ('algebraic', 0.073), ('ata', 0.072), ('conjugation', 0.066), ('isometries', 0.066), ('hi', 0.063), ('compatible', 0.061), ('verifying', 0.059), ('vi', 0.057), ('string', 0.056), ('translation', 0.056), ('rotation', 0.053), ('assumption', 0.053), ('subgroups', 0.053), ('layer', 0.052), ('km', 0.051), ('atb', 0.049), ('bouvrie', 0.049), ('semidirect', 0.049), ('transformation', 0.049), ('symmetries', 0.047), ('ag', 0.045), ('injective', 0.045), ('hierarchy', 0.044), ('surjective', 0.043), ('polygon', 0.043), ('rosasco', 0.043), ('cr', 0.043), ('re', 0.04), ('patches', 0.04), ('nition', 0.038), ('circular', 0.038), ('tc', 0.037), ('concise', 0.037), ('layers', 0.037), ('constructive', 0.037), ('element', 0.036), ('cyclic', 0.035), ('conditions', 0.035), ('says', 0.035), ('composition', 0.035), ('vm', 0.034), ('cn', 0.033), ('arv', 0.033), ('dihedral', 0.033), ('hubel', 0.033), ('imaru', 0.033), ('rxr', 0.033), ('conjugacy', 0.032), ('planar', 0.032), ('restriction', 0.032), ('condition', 0.031), ('establish', 0.031), ('relations', 0.031), ('dn', 0.031), ('convolutional', 0.031), ('qm', 0.031), ('built', 0.031), ('union', 0.03), ('hu', 0.03), ('feature', 0.03), ('convolution', 0.029), ('reversal', 0.029), ('substrings', 0.029), ('serre', 0.029), ('prescription', 0.029), ('reversals', 0.029), ('characterizing', 0.029), ('met', 0.028), ('vn', 0.027), ('associate', 0.027), ('architecture', 0.027), ('tools', 0.027), ('compositions', 0.026), ('isomorphism', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 176 nips-2009-On Invariance in Hierarchical Models
Author: Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio
Abstract: A goal of central importance in the study of hierarchical models for object recognition – and indeed the mammalian visual cortex – is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data. In this work we provide a general group-theoretic framework for characterizing and understanding invariance in a family of hierarchical models. We show that by taking an algebraic perspective, one can provide a concise set of conditions which must be met to establish invariance, as well as a constructive prescription for meeting those conditions. Analyses in specific cases of particular relevance to computer vision and text processing are given, yielding insight into how and when invariance can be achieved. We find that the minimal intrinsic properties of a hierarchical model needed to support a particular invariance can be clearly described, thereby encouraging efficient computational implementations. 1
2 0.38848785 151 nips-2009-Measuring Invariances in Deep Networks
Author: Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng
Abstract: For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently, deep architectures trained in an unsupervised manner have been proposed as an automatic method for extracting useful features. However, it is difficult to evaluate the learned features by any means other than using them in a classifier. In this paper, we propose a number of empirical tests that directly measure the degree to which these learned features are invariant to different input transformations. We find that stacked autoencoders learn modestly increasingly invariant features with depth when trained on natural images. We find that convolutional deep belief networks learn substantially more invariant features in each layer. These results further justify the use of “deep” vs. “shallower” representations, but suggest that mechanisms beyond merely stacking one autoencoder on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the development of future algorithms. 1
3 0.10813817 137 nips-2009-Learning transport operators for image manifolds
Author: Benjamin Culpepper, Bruno A. Olshausen
Abstract: We describe an unsupervised manifold learning algorithm that represents a surface through a compact description of operators that traverse it. The operators are based on matrix exponentials, which are the solution to a system of first-order linear differential equations. The matrix exponents are represented by a basis that is adapted to the statistics of the data so that the infinitesimal generator for a trajectory along the underlying manifold can be produced by linearly composing a few elements. The method is applied to recover topological structure from low dimensional synthetic data, and to model local structure in how natural images change over time and scale. 1
4 0.068138056 219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks
Author: Yoshua Bengio, James S. Bergstra
Abstract: We introduce a new type of neural network activation function based on recent physiological rate models for complex cells in visual area V1. A single-hiddenlayer neural network of this kind of model achieves 1.50% error on MNIST. We also introduce an existing criterion for learning slow, decorrelated features as a pretraining strategy for image models. This pretraining strategy results in orientation-selective features, similar to the receptive fields of complex cells. With this pretraining, the same single-hidden-layer model achieves 1.34% error, even though the pretraining sample distribution is very different from the fine-tuning distribution. To implement this pretraining strategy, we derive a fast algorithm for online learning of decorrelated features such that each iteration of the algorithm runs in linear time with respect to the number of features. 1
5 0.064588062 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection
Author: Sanja Fidler, Marko Boben, Ales Leonardis
Abstract: Multi-class object learning and detection is a challenging problem due to the large number of object classes and their high visual variability. Specialized detectors usually excel in performance, while joint representations optimize sharing and reduce inference time — but are complex to train. Conveniently, sequential class learning cuts down training time by transferring existing knowledge to novel classes, but cannot fully exploit the shareability of features among object classes and might depend on ordering of classes during learning. In hierarchical frameworks these issues have been little explored. In this paper, we provide a rigorous experimental analysis of various multiple object class learning strategies within a generative hierarchical framework. Specifically, we propose, evaluate and compare three important types of multi-class learning: 1.) independent training of individual categories, 2.) joint training of classes, and 3.) sequential learning of classes. We explore and compare their computational behavior (space and time) and detection performance as a function of the number of learned object classes on several recognition datasets. We show that sequential training achieves the best trade-off between inference and training times at a comparable detection performance and could thus be used to learn the classes on a larger scale. 1
6 0.063504249 230 nips-2009-Statistical Consistency of Top-k Ranking
7 0.063373595 177 nips-2009-On Learning Rotations
8 0.063163348 2 nips-2009-3D Object Recognition with Deep Belief Nets
9 0.058458108 236 nips-2009-Structured output regression for detection with partial truncation
10 0.053238787 43 nips-2009-Bayesian estimation of orientation preference maps
11 0.052301671 161 nips-2009-Nash Equilibria of Static Prediction Games
12 0.051754601 1 nips-2009-$L 1$-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry
13 0.049306087 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference
14 0.047786407 11 nips-2009-A General Projection Property for Distribution Families
15 0.04704126 241 nips-2009-The 'tree-dependent components' of natural scenes are edge filters
16 0.047027316 245 nips-2009-Thresholding Procedures for High Dimensional Variable Selection and Statistical Estimation
17 0.044133574 119 nips-2009-Kernel Methods for Deep Learning
18 0.043115232 102 nips-2009-Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models
19 0.042139068 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior
20 0.041620504 14 nips-2009-A Parameter-free Hedging Algorithm
topicId topicWeight
[(0, -0.158), (1, -0.054), (2, -0.0), (3, 0.023), (4, -0.043), (5, 0.064), (6, -0.014), (7, 0.101), (8, 0.059), (9, -0.105), (10, 0.046), (11, -0.029), (12, -0.261), (13, 0.069), (14, 0.196), (15, 0.029), (16, 0.063), (17, -0.087), (18, -0.048), (19, 0.055), (20, -0.105), (21, 0.148), (22, -0.053), (23, -0.02), (24, -0.055), (25, 0.02), (26, -0.056), (27, -0.173), (28, 0.022), (29, 0.023), (30, 0.066), (31, 0.066), (32, 0.031), (33, 0.033), (34, -0.01), (35, 0.051), (36, 0.067), (37, -0.084), (38, -0.013), (39, -0.052), (40, 0.276), (41, 0.078), (42, -0.032), (43, 0.21), (44, 0.091), (45, 0.006), (46, -0.173), (47, -0.075), (48, -0.018), (49, -0.134)]
simIndex simValue paperId paperTitle
same-paper 1 0.96724367 176 nips-2009-On Invariance in Hierarchical Models
Author: Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio
Abstract: A goal of central importance in the study of hierarchical models for object recognition – and indeed the mammalian visual cortex – is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data. In this work we provide a general group-theoretic framework for characterizing and understanding invariance in a family of hierarchical models. We show that by taking an algebraic perspective, one can provide a concise set of conditions which must be met to establish invariance, as well as a constructive prescription for meeting those conditions. Analyses in specific cases of particular relevance to computer vision and text processing are given, yielding insight into how and when invariance can be achieved. We find that the minimal intrinsic properties of a hierarchical model needed to support a particular invariance can be clearly described, thereby encouraging efficient computational implementations. 1
2 0.79311925 151 nips-2009-Measuring Invariances in Deep Networks
Author: Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng
Abstract: For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently, deep architectures trained in an unsupervised manner have been proposed as an automatic method for extracting useful features. However, it is difficult to evaluate the learned features by any means other than using them in a classifier. In this paper, we propose a number of empirical tests that directly measure the degree to which these learned features are invariant to different input transformations. We find that stacked autoencoders learn modestly increasingly invariant features with depth when trained on natural images. We find that convolutional deep belief networks learn substantially more invariant features in each layer. These results further justify the use of “deep” vs. “shallower” representations, but suggest that mechanisms beyond merely stacking one autoencoder on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the development of future algorithms. 1
3 0.37828776 219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks
Author: Yoshua Bengio, James S. Bergstra
Abstract: We introduce a new type of neural network activation function based on recent physiological rate models for complex cells in visual area V1. A single-hiddenlayer neural network of this kind of model achieves 1.50% error on MNIST. We also introduce an existing criterion for learning slow, decorrelated features as a pretraining strategy for image models. This pretraining strategy results in orientation-selective features, similar to the receptive fields of complex cells. With this pretraining, the same single-hidden-layer model achieves 1.34% error, even though the pretraining sample distribution is very different from the fine-tuning distribution. To implement this pretraining strategy, we derive a fast algorithm for online learning of decorrelated features such that each iteration of the algorithm runs in linear time with respect to the number of features. 1
4 0.33483198 137 nips-2009-Learning transport operators for image manifolds
Author: Benjamin Culpepper, Bruno A. Olshausen
Abstract: We describe an unsupervised manifold learning algorithm that represents a surface through a compact description of operators that traverse it. The operators are based on matrix exponentials, which are the solution to a system of first-order linear differential equations. The matrix exponents are represented by a basis that is adapted to the statistics of the data so that the infinitesimal generator for a trajectory along the underlying manifold can be produced by linearly composing a few elements. The method is applied to recover topological structure from low dimensional synthetic data, and to model local structure in how natural images change over time and scale. 1
Author: Sanja Fidler, Marko Boben, Ales Leonardis
Abstract: Multi-class object learning and detection is a challenging problem due to the large number of object classes and their high visual variability. Specialized detectors usually excel in performance, while joint representations optimize sharing and reduce inference time — but are complex to train. Conveniently, sequential class learning cuts down training time by transferring existing knowledge to novel classes, but cannot fully exploit the shareability of features among object classes and might depend on ordering of classes during learning. In hierarchical frameworks these issues have been little explored. In this paper, we provide a rigorous experimental analysis of various multiple object class learning strategies within a generative hierarchical framework. Specifically, we propose, evaluate and compare three important types of multi-class learning: 1.) independent training of individual categories, 2.) joint training of classes, and 3.) sequential learning of classes. We explore and compare their computational behavior (space and time) and detection performance as a function of the number of learned object classes on several recognition datasets. We show that sequential training achieves the best trade-off between inference and training times at a comparable detection performance and could thus be used to learn the classes on a larger scale. 1
6 0.31361839 177 nips-2009-On Learning Rotations
7 0.2933386 105 nips-2009-Grouped Orthogonal Matching Pursuit for Variable Selection and Prediction
8 0.28333473 2 nips-2009-3D Object Recognition with Deep Belief Nets
9 0.28004774 78 nips-2009-Efficient Moments-based Permutation Tests
10 0.27342302 1 nips-2009-$L 1$-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry
11 0.27003613 180 nips-2009-On the Convergence of the Concave-Convex Procedure
12 0.26720154 119 nips-2009-Kernel Methods for Deep Learning
13 0.24778509 216 nips-2009-Sequential effects reflect parallel learning of multiple environmental regularities
14 0.24647041 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection
15 0.23851511 253 nips-2009-Unsupervised feature learning for audio classification using convolutional deep belief networks
16 0.23706558 194 nips-2009-Predicting the Optimal Spacing of Study: A Multiscale Context Model of Memory
17 0.22914882 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis
18 0.2110341 161 nips-2009-Nash Equilibria of Static Prediction Games
19 0.19970386 69 nips-2009-Discrete MDL Predicts in Total Variation
20 0.19685306 43 nips-2009-Bayesian estimation of orientation preference maps
topicId topicWeight
[(24, 0.038), (25, 0.06), (31, 0.01), (35, 0.032), (36, 0.065), (39, 0.026), (58, 0.062), (61, 0.017), (71, 0.05), (81, 0.011), (86, 0.531), (91, 0.014)]
simIndex simValue paperId paperTitle
1 0.99084324 120 nips-2009-Kernels and learning curves for Gaussian process regression on random graphs
Author: Peter Sollich, Matthew Urry, Camille Coti
Abstract: We investigate how well Gaussian process regression can learn functions defined on graphs, using large regular random graphs as a paradigmatic example. Random-walk based kernels are shown to have some non-trivial properties: within the standard approximation of a locally tree-like graph structure, the kernel does not become constant, i.e. neighbouring function values do not become fully correlated, when the lengthscale σ of the kernel is made large. Instead the kernel attains a non-trivial limiting form, which we calculate. The fully correlated limit is reached only once loops become relevant, and we estimate where the crossover to this regime occurs. Our main subject are learning curves of Bayes error versus training set size. We show that these are qualitatively well predicted by a simple approximation using only the spectrum of a large tree as input, and generically scale with n/V , the number of training examples per vertex. We also explore how this behaviour changes for kernel lengthscales that are large enough for loops to become important. 1 Motivation and Outline Gaussian processes (GPs) have become a standard part of the machine learning toolbox [1]. Learning curves are a convenient way of characterizing their capabilities: they give the generalization error as a function of the number of training examples n, averaged over all datasets of size n under appropriate assumptions about the process generating the data. We focus here on the case of GP regression, where a real-valued output function f (x) is to be learned. The general behaviour of GP learning curves is then relatively well understood for the scenario where the inputs x come from a continuous space, typically Rn [2, 3, 4, 5, 6, 7, 8, 9, 10]. For large n, the learning curves then typically decay as a power law ∝ n−α with an exponent α ≤ 1 that depends on the dimensionality n of the space as well as the smoothness properties of the function f (x) as encoded in the covariance function. But there are many interesting application domains that involve discrete input spaces, where x could be a string, an amino acid sequence (with f (x) some measure of secondary structure or biological function), a research paper (with f (x) related to impact), a web page (with f (x) giving a score used to rank pages), etc. In many such situations, similarity between different inputs – which will govern our prior beliefs about how closely related the corresponding function values are – can be represented by edges in a graph. One would then like to know how well GP regression can work in such problem domains; see also [11] for a related online regression algorithm. We study this 1 problem here theoretically by focussing on the paradigmatic example of random regular graphs, where every node has the same connectivity. Sec. 2 discusses the properties of random-walk inspired kernels [12] on such random graphs. These are analogous to the standard radial basis function kernels exp[−(x − x )2 /(2σ 2 )], but we find that they have surprising properties on large graphs. In particular, while loops in large random graphs are long and can be neglected for many purposes, by approximating the graph structure as locally tree-like, here this leads to a non-trivial limiting form of the kernel for σ → ∞ that is not constant. The fully correlated limit, where the kernel is constant, is obtained only because of the presence of loops, and we estimate when the crossover to this regime takes place. In Sec. 3 we move on to the learning curves themselves. 
A simple approximation based on the graph eigenvalues, using only the known spectrum of a large tree as input, works well qualitatively and predicts the exact asymptotics for large numbers of training examples. When the kernel lengthscale is not too large, below the crossover discussed in Sec. 2 for the covariance kernel, the learning curves depend on the number of examples per vertex. We also explore how this behaviour changes as the kernel lengthscale is made larger. Sec. 4 summarizes the results and discusses some open questions. 2 Kernels on graphs and trees We assume that we are trying to learn a function defined on the vertices of a graph. Vertices are labelled by i = 1 . . . V , instead of the generic input label x we used in the introduction, and the associated function values are denoted fi ∈ R. By taking the prior P (f ) over these functions f = (f1 , . . . , fV ) as a (zero mean) Gaussian process we are saying that P (f ) ∝ exp(− 1 f T C −1 f ). 2 The covariance function or kernel C is then, in our graph setting, just a positive definite V × V matrix. The graph structure is characterized by a V × V adjacency matrix, with Aij = 1 if nodes i and j are connected by an edge, and 0 otherwise. All links are assumed to be undirected, so that Aij = Aji , V and there are no self-loops (Aii = 0). The degree of each node is then defined as di = j=1 Aij . The covariance kernels we discuss in this paper are the natural generalizations of the squaredexponential kernel in Euclidean space [12]. They can be expressed in terms of the normalized graph Laplacian, defined as L = 1 − D −1/2 AD −1/2 , where D is a diagonal matrix with entries d1 , . . . , dV and 1 is the V × V identity matrix. An advantage of L over the unnormalized Laplacian D − A, which was used in the earlier paper [13], is that the eigenvalues of L (again a V × V matrix) lie in the interval [0,2] (see e.g. [14]). From the graph Laplacian, the covariance kernels we consider here are constructed as follows. The p-step random walk kernel is (for a ≥ 2) C ∝ (1 − a−1 L)p = 1 − a−1 1 + a−1 D −1/2 AD −1/2 p (1) while the diffusion kernel is given by 1 C ∝ exp − 2 σ 2 L ∝ exp 1 2 −1/2 AD −1/2 2σ D (2) We will always normalize these so that (1/V ) i Cii = 1, which corresponds to setting the average (over vertices) prior variance of the function to be learned to unity. To see the connection of the above kernels to random walks, assume we have a walker on the graph who at each time step selects randomly one of the neighbouring vertices and moves to it. The probability for a move from vertex j to i is then Aij /dj . The transition matrix after s steps follows as (AD −1 )s : its ij-element gives the probability of being on vertex i, having started at j. We can now compare this with the p-step kernel by expanding the p-th power in (1): p p ( p )a−s (1−a−1 )p−s (D −1/2 AD −1/2 )s = D −1/2 s C∝ s=0 ( p )a−s (1−a−1 )p−s (AD −1 )s D 1/2 s s=0 (3) Thus C is essentially a random walk transition matrix, averaged over the number of steps s with s ∼ Binomial(p, 1/a) 2 (4) a=2, d=3 K1 1 1 Cl,p 0.9 p=1 p=2 p=3 p=4 p=5 p=10 p=20 p=50 p=100 p=200 p=500 p=infty 0.8 0.6 0.4 d=3 0.8 0.7 0.6 a=2, V=infty a=2, V=500 a=4, V=infty a=4, V=500 0.5 0.4 0.3 0.2 0.2 ln V / ln(d-1) 0.1 0 0 5 10 l 0 15 1 10 p/a 100 1000 Figure 1: (Left) Random walk kernel C ,p plotted vs distance along graph, for increasing number of steps p and a = 2, d = 3. Note the convergence to a limiting shape for large p that is not the naive fully correlated limit C ,p→∞ = 1. 
(Right) Numerical results for average covariance K1 between neighbouring nodes, averaged over neighbours and over randomly generated regular graphs. This shows that 1/a can be interpreted as the probability of actually taking a step at each of p “attempts”. To obtain the actual C the resulting averaged transition matrix is premultiplied by D −1/2 and postmultiplied by D 1/2 , which ensures that the kernel C is symmetric. For the diffusion kernel, one finds an analogous result but the number of random walk steps is now distributed as s ∼ Poisson(σ 2 /2). This implies in particular that the diffusion kernel is the limit of the p-step kernel for p, a → ∞ at constant p/a = σ 2 /2. Accordingly, we discuss mainly the p-step kernel in this paper because results for the diffusion kernel can be retrieved as limiting cases. In the limit of a large number of steps s, the random walk on a graph will reach its stationary distribution p∞ ∝ De where e = (1, . . . , 1). (This form of p∞ can be verified by checking that it remains unchanged after multiplication with the transition matrix AD −1 .) The s-step transition matrix for large s is then p∞ eT = DeeT because we converge from any starting vertex to the stationary distribution. It follows that for large p or σ 2 the covariance kernel becomes C ∝ D 1/2 eeT D 1/2 , i.e. Cij ∝ (di dj )1/2 . This is consistent with the interpretation of σ or (p/a)1/2 as a lengthscale over which the random walk can diffuse along the graph: once this lengthscale becomes large, the covariance kernel Cij is essentially independent of the distance (along the graph) between the vertices i and j, and the function f becomes fully correlated across the graph. (Explicitly f = vD 1/2 e under the prior, with v a single Gaussian random variable.) As we next show, however, the approach to this fully correlated limit as p or σ are increased is non-trivial. We focus in this paper on kernels on random regular graphs. This means we consider adjacency matrices A which are regular in the sense that they give for each vertex the same degree, di = d. A uniform probability distribution is then taken across all A that obey this constraint [15]. What will the above kernels look like on typical samples drawn from this distribution? Such random regular graphs will have long loops, of length of order ln(V ) or larger if V is large. Their local structure is then that of a regular tree of degree d, which suggests that it should be possible to calculate the kernel accurately within a tree approximation. In a regular tree all nodes are equivalent, so the kernel can only depend on the distance between two nodes i and j. Denoting this kernel value C ,p for a p-step random walk kernel, one has then C ,p=0 = δ ,0 and γp+1 C0,p+1 γp+1 C ,p+1 = = 1− 1 ad C 1 a C0,p + −1,p 1 a + 1− C1,p 1 a C (5) ,p + d−1 ad C +1,p for ≥1 (6) where γp is chosen to achieve the desired normalization C0,p = 1 of the prior variance for every p. Fig. 1(left) shows results obtained by iterating this recursion numerically, for a regular graph (in the tree approximation) with degree d = 3, and a = 2. As expected the kernel becomes more longranged initially as p increases, but eventually it is seen to approach a non-trivial limiting form. This can be calculated as C ,p→∞ = [1 + (d − 1)/d](d − 1)− /2 (7) 3 and is also plotted in the figure, showing good agreement with the numerical iteration. There are (at least) two ways of obtaining the result (7). 
One is to take the limit σ → ∞ of the integral representation of the diffusion kernel on regular trees given in [16] (which is also quoted in [13] but with a typographical error that effectively removes the factor (d − 1)− /2 ). Another route is to find the steady state of the recursion for C ,p . This is easy to do but requires as input the unknown steady state value of γp . To determine this, one can map from C ,p to the total random walk probability S ,p in each “shell” of vertices at distance from the starting vertex, changing variables to S0,p = C0,p and S ,p = d(d − 1) −1 C ,p ( ≥ 1). Omitting the factors γp , this results in a recursion for S ,p that simply describes a biased random walk on = 0, 1, 2, . . ., with a probability of 1 − 1/a of remaining at the current , probability 1/(ad) of moving to the left and probability (d − 1)/(ad) of moving to the right. The point = 0 is a reflecting barrier where only moves to the right are allowed, with probability 1/a. The time evolution of this random walk starting from = 0 can now be analysed as in [17]. As expected from the balance of moves to the left and right, S ,p for large p is peaked around the average position of the walk, = p(d − 2)/(ad). For smaller than this S ,p has a tail behaving as ∝ (d − 1) /2 , and converting back to C ,p gives the large- scaling of C ,p→∞ ∝ (d − 1)− /2 ; this in turn fixes the value of γp→∞ and so eventually gives (7). The above analysis shows that for large p the random walk kernel, calculated in the absence of loops, does not approach the expected fully correlated limit; given that all vertices have the same degree, the latter would correspond to C ,p→∞ = 1. This implies, conversely, that the fully correlated limit is reached only because of the presence of loops in the graph. It is then interesting to ask at what point, as p is increased, the tree approximation for the kernel breaks down. To estimate this, we note that a regular tree of depth has V = 1 + d(d − 1) −1 nodes. So a regular graph can be tree-like at most out to ≈ ln(V )/ ln(d − 1). Comparing with the typical number of steps our random walk takes, which is p/a from (4), we then expect loop effects to appear in the covariance kernel when p/a ≈ ln(V )/ ln(d − 1) (8) To check this prediction, we measure the analogue of C1,p on randomly generated [15] regular graphs. Because of the presence of loops, the local kernel values are not all identical, so the appropriate estimate of what would be C1,p on a tree is K1 = Cij / Cii Cjj for neighbouring nodes i and j. Averaging over all pairs of such neighbours, and then over a number of randomly generated graphs we find the results in Fig. 1(right). The results for K1 (symbols) accurately track the tree predictions (lines) for small p/a, and start to deviate just around the values of p/a expected from (8), as marked by the arrow. The deviations manifest themselves in larger values of K1 , which eventually – now that p/a is large enough for the kernel to “notice” the loops - approach the fully correlated limit K1 = 1. 3 Learning curves We now turn to the analysis of learning curves for GP regression on random regular graphs. We assume that the target function f ∗ is drawn from a GP prior with a p-step random walk covariance kernel C. Training examples are input-output pairs (iµ , fi∗ + ξµ ) where ξµ is i.i.d. Gaussian noise µ of variance σ 2 ; the distribution of training inputs iµ is taken to be uniform across vertices. Inference from a data set D of n such examples µ = 1, . . . 
, n takes place using the prior defined by C and a Gaussian likelihood with noise variance σ 2 . We thus assume an inference model that is matched to the data generating process. This is obviously an over-simplification but is appropriate for the present first exploration of learning curves on random graphs. We emphasize that as n is increased we see more and more function values from the same graph, which is fixed by the problem domain; the graph does not grow. ˆ The generalization error is the squared difference between the estimated function fi and the target fi∗ , averaged across the (uniform) input distribution, the posterior distribution of f ∗ given D, the distribution of datasets D, and finally – in our non-Euclidean setting – the random graph ensemble. Given the assumption of a matched inference model, this is just the average Bayes error, or the average posterior variance, which can be expressed explicitly as [1] (n) = V −1 Cii − k(i)T Kk−1 (i) i 4 D,graphs (9) where the average is over data sets and over graphs, K is an n × n matrix with elements Kµµ = Ciµ ,iµ + σ 2 δµµ and k(i) is a vector with entries kµ (i) = Ci,iµ . The resulting learning curve depends, in addition to n, on the graph structure as determined by V and d, and the kernel and noise level as specified by p, a and σ 2 . We fix d = 3 throughout to avoid having too many parameters to vary, although similar results are obtained for larger d. Exact prediction of learning curves by analytical calculation is very difficult due to the complicated way in which the random selection of training inputs enters the matrix K and vector k in (9). However, by first expressing these quantities in terms of kernel eigenvalues (see below) and then approximating the average over datasets, one can derive the approximation [3, 6] =g n + σ2 V , g(h) = (λ−1 + h)−1 α (10) α=1 This equation for has to be solved self-consistently because also appears on the r.h.s. In the Euclidean case the resulting predictions approximate the true learning curves quite reliably. The derivation of (10) for inputs on a fixed graph is unchanged from [3], provided the kernel eigenvalues λα appearing in the function g(h) are defined appropriately, by the eigenfunction condition Cij φj = λφi ; the average here is over the input distribution, i.e. . . . = V −1 j . . . From the definition (1) of the p-step kernel, we see that then λα = κV −1 (1 − λL /a)p in terms of the corα responding eigenvalue of the graph Laplacian L. The constant κ has to be chosen to enforce our normalization convention α λα = Cjj = 1. Fortunately, for large V the spectrum of the Laplacian of a random regular graph can be approximated by that of the corresponding large regular tree, which has spectral density [14] L ρ(λ ) = 4(d−1) − (λL − 1)2 d2 2πdλL (2 − λL ) (11) in the range λL ∈ [λL , λL ], λL = 1 + 2d−1 (d − 1)1/2 , where the term under the square root is ± + − positive. (There are also two isolated eigenvalues λL = 0, 2 but these have weight 1/V each and so can be ignored for large V .) Rewriting (10) as = V −1 α [(V λα )−1 + (n/V )( + σ 2 )−1 ]−1 and then replacing the average over kernel eigenvalues by an integral over the spectral density leads to the following prediction for the learning curve: = dλL ρ(λL )[κ−1 (1 − λL /a)−p + ν/( + σ 2 )]−1 (12) with κ determined from κ dλL ρ(λL )(1 − λL /a)p = 1. A general consequence of the form of this result is that the learning curve depends on n and V only through the ratio ν = n/V , i.e. the number of training examples per vertex. 
The approximation (12) also predicts that the learning curve will have two regimes, one for small ν where σ 2 and the generalization error will be essentially 2 independent of σ ; and another for large ν where σ 2 so that can be neglected on the r.h.s. and one has a fully explicit expression for . We compare the above prediction in Fig. 2(left) to the results of numerical simulations of the learning curves, averaged over datasets and random regular graphs. The two regimes predicted by the approximation are clearly visible; the approximation works well inside each regime but less well in the crossover between the two. One striking observation is that the approximation seems to predict the asymptotic large-n behaviour exactly; this is distinct to the Euclidean case, where generally only the power-law of the n-dependence but not its prefactor come out accurately. To see why, we exploit that for large n (where σ 2 ) the approximation (9) effectively neglects fluctuations in the training input “density” of a randomly drawn set of training inputs [3, 6]. This is justified in the graph case for large ν = n/V , because the number of training inputs each vertex receives, Binomial(n, 1/V ), has negligible relative fluctuations away from its mean ν. In the Euclidean case there is no similar result, because all training inputs are different with probability one even for large n. Fig. 2(right) illustrates that for larger a the difference in the crossover region between the true (numerically simulated) learning curves and our approximation becomes larger. This is because the average number of steps p/a of the random walk kernel then decreases: we get closer to the limit of uncorrelated function values (a → ∞, Cij = δij ). In that limit and for low σ 2 and large V the 5 V=500 (filled) & 1000 (empty), d=3, a=2, p=10 V=500, d=3, a=4, p=10 0 0 10 10 ε ε -1 -1 10 10 -2 10 -2 10 2 σ = 0.1 2 σ = 0.1 2 -3 10 σ = 0.01 2 σ = 0.01 -3 10 2 σ = 0.001 2 σ = 0.001 2 -4 10 2 σ = 0.0001 σ = 0.0001 -4 10 2 σ =0 -5 2 σ =0 -5 10 0.1 1 ν=n/V 10 10 0.1 1 ν=n/V 10 Figure 2: (Left) Learning curves for GP regression on random regular graphs with degree d = 3 and V = 500 (small filled circles) and V = 1000 (empty circles) vertices. Plotting generalization error versus ν = n/V superimposes the results for both values of V , as expected from the approximation (12). The lines are the quantitative predictions of this approximation. Noise level as shown, kernel parameters a = 2, p = 10. (Right) As on the left but with V = 500 only and for larger a = 4. 2 V=500, d=3, a=2, p=20 0 0 V=500, d=3, a=2, p=200, σ =0.1 10 10 ε ε simulation -1 2 10 1/(1+n/σ ) theory (tree) theory (eigenv.) -1 10 -2 10 2 σ = 0.1 -3 10 -4 10 -2 10 2 σ = 0.01 2 σ = 0.001 2 σ = 0.0001 -3 10 2 σ =0 -5 10 -4 0.1 1 ν=n/V 10 10 1 10 100 n 1000 10000 Figure 3: (Left) Learning curves for GP regression on random regular graphs with degree d = 3 and V = 500, and kernel parameters a = 2, p = 20; noise level σ 2 as shown. Circles: numerical simulations; lines: approximation (12). (Right) As on the left but for much larger p = 200 and for a single random graph, with σ 2 = 0.1. Dotted line: naive estimate = 1/(1 + n/σ 2 ). Dashed line: approximation (10) using the tree spectrum and the large p-limit, see (17). Solid line: (10) with numerically determined graph eigenvalues λL as input. α true learning curve is = exp(−ν), reflecting the probability of a training input set not containing a particular vertex, while the approximation can be shown to predict = max{1 − ν, 0}, i.e. 
a decay of the error to zero at ν = 1. Plotting these two curves (not displayed here) indeed shows the same “shape” of disagreement as in Fig. 2(right), with the approximation underestimating the true generalization error.

Increasing p has the effect of making the kernel longer ranged, giving an effect opposite to that of increasing a. In line with this, larger values of p improve the accuracy of the approximation (12): see Fig. 3(left).

Figure 3: (Left) Learning curves for GP regression on random regular graphs with degree d = 3 and V = 500, and kernel parameters a = 2, p = 20; noise levels σ^2 = 0.1, 0.01, 0.001, 0.0001 and 0 as shown. Circles: numerical simulations; lines: approximation (12). (Right) As on the left but for much larger p = 200 and for a single random graph, with σ^2 = 0.1. Dotted line: naive estimate ε = 1/(1 + n/σ^2). Dashed line: approximation (10) using the tree spectrum and the large-p limit, see (17). Solid line: (10) with numerically determined graph eigenvalues λ^L_α as input.

One may ask about the shape of the learning curves for a large number of training examples (per vertex) ν. The roughly straight lines on the right of the log-log plots discussed so far suggest that ε ∝ 1/ν in this regime. This is correct in the mathematical limit ν → ∞ because the graph kernel has a nonzero minimal eigenvalue λ_− = κ V^{-1} (1 − λ^L_+/a)^p: for ν ≫ σ^2/(V λ_−), the square bracket in (12) can then be approximated by ν/(ε + σ^2) and one gets (because also ε ≪ σ^2 in this asymptotic regime) ε ≈ σ^2/ν.

However, once p becomes reasonably large, V λ_− can be shown – by analysing the scaling of κ, see Appendix – to be extremely (exponentially in p) small; for the parameter values in Fig. 3(left) it is around 4 × 10^{-30}. The “terminal” asymptotic regime ε ≈ σ^2/ν is then essentially unreachable. A more detailed analysis of (12) for large p and large (but not exponentially large) ν, as sketched in the Appendix, yields

ε ∝ (c σ^2/ν) ln^{3/2}( ν/(c σ^2) ),   c ∝ p^{-3/2}   (13)

This shows that there are logarithmic corrections to the naive σ^2/ν scaling that would apply in the true terminal regime. More intriguing is the scaling of the coefficient c with p, which implies that to reach a specified (low) generalization error one needs a number of training examples per vertex of order ν ∝ c σ^2 ∝ p^{-3/2} σ^2. Even though the covariance kernel C_{ℓ,p} – in the same tree approximation that also went into (12) – approaches a limiting form for large p as discussed in Sec. 2, generalization performance thus continues to improve with increasing p. The explanation for this must presumably be that C_{ℓ,p} converges to the limit (7) only at fixed ℓ, while in the tail ℓ ∝ p it continues to change.

For finite graph sizes V we know of course that loops will eventually become important as p increases, around the crossover point estimated in (8). The approximation (12) for the learning curve should then break down. The most naive estimate beyond this point would be to say that the kernel becomes nearly fully correlated, C_ij ∝ (d_i d_j)^{1/2}, which in the regular case simplifies to C_ij = 1. With only one function value to learn, and correspondingly only one nonzero kernel eigenvalue λ_{α=1} = 1, one would predict ε = 1/(1 + n/σ^2). Fig. 3(right) shows, however, that this significantly underestimates the actual generalization error, even though for this graph λ_{α=1} = 0.994 is very close to unity, so that the other eigenvalues sum to no more than 0.006. An almost perfect prediction is obtained, on the other hand, from the approximation (10) with the numerically calculated values of the Laplacian – and hence kernel – eigenvalues. The presence of the small kernel eigenvalues is again seen to cause logarithmic corrections to the naive ε ∝ 1/n scaling. Using the tree spectrum as an approximation and exploiting the large-p limit, one finds indeed (see Appendix, Eq. (17)) that ε ∝ (c′ σ^2/n) ln^{3/2}( n/(c′ σ^2) ), where now n enters rather than ν = n/V, c′ being a constant dependent only on p and a: informally, the function to be learned only has a finite (rather than ∝ V) number of degrees of freedom.
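The eigenvalue-based prediction just mentioned (the solid line in Fig. 3(right)) amounts to evaluating (10) with the exact kernel eigenvalues of one sampled graph. A sketch of how this could be done is shown below; it is our own illustration rather than the authors' code, and it assumes the normalized graph Laplacian (whose eigenvalues lie in [0, 2]) and networkx for sampling the random regular graph.

```python
import numpy as np
import networkx as nx

def kernel_eigenvalues(V=500, d=3, a=2.0, p=200, seed=0):
    """Kernel eigenvalues lam_alpha = kappa V^{-1} (1 - lam^L_alpha/a)^p from the exact spectrum."""
    G = nx.random_regular_graph(d, V, seed=seed)
    L = nx.normalized_laplacian_matrix(G).toarray()   # for a d-regular graph this is I - A/d
    lam_L = np.linalg.eigvalsh(L)
    lam = (1.0 - lam_L / a) ** p / V
    return lam / lam.sum()                            # normalization sum_alpha lam_alpha = 1 fixes kappa

def bayes_error(n, lam, sigma2=0.1, n_iter=500):
    """Self-consistent solution of (10): eps = g(n/(eps + sigma^2)), g(h) = sum_alpha (1/lam_alpha + h)^{-1}."""
    eps = 1.0
    for _ in range(n_iter):
        h = n / (eps + sigma2)
        eps = 0.5 * eps + 0.5 * np.sum(lam / (1.0 + lam * h))   # same sum, written to tolerate lam_alpha = 0
    return eps

if __name__ == "__main__":
    lam = kernel_eigenvalues()
    print("largest kernel eigenvalue:", lam.max())    # close to 1 for large p (cf. 0.994 in the text)
    for n in (10, 100, 1000, 10000):
        print(n, bayes_error(n, lam), 1.0 / (1.0 + n / 0.1))    # last column: naive estimate 1/(1 + n/sigma^2)
```

The gap between the last two printed columns reflects the contribution of the many small kernel eigenvalues discussed above.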
The approximation (17) in fact provides a qualitatively accurate description of the data in Fig. 3(right), as the dashed line in the figure shows. We thus have the somewhat unusual situation that the tree spectrum is enough to give a good description of the learning curves even when loops are important, while (see Sec. 2) this is not so as far as the evaluation of the covariance kernel itself is concerned.

4 Summary and Outlook

We have studied theoretically the generalization performance of GP regression on graphs, focussing on the paradigmatic case of random regular graphs where every vertex has the same degree d. Our initial concern was with the behaviour of p-step random walk kernels on such graphs. If these are calculated within the usual approximation of a locally tree-like structure, then they converge to a non-trivial limiting form (7) when p – or the corresponding lengthscale σ in the closely related diffusion kernel – becomes large. The limit of full correlation between all function values on the graph is only reached because of the presence of loops, and we have estimated in (8) the values of p around which the crossover to this loop-dominated regime occurs; numerical data for correlations of function values on neighbouring vertices support this result.

In the second part of the paper we concentrated on the learning curves themselves. We assumed that inference is performed with the correct parameters describing the data generating process; the generalization error is then just the Bayes error. The approximation (12) gives a good qualitative description of the learning curve using only the known spectrum of a large regular tree as input. It predicts in particular that the key parameter that determines the generalization error is ν = n/V, the number of training examples per vertex. We demonstrated also that the approximation is in fact more useful than in the Euclidean case because it gives exact asymptotics for the limit ν ≫ 1. Quantitatively, we found that the learning curves decay as ε ∝ σ^2/ν with non-trivial logarithmic correction terms. Slower power laws ε ∝ ν^{-α} with α < 1, as in the Euclidean case, do not appear. We attribute this to the fact that on a graph there is no analogue of the local roughness of a target function, because there is a minimum distance (one step along the graph) between different input points. Finally we looked at the learning curves for larger p, where loops become important. These can still be predicted quite accurately by using the tree eigenvalue spectrum as an approximation, if one keeps track of the zero graph Laplacian eigenvalue which we were able to ignore previously; the approximation shows that the generalization error then scales as σ^2/n, again with logarithmic corrections.

In future work we plan to extend our analysis to graphs that are not regular, including ones from application domains as well as artificial ones with power-law tails in the distribution of degrees d, where qualitatively new effects are to be expected. It would also be desirable to improve the predictions for the learning curve in the crossover region ε ≈ σ^2, which should be achievable using iterative approaches based on belief propagation that have already been shown to give accurate approximations for graph eigenvalue spectra [18]. These tools could then be further extended to study e.g. the effects of model mismatch in GP regression on random graphs, and how these are mitigated by tuning appropriate hyperparameters.
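Before the Appendix, it may help to check numerically the function F(z) = ∫_0^∞ dy √y (e^y + z)^{-1} defined in (16) below, which controls the logarithmic corrections in (13) and (17). The sketch here is our own; it simply verifies that z F(z)/ln^{3/2}(z) approaches a constant at large z, i.e. F(z) ∝ z^{-1} ln^{3/2}(z).

```python
import numpy as np
from scipy.integrate import quad

def F(z):
    """F(z) = int_0^inf dy sqrt(y) / (exp(y) + z), cf. Eq. (16) in the Appendix."""
    # integrand rewritten in terms of exp(-y) to avoid overflow at large y
    val, _ = quad(lambda y: np.sqrt(y) * np.exp(-y) / (1.0 + z * np.exp(-y)), 0.0, np.inf, limit=200)
    return val

if __name__ == "__main__":
    for z in (1e2, 1e4, 1e6, 1e8):
        ratio = z * F(z) / np.log(z) ** 1.5
        print(f"z = {z:8.1e}   F(z) = {F(z):.4e}   z*F(z)/ln^(3/2)(z) = {ratio:.3f}")
```

The ratio settles only slowly (the corrections are of relative order 1/ln z) towards the constant 2/3, which the proportionality sign in the text absorbs.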
Appendix

We sketch here how to derive (13) from (12) for large p. Eq. (12) can be written as ε = g(νV/(ε + σ^2)) with

g(h) = ∫_{λ^L_−}^{λ^L_+} dλ^L ρ(λ^L) [ κ^{-1} (1 − λ^L/a)^{-p} + h V^{-1} ]^{-1}   (14)

and κ determined from the condition g(0) = 1. (This g(h) is the tree-spectrum approximation to the g(h) of (10).)

Turning first to g(0), the factor (1 − λ^L/a)^p decays quickly to zero as λ^L increases above λ^L_−. One can then approximate this factor according to (1 − λ^L/a)^p = (1 − λ^L_−/a)^p [ (a − λ^L)/(a − λ^L_−) ]^p ≈ (1 − λ^L_−/a)^p exp[ −(λ^L − λ^L_−) p/(a − λ^L_−) ]. In the regime near λ^L_− one can also approximate the spectral density (11) by its leading square-root increase, ρ(λ^L) = r (λ^L − λ^L_−)^{1/2}, with r = (d − 1)^{1/4} d^{5/2} / [ π (d − 2)^2 ]. Switching then to a new integration variable y = (λ^L − λ^L_−) p/(a − λ^L_−) and extending the integration limit to ∞ gives

1 = g(0) = κ r (1 − λ^L_−/a)^p [ p/(a − λ^L_−) ]^{-3/2} ∫_0^∞ dy √y e^{-y}   (15)

and this fixes κ. Proceeding similarly for h > 0 gives

g(h) = κ r (1 − λ^L_−/a)^p [ p/(a − λ^L_−) ]^{-3/2} F( h κ V^{-1} (1 − λ^L_−/a)^p ),   F(z) = ∫_0^∞ dy √y (e^y + z)^{-1}   (16)

Dividing by g(0) = 1 shows that simply g(h) = F(h V^{-1} c^{-1})/F(0), where c = 1/[ κ (1 − λ^L_−/a)^p ] = r F(0) [ p/(a − λ^L_−) ]^{-3/2}, which scales as p^{-3/2}. In the asymptotic regime ε ≪ σ^2 we then have ε = g(νV/σ^2) = F(ν/(c σ^2))/F(0), and the desired result (13) follows from the large-z behaviour F(z) ≈ z^{-1} ln^{3/2}(z).

One can proceed similarly for the regime where loops become important. Clearly the zero Laplacian eigenvalue, with weight 1/V, then has to be taken into account. If we assume that the remainder of the Laplacian spectrum can still be approximated by that of a tree [18], we get

g(h) = [ (V + hκ)^{-1} + r (1 − λ^L_−/a)^p [ p/(a − λ^L_−) ]^{-3/2} F( h κ V^{-1} (1 − λ^L_−/a)^p ) ] / [ V^{-1} + r (1 − λ^L_−/a)^p [ p/(a − λ^L_−) ]^{-3/2} F(0) ]   (17)

The denominator here is κ^{-1}, and the two terms are proportional respectively to the covariance kernel eigenvalue λ_1 (corresponding to λ^L = 0 and the constant eigenfunction) and to 1 − λ_1. Dropping the first terms in the numerator and denominator of (17) by taking V → ∞ leads back to the previous analysis, as it should. For a situation as in Fig. 3(right), on the other hand, where λ_1 is close to unity, we have κ ≈ V and so

g(h) ≈ (1 + h)^{-1} + r V (1 − λ^L_−/a)^p [ p/(a − λ^L_−) ]^{-3/2} F( h (1 − λ^L_−/a)^p )   (18)

The second term, coming from the small kernel eigenvalues, is the more slowly decaying because it corresponds to fine detail of the target function that needs many training examples to learn accurately. It will therefore dominate the asymptotic behaviour of the learning curve: ε = g(n/σ^2) ∝ F(n/(c′ σ^2)), with c′ = (1 − λ^L_−/a)^{-p} independent of V. The large-n tail of the learning curve in Fig. 3(right) is consistent with this form.

References

[1] C E Rasmussen and C K I Williams. Gaussian processes for regression. In D S Touretzky, M C Mozer, and M E Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 514–520, Cambridge, MA, 1996. MIT Press.
[2] M Opper. Regression with Gaussian processes: Average case performance. In K Y M Wong, I King, and D Y Yeung, editors, Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, pages 17–23. Springer, 1997.
[3] P Sollich. Learning curves for Gaussian processes. In M S Kearns, S A Solla, and D A Cohn, editors, Advances in Neural Information Processing Systems 11, pages 344–350, Cambridge, MA, 1999. MIT Press.
[4] M Opper and F Vivarelli. General bounds on Bayes errors for regression with Gaussian processes. In M S Kearns, S A Solla, and D A Cohn, editors, Advances in Neural Information Processing Systems 11, pages 302–308, Cambridge, MA, 1999. MIT Press.
[5] C K I Williams and F Vivarelli. Upper and lower bounds on the learning curve for Gaussian processes. Mach. Learn., 40(1):77–102, 2000.
[6] D Malzahn and M Opper. Learning curves for Gaussian processes regression: A framework for good approximations. In T K Leen, T G Dietterich, and V Tresp, editors, Advances in Neural Information Processing Systems 13, pages 273–279, Cambridge, MA, 2001. MIT Press.
[7] D Malzahn and M Opper. A variational approach to learning curves. In T G Dietterich, S Becker, and Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 463–469, Cambridge, MA, 2002. MIT Press.
[8] P Sollich and A Halees. Learning curves for Gaussian process regression: approximations and bounds. Neural Comput., 14(6):1393–1428, 2002.
[9] P Sollich. Gaussian process regression with mismatched models. In T G Dietterich, S Becker, and Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 519–526, Cambridge, MA, 2002. MIT Press.
[10] P Sollich. Can Gaussian process regression be made robust against model mismatch? In Deterministic and Statistical Methods in Machine Learning, volume 3635 of Lecture Notes in Artificial Intelligence, pages 199–210. 2005.
[11] M Herbster, M Pontil, and L Wainer. Online learning over graphs. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 305–312, New York, NY, USA, 2005. ACM.
[12] A J Smola and R Kondor. Kernels and regularization on graphs. In M Warmuth and B Schölkopf, editors, Proc. Conference on Learning Theory (COLT), Lect. Notes Comp. Sci., pages 144–158. Springer, Heidelberg, 2003.
[13] R I Kondor and J D Lafferty. Diffusion kernels on graphs and other discrete input spaces. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning, pages 315–322, San Francisco, CA, USA, 2002. Morgan Kaufmann.
[14] F R K Chung. Spectral graph theory. Number 92 in Regional Conference Series in Mathematics. American Mathematical Society, 1997.
[15] A Steger and N C Wormald. Generating random regular graphs quickly. Combinator. Probab. Comput., 8(4):377–396, 1999.
[16] F Chung and S-T Yau. Coverings, heat kernels and spanning trees. The Electronic Journal of Combinatorics, 6(1):R12, 1999.
[17] C Monthus and C Texier. Random walk on the Bethe lattice and hyperbolic Brownian motion. J. Phys. A, 29(10):2399–2409, 1996.
[18] T Rogers, I Perez Castillo, R Kuehn, and K Takeda. Cavity approach to the spectral density of sparse symmetric random matrices. Phys. Rev. E, 78(3):031116, 2008.
2 0.98674905 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification
Author: Sennay Ghebreab, Steven Scholte, Victor Lamme, Arnold Smeulders
Abstract: Contrast statistics of the majority of natural images conform to a Weibull distribution. This property of natural images may facilitate efficient and very rapid extraction of a scene's visual gist. Here we investigated whether a neural response model based on the Weibull contrast distribution captures visual information that humans use to rapidly identify natural scenes. In a learning phase, we measured EEG activity of 32 subjects viewing brief flashes of 700 natural scenes. From these neural measurements and the contrast statistics of the natural image stimuli, we derived an across-subject Weibull response model. We used this model to predict the EEG responses to 100 new natural scenes and estimated which scene the subject viewed by finding the best match between the model predictions and the observed EEG responses. In almost 90 percent of the cases our model accurately predicted the observed scene. Moreover, in most failed cases, the scene mistaken for the observed scene was visually similar to the observed scene itself. Similar results were obtained in a separate experiment in which 16 other subjects were presented with artificial occlusion models of natural images. Together, these results suggest that Weibull contrast statistics of natural images contain a considerable amount of visual gist information to warrant rapid image identification.
3 0.97236097 190 nips-2009-Polynomial Semantic Indexing
Author: Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Corinna Cortes, Mehryar Mohri
Abstract: We present a class of nonlinear (polynomial) models that are discriminatively trained to directly map from the word content in a query-document or document-document pair to a ranking score. Dealing with polynomial models on word features is computationally challenging. We propose a low-rank (but diagonal preserving) representation of our polynomial models to induce feasible memory and computation requirements. We provide an empirical study on retrieval tasks based on Wikipedia documents, where we obtain state-of-the-art performance while providing realistically scalable methods. 1
4 0.97235328 92 nips-2009-Fast Graph Laplacian Regularized Kernel Learning via Semidefinite–Quadratic–Linear Programming
Author: Xiao-ming Wu, Anthony M. So, Zhenguo Li, Shuo-yen R. Li
Abstract: Kernel learning is a powerful framework for nonlinear data modeling. Using the kernel trick, a number of problems have been formulated as semidefinite programs (SDPs). These include Maximum Variance Unfolding (MVU) (Weinberger et al., 2004) in nonlinear dimensionality reduction, and Pairwise Constraint Propagation (PCP) (Li et al., 2008) in constrained clustering. Although in theory SDPs can be efficiently solved, the high computational complexity incurred in numerically processing the huge linear matrix inequality constraints has rendered the SDP approach unscalable. In this paper, we show that a large class of kernel learning problems can be reformulated as semidefinite-quadratic-linear programs (SQLPs), which only contain a simple positive semidefinite constraint, a second-order cone constraint and a number of linear constraints. These constraints are much easier to process numerically, and the gain in speedup over previous approaches is at least of the order m^2.5, where m is the matrix dimension. Experimental results are also presented to show the superb computational efficiency of our approach.
Author: Honglak Lee, Peter Pham, Yan Largman, Andrew Y. Ng
Abstract: In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks. 1
same-paper 6 0.95327151 176 nips-2009-On Invariance in Hierarchical Models
7 0.83508366 151 nips-2009-Measuring Invariances in Deep Networks
8 0.79117578 119 nips-2009-Kernel Methods for Deep Learning
9 0.76048303 32 nips-2009-An Online Algorithm for Large Scale Image Similarity Learning
10 0.73983938 142 nips-2009-Locality-sensitive binary codes from shift-invariant kernels
11 0.73927641 137 nips-2009-Learning transport operators for image manifolds
12 0.72553033 196 nips-2009-Quantification and the language of thought
13 0.72230738 241 nips-2009-The 'tree-dependent components' of natural scenes are edge filters
14 0.71930093 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection
15 0.71366209 95 nips-2009-Fast subtree kernels on graphs
16 0.71088755 104 nips-2009-Group Sparse Coding
17 0.703076 212 nips-2009-Semi-Supervised Learning in Gigantic Image Collections
18 0.70244443 2 nips-2009-3D Object Recognition with Deep Belief Nets
19 0.70172942 210 nips-2009-STDP enables spiking neurons to detect hidden causes of their inputs
20 0.69025904 87 nips-2009-Exponential Family Graph Matching and Ranking