nips nips2010 nips2010-65 knowledge-graph by maker-knowledge-mining

65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform


Source: pdf

Author: Siwei Lyu

Abstract: Divisive normalization (DN) has been advocated as an effective nonlinear efficient coding transform for natural sensory signals with applications in biology and engineering. In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. Our analysis is based on the use of the multivariate t model to capture some important statistical properties of natural sensory signals. The multivariate t model justifies DN as an approximation to the transform that completely eliminates its statistical dependency. Furthermore, using the multivariate t model and measuring statistical dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. We compare this with the actual performance of the DN transform in reducing statistical dependencies of natural sensory signals. Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. On the other hand, we also observe a previously unreported phenomenon that DN may increase statistical dependencies when the size of pooling is small.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. [sent-2, score-0.886]

2 Our analysis is based on the use of multivariate t model to capture some important statistical properties of natural sensory signals. [sent-3, score-0.751]

3 The multivariate t model justifies DN as an approximation to the transform that completely eliminates its statistical dependency. [sent-4, score-0.708]

4 Furthermore, using the multivariate t model and measuring statistical dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. [sent-5, score-0.691]

5 We compare this with the actual performance of the DN transform in reducing statistical dependencies of natural sensory signals. [sent-6, score-0.999]

6 Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. [sent-7, score-0.963]

7 On the other hand, we also observe a previously unreported phenomenon that DN may increase statistical dependencies when the size of pooling is small. [sent-8, score-0.306]

8 1 Introduction It has been widely accepted that biological sensory systems are adapted to match the statistical properties of the signals in the natural environments. [sent-9, score-0.64]

9 Among different ways such may be achieved, the efficient coding hypothesis [2, 3] asserts that a sensory system might be understood as a transform that reduces redundancies in its responses to the input sensory stimuli (e. [sent-10, score-1.206]

10 Such signal transforms, termed efficient coding transforms, are also important to applications in engineering – with the reduced statistical dependencies, sensory signals can be more efficiently stored, transmitted and processed. [sent-13, score-0.699]

11 Over the years, many works, most notably the ICA methodology, have aimed to find linear efficient coding transforms for natural sensory signals [20, 4, 15]. [sent-14, score-0.751]

12 Nonetheless, it has also been noted that there are statistical dependencies in natural images or sounds that linear transforms are not effective at reducing or eliminating [5, 17]. [sent-16, score-0.385]

13 Divisive normalization (DN) is perhaps the simplest nonlinear efficient coding transform, and it has been extensively studied recently. [sent-18, score-0.601]

14 The output of the DN transform is obtained from the response of a linear basis function divided by the square root of a biased and weighted sum of the squared responses of neighboring basis functions of adjacent spatial locations, orientations and scales. [sent-19, score-0.458]
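
To make this definition concrete, here is a minimal Python sketch of such a transform; the bias alpha and the pooling weights w are illustrative placeholders rather than values from the paper, and the pool is simply the supplied vector of neighboring responses.

    import numpy as np

    def divisive_normalization(x, alpha=0.1, w=None):
        # x: responses of a basis function and its neighbors (1D array)
        # alpha: bias term; w: pooling weights (both illustrative assumptions)
        x = np.asarray(x, dtype=float)
        if w is None:
            w = np.ones_like(x)                       # equal pooling weights
        pool = alpha + np.dot(w, x ** 2)              # biased, weighted sum of squares
        return x / np.sqrt(pool)

    responses = np.array([1.2, -0.4, 0.9, -2.1, 0.3])  # e.g., adjacent band-pass responses
    print(divisive_normalization(responses))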

15 Figure 1 (panels (a)–(e)): Statistical properties of natural images in a band-pass domain and their representations with the multivariate t model. [sent-21, score-0.399]

16 (c): Contour plot of the optimally fitted multivariate t model of p(x1 , x2 ). [sent-24, score-0.26]

17 Blue dashed curves correspond to E(x1 |x2 ) and E(x1 |x2 ) ± std(x1 |x2 ) from the optimally fitted multivariate t model to p(x1 , x2 ). [sent-27, score-0.356]

18 In image processing, nonlinear image representations based on DN have been applied to image compression and contrast enhancement [18, 16] showing improved performance over linear representations. [sent-29, score-0.241]

19 As an important nonlinear transform with such ubiquity, it has been of great interest to find the underlying principle from which DN originates. [sent-30, score-0.397]

20 Based on empirical observations, Schwartz and Simoncelli [23] suggested that DN can reduce statistical dependencies in natural sensory signals and is thus justified by the efficient coding hypothesis. [sent-31, score-0.88]

21 More recent works on statistical models and efficient coding transforms of natural sensory signals (e. [sent-32, score-0.829]

22 However, this claim needs to be rigorously validated based on statistical properties of natural sensory signals, and quantitatively evaluated with DN’s performance in reducing statistical dependencies of natural sensory signals. [sent-35, score-1.161]

23 In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. [sent-36, score-0.886]

24 Our analysis is based on the use of multivariate t model to capture some important statistical properties of natural sensory signals. [sent-37, score-0.751]

25 The multivariate t model justifies DN as an approximation to the transform that completely eliminates its statistical dependency. [sent-38, score-0.708]

26 Furthermore, using the multivariate t model and measuring statistical dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. [sent-39, score-0.691]

27 We compare this with the actual performance of the DN transform in reducing statistical dependencies of natural sensory signals. [sent-40, score-0.999]

28 Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. [sent-41, score-0.963]

29 On the other hand, we also observe a previously unreported phenomenon that DN may increase statistical dependencies when the size of pooling is small. [sent-42, score-0.306]

30 Over the years, many distinct statistical properties of natural sensory signals have been observed. [sent-45, score-0.64]

31 It has been noted that higher order statistical dependencies in the joint and conditional densities (Fig. [sent-50, score-0.239]

32 1 (b) and (d)) cannot be effectively reduced with a linear transform [17]. [sent-51, score-0.362]

33 Similar behaviors have also been observed for orientation and scale neighbors [6], as well as other types of sensory signals such as audio [23, 17]. [sent-54, score-0.512]

34 A compact mathematical form that can capture all three aforementioned statistical properties is the multivariate Student’s t model. [sent-55, score-0.333]

35 From data of neighboring responses of natural sensory signals in the band-pass domain, the parameters (α, β) in the multivariate t model can be obtained numerically with maximum likelihood, the details of which are given in the supplementary material. [sent-58, score-0.859]
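
As a rough sketch of how such a fit could be set up (not the paper's exact procedure, which is in the supplementary material), the snippet below assumes the isotropic density p(x) = Γ(β + d/2) α^β / (Γ(β) π^{d/2}) · (α + xᵀx)^{−(β + d/2)} and maximizes the log-likelihood over (α, β) numerically:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def neg_log_likelihood(params, X):
        # X: n-by-d array of whitened neighboring responses; optimize in log-space
        alpha, beta = np.exp(params)
        n, d = X.shape
        r2 = np.sum(X ** 2, axis=1)
        ll = n * (gammaln(beta + d / 2) - gammaln(beta)
                  - (d / 2) * np.log(np.pi) + beta * np.log(alpha))
        ll -= (beta + d / 2) * np.sum(np.log(alpha + r2))
        return -ll

    def fit_multivariate_t(X):
        res = minimize(neg_log_likelihood, x0=np.zeros(2), args=(X,), method="Nelder-Mead")
        return np.exp(res.x)          # estimated (alpha, beta)

    # usage: alpha_hat, beta_hat = fit_multivariate_t(patches)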

36 The joint density of the fitted multivariate t model has elliptically symmetric level curves of equal probability, and its marginals are 1D Student’s t densities that are non-Gaussian and kurtotic [14], all resembling those of the natural sensory signals, Fig. [sent-60, score-0.834]

37 It is due to its heavy-tail property that the multivariate t model has been used as a model of natural images [35, 22]. [sent-62, score-0.371]

38 Furthermore, we provide another property of the multivariate t model that captures the bow-tie dependency exhibited by the conditional distributions of natural sensory signals. [sent-63, score-0.795]

39 The three red solid curves correspond to E(xi |x\i ) and E(xi |x\i ) ± var(xi |x\i ) for pairs of adjacent band-pass filtered responses of a natural image, and the three blue dashed curves are the same quantities of the optimally fitted t model. [sent-71, score-0.462]

40 The bow-tie phenomenon comes directly from the dependencies in the conditional variances, which is precisely captured by the fitted multivariate t model3 . [sent-72, score-0.388]

41 This is based on an important property of the multivariate t model – it is a special case of the Gaussian scale mixture (GSM) [1]. [sent-74, score-0.227]
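
To illustrate the GSM construction, the sketch below draws x = √z · u with u a standard Gaussian vector and z an inverse-gamma mixing variable; the specific InvGamma(β, α/2) parameterization is our assumption of how the multivariate t arises as a GSM, not a quotation from the paper.

    import numpy as np

    def sample_t_as_gsm(n, d, alpha=1.0, beta=2.0, seed=0):
        # x = sqrt(z) * u with u ~ N(0, I) and z ~ InvGamma(beta, alpha/2) (assumed form)
        rng = np.random.default_rng(seed)
        z = 1.0 / rng.gamma(shape=beta, scale=2.0 / alpha, size=n)   # inverse-gamma draws
        u = rng.standard_normal((n, d))
        return np.sqrt(z)[:, None] * u

    X = sample_t_as_gsm(10000, 4)
    print(X.std(axis=0))                      # marginals are heavy-tailed and kurtotic
    print(np.mean(np.abs(X) > 3 * X.std()))   # more 3-sigma events than a Gaussian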

42 To simplify the discussion, hereafter we will assume that the signals have been whitened so that there are no second-order dependencies in x. [sent-77, score-0.223]

43 According to the GSM equivalence of the multivariate t model, we have u = x/√z. [sent-79, score-0.227]

44 As an isotropic Gaussian vector has mutually independent components, there is no statistical dependency among the elements of u. [sent-80, score-0.327]

45 In other words, x/√z is a transform that completely eliminates all statistical dependencies in x. [sent-81, score-0.588]

46 Unfortunately, this optimal efficient coding transform is not realizable, because z is a latent variable that we do not have direct access to. [sent-82, score-0.523]

47 (1) can be shown to be equivalent to the standard definition of multivariate t density in [14]. [sent-84, score-0.275]

48 If we drop the irrelevant scaling factors from each of these estimators and plug them into x/√ẑ, we obtain a nonlinear transform of x as y = φ(x), where φ(x) ≡ x/√(α + xᵀx). [sent-89, score-0.424]

49 Lemma 2 shows that the DN transform is justified as an approximation to the optimal efficient coding transform given a multivariate t model of natural sensory signals. [sent-91, score-1.53]

50 Our result also shows that the DN transform approximately “gaussianizes” the input data, a phenomenon that has been empirically observed by several authors (e. [sent-92, score-0.393]

51 ẑ2 = (α + xᵀx)/(2β + d − 2), and ẑ3 = [E(1/z|x)]⁻¹. Properties of DN Transform: The standard DN transform given by Eq. [sent-96, score-0.362]
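
The estimator fragments above are garbled in this extraction; under the inverse-gamma GSM reading of the model (our assumption, consistent with these fragments), they follow from the posterior of z given x:

\[
p(z \mid \mathbf{x}) \;\propto\; z^{-d/2} e^{-\mathbf{x}^\top\mathbf{x}/(2z)} \cdot z^{-\beta-1} e^{-\alpha/(2z)}
\;\Longrightarrow\; z \mid \mathbf{x} \sim \mathrm{InvGamma}\!\left(\beta + \tfrac{d}{2},\; \tfrac{\alpha + \mathbf{x}^\top\mathbf{x}}{2}\right),
\]
\[
\hat z_2 = E(z \mid \mathbf{x}) = \frac{\alpha + \mathbf{x}^\top\mathbf{x}}{2\beta + d - 2},
\qquad
\hat z_3 = \left[E(1/z \mid \mathbf{x})\right]^{-1} = \frac{\alpha + \mathbf{x}^\top\mathbf{x}}{2\beta + d}.
\]

Every such estimate is proportional to α + xᵀx, which is why dropping the constant scaling factors leads to the same DN form x/√(α + xᵀx).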

52 Lemma 3 For the standard DN transform given in Eq. [sent-99, score-0.362]

53 Further, the DN transform of a multivariate t vector also has a closed form density function. [sent-102, score-0.667]

54 2 Equivalent Forms of DN Transform In the current literature, the DN transform has been defined in many different forms other than Eq. [sent-107, score-0.391]

55 However, if we are merely interested in their ability to reduce statistical dependencies, many of the different forms of the DN transform based on the l2 norm of the input vector x become equivalent. [sent-109, score-0.469]

56 To be more specific, we quantify the statistical dependency of a random vector x using the multi-information (MI) [27], defined as I(x) = ∫ p(x) log [p(x) / ∏_k p(xk)] dx = ∑_k H(xk) − H(x) (4), where the product and sum run over k = 1, ..., d and H(·) denotes the Shannon differential entropy. [sent-110, score-0.315]
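
As a quick sanity check of this definition (an illustration we added, not from the paper): for a bivariate Gaussian with correlation ρ the multi-information is −½ log(1 − ρ²), and a crude histogram plug-in estimate of Eq. (4) should land close to that value.

    import numpy as np

    def histogram_mi(a, b, bins=40):
        # Plug-in estimate of I(x) = H(x1) + H(x2) - H(x1, x2) from a 2D histogram
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        p = joint / joint.sum()
        p1, p2 = p.sum(axis=1), p.sum(axis=0)
        H = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
        return H(p1) + H(p2) - H(p)

    rng = np.random.default_rng(0)
    rho = 0.6
    x = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=50000)
    print(histogram_mi(x[:, 0], x[:, 1]), -0.5 * np.log(1 - rho ** 2))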

57 Now consider four different definitions of the DN transform expressed in terms of the individual elements of the output vector: the standard form yi = xi/√(α + xᵀx), and three variants si, vi, and ti (discussed below) that square the numerator and/or exclude xi from the normalization pool. [sent-116, score-0.387]

58 si is the output of the original DN transform used by Heeger [12]. [sent-120, score-0.362]

59 vi corresponds to the DN transform used by Schwartz and Simoncelli [23]. [sent-121, score-0.362]

60 Last, ti is the output of the DN transform used in [31]. [sent-124, score-0.362]

61 Each of these variants can be written as an element-wise function of the standard output yi, e.g. yi²/(1 − yi²) = xi²/(α + xᵀx − xi²) and yi/√(1 − yi²) = xi/√(α + x\iᵀx\i). As element-wise operations do not affect MI, all three transforms are equivalent to the standard form in terms of reducing statistical dependencies. [sent-126, score-0.345]
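
A small numerical check of this equivalence, using the algebra as we reconstructed it above (the labels t and v below are illustrative and may not match the paper's exact assignment):

    import numpy as np

    rng = np.random.default_rng(1)
    alpha = 0.5
    x = rng.standard_normal(8)

    y = x / np.sqrt(alpha + x @ x)            # standard DN form
    t = x ** 2 / (alpha + x @ x - x ** 2)     # squared numerator, x_i excluded from the pool
    v = x / np.sqrt(alpha + x @ x - x ** 2)   # x_i excluded from the pool

    # Each variant is an element-wise function of y, matching the relations above.
    assert np.allclose(t, y ** 2 / (1 - y ** 2))
    assert np.allclose(v, y / np.sqrt(1 - y ** 2))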

62 4 Quantifying DN Transform as Efficient Coding Transform We have set up a relation between the DN transform with statistical properties of natural sensory signals through the multivariate t model. [sent-128, score-1.229]

63 However, its effectiveness as an efficient coding transform for natural sensory signals has yet to be quantified, for two reasons. [sent-129, score-1.085]

64 First, DN is only an approximation to the optimal transform that eliminates statistical dependencies in a multivariate t model. [sent-130, score-0.815]

65 Further, the multivariate t model itself is a surrogate for the true statistical model of natural sensory signals. [sent-131, score-0.723]

66 It is our goal in this section to quantify the effectiveness of the DN transform in reducing statistical dependencies. [sent-132, score-0.534]

67 We start with a study of applying DN to the multivariate t model, the closed form density of which permits us a theoretical analysis of DN’s performance in dependency reduction. [sent-133, score-0.454]

68 We then apply DN to real natural sensory signal data, and compare its effectiveness as an efficient coding transform with the theoretical prediction obtained with the multivariate t model. [sent-134, score-1.241]

69 1 Results with Multivariate t Model For simplicity, we consider isotropic models whose second order dependencies are removed with whitening. [sent-136, score-0.229]

70 The density functions of multivariate t and r models lead to closed form solutions for MI, as formally stated in the following lemma (proved in the supplementary material). [sent-137, score-0.401]

71 Similarly, the MI of a d-dimensional r vector y = φ(x), which is the DN transform of x, is I(y) = d log Γ(β + (d − 1)/2) − log Γ(β) − (d − 1) log Γ(β + d/2) + (β − 1)Ψ(β) + (d − 1)(β + d/2 − 1)Ψ(β + d/2) − d(β + (d − 3)/2)Ψ(β + (d − 1)/2). [sent-139, score-0.362]
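
For reference, the printed expression can be transcribed directly into code (a transcription of the formula exactly as it appears above; the extraction may have dropped factors, so treat the numbers as illustrative):

    import numpy as np
    from scipy.special import gammaln, digamma

    def mi_dn_output(beta, d):
        # I(y) for the DN-transformed vector, transcribed from the expression above
        return (d * gammaln(beta + (d - 1) / 2.0)
                - gammaln(beta)
                - (d - 1) * gammaln(beta + d / 2.0)
                + (beta - 1) * digamma(beta)
                + (d - 1) * (beta + d / 2.0 - 1) * digamma(beta + d / 2.0)
                - d * (beta + (d - 3) / 2.0) * digamma(beta + (d - 1) / 2.0))

    print([round(float(mi_dn_output(1.5, d)), 3) for d in (4, 9, 16, 25)])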

72 Using Lemma 5, for a d-dimensional t vector, if we have I(x) > I(y), the DN transform reduces its statistical dependency; conversely, if I(x) < I(y), it increases dependency. [sent-142, score-0.44]

73 These plots illustrate several interesting aspects of the DN transform as an approximate efficient coding transform of the multivariate t models. [sent-148, score-1.112]

74 Figure 2: left: Surface plot of [I(x) − I(φ(x))]/I(x), measuring MI changes after applying the DN transform φ(·) to an isotropic t vector x. [sent-167, score-0.529]

75 The two coordinates correspond to the data dimensionality (d) and the shape parameter (β) of the multivariate t model. [sent-169, score-0.261]

76 Therefore, though effective for high dimensional models, DN is not an efficient coding transform for low dimensional multivariate t models. [sent-175, score-0.822]

77 2 Results with Natural Sensory Signals As mentioned previously, the multivariate t model is an approximation to the source model of natural sensory signals. [sent-177, score-0.645]

78 Therefore, we would like to compare our analysis in the previous section with the actual dependency reduction performance of the DN transform on real natural sensory signal data. [sent-178, score-0.93]

79 Next, the entropy of y = φ(x) is related to the entropy of x as H(y) = H(x) + ∫ p(x) log |det(∂φ(x)/∂x)| dx, where det(∂φ(x)/∂x) is the Jacobian determinant of φ(x) [9]. [sent-186, score-0.243]
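
For the standard DN form φ(x) = x/√(α + xᵀx), the Jacobian determinant works out to det(∂φ/∂x) = α (α + xᵀx)^(−(d/2 + 1)) (our own derivation, not quoted from the paper); the sketch below checks this against a finite-difference Jacobian.

    import numpy as np

    def dn(x, alpha):
        return x / np.sqrt(alpha + x @ x)

    def numeric_jacobian(f, x, eps=1e-6):
        d = x.size
        J = np.zeros((d, d))
        for j in range(d):
            e = np.zeros(d)
            e[j] = eps
            J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
        return J

    rng = np.random.default_rng(2)
    x, alpha = rng.standard_normal(5), 0.7
    J = numeric_jacobian(lambda v: dn(v, alpha), x)
    closed_form = alpha * (alpha + x @ x) ** (-(x.size / 2 + 1))
    print(np.linalg.det(J), closed_form)     # the two values should agree closely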

80 (6) with 10,000 random samples drawn from the same multivariate t models. [sent-197, score-0.227]

81 Figure 3 (panels: (a) t model, (b) audio data, (c) image data): (a) Comparison of theoretical prediction of MI reduction for isotropic t model with β = 1. [sent-223, score-0.267]

82 (6) and the m-spacing estimator [30] on 10,000 random samples drawn from the corresponding multivariate t models (red dashed curve). [sent-225, score-0.302]
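
A minimal 1D m-spacing (Vasicek-type) entropy estimator of the kind referenced here could look as follows; this is a generic sketch, and the exact estimator and bias corrections used in [30] may differ:

    import numpy as np

    def m_spacing_entropy(samples, m=None):
        # Vasicek-style m-spacing estimate of differential entropy (in nats)
        x = np.sort(np.asarray(samples, dtype=float))
        n = x.size
        if m is None:
            m = max(1, int(round(np.sqrt(n))))
        lo = np.clip(np.arange(n) - m, 0, n - 1)
        hi = np.clip(np.arange(n) + m, 0, n - 1)
        spacings = x[hi] - x[lo]
        spacings[spacings == 0] = np.finfo(float).tiny   # guard against ties
        return float(np.mean(np.log(n * spacings / (2 * m))))

    rng = np.random.default_rng(3)
    print(m_spacing_entropy(rng.standard_normal(10000)),
          0.5 * np.log(2 * np.pi * np.e))     # ~1.419 nats for a standard Gaussian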

83 With these data, we first fit multivariate t models using maximum likelihood (detailed procedure given in the supplementary material), from which we compute the theoretical prediction of MI difference using Lemma 5. [sent-244, score-0.277]

84 These plots suggest two properties of the fitted multivariate t model. [sent-247, score-0.255]

85 Using the same data, we obtain the optimal DN transform by searching for the optimal α in Eq. [sent-250, score-0.362]
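
One simple way such a search could be implemented for a pair of responses is a scalar optimization of an estimated MI over α; the sketch below uses a crude histogram MI estimate and synthetic GSM data as stand-ins (the paper's search procedure and estimator may differ):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def histogram_mi(a, b, bins=32):
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        p = joint / joint.sum()
        p1, p2 = p.sum(axis=1), p.sum(axis=0)
        H = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
        return H(p1) + H(p2) - H(p)

    def dn_pair(X, alpha):
        return X / np.sqrt(alpha + np.sum(X ** 2, axis=1, keepdims=True))

    def optimal_alpha(X):
        obj = lambda log_a: histogram_mi(*dn_pair(X, np.exp(log_a)).T)
        res = minimize_scalar(obj, bounds=(-6.0, 6.0), method="bounded")
        return np.exp(res.x)

    rng = np.random.default_rng(4)
    z = 1.0 / rng.gamma(2.0, 1.0, size=5000)            # synthetic mixing variable
    X = np.sqrt(z)[:, None] * rng.standard_normal((5000, 2))
    print(optimal_alpha(X))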

86 3 (b) (for audio) and (c) (for images), we show the MI changes from applying DN to natural sensory data, as predicted by the optimally fitted t model (blue solid curves) and as obtained with optimized DN parameters using nonparametric estimation of Eq. [sent-257, score-0.546]

87 In general, changes in statistical dependencies obtained with the optimal DN transforms are in accordance with those predicted by the multivariate t model. [sent-260, score-0.491]

88 This may be caused by the approximation nature of the multivariate t model to natural sensory data. [sent-263, score-0.645]

89 As such, more complex structures in the natural sensory signals, especially with larger local windows, cannot be effectively captured by the multivariate t models, which renders DN less effective. [sent-264, score-0.645]

90 On the other hand, our observation based on the multivariate t model that the DN transform tends to increase statistical dependency for small pooling sizes also holds for real data. [sent-265, score-0.831]

91 On the surface, our finding seems to be in contradiction with [23], where it was empirically shown that applying an equivalent form of the DN transform as Eq. [sent-267, score-0.362]

92 However, one key yet subtle difference is that statistical dependency is defined as the correlations in the conditional variances in [23], i. [sent-270, score-0.228]

93 The observation made in [23] is then based on the empirical observation that after applying the DN transform, such dependencies in the transformed variables become weaker, while our results show that the statistical dependency measured by MI in that case actually increases. [sent-274, score-0.312]

94 5 Conclusion In this work, based on the use of the multivariate t model of natural sensory signals, we have presented a theoretical analysis showing that DN emerges as an approximate efficient coding transform. [sent-275, score-0.828]

95 Furthermore, we provide a quantitative analysis of the effectiveness of DN as an efficient coding transform for the multivariate t model and natural sensory signal data. [sent-276, score-1.219]

96 These analyses confirm the ability of DN to reduce the statistical dependency of natural sensory signals. [sent-277, score-0.657]

97 More interestingly, we observe a previously unreported result that DN can actually increase statistical dependency when the size of pooling is small. [sent-278, score-0.295]

98 As a future direction, we would like to extend this study to a generalized DN transform where the denominator and numerator can have different degrees. [sent-279, score-0.388]

99 Factorial coding of natural images: how effective are linear models in removing higher-order dependencies? [sent-302, score-0.258]

100 Input–output statistical independence in divisive normalization models of V1 neurons. [sent-422, score-0.222]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('dn', 0.641), ('transform', 0.362), ('sensory', 0.321), ('multivariate', 0.227), ('mi', 0.164), ('coding', 0.161), ('dependency', 0.127), ('isotropic', 0.122), ('signals', 0.116), ('dependencies', 0.107), ('divisive', 0.101), ('natural', 0.097), ('statistical', 0.078), ('lemma', 0.068), ('audio', 0.063), ('image', 0.06), ('transforms', 0.056), ('xx', 0.055), ('curves', 0.053), ('tted', 0.053), ('audios', 0.053), ('nonparam', 0.053), ('unreported', 0.053), ('det', 0.052), ('solid', 0.05), ('density', 0.048), ('curve', 0.048), ('images', 0.047), ('simoncelli', 0.044), ('dashed', 0.043), ('schwartz', 0.043), ('normalization', 0.043), ('proc', 0.042), ('responses', 0.041), ('eliminates', 0.041), ('xk', 0.04), ('ez', 0.04), ('lyu', 0.038), ('pooling', 0.037), ('jacobian', 0.036), ('dimensional', 0.036), ('blue', 0.035), ('bls', 0.035), ('elliptically', 0.035), ('kurtosis', 0.035), ('nonlinear', 0.035), ('shape', 0.034), ('reducing', 0.034), ('optimally', 0.033), ('justi', 0.032), ('quantify', 0.032), ('estimator', 0.032), ('red', 0.031), ('phenomenon', 0.031), ('densities', 0.031), ('determinant', 0.031), ('albany', 0.031), ('masking', 0.031), ('windows', 0.03), ('closed', 0.03), ('forms', 0.029), ('neighboring', 0.029), ('gamma', 0.029), ('properties', 0.028), ('bandpass', 0.028), ('binning', 0.028), ('regards', 0.028), ('ltered', 0.028), ('supplementary', 0.028), ('entropy', 0.028), ('effectiveness', 0.028), ('estimators', 0.027), ('var', 0.027), ('clips', 0.027), ('gsm', 0.027), ('compression', 0.026), ('adjacent', 0.026), ('ef', 0.026), ('denominator', 0.026), ('yk', 0.026), ('khz', 0.025), ('visual', 0.025), ('yi', 0.025), ('reductions', 0.024), ('matthias', 0.024), ('digamma', 0.024), ('changes', 0.023), ('signal', 0.023), ('retinal', 0.023), ('sounds', 0.023), ('conditional', 0.023), ('symmetric', 0.022), ('pt', 0.022), ('measuring', 0.022), ('std', 0.022), ('orientation', 0.022), ('lter', 0.022), ('nonparametric', 0.022), ('theoretical', 0.022), ('intensities', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

Author: Siwei Lyu

Abstract: Divisive normalization (DN) has been advocated as an effective nonlinear efficient coding transform for natural sensory signals with applications in biology and engineering. In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. Our analysis is based on the use of multivariate t model to capture some important statistical properties of natural sensory signals. The multivariate t model justifies DN as an approximation to the transform that completely eliminates its statistical dependency. Furthermore, using the multivariate t model and measuring statistical dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. We compare this with the actual performance of the DN transform in reducing statistical dependencies of natural sensory signals. Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. On the other hand, we also observe a previously unreported phenomenon that DN may increase statistical dependencies when the size of pooling is small. 1

2 0.27919328 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

Author: Yi-da Wu, Shi-jie Lin, Hsin Chen

Abstract: The Diffusion Network(DN) is a stochastic recurrent network which has been shown capable of modeling the distributions of continuous-valued, continuoustime paths. However, the dynamics of the DN are governed by stochastic differential equations, making the DN unfavourable for simulation in a digital computer. This paper presents the implementation of the DN in analogue Very Large Scale Integration, enabling the DN to be simulated in real time. Moreover, the logdomain representation is applied to the DN, allowing the supply voltage and thus the power consumption to be reduced without limiting the dynamic ranges for diffusion processes. A VLSI chip containing a DN with two stochastic units has been designed and fabricated. The design of component circuits will be described, so will the simulation of the full system be presented. The simulation results demonstrate that the DN in VLSI is able to regenerate various types of continuous paths in real-time. 1

3 0.14065656 59 nips-2010-Deep Coding Network

Author: Yuanqing Lin, Zhang Tong, Shenghuo Zhu, Kai Yu

Abstract: This paper proposes a principled extension of the traditional single-layer flat sparse coding scheme, where a two-layer coding scheme is derived based on theoretical analysis of nonlinear functional approximation that extends recent results for local coordinate coding. The two-layer approach can be easily generalized to deeper structures in a hierarchical multiple-layer manner. Empirically, it is shown that the deep coding approach yields improved performance in benchmark datasets.

4 0.1375265 104 nips-2010-Generative Local Metric Learning for Nearest Neighbor Classification

Author: Yung-kyun Noh, Byoung-tak Zhang, Daniel D. Lee

Abstract: We consider the problem of learning a local metric to enhance the performance of nearest neighbor classification. Conventional metric learning methods attempt to separate data distributions in a purely discriminative manner; here we show how to take advantage of information from parametric generative models. We focus on the bias in the information-theoretic error arising from finite sampling effects, and find an appropriate local metric that maximally reduces the bias based upon knowledge from generative models. As a byproduct, the asymptotic theoretical analysis in this work relates metric learning with dimensionality reduction, which was not understood from previous discriminative approaches. Empirical experiments show that this learned local metric enhances the discriminative nearest neighbor performance on various datasets using simple class conditional generative models. 1

5 0.11893485 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

Author: Deep Ganguli, Eero P. Simoncelli

Abstract: unkown-abstract

6 0.11707685 268 nips-2010-The Neural Costs of Optimal Control

7 0.10755602 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

8 0.08214736 96 nips-2010-Fractionally Predictive Spiking Neurons

9 0.073382601 143 nips-2010-Learning Convolutional Feature Hierarchies for Visual Recognition

10 0.070238374 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

11 0.067896448 103 nips-2010-Generating more realistic images using gated MRF's

12 0.066684052 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

13 0.064847834 17 nips-2010-A biologically plausible network for the computation of orientation dominance

14 0.062999465 277 nips-2010-Two-Layer Generalization Analysis for Ranking Using Rademacher Average

15 0.06161581 101 nips-2010-Gaussian sampling by local perturbations

16 0.058582533 161 nips-2010-Linear readout from a neural population with partial correlation data

17 0.057432555 70 nips-2010-Efficient Optimization for Discriminative Latent Class Models

18 0.054189019 80 nips-2010-Estimation of Renyi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs

19 0.053314339 126 nips-2010-Inference with Multivariate Heavy-Tails in Linear Models

20 0.050479893 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.167), (1, 0.062), (2, -0.121), (3, 0.076), (4, 0.055), (5, 0.039), (6, 0.008), (7, 0.052), (8, -0.06), (9, -0.04), (10, 0.011), (11, -0.051), (12, 0.019), (13, -0.163), (14, -0.099), (15, -0.081), (16, 0.011), (17, -0.001), (18, -0.037), (19, 0.057), (20, 0.034), (21, -0.129), (22, -0.005), (23, 0.039), (24, -0.105), (25, -0.094), (26, 0.131), (27, -0.049), (28, -0.083), (29, -0.013), (30, -0.092), (31, 0.205), (32, -0.035), (33, -0.17), (34, 0.056), (35, 0.11), (36, 0.265), (37, -0.048), (38, -0.046), (39, 0.055), (40, 0.028), (41, 0.018), (42, 0.074), (43, -0.09), (44, -0.05), (45, 0.064), (46, 0.038), (47, 0.038), (48, 0.044), (49, -0.102)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96713817 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

Author: Siwei Lyu

Abstract: Divisive normalization (DN) has been advocated as an effective nonlinear efficient coding transform for natural sensory signals with applications in biology and engineering. In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. Our analysis is based on the use of multivariate t model to capture some important statistical properties of natural sensory signals. The multivariate t model justifies DN as an approximation to the transform that completely eliminates its statistical dependency. Furthermore, using the multivariate t model and measuring statistical dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. We compare this with the actual performance of the DN transform in reducing statistical dependencies of natural sensory signals. Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. On the other hand, we also observe a previously unreported phenomenon that DN may increase statistical dependencies when the size of pooling is small. 1

2 0.81174064 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

Author: Yi-da Wu, Shi-jie Lin, Hsin Chen

Abstract: The Diffusion Network(DN) is a stochastic recurrent network which has been shown capable of modeling the distributions of continuous-valued, continuoustime paths. However, the dynamics of the DN are governed by stochastic differential equations, making the DN unfavourable for simulation in a digital computer. This paper presents the implementation of the DN in analogue Very Large Scale Integration, enabling the DN to be simulated in real time. Moreover, the logdomain representation is applied to the DN, allowing the supply voltage and thus the power consumption to be reduced without limiting the dynamic ranges for diffusion processes. A VLSI chip containing a DN with two stochastic units has been designed and fabricated. The design of component circuits will be described, so will the simulation of the full system be presented. The simulation results demonstrate that the DN in VLSI is able to regenerate various types of continuous paths in real-time. 1

3 0.45654523 76 nips-2010-Energy Disaggregation via Discriminative Sparse Coding

Author: J. Z. Kolter, Siddharth Batra, Andrew Y. Ng

Abstract: Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances. Studies have shown that having devicelevel energy information can cause users to conserve significant amounts of energy, but current electricity meters only report whole-home data. Thus, developing algorithmic methods for disaggregation presents a key technical challenge in the effort to maximize energy conservation. In this paper, we examine a large scale energy disaggregation task, and apply a novel extension of sparse coding to this problem. In particular, we develop a method, based upon structured prediction, for discriminatively training sparse coding algorithms specifically to maximize disaggregation performance. We show that this significantly improves the performance of sparse coding algorithms on the energy task and illustrate how these disaggregation results can provide useful information about energy usage. 1

4 0.45607153 104 nips-2010-Generative Local Metric Learning for Nearest Neighbor Classification

Author: Yung-kyun Noh, Byoung-tak Zhang, Daniel D. Lee

Abstract: We consider the problem of learning a local metric to enhance the performance of nearest neighbor classification. Conventional metric learning methods attempt to separate data distributions in a purely discriminative manner; here we show how to take advantage of information from parametric generative models. We focus on the bias in the information-theoretic error arising from finite sampling effects, and find an appropriate local metric that maximally reduces the bias based upon knowledge from generative models. As a byproduct, the asymptotic theoretical analysis in this work relates metric learning with dimensionality reduction, which was not understood from previous discriminative approaches. Empirical experiments show that this learned local metric enhances the discriminative nearest neighbor performance on various datasets using simple class conditional generative models. 1

5 0.45429441 59 nips-2010-Deep Coding Network

Author: Yuanqing Lin, Zhang Tong, Shenghuo Zhu, Kai Yu

Abstract: This paper proposes a principled extension of the traditional single-layer flat sparse coding scheme, where a two-layer coding scheme is derived based on theoretical analysis of nonlinear functional approximation that extends recent results for local coordinate coding. The two-layer approach can be easily generalized to deeper structures in a hierarchical multiple-layer manner. Empirically, it is shown that the deep coding approach yields improved performance in benchmark datasets.

6 0.44919616 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

7 0.44767648 268 nips-2010-The Neural Costs of Optimal Control

8 0.44622362 157 nips-2010-Learning to localise sounds with spiking neural networks

9 0.40184921 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

10 0.39041039 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

11 0.38941976 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

12 0.35267335 143 nips-2010-Learning Convolutional Feature Hierarchies for Visual Recognition

13 0.34376127 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

14 0.34094062 96 nips-2010-Fractionally Predictive Spiking Neurons

15 0.32583374 53 nips-2010-Copula Bayesian Networks

16 0.32572782 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

17 0.32463056 161 nips-2010-Linear readout from a neural population with partial correlation data

18 0.31432015 129 nips-2010-Inter-time segment information sharing for non-homogeneous dynamic Bayesian networks

19 0.3134751 115 nips-2010-Identifying Dendritic Processing

20 0.30714312 80 nips-2010-Estimation of Renyi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.022), (17, 0.014), (27, 0.114), (30, 0.046), (35, 0.029), (45, 0.144), (50, 0.059), (52, 0.382), (60, 0.032), (77, 0.04), (78, 0.012), (90, 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.8922472 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

Author: Felipe Gerhard, Wulfram Gerstner

Abstract: Generalized Linear Models (GLMs) are an increasingly popular framework for modeling neural spike trains. They have been linked to the theory of stochastic point processes and researchers have used this relation to assess goodness-of-fit using methods from point-process theory, e.g. the time-rescaling theorem. However, high neural firing rates or coarse discretization lead to a breakdown of the assumptions necessary for this connection. Here, we show how goodness-of-fit tests from point-process theory can still be applied to GLMs by constructing equivalent surrogate point processes out of time-series observations. Furthermore, two additional tests based on thinning and complementing point processes are introduced. They augment the instruments available for checking model adequacy of point processes as well as discretized models. 1

2 0.88915861 231 nips-2010-Robust PCA via Outlier Pursuit

Author: Huan Xu, Constantine Caramanis, Sujay Sanghavi

Abstract: Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted. We present an efficient convex optimization-based algorithm we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation, is of paramount interest in bioinformatics and financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization, however, our results, setup, and approach, necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.

3 0.87769902 279 nips-2010-Universal Kernels on Non-Standard Input Spaces

Author: Andreas Christmann, Ingo Steinwart

Abstract: During the last years support vector machines (SVMs) have been successfully applied in situations where the input space X is not necessarily a subset of Rd . Examples include SVMs for the analysis of histograms or colored images, SVMs for text classiÄ?Ĺš cation and web mining, and SVMs for applications from computational biology using, e.g., kernels for trees and graphs. Moreover, SVMs are known to be consistent to the Bayes risk, if either the input space is a complete separable metric space and the reproducing kernel Hilbert space (RKHS) H ⊂ Lp (PX ) is dense, or if the SVM uses a universal kernel k. So far, however, there are no kernels of practical interest known that satisfy these assumptions, if X ⊂ Rd . We close this gap by providing a general technique based on Taylor-type kernels to explicitly construct universal kernels on compact metric spaces which are not subset of Rd . We apply this technique for the following special cases: universal kernels on the set of probability measures, universal kernels based on Fourier transforms, and universal kernels for signal processing. 1

same-paper 4 0.87480581 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

Author: Siwei Lyu

Abstract: Divisive normalization (DN) has been advocated as an effective nonlinear efficient coding transform for natural sensory signals with applications in biology and engineering. In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. Our analysis is based on the use of multivariate t model to capture some important statistical properties of natural sensory signals. The multivariate t model justifies DN as an approximation to the transform that completely eliminates its statistical dependency. Furthermore, using the multivariate t model and measuring statistical dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. We compare this with the actual performance of the DN transform in reducing statistical dependencies of natural sensory signals. Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. On the other hand, we also observe a previously unreported phenomenon that DN may increase statistical dependencies when the size of pooling is small. 1

5 0.82131165 140 nips-2010-Layer-wise analysis of deep networks with Gaussian kernels

Author: Grégoire Montavon, Klaus-Robert Müller, Mikio L. Braun

Abstract: Deep networks can potentially express a learning problem more efficiently than local learning machines. While deep networks outperform local learning machines on some problems, it is still unclear how their nice representation emerges from their complex structure. We present an analysis based on Gaussian kernels that measures how the representation of the learning problem evolves layer after layer as the deep network builds higher-level abstract representations of the input. We use this analysis to show empirically that deep networks build progressively better representations of the learning problem and that the best representations are obtained when the deep network discriminates only in the last layers. 1

6 0.62199819 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

7 0.60420662 96 nips-2010-Fractionally Predictive Spiking Neurons

8 0.59469986 131 nips-2010-Joint Analysis of Time-Evolving Binary Matrices and Associated Documents

9 0.59276682 115 nips-2010-Identifying Dendritic Processing

10 0.59171081 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

11 0.59059584 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

12 0.58962107 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

13 0.58420289 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

14 0.58055973 64 nips-2010-Distributionally Robust Markov Decision Processes

15 0.57668728 51 nips-2010-Construction of Dependent Dirichlet Processes based on Poisson Processes

16 0.57541192 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

17 0.57290441 111 nips-2010-Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model

18 0.56746495 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

19 0.56669813 117 nips-2010-Identifying graph-structured activation patterns in networks

20 0.56572825 31 nips-2010-An analysis on negative curvature induced by singularity in multi-layer neural-network learning