nips nips2002 nips2002-47 knowledge-graph by maker-knowledge-mining

47 nips-2002-Branching Law for Axons


Source: pdf

Author: Dmitri B. Chklovskii, Armen Stepanyants

Abstract: What determines the caliber of axonal branches? We pursue the hypothesis that the axonal caliber has evolved to minimize signal propagation delays, while keeping arbor volume to a minimum. We show that for a general cost function the optimal diameters of mother ($d_0$) and daughter ($d_1$, $d_2$) branches at a bifurcation obey a branching law: $d_0^{\nu+2} = d_1^{\nu+2} + d_2^{\nu+2}$.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: What determines the caliber of axonal branches? [sent-5, score-0.576]

2 We pursue the hypothesis that the axonal caliber has evolved to minimize signal propagation delays, while keeping arbor volume to a minimum. [sent-6, score-0.859]

3 We show that for a general cost function the optimal diameters of mother ($d_0$) and daughter ($d_1$, $d_2$) branches at a bifurcation obey a branching law: $d_0^{\nu+2} = d_1^{\nu+2} + d_2^{\nu+2}$. [sent-7, score-0.572]

4 The derivation relies on the fact that the conduction speed scales with the axon diameter to the power $\nu$ ($\nu = 1$ for myelinated axons and $\nu = 0.5$ for non-myelinated axons). [sent-8, score-1.028]

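The scaling above is all that is needed to apply the law in practice. A minimal numerical sketch (our own illustration, not code from the paper; the function name and the sample calibers are made up):

```python
# Minimal sketch of the branching law d0^(nu+2) = d1^(nu+2) + d2^(nu+2).
def mother_diameter(d1, d2, nu=1.0):
    """Predict the mother-branch caliber from the two daughter calibers.

    nu = 1.0 for myelinated axons (conduction speed s = k*d),
    nu = 0.5 for non-myelinated axons (s = k*sqrt(d)).
    """
    eta = nu + 2.0  # branching exponent
    return (d1**eta + d2**eta) ** (1.0 / eta)

print(mother_diameter(4.0, 3.0, nu=1.0))  # myelinated, eta = 3: ~4.50
print(mother_diameter(4.0, 3.0, nu=0.5))  # non-myelinated, eta = 2.5: ~4.69
```
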
5 We test the branching law on the available experimental data and find a reasonable agreement. [sent-10, score-0.628]

6 1 Introduction. Multi-cellular organisms have solved the problem of efficient transport of nutrients and communication between their body parts by evolving spectacular networks: trees, blood vessels, bronchi, and neuronal arbors. [sent-11, score-0.123]

7 These networks consist of segments bifurcating into thinner and thinner branches. [sent-12, score-0.151]

8 Understanding of branching in transport networks has been advanced through the application of optimization theory ([1], [2] and references therein). [sent-13, score-0.39]

9 Here we apply optimization theory to explain the caliber of branching segments in communication networks, i.e., axons. [sent-14, score-0.632]

10 Axons in different organisms vary in caliber from 0.1 μm (terminal segments in neocortex) to 1000 μm (squid giant axon) [3]. [sent-17, score-0.262] [sent-18, score-0.049]

12 What factors could be responsible for such variation in axon caliber? [sent-19, score-0.116]

13 According to the experimental data [4] and cable theory [5], thicker axons conduct action potentials faster, leading to shorter reaction times and, perhaps, quicker thinking. [sent-20, score-0.292]

14 This increases evolutionary fitness or, equivalently, reduces costs associated with conduction delays. [sent-21, score-0.428]

15 It is likely that thick axons are evolutionarily costly because they require a large amount of cytoplasm and occupy valuable space [6], [7]. [sent-23, score-0.324]

16 Then, is there an optimal axon caliber, which minimizes the combined cost of conduction delays and volume? [sent-24, score-0.654]

17 In this paper we derive an expression for the optimal axon diameter, which minimizes the combined cost of conduction delay and volume. [sent-25, score-0.749]

18 Although the relative cost of delay and volume is unknown, we use this expression to derive a law describing segment calibers of branching axons with no free parameters. [sent-26, score-1.54]

19 We test this law on the published anatomical data and find a satisfactory agreement. [sent-27, score-0.248]

20 2 Derivation of the branching law Although our theory holds for a rather general class of cost functions (see Methods), we start, for the sake of simplicity, by deriving the branching law in a special case of a linear cost function. [sent-28, score-1.398]

21 The detrimental contribution to fitness, $\mathcal{C}$, of an axonal segment of length $L$ can be represented as the sum of two terms, one proportional to the conduction delay along the segment, $T$, and the other to the segment volume, $V$: $\mathcal{C} = \alpha T + \beta V$. (1) [sent-29, score-1.445]

22 Here, $\alpha$ and $\beta$ are unknown but constant coefficients which reflect the relative contribution to the fitness cost of the signal propagation delay and the axonal volume. [sent-30, score-0.904]

23 Figure 1: Fitness cost of a myelinated axonal segment as a function of its diameter. [sent-44, score-0.936]

24 The lines show the volume cost, the delay cost, and the total cost. [sent-45, score-0.225]

25 Diameter and cost values are normalized to their respective optimal values. [sent-47, score-0.145]

26 We look for the axon caliber $d$ that minimizes the cost function $\mathcal{C}$. [sent-48, score-0.509]

27 To do this, we rewrite $\mathcal{C}$ as a function of $d$ by noticing the following relations: i) volume, $V = \frac{\pi}{4} L d^2$; ii) delay, $T = L/s$; iii) conduction velocity, $s = k d$ for myelinated axons (for non-myelinated axons, see Methods). Substituting these into Eq. (1) gives $\mathcal{C} = \alpha \frac{L}{k d} + \beta \frac{\pi}{4} L d^2$. (2) This cost function contains two terms, which have opposite dependence on $d$, and has a minimum (Fig. 1). [sent-49, score-0.018] [sent-61, score-0.635]

29 Next, by setting $\partial\mathcal{C}/\partial d = 0$ we find that the cost is minimized by the following axonal caliber: $d = \left(\frac{2\alpha}{\pi k \beta}\right)^{1/3}$. (3) The utility of this result may seem rather limited because the relative cost of time and volume, $\alpha/\beta$, is unknown. [sent-63, score-0.736]

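A quick numerical check of Eq. (3) is easy to set up. The sketch below (with arbitrary placeholder values for α, β, k, and L, none taken from the paper) compares the analytic optimum against a bounded scalar minimization:

```python
# Sketch: verify that d = (2*alpha/(pi*k*beta))**(1/3) minimizes the linear
# cost C(d) = alpha*L/(k*d) + beta*(pi/4)*L*d**2 of Eqs. (2)-(3).
import numpy as np
from scipy.optimize import minimize_scalar

alpha, beta, k, L = 2.0, 0.5, 1.3, 1.0  # arbitrary placeholder values

def cost(d):
    delay_cost = alpha * L / (k * d)           # alpha * T, with T = L/(k*d)
    volume_cost = beta * np.pi / 4 * L * d**2  # beta * V, cylinder volume
    return delay_cost + volume_cost

d_analytic = (2 * alpha / (np.pi * k * beta)) ** (1 / 3)
d_numeric = minimize_scalar(cost, bounds=(1e-6, 100.0), method="bounded").x
print(d_analytic, d_numeric)  # the two agree to optimizer tolerance
```
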
30 Figure 2: A simple axonal arbor with a single branch point and three axonal segments. [sent-65, score-0.975]

31 Time delays along each segment are $t_0$, $t_1$, and $t_2$. [sent-67, score-0.373]

32 The total time delay down the first branch is $T_1 = t_0 + t_1$, and down the second, $T_2 = t_0 + t_2$. However, we can apply this result to axonal branching and arrive at a testable prediction about the relationship among branch diameters without knowing the relative cost. [sent-68, score-1.534]

33 To do this we write the cost function for a bifurcation consisting of three segments, Fig. 2: $\mathcal{C} = \alpha_1 T_1 + \alpha_2 T_2 + \beta (V_0 + V_1 + V_2)$, (4) where $t_0$ is the conduction delay along segment 0, $t_1$ the conduction delay along segment 1, and $t_2$ the conduction delay along segment 2. [sent-69, score-0.203] [sent-70, score-2.214]

35 Coefficients $\alpha_1$ and $\alpha_2$ represent the relative costs of conduction delays for synapses located on the two daughter branches and may be different. [sent-71, score-0.474]

36 We group the terms corresponding to the same segment together: $\mathcal{C} = \left[(\alpha_1 + \alpha_2) t_0 + \beta V_0\right] + \left[\alpha_1 t_1 + \beta V_1\right] + \left[\alpha_2 t_2 + \beta V_2\right]$. (5) We look for the segment diameters which minimize this cost function. [sent-72, score-0.948]

37 To do this we make the dependence on the diameters explicit and differentiate with respect to them. [sent-73, score-0.31]

38 Since each term in Eq. (5) depends on the diameter of only one segment, the variables separate and we arrive at expressions analogous to Eq. (3): [sent-75, score-0.447]

39 $d_1 = \left(\frac{2\alpha_1}{\pi k \beta}\right)^{1/3}, \qquad d_2 = \left(\frac{2\alpha_2}{\pi k \beta}\right)^{1/3}$. (6) It is easy to see that these diameters satisfy the following branching law: $d_0^3 = d_1^3 + d_2^3$. (7) [sent-76, score-0.662]

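Spelling out the one-line step from Eq. (6) to Eq. (7): the grouped cost (5) assigns the mother segment the coefficient $\alpha_1 + \alpha_2$, so $d_0 = \left(\frac{2(\alpha_1 + \alpha_2)}{\pi k \beta}\right)^{1/3}$ and therefore

$$d_0^3 = \frac{2(\alpha_1 + \alpha_2)}{\pi k \beta} = \frac{2\alpha_1}{\pi k \beta} + \frac{2\alpha_2}{\pi k \beta} = d_1^3 + d_2^3.$$
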
40 A similar expression can be derived for non-myelinated axons (see Methods). [sent-78, score-0.269]

41 In this case, the conduction velocity scales with the square root of the segment diameter, resulting in a branching exponent of 2.5. [sent-79, score-1.194]

42 Laws similar to Eq. (7) have been derived for blood vessels, tree branching, and bronchi by balancing the metabolic cost of pumping viscous fluid against the volume cost [8], [9]. [sent-82, score-0.922]

43 Application of viscous flow to dendrites has been discussed in [10]. [sent-83, score-0.108]

44 However, it is hard to see how dendrites could be conduits for viscous fluid if their ends are sealed. [sent-84, score-0.15]

45 Rall [11] derived a similar law for branching dendrites by postulating impedance matching: $d_0^{3/2} = d_1^{3/2} + d_2^{3/2}$. (8) However, the main purpose of Rall's law was to simplify calculations of dendritic conduction rather than to explain actual branch caliber measurements. [sent-85, score-1.544]

46 3 Comparison with experiment. We test our branching law, Eq. (7), by comparing it with the data obtained from myelinated motor fibers of the cat [12], Fig. 3. [sent-86, score-0.354] [sent-87, score-0.223]

48 Data points represent 63 branch points for which all three axonal calibers were available. [sent-89, score-0.519]

49 Despite the large spread, the data are consistent with our predictions. [sent-92, score-0.069]

50 The best-fit exponent, $\eta = 2.57$, is closer to our prediction than to Rall's law, $\eta = 1.5$. [sent-94, score-0.019]

51 We also show the histogram of the exponents $\eta$ obtained for each of the 63 branch points from the same data set, Fig. 4. [sent-96, score-0.421]

52 Its median, $\eta = 2.67$, is much closer to our predicted value for myelinated axons, $\eta = 3$, than to Rall's law, $\eta = 1.5$. [sent-99, score-0.206]

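For a single bifurcation the exponent $\eta$ is defined implicitly by $d_0^{\eta} = d_1^{\eta} + d_2^{\eta}$ and must be found numerically. A minimal sketch (the bracketing interval and the sample triple are our own choices, not values from [12]):

```python
# Sketch: recover the per-bifurcation exponent eta from measured calibers by
# solving (d1/d0)**eta + (d2/d0)**eta = 1; well-defined only for d1, d2 < d0.
from scipy.optimize import brentq

def branching_exponent(d0, d1, d2, lo=0.1, hi=20.0):
    f = lambda eta: (d1 / d0) ** eta + (d2 / d0) ** eta - 1.0
    return brentq(f, lo, hi)  # f(lo) > 0 and f(hi) < 0 bracket the root

# A made-up triple that obeys the law with eta = 3 almost exactly:
print(branching_exponent(4.498, 4.0, 3.0))  # ~3.0
```
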
53 Figure 3: Comparison of the experimental data (asterisks) [12] with theoretical predictions. [sent-122, score-0.026]

54 Each axonal bifurcation (with $d_1 \neq d_2$) is represented in the plot twice. [sent-123, score-0.405]

55 The lines correspond to the best-fit exponent, $\eta = 2.57$, and our prediction for myelinated axons, $\eta = 3$. [sent-127, score-0.187]

56 Analysis of the experimental data reveals a large spread in the values of the exponent, $\eta$. [sent-128, score-0.095]

57 This spread may arise from the biological variability in the axon diameters, other factors influencing axon diameters, or measurement errors due to the finite resolution of light microscopy. [sent-129, score-0.379]

58 Although we cannot distinguish between these causes, we performed a simulation showing that a reasonable measurement error is sufficient to account for the spread. [sent-130, score-0.06]

59 First, based on the experimental data [12], we generate a set of diameters $d_0$, $d_1$, and $d_2$ at branch points which satisfy Eq. (7). [sent-131, score-0.506]

60 We do this by taking all diameter pairs at branch points from the experimental data and calculating the value of the third diameter according to Eq. (7). [sent-133, score-0.488]

61 Next we simulate the experimental data by adding Gaussian noise to all branch diameters, and calculate the probability distribution for the exponent $\eta$ resulting from this procedure. [sent-135, score-0.418]

62 Fig. 4 shows that the spread in the histogram of the branching exponent could be explained by a Gaussian measurement error with a standard deviation comparable to the μm precision with which the diameter measurements are reported in [12]. [sent-137, score-0.732] [sent-142, score-0.145]

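A sketch of such a noise simulation (the noise level sigma, the seed, and the single exact triple are illustrative placeholders; the paper perturbs the full set of experimental triples):

```python
# Sketch: perturb an exact law-abiding triple with Gaussian measurement noise
# and look at the spread of recovered exponents.
import numpy as np
from scipy.optimize import brentq

def branching_exponent(d0, d1, d2, lo=0.1, hi=20.0):
    f = lambda eta: (d1 / d0) ** eta + (d2 / d0) ** eta - 1.0
    return brentq(f, lo, hi)

rng = np.random.default_rng(0)
eta_true, sigma, n_trials = 3.0, 0.3, 1000  # sigma is a placeholder value
d1, d2 = 4.0, 3.0
d0 = (d1**eta_true + d2**eta_true) ** (1 / eta_true)  # exact triple, Eq. (7)

etas = []
for _ in range(n_trials):
    n0, n1, n2 = np.array([d0, d1, d2]) + rng.normal(0.0, sigma, 3)
    if 0 < n1 < n0 and 0 < n2 < n0:  # exponent defined only for d1, d2 < d0
        etas.append(branching_exponent(n0, n1, n2))

print(np.median(etas), np.std(etas))  # spread induced purely by the noise
```
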
64 Figure 4: Experimentally observed spread in the branching exponent may arise from measurement errors. (Histogram annotated with Rall's exponent, the average exponent, and the predicted exponent.) [sent-143, score-1.161]

65 The histogram shows the distribution of the exponent $\eta$, Eq. [sent-144, score-0.249]

66 The line shows the simulated distribution of the exponent obtained in the presence of measurement errors. [sent-148, score-0.28]

67 4 Conclusion. Starting with the hypothesis that axonal arbors have been optimized in the course of evolution for fast signal conduction while keeping arbor volume to a minimum, we derived a branching law that relates segment diameters at a branch point. [sent-149, score-2.15]

68 The derivation was done for a cost function of a general form and relies only on the known scaling of signal propagation velocity with the axonal caliber. [sent-150, score-0.632]

69 This law is consistent with the available experimental data on myelinated axons. [sent-151, score-0.413]

70 The observed spread in the branching exponent may be accounted for by the measurement error. [sent-152, score-0.703]

71 There (i.e., in dendrites), similar to non-myelinated axons, the time delay or attenuation of passively propagating signals scales as one over the square root of the diameter. [sent-155, score-0.195]

72 This leads to a branching law with an exponent of 5/2. [sent-156, score-0.774]

73 However, the presence of reflections from branch points and active conductances is likely to complicate the picture. [sent-157, score-0.208]

74 5 Methods. The detrimental contribution of an axonal arbor to the evolutionary fitness can be quantified by the cost, $\mathcal{C}$. [sent-158, score-0.644]

75 We postulate that the cost function, $\mathcal{C}$, is a monotonically increasing function of the total axonal volume per neuron, $V$, and of all signal propagation delays, $T_j$, from the soma to the $j$-th synapse, where $j = 1, 2, 3, \ldots$: $\mathcal{C} = \mathcal{C}(V, T_1, T_2, T_3, \ldots)$. (10) [sent-159, score-0.666]

76 Below we show that this rather general cost function (along with biophysical properties of axons) is minimized when the axonal calibers satisfy the following branching law: $d_0^{\eta} = d_1^{\eta} + d_2^{\eta}$, (11) with branching exponent $\eta = \nu + 2$: $\eta = 3$ for myelinated and $\eta = 2.5$ for non-myelinated axons. [sent-162, score-2.124]

77 Although we derive Eq. (11) for a single branch point, our theory can be trivially extended to more complex arbor topologies. [sent-165, score-0.281]

78 We rewrite the cost function, $\mathcal{C}$, in terms of the volume contributions, $V_i$, of the $i$-th axonal segment to the total volume of the axonal arbor, $V$, and the signal propagation delays, $t_i$, incurred along the $i$-th axonal segment. [sent-166, score-1.736]

79 The cost function reduces to: $\mathcal{C} = \mathcal{C}(V_0 + V_1 + V_2,\; t_0 + t_1,\; t_0 + t_2)$. (12) Next, we express the volume and signal propagation delay of each segment as a function of the segment diameter. [sent-167, score-0.99]

80 The volume of each cylindrical segment is given by: $V_i = \frac{\pi}{4} L_i d_i^2$, (13) where $L_i$ and $d_i$ are the segment length and diameter, correspondingly. [sent-168, score-0.6]

81 Signal propagation delay, $t_i$, is given by the ratio of segment length, $L_i$, and signal speed, $s_i$. Signal speed along an axonal segment, in turn, depends on its diameter as: $s_i = k d_i^{\nu}$, (14) where $\nu = 1$ for myelinated [4] and $\nu = 0.5$ for non-myelinated axons. [sent-169, score-1.095]

82 As a result, the propagation delay along segment $i$ is: $t_i = \frac{L_i}{k d_i^{\nu}}$. (15) [sent-171, score-0.508]

83 Substituting Eqs. (13) and (15) into Eq. (12), we find the dependence of the cost function on the segment diameters: $\mathcal{C} = \mathcal{C}\!\left(\frac{\pi}{4} L_0 d_0^2 + \frac{\pi}{4} L_1 d_1^2 + \frac{\pi}{4} L_2 d_2^2,\;\; \frac{L_0}{k d_0^{\nu}} + \frac{L_1}{k d_1^{\nu}},\;\; \frac{L_0}{k d_0^{\nu}} + \frac{L_2}{k d_2^{\nu}}\right)$. (16) [sent-173, score-0.839]

84 To find the diameters of all segments which minimize the cost function $\mathcal{C}$, we calculate its partial derivatives with respect to all segment diameters and set them to zero: [sent-174, score-1.028]

85 $\frac{\partial \mathcal{C}}{\partial d_0} = \mathcal{C}'_V \frac{\pi}{2} L_0 d_0 - \left(\mathcal{C}'_{T_1} + \mathcal{C}'_{T_2}\right) \frac{\nu L_0}{k d_0^{\nu+1}} = 0, \quad \frac{\partial \mathcal{C}}{\partial d_1} = \mathcal{C}'_V \frac{\pi}{2} L_1 d_1 - \mathcal{C}'_{T_1} \frac{\nu L_1}{k d_1^{\nu+1}} = 0, \quad \frac{\partial \mathcal{C}}{\partial d_2} = \mathcal{C}'_V \frac{\pi}{2} L_2 d_2 - \mathcal{C}'_{T_2} \frac{\nu L_2}{k d_2^{\nu+1}} = 0$, (17) where $\mathcal{C}'_V$, $\mathcal{C}'_{T_1}$, $\mathcal{C}'_{T_2}$ denote the partial derivatives of $\mathcal{C}$ with respect to its arguments. By solving these equations we find the optimal segment diameters: $d_0^{\nu+2} = \frac{2\nu \left(\mathcal{C}'_{T_1} + \mathcal{C}'_{T_2}\right)}{\pi k \mathcal{C}'_V}, \quad d_1^{\nu+2} = \frac{2\nu \mathcal{C}'_{T_1}}{\pi k \mathcal{C}'_V}, \quad d_2^{\nu+2} = \frac{2\nu \mathcal{C}'_{T_2}}{\pi k \mathcal{C}'_V}$. (18) [sent-179, score-0.421]

86 These equations imply that the cost function is minimized when the segment diameters at a branch point satisfy the following expression (independent of the particular form of the cost function, which enters Eq. (18) only through its partial derivatives): $d_0^{\nu+2} = d_1^{\nu+2} + d_2^{\nu+2}$. [sent-186, score-1.075]

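For the linear special case, Eq. (5), the cancellation behind Eqs. (17)-(18) can also be checked symbolically. A sketch using sympy (our own illustration; symbol names mirror the notation above, and the cost is the linear one, not the general $\mathcal{C}$):

```python
# Sketch: for the linear cost of Eq. (5) with arbitrary exponent nu,
# stationarity gives d0**(nu+2) = d1**(nu+2) + d2**(nu+2).
import sympy as sp

nu, k, b, a1, a2 = sp.symbols("nu k beta alpha1 alpha2", positive=True)
d0, d1, d2, L0, L1, L2 = sp.symbols("d0 d1 d2 L0 L1 L2", positive=True)

# Eq. (5) with t_i = L_i/(k*d_i**nu) (Eq. 15) and V_i = pi/4*L_i*d_i**2 (Eq. 13)
C = ((a1 + a2) * L0 / (k * d0**nu) + b * sp.pi / 4 * L0 * d0**2
     + a1 * L1 / (k * d1**nu) + b * sp.pi / 4 * L1 * d1**2
     + a2 * L2 / (k * d2**nu) + b * sp.pi / 4 * L2 * d2**2)

def optimal_power(d, L):
    # Clear the negative power of d, then solve dC/dd = 0 for d**(nu + 2).
    eq = sp.expand(sp.diff(C, d) * d**(nu + 1) / L)
    return sp.solve(eq, d**(nu + 2))[0]

p0 = optimal_power(d0, L0)  # 2*nu*(a1 + a2)/(pi*k*b)
p1 = optimal_power(d1, L1)  # 2*nu*a1/(pi*k*b)
p2 = optimal_power(d2, L2)  # 2*nu*a2/(pi*k*b)
print(sp.simplify(p0 - p1 - p2))  # prints 0: the branching law holds
```
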
87 The vascular system and the cost of blood volume. [sent-231, score-0.199]

88 (1927) A relationship between circumference and weight in trees and its bearing on branching angles. [sent-235, score-0.415]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('branching', 0.354), ('axonal', 0.347), ('conduction', 0.291), ('diameters', 0.289), ('segment', 0.257), ('axons', 0.248), ('caliber', 0.229), ('exponent', 0.22), ('law', 0.2), ('myelinated', 0.187), ('branch', 0.172), ('delay', 0.157), ('diameter', 0.145), ('cost', 0.145), ('rail', 0.127), ('tj', 0.123), ('axon', 0.116), ('arbor', 0.109), ('kd', 0.092), ('delays', 0.083), ('fitness', 0.083), ('spread', 0.069), ('volume', 0.068), ('propagation', 0.061), ('measurement', 0.06), ('bifurcation', 0.058), ('blood', 0.054), ('dendrites', 0.054), ('viscous', 0.054), ('segments', 0.049), ('find', 0.048), ('signal', 0.045), ('bronchs', 0.042), ('chklovskii', 0.042), ('daughter', 0.042), ('fluid', 0.042), ('physiol', 0.042), ('thinner', 0.042), ('vessels', 0.042), ('branches', 0.038), ('fibers', 0.036), ('transport', 0.036), ('velocity', 0.034), ('evolutionary', 0.034), ('detrimental', 0.033), ('cold', 0.033), ('harbor', 0.033), ('spring', 0.033), ('organisms', 0.033), ('along', 0.033), ('histogram', 0.029), ('coefficients', 0.028), ('murray', 0.028), ('minimized', 0.027), ('dendritic', 0.026), ('experimental', 0.026), ('arrive', 0.025), ('trees', 0.025), ('ad', 0.024), ('thick', 0.024), ('santa', 0.024), ('expression', 0.021), ('scales', 0.021), ('dependence', 0.021), ('neuron', 0.021), ('expressions', 0.02), ('speed', 0.02), ('costs', 0.02), ('contribution', 0.02), ('minimizes', 0.019), ('satisfy', 0.019), ('closer', 0.019), ('fe', 0.018), ('testable', 0.018), ('postulating', 0.018), ('cajal', 0.018), ('bifurcating', 0.018), ('quantified', 0.018), ('stevens', 0.018), ('arbors', 0.018), ('asterisks', 0.018), ('bearing', 0.018), ('bungtown', 0.018), ('circumference', 0.018), ('conductances', 0.018), ('cylindrical', 0.018), ('cytoplasm', 0.018), ('del', 0.018), ('harvard', 0.018), ('metabolic', 0.018), ('reflections', 0.018), ('rel', 0.018), ('sinauer', 0.018), ('sunderland', 0.018), ('thicker', 0.018), ('vertebrates', 0.018), ('wiring', 0.018), ('arise', 0.018), ('rewrite', 0.018), ('root', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 47 nips-2002-Branching Law for Axons

Author: Dmitri B. Chklovskii, Armen Stepanyants

Abstract: What determines the caliber of axonal branches? We pursue the hypothesis that the axonal caliber has evolved to minimize signal propagation delays, while keeping arbor volume to a minimum. We show that for a general cost function the optimal diameters of mother ($d_0$) and daughter ($d_1$, $d_2$) branches at a bifurcation obey a branching law: $d_0^{\nu+2} = d_1^{\nu+2} + d_2^{\nu+2}$.

2 0.098233879 200 nips-2002-Topographic Map Formation by Silicon Growth Cones

Author: Brian Taba, Kwabena A. Boahen

Abstract: We describe a self-configuring neuromorphic chip that uses a model of activity-dependent axon remodeling to automatically wire topographic maps based solely on input correlations. Axons are guided by growth cones, which are modeled in analog VLSI for the first time. Growth cones migrate up neurotropin gradients, which are represented by charge diffusing in transistor channels. Virtual axons move by rerouting address-events. We refined an initially gross topographic projection by simulating retinal wave input. 1 Neuromorphic Systems Neuromorphic engineers are attempting to match the computational efficiency of biological systems by morphing neurocircuitry into silicon circuits [1]. One of the most detailed implementations to date is the silicon retina described in [2] . This chip comprises thirteen different cell types, each of which must be individually and painstakingly wired. While this circuit-level approach has been very successful in sensory systems, it is less helpful when modeling largely unelucidated and exceedingly plastic higher processing centers in cortex. Instead of an explicit blueprint for every cortical area, what is needed is a developmental rule that can wire complex circuits from minimal specifications. One candidate is the famous

3 0.08761131 9 nips-2002-A Minimal Intervention Principle for Coordinated Movement

Author: Emanuel Todorov, Michael I. Jordan

Abstract: Behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Here we offer an explanation: we show that not only are variability and goal achievement compatible, but indeed that allowing variability in redundant dimensions is the optimal control strategy in the face of uncertainty. The optimal feedback control laws for typical motor tasks obey a “minimal intervention” principle: deviations from the average trajectory are only corrected when they interfere with the task goals. The resulting behavior exhibits task-constrained variability, as well as synergetic coupling among actuators—which is another unexplained empirical phenomenon.

4 0.070438132 171 nips-2002-Reconstructing Stimulus-Driven Neural Networks from Spike Times

Author: Duane Q. Nykamp

Abstract: We present a method to distinguish direct connections between two neurons from common input originating from other, unmeasured neurons. The distinction is computed from the spike times of the two neurons in response to a white noise stimulus. Although the method is based on a highly idealized linear-nonlinear approximation of neural response, we demonstrate via simulation that the approach can work with a more realistic, integrate-and-fire neuron model. We propose that the approach exemplified by this analysis may yield viable tools for reconstructing stimulus-driven neural networks from data gathered in neurophysiology experiments.

5 0.046838764 67 nips-2002-Discriminative Binaural Sound Localization

Author: Ehud Ben-reuven, Yoram Singer

Abstract: Time difference of arrival (TDOA) is commonly used to estimate the azimuth of a source in a microphone array. The most common methods to estimate TDOA are based on finding extrema in generalized crosscorrelation waveforms. In this paper we apply microphone array techniques to a manikin head. By considering the entire cross-correlation waveform we achieve azimuth prediction accuracy that exceeds extrema locating methods. We do so by quantizing the azimuthal angle and treating the prediction problem as a multiclass categorization task. We demonstrate the merits of our approach by evaluating the various approaches on Sony’s AIBO robot.

6 0.045061372 51 nips-2002-Classifying Patterns of Visual Motion - a Neuromorphic Approach

7 0.044434186 155 nips-2002-Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach

8 0.042781819 33 nips-2002-Approximate Linear Programming for Average-Cost Dynamic Programming

9 0.042767674 199 nips-2002-Timing and Partial Observability in the Dopamine System

10 0.041781612 172 nips-2002-Recovering Articulated Model Topology from Observed Rigid Motion

11 0.041582428 30 nips-2002-Annealing and the Rate Distortion Problem

12 0.038803998 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex

13 0.034094609 174 nips-2002-Regularized Greedy Importance Sampling

14 0.033340823 179 nips-2002-Scaling of Probability-Based Optimization Algorithms

15 0.032254584 14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation

16 0.03047977 94 nips-2002-Fractional Belief Propagation

17 0.028431423 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons

18 0.028395373 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs

19 0.027772509 79 nips-2002-Evidence Optimization Techniques for Estimating Stimulus-Response Functions

20 0.027283525 128 nips-2002-Learning a Forward Model of a Reflex


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.079), (1, 0.04), (2, -0.025), (3, -0.006), (4, 0.005), (5, 0.021), (6, -0.007), (7, 0.014), (8, 0.028), (9, 0.042), (10, -0.009), (11, 0.028), (12, -0.002), (13, 0.002), (14, 0.005), (15, -0.004), (16, -0.007), (17, -0.011), (18, -0.021), (19, -0.064), (20, -0.023), (21, 0.06), (22, 0.007), (23, -0.037), (24, 0.075), (25, -0.002), (26, 0.04), (27, -0.015), (28, 0.054), (29, -0.06), (30, 0.044), (31, -0.027), (32, -0.058), (33, -0.089), (34, -0.032), (35, 0.064), (36, -0.123), (37, -0.126), (38, 0.129), (39, 0.017), (40, 0.011), (41, 0.146), (42, -0.049), (43, 0.067), (44, 0.029), (45, 0.052), (46, -0.246), (47, 0.057), (48, 0.135), (49, 0.035)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96962482 47 nips-2002-Branching Law for Axons

Author: Dmitri B. Chklovskii, Armen Stepanyants

Abstract: What determines the caliber of axonal branches? We pursue the hypothesis that the axonal caliber has evolved to minimize signal propagation delays, while keeping arbor volume to a minimum. We show that for a general cost function the optimal diameters of mother ($d_0$) and daughter ($d_1$, $d_2$) branches at a bifurcation obey a branching law: $d_0^{\nu+2} = d_1^{\nu+2} + d_2^{\nu+2}$.

2 0.47624698 200 nips-2002-Topographic Map Formation by Silicon Growth Cones

Author: Brian Taba, Kwabena A. Boahen

Abstract: We describe a self-configuring neuromorphic chip that uses a model of activity-dependent axon remodeling to automatically wire topographic maps based solely on input correlations. Axons are guided by growth cones, which are modeled in analog VLSI for the first time. Growth cones migrate up neurotropin gradients, which are represented by charge diffusing in transistor channels. Virtual axons move by rerouting address-events. We refined an initially gross topographic projection by simulating retinal wave input. 1 Neuromorphic Systems Neuromorphic engineers are attempting to match the computational efficiency of biological systems by morphing neurocircuitry into silicon circuits [1]. One of the most detailed implementations to date is the silicon retina described in [2] . This chip comprises thirteen different cell types, each of which must be individually and painstakingly wired. While this circuit-level approach has been very successful in sensory systems, it is less helpful when modeling largely unelucidated and exceedingly plastic higher processing centers in cortex. Instead of an explicit blueprint for every cortical area, what is needed is a developmental rule that can wire complex circuits from minimal specifications. One candidate is the famous

3 0.4155024 67 nips-2002-Discriminative Binaural Sound Localization

Author: Ehud Ben-reuven, Yoram Singer

Abstract: Time difference of arrival (TDOA) is commonly used to estimate the azimuth of a source in a microphone array. The most common methods to estimate TDOA are based on finding extrema in generalized crosscorrelation waveforms. In this paper we apply microphone array techniques to a manikin head. By considering the entire cross-correlation waveform we achieve azimuth prediction accuracy that exceeds extrema locating methods. We do so by quantizing the azimuthal angle and treating the prediction problem as a multiclass categorization task. We demonstrate the merits of our approach by evaluating the various approaches on Sony’s AIBO robot.

4 0.3749972 9 nips-2002-A Minimal Intervention Principle for Coordinated Movement

Author: Emanuel Todorov, Michael I. Jordan

Abstract: Behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Here we offer an explanation: we show that not only are variability and goal achievement compatible, but indeed that allowing variability in redundant dimensions is the optimal control strategy in the face of uncertainty. The optimal feedback control laws for typical motor tasks obey a “minimal intervention” principle: deviations from the average trajectory are only corrected when they interfere with the task goals. The resulting behavior exhibits task-constrained variability, as well as synergetic coupling among actuators—which is another unexplained empirical phenomenon.

5 0.33189064 179 nips-2002-Scaling of Probability-Based Optimization Algorithms

Author: J. L. Shapiro

Abstract: Population-based Incremental Learning is shown to require very sensitive scaling of its learning rate. The learning rate must scale with the system size in a problem-dependent way. This is shown in two problems: the needle-in-a-haystack, in which the learning rate must vanish exponentially in the system size, and in a smooth function in which the learning rate must vanish like the square root of the system size. Two methods are proposed for removing this sensitivity. A learning dynamics which obeys detailed balance is shown to give consistent performance over the entire range of learning rates. An analog of mutation is shown to require a learning rate which scales as the inverse system size, but is problem independent.

6 0.32668906 183 nips-2002-Source Separation with a Sensor Array using Graphical Models and Subband Filtering

7 0.30448294 33 nips-2002-Approximate Linear Programming for Average-Cost Dynamic Programming

8 0.27720863 182 nips-2002-Shape Recipes: Scene Representations that Refer to the Image

9 0.24773954 117 nips-2002-Intrinsic Dimension Estimation Using Packing Numbers

10 0.23018618 95 nips-2002-Gaussian Process Priors with Uncertain Inputs Application to Multiple-Step Ahead Time Series Forecasting

11 0.22272238 155 nips-2002-Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach

12 0.22211066 123 nips-2002-Learning Attractor Landscapes for Learning Motor Primitives

13 0.21800889 14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation

14 0.20914154 172 nips-2002-Recovering Articulated Model Topology from Observed Rigid Motion

15 0.20636675 174 nips-2002-Regularized Greedy Importance Sampling

16 0.1982587 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces

17 0.19541612 60 nips-2002-Convergence Properties of Some Spike-Triggered Analysis Techniques

18 0.19500013 189 nips-2002-Stable Fixed Points of Loopy Belief Propagation Are Local Minima of the Bethe Free Energy

19 0.1947792 142 nips-2002-Maximum Likelihood and the Information Bottleneck

20 0.1913484 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.012), (23, 0.027), (42, 0.032), (54, 0.105), (55, 0.056), (57, 0.018), (68, 0.045), (74, 0.041), (92, 0.028), (96, 0.466), (98, 0.058)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80266464 47 nips-2002-Branching Law for Axons

Author: Dmitri B. Chklovskii, Armen Stepanyants

Abstract: What determines the caliber of axonal branches? We pursue the hypothesis that the axonal caliber has evolved to minimize signal propagation delays, while keeping arbor volume to a minimum. We show that for a general cost function the optimal diameters of mother ($d_0$) and daughter ($d_1$, $d_2$) branches at a bifurcation obey a branching law: $d_0^{\nu+2} = d_1^{\nu+2} + d_2^{\nu+2}$.

2 0.29518887 10 nips-2002-A Model for Learning Variance Components of Natural Images

Author: Yan Karklin, Michael S. Lewicki

Abstract: We present a hierarchical Bayesian model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coefficients of an efficient image basis. This offers a novel description of higherorder image structure and provides a way to learn coarse-coded, sparsedistributed representations of abstract image properties such as object location, scale, and texture.

3 0.29300478 119 nips-2002-Kernel Dependency Estimation

Author: Jason Weston, Olivier Chapelle, Vladimir Vapnik, André Elisseeff, Bernhard Schölkopf

Abstract: We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be for example: vectors, images, strings, trees or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images. 1

4 0.29241449 106 nips-2002-Hyperkernels

Author: Cheng S. Ong, Robert C. Williamson, Alex J. Smola

Abstract: We consider the problem of choosing a kernel suitable for estimation using a Gaussian Process estimator or a Support Vector Machine. A novel solution is presented which involves defining a Reproducing Kernel Hilbert Space on the space of kernels itself. By utilizing an analog of the classical representer theorem, the problem of choosing a kernel from a parameterized family of kernels (e.g. of varying width) is reduced to a statistical estimation problem akin to the problem of minimizing a regularized risk functional. Various classical settings for model or kernel selection are special cases of our framework.

5 0.29219007 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs

Author: Yves Grandvalet, Stéphane Canu

Abstract: This paper introduces an algorithm for the automatic relevance determination of input variables in kernelized Support Vector Machines. Relevance is measured by scale factors defining the input space metric, and feature selection is performed by assigning zero weights to irrelevant variables. The metric is automatically tuned by the minimization of the standard SVM empirical risk, where scale factors are added to the usual set of parameters defining the classifier. Feature selection is achieved by constraints encouraging the sparsity of scale factors. The resulting algorithm compares favorably to state-of-the-art feature selection procedures and demonstrates its effectiveness on a demanding facial expression recognition problem.

6 0.29150307 189 nips-2002-Stable Fixed Points of Loopy Belief Propagation Are Local Minima of the Bethe Free Energy

7 0.29086849 82 nips-2002-Exponential Family PCA for Belief Compression in POMDPs

8 0.29010981 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits

9 0.2895889 9 nips-2002-A Minimal Intervention Principle for Coordinated Movement

10 0.28916875 21 nips-2002-Adaptive Classification by Variational Kalman Filtering

11 0.28896207 37 nips-2002-Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

12 0.28889358 123 nips-2002-Learning Attractor Landscapes for Learning Motor Primitives

13 0.28874674 68 nips-2002-Discriminative Densities from Maximum Contrast Estimation

14 0.28856504 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons

15 0.28847167 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals

16 0.28822249 70 nips-2002-Distance Metric Learning with Application to Clustering with Side-Information

17 0.28776449 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces

18 0.28765669 14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation

19 0.28751802 27 nips-2002-An Impossibility Theorem for Clustering

20 0.28735772 2 nips-2002-A Bilinear Model for Sparse Coding