NIPS 2001 knowledge graph, by maker-knowledge-mining

Similar papers are computed with three models: TF-IDF, LSI, and LDA.

Papers list:

1 nips-2001-(Not) Bounding the True Error

Author: John Langford, Rich Caruana

Abstract: We present a new approach to bounding the true error rate of a continuous valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each parameter in the model is to noise. The true error rate of the stochastic classifier found with the sensitivity analysis can then be tightly bounded using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks, with results showing an order of magnitude improvement over the best deterministic neural net bounds.
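
To make the recipe concrete, here is a minimal NumPy sketch of the general PAC-Bayes pattern the abstract describes: perturb the trained weights with per-parameter noise, estimate the stochastic (Gibbs) classifier's empirical error, and bound its true error. The toy linear model, the McAllester-style bound, and the unit-Gaussian prior are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def pac_bayes_bound(emp_err, kl, n, delta=0.05):
    """One common PAC-Bayes-style bound on the true error of a stochastic
    classifier (McAllester form); the paper's exact bound may differ."""
    return emp_err + np.sqrt((kl + np.log((n + 1) / delta)) / (2 * n))

def stochastic_error(predict, w, sigma, X, y, n_samples=100, rng=None):
    """Empirical error of the Gibbs classifier: average error of
    predictors drawn from N(w, diag(sigma^2))."""
    rng = rng or np.random.default_rng(0)
    errs = []
    for _ in range(n_samples):
        w_noisy = w + sigma * rng.standard_normal(w.shape)
        errs.append(np.mean(predict(w_noisy, X) != y))
    return np.mean(errs)

# Toy linear classifier on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
w_true = rng.standard_normal(5)
y = (X @ w_true > 0).astype(int)
predict = lambda w, X: (X @ w > 0).astype(int)

w = w_true + 0.1 * rng.standard_normal(5)   # stand-in for a trained model
sigma = np.full(5, 0.2)                     # per-parameter noise scale
# KL(N(w, diag(sigma^2)) || N(0, I)) for a diagonal Gaussian posterior/prior.
kl = 0.5 * np.sum(sigma**2 + w**2 - 1 - np.log(sigma**2))

err = stochastic_error(predict, w, sigma, X, y)
print("stochastic empirical error:", err)
print("PAC-Bayes bound on true error:", pac_bayes_bound(err, kl, len(y)))
```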

2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability.

3 nips-2001-ACh, Uncertainty, and Cortical Inference

Author: Peter Dayan, Angela J. Yu

Abstract: Acetylcholine (ACh) has been implicated in a wide variety of tasks involving attentional processes and plasticity. Following extensive animal studies, it has previously been suggested that ACh reports on uncertainty and controls hippocampal, cortical and cortico-amygdalar plasticity. We extend this view and consider its effects on cortical representational inference, arguing that ACh controls the balance between bottom-up inference, influenced by input stimuli, and top-down inference, influenced by contextual information. We illustrate our proposal using a hierarchical hidden Markov model.

4 nips-2001-ALGONQUIN - Learning Dynamic Noise Models From Noisy Speech for Robust Speech Recognition

Author: Brendan J. Frey, Trausti T. Kristjansson, Li Deng, Alex Acero

Abstract: A challenging, unsolved problem in the speech recognition community is recognizing speech signals that are corrupted by loud, highly nonstationary noise. One approach to noisy speech recognition is to automatically remove the noise from the cepstrum sequence before feeding it into a clean speech recognizer. In previous work published in Eurospeech, we showed how a probability model trained on clean speech and a separate probability model trained on noise could be combined for the purpose of estimating the noise-free speech from the noisy speech. We showed how an iterative 2nd-order vector Taylor series approximation could be used for probabilistic inference in this model. In many circumstances, it is not possible to obtain examples of noise without speech. Noise statistics may change significantly during an utterance, so that speech-free frames are not sufficient for estimating the noise model. In this paper, we show how the noise model can be learned even when the data contains speech. In particular, the noise model can be learned from the test utterance and then used to denoise the test utterance. The approximate inference technique is used as an approximate E-step in a generalized EM algorithm that learns the parameters of the noise model from a test utterance. For both Wall Street Journal data with added noise samples and the Aurora benchmark, we show that the new noise-adaptive technique performs as well as or significantly better than the non-adaptive algorithm, without the need for a separate training set of noise examples.
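
The core idea, fixing a clean-signal model and learning the noise model from the noisy test data itself via EM, can be sketched in a toy setting. The example below assumes additive Gaussian noise in the linear domain, where exact EM is tractable; ALGONQUIN instead works on log-spectra, where the interaction is nonlinear and requires the iterative vector Taylor-series approximation.

```python
import numpy as np

def learn_noise_em(y, pi, m, v, iters=50):
    """Learn an additive-noise model N(mu, s2) from noisy data y alone,
    given a fixed clean-signal GMM (weights pi, means m, variances v).
    Toy linear-domain analogue of the paper's generalized EM."""
    mu, s2 = 0.0, 1.0
    for _ in range(iters):
        # E-step: responsibilities and per-component posteriors over the noise.
        tot_v = v + s2
        r = pi * np.exp(-0.5 * (y[:, None] - m - mu) ** 2 / tot_v) / np.sqrt(tot_v)
        r /= r.sum(axis=1, keepdims=True)
        post_mean = mu + (s2 / tot_v) * (y[:, None] - m - mu)
        post_var = s2 * v / tot_v
        # M-step: re-estimate the noise mean and variance.
        mu = (r * post_mean).sum() / len(y)
        s2 = (r * (post_var + (post_mean - mu) ** 2)).sum() / len(y)
    return mu, s2

# Clean model: two-component GMM; true (unknown) noise: N(2.0, 0.5).
rng = np.random.default_rng(0)
pi, m, v = np.array([0.5, 0.5]), np.array([-3.0, 3.0]), np.array([1.0, 1.0])
x = rng.normal(m[rng.integers(2, size=5000)], 1.0)
y = x + rng.normal(2.0, np.sqrt(0.5), size=5000)
print(learn_noise_em(y, pi, m, v))   # recovers approximately (2.0, 0.5)
```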

5 nips-2001-A Bayesian Model Predicts Human Parse Preference and Reading Times in Sentence Processing

Author: S. Narayanan, Daniel Jurafsky

Abstract: Narayanan and Jurafsky (1998) proposed that human language comprehension can be modeled by treating human comprehenders as Bayesian reasoners, and modeling the comprehension process with Bayesian decision trees. In this paper we extend the Narayanan and Jurafsky model to make further predictions about reading time given the probability of different parses or interpretations, and test the model against reading time data from a psycholinguistic experiment.

6 nips-2001-A Bayesian Network for Real-Time Musical Accompaniment

Author: Christopher Raphael

Abstract: We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.

7 nips-2001-A Dynamic HMM for On-line Segmentation of Sequential Data

Author: Jens Kohlmorgen, Steven Lemm

Abstract: We propose a novel method for the analysis of sequential data that exhibits an inherent mode switching. In particular, the data might be a non-stationary time series from a dynamical system that switches between multiple operating modes. Unlike other approaches, our method processes the data incrementally and without any training of internal parameters. We use an HMM with a dynamically changing number of states and an on-line variant of the Viterbi algorithm that performs an unsupervised segmentation and classification of the data on-the-fly, i.e., the method is able to process incoming data in real-time. The main idea of the approach is to track and segment changes of the probability density of the data in a sliding window on the incoming data stream. The usefulness of the algorithm is demonstrated by an application to a switching dynamical system.
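
A heavily simplified sketch of the underlying idea, tracking the probability density of a sliding window and declaring a new segment when it drifts, is given below. It replaces the paper's dynamic-state HMM and on-line Viterbi decoding with a plain density-distance threshold, so it illustrates the flavor rather than the method.

```python
import numpy as np

def kde(window, grid):
    """Gaussian kernel density estimate of a 1-D window on a fixed grid."""
    h = 1.06 * np.std(window) * len(window) ** -0.2  # Silverman's rule
    h = max(h, 1e-3)
    diffs = (grid[:, None] - window[None, :]) / h
    return np.exp(-0.5 * diffs**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def online_segment(stream, width=50, thresh=0.02):
    """Emit a boundary whenever the density of the current window drifts
    too far (mean squared difference) from the last accepted density."""
    grid = np.linspace(stream.min(), stream.max(), 200)
    ref = kde(stream[:width], grid)
    boundaries = []
    for t in range(width, len(stream) - width):
        cur = kde(stream[t:t + width], grid)
        if np.mean((cur - ref) ** 2) > thresh:
            boundaries.append(t)
            ref = cur  # start tracking the new operating mode
    return boundaries

# Toy stream that switches operating mode halfway through.
rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
print(online_segment(stream)[:3])  # first detected change points near t = 500
```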

8 nips-2001-A General Greedy Approximation Algorithm with Applications

Author: T. Zhang

Abstract: Greedy approximation algorithms have been frequently used to obtain sparse solutions to learning problems. In this paper, we present a general greedy algorithm for solving a class of convex optimization problems. We derive a bound on the rate of approximation for this algorithm, and show that our algorithm includes a number of earlier studies as special cases.
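
Zhang's algorithm generalizes a family of greedy schemes; the sketch below shows one familiar member of that family, a Frank-Wolfe-style greedy step over the convex hull of a finite dictionary, which attains the classic O(1/T) approximation rate for smooth convex losses. The least-squares objective and random dictionary are illustrative.

```python
import numpy as np

def greedy_convex(loss_grad, dictionary, T=100):
    """Greedy approximation: at each step pick the dictionary atom most
    anti-aligned with the gradient and take a convex-combination step."""
    f = np.zeros(dictionary.shape[1])          # current approximation
    for t in range(1, T + 1):
        g = loss_grad(f)
        k = np.argmin(dictionary @ g)          # best atom this round
        eta = 2.0 / (t + 1)                    # standard step size
        f = (1 - eta) * f + eta * dictionary[k]
    return f

# Example: least-squares fit of a target lying in the hull of random atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))              # 50 atoms in R^20
target = D[:5].mean(axis=0)                    # in the hull by construction
f = greedy_convex(lambda f: f - target, D)     # gradient of 0.5*||f - target||^2
print("approximation error:", np.linalg.norm(f - target))
```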

9 nips-2001-A Generalization of Principal Components Analysis to the Exponential Family

Author: Michael Collins, S. Dasgupta, Robert E. Schapire

Abstract: Principal component analysis (PCA) is a commonly applied technique for dimensionality reduction. PCA implicitly minimizes a squared loss function, which may be inappropriate for data that is not real-valued, such as binary-valued data. This paper draws on ideas from the exponential family, generalized linear models, and Bregman distances to give a generalization of PCA to loss functions that we argue are better suited to other data types. We describe algorithms for minimizing the loss functions, and give examples on simulated data.
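
For binary data the generalization amounts to replacing PCA's squared loss with the Bernoulli log-loss on a low-rank matrix of natural parameters. The sketch below optimizes that loss with plain gradient descent; the paper's alternating-minimization algorithms and other exponential-family losses follow the same pattern.

```python
import numpy as np

def logistic_pca(X, k=2, steps=500, lr=0.5, rng=None):
    """Exponential-family PCA for binary data: model P(x_ij = 1) as the
    sigmoid of a rank-k matrix of natural parameters Theta = A @ B and
    minimize the Bernoulli log-loss instead of PCA's squared loss."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    A = 0.01 * rng.standard_normal((n, k))
    B = 0.01 * rng.standard_normal((k, d))
    for _ in range(steps):
        P = 1.0 / (1.0 + np.exp(-(A @ B)))   # predicted Bernoulli means
        G = (P - X) / n                      # gradient w.r.t. the logits
        A, B = A - lr * (G @ B.T), B - lr * (A.T @ G)
    return A, B

# Binary data with planted rank-2 structure.
rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 30))
X = (Z + rng.logistic(size=Z.shape) > 0).astype(float)
A, B = logistic_pca(X)
print("sign agreement with data:", np.mean((A @ B > 0) == (X > 0.5)))
```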

10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth

Author: Silvio P. Sabatini, Fabio Solari, Giulia Andreani, Chiara Bartolozzi, Giacomo M. Bisio

Abstract: A cortical model for motion-in-depth selectivity of complex cells in the visual cortex is proposed. The model is based on a time extension of the phase-based techniques for disparity estimation. We consider the computation of the total temporal derivative of the time-varying disparity through the combination of the responses of disparity energy units. To take physiological plausibility into account, the model is based on combinations of binocular cells characterized by different ocular dominance indices. The resulting cortical units of the model show a sharp selectivity for motion-in-depth that has been compared with that reported in the literature for real cortical cells.

11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

Author: H. Colonius, A. Diederich

Abstract: Multisensory response enhancement (MRE) is the augmentation of the response of a neuron to sensory input of one modality by simultaneous input from another modality. The maximum likelihood (ML) model presented here modifies the Bayesian model for MRE (Anastasio et al.) by incorporating a decision strategy to maximize the number of correct decisions. Thus the ML model can also deal with the important tasks of stimulus discrimination and identification in the presence of incongruent visual and auditory cues. It accounts for the inverse effectiveness observed in neurophysiological recording data, and it predicts a functional relation between uni- and bimodal levels of discriminability that is testable both in neurophysiological and behavioral experiments.

12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

Author: Randall C. O'Reilly, R. Soto

Abstract: We present a neural network model that shows how the prefrontal cortex, interacting with the basal ganglia, can maintain a sequence of phonological information in activation-based working memory (i.e., the phonological loop). The primary function of this phonological loop may be to transiently encode arbitrary bindings of information necessary for tasks - the combinatorial expressive power of language enables very flexible binding of essentially arbitrary pieces of information. Our model takes advantage of the closed-class nature of phonemes, which allows different neural representations of all possible phonemes at each sequential position to be encoded. To make this work, we suggest that the basal ganglia provide a region-specific update signal that allocates phonemes to the appropriate sequential coding slot. To demonstrate that flexible, arbitrary binding of novel sequences can be supported by this mechanism, we show that the model can generalize to novel sequences after moderate amounts of training.

13 nips-2001-A Natural Policy Gradient

Author: Sham M. Kakade

Abstract: We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris.
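
The key operation is preconditioning the vanilla policy gradient by the inverse Fisher information of the policy. A minimal bandit-style sketch with a softmax policy follows; the fixed rewards and single-state setting are illustrative, not the paper's MDP experiments.

```python
import numpy as np

def natural_gradient(scores, vanilla_grad, eps=1e-6):
    """Natural gradient: solve F g_nat = g, with the Fisher information F
    estimated as the empirical covariance of per-sample score vectors
    grad log pi(a|s). A small ridge handles the softmax degeneracy."""
    F = scores.T @ scores / len(scores)
    return np.linalg.solve(F + eps * np.eye(F.shape[0]), vanilla_grad)

# Softmax policy over 3 actions, one parameter per action (a bandit).
rng = np.random.default_rng(0)
theta = np.zeros(3)
rewards = np.array([1.0, 0.0, 0.5])                # fixed reward per action
probs = np.exp(theta) / np.exp(theta).sum()
actions = rng.choice(3, size=1000, p=probs)

# Score vectors: grad log pi(a) = e_a - probs for a softmax policy.
scores = np.eye(3)[actions] - probs
vanilla = (scores * rewards[actions, None]).mean(axis=0)   # REINFORCE gradient
print("vanilla:", vanilla)
print("natural:", natural_gradient(scores, vanilla))
```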

14 nips-2001-A Neural Oscillator Model of Auditory Selective Attention

Author: Stuart N. Wrigley, Guy J. Brown

Abstract: A model of auditory grouping is described in which auditory attention plays a key role. The model is based upon an oscillatory correlation framework, in which neural oscillators representing a single perceptual stream are synchronised, and are desynchronised from oscillators representing other streams. The model suggests a mechanism by which attention can be directed to the high or low tones in a repeating sequence of tones with alternating frequencies. In addition, it simulates the perceptual segregation of a mistuned harmonic from a complex tone.

15 nips-2001-A New Discriminative Kernel From Probabilistic Models

Author: Koji Tsuda, Motoaki Kawanabe, Gunnar Rätsch, Sören Sonnenburg, Klaus-Robert Müller

Abstract: Recently, Jaakkola and Haussler proposed a method for constructing kernel functions from probabilistic models. Their so-called …

16 nips-2001-A Parallel Mixture of SVMs for Very Large Scale Problems

Author: Ronan Collobert, Samy Bengio, Yoshua Bengio

Abstract: Support Vector Machines (SVMs) are currently the state-of-the-art models for many classification problems, but they suffer from the complexity of their training algorithm, which is at least quadratic with respect to the number of examples. Hence, it is hopeless to try to solve real-life problems having more than a few hundred thousand examples with SVMs. The present paper proposes a new mixture of SVMs that can be easily implemented in parallel and where each SVM is trained on a small subset of the whole dataset. Experiments on a large benchmark dataset (Forest) as well as a difficult speech database yielded significant time improvement (time complexity appears empirically to grow locally linearly with the number of examples). In addition, and this is a surprise, a significant improvement in generalization was observed on Forest.
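
The basic construction is easy to sketch: partition the data, train one SVM per partition (each fit is cheap and the fits parallelize trivially), then combine the experts. The scikit-learn code below substitutes a plain average of decision values for the paper's learned neural-network gater.

```python
import numpy as np
from sklearn.svm import SVC

def train_svm_mixture(X, y, n_experts=8, rng=None):
    """Train one SVM per random subset; since SVM training is at least
    quadratic in the number of examples, many small fits are far cheaper
    than one large fit, and they are embarrassingly parallel."""
    rng = rng or np.random.default_rng(0)
    parts = np.array_split(rng.permutation(len(X)), n_experts)
    return [SVC(kernel="rbf", gamma="scale").fit(X[p], y[p]) for p in parts]

def predict_mixture(experts, X):
    # Simple average of decision values stands in for the learned gater.
    scores = np.mean([e.decision_function(X) for e in experts], axis=0)
    return (scores > 0).astype(int)

# Toy problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((4000, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
experts = train_svm_mixture(X[:3000], y[:3000])
print("held-out accuracy:",
      np.mean(predict_mixture(experts, X[3000:]) == y[3000:]))
```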

17 nips-2001-A Quantitative Model of Counterfactual Reasoning

Author: Daniel Yarlett, Michael Ramscar

Abstract: In this paper we explore two quantitative approaches to the modelling of counterfactual reasoning – a linear and a noisy-OR model – based on information contained in conceptual dependency networks. Empirical data are acquired in a study and the fit of the two models to the data is compared. We conclude by considering the appropriateness of non-parametric approaches to counterfactual reasoning, and examining the prospects for other parametric approaches in the future.
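
For reference, the noisy-OR combination rule the paper evaluates has a one-line form: each present cause i independently fails to bring about the effect with probability 1 - w_i. A minimal sketch with made-up causal strengths:

```python
import numpy as np

def noisy_or(weights, causes):
    """Noisy-OR combination: P(effect) = 1 - prod_i (1 - w_i)^{x_i},
    where x_i indicates whether cause i is present."""
    weights, causes = np.asarray(weights), np.asarray(causes)
    return 1.0 - np.prod((1.0 - weights) ** causes)

# Two antecedents of a counterfactual, with illustrative causal strengths.
print(noisy_or([0.7, 0.4], [1, 1]))  # both causes present -> 0.82
print(noisy_or([0.7, 0.4], [1, 0]))  # only the first      -> 0.70
```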

18 nips-2001-A Rational Analysis of Cognitive Control in a Speeded Discrimination Task

Author: Michael C. Mozer, Michael D. Colagrosso, David E. Huber

Abstract: We are interested in the mechanisms by which individuals monitor and adjust their performance of simple cognitive tasks. We model a speeded discrimination task in which individuals are asked to classify a sequence of stimuli (Jones & Braver, 2001). Response conflict arises when one stimulus class is infrequent relative to another, resulting in more errors and slower reaction times for the infrequent class. How do control processes modulate behavior based on the relative class frequencies? We explain performance from a rational perspective that casts the goal of individuals as minimizing a cost that depends both on error rate and reaction time. With two additional assumptions of rationality—that class prior probabilities are accurately estimated and that inference is optimal subject to limitations on rate of information transmission—we obtain a good fit to overall RT and error data, as well as trial-by-trial variations in performance.

Consider the following scenario: While driving, you approach an intersection at which the traffic light has already turned yellow, signaling that it is about to turn red. You also notice that a car is approaching you rapidly from behind, with no indication of slowing. Should you stop or speed through the intersection? The decision is difficult due to the presence of two conflicting signals. Such response conflict can be produced in a psychological laboratory as well. For example, Stroop (1935) asked individuals to name the color of ink on which a word is printed. When the words are color names incongruous with the ink color—e.g., “blue” printed in red—reaction times are slower and error rates are higher.

We are interested in the control mechanisms underlying performance of high-conflict tasks. Conflict requires individuals to monitor and adjust their behavior, possibly responding more slowly if errors are too frequent. In this paper, we model a speeded discrimination paradigm in which individuals are asked to classify a sequence of stimuli (Jones & Braver, 2001). The stimuli are letters of the alphabet, A–Z, presented in rapid succession. In a choice task, individuals are asked to press one response key if the letter is an X or another response key for any letter other than X (as a shorthand, we will refer to non-X stimuli as Y). In a go/no-go task, individuals are asked to press a response key when X is presented and to make no response otherwise. We address both tasks because they elicit slightly different decision-making behavior. In both tasks, Jones and Braver (2001) manipulated the relative frequency of the X and Y stimuli; the ratio of presentation frequency was either 17:83, 50:50, or 83:17. Response conflict arises when the two stimulus classes are unbalanced in frequency, resulting in more errors and slower reaction times. For example, when X's are frequent but Y is presented, individuals are predisposed toward producing the X response, and this predisposition must be overcome by the perceptual evidence from the Y.

Jones and Braver (2001) also performed an fMRI study of this task and found that anterior cingulate cortex (ACC) becomes activated in situations involving response conflict. Specifically, when one stimulus occurs infrequently relative to the other, event-related fMRI response in the ACC is greater for the low frequency stimulus. Jones and Braver also extended a neural network model of Botvinick, Braver, Barch, Carter, and Cohen (2001) to account for human performance in the two discrimination tasks. The heart of the model is a mechanism that monitors conflict—the posited role of the ACC—and adjusts response biases accordingly. In this paper, we develop a parsimonious alternative account of the role of the ACC and of how control processes modulate behavior when response conflict arises.

1 A RATIONAL ANALYSIS

Our account is based on a rational analysis of human cognition, which views cognitive processes as being optimized with respect to certain task-related goals, and being adaptive to the structure of the environment (Anderson, 1990). We make three assumptions of rationality: (1) perceptual inference is optimal but is subject to rate limitations on information transmission, (2) response class prior probabilities are accurately estimated, and (3) the goal of individuals is to minimize a cost that depends both on error rate and reaction time. The heart of our account is an existing probabilistic model that explains a variety of facilitation effects that arise from long-term repetition priming (Colagrosso, in preparation; Mozer, Colagrosso, & Huber, 2000), and more broadly, that addresses changes in the nature of information transmission in neocortex due to experience. We give a brief overview of this model; the details are not essential for the present work.

The model posits that neocortex can be characterized by a collection of information-processing pathways, and any act of cognition involves coordination among pathways. To model a simple discrimination task, we might suppose a perceptual pathway to map the visual input to a semantic representation, and a response pathway to map the semantic representation to a response. The choice and go/no-go tasks described earlier share a perceptual pathway, but require different response pathways. The model is framed in terms of probability theory: pathway inputs and outputs are random variables and microinference in a pathway is carried out by Bayesian belief revision.

To elaborate, consider a pathway whose input at time t is a discrete random variable, denoted X(t), which can assume values 1, ..., N corresponding to alternative input states. Similarly, the output of the pathway at time t is a discrete random variable, denoted Y(t), which can assume values 1, ..., M. For example, the input to the perceptual pathway in the discrimination task is one of 26 visual patterns corresponding to the letters of the alphabet, and the output is one of 26 letter identities. (This model is highly abstract: the visual patterns are enumerated, but the actual pixel patterns are not explicitly represented in the model. Nonetheless, the similarity structure among inputs can be captured, but we skip a discussion of this issue because it is irrelevant for the current work.) To present a particular input alternative, x, to the model for T time steps, we clamp X(t) = x for t = 1, ..., T. The model computes a probability distribution over Y(t) given the input history, i.e., P(Y(t) | X(1), ..., X(t)).
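
The flavor of this account (Bayesian belief revision under a rate limit, priors set to the class frequencies, and a response threshold that trades speed against errors) can be sketched in a few lines. The Gaussian evidence model, threshold rule, and parameter values below are illustrative assumptions, not the paper's pathway model.

```python
import numpy as np

def simulate_trial(true_class, prior_x, quality=0.3, thresh=0.9, rng=None):
    """Rate-limited optimal inference: each time step delivers one noisy
    sample about the stimulus; the posterior over {X, Y} is updated by
    Bayes' rule and a response is made once it crosses a threshold. The
    threshold encodes the speed/error trade-off; the prior encodes the
    estimated class frequencies (17:83, 50:50, or 83:17)."""
    rng = rng or np.random.default_rng()
    log_odds = np.log(prior_x / (1 - prior_x))
    mu = quality if true_class == "X" else -quality
    for t in range(1, 1000):
        sample = rng.normal(mu, 1.0)
        log_odds += 2 * quality * sample       # log-likelihood ratio of the sample
        p_x = 1 / (1 + np.exp(-log_odds))
        if max(p_x, 1 - p_x) > thresh:
            break
    return ("X" if p_x > 0.5 else "Y"), t

# Infrequent class: slower and more error-prone, as in Jones & Braver (2001).
rng = np.random.default_rng(0)
for cls in ["X", "Y"]:
    results = [simulate_trial(cls, prior_x=0.17, rng=rng) for _ in range(2000)]
    rt = np.mean([t for _, t in results])
    err = np.mean([resp != cls for resp, _ in results])
    print(f"class {cls} (P(X)=0.17): mean RT {rt:.1f}, error rate {err:.3f}")
```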

19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

Author: Lance R. Williams, John W. Zweck

Abstract: We describe a neural network which enhances and completes salient closed contours. Our work is different from all previous work in three important ways. First, like the input provided to V1 by LGN, the input to our computation is isotropic. That is, the input is composed of spots not edges. Second, our network computes a well defined function of the input based on a distribution of closed contours characterized by a random process. Third, even though our computation is implemented in a discrete network, its output is invariant to continuous rotations and translations of the input pattern.

20 nips-2001-A Sequence Kernel and its Application to Speaker Recognition

Author: William M. Campbell

Abstract: A novel approach for comparing sequences of observations using an explicit-expansion kernel is demonstrated. The kernel is derived using the assumption of the independence of the sequence of observations and a mean-squared error training criterion. The use of an explicit expansion kernel reduces classifier model size and computation dramatically, resulting in model sizes and computation one-hundred times smaller in our application. The explicit expansion also preserves the computational advantages of an earlier architecture based on mean-squared error training. Training using standard support vector machine methodology gives accuracy that significantly exceeds the performance of state-of-the-art mean-squared error training for a speaker recognition task.
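
Under the frame-independence assumption, the sequence kernel collapses to an inner product of per-sequence averages of an explicit feature expansion, which is why model size and computation shrink so dramatically. A sketch with a generic second-order polynomial expansion standing in for the paper's expansion:

```python
import numpy as np

def expand(frame):
    """Explicit second-order polynomial expansion of one observation frame
    (a stand-in for the paper's expansion): all monomials up to degree 2."""
    f = np.concatenate([[1.0], frame])
    return np.outer(f, f)[np.triu_indices(len(f))]

def sequence_kernel(seq_a, seq_b):
    """With independent frames, the kernel between two sequences reduces to
    the inner product of their averaged expansions, so each sequence is
    summarized by one fixed-size vector regardless of its length."""
    phi_a = np.mean([expand(f) for f in seq_a], axis=0)
    phi_b = np.mean([expand(f) for f in seq_b], axis=0)
    return phi_a @ phi_b

rng = np.random.default_rng(0)
speaker1 = rng.normal(0.0, 1.0, size=(200, 12))   # e.g., cepstral frames
speaker2 = rng.normal(0.5, 1.0, size=(180, 12))
print(sequence_kernel(speaker1, speaker1), sequence_kernel(speaker1, speaker2))
```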

21 nips-2001-A Variational Approach to Learning Curves

22 nips-2001-A kernel method for multi-labelled classification

23 nips-2001-A theory of neural integration in the head-direction system

24 nips-2001-Active Information Retrieval

25 nips-2001-Active Learning in the Drug Discovery Process

26 nips-2001-Active Portfolio-Management based on Error Correction Neural Networks

27 nips-2001-Activity Driven Adaptive Stochastic Resonance

28 nips-2001-Adaptive Nearest Neighbor Classification Using Support Vector Machines

29 nips-2001-Adaptive Sparseness Using Jeffreys Prior

30 nips-2001-Agglomerative Multivariate Information Bottleneck

31 nips-2001-Algorithmic Luckiness

32 nips-2001-An Efficient, Exact Algorithm for Solving Tree-Structured Graphical Games

33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

35 nips-2001-Analysis of Sparse Bayesian Learning

36 nips-2001-Approximate Dynamic Programming via Linear Programming

37 nips-2001-Associative memory in realistic neuronal networks

38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

39 nips-2001-Audio-Visual Sound Separation Via Hidden Markov Models

40 nips-2001-Batch Value Function Approximation via Support Vectors

41 nips-2001-Bayesian Predictive Profiles With Applications to Retail Transaction Data

42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

43 nips-2001-Bayesian time series classification

44 nips-2001-Blind Source Separation via Multinode Sparse Representation

45 nips-2001-Boosting and Maximum Likelihood for Exponential Models

46 nips-2001-Categorization by Learning and Combining Object Parts

47 nips-2001-Causal Categorization with Bayes Nets

48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance

49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

50 nips-2001-Classifying Single Trial EEG: Towards Brain Computer Interfacing

51 nips-2001-Cobot: A Social Reinforcement Learning Agent

52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

53 nips-2001-Constructing Distributed Representations Using Additive Clustering

54 nips-2001-Contextual Modulation of Target Saliency

55 nips-2001-Convergence of Optimistic and Incremental Q-Learning

56 nips-2001-Convolution Kernels for Natural Language

57 nips-2001-Correlation Codes in Neuronal Populations

58 nips-2001-Covariance Kernels from Bayesian Generative Models

59 nips-2001-Direct value-approximation for factored MDPs

60 nips-2001-Discriminative Direction for Kernel Classifiers

61 nips-2001-Distribution of Mutual Information

62 nips-2001-Duality, Geometry, and Support Vector Regression

63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique

65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

66 nips-2001-Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms

67 nips-2001-Efficient Resources Allocation for Markov Decision Processes

68 nips-2001-Entropy and Inference, Revisited

69 nips-2001-Escaping the Convex Hull with Extrapolated Vector Machines

70 nips-2001-Estimating Car Insurance Premia: a Case Study in High-Dimensional Data Inference

71 nips-2001-Estimating the Reliability of ICA Projections

72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

74 nips-2001-Face Recognition Using Kernel Methods

75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

76 nips-2001-Fast Parameter Estimation Using Green's Functions

77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

78 nips-2001-Fragment Completion in Humans and Machines

79 nips-2001-Gaussian Process Regression with Mismatched Models

80 nips-2001-Generalizable Relational Binding from Coarse-coded Distributed Representations

81 nips-2001-Generalization Performance of Some Learning Problems in Hilbert Functional Spaces

82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

83 nips-2001-Geometrical Singularities in the Neuromanifold of Multilayer Perceptrons

84 nips-2001-Global Coordination of Local Linear Models

85 nips-2001-Grammar Transfer in a Second Order Recurrent Neural Network

86 nips-2001-Grammatical Bigrams

87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

88 nips-2001-Grouping and dimensionality reduction by locally linear embedding

89 nips-2001-Grouping with Bias

90 nips-2001-Hyperbolic Self-Organizing Maps for Semantic Navigation

91 nips-2001-Improvisation and Learning

92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

93 nips-2001-Incremental A*

94 nips-2001-Incremental Learning and Selective Sampling via Parametric Optimization Framework for SVM

95 nips-2001-Infinite Mixtures of Gaussian Process Experts

96 nips-2001-Information-Geometric Decomposition in Spike Analysis

97 nips-2001-Information-Geometrical Significance of Sparsity in Gallager Codes

98 nips-2001-Information Geometrical Framework for Analyzing Belief Propagation Decoder

99 nips-2001-Intransitive Likelihood-Ratio Classifiers

100 nips-2001-Iterative Double Clustering for Unsupervised and Semi-Supervised Learning

101 nips-2001-K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms

102 nips-2001-KLD-Sampling: Adaptive Particle Filters

103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Source Separation

104 nips-2001-Kernel Logistic Regression and the Import Vector Machine

105 nips-2001-Kernel Machines and Boolean Functions

106 nips-2001-Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering

107 nips-2001-Latent Dirichlet Allocation

108 nips-2001-Learning Body Pose via Specialized Maps

109 nips-2001-Learning Discriminative Feature Transforms to Low Dimensions in Low Dimensions

110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

113 nips-2001-Learning a Gaussian Process Prior for Automatically Generating Music Playlists

114 nips-2001-Learning from Infinite Data in Finite Time

115 nips-2001-Linear-time inference in Hierarchical HMMs

116 nips-2001-Linking Motor Learning to Function Approximation: Learning in an Unlearnable Force Field

117 nips-2001-MIME: Mutual Information Minimization and Entropy Maximization for Bayesian Belief Propagation

118 nips-2001-Matching Free Trees with Replicator Equations

119 nips-2001-Means, Correlations and Bounds

120 nips-2001-Minimax Probability Machine

121 nips-2001-Model-Free Least-Squares Policy Iteration

122 nips-2001-Model Based Population Tracking and Automatic Detection of Distribution Changes

123 nips-2001-Modeling Temporal Structure in Classical Conditioning

124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision

125 nips-2001-Modularity in the motor system: decomposition of muscle patterns as combinations of time-varying synergies

126 nips-2001-Motivated Reinforcement Learning

127 nips-2001-Multi Dimensional ICA to Separate Correlated Sources

128 nips-2001-Multiagent Planning with Factored MDPs

129 nips-2001-Multiplicative Updates for Classification by Mixture Models

130 nips-2001-Natural Language Grammar Induction Using a Constituent-Context Model

131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

132 nips-2001-Novel iteration schemes for the Cluster Variation Method

133 nips-2001-On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes

134 nips-2001-On Kernel-Target Alignment

135 nips-2001-On Spectral Clustering: Analysis and an algorithm

136 nips-2001-On the Concentration of Spectral Properties

137 nips-2001-On the Convergence of Leveraging

138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

139 nips-2001-Online Learning with Kernels

140 nips-2001-Optimising Synchronisation Times for Mobile Devices

141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

143 nips-2001-PAC Generalization Bounds for Co-training

144 nips-2001-Partially labeled classification with Markov random walks

145 nips-2001-Perceptual Metamers in Stereoscopic Vision

146 nips-2001-Playing is believing: The role of beliefs in multi-agent learning

147 nips-2001-Pranking with Ranking

148 nips-2001-Predictive Representations of State

149 nips-2001-Probabilistic Abstraction Hierarchies

150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model

152 nips-2001-Prodding the ROC Curve: Constrained Optimization of Classifier Performance

153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

154 nips-2001-Products of Gaussians

155 nips-2001-Quantizing Density Estimators

156 nips-2001-Rao-Blackwellised Particle Filtering via Data Augmentation

157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

158 nips-2001-Receptive field structure of flow detectors for heading perception

159 nips-2001-Reducing multiclass to binary by coupling probability estimates

160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

161 nips-2001-Reinforcement Learning with Long Short-Term Memory

162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

163 nips-2001-Risk Sensitive Particle Filters

164 nips-2001-Sampling Techniques for Kernel Methods

165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

167 nips-2001-Semi-supervised MarginBoost

168 nips-2001-Sequential Noise Compensation by Sequential Monte Carlo Method

169 nips-2001-Small-World Phenomena and the Dynamics of Information

170 nips-2001-Spectral Kernel Methods for Clustering

171 nips-2001-Spectral Relaxation for K-means Clustering

172 nips-2001-Speech Recognition using SVMs

173 nips-2001-Speech Recognition with Missing Data using Recurrent Neural Nets

174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

175 nips-2001-Stabilizing Value Function Approximation with the BFBP Algorithm

176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

177 nips-2001-Switch Packet Arbitration via Queue-Learning

178 nips-2001-TAP Gibbs Free Energy, Belief Propagation and Sparsity

179 nips-2001-Tempo tracking and rhythm quantization by sequential Monte Carlo

180 nips-2001-The Concave-Convex Procedure (CCCP)

181 nips-2001-The Emergence of Multiple Movement Units in the Presence of Noise and Feedback Delay

182 nips-2001-The Fidelity of Local Ordinal Encoding

183 nips-2001-The Infinite Hidden Markov Model

184 nips-2001-The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank

185 nips-2001-The Method of Quantum Clustering

186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning

187 nips-2001-The Steering Approach for Multi-Criteria Reinforcement Learning

188 nips-2001-The Unified Propagation and Scaling Algorithm

189 nips-2001-The g Factor: Relating Distributions on Features to Distributions on Images

190 nips-2001-Thin Junction Trees

191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

193 nips-2001-Unsupervised Learning of Human Motion Models

194 nips-2001-Using Vocabulary Knowledge in Bayesian Multinomial Estimation

195 nips-2001-Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning

196 nips-2001-Very loopy belief propagation for unwrapping phase images

197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules