NIPS 2009 knowledge graph by maker-knowledge-mining

Similar papers computed by the TF-IDF model

Similar papers computed by the LSI model

Similar papers computed by the LDA model

Papers list:

1 nips-2009-$L_1$-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry

Author: Arnak Dalalyan, Renaud Keriven

Abstract: We propose a new approach to the problem of robust estimation in multiview geometry. Inspired by recent advances in the sparse recovery problem of statistics, we define our estimator as a Bayesian maximum a posteriori with a multivariate Laplace prior on the vector describing the outliers. This leads to an estimator in which the fidelity to the data is measured by the L∞-norm while the regularization is done by the L1-norm. The proposed procedure is fairly fast since the outlier removal is done by solving one linear program (LP). An important difference compared to existing algorithms is that our estimator requires specifying neither the number nor the proportion of the outliers. We present strong theoretical results assessing the accuracy of our procedure, as well as a numerical example illustrating its efficiency on real data.
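As a sketch of the optimization problem behind this construction (the notation here is illustrative; the paper's constraints differ in detail), the MAP estimate with a Laplace prior on the outlier vector ω reduces to a single linear program of the form

$$
\min_{\theta,\ \omega \ge 0} \ \|\omega\|_1
\quad \text{subject to} \quad
|r_i(\theta)| \le \sigma + \omega_i \ \ \text{for all measurements } i,
$$

where r_i(θ) is the residual of measurement i: the data fidelity is an L∞-type constraint on the residuals, the L1 penalty drives most ω_i to zero, and measurements with ω_i > 0 are flagged as outliers.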

2 nips-2009-3D Object Recognition with Deep Belief Nets

Author: Vinod Nair, Geoffrey E. Hinton

Abstract: We introduce a new type of top-level model for Deep Belief Nets and evaluate it on a 3D object recognition task. The top-level model is a third-order Boltzmann machine, trained using a hybrid algorithm that combines both generative and discriminative gradients. Performance is evaluated on the NORB database (normalized-uniform version), which contains stereo-pair images of objects under different lighting conditions and viewpoints. Our model achieves 6.5% error on the test set, which is close to the best published result for NORB (5.9%) using a convolutional neural net that has built-in knowledge of translation invariance. It substantially outperforms shallow models such as SVMs (11.6%). DBNs are especially suited for semi-supervised learning, and to demonstrate this we consider a modified version of the NORB recognition task in which additional unlabeled images are created by applying small translations to the images in the database. With the extra unlabeled data (and the same amount of labeled data as before), our model achieves 5.2% error.
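One common way to write such a hybrid training signal (a schematic of the general idea, with a mixing weight λ of my choosing, not necessarily the paper's exact update) is a convex combination of the two gradients for the top-level weights W:

$$
\Delta W \ \propto\ \lambda\, \nabla_W \log p(\mathbf{v}, y) \;+\; (1 - \lambda)\, \nabla_W \log p(y \mid \mathbf{v}),
\qquad \lambda \in [0, 1],
$$

where the first term is the generative gradient for the joint distribution over features v and label y, and the second is the discriminative gradient for the label posterior.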

3 nips-2009-AUC optimization and the two-sample problem

Author: Nicolas Vayatis, Marine Depecker, Stéphan J. Clémençon

Abstract: The purpose of the paper is to explore the connection between multivariate homogeneity tests and AUC optimization. The latter problem has recently received much attention in the statistical learning literature. From the elementary observation that, in the two-sample problem setup, the null assumption corresponds to the situation where the area under the optimal ROC curve is equal to 1/2, we propose a two-stage testing method based on data splitting. A nearly optimal scoring function in the AUC sense is first learnt from one of the two half-samples. Data from the remaining half-sample are then projected onto the real line and eventually ranked according to the scoring function computed at the first stage. The last step amounts to performing a standard Mann-Whitney Wilcoxon test in the one-dimensional framework. We show that the learning step of the procedure does not affect the consistency of the test or its properties in terms of power, provided the ranking produced is accurate enough in the AUC sense. The results of a numerical experiment are eventually displayed in order to show the efficiency of the method.
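A minimal sketch of the two-stage procedure under stated assumptions: a logistic-regression scorer stands in for the "nearly optimal scoring function in the AUC sense", and the function name is illustrative.

import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression

def two_stage_homogeneity_test(X, Y, seed=0):
    """Two-sample test via AUC-oriented scoring plus Mann-Whitney Wilcoxon.

    X, Y: (n, d) and (m, d) samples whose distributions we compare.
    """
    rng = np.random.default_rng(seed)
    # Stage 1: learn a scoring function on one half of each sample.
    ix, iy = rng.permutation(len(X)), rng.permutation(len(Y))
    Xa, Xb = X[ix[: len(X) // 2]], X[ix[len(X) // 2 :]]
    Ya, Yb = Y[iy[: len(Y) // 2]], Y[iy[len(Y) // 2 :]]
    scorer = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xa, Ya]), np.r_[np.zeros(len(Xa)), np.ones(len(Ya))]
    )
    # Stage 2: project the held-out halves onto the real line and rank them.
    sx = scorer.decision_function(Xb)
    sy = scorer.decision_function(Yb)
    return mannwhitneyu(sx, sy, alternative="two-sided")

Splitting the data is what keeps the second-stage rank test valid despite the scorer being learned rather than fixed in advance.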

4 nips-2009-A Bayesian Analysis of Dynamics in Free Recall

Author: Richard Socher, Samuel Gershman, Per Sederberg, Kenneth Norman, Adler J. Perotte, David M. Blei

Abstract: We develop a probabilistic model of human memory performance in free recall experiments. In these experiments, a subject first studies a list of words and then tries to recall them. To model these data, we draw on both previous psychological research and statistical topic models of text documents. We assume that memories are formed by assimilating the semantic meaning of studied words (represented as a distribution over topics) into a slowly changing latent context (represented in the same space). During recall, this context is reinstated and used as a cue for retrieving studied words. By conceptualizing memory retrieval as a dynamic latent variable model, we are able to use Bayesian inference to represent uncertainty and reason about the cognitive processes underlying memory. We present a particle filter algorithm for performing approximate posterior inference, and evaluate our model on the prediction of recalled words in experimental data. By specifying the model hierarchically, we are also able to capture inter-subject variability.
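The inference step is a standard sequential Monte Carlo recursion. Below is a generic bootstrap (sampling-importance-resampling) particle filter with placeholder `transition` and `likelihood` callables; it illustrates the inference machinery only, not the paper's specific latent-context dynamics.

import numpy as np

def bootstrap_particle_filter(observations, n_particles, init, transition,
                              likelihood, seed=0):
    """Generic SIR particle filter; `init`, `transition`, and `likelihood`
    are placeholder callables, not the paper's memory model."""
    rng = np.random.default_rng(seed)
    particles = init(rng, n_particles)           # sample initial latent states
    for obs in observations:
        particles = transition(rng, particles)   # propagate the latent state
        w = likelihood(obs, particles)           # weight by the observation
        w = w / w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]               # resample by weight
    return particles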

5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

Author: Lan Du, Lu Ren, Lawrence Carin, David B. Dunson

Abstract: A non-parametric Bayesian model is proposed for processing multiple images. The analysis employs image features and, when present, the words associated with accompanying annotations. The model clusters the images into classes, and each image is segmented into a set of objects, also allowing the opportunity to assign a word to each object (localized labeling). Each object is assumed to be represented as a heterogeneous mix of components, with this realized via mixture models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models are inferred nonparametrically. To constitute spatially contiguous objects, a new logistic stick-breaking process is developed. Inference is performed efficiently via variational Bayesian analysis, with example results presented on two image databases.
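A stick-breaking construction of the general kind named above (notation mine): mixing weights are produced by logistic-transformed stick variables, and making the sticks depend on spatial location x is what encourages contiguous objects. Schematically,

$$
\pi_k(x) \;=\; \sigma\big(g_k(x)\big) \prod_{j < k} \Big(1 - \sigma\big(g_j(x)\big)\Big),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}},
$$

so that nearby sites share large weights π_k(x) for the same component k; the exact form of the functions g_k is specified in the paper.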

6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification

Author: Sennay Ghebreab, Steven Scholte, Victor Lamme, Arnold Smeulders

Abstract: Contrast statistics of the majority of natural images conform to a Weibull distribution. This property of natural images may facilitate efficient and very rapid extraction of a scene's visual gist. Here we investigated whether a neural response model based on the Weibull contrast distribution captures visual information that humans use to rapidly identify natural scenes. In a learning phase, we measured EEG activity of 32 subjects viewing brief flashes of 700 natural scenes. From these neural measurements and the contrast statistics of the natural image stimuli, we derived an across-subject Weibull response model. We used this model to predict the EEG responses to 100 new natural scenes and estimated which scene the subject viewed by finding the best match between the model predictions and the observed EEG responses. In almost 90 percent of the cases our model accurately predicted the observed scene. Moreover, in most failed cases, the scene mistaken for the observed scene was visually similar to the observed scene itself. Similar results were obtained in a separate experiment in which 16 other subjects were presented with artificial occlusion models of natural images. Together, these results suggest that Weibull contrast statistics of natural images contain enough visual gist information to warrant rapid image identification.
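A minimal sketch of the first modeling step, assuming local contrast is summarized by gradient magnitudes (the paper's exact contrast measure may differ):

import numpy as np
from scipy.stats import weibull_min

def fit_contrast_weibull(image):
    """Fit a Weibull distribution to an image's local contrast values,
    here approximated by gradient magnitudes (an assumption of this sketch)."""
    gy, gx = np.gradient(image.astype(float))
    contrast = np.hypot(gx, gy).ravel()
    contrast = contrast[contrast > 0]          # Weibull support is (0, inf)
    shape, _, scale = weibull_min.fit(contrast, floc=0.0)
    return shape, scale                        # the two descriptive parameters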

7 nips-2009-A Data-Driven Approach to Modeling Choice

Author: Vivek Farias, Srikanth Jagabathula, Devavrat Shah

Abstract: We visit the following fundamental problem: For a ‘generic’ model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal preference information), how may one predict revenues from offering a particular assortment of choices? This problem is central to areas within operations research, marketing and econometrics. We present a framework to answer such questions and design a number of tractable algorithms (from a data and computational standpoint) for the same.

8 nips-2009-A Fast, Consistent Kernel Two-Sample Test

Author: Arthur Gretton, Kenji Fukumizu, Zaïd Harchaoui, Bharath K. Sriperumbudur

Abstract: A kernel embedding of probability distributions into reproducing kernel Hilbert spaces (RKHS) has recently been proposed, which allows the comparison of two probability measures P and Q based on the distance between their respective embeddings: for a sufficiently rich RKHS, this distance is zero if and only if P and Q coincide. In using this distance as a statistic for a test of whether two samples are from different distributions, a major difficulty arises in computing the significance threshold, since the empirical statistic has as its null distribution (where P = Q) an infinite weighted sum of χ2 random variables. Prior finite sample approximations to the null distribution include using bootstrap resampling, which yields a consistent estimate but is computationally costly; and fitting a parametric model with the low order moments of the test statistic, which can work well in practice but has no consistency or accuracy guarantees. The main result of the present work is a novel estimate of the null distribution, computed from the eigenspectrum of the Gram matrix on the aggregate sample from P and Q, and having lower computational cost than the bootstrap. A proof of consistency of this estimate is provided. The performance of the null distribution estimate is compared with the bootstrap and parametric approaches on an artificial example, high dimensional multivariate data, and text.
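A sketch of the recipe under stated assumptions (equal sample sizes, an RBF kernel, and simplified normalization; the paper gives the precise constants and the consistency proof):

import numpy as np

def rbf_gram(Z, sigma):
    """RBF kernel Gram matrix."""
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd_spectral_test(X, Y, sigma=1.0, n_null=2000, seed=0):
    """Biased MMD^2 statistic with a null sample drawn from the
    eigenspectrum of the centred Gram matrix on the pooled data."""
    rng = np.random.default_rng(seed)
    m = len(X)
    Z = np.vstack([X, Y])
    n = len(Z)
    K = rbf_gram(Z, sigma)
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    mmd2 = Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()
    # Eigenvalues of the centred pooled Gram matrix estimate the
    # weights of the chi-squared mixture under the null.
    H = np.eye(n) - np.ones((n, n)) / n
    lam = np.clip(np.linalg.eigvalsh(H @ K @ H) / n, 0.0, None)
    # Under H0, m * MMD^2 is approximately sum_l lam_l * z_l^2, z_l ~ N(0, 2).
    null = (2.0 * lam[None, :] * rng.standard_normal((n_null, n)) ** 2).sum(axis=1)
    pval = float(np.mean(null >= m * mmd2))
    return mmd2, pval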

9 nips-2009-A Game-Theoretic Approach to Hypergraph Clustering

Author: Samuel R. Bulò, Marcello Pelillo

Abstract: Hypergraph clustering refers to the process of extracting maximally coherent groups from a set of objects using high-order (rather than pairwise) similarities. Traditional approaches to this problem are based on the idea of partitioning the input data into a user-defined number of classes, thereby obtaining the clusters as a by-product of the partitioning process. In this paper, we provide a radically different perspective to the problem. In contrast to the classical approach, we attempt to provide a meaningful formalization of the very notion of a cluster and we show that game theory offers an attractive and unexplored perspective that serves well our purpose. Specifically, we show that the hypergraph clustering problem can be naturally cast into a non-cooperative multi-player “clustering game”, whereby the notion of a cluster is equivalent to a classical game-theoretic equilibrium concept. From the computational viewpoint, we show that the problem of finding the equilibria of our clustering game is equivalent to locally optimizing a polynomial function over the standard simplex, and we provide a discrete-time dynamics to perform this optimization. Experiments are presented which show the superiority of our approach over state-of-the-art hypergraph clustering techniques.
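For the pairwise special case with a non-negative similarity matrix A, the discrete-time dynamics for locally optimizing a polynomial over the standard simplex reduce to the classic replicator update sketched below; the hypergraph game lifts A to a similarity tensor.

import numpy as np

def replicator_dynamics(A, n_iter=500, tol=1e-9, seed=0):
    """Locally maximise x' A x over the standard simplex with the
    discrete-time replicator update (pairwise special case)."""
    rng = np.random.default_rng(seed)
    x = rng.random(len(A))
    x /= x.sum()                       # start in the simplex interior
    for _ in range(n_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)      # multiplicative, simplex-preserving
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x                           # the support indicates one cluster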

10 nips-2009-A Gaussian Tree Approximation for Integer Least-Squares

Author: Jacob Goldberger, Amir Leshem

Abstract: This paper proposes a new algorithm for the linear least squares problem where the unknown variables are constrained to be in a finite set. The factor graph that corresponds to this problem is very loopy; in fact, it is a complete graph. Hence, applying the Belief Propagation (BP) algorithm yields very poor results. The algorithm described here is based on an optimal tree approximation of the Gaussian density of the unconstrained linear system. It is shown that even though the approximation is not directly applied to the exact discrete distribution, applying the BP algorithm to the modified factor graph outperforms current methods in terms of both performance and complexity. The improved performance of the proposed algorithm is demonstrated on the problem of MIMO detection.
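An optimal tree approximation of a Gaussian can be illustrated with the Chow-Liu construction, which maximizes the total pairwise mutual information, MI(i, j) = -0.5 log(1 - ρ_ij²); this is a sketch of the general recipe, and the paper's construction may differ in detail.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def gaussian_chow_liu_edges(cov):
    """Edges of the best tree approximation (Chow-Liu) of a Gaussian
    with covariance `cov`."""
    d = np.sqrt(np.diag(cov))
    rho = cov / np.outer(d, d)
    mi = -0.5 * np.log(np.clip(1.0 - rho ** 2, 1e-12, None))
    np.fill_diagonal(mi, 0.0)
    # Max spanning tree on MI == min spanning tree on (shift - MI);
    # the shift keeps all edge weights strictly positive.
    w = (mi.max() + 1.0) - mi
    np.fill_diagonal(w, 0.0)           # zeros are treated as absent edges
    tree = minimum_spanning_tree(w).toarray()
    return [tuple(e) for e in np.argwhere(tree > 0)]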

11 nips-2009-A General Projection Property for Distribution Families

Author: Yao-liang Yu, Yuxi Li, Dale Schuurmans, Csaba Szepesvári

Abstract: Surjectivity of linear projections between distribution families with fixed mean and covariance (regardless of dimension) is re-derived by a new proof. We further extend this property to distribution families that respect additional constraints, such as symmetry, unimodality and log-concavity. By combining our results with classic univariate inequalities, we provide new worst-case analyses for natural risk criteria arising in classification, optimization, portfolio selection and Markov decision processes.

12 nips-2009-A Generalized Natural Actor-Critic Algorithm

Author: Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, Kenji Doya

Abstract: Policy gradient Reinforcement Learning (RL) algorithms have received substantial attention, seeking stochastic policies that maximize the average (or discounted cumulative) reward. In addition, extensions based on the concept of the Natural Gradient (NG) show promising learning efficiency because they take a metric of the task into account. There are two candidate metrics, Kakade’s Fisher Information Matrix (FIM) for the policy (action) distribution and Morimura’s FIM for the state-action joint distribution, but all RL algorithms with NG have followed Kakade’s approach. In this paper, we describe a generalized Natural Gradient (gNG) that linearly interpolates the two FIMs and propose an efficient implementation for gNG learning based on the theory of estimating functions, the generalized Natural Actor-Critic (gNAC) algorithm. The gNAC algorithm involves a near-optimal auxiliary function to reduce the variance of the gNG estimates. Interestingly, the gNAC can be regarded as a natural extension of the current state-of-the-art NAC algorithm [1], as long as the interpolating parameter is appropriately selected. Numerical experiments showed that the proposed gNAC algorithm can estimate the gNG efficiently and outperformed the NAC algorithm.
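The interpolation can be written (notation mine) as a convex combination of the two Fisher information matrices,

$$
G(\kappa) \;=\; \kappa\, F_{\pi}(\theta) \;+\; (1 - \kappa)\, F_{\pi, d}(\theta),
\qquad \kappa \in [0, 1],
$$

where F_π is Kakade's FIM for the policy distribution and F_{π,d} is Morimura's FIM for the state-action joint distribution; the gNG direction preconditions the policy gradient by G(κ)^{-1}, and κ = 1 recovers Kakade's standard natural gradient.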

13 nips-2009-A Neural Implementation of the Kalman Filter

Author: Robert Wilson, Leif Finkel

Abstract: Recent experimental evidence suggests that the brain is capable of approximating Bayesian inference in the face of noisy input stimuli. Despite this progress, the neural underpinnings of this computation are still poorly understood. In this paper we focus on the Bayesian filtering of stochastic time series and introduce a novel neural network, derived from a line attractor architecture, whose dynamics map directly onto those of the Kalman filter in the limit of small prediction error. When the prediction error is large we show that the network responds robustly to changepoints in a way that is qualitatively compatible with the optimal Bayesian model. The model suggests ways in which probability distributions are encoded in the brain and makes a number of testable experimental predictions.
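For reference, the scalar Kalman filter whose update the network dynamics approximate in the small-error limit (these are the standard filter equations, not the network implementation):

def kalman_filter_1d(observations, a=1.0, q=0.1, r=1.0, mu0=0.0, p0=1.0):
    """Standard scalar Kalman filter for x_t = a*x_{t-1} + noise(q),
    y_t = x_t + noise(r). Returns the sequence of posterior means."""
    mu, p, means = mu0, p0, []
    for y in observations:
        # Predict.
        mu_pred = a * mu
        p_pred = a * a * p + q
        # Update with the Kalman gain.
        k = p_pred / (p_pred + r)
        mu = mu_pred + k * (y - mu_pred)
        p = (1.0 - k) * p_pred
        means.append(mu)
    return means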

14 nips-2009-A Parameter-free Hedging Algorithm

Author: Kamalika Chaudhuri, Yoav Freund, Daniel J. Hsu

Abstract: We study the problem of decision-theoretic online learning (DTOL). Motivated by practical applications, we focus on DTOL when the number of actions is very large. Previous algorithms for learning in this framework have a tunable learning rate parameter, and a barrier to using online learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large. In this paper, we offer a clean solution by proposing a novel and completely parameter-free algorithm for DTOL. We introduce a new notion of regret, which is more natural for applications with a large number of actions. We show that our algorithm achieves good performance with respect to this new notion of regret; in addition, it also achieves performance close to that of the best bounds achieved by previous algorithms with optimally-tuned parameters, according to previous notions of regret.
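For contrast, here is the classic exponential-weights Hedge baseline with the tunable learning rate η that this paper's algorithm eliminates (a reference sketch of the standard method, not the proposed parameter-free algorithm):

import numpy as np

def hedge(loss_matrix, eta):
    """Exponential-weights Hedge over T rounds and N actions.
    `eta` is exactly the parameter the paper's algorithm removes."""
    T, N = loss_matrix.shape
    log_w = np.zeros(N)
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                              # play the normalised weights
        total_loss += p @ loss_matrix[t]
        log_w -= eta * loss_matrix[t]             # exponential-weights update
    return total_loss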

15 nips-2009-A Rate Distortion Approach for Semi-Supervised Conditional Random Fields

Author: Yang Wang, Gholamreza Haffari, Shaojun Wang, Greg Mori

Abstract: We propose a novel information theoretic approach for semi-supervised learning of conditional random fields that defines a training objective to combine the conditional likelihood on labeled data and the mutual information on unlabeled data. In contrast to previous minimum conditional entropy semi-supervised discriminative learning methods, our approach is grounded on a more solid foundation, the rate distortion theory in information theory. We analyze the tractability of the framework for structured prediction and present a convergent variational training algorithm to defy the combinatorial explosion of terms in the sum over label configurations. Our experimental results show the rate distortion approach outperforms standard l2 regularization, minimum conditional entropy regularization as well as maximum conditional entropy regularization on both multi-class classification and sequence labeling problems.
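In outline (symbols mine), the training objective combines conditional likelihood on the labeled set L with mutual information between inputs and labels on the unlabeled set U:

$$
\max_{\theta}\ \sum_{(x, y) \in \mathcal{L}} \log p_{\theta}(y \mid x)
\;+\; \lambda\, I_{\theta}(X; Y),
$$

where the mutual-information term is computed under the empirical distribution of unlabeled inputs in U and λ trades off the two terms; the rate distortion view is what motivates this term in place of a conditional-entropy penalty.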

16 nips-2009-A Smoothed Approximate Linear Program

Author: Vijay Desai, Vivek Farias, Ciamac C. Moallemi

Abstract: We present a novel linear program for the approximation of the dynamic programming cost-to-go function in high-dimensional stochastic control problems. LP approaches to approximate DP naturally restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program – the ‘smoothed approximate linear program’ – relaxes this restriction in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages: First, we demonstrate superior bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. Second, experiments with our approach on a challenging problem (the game of Tetris) show that the approach outperforms the existing LP approach (which has previously been shown to be competitive with several ADP algorithms) by an order of magnitude.

17 nips-2009-A Sparse Non-Parametric Approach for Single Channel Separation of Known Sounds

Author: Paris Smaragdis, Madhusudana Shashanka, Bhiksha Raj

Abstract: In this paper we present an algorithm for separating mixed sounds from a monophonic recording. Our approach makes use of training data which allows us to learn representations of the types of sounds that compose the mixture. In contrast to popular methods that attempt to extract compact generalizable models for each sound from training data, we employ the training data itself as a representation of the sources in the mixture. We show that mixtures of known sounds can be described as sparse combinations of the training data itself, and in doing so produce significantly better separation results as compared to similar systems based on compact statistical models. Keywords: Example-Based Representation, Signal Separation, Sparse Models.
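A minimal sketch of the core idea, using non-negative least squares as a stand-in for the paper's probabilistic sparse prior (function and variable names are illustrative):

import numpy as np
from scipy.optimize import nnls

def separate_frame(mix_frame, dict_a, dict_b):
    """Explain one mixture spectrogram frame as a non-negative combination
    of training frames from two known sources, then reconstruct each part.

    dict_a, dict_b: (n_freq, n_frames) magnitude spectra of training audio.
    """
    D = np.hstack([dict_a, dict_b])               # exemplars as dictionary atoms
    weights, _ = nnls(D, mix_frame)               # non-negative activations
    na = dict_a.shape[1]
    part_a = dict_a @ weights[:na]                # source-A reconstruction
    part_b = dict_b @ weights[na:]                # source-B reconstruction
    return part_a, part_b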

18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models

Author: Peter Carbonetto, Matthew King, Firas Hamze

Abstract: We describe a new algorithmic framework for inference in probabilistic models, and apply it to inference for latent Dirichlet allocation (LDA). Our framework adopts the methodology of variational inference, but unlike existing variational methods such as mean field and expectation propagation it is not restricted to tractable classes of approximating distributions. Our approach can also be viewed as a “population-based” sequential Monte Carlo (SMC) method, but unlike existing SMC methods there is no need to design the artificial sequence of distributions. Significantly, our framework offers a principled means to exchange the variance of an importance sampling estimate for the bias incurred through variational approximation. We conduct experiments on a difficult inference problem in population genetics, a problem that is related to inference for LDA. The results of these experiments suggest that our method can offer improvements in stability and accuracy over existing methods, and at a comparable cost.

19 nips-2009-A joint maximum-entropy model for binary neural population patterns and continuous signals

Author: Sebastian Gerwinn, Philipp Berens, Matthias Bethge

Abstract: Second-order maximum-entropy models have recently gained much interest for describing the statistics of binary spike trains. Here, we extend this approach to take continuous stimuli into account as well. By constraining the joint second-order statistics, we obtain a joint Gaussian-Boltzmann distribution of continuous stimuli and binary neural firing patterns, for which we also compute marginal and conditional distributions. This model has the same computational complexity as pure binary models and fitting it to data is a convex problem. We show that the model can be seen as an extension to the classical spike-triggered average/covariance analysis and can be used as a non-linear method for extracting features which a neural population is sensitive to. Further, by calculating the posterior distribution of stimuli given an observed neural response, the model can be used to decode stimuli and yields a natural spike-train metric. Therefore, extending the framework of maximum-entropy models to continuous variables allows us to gain novel insights into the relationship between the firing patterns of neural ensembles and the stimuli they are processing.
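Schematically (with my notation), the joint second-order maximum-entropy distribution over a continuous stimulus x and a binary firing pattern s is exponential-quadratic,

$$
p(x, s) \;\propto\; \exp\!\Big( -\tfrac{1}{2}\, x^{\top} \Lambda\, x \;+\; a^{\top} x \;+\; b^{\top} s \;+\; \tfrac{1}{2}\, s^{\top} B\, s \;+\; x^{\top} C\, s \Big),
$$

with Λ positive definite: conditioned on s, x is Gaussian, and conditioned on x, s follows a Boltzmann-machine distribution, which is why the conditionals remain as tractable as in the pure binary model.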

20 nips-2009-A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers

Author: Sahand Negahban, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar

Abstract: High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of structure (e.g., sparse vectors; block-structured matrices; low-rank matrices; Markov assumptions). In such settings, a general approach to estimation is to solve a regularized convex program (known as a regularized M-estimator) which combines a loss function (measuring how well the model fits the data) with some regularization function that encourages the assumed structure. The goal of this paper is to provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive several existing results, and also to obtain several new results on consistency and convergence rates. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure the corresponding regularized M-estimators have fast convergence rates.
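The generic program analyzed by this framework, with the Lasso as the canonical instance:

$$
\hat{\theta}_{\lambda_n} \in \arg\min_{\theta \in \mathbb{R}^p}
\Big\{ \mathcal{L}(\theta; Z_1^n) + \lambda_n\, r(\theta) \Big\},
\qquad \text{e.g.}\quad
\hat{\theta} \in \arg\min_{\theta}\ \tfrac{1}{2n} \|y - X\theta\|_2^2 + \lambda_n \|\theta\|_1,
$$

where L is the loss on n samples and r is the regularizer; restricted strong convexity of L and decomposability of r are the two properties the main theorem requires for fast rates.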

21 nips-2009-Abstraction and Relational learning

22 nips-2009-Accelerated Gradient Methods for Stochastic Optimization and Online Learning

23 nips-2009-Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models

24 nips-2009-Adapting to the Shifting Intent of Search Queries

25 nips-2009-Adaptive Design Optimization in Experiments with People

26 nips-2009-Adaptive Regularization for Transductive Support Vector Machine

27 nips-2009-Adaptive Regularization of Weight Vectors

28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

30 nips-2009-An Integer Projected Fixed Point Method for Graph Matching and MAP Inference

31 nips-2009-An LP View of the M-best MAP problem

32 nips-2009-An Online Algorithm for Large Scale Image Similarity Learning

33 nips-2009-Analysis of SVM with Indefinite Kernels

34 nips-2009-Anomaly Detection with Score functions based on Nearest Neighbor Graphs

35 nips-2009-Approximating MAP by Compensating for Structural Relaxations

36 nips-2009-Asymptotic Analysis of MAP Estimation via the Replica Method and Compressed Sensing

37 nips-2009-Asymptotically Optimal Regularization in Smooth Parametric Models

38 nips-2009-Augmenting Feature-driven fMRI Analyses: Semi-supervised learning and resting state activity

39 nips-2009-Bayesian Belief Polarization

40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison

43 nips-2009-Bayesian estimation of orientation preference maps

44 nips-2009-Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

45 nips-2009-Beyond Convexity: Online Submodular Minimization

46 nips-2009-Bilinear classifiers for visual recognition

47 nips-2009-Boosting with Spatial Regularization

48 nips-2009-Bootstrapping from Game Tree Search

49 nips-2009-Breaking Boundaries Between Induction Time and Diagnosis Time Active Information Acquisition

50 nips-2009-Canonical Time Warping for Alignment of Human Behavior

51 nips-2009-Clustering sequence sets for motif discovery

52 nips-2009-Code-specific policy gradient rules for spiking neurons

53 nips-2009-Complexity of Decentralized Control: Special Cases

54 nips-2009-Compositionality of optimal control laws

55 nips-2009-Compressed Least-Squares Regression

56 nips-2009-Conditional Neural Fields

57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection

59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

60 nips-2009-Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation

61 nips-2009-Convex Relaxation of Mixture Regression with Efficient Algorithms

62 nips-2009-Correlation Coefficients are Insufficient for Analyzing Spike Count Dependencies

63 nips-2009-DUOL: A Double Updating Approach for Online Learning

64 nips-2009-Data-driven calibration of linear estimators with minimal penalties

65 nips-2009-Decoupling Sparsity and Smoothness in the Discrete Hierarchical Dirichlet Process

66 nips-2009-Differential Use of Implicit Negative Evidence in Generative and Discriminative Language Learning

67 nips-2009-Directed Regression

68 nips-2009-Dirichlet-Bernoulli Alignment: A Generative Model for Multi-Class Multi-Label Multi-Instance Corpora

69 nips-2009-Discrete MDL Predicts in Total Variation

70 nips-2009-Discriminative Network Models of Schizophrenia

71 nips-2009-Distribution-Calibrated Hierarchical Classification

72 nips-2009-Distribution Matching for Transduction

73 nips-2009-Dual Averaging Method for Regularized Stochastic Learning and Online Optimization

74 nips-2009-Efficient Bregman Range Search

75 nips-2009-Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models

76 nips-2009-Efficient Learning using Forward-Backward Splitting

77 nips-2009-Efficient Match Kernel between Sets of Features for Visual Recognition

78 nips-2009-Efficient Moments-based Permutation Tests

79 nips-2009-Efficient Recovery of Jointly Sparse Vectors

80 nips-2009-Efficient and Accurate Lp-Norm Multiple Kernel Learning

81 nips-2009-Ensemble Nystrom Method

82 nips-2009-Entropic Graph Regularization in Non-Parametric Semi-Supervised Classification

83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity

84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection

85 nips-2009-Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model

86 nips-2009-Exploring Functional Connectivities of the Human Brain using Multivariate Information Analysis

87 nips-2009-Exponential Family Graph Matching and Ranking

88 nips-2009-Extending Phase Mechanism to Differential Motion Opponency for Motion Pop-out

89 nips-2009-FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs

90 nips-2009-Factor Modeling for Advertisement Targeting

91 nips-2009-Fast, smooth and adaptive regression in metric spaces

92 nips-2009-Fast Graph Laplacian Regularized Kernel Learning via Semidefinite–Quadratic–Linear Programming

93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors

94 nips-2009-Fast Learning from Non-i.i.d. Observations

95 nips-2009-Fast subtree kernels on graphs

96 nips-2009-Filtering Abstract Senses From Image Search Results

97 nips-2009-Free energy score space

98 nips-2009-From PAC-Bayes Bounds to KL Regularization

99 nips-2009-Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning

100 nips-2009-Gaussian process regression with Student-t likelihood

101 nips-2009-Generalization Errors and Learning Curves for Regression with Multi-task Gaussian Processes

102 nips-2009-Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models

103 nips-2009-Graph Zeta Function in the Bethe Free Energy and Loopy Belief Propagation

104 nips-2009-Group Sparse Coding

105 nips-2009-Grouped Orthogonal Matching Pursuit for Variable Selection and Prediction

106 nips-2009-Heavy-Tailed Symmetric Stochastic Neighbor Embedding

107 nips-2009-Help or Hinder: Bayesian Models of Social Goal Inference

108 nips-2009-Heterogeneous multitask learning with joint sparsity constraints

109 nips-2009-Hierarchical Learning of Dimensional Biases in Human Categorization

110 nips-2009-Hierarchical Mixture of Classification Experts Uncovers Interactions between Brain Regions

111 nips-2009-Hierarchical Modeling of Local Image Features through $L_p$-Nested Symmetric Distributions

112 nips-2009-Human Rademacher Complexity

113 nips-2009-Improving Existing Fault Recovery Policies

114 nips-2009-Indian Buffet Processes with Power-law Behavior

115 nips-2009-Individuation, Identification and Object Discovery

116 nips-2009-Information-theoretic lower bounds on the oracle complexity of convex optimization

117 nips-2009-Inter-domain Gaussian Processes for Sparse Inference using Inducing Features

118 nips-2009-Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions

119 nips-2009-Kernel Methods for Deep Learning

120 nips-2009-Kernels and learning curves for Gaussian process regression on random graphs

121 nips-2009-Know Thy Neighbour: A Normative Theory of Synaptic Depression

122 nips-2009-Label Selection on Graphs

123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

124 nips-2009-Lattice Regression

125 nips-2009-Learning Brain Connectivity of Alzheimer's Disease from Neuroimaging Data

126 nips-2009-Learning Bregman Distance Functions and Its Application for Semi-Supervised Clustering

127 nips-2009-Learning Label Embeddings for Nearest-Neighbor Multi-class Classification with an Application to Speech Recognition

128 nips-2009-Learning Non-Linear Combinations of Kernels

129 nips-2009-Learning a Small Mixture of Trees

130 nips-2009-Learning from Multiple Partially Observed Views - an Application to Multilingual Text Categorization

131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition

132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

133 nips-2009-Learning models of object structure

134 nips-2009-Learning to Explore and Exploit in POMDPs

135 nips-2009-Learning to Hash with Binary Reconstructive Embeddings

136 nips-2009-Learning to Rank by Optimizing NDCG Measure

137 nips-2009-Learning transport operators for image manifolds

138 nips-2009-Learning with Compressible Priors

139 nips-2009-Linear-time Algorithms for Pairwise Statistical Problems

140 nips-2009-Linearly constrained Bayesian matrix factorization for blind source separation

141 nips-2009-Local Rules for Global MAP: When Do They Work ?

142 nips-2009-Locality-sensitive binary codes from shift-invariant kernels

143 nips-2009-Localizing Bugs in Program Executions with Graphical Models

144 nips-2009-Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness

145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability

146 nips-2009-Manifold Regularization for SIR with Rate Root-n Convergence

147 nips-2009-Matrix Completion from Noisy Entries

148 nips-2009-Matrix Completion from Power-Law Distributed Samples

149 nips-2009-Maximin affinity learning of image segmentation

150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains

151 nips-2009-Measuring Invariances in Deep Networks

152 nips-2009-Measuring model complexity with the prior predictive

153 nips-2009-Modeling Social Annotation Data with Content Relevance using a Topic Model

154 nips-2009-Modeling the spacing effect in sequential category learning

155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

156 nips-2009-Monte Carlo Sampling for Regret Minimization in Extensive Games

157 nips-2009-Multi-Label Prediction via Compressed Sensing

158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

159 nips-2009-Multi-Step Dyna Planning for Policy Evaluation and Control

160 nips-2009-Multiple Incremental Decremental Learning of Support Vector Machines

161 nips-2009-Nash Equilibria of Static Prediction Games

162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

163 nips-2009-Neurometric function analysis of population codes

164 nips-2009-No evidence for active sparsification in the visual cortex

165 nips-2009-Noise Characterization, Modeling, and Reduction for In Vivo Neural Recording

166 nips-2009-Noisy Generalized Binary Search

167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

168 nips-2009-Non-stationary continuous dynamic Bayesian networks

169 nips-2009-Nonlinear Learning using Local Coordinate Coding

170 nips-2009-Nonlinear directed acyclic structure learning with weakly additive noise models

171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

173 nips-2009-Nonparametric Greedy Algorithms for the Sparse Learning Problem

174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

175 nips-2009-Occlusive Components Analysis

176 nips-2009-On Invariance in Hierarchical Models

177 nips-2009-On Learning Rotations

178 nips-2009-On Stochastic and Worst-case Models for Investing

179 nips-2009-On the Algorithmics and Applications of a Mixed-norm based Kernel Learning Formulation

180 nips-2009-On the Convergence of the Concave-Convex Procedure

181 nips-2009-Online Learning of Assignments

182 nips-2009-Optimal Scoring for Unsupervised Learning

183 nips-2009-Optimal context separation of spiking haptic signals by second-order somatosensory neurons

184 nips-2009-Optimizing Multi-Class Spatio-Spectral Filters via Bayes Error Estimation for EEG Classification

185 nips-2009-Orthogonal Matching Pursuit From Noisy Random Measurements: A New Analysis

186 nips-2009-Parallel Inference for Latent Dirichlet Allocation on Graphics Processing Units

187 nips-2009-Particle-based Variational Inference for Continuous Systems

188 nips-2009-Perceptual Multistability as Markov Chain Monte Carlo Inference

189 nips-2009-Periodic Step Size Adaptation for Single Pass On-line Learning

190 nips-2009-Polynomial Semantic Indexing

191 nips-2009-Positive Semidefinite Metric Learning with Boosting

192 nips-2009-Posterior vs Parameter Sparsity in Latent Variable Models

193 nips-2009-Potential-Based Agnostic Boosting

194 nips-2009-Predicting the Optimal Spacing of Study: A Multiscale Context Model of Memory

195 nips-2009-Probabilistic Relational PCA

196 nips-2009-Quantification and the language of thought

197 nips-2009-Randomized Pruning: Efficiently Calculating Expectations in Large Dynamic Programs

198 nips-2009-Rank-Approximate Nearest Neighbor Search: Retaining Meaning and Speed in High Dimensions

199 nips-2009-Ranking Measures and Loss Functions in Learning to Rank

200 nips-2009-Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME)

201 nips-2009-Region-based Segmentation and Object Detection

202 nips-2009-Regularized Distance Metric Learning:Theory and Algorithm

203 nips-2009-Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks

204 nips-2009-Replicated Softmax: an Undirected Topic Model

205 nips-2009-Rethinking LDA: Why Priors Matter

206 nips-2009-Riffled Independence for Ranked Data

207 nips-2009-Robust Nonparametric Regression with Metric-Space Valued Output

208 nips-2009-Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization

209 nips-2009-Robust Value Function Approximation Using Bilinear Programming

210 nips-2009-STDP enables spiking neurons to detect hidden causes of their inputs

211 nips-2009-Segmenting Scenes by Matching Image Composites

212 nips-2009-Semi-Supervised Learning in Gigantic Image Collections

213 nips-2009-Semi-supervised Learning using Sparse Eigenfunction Bases

214 nips-2009-Semi-supervised Regression using Hessian energy with an application to semi-supervised dimensionality reduction

215 nips-2009-Sensitivity analysis in HMMs with application to likelihood maximization

216 nips-2009-Sequential effects reflect parallel learning of multiple environmental regularities

217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

218 nips-2009-Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining

219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

220 nips-2009-Slow Learners are Fast

221 nips-2009-Solving Stochastic Games

222 nips-2009-Sparse Estimation Using General Likelihoods and Non-Factorial Priors

223 nips-2009-Sparse Metric Learning via Smooth Optimization

224 nips-2009-Sparse and Locally Constant Gaussian Graphical Models

225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

226 nips-2009-Spatial Normalized Gamma Processes

227 nips-2009-Speaker Comparison with Inner Product Discriminant Functions

228 nips-2009-Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing

229 nips-2009-Statistical Analysis of Semi-Supervised Learning: The Limit of Infinite Unlabelled Data

230 nips-2009-Statistical Consistency of Top-k Ranking

231 nips-2009-Statistical Models of Linear and Nonlinear Contextual Interactions in Early Visual Processing

232 nips-2009-Strategy Grafting in Extensive Games

233 nips-2009-Streaming Pointwise Mutual Information

234 nips-2009-Streaming k-means approximation

235 nips-2009-Structural inference affects depth perception in the context of potential occlusion

236 nips-2009-Structured output regression for detection with partial truncation

237 nips-2009-Subject independent EEG-based BCI decoding

238 nips-2009-Submanifold density estimation

239 nips-2009-Submodularity Cuts and Applications

240 nips-2009-Sufficient Conditions for Agnostic Active Learnable

241 nips-2009-The 'tree-dependent components' of natural scenes are edge filters

242 nips-2009-The Infinite Partially Observable Markov Decision Process

243 nips-2009-The Ordered Residual Kernel for Robust Motion Subspace Clustering

244 nips-2009-The Wisdom of Crowds in the Recollection of Order Information

245 nips-2009-Thresholding Procedures for High Dimensional Variable Selection and Statistical Estimation

246 nips-2009-Time-Varying Dynamic Bayesian Networks

247 nips-2009-Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models

248 nips-2009-Toward Provably Correct Feature Selection in Arbitrary Domains

249 nips-2009-Tracking Dynamic Sources of Malicious Activity at Internet Scale

250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

251 nips-2009-Unsupervised Detection of Regions of Interest Using Iterative Link Analysis

252 nips-2009-Unsupervised Feature Selection for the $k$-means Clustering Problem

253 nips-2009-Unsupervised feature learning for audio classification using convolutional deep belief networks

254 nips-2009-Variational Gaussian-process factor analysis for modeling spatio-temporal data

255 nips-2009-Variational Inference for the Nested Chinese Restaurant Process

256 nips-2009-Which graphical models are difficult to learn?

257 nips-2009-White Functionals for Anomaly Detection in Dynamical Systems

258 nips-2009-Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise

259 nips-2009-Who’s Doing What: Joint Modeling of Names and Verbs for Simultaneous Face and Pose Annotation

260 nips-2009-Zero-shot Learning with Semantic Output Codes

261 nips-2009-fMRI-Based Inter-Subject Cortical Alignment Using Functional Connectivity