nips2005-81: Gaussian Processes for Multiuser Detection in CDMA receivers

Author: Juan J. Murillo-Fuentes, Sebastian Caro, Fernando Pérez-Cruz

Abstract: In this paper we propose a new receiver for digital communications. We focus on the application of Gaussian Processes (GPs) to multiuser detection (MUD) in code division multiple access (CDMA) systems to solve the near-far problem. Hence, we aim to reduce the interference from other users sharing the same frequency band. While the usual approaches minimize the mean square error (MMSE) to linearly retrieve the user of interest, we exploit the same criterion in the design of a nonlinear MUD. Since the optimal solution is known to be nonlinear, the performance of this novel method clearly improves on that of the MMSE detectors. Furthermore, the GP-based MUD achieves excellent interference suppression even for short training sequences. We also include experiments to illustrate that other nonlinear detectors, such as those based on Support Vector Machines (SVMs), exhibit worse performance.
1 Introduction

One of the major issues in present wireless communications is how users share the resources. Code division multiple access (CDMA) is one of the techniques exploited in third-generation communications systems and is to be employed in the next generation. In CDMA, each user uses direct sequence spread spectrum (DS-SS) to modulate its bits with an assigned code, spreading them over the entire frequency band.
While typical receivers deal only with interferences and noise intrinsic to the channel (i.e., inter-symbol interference, intermodulation products, spurious frequencies, and thermal noise), in CDMA we also have interference produced by other users accessing the channel at the same time. The interference due to the simultaneous access of multiple users has been the stimulus for the development of a powerful family of signal processing techniques, namely multiuser detection (MUD).
In CDMA, we face the retrieval of a given user, the user of interest (UOI), with the knowledge of its associated code or even the whole set of users' codes. Hence, we face the suppression of the interference due to other users.
Figure 1: Synchronous CDMA system. Each user's bits b_t(k) pass through a spreading (code) filter h_k(z); the chips are summed in the channel C(z) with additive noise n_t, and the receiver applies a chip-rate sampler, the code filters, and the MUD.

If all users transmit with the same power but the UOI is far from the receiver, most users reach the receiver with a larger amplitude, making it more difficult to detect the bits of the UOI.
Simple detectors can be designed by minimizing the mean square error (MMSE) to linearly retrieve the user of interest [5]. However, these detectors need large sequences of training data. This solution needs very long training sequences (a few hundred bits) and it has only been tested in toy examples with very few users and short spreading sequences (the code for each user).
In this paper, we will present a multiuser detector based on Gaussian Processes [7]. The MUD detector is inspired by the linear MMSE criterion, which can be interpreted as a Bayesian linear regressor. In this sense, we can extend the linear MMSE criterion to nonlinear decision functions using the same ideas developed in [6] to present Gaussian Processes for regression. In Section 2, we present the multiuser detection problem in CDMA communication systems and the widely used minimum mean square error receiver. We propose a nonlinear receiver based on Gaussian Processes in Section 3. Section 4 is devoted to showing, through computer experiments, the advantages of the GP-MUD receiver with short training sequences. We compare it to the linear MMSE and the nonlinear SVM MUDs.
2 CDMA Communication System Model and MUD

Consider a synchronous CDMA digital communication system [5] as depicted in Figure 1. Each transmitted bit is upsampled and multiplied by the user's spreading code, and the resulting chips are transmitted over the channel (each element of the spreading code is either +1 or −1, and these elements are known as chips). The channel is assumed to be linear and noisy; therefore the chips from the different users are added together, plus Gaussian noise. Hence, the MUD has to recover from these chips the bits corresponding to each user.
At each time step t, the signal at the receiver can be represented in matrix notation as

    x_t = H A b_t + n_t                                          (1)

where b_t is a column vector that contains the bits (+1 or −1) of the K users at time t. The K × K diagonal matrix A contains the amplitude of each user, which represents the attenuation that each user's transmission suffers through the channel (this attenuation depends on the distance between the user and the receiver). H is an L × K matrix whose columns contain the L-dimensional spreading codes of the K users.
The spreading codes are designed to present low cross-correlations between them and between any shifted versions of the codes, to guarantee that the bits from each user can be readily recovered. The codes are known as spreading sequences because they augment the occupied bandwidth of the transmitted signal by a factor of L. Finally, x_t represents the L received chips, to which the Gaussian noise denoted by n_t has been added.
At reception, we aim to estimate the originally transmitted symbols of any user i, b_t(i), hereafter the user of interest. Linear MUDs estimate these bits as

    b̂_t(i) = sgn{w_i^⊤ x_t}                                     (2)

The matched filter (MF), w_i = h_i, a simple correlation between x_t and the ith spreading code, is the optimal receiver if there were no additional users in the system, i.e. if the received signal were only corrupted by Gaussian noise.
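As a concrete illustration, the model in (1) and the matched filter in (2) can be simulated in a few lines of numpy. This is a minimal sketch: the random ±1 codes, the amplitude range and the noise level are illustrative stand-ins for the Gold sequences and channel conditions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 31, 8                                  # spreading factor and number of users

# Random +/-1 codes are a stand-in for proper Gold sequences.
H = rng.choice([-1.0, 1.0], size=(L, K))      # one L-chip spreading code per column
A = np.diag(rng.uniform(0.2, 1.0, size=K))    # per-user received amplitudes
b = rng.choice([-1.0, 1.0], size=K)           # the K users' bits at time t
x = H @ A @ b + 0.1 * rng.standard_normal(L)  # received chip vector, eq. (1)

# Matched filter for the user of interest, eq. (2) with w_i = h_i.
i = 0
b_hat = np.sign(H[:, i] @ x)
```

With near-orthogonal codes and mild interference the matched filter usually recovers b[0], but as the interferers' amplitudes grow it fails, which is precisely the near-far problem described above.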
While the optimal solution is known to be nonlinear [5], some linear receivers, such as the minimum mean square error (MMSE) receiver, perform well and are used in practice.
The MMSE receiver for the ith user solves

    w_i* = arg min_{w_i} E[(b_t(i) − w_i^⊤ x_t)^2] = arg min_{w_i} E[(b_t(i) − w_i^⊤ (H A b_t + n_t))^2]    (3)

where w_i represents the decision function of the linear classifier.
We can derive the MMSE receiver by taking derivatives with respect to w_i and equating them to zero, obtaining

    w_i^{MMSE-dec} = R_xx^{-1} h_i                               (4)

where R_xx = E[x_t x_t^⊤] is the correlation matrix of the received vectors and h_i represents the spreading sequence of the UOI. This receiver is known as the decentralized MMSE receiver, as it can be implemented without knowing the spreading sequences of the remaining users.
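A minimal sketch of the decentralized receiver in (4), estimating R_xx empirically from N received vectors; the code generation, amplitudes and noise level below are illustrative assumptions, and note that only the UOI's own code h_i is required.

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, N = 31, 8, 2000
H = rng.choice([-1.0, 1.0], size=(L, K))           # stand-in spreading codes
A = np.diag(rng.uniform(0.2, 1.0, size=K))         # per-user amplitudes
B = rng.choice([-1.0, 1.0], size=(K, N))           # bits over N symbol periods
X = H @ A @ B + 0.3 * rng.standard_normal((L, N))  # received chips, one column per symbol

i = 0
Rxx = X @ X.T / N                     # empirical estimate of E[x_t x_t^T]
w = np.linalg.solve(Rxx, H[:, i])     # decentralized MMSE filter, eq. (4)
ber = np.mean(np.sign(w @ X) != B[i]) # empirical bit error rate for the UOI
```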
Its main limitation is its performance, which is poor even for high signal-to-noise ratios, and it needs many samples (thousands) before it can recover the received symbols.
If the spreading codes of all the users are available, as at the base station, this information can be used to improve the performance of the MMSE detector. The vector z_t = H^⊤ x_t is the matched-filter output for each user; it reduces the dimensionality of the problem from the number of chips L to the number of users K, which is significantly lower in most applications. In this case the receiver is known as the centralized detector, and it is defined as

    w_i^{MMSE-cent} = H R_zz^{-1} H^⊤ h_i                        (5)

where R_zz = E[z_t z_t^⊤] is the correlation matrix of the received chips after the MFs.
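The centralized variant in (5) first projects onto the matched-filter outputs z_t = H^⊤ x_t; a sketch under the same illustrative assumptions as before (random codes standing in for Gold sequences):

```python
import numpy as np

rng = np.random.default_rng(2)
L, K, N = 31, 8, 2000
H = rng.choice([-1.0, 1.0], size=(L, K))           # stand-in spreading codes
A = np.diag(rng.uniform(0.2, 1.0, size=K))
B = rng.choice([-1.0, 1.0], size=(K, N))
X = H @ A @ B + 0.3 * rng.standard_normal((L, N))

Z = H.T @ X                                    # matched-filter outputs, K x N
Rzz = Z @ Z.T / N                              # empirical estimate of E[z_t z_t^T]
i = 0
w = H @ np.linalg.solve(Rzz, H.T @ H[:, i])    # centralized MMSE filter, eq. (5)
ber = np.mean(np.sign(w @ X) != B[i])
```

The inversion is now K × K instead of L × L, which is the dimensionality reduction mentioned in the text.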
These MUDs have good convergence properties and do not need a training sequence to decode the received bits, but they do need to observe long sequences before their probability of error is low. Therefore, the initially received bits will present a very high probability of error, which makes it impossible to send any information over them. Some improvements can be achieved by using higher-order statistics [2], but still the training sequences are not short enough for most applications.
3 Gaussian Processes for Multiuser Detection

The MMSE detector minimizes the functional in (3), which gives the best linear classifier. As we know, the optimal classifier is nonlinear [5], and the MMSE criterion can be readily extended to provide nonlinear models by mapping the received chips to a higher-dimensional space.
In this case we need to solve

    w_i* = arg min_{w_i} Σ_{t=1}^{N} (b_t(i) − w_i^⊤ φ(x_t))^2 + λ ||w_i||^2    (6)

in which we have replaced the expectation by the empirical mean over a training set and incorporated a regularizer to avoid overfitting. φ(·) represents the nonlinear mapping of the received chips.
The w_i that minimizes (6) can be interpreted as the mode of the parameters of a Bayesian linear regressor, as noted in [6], and since the likelihood and the prior are both Gaussian, so will be the posterior. For any received symbol x*, we know that the prediction will be distributed as a Gaussian with mean

    µ(x*) = (1/λ) φ^⊤(x*) A^{-1} Φ^⊤ b                           (7)

and variance

    σ^2(x*) = φ^⊤(x*) A^{-1} φ(x*)                               (8)

where Φ = [φ(x_1), φ(x_2), …].
The kernel that we use in our experiments is

    k(x_t, x_ℓ) = e^{θ[1]} exp(−e^{θ[4]} ||x_t − x_ℓ||^2) + e^{θ[3]} x_t^⊤ x_ℓ + e^{θ[2]} δ_{t,ℓ}    (12)

The covariance function in (12) is a good kernel for the GP-MUD because it contains a linear and a nonlinear part.
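In the dual (kernel) form, the predictive mean in (7) for a new chip vector reduces to k_*^⊤ (K + noise·I)^{-1} b, so a GP-MUD with the covariance of (12) can be sketched directly. The hyperparameter values below are hand-picked illustrations, not the outcome of the maximum-likelihood search the paper relies on, and the data generation is the same stand-in model as before.

```python
import numpy as np

def kernel(Xa, Xb, theta):
    # Eq. (12) without the delta term, which is added on the diagonal below.
    sq = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(theta[0]) * np.exp(-np.exp(theta[3]) * sq) + np.exp(theta[2]) * Xa @ Xb.T

rng = np.random.default_rng(3)
L, K, N = 31, 8, 30                               # only 30 training symbols
H = rng.choice([-1.0, 1.0], size=(L, K))          # stand-in spreading codes
A = np.diag(rng.uniform(0.2, 1.0, size=K))
Btr = rng.choice([-1.0, 1.0], size=(K, N))
Xtr = (H @ A @ Btr + 0.3 * rng.standard_normal((L, N))).T   # N x L training chips

theta = np.log([1.0, 0.1, 0.1, 1.0 / L])          # hand-picked log-hyperparameters
Kmat = kernel(Xtr, Xtr, theta) + np.exp(theta[1]) * np.eye(N)
alpha = np.linalg.solve(Kmat, Btr[0])             # fit to the UOI's training bits

xstar = H @ A @ rng.choice([-1.0, 1.0], size=K) + 0.3 * rng.standard_normal(L)
mu = kernel(xstar[None, :], Xtr, theta) @ alpha   # predictive mean in dual form
b_hat = np.sign(mu[0])                            # detected bit for the UOI
```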
The optimal decision surface for MUD is nonlinear unless the spreading codes are orthogonal to each other, and its deviation from the linear solution depends on how strong the correlations between the codes are. In most cases, a linear detector is very close to the optimal decision surface, as spreading codes are almost orthogonal, and only a minor correction is needed to achieve the optimal decision boundary. The linear part of the kernel can mimic the best linear decision boundary, and the nonlinear part modifies it where the linear explanation is not optimal.
Using a radial basis kernel for the nonlinear part is also a good choice for achieving nonlinear decisions, because the received chips form a constellation of 2^K clouds of points with Gaussian spread around their centres.
Viewing the receiver as Gaussian Process regression, instead of a regularized least squares functional, allows us either to obtain the hyperparameters by maximizing the likelihood or to marginalise them out using Monte Carlo techniques, as explained in [6].
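As an aside on the maximum-likelihood route, the quantity being maximized is the log marginal likelihood of the training bits. A toy numpy sketch searching a single noise hyperparameter over a grid follows; the fixed RBF width, the placeholder data and the grid range are all illustrative assumptions, not the paper's actual optimization.

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 30, 31
Xtr = rng.standard_normal((N, L))        # placeholder chip vectors
y = rng.choice([-1.0, 1.0], size=N)      # placeholder training bits

def neg_log_marginal(log_noise, width=1.0 / 31):
    # Negative log marginal likelihood of a GP with a fixed-width RBF kernel;
    # only the noise hyperparameter is searched to keep the sketch short.
    sq = ((Xtr[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    Kmat = np.exp(-width * sq) + np.exp(log_noise) * np.eye(N)
    _, logdet = np.linalg.slogdet(Kmat)
    return 0.5 * (logdet + y @ np.linalg.solve(Kmat, y) + N * np.log(2 * np.pi))

grid = np.linspace(-6.0, 2.0, 17)
best_log_noise = min(grid, key=neg_log_marginal)
```

In practice one would optimize all four θ entries of (12) jointly with gradients, or marginalise them with Monte Carlo as the text notes.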
The powers of the interfering users are distributed homogeneously between 0 and 30 dB above that of the UOI.
We have shown above how we can make predictions in the nonlinear case (9) using the received symbols from the channel. In analogy with the MMSE receiver, this corresponds to the decentralized GP-MUD detector, as we do not need to know the other users' codes to detect the bits sent to us. It is also relevant to notice that, unlike the decentralized MMSE detector, we do not even need our own spreading code for detection. We can also obtain a centralized GP-MUD detector by using the vectors z_t = H^⊤ x_t as inputs.
4 Experiments

In this section we include the typical evaluation of the performance of a digital communications system, i.e., the bit error rate (BER) as a function of the signal-to-noise ratio (SNR). The test environment is a synchronous CDMA system in which the users are spread using Gold sequences with spreading factor L = 31 and K = 8 users, which are typical values in CDMA-based mobile communication systems.
These amplitudes are random values chosen to achieve an interferer-to-signal ratio of 30 dB. We study the worst-case scenario and hence detect the user that arrives at the receiver with the lowest amplitude. We compare the performance of the centralized and decentralized GP-MUDs to the performance of the MMSE detectors, the matched filter detector and the (centralized) SVM-MUD in [4]. The SVM-MUD detector uses a Gaussian kernel whose width is adapted by incorporating knowledge of the noise variance in the channel.
We believe this might be due either to the reduced number of users in their experiments (2 or 3) or to their using the same amplitude for all users, so they did not encounter the near-far problem.
The results in Figure 2 show that the detectors based on GPs are able to reduce the probability of error as the signal-to-noise ratio in the channel increases, with only 30 samples in the training sequence. The centralized GP-MUD is only 1.5-2 dB worse than the best achievable probability of error, which is obtained in the absence of interference (indicated by the dashed line).
The decentralized GP-MUD reduces the probability of error as the signal-to-noise ratio increases, but it remains 3-4 dB from the optimal performance. The other detectors are not able to decrease the BER even for a very high signal-to-noise ratio in the channel. These figures show that the GP-based MUDs can outperform the other MUDs when only very short training sequences are available.
Figure 3 highlights that the SVM-MUD (centralized) and the centralized MMSE detectors are able to reduce the BER as the SNR increases, but they are still far from the performance of the GP-MUD. The centralized GP-MUD basically provides optimal performance, lying within a fraction of a dB of the optimal BER. The decentralized GP-MUD outperforms the other two centralized detectors (SVM and MMSE), since it is able to provide a lower BER without needing to know the codes of the remaining users.
In this case, the centralized GP-MUD lies above the optimal BER curve and the decentralized GP-MUD performs like the SVM-MUD detector. The centralized MMSE detector still presents a very high probability of error at high signal-to-noise ratios, and it needs over 500 samples to obtain a performance similar to that of the centralized GP with 80 samples. With 160 samples the decentralized MMSE is already able to slightly reduce the bit error rate at very high signal-to-noise ratios. But to achieve the performance shown by the decentralized GP-MUD it needs several thousand samples.
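To see why short training sequences hurt the MMSE receivers, one can rerun the decentralized filter of (4) with only 30 training symbols in a strong near-far setting. This is an illustrative sketch, not the paper's experiment: the 30 dB gap, the noise level and the random codes are assumptions, and pinv replaces the inverse because 30 samples cannot give a full-rank estimate of the 31 × 31 matrix R_xx.

```python
import numpy as np

rng = np.random.default_rng(5)
L, K, Ntr, Nte = 31, 8, 30, 2000
H = rng.choice([-1.0, 1.0], size=(L, K))       # stand-in spreading codes
amps = np.ones(K)
amps[0] = 10 ** (-30 / 20)                     # the UOI is 30 dB below the interferers
A = np.diag(amps)

def batch(n):
    B = rng.choice([-1.0, 1.0], size=(K, n))
    return H @ A @ B + 0.01 * rng.standard_normal((L, n)), B

Xtr, _ = batch(Ntr)                            # only 30 training symbols
Xte, Bte = batch(Nte)

Rxx = Xtr @ Xtr.T / Ntr                        # rank-deficient with 30 samples
w = np.linalg.pinv(Rxx) @ H[:, 0]              # decentralized MMSE, eq. (4)
ber = np.mean(np.sign(w @ Xte) != Bte[0])      # expected to be poor in this regime
```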
5 Conclusions

Since the optimal solution is known to be nonlinear, Gaussian Processes are able to obtain this nonlinear decision surface with very few training examples. This is the main advantage of the method, as it only requires a few tens of training examples instead of the few hundred needed by other nonlinear techniques such as SVMs. This allows its application in real communication systems, as training sequences of 26 symbols are typically used in the GSM standard for mobile telecommunications.
The most relevant result of this paper is the performance shown by the decentralized GP-MUD receiver, since it can be directly used over any CDMA system. The decentralized GP-MUD receiver does not need to know the codes of the other users and does not require the users to be aligned, as the other methods do. While the other receivers degrade in performance if the users are not aligned, the decentralized GP-MUD receiver does not, providing a more robust solution to the near-far problem. We have left for further work a more extensive set of experiments changing other parameters of the system, such as the number of users, the length of the spreading code, and the interferences with other users. But still, we believe the reported results are significant, since we obtain low bit error rates for training sequences as short as 30 bits.
References

- Neural networks for multiuser detection in code-division multiple-access communications.
- Support vector machine multiuser receiver for DS-CDMA signals in multipath channels.
- Prediction with Gaussian processes: From linear regression to linear prediction and beyond.
2 0.069983192 15 nips-2005-A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels
Author: Eizaburo Doi, Doru C. Balcan, Michael S. Lewicki
Abstract: Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and twodimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of highdimensional image data and show that these codes are substantially more robust compared against other image codes such as ICA and wavelets. 1
3 0.059585314 162 nips-2005-Rate Distortion Codes in Sensor Networks: A System-level Analysis
Author: Tatsuto Murayama, Peter Davis
Abstract: This paper provides a system-level analysis of a scalable distributed sensing model for networked sensors. In our system model, a data center acquires data from a bunch of L sensors which each independently encode their noisy observations of an original binary sequence, and transmit their encoded data sequences to the data center at a combined rate R, which is limited. Supposing that the sensors use independent LDGM rate distortion codes, we show that the system performance can be evaluated for any given finite R when the number of sensors L goes to infinity. The analysis shows how the optimal strategy for the distributed sensing problem changes at critical values of the data rate R or the noise level. 1
4 0.055732213 179 nips-2005-Sparse Gaussian Processes using Pseudo-inputs
Author: Edward Snelson, Zoubin Ghahramani
Abstract: We present a new Gaussian process (GP) regression model whose covariance is parameterized by the the locations of M pseudo-input points, which we learn by a gradient based optimization. We take M N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M 2 N ) training cost and O(M 2 ) prediction cost per test case. We also find hyperparameters of the covariance function in the same joint optimization. The method can be viewed as a Bayesian regression model with particular input dependent noise. The method turns out to be closely related to several other sparse GP approaches, and we discuss the relation in detail. We finally demonstrate its performance on some large data sets, and make a direct comparison to other sparse GP methods. We show that our method can match full GP performance with small M , i.e. very sparse solutions, and it significantly outperforms other approaches in this regime. 1
5 0.055460334 80 nips-2005-Gaussian Process Dynamical Models
Author: Jack Wang, Aaron Hertzmann, David M. Blei
Abstract: This paper introduces Gaussian Process Dynamical Models (GPDM) for nonlinear time series analysis. A GPDM comprises a low-dimensional latent space with associated dynamics, and a map from the latent space to an observation space. We marginalize out the model parameters in closed-form, using Gaussian Process (GP) priors for both the dynamics and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach on human motion capture data in which each pose is 62-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces. Webpage: http://www.dgp.toronto.edu/∼ jmwang/gpdm/ 1
6 0.055032641 21 nips-2005-An Alternative Infinite Mixture Of Gaussian Process Experts
7 0.051940896 50 nips-2005-Convex Neural Networks
8 0.048516288 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity
9 0.048306514 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach
10 0.047243662 135 nips-2005-Neuronal Fiber Delineation in Area of Edema from Diffusion Weighted MRI
11 0.046772711 60 nips-2005-Dynamic Social Network Analysis using Latent Space Models
12 0.041995455 131 nips-2005-Multiple Instance Boosting for Object Detection
13 0.041452508 136 nips-2005-Noise and the two-thirds power Law
14 0.041387409 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
15 0.038896166 30 nips-2005-Assessing Approximations for Gaussian Process Classification
16 0.038557936 167 nips-2005-Robust design of biological experiments
17 0.036659315 46 nips-2005-Consensus Propagation
18 0.034135677 16 nips-2005-A matching pursuit approach to sparse Gaussian process regression
19 0.03353313 139 nips-2005-Non-iterative Estimation with Perturbed Gaussian Markov Processes
20 0.033051025 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification
topicId topicWeight
[(0, 0.127), (1, 0.006), (2, -0.018), (3, 0.02), (4, 0.036), (5, -0.024), (6, -0.004), (7, -0.05), (8, 0.065), (9, 0.056), (10, -0.056), (11, 0.013), (12, -0.024), (13, 0.019), (14, 0.049), (15, -0.001), (16, -0.063), (17, -0.032), (18, 0.046), (19, -0.078), (20, -0.045), (21, 0.012), (22, 0.048), (23, 0.029), (24, -0.084), (25, -0.004), (26, -0.03), (27, -0.04), (28, 0.068), (29, -0.011), (30, -0.053), (31, -0.067), (32, -0.076), (33, -0.035), (34, -0.026), (35, 0.013), (36, 0.031), (37, 0.078), (38, -0.033), (39, -0.053), (40, 0.077), (41, 0.062), (42, -0.021), (43, -0.17), (44, 0.114), (45, 0.122), (46, -0.054), (47, 0.003), (48, -0.072), (49, -0.038)]
simIndex simValue paperId paperTitle
same-paper 1 0.92521161 81 nips-2005-Gaussian Processes for Multiuser Detection in CDMA receivers
Author: Juan J. Murillo-fuentes, Sebastian Caro, Fernando Pérez-Cruz
Abstract: In this paper we propose a new receiver for digital communications. We focus on the application of Gaussian Processes (GPs) to the multiuser detection (MUD) in code division multiple access (CDMA) systems to solve the near-far problem. Hence, we aim to reduce the interference from other users sharing the same frequency band. While usual approaches minimize the mean square error (MMSE) to linearly retrieve the user of interest, we exploit the same criteria but in the design of a nonlinear MUD. Since the optimal solution is known to be nonlinear, the performance of this novel method clearly improves that of the MMSE detectors. Furthermore, the GP based MUD achieves excellent interference suppression even for short training sequences. We also include some experiments to illustrate that other nonlinear detectors such as those based on Support Vector Machines (SVMs) exhibit a worse performance. 1
2 0.52894658 15 nips-2005-A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels
Author: Eizaburo Doi, Doru C. Balcan, Michael S. Lewicki
Abstract: Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and twodimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of highdimensional image data and show that these codes are substantially more robust compared against other image codes such as ICA and wavelets. 1
3 0.50141633 162 nips-2005-Rate Distortion Codes in Sensor Networks: A System-level Analysis
Author: Tatsuto Murayama, Peter Davis
Abstract: This paper provides a system-level analysis of a scalable distributed sensing model for networked sensors. In our system model, a data center acquires data from a bunch of L sensors which each independently encode their noisy observations of an original binary sequence, and transmit their encoded data sequences to the data center at a combined rate R, which is limited. Supposing that the sensors use independent LDGM rate distortion codes, we show that the system performance can be evaluated for any given finite R when the number of sensors L goes to infinity. The analysis shows how the optimal strategy for the distributed sensing problem changes at critical values of the data rate R or the noise level. 1
4 0.47317153 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
Author: Misha Ahrens, Liam Paninski, Quentin J. Huys
Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1
5 0.40628701 68 nips-2005-Factorial Switching Kalman Filters for Condition Monitoring in Neonatal Intensive Care
Author: Christopher Williams, John Quinn, Neil Mcintosh
Abstract: The observed physiological dynamics of an infant receiving intensive care are affected by many possible factors, including interventions to the baby, the operation of the monitoring equipment and the state of health. The Factorial Switching Kalman Filter can be used to infer the presence of such factors from a sequence of observations, and to estimate the true values where these observations have been corrupted. We apply this model to clinical time series data and show it to be effective in identifying a number of artifactual and physiological patterns. 1
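The full Factorial Switching Kalman Filter is beyond a few lines, but the core mechanism, weighting each measurement by the posterior probability that it came from a normal regime rather than an artifactual one, can be sketched for a single scalar signal. All regimes, variances, and the simulated "probe dropout" below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
x = 140.0 + np.cumsum(0.1 * rng.standard_normal(T))   # latent vital sign (random walk)
artifact = np.zeros(T, dtype=bool)
artifact[80:90] = True                                # simulated probe dropout
noise_sd = np.where(artifact, 40.0, 0.5)
y = x + noise_sd * rng.standard_normal(T)             # corrupted measurements

q, r_norm, r_art = 0.1 ** 2, 0.5 ** 2, 40.0 ** 2      # process / measurement variances
prior_art = 0.05                                      # prior prob of artifact regime
m, p = y[0], 1.0                                      # filter mean and variance
p_art = np.zeros(T)
est = np.zeros(T)
for t in range(T):
    p += q                                            # predict step
    lik = []
    for r in (r_norm, r_art):                         # innovation likelihood per regime
        s = p + r
        lik.append(np.exp(-0.5 * (y[t] - m) ** 2 / s) / np.sqrt(2 * np.pi * s))
    w_art = prior_art * lik[1] / ((1 - prior_art) * lik[0] + prior_art * lik[1])
    p_art[t] = w_art
    gain = (1 - w_art) * p / (p + r_norm)             # trust y[t] only if it looks normal
    m += gain * (y[t] - m)
    p *= 1 - gain
    est[t] = m
```

The filter flags the dropout window with high artifact probability and holds its state estimate through it, which is the "estimate the true values where observations have been corrupted" behavior the abstract describes.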
6 0.37553531 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity
7 0.37343067 131 nips-2005-Multiple Instance Boosting for Object Detection
8 0.36245707 179 nips-2005-Sparse Gaussian Processes using Pseudo-inputs
9 0.34978351 50 nips-2005-Convex Neural Networks
10 0.34382811 80 nips-2005-Gaussian Process Dynamical Models
11 0.34082627 16 nips-2005-A matching pursuit approach to sparse Gaussian process regression
12 0.34000796 49 nips-2005-Convergence and Consistency of Regularized Boosting Algorithms with Stationary B-Mixing Observations
13 0.33242655 113 nips-2005-Learning Multiple Related Tasks using Latent Independent Component Analysis
14 0.31845289 60 nips-2005-Dynamic Social Network Analysis using Latent Space Models
15 0.31644568 191 nips-2005-The Forgetron: A Kernel-Based Perceptron on a Fixed Budget
17 0.31028715 205 nips-2005-Worst-Case Bounds for Gaussian Process Models
18 0.31014729 167 nips-2005-Robust design of biological experiments
19 0.30210435 44 nips-2005-Computing the Solution Path for the Regularized Support Vector Regression
20 0.29920071 24 nips-2005-An Approximate Inference Approach for the PCA Reconstruction Error
topicId topicWeight
[(3, 0.059), (10, 0.026), (22, 0.394), (27, 0.018), (31, 0.044), (34, 0.067), (39, 0.015), (41, 0.017), (55, 0.015), (65, 0.013), (69, 0.054), (73, 0.019), (77, 0.025), (88, 0.087), (91, 0.042)]
simIndex simValue paperId paperTitle
same-paper 1 0.77059352 81 nips-2005-Gaussian Processes for Multiuser Detection in CDMA receivers
Author: Juan J. Murillo-fuentes, Sebastian Caro, Fernando Pérez-Cruz
Abstract: In this paper we propose a new receiver for digital communications. We focus on the application of Gaussian Processes (GPs) to multiuser detection (MUD) in code division multiple access (CDMA) systems to solve the near-far problem. Hence, we aim to reduce the interference from other users sharing the same frequency band. While usual approaches minimize the mean square error (MMSE) to linearly retrieve the user of interest, we exploit the same criterion in the design of a nonlinear MUD. Since the optimal solution is known to be nonlinear, the performance of this novel method clearly improves on that of the MMSE detectors. Furthermore, the GP-based MUD achieves excellent interference suppression even for short training sequences. We also include experiments illustrating that other nonlinear detectors, such as those based on Support Vector Machines (SVMs), exhibit worse performance. 1
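For context on the main paper's abstract, the linear MMSE detector it takes as its baseline can be sketched in a few lines. The spreading codes, amplitudes, and noise level below are hypothetical; the point is only the structure w = R^{-1}p, with R and p estimated from a training block.

```python
import numpy as np

rng = np.random.default_rng(3)
K, N, T = 3, 8, 200                 # users, spreading gain, training symbols

# Non-orthogonal ±1 spreading codes (one column per user), so multiuser
# interference is present and near-far effects matter.
S = np.array([[ 1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -1],
              [ 1, -1, -1],
              [ 1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -1],
              [-1, -1, -1]]) / np.sqrt(8)
A = np.diag([1.0, 3.0, 3.0])        # user 0 is weak: a near-far scenario
B = rng.choice([-1.0, 1.0], size=(K, T))            # transmitted bits
Y = S @ A @ B + 0.1 * rng.standard_normal((N, T))   # received chip vectors

# Linear MMSE detector for user 0, estimated from the training block:
# w = argmin_w E|w^T y - b_0|^2  =>  w = R^{-1} p,  R = E[y y^T],  p = E[y b_0]
R = Y @ Y.T / T
p = Y @ B[0] / T
w = np.linalg.solve(R, p)
ber = np.mean(np.sign(w @ Y) != B[0])               # bit error rate on user 0
```

The paper's contribution is to replace this linear map with a GP regressor trained under the same MMSE criterion, which matters when the optimal detector is nonlinear.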
2 0.71071541 95 nips-2005-Improved risk tail bounds for on-line algorithms
Author: Nicolò Cesa-bianchi, Claudio Gentile
Abstract: We prove the strongest known bound for the risk of hypotheses selected from the ensemble generated by running a learning algorithm incrementally on the training data. Our result rests on proof techniques that are remarkably different from the standard risk analysis based on uniform convergence arguments.
3 0.5517869 160 nips-2005-Query by Committee Made Real
Author: Ran Gilad-bachrach, Amir Navot, Naftali Tishby
Abstract: Training a learning algorithm is a costly task. A major goal of active learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large scale problems by using selective sampling. The algorithm overcomes the costly sampling step of the well-known Query By Committee (QBC) algorithm by projecting onto a low dimensional space. KQBC also enables the use of kernels, providing a simple way of extending QBC to the non-linear scenario. Sampling the low dimensional space is done using the hit-and-run random walk. We demonstrate the success of this novel algorithm by applying it to both artificial and real-world problems.
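KQBC itself involves kernels and hit-and-run sampling over a version space; as a much simpler illustration of the underlying Query By Committee idea, the sketch below trains a bootstrap committee of perceptrons and queries the pool point the committee disagrees on most. Everything here (the data, the committee size, the perceptron learner) is illustrative rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(4)

def perceptron(X, y, epochs=50):
    """Plain perceptron; the data below is margin-separated, so it converges."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w = w + yi * xi
    return w

# 2-D toy data (plus a bias feature), linearly separable with a margin
w_true = np.array([1.0, -2.0, 0.3])
X = np.hstack([rng.standard_normal((400, 2)), np.ones((400, 1))])
keep = np.abs(X @ w_true) > 0.3
X, y = X[keep], np.sign(X[keep] @ w_true)

labeled = list(range(4))                       # a few seed labels
pool = [i for i in range(len(X)) if i not in labeled]
for _ in range(20):                            # active-learning rounds
    idx = np.array(labeled)
    committee = []
    for _ in range(5):                         # bootstrap-resampled committee
        boot = rng.choice(idx, size=idx.size)
        committee.append(perceptron(X[boot], y[boot]))
    votes = np.array([np.sign(X[pool] @ w) for w in committee])
    q = pool[int(np.argmin(np.abs(votes.sum(axis=0))))]  # most contested point
    labeled.append(q)
    pool.remove(q)

acc = np.mean(np.sign(X @ perceptron(X[labeled], y[labeled])) == y)
```

QBC's costly step is sampling hypotheses consistent with the labels (here crudely approximated by bootstrap refits); replacing that step with hit-and-run in a low dimensional projection is exactly what KQBC contributes.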
4 0.433763 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex
Author: Rory Sayres, David Ress, Kalanit Grill-spector
Abstract: The category of visual stimuli has been reliably decoded from patterns of neural activity in extrastriate visual cortex [1]. It has yet to be seen whether object identity can be inferred from this activity. We present fMRI data measuring responses in human extrastriate cortex to a set of 12 distinct object images. We use a simple winner-take-all classifier, using half the data from each recording session as a training set, to evaluate encoding of object identity across fMRI voxels. Since this approach is sensitive to the inclusion of noisy voxels, we describe two methods for identifying subsets of voxels in the data which optimally distinguish object identity. One method characterizes the reliability of each voxel within subsets of the data, while another estimates the mutual information of each voxel with the stimulus set. We find that both metrics can identify subsets of the data which reliably encode object identity, even when noisy measurements are artificially added to the data. The mutual information metric is less efficient at this task, likely due to constraints in fMRI data. 1
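The winner-take-all classifier in the abstract above is simple enough to sketch on synthetic "voxel" data: split the recordings in half, average the training half into one template per object, and label each test pattern by its most correlated template. The dimensions and noise level below are made up, and all real fMRI preprocessing is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obj, n_vox, reps = 12, 50, 8      # objects, voxels, repeats per session half

proto = rng.standard_normal((n_obj, n_vox))              # object-specific patterns
train = proto[:, None, :] + 0.8 * rng.standard_normal((n_obj, reps, n_vox))
test = proto[:, None, :] + 0.8 * rng.standard_normal((n_obj, reps, n_vox))

templates = train.mean(axis=1)      # per-object mean pattern from the training half

def wta_classify(pattern, templates):
    # Winner-take-all: the predicted label is the template with the
    # highest correlation to the test pattern.
    corrs = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

preds = np.array([[wta_classify(test[o, r], templates) for r in range(reps)]
                  for o in range(n_obj)])
acc = float(np.mean(preds == np.arange(n_obj)[:, None]))
```

The voxel-selection methods in the paper address what this sketch ignores: with noisy voxels mixed in, the correlation is dominated by uninformative dimensions, so restricting to reliable or high-mutual-information voxels improves decoding.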
5 0.34720385 74 nips-2005-Faster Rates in Regression via Active Learning
Author: Rebecca Willett, Robert Nowak, Rui M. Castro
Abstract: This paper presents a rigorous statistical analysis characterizing regimes in which active learning significantly outperforms classical passive learning. Active learning algorithms are able to make queries or select sample locations in an online fashion, depending on the results of the previous queries. In some regimes, this extra flexibility leads to significantly faster rates of error decay than those possible in classical passive learning settings. The nature of these regimes is explored by studying fundamental performance limits of active and passive learning in two illustrative nonparametric function classes. In addition to examining the theoretical potential of active learning, this paper describes a practical algorithm capable of exploiting the extra flexibility of the active setting and provably improving upon the classical passive techniques. Our active learning theory and methods show promise in a number of applications, including field estimation using wireless sensor networks and fault line detection. 1
6 0.34612903 30 nips-2005-Assessing Approximations for Gaussian Process Classification
7 0.3458873 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification
8 0.34566548 41 nips-2005-Coarse sample complexity bounds for active learning
9 0.34543255 78 nips-2005-From Weighted Classification to Policy Search
10 0.34470624 137 nips-2005-Non-Gaussian Component Analysis: a Semi-parametric Framework for Linear Dimension Reduction
11 0.34459245 151 nips-2005-Pattern Recognition from One Example by Chopping
12 0.34334382 66 nips-2005-Estimation of Intrinsic Dimensionality Using High-Rate Vector Quantization
13 0.34274906 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery
14 0.3425872 154 nips-2005-Preconditioner Approximations for Probabilistic Graphical Models
15 0.3419795 62 nips-2005-Efficient Estimation of OOMs
16 0.34162018 50 nips-2005-Convex Neural Networks
17 0.34106296 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity
18 0.34094056 144 nips-2005-Off-policy Learning with Options and Recognizers
19 0.34047779 32 nips-2005-Augmented Rescorla-Wagner and Maximum Likelihood Estimation
20 0.33978516 184 nips-2005-Structured Prediction via the Extragradient Method