nips nips2003 nips2003-76 knowledge-graph by maker-knowledge-mining

76 nips-2003-GPPS: A Gaussian Process Positioning System for Cellular Networks


Source: pdf

Author: Anton Schwaighofer, Marian Grigoras, Volker Tresp, Clemens Hoffmann

Abstract: In this article, we present a novel approach to solving the localization problem in cellular networks. The goal is to estimate a mobile user’s position, based on measurements of the signal strengths received from network base stations. Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. In the localization stage, the user’s position can be estimated by maximizing the likelihood of received signal strengths with respect to the position. We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In this article, we present a novel approach to solving the localization problem in cellular networks. [sent-4, score-0.341]

2 The goal is to estimate a mobile user’s position, based on measurements of the signal strengths received from network base stations. [sent-5, score-0.751]

3 Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. [sent-6, score-0.741]

4 In the localization stage, the user’s position can be estimated by maximizing the likelihood of received signal strengths with respect to the position. [sent-7, score-0.7]

5 We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network. [sent-8, score-0.243]

6 All such services crucially depend on methods to accurately estimate the position of the mobile user within the network (“localization”, “positioning”). [sent-14, score-0.309]

7 We proceed by introducing the localization problem in detail in Sec. [sent-19, score-0.277]

8 Sec. 3 shows how the required calibration stage of the system can be performed in an optimal manner. [sent-25, score-0.584]

9 We show that the GPPS gives accurate location estimates, in particular when only very few calibration measurements are available. [sent-28, score-0.671]

10 1 Problem Description Our overall goal is to develop a localization system for indoor cellular networks, that is (in order to minimize cost) based solely on existing standard networking hardware. [sent-30, score-0.546]

11 Location estimates can be based on different characteristics of the radio signal received at the mobile station (i. [sent-31, score-0.619]

12 Yet, in most hardware, the only available information about the radio signal is the received signal strength. [sent-34, score-0.453]

13 Information like phase or propagation time from the base station requires additional hardware, and thus cannot be used. [sent-35, score-0.463]

14 In general, estimating the user’s position based only on measurements of the signal strength is known to be a very challenging task [7], in particular in indoor networks. [sent-36, score-0.546]

15 Due to reflections, refraction, and scattering of the electromagnetic waves along structures of the building, the received signal is only a distorted version of the transmitted signal. [sent-37, score-0.263]

16 Also, the localization system ought to be robust, since base stations may fail, be switched off, or may be temporarily shielded for unknown reasons. [sent-43, score-0.62]

17 In these cases, a sensible localization system should not draw the conclusion that the user is far from the respective base station. [sent-44, score-0.6]

18 Due to the complex signal propagation behaviour, almost all previous approaches to indoor localization use an initial calibration stage. [sent-45, score-1.059]

19 Calibration here means that signal strengths received from the network base stations are measured at a number of points inside the building. [sent-46, score-0.761]

20 Systems differ in their ways of using this calibration data. [sent-47, score-0.522]

21 In a “forward modelling” approach, a model of signal strength as a function of position is built first. [sent-49, score-0.334]

22 The localization procedure then tries to find the location which best agrees with the measured signal strengths. [sent-50, score-0.452]

23 Alternatively, the mapping from signal strengths to position can be modelled directly (“inverse modelling”). [sent-51, score-0.291]

24 The RADAR system [1], one of the first indoor localization systems, is an inverse modelling approach using a nearest neighbor technique. [sent-52, score-0.515]

25 [7] build simple probabilistic models from the calibration data (forward modelling), in conjunction with maximum likelihood position estimation. [sent-53, score-0.681]

26 The key idea of the Gaussian process positioning system (GPPS) is to use Gaussian process models for the signal strength received from each base station, and to obtain position estimates via maximum likelihood, i.e. [sent-59, score-0.933]

27 by searching for the position which best fits the measured signal strengths. [sent-61, score-0.249]

28 Consider a cellular network with a total of B base stations. [sent-62, score-0.349]

29 Assume that, for each of the B base stations, we have a probabilistic model that describes the distribution of received signal strength. [sent-63, score-0.477]

30 More formally, we denote by p_j(s_j | t) the likelihood of receiving a signal strength s_j from the j-th base station at position t. [sent-64, score-0.778]

31 With the models p_j(s_j | t), j = 1, . . . , B given, localization can be done in a straightforward way. [sent-68, score-0.246]

32 The user reports a vector s (of length B) of signal strength measurements for all base stations. [sent-69, score-0.637]

33 It may occur that no signal is received from some base stations (indicated by s_j = 0), e.g. [sent-70, score-0.594]

34 , because the user is too far from this base station, or due to hardware failure. [sent-72, score-0.335]

35 In Eq. (1), we only use the likelihood contributions of those base stations that are actually received. [sent-74, score-0.358]

36 Alternatively, one could use a very low signal strength as a default value for each base station that is not received [7]. [sent-75, score-0.807]

37 We found that this can give high errors if a base station close to the user fails, since now the low default value indicates that one should expect the user to be far from the base station. [sent-76, score-0.823]
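As a hedged illustration of this localization step, the sketch below implements a toy version of Eq. (1): per-station Gaussian log-likelihoods are summed over the stations actually received (s_j = 0 terms are skipped, as the text recommends), and the position estimate is the grid point with the highest total log-likelihood. The station layout, the propagation model, and all numbers are invented, not taken from the paper.

```python
import numpy as np

# Invented station positions for a toy 100 x 100 m area.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])

def predictive(t, station):
    """Toy predictive model: mean decays linearly with distance, fixed variance."""
    d = np.linalg.norm(t - station)
    return -30.0 - 0.5 * d, 9.0  # mean (dB), variance

def log_likelihood(t, s):
    """Sum log-likelihoods over stations actually received (s[j] != 0)."""
    ll = 0.0
    for j, sj in enumerate(s):
        if sj == 0:  # station not received: drop its term, as in Eq. (1)
            continue
        mu, var = predictive(t, stations[j])
        ll += -0.5 * np.log(2 * np.pi * var) - (sj - mu) ** 2 / (2 * var)
    return ll

# Simulated noiseless measurement from true position (50, 40);
# station 2 "fails" and reports no signal.
true_t = np.array([50.0, 40.0])
s = np.array([predictive(true_t, st)[0] for st in stations])
s[2] = 0.0

# Brute-force grid search for the maximum-likelihood position estimate.
xs, ys = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
ll = np.array([log_likelihood(t, s) for t in grid])
t_hat = grid[np.argmax(ll)]
print(t_hat)
```

With noiseless data from the two remaining stations, the maximizer coincides with the true position, illustrating why dropping unreceived stations is more robust than substituting a low default signal strength.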

38 Yet, we still need to define and build suitable base station models p_j(s_j | t), j = 1, . . . , B. [sent-78, score-0.468]

39 In the GPPS, we use Gaussian process (GP) models for this task, where each base station model is estimated from the calibration data. [sent-82, score-1.003]

40 Secondly, GPs are a nonparametric method that can flexibly adapt to the complex signal propagation behaviour observed in indoor cellular networks. [sent-85, score-0.401]

41 In the following sections, we will describe the GP models in more detail, and also discuss the choice of kernel function, which is of great importance in order to build an accurate localization system. [sent-92, score-0.329]

42 1 Gaussian Process Models for Signal Strengths In the GPPS, a Gaussian process (GP) approach is used for the models p_j(s_j | t) that describe the signal strength received from a single base station j. [sent-94, score-0.837]

43 Recall from Sec. 1 that the proposed GPPS is based on a set of calibration measurements, where the signal strength is measured at a number of points spread over the area to be covered. [sent-98, score-0.818]

44 Consider now the calibration data for a single base station j. [sent-99, score-0.955]

45 We denote this calibration data by D_j = {(x_i, y_i)}_{i=1}^N, meaning that a signal strength of y_i has been measured at point x_i, with a total of N calibration measurements. [sent-100, score-1.358]

46 That is, the measured signal strength y_i is composed of a “true” signal strength s(x_i) plus independent Gaussian (measurement) noise e_i of variance σ², with y_i = s(x_i) + e_i. [sent-103, score-0.602]

47 The Gaussian process assumption for the true signal s implies that the true signal strengths for all calibration points are jointly Gaussian. (Footnote 1: Assuming independence of the individual measurements.) [sent-104, score-0.9]

48 One could also use a solution inspired by co-kriging, which takes into account the full dependence between signals received from different base stations. [sent-105, score-0.335]

49 Given the calibration data D_j, the predictive distribution for the signal strength s_j received at some arbitrary point t turns out to be Gaussian. [sent-115, score-0.912]
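This Gaussian predictive distribution can be sketched with the standard GP regression formulas. The example below is a minimal, self-contained illustration only: it uses an RBF kernel in place of the Matérn kernel the paper discusses, a zero prior mean instead of the fitted linear mean function, and invented calibration data.

```python
import numpy as np

def rbf(A, B, length=20.0, amp=25.0):
    """RBF kernel (stand-in for the Matérn kernel of the paper)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return amp * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(30, 2))                 # calibration points x_i
f = -40.0 - 0.4 * np.linalg.norm(X - 50.0, axis=1)    # invented "true" signal s(x_i)
sigma2 = 1.0
y = f + rng.normal(0, np.sqrt(sigma2), 30)            # noisy measurements y_i

Q = rbf(X, X) + sigma2 * np.eye(30)                   # Q = K + sigma^2 I
alpha = np.linalg.solve(Q, y)

def predict(t):
    """Mean and variance of the Gaussian predictive distribution at t."""
    k = rbf(np.atleast_2d(t), X)                      # cross-covariances k(t, x_i)
    mean = (k @ alpha).item()
    var = (rbf(np.atleast_2d(t), np.atleast_2d(t)) - k @ np.linalg.solve(Q, k.T)).item()
    return mean, var

m, v = predict(np.array([50.0, 50.0]))
```

The predictive mean interpolates nearby calibration measurements, and the predictive variance is reduced below the prior variance wherever calibration data is close.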

50 We set them by maximizing the marginal likelihood of the calibration data with respect to the model parameters, which turns out to be [6] (σ̂², θ̂) = arg max_{σ², θ} ( − log det Q − yᵀ Q⁻¹ y ). [sent-128, score-0.564]

51 (4) The model parameters (σ̂², θ̂) are set individually for each base station. [sent-129, score-0.214]
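A minimal sketch of this fitting step, assuming a brute-force search over a handful of invented candidate values for (σ², θ) rather than whatever gradient-based optimizer the authors actually used; the kernel, data, and candidate grid are all illustrative.

```python
import numpy as np

def rbf(A, B, length):
    """Unit-amplitude RBF kernel; theta here is just the length scale."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

# Invented calibration data: a GP sample with known parameters plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(25, 2))
true_len, true_sig2 = 15.0, 0.5
L = np.linalg.cholesky(rbf(X, X, true_len) + 1e-5 * np.eye(25))
y = L @ rng.normal(size=25) + rng.normal(0, np.sqrt(true_sig2), 25)

def score(sigma2, length):
    """Log marginal likelihood of Eq. (4): -log det Q - y' Q^{-1} y."""
    Q = rbf(X, X, length) + sigma2 * np.eye(25)
    sign, logdet = np.linalg.slogdet(Q)
    return -logdet - y @ np.linalg.solve(Q, y)

# Per-base-station model selection by exhaustive search over candidates.
candidates = [(s2, l) for s2 in (0.1, 0.5, 2.0) for l in (5.0, 15.0, 40.0)]
best = max(candidates, key=lambda p: score(*p))
```

In practice one would optimize Eq. (4) with gradients per base station; the grid search only makes the objective explicit.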

52 Firstly, it must be noted that taking calibration measurements is a very time-consuming (and thus expensive) task. [sent-152, score-0.631]

53 The number of calibration points must thus be kept as low as possible, while retaining high localization accuracy. [sent-153, score-0.768]

54 In the actual GPPS, the GP mean is a linear function of the distance to the base station (when signal strength is given on a logarithmic scale). [sent-161, score-0.699]

55 The starting point is the calibration data, with a total of C measurements. [sent-163, score-0.522]

56 , C}, we receive a signal strength of c_ij from base station j, j ∈ {1, . [sent-167, score-0.025]

57 , B}, or c_ij = 0 if base station j has not been received at x_i (for example, due to signal obstruction). [sent-170, score-0.738]

58 The calibration data is then split into subsets D_j containing those points where base station j has actually been received, i.e. [sent-172, score-0.992]

59 For each base station, that is, for each data set D_j, we proceed as follows: 1. [sent-177, score-0.23]

60 Often, the exact position of base station j is not known. [sent-178, score-0.516]

61 In this case, we use a simple estimate for the base station position, that is, the average of the 3 calibration points x_i with maximum signal strength y_i. [sent-179, score-1.263]

62 In particular with sparse calibration measurements, more sophisticated estimates for the base station position are difficult to come up with. [sent-181, score-1.068]
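This simple estimate can be sketched in a few lines; the coordinates and signal strengths below are invented for illustration.

```python
import numpy as np

# Invented calibration points and received signal strengths (dB).
x = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [50., 50.]])
y = np.array([-35., -40., -42., -55., -80.])

# Average the 3 calibration points with the strongest received signal.
top3 = np.argsort(y)[-3:]                  # indices of the 3 strongest
station_estimate = x[top3].mean(axis=0)
```

With sparse calibration data, such a crude estimate is about as much as the measurements support, which matches the remark above.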

63 Compute the distance of each calibration point to the base station (using either the exact or the estimated position obtained in step 1). [sent-183, score-1.073]

64 As the mean function of the GP model, we fit a linear model to the received signal strength as a function of distance to the base station. [sent-184, score-0.601]

65 Subtract the value of the mean function from the measured signal strengths. (Footnote 2: When setting up the network, or after modifying the network by moving base stations, the base station positions are often not recorded.) [sent-185, score-0.702]
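The distance computation, linear mean fit, and subtraction steps can be sketched as follows; the station location, slope, and data are all invented, and the fit stands in for whatever linear model the authors use on the log scale.

```python
import numpy as np

# Invented base station location and calibration data.
station = np.array([0.0, 0.0])
x = np.random.default_rng(2).uniform(0, 100, size=(40, 2))

# Distance of each calibration point to the (known or estimated) station.
d = np.linalg.norm(x - station, axis=1)

# Invented log-scale signal strengths decaying linearly with distance.
y = -30.0 - 0.4 * d + np.random.default_rng(3).normal(0, 1.0, 40)

# Fit the linear mean function and subtract it; the GP then models
# only the residuals around this distance-dependent mean.
slope, intercept = np.polyfit(d, y, deg=1)
residuals = y - (slope * d + intercept)
```

Modelling the residuals rather than the raw strengths lets the GP capture local deviations from the overall distance-dependent decay.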

66 In a large assembly hall of 250 × 180 meters, measurements of signal strengths received from DECT base stations were made at 650 points spread over the hall. [sent-193, score-0.806]

67 We observed a very high fluctuation of received signals (up to ±10 dB when repeating measurements, while the total signal range is only −30 to −90 dB), both due to measurement noise, and due to dynamical changes of the environment. [sent-196, score-0.283]

68 We compare the GPPS with a nearest neighbor based localization system (abbreviated by NNLoc in the following), that is quite similar to the RADAR [1] approach. [sent-197, score-0.369]

69 This system finds the calibration measurements that best match the signal strength received at test time. [sent-198, score-1.03]

70 Dense Calibration Points In a first experiment, we investigate the achievable precision of location estimates when using the full set of calibration measurements. [sent-201, score-0.592]

71 The total set of measurements is split up into five equally sized parts, where four of these parts were used as the calibration set. [sent-203, score-0.631]

72 We found that, in this setting, the nearest neighbor based method NNLoc works very well, providing an average localization error of 7 meters. [sent-206, score-0.326]

73 With the GPPS, localization is typically based on around 15 base stations, that is, 15 likelihood terms contributing to Eq. (1). [sent-209, score-0.487]

74 Unfortunately, such a high number of calibration measurements is unlikely to be available in practice. [sent-211, score-0.631]

75 Taking calibration measurements is a very costly process, in particular if larger areas need to be covered. [sent-212, score-0.631]

76 Thus, one is very much interested in keeping the number of calibration points as low as possible. [sent-213, score-0.559]

77 Experiments with Sparse Calibration Points In the second experimental setup, we aim at building the positioning system with only a minimal number of calibration points. [sent-214, score-0.739]

78 The localization system is built based on these C̃ points and evaluated on the fifth part of the data. [sent-217, score-0.342]

79 Out of the given calibration measurements, we select those C̃ points that are closest (in terms of Euclidean distance) to the grid points. [sent-220, score-0.6]
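The grid-based selection can be sketched as follows, assuming a square grid for simplicity (the paper also considers hexagonal grids); all coordinates are invented.

```python
import numpy as np

# Invented full calibration set: 650 points over a 250 x 250 m area.
rng = np.random.default_rng(4)
calib = rng.uniform(0, 250, size=(650, 2))

# Lay a coarse square grid of 12 nodes over the area.
gx, gy = np.meshgrid(np.linspace(25, 225, 4), np.linspace(25, 225, 3))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# For each grid node, keep the closest available calibration measurement
# (Euclidean distance), de-duplicating points picked by several nodes.
dists = np.linalg.norm(calib[:, None, :] - grid[None, :, :], axis=2)
selected = np.unique(dists.argmin(axis=0))
subset = calib[selected]
```

The result is a small, roughly evenly spread subset of the calibration measurements, mirroring the sparse-calibration experiments.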

80 In Fig. 1 we plot the localization accuracy, averaged over the 5-fold cross validation, of the GPPS and the nearest neighbor based system built on only C̃ calibration points. Footnote 4: We also investigated localization using Eq. [sent-222, score-1.169]

81 (1) with a simplistic propagation model, where the expected signal (on log scale) is a linear function of the distance to the base station. [sent-223, score-0.402]

82 Yet, this approach led to very poor localization accuracy, and is thus not considered in more detail here. [sent-224, score-0.261]

83 Figure 1: Mean localization error of the GPPS and the NNLoc method, as a function of the number of calibration points used. [sent-225, score-0.805]

84 Vertical bars indicate ±1 standard deviation of the mean localization error. [sent-226, score-0.261]

85 The calibration points are either selected at random, or according to an optimal design criterion, with C̃ ∈ {100, 50, 25, 12} calibration measurements. [sent-227, score-1.16]

86 It can be clearly seen that the GPPS system (with optimal design) achieves a high precision for its location estimates, even when using only a minimal number of calibration measurements. [sent-228, score-0.638]

87 With only 12 calibration measurements, GPPS achieves an average error of around 17 meters, while the competing method reaches only 29 meters at best. [sent-229, score-0.553]

88 In this setting, the average distance between calibration measurements is around 75 meters. [sent-230, score-0.647]

89 Both the NNLoc system and the GPPS system show large performance improvements when the calibration points are selected according to the optimal design instead of purely at random. [sent-231, score-0.664]

90 Also, note that the localization error of the GPPS system degrades only slowly when the number of calibration measurements is reduced. [sent-232, score-0.92]

91 In contrast, the curves for the nearest neighbor based method show a sharper increase of positioning error. [sent-233, score-0.211]

92 It is worth noting that the choice of kernel functions has a strong impact on the localization accuracy of the GPPS. [sent-234, score-0.323]

93 It is also interesting to consider different methods for selecting the calibration points. [sent-241, score-0.522]

94 Fig. 2(b) plots the accuracy obtained with GPPS when calibration points are placed randomly, on a hexagonal grid (the theoretically optimal procedure), or on a square grid. [sent-243, score-0.748]

95 Somewhat counterintuitively, a square grid for calibration gives performance that is at best as good as, and sometimes worse than, a random grid. [sent-244, score-0.584]

96 In contrast, localization with NNLoc performs about the same with either a hexagonal or a square grid (this is not plotted in the figure). [sent-245, score-0.387]

97 5 Conclusions In this article, we presented a novel approach to solving the localization problem in indoor cellular networks. [sent-246, score-0.5]

98 Gaussian process (GP) models with the Matérn kernel function were used as models for individual base stations, so that location estimates could be computed using maximum likelihood. [sent-247, score-0.459]

99 We showed that this new Gaussian process positioning system (GPPS) can provide sufficiently high accuracy when used within a DECT network. [sent-248, score-0.231]

100 Furthermore, we showed how calibration points can be optimally chosen in order to provide high accuracy position estimates. [sent-250, score-0.671]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('gpps', 0.541), ('calibration', 0.522), ('localization', 0.246), ('station', 0.219), ('base', 0.214), ('gp', 0.142), ('signal', 0.142), ('positioning', 0.131), ('received', 0.121), ('mat', 0.119), ('indoor', 0.119), ('stations', 0.117), ('measurements', 0.109), ('dect', 0.105), ('cellular', 0.095), ('strength', 0.093), ('position', 0.083), ('user', 0.079), ('nnloc', 0.075), ('strengths', 0.066), ('wlan', 0.06), ('hexagonal', 0.06), ('rn', 0.059), ('mobile', 0.059), ('wireless', 0.052), ('kernel', 0.048), ('radio', 0.048), ('services', 0.048), ('schwaighofer', 0.045), ('system', 0.043), ('neighbor', 0.043), ('hardware', 0.042), ('grid', 0.041), ('design', 0.041), ('network', 0.04), ('location', 0.04), ('bessel', 0.039), ('nearest', 0.037), ('points', 0.037), ('predictive', 0.034), ('gaussian', 0.033), ('meters', 0.031), ('propagation', 0.03), ('radar', 0.03), ('estimates', 0.03), ('accuracy', 0.029), ('building', 0.029), ('process', 0.028), ('article', 0.027), ('likelihood', 0.027), ('modelling', 0.027), ('gradients', 0.026), ('anton', 0.026), ('networking', 0.026), ('austria', 0.026), ('fth', 0.026), ('ci', 0.025), ('rbf', 0.025), ('networks', 0.025), ('db', 0.025), ('measured', 0.024), ('materials', 0.024), ('graz', 0.024), ('yet', 0.023), ('square', 0.021), ('noise', 0.021), ('paths', 0.021), ('measurement', 0.02), ('models', 0.02), ('either', 0.019), ('environment', 0.019), ('yi', 0.019), ('optimal', 0.019), ('validation', 0.018), ('smoothness', 0.018), ('dk', 0.018), ('alternatively', 0.018), ('evaluation', 0.018), ('sensible', 0.018), ('default', 0.018), ('xi', 0.017), ('firstly', 0.017), ('ei', 0.017), ('solely', 0.017), ('derivatives', 0.017), ('built', 0.016), ('cross', 0.016), ('tting', 0.016), ('proceed', 0.016), ('distance', 0.016), ('xn', 0.015), ('detail', 0.015), ('mean', 0.015), ('build', 0.015), ('respect', 0.015), ('variance', 0.015), ('secondly', 0.015), ('behaviour', 0.015), ('minimal', 0.014), ('conjunction', 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 76 nips-2003-GPPS: A Gaussian Process Positioning System for Cellular Networks

Author: Anton Schwaighofer, Marian Grigoras, Volker Tresp, Clemens Hoffmann

Abstract: In this article, we present a novel approach to solving the localization problem in cellular networks. The goal is to estimate a mobile user’s position, based on measurements of the signal strengths received from network base stations. Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. In the localization stage, the user’s position can be estimated by maximizing the likelihood of received signal strengths with respect to the position. We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network. 1

2 0.13006499 141 nips-2003-Nonstationary Covariance Functions for Gaussian Process Regression

Author: Christopher J. Paciorek, Mark J. Schervish

Abstract: We introduce a class of nonstationary covariance functions for Gaussian process (GP) regression. Nonstationary covariance functions allow the model to adapt to functions whose smoothness varies with the inputs. The class includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in advance. In experiments, the nonstationary GP regression model performs well when the input space is two or three dimensions, outperforming a neural network model and Bayesian free-knot spline models, and competitive with a Bayesian neural network, but is outperformed in one dimension by a state-of-the-art Bayesian free-knot spline model. The model readily generalizes to non-Gaussian data. Use of computational methods for speeding GP fitting may allow for implementation of the method on larger datasets. 1

3 0.11667972 194 nips-2003-Warped Gaussian Processes

Author: Edward Snelson, Zoubin Ghahramani, Carl E. Rasmussen

Abstract: We generalise the Gaussian process (GP) framework for regression by learning a nonlinear transformation of the GP outputs. This allows for non-Gaussian processes and non-Gaussian noise. The learning algorithm chooses a nonlinear transformation such that transformed data is well-modelled by a GP. This can be seen as including a preprocessing transformation as an integral part of the probabilistic modelling problem, rather than as an ad-hoc step. We demonstrate on several real regression problems that learning the transformation can lead to significantly better performance than using a regular GP, or a GP with a fixed transformation. 1

4 0.098176688 78 nips-2003-Gaussian Processes in Reinforcement Learning

Author: Malte Kuss, Carl E. Rasmussen

Abstract: We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.

5 0.083724774 182 nips-2003-Subject-Independent Magnetoencephalographic Source Localization by a Multilayer Perceptron

Author: Sung C. Jun, Barak A. Pearlmutter

Abstract: We describe a system that localizes a single dipole to reasonable accuracy from noisy magnetoencephalographic (MEG) measurements in real time. At its core is a multilayer perceptron (MLP) trained to map sensor signals and head position to dipole location. Including head position overcomes the previous need to retrain the MLP for each subject and session. The training dataset was generated by mapping randomly chosen dipoles and head positions through an analytic model and adding noise from real MEG recordings. After training, a localization took 0.7 ms with an average error of 0.90 cm. A few iterations of a Levenberg-Marquardt routine using the MLP’s output as its initial guess took 15 ms and improved the accuracy to 0.53 cm, only slightly above the statistical limits on accuracy imposed by the noise. We applied these methods to localize single dipole sources from MEG components isolated by blind source separation and compared the estimated locations to those generated by standard manually-assisted commercial software. 1

6 0.077604033 170 nips-2003-Self-calibrating Probability Forecasting

7 0.064546615 15 nips-2003-A Probabilistic Model of Auditory Space Representation in the Barn Owl

8 0.057766579 55 nips-2003-Distributed Optimization in Adaptive Networks

9 0.0483182 5 nips-2003-A Classification-based Cocktail-party Processor

10 0.045967247 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms

11 0.045848746 162 nips-2003-Probabilistic Inference of Speech Signals from Phaseless Spectrograms

12 0.041879524 115 nips-2003-Linear Dependent Dimensionality Reduction

13 0.041214634 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games

14 0.038845651 79 nips-2003-Gene Expression Clustering with Functional Mixture Models

15 0.037445735 157 nips-2003-Plasticity Kernels and Temporal Statistics

16 0.036975402 94 nips-2003-Information Maximization in Noisy Channels : A Variational Approach

17 0.036317859 114 nips-2003-Limiting Form of the Sample Covariance Eigenspectrum in PCA and Kernel PCA

18 0.035876103 112 nips-2003-Learning to Find Pre-Images

19 0.035462327 35 nips-2003-Attractive People: Assembling Loose-Limbed Models using Non-parametric Belief Propagation

20 0.035056617 104 nips-2003-Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.136), (1, 0.024), (2, 0.021), (3, 0.011), (4, -0.024), (5, 0.091), (6, 0.078), (7, -0.115), (8, 0.043), (9, 0.114), (10, -0.084), (11, 0.139), (12, 0.048), (13, -0.049), (14, -0.022), (15, -0.113), (16, -0.077), (17, 0.013), (18, -0.05), (19, 0.021), (20, 0.026), (21, 0.006), (22, -0.095), (23, 0.059), (24, 0.094), (25, -0.054), (26, -0.111), (27, 0.085), (28, -0.015), (29, 0.086), (30, 0.017), (31, -0.005), (32, -0.063), (33, 0.012), (34, 0.048), (35, 0.01), (36, 0.054), (37, -0.006), (38, -0.07), (39, 0.051), (40, -0.014), (41, -0.071), (42, 0.047), (43, 0.072), (44, 0.079), (45, 0.029), (46, 0.008), (47, -0.021), (48, 0.079), (49, 0.073)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93281198 76 nips-2003-GPPS: A Gaussian Process Positioning System for Cellular Networks

Author: Anton Schwaighofer, Marian Grigoras, Volker Tresp, Clemens Hoffmann

Abstract: In this article, we present a novel approach to solving the localization problem in cellular networks. The goal is to estimate a mobile user’s position, based on measurements of the signal strengths received from network base stations. Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. In the localization stage, the user’s position can be estimated by maximizing the likelihood of received signal strengths with respect to the position. We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network. 1

2 0.68315375 194 nips-2003-Warped Gaussian Processes

Author: Edward Snelson, Zoubin Ghahramani, Carl E. Rasmussen

Abstract: We generalise the Gaussian process (GP) framework for regression by learning a nonlinear transformation of the GP outputs. This allows for non-Gaussian processes and non-Gaussian noise. The learning algorithm chooses a nonlinear transformation such that transformed data is well-modelled by a GP. This can be seen as including a preprocessing transformation as an integral part of the probabilistic modelling problem, rather than as an ad-hoc step. We demonstrate on several real regression problems that learning the transformation can lead to significantly better performance than using a regular GP, or a GP with a fixed transformation. 1

3 0.65565759 141 nips-2003-Nonstationary Covariance Functions for Gaussian Process Regression

Author: Christopher J. Paciorek, Mark J. Schervish

Abstract: We introduce a class of nonstationary covariance functions for Gaussian process (GP) regression. Nonstationary covariance functions allow the model to adapt to functions whose smoothness varies with the inputs. The class includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in advance. In experiments, the nonstationary GP regression model performs well when the input space is two or three dimensions, outperforming a neural network model and Bayesian free-knot spline models, and competitive with a Bayesian neural network, but is outperformed in one dimension by a state-of-the-art Bayesian free-knot spline model. The model readily generalizes to non-Gaussian data. Use of computational methods for speeding GP fitting may allow for implementation of the method on larger datasets. 1

4 0.45522922 153 nips-2003-Parameterized Novelty Detectors for Environmental Sensor Monitoring

Author: Cynthia Archer, Todd K. Leen, António M. Baptista

Abstract: As part of an environmental observation and forecasting system, sensors deployed in the Columbia River Estuary (CORIE) gather information on physical dynamics and changes in estuary habitat. Of these, salinity sensors are particularly susceptible to biofouling, which gradually degrades sensor response and corrupts critical data. Automatic fault detectors have the capability to identify bio-fouling early and minimize data loss. Complicating the development of discriminatory classifiers is the scarcity of bio-fouling onset examples and the variability of the bio-fouling signature. To solve these problems, we take a novelty detection approach that incorporates a parameterized bio-fouling model. These detectors identify the occurrence of bio-fouling, and its onset time as reliably as human experts. Real-time detectors installed during the summer of 2001 produced no false alarms, yet detected all episodes of sensor degradation before the field staff scheduled these sensors for cleaning. From this initial deployment through February 2003, our bio-fouling detectors have essentially doubled the amount of useful data coming from the CORIE sensors. 1

5 0.43449229 15 nips-2003-A Probabilistic Model of Auditory Space Representation in the Barn Owl

Author: Brian J. Fischer, Charles H. Anderson

Abstract: The barn owl is a nocturnal hunter, capable of capturing prey using auditory information alone [1]. The neural basis for this localization behavior is the existence of auditory neurons with spatial receptive fields [2]. We provide a mathematical description of the operations performed on auditory input signals by the barn owl that facilitate the creation of a representation of auditory space. To develop our model, we first formulate the sound localization problem solved by the barn owl as a statistical estimation problem. The implementation of the solution is constrained by the known neurobiology.

6 0.41781464 78 nips-2003-Gaussian Processes in Reinforcement Learning

7 0.40554246 182 nips-2003-Subject-Independent Magnetoencephalographic Source Localization by a Multilayer Perceptron

8 0.40254501 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel

9 0.37515453 21 nips-2003-An Autonomous Robotic System for Mapping Abandoned Mines

10 0.34375384 170 nips-2003-Self-calibrating Probability Forecasting

11 0.33531448 57 nips-2003-Dynamical Modeling with Kernels for Nonlinear Time Series Prediction

12 0.33190763 55 nips-2003-Distributed Optimization in Adaptive Networks

13 0.33138511 166 nips-2003-Reconstructing MEG Sources with Unknown Correlations

14 0.30394548 162 nips-2003-Probabilistic Inference of Speech Signals from Phaseless Spectrograms

15 0.29063755 131 nips-2003-Modeling User Rating Profiles For Collaborative Filtering

16 0.28966409 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

17 0.27427691 187 nips-2003-Training a Quantum Neural Network

18 0.27391547 80 nips-2003-Generalised Propagation for Fast Fourier Transforms with Partial or Missing Data

19 0.27127624 5 nips-2003-A Classification-based Cocktail-party Processor

20 0.26633304 144 nips-2003-One Microphone Blind Dereverberation Based on Quasi-periodicity of Speech Signals


similar papers computed by the LDA model

LDA topic weights for this paper:

topicId topicWeight

[(0, 0.034), (11, 0.027), (29, 0.016), (30, 0.031), (35, 0.048), (49, 0.363), (53, 0.104), (69, 0.019), (71, 0.066), (76, 0.04), (85, 0.063), (91, 0.078), (99, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77134234 76 nips-2003-GPPS: A Gaussian Process Positioning System for Cellular Networks

Author: Anton Schwaighofer, Marian Grigoras, Volker Tresp, Clemens Hoffmann

Abstract: In this article, we present a novel approach to solving the localization problem in cellular networks. The goal is to estimate a mobile user’s position, based on measurements of the signal strengths received from network base stations. Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. In the localization stage, the user’s position can be estimated by maximizing the likelihood of received signal strengths with respect to the position. We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network. 1
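The method the abstract outlines, one Gaussian process per base station mapping position to expected signal strength, followed by maximum-likelihood search over candidate positions, can be sketched as below. This is an illustrative reimplementation, not the authors' code: the squared-exponential kernel, the hyperparameter values, and the grid search over candidates are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=2.0, variance=1.0):
    # Squared-exponential covariance between position sets A (n,2) and B (m,2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

class GPSignalModel:
    # One GP regression model per base station: position -> signal strength,
    # fitted on calibration measurements (X: positions, y: strengths).
    def __init__(self, X, y, noise=0.5):
        self.X, self.noise = X, noise
        K = rbf_kernel(X, X) + noise ** 2 * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, Xs):
        # Posterior mean and variance of the signal strength at positions Xs.
        Ks = rbf_kernel(Xs, self.X)
        mu = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = 1.0 - (v ** 2).sum(0) + self.noise ** 2
        return mu, var

def locate(models, observed, candidates):
    # Localization stage: pick the candidate position that maximizes the
    # joint Gaussian log-likelihood of the observed strengths under all models.
    ll = np.zeros(len(candidates))
    for m, s in zip(models, observed):
        mu, var = m.predict(candidates)
        ll += -0.5 * np.log(2 * np.pi * var) - 0.5 * (s - mu) ** 2 / var
    return candidates[np.argmax(ll)]
```

In a real system the candidate grid would be replaced by a continuous optimizer and the hyperparameters learned from the calibration data.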

2 0.73980689 42 nips-2003-Bounded Finite State Controllers

Author: Pascal Poupart, Craig Boutilier

Abstract: We describe a new approximation algorithm for solving partially observable MDPs. Our bounded policy iteration approach searches through the space of bounded-size, stochastic finite state controllers, combining several advantages of gradient ascent (efficiency, search through restricted controller space) and policy iteration (less vulnerability to local optima).

3 0.7121594 40 nips-2003-Bias-Corrected Bootstrap and Model Uncertainty

Author: Harald Steck, Tommi S. Jaakkola

Abstract: The bootstrap has become a popular method for exploring model (structure) uncertainty. Our experiments with artificial and realworld data demonstrate that the graphs learned from bootstrap samples can be severely biased towards too complex graphical models. Accounting for this bias is hence essential, e.g., when exploring model uncertainty. We find that this bias is intimately tied to (well-known) spurious dependences induced by the bootstrap. The leading-order bias-correction equals one half of Akaike’s penalty for model complexity. We demonstrate the effect of this simple bias-correction in our experiments. We also relate this bias to the bias of the plug-in estimator for entropy, as well as to the difference between the expected test and training errors of a graphical model, which asymptotically equals Akaike’s penalty (rather than one half). 1
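The abstract ties the bootstrap bias to the bias of the plug-in entropy estimator, with a leading-order correction equal to half of Akaike's penalty. A small hedged illustration of that connection is the classical Miller-Madow term (K-1)/(2N) added to the plug-in entropy; this stands in for the half-penalty idea and is not the paper's bootstrap procedure itself.

```python
import numpy as np

def plugin_entropy(counts):
    # Maximum-likelihood ("plug-in") entropy estimate from category counts.
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def miller_madow_entropy(counts):
    # Plug-in estimate plus the leading-order bias correction (K-1)/(2N),
    # where K is the number of observed categories and N the sample size.
    K = int((counts > 0).sum())
    N = int(counts.sum())
    return plugin_entropy(counts) + (K - 1) / (2 * N)
```

Averaged over many small samples, the plug-in estimate systematically undershoots the true entropy, while the corrected estimate is nearly unbiased.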

4 0.6163438 112 nips-2003-Learning to Find Pre-Images

Author: Jason Weston, Bernhard Schölkopf, Gökhan H. Bakir

Abstract: We consider the problem of reconstructing patterns from a feature map. Learning algorithms using kernels to operate in a reproducing kernel Hilbert space (RKHS) express their solutions in terms of input points mapped into the RKHS. We introduce a technique based on kernel principal component analysis and regression to reconstruct corresponding patterns in the input space (aka pre-images) and review its performance in several applications requiring the construction of pre-images. The introduced technique avoids difficult and/or unstable numerical optimization, is easy to implement and, unlike previous methods, permits the computation of pre-images in discrete input spaces. 1
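A rough sketch of the regression-based pre-image idea the abstract describes: project training points with kernel PCA, then learn a regression from the feature-space coordinates back to input space, so a feature-space point can be mapped to a pre-image without numerical optimization. The RBF kernel, the ridge regressor, and all parameter values here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian RBF kernel between point sets A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca(X, n_comp=2, gamma=0.5):
    # Kernel PCA coordinates of the training points.
    n = len(X)
    K = rbf(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                       # centre in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_comp]   # top components
    A = V[:, idx] / np.sqrt(w[idx])
    return Kc @ A                        # projected training points

def fit_preimage(Z, X, lam=1e-3):
    # Ridge regression from KPCA coordinates Z back to inputs X.
    Zb = np.hstack([Z, np.ones((len(Z), 1))])
    return np.linalg.solve(Zb.T @ Zb + lam * np.eye(Zb.shape[1]), Zb.T @ X)

def preimage(W, z):
    # Map a feature-space coordinate vector to its estimated pre-image.
    return np.append(z, 1.0) @ W
```

Because the pre-image map is just a learned regressor, the same construction works for discrete input spaces, where gradient-based pre-image search is not available.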

5 0.44027433 161 nips-2003-Probabilistic Inference in Human Sensorimotor Processing

Author: Konrad P. Körding, Daniel M. Wolpert

Abstract: When we learn a new motor skill, we have to contend with both the variability inherent in our sensors and the task. The sensory uncertainty can be reduced by using information about the distribution of previously experienced tasks. Here we impose a distribution on a novel sensorimotor task and manipulate the variability of the sensory feedback. We show that subjects internally represent both the distribution of the task as well as their sensory uncertainty. Moreover, they combine these two sources of information in a way that is qualitatively predicted by optimal Bayesian processing. We further analyze if the subjects can represent multimodal distributions such as mixtures of Gaussians. The results show that the CNS employs probabilistic models during sensorimotor learning even when the priors are multimodal.
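For a Gaussian prior over the task and Gaussian sensory noise, the optimal Bayesian combination the abstract refers to reduces to precision-weighted averaging. A minimal sketch of that unimodal case follows (the paper's mixture-of-Gaussians experiments need a full mixture posterior, which this does not cover):

```python
def combine_gaussian(prior_mu, prior_var, obs, obs_var):
    # Bayes-optimal fusion of a Gaussian prior with a Gaussian observation:
    # the posterior mean weights each source by its inverse variance
    # (precision), and the posterior variance is smaller than either input.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs / obs_var)
    return post_mu, post_var
```

With equal variances the estimate lands halfway between prior mean and observation; as sensory feedback gets noisier, the estimate shifts toward the prior, which is the qualitative behaviour the subjects exhibit.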

6 0.43762085 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

7 0.43161112 126 nips-2003-Measure Based Regularization

8 0.42882484 113 nips-2003-Learning with Local and Global Consistency

9 0.42683282 107 nips-2003-Learning Spectral Clustering

10 0.42649084 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons

11 0.42646679 78 nips-2003-Gaussian Processes in Reinforcement Learning

12 0.4253864 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

13 0.42537758 9 nips-2003-A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications

14 0.4250055 30 nips-2003-Approximability of Probability Distributions

15 0.42459115 80 nips-2003-Generalised Propagation for Fast Fourier Transforms with Partial or Missing Data

16 0.42338139 47 nips-2003-Computing Gaussian Mixture Models with EM Using Equivalence Constraints

17 0.42335457 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games

18 0.42318171 103 nips-2003-Learning Bounds for a Generalized Family of Bayesian Posterior Distributions

19 0.4231354 143 nips-2003-On the Dynamics of Boosting

20 0.42149395 115 nips-2003-Linear Dependent Dimensionality Reduction