jmlr jmlr2006 jmlr2006-86 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: S. V. N. Vishwanathan, Nicol N. Schraudolph, Alex J. Smola
Abstract: This paper presents an online support vector machine (SVM) that uses the stochastic meta-descent (SMD) algorithm to adapt its step size automatically. We formulate the online learning problem as a stochastic gradient descent in reproducing kernel Hilbert space (RKHS) and translate SMD to the nonparametric setting, where its gradient trace parameter is no longer a coefficient vector but an element of the RKHS. We derive efficient updates that allow us to perform the step size adaptation in linear time. We apply the online SVM framework to a variety of loss functions, and in particular show how to handle structured output spaces and achieve efficient online multiclass classification. Experiments show that our algorithm outperforms more primitive methods for setting the gradient step size. Keywords: online SVM, stochastic meta-descent, structured output spaces
Reference: text
sentIndex sentText sentNum sentScore
1 We formulate the online learning problem as a stochastic gradient descent in reproducing kernel Hilbert space (RKHS) and translate SMD to the nonparametric setting, where its gradient trace parameter is no longer a coefficient vector but an element of the RKHS. [sent-16, score-0.559]
2 We derive efficient updates that allow us to perform the step size adaptation in linear time. [sent-17, score-0.213]
3 We apply the online SVM framework to a variety of loss functions, and in particular show how to handle structured output spaces and achieve efficient online multiclass classification. [sent-18, score-0.529]
4 Keywords: online SVM, stochastic meta-descent, structured output spaces [sent-20, score-0.248]
5 Introduction Stochastic (“online”) gradient methods incrementally update their hypothesis by descending a stochastic approximation of the gradient computed from just the current observation. [sent-21, score-0.305]
6 Much work in this area centers on the key issue of choosing an appropriate time-dependent gradient step size ηt . [sent-24, score-0.181]
7 Though support vector machines (SVMs) were originally conceived as batch techniques with time complexity quadratic to cubic in the training set size, recent years have seen the development of online variants (Herbster, 2001; Kivinen et al. [sent-25, score-0.194]
8 To date, online kernel methods based on stochastic gradient descent (Kivinen et al. [sent-30, score-0.441]
9 [Vishwanathan, Schraudolph and Smola] … meta-descent (SMD): performing a simultaneous stochastic gradient descent on the step size itself. [sent-39, score-0.281]
10 Translating this into the kernel framework yields a fast online optimization method for SVMs. [sent-40, score-0.246]
11 In Section 2 we review gradient-based step size adaptation algorithms so as to motivate our subsequent derivation of SMD. [sent-42, score-0.176]
12 We briefly survey kernel-based online methods in Section 3, then present the online SVM algorithm with a systematic, unified view of various loss functions (including losses on structured label domains) in Section 4. [sent-43, score-0.469]
13 Section 5 then introduces online SVMD, our novel application of SMD to the online SVM. [sent-44, score-0.388]
14 Here we also derive linear-time incremental updates and standard SVM extensions for SVMD, and discuss issues of buffer management and time complexity. [sent-45, score-0.191]
15 Experiments comparing SVMD to the online SVM are then presented in Section 6, followed by a discussion. [sent-46, score-0.194]
16 An adaptive version of stochastic gradient descent works by setting θt+1 = θt − ηt · gt, where gt = ∂θt Jt(θt), using ∂θt as a shorthand for ∂/∂θ evaluated at θ = θt. (1) [sent-57, score-0.545]
17 Unlike conventional gradient descent algorithms, where ηt is a scalar, here ηt ∈ R^n_+ and · denotes component-wise (Hadamard) multiplication. [sent-58, score-0.164]
18 A straightforward implementation of this idea is the delta-delta algorithm (Sutton, 1981; Jacobs, 1988), which updates η via ηt+1 = ηt − µ ∂ηt Jt+1 (θt+1 ) = ηt − µ ∂θt+1 Jt+1 (θt+1 ) · ∂ηt θt+1 = ηt + µ gt+1 · gt , (2) where µ ∈ R is a scalar meta-step size. [sent-61, score-0.212]
19 In a nutshell, step sizes are decreased where a negative autocorrelation of the gradient indicates oscillation about a local minimum, and increased otherwise. [sent-62, score-0.174]
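As a concrete sketch of the delta-delta rule (2): the following toy implementation adapts a per-parameter step size, growing it where successive gradients agree and shrinking it where a negative autocorrelation signals oscillation. The quadratic test objective and all names are our own illustrative assumptions, not from the paper.

```python
import numpy as np

def delta_delta(theta, eta, mu, grad_fn, steps=100):
    """Sketch of delta-delta step size adaptation (cf. Eq. 2):
    eta_{t+1} = eta_t + mu * g_{t+1} * g_t, applied per parameter."""
    g_prev = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        # increase eta where gradients correlate positively, decrease otherwise
        eta = np.maximum(eta + mu * g * g_prev, 1e-8)
        theta = theta - eta * g
        g_prev = g
    return theta, eta

# illustrative quadratic J(theta) = 0.5 * theta^T A theta with mixed curvatures
A = np.diag([1.0, 10.0])
theta, eta = delta_delta(np.array([1.0, 1.0]), np.full(2, 0.05), 1e-3,
                         lambda th: A @ th)
```

On the flat direction the step size grows above its initial value, speeding up convergence there, which is exactly the behaviour the text describes.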
20 Since gradient descent implements a discrete approximation to an infinitesimal (differential) process in any case, we can in practice ignore non-differentiability of J on a set of measure zero, as long as our implementation of the gradient function returns a subgradient at those points. [sent-64, score-0.236]
21 (b) Standard step size adaptation methods capture only the immediate effect, even when (c) past gradients are exponentially smoothed. [sent-67, score-0.176]
22 In recognition of this shortcoming, gt in (2) is usually replaced with an exponential running average of past gradients (Jacobs, 1988; Tollenaere, 1990; Silva and Almeida, 1990; Riedmiller and Braun, 1993; Almeida et al. [sent-83, score-0.175]
23 By contrast, Sutton (1992) modeled the long-term effect of step sizes on future parameter values in a linear system by carrying the relevant partials forward in time, and found that the resulting step size adaptation can outperform a less than perfectly matched Kalman filter. [sent-86, score-0.255]
24 [Step Size Adaptation in RKHS] … which amounts to stating that the step size adaptation (in log space) must be in equilibrium at the time scale determined by λ. [sent-98, score-0.202]
25 Noting that ∂θt gt is the Hessian Ht of Jt (θt ), we arrive at the simple iterative update vt+1 = λvt − ηt · (gt + λHt vt ). [sent-99, score-0.425]
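A minimal parametric sketch of SMD combining this v update with a multiplicative step size rule of the form used later in the paper (cf. Eq. 54). The quadratic objective, the update ordering, and all names are illustrative assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def smd(theta, eta, mu, lam, grad_fn, hess_vec_fn, steps=200):
    """Sketch of stochastic meta-descent: the per-parameter step size eta is
    itself adapted via the gradient trace v."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        # multiplicative step size update, bounded below by 1/2 (cf. Eq. 54)
        eta = eta * np.maximum(0.5, 1.0 - mu * g * v)
        theta = theta - eta * g
        # v_{t+1} = lam * v_t - eta * (g_t + lam * H_t v_t)
        v = lam * v - eta * (g + lam * hess_vec_fn(theta, v))
    return theta, eta

# illustrative quadratic J(theta) = 0.5 * theta^T A theta
A = np.diag([1.0, 2.0])
theta, eta = smd(np.array([1.0, 1.0]), np.full(2, 0.1), mu=0.1, lam=0.5,
                 grad_fn=lambda th: A @ th, hess_vec_fn=lambda th, u: A @ u)
```

Because the gradient and the trace v have opposite signs while descent is consistent, the factor 1 − µ g·v exceeds one and the step sizes grow, which is the mechanism the text describes.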
26 Survey of Online Kernel Methods The perceptron algorithm (Rosenblatt, 1958) is arguably one of the simplest online learning algorithms. [sent-115, score-0.299]
27 To address the case where the data is not separable in feature space, Freund and Schapire (1999) work with a kernelized perceptron but use the online-to-batch conversion procedure of Helmbold and Warmuth (1995) to derive their voted perceptron algorithm. [sent-146, score-0.21]
28 Essentially, every weight vector generated by the kernel perceptron is retained, and the decision rule is a majority vote amongst the predictions generated by these weight vectors. [sent-147, score-0.207]
29 Another online algorithm which aims to maximize the margin of separation between classes is LASVM (Bordes et al. [sent-158, score-0.245]
30 … the SVM quadratic programming (QP) problem in an online manner. [sent-165, score-0.194]
31 Another notable effort to derive a margin-based online learning algorithm is ALMA p , the approximate large margin algorithm w. [sent-174, score-0.245]
32 ALMA p is one of the few perceptron-derived online algorithms we know of which modify their learning rate: Its p-norm perceptron update step scales with the number of corrections which have occurred so far. [sent-185, score-0.411]
33 Many large-margin algorithms (Li and Long, 2002; Crammer and Singer, 2003; Herbster, 2001) are based on the same general principle: They explicitly maximize the margin and update their weights only when a margin violation occurs. [sent-187, score-0.163]
34 Online SVM We now present the online SVM (aka NORMA) algorithm (Kivinen et al. [sent-193, score-0.194]
35 , 2004) from a loss function and regularization point of view, with additions and modifications for logistic regression, novelty detection, multiclass classification, and graph-structured label domains. [sent-194, score-0.239]
36 This sets the scene for our application of SMD to the online SVM in Section 5. [sent-195, score-0.194]
37 Let (xt, yt) denote the example presented to the online algorithm at time instance t. [sent-209, score-0.307]
38 Using the stochastic approximation of J(f) at time t, Jt(f) := l(xt, yt, f) + (c/2)‖f‖²_H, (15) and setting gt := ∂f Jt(ft) = ∂f l(xt, yt, ft) + c·ft, (16) we obtain the following online learning algorithm (Algorithm 1: online learning with adaptive step size). [sent-210, score-1.678]
39 Repeat: (a) draw data sample (xt, yt); (b) adapt step size ηt; (c) update ft+1 ← ft − ηt·gt. Practical considerations are how to implement steps 2(b) and 2(c) efficiently. [sent-212, score-0.7]
40 Observe that, so far, our discussion of the online update algorithm is independent of the particular loss function used. [sent-216, score-0.312]
41 Since our online update depends on it, we will state the gradient of all loss functions we present below, and give its kernel expansion coefficients. [sent-224, score-0.494]
42 3 Coefficient Updates Since the online update in step 2(c) of Algorithm 1 is not particularly useful in Hilbert space, we now rephrase it in terms of kernel function expansions. [sent-282, score-0.358]
43 From (15) it follows that gt = ∂ f l(xt , yt , ft ) + c ft and consequently ft+1 = ft − ηt [∂ f l(xt , yt , ft ) + c ft ] = (1 − ηt c) ft − ηt ∂ f l(xt , yt , ft ). [sent-285, score-2.796]
44 Observe that we can write gt(·) = ∑_{i=1}^{t} ∑_y γtiy k((xi, y), ·), (33) where γt := c·αt−1 ξt⊤. [sent-292, score-0.175]
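The coefficient form of this update can be sketched for the simplest case, binary classification with hinge loss: every existing expansion coefficient shrinks by (1 − ηt c), and when the loss is active a new coefficient is appended for the current example. The RBF kernel, the class name, and all parameter values are illustrative assumptions, and the step size is held fixed here.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

class NORMASketch:
    """f(.) = sum_i alpha_i k(x_i, .); each step performs
    f_{t+1} = (1 - eta*c) f_t - eta * d/df l(x_t, y_t, f_t)."""
    def __init__(self, eta=0.1, c=0.01, gamma=1.0):
        self.eta, self.c, self.gamma = eta, c, gamma
        self.X, self.alpha = [], []

    def f(self, x):
        return sum(a * rbf(xi, x, self.gamma)
                   for a, xi in zip(self.alpha, self.X))

    def step(self, x, y):
        loss_active = y * self.f(x) < 1      # hinge loss l = max(0, 1 - y f(x))
        # regularizer shrinks every existing coefficient
        self.alpha = [(1 - self.eta * self.c) * a for a in self.alpha]
        if loss_active:                      # loss subgradient is -y k(x, .)
            self.X.append(x)
            self.alpha.append(self.eta * y)

model = NORMASketch()
for _ in range(20):                          # toy two-cluster stream
    model.step(np.array([2.0, 0.0]), +1)
    model.step(np.array([-2.0, 0.0]), -1)
```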
45 Online SVMD We now show how the SMD framework described in Section 2 can be used to adapt the step size for online SVMs. [sent-298, score-0.312]
46 The updates given in Section 4 remain as before, the only difference being that the step size ηt is adapted before its value is used to update α. [sent-299, score-0.214]
47 The update for v is now given by vt+1 = λvt − ηt (gt + λHt vt ), (37) where Ht is the Hessian of the objective function. [sent-303, score-0.25]
48 For Jt ( f ) as defined in (15), this operator has a form that permits efficient computation of Ht vt : For piecewise linear loss functions, such as (18), (20), and (27), we have Ht = cI, where I is the identity operator, and obtain the simple update vt+1 = (1 − ηt c)λvt − ηt gt . [sent-305, score-0.482]
49 In particular, for logistic regression (22) we have Ht − cI = ρ(xt) k(xt, ·) ⊗ k(xt, ·), (39) where ρ(xt) := e^{yt·ft(xt)} / (1 + e^{yt·ft(xt)})², and ⊗ denotes the outer product between functions in H, obeying (u ⊗ v)w = u⟨v, w⟩ for u, v, w ∈ H. [sent-308, score-0.754]
50 Likewise, for multiclass logistic regression (24) we have Ht − cI = ∑_{y,ỹ∈Y} ρ(xt, y, ỹ) k((xt, y), ·) ⊗ k((xt, ỹ), ·), (40) where ρ(xt, y, ỹ) := δ_{y,ỹ}·p(y|xt, ft) − p(y|xt, ft)·p(ỹ|xt, ft). [sent-309, score-1.1]
51 Although (43) suffices in principle to implement the overall algorithm, a naive implementation of the inner product ⟨gt, vt⟩ in (36) takes O(t²) time, rendering it impractical. [sent-318, score-0.364]
52 Expanding gt+1 into c·ft+1 + ξt+1, we can write ⟨gt+1, vt+1⟩ = c·πt+1 + ξt+1⊤ vt+1(xt+1, ·), (48) where πt := ⟨ft, vt⟩. [sent-332, score-0.515]
53 The function update (31) yields πt+1 = (1 − ηt c)⟨ft, vt+1⟩ − ηt ξt⊤ vt+1(xt, ·). (49) [sent-333, score-0.387]
54 The v update (37) then gives us ⟨ft, vt+1⟩ = (1 − ηt c)λπt − ηt⟨ft, gt⟩, (50) and using gt = c·ft + ξt again we have ⟨ft, gt⟩ = c‖ft‖² + ξt⊤ ft(xt, ·). (51) [sent-334, score-2.542]
55 Finally, the squared norm of f can be maintained via ‖ft+1‖² = (1 − ηt c)²‖ft‖² − 2ηt(1 − ηt c) ξt⊤ ft(xt, ·) + ηt² ξt⊤ k((xt, ·), (xt, ·)) ξt. (52) [sent-335, score-0.652]
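To illustrate, the binary-output case of the squared-norm recursion can be checked against a direct O(t²) evaluation of ‖f‖². The RBF kernel and the random stand-in for the loss-gradient coefficient ξt are assumptions made purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
k = lambda a, b: np.exp(-np.sum((a - b) ** 2))  # illustrative RBF kernel

X, alpha = [], []       # expansion points and coefficients of f_t
sq_norm = 0.0           # ||f_t||^2 maintained incrementally, cf. Eq. (52)
eta, c = 0.1, 0.05

for _ in range(30):
    x = rng.normal(size=2)
    xi = rng.normal()   # stand-in for the loss-gradient coefficient xi_t
    f_x = sum(a * k(xp, x) for a, xp in zip(alpha, X))
    # incremental norm update (binary case of Eq. 52)
    sq_norm = ((1 - eta * c) ** 2 * sq_norm
               - 2 * eta * (1 - eta * c) * xi * f_x
               + eta ** 2 * xi ** 2 * k(x, x))
    # matching coefficient update: f_{t+1} = (1 - eta c) f_t - eta xi k(x, .)
    alpha = [(1 - eta * c) * a for a in alpha] + [-eta * xi]
    X.append(x)

# direct quadratic-time evaluation of ||f||^2 for comparison
direct = sum(a * b * k(xa, xb) for a, xa in zip(alpha, X)
             for b, xb in zip(alpha, X))
```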
56 Both of these extensions create new parameters, which we will also tune by stochastic gradient descent, again using SMD for step size adaptation. [sent-339, score-0.235]
57 While the update equations described above remain unchanged, the offset parameter b is now adapted as well: bt+1 = bt − ηb,t·∂b Jt(ft + bt) = bt − ηb,t·ξt. (53) [sent-343, score-0.446]
58 Applying the standard SMD equations (4) and (7) to the case at hand, we update the offset step sizes ηb via ηb,t+1 = ηb,t·max(1/2, 1 − µb ξt+1·vb,t+1), (54) where µb is the meta-step size for adjusting ηb, and vb is adapted as vb,t+1 = λb vb,t − ηb,t·ξt. [sent-344, score-0.234]
59 Observe that ∂_{log ε} Jt(ft) = ε·∂ε Jt(ft) = −ε(ξt + ν), (57) and therefore the updates for ε can now be written as εt+1 = εt exp(−ηε,t ∂_{log ε} Jt(ft)) (58) = εt exp(ηε,t εt(ξt + ν)). (59) [sent-352, score-1.015]
60 We now use SMD to adapt the margin step size ηε,t: ηε,t+1 = ηε,t·max(1/2, 1 + µε vε,t εt(ξt + ν)), (60) and vε,t+1 = λε vε,t + ηε,t εt(ξt + ν)(1 + λε vε,t). [sent-353, score-0.169]
61 This completes our description of the online SVMD algorithm. [sent-355, score-0.194]
62 … calculate loss l(xt, yt, ft); (b) obtain gradients: … [sent-363, score-0.522]
63 5 Time Complexity and Buffer Management The time complexity of online SVMD is dominated by the cost of the kernel expansions in steps 2(a)ii, 2(b)ii, and 2(d)iii of Algorithm 2, which grows linearly with the size of the expansion. [sent-380, score-0.281]
64 Since unlimited growth would be undesirable, we maintain a least recently used (LRU) circular buffer which stores only the last ω non-zero expansion coefficients; each kernel expansion then takes O(ω| Y |) time. [sent-381, score-0.289]
65 The online SVM (NORMA) algorithm does not require steps 2(b)ii or 2(d)iii, but still has to employ step 2(a)ii to make a prediction, so its asymptotic time complexity is O(ω| Y |) as well. [sent-382, score-0.245]
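The circular-buffer idea can be sketched as below; the class and method names are our own, and eviction is simply "oldest first", as in the LRU scheme described in the text.

```python
from collections import deque

class ExpansionBuffer:
    """Fixed-capacity buffer of kernel expansion terms: only the last
    `omega` non-zero coefficients are kept, so evaluating f costs O(omega)."""
    def __init__(self, omega):
        self.terms = deque(maxlen=omega)      # (x, alpha) pairs; oldest evicted

    def add(self, x, alpha):
        if alpha != 0.0:                      # only non-zero coefficients stored
            self.terms.append((x, alpha))

    def shrink(self, factor):
        # regularization step: scale every stored coefficient in place
        for i, (x, a) in enumerate(self.terms):
            self.terms[i] = (x, factor * a)

    def evaluate(self, kernel, x):
        return sum(a * kernel(xp, x) for xp, a in self.terms)
```

With a stationary stream a larger `omega` trades time per iteration against accuracy, matching the buffer-size discussion above.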
66 The two algorithms thus differ in time complexity only by a constant factor; in practice we observe online SVMD to be 3–4 times slower per iteration than NORMA. [sent-383, score-0.194]
67 [Figure: plot graphics omitted; legend: online SVM, SVMD (λ = 0), SVMD.]
68 Caption: SVMD with λ = 0.95 (solid), λ = 0 (dotted), and online SVM with step size decay (62), using τ = 10 (dashed). [sent-397, score-0.348]
69 , 2004, Proposition 1), so good solutions can be obtained with limited buffer size ω. [sent-399, score-0.161]
70 A good buffer management scheme has to deal with two conflicting demands: To the extent that the data set is non-stationary, it is desirable to remove old items from the buffer in order to reduce the effect of obsolete data. [sent-400, score-0.28]
71 Although we employ a simple LRU circular buffer to good effect here, smarter buffer management strategies which explicitly remove the least important point based on some well-defined criterion (Crammer et al. [sent-402, score-0.321]
72 , 2004) with a scheduled step size decay of ηt = τ/(τ + t) , (62) where τ is hand-tuned to obtain good performance. [sent-408, score-0.197]
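For comparison, the scheduled decay (62) is a one-liner; the function name is ours and τ is hand-tuned, as stated above.

```python
def eta_decay(t, tau=10.0):
    """Scheduled step size decay of Eq. (62): eta_t = tau / (tau + t)."""
    return tau / (tau + t)
```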
73 (64) In the spirit of online learning, we train for just a single run through the data, so that no digit is seen more than once by any algorithm. [sent-417, score-0.194]
74 Observe that online SVMD (solid) is initially slower to learn, but after about 20 iterations it overtakes the online SVM (dashed), and overall makes only about half as many classification errors. [sent-426, score-0.388]
75 The single-step version of SVMD with λ = 0 (dotted) has the fastest early convergence but is asymptotically inferior to SVMD proper, though still far better than the online SVM with scheduled step size decay. [sent-427, score-0.323]
76 Multiclass Classification: Figure 4 shows our results for 10-way multiclass classification using soft margin loss with η0 = 0. [sent-430, score-0.192]
77 Again online SVMD (solid) makes only about half as many classification errors overall as the online SVM (dashed), with the single-step (λ = 0) variant of SVMD (dotted) falling in between. [sent-433, score-0.388]
78 We found (by cross-validation) the online SVM with fixed decay schedule to perform best here for η0 = 0. [sent-434, score-0.289]
79 We generally find the performance of online SVMD to be fairly independent of the initial step size. [sent-439, score-0.245]
80 Multiclass Classification: Here we perform 10-way multiclass classification on the USPS counting sequence, using ν-SVMD with soft margin loss, ν = 0. [sent-445, score-0.17]
81 Online ν-SVM with scheduled step size decay, on the other hand, has serious difficulty with the non-stationary nature of our data and performs barely above chance level (90% error rate); even the simple step size … [sent-448, score-0.215]
82 [Figure: plot graphics omitted; legend: online SVM, SVMD (λ = 0), SVMD.]
83 Caption: SVMD with λ = 0.99 (solid), λ = 0 (dotted), and online SVM with step size decay (62), using τ = 100 (dashed). [sent-460, score-0.348]
84 This is not surprising since a step size decay schedule typically assumes stationarity. [sent-462, score-0.181]
85 [Figure: plot graphics omitted; legend: online SVM, SVMD (λ = 0), SVMD.]
86 Caption: SVMD with λ = 0.95 (solid), λ = 0 (dotted), and online ν-SVM with step size decay (62), using τ = 100 (dashed). [sent-485, score-0.348]
87 We illustrate the effect of the circular buffer size on classification accuracy, using λ = 1, µ = 0. [sent-488, score-0.202]
88 Figure 7: Left: The step size for novelty detection with SVMD on the USPS counting sequence (Figure 5) closely follows the non-stationarity of the data. [sent-523, score-0.179]
89 Right: Average error of SVMD on the MNIST data set, for three different circular buffer sizes. [sent-524, score-0.167]
90 Following Cai and Hofmann (2004), we perform document categorization experiments on the WIPO-alpha data set, using a loss function that is a special case of our graph-structured loss (28). [sent-526, score-0.185]
91 We use a buffer of size 1024, initial step size η0 = 1, meta-step size µ = 0. [sent-537, score-0.282]
92 Only 94 out of the 160 possible categories contain four or more examples while as many as 34 categories have exactly one sample, which makes it extremely hard for an online learning algorithm to predict well. [sent-543, score-0.194]
93 Discussion We presented online SVMD, an extension of the SMD step size adaptation method to the kernel framework. [sent-552, score-0.422]
94 In experiments online SVMD outperformed the conventional online SVM (aka NORMA) algorithm with scheduled step size decay for binary and multiclass classification, especially on a non-stationary problem. [sent-556, score-0.692]
95 In novelty detection experiments we observe that SVMD is able to closely track the non-stationarity in the data and adapt the step sizes correspondingly. [sent-558, score-0.169]
96 With a reasonable buffer size SVMD attains competitive performance in a single pass through the MNIST data set. [sent-559, score-0.183]
97 Empirically we observe that in all our experiments the SVMD algorithm significantly speeds up the convergence of the conventional online SVM algorithm. [sent-561, score-0.217]
98 SMD is a generic method to hasten the convergence of stochastic gradient descent methods. [sent-569, score-0.195]
99 Other kernel algorithms which rely on stochastic gradient descent — e. [sent-571, score-0.247]
100 Fast online policy gradient learning with SMD gain vector adaptation. [sent-896, score-0.289]
wordName wordTfidf (topN-words)
[('svmd', 0.523), ('ft', 0.326), ('smd', 0.299), ('online', 0.194), ('vt', 0.189), ('xt', 0.183), ('gt', 0.175), ('chraudolph', 0.149), ('ishwanathan', 0.149), ('daptation', 0.139), ('ize', 0.139), ('buffer', 0.126), ('tep', 0.118), ('mola', 0.114), ('yt', 0.113), ('rkhs', 0.111), ('schraudolph', 0.109), ('jt', 0.105), ('perceptron', 0.105), ('ht', 0.097), ('gradient', 0.095), ('adaptation', 0.09), ('multiclass', 0.084), ('kivinen', 0.078), ('decay', 0.068), ('svm', 0.067), ('cai', 0.067), ('crammer', 0.065), ('almeida', 0.063), ('usps', 0.061), ('update', 0.061), ('loss', 0.057), ('hofmann', 0.055), ('stochastic', 0.054), ('kernel', 0.052), ('step', 0.051), ('margin', 0.051), ('descent', 0.046), ('document', 0.045), ('bray', 0.043), ('scheduled', 0.043), ('tiy', 0.043), ('circular', 0.041), ('logistic', 0.038), ('updates', 0.037), ('alma', 0.036), ('norma', 0.036), ('novelty', 0.036), ('counting', 0.035), ('expansion', 0.035), ('size', 0.035), ('taxonomy', 0.035), ('vi', 0.033), ('embroidering', 0.032), ('eyt', 0.032), ('riedmiller', 0.032), ('adapt', 0.032), ('australia', 0.031), ('adapted', 0.03), ('tsochantaridis', 0.03), ('hessian', 0.029), ('ym', 0.029), ('offset', 0.029), ('sizes', 0.028), ('management', 0.028), ('bordes', 0.028), ('vishwanathan', 0.028), ('oss', 0.027), ('lasvm', 0.027), ('frie', 0.027), ('aka', 0.027), ('schedule', 0.027), ('ci', 0.027), ('categorization', 0.026), ('calculate', 0.026), ('log', 0.026), ('gentile', 0.026), ('classi', 0.026), ('coef', 0.025), ('xm', 0.025), ('weight', 0.025), ('hilbert', 0.025), ('likewise', 0.025), ('jacobs', 0.024), ('nicol', 0.024), ('ine', 0.024), ('ict', 0.024), ('label', 0.024), ('reproducing', 0.023), ('conventional', 0.023), ('yi', 0.023), ('australian', 0.022), ('backpropagation', 0.022), ('comprising', 0.022), ('silva', 0.022), ('nicta', 0.022), ('weston', 0.022), ('pass', 0.022), ('detection', 0.022), ('adatron', 0.021)]
simIndex simValue paperId paperTitle
same-paper 1 1.0 86 jmlr-2006-Step Size Adaptation in Reproducing Kernel Hilbert Space
Author: S. V. N. Vishwanathan, Nicol N. Schraudolph, Alex J. Smola
Abstract: This paper presents an online support vector machine (SVM) that uses the stochastic meta-descent (SMD) algorithm to adapt its step size automatically. We formulate the online learning problem as a stochastic gradient descent in reproducing kernel Hilbert space (RKHS) and translate SMD to the nonparametric setting, where its gradient trace parameter is no longer a coefficient vector but an element of the RKHS. We derive efficient updates that allow us to perform the step size adaptation in linear time. We apply the online SVM framework to a variety of loss functions, and in particular show how to handle structured output spaces and achieve efficient online multiclass classification. Experiments show that our algorithm outperforms more primitive methods for setting the gradient step size. Keywords: online SVM, stochastic meta-descent, structured output spaces
2 0.19862406 70 jmlr-2006-Online Passive-Aggressive Algorithms
Author: Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, Yoram Singer
Abstract: We present a family of margin based online learning algorithms for various prediction tasks. In particular we derive and analyze algorithms for binary and multiclass categorization, regression, uniclass prediction and sequence prediction. The update steps of our different algorithms are all based on analytical solutions to simple constrained optimization problems. This unified view allows us to prove worst-case loss bounds for the different algorithms and for the various decision problems based on a single lemma. Our bounds on the cumulative loss of the algorithms are relative to the smallest loss that can be attained by any fixed hypothesis, and as such are applicable to both realizable and unrealizable settings. We demonstrate some of the merits of the proposed algorithms in a series of experiments with synthetic and real data sets.
3 0.13831142 37 jmlr-2006-Incremental Algorithms for Hierarchical Classification
Author: Nicolò Cesa-Bianchi, Claudio Gentile, Luca Zaniboni
Abstract: We study the problem of classifying data in a given taxonomy when classifications associated with multiple and/or partial paths are allowed. We introduce a new algorithm that incrementally learns a linear-threshold classifier for each node of the taxonomy. A hierarchical classification is obtained by evaluating the trained node classifiers in a top-down fashion. To evaluate classifiers in our multipath framework, we define a new hierarchical loss function, the H-loss, capturing the intuition that whenever a classification mistake is made on a node of the taxonomy, then no loss should be charged for any additional mistake occurring in the subtree of that node. Making no assumptions on the mechanism generating the data instances, and assuming a linear noise model for the labels, we bound the H-loss of our on-line algorithm in terms of the H-loss of a reference classifier knowing the true parameters of the label-generating process. We show that, in expectation, the excess cumulative H-loss grows at most logarithmically in the length of the data sequence. Furthermore, our analysis reveals the precise dependence of the rate of convergence on the eigenstructure of the data each node observes. Our theoretical results are complemented by a number of experiments on texual corpora. In these experiments we show that, after only one epoch of training, our algorithm performs much better than Perceptron-based hierarchical classifiers, and reasonably close to a hierarchical support vector machine. Keywords: incremental algorithms, online learning, hierarchical classification, second order perceptron, support vector machines, regret bound, loss function
4 0.13481115 32 jmlr-2006-Expectation Correction for Smoothed Inference in Switching Linear Dynamical Systems
Author: David Barber
Abstract: We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman ‘Filter’ and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but accurate alternative. Our method consists of a single Forward and Backward Pass and is reminiscent of the standard smoothing ‘correction’ recursions in the simpler linear dynamical system. The method is numerically stable and compares favourably against alternative approximations, both in cases where a single mixture component provides a good posterior approximation, and where a multimodal approximation is required. Keywords: Gaussian sum smoother, switching Kalman filter, switching linear dynamical system, expectation propagation, expectation correction 1. Switching Linear Dynamical System The Linear Dynamical System (LDS) (Bar-Shalom and Li, 1998; West and Harrison, 1999) is a key temporal model in which a latent linear process generates the observed time-series. For more complex time-series which are not well described globally by a single LDS, we may break the time-series into segments, each modeled by a potentially different LDS. This is the basis for the Switching LDS (SLDS) where, for each time-step t, a switch variable st ∈ 1, . . . , S describes which of the LDSs is to be used.1 The observation (or ‘visible’ variable) vt ∈ R V is linearly related to the hidden state ht ∈ R H by vt = B(st )ht + ηv (st ), ηv (st ) ∼ N (v(st ), Σv (st )) ¯ (1) where N (µ, Σ) denotes a Gaussian distribution with mean µ and covariance Σ. 
The transition dynamics of the continuous hidden state ht is linear ht = A(st )ht−1 + ηh (st ), ¯ ηh (st ) ∼ N h(st ), Σh (st ) . (2) 1. These systems also go under the names Jump Markov model/process, switching Kalman Filter, Switching Linear Gaussian State-Space model, Conditional Linear Gaussian Model. c 2006 David Barber. BARBER s1 s2 s3 s4 h1 h2 h3 h4 v1 v2 v3 v4 Figure 1: The independence structure of the aSLDS. Square nodes denote discrete variables, round nodes continuous variables. In the SLDS links from h to s are not normally considered. The dynamics of the switch variables is Markovian, with transition p(st |st−1 ). The SLDS is used in many disciplines, from econometrics to machine learning (Bar-Shalom and Li, 1998; Ghahramani and Hinton, 1998; Lerner et al., 2000; Kitagawa, 1994; Kim and Nelson, 1999; Pavlovic et al., 2001). See Lerner (2002) and Zoeter (2005) for recent reviews of work. AUGMENTED S WITCHING L INEAR DYNAMICAL S YSTEM In this article, we will consider the more general model in which the switch st is dependent on both the previous st−1 and ht−1 . We call this an augmented Switching Linear Dynamical System 2 (aSLDS), in keeping with the terminology in Lerner (2002). An equivalent probabilistic model is, as depicted in Figure (1), T p(v1:T , h1:T , s1:T ) = p(v1 |h1 , s1 )p(h1 |s1 )p(s1 ) ∏ p(vt |ht , st )p(ht |ht−1 , st )p(st |ht−1 , st−1 ). t=2 The notation x1:T is shorthand for x1 , . . . , xT . The distributions are parameterized as p(vt |ht , st ) = N (v(st ) + B(st )ht , Σv (st )) , ¯ ¯ p(ht |ht−1 , st ) = N h(st ) + A(st )ht−1 , Σh (st ) where p(h1 |s1 ) = N (µ(s1 ), Σ(s1 )). The aSLDS has been used, for example, in state-duration modeling in acoustics (Cemgil et al., 2006) and econometrics (Chib and Dueker, 2004). I NFERENCE The aim of this article is to address how to perform inference in both the SLDS and aSLDS. 
In particular we desire the so-called filtered estimate p(ht , st |v1:t ) and the smoothed estimate p(ht , st |v1:T ), for any t, 1 ≤ t ≤ T . Both exact filtered and smoothed inference in the SLDS is intractable, scaling exponentially with time (Lerner, 2002). To see this informally, consider the filtered posterior, which may be recursively computed using p(st , ht |v1:t ) = ∑ Z st−1 ht−1 p(st , ht |st−1 , ht−1 , vt )p(st−1 , ht−1 |v1:t−1 ). (3) At timestep 1, p(s1 , h1 |v1 ) = p(h1 |s1 , v1 )p(s1 |v1 ) is an indexed set of Gaussians. At time-step 2, due to the summation over the states s1 , p(s2 , h2 |v1:2 ) will be an indexed set of S Gaussians; similarly at 2. These models are closely related to Threshold Regression Models (Tong, 1990). 2516 E XPECTATION C ORRECTION time-step 3, it will be S2 and, in general, gives rise to St−1 Gaussians. More formally, in Lauritzen and Jensen (2001), a general exact method is presented for performing stable inference in such hybrid discrete models with conditional Gaussian potentials. The method requires finding a strong junction tree which, in the SLDS case, means that the discrete variables are placed in a single cluster, resulting in exponential complexity. The key issue in the (a)SLDS, therefore, is how to perform approximate inference in a numerically stable manner. Our own interest in the SLDS stems primarily from acoustic modeling, in which the time-series consists of many thousands of time-steps (Mesot and Barber, 2006; Cemgil et al., 2006). For this, we require a stable and computationally feasible approximate inference, which is also able to deal with state-spaces of high hidden dimension, H. 2. Expectation Correction Our approach to approximate p(ht , st |v1:T ) ≈ p(ht , st |v1:T ) mirrors the Rauch-Tung-Striebel (RTS) ˜ ‘correction’ smoother for the LDS (Rauch et al., 1965; Bar-Shalom and Li, 1998). 
Readers unfamiliar with this approach will find a short explanation in Appendix (A), which defines the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS. Our correction approach consists of a single Forward Pass to recursively find the filtered posterior p(ht , st |v1:t ), followed by a single Backward Pass to correct this into a smoothed posterior ˜ p(ht , st |v1:T ). The Forward Pass we use is equivalent to Assumed Density Filtering (Alspach and ˜ Sorenson, 1972; Boyen and Koller, 1998; Minka, 2001). The main contribution of this paper is a novel form of Backward Pass, based on collapsing the smoothed posterior to a mixture of Gaussians. Unless stated otherwise, all quantities should be considered as approximations to their exact counterparts, and we will therefore usually omit the tildes˜throughout the article. 2.1 Forward Pass (Filtering) Readers familiar with Assumed Density Filtering (ADF) may wish to continue directly to Section (2.2). The basic idea is to represent the (intractable) posterior using a simpler distribution. This is then propagated forwards through time, conditioned on the new observation, and subsequently collapsed back to the tractable distribution representation—see Figure (2). Our aim is to form a recursion for p(st , ht |v1:t ), based on a Gaussian mixture approximation of p(ht |st , v1:t ). Without loss of generality, we may decompose the filtered posterior as p(ht , st |v1:t ) = p(ht |st , v1:t )p(st |v1:t ). We will first form a recursion for p(ht |st , v1:t ), and discuss the switch recursion p(st |v1:t ) later. The full procedure for computing the filtered posterior is presented in Algorithm (1). The exact representation of p(ht |st , v1:t ) is a mixture with O(St ) components. 
We therefore approximate this with a smaller I_t-component mixture

p(h_t | s_t, v_{1:t}) ≈ p̃(h_t | s_t, v_{1:t}) ≡ ∑_{i_t=1}^{I_t} p̃(h_t | i_t, s_t, v_{1:t}) p̃(i_t | s_t, v_{1:t})

where p̃(h_t | i_t, s_t, v_{1:t}) is a Gaussian parameterized with mean³ f(i_t, s_t) and covariance F(i_t, s_t). The Gaussian mixture weights are given by p̃(i_t | s_t, v_{1:t}). In the above, the p̃ represent approximations to the corresponding exact p distributions.

3. Strictly speaking, we should use the notation f_t(i_t, s_t) since, for each time t, we have a set of means indexed by i_t, s_t. This mild abuse of notation is used elsewhere in the paper.

Figure 2: Structure of the mixture representation of the Forward Pass. Essentially, the Forward Pass defines a 'prior' distribution at time t which contains all the information from the variables v_{1:t}. This prior is propagated forwards through time using the exact dynamics, conditioned on the observation, and then collapsed back to form a new prior approximation at time t + 1.

To find a recursion for these parameters, consider

p̃(h_{t+1} | s_{t+1}, v_{1:t+1}) = ∑_{s_t, i_t} p̃(h_{t+1}, s_t, i_t | s_{t+1}, v_{1:t+1}) = ∑_{s_t, i_t} p̃(h_{t+1} | i_t, s_t, s_{t+1}, v_{1:t+1}) p̃(s_t, i_t | s_{t+1}, v_{1:t+1})   (4)

where each of the factors can be recursively computed on the basis of the previous filtered results (see below). However, this recursion suffers from an exponential increase in mixture components. To deal with this, we will later collapse p̃(h_{t+1} | s_{t+1}, v_{1:t+1}) back to a smaller mixture. For the remainder, we drop the p̃ notation, and concentrate on computing the r.h.s. of Equation (4).
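The mixture collapses used throughout rest on moment matching: replacing a Gaussian mixture by the single Gaussian with the same overall mean and covariance. A minimal numpy sketch of this standard computation (the function name and array layout are mine, not the paper's):

```python
import numpy as np

def collapse_to_single_gaussian(weights, means, covs):
    """Moment-match a Gaussian mixture to a single Gaussian.

    weights: (K,)      mixture weights
    means:   (K, H)    component means
    covs:    (K, H, H) component covariances
    """
    w = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    covs = np.asarray(covs, dtype=float)
    w = w / w.sum()                        # guard against unnormalized input
    mu = np.einsum('k,kh->h', w, means)    # overall mean
    diff = means - mu
    # overall covariance: within-component part + between-component spread
    cov = np.einsum('k,khi->hi', w, covs) \
        + np.einsum('k,kh,ki->hi', w, diff, diff)
    return mu, cov
```

For example, an equal-weight mixture of N(0, 1) and N(2, 1) collapses to mean 1 and variance 2, the within-component variance plus the spread of the means.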
EVALUATING p(h_{t+1} | s_t, i_t, s_{t+1}, v_{1:t+1})

We find p(h_{t+1} | s_t, i_t, s_{t+1}, v_{1:t+1}) from the joint distribution p(h_{t+1}, v_{t+1} | s_t, i_t, s_{t+1}, v_{1:t}), which is a Gaussian with covariance and mean elements⁴

Σ_hh = A(s_{t+1}) F(i_t, s_t) A^T(s_{t+1}) + Σ_h(s_{t+1}),   Σ_vv = B(s_{t+1}) Σ_hh B^T(s_{t+1}) + Σ_v(s_{t+1}),
Σ_vh = B(s_{t+1}) Σ_hh,   μ_v = B(s_{t+1}) A(s_{t+1}) f(i_t, s_t),   μ_h = A(s_{t+1}) f(i_t, s_t).   (5)

These results are obtained from integrating the forward dynamics, Equations (1, 2), over h_t, using the results in Appendix (B). To find p(h_{t+1} | s_t, i_t, s_{t+1}, v_{1:t+1}) we may then condition p(h_{t+1}, v_{t+1} | s_t, i_t, s_{t+1}, v_{1:t}) on v_{t+1} using the results in Appendix (C)—see also Algorithm (4).

4. We derive this for h̄_{t+1}, v̄_{t+1} ≡ 0, to ease notation.

EVALUATING p(s_t, i_t | s_{t+1}, v_{1:t+1})

Up to a trivial normalization constant, the mixture weight in Equation (4) can be found from the decomposition

p(s_t, i_t | s_{t+1}, v_{1:t+1}) ∝ p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t}) p(s_{t+1} | i_t, s_t, v_{1:t}) p(i_t | s_t, v_{1:t}) p(s_t | v_{1:t}).   (6)

Algorithm 1 aSLDS Forward Pass. Approximates the filtered posterior p(s_t | v_{1:t}) ≡ ρ_t, p(h_t | s_t, v_{1:t}) ≡ ∑_{i_t} w_t(i_t, s_t) N(f_t(i_t, s_t), F_t(i_t, s_t)). Also we return the approximate log-likelihood log p(v_{1:T}). We require I_1 = 1, I_2 ≤ S, I_t ≤ S × I_{t-1}. θ_t(s) = {A(s), B(s), Σ_h(s), Σ_v(s), h̄(s), v̄(s)} for t > 1.
θ_1(s) = {A(s), B(s), Σ(s), Σ_v(s), μ(s), v̄(s)}.

for s_1 ← 1 to S do
    {f_1(1, s_1), F_1(1, s_1), p̂} = LDSFORWARD(0, 0, v_1; θ_1(s_1))
    ρ_1 ← p(s_1) p̂
end for
for t ← 2 to T do
    for s_t ← 1 to S do
        for i ← 1 to I_{t-1}, and s ← 1 to S do
            {μ_{x|y}(i, s), Σ_{x|y}(i, s), p̂} = LDSFORWARD(f_{t-1}(i, s), F_{t-1}(i, s), v_t; θ_t(s_t))
            p*(s_t | i, s) ≡ ⟨p(s_t | h_{t-1}, s_{t-1} = s)⟩_{p(h_{t-1} | i_{t-1}=i, s_{t-1}=s, v_{1:t-1})}
            p′(s_t, i, s) ← w_{t-1}(i, s) p*(s_t | i, s) ρ_{t-1}(s) p̂
        end for
        Collapse the I_{t-1} × S mixture of Gaussians defined by μ_{x|y}, Σ_{x|y}, and weights p(i, s | s_t) ∝ p′(s_t, i, s) to a Gaussian mixture with I_t components, p(h_t | s_t, v_{1:t}) ≈ ∑_{i_t=1}^{I_t} p(i_t | s_t, v_{1:t}) p(h_t | s_t, i_t, v_{1:t}). This defines the new means f_t(i_t, s_t), covariances F_t(i_t, s_t) and mixture weights w_t(i_t, s_t) ≡ p(i_t | s_t, v_{1:t}).
        Compute ρ_t(s_t) ∝ ∑_{i,s} p′(s_t, i, s)
    end for
    normalize ρ_t ≡ p(s_t | v_{1:t})
    L ← L + log ∑_{s_t, i, s} p′(s_t, i, s)
end for

The first factor in Equation (6), p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t}), is a Gaussian with mean μ_v and covariance Σ_vv, as given in Equation (5). The last two factors, p(i_t | s_t, v_{1:t}) and p(s_t | v_{1:t}), are given from the previous iteration. Finally, p(s_{t+1} | i_t, s_t, v_{1:t}) is found from

p(s_{t+1} | i_t, s_t, v_{1:t}) = ⟨p(s_{t+1} | h_t, s_t)⟩_{p(h_t | i_t, s_t, v_{1:t})}   (7)

where ⟨·⟩_p denotes expectation with respect to p. In the standard SLDS, Equation (7) is replaced by the Markov transition p(s_{t+1} | s_t). In the aSLDS, however, Equation (7) will generally need to be computed numerically. A simple approximation is to evaluate Equation (7) at the mean value of the distribution p(h_t | i_t, s_t, v_{1:t}). To take covariance information into account, an alternative would be to draw samples from the Gaussian p(h_t | i_t, s_t, v_{1:t}) and thus approximate the average of p(s_{t+1} | h_t, s_t) by sampling.⁵

CLOSING THE RECURSION

We are now in a position to calculate Equation (4). For each setting of the variable s_{t+1}, we have a mixture of I_t × S Gaussians.
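Each such mixture must be collapsed back to a manageable number of components. One simple collapse strategy, sketched below, repeatedly moment-matches the two lowest-weight components into one until the target size is reached; the paper's actual procedure is given in Appendix (D), so this numpy version (names and layout my own) is only an illustrative stand-in:

```python
import numpy as np

def merge_pair(w, mu, cov, i, j):
    """Moment-match components i and j of a Gaussian mixture into one,
    appending the merged component and dropping the originals."""
    wm = w[i] + w[j]
    a, b = w[i] / wm, w[j] / wm
    m = a * mu[i] + b * mu[j]
    di, dj = mu[i] - m, mu[j] - m
    c = a * (cov[i] + np.outer(di, di)) + b * (cov[j] + np.outer(dj, dj))
    keep = [k for k in range(len(w)) if k not in (i, j)]
    return (np.append(w[keep], wm),
            np.vstack([mu[keep], m[None, :]]),
            np.concatenate([cov[keep], c[None, :, :]]))

def collapse_by_merging(w, mu, cov, target):
    """Repeatedly merge the two lowest-weight components until only
    `target` components remain."""
    while len(w) > target:
        i, j = np.argsort(w)[:2]          # indices of the two smallest weights
        w, mu, cov = merge_pair(w, mu, cov, i, j)
    return w, mu, cov
```

Merging only low-weight components keeps the dominant modes of the posterior untouched, at the cost of possibly blurring together small but well-separated components.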
In order to avoid an exponential explosion in the number of mixture components, we numerically collapse this back to I_{t+1} Gaussians to form

p(h_{t+1} | s_{t+1}, v_{1:t+1}) ≈ ∑_{i_{t+1}=1}^{I_{t+1}} p(h_{t+1} | i_{t+1}, s_{t+1}, v_{1:t+1}) p(i_{t+1} | s_{t+1}, v_{1:t+1}).

Hence the Gaussian components and corresponding mixture weights p(i_{t+1} | s_{t+1}, v_{1:t+1}) are defined implicitly through a numerical (Gaussian-mixture to smaller Gaussian-mixture) collapse procedure, for which any method of choice may be supplied. A straightforward approach that we use in our code is based on repeatedly merging low-weight components, as explained in Appendix (D).

5. Whilst we suggest sampling as part of the aSLDS update procedure, this does not render the Forward Pass a form of sequential sampling procedure, such as Particle Filtering. The sampling here is a form of exact sampling, for which no convergence issues arise, being used only to numerically evaluate Equation (7).

A RECURSION FOR THE SWITCH VARIABLES

A recursion for the switch variables can be found by considering

p(s_{t+1} | v_{1:t+1}) ∝ ∑_{i_t, s_t} p(i_t, s_t, s_{t+1}, v_{t+1}, v_{1:t}).

The r.h.s. of the above equation is proportional to

∑_{s_t, i_t} p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t}) p(s_{t+1} | i_t, s_t, v_{1:t}) p(i_t | s_t, v_{1:t}) p(s_t | v_{1:t})

where all terms have been computed during the recursion for p(h_{t+1} | s_{t+1}, v_{1:t+1}).

THE LIKELIHOOD p(v_{1:T})

The likelihood p(v_{1:T}) may be found by recursing p(v_{1:t+1}) = p(v_{t+1} | v_{1:t}) p(v_{1:t}), where

p(v_{t+1} | v_{1:t}) = ∑_{i_t, s_t, s_{t+1}} p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t}) p(s_{t+1} | i_t, s_t, v_{1:t}) p(i_t | s_t, v_{1:t}) p(s_t | v_{1:t}).

In the above expression, all terms have been computed in forming the recursion for the filtered posterior p(h_{t+1}, s_{t+1} | v_{1:t+1}).

2.2 Backward Pass (Smoothing)

The main contribution of this paper is to find a suitable way to 'correct' the filtered posterior p(s_t, h_t | v_{1:t}) obtained from the Forward Pass into a smoothed posterior p(s_t, h_t | v_{1:T}).
We initially derive this for the case of a single Gaussian representation—the extension to the mixture case is straightforward and given in Section (2.3). Our derivation holds for both the SLDS and aSLDS. We approximate the smoothed posterior p(h_t | s_t, v_{1:T}) by a Gaussian with mean g(s_t) and covariance G(s_t), and our aim is to find a recursion for these parameters. A useful starting point is the exact relation

p(h_t, s_t | v_{1:T}) = ∑_{s_{t+1}} p(s_{t+1} | v_{1:T}) p(h_t | s_t, s_{t+1}, v_{1:T}) p(s_t | s_{t+1}, v_{1:T}).

The term p(h_t | s_t, s_{t+1}, v_{1:T}) may be computed as

p(h_t | s_t, s_{t+1}, v_{1:T}) = ∫_{h_{t+1}} p(h_t, h_{t+1} | s_t, s_{t+1}, v_{1:T})
    = ∫_{h_{t+1}} p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:T}) p(h_{t+1} | s_t, s_{t+1}, v_{1:T})
    = ∫_{h_{t+1}} p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}) p(h_{t+1} | s_t, s_{t+1}, v_{1:T})   (8)

which is in the form of a recursion. This recursion therefore requires p(h_{t+1} | s_t, s_{t+1}, v_{1:T}), which we can write as

p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) ∝ p(h_{t+1} | s_{t+1}, v_{1:T}) p(s_t | s_{t+1}, h_{t+1}, v_{1:t}).   (9)

The above recursions represent the exact computation of the smoothed posterior. In our approximate treatment, we replace all quantities p with their corresponding approximations p̃. A difficulty is that the functional form of p̃(s_t | s_{t+1}, h_{t+1}, v_{1:t}) in the approximation of Equation (9) is not squared exponential in h_{t+1}, so that p̃(h_{t+1} | s_t, s_{t+1}, v_{1:T}) will not be a mixture of Gaussians.⁶ One possibility would be to approximate the non-Gaussian p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) (dropping the tilde notation) by a Gaussian (mixture) by minimizing the Kullback-Leibler divergence between the two, or performing moment matching in the case of a single Gaussian. A simpler alternative is to make the assumption p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) ≈ p(h_{t+1} | s_{t+1}, v_{1:T}), see Figure (3). This is a considerable simplification since p(h_{t+1} | s_{t+1}, v_{1:T}) is already known from the previous backward recursion.
Under this assumption, the recursion becomes

p(h_t, s_t | v_{1:T}) ≈ ∑_{s_{t+1}} p(s_{t+1} | v_{1:T}) p(s_t | s_{t+1}, v_{1:T}) ⟨p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})}.   (10)

We call the procedure based on Equation (10) Expectation Correction (EC), since it 'corrects' the filtered results, which themselves are formed from propagating expectations. In Appendix (E) we show how EC is equivalent to a partial discrete-continuous factorized approximation. Equation (10) forms the basis of the EC Backward Pass. However, similar to the ADF Forward Pass, the number of mixture components needed to represent the posterior in this recursion grows exponentially as we go backwards in time. The strategy we take to deal with this is a form of Assumed Density Smoothing, in which Equation (10) is interpreted as a propagated dynamics reversal, which will subsequently be collapsed back to an assumed family of distributions—see Figure (4). How we implement the recursion for the continuous and discrete factors is detailed below.⁷

6. In the exact calculation, p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) is a mixture of Gaussians, since p(s_t | s_{t+1}, h_{t+1}, v_{1:t}) = p(s_t, s_{t+1}, h_{t+1}, v_{1:T}) / p(s_{t+1}, h_{t+1}, v_{1:T}), so that the mixture-of-Gaussians denominator p(s_{t+1}, h_{t+1}, v_{1:T}) cancels with the first term in Equation (9), leaving a mixture of Gaussians. However, since in Equation (9) the two terms p(h_{t+1} | s_{t+1}, v_{1:T}) and p(s_t | s_{t+1}, h_{t+1}, v_{1:t}) are replaced by approximations, this cancellation is not guaranteed.

7. Equation (10) has the pleasing form of an RTS Backward Pass for the continuous part (analogous to the LDS case), and a discrete smoother (analogous to a smoother recursion for the HMM). In the Forward-Backward algorithm for the HMM (Rabiner, 1989), the posterior γ_t ≡ p(s_t | v_{1:T}) is formed from the product of α_t ≡ p(s_t | v_{1:t}) and β_t ≡ p(v_{t+1:T} | s_t). This approach is also analogous to EP (Heskes and Zoeter, 2002).
In the correction approach, a direct recursion for γ_t in terms of γ_{t+1} and α_t is formed, without explicitly defining β_t. The two approaches to inference are known as α-β and α-γ recursions.

Figure 3: Our Backward Pass approximates p(h_{t+1} | s_{t+1}, s_t, v_{1:T}) by p(h_{t+1} | s_{t+1}, v_{1:T}). Motivation for this is that s_t only influences h_{t+1} through h_t. However, h_t will most likely be heavily influenced by v_{1:t}, so that not knowing the state of s_t is likely to be of secondary importance. The darker shaded node is the variable we wish to find the posterior state of. The lighter shaded nodes are variables in known states, and the hashed node a variable whose state is indeed known but assumed unknown for the approximation.

EVALUATING ⟨p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})}

⟨p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})} is a Gaussian in h_t, whose statistics we will now compute. First we find p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}), which may be obtained from the joint distribution

p(h_t, h_{t+1} | s_t, s_{t+1}, v_{1:t}) = p(h_{t+1} | h_t, s_{t+1}) p(h_t | s_t, v_{1:t})   (11)

which itself can be found using the forward dynamics from the filtered estimate p(h_t | s_t, v_{1:t}). The statistics for the marginal p(h_t | s_t, s_{t+1}, v_{1:t}) are simply those of p(h_t | s_t, v_{1:t}), since s_{t+1} carries no extra information about h_t.⁸ The remaining statistics are the mean of h_{t+1}, the covariance of h_{t+1} and the cross-variance between h_t and h_{t+1}:

⟨h_{t+1}⟩ = A(s_{t+1}) f_t(s_t),
Σ_{t+1,t+1} = A(s_{t+1}) F_t(s_t) A^T(s_{t+1}) + Σ_h(s_{t+1}),
Σ_{t+1,t} = A(s_{t+1}) F_t(s_t).

Given the statistics of Equation (11), we may now condition on h_{t+1} to find p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}).
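This conditioning step, like the one on v_{t+1} in the Forward Pass (Equation (5)), is the standard Gaussian conditioning formula. A minimal numpy illustration (function and argument names are mine) for conditioning a joint Gaussian over a pair of variables (x, y) on an observed y:

```python
import numpy as np

def condition_gaussian(mu_x, mu_y, S_xx, S_yy, S_yx, y):
    """Condition a joint Gaussian p(x, y) on an observed y:
         mean = mu_x + S_xy S_yy^{-1} (y - mu_y)
         cov  = S_xx - S_xy S_yy^{-1} S_yx
       where S_yx = cov(y, x) and S_xy = S_yx^T (S_yy symmetric)."""
    K = np.linalg.solve(S_yy, S_yx).T     # 'gain' K = S_xy S_yy^{-1}
    mean = mu_x + K @ (y - mu_y)
    cov = S_xx - K @ S_yx
    return mean, cov
```

Using `solve` rather than forming the explicit inverse of S_yy is the usual numerically safer choice.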
Doing so effectively constitutes a reversal of the dynamics,

h_t = \overleftarrow{A}(s_t, s_{t+1}) h_{t+1} + \overleftarrow{η}(s_t, s_{t+1})

where \overleftarrow{A}(s_t, s_{t+1}) and \overleftarrow{η}(s_t, s_{t+1}) ~ N(\overleftarrow{m}(s_t, s_{t+1}), \overleftarrow{Σ}(s_t, s_{t+1})) are easily found using the conditioned Gaussian results in Appendix (C)—see also Algorithm (5). Averaging the reversed dynamics over p(h_{t+1} | s_{t+1}, v_{1:T}), we obtain a Gaussian in h_t for ⟨p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})} with statistics

μ_t = \overleftarrow{A}(s_t, s_{t+1}) g(s_{t+1}) + \overleftarrow{m}(s_t, s_{t+1}),   Σ_{t,t} = \overleftarrow{A}(s_t, s_{t+1}) G(s_{t+1}) \overleftarrow{A}^T(s_t, s_{t+1}) + \overleftarrow{Σ}(s_t, s_{t+1}).

These equations directly mirror the RTS Backward Pass, see Algorithm (5).

8. Integrating over h_{t+1} means that the information from s_{t+1} passing through h_{t+1} via the term p(h_{t+1} | s_{t+1}, h_t) vanishes. Also, since s_t is known, no information from s_{t+1} passes through s_t to h_t.

Figure 4: Structure of the Backward Pass for mixtures. Given the smoothed information at time-step t + 1, we need to work backwards to 'correct' the filtered estimate at time t.

EVALUATING p(s_t | s_{t+1}, v_{1:T})

The main departure of EC from previous methods is in treating the term

p(s_t | s_{t+1}, v_{1:T}) = ⟨p(s_t | h_{t+1}, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})}.   (12)

The term p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) is given by

p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) = p(h_{t+1} | s_t, s_{t+1}, v_{1:t}) p(s_t, s_{t+1} | v_{1:t}) / ∑_{s_t} p(h_{t+1} | s_t, s_{t+1}, v_{1:t}) p(s_t, s_{t+1} | v_{1:t}).   (13)

Here p(s_t, s_{t+1} | v_{1:t}) = p(s_{t+1} | s_t, v_{1:t}) p(s_t | v_{1:t}), where p(s_{t+1} | s_t, v_{1:t}) occurs in the Forward Pass, Equation (7). In Equation (13), p(h_{t+1} | s_{t+1}, s_t, v_{1:t}) is found by marginalizing Equation (11). Performing the average over p(h_{t+1} | s_{t+1}, v_{1:T}) in Equation (12) may be achieved by any numerical integration method desired. Below we outline a crude approximation that is fast and often performs surprisingly well.
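This crude approximation, described next, amounts to evaluating the Gaussian of Equation (13) at the smoothed mean of h_{t+1} and reweighting the switch prior. In numpy it is only a few lines; the function and argument names below are mine, and the computation is done in log space for numerical safety:

```python
import numpy as np

def mean_approx_switch_weights(prior, z, Sigma):
    """Mean approximation to the average in Equation (12), for a fixed
    s_{t+1}: evaluate each Gaussian p(h_{t+1} | s_t, s_{t+1}, v_{1:t})
    at the smoothed mean of h_{t+1} and reweight the prior.

    prior: (S,)       p(s_t | s_{t+1}, v_{1:t}) for each s_t
    z:     (S, H)     difference vectors z_{t+1}(s_t, s_{t+1})
    Sigma: (S, H, H)  filtered covariances Sigma(s_t, s_{t+1} | v_{1:t})
    """
    logw = np.empty(len(prior))
    for s in range(len(prior)):
        _, logdet = np.linalg.slogdet(Sigma[s])
        quad = z[s] @ np.linalg.solve(Sigma[s], z[s])
        logw[s] = np.log(prior[s]) - 0.5 * (quad + logdet)
    w = np.exp(logw - logw.max())          # stabilized exponentiation
    return w / w.sum()                     # normalize over s_t
```

When all difference vectors z are zero, the smoothed mean carries no extra evidence and the prior is returned unchanged; a larger mismatch z for some s_t pushes its weight down.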
MEAN APPROXIMATION

A simple approximation of Equation (12) is to evaluate the integrand at the mean value of the averaging distribution. Replacing h_{t+1} in Equation (13) by its mean gives the simple approximation

⟨p(s_t | h_{t+1}, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})} ≈ (1/Z) · exp(−½ z_{t+1}^T(s_t, s_{t+1}) Σ^{-1}(s_t, s_{t+1} | v_{1:t}) z_{t+1}(s_t, s_{t+1})) / √(det Σ(s_t, s_{t+1} | v_{1:t})) · p(s_t | s_{t+1}, v_{1:t})

where z_{t+1}(s_t, s_{t+1}) ≡ ⟨h_{t+1} | s_{t+1}, v_{1:T}⟩ − ⟨h_{t+1} | s_t, s_{t+1}, v_{1:t}⟩ and Z ensures normalization over s_t. This result comes simply from the fact that in Equation (12) we have a Gaussian with mean ⟨h_{t+1} | s_t, s_{t+1}, v_{1:t}⟩ and covariance Σ(s_t, s_{t+1} | v_{1:t}), being the filtered covariance of h_{t+1} given s_t, s_{t+1} and the observations v_{1:t}, which may be taken from Σ_hh in Equation (5). Evaluating this Gaussian at the specific point ⟨h_{t+1} | s_{t+1}, v_{1:T}⟩, we arrive at the above expression. An alternative to this simple mean approximation is to sample from the Gaussian p(h_{t+1} | s_{t+1}, v_{1:T}), which has the potential advantage that covariance information is used.⁹ Other methods such as variational approximations to this average (Jaakkola and Jordan, 1996) or the unscented transform (Julier and Uhlmann, 1997) may be employed if desired.

9. This is a form of exact sampling, since drawing samples from a Gaussian is easy. This should not be confused with meaning that this use of sampling renders EC a sequential Monte Carlo sampling scheme.

Algorithm 2 aSLDS: EC Backward Pass (single Gaussian case, I = J = 1). Approximates p(s_t | v_{1:T}) and p(h_t | s_t, v_{1:T}) ≡ N(g_t(s_t), G_t(s_t)). This routine needs the results from Algorithm (1) for I = 1.

G_T ← F_T, g_T ← f_T
for t ← T − 1 to 1 do
    for s ← 1 to S, s′ ← 1 to S do
        (μ, Σ)(s, s′) = LDSBACKWARD(g_{t+1}(s′), G_{t+1}(s′), f_t(s), F_t(s), θ_{t+1}(s′))
        p(s | s′) = ⟨p(s_t = s | h_{t+1}, s_{t+1} = s′, v_{1:t})⟩_{p(h_{t+1} | s_{t+1} = s′, v_{1:T})}
        p(s, s′ | v_{1:T}) ← p(s_{t+1} = s′ | v_{1:T}) p(s | s′)
    end for
    for s_t ← 1 to S do
        Collapse the mixture defined by weights p(s_{t+1} = s′ | s_t, v_{1:T}) ∝ p(s_t, s′ | v_{1:T}), means μ(s_t, s′) and covariances Σ(s_t, s′) to a single Gaussian. This defines the new means g_t(s_t) and covariances G_t(s_t).
        p(s_t | v_{1:T}) ← ∑_{s′} p(s_t, s′ | v_{1:T})
    end for
end for

CLOSING THE RECURSION

We have now computed both the continuous and discrete factors in Equation (10), which we wish to use to write the smoothed estimate in the form p(h_t, s_t | v_{1:T}) = p(s_t | v_{1:T}) p(h_t | s_t, v_{1:T}). The distribution p(h_t | s_t, v_{1:T}) is readily obtained from the joint Equation (10) by conditioning on s_t to form the mixture

p(h_t | s_t, v_{1:T}) = ∑_{s_{t+1}} p(s_{t+1} | s_t, v_{1:T}) p(h_t | s_t, s_{t+1}, v_{1:T})

which may be collapsed to a single Gaussian (or mixture if desired). As in the Forward Pass, this collapse implicitly defines the Gaussian mean g(s_t) and covariance G(s_t). The smoothed posterior p(s_t | v_{1:T}) is given by

p(s_t | v_{1:T}) = ∑_{s_{t+1}} p(s_{t+1} | v_{1:T}) p(s_t | s_{t+1}, v_{1:T}) = ∑_{s_{t+1}} p(s_{t+1} | v_{1:T}) ⟨p(s_t | h_{t+1}, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})}.   (14)

The algorithm for the single Gaussian case is presented in Algorithm (2).

NUMERICAL STABILITY

Numerical stability is a concern even in the LDS, and the same is to be expected for the aSLDS. Since the LDS recursions LDSFORWARD and LDSBACKWARD are embedded within the EC algorithm, we may immediately take advantage of the large body of work on stabilizing the LDS recursions, such as the Joseph form (Grewal and Andrews, 1993), or the square root forms (Park and Kailath, 1996; Verhaegen and Van Dooren, 1986).

RELAXING EC

The conditional independence assumption p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) ≈ p(h_{t+1} | s_{t+1}, v_{1:T}) is not strictly necessary in EC. We motivate it by computational simplicity, since finding an appropriate moment matching approximation of p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) in Equation (9) requires a relatively expensive non-Gaussian integration.
If we therefore did treat p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) more correctly, the central assumption in this relaxed version of EC would be a collapse to a mixture of Gaussians (the additional computation of Equation (12) may usually be numerically evaluated to high precision). Whilst we did not do so, implementing this should not give rise to numerical instabilities since no potential divisions are required, merely the estimation of moments. In the experiments presented here, we did not pursue this option, since we believe that the effect of this conditional independence assumption is relatively weak.

INCONSISTENCIES IN THE APPROXIMATION

The recursion Equation (8), upon which EC depends, makes use of the Forward Pass results, and a subtle issue arises about possible inconsistencies in the Forward and Backward approximations. For example, under the conditional independence assumption in the Backward Pass, p(h_T | s_{T-1}, s_T, v_{1:T}) ≈ p(h_T | s_T, v_{1:T}), which is in contradiction to Equation (5), which states that the approximation to p(h_T | s_{T-1}, s_T, v_{1:T}) will depend on s_{T-1}. Similar contradictions occur also for the relaxed version of EC. Such potential inconsistencies arise because of the approximations made, and should not be considered as separate approximations in themselves. Furthermore, these inconsistencies will most likely be strongest at the end of the chain, t ≈ T, since only then is Equation (8) in direct contradiction to Equation (5). Such potential inconsistencies arise since EC is not founded on a consistency criterion, unlike EP—see Section (3)—but rather on an approximation of the exact recursions. Our experience is that, compared to EP, which attempts to ensure consistency based on multiple sweeps through the graph, such inconsistencies are a small price to pay compared to the numerical stability advantages of EC.
2.3 Using Mixtures in the Backward Pass

The extension to the mixture case is straightforward, based on the representation

p(h_t | s_t, v_{1:T}) ≈ ∑_{j_t=1}^{J_t} p(h_t | s_t, j_t, v_{1:T}) p(j_t | s_t, v_{1:T}).

Analogously to the case with a single component,

p(h_t, s_t | v_{1:T}) = ∑_{i_t, j_{t+1}, s_{t+1}} p(s_{t+1} | v_{1:T}) p(j_{t+1} | s_{t+1}, v_{1:T}) p(h_t | j_{t+1}, s_{t+1}, i_t, s_t, v_{1:T}) · ⟨p(i_t, s_t | h_{t+1}, j_{t+1}, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | j_{t+1}, s_{t+1}, v_{1:T})}.

The average in the last line of the above equation can be tackled using the same techniques as outlined in the single Gaussian case. To approximate p(h_t | j_{t+1}, s_{t+1}, i_t, s_t, v_{1:T}), we consider this as the marginal of the joint distribution

p(h_t, h_{t+1} | i_t, s_t, j_{t+1}, s_{t+1}, v_{1:T}) = p(h_t | h_{t+1}, i_t, s_t, j_{t+1}, s_{t+1}, v_{1:t}) p(h_{t+1} | i_t, s_t, j_{t+1}, s_{t+1}, v_{1:T}).

Algorithm 3 aSLDS: EC Backward Pass. Approximates p(s_t | v_{1:T}) and p(h_t | s_t, v_{1:T}) ≡ ∑_{j_t=1}^{J_t} u_t(j_t, s_t) N(g_t(j_t, s_t), G_t(j_t, s_t)) using a mixture of Gaussians. J_T = I_T, J_t ≤ S × I_t × J_{t+1}. This routine needs the results from Algorithm (1).

G_T ← F_T, g_T ← f_T, u_T ← w_T (*)
for t ← T − 1 to 1 do
    for s ← 1 to S, s′ ← 1 to S, i ← 1 to I_t, j′ ← 1 to J_{t+1} do
        (μ, Σ)(i, s, j′, s′) = LDSBACKWARD(g_{t+1}(j′, s′), G_{t+1}(j′, s′), f_t(i, s), F_t(i, s), θ_{t+1}(s′))
        p(i, s | j′, s′) = ⟨p(s_t = s, i_t = i | h_{t+1}, s_{t+1} = s′, j_{t+1} = j′, v_{1:t})⟩_{p(h_{t+1} | s_{t+1} = s′, j_{t+1} = j′, v_{1:T})}
        p(i, s, j′, s′ | v_{1:T}) ← p(s_{t+1} = s′ | v_{1:T}) u_{t+1}(j′, s′) p(i, s | j′, s′)
    end for
    for s_t ← 1 to S do
        Collapse the mixture defined by weights p(i_t = i, s_{t+1} = s′, j_{t+1} = j′ | s_t, v_{1:T}) ∝ p(i, s_t, j′, s′ | v_{1:T}), means μ(i_t, s_t, j′, s′) and covariances Σ(i_t, s_t, j′, s′) to a mixture with J_t components. This defines the new means g_t(j_t, s_t), covariances G_t(j_t, s_t) and mixture weights u_t(j_t, s_t).
        p(s_t | v_{1:T}) ← ∑_{i_t, j′, s′} p(i_t, s_t, j′, s′ | v_{1:T})
    end for
end for
(*) If J_T < I_T then the initialization is formed by collapsing the Forward Pass results at time T to J_T components.
As in the case of a single mixture, the problematic term is p(h_{t+1} | i_t, s_t, j_{t+1}, s_{t+1}, v_{1:T}). Analogously to before, we may make the assumption

p(h_{t+1} | i_t, s_t, j_{t+1}, s_{t+1}, v_{1:T}) ≈ p(h_{t+1} | j_{t+1}, s_{t+1}, v_{1:T})

meaning that information about the current switch state s_t, i_t is ignored.¹⁰ We can then form

p(h_t | s_t, v_{1:T}) = ∑_{i_t, j_{t+1}, s_{t+1}} p(i_t, j_{t+1}, s_{t+1} | s_t, v_{1:T}) p(h_t | i_t, s_t, j_{t+1}, s_{t+1}, v_{1:T}).

This mixture can then be collapsed to a smaller mixture using any method of choice, to give

p(h_t | s_t, v_{1:T}) ≈ ∑_{j_t=1}^{J_t} p(h_t | j_t, s_t, v_{1:T}) p(j_t | s_t, v_{1:T}).

The collapse procedure implicitly defines the means g(j_t, s_t) and covariances G(j_t, s_t) of the smoothed approximation. A recursion for the switches follows analogously to the single-component Backward Pass. The resulting algorithm is presented in Algorithm (3), which includes using mixtures in both Forward and Backward Passes. Note that if J_T < I_T, an extra initial collapse of the I_T-component Forward Pass Gaussian mixture at time T to J_T components is required. EC has time complexity O(S²IJK), where S is the number of switch states, I and J are the numbers of Gaussians used in the Forward and Backward Passes, and K is the time to compute the exact Kalman smoother for the system with a single switch state.

10. As in the single-component case, in principle, this assumption may be relaxed and a moment matching approximation performed instead.

3. Relation to Other Methods

Approximate inference in the SLDS is a long-standing research topic, generating an extensive literature. See Lerner (2002) and Zoeter (2005) for reviews of previous work. A brief summary of some of the major existing approaches follows.

Assumed Density Filtering Since the exact filtered estimate p(h_t | s_t, v_{1:t}) is an (exponentially large) mixture of Gaussians, a useful remedy is to project at each stage of the recursion Equation (3) back to a limited set of K Gaussians.
This is a Gaussian Sum Approximation (Alspach and Sorenson, 1972), and is a form of Assumed Density Filtering (ADF) (Minka, 2001). Similarly, Generalized Pseudo Bayes 2 (GPB2) (Bar-Shalom and Li, 1998) also performs filtering by collapsing to a mixture of Gaussians. This approach to filtering is also taken in Lerner et al. (2000), which performs the collapse by removing spatially similar Gaussians, thereby retaining diversity.

Several smoothing approaches directly use the results from ADF. The most popular is Kim's method, which updates the filtered posterior weights to form the smoother (Kim, 1994; Kim and Nelson, 1999). In both EC and Kim's method, the approximation p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) ≈ p(h_{t+1} | s_{t+1}, v_{1:T}) is used to form a numerically simple Backward Pass. The other approximation in EC is to numerically compute the average in Equation (14). In Kim's method, however, an update for the discrete variables is formed by replacing the required term in Equation (14) by

⟨p(s_t | h_{t+1}, s_{t+1}, v_{1:t})⟩_{p(h_{t+1} | s_{t+1}, v_{1:T})} ≈ p(s_t | s_{t+1}, v_{1:t}).   (15)

This approximation¹¹ decouples the discrete Backward Pass in Kim's method from the continuous dynamics, since p(s_t | s_{t+1}, v_{1:t}) ∝ p(s_{t+1} | s_t) p(s_t | v_{1:t}) / p(s_{t+1} | v_{1:t}) can be computed simply from the filtered results alone (the continuous Backward Pass in Kim's method, however, does depend on the discrete Backward Pass). The fundamental difference between EC and Kim's method is that the approximation (15) is not required by EC. The EC Backward Pass therefore makes fuller use of the future information, resulting in a recursion which intimately couples the continuous and discrete variables. The resulting effect on the quality of the approximation can be profound, as we will see in the experiments. Kim's smoother corresponds to a potentially severe loss of future information and, in general, cannot be expected to improve much on the filtered results from ADF.

11. In the HMM this is exact, but in the SLDS the future observations carry information about s_t.

The more recent work of Lerner et al. (2000) is similar in spirit to Kim's method, whereby the contribution from the continuous variables is ignored in forming an approximate recursion for the smoothed p(s_t | v_{1:T}). The main difference is that for the discrete variables, Kim's method is based on a correction smoother (Rauch et al., 1965), whereas Lerner's method uses a Belief Propagation style Backward Pass (Jordan, 1998). Neither method correctly integrates information from the continuous variables. How to form a recursion for a mixture approximation which does not ignore information coming through the continuous hidden variables is a central contribution of our work.

Kitagawa (1994) used a two-filter method in which the dynamics of the chain are reversed. Essentially, this corresponds to a Belief Propagation method which defines a Gaussian sum approximation for p(v_{t+1:T} | h_t, s_t). However, since this is not a density in h_t, s_t, but rather a conditional likelihood, formally one cannot treat this using density propagation methods. In Kitagawa (1994), the singularities resulting from incorrectly treating p(v_{t+1:T} | h_t, s_t) as a density are heuristically finessed.

                                                                          EC   Relaxed EC   EP   Kim
Mixture collapsing to single                                               x        x        x    x
Mixture collapsing to mixture                                              x        x
Cond. indep. p(h_{t+1}|s_t,s_{t+1},v_{1:T}) ≈ p(h_{t+1}|s_{t+1},v_{1:T})   x                      x
Approx. of p(s_t|s_{t+1},v_{1:T}), average Equation (12)                   x        x
Kim's Backward Pass                                                                               x
Mixture approx. of p(h_{t+1}|s_t,s_{t+1},v_{1:T}), Equation (9)                     x

Table 1: Relation between methods. In the EC methods, the mean approximation may be replaced by an essentially exact Monte Carlo approximation to Equation (12). EP refers to the Single Gaussian approximation in Heskes and Zoeter (2002). In the case of using Relaxed EC with collapse to a single Gaussian, EC and EP are not equivalent, since the underlying recursions on which the two methods are based are fundamentally different.
Expectation Propagation EP (Minka, 2001), as applied to the SLDS, corresponds to an approximate implementation of Belief Propagation¹² (Jordan, 1998; Heskes and Zoeter, 2002). EP is the most sophisticated rival to Kim's method and EC, since it makes the fewest assumptions. For this reason, we'll explain briefly how EP works. Unlike EC, which is based on an approximation of the exact filtering and smoothing recursions, EP is based on a consistency criterion. First, let's simplify the notation, and write the distribution as p = ∏_t φ(x_{t-1}, v_{t-1}, x_t, v_t), where x_t ≡ h_t ⊗ s_t, and φ(x_{t-1}, v_{t-1}, x_t, v_t) ≡ p(x_t | x_{t-1}) p(v_t | x_t). EP defines 'messages' ρ, λ¹³ which contain information from past and future observations respectively.¹⁴ Explicitly, we define ρ_t(x_t) ∝ p(x_t | v_{1:t}) to represent knowledge about x_t given all information from time 1 to t. Similarly, λ_t(x_t) represents knowledge about state x_t given all observations from time T to time t + 1. In the sequel, we drop the time suffix for notational clarity. We define λ(x_t) implicitly through the requirement that the marginal smoothed inference is given by

p(x_t | v_{1:T}) ∝ ρ(x_t) λ(x_t).   (16)

Hence λ(x_t) ∝ p(v_{t+1:T} | x_t, v_{1:t}) = p(v_{t+1:T} | x_t) and represents all future knowledge about p(x_t | v_{1:T}). From this,

p(x_{t-1}, x_t | v_{1:T}) ∝ ρ(x_{t-1}) φ(x_{t-1}, v_{t-1}, x_t, v_t) λ(x_t).   (17)

12. Non-parametric belief propagation (Sudderth et al., 2003), which performs approximate inference in general continuous distributions, is also related to EP applied to the aSLDS, in the sense that the messages cannot be represented easily, and are approximated by mixtures of Gaussians.

13. These correspond to the α and β messages in the Hidden Markov Model framework (Rabiner, 1989).

14. In this Belief Propagation/EP viewpoint, the backward messages, traditionally labeled as β, correspond to conditional likelihoods, and not distributions.
In contrast, in the EC approach, which is effectively a so-called α-γ recursion, the backward γ messages correspond to posterior distributions.

Taking the above equation as a starting point, we have

p(x_t | v_{1:T}) ∝ ∫_{x_{t-1}} ρ(x_{t-1}) φ(x_{t-1}, v_{t-1}, x_t, v_t) λ(x_t).

Consistency with Equation (16) requires (neglecting irrelevant scalings)

ρ(x_t) λ(x_t) ∝ ∫_{x_{t-1}} ρ(x_{t-1}) φ(x_{t-1}, v_{t-1}, x_t, v_t) λ(x_t).

Similarly, we can integrate Equation (17) over x_t to get the marginal at time t − 1 which, by consistency, should be proportional to ρ(x_{t-1}) λ(x_{t-1}). Hence

ρ(x_t) ∝ [∫_{x_{t-1}} ρ(x_{t-1}) φ(x_{t-1}, x_t) λ(x_t)] / λ(x_t),   λ(x_{t-1}) ∝ [∫_{x_t} ρ(x_{t-1}) φ(x_{t-1}, x_t) λ(x_t)] / ρ(x_{t-1})   (18)

where the divisions can be interpreted as preventing over-counting of messages. In an exact implementation, the common factors in the numerator and denominator cancel. EP addresses the fact that λ(x_t) is not a distribution by using Equation (18) to form the projection (or 'collapse'). In the numerator, ∫_{x_{t-1}} ρ(x_{t-1}) φ(x_{t-1}, x_t) λ(x_t) and ∫_{x_t} ρ(x_{t-1}) φ(x_{t-1}, x_t) λ(x_t) represent p(x_t | v_{1:T}) and p(x_{t-1} | v_{1:T}). Since these are distributions (an indexed mixture of Gaussians in the SLDS), they may be projected/collapsed to a single indexed Gaussian. The update for the ρ message is then found by division by the λ potential, and vice versa. In EP the explicit division of potentials only makes sense for members of the exponential family. More complex methods could be envisaged in which, rather than an explicit division, the new messages are defined by minimizing some measure of divergence between ρ(x_t) λ(x_t) and ∫_{x_{t-1}} ρ(x_{t-1}) φ(x_{t-1}, x_t) λ(x_t), such as the Kullback-Leibler divergence. In this way, non-exponential family approximations (such as mixtures of Gaussians) may be considered.
Whilst this is certainly feasible, it is somewhat unattractive computationally since it would require an expensive minimization for each time-step. For the single Gaussian case, in order to perform the division, the potentials in the numerator and denominator are converted to their canonical representations. To form the ρ update, the result of the division is then reconverted back to a moment representation. The resulting recursions, due to the approximation, are no longer independent, and Heskes and Zoeter (2002) show that using more than a single Forward and Backward sweep often improves on the quality of the approximation. This coupling is a departure from the exact recursions, which should remain independent. Applied to the SLDS, EP suffers from severe numerical instabilities (Heskes and Zoeter, 2002), and finding a way to minimize the corresponding EP free energy in an efficient, robust and guaranteed way remains an open problem. Our experience is that current implementations of EP are unsuitable for large-scale time-series applications. Damping the parameter updates is one suggested approach to heuristically improve convergence. The source of these numerical instabilities is not well understood since, even in cases when the posterior appears uni-modal, the method is problematic. The frequent conversions between moment and canonical parameterizations of Gaussians are most likely at the root of the difficulties. An interesting comparison here is between Lauritzen's original method for exact computation on conditional Gaussian distributions (for which the SLDS is a special case) (Lauritzen, 1992), which is numerically unstable due to conversion between moment and canonical representations, and Lauritzen and Jensen (2001), which improves stability by avoiding canonical parameterizations.
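The moment/canonical conversions blamed above for numerical fragility are simple to write down, which makes the fragility easy to see: each direction requires a matrix inversion, so a round trip through an ill-conditioned covariance loses precision. A numpy sketch (illustrative only; not the paper's implementation):

```python
import numpy as np

def moment_to_canonical(mu, Sigma):
    """Moment (mu, Sigma) -> canonical (eta, Lambda), with
    Lambda = Sigma^{-1} (precision) and eta = Lambda mu."""
    Lam = np.linalg.inv(Sigma)
    return Lam @ mu, Lam

def canonical_to_moment(eta, Lam):
    """Canonical (eta, Lambda) -> moment (mu, Sigma), with
    Sigma = Lambda^{-1} and mu = Sigma eta."""
    Sigma = np.linalg.inv(Lam)
    return Sigma @ eta, Sigma
```

Multiplying or dividing Gaussian potentials is a simple addition or subtraction of (eta, Lambda) pairs, which is why EP works in the canonical form; the repeated inversions needed to move back to moment form for the projection step are where conditioning problems accumulate.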
Variational Methods. Ghahramani and Hinton (1998) used a variational method which approximates the joint distribution $p(h_{1:T}, s_{1:T}|v_{1:T})$ rather than the marginal $p(h_t, s_t|v_{1:T})$; related work is presented in Lee et al. (2004). This is a disadvantage compared with methods that directly approximate the marginal. Variational methods are nevertheless potentially attractive, since they are able to exploit structural properties of the distribution, such as a factored discrete state-transition. In this article, we concentrate on the case of a small number of states $S$ and hence will not consider variational methods further here.15

Sequential Monte Carlo (Particle Filtering). These methods form an approximate implementation of Equation (3), using a sum of delta functions to represent the posterior; see, for example, Doucet et al. (2001). Whilst potentially powerful, these non-analytic methods typically suffer in high-dimensional hidden spaces, since they are often based on naive importance sampling, which restricts their practical use. ADF is generally preferable to Particle Filtering, since in ADF the approximation is a mixture of non-trivial distributions, which better captures the variability of the posterior. Rao-Blackwellized Particle Filters (Doucet et al., 2000) attempt to alleviate the difficulty of sampling in high-dimensional state spaces by explicitly integrating over the continuous state.

Non-Sequential Monte Carlo. For fixed switches $s_{1:T}$, $p(v_{1:T}|s_{1:T})$ is easily computable, since it is just the likelihood of an LDS. This observation raises the possibility of sampling directly from the posterior $p(s_{1:T}|v_{1:T}) \propto p(v_{1:T}|s_{1:T})\,p(s_{1:T})$. Many sampling methods could be applied in this case; the most immediate is Gibbs sampling, in which a sample for each $t$ is drawn from $p(s_t|s_{\setminus t}, v_{1:T})$; see Neal (1993) for a general reference and Carter and Kohn (1996) for an application to the SLDS.
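As a rough sketch, one forward sweep of such a Gibbs sampler might look as follows. This is a hypothetical illustration: `loglik`, standing in for the LDS log-likelihood $\log p(v_{1:T}|s_{1:T})$, and `log_prior` are assumed to be supplied by the user.

```python
import numpy as np

def gibbs_sweep(s, loglik, log_prior, S, rng):
    """One forward sweep of single-site Gibbs sampling over the switch path.

    loglik(s)    : log p(v_{1:T} | s_{1:T})  (an LDS likelihood; assumed given)
    log_prior(s) : log p(s_{1:T})            (e.g. a Markov-chain prior)
    S            : number of switch states
    """
    s = s.copy()
    for t in range(len(s)):
        logp = np.empty(S)
        for k in range(S):           # score every setting of s_t, s_{\t} fixed
            s[t] = k
            logp[k] = loglik(s) + log_prior(s)
        p = np.exp(logp - logp.max())  # normalize in a numerically safe way
        p /= p.sum()
        s[t] = rng.choice(S, p=p)      # sample s_t ~ p(s_t | s_{\t}, v_{1:T})
    return s
```

Each inner evaluation of `loglik` costs a full LDS filtering pass, which is what makes this procedure expensive on long time-series.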
This procedure may work well in practice provided that the initial setting of $s_{1:T}$ is in a region of high probability mass; otherwise, sampling by such individual coordinate updates may be extremely inefficient.

15. Lerner (2002) discusses an approach in the case of a large structured discrete state transition. Related ideas could also be used in EC.

[Figure 5 shows a typical sampled time-series for each of the two problems; axis data omitted.] Figure 5: SLDS: Throughout, $S = 2$, $V = 1$ (scalar observations), $T = 100$, with zero output bias. $A(s) = 0.9999 \cdot \mathrm{orth}(\mathrm{randn}(H, H))$, $B(s) = \mathrm{randn}(V, H)$, $\bar{v}_t \equiv 0$, $\bar{h}_1 = 10 \cdot \mathrm{randn}(H, 1)$, $\bar{h}_{t>1} = 0$, $\Sigma_1^h = I_H$, $p_1 = \text{uniform}$. The figures show typical examples for each of the two problems: (a) Easy problem. $H = 3$, $\Sigma^h(s) = I_H$, $\Sigma^v(s) = 0.1 I_V$, $p(s_{t+1}|s_t) \propto 1_{S \times S} + I_S$. (b) Hard problem. $H = 30$, $\Sigma^v(s) = 30 I_V$, $\Sigma^h(s) = 0.01 I_H$, $p(s_{t+1}|s_t) \propto 1_{S \times S}$.

4. Experiments

Our experiments examine the stability and accuracy of EC against several other methods on long time-series. In addition, we compare the absolute accuracy of EC as a function of the number of mixture components on a short time-series, where exact inference may be explicitly evaluated. Testing EC on a problem with a reasonably long temporal sequence $T$ is important, since numerical instabilities may not be apparent in time-series of just a few time-steps. To do this, we sequentially generate hidden states $h_t, s_t$ and observations $v_t$ from a given model. Then, given only the parameters of the model and the observations (but not any of the hidden states), the task is to infer $p(h_t|s_t, v_{1:T})$ and $p(s_t|v_{1:T})$. Since the exact computation is exponential in $T$, a formally exact evaluation of the method is infeasible. A simple alternative is to assume that the original sample states $s_{1:T}$ are the 'correct' inferred states, and compare our most probable posterior smoothed
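The data-generation step described above amounts to ancestral sampling from the SLDS, which can be sketched as follows (a minimal NumPy illustration with my own parameter naming, not the experiment code itself):

```python
import numpy as np

def sample_slds(T, A, B, Sh, Sv, Ptrans, p1, h1, rng):
    """Ancestrally sample s_{1:T}, h_{1:T}, v_{1:T} from an SLDS.

    A[s], B[s], Sh[s], Sv[s] : per-switch dynamics, emission and noise params
    Ptrans[i, j]             : p(s_{t+1} = j | s_t = i)
    p1, h1                   : initial switch distribution and hidden state
    """
    S = len(p1)
    H, V = h1.shape[0], B[0].shape[0]
    s = np.empty(T, dtype=int)
    h = np.empty((T, H))
    v = np.empty((T, V))
    s[0], h[0] = rng.choice(S, p=p1), h1
    for t in range(T):
        if t > 0:
            s[t] = rng.choice(S, p=Ptrans[s[t - 1]])          # switch transition
            h[t] = A[s[t]] @ h[t - 1] + rng.multivariate_normal(
                np.zeros(H), Sh[s[t]])                         # hidden dynamics
        v[t] = B[s[t]] @ h[t] + rng.multivariate_normal(
            np.zeros(V), Sv[s[t]])                             # noisy emission
    return s, h, v
```

With the noise covariances set to zero and a single switch state, the sampled observations reduce to the deterministic trajectory $v_t = B A^{t-1} h_1$, which gives a simple sanity check.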
estimates $\arg\max_{s_t} p(s_t|v_{1:T})$ with the assumed correct sample $s_t$.16 We look at two sets of experiments, one for the SLDS and one for the aSLDS. In both cases, scalar observations are used so that the complexity of the inference problem can be visually assessed.

16. We could also consider performance measures on the accuracy of $p(h_t|s_t, v_{1:T})$. However, we prefer to look at approximating $\arg\max_{s_t} p(s_t|v_{1:T})$, since the sampled discrete states are likely to correspond to the exact $\arg\max_{s_t} p(s_t|v_{1:T})$. In addition, if the posterior switch distribution is dominated by a single state $s^*_{1:T}$, then provided the switches are correctly estimated, the model reduces to an LDS, for which inference of the continuous hidden state is trivial.

[Figure 6 shows one error histogram per method (PF, RBPF, EP, ADFS, KimS, ECS, ADFM, KimM, ECM, Gibbs); bar data omitted.] Figure 6: SLDS 'Easy' problem: The number of errors in estimating a binary switch $p(s_t|v_{1:T})$ over a time series of length $T = 100$; 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. The histograms have been cut off at 20 errors in order to improve visualization. (PF) Particle Filter. (RBPF) Rao-Blackwellized PF. (EP) Expectation Propagation. (ADFS) Assumed Density Filtering using a single Gaussian. (KimS) Kim's smoother using the results from ADFS. (ECS) Expectation Correction using a single Gaussian ($I = J = 1$). (ADFM) ADF using a mixture of $I = 4$ Gaussians. (KimM) Kim's smoother using the results from ADFM. (ECM) Expectation Correction using a mixture with $I = J = 4$ components. In Gibbs sampling, we use the initialization from ADFM.
[Figure 7 shows one error histogram per method (PF, RBPF, EP, ADFS, KimS, ECS, ADFM, KimM, ECM, Gibbs); bar data omitted.] Figure 7: SLDS 'Hard' problem: The number of errors in estimating a binary switch $p(s_t|v_{1:T})$ over a time series of length $T = 100$; 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments.

SLDS EXPERIMENTS

We chose experimental conditions that, from the viewpoint of classical signal processing, are difficult, with changes in the switches occurring at a much higher rate than the typical frequencies in the signal. We consider two different toy SLDS experiments: the 'easy' problem corresponds to a low hidden dimension, $H = 3$, with low observation noise; the 'hard' problem corresponds to a high hidden dimension, $H = 30$, and high observation noise. See Figure (5) for details of the experimental setup. We compared methods using a single Gaussian and methods using multiple Gaussians; see Figure (6) and Figure (7). For EC we use the mean approximation for the numerical integration of Equation (12). For the Particle Filter, 1000 particles were used, with Kitagawa re-sampling (Kitagawa, 1996). For the Rao-Blackwellized Particle Filter (Doucet et al., 2000), 500 particles were used, with Kitagawa re-sampling. We included the Particle Filters merely as a point of comparison with ADF, since they are not designed to approximate the smoothed estimate. An alternative MCMC procedure is to perform Gibbs sampling of $p(s_{1:T}|v_{1:T})$ using $p(s_t|s_{\setminus t}, v_{1:T}) \propto p(v_{1:T}|s_{1:T})\,p(s_{1:T})$, where $p(v_{1:T}|s_{1:T})$ is simply the likelihood of an LDS; see for example Carter and Kohn (1996).17 We initialized the state $s_{1:T}$ using the most likely states $s_t$ from the filtered results with a Gaussian mixture (ADFM), and then swept forwards in time, sampling from $p(s_t|s_{\setminus t}, v_{1:T})$ until the end of the chain.
We then reversed direction, sampling from time $T$ back to time 1, and repeated this procedure 100 times, with the mean over the last 80 sweeps used as the posterior mean approximation. This procedure is expensive, since each sample requires computing the likelihood of an LDS defined on the whole time-series; it therefore scales as $GT^2$, where $G$ is the number of sweeps over the time series. Despite using a reasonable initialization, Gibbs sampling struggles to improve on the filtered results. We found that EP was numerically unstable and often struggled to converge. To encourage convergence, we used the damping method of Heskes and Zoeter (2002), performing 20 iterations with a damping factor of 0.5. The disappointing performance of EP is most likely due to conflicts

17. Carter and Kohn (1996) proposed an overly complex procedure for computing the likelihood $p(v_{1:T}|s_{1:T})$. This is simply the likelihood of an LDS (since $s_{1:T}$ is assumed known), and is readily computable using any of the standard procedures in the literature.

[Figure 8 shows one error histogram per method (PF, ADFS, ECS, ADFM, ECM); bar data omitted.] Figure 8: aSLDS: Histogram of the number of errors in estimating a binary switch $p(s_t|v_{1:T})$ over a time series of length $T = 100$; 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. Augmented SLDS results. ADFM used $I = 4$ Gaussians, and ECM used $I = J = 4$ Gaussians. We used 1000 samples to approximate Equation (12).

  I     J     error
  1     1     0.0989
  4     1     0.0624
  4     4     0.0365
  16    1     0.0440
  16    16    0.0130
  64    1     0.0440
  64    64    4.75e-4
  256   1     0.0440
  256   256   3.40e-8

Table 2: Errors in approximating the states for the multi-path problem, see Figure (9). The mean absolute deviation $|p_{ec}(s_t|v_{1:T}) - p_{exact}(s_t|v_{1:T})|$ averaged over the $S = 4$ states of $s_t$ and over the times $t = 1, \ldots$
$, 5$, computed for different numbers of mixture components in EC. The mean approximation of Equation (12) is used. The exact computation uses $S^{T-1} = 256$ mixtures.

resulting from numerical instabilities introduced by the frequent conversions between moment and canonical representations. The various algorithms differ widely in performance; see Figures (6,7). Not surprisingly, the best filtered results are given by ADF, since it is better able to represent the variance in the filtered posterior than the sampling methods. Unlike Kim's method, EC makes good use of the future information to clean up the filtered results considerably. One should bear in mind that EC, Kim's method and the Gibbs initialization all use the same ADF results. These results show that EC may dramatically improve on Kim's method, so that the small amount of extra work in making a numerical approximation of $p(s_t|s_{t+1}, v_{1:T})$, Equation (12), may bring significant benefits.

AUGMENTED SLDS EXPERIMENTS

In Figure (8), we chose a simple two-state ($S = 2$) transition distribution $p(s_{t+1} = 1|s_t, h_t) = \sigma\left(h_t^{\mathsf{T}} w(s_t)\right)$, where $\sigma(x) \equiv 1/(1 + e^{-x})$. Some care needs to be taken to construct a model for which even exact inference would produce posterior switches close to the sampled switches. If the switch variable $s_{t+1}$ changes wildly (which is possible given the above formula, since the hidden state $h$ may have a large projected change if the hidden state changes), essentially no information is left in the signal for any inference method to produce reasonable results. We therefore set $w(s_t)$ to a zero vector except for the first two components, which are independently sampled from a zero-mean Gaussian with standard deviation 5. For each of the two switch states $s$, we have a transition matrix $A(s)$, which

[Figure 9(a) plots the particle's position at times $t = 1, \ldots, 5$; axis data omitted.] Figure 9: (a) The multi-path problem. The particle starts from $(0, 0)$ at time $t = 1$.
Subsequently, at each time-point, either the vector $(10, 10)$ (corresponding to states $s = 1$ and $s = 3$) or $(-10, 10)$ (corresponding to states $s = 2$ and $s = 4$) is added to the hidden dynamics, perturbed by a small amount of noise, $\Sigma^h = 0.1$. The observations are $v = h + \eta^v(s)$. For states $s = 1, 2$ the observation noise is small, $\Sigma^v = 0.1 I$, but for $s = 3, 4$ the noise in the horizontal direction has variance 1000. The visible observations are given by the 'x' markers, the true hidden states by '+'. (b) The exact smoothed state posteriors $p_{exact}(s_t|v_{1:T})$ computed by enumerating all paths (given by the dashed lines).

we set to be block diagonal. The first $2 \times 2$ block is set to $0.9999 R_\theta$, where $R_\theta$ is a $2 \times 2$ rotation matrix with angle $\theta$ chosen uniformly from 0 to 1 radians. This means that $s_{t+1}$ depends on the first two components of $h_t$, which are rotating at a restricted rate. The remaining $(H-2) \times (H-2)$ block of $A(s)$ is chosen as (using MATLAB notation) $0.9999 \cdot \mathrm{orth}(\mathrm{rand}(H-2))$, that is, a scaled randomly chosen orthogonal matrix. Throughout, $S = 2$, $V = 1$, $H = 30$, $T = 100$, with zero output bias. Using partly MATLAB notation, $B(s) = \mathrm{randn}(V, H)$, $\bar{v}_t \equiv 0$, $\bar{h}_1 = 10 \cdot \mathrm{randn}(H, 1)$, $\bar{h}_{t>1} = 0$, $\Sigma_1^h = I_H$, $p_1 = \text{uniform}$, $\Sigma^v = 30 I_V$, $\Sigma^h = 0.1 I_H$. We compare EC only against Particle Filters using 1000 particles, since other methods would require specialized and novel implementations. In ADFM, $I = 4$ Gaussians were used, and for ECM, $I = J = 4$ Gaussians were used. Looking at the results in Figure (8), we see that EC performs well, with some improvement from using the mixture representation ($I = J = 4$) over a single Gaussian ($I = J = 1$). The Particle Filter most likely failed because the hidden dimension is too high to be explored well with only 1000 particles.

EFFECT OF USING MIXTURES

Our claim is that EC should cope in situations where the smoothed posterior $p(h_t|s_t, v_{1:T})$ is multi-modal and, consequently, cannot be well represented by a single Gaussian.
We therefore constructed an SLDS which exhibits multi-modality, to see the effect of using EC with both $I$ and $J$ greater than 1. The 'multi-path' scenario is described in Figure (9), where a particle traces a path through a two-dimensional space. A small number of time-steps was chosen so that the exact $p(s_t|v_{1:T})$ can be computed by direct enumeration. The observation of the particle is at times extremely noisy in the horizontal direction. This induces multi-modality of $p(h_t|s_t, v_{1:T})$, since there are several paths that might plausibly have been taken to give rise to the observations. The accuracy with which EC predicts the exact smoothed posterior is given in Table (2). For this problem we see that both the number of Forward ($I$) and Backward ($J$) components affects the accuracy of the approximation, generally with improved accuracy as the number of mixture components increases. For a 'perfect' approximation method, one would expect that when $I = J = S^{T-1} = 256$, the approximation should become exact. The small error for this case in Table (2) may arise for several reasons: the extra independence assumption used in EC, the simple mean approximation used to compute Equation (12), or numerical round-off. However, at least in this case, the effect of these assumptions on the performance is very small.

18. This should not be confused with the multi-modality of $p(h_t|v_{1:T}) = \sum_{s_t} p(h_t|s_t, v_{1:T})\, p(s_t|v_{1:T})$.

5. Discussion

Expectation Correction is a novel form of Backward Pass which makes fewer approximations than the widely used approach of Kim (1994). In Kim's method, potentially important future information channeled through the continuous hidden variables is lost. EC, like Kim's method, makes the additional assumption $p(h_{t+1}|s_t, s_{t+1}, v_{1:T}) \approx p(h_{t+1}|s_{t+1}, v_{1:T})$.
However, our experience is that this assumption is rather mild, since the state of $h_{t+1}$ will be most heavily influenced by its immediate parent $s_{t+1}$. Our approximation is based on the idea that, although exact inference involves an exponentially large number of mixture components, due to the forgetting which commonly occurs in Markovian models a finite number of mixture components may provide a reasonable approximation. In tracking situations where the visible information is (temporarily) insufficient to specify the hidden state accurately, representing the posterior $p(h_t|s_t, v_{1:T})$ using a mixture of Gaussians may improve results significantly. Clearly, in systems with very long correlation times, our method may require too many mixture components to produce a satisfactory result, although we are unaware of other techniques that would cope well in that case. We hope that the straightforward ideas presented here may help facilitate the practical application of dynamic hybrid networks to machine learning and related areas. Whilst models with Gaussian emission distributions such as the SLDS are widespread, the extension of this method to non-Gaussian emissions $p(v_t|h_t, s_t)$ would clearly be of considerable interest. Software for Expectation Correction for this augmented class of Switching Linear Gaussian models is available from www.idiap.ch/~barber.

Acknowledgments

I would like to thank Onno Zoeter and Tom Heskes for kindly providing their Expectation Propagation code, Silvia Chiappa for helpful discussions, and Bertrand Mesot for many discussions, help with the simulations, and for suggesting the relationship between the partial factorization and independence viewpoints of EC. I would also like to thank the reviewers for their many helpful comments and suggestions.

Algorithm 4 LDS Forward Pass. Compute the filtered posteriors $p(h_t|v_{1:t}) \equiv \mathcal{N}(f_t, F_t)$ for an LDS with parameters $\theta_t = (A, B, \Sigma^h, \Sigma^v, \bar{h}, \bar{v})$, for $t > 1$.
At time $t = 1$, we use parameters $\theta_1 = (A, B, \Sigma, \Sigma^v, \mu, \bar{v})$, where $\Sigma$ and $\mu$ are the prior covariance and mean of $h$. The log-likelihood $L = \log p(v_{1:T})$ is also returned.

    $F_0 \leftarrow 0$, $f_0 \leftarrow 0$, $L \leftarrow 0$
    for $t \leftarrow 1, T$ do
        $\{f_t, F_t, p_t\}$ = LDSFORWARD($f_{t-1}, F_{t-1}, v_t; \theta_t$)
        $L \leftarrow L + \log p_t$
    end for

    function LDSFORWARD($f, F, v; \theta$)
        Compute the joint $p(h_t, v_t|v_{1:t-1})$:
            $\mu_h \leftarrow A f + \bar{h}$, $\mu_v \leftarrow B \mu_h + \bar{v}$
            $\Sigma_{hh} \leftarrow A F A^{\mathsf{T}} + \Sigma^h$, $\Sigma_{vv} \leftarrow B \Sigma_{hh} B^{\mathsf{T}} + \Sigma^v$, $\Sigma_{vh} \leftarrow B \Sigma_{hh}$
        Find $p(h_t|v_{1:t})$ by conditioning:
            $f' \leftarrow \mu_h + \Sigma_{vh}^{\mathsf{T}} \Sigma_{vv}^{-1} (v - \mu_v)$, $F' \leftarrow \Sigma_{hh} - \Sigma_{vh}^{\mathsf{T}} \Sigma_{vv}^{-1} \Sigma_{vh}$
        Compute $p(v_t|v_{1:t-1})$:
            $p' \leftarrow \exp\left(-\frac{1}{2}(v - \mu_v)^{\mathsf{T}} \Sigma_{vv}^{-1} (v - \mu_v)\right) / \sqrt{\det 2\pi \Sigma_{vv}}$
        return $f', F', p'$
    end function

Appendix A. Inference in the LDS

The LDS is defined by Equations (1,2) in the case of a single switch, $S = 1$. The LDS Forward and Backward passes define the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS.

FORWARD PASS (FILTERING)

The filtered posterior $p(h_t|v_{1:t})$ is a Gaussian which we parameterize with mean $f_t$ and covariance $F_t$. These parameters can be updated recursively using $p(h_t|v_{1:t}) \propto p(h_t, v_t|v_{1:t-1})$, where the joint distribution $p(h_t, v_t|v_{1:t-1})$ has statistics (see Appendix (B))
$$\mu_h = A f_{t-1} + \bar{h}, \qquad \mu_v = B \mu_h + \bar{v},$$
$$\Sigma_{hh} = A F_{t-1} A^{\mathsf{T}} + \Sigma^h, \qquad \Sigma_{vv} = B \Sigma_{hh} B^{\mathsf{T}} + \Sigma^v, \qquad \Sigma_{vh} = B \Sigma_{hh}.$$
We may then find $p(h_t|v_{1:t})$ by conditioning $p(h_t, v_t|v_{1:t-1})$ on $v_t$; see Appendix (C). This gives rise to Algorithm (4).

BACKWARD PASS

The smoothed posterior $p(h_t|v_{1:T}) \equiv \mathcal{N}(g_t, G_t)$ can be computed recursively using
$$p(h_t|v_{1:T}) = \int_{h_{t+1}} p(h_t|h_{t+1}, v_{1:T})\, p(h_{t+1}|v_{1:T}) = \int_{h_{t+1}} p(h_t|h_{t+1}, v_{1:t})\, p(h_{t+1}|v_{1:T}),$$
where $p(h_t|h_{t+1}, v_{1:t})$ may be obtained from the joint distribution
$$p(h_t, h_{t+1}|v_{1:t}) = p(h_{t+1}|h_t)\, p(h_t|v_{1:t}) \qquad (19)$$

Algorithm 5 LDS Backward Pass. Compute the smoothed posteriors $p(h_t|v_{1:T})$. This requires the filtered results from Algorithm (4).
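As a concrete illustration, one step of each of LDSFORWARD and LDSBACKWARD might be implemented as follows (a hedged NumPy sketch with my own names; this is not the paper's software):

```python
import numpy as np

def lds_forward_step(f, F, v, A, B, Sh, Sv, hbar, vbar):
    """One LDSFORWARD step: propagate the filtered N(f, F) through the
    dynamics, then condition on the new observation v.
    Returns the new filtered moments and p(v_t | v_{1:t-1})."""
    mu_h = A @ f + hbar                    # predicted hidden mean
    mu_v = B @ mu_h + vbar                 # predicted observation mean
    Shh = A @ F @ A.T + Sh                 # predicted hidden covariance
    Svv = B @ Shh @ B.T + Sv               # predictive observation covariance
    Svh = B @ Shh
    K = Svh.T @ np.linalg.inv(Svv)         # gain: Sigma_vh^T Sigma_vv^{-1}
    f_new = mu_h + K @ (v - mu_v)
    F_new = Shh - K @ Svh
    d = v - mu_v
    p = np.exp(-0.5 * d @ np.linalg.solve(Svv, d)) \
        / np.sqrt(np.linalg.det(2 * np.pi * Svv))
    return f_new, F_new, p

def lds_backward_step(g, G, f, F, A, Sh, hbar):
    """One LDSBACKWARD step: combine the smoothed N(g, G) at time t+1 with
    the filtered N(f, F) at time t via the reversed dynamics."""
    Shh = A @ F @ A.T + Sh                 # Sigma_{h'h'}
    Shp = A @ F                            # Sigma_{h'h}
    A_rev = Shp.T @ np.linalg.inv(Shh)     # reversal matrix
    m_rev = f - A_rev @ (A @ f + hbar)
    S_rev = F - A_rev @ Shp
    g_new = A_rev @ g + m_rev
    G_new = A_rev @ G @ A_rev.T + S_rev
    return g_new, G_new
```

At $t = 1$ the prediction can be made to use the prior by calling the forward step with $A = I$, $F = 0$, and the prior covariance and mean in place of $\Sigma^h$ and $\bar{h}$, matching the $\theta_1$ convention above.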
    $G_T \leftarrow F_T$, $g_T \leftarrow f_T$
    for $t \leftarrow T-1, 1$ do
        $\{g_t, G_t\}$ = LDSBACKWARD($g_{t+1}, G_{t+1}, f_t, F_t; \theta_{t+1}$)
    end for

    function LDSBACKWARD($g, G, f, F; \theta$)
        $\mu_h \leftarrow A f + \bar{h}$, $\Sigma_{h'h'} \leftarrow A F A^{\mathsf{T}} + \Sigma^h$, $\Sigma_{h'h} \leftarrow A F$
        $\overleftarrow{\Sigma} \leftarrow F - \Sigma_{h'h}^{\mathsf{T}} \Sigma_{h'h'}^{-1} \Sigma_{h'h}$, $\overleftarrow{A} \leftarrow \Sigma_{h'h}^{\mathsf{T}} \Sigma_{h'h'}^{-1}$, $\overleftarrow{m} \leftarrow f - \overleftarrow{A} \mu_h$
        $g' \leftarrow \overleftarrow{A} g + \overleftarrow{m}$, $G' \leftarrow \overleftarrow{A} G \overleftarrow{A}^{\mathsf{T}} + \overleftarrow{\Sigma}$
        return $g', G'$
    end function

which itself can be obtained by forward propagation from $p(h_t|v_{1:t})$. Conditioning Equation (19) to find $p(h_t|h_{t+1}, v_{1:t})$ effectively reverses the dynamics,
$$h_t = \overleftarrow{A}_t h_{t+1} + \overleftarrow{\eta}_t,$$
where $\overleftarrow{A}_t$ and $\overleftarrow{\eta}_t \sim \mathcal{N}(\overleftarrow{m}_t, \overleftarrow{\Sigma}_t)$ are found using the conditioned Gaussian results in Appendix (C); these are given explicitly in Algorithm (5). Then, averaging the reversed dynamics over $p(h_{t+1}|v_{1:T})$, we find that $p(h_t|v_{1:T})$ is a Gaussian with statistics
$$g_t = \overleftarrow{A}_t g_{t+1} + \overleftarrow{m}_t, \qquad G_t = \overleftarrow{A}_t G_{t+1} \overleftarrow{A}_t^{\mathsf{T}} + \overleftarrow{\Sigma}_t.$$
This Backward Pass is given in Algorithm (5). For parameter learning of the $A$ matrix, the smoothed statistic $\langle h_t h_{t+1}^{\mathsf{T}} \rangle$ is required. Using the above formulation, this is given by $\overleftarrow{A}_t G_{t+1} + \langle h_t \rangle \langle h_{t+1} \rangle^{\mathsf{T}}$. This is much simpler than the standard expressions cited in Shumway and Stoffer (2000) and Roweis and Ghahramani (1999).

Appendix B. Gaussian Propagation

Let $y$ be linearly related to $x$ through $y = Mx + \eta$, where $\eta \sim \mathcal{N}(\mu, \Sigma)$ and $x \sim \mathcal{N}(\mu_x, \Sigma_x)$. Then $p(y) = \int_x p(y|x)\,p(x)$ is a Gaussian with mean $M\mu_x + \mu$ and covariance $M \Sigma_x M^{\mathsf{T}} + \Sigma$.

Appendix C. Gaussian Conditioning

For a joint Gaussian distribution over the vectors $x$ and $y$ with means $\mu_x, \mu_y$ and covariance elements $\Sigma_{xx}, \Sigma_{xy}, \Sigma_{yy}$, the conditional $p(x|y)$ is a Gaussian with mean $\mu_x + \Sigma_{xy} \Sigma_{yy}^{-1} (y - \mu_y)$ and covariance $\Sigma_{xx} - \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{yx}$.

Appendix D. Collapsing Gaussians

The user may provide any algorithm of their choice for collapsing a set of Gaussians to a smaller set of Gaussians (Titterington et al., 1985). Here, to be explicit, we present a simple one which is fast, but has the disadvantage that no spatial information about the mixture is used.
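A minimal sketch of such a fast collapse, moment-matching a weighted set of Gaussians, might look as follows (an illustration with my own naming, not the paper's software):

```python
import numpy as np

def collapse_to_single(w, mus, Sigmas):
    """Moment-match a mixture sum_i w[i] N(mu_i, Sigma_i) to one Gaussian."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mu = sum(wi * mi for wi, mi in zip(w, mus))
    second = sum(wi * (Si + np.outer(mi, mi))
                 for wi, mi, Si in zip(w, mus, Sigmas))
    return mu, second - np.outer(mu, mu)   # covariance = E[xx^T] - mu mu^T

def collapse_to_k(w, mus, Sigmas, K):
    """Keep the K-1 heaviest components; merge the rest into one Gaussian."""
    order = np.argsort(w)[::-1]            # components by decreasing weight
    keep, merge = order[:K - 1], order[K - 1:]
    mu_m, Sig_m = collapse_to_single([w[i] for i in merge],
                                     [mus[i] for i in merge],
                                     [Sigmas[i] for i in merge])
    new_w = [w[i] for i in keep] + [sum(w[i] for i in merge)]
    new_mu = [mus[i] for i in keep] + [mu_m]
    new_Sig = [Sigmas[i] for i in keep] + [Sig_m]
    return new_w, new_mu, new_Sig
```

The moment-matched covariance correctly inflates to account for the spread of the component means, as the two-component example in the accompanying check shows.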
First, we describe how to collapse a mixture to a single Gaussian: we may collapse a mixture of Gaussians $p(x) = \sum_i p_i \mathcal{N}(x|\mu_i, \Sigma_i)$ to a single Gaussian with mean $\mu = \sum_i p_i \mu_i$ and covariance $\sum_i p_i \left(\Sigma_i + \mu_i \mu_i^{\mathsf{T}}\right) - \mu \mu^{\mathsf{T}}$. To collapse a mixture to a $K$-component mixture, we retain the $K-1$ Gaussians with the largest mixture weights; the remaining $N-K+1$ Gaussians are simply merged to a single Gaussian using the above method. The alternative of recursively merging the two Gaussians with the lowest mixture weights gave similar experimental performance. More sophisticated methods which retain some spatial information would clearly be potentially useful. The method presented in Lerner et al. (2000) is a suitable approach which considers removing Gaussians that are spatially similar (and not just low-weight components), thereby retaining diversity over the possible solutions.

Appendix E. The Discrete-Continuous Factorization Viewpoint

An alternative viewpoint is to proceed analogously to the Rauch-Tung-Striebel correction method for the LDS (Grewal and Andrews, 1993):
$$p(h_t, s_t|v_{1:T}) = \sum_{s_{t+1}} \int_{h_{t+1}} p(s_t, h_t, h_{t+1}, s_{t+1}|v_{1:T}) = \sum_{s_{t+1}} p(s_{t+1}|v_{1:T}) \int_{h_{t+1}} p(h_t, s_t|h_{t+1}, s_{t+1}, v_{1:t})\, p(h_{t+1}|s_{t+1}, v_{1:T})$$
$$= \sum_{s_{t+1}} p(s_{t+1}|v_{1:T}) \left\langle p(h_t|h_{t+1}, s_{t+1}, s_t, v_{1:t})\, p(s_t|h_{t+1}, s_{t+1}, v_{1:t}) \right\rangle$$
$$\approx \sum_{s_{t+1}} p(s_{t+1}|v_{1:T}) \left\langle p(h_t|h_{t+1}, s_{t+1}, s_t, v_{1:t}) \right\rangle p(s_t|s_{t+1}, v_{1:T}), \qquad (20)$$
where angled brackets $\langle \cdot \rangle$ denote averages with respect to $p(h_{t+1}|s_{t+1}, v_{1:T})$. Whilst the factorized approximation in Equation (20) may seem severe, by comparing Equations (20) and (10) we see that it is equivalent to the apparently milder assumption $p(h_{t+1}|s_t, s_{t+1}, v_{1:T}) \approx p(h_{t+1}|s_{t+1}, v_{1:T})$. Hence this factorized approximation is equivalent to the 'standard' EC approach in which the dependency on $s_t$ is dropped.

References

D. L. Alspach and H. W. Sorenson. Nonlinear Bayesian estimation using Gaussian sum approximations.
IEEE Transactions on Automatic Control, 17(4):439–448, 1972.

Y. Bar-Shalom and X.-R. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, Norwood, MA, 1998.

X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI 1998), pages 33–42. Morgan Kaufmann, 1998.

C. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996.

A. T. Cemgil, B. Kappen, and D. Barber. A generative model for music transcription. IEEE Transactions on Audio, Speech and Language Processing, 14(2):679–694, 2006.

S. Chib and M. Dueker. Non-Markovian regime switching with endogenous states and time-varying state strengths. Econometric Society 2004 North American Summer Meetings 600, 2004.

A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Uncertainty in Artificial Intelligence, 2000.

A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.

Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963–996, 1998.

M. S. Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice. Prentice-Hall, 1993.

T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence, pages 216–223, 2002.

T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In Artificial Intelligence and Statistics, 1996.

M. I. Jordan. Learning in Graphical Models. MIT Press, 1998.

S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Int. Symp. Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, 1997.

C-J. Kim.
Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1–22, 1994.

C-J. Kim and C. R. Nelson. State-Space Models with Regime Switching. MIT Press, 1999.

G. Kitagawa. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Annals of the Institute of Statistical Mathematics, 46(4):605–623, 1994.

G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.

S. Lauritzen and F. Jensen. Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191–203, 2001.

S. L. Lauritzen. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87(420):1098–1108, 1992.

L. J. Lee, H. Attias, L. Deng, and P. Fieguth. A multimodal variational approach to learning and inference in switching state space models. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 04), volume 5, pages 505–508, 2004.

U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault detection and diagnosis in dynamic systems. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-00), pages 531–537, 2000.

U. N. Lerner. Hybrid Bayesian Networks for Reasoning about Complex Systems. PhD thesis, Stanford University, 2002.

B. Mesot and D. Barber. Switching linear dynamical systems for noise robust speech recognition. IDIAP-RR 08, 2006.

T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT Media Lab, 2001.

R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 1993.

P. Park and T. Kailath. New square-root smoothing algorithms. IEEE Transactions on Automatic Control, 41:727–732, 1996.

V. Pavlovic, J. M. Rehg, and J. MacCormick.
Learning switching linear models of human motion. In Advances in Neural Information Processing Systems (NIPS 13), pages 981–987, 2001.

L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.

H. E. Rauch, G. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic systems. American Institute of Aeronautics and Astronautics Journal (AIAAJ), 3(8):1445–1450, 1965.

S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999.

R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, 2000.

E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 605–612, 2003.

D. M. Titterington, A. F. M. Smith, and U. E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985.

H. Tong. Nonlinear Time Series Analysis: A Dynamical Systems Approach. Oxford University Press, 1990.

M. Verhaegen and P. Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions on Automatic Control, 31(10):907–917, 1986.

M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1999.

O. Zoeter. Monitoring Non-Linear and Switching Dynamical Systems. PhD thesis, Radboud University Nijmegen, 2005.
5 0.11919628 96 jmlr-2006-Worst-Case Analysis of Selective Sampling for Linear Classification
Author: Nicolò Cesa-Bianchi, Claudio Gentile, Luca Zaniboni
Abstract: A selective sampling algorithm is a learning algorithm for classification that, based on the past observed data, decides whether to ask the label of each new instance to be classified. In this paper, we introduce a general technique for turning linear-threshold classification algorithms from the general additive family into randomized selective sampling algorithms. For the most popular algorithms in this family we derive mistake bounds that hold for individual sequences of examples. These bounds show that our semi-supervised algorithms can achieve, on average, the same accuracy as that of their fully supervised counterparts, but using fewer labels. Our theoretical results are corroborated by a number of experiments on real-world textual data. The outcome of these experiments is essentially predicted by our theoretical results: Our selective sampling algorithms tend to perform as well as the algorithms receiving the true label after each classification, while observing in practice substantially fewer labels. Keywords: selective sampling, semi-supervised learning, on-line learning, kernel algorithms, linear-threshold classifiers
6 0.11817406 75 jmlr-2006-Policy Gradient in Continuous Time
8 0.07754752 3 jmlr-2006-A Hierarchy of Support Vector Machines for Pattern Detection
10 0.057349429 60 jmlr-2006-Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
11 0.056795269 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization
12 0.054133546 71 jmlr-2006-Optimising Kernel Parameters and Regularisation Coefficients for Non-linear Discriminant Analysis
14 0.05015878 91 jmlr-2006-The Interplay of Optimization and Machine Learning Research (Special Topic on Machine Learning and Optimization)
15 0.047376629 29 jmlr-2006-Estimation of Gradients and Coordinate Covariation in Classification
16 0.047306765 35 jmlr-2006-Geometric Variance Reduction in Markov Chains: Application to Value Function and Gradient Estimation
17 0.047300499 23 jmlr-2006-Consistency and Convergence Rates of One-Class SVMs and Related Algorithms
18 0.047223996 12 jmlr-2006-Active Learning with Feedback on Features and Instances
19 0.044318441 45 jmlr-2006-Learning Coordinate Covariances via Gradients
20 0.042911548 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix
topicId topicWeight
[(0, 0.278), (1, 0.258), (2, 0.272), (3, 0.011), (4, -0.03), (5, -0.04), (6, -0.002), (7, -0.023), (8, -0.011), (9, -0.065), (10, -0.002), (11, -0.042), (12, 0.053), (13, -0.019), (14, 0.067), (15, -0.075), (16, 0.1), (17, -0.024), (18, 0.033), (19, 0.087), (20, 0.018), (21, -0.036), (22, 0.073), (23, 0.03), (24, 0.055), (25, -0.053), (26, -0.027), (27, 0.068), (28, 0.136), (29, -0.019), (30, 0.019), (31, 0.083), (32, -0.001), (33, 0.052), (34, -0.003), (35, -0.038), (36, 0.024), (37, -0.113), (38, -0.043), (39, -0.11), (40, 0.038), (41, 0.051), (42, -0.004), (43, -0.129), (44, 0.125), (45, -0.073), (46, 0.14), (47, -0.206), (48, 0.027), (49, 0.082)]
simIndex simValue paperId paperTitle
same-paper 1 0.93047225 86 jmlr-2006-Step Size Adaptation in Reproducing Kernel Hilbert Space
Author: S. V. N. Vishwanathan, Nicol N. Schraudolph, Alex J. Smola
Abstract: This paper presents an online support vector machine (SVM) that uses the stochastic meta-descent (SMD) algorithm to adapt its step size automatically. We formulate the online learning problem as a stochastic gradient descent in reproducing kernel Hilbert space (RKHS) and translate SMD to the nonparametric setting, where its gradient trace parameter is no longer a coefficient vector but an element of the RKHS. We derive efficient updates that allow us to perform the step size adaptation in linear time. We apply the online SVM framework to a variety of loss functions, and in particular show how to handle structured output spaces and achieve efficient online multiclass classification. Experiments show that our algorithm outperforms more primitive methods for setting the gradient step size. Keywords: online SVM, stochastic meta-descent, structured output spaces
2 0.62405515 70 jmlr-2006-Online Passive-Aggressive Algorithms
Author: Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, Yoram Singer
Abstract: We present a family of margin based online learning algorithms for various prediction tasks. In particular we derive and analyze algorithms for binary and multiclass categorization, regression, uniclass prediction and sequence prediction. The update steps of our different algorithms are all based on analytical solutions to simple constrained optimization problems. This unified view allows us to prove worst-case loss bounds for the different algorithms and for the various decision problems based on a single lemma. Our bounds on the cumulative loss of the algorithms are relative to the smallest loss that can be attained by any fixed hypothesis, and as such are applicable to both realizable and unrealizable settings. We demonstrate some of the merits of the proposed algorithms in a series of experiments with synthetic and real data sets.
3 0.49461934 37 jmlr-2006-Incremental Algorithms for Hierarchical Classification
Author: Nicolò Cesa-Bianchi, Claudio Gentile, Luca Zaniboni
Abstract: We study the problem of classifying data in a given taxonomy when classifications associated with multiple and/or partial paths are allowed. We introduce a new algorithm that incrementally learns a linear-threshold classifier for each node of the taxonomy. A hierarchical classification is obtained by evaluating the trained node classifiers in a top-down fashion. To evaluate classifiers in our multipath framework, we define a new hierarchical loss function, the H-loss, capturing the intuition that whenever a classification mistake is made on a node of the taxonomy, then no loss should be charged for any additional mistake occurring in the subtree of that node. Making no assumptions on the mechanism generating the data instances, and assuming a linear noise model for the labels, we bound the H-loss of our on-line algorithm in terms of the H-loss of a reference classifier knowing the true parameters of the label-generating process. We show that, in expectation, the excess cumulative H-loss grows at most logarithmically in the length of the data sequence. Furthermore, our analysis reveals the precise dependence of the rate of convergence on the eigenstructure of the data each node observes. Our theoretical results are complemented by a number of experiments on textual corpora. In these experiments we show that, after only one epoch of training, our algorithm performs much better than Perceptron-based hierarchical classifiers, and reasonably close to a hierarchical support vector machine. Keywords: incremental algorithms, online learning, hierarchical classification, second order perceptron, support vector machines, regret bound, loss function
4 0.48492 96 jmlr-2006-Worst-Case Analysis of Selective Sampling for Linear Classification
Author: Nicolò Cesa-Bianchi, Claudio Gentile, Luca Zaniboni
Abstract: A selective sampling algorithm is a learning algorithm for classification that, based on the past observed data, decides whether to ask the label of each new instance to be classified. In this paper, we introduce a general technique for turning linear-threshold classification algorithms from the general additive family into randomized selective sampling algorithms. For the most popular algorithms in this family we derive mistake bounds that hold for individual sequences of examples. These bounds show that our semi-supervised algorithms can achieve, on average, the same accuracy as that of their fully supervised counterparts, but using fewer labels. Our theoretical results are corroborated by a number of experiments on real-world textual data. The outcome of these experiments is essentially predicted by our theoretical results: Our selective sampling algorithms tend to perform as well as the algorithms receiving the true label after each classification, while observing in practice substantially fewer labels. Keywords: selective sampling, semi-supervised learning, on-line learning, kernel algorithms, linear-threshold classifiers
5 0.47591811 32 jmlr-2006-Expectation Correction for Smoothed Inference in Switching Linear Dynamical Systems
Author: David Barber
Abstract: We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman ‘Filter’ and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but accurate alternative. Our method consists of a single Forward and Backward Pass and is reminiscent of the standard smoothing ‘correction’ recursions in the simpler linear dynamical system. The method is numerically stable and compares favourably against alternative approximations, both in cases where a single mixture component provides a good posterior approximation, and where a multimodal approximation is required. Keywords: Gaussian sum smoother, switching Kalman filter, switching linear dynamical system, expectation propagation, expectation correction

1. Switching Linear Dynamical System

The Linear Dynamical System (LDS) (Bar-Shalom and Li, 1998; West and Harrison, 1999) is a key temporal model in which a latent linear process generates the observed time-series. For more complex time-series which are not well described globally by a single LDS, we may break the time-series into segments, each modeled by a potentially different LDS. This is the basis for the Switching LDS (SLDS) where, for each time-step t, a switch variable st ∈ {1, . . . , S} describes which of the LDSs is to be used.¹ The observation (or ‘visible’ variable) vt ∈ R^V is linearly related to the hidden state ht ∈ R^H by

vt = B(st)ht + ηv(st),  ηv(st) ∼ N(v̄(st), Σv(st))  (1)

where N(µ, Σ) denotes a Gaussian distribution with mean µ and covariance Σ.
The transition dynamics of the continuous hidden state ht is linear:

ht = A(st)ht−1 + ηh(st),  ηh(st) ∼ N(h̄(st), Σh(st)).  (2)

1. These systems also go under the names Jump Markov model/process, switching Kalman Filter, Switching Linear Gaussian State-Space model, Conditional Linear Gaussian Model.

c 2006 David Barber.

[Figure 1: The independence structure of the aSLDS, with switches s1..s4, hidden states h1..h4 and observations v1..v4. Square nodes denote discrete variables, round nodes continuous variables. In the SLDS links from h to s are not normally considered.]

The dynamics of the switch variables is Markovian, with transition p(st|st−1). The SLDS is used in many disciplines, from econometrics to machine learning (Bar-Shalom and Li, 1998; Ghahramani and Hinton, 1998; Lerner et al., 2000; Kitagawa, 1994; Kim and Nelson, 1999; Pavlovic et al., 2001). See Lerner (2002) and Zoeter (2005) for recent reviews of work.

AUGMENTED SWITCHING LINEAR DYNAMICAL SYSTEM

In this article, we will consider the more general model in which the switch st is dependent on both the previous st−1 and ht−1. We call this an augmented Switching Linear Dynamical System (aSLDS)², in keeping with the terminology in Lerner (2002). An equivalent probabilistic model is, as depicted in Figure (1),

p(v1:T, h1:T, s1:T) = p(v1|h1, s1)p(h1|s1)p(s1) ∏_{t=2}^{T} p(vt|ht, st)p(ht|ht−1, st)p(st|ht−1, st−1).

The notation x1:T is shorthand for x1, . . . , xT. The distributions are parameterized as

p(vt|ht, st) = N(v̄(st) + B(st)ht, Σv(st)),  p(ht|ht−1, st) = N(h̄(st) + A(st)ht−1, Σh(st))

where p(h1|s1) = N(µ(s1), Σ(s1)). The aSLDS has been used, for example, in state-duration modeling in acoustics (Cemgil et al., 2006) and econometrics (Chib and Dueker, 2004).

INFERENCE

The aim of this article is to address how to perform inference in both the SLDS and aSLDS.
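The generative model of Equations (1) and (2) is straightforward to simulate. A minimal sketch, with all parameter values invented for illustration (zero noise means, and a standard SLDS whose switch follows a plain Markov chain rather than the aSLDS switch dynamics):

```python
import numpy as np

# Sample a trajectory from an SLDS per Equations (1)-(2). All parameters
# here are made up for illustration; noise means are taken to be zero.
rng = np.random.default_rng(0)
S, H, V, T = 2, 2, 1, 50                    # switch states, hidden dim, obs dim, length
A = 0.3 * rng.standard_normal((S, H, H))    # per-switch transition matrices A(s)
B = rng.standard_normal((S, V, H))          # per-switch emission matrices B(s)
P = np.array([[0.95, 0.05],                 # switch transition p(s_t | s_{t-1})
              [0.10, 0.90]])

h, s, obs = np.zeros(H), 0, []
for t in range(T):
    s = rng.choice(S, p=P[s])                       # draw the switch s_t
    h = A[s] @ h + 0.1 * rng.standard_normal(H)     # h_t = A(s_t) h_{t-1} + eta_h(s_t)
    v = B[s] @ h + 0.1 * rng.standard_normal(V)     # v_t = B(s_t) h_t + eta_v(s_t)
    obs.append(v)
obs = np.array(obs)
```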
In particular we desire the so-called filtered estimate p(ht, st|v1:t) and the smoothed estimate p(ht, st|v1:T), for any t, 1 ≤ t ≤ T. Both exact filtered and smoothed inference in the SLDS is intractable, scaling exponentially with time (Lerner, 2002). To see this informally, consider the filtered posterior, which may be recursively computed using

p(st, ht|v1:t) = ∑_{st−1} ∫_{ht−1} p(st, ht|st−1, ht−1, vt)p(st−1, ht−1|v1:t−1).  (3)

At timestep 1, p(s1, h1|v1) = p(h1|s1, v1)p(s1|v1) is an indexed set of Gaussians. At time-step 2, due to the summation over the states s1, p(s2, h2|v1:2) will be an indexed set of S Gaussians; similarly at time-step 3, it will be S² and, in general, gives rise to S^{t−1} Gaussians. More formally, in Lauritzen and Jensen (2001), a general exact method is presented for performing stable inference in such hybrid discrete models with conditional Gaussian potentials. The method requires finding a strong junction tree which, in the SLDS case, means that the discrete variables are placed in a single cluster, resulting in exponential complexity. The key issue in the (a)SLDS, therefore, is how to perform approximate inference in a numerically stable manner. Our own interest in the SLDS stems primarily from acoustic modeling, in which the time-series consists of many thousands of time-steps (Mesot and Barber, 2006; Cemgil et al., 2006). For this, we require a stable and computationally feasible approximate inference, which is also able to deal with state-spaces of high hidden dimension, H.

2. These models are closely related to Threshold Regression Models (Tong, 1990).

2. Expectation Correction

Our approach to approximate p(ht, st|v1:T) ≈ p̃(ht, st|v1:T) mirrors the Rauch-Tung-Striebel (RTS) ‘correction’ smoother for the LDS (Rauch et al., 1965; Bar-Shalom and Li, 1998).
Readers unfamiliar with this approach will find a short explanation in Appendix (A), which defines the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS. Our correction approach consists of a single Forward Pass to recursively find the filtered posterior p̃(ht, st|v1:t), followed by a single Backward Pass to correct this into a smoothed posterior p̃(ht, st|v1:T). The Forward Pass we use is equivalent to Assumed Density Filtering (Alspach and Sorenson, 1972; Boyen and Koller, 1998; Minka, 2001). The main contribution of this paper is a novel form of Backward Pass, based on collapsing the smoothed posterior to a mixture of Gaussians. Unless stated otherwise, all quantities should be considered as approximations to their exact counterparts, and we will therefore usually omit the tildes throughout the article.

2.1 Forward Pass (Filtering)

Readers familiar with Assumed Density Filtering (ADF) may wish to continue directly to Section (2.2). The basic idea is to represent the (intractable) posterior using a simpler distribution. This is then propagated forwards through time, conditioned on the new observation, and subsequently collapsed back to the tractable distribution representation—see Figure (2). Our aim is to form a recursion for p(st, ht|v1:t), based on a Gaussian mixture approximation of p(ht|st, v1:t). Without loss of generality, we may decompose the filtered posterior as p(ht, st|v1:t) = p(ht|st, v1:t)p(st|v1:t). We will first form a recursion for p(ht|st, v1:t), and discuss the switch recursion p(st|v1:t) later. The full procedure for computing the filtered posterior is presented in Algorithm (1). The exact representation of p(ht|st, v1:t) is a mixture with O(S^t) components.
We therefore approximate this with a smaller It-component mixture

p(ht|st, v1:t) ≈ p̃(ht|st, v1:t) ≡ ∑_{it=1}^{It} p̃(ht|it, st, v1:t) p̃(it|st, v1:t)

where p̃(ht|it, st, v1:t) is a Gaussian parameterized with mean³ f(it, st) and covariance F(it, st). The Gaussian mixture weights are given by p̃(it|st, v1:t). In the above, p̃ represent approximations to the corresponding exact p distributions.

3. Strictly speaking, we should use the notation ft(it, st) since, for each time t, we have a set of means indexed by it, st. This mild abuse of notation is used elsewhere in the paper.

[Figure 2: Structure of the mixture representation of the Forward Pass. Essentially, the Forward Pass defines a ‘prior’ distribution at time t which contains all the information from the variables v1:t. This prior is propagated forwards through time using the exact dynamics, conditioned on the observation, and then collapsed back to form a new prior approximation at time t + 1.]

To find a recursion for these parameters, consider

p̃(ht+1|st+1, v1:t+1) = ∑_{st, it} p̃(ht+1, st, it|st+1, v1:t+1) = ∑_{st, it} p̃(ht+1|it, st, st+1, v1:t+1) p̃(st, it|st+1, v1:t+1)  (4)

where each of the factors can be recursively computed on the basis of the previous filtered results (see below). However, this recursion suffers from an exponential increase in mixture components. To deal with this, we will later collapse p̃(ht+1|st+1, v1:t+1) back to a smaller mixture. For the remainder, we drop the p̃ notation, and concentrate on computing the r.h.s. of Equation (4).
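The collapse to a smaller mixture referred to above can, in its simplest form (collapse to a single Gaussian), be done by moment matching. A minimal sketch; note this is the textbook collapse, not the paper's own merging procedure from its Appendix (D):

```python
import numpy as np

def collapse_mixture(weights, means, covs):
    """Moment-match a Gaussian mixture to a single Gaussian, preserving the
    mixture's overall mean and covariance. This is the simplest collapse;
    the paper's code instead repeatedly merges low-weight components."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    mu = sum(wi * m for wi, m in zip(w, means))       # overall mean
    cov = sum(wi * (C + np.outer(m - mu, m - mu))     # within + between spread
              for wi, m, C in zip(w, means, covs))
    return mu, cov

mu, cov = collapse_mixture([0.5, 0.5],
                           [np.array([0.0]), np.array([2.0])],
                           [np.eye(1), np.eye(1)])
```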
EVALUATING p(ht+1|st, it, st+1, v1:t+1)

We find p(ht+1|st, it, st+1, v1:t+1) from the joint distribution p(ht+1, vt+1|st, it, st+1, v1:t), which is a Gaussian with covariance and mean elements⁴

Σhh = A(st+1)F(it, st)Aᵀ(st+1) + Σh(st+1),  Σvv = B(st+1)Σhh Bᵀ(st+1) + Σv(st+1),
Σvh = B(st+1)F(it, st),  µv = B(st+1)A(st+1) f(it, st),  µh = A(st+1) f(it, st).  (5)

These results are obtained from integrating the forward dynamics, Equations (1,2), over ht, using the results in Appendix (B). To find p(ht+1|st, it, st+1, v1:t+1) we may then condition p(ht+1, vt+1|st, it, st+1, v1:t) on vt+1 using the results in Appendix (C)—see also Algorithm (4).

EVALUATING p(st, it|st+1, v1:t+1)

Up to a trivial normalization constant the mixture weight in Equation (4) can be found from the decomposition

p(st, it|st+1, v1:t+1) ∝ p(vt+1|it, st, st+1, v1:t)p(st+1|it, st, v1:t)p(it|st, v1:t)p(st|v1:t).  (6)

4. We derive this for h̄t+1, v̄t+1 ≡ 0, to ease notation.

Algorithm 1 aSLDS Forward Pass. Approximate the filtered posterior p(st|v1:t) ≡ ρt, p(ht|st, v1:t) ≡ ∑it wt(it, st)N(ft(it, st), Ft(it, st)). Also we return the approximate log-likelihood log p(v1:T). We require I1 = 1, I2 ≤ S, It ≤ S × It−1. θt(s) = {A(s), B(s), Σh(s), Σv(s), h̄(s), v̄(s)} for t > 1.
θ1(s) = {A(s), B(s), Σ(s), Σv(s), µ(s), v̄(s)}

for s1 ← 1 to S do
  {f1(1, s1), F1(1, s1), p̂} = LDSFORWARD(0, 0, v1; θ(s1))
  ρ1 ← p(s1) p̂
end for
for t ← 2 to T do
  for st ← 1 to S do
    for i ← 1 to It−1, and s ← 1 to S do
      {µx|y(i, s), Σx|y(i, s), p̂} = LDSFORWARD(ft−1(i, s), Ft−1(i, s), vt; θt(st))
      p*(st|i, s) ≡ ⟨p(st|ht−1, st−1 = s)⟩_{p(ht−1|it−1 = i, st−1 = s, v1:t−1)}
      p′(st, i, s) ← wt−1(i, s) p*(st|i, s) ρt−1(s)
    end for
    Collapse the It−1 × S mixture of Gaussians defined by µx|y, Σx|y, and weights p(i, s|st) ∝ p′(st, i, s) to a Gaussian with It components, p(ht|st, v1:t) ≈ ∑_{it=1}^{It} p(it|st, v1:t)p(ht|st, it, v1:t). This defines the new means ft(it, st), covariances Ft(it, st) and mixture weights wt(it, st) ≡ p(it|st, v1:t).
    Compute ρt(st) ∝ ∑_{i,s} p′(st, i, s)
  end for
  normalize ρt ≡ p(st|v1:t)
  L ← L + log ∑_{st,i,s} p′(st, i, s)
end for

The first factor in Equation (6), p(vt+1|it, st, st+1, v1:t), is a Gaussian with mean µv and covariance Σvv, as given in Equation (5). The last two factors p(it|st, v1:t) and p(st|v1:t) are given from the previous iteration. Finally, p(st+1|it, st, v1:t) is found from

p(st+1|it, st, v1:t) = ⟨p(st+1|ht, st)⟩_{p(ht|it, st, v1:t)}  (7)

where ⟨·⟩_p denotes expectation with respect to p. In the standard SLDS, Equation (7) is replaced by the Markov transition p(st+1|st). In the aSLDS, however, Equation (7) will generally need to be computed numerically. A simple approximation is to evaluate Equation (7) at the mean value of the distribution p(ht|it, st, v1:t). To take covariance information into account an alternative would be to draw samples from the Gaussian p(ht|it, st, v1:t) and thus approximate the average of p(st+1|ht, st) by sampling.⁵

CLOSING THE RECURSION

We are now in a position to calculate Equation (4). For each setting of the variable st+1, we have a mixture of It × S Gaussians.
In order to avoid an exponential explosion in the number of mixture components, we numerically collapse this back to It+1 Gaussians to form

p(ht+1|st+1, v1:t+1) ≈ ∑_{it+1=1}^{It+1} p(ht+1|it+1, st+1, v1:t+1)p(it+1|st+1, v1:t+1).

5. Whilst we suggest sampling as part of the aSLDS update procedure, this does not render the Forward Pass as a form of sequential sampling procedure, such as Particle Filtering. The sampling here is a form of exact sampling, for which no convergence issues arise, being used only to numerically evaluate Equation (7).

Hence the Gaussian components and corresponding mixture weights p(it+1|st+1, v1:t+1) are defined implicitly through a numerical (Gaussian-Mixture to smaller Gaussian-Mixture) collapse procedure, for which any method of choice may be supplied. A straightforward approach that we use in our code is based on repeatedly merging low-weight components, as explained in Appendix (D).

A RECURSION FOR THE SWITCH VARIABLES

A recursion for the switch variables can be found by considering

p(st+1|v1:t+1) ∝ ∑_{it, st} p(it, st, st+1, vt+1, v1:t).

The r.h.s. of the above equation is proportional to

∑_{st, it} p(vt+1|it, st, st+1, v1:t)p(st+1|it, st, v1:t)p(it|st, v1:t)p(st|v1:t)

where all terms have been computed during the recursion for p(ht+1|st+1, v1:t+1).

THE LIKELIHOOD p(v1:T)

The likelihood p(v1:T) may be found by recursing p(v1:t+1) = p(vt+1|v1:t)p(v1:t), where

p(vt+1|v1:t) = ∑_{it, st, st+1} p(vt+1|it, st, st+1, v1:t)p(st+1|it, st, v1:t)p(it|st, v1:t)p(st|v1:t).

In the above expression, all terms have been computed in forming the recursion for the filtered posterior p(ht+1, st+1|v1:t+1).

2.2 Backward Pass (Smoothing)

The main contribution of this paper is to find a suitable way to ‘correct’ the filtered posterior p(st, ht|v1:t) obtained from the Forward Pass into a smoothed posterior p(st, ht|v1:T).
We initially derive this for the case of a single Gaussian representation—the extension to the mixture case is straightforward and given in Section (2.3). Our derivation holds for both the SLDS and aSLDS. We approximate the smoothed posterior p(ht|st, v1:T) by a Gaussian with mean g(st) and covariance G(st), and our aim is to find a recursion for these parameters. A useful starting point is the exact relation:

p(ht, st|v1:T) = ∑_{st+1} p(st+1|v1:T)p(ht|st, st+1, v1:T)p(st|st+1, v1:T).

The term p(ht|st, st+1, v1:T) may be computed as

p(ht|st, st+1, v1:T) = ∫_{ht+1} p(ht, ht+1|st, st+1, v1:T) = ∫_{ht+1} p(ht|ht+1, st, st+1, v1:T)p(ht+1|st, st+1, v1:T) = ∫_{ht+1} p(ht|ht+1, st, st+1, v1:t)p(ht+1|st, st+1, v1:T)  (8)

which is in the form of a recursion. This recursion therefore requires p(ht+1|st, st+1, v1:T), which we can write as

p(ht+1|st, st+1, v1:T) ∝ p(ht+1|st+1, v1:T)p(st|st+1, ht+1, v1:t).  (9)

The above recursions represent the exact computation of the smoothed posterior. In our approximate treatment, we replace all quantities p with their corresponding approximations p̃. A difficulty is that the functional form of p̃(st|st+1, ht+1, v1:t) in the approximation of Equation (9) is not squared exponential in ht+1, so that p̃(ht+1|st, st+1, v1:T) will not be a mixture of Gaussians.⁶ One possibility would be to approximate the non-Gaussian p(ht+1|st, st+1, v1:T) (dropping the p̃ notation) by a Gaussian (mixture) by minimizing the Kullback-Leibler divergence between the two, or performing moment matching in the case of a single Gaussian. A simpler alternative is to make the assumption p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T), see Figure (3). This is a considerable simplification since p(ht+1|st+1, v1:T) is already known from the previous backward recursion.
Under this assumption, the recursion becomes

p(ht, st|v1:T) ≈ ∑_{st+1} p(st+1|v1:T)p(st|st+1, v1:T) ⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)}.  (10)

We call the procedure based on Equation (10) Expectation Correction (EC) since it ‘corrects’ the filtered results which themselves are formed from propagating expectations. In Appendix (E) we show how EC is equivalent to a partial Discrete-Continuous factorized approximation. Equation (10) forms the basis of the EC Backward Pass. However, similar to the ADF Forward Pass, the number of mixture components needed to represent the posterior in this recursion grows exponentially as we go backwards in time. The strategy we take to deal with this is a form of Assumed Density Smoothing, in which Equation (10) is interpreted as a propagated dynamics reversal, which will subsequently be collapsed back to an assumed family of distributions—see Figure (4). How we implement the recursion for the continuous and discrete factors is detailed below.⁷

6. In the exact calculation, p(ht+1|st, st+1, v1:T) is a mixture of Gaussians since p(st|st+1, ht+1, v1:t) = p(st, st+1, ht+1, v1:T)/p(st+1, ht+1, v1:T), so that the mixture of Gaussians denominator p(st+1, ht+1, v1:T) cancels with the first term in Equation (9), leaving a mixture of Gaussians. However, since in Equation (9) the two terms p(ht+1|st+1, v1:T) and p(st|st+1, ht+1, v1:t) are replaced by approximations, this cancellation is not guaranteed.

7. Equation (10) has the pleasing form of an RTS Backward Pass for the continuous part (analogous to the LDS case), and a discrete smoother (analogous to a smoother recursion for the HMM). In the Forward-Backward algorithm for the HMM (Rabiner, 1989), the posterior γt ≡ p(st|v1:T) is formed from the product of αt ≡ p(st|v1:t) and βt ≡ p(vt+1:T|st). This approach is also analogous to EP (Heskes and Zoeter, 2002).
In the correction approach, a direct recursion for γt in terms of γt+1 and αt is formed, without explicitly defining βt. The two approaches to inference are known as α−β and α−γ recursions.

[Figure 3: Our Backward Pass approximates p(ht+1|st+1, st, v1:T) by p(ht+1|st+1, v1:T). Motivation for this is that st only influences ht+1 through ht. However, ht will most likely be heavily influenced by v1:t, so that not knowing the state of st is likely to be of secondary importance. The darker shaded node is the variable we wish to find the posterior state of. The lighter shaded nodes are variables in known states, and the hashed node a variable whose state is indeed known but assumed unknown for the approximation.]

EVALUATING ⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)}

⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)} is a Gaussian in ht, whose statistics we will now compute. First we find p(ht|ht+1, st, st+1, v1:t), which may be obtained from the joint distribution

p(ht, ht+1|st, st+1, v1:t) = p(ht+1|ht, st+1)p(ht|st, v1:t)  (11)

which itself can be found using the forward dynamics from the filtered estimate p(ht|st, v1:t). The statistics for the marginal p(ht|st, st+1, v1:t) are simply those of p(ht|st, v1:t), since st+1 carries no extra information about ht.⁸ The remaining statistics are the mean of ht+1, the covariance of ht+1 and the cross-variance between ht and ht+1,

⟨ht+1⟩ = A(st+1) ft(st),  Σt+1,t+1 = A(st+1)Ft(st)Aᵀ(st+1) + Σh(st+1),  Σt+1,t = A(st+1)Ft(st).

Given the statistics of Equation (11), we may now condition on ht+1 to find p(ht|ht+1, st, st+1, v1:t).
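The conditioning step just described uses the standard Gaussian conditioning identity (the content of the paper's Appendix (C)). A minimal sketch, with the variable names chosen here for illustration:

```python
import numpy as np

def condition_gaussian(mu_h, mu_v, S_hh, S_vv, S_vh, v):
    """Condition a joint Gaussian over (h, v) on an observed v, via the
    standard identity p(h|v) = N(mu_h + K (v - mu_v), S_hh - K S_vh)
    with gain K = S_hv S_vv^{-1}."""
    K = np.linalg.solve(S_vv, S_vh).T     # K = S_hv S_vv^{-1} (S_vv symmetric)
    mean = mu_h + K @ (v - mu_v)
    cov = S_hh - K @ S_vh
    return mean, cov

# 1-D check: prior var 2, obs var 1, cross-cov 0.5, observe v = 1.
mean, cov = condition_gaussian(np.zeros(1), np.zeros(1),
                               np.array([[2.0]]), np.array([[1.0]]),
                               np.array([[0.5]]), np.array([1.0]))
```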
Doing so effectively constitutes a reversal of the dynamics,

ht = ←A(st, st+1) ht+1 + ←η(st, st+1)

where ←A(st, st+1) and ←η(st, st+1) ∼ N(←m(st, st+1), ←Σ(st, st+1)) are easily found using the conditioned Gaussian results in Appendix (C)—see also Algorithm (5). Averaging the reversed dynamics, we obtain a Gaussian in ht for ⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)} with statistics

µt = ←A(st, st+1)g(st+1) + ←m(st, st+1),  Σt,t = ←A(st, st+1)G(st+1)←Aᵀ(st, st+1) + ←Σ(st, st+1).

These equations directly mirror the RTS Backward Pass, see Algorithm (5).

8. Integrating over ht+1 means that the information from st+1 passing through ht+1 via the term p(ht+1|st+1, ht) vanishes. Also, since st is known, no information from st+1 passes through st to ht.

[Figure 4: Structure of the Backward Pass for mixtures. Given the smoothed information at time-step t + 1, we need to work backwards to ‘correct’ the filtered estimate at time t.]

EVALUATING p(st|st+1, v1:T)

The main departure of EC from previous methods is in treating the term

p(st|st+1, v1:T) = ⟨p(st|ht+1, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)}.  (12)

The term p(st|ht+1, st+1, v1:t) is given by

p(st|ht+1, st+1, v1:t) = p(ht+1|st, st+1, v1:t)p(st, st+1|v1:t) / ∑_{st} p(ht+1|st, st+1, v1:t)p(st, st+1|v1:t).  (13)

Here p(st, st+1|v1:t) = p(st+1|st, v1:t)p(st|v1:t), where p(st+1|st, v1:t) occurs in the Forward Pass, Equation (7). In Equation (13), p(ht+1|st+1, st, v1:t) is found by marginalizing Equation (11). Performing the average over p(ht+1|st+1, v1:T) in Equation (12) may be achieved by any numerical integration method desired. Below we outline a crude approximation that is fast and often performs surprisingly well.
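The dynamics reversal and subsequent averaging above can be sketched for the simplest setting, a single Gaussian with zero offsets; this mirrors one RTS/LDSBACKWARD-style step, with the function name and test values invented here:

```python
import numpy as np

def ec_backward_step(f, F, g, G, A, Sh):
    """One dynamics-reversal step in the spirit of LDSBACKWARD, for the
    zero-offset single-Gaussian case: reverse h_{t+1} = A h_t + eta,
    eta ~ N(0, Sh), around the filtered moments (f, F), then average the
    reversed dynamics over the smoothed N(g, G) at time t+1."""
    P = A @ F @ A.T + Sh                    # covariance of predicted h_{t+1}
    A_rev = F @ A.T @ np.linalg.inv(P)      # reversal matrix   (the <-A above)
    m_rev = f - A_rev @ (A @ f)             # reversal offset   (the <-m above)
    S_rev = F - A_rev @ A @ F               # reversal noise    (the <-Sigma above)
    mu = A_rev @ g + m_rev                  # smoothed mean at t
    Sigma = A_rev @ G @ A_rev.T + S_rev     # smoothed covariance at t
    return mu, Sigma

# Sanity check: smoothing against the one-step prediction itself should
# leave the filtered moments unchanged.
f, F = np.array([1.0, -0.5]), np.eye(2)
A, Sh = np.array([[0.9, 0.1], [0.0, 0.8]]), 0.2 * np.eye(2)
mu, Sigma = ec_backward_step(f, F, A @ f, A @ F @ A.T + Sh, A, Sh)
```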
MEAN APPROXIMATION

A simple approximation of Equation (12) is to evaluate the integrand at the mean value of the averaging distribution. Replacing ht+1 in Equation (13) by its mean gives the simple approximation

⟨p(st|ht+1, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)} ≈ (1/Z) exp(−½ zᵀt+1(st, st+1) Σ⁻¹(st, st+1|v1:t) zt+1(st, st+1)) p(st|st+1, v1:t) / √(det Σ(st, st+1|v1:t))

where zt+1(st, st+1) ≡ ⟨ht+1|st+1, v1:T⟩ − ⟨ht+1|st, st+1, v1:t⟩ and Z ensures normalization over st. This result comes simply from the fact that in Equation (12) we have a Gaussian with a mean ⟨ht+1|st, st+1, v1:t⟩ and covariance Σ(st, st+1|v1:t), being the filtered covariance of ht+1 given st, st+1 and the observations v1:t, which may be taken from Σhh in Equation (5). Then evaluating this Gaussian at the specific point ⟨ht+1|st+1, v1:T⟩, we arrive at the above expression. An alternative to this simple mean approximation is to sample from the Gaussian p(ht+1|st+1, v1:T), which has the potential advantage that covariance information is used.⁹ Other methods such as variational

9. This is a form of exact sampling since drawing samples from a Gaussian is easy. This should not be confused with meaning that this use of sampling renders EC a sequential Monte-Carlo sampling scheme.

Algorithm 2 aSLDS: EC Backward Pass (Single Gaussian case I = J = 1). Approximates p(st|v1:T) and p(ht|st, v1:T) ≡ N(gt(st), Gt(st)). This routine needs the results from Algorithm (1) for I = 1.

GT ← FT, gT ← fT
for t ← T − 1 to 1 do
  for s ← 1 to S, s′ ← 1 to S do
    (µ, Σ)(s, s′) = LDSBACKWARD(gt+1(s′), Gt+1(s′), ft(s), Ft(s), θt+1(s′))
    p(s|s′) = ⟨p(st = s|ht+1, st+1 = s′, v1:t)⟩_{p(ht+1|st+1 = s′, v1:T)}
    p(s, s′|v1:T) ← p(st+1 = s′|v1:T)p(s|s′)
  end for
  for st ← 1 to S do
    Collapse the mixture defined by weights p(st+1 = s′|st, v1:T) ∝ p(st, s′|v1:T), means µ(st, s′) and covariances Σ(st, s′) to a single Gaussian. This defines the new means gt(st), covariances Gt(st).
    p(st|v1:T) ← ∑_{s′} p(st, s′|v1:T)
  end for
end for

approximations to this average (Jaakkola and Jordan, 1996) or the unscented transform (Julier and Uhlmann, 1997) may be employed if desired.

CLOSING THE RECURSION

We have now computed both the continuous and discrete factors in Equation (10), which we wish to use to write the smoothed estimate in the form p(ht, st|v1:T) = p(st|v1:T)p(ht|st, v1:T). The distribution p(ht|st, v1:T) is readily obtained from the joint Equation (10) by conditioning on st to form the mixture

p(ht|st, v1:T) = ∑_{st+1} p(st+1|st, v1:T)p(ht|st, st+1, v1:T)

which may be collapsed to a single Gaussian (or mixture if desired). As in the Forward Pass, this collapse implicitly defines the Gaussian mean g(st) and covariance G(st). The smoothed posterior p(st|v1:T) is given by

p(st|v1:T) = ∑_{st+1} p(st+1|v1:T)p(st|st+1, v1:T) = ∑_{st+1} p(st+1|v1:T) ⟨p(st|ht+1, st+1, v1:t)⟩_{p(ht+1|st+1, v1:T)}.  (14)

The algorithm for the single Gaussian case is presented in Algorithm (2).

NUMERICAL STABILITY

Numerical stability is a concern even in the LDS, and the same is to be expected for the aSLDS. Since the LDS recursions LDSFORWARD and LDSBACKWARD are embedded within the EC algorithm, we may immediately take advantage of the large body of work on stabilizing the LDS recursions, such as the Joseph form (Grewal and Andrews, 1993), or the square root forms (Park and Kailath, 1996; Verhaegen and Van Dooren, 1986).

RELAXING EC

The conditional independence assumption p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T) is not strictly necessary in EC. We motivate it by computational simplicity, since finding an appropriate moment matching approximation of p(ht+1|st, st+1, v1:T) in Equation (9) requires a relatively expensive non-Gaussian integration.
If we therefore did treat p(ht+1|st, st+1, v1:T) more correctly, the central assumption in this relaxed version of EC would be a collapse to a mixture of Gaussians (the additional computation of Equation (12) may usually be numerically evaluated to high precision). Whilst we did not do so, implementing this should not give rise to numerical instabilities since no potential divisions are required, merely the estimation of moments. In the experiments presented here, we did not pursue this option, since we believe that the effect of this conditional independence assumption is relatively weak.

INCONSISTENCIES IN THE APPROXIMATION

The recursion Equation (8), upon which EC depends, makes use of the Forward Pass results, and a subtle issue arises about possible inconsistencies in the Forward and Backward approximations. For example, under the conditional independence assumption in the Backward Pass, p(hT|sT−1, sT, v1:T) ≈ p(hT|sT, v1:T), which is in contradiction to Equation (5), which states that the approximation to p(hT|sT−1, sT, v1:T) will depend on sT−1. Similar contradictions occur also for the relaxed version of EC. Such potential inconsistencies arise because of the approximations made, and should not be considered as separate approximations in themselves. Furthermore, these inconsistencies will most likely be strongest at the end of the chain, t ≈ T, since only then is Equation (8) in direct contradiction to Equation (5). Such potential inconsistencies arise since EC is not founded on a consistency criterion, unlike EP—see Section (3)—but rather on an approximation of the exact recursions. Our experience is that compared to EP, which attempts to ensure consistency based on multiple sweeps through the graph, such inconsistencies are a small price to pay compared to the numerical stability advantages of EC.
2.3 Using Mixtures in the Backward Pass

The extension to the mixture case is straightforward, based on the representation

p(ht|st, v1:T) ≈ ∑_{jt=1}^{Jt} p(ht|st, jt, v1:T)p(jt|st, v1:T).

Analogously to the case with a single component,

p(ht, st|v1:T) = ∑_{it, jt+1, st+1} p(st+1|v1:T)p(jt+1|st+1, v1:T)p(ht|jt+1, st+1, it, st, v1:T) · ⟨p(it, st|ht+1, jt+1, st+1, v1:t)⟩_{p(ht+1|jt+1, st+1, v1:T)}.

The average in the last line of the above equation can be tackled using the same techniques as outlined in the single Gaussian case. To approximate p(ht|jt+1, st+1, it, st, v1:T) we consider this as the marginal of the joint distribution

p(ht, ht+1|it, st, jt+1, st+1, v1:T) = p(ht|ht+1, it, st, jt+1, st+1, v1:t)p(ht+1|it, st, jt+1, st+1, v1:T).

Algorithm 3 aSLDS: EC Backward Pass. Approximates p(st|v1:T) and p(ht|st, v1:T) ≡ ∑_{jt=1}^{Jt} ut(jt, st)N(gt(jt, st), Gt(jt, st)) using a mixture of Gaussians. JT = IT, Jt ≤ S × It × Jt+1. This routine needs the results from Algorithm (1).

GT ← FT, gT ← fT, uT ← wT (*)
for t ← T − 1 to 1 do
  for s ← 1 to S, s′ ← 1 to S, i ← 1 to It, j′ ← 1 to Jt+1 do
    (µ, Σ)(i, s, j′, s′) = LDSBACKWARD(gt+1(j′, s′), Gt+1(j′, s′), ft(i, s), Ft(i, s), θt+1(s′))
    p(i, s|j′, s′) = ⟨p(st = s, it = i|ht+1, st+1 = s′, jt+1 = j′, v1:t)⟩_{p(ht+1|st+1 = s′, jt+1 = j′, v1:T)}
    p(i, s, j′, s′|v1:T) ← p(st+1 = s′|v1:T)ut+1(j′, s′)p(i, s|j′, s′)
  end for
  for st ← 1 to S do
    Collapse the mixture defined by weights p(it = i, st+1 = s′, jt+1 = j′|st, v1:T) ∝ p(i, st, j′, s′|v1:T), means µ(it, st, j′, s′) and covariances Σ(it, st, j′, s′) to a mixture with Jt components. This defines the new means gt(jt, st), covariances Gt(jt, st) and mixture weights ut(jt, st).
    p(st|v1:T) ← ∑_{it, j′, s′} p(it, st, j′, s′|v1:T)
  end for
end for
(*) If JT < IT then the initialization is formed by collapsing the Forward Pass results at time T to JT components.
As in the case of a single mixture, the problematic term is p(ht+1|it, st, jt+1, st+1, v1:T). Analogously to before, we may make the assumption

p(ht+1|it, st, jt+1, st+1, v1:T) ≈ p(ht+1|jt+1, st+1, v1:T),

meaning that information about the current switch state st, it is ignored.10 We can then form

p(ht|st, v1:T) = ∑_{it, jt+1, st+1} p(it, jt+1, st+1|st, v1:T) p(ht|it, st, jt+1, st+1, v1:T).

This mixture can then be collapsed to a smaller mixture using any method of choice, to give

p(ht|st, v1:T) ≈ ∑_{jt=1}^{Jt} p(ht|jt, st, v1:T) p(jt|st, v1:T).

The collapse procedure implicitly defines the means g(jt, st) and covariances G(jt, st) of the smoothed approximation. A recursion for the switches follows analogously to the single component Backward Pass. The resulting algorithm is presented in Algorithm (3), which includes using mixtures in both Forward and Backward Passes. Note that if JT < IT, an extra initial collapse is required of the IT-component Forward Pass Gaussian mixture at time T to JT components. EC has time complexity O(S²IJK), where S is the number of switch states, I and J are the numbers of Gaussians used in the Forward and Backward Passes, and K is the time to compute the exact Kalman smoother for the system with a single switch state.

10. As in the single component case, in principle, this assumption may be relaxed and a moment matching approximation performed instead.

3. Relation to Other Methods

Approximate inference in the SLDS is a long-standing research topic, generating an extensive literature. See Lerner (2002) and Zoeter (2005) for reviews of previous work. A brief summary of some of the major existing approaches follows.

Assumed Density Filtering Since the exact filtered estimate p(ht|st, v1:t) is an (exponentially large) mixture of Gaussians, a useful remedy is to project at each stage of the recursion Equation (3) back to a limited set of K Gaussians.
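The projection at the heart of ADF (and the collapse operation used throughout EC, cf. Appendix (D)) is moment matching; a minimal scalar sketch, with all numbers illustrative:

```python
def collapse(weights, means, variances):
    # Moment-match a scalar mixture sum_i w_i N(mu_i, v_i) to a single
    # Gaussian N(mu, v): mu = sum_i w_i mu_i,
    #                    v  = sum_i w_i (v_i + mu_i^2) - mu^2.
    mu = sum(w * m for w, m in zip(weights, means))
    v = sum(w * (vi + m * m) for w, m, vi in zip(weights, means, variances)) - mu * mu
    return mu, v

# Two well-separated components: the collapsed variance inflates to
# cover both modes, which is what a single-Gaussian projection costs.
mu, v = collapse([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])  # -> (0.0, 5.0)
```

Retaining several components instead of one is exactly the I- and J-component refinement that the mixture version of EC exploits.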
This is a Gaussian Sum Approximation (Alspach and Sorenson, 1972), and is a form of Assumed Density Filtering (ADF) (Minka, 2001). Similarly, Generalized Pseudo Bayes 2 (GPB2) (Bar-Shalom and Li, 1998) also performs filtering by collapsing to a mixture of Gaussians. This approach to filtering is also taken in Lerner et al. (2000), which performs the collapse by removing spatially similar Gaussians, thereby retaining diversity.

Several smoothing approaches directly use the results from ADF. The most popular is Kim's method, which updates the filtered posterior weights to form the smoother (Kim, 1994; Kim and Nelson, 1999). In both EC and Kim's method, the approximation p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T) is used to form a numerically simple Backward Pass. The other approximation in EC is to numerically compute the average in Equation (14). In Kim's method, however, an update for the discrete variables is formed by replacing the required term in Equation (14) by

⟨p(st|ht+1, st+1, v1:t)⟩ ≈ p(st|st+1, v1:t),   (15)

where the average is with respect to p(ht+1|st+1, v1:T). This approximation11 decouples the discrete Backward Pass in Kim's method from the continuous dynamics, since p(st|st+1, v1:t) ∝ p(st+1|st)p(st|v1:t)/p(st+1|v1:t) can be computed from the filtered results alone (the continuous Backward Pass in Kim's method, however, does depend on the discrete Backward Pass). The fundamental difference between EC and Kim's method is that the approximation (15) is not required by EC. The EC Backward Pass therefore makes fuller use of the future information, resulting in a recursion which intimately couples the continuous and discrete variables. The resulting effect on the quality of the approximation can be profound, as we will see in the experiments. Kim's smoother corresponds to a potentially severe loss of future information and, in general, cannot be expected to improve much on the filtered results from ADF. The more recent work of Lerner et al.
(2000) is similar in spirit to Kim's method, whereby the contribution from the continuous variables is ignored in forming an approximate recursion for the smoothed p(st|v1:T). The main difference is that for the discrete variables, Kim's method is based on a correction smoother (Rauch et al., 1965), whereas Lerner's method uses a Belief Propagation style Backward Pass (Jordan, 1998). Neither method correctly integrates information from the continuous variables. How to form a recursion for a mixture approximation which does not ignore information coming through the continuous hidden variables is a central contribution of our work.

Kitagawa (1994) used a two-filter method in which the dynamics of the chain are reversed. Essentially, this corresponds to a Belief Propagation method which defines a Gaussian sum approximation for p(vt+1:T|ht, st). However, since this is not a density in ht, st, but rather a conditional likelihood, formally one cannot treat this using density propagation methods. In Kitagawa (1994), the singularities resulting from incorrectly treating p(vt+1:T|ht, st) as a density are heuristically finessed.

11. In the HMM this is exact, but in the SLDS the future observations carry information about st.

[Table 1 compares the assumptions made by EC, Relaxed EC, EP and Kim's method along the following criteria: mixture collapsing to a single Gaussian; mixture collapsing to a mixture; the conditional independence assumption p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T); the approximation of p(st|st+1, v1:T) by the average Equation (12); Kim's Backward Pass; and the mixture approximation of p(ht+1|st, st+1, v1:T), Equation (9).]

Table 1: Relation between methods. In the EC methods, the mean approximation may be replaced by an essentially exact Monte Carlo approximation to Equation (12). EP refers to the Single Gaussian approximation in Heskes and Zoeter (2002). In the case of using Relaxed EC with collapse to a single Gaussian, EC and EP are not equivalent, since the underlying recursions on which the two methods are based are fundamentally different.
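Because of (15), the discrete part of Kim's smoother reduces to an HMM-style correction smoother computable from the filtered results alone; a small sketch, with filtered values and transition matrix that are illustrative rather than taken from any experiment:

```python
def kim_discrete_smoother(filtered, trans):
    # Discrete part of Kim's smoother under approximation (15):
    # p(st|st+1, v1:t) ∝ p(st+1|st) p(st|v1:t), so only the filtered
    # posteriors and the transition matrix are needed.
    # filtered[t][s] = p(st=s|v1:t); trans[s][s2] = p(st+1=s2|st=s).
    T, S = len(filtered), len(trans)
    smoothed = [None] * T
    smoothed[T - 1] = list(filtered[T - 1])
    for t in range(T - 2, -1, -1):
        smoothed[t] = [0.0] * S
        for s2 in range(S):
            # p(st|st+1=s2, v1:t) by Bayes' rule from the filtered result
            joint = [trans[s][s2] * filtered[t][s] for s in range(S)]
            z = sum(joint)
            for s in range(S):
                smoothed[t][s] += smoothed[t + 1][s2] * joint[s] / z
    return smoothed

filtered = [[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]]   # illustrative values
trans = [[0.9, 0.1], [0.2, 0.8]]
post = kim_discrete_smoother(filtered, trans)
```

Note that the continuous filtered densities never enter this recursion, which is precisely the loss of future information discussed above.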
Expectation Propagation EP (Minka, 2001), as applied to the SLDS, corresponds to an approximate implementation of Belief Propagation12 (Jordan, 1998; Heskes and Zoeter, 2002). EP is the most sophisticated rival to Kim's method and EC, since it makes the fewest assumptions. For this reason, we explain briefly how EP works. Unlike EC, which is based on an approximation of the exact filtering and smoothing recursions, EP is based on a consistency criterion. First, let us simplify the notation, and write the distribution as p = ∏t φ(xt−1, vt−1, xt, vt), where xt ≡ ht ⊗ st, and φ(xt−1, vt−1, xt, vt) ≡ p(xt|xt−1)p(vt|xt). EP defines 'messages' ρ, λ13 which contain information from past and future observations respectively.14 Explicitly, we define ρt(xt) ∝ p(xt|v1:t) to represent knowledge about xt given all information from time 1 to t. Similarly, λt(xt) represents knowledge about state xt given all observations from time T to time t + 1. In the sequel, we drop the time suffix for notational clarity. We define λ(xt) implicitly through the requirement that the marginal smoothed inference is given by

p(xt|v1:T) ∝ ρ(xt) λ(xt).   (16)

Hence λ(xt) ∝ p(vt+1:T|xt, v1:t) = p(vt+1:T|xt) and represents all future knowledge about p(xt|v1:T). From this,

p(xt−1, xt|v1:T) ∝ ρ(xt−1) φ(xt−1, vt−1, xt, vt) λ(xt).   (17)

12. Non-parametric belief propagation (Sudderth et al., 2003), which performs approximate inference in general continuous distributions, is also related to EP applied to the aSLDS, in the sense that the messages cannot be represented easily, and are approximated by mixtures of Gaussians.

13. These correspond to the α and β messages in the Hidden Markov Model framework (Rabiner, 1989).

14. In this Belief Propagation/EP viewpoint, the backward messages, traditionally labeled as β, correspond to conditional likelihoods, and not distributions.
In contrast, in the EC approach, which is effectively a so-called α-γ recursion, the backward γ messages correspond to posterior distributions.

Taking the above equation as a starting point, we have

p(xt|v1:T) ∝ ∫_{xt−1} ρ(xt−1) φ(xt−1, vt−1, xt, vt) λ(xt).

Consistency with Equation (16) requires (neglecting irrelevant scalings)

ρ(xt) λ(xt) ∝ ∫_{xt−1} ρ(xt−1) φ(xt−1, vt−1, xt, vt) λ(xt).

Similarly, we can integrate Equation (17) over xt to get the marginal at time t − 1 which, by consistency, should be proportional to ρ(xt−1) λ(xt−1). Hence

ρ(xt) ∝ [∫_{xt−1} ρ(xt−1) φ(xt−1, xt) λ(xt)] / λ(xt),    λ(xt−1) ∝ [∫_{xt} ρ(xt−1) φ(xt−1, xt) λ(xt)] / ρ(xt−1),   (18)

where the divisions can be interpreted as preventing over-counting of messages. In an exact implementation, the common factors in the numerator and denominator cancel. EP addresses the fact that λ(xt) is not a distribution by using Equation (18) to form the projection (or 'collapse'). The numerators ∫_{xt−1} ρ(xt−1) φ(xt−1, xt) λ(xt) and ∫_{xt} ρ(xt−1) φ(xt−1, xt) λ(xt) represent p(xt|v1:T) and p(xt−1|v1:T). Since these are distributions (an indexed mixture of Gaussians in the SLDS), they may be projected/collapsed to a single indexed Gaussian. The update for the ρ message is then found by division by the λ potential, and vice versa. In EP the explicit division of potentials only makes sense for members of the exponential family. More complex methods could be envisaged in which, rather than an explicit division, the new messages are defined by minimizing some measure of divergence between ρ(xt)λ(xt) and ∫_{xt−1} ρ(xt−1) φ(xt−1, xt) λ(xt), such as the Kullback-Leibler divergence. In this way, non-exponential family approximations (such as mixtures of Gaussians) may be considered.
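In the discrete case the recursions (18) can be run exactly and the divisions cancel; the following sketch (an illustrative two-state chain, not the SLDS) checks the consistency requirement (16) that ρt(xt)λt(xt) is proportional to the smoothed marginal:

```python
def normalize(v):
    z = sum(v)
    return [x / z for x in v]

def ep_messages(prior, trans, emit, obs):
    # Exact rho/lambda messages on a discrete chain, in the spirit of
    # the recursions (18); in this exact setting the divisions by the
    # opposite message cancel, so rho is a forward (filtering) pass and
    # lambda a backward conditional-likelihood pass.
    T, S = len(obs), len(prior)
    rho = [normalize([prior[x] * emit[x][obs[0]] for x in range(S)])]
    for t in range(1, T):
        rho.append(normalize([
            sum(rho[t - 1][xp] * trans[xp][x] for xp in range(S)) * emit[x][obs[t]]
            for x in range(S)]))
    lam = [[1.0] * S for _ in range(T)]
    for t in range(T - 2, -1, -1):
        lam[t] = [sum(trans[x][xn] * emit[xn][obs[t + 1]] * lam[t + 1][xn]
                      for xn in range(S)) for x in range(S)]
    # consistency requirement (16): p(x_t|v_1:T) ∝ rho_t(x_t) lambda_t(x_t)
    return [normalize([r * l for r, l in zip(rho[t], lam[t])]) for t in range(T)]

prior = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]        # p(x_t|x_{t-1})
emit  = [[0.9, 0.1], [0.2, 0.8]]        # p(v_t|x_t)
obs   = [0, 1, 0]
post  = ep_messages(prior, trans, emit, obs)
```

In the SLDS the messages are mixtures of Gaussians rather than finite vectors, and it is the projection and division steps applied to them that introduce the approximation and the numerical difficulties discussed next.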
Whilst this is certainly feasible, it is somewhat unattractive computationally, since it would require an expensive minimization at each time-step. For the single Gaussian case, in order to perform the division, the potentials in the numerator and denominator are converted to their canonical representations. To form the ρ update, the result of the division is then reconverted back to a moment representation. Due to the approximation, the resulting recursions are no longer independent, and Heskes and Zoeter (2002) show that using more than a single Forward and Backward sweep often improves the quality of the approximation. This coupling is a departure from the exact recursions, which should remain independent.

Applied to the SLDS, EP suffers from severe numerical instabilities (Heskes and Zoeter, 2002), and finding a way to minimize the corresponding EP free energy in an efficient, robust and guaranteed way remains an open problem. Our experience is that current implementations of EP are unsuitable for large scale time-series applications. Damping the parameter updates is one suggested approach to heuristically improve convergence. The source of these numerical instabilities is not well understood since, even in cases when the posterior appears uni-modal, the method is problematic. The frequent conversions between moment and canonical parameterizations of Gaussians are most likely at the root of the difficulties. An interesting comparison here is between Lauritzen's original method for exact computation on conditional Gaussian distributions, of which the SLDS is a special case (Lauritzen, 1992), which is numerically unstable due to conversion between moment and canonical representations, and Lauritzen and Jensen (2001), which improves stability by avoiding canonical parameterizations.
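The conversions and divisions at issue can already be seen for one-dimensional Gaussians. A minimal sketch with illustrative numbers; the division is where trouble starts, since the difference of precisions can be negative or vanishingly small:

```python
def to_canonical(mu, var):
    # moment (mu, var) -> canonical (eta, lam): lam = 1/var, eta = mu/var
    lam = 1.0 / var
    return lam * mu, lam

def to_moment(eta, lam):
    return eta / lam, 1.0 / lam

def divide(m1, v1, m2, v2):
    # Divide N(m1, v1) by N(m2, v2) as functions (up to scale): trivial
    # in canonical parameters, but the resulting precision lam1 - lam2
    # may be negative or tiny, which is where EP's instability enters.
    e1, l1 = to_canonical(m1, v1)
    e2, l2 = to_canonical(m2, v2)
    return to_moment(e1 - e2, l1 - l2)

m, v = divide(0.0, 1.0, 0.5, 2.0)      # a valid division: v = 2.0 > 0
_, v_bad = divide(0.0, 2.0, 0.0, 1.0)  # 'variance' -2.0: not a density
```

EC avoids this operation entirely: its recursions require only moments of distributions, never divisions of potentials.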
Variational Methods Ghahramani and Hinton (1998) used a variational method which approximates the joint distribution p(h1:T, s1:T|v1:T) rather than the marginal p(ht, st|v1:T); related work is presented in Lee et al. (2004). This is a disadvantage when compared to other methods that directly approximate the marginal. The variational methods are nevertheless potentially attractive since they are able to exploit structural properties of the distribution, such as a factored discrete state-transition. In this article, we concentrate on the case of a small number of states S and hence will not consider variational methods further here.15

Sequential Monte Carlo (Particle Filtering) These methods form an approximate implementation of Equation (3), using a sum of delta functions to represent the posterior (see, for example, Doucet et al., 2001). Whilst potentially powerful, these non-analytic methods typically suffer in high-dimensional hidden spaces since they are often based on naive importance sampling, which restricts their practical use. ADF is generally preferable to Particle Filtering, since in ADF the approximation is a mixture of non-trivial distributions, which is better at capturing the variability of the posterior. Rao-Blackwellized Particle Filters (Doucet et al., 2000) are an attempt to alleviate the difficulty of sampling in high-dimensional state spaces by explicitly integrating over the continuous state.

Non-Sequential Monte Carlo For fixed switches s1:T, p(v1:T|s1:T) is easily computable, since this is just the likelihood of an LDS. This observation raises the possibility of sampling from the posterior p(s1:T|v1:T) ∝ p(v1:T|s1:T)p(s1:T) directly. Many possible sampling methods could be applied in this case, and the most immediate is Gibbs sampling, in which a sample for each t is drawn from p(st|s\t, v1:T); see Neal (1993) for a general reference and Carter and Kohn (1996) for an application to the SLDS.
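A sketch of one such Gibbs sweep over the switches; `loglik` is a caller-supplied placeholder standing in for log p(v1:T|s1:T) (for the SLDS this is the LDS likelihood, recomputed for every candidate state, which is what makes the procedure expensive):

```python
import math
import random

def gibbs_sweep(s, loglik, logprior, S, rng):
    # One forward sweep: resample each s_t from
    # p(s_t|s_{\t}, v_{1:T}) ∝ p(v_{1:T}|s_{1:T}) p(s_{1:T}).
    # `loglik` is a stand-in for log p(v_{1:T}|s_{1:T}).
    T = len(s)
    for t in range(T):
        logw = []
        for k in range(S):
            cand = s[:t] + [k] + s[t + 1:]
            logw.append(loglik(cand) + logprior(cand))
        m = max(logw)                     # max-shift for stable exponentiation
        w = [math.exp(x - m) for x in logw]
        u, acc = rng.random() * sum(w), 0.0
        for k in range(S):
            acc += w[k]
            if u <= acc:
                s[t] = k
                break
    return s
```

Since each conditional requires a likelihood over the whole sequence, the cost over many sweeps grows with the square of the chain length, which is the GT² scaling noted in the experiments.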
This procedure may work well in practice provided that the initial setting of s1:T is in a region of high probability mass; otherwise, sampling by such individual coordinate updates may be extremely inefficient.

4. Experiments

Our experiments examine the stability and accuracy of EC against several other methods on long time-series. In addition, we compare the absolute accuracy of EC as a function of the number of mixture components on a short time-series, where exact inference may be explicitly evaluated. Testing EC on a problem with a reasonably long temporal sequence, T, is important since numerical instabilities may not be apparent in time-series of just a few time-steps. To do this, we sequentially generate hidden states ht, st and observations vt from a given model. Then, given only the parameters of the model and the observations (but not any of the hidden states), the task is to infer p(ht|st, v1:T) and p(st|v1:T). Since the exact computation is exponential in T, a formally exact evaluation of the method is infeasible. A simple alternative is to assume that the original sample states s1:T are the 'correct' inferred states, and compare our most probable posterior smoothed

15. Lerner (2002) discusses an approach in the case of a large structured discrete state transition. Related ideas could also be used in EC.

[Figure 5 shows a typical sampled observation sequence for each problem: (a) the easy problem; (b) the hard problem.]

Figure 5: SLDS: Throughout, S = 2, V = 1 (scalar observations), T = 100, with zero output bias. A(s) = 0.9999 ∗ orth(randn(H, H)), B(s) = randn(V, H), v̄t ≡ 0, h̄1 = 10 ∗ randn(H, 1), h̄t>1 = 0, Σh1 = IH, p1 = uniform. The figures show typical examples for each of the two problems: (a) Easy problem. H = 3, Σh(s) = IH, Σv(s) = 0.1IV, p(st+1|st) ∝ 1S×S + IS. (b) Hard problem. H = 30, Σv(s) = 30IV, Σh(s) = 0.01IH, p(st+1|st) ∝ 1S×S.
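Generating data from such a model can be sketched as follows (a scalar-state SLDS with illustrative parameters, rather than the multivariate setup of Figure 5):

```python
import random

def sample_slds(T, A, B, trans, sig_h, sig_v, seed=0):
    # Draw (s_{1:T}, h_{1:T}, v_{1:T}) from a scalar SLDS:
    #   h_t = A[s_t] h_{t-1} + eta_h,   v_t = B[s_t] h_t + eta_v.
    # All parameters here are illustrative stand-ins.
    rng = random.Random(seed)
    S = len(A)
    s = rng.randrange(S)                # uniform p_1
    h = rng.gauss(0.0, 1.0)             # prior on h_1
    ss, hs, vs = [], [], []
    for t in range(T):
        if t > 0:
            u, acc = rng.random(), 0.0
            for k in range(S):          # sample s_t ~ p(s_t|s_{t-1})
                acc += trans[s][k]
                if u <= acc:
                    s = k
                    break
            h = A[s] * h + rng.gauss(0.0, sig_h)
        v = B[s] * h + rng.gauss(0.0, sig_v)
        ss.append(s); hs.append(h); vs.append(v)
    return ss, hs, vs

ss, hs, vs = sample_slds(100, A=[0.99, 0.9], B=[1.0, 1.0],
                         trans=[[0.95, 0.05], [0.05, 0.95]],
                         sig_h=0.1, sig_v=0.5)
```

Only the parameters and vs are then handed to the inference methods; ss serves as the assumed 'correct' switch sequence against which errors are counted.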
[Figure 6 shows, for each of PF, RBPF, EP, ADFS, KimS, ECS, ADFM, KimM, ECM and Gibbs, a histogram of switch-estimation errors on the easy problem.]

Figure 6: SLDS 'Easy' problem: The number of errors in estimating a binary switch p(st|v1:T) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. The histograms have been cut off at 20 errors in order to improve visualization. (PF) Particle Filter. (RBPF) Rao-Blackwellized PF. (EP) Expectation Propagation. (ADFS) Assumed Density Filtering using a Single Gaussian. (KimS) Kim's smoother using the results from ADFS. (ECS) Expectation Correction using a Single Gaussian (I = J = 1). (ADFM) ADF using a mixture of I = 4 Gaussians. (KimM) Kim's smoother using the results from ADFM. (ECM) Expectation Correction using a mixture with I = J = 4 components. In Gibbs sampling, we use the initialization from ADFM.

estimates arg maxst p(st|v1:T) with the assumed correct sample st.16 We look at two sets of experiments, one for the SLDS and one for the aSLDS. In both cases, scalar observations are used so that the complexity of the inference problem can be visually assessed.

16. We could also consider performance measures on the accuracy of p(ht|st, v1:T). However, we prefer to look at approximating arg maxst p(st|v1:T), since the sampled discrete states s1:T are likely to correspond to the exact arg maxst p(st|v1:T). In addition, if the posterior switch distribution is dominated by a single state s∗1:T, then, provided they are correctly estimated, the model reduces to an LDS, for which inference of the continuous hidden state is trivial.
[Figure 7 shows the corresponding histograms for the hard problem, for the same ten methods.]

Figure 7: SLDS 'Hard' problem: The number of errors in estimating a binary switch p(st|v1:T) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments.

SLDS EXPERIMENTS We chose experimental conditions that, from the viewpoint of classical signal processing, are difficult, with changes in the switches occurring at a much higher rate than the typical frequencies in the signal. We consider two different toy SLDS experiments: the 'easy' problem corresponds to a low hidden dimension, H = 3, with low observation noise; the 'hard' problem corresponds to a high hidden dimension, H = 30, and high observation noise. See Figure (5) for details of the experimental setup.

We compared methods using a single Gaussian and methods using multiple Gaussians; see Figure (6) and Figure (7). For EC we use the mean approximation for the numerical integration of Equation (12). For the Particle Filter, 1000 particles were used, with Kitagawa re-sampling (Kitagawa, 1996). For the Rao-Blackwellized Particle Filter (Doucet et al., 2000), 500 particles were used, with Kitagawa re-sampling. We included the Particle Filter merely as a point of comparison with ADF, since particle filters are not designed to approximate the smoothed estimate.

An alternative MCMC procedure is to perform Gibbs sampling of p(s1:T|v1:T) using p(st|s\t, v1:T) ∝ p(v1:T|s1:T)p(s1:T), where p(v1:T|s1:T) is simply the likelihood of an LDS; see for example Carter and Kohn (1996).17 We initialized the state s1:T using the most likely states st from the filtered results with a Gaussian mixture (ADFM), and then swept forwards in time, sampling from p(st|s\t, v1:T) until the end of the chain.
We then reversed direction, sampling from time T back to time 1, and continued repeating this procedure 100 times, with the mean over the last 80 sweeps used as the posterior mean approximation. This procedure is expensive since each sample requires computing the likelihood of an LDS defined on the whole time-series. The procedure therefore scales with GT², where G is the number of sweeps over the time series. Despite using a reasonable initialization, Gibbs sampling struggles to improve on the filtered results.

We found that EP was numerically unstable and often struggled to converge. To encourage convergence, we used the damping method in Heskes and Zoeter (2002), performing 20 iterations with a damping factor of 0.5. The disappointing performance of EP is most likely due to conflicts

17. Carter and Kohn (1996) proposed an overly complex procedure for computing the likelihood p(v1:T|s1:T). This is simply the likelihood of an LDS (since s1:T are assumed known), and is readily computable using any of the standard procedures in the literature.

[Figure 8 shows, for each of PF, ADFS, ECS, ADFM and ECM, a histogram of switch-estimation errors on the augmented SLDS problem.]

Figure 8: aSLDS: Histogram of the number of errors in estimating a binary switch p(st|v1:T) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. Augmented SLDS results. ADFM used I = 4 Gaussians, and ECM used I = J = 4 Gaussians. We used 1000 samples to approximate Equation (12).

  I     J     error
  1     1     0.0989
  4     1     0.0624
  4     4     0.0365
  16    1     0.0440
  16    16    0.0130
  64    1     0.0440
  64    64    4.75e-4
  256   1     0.0440
  256   256   3.40e-8

Table 2: Errors in approximating the states for the multi-path problem, see Figure (9). The mean absolute deviation |pec(st|v1:T) − pexact(st|v1:T)| averaged over the S = 4 states of st and over the times t = 1, . . .
, 5, computed for different numbers of mixture components in EC. The mean approximation of Equation (12) is used. The exact computation uses S^(T−1) = 256 mixtures.

resulting from numerical instabilities introduced by the frequent conversions between moment and canonical representations.

The various algorithms differ widely in performance; see Figures (6,7). Not surprisingly, the best filtered results are given by ADF, since it is better able than the sampling methods to represent the variance in the filtered posterior. Unlike Kim's method, EC makes good use of the future information to clean up the filtered results considerably. One should bear in mind that EC, Kim's method and the Gibbs initialization all use the same ADF results. These results show that EC may dramatically improve on Kim's method, so that the small amount of extra work in making a numerical approximation of p(st|st+1, v1:T), Equation (12), may bring significant benefits.

AUGMENTED SLDS EXPERIMENTS In Figure (8), we chose a simple two-state (S = 2) transition distribution p(st+1 = 1|st, ht) = σ(htT w(st)), where σ(x) ≡ 1/(1 + e−x). Some care needs to be taken to construct a model for which even exact inference would produce posterior switches close to the sampled switches. If the switch variable st+1 changes wildly (which is possible given the above formula, since the hidden state h may have a large projected change if the hidden state changes), essentially no information is left in the signal for any inference method to produce reasonable results. We therefore set w(st) to a zero vector except for the first two components, which are independently sampled from a zero mean Gaussian with standard deviation 5. For each of the two switch states, s, we have a transition matrix A(s), which

Figure 9: (a) The multi-path problem. The particle starts from (0, 0) at time t = 1.
Subsequently, at each time-point, either the vector (10, 10) (corresponding to states s = 1 and s = 3) or (−10, 10) (corresponding to states s = 2 and s = 4) is added to the hidden dynamics, perturbed by a small amount of noise, Σh = 0.1. The observations are v = h + ηv(s). For states s = 1, 2 the observation noise is small, Σv = 0.1I, but for s = 3, 4 the noise in the horizontal direction has variance 1000. The visible observations are given by the 'x'. The true hidden states are given by '+'. (b) The exact smoothed state posteriors pexact(st|v1:T) computed by enumerating all paths (given by the dashed lines).

we set to be block diagonal. The first 2 × 2 block is set to 0.9999Rθ, where Rθ is a 2 × 2 rotation matrix with angle θ chosen uniformly from 0 to 1 radians. This means that st+1 is dependent on the first two components of ht, which are rotating at a restricted rate. The remaining (H − 2) × (H − 2) block of A(s) is chosen as (using MATLAB notation) 0.9999 ∗ orth(rand(H − 2)), that is, a scaled, randomly chosen orthogonal matrix. Throughout, S = 2, V = 1, H = 30, T = 100, with zero output bias. Using partly MATLAB notation, B(s) = randn(V, H), v̄t ≡ 0, h̄1 = 10 ∗ randn(H, 1), h̄t>1 = 0, Σh1 = IH, p1 = uniform, Σv = 30IV, Σh = 0.1IH.

We compare EC only against Particle Filters using 1000 particles, since other methods would require specialized and novel implementations. In ADFM, I = 4 Gaussians were used, and for ECM, I = J = 4 Gaussians were used. Looking at the results in Figure (8), we see that EC performs well, with some improvement from using the mixture representation (I = J = 4) over a single Gaussian (I = J = 1). The Particle Filter most likely failed since the hidden dimension is too high to be explored well with only 1000 particles.

EFFECT OF USING MIXTURES Our claim is that EC should cope in situations where the smoothed posterior p(ht|st, v1:T) is multi-modal and, consequently, cannot be well represented by a single Gaussian.
We therefore constructed an SLDS which exhibits multi-modality, to see the effect of using EC with both I and J greater than 1. The 'multi-path' scenario is described in Figure (9), where a particle traces a path through a two dimensional space. A small number of time-steps was chosen so that the exact p(st|v1:T) can be computed by direct enumeration. The observation of the particle is at times extremely noisy in the horizontal direction. This induces multi-modality of p(ht|st, v1:T), since there are several paths that might plausibly have been taken to give rise to the observations. The accuracy with which EC predicts the exact smoothed posterior is given in Table (2). For this problem we see that both the number of Forward (I) and Backward (J) components affects the accuracy of the approximation, generally with improved accuracy as the number of mixture components increases. For a 'perfect' approximation method, one would expect that when I = J = S^(T−1) = 256, the approximation should become exact. The small error for this case in Table (2) may arise for several reasons: the extra independence assumption used in EC, the simple mean approximation used to compute Equation (12), or numerical roundoff. However, at least in this case, the effect of these assumptions on the performance is very small.

18. This should not be confused with the multi-modality of p(ht|v1:T) = ∑st p(ht|st, v1:T)p(st|v1:T).

5. Discussion

Expectation Correction is a novel form of Backward Pass which makes fewer approximations than the widely used approach of Kim (1994). In Kim's method, potentially important future information channeled through the continuous hidden variables is lost. EC, along with Kim's method, makes the additional assumption p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T).
However, our experience is that this assumption is rather mild, since the state of ht+1 will be most heavily influenced by its immediate parent st+1. Our approximation is based on the idea that, although exact inference will consist of an exponentially large number of mixture components, due to the forgetting which commonly occurs in Markovian models, a finite number of mixture components may provide a reasonable approximation. In tracking situations where the visible information is (temporarily) not enough to accurately specify the hidden state, representing the posterior p(ht|st, v1:T) using a mixture of Gaussians may improve results significantly. Clearly, in systems with very long correlation times our method may require too many mixture components to produce a satisfactory result, although we are unaware of other techniques that would be able to cope well in that case.

We hope that the straightforward ideas presented here may help facilitate the practical application of dynamic hybrid networks to machine learning and related areas. Whilst models with Gaussian emission distributions such as the SLDS are widespread, the extension of this method to non-Gaussian emissions p(vt|ht, st) would clearly be of considerable interest. Software for Expectation Correction for this augmented class of Switching Linear Gaussian models is available from www.idiap.ch/~barber.

Acknowledgments

I would like to thank Onno Zoeter and Tom Heskes for kindly providing their Expectation Propagation code, Silvia Chiappa for helpful discussions, and Bertrand Mesot for many discussions, help with the simulations and for suggesting the relationship between the partial factorization and independence viewpoints of EC. I would also like to thank the reviewers for their many helpful comments and suggestions.

Algorithm 4 LDS Forward Pass. Compute the filtered posteriors p(ht|v1:t) ≡ N(ft, Ft) for an LDS with parameters θt = (A, B, Σh, Σv, h̄, v̄), for t > 1.
At time t = 1, we use parameters θ1 = (A, B, Σ, Σv, µ, v̄), where Σ and µ are the prior covariance and mean of h. The log-likelihood L = log p(v1:T) is also returned.

F0 ← 0, f0 ← 0, L ← 0
for t ← 1 to T do
  {ft, Ft, pt} = LDSFORWARD(ft−1, Ft−1, vt; θt)
  L ← L + log pt
end for

function LDSFORWARD(f, F, v; θ)
  Compute joint p(ht, vt|v1:t−1):
    µh ← Af + h̄,  µv ← Bµh + v̄
    Σhh ← AFAT + Σh,  Σvv ← BΣhhBT + Σv,  Σvh ← BΣhh
  Find p(ht|v1:t) by conditioning:
    f′ ← µh + ΣvhT Σvv−1 (v − µv),  F′ ← Σhh − ΣvhT Σvv−1 Σvh
  Compute p(vt|v1:t−1):
    p′ ← exp(−(v − µv)T Σvv−1 (v − µv)/2) / √(det(2πΣvv))
  return f′, F′, p′
end function

Appendix A. Inference in the LDS

The LDS is defined by Equations (1,2) in the case of a single switch S = 1. The LDS Forward and Backward Passes define the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS.

FORWARD PASS (FILTERING)

The filtered posterior p(ht|v1:t) is a Gaussian which we parameterize with mean ft and covariance Ft. These parameters can be updated recursively using p(ht|v1:t) ∝ p(ht, vt|v1:t−1), where the joint distribution p(ht, vt|v1:t−1) has statistics (see Appendix (B))

µh = Aft−1 + h̄,  µv = Bµh + v̄,
Σhh = AFt−1AT + Σh,  Σvv = BΣhhBT + Σv,  Σvh = BΣhh.

We may then find p(ht|v1:t) by conditioning p(ht, vt|v1:t−1) on vt, see Appendix (C). This gives rise to Algorithm (4).

BACKWARD PASS

The smoothed posterior p(ht|v1:T) ≡ N(gt, Gt) can be computed recursively using

p(ht|v1:T) = ∫_{ht+1} p(ht|ht+1, v1:T) p(ht+1|v1:T) = ∫_{ht+1} p(ht|ht+1, v1:t) p(ht+1|v1:T),

where p(ht|ht+1, v1:t) may be obtained from the joint distribution

p(ht, ht+1|v1:t) = p(ht+1|ht) p(ht|v1:t),   (19)

Algorithm 5 LDS Backward Pass. Compute the smoothed posteriors p(ht|v1:T). This requires the filtered results from Algorithm (4).
GT ← FT, gT ← fT
for t ← T − 1 to 1 do
  {gt, Gt} = LDSBACKWARD(gt+1, Gt+1, ft, Ft; θt+1)
end for

function LDSBACKWARD(g, G, f, F; θ)
  µh ← Af + h̄,  Σh′h′ ← AFAT + Σh,  Σh′h ← AF
  Σ̃ ← F − Σh′hT Σh′h′−1 Σh′h,  Ã ← Σh′hT Σh′h′−1,  m̃ ← f − Ãµh
  g′ ← Ãg + m̃,  G′ ← ÃGÃT + Σ̃
  return g′, G′
end function

which itself can be obtained by forward propagation from p(ht|v1:t). Conditioning Equation (19) to find p(ht|ht+1, v1:t) effectively reverses the dynamics,

ht = Ãt ht+1 + η̃t,

where Ãt and η̃t ∼ N(m̃t, Σ̃t) are found using the conditioned Gaussian results in Appendix (C); these are explicitly given in Algorithm (5). Then, averaging the reversed dynamics over p(ht+1|v1:T), we find that p(ht|v1:T) is a Gaussian with statistics

gt = Ãt gt+1 + m̃t,  Gt = Ãt Gt+1 ÃtT + Σ̃t.

This Backward Pass is given in Algorithm (5). For parameter learning of the A matrix, the smoothed statistic ⟨ht ht+1T⟩ is required. Using the above formulation, this is given by Ãt Gt+1 + gt gt+1T. This is much simpler than the standard expressions cited in Shumway and Stoffer (2000) and Roweis and Ghahramani (1999).

Appendix B. Gaussian Propagation

Let y be linearly related to x through y = Mx + η, where η ∼ N(µ, Σ) and x ∼ N(µx, Σx). Then p(y) = ∫x p(y|x)p(x) is a Gaussian with mean Mµx + µ and covariance MΣxMT + Σ.

Appendix C. Gaussian Conditioning

For a joint Gaussian distribution over the vectors x and y with means µx, µy and covariance elements Σxx, Σxy, Σyy, the conditional p(x|y) is a Gaussian with mean µx + ΣxyΣyy−1(y − µy) and covariance Σxx − ΣxyΣyy−1Σyx.

Appendix D. Collapsing Gaussians

The user may provide any algorithm of their choice for collapsing a set of Gaussians to a smaller set of Gaussians (Titterington et al., 1985). Here, to be explicit, we present a simple one which is fast, but has the disadvantage that no spatial information about the mixture is used.
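A scalar sketch of LDSFORWARD and LDSBACKWARD from Algorithms (4) and (5), with zero biases and illustrative parameters; in one dimension the conditioning formulas of Appendices (B, C) reduce to simple divisions:

```python
import math

def lds_forward(f, F, v, A, B, sigh2, sigv2):
    # One step of Algorithm (4), scalar case, zero biases:
    # propagate N(f, F) through the dynamics, then condition on v.
    mh = A * f
    mv = B * mh
    Shh = A * F * A + sigh2
    Svv = B * Shh * B + sigv2
    Svh = B * Shh
    fp = mh + Svh / Svv * (v - mv)          # filtered mean
    Fp = Shh - Svh * Svh / Svv              # filtered variance
    p = math.exp(-0.5 * (v - mv) ** 2 / Svv) / math.sqrt(2 * math.pi * Svv)
    return fp, Fp, p

def lds_backward(g, G, f, F, A, sigh2):
    # One step of Algorithm (5), scalar case: reverse the dynamics and
    # average over the smoothed p(h_{t+1}|v_{1:T}) = N(g, G).
    Shh = A * F * A + sigh2                 # var(h_{t+1} | v_{1:t})
    Shx = A * F                             # cov(h_{t+1}, h_t | v_{1:t})
    Arev = Shx / Shh                        # reversed-dynamics gain
    m = f - Arev * A * f
    Srev = F - Arev * Shx
    return Arev * g + m, Arev * G * Arev + Srev

# Filter a short illustrative series, then smooth backwards.
vs = [1.0, 0.8, 1.2, 0.9]
A, B, sigh2, sigv2 = 0.9, 1.0, 0.1, 0.2
f, F = 0.0, 1.0
filt = []
for v in vs:
    f, F, _ = lds_forward(f, F, v, A, B, sigh2, sigv2)
    filt.append((f, F))
smooth = [filt[-1]]
for t in range(len(vs) - 2, -1, -1):
    smooth.insert(0, lds_backward(smooth[0][0], smooth[0][1],
                                  filt[t][0], filt[t][1], A, sigh2))
```

Smoothing can only sharpen the filtered estimates: the smoothed variances are never larger than the filtered ones, and at t = T the two coincide.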
BARBER

First, we describe how to collapse a mixture to a single Gaussian: we may collapse a mixture of Gaussians p(x) = Σ_i p_i N(x|μ_i, Σ_i) to a single Gaussian with mean μ = Σ_i p_i μ_i and covariance Σ = Σ_i p_i (Σ_i + μ_i μ_i^T) − μ μ^T.

To collapse a mixture to a K-component mixture, we retain the K − 1 Gaussians with the largest mixture weights; the remaining N − K + 1 Gaussians are simply merged to a single Gaussian using the above method. The alternative of recursively merging the two Gaussians with the lowest mixture weights gave similar experimental performance. More sophisticated methods which retain some spatial information would clearly be useful. The method presented in Lerner et al. (2000) is a suitable approach which considers removing Gaussians which are spatially similar (and not just low-weight components), thereby retaining diversity over the possible solutions.

Appendix E. The Discrete-Continuous Factorization Viewpoint

An alternative viewpoint is to proceed analogously to the Rauch-Tung-Striebel correction method for the LDS (Grewal and Andrews, 1993):

  p(h_t, s_t|v_{1:T})
    = Σ_{s_{t+1}} ∫_{h_{t+1}} p(s_t, h_t, h_{t+1}, s_{t+1}|v_{1:T})
    = Σ_{s_{t+1}} p(s_{t+1}|v_{1:T}) ∫_{h_{t+1}} p(h_t, s_t|h_{t+1}, s_{t+1}, v_{1:t}) p(h_{t+1}|s_{t+1}, v_{1:T})
    = Σ_{s_{t+1}} p(s_{t+1}|v_{1:T}) ⟨p(h_t|h_{t+1}, s_{t+1}, s_t, v_{1:t}) p(s_t|h_{t+1}, s_{t+1}, v_{1:t})⟩
    ≈ Σ_{s_{t+1}} p(s_{t+1}|v_{1:T}) ⟨p(h_t|h_{t+1}, s_{t+1}, s_t, v_{1:t})⟩ p(s_t|s_{t+1}, v_{1:T}),   (20)

where the angled brackets ⟨·⟩ denote averages with respect to p(h_{t+1}|s_{t+1}, v_{1:T}). Whilst the factorized approximation in Equation (20) may seem severe, by comparing Equations (20) and (10) we see that it is equivalent to the apparently milder assumption p(h_{t+1}|s_t, s_{t+1}, v_{1:T}) ≈ p(h_{t+1}|s_{t+1}, v_{1:T}). Hence this factorized approximation is equivalent to the 'standard' EC approach in which the dependency on s_t is dropped.

References

D. L. Alspach and H. W. Sorenson. Nonlinear Bayesian estimation using Gaussian sum approximations.
IEEE Transactions on Automatic Control, 17(4):439–448, 1972.
Y. Bar-Shalom and Xiao-Rong Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, Norwood, MA, 1998.
X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI 1998), pages 33–42. Morgan Kaufmann, 1998.
C. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996.
A. T. Cemgil, B. Kappen, and D. Barber. A generative model for music transcription. IEEE Transactions on Audio, Speech and Language Processing, 14(2):679–694, 2006.
S. Chib and M. Dueker. Non-Markovian regime switching with endogenous states and time-varying state strengths. Econometric Society 2004 North American Summer Meetings 600, 2004.
A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Uncertainty in Artificial Intelligence, 2000.
A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963–996, 1998.
M. S. Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice. Prentice-Hall, 1993.
T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence, pages 216–223, 2002.
T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In Artificial Intelligence and Statistics, 1996.
M. I. Jordan. Learning in Graphical Models. MIT Press, 1998.
S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, 1997.
C-J. Kim.
Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1–22, 1994.
C-J. Kim and C. R. Nelson. State-Space Models with Regime Switching. MIT Press, 1999.
G. Kitagawa. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Annals of the Institute of Statistical Mathematics, 46(4):605–623, 1994.
G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.
S. Lauritzen and F. Jensen. Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191–203, 2001.
S. L. Lauritzen. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87(420):1098–1108, 1992.
L. J. Lee, H. Attias, Li Deng, and P. Fieguth. A multimodal variational approach to learning and inference in switching state space models. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 04), volume 5, pages 505–508, 2004.
U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault detection and diagnosis in dynamic systems. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-00), pages 531–537, 2000.
U. N. Lerner. Hybrid Bayesian Networks for Reasoning about Complex Systems. PhD thesis, Stanford University, 2002.
B. Mesot and D. Barber. Switching linear dynamical systems for noise robust speech recognition. IDIAP-RR 08, 2006.
T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT Media Lab, 2001.
R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 1993.
P. Park and T. Kailath. New square-root smoothing algorithms. IEEE Transactions on Automatic Control, 41:727–732, 1996.
V. Pavlovic, J. M. Rehg, and J. MacCormick.
Learning switching linear models of human motion. In Advances in Neural Information Processing Systems (NIPS 13), pages 981–987, 2001.
L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
H. E. Rauch, G. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic systems. American Institute of Aeronautics and Astronautics Journal (AIAAJ), 3(8):1445–1450, 1965.
S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999.
R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, 2000.
E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 605–612, 2003.
D. M. Titterington, A. F. M. Smith, and U. E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985.
H. Tong. Nonlinear Time Series Analysis: A Dynamical Systems Approach. Oxford University Press, 1990.
M. Verhaegen and P. Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions on Automatic Control, 31(10):907–917, 1986.
M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1999.
O. Zoeter. Monitoring Non-Linear and Switching Dynamical Systems. PhD thesis, Radboud University Nijmegen, 2005.
7 0.42281052 3 jmlr-2006-A Hierarchy of Support Vector Machines for Pattern Detection
9 0.41817513 75 jmlr-2006-Policy Gradient in Continuous Time
10 0.31023368 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization
12 0.27626035 43 jmlr-2006-Large Scale Multiple Kernel Learning (Special Topic on Machine Learning and Optimization)
14 0.24601655 21 jmlr-2006-Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis
15 0.24109502 7 jmlr-2006-A Simulation-Based Algorithm for Ergodic Control of Markov Chains Conditioned on Rare Events
16 0.23066908 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix
17 0.22818561 60 jmlr-2006-Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
18 0.22687195 29 jmlr-2006-Estimation of Gradients and Coordinate Covariation in Classification
19 0.22592959 45 jmlr-2006-Learning Coordinate Covariances via Gradients
20 0.2031403 44 jmlr-2006-Large Scale Transductive SVMs
topicId topicWeight
[(8, 0.015), (36, 0.053), (40, 0.012), (45, 0.022), (50, 0.038), (63, 0.053), (68, 0.01), (76, 0.018), (78, 0.016), (81, 0.036), (84, 0.036), (90, 0.491), (91, 0.03), (96, 0.072)]
simIndex simValue paperId paperTitle
1 0.95576292 93 jmlr-2006-Universal Kernels
Author: Charles A. Micchelli, Yuesheng Xu, Haizhang Zhang
Abstract: In this paper we investigate conditions on the features of a continuous kernel so that it may approximate an arbitrary continuous target function uniformly on any compact subset of the input space. A number of concrete examples are given of kernels with this universal approximating property. Keywords: density, translation invariant kernels, radial kernels
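As an illustration of the universal approximating property the abstract describes, a Gaussian (radial) kernel expansion can approximate a continuous target uniformly on a compact set. The sketch below is our own construction, not from the paper; the sample count, bandwidth, and small ridge term are illustrative choices.

```python
import numpy as np

def gaussian_gram(X, Z, sigma):
    """Gram matrix of the radial kernel k(x, z) = exp(-(x - z)^2 / (2 sigma^2))."""
    return np.exp(-(X[:, None] - Z[None, :]) ** 2 / (2.0 * sigma**2))

def fit_interpolant(X, y, sigma, ridge=1e-8):
    """Coefficients c of f(x) = sum_i c_i k(x, x_i), slightly regularized."""
    K = gaussian_gram(X, X, sigma)
    return np.linalg.solve(K + ridge * np.eye(len(X)), y)

# Approximate the continuous (non-smooth) target |x| on the compact set [-1, 1]
sigma = 0.1
X = np.linspace(-1.0, 1.0, 41)
c = fit_interpolant(X, np.abs(X), sigma)
grid = np.linspace(-1.0, 1.0, 401)
approx = gaussian_gram(grid, X, sigma) @ c
sup_err = np.max(np.abs(approx - np.abs(grid)))
```

Denser samples and a matched bandwidth drive the uniform error lower, which is the behaviour universality guarantees in the limit.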
same-paper 2 0.92108965 86 jmlr-2006-Step Size Adaptation in Reproducing Kernel Hilbert Space
Author: S. V. N. Vishwanathan, Nicol N. Schraudolph, Alex J. Smola
Abstract: This paper presents an online support vector machine (SVM) that uses the stochastic meta-descent (SMD) algorithm to adapt its step size automatically. We formulate the online learning problem as a stochastic gradient descent in reproducing kernel Hilbert space (RKHS) and translate SMD to the nonparametric setting, where its gradient trace parameter is no longer a coefficient vector but an element of the RKHS. We derive efficient updates that allow us to perform the step size adaptation in linear time. We apply the online SVM framework to a variety of loss functions, and in particular show how to handle structured output spaces and achieve efficient online multiclass classification. Experiments show that our algorithm outperforms more primitive methods for setting the gradient step size. Keywords: online SVM, stochastic meta-descent, structured output spaces
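The step size adaptation described here can be sketched in its simple parametric form; the paper's actual contribution is translating this to the RKHS setting, where the gradient trace lives in the function space. The sketch below assumes the standard parametric SMD updates (multiplicative gain update, gradient step, gradient trace with a Hessian-vector product) on a linear least-squares stream; the constants and the update ordering follow one common presentation and are not taken from the paper.

```python
import numpy as np

def smd_lsq(stream, dim, eta0=0.05, mu=0.05, lam=0.99):
    """Parametric SMD on the squared loss 0.5 * (w.x - y)^2.

    eta: per-coordinate step sizes (adapted online)
    v:   gradient trace, approximating d w / d log(eta)
    """
    w = np.zeros(dim)
    eta = np.full(dim, eta0)
    v = np.zeros(dim)
    for x, y in stream:
        err = w @ x - y
        g = err * x                                # gradient of the loss
        Hv = (x @ v) * x                           # Hessian-vector product: (x x^T) v
        # Gains grow when the current gradient agrees with the trace
        eta = eta * np.maximum(0.5, 1.0 - mu * g * v)
        w = w - eta * g                            # gradient step with adapted gains
        v = lam * v - eta * (g + lam * Hv)         # trace update
    return w, eta
```

Because the Hessian enters only through the product Hv, each iteration stays linear in the dimension, mirroring the linear-time claim of the abstract.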
3 0.65582925 70 jmlr-2006-Online Passive-Aggressive Algorithms
Author: Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, Yoram Singer
Abstract: We present a family of margin based online learning algorithms for various prediction tasks. In particular we derive and analyze algorithms for binary and multiclass categorization, regression, uniclass prediction and sequence prediction. The update steps of our different algorithms are all based on analytical solutions to simple constrained optimization problems. This unified view allows us to prove worst-case loss bounds for the different algorithms and for the various decision problems based on a single lemma. Our bounds on the cumulative loss of the algorithms are relative to the smallest loss that can be attained by any fixed hypothesis, and as such are applicable to both realizable and unrealizable settings. We demonstrate some of the merits of the proposed algorithms in a series of experiments with synthetic and real data sets.
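The analytical updates the abstract refers to are especially compact in the binary case. A minimal sketch of the basic passive-aggressive rule (the paper also derives capped PA-I/PA-II variants and multiclass, regression, and sequence versions, which this sketch omits):

```python
import numpy as np

def pa_update(w, x, y):
    """Basic passive-aggressive step for binary classification, y in {-1, +1}."""
    loss = max(0.0, 1.0 - y * (w @ x))   # hinge loss on the current example
    if loss > 0.0:                       # 'aggressive' only when the margin is violated
        tau = loss / (x @ x)             # closed-form solution of the constrained problem
        w = w + tau * y * x              # smallest change achieving unit margin
    return w
```

After an aggressive step the example sits exactly on the margin, which is the defining property of the update: it is the minimal-norm correction that satisfies the constraint.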
4 0.47619116 1 jmlr-2006-A Direct Method for Building Sparse Kernel Learning Algorithms
Author: Mingrui Wu, Bernhard Schölkopf, Gökhan Bakır
Abstract: Many kernel learning algorithms, including support vector machines, result in a kernel machine, such as a kernel classifier, whose key component is a weight vector in a feature space implicitly introduced by a positive definite kernel function. This weight vector is usually obtained by solving a convex optimization problem. Based on this fact we present a direct method to build sparse kernel learning algorithms by adding one more constraint to the original convex optimization problem, such that the sparseness of the resulting kernel machine is explicitly controlled while at the same time performance is kept as high as possible. A gradient based approach is provided to solve this modified optimization problem. Applying this method to the support vector machine results in a concrete algorithm for building sparse large margin classifiers. These classifiers essentially find a discriminating subspace that can be spanned by a small number of vectors, and in this subspace, the different classes of data are linearly well separated. Experimental results over several classification benchmarks demonstrate the effectiveness of our approach. Keywords: sparse learning, sparse large margin classifiers, kernel learning algorithms, support vector machine, kernel Fisher discriminant
5 0.47137699 96 jmlr-2006-Worst-Case Analysis of Selective Sampling for Linear Classification
Author: Nicolò Cesa-Bianchi, Claudio Gentile, Luca Zaniboni
Abstract: A selective sampling algorithm is a learning algorithm for classification that, based on the past observed data, decides whether to ask the label of each new instance to be classified. In this paper, we introduce a general technique for turning linear-threshold classification algorithms from the general additive family into randomized selective sampling algorithms. For the most popular algorithms in this family we derive mistake bounds that hold for individual sequences of examples. These bounds show that our semi-supervised algorithms can achieve, on average, the same accuracy as that of their fully supervised counterparts, but using fewer labels. Our theoretical results are corroborated by a number of experiments on real-world textual data. The outcome of these experiments is essentially predicted by our theoretical results: Our selective sampling algorithms tend to perform as well as the algorithms receiving the true label after each classification, while observing in practice substantially fewer labels. Keywords: selective sampling, semi-supervised learning, on-line learning, kernel algorithms, linear-threshold classifiers
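As a sketch of the technique this abstract describes: a margin-based randomized rule decides, for each example, whether to ask for the label, and the base learner updates only on queried examples. We assume the query probability b / (b + |margin|) commonly associated with this line of work and use a plain perceptron as the base learner; the constant b and all names are illustrative choices, not the paper's notation.

```python
import numpy as np

def selective_perceptron(stream, dim, b=1.0, rng=None):
    """Perceptron that queries each label with probability b / (b + |margin|)."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.zeros(dim)
    queries = 0
    for x, y in stream:
        margin = w @ x
        # Small |margin| means an uncertain prediction, so query more often
        if rng.random() < b / (b + abs(margin)):
            queries += 1
            if y * margin <= 0:          # mistake on a queried example
                w = w + y * x            # standard perceptron correction
    return w, queries
```

Confident predictions (large |margin|) are rarely queried, so the label budget concentrates where the classifier is still uncertain, which is how these algorithms approach fully supervised accuracy with fewer labels.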
6 0.46339607 60 jmlr-2006-Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
7 0.45865285 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix
8 0.4465906 29 jmlr-2006-Estimation of Gradients and Coordinate Covariation in Classification
9 0.44608206 35 jmlr-2006-Geometric Variance Reduction in Markov Chains: Application to Value Function and Gradient Estimation
10 0.43011659 45 jmlr-2006-Learning Coordinate Covariances via Gradients
12 0.41197783 37 jmlr-2006-Incremental Algorithms for Hierarchical Classification
13 0.41194019 74 jmlr-2006-Point-Based Value Iteration for Continuous POMDPs
14 0.39743561 75 jmlr-2006-Policy Gradient in Continuous Time
15 0.38898897 61 jmlr-2006-Maximum-Gain Working Set Selection for SVMs (Special Topic on Machine Learning and Optimization)
16 0.38691989 28 jmlr-2006-Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting
17 0.38686574 95 jmlr-2006-Walk-Sums and Belief Propagation in Gaussian Graphical Models
18 0.38227695 43 jmlr-2006-Large Scale Multiple Kernel Learning (Special Topic on Machine Learning and Optimization)
19 0.37952843 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization