jmlr jmlr2006 jmlr2006-57 knowledge-graph by maker-knowledge-mining

57 jmlr-2006-Linear State-Space Models for Blind Source Separation


Source: pdf

Author: Rasmus Kongsgaard Olsson, Lars Kai Hansen

Abstract: We apply a type of generative modelling to the problem of blind source separation in which prior knowledge about the latent source signals, such as time-varying auto-correlation and quasiperiodicity, is incorporated into a linear state-space model. In simulations, we show that in terms of signal-to-error ratio, the sources are inferred more accurately as a result of the inclusion of strong prior knowledge. We explore different schemes of maximum-likelihood optimization for the purpose of learning the model parameters. The Expectation Maximization algorithm, which is often considered the standard optimization method in this context, results in slow convergence when the noise variance is small. In such scenarios, quasi-Newton optimization yields substantial improvements in a range of signal-to-noise ratios. We analyze the performance of the methods on convolutive mixtures of speech signals. Keywords: blind source separation, state-space model, independent component analysis, convolutive model, EM, speech modelling

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We analyze the performance of the methods on convolutive mixtures of speech signals. [sent-10, score-0.588]

2 Keywords: blind source separation, state-space model, independent component analysis, convolutive model, EM, speech modelling 1. [sent-11, score-0.864]

3 Introduction. We are interested in blind source separation (BSS), in which unknown source signals are estimated from noisy mixtures. [sent-12, score-0.886]

4 While most prior work is focused on mixtures that can be characterized as instantaneous, we will here investigate causal convolutive mixtures. [sent-16, score-0.422]

5 Convolutive BSS is relevant in many signal processing applications, where the instantaneous mixture model cannot possibly capture the latent causes of the observations due to different time delays between the sources and sensors. [sent-18, score-0.388]

6 The main problem is the lack of general models and estimation schemes; most current work is highly application specific with the majority focused on applications in separation of speech signals. [sent-19, score-0.282]

7 In this work we will also be concerned with speech signals; however, we will formulate a generative model that may be generalizable to several other application domains. [sent-20, score-0.299]

8 One of the most successful approaches to convolutive BSS is based on the following assumptions: 1) The mixing process is linear and causal, 2) the source signals are statistically independent, 3) the sources can be fully characterized by their time variant second order statistics (Weinstein et al. [sent-21, score-0.86]

9 It is well-known that stationary second order statistics, that is, covariances and correlation functions, are not informative enough in the convolutive mixing case. [sent-26, score-0.383]

10 The linear mixing model reads $x_t = \sum_{k=0}^{L-1} A_k s_{t-k} + w_t$. (1) [sent-31, score-0.164]

11 At discrete time t, the observation vector $x_t$ results from the convolution sum of the L time-lagged mixing matrices $A_k$ and the source vector $s_t$. [sent-32, score-0.477]
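
To make the mixing model (1) concrete, here is a minimal sketch that generates a noisy convolutive mixture from given sources and lagged mixing matrices; the function name, array shapes and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def convolutive_mix(S, A, noise_std=0.01, rng=None):
    """Convolutive mixture x_t = sum_k A[k] @ s_{t-k} + w_t.

    S : (P, T) array of P source signals.
    A : (L, Q, P) array of L lagged mixing matrices.
    Returns X : (Q, T) array of Q noisy mixtures.
    """
    rng = np.random.default_rng() if rng is None else rng
    L, Q, P = A.shape
    _, T = S.shape
    X = np.zeros((Q, T))
    for t in range(T):
        for k in range(L):
            if t - k >= 0:
                X[:, t] += A[k] @ S[:, t - k]
    return X + noise_std * rng.standard_normal((Q, T))

# toy example with P = Q = 2 sources/sensors and L = 4 lags
rng = np.random.default_rng(0)
S = rng.standard_normal((2, 1000))
A = rng.standard_normal((4, 2, 2))
X = convolutive_mix(S, A, rng=rng)
```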

12 The individual sources, that is, the elements of st , are assumed to be statistically independent. [sent-33, score-0.16]

13 BSS is concerned with estimating st from xt , while Ak is unknown. [sent-38, score-0.213]

14 It is apparent from (1) that only filtered versions of the elements of st can be retrieved, since the inverse filtering can be applied to the unknown Ak . [sent-39, score-0.16]

15 The latter is evident from the fact that various permutations applied simultaneously to the elements of st and the columns of Ak produce identical mixtures, xt. [sent-41, score-0.213]

16 As already mentioned, however, we will treat the class of convolutive mixtures, that is L > 1. [sent-46, score-0.316]

17 Convolutive Independent Component Analysis (C-ICA) is a class of BSS methods for (1) where the source estimates are produced by computing the ‘unmixing’ transformation that restores statistical independence. [sent-47, score-0.197]

18 Simplistically, the separation filter is estimated by minimizing the mutual information, or ‘cross’ moments, of the ‘separated’ signals. [sent-51, score-0.164]

19 In effect, the more difficult convolutive problem is decomposed into a number of manageable instantaneous ICA problems that can be solved independently using the mentioned methods. [sent-58, score-0.389]

20 However, frequency domain decomposition suffers from permutation over frequency, which is a consequence of the potentially different orderings of sources at different frequencies. [sent-59, score-0.291]

21 , 2002), with richer source priors that incorporate time-correlation, non-stationarity, periodicity and the convolutive mixture model to arrive at a C-ICA algorithm. [sent-64, score-0.588]

22 The presented algorithm, which operates entirely in the time-domain, relies on a linear state-space model, for which estimation and exact source inference are available. [sent-65, score-0.197]

23 To further increase the audio realism of the model, Olsson and Hansen (2005) added a harmonic excitation component in the source speech model (Brandstein, 1998); this idea is further elaborated and tested here. [sent-67, score-0.624]

24 Sections 5 and 6 present a number of experimental illustrations of the approach on simulated and speech data respectively. [sent-73, score-0.166]

25 Model. The convolutive blind source separation problem is cast as a density estimation task in a latent variable model, as was suggested in Pearlmutter and Parra (1997) for the instantaneous ICA problem: $p(X|\theta) = \int p(X|S, \theta_1)\, p(S|\theta_2)\, dS$. [sent-75, score-0.929]

26 Here, the matrices X and S are constructed as the column sets of xt and st for all t. [sent-76, score-0.213]

27 The functional forms of the conditional likelihood, p(X|S, θ1 ), and the joint source prior, p(S|θ2 ), should ideally be selected to fit the realities of the separation task at hand. [sent-77, score-0.313]

28 The distributions depend on a set of tunable parameters, θ ≡ {θ1 , θ2 }, which in a blind separation setup is to be learned from the data. [sent-78, score-0.263]

29 In the present work, p(X|S, θ1) and p(S|θ2) have been restricted to fit into a class of linear state-space models, for which effective estimation schemes exist (Roweis and Ghahramani, 1999): $s_t = F^n s_{t-1} + C^n u_t + v_t$, (2) and $x_t = A s_t + w_t$. (3) [sent-79, score-0.213]

30 The terms vt ∼ N(0, Qn) and wt ∼ N(0, R) are zero-mean Gaussian variables. The ‘input’ or ‘control’ signal ut ≡ ut(ψn) deterministically shifts the mean of st depending on the parameters ψn. [sent-85, score-0.215]

31 For equations (2) and (3) to serve as a generative model for the instantaneous mixture of first-order autoregressive, AR(1), sources, it need only be assumed that Fn and Qn are diagonal matrices and that Cn = 0. [sent-87, score-0.386]

32 This is achieved by augmenting the source vector to contain time-lagged signals. [sent-93, score-0.197]

33 In panel a, the corresponding source update is shown for an AR process of order p = 4. [sent-94, score-0.197]

34 In panel b, the sources are mixed through filters (L = 4) into Q = 2 noisy mixtures. [sent-95, score-0.199]

35 Auto-Regressive Source Prior. The AR(p) source prior for source i in frame n is defined as $s_{i,t} = \sum_{k=1}^{p} f^n_{i,k}\, s_{i,t-k} + v_{i,t}$, where t ∈ {1, 2, …}. [sent-98, score-0.433]

36 It is an important point that the convolutive mixture of AR(p) sources can be contained in the linear state-space model (2) and (3); this is illustrated in Figure 1. [sent-109, score-0.54]

37 The enabling trick, which is standard in time series analysis, is to augment the source vector to include a time history so that it contains L time-lagged samples of all P sources: $s_t = [(\bar{s}_{1,t})^\top\; (\bar{s}_{2,t})^\top\; \ldots\; (\bar{s}_{P,t})^\top]^\top$. [sent-110, score-0.506]

38 Here the i'th source is represented as $\bar{s}_{i,t} = [s_{i,t}\; s_{i,t-1}\; \ldots\; s_{i,t-L+1}]^\top$. [sent-113, score-0.197]
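
As a sketch of this augmentation (details assumed: companion-form blocks with the AR coefficients zero-padded to the history length), the following builds the block-diagonal transition matrix and the observation matrix that acts on the stacked, time-lagged source vector; names are hypothetical.

```python
import numpy as np

def companion_block(f, hist):
    """Companion-form transition for one AR(p) source carrying a history
    of `hist` >= p lagged samples: the first row holds the AR coefficients
    (zero-padded), the sub-diagonal shifts the history down by one step."""
    F = np.zeros((hist, hist))
    F[0, :len(f)] = f
    F[1:, :-1] = np.eye(hist - 1)
    return F

def stacked_state_space(f_list, A_lagged):
    """Assemble the block-diagonal transition F and the observation matrix
    acting on the stacked source vector s_t = [s_1,t ... s_P,t]^T.

    f_list   : list of P arrays of AR coefficients.
    A_lagged : (L, Q, P) lagged mixing matrices a_{ij,k}.
    """
    L, Q, P = A_lagged.shape
    hist = max(L, max(len(f) for f in f_list))
    F = np.zeros((P * hist, P * hist))
    A = np.zeros((Q, P * hist))
    for i, f in enumerate(f_list):
        block = slice(i * hist, (i + 1) * hist)
        F[block, block] = companion_block(f, hist)
        A[:, block.start:block.start + L] = A_lagged[:, :, i].T  # (Q, L) block
    return F, A
```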

39 The vector $a_{ij} = [a_{ij,1}, \ldots, a_{ij,L}]$ can be interpreted as the impulse response of the channel filter between source i and sensor j. [sent-179, score-0.381]

40 Harmonic Source Prior. Many classes of audio signals, such as voiced speech and musical instruments, are approximately piece-wise periodic. [sent-182, score-0.218]

41 In order to account for colored noise residuals and noisy signals in general, a harmonic and noise (HN) model is suggested (McAulay and Quateri, 1986). [sent-184, score-0.408]

42 The following formulation is used: $s_{i,t} = \sum_{t'=1}^{p} f^n_{i,t'}\, s_{i,t-t'} + \sum_{k=1}^{K} \left[ c^n_{i,2k-1}\sin(k\omega^n_{0,i} t) + c^n_{i,2k}\cos(k\omega^n_{0,i} t) \right] + v_{i,t}$, where $\omega^n_{0,i}$ is the fundamental frequency of source i in frame n and the Fourier coefficients are contained in $c^n_{i,2k-1}$ and $c^n_{i,2k}$. [sent-185, score-0.559]

43 The harmonic model is represented in the state-space model (2) and (3) through the definition of $C^n$ as a block-diagonal matrix whose i'th block is $(c^n_i)^\top$. [sent-186, score-0.193]

44 Here $c^n_i = [c^n_{i,1}\; c^n_{i,2}\; \cdots\; c^n_{i,2K}]^\top$ and $u^n_t = [(u^n_{1,t})^\top\; (u^n_{2,t})^\top\; \cdots\; (u^n_{P,t})^\top]^\top$. [sent-198, score-0.126]

45 The k'th harmonics of source i in frame n are defined as $(u^n_{i,t})_{2k-1} = \sin(k\omega^n_{0,i} t)$ and $(u^n_{i,t})_{2k} = \cos(k\omega^n_{0,i} t)$, implying the following parameter set for the source mean: $\psi^n = [\omega^n_{0,1}\; \omega^n_{0,2}\; \ldots\; \omega^n_{0,P}]$. [sent-201, score-0.479]

46 Other parametric mean functions could, of course, be used, for example, a more advanced speech model. [sent-205, score-0.166]
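
A hedged sketch of how one frame of such a harmonic-and-noise source could be synthesized from the quantities defined above; the function and argument names are hypothetical, and the interleaving of sine/cosine coefficients follows the definition of the harmonics.

```python
import numpy as np

def hn_source_frame(f, c, omega0, T, q_std, rng=None, s_init=None):
    """One frame of a harmonic-and-noise (HN) source: an AR(p) recursion
    plus K harmonics of the fundamental (angular) frequency omega0.

    f : AR coefficients f_1..f_p.
    c : Fourier coefficients [c_1, ..., c_{2K}] (sin/cos interleaved).
    """
    rng = np.random.default_rng() if rng is None else rng
    p, K = len(f), len(c) // 2
    s = np.zeros(T + p)
    if s_init is not None:
        s[:p] = s_init                      # history from the previous frame
    for t in range(T):
        ar = sum(f[k] * s[p + t - 1 - k] for k in range(p))
        harm = sum(c[2 * k] * np.sin((k + 1) * omega0 * t)
                   + c[2 * k + 1] * np.cos((k + 1) * omega0 * t)
                   for k in range(K))
        s[p + t] = ar + harm + q_std * rng.standard_normal()
    return s[p:]
```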

47 Source Inference. In a maximum a posteriori sense, the sources, st, can be optimally reconstructed using the Kalman filter/smoother (Kalman and Bucy, 1960; Rauch et al., 1965). [sent-207, score-0.16]

48 While the filter computes the time-marginal moments of the source posterior conditioned on past and present samples, that is, $\langle s_t\rangle_{p(S|x_{1:t},\theta)}$ and $\langle s_t s_t^\top\rangle_{p(S|x_{1:t},\theta)}$, the smoother conditions on samples from the entire block: $\langle s_t\rangle_{p(S|x_{1:T},\theta)}$ and $\langle s_t s_t^\top\rangle_{p(S|x_{1:T},\theta)}$. [sent-210, score-1.157]
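
For illustration, a minimal Kalman filter plus RTS smoother in the spirit of the inference described here; it returns the smoothed means and covariances, from which the moments $\langle s_t\rangle$ and $\langle s_t s_t^\top\rangle = P_t + \langle s_t\rangle\langle s_t\rangle^\top$ follow. This is a textbook sketch with hypothetical names (and without the control input), not the authors' implementation.

```python
import numpy as np

def kalman_smoother(X, F, A, Qn, R, mu0, P0):
    """Kalman filter + RTS smoother for x_t = A s_t + w_t,
    s_t = F s_{t-1} + v_t, with v_t ~ N(0, Qn), w_t ~ N(0, R).
    X has shape (num_sensors, T); the state dimension is d."""
    d, T = F.shape[0], X.shape[1]
    mu_p = np.zeros((T, d)); P_p = np.zeros((T, d, d))   # predicted
    mu_f = np.zeros((T, d)); P_f = np.zeros((T, d, d))   # filtered
    for t in range(T):
        m_pred = mu0 if t == 0 else F @ mu_f[t - 1]
        P_pred = P0 if t == 0 else F @ P_f[t - 1] @ F.T + Qn
        S = A @ P_pred @ A.T + R
        K = P_pred @ A.T @ np.linalg.inv(S)
        mu_f[t] = m_pred + K @ (X[:, t] - A @ m_pred)
        P_f[t] = (np.eye(d) - K @ A) @ P_pred
        mu_p[t], P_p[t] = m_pred, P_pred
    mu_s, P_s = mu_f.copy(), P_f.copy()                  # smoothed
    for t in range(T - 2, -1, -1):
        J = P_f[t] @ F.T @ np.linalg.inv(P_p[t + 1])
        mu_s[t] = mu_f[t] + J @ (mu_s[t + 1] - mu_p[t + 1])
        P_s[t] = P_f[t] + J @ (P_s[t + 1] - P_p[t + 1]) @ J.T
    return mu_s, P_s
```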

49 On the E-step, the Kalman smoother is used to compute the marginal moments of the source posterior, $\hat{p} = p(S|X, \hat{\theta})$; see Section 3. [sent-226, score-0.197]

50 All expectations $\langle\cdot\rangle$ are over the source posterior, p(S|X, θ). [sent-233, score-0.197]

51 Autoregressive Model. For source i in block n: $f^n_{i,\mathrm{new}} = \left(\sum_{t=2+t_0(n)}^{T+t_0(n)} \langle s_{i,t}\, s_{i,t-1}\rangle\right) \left(\sum_{t=2+t_0(n)}^{T+t_0(n)} \langle s_{i,t-1}\, s_{i,t-1}\rangle\right)^{-1}$ and $q^n_{i,\mathrm{new}} = \frac{1}{T-1}\sum_{t=2+t_0(n)}^{T+t_0(n)} \left( \langle s^2_{i,t}\rangle - f^n_{i,\mathrm{new}} \langle s_{i,t}\, s_{i,t-1}\rangle \right)$, where $t_0(n) = (n-1)T$. [sent-235, score-0.73]

52 Furthermore: $A_{\mathrm{new}} = \left(\sum_{t=1}^{NT} x_t \langle s_t\rangle^\top\right)\left(\sum_{t=1}^{NT} \langle s_t s_t^\top\rangle\right)^{-1}$ and $R_{\mathrm{new}} = \frac{1}{NT}\sum_{t=1}^{NT} \mathrm{diag}\!\left[ x_t x_t^\top - A_{\mathrm{new}} \langle s_t\rangle x_t^\top \right]$, where the diag[·] operator extracts the diagonal elements of the matrix. [sent-236, score-0.533]
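
A rough sketch of these M-step updates from E-step moments supplied by the smoother; all function and argument names are hypothetical, and the AR update is written for the scalar AR(1) case.

```python
import numpy as np

def m_step_ar1(sum_s2, sum_lag2, sum_cross, T):
    """AR(1) M-step for one source in one block, from summed moments:
    sum_s2 = sum <s_t^2>, sum_lag2 = sum <s_{t-1}^2>,
    sum_cross = sum <s_t s_{t-1}> (sums over the block)."""
    f_new = sum_cross / sum_lag2
    q_new = (sum_s2 - f_new * sum_cross) / (T - 1)
    return f_new, q_new

def m_step_mixing(X, Es, Ess):
    """M-step for the mixing matrix A and the diagonal noise covariance R.
    X : (num_sensors, T) observations, Es : (T, d) smoothed means <s_t>,
    Ess : (T, d, d) smoothed second moments <s_t s_t^T>."""
    n_sensors, T = X.shape
    Sxs = sum(np.outer(X[:, t], Es[t]) for t in range(T))
    Sss = sum(Ess[t] for t in range(T))
    A_new = Sxs @ np.linalg.inv(Sss)
    r = np.zeros(n_sensors)
    for t in range(T):
        r += np.diag(np.outer(X[:, t], X[:, t])
                     - A_new @ np.outer(Es[t], X[:, t]))
    return A_new, np.diag(r / T)
```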

53 Harmonic and Noise Model. The linear source parameters and signals are grouped as $d^n_i \equiv [(f^n_i)^\top\; (c^n_i)^\top]^\top$ and $z_i \equiv [(\bar{s}_{i,t-1})^\top\; (u_{i,t})^\top]^\top$, where $f^n_i \equiv [f^n_{i,1}\; f^n_{i,2}\; \ldots]^\top$. [sent-240, score-0.482]

54 The estimators of A, R and qn are similar to those in the AR model. [sent-250, score-0.225]

55 Hence, the reformulations $\Omega^2 = R$ and $(\phi^n_i)^2 = q^n_i$ are introduced. [sent-260, score-0.225]

56 Quadratic mixtures with Q = P = 2 were used: the first 2 elements of a12 and a21 were set to zero to simulate a situation with different channel delays. [sent-282, score-0.179]

57 The true and estimated sources were mapped to the output by filtering through the direct channel, so that the true source at the output is $\tilde{s}_{i,t} = a_{ii} * s_{i,t}$. [sent-289, score-0.467]

58 The estimated source at the sensor is defined similarly. [sent-290, score-0.245]
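
A small sketch of this evaluation step (mapping through the direct channel, then the signal-to-error ratio in dB); names are illustrative.

```python
import numpy as np

def map_to_output(a_direct, s, T=None):
    """Filter a source through its direct channel a_ii (impulse response)."""
    y = np.convolve(a_direct, s)
    return y[:T] if T is not None else y[:len(s)]

def ser_db(s_true_out, s_est_out):
    """Signal-to-error ratio (dB) between the true and estimated sources
    after both have been mapped through the direct channel."""
    err = s_true_out - s_est_out
    return 10.0 * np.log10(np.sum(s_true_out ** 2) / np.sum(err ** 2))
```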

59 In Table 1, the generative approach is contrasted with a stationary finite impulse response (FIR) filter separator. [sent-296, score-0.164]

60 Figure 2: The true (bold) and estimated models for the first 3 frames of the synthetic data based on the autoregressive model. [sent-306, score-0.3]

61 The amplitude frequency responses of the combined source and channel filters are shown: for source i, this amounts to the frequency response of the filter, with the scaling and poles of θ1,i and zeros of the direct channel aii . [sent-307, score-0.79]

62 Table 1: The signal-to-error ratio (SER) performance on synthetic data based on the autoregressive (AR) and harmonic-and-noise (HN) source models. [sent-322, score-0.392]

63 The fundamental frequency of the harmonic component was sampled uniformly in a range (see Figure 4); the amplitudes and phases (K = 4) were drawn from a Gaussian distribution and subsequently normalized such that ||ci|| = 1. [sent-344, score-0.176]

64 In cases when the sources are truly harmonic and noisy, it is expected that the AR model performs worse than the HN model. [sent-349, score-0.298]

65 The source parameters were estimated by the EM algorithm, whereas the mixing matrix, A, was assumed known. [sent-353, score-0.312]

66 True and estimated parameters from synthetic mixtures of harmonic-and-noisy sources. [sent-372, score-0.198]

67 Figure 5: The signal-to-error ratio (SER) performance of the autoregressive (AR) and harmonic-and-noisy (HN) models for the synthetic data set (N = 100) in which the harmonic-to-noise ratio (HNR) was varied. [sent-388, score-0.195]

68 Speech Mixtures. The separation of multiple speech sources from room mixtures has potential applications in hearing aids and speech recognition software (see, for example, Parra and Spence, 2000). [sent-393, score-0.749]

69 Figure 6: The separation performance (SER) on test mixtures as a function of the training data duration for the autoregressive (AR) and harmonic-and-noisy (HN) priors. [sent-405, score-0.373]

70 Subsequently, the learned filters, A, were applied to the test data, reestimating the source model parameters. [sent-407, score-0.241]

71 For reference, a frequency domain (FD) blind source separation algorithm was applied to the data. [sent-409, score-0.531]

72 We investigate the models based on the autoregressive (AR) and harmonic-and-noisy source (HN) priors and compare with a standard frequency domain method (FD). [sent-410, score-0.419]

73 More specifically, a learning curve was computed in order to illustrate that the inclusion of prior knowledge of speech benefits the separation of the speech sources. [sent-411, score-0.448]

74 Figure 6 shows the relationship between the separation performance on test mixtures and the duration of the training data, confirming the hypothesis that the AR and HN models converge faster than the FD method. [sent-412, score-0.222]

75 The mixtures were constructed by filtering speech signals (sampled at 8 kHz) through a set of simulated room impulse responses, that is, aij, and subsequently adding the filtered signals. [sent-414, score-0.56]

76 The room impulse responses were constructed by simulating Q = 2 speakers and P = 2 microphones in an (ideal) anechoic room, the Cartesian coordinates in the horizontal plane given (in m) by {(1, 3) , (3, 3)} and {(1. [sent-415, score-0.255]

77 25m between the speakers and the microphones, and a set of room impulse responses that are essentially Kronecker delta functions well represented using a filter length of L = 8. [sent-420, score-0.21]
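
As an illustration of such idealized anechoic impulse responses, the sketch below places a single attenuated impulse at the propagation delay; the sampling rate, speed of sound and 1/r attenuation are assumptions for the example, not values taken from the paper.

```python
import numpy as np

def anechoic_ir(dist_m, fs=8000, L=8, c_sound=343.0):
    """Idealized anechoic room impulse response of length L: a single
    impulse at the propagation delay (rounded to the nearest sample),
    attenuated with distance."""
    delay = int(round(dist_m / c_sound * fs))
    h = np.zeros(L)
    if delay < L:
        h[delay] = 1.0 / max(dist_m, 1e-3)
    return h
```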

78 The SG algorithm was used to fit the model to the mixtures and subsequently infer the source signals. [sent-426, score-0.347]

79 The speech data, divided into blocks of length T = 200, was preprocessed with a standard pre-emphasis filter, H(z) = 1 − 0. [sent-427, score-0.166]
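
The pre-emphasis coefficient is truncated in the sentence above; the sketch below uses an assumed value of 0.95 purely to illustrate a filter of the form H(z) = 1 − αz⁻¹.

```python
import numpy as np

def preemphasis(x, alpha=0.95):
    """Pre-emphasis filter H(z) = 1 - alpha * z^{-1}; alpha = 0.95 is an
    assumed value, not the coefficient used in the paper."""
    x = np.asarray(x, dtype=float)
    return np.append(x[0], x[1:] - alpha * x[:-1])
```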

80 The source model order was set to p = 1 (autoregression order) and in the case of the harmonic-and-noise model, the number of harmonics was set to K = 6. [sent-431, score-0.287]

81 In order to qualitatively assess the effect of the two priors, a mixture of speech signals was constructed using P = 2 speech signals (a female and a male, shown in Figure 7a and b). [sent-434, score-0.625]

82 The EM algorithm was used to fit the source models to the mixtures. [sent-437, score-0.197]

83 It is clear from Figure 7 c-f that the estimated harmonic model to a large extent explains the voiced parts of the speech signals, and the unvoiced parts to a lesser extent. [sent-438, score-0.363]

84 In regions of rapid fundamental frequency variation, the harmonic part cannot be fitted as well (the frames are too long here). [sent-439, score-0.233]

85 Conclusion. It is demonstrated that careful generative modelling is a viable approach to convolutive source separation and can yield improved results. [sent-444, score-0.756]

86 Noisy observations, non-stationarity of the sources and small data volumes are examples of scenarios which benefit from the higher level of modelling detail. [sent-445, score-0.187]

87 The harmonic-and-noise model was investigated as a means of more accurately estimating a number of speech source signals from their mixtures. [sent-450, score-0.538]

88 Although a substantial improvement is shown to result when the sources are truly harmonic, the overall model is vulnerable to overfitting when the energy of one or more sources is locally near-zero. [sent-451, score-0.342]

89 Figure 7: The source parameters of the autoregressive (AR) and harmonic-and-noisy (HN) models were estimated from Q = 2 convolutive mixtures using the EM algorithm. [sent-463, score-0.818]

90 Spectrograms show the low-frequency parts of the original female (a) and male (b) speech sources. [sent-464, score-0.166]

91 The appropriateness of the HN model can be assessed in panels c and d, which display the resynthesis of the two sources from the parameters (K = 6), as well as in panels e and f, where the estimated harmonics-to-noise ratio (HNR) is displayed. [sent-465, score-0.326]

92 The relative separation performance of the AR and HN models, which is shown in g and h for the two sources, confirms that the HN model is superior in most cases, with a notable exception in regions such as (II), where one of the speakers is silent. [sent-467, score-0.206]

93 The vector derivative of J(θ) with respect to $f^n_i$ is $\frac{dJ(\theta)}{df^n_i} = \frac{1}{q^n_i}\left[ \sum_{t=2}^{T} \langle s^n_{i,t-1} (s^n_{i,t-1})^\top\rangle\, f^n_i - \sum_{t=2}^{T} \langle s^n_{i,t}\, s^n_{i,t-1}\rangle \right]$. [sent-475, score-0.63]

94 By equating to zero and solving, the M-step update is derived: $f^n_{i,\mathrm{new}} = \left(\sum_{t=2}^{T} \langle s^n_{i,t-1} (s^n_{i,t-1})^\top\rangle\right)^{-1} \sum_{t=2}^{T} \langle s^n_{i,t}\, s^n_{i,t-1}\rangle$. [sent-477, score-0.422]
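
A minimal sketch of this normal-equation solve; the summed moments are assumed to come from the E-step, and the names are hypothetical.

```python
import numpy as np

def ar_mstep(sum_lag_outer, sum_cross):
    """f_new = (sum <s_{t-1} s_{t-1}^T>)^{-1} sum <s_t s_{t-1}>,
    given the (p, p) matrix sum_lag_outer and the (p,) vector sum_cross."""
    return np.linalg.solve(sum_lag_outer, sum_cross)
```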

95 Amplitude modulation decorrelation for convolutive blind source separation. [sent-487, score-0.66]

96 An information-maximization approach to blind separation and blind deconvolution. [sent-494, score-0.41]

97 On the use of explicit speech modeling in microphone array applications. [sent-505, score-0.166]

98 Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models. [sent-591, score-0.475]

99 A harmonic excitation state-space approach to blind separation of speech. [sent-604, score-0.428]

100 Blind separation of more sources than sensors in convolutive mixtures. [sent-611, score-0.581]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('convolutive', 0.316), ('hn', 0.254), ('qn', 0.225), ('ar', 0.219), ('em', 0.197), ('source', 0.197), ('ser', 0.192), ('speech', 0.166), ('st', 0.16), ('fn', 0.154), ('autoregressive', 0.151), ('sources', 0.149), ('blind', 0.147), ('ansen', 0.135), ('lsson', 0.135), ('signals', 0.131), ('cn', 0.126), ('lind', 0.12), ('ource', 0.12), ('pace', 0.12), ('parra', 0.12), ('tate', 0.12), ('separation', 0.116), ('mixtures', 0.106), ('olsson', 0.105), ('harmonic', 0.105), ('eparation', 0.102), ('lter', 0.097), ('bss', 0.09), ('hnr', 0.09), ('generative', 0.089), ('snr', 0.084), ('cardoso', 0.08), ('fir', 0.075), ('impulse', 0.075), ('spence', 0.075), ('channel', 0.073), ('instantaneous', 0.073), ('ica', 0.073), ('odels', 0.073), ('frequency', 0.071), ('nt', 0.069), ('inear', 0.069), ('mixing', 0.067), ('sn', 0.067), ('db', 0.063), ('excitation', 0.06), ('salakhutdinov', 0.06), ('hansen', 0.059), ('frames', 0.057), ('signal', 0.055), ('xt', 0.053), ('audio', 0.052), ('hz', 0.051), ('noisy', 0.05), ('roweis', 0.049), ('kalman', 0.049), ('estimated', 0.048), ('room', 0.046), ('harmonics', 0.046), ('speakers', 0.046), ('anem', 0.045), ('bermond', 0.045), ('kai', 0.045), ('mcaulay', 0.045), ('microphones', 0.045), ('weinstein', 0.045), ('synthetic', 0.044), ('model', 0.044), ('responses', 0.043), ('lters', 0.043), ('fd', 0.042), ('frame', 0.039), ('noise', 0.039), ('pearlmutter', 0.038), ('modelling', 0.038), ('devised', 0.037), ('ai', 0.036), ('ltering', 0.036), ('latent', 0.036), ('amplitude', 0.035), ('ltered', 0.034), ('lars', 0.034), ('denmark', 0.031), ('robbins', 0.031), ('mixture', 0.031), ('allen', 0.03), ('astn', 0.03), ('dfn', 0.03), ('dyrholm', 0.03), ('imm', 0.03), ('kollmeier', 0.03), ('kongsgaard', 0.03), ('mckeown', 0.03), ('monro', 0.03), ('moulines', 0.03), ('poles', 0.03), ('quateri', 0.03), ('rasmus', 0.03), ('rensen', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 57 jmlr-2006-Linear State-Space Models for Blind Source Separation

Author: Rasmus Kongsgaard Olsson, Lars Kai Hansen

Abstract: We apply a type of generative modelling to the problem of blind source separation in which prior knowledge about the latent source signals, such as time-varying auto-correlation and quasiperiodicity, is incorporated into a linear state-space model. In simulations, we show that in terms of signal-to-error ratio, the sources are inferred more accurately as a result of the inclusion of strong prior knowledge. We explore different schemes of maximum-likelihood optimization for the purpose of learning the model parameters. The Expectation Maximization algorithm, which is often considered the standard optimization method in this context, results in slow convergence when the noise variance is small. In such scenarios, quasi-Newton optimization yields substantial improvements in a range of signal-to-noise ratios. We analyze the performance of the methods on convolutive mixtures of speech signals. Keywords: blind source separation, state-space model, independent component analysis, convolutive model, EM, speech modelling

2 0.17685176 52 jmlr-2006-Learning Spectral Clustering, With Application To Speech Separation

Author: Francis R. Bach, Michael I. Jordan

Abstract: Spectral clustering refers to a class of techniques which rely on the eigenstructure of a similarity matrix to partition points into disjoint clusters, with points in the same cluster having high similarity and points in different clusters having low similarity. In this paper, we derive new cost functions for spectral clustering based on measures of error between a given partition and a solution of the spectral relaxation of a minimum normalized cut problem. Minimizing these cost functions with respect to the partition leads to new spectral clustering algorithms. Minimizing with respect to the similarity matrix leads to algorithms for learning the similarity matrix from fully labelled data sets. We apply our learning algorithm to the blind one-microphone speech separation problem, casting the problem as one of segmentation of the spectrogram. Keywords: spectral clustering, blind source separation, computational auditory scene analysis

3 0.13859467 32 jmlr-2006-Expectation Correction for Smoothed Inference in Switching Linear Dynamical Systems

Author: David Barber

Abstract: We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman ‘Filter’ and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but accurate alternative. Our method consists of a single Forward and Backward Pass and is reminiscent of the standard smoothing ‘correction’ recursions in the simpler linear dynamical system. The method is numerically stable and compares favourably against alternative approximations, both in cases where a single mixture component provides a good posterior approximation, and where a multimodal approximation is required. Keywords: Gaussian sum smoother, switching Kalman filter, switching linear dynamical system, expectation propagation, expectation correction 1. Switching Linear Dynamical System The Linear Dynamical System (LDS) (Bar-Shalom and Li, 1998; West and Harrison, 1999) is a key temporal model in which a latent linear process generates the observed time-series. For more complex time-series which are not well described globally by a single LDS, we may break the time-series into segments, each modeled by a potentially different LDS. This is the basis for the Switching LDS (SLDS) where, for each time-step t, a switch variable st ∈ 1, . . . , S describes which of the LDSs is to be used.1 The observation (or ‘visible’ variable) vt ∈ R V is linearly related to the hidden state ht ∈ R H by vt = B(st )ht + ηv (st ), ηv (st ) ∼ N (v(st ), Σv (st )) ¯ (1) where N (µ, Σ) denotes a Gaussian distribution with mean µ and covariance Σ. The transition dynamics of the continuous hidden state ht is linear ht = A(st )ht−1 + ηh (st ), ¯ ηh (st ) ∼ N h(st ), Σh (st ) . (2) 1. These systems also go under the names Jump Markov model/process, switching Kalman Filter, Switching Linear Gaussian State-Space model, Conditional Linear Gaussian Model. c 2006 David Barber. BARBER s1 s2 s3 s4 h1 h2 h3 h4 v1 v2 v3 v4 Figure 1: The independence structure of the aSLDS. Square nodes denote discrete variables, round nodes continuous variables. In the SLDS links from h to s are not normally considered. The dynamics of the switch variables is Markovian, with transition p(st |st−1 ). The SLDS is used in many disciplines, from econometrics to machine learning (Bar-Shalom and Li, 1998; Ghahramani and Hinton, 1998; Lerner et al., 2000; Kitagawa, 1994; Kim and Nelson, 1999; Pavlovic et al., 2001). See Lerner (2002) and Zoeter (2005) for recent reviews of work. AUGMENTED S WITCHING L INEAR DYNAMICAL S YSTEM In this article, we will consider the more general model in which the switch st is dependent on both the previous st−1 and ht−1 . We call this an augmented Switching Linear Dynamical System 2 (aSLDS), in keeping with the terminology in Lerner (2002). An equivalent probabilistic model is, as depicted in Figure (1), T p(v1:T , h1:T , s1:T ) = p(v1 |h1 , s1 )p(h1 |s1 )p(s1 ) ∏ p(vt |ht , st )p(ht |ht−1 , st )p(st |ht−1 , st−1 ). t=2 The notation x1:T is shorthand for x1 , . . . , xT . 
The distributions are parameterized as p(vt |ht , st ) = N (v(st ) + B(st )ht , Σv (st )) , ¯ ¯ p(ht |ht−1 , st ) = N h(st ) + A(st )ht−1 , Σh (st ) where p(h1 |s1 ) = N (µ(s1 ), Σ(s1 )). The aSLDS has been used, for example, in state-duration modeling in acoustics (Cemgil et al., 2006) and econometrics (Chib and Dueker, 2004). I NFERENCE The aim of this article is to address how to perform inference in both the SLDS and aSLDS. In particular we desire the so-called filtered estimate p(ht , st |v1:t ) and the smoothed estimate p(ht , st |v1:T ), for any t, 1 ≤ t ≤ T . Both exact filtered and smoothed inference in the SLDS is intractable, scaling exponentially with time (Lerner, 2002). To see this informally, consider the filtered posterior, which may be recursively computed using p(st , ht |v1:t ) = ∑ Z st−1 ht−1 p(st , ht |st−1 , ht−1 , vt )p(st−1 , ht−1 |v1:t−1 ). (3) At timestep 1, p(s1 , h1 |v1 ) = p(h1 |s1 , v1 )p(s1 |v1 ) is an indexed set of Gaussians. At time-step 2, due to the summation over the states s1 , p(s2 , h2 |v1:2 ) will be an indexed set of S Gaussians; similarly at 2. These models are closely related to Threshold Regression Models (Tong, 1990). 2516 E XPECTATION C ORRECTION time-step 3, it will be S2 and, in general, gives rise to St−1 Gaussians. More formally, in Lauritzen and Jensen (2001), a general exact method is presented for performing stable inference in such hybrid discrete models with conditional Gaussian potentials. The method requires finding a strong junction tree which, in the SLDS case, means that the discrete variables are placed in a single cluster, resulting in exponential complexity. The key issue in the (a)SLDS, therefore, is how to perform approximate inference in a numerically stable manner. Our own interest in the SLDS stems primarily from acoustic modeling, in which the time-series consists of many thousands of time-steps (Mesot and Barber, 2006; Cemgil et al., 2006). For this, we require a stable and computationally feasible approximate inference, which is also able to deal with state-spaces of high hidden dimension, H. 2. Expectation Correction Our approach to approximate p(ht , st |v1:T ) ≈ p(ht , st |v1:T ) mirrors the Rauch-Tung-Striebel (RTS) ˜ ‘correction’ smoother for the LDS (Rauch et al., 1965; Bar-Shalom and Li, 1998). Readers unfamiliar with this approach will find a short explanation in Appendix (A), which defines the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS. Our correction approach consists of a single Forward Pass to recursively find the filtered posterior p(ht , st |v1:t ), followed by a single Backward Pass to correct this into a smoothed posterior ˜ p(ht , st |v1:T ). The Forward Pass we use is equivalent to Assumed Density Filtering (Alspach and ˜ Sorenson, 1972; Boyen and Koller, 1998; Minka, 2001). The main contribution of this paper is a novel form of Backward Pass, based on collapsing the smoothed posterior to a mixture of Gaussians. Unless stated otherwise, all quantities should be considered as approximations to their exact counterparts, and we will therefore usually omit the tildes˜throughout the article. 2.1 Forward Pass (Filtering) Readers familiar with Assumed Density Filtering (ADF) may wish to continue directly to Section (2.2). The basic idea is to represent the (intractable) posterior using a simpler distribution. 
This is then propagated forwards through time, conditioned on the new observation, and subsequently collapsed back to the tractable distribution representation—see Figure (2). Our aim is to form a recursion for p(st , ht |v1:t ), based on a Gaussian mixture approximation of p(ht |st , v1:t ). Without loss of generality, we may decompose the filtered posterior as p(ht , st |v1:t ) = p(ht |st , v1:t )p(st |v1:t ). We will first form a recursion for p(ht |st , v1:t ), and discuss the switch recursion p(st |v1:t ) later. The full procedure for computing the filtered posterior is presented in Algorithm (1). The exact representation of p(ht |st , v1:t ) is a mixture with O(St ) components. We therefore approximate this with a smaller It -component mixture It p(ht |st , v1:t ) ≈ p(ht |st , v1:t ) ≡ ˜ ˜ ˜ ∑ p(ht |it , st , v1:t ) p(it |st , v1:t ) it =1 where p(ht |it , st , v1:t ) is a Gaussian parameterized with mean3 f (it , st ) and covariance F(it , st ). The ˜ Gaussian mixture weights are given by p(it |st , v1:t ). In the above, p represent approximations to the ˜ ˜ 3. Strictly speaking, we should use the notation ft (it , st ) since, for each time t, we have a set of means indexed by it , st . This mild abuse of notation is used elsewhere in the paper. 2517 BARBER st st+1 it ht ht+1 vt+1 Figure 2: Structure of the mixture representation of the Forward Pass. Essentially, the Forward Pass defines a ‘prior’ distribution at time t which contains all the information from the variables v1:t . This prior is propagated forwards through time using the exact dynamics, conditioned on the observation, and then collapsed back to form a new prior approximation at time t + 1. corresponding exact p distributions. To find a recursion for these parameters, consider ˜ p(ht+1 |st+1 , v1:t+1 ) = ∑ p(ht+1 , st , it |st+1 , v1:t+1 ) ˜ st ,it = ∑ p(ht+1 |it , st , st+1 , v1:t+1 ) p(st , it |st+1 , v1:t+1 ) ˜ ˜ (4) st ,it where each of the factors can be recursively computed on the basis of the previous filtered results (see below). However, this recursion suffers from an exponential increase in mixture components. To deal with this, we will later collapse p(ht+1 |st+1 , v1:t+1 ) back to a smaller mixture. For the ˜ remainder, we drop the p notation, and concentrate on computing the r.h.s of Equation (4). ˜ E VALUATING p(ht+1 |st , it , st+1 , v1:t+1 ) We find p(ht+1 |st , it , st+1 , v1:t+1 ) from the joint distribution p(ht+1 , vt+1 |st , it , st+1 , v1:t ), which is a Gaussian with covariance and mean elements4 Σhh = A(st+1 )F(it , st )AT (st+1 ) + Σh (st+1 ), Σvv = B(st+1 )Σhh BT (st+1 ) + Σv (st+1 ) Σvh = B(st+1 )F(it , st ), µv = B(st+1 )A(st+1 ) f (it , st ), µh = A(st+1 ) f (it , st ). (5) These results are obtained from integrating the forward dynamics, Equations (1,2) over h t , using the results in Appendix (B). To find p(ht+1 |st , it , st+1 , v1:t+1 ) we may then condition p(ht+1 , vt+1 | st , it , st+1 , v1:t ) on vt+1 using the results in Appendix (C)—see also Algorithm (4). E VALUATING p(st , it |st+1 , v1:t+1 ) Up to a trivial normalization constant the mixture weight in Equation (4) can be found from the decomposition p(st , it |st+1 , v1:t+1 ) ∝ p(vt+1 |it , st , st+1 , v1:t )p(st+1 |it , st , v1:t )p(it |st , v1:t )p(st |v1:t ). ¯ 4. We derive this for ht+1 , vt+1 ≡ 0, to ease notation. ¯ 2518 (6) E XPECTATION C ORRECTION Algorithm 1 aSLDS Forward Pass. Approximate the filtered posterior p(st |v1:t ) ≡ ρt , p(ht |st , v1:t ) ≡ ∑it wt (it , st )N ( ft (it , st ), Ft (it , st )). 
Also we return the approximate log-likelihood log p(v 1:T ). We ¯ require I1 = 1, I2 ≤ S, It ≤ S × It−1 . θt (s) = A(s), B(s), Σh (s), Σv (s), h(s), v(s) for t > 1. θ1 (s) = ¯ v (s), µ(s), v(s) A(s), B(s), Σ(s), Σ ¯ for s1 ← 1 to S do { f1 (1, s1 ), F1 (1, s1 ), p} = LDSFORWARD(0, 0, v1 ; θ(s1 )) ˆ ρ1 ← p(s1 ) p ˆ end for for t ← 2 to T do for st ← 1 to S do for i ← 1 to It−1 , and s ← 1 to S do {µx|y (i, s), Σx|y (i, s), p} = LDSFORWARD( ft−1 (i, s), Ft−1 (i, s), vt ; θt (st )) ˆ p∗ (st |i, s) ≡ p(st |ht−1 , st−1 = s) p(ht−1 |it−1 =i,st−1 =s,v1:t−1 ) p (st , i, s) ← wt−1 (i, s)p∗ (st |i, s)ρt−1 (s) p ˆ end for Collapse the It−1 × S mixture of Gaussians defined by µx|y ,Σx|y , and weights p(i, s|st ) ∝ p (st , i, s) to a Gaussian with It components, p(ht |st , v1:t ) ≈ I ∑itt =1 p(it |st , v1:t )p(ht |st , it , v1:t ). This defines the new means ft (it , st ), covariances Ft (it , st ) and mixture weights wt (it , st ) ≡ p(it |st , v1:t ). Compute ρt (st ) ∝ ∑i,s p (st , i, s) end for normalize ρt ≡ p(st |v1:t ) L ← L + log ∑st ,i,s p (st , i, s) end for The first factor in Equation (6), p(vt+1 |it , st , st+1 , v1:t ), is a Gaussian with mean µv and covariance Σvv , as given in Equation (5). The last two factors p(it |st , v1:t ) and p(st |v1:t ) are given from the previous iteration. Finally, p(st+1 |it , st , v1:t ) is found from p(st+1 |it , st , v1:t ) = p(st+1 |ht , st ) p(ht |it ,st ,v1:t ) (7) where · p denotes expectation with respect to p. In the standard SLDS, Equation (7) is replaced by the Markov transition p(st+1 |st ). In the aSLDS, however, Equation (7) will generally need to be computed numerically. A simple approximation is to evaluate Equation (7) at the mean value of the distribution p(ht |it , st , v1:t ). To take covariance information into account an alternative would be to draw samples from the Gaussian p(ht |it , st , v1:t ) and thus approximate the average of p(st+1 |ht , st ) by sampling.5 C LOSING THE R ECURSION We are now in a position to calculate Equation (4). For each setting of the variable st+1 , we have a mixture of It × S Gaussians. In order to avoid an exponential explosion in the number of mixture 5. Whilst we suggest sampling as part of the aSLDS update procedure, this does not render the Forward Pass as a form of sequential sampling procedure, such as Particle Filtering. The sampling here is a form of exact sampling, for which no convergence issues arise, being used only to numerically evaluate Equation (7). 2519 BARBER components, we numerically collapse this back to It+1 Gaussians to form It+1 p(ht+1 |st+1 , v1:t+1 ) ≈ ∑ it+1 =1 p(ht+1 |it+1 , st+1 , v1:t+1 )p(it+1 |st+1 , v1:t+1 ). Hence the Gaussian components and corresponding mixture weights p(it+1 |st+1 , v1:t+1 ) are defined implicitly through a numerical (Gaussian-Mixture to smaller Gaussian-Mixture) collapse procedure, for which any method of choice may be supplied. A straightforward approach that we use in our code is based on repeatedly merging low-weight components, as explained in Appendix (D). A R ECURSION FOR THE S WITCH VARIABLES A recursion for the switch variables can be found by considering p(st+1 |v1:t+1 ) ∝ ∑ p(it , st , st+1 , vt+1 , v1:t ). it ,st The r.h.s. of the above equation is proportional to ∑ p(vt+1 |it , st , st+1 , v1:t )p(st+1 |it , st , v1:t )p(it |st , v1:t )p(st |v1:t ) st ,it where all terms have been computed during the recursion for p(ht+1 |st+1 , v1:t+1 ). 
T HE L IKELIHOOD p(v1:T ) The likelihood p(v1:T ) may be found by recursing p(v1:t+1 ) = p(vt+1 |v1:t )p(v1:t ), where p(vt+1 |v1:t ) = ∑ it ,st ,st+1 p(vt+1 |it , st , st+1 , v1:t )p(st+1 |it , st , v1:t )p(it |st , v1:t )p(st |v1:t ). In the above expression, all terms have been computed in forming the recursion for the filtered posterior p(ht+1 , st+1 |v1:t+1 ). 2.2 Backward Pass (Smoothing) The main contribution of this paper is to find a suitable way to ‘correct’ the filtered posterior p(st , ht |v1:t ) obtained from the Forward Pass into a smoothed posterior p(st , ht |v1:T ). We initially derive this for the case of a single Gaussian representation—the extension to the mixture case is straightforward and given in Section (2.3). Our derivation holds for both the SLDS and aSLDS. We approximate the smoothed posterior p(ht |st , v1:T ) by a Gaussian with mean g(st ) and covariance G(st ), and our aim is to find a recursion for these parameters. A useful starting point is the exact relation: p(ht , st |v1:T ) = ∑ p(st+1 |v1:T )p(ht |st , st+1 , v1:T )p(st |st+1 , v1:T ). st+1 2520 E XPECTATION C ORRECTION The term p(ht |st , st+1 , v1:T ) may be computed as p(ht |st , st+1 , v1:T ) = Z p(ht , ht+1 |st , st+1 , v1:T ) = Z p(ht |ht+1 , st , st+1 , v1:T )p(ht+1 |st , st+1 , v1:T ) Z p(ht |ht+1 , st , st+1 , v1:t )p(ht+1 |st , st+1 , v1:T ) ht+1 ht+1 = ht+1 (8) which is in the form of a recursion. This recursion therefore requires p(ht+1 |st , st+1 , v1:T ), which we can write as p(ht+1 |st , st+1 , v1:T ) ∝ p(ht+1 |st+1 , v1:T )p(st |st+1 , ht+1 , v1:t ). (9) The above recursions represent the exact computation of the smoothed posterior. In our approximate treatment, we replace all quantities p with their corresponding approximations p. A difficulty ˜ is that the functional form of p(st |st+1 , ht+1 , v1:t ) in the approximation of Equation (9) is not squared ˜ exponential in ht+1 , so that p(ht+1 |st , st+1 , v1:T ) will not be a mixture of Gaussians.6 One possibil˜ ity would be to approximate the non-Gaussian p(ht+1 |st , st+1 , v1:T ) (dropping the p notation) by a ˜ Gaussian (mixture) by minimizing the Kullback-Leilbler divergence between the two, or performing moment matching in the case of a single Gaussian. A simpler alternative is to make the assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ), see Figure (3). This is a considerable simplification since p(ht+1 |st+1 , v1:T ) is already known from the previous backward recursion. Under this assumption, the recursion becomes p(ht , st |v1:T ) ≈ ∑ p(st+1 |v1:T )p(st |st+1 , v1:T ) st+1 p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) . (10) We call the procedure based on Equation (10) Expectation Correction (EC) since it ‘corrects’ the filtered results which themselves are formed from propagating expectations. In Appendix (E) we show how EC is equivalent to a partial Discrete-Continuous factorized approximation. Equation (10) forms the basis of the the EC Backward Pass. However, similar to the ADF Forward Pass, the number of mixture components needed to represent the posterior in this recursion grows exponentially as we go backwards in time. The strategy we take to deal with this is a form of Assumed Density Smoothing, in which Equation (10) is interpreted as a propagated dynamics reversal, which will subsequently be collapsed back to an assumed family of distributions—see Figure (4). How we implement the recursion for the continuous and discrete factors is detailed below.7 6. 
In the exact calculation, p(ht+1 |st , st+1 , v1:T ) is a mixture of Gaussians since p(st |st+1 , ht+1 , v1:t ) = p(st , st+1 , ht+1 , v1:T )/p(st+1 , ht+1 , v1:T ) so that the mixture of Gaussians denominator p(st+1 , ht+1 , v1:T ) cancels with the first term in Equation (9), leaving a mixture of Gaussians. However, since in Equation (9) the two terms p(ht+1 |st+1 , v1:T ) and p(st |st+1 , ht+1 , v1:t ) are replaced by approximations, this cancellation is not guaranteed. 7. Equation (10) has the pleasing form of an RTS Backward Pass for the continuous part (analogous to LDS case), and a discrete smoother (analogous to a smoother recursion for the HMM). In the Forward-Backward algorithm for the HMM (Rabiner, 1989), the posterior γt ≡ p(st |v1:T ) is formed from the product of αt ≡ p(st |v1:t ) and βt ≡ p(vt+1:T |st ). This approach is also analogous to EP (Heskes and Zoeter, 2002). In the correction approach, a direct recursion for γt in terms of γt+1 and αt is formed, without explicitly defining βt . The two approaches to inference are known as α − β and α − γ recursions. 2521 BARBER st−1 st st+1 st+2 ht−1 ht ht+1 ht+2 vt−1 vt vt+1 vt+2 Figure 3: Our Backward Pass approximates p(ht+1 |st+1 , st , v1:T ) by p(ht+1 |st+1 , v1:T ). Motivation for this is that st only influences ht+1 through ht . However, ht will most likely be heavily influenced by v1:t , so that not knowing the state of st is likely to be of secondary importance. The darker shaded node is the variable we wish to find the posterior state of. The lighter shaded nodes are variables in known states, and the hashed node a variable whose state is indeed known but assumed unknown for the approximation. E VALUATING p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) is a Gaussian in ht , whose statistics we will now compute. First we find p(ht |ht+1 , st , st+1 , v1:t ) which may be obtained from the joint distribution p(ht , ht+1 |st , st+1 , v1:t ) = p(ht+1 |ht , st+1 )p(ht |st , v1:t ) (11) which itself can be found using the forward dynamics from the filtered estimate p(ht |st , v1:t ). The statistics for the marginal p(ht |st , st+1 , v1:t ) are simply those of p(ht |st , v1:t ), since st+1 carries no extra information about ht .8 The remaining statistics are the mean of ht+1 , the covariance of ht+1 and cross-variance between ht and ht+1 , ht+1 = A(st+1 ) ft (st ) Σt+1,t+1 = A(st+1 )Ft (st )AT (st+1 ) + Σh (st+1 ), Σt+1,t = A(st+1 )Ft (st ). Given the statistics of Equation (11), we may now condition on ht+1 to find p(ht |ht+1 , st , st+1 , v1:t ). Doing so effectively constitutes a reversal of the dynamics, ← − ← − ht = A (st , st+1 )ht+1 + η (st , st+1 ) ← − ← − ← − − where A (st , st+1 ) and η (st , st+1 ) ∼ N (←(st , st+1 ), Σ (st , st+1 )) are easily found using the condim tioned Gaussian results in Appendix (C)—see also Algorithm (5). Averaging the reversed dynamics we obtain a Gaussian in ht for p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) with statistics ← − ← − ← − ← − − µt = A (st , st+1 )g(st+1 ) + ←(st , st+1 ), Σt,t = A (st , st+1 )G(st+1 ) A T (st , st+1 ) + Σ (st , st+1 ). m These equations directly mirror the RTS Backward Pass, see Algorithm (5). 8. Integrating over ht+1 means that the information from st+1 passing through ht+1 via the term p(ht+1 |st+1 , ht ) vanishes. Also, since st is known, no information from st+1 passes through st to ht . 
2522 E XPECTATION C ORRECTION st st+1 it jt+1 ht ht+1 vt vt+1 Figure 4: Structure of the Backward Pass for mixtures. Given the smoothed information at timestep t + 1, we need to work backwards to ‘correct’ the filtered estimate at time t. E VALUATING p(st |st+1 , v1:T ) The main departure of EC from previous methods is in treating the term p(st |st+1 , v1:T ) = p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) . (12) The term p(st |ht+1 , st+1 , v1:t ) is given by p(st |ht+1 , st+1 , v1:t ) = p(ht+1 |st , st+1 , v1:t )p(st , st+1 |v1:t ) . ∑st p(ht+1 |st , st+1 , v1:t )p(st , st+1 |v1:t ) (13) Here p(st , st+1 |v1:t ) = p(st+1 |st , v1:t )p(st |v1:t ), where p(st+1 |st , v1:t ) occurs in the Forward Pass, Equation (7). In Equation (13), p(ht+1 |st+1 , st , v1:t ) is found by marginalizing Equation (11). Performing the average over p(ht+1 |st+1 , v1:T ) in Equation (12) may be achieved by any numerical integration method desired. Below we outline a crude approximation that is fast and often performs surprisingly well. M EAN A PPROXIMATION A simple approximation of Equation (12) is to evaluate the integrand at the mean value of the averaging distribution. Replacing ht+1 in Equation (13) by its mean gives the simple approximation 1 T −1 1 e− 2 zt+1 (st ,st+1 )Σ (st ,st+1 |v1:t )zt+1 (st ,st+1 ) p(st |st+1 , v1:t ) p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) ≈ Z det Σ(st , st+1 |v1:t ) where zt+1 (st , st+1 ) ≡ ht+1 |st+1 , v1:T − ht+1 |st , st+1 , v1:t and Z ensures normalization over st . This result comes simply from the fact that in Equation (12) we have a Gaussian with a mean ht+1 |st , st+1 , v1:t and covariance Σ(st , st+1 |v1:t ), being the filtered covariance of ht+1 given st , st+1 and the observations v1:t , which may be taken from Σhh in Equation (5). Then evaluating this Gaussian at the specific point ht+1 |st+1 , v1:T , we arrive at the above expression. An alternative to this simple mean approximation is to sample from the Gaussian p(ht+1 |st+1 , v1:T ), which has the potential advantage that covariance information is used. 9 Other methods such as variational 9. This is a form of exact sampling since drawing samples from a Gaussian is easy. This should not be confused with meaning that this use of sampling renders EC a sequential Monte-Carlo sampling scheme. 2523 BARBER Algorithm 2 aSLDS: EC Backward Pass (Single Gaussian case I = J = 1). Approximates p(st |v1:T ) and p(ht |st , v1:T ) ≡ N (gt (st ), Gt (st )). This routine needs the results from Algorithm (1) for I = 1. GT ← FT , gT ← fT , for t ← T − 1 to 1 do for s ← 1 to S, s ← 1 to S do, (µ, Σ)(s, s ) = LDSBACKWARD(gt+1 (s ), Gt+1 (s ), ft (s), Ft (s), θt+1 (s )) p(s|s ) = p(st = s|ht+1 , st+1 = s , v1:t ) p(ht+1 |st+1 =s ,v1:T ) p(s, s |v1:T ) ← p(st+1 = s |v1:T )p(s|s ) end for for st ← 1 to S do Collapse the mixture defined by weights p(st+1 = s |st , v1:T ) ∝ p(st , s |v1:T ), means µ(st , s ) and covariances Σ(st , s ) to a single Gaussian. This defines the new means gt (st ), covariances Gt (st ). p(st |v1:T ) ← ∑s p(st , s |v1:T ) end for end for approximations to this average (Jaakkola and Jordan, 1996) or the unscented transform (Julier and Uhlmann, 1997) may be employed if desired. C LOSING THE R ECURSION We have now computed both the continuous and discrete factors in Equation (10), which we wish to use to write the smoothed estimate in the form p(ht , st |v1:T ) = p(st |v1:T )p(ht |st , v1:T ). 
The distribution p(ht |st , v1:T ) is readily obtained from the joint Equation (10) by conditioning on st to form the mixture p(ht |st , v1:T ) = ∑ p(st+1 |st , v1:T )p(ht |st , st+1 , v1:T ) st+1 which may be collapsed to a single Gaussian (or mixture if desired). As in the Forward Pass, this collapse implicitly defines the Gaussian mean g(st ) and covariance G(st ). The smoothed posterior p(st |v1:T ) is given by p(st |v1:T ) = = ∑ p(st+1 |v1:T )p(st |st+1 , v1:T ) st+1 ∑ p(st+1 |v1:T ) st+1 p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) . (14) The algorithm for the single Gaussian case is presented in Algorithm (2). N UMERICAL S TABILITY Numerical stability is a concern even in the LDS, and the same is to be expected for the aSLDS. Since the LDS recursions LDSFORWARD and LDSBACKWARD are embedded within the EC algorithm, we may immediately take advantage of the large body of work on stabilizing the LDS recursions, such as the Joseph form (Grewal and Andrews, 1993), or the square root forms (Park and Kailath, 1996; Verhaegen and Van Dooren, 1986). 2524 E XPECTATION C ORRECTION R ELAXING EC The conditional independence assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ) is not strictly necessary in EC. We motivate it by computational simplicity, since finding an appropriate moment matching approximation of p(ht+1 |st , st+1 , v1:T ) in Equation (9) requires a relatively expensive nonGaussian integration. If we therefore did treat p(ht+1 |st , st+1 , v1:T ) more correctly, the central assumption in this relaxed version of EC would be a collapse to a mixture of Gaussians (the additional computation of Equation (12) may usually be numerically evaluated to high precision). Whilst we did not do so, implementing this should not give rise to numerical instabilities since no potential divisions are required, merely the estimation of moments. In the experiments presented here, we did not pursue this option, since we believe that the effect of this conditional independence assumption is relatively weak. I NCONSISTENCIES IN THE APPROXIMATION The recursion Equation (8), upon which EC depends, makes use of the Forward Pass results, and a subtle issue arises about possible inconsistencies in the Forward and Backward approximations. For example, under the conditional independence assumption in the Backward Pass, p(hT |sT −1 , sT , v1:T ) ≈ p(hT |sT , v1:T ), which is in contradiction to Equation (5) which states that the approximation to p(hT |sT −1 , sT , v1:T ) will depend on sT −1 . Similar contradictions occur also for the relaxed version of EC. Such potential inconsistencies arise because of the approximations made, and should not be considered as separate approximations in themselves. Furthermore, these inconsistencies will most likely be strongest at the end of the chain, t ≈ T , since only then is Equation (8) in direct contradiction to Equation (5). Such potential inconsistencies arise since EC is not founded on a consistency criterion, unlike EP—see Section (3)—but rather an approximation of the exact recursions. Our experience is that compared to EP, which attempts to ensure consistency based on multiple sweeps through the graph, such inconsistencies are a small price to pay compared to the numerical stability advantages of EC. 2.3 Using Mixtures in the Backward Pass The extension to the mixture case is straightforward, based on the representation Jt p(ht |st , v1:T ) ≈ ∑ p(ht |st , jt , v1:T )p( jt |st , v1:T ). 
jt =1 Analogously to the case with a single component, p(ht , st |v1:T ) = ∑ it , jt+1 ,st+1 p(st+1 |v1:T )p( jt+1 |st+1 , v1:T )p(ht | jt+1 , st+1 , it , st , v1:T ) · p(it , st |ht+1 , jt+1 , st+1 , v1:t ) p(ht+1 | jt+1 ,st+1 ,v1:T ) . The average in the last line of the above equation can be tackled using the same techniques as outlined in the single Gaussian case. To approximate p(ht | jt+1 , st+1 , it , st , v1:T ) we consider this as the marginal of the joint distribution p(ht , ht+1 |it , st , jt+1 , st+1 , v1:T ) = p(ht |ht+1 , it , st , jt+1 , st+1 , v1:t )p(ht+1 |it , st , jt+1 , st+1 , v1:T ). 2525 BARBER Algorithm 3 aSLDS: EC Backward Pass. Approximates p(st |v1:T ) and p(ht |st , v1:T ) ≡ Jt ut ( jt , st )N (gt ( jt , st ), Gt ( jt , st )) using a mixture of Gaussians. JT = IT , Jt ≤ S × It × Jt+1 . This ∑ jt =1 routine needs the results from Algorithm (1). GT ← FT , gT ← fT , uT ← wT (*) for t ← T − 1 to 1 do for s ← 1 to S, s ← 1 to S, i ← 1 to It , j ← 1 to Jt+1 do (µ, Σ)(i, s, j , s ) = LDSBACKWARD(gt+1 ( j , s ), Gt+1 ( j , s ), ft (i, s), Ft (i, s), θt+1 (s )) p(i, s| j , s ) = p(st = s, it = i|ht+1 , st+1 = s , jt+1 = j , v1:t ) p(ht+1 |st+1 =s , jt+1 = j ,v1:T ) p(i, s, j , s |v1:T ) ← p(st+1 = s |v1:T )ut+1 ( j , s )p(i, s| j , s ) end for for st ← 1 to S do Collapse the mixture defined by weights p(it = i, st+1 = s , jt+1 = j |st , v1:T ) ∝ p(i, st , j , s |v1:T ), means µ(it , st , j , s ) and covariances Σ(it , st , j , s ) to a mixture with Jt components. This defines the new means gt ( jt , st ), covariances Gt ( jt , st ) and mixture weights ut ( jt , st ). p(st |v1:T ) ← ∑it , j ,s p(it , st , j , s |v1:T ) end for end for (*) If JT < IT then the initialization is formed by collapsing the Forward Pass results at time T to JT components. As in the case of a single mixture, the problematic term is p(ht+1 |it , st , jt+1 , st+1 , v1:T ). Analogously to before, we may make the assumption p(ht+1 |it , st , jt+1 , st+1 , v1:T ) ≈ p(ht+1 | jt+1 , st+1 , v1:T ) meaning that information about the current switch state st , it is ignored.10 We can then form p(ht |st , v1:T ) = ∑ it , jt+1 ,st+1 p(it , jt+1 , st+1 |st , v1:T )p(ht |it , st , jt+1 , st+1 , v1:T ). This mixture can then be collapsed to smaller mixture using any method of choice, to give Jt p(ht |st , v1:T ) ≈ ∑ p(ht | jt , st , v1:T )p( jt |st , v1:T ) jt =1 The collapse procedure implicitly defines the means g( jt , st ) and covariances G( jt , st ) of the smoothed approximation. A recursion for the switches follows analogously to the single component Backward Pass. The resulting algorithm is presented in Algorithm (3), which includes using mixtures in both Forward and Backward Passes. Note that if JT < IT , an extra initial collapse is required of the IT component Forward Pass Gaussian mixture at time T to JT components. EC has time complexity O(S2 IJK) where S are the number of switch states, I and J are the number of Gaussians used in the Forward and Backward passes, and K is the time to compute the exact Kalman smoother for the system with a single switch state. 10. As in the single component case, in principle, this assumption may be relaxed and a moment matching approximation be performed instead. 2526 E XPECTATION C ORRECTION 3. Relation to Other Methods Approximate inference in the SLDS is a long-standing research topic, generating an extensive literature. See Lerner (2002) and Zoeter (2005) for reviews of previous work. A brief summary of some of the major existing approaches follows. 
Assumed Density Filtering Since the exact filtered estimate p(ht |st , v1:t ) is an (exponentially large) mixture of Gaussians, a useful remedy is to project at each stage of the recursion Equation (3) back to a limited set of K Gaussians. This is a Gaussian Sum Approximation (Alspach and Sorenson, 1972), and is a form of Assumed Density Filtering (ADF) (Minka, 2001). Similarly, Generalized Pseudo Bayes2 (GPB2) (Bar-Shalom and Li, 1998) also performs filtering by collapsing to a mixture of Gaussians. This approach to filtering is also taken in Lerner et al. (2000) which performs the collapse by removing spatially similar Gaussians, thereby retaining diversity. Several smoothing approaches directly use the results from ADF. The most popular is Kim’s method, which updates the filtered posterior weights to form the smoother (Kim, 1994; Kim and Nelson, 1999). In both EC and Kim’s method, the approximation p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ), is used to form a numerically simple Backward Pass. The other approximation in EC is to numerically compute the average in Equation (14). In Kim’s method, however, an update for the discrete variables is formed by replacing the required term in Equation (14) by p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) ≈ p(st |st+1 , v1:t ). (15) This approximation11 decouples the discrete Backward Pass in Kim’s method from the continuous dynamics, since p(st |st+1 , v1:t ) ∝ p(st+1 |st )p(st |v1:t )/p(st+1 |v1:t ) can be computed simply from the filtered results alone (the continuous Backward Pass in Kim’s method, however, does depend on the discrete Backward Pass). The fundamental difference between EC and Kim’s method is that the approximation (15) is not required by EC. The EC Backward Pass therefore makes fuller use of the future information, resulting in a recursion which intimately couples the continuous and discrete variables. The resulting effect on the quality of the approximation can be profound, as we will see in the experiments. Kim’s smoother corresponds to a potentially severe loss of future information and, in general, cannot be expected to improve much on the filtered results from ADF. The more recent work of Lerner et al. (2000) is similar in spirit to Kim’s method, whereby the contribution from the continuous variables is ignored in forming an approximate recursion for the smoothed p(st |v1:T ). The main difference is that for the discrete variables, Kim’s method is based on a correction smoother (Rauch et al., 1965), whereas Lerner’s method uses a Belief Propagation style Backward Pass (Jordan, 1998). Neither method correctly integrates information from the continuous variables. How to form a recursion for a mixture approximation which does not ignore information coming through the continuous hidden variables is a central contribution of our work. Kitagawa (1994) used a two-filter method in which the dynamics of the chain are reversed. Essentially, this corresponds to a Belief Propagation method which defines a Gaussian sum 11. In the HMM this is exact, but in the SLDS the future observations carry information about st . 2527 BARBER EC Mixture Collapsing to Single Mixture Collapsing to Mixture Cond. Indep. p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ) Approx. of p(st |st+1 , v1:T ), average Equation (12) Kim’s Backward Pass Mixture approx. of p(ht+1 |st , st+1 , v1:T ), Equation (9) Relaxed EC x x x x EP x Kim x x x x x Table 1: Relation between methods. 
In the EC methods, the mean approximation may be replaced by an essentially exact Monte Carlo approximation to Equation (12). EP refers to the Single Gaussian approximation in Heskes and Zoeter (2002). In the case of using Relaxed EC with collapse to a single Gaussian, EC and EP are not equivalent, since the underlying recursions on which the two methods are based are fundamentally different. approximation for p(vt+1:T |ht , st ). However, since this is not a density in ht , st , but rather a conditional likelihood, formally one cannot treat this using density propagation methods. In Kitagawa (1994), the singularities resulting from incorrectly treating p(vt+1:T |ht , st ) as a density are heuristically finessed. Expectation Propagation EP (Minka, 2001), as applied to the SLDS, corresponds to an approximate implementation of Belief Propagation12 (Jordan, 1998; Heskes and Zoeter, 2002). EP is the most sophisticated rival to Kim’s method and EC, since it makes the least assumptions. For this reason, we’ll explain briefly how EP works. Unlike EC, which is based on an approximation of the exact filtering and smoothing recursions, EP is based on a consistency criterion. First, let’s simplify the notation, and write the distribution as p = ∏t φ (xt−1 , vt−1 , xt , vt ), where xt ≡ ht ⊗ st , and φ (xt−1 , vt−1 , xt , vt ) ≡ p(xt |xt−1 )p(vt |xt ). EP defines ‘messages’ ρ, λ13 which contain information from past and future observations respectively. 14 Explicitly, we define ρt (xt ) ∝ p(xt |v1:t ) to represent knowledge about xt given all information from time 1 to t. Similarly, λt (xt ) represents knowledge about state xt given all observations from time T to time t + 1. In the sequel, we drop the time suffix for notational clarity. We define λ(xt ) implicitly through the requirement that the marginal smoothed inference is given by p(xt |v1:T ) ∝ ρ (xt ) λ (xt ) . (16) Hence λ (xt ) ∝ p(vt+1:T |xt , v1:t ) = p(vt+1:T |xt ) and represents all future knowledge about p(xt |v1:T ). From this p(xt−1 , xt |v1:T ) ∝ ρ (xt−1 ) φ (xt−1 , vt−1 , xt , vt ) λ (xt ) . (17) 12. Non-parametric belief propagation (Sudderth et al., 2003), which performs approximate inference in general continuous distributions, is also related to EP applied to the aSLDS, in the sense that the messages cannot be represented easily, and are approximated by mixtures of Gaussians. 13. These correspond to the α and β messages in the Hidden Markov Model framework (Rabiner, 1989). 14. In this Belief Propagation/EP viewpoint, the backward messages, traditionally labeled as β, correspond to conditional likelihoods, and not distributions. In contrast, in the EC approach, which is effectively a so-called α − γ recursion, the backward γ messages correspond to posterior distributions. 2528 E XPECTATION C ORRECTION Taking the above equation as a starting point, we have p(xt |v1:T ) ∝ Z xt−1 ρ (xt−1 ) φ (xt−1 , vt−1 , xt , vt ) λ (xt ) . Consistency with Equation (16) requires (neglecting irrelevant scalings) ρ (xt ) λ (xt ) ∝ Z xt−1 ρ (xt−1 ) φ (xt−1 , vt−1 , xt , vt ) λ (xt ) . Similarly, we can integrate Equation (17) over xt to get the marginal at time xt−1 which, by consistency, should be proportional to ρ (xt−1 ) λ (xt−1 ). Hence ρ (xt ) ∝ xt−1 ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) R λ (xt ) , λ (xt−1 ) ∝ R xt ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) ρ (xt−1 ) (18) where the divisions can be interpreted as preventing over-counting of messages. In an exact implementation, the common factors in the numerator and denominator cancel. 
EP addresses the fact that λ(xt ) is not a distribution by using Equation (18) R form the projection (or to R ‘collapse’). In the numerator, xt−1 ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) and xt ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) represent p(xt |v1:T ) and p(xt−1 |v1:T ). Since these are distributions (an indexed mixture of Gaussians in the SLDS), they may be projected/collapsed to a single indexed Gaussian. The update for the ρ message is then found from division by the λ potential, and vice versa. In EP the explicit division of potentials only makes sense for members of the exponential family. More complex methods could be envisaged in which, rather than an explicit division, the new messages are defined by minimizing some measure of divergence between R ρ(xt )λ(xt ) and xt−1 ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ), such as the Kullback-Leibler divergence. In this way, non-exponential family approximations (such as mixtures of Gaussians) may be considered. Whilst this is certainly feasible, it is somewhat unattractive computationally since this would require for each time-step an expensive minimization. For the single Gaussian case, in order to perform the division, the potentials in the numerator and denominator are converted to their canonical representations. To form the ρ update, the result of the division is then reconverted back to a moment representation. The resulting recursions, due to the approximation, are no longer independent and Heskes and Zoeter (2002) show that using more than a single Forward and Backward sweep often improves on the quality of the approximation. This coupling is a departure from the exact recursions, which should remain independent. Applied to the SLDS, EP suffers from severe numerical instabilities (Heskes and Zoeter, 2002) and finding a way to minimize the corresponding EP free energy in an efficient, robust and guaranteed way remains an open problem. Our experience is that current implementations of EP are unsuitable for large scale time-series applications. Damping the parameter updates is one suggested approach to heuristically improve convergence. The source of these numerical instabilities is not well understood since, even in cases when the posterior appears uni-modal, the method is problematic. The frequent conversions between moment and canonical parameterizations of Gaussians are most likely at the root of the difficulties. An interesting comparison here is between Lauritzen’s original method for exact computation on conditional Gaussian distributions (for which the SLDS is a special case) Lauritzen (1992), 2529 BARBER which is numerically unstable due to conversion between moment and canonical representations, and Lauritzen and Jensen (2001), which improves stability by avoiding using canonical parameterizations. Variational Methods Ghahramani and Hinton (1998) used a variational method which approximates the joint distribution p(h1:T , s1:T |v1:T ) rather than the marginal p(ht , st |v1:T )—related work is presented in Lee et al. (2004). This is a disadvantage when compared to other methods that directly approximate the marginal. The variational methods are nevertheless potentially attractive since they are able to exploit structural properties of the distribution, such as a factored discrete state-transition. In this article, we concentrate on the case of a small number of states S and hence will not consider variational methods further here. 
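As an aside, the moment/canonical conversions identified above as a likely source of EP's numerical difficulties are simple to state. The sketch below (Python with NumPy; the function names are mine and nothing here is taken from an existing EP implementation) shows the two parameterizations of a Gaussian potential and the division of one potential by another used in the single-Gaussian EP update; it simply assumes that the resulting precision matrix stays positive definite, which is exactly the step that can fail in practice.

import numpy as np

def moment_to_canonical(mu, Sigma):
    # canonical form: precision Lambda = Sigma^-1 and shift eta = Sigma^-1 mu
    Lam = np.linalg.inv(Sigma)
    return Lam, Lam @ mu

def canonical_to_moment(Lam, eta):
    Sigma = np.linalg.inv(Lam)
    return Sigma @ eta, Sigma

def divide_gaussians(mu_num, Sig_num, mu_den, Sig_den):
    # divide N(mu_num, Sig_num) by N(mu_den, Sig_den); precisions and shifts subtract
    Lam_n, eta_n = moment_to_canonical(mu_num, Sig_num)
    Lam_d, eta_d = moment_to_canonical(mu_den, Sig_den)
    # the result is a proper Gaussian only if Lam_n - Lam_d is positive definite
    return canonical_to_moment(Lam_n - Lam_d, eta_n - eta_d)

mu, Sig = divide_gaussians(np.zeros(2), np.eye(2), np.ones(2), 3.0 * np.eye(2))

Repeatedly cycling through such conversions, as EP must, is where round-off can accumulate.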
Sequential Monte Carlo (Particle Filtering) These methods form an approximate implementation of Equation (3), using a sum of delta functions to represent the posterior—see, for example, Doucet et al. (2001). Whilst potentially powerful, these non-analytic methods typically suffer in high-dimensional hidden spaces since they are often based on naive importance sampling, which restricts their practical use. ADF is generally preferable to Particle Filtering, since in ADF the approximation is a mixture of non-trivial distributions, which is better at capturing the variability of the posterior. Rao-Blackwellized Particle Filters (Doucet et al., 2000) are an attempt to alleviate the difficulty of sampling in high-dimensional state spaces by explicitly integrating over the continuous state.

Non-Sequential Monte Carlo For fixed switches s1:T , p(v1:T |s1:T ) is easily computable since this is just the likelihood of an LDS. This observation raises the possibility of sampling from the posterior p(s1:T |v1:T ) ∝ p(v1:T |s1:T )p(s1:T ) directly. Many possible sampling methods could be applied in this case, and the most immediate is Gibbs sampling, in which a sample for each t is drawn from p(st |s\t , v1:T )—see Neal (1993) for a general reference and Carter and Kohn (1996) for an application to the SLDS. This procedure may work well in practice provided that the initial setting of s1:T is in a region of high probability mass—otherwise, sampling by such individual coordinate updates may be extremely inefficient.

4. Experiments Our experiments examine the stability and accuracy of EC against several other methods on long time-series. In addition, we will compare the absolute accuracy of EC as a function of the number of mixture components on a short time-series, where exact inference may be explicitly evaluated. Testing EC in a problem with a reasonably long temporal sequence, T , is important since numerical stabilities may not be apparent in time-series of just a few time-steps. To do this, we sequentially generate hidden states ht , st and observations vt from a given model. Then, given only the parameters of the model and the observations (but not any of the hidden states), the task is to infer p(ht |st , v1:T ) and p(st |v1:T ). Since the exact computation is exponential in T , a formally exact evaluation of the method is infeasible. A simple alternative is to assume that the original sample states s1:T are the 'correct' inferred states, and compare our most probable posterior smoothed estimates arg maxst p(st |v1:T ) with the assumed correct sample st.16

15. Lerner (2002) discusses an approach in the case of a large structured discrete state transition. Related ideas could also be used in EC.

Figure 5: SLDS: Throughout, S = 2, V = 1 (scalar observations), T = 100, with zero output bias. A(s) = 0.9999 ∗ orth(randn(H, H)), B(s) = randn(V, H), v̄t ≡ 0, h̄1 = 10 ∗ randn(H, 1), h̄t>1 = 0, Σh1 = IH , p1 = uniform. The figures show typical examples for each of the two problems: (a) Easy problem. H = 3, Σh (s) = IH , Σv (s) = 0.1IV , p(st+1 |st ) ∝ 1S×S + IS . (b) Hard problem. H = 30, Σv (s) = 30IV , Σh (s) = 0.01IH , p(st+1 |st ) ∝ 1S×S .
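To make the experimental setup of Figure 5 reproducible in outline, the following sketch (Python/NumPy; my own code, not the authors') draws data from the 'easy' and 'hard' SLDS configurations. A QR factorization stands in for MATLAB's orth(randn(H, H)), and choices such as the random seed are illustrative assumptions.

import numpy as np

def make_slds(H, sig_h, sig_v, trans, S=2, V=1, rng=None):
    # SLDS parameters following the description in the Figure 5 caption
    rng = np.random.default_rng(0) if rng is None else rng
    A = [0.9999 * np.linalg.qr(rng.standard_normal((H, H)))[0] for _ in range(S)]
    B = [rng.standard_normal((V, H)) for _ in range(S)]
    P = trans / trans.sum(axis=0, keepdims=True)   # column j holds p(s_{t+1} | s_t = j)
    return A, B, sig_h, sig_v, P

def sample_slds(params, T=100, rng=None):
    rng = np.random.default_rng(1) if rng is None else rng
    A, B, sig_h, sig_v, P = params
    S, H, V = len(A), A[0].shape[0], B[0].shape[0]
    s = np.zeros(T, dtype=int)
    h = np.zeros((T, H))
    v = np.zeros((T, V))
    s[0] = rng.integers(S)                                        # p_1 uniform
    h[0] = 10 * rng.standard_normal(H) + rng.standard_normal(H)   # prior mean 10*randn, unit covariance
    v[0] = B[s[0]] @ h[0] + np.sqrt(sig_v) * rng.standard_normal(V)
    for t in range(1, T):
        s[t] = rng.choice(S, p=P[:, s[t - 1]])
        h[t] = A[s[t]] @ h[t - 1] + np.sqrt(sig_h) * rng.standard_normal(H)
        v[t] = B[s[t]] @ h[t] + np.sqrt(sig_v) * rng.standard_normal(V)
    return s, h, v

S = 2
easy = make_slds(H=3, sig_h=1.0, sig_v=0.1, trans=np.ones((S, S)) + np.eye(S))
hard = make_slds(H=30, sig_h=0.01, sig_v=30.0, trans=np.ones((S, S)))
s, h, v = sample_slds(easy)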
Figure 6: SLDS 'Easy' problem: The number of errors in estimating a binary switch p(st |v1:T ) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments, one histogram per method. The histograms have been cut off at 20 errors in order to improve visualization. (PF) Particle Filter. (RBPF) Rao-Blackwellized PF. (EP) Expectation Propagation. (ADFS) Assumed Density Filtering using a Single Gaussian. (KimS) Kim's smoother using the results from ADFS. (ECS) Expectation Correction using a Single Gaussian (I = J = 1). (ADFM) ADF using a mixture of I = 4 Gaussians. (KimM) Kim's smoother using the results from ADFM. (ECM) Expectation Correction using a mixture with I = J = 4 components. In Gibbs sampling, we use the initialization from ADFM.

We look at two sets of experiments, one for the SLDS and one for the aSLDS. In both cases, scalar observations are used so that the complexity of the inference problem can be visually assessed.

16. We could also consider performance measures on the accuracy of p(ht |st , v1:T ). However, we prefer to look at approximating arg maxst p(st |v1:T ) since the sampled discrete states are likely to correspond to the exact arg maxst p(st |v1:T ). In addition, if the posterior switch distribution is dominated by a single state s∗1:T , then provided they are correctly estimated, the model reduces to an LDS, for which inference of the continuous hidden state is trivial.

Figure 7: SLDS 'Hard' problem: The number of errors in estimating a binary switch p(st |v1:T ) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments, one histogram per method (PF, RBPF, EP, ADFS, KimS, ECS, ADFM, KimM, ECM, Gibbs).

SLDS EXPERIMENTS We chose experimental conditions that, from the viewpoint of classical signal processing, are difficult, with changes in the switches occurring at a much higher rate than the typical frequencies in the signal. We consider two different toy SLDS experiments: the 'easy' problem corresponds to a low hidden dimension, H = 3, with low observation noise; the 'hard' problem corresponds to a high hidden dimension, H = 30, and high observation noise. See Figure (5) for details of the experimental setup. We compared methods using a single Gaussian, and methods using multiple Gaussians, see Figure (6) and Figure (7). For EC we use the mean approximation for the numerical integration of Equation (12). For the Particle Filter 1000 particles were used, with Kitagawa re-sampling (Kitagawa, 1996). For the Rao-Blackwellized Particle Filter (Doucet et al., 2000), 500 particles were used, with Kitagawa re-sampling. We included the Particle Filter merely for a point of comparison with ADF, since these filters are not designed to approximate the smoothed estimate.
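The Gibbs comparison described next repeatedly needs p(v1:T |s1:T ), which for a fixed switch path is just the likelihood of a (time-varying) LDS. A minimal Kalman-filter sketch of that computation is given below (Python/NumPy; my own illustrative code, assuming zero output and transition biases and isotropic noise as in the toy experiments, not the paper's implementation).

import numpy as np

def lds_loglik(v, s, A, B, sig_h, sig_v, mu1, Sig1):
    # log p(v_{1:T} | s_{1:T}) for an LDS whose parameters switch according to the fixed path s;
    # v: (T, V) observations, s: (T,) switch indices, A[k]: (H, H), B[k]: (V, H)
    T, V = v.shape
    H = A[0].shape[0]
    f, F = mu1.copy(), Sig1.copy()            # prior p(h_1)
    L = 0.0
    for t in range(T):
        k = s[t]
        if t > 0:                             # predict p(h_t | v_{1:t-1})
            f = A[k] @ f
            F = A[k] @ F @ A[k].T + sig_h * np.eye(H)
        mu_v = B[k] @ f                       # predictive p(v_t | v_{1:t-1})
        S_vv = B[k] @ F @ B[k].T + sig_v * np.eye(V)
        S_vh = B[k] @ F
        r = v[t] - mu_v
        _, logdet = np.linalg.slogdet(2 * np.pi * S_vv)
        L += -0.5 * (r @ np.linalg.solve(S_vv, r) + logdet)
        K = np.linalg.solve(S_vv, S_vh)       # condition on v_t to get p(h_t | v_{1:t})
        f = f + K.T @ r
        F = F - S_vh.T @ K
    return L

A Gibbs sweep then revisits each t, evaluates this log-likelihood with st set to each candidate state (all other switches held fixed), and samples st from the resulting weights multiplied by the switch prior, which is why the procedure described next scales with GT^2.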
An alternative MCMC procedure is to perform Gibbs sampling of p(s1:T |v1:T ) using p(st |s\t , v1:T ) ∝ p(v1:T |s1:T )p(s1:T ), where p(v1:T |s1:T ) is simply the likelihood of an LDS—see for example Carter and Kohn (1996).17 We initialized the state s1:T by using the most likely states st from the filtered results using a Gaussian mixture (ADFM), and then swept forwards in time, sampling from p(st |s\t , v1:T ) until the end of the chain. We then reversed direction, sampling from time T back to time 1, and continued repeating this procedure 100 times, with the mean over the last 80 sweeps used as the posterior mean approximation. This procedure is expensive since each sample requires computing the likelihood of an LDS defined on the whole time-series. The procedure therefore scales with GT^2, where G is the number of sweeps over the time series. Despite using a reasonable initialization, Gibbs sampling struggles to improve on the filtered results.

We found that EP was numerically unstable and often struggled to converge. To encourage convergence, we used the damping method in Heskes and Zoeter (2002), performing 20 iterations with a damping factor of 0.5. The disappointing performance of EP is most likely due to conflicts resulting from numerical instabilities introduced by the frequent conversions between moment and canonical representations.

17. Carter and Kohn (1996) proposed an overly complex procedure for computing the likelihood p(v1:T |s1:T ). This is simply the likelihood of an LDS (since s1:T are assumed known), and is readily computable using any of the standard procedures in the literature.

Figure 8: aSLDS: Histogram of the number of errors in estimating a binary switch p(st |v1:T ) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments; the panels correspond, from left to right, to PF, ADFS, ECS, ADFM and ECM. Augmented SLDS results. ADFM used I = 4 Gaussians, and ECM used I = J = 4 Gaussians. We used 1000 samples to approximate Equation (12).

I    J    error
1    1    0.0989
4    1    0.0624
4    4    0.0365
16   1    0.0440
16   16   0.0130
64   1    0.0440
64   64   4.75e-4
256  1    0.0440
256  256  3.40e-8

Table 2: Errors in approximating the states for the multi-path problem, see Figure (9). The mean absolute deviation |pec (st |v1:T ) − pexact (st |v1:T )| averaged over the S = 4 states of st and over the times t = 1, . . . , 5, computed for different numbers of mixture components in EC. The mean approximation of Equation (12) is used. The exact computation uses S^(T−1) = 256 mixtures.

The various algorithms differ widely in performance, see Figures (6,7). Not surprisingly, the best filtered results are given using ADF, since this is better able to represent the variance in the filtered posterior than the sampling methods. Unlike Kim's method, EC makes good use of the future information to clean up the filtered results considerably. One should bear in mind that EC, Kim's method and the Gibbs initialization all use the same ADF results. These results show that EC may dramatically improve on Kim's method, so that the small amount of extra work in making a numerical approximation of p(st |st+1 , v1:T ), Equation (12), may bring significant benefits.

AUGMENTED SLDS EXPERIMENTS In Figure (8), we chose a simple two-state (S = 2) transition distribution p(st+1 = 1|st , ht ) = σ(htT w(st )), where σ(x) ≡ 1/(1 + e−x ).
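A small sketch of this hidden-state-dependent switch (Python/NumPy; my own illustrative code) samples a switch path given a hidden trajectory using the logistic transition just defined; the particular weight vectors w(s) used here follow the construction described in the next paragraph and are otherwise placeholders.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_switch_path(h, w, rng):
    # sample s_{1:T} in {0, 1} with p(s_{t+1} = 1 | s_t, h_t) = sigmoid(h_t^T w(s_t))
    T = h.shape[0]
    s = np.zeros(T, dtype=int)
    s[0] = rng.integers(2)                    # uniform initial switch
    for t in range(1, T):
        p1 = sigmoid(h[t - 1] @ w[s[t - 1]])
        s[t] = int(rng.random() < p1)
    return s

rng = np.random.default_rng(0)
H = 30
w = [np.zeros(H), np.zeros(H)]
for k in range(2):
    w[k][:2] = 5.0 * rng.standard_normal(2)   # non-zero only in the first two components
h = rng.standard_normal((100, H))             # stand-in hidden trajectory
s = sample_switch_path(h, w, rng)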
Some care needs to be taken to construct a model for which even exact inference would produce posterior switches close to the sampled switches. If the switch variable st+1 changes wildly (which is possible given the above formula, since a change in the hidden state h may produce a large change in the projection htT w(st )), essentially no information is left in the signal for any inference method to produce reasonable results. We therefore set w(st ) to a zero vector except for the first two components, which are independently sampled from a zero mean Gaussian with standard deviation 5. For each of the two switch states, s, we have a transition matrix A(s), which we set to be block diagonal. The first 2 × 2 block is set to 0.9999Rθ , where Rθ is a 2 × 2 rotation matrix with angle θ chosen uniformly from 0 to 1 radians. This means that st+1 is dependent on the first two components of ht , which are rotating at a restricted rate. The remaining (H − 2) × (H − 2) block of A(s) is chosen as (using MATLAB notation) 0.9999 ∗ orth(rand(H − 2)), that is, a scaled, randomly chosen orthogonal matrix. Throughout, S = 2, V = 1, H = 30, T = 100, with zero output bias. Using partly MATLAB notation, B(s) = randn(V, H), v̄t ≡ 0, h̄1 = 10 ∗ randn(H, 1), h̄t>1 = 0, Σh1 = IH , p1 = uniform, Σv = 30IV , Σh = 0.1IH .

Figure 9: (a) The multi-path problem. The particle starts from (0, 0) at time t = 1. Subsequently, at each time-point, either the vector (10, 10) (corresponding to states s = 1 and s = 3) or (−10, 10) (corresponding to states s = 2 and s = 4), is added to the hidden dynamics, perturbed by a small amount of noise, Σh = 0.1. The observations are v = h + ηv (s). For states s = 1, 2 the observation noise is small, Σv = 0.1I, but for s = 3, 4 the noise in the horizontal direction has variance 1000. The visible observations are given by the 'x'. The true hidden states are given by '+'. (b) The exact smoothed state posteriors pexact (st |v1:T ) computed by enumerating all paths (given by the dashed lines).

We compare EC only against Particle Filters using 1000 particles, since other methods would require specialized and novel implementations. In ADFM, I = 4 Gaussians were used, and for ECM, I = J = 4 Gaussians were used. Looking at the results in Figure (8), we see that EC performs well, with some improvement in using the mixture representation I = J = 4 over a single Gaussian I = J = 1. The Particle Filter most likely failed since the hidden dimension is too high to be explored well with only 1000 particles.

EFFECT OF USING MIXTURES Our claim is that EC should cope in situations where the smoothed posterior p(ht |st , v1:T ) is multimodal and, consequently, cannot be well represented by a single Gaussian.18

18. This should not be confused with the multi-modality of p(ht |v1:T ) = ∑st p(ht |st , v1:T )p(st |v1:T ).

We therefore constructed an SLDS which exhibits multi-modality to see the effect of using EC with both I and J greater than 1. The 'multi-path' scenario is described in Figure (9), where a particle traces a path through a two-dimensional space. A small number of time-steps was chosen so that the exact p(st |v1:T ) can be computed by direct enumeration. The observation of the particle is at times extremely noisy in the horizontal direction. This induces multi-modality of p(ht |st , v1:T ) since there
2534 E XPECTATION C ORRECTION are several paths that might plausibly have been taken to give rise to the observations. The accuracy with which EC predicts the exact smoothed posterior is given in Table (2). For this problem we see that both the number of Forward (I) and Backward components (J) affects the accuracy of the approximation, generally with improved accuracy as the number of mixture components increases. For a ‘perfect’ approximation method, one would expect that when I = J = S T −1 = 256, then the approximation should become exact. The small error for this case in Table (2) may arise for several reasons: the extra independence assumption used in EC, or the simple mean approximation used to compute Equation (12), or numerical roundoff. However, at least in this case, the effect of these assumptions on the performance is very small. 5. Discussion Expectation Correction is a novel form of Backward Pass which makes less approximations than the widely used approach from Kim (1994). In Kim’s method, potentially important future information channeled through the continuous hidden variables is lost. EC, along with Kim’s method, makes the additional assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ). However, our experience is that this assumption is rather mild, since the state of ht+1 will be most heavily influenced by its immediate parent st+1 . Our approximation is based on the idea that, although exact inference will consist of an exponentially large number of mixture components, due to the forgetting which commonly occurs in Markovian models, a finite number of mixture components may provide a reasonable approximation. In tracking situations where the visible information is (temporarily) not enough to specify accurately the hidden state, then representing the posterior p(ht |st , v1:T ) using a mixture of Gaussians may improve results significantly. Clearly, in systems with very long correlation times our method may require too many mixture components to produce a satisfactory result, although we are unaware of other techniques that would be able to cope well in that case. We hope that the straightforward ideas presented here may help facilitate the practical application of dynamic hybrid networks to machine learning and related areas. Whilst models with Gaussian emission distributions such as the SLDS are widespread, the extension of this method to non-Gaussian emissions p(vt |ht , st ) would clearly be of considerable interest. Software for Expectation Correction for this augmented class of Switching Linear Gaussian models is available from www.idiap.ch/∼barber. Acknowledgments I would like to thank Onno Zoeter and Tom Heskes for kindly providing their Expectation Propagation code, Silvia Chiappa for helpful discussions, and Bertrand Mesot for many discussions, help with the simulations and for suggesting the relationship between the partial factorization and independence viewpoints of EC. I would also like to thank the reviewers for their many helpful comments and suggestions. 2535 BARBER Algorithm 4 LDS Forward Pass. Compute the filtered posteriors p(ht |v1:t ) ≡ N ( ft , Ft ) for ¯ ¯ a LDS with parameters θt = A, B, Σh , Σv , h, v, for t > 1. At time t = 1, we use parameters v , µ, v, where Σ and µ are the prior covariance and mean of h. The log-likelihood θ1 = A, B, Σ, Σ ¯ L = log p(v1:T ) is also returned. 
F0 ← 0, f0 ← 0, L ← 0 for t ← 1, T do { ft , Ft , pt } = LDSFORWARD( ft−1 , Ft−1 , vt ; θt ) L ← L + log pt end for function LDSFORWARD( f , F, v; θ) Compute joint p(ht , vt |v1:t−1 ): ¯ µh ← A f + h, µv ← Bµh + v ¯ T Σhh ← AFA + Σh , Σvv ← BΣhh BT + Σv , Σvh ← BΣhh Find p(ht |v1:t ) by conditioning: f ← µh + ΣT Σ−1 (v − µv ), F ← Σhh − ΣT Σ−1 Σvh vh vv vh vv Compute p(vt |v1:t−1 ): √ 1 p ← exp − 2 (v − µv )T Σ−1 (v − µv ) / det 2πΣvv vv return f , F , p end function Appendix A. Inference in the LDS The LDS is defined by Equations (1,2) in the case of a single switch S = 1. The LDS Forward and Backward passes define the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS. F ORWARD PASS (F ILTERING ) The filtered posterior p(ht |v1:t ) is a Gaussian which we parameterize with mean f t and covariance Ft . These parameters can be updated recursively using p(ht |v1:t ) ∝ p(ht , vt |v1:t−1 ), where the joint distribution p(ht , vt |v1:t−1 ) has statistics (see Appendix (B)) ¯ µh = A ft−1 + h, µv = Bµh + v ¯ Σhh = AFt−1 AT + Σh , Σvv = BΣhh BT + Σv , Σvh = BΣhh . We may then find p(ht |v1:t ) by conditioning p(ht , vt |v1:t−1 ) on vt , see Appendix (C). This gives rise to Algorithm (4). BACKWARD PASS The smoothed posterior p(ht |v1:T ) ≡ N (gt , Gt ) can be computed recursively using: p(ht |v1:T ) = Z ht+1 p(ht |ht+1 , v1:T )p(ht+1 |v1:T ) = Z ht+1 p(ht |ht+1 , v1:t )p(ht+1 |v1:T ) where p(ht |ht+1 , v1:t ) may be obtained from the joint distribution p(ht , ht+1 |v1:t ) = p(ht+1 |ht )p(ht |v1:t ) (19) 2536 E XPECTATION C ORRECTION Algorithm 5 LDS Backward Pass. Compute the smoothed posteriors p(ht |v1:T ). This requires the filtered results from Algorithm (4). GT ← FT , gT ← fT for t ← T − 1, 1 do {gt , Gt } = LDSBACKWARD(gt+1 , Gt+1 , ft , Ft ; θt+1 ) end for function LDSBACKWARD(g, G, f , F; θ) ¯ Σh h ← AF µh ← A f + h, Σh h ← AFAT + Σh , ← − − ← − ← ← f − ←µ − −1 T Σ ← Ft − Σh h Σh h Σh h , A ← ΣT h Σ−1 , m A h h hh ← − ← ← − − ← − − g ← A g + ←, m G ← AGAT+ Σ return g , G end function which itself can be obtained by forward propagation from p(ht |v1:t ). Conditioning Equation (19) to find p(ht |ht+1 , v1:t ) effectively reverses the dynamics, ← − ← − ht = At ht+1 + ηt ← − − ← − −← where At and η t ∼ N (←, Σt ) are found using the conditioned Gaussian results in Appendix (C)— mt these are explicitly given in Algorithm (5). Then averaging the reversed dynamics over p(h t+1 |v1:T ) we find that p(ht |v1:T ) is a Gaussian with statistics ← − ← − ← − ← − − gt = At gt+1 + ←, Gt = At Gt+1 At T + Σt . mt This Backward Pass is given in Algorithm (5). For parameter learning of the A matrix, the smoothed ← − T T statistic ht ht+1 is required. Using the above formulation, this is given by At Gt+1 + ht ht+1 . This is much simpler than the standard expressions cited in Shumway and Stoffer (2000) and Roweis and Ghahramani (1999). Appendix B. Gaussian Propagation Let y be linearly related to x through y = Mx + η, where η ∼ N (µ, Σ), and x ∼ N (µ x , Σx ). Then R p(y) = x p(y|x)p(x) is a Gaussian with mean Mµx + µ and covariance MΣx M T + Σ. Appendix C. Gaussian Conditioning For a joint Gaussian distribution over the vectors x and y with means µ x , µy and covariance elements Σxx ,Σxy ,Σyy , the conditional p(x|y) is a Gaussian with mean µx + Σxy Σ−1 (y − µy ) and covariance yy Σxx − Σxy Σ−1 Σyx . yy Appendix D. 
Collapsing Gaussians The user may provide any algorithm of their choice for collapsing a set of Gaussians to a smaller set of Gaussians (Titterington et al., 1985). Here, to be explicit, we present a simple one which is fast, but has the disadvantage that no spatial information about the mixture is used. 2537 BARBER First, we describe how to collapse a mixture to a single Gaussian: We may collapse a mixture of Gaussians p(x) = ∑i pi N (x|µi , Σi ) to a single Gaussian with mean ∑i pi µi and covariance ∑i pi Σi + µi µT − µµT . i To collapse a mixture to a K-component mixture we retain the K − 1 Gaussians with the largest mixture weights—the remaining N − K Gaussians are simply merged to a single Gaussian using the above method. The alternative of recursively merging the two Gaussians with the lowest mixture weights gave similar experimental performance. More sophisticated methods which retain some spatial information would clearly be potentially useful. The method presented in Lerner et al. (2000) is a suitable approach which considers removing Gaussians which are spatially similar (and not just low-weight components), thereby retaining diversity over the possible solutions. Appendix E. The Discrete-Continuous Factorization Viewpoint An alternative viewpoint is to proceed analogously to the Rauch-Tung-Striebel correction method for the LDS (Grewal and Andrews, 1993): p(ht , st |v1:T ) = = ∑ Z st+1 ht+1 p(st , ht , ht+1 , st+1 |v1:T ) ∑ p(st+1 |v1:T ) st+1 Z ht+1 p(ht , st |ht+1 , st+1 , v1:t )p(ht+1 |st+1 , v1:T ) st+1 = ≈ ∑ p(st+1 |v1:T ) p(ht |ht+1 , st+1 , st , v1:t )p(st |ht+1 , st+1 , v1:t ) ∑ p(st+1 |v1:T ) p(ht |ht+1 , st+1 , st , v1:t ) p(st |st+1 , v1:T ) st+1 (20) p(st |st+1 ,v1:T ) where angled brackets · denote averages with respect to p(ht+1 |st+1 , v1:T ). Whilst the factorized approximation in Equation (20) may seem severe, by comparing Equations (20) and (10) we see that it is equivalent to the apparently milder assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ). Hence this factorized approximation is equivalent to the ‘standard’ EC approach in which the dependency on st is dropped. References D. L. Alspach and H. W. Sorenson. Nonlinear bayesian estimation using gaussian sum approximations. IEEE Transactions on Automatic Control, 17(4):439–448, 1972. Y. Bar-Shalom and Xiao-Rong Li. Estimation and Tracking : Principles, Techniques and Software. Artech House, Norwood, MA, 1998. X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence—UAI 1998, pages 33–42. Morgan Kaufmann, 1998. C. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996. 2538 E XPECTATION C ORRECTION A. T. Cemgil, B. Kappen, and D. Barber. A Generative Model for Music Transcription. IEEE Transactions on Audio, Speech and Language Processing, 14(2):679 – 694, 2006. S. Chib and M. Dueker. Non-Markovian regime switching with endogenous states and time-varying state strengths. Econometric Society 2004 North American Summer Meetings 600, 2004. A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. Uncertainty in Artificial Intelligence, 2000. A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001. Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963–996, 1998. M. S. 
Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice. Prentice-Hall, 1993. T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence, pages 216–223, 2002. T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In Artificial Intelligence and Statistics, 1996. M. I. Jordan. Learning in Graphical Models. MIT Press, 1998. S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Int. Symp. Aerospace/Defense Sensing, Simul. and Controls, Orlando, FL, 1997. C-J. Kim. Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1–22, 1994. C-J. Kim and C. R. Nelson. State-Space Models with Regime Switching. MIT Press, 1999. G. Kitagawa. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Annals of the Institute of Statistical Mathematics, 46(4):605–623, 1994. G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996. S. Lauritzen and F. Jensen. Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191–203, 2001. S. L. Lauritzen. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87(420):1098–1108, 1992. L. J. Lee, H. Attias, Li Deng, and P. Fieguth. A multimodal variational approach to learning and inference in switching state space models. In IEEE International Conference on Acoustics, Speech, and Signal Processing, (ICASSP 04), volume 5, pages 505–8, 2004. U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault detection and diagnosis in dynamic systems. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AIII00), pages 531–537, 2000. 2539 BARBER U. N. Lerner. Hybrid Bayesian Networks for Reasoning about Complex Systems. PhD thesis, Stanford University, 2002. B. Mesot and D. Barber. Switching linear dynamical systems for noise robust speech recognition. IDIAP-RR 08, 2006. T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT Media Lab, 2001. R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 1993. P. Park and T. Kailath. New square-root smoothing algorithms. IEEE Transactions on Automatic Control, 41:727–732, 1996. V. Pavlovic, J. M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In Advances in Neural Information Processing systems (NIPS 13), pages 981–987, 2001. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, 77(2):257–286, 1989. H. E. Rauch, G. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic systems. American Institute of Aeronautics and Astronautics Journal (AIAAJ), 3(8):1445–1450, 1965. S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999. R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, 2000. E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 605–612, 2003. D. M. Titterington, A. F. M. 
Smith, and U. E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985. H. Tong. Nonlinear Time Series Analysis: A Dynamical Systems Approach. Oxford Univ. Press, 1990. M. Verhaegen and P. Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions on Automatic Control, 31(10):907–917, 1986. M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1999. O. Zoeter. Monitoring Non-Linear and Switching Dynamical Systems. PhD thesis, Radboud University Nijmegen, 2005.

4 0.11861616 64 jmlr-2006-Noisy-OR Component Analysis and its Application to Link Analysis

Author: Tomáš Šingliar, Miloš Hauskrecht

Abstract: We develop a new component analysis framework, the Noisy-Or Component Analyzer (NOCA), that targets high-dimensional binary data. NOCA is a probabilistic latent variable model that assumes the expression of observed high-dimensional binary data is driven by a small number of hidden binary sources combined via noisy-or units. The component analysis procedure is equivalent to learning of NOCA parameters. Since the classical EM formulation of the NOCA learning problem is intractable, we develop its variational approximation. We test the NOCA framework on two problems: (1) a synthetic image-decomposition problem and (2) a co-citation data analysis problem for thousands of CiteSeer documents. We demonstrate good performance of the new model on both problems. In addition, we contrast the model to two mixture-based latent-factor models: the probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). Differing assumptions underlying these models cause them to discover different types of structure in co-citation data, thus illustrating the benefit of NOCA in building our understanding of high-dimensional data sets. Keywords: component analysis, vector quantization, variational learning, link analysis

5 0.096606791 28 jmlr-2006-Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

Author: Martin J. Wainwright

Abstract: Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the “wrong” model even in the infinite data limit) is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of convex variational methods. This stability result provides additional incentive, apart from the obvious benefit of unique global optima, for using message-passing methods based on convex variational relaxations. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product. Keywords: graphical model, Markov random field, belief propagation, variational method, parameter estimation, prediction error, algorithmic stability

6 0.062829442 80 jmlr-2006-Segmental Hidden Markov Models with Random Effects for Waveform Modeling

7 0.05982434 75 jmlr-2006-Policy Gradient in Continuous Time

8 0.059613127 4 jmlr-2006-A Linear Non-Gaussian Acyclic Model for Causal Discovery

9 0.052440688 49 jmlr-2006-Learning Parts-Based Representations of Data

10 0.051672924 85 jmlr-2006-Statistical Comparisons of Classifiers over Multiple Data Sets

11 0.050075721 70 jmlr-2006-Online Passive-Aggressive Algorithms

12 0.047568079 13 jmlr-2006-Adaptive Prototype Learning Algorithms: Theoretical and Experimental Studies

13 0.044068176 37 jmlr-2006-Incremental Algorithms for Hierarchical Classification

14 0.042002801 15 jmlr-2006-Bayesian Network Learning with Parameter Constraints     (Special Topic on Machine Learning and Optimization)

15 0.039795503 71 jmlr-2006-Optimising Kernel Parameters and Regularisation Coefficients for Non-linear Discriminant Analysis

16 0.038646962 87 jmlr-2006-Stochastic Complexities of Gaussian Mixtures in Variational Bayesian Approximation

17 0.037330918 21 jmlr-2006-Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis

18 0.035720326 54 jmlr-2006-Learning the Structure of Linear Latent Variable Models

19 0.033912182 90 jmlr-2006-Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition

20 0.033665568 17 jmlr-2006-Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.194), (1, 0.079), (2, -0.083), (3, 0.039), (4, -0.295), (5, -0.274), (6, -0.183), (7, 0.132), (8, -0.177), (9, -0.02), (10, 0.004), (11, -0.117), (12, -0.093), (13, -0.021), (14, 0.072), (15, -0.001), (16, 0.033), (17, 0.204), (18, -0.006), (19, -0.044), (20, -0.103), (21, 0.106), (22, -0.122), (23, -0.047), (24, -0.134), (25, -0.016), (26, -0.026), (27, 0.087), (28, 0.001), (29, 0.007), (30, -0.126), (31, -0.131), (32, 0.082), (33, 0.037), (34, -0.17), (35, 0.009), (36, -0.064), (37, -0.084), (38, 0.006), (39, 0.092), (40, 0.025), (41, -0.111), (42, -0.013), (43, -0.043), (44, -0.094), (45, -0.148), (46, -0.087), (47, -0.025), (48, -0.024), (49, -0.159)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97555536 57 jmlr-2006-Linear State-Space Models for Blind Source Separation

Author: Rasmus Kongsgaard Olsson, Lars Kai Hansen

Abstract: We apply a type of generative modelling to the problem of blind source separation in which prior knowledge about the latent source signals, such as time-varying auto-correlation and quasiperiodicity, are incorporated into a linear state-space model. In simulations, we show that in terms of signal-to-error ratio, the sources are inferred more accurately as a result of the inclusion of strong prior knowledge. We explore different schemes of maximum-likelihood optimization for the purpose of learning the model parameters. The Expectation Maximization algorithm, which is often considered the standard optimization method in this context, results in slow convergence when the noise variance is small. In such scenarios, quasi-Newton optimization yields substantial improvements in a range of signal to noise ratios. We analyze the performance of the methods on convolutive mixtures of speech signals. Keywords: blind source separation, state-space model, independent component analysis, convolutive model, EM, speech modelling

2 0.52768546 52 jmlr-2006-Learning Spectral Clustering, With Application To Speech Separation

Author: Francis R. Bach, Michael I. Jordan

Abstract: Spectral clustering refers to a class of techniques which rely on the eigenstructure of a similarity matrix to partition points into disjoint clusters, with points in the same cluster having high similarity and points in different clusters having low similarity. In this paper, we derive new cost functions for spectral clustering based on measures of error between a given partition and a solution of the spectral relaxation of a minimum normalized cut problem. Minimizing these cost functions with respect to the partition leads to new spectral clustering algorithms. Minimizing with respect to the similarity matrix leads to algorithms for learning the similarity matrix from fully labelled data sets. We apply our learning algorithm to the blind one-microphone speech separation problem, casting the problem as one of segmentation of the spectrogram. Keywords: spectral clustering, blind source separation, computational auditory scene analysis
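For concreteness, a generic normalized-cut style spectral clustering sketch is given below (Python/NumPy plus scikit-learn's KMeans); this is the textbook recipe the abstract alludes to, not the specific cost functions or learning algorithm developed in that paper.

import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    # cluster points from a symmetric similarity matrix W via the normalized Laplacian
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)             # eigenvalues in ascending order
    U = vecs[:, :k]                             # k smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1) / 0.5)
labels = spectral_clustering(W, 2)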

3 0.42073467 32 jmlr-2006-Expectation Correction for Smoothed Inference in Switching Linear Dynamical Systems

Author: David Barber

Abstract: We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman ‘Filter’ and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but accurate alternative. Our method consists of a single Forward and Backward Pass and is reminiscent of the standard smoothing ‘correction’ recursions in the simpler linear dynamical system. The method is numerically stable and compares favourably against alternative approximations, both in cases where a single mixture component provides a good posterior approximation, and where a multimodal approximation is required. Keywords: Gaussian sum smoother, switching Kalman filter, switching linear dynamical system, expectation propagation, expectation correction 1. Switching Linear Dynamical System The Linear Dynamical System (LDS) (Bar-Shalom and Li, 1998; West and Harrison, 1999) is a key temporal model in which a latent linear process generates the observed time-series. For more complex time-series which are not well described globally by a single LDS, we may break the time-series into segments, each modeled by a potentially different LDS. This is the basis for the Switching LDS (SLDS) where, for each time-step t, a switch variable st ∈ 1, . . . , S describes which of the LDSs is to be used.1 The observation (or ‘visible’ variable) vt ∈ R V is linearly related to the hidden state ht ∈ R H by vt = B(st )ht + ηv (st ), ηv (st ) ∼ N (v(st ), Σv (st )) ¯ (1) where N (µ, Σ) denotes a Gaussian distribution with mean µ and covariance Σ. The transition dynamics of the continuous hidden state ht is linear ht = A(st )ht−1 + ηh (st ), ¯ ηh (st ) ∼ N h(st ), Σh (st ) . (2) 1. These systems also go under the names Jump Markov model/process, switching Kalman Filter, Switching Linear Gaussian State-Space model, Conditional Linear Gaussian Model. c 2006 David Barber. BARBER s1 s2 s3 s4 h1 h2 h3 h4 v1 v2 v3 v4 Figure 1: The independence structure of the aSLDS. Square nodes denote discrete variables, round nodes continuous variables. In the SLDS links from h to s are not normally considered. The dynamics of the switch variables is Markovian, with transition p(st |st−1 ). The SLDS is used in many disciplines, from econometrics to machine learning (Bar-Shalom and Li, 1998; Ghahramani and Hinton, 1998; Lerner et al., 2000; Kitagawa, 1994; Kim and Nelson, 1999; Pavlovic et al., 2001). See Lerner (2002) and Zoeter (2005) for recent reviews of work. AUGMENTED S WITCHING L INEAR DYNAMICAL S YSTEM In this article, we will consider the more general model in which the switch st is dependent on both the previous st−1 and ht−1 . We call this an augmented Switching Linear Dynamical System 2 (aSLDS), in keeping with the terminology in Lerner (2002). An equivalent probabilistic model is, as depicted in Figure (1), T p(v1:T , h1:T , s1:T ) = p(v1 |h1 , s1 )p(h1 |s1 )p(s1 ) ∏ p(vt |ht , st )p(ht |ht−1 , st )p(st |ht−1 , st−1 ). t=2 The notation x1:T is shorthand for x1 , . . . , xT . 
The distributions are parameterized as p(vt |ht , st ) = N (v(st ) + B(st )ht , Σv (st )) , ¯ ¯ p(ht |ht−1 , st ) = N h(st ) + A(st )ht−1 , Σh (st ) where p(h1 |s1 ) = N (µ(s1 ), Σ(s1 )). The aSLDS has been used, for example, in state-duration modeling in acoustics (Cemgil et al., 2006) and econometrics (Chib and Dueker, 2004). I NFERENCE The aim of this article is to address how to perform inference in both the SLDS and aSLDS. In particular we desire the so-called filtered estimate p(ht , st |v1:t ) and the smoothed estimate p(ht , st |v1:T ), for any t, 1 ≤ t ≤ T . Both exact filtered and smoothed inference in the SLDS is intractable, scaling exponentially with time (Lerner, 2002). To see this informally, consider the filtered posterior, which may be recursively computed using p(st , ht |v1:t ) = ∑ Z st−1 ht−1 p(st , ht |st−1 , ht−1 , vt )p(st−1 , ht−1 |v1:t−1 ). (3) At timestep 1, p(s1 , h1 |v1 ) = p(h1 |s1 , v1 )p(s1 |v1 ) is an indexed set of Gaussians. At time-step 2, due to the summation over the states s1 , p(s2 , h2 |v1:2 ) will be an indexed set of S Gaussians; similarly at 2. These models are closely related to Threshold Regression Models (Tong, 1990). 2516 E XPECTATION C ORRECTION time-step 3, it will be S2 and, in general, gives rise to St−1 Gaussians. More formally, in Lauritzen and Jensen (2001), a general exact method is presented for performing stable inference in such hybrid discrete models with conditional Gaussian potentials. The method requires finding a strong junction tree which, in the SLDS case, means that the discrete variables are placed in a single cluster, resulting in exponential complexity. The key issue in the (a)SLDS, therefore, is how to perform approximate inference in a numerically stable manner. Our own interest in the SLDS stems primarily from acoustic modeling, in which the time-series consists of many thousands of time-steps (Mesot and Barber, 2006; Cemgil et al., 2006). For this, we require a stable and computationally feasible approximate inference, which is also able to deal with state-spaces of high hidden dimension, H. 2. Expectation Correction Our approach to approximate p(ht , st |v1:T ) ≈ p(ht , st |v1:T ) mirrors the Rauch-Tung-Striebel (RTS) ˜ ‘correction’ smoother for the LDS (Rauch et al., 1965; Bar-Shalom and Li, 1998). Readers unfamiliar with this approach will find a short explanation in Appendix (A), which defines the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS. Our correction approach consists of a single Forward Pass to recursively find the filtered posterior p(ht , st |v1:t ), followed by a single Backward Pass to correct this into a smoothed posterior ˜ p(ht , st |v1:T ). The Forward Pass we use is equivalent to Assumed Density Filtering (Alspach and ˜ Sorenson, 1972; Boyen and Koller, 1998; Minka, 2001). The main contribution of this paper is a novel form of Backward Pass, based on collapsing the smoothed posterior to a mixture of Gaussians. Unless stated otherwise, all quantities should be considered as approximations to their exact counterparts, and we will therefore usually omit the tildes˜throughout the article. 2.1 Forward Pass (Filtering) Readers familiar with Assumed Density Filtering (ADF) may wish to continue directly to Section (2.2). The basic idea is to represent the (intractable) posterior using a simpler distribution. 
This is then propagated forwards through time, conditioned on the new observation, and subsequently collapsed back to the tractable distribution representation—see Figure (2). Our aim is to form a recursion for p(st , ht |v1:t ), based on a Gaussian mixture approximation of p(ht |st , v1:t ). Without loss of generality, we may decompose the filtered posterior as p(ht , st |v1:t ) = p(ht |st , v1:t )p(st |v1:t ). We will first form a recursion for p(ht |st , v1:t ), and discuss the switch recursion p(st |v1:t ) later. The full procedure for computing the filtered posterior is presented in Algorithm (1). The exact representation of p(ht |st , v1:t ) is a mixture with O(St ) components. We therefore approximate this with a smaller It -component mixture It p(ht |st , v1:t ) ≈ p(ht |st , v1:t ) ≡ ˜ ˜ ˜ ∑ p(ht |it , st , v1:t ) p(it |st , v1:t ) it =1 where p(ht |it , st , v1:t ) is a Gaussian parameterized with mean3 f (it , st ) and covariance F(it , st ). The ˜ Gaussian mixture weights are given by p(it |st , v1:t ). In the above, p represent approximations to the ˜ ˜ 3. Strictly speaking, we should use the notation ft (it , st ) since, for each time t, we have a set of means indexed by it , st . This mild abuse of notation is used elsewhere in the paper. 2517 BARBER st st+1 it ht ht+1 vt+1 Figure 2: Structure of the mixture representation of the Forward Pass. Essentially, the Forward Pass defines a ‘prior’ distribution at time t which contains all the information from the variables v1:t . This prior is propagated forwards through time using the exact dynamics, conditioned on the observation, and then collapsed back to form a new prior approximation at time t + 1. corresponding exact p distributions. To find a recursion for these parameters, consider ˜ p(ht+1 |st+1 , v1:t+1 ) = ∑ p(ht+1 , st , it |st+1 , v1:t+1 ) ˜ st ,it = ∑ p(ht+1 |it , st , st+1 , v1:t+1 ) p(st , it |st+1 , v1:t+1 ) ˜ ˜ (4) st ,it where each of the factors can be recursively computed on the basis of the previous filtered results (see below). However, this recursion suffers from an exponential increase in mixture components. To deal with this, we will later collapse p(ht+1 |st+1 , v1:t+1 ) back to a smaller mixture. For the ˜ remainder, we drop the p notation, and concentrate on computing the r.h.s of Equation (4). ˜ E VALUATING p(ht+1 |st , it , st+1 , v1:t+1 ) We find p(ht+1 |st , it , st+1 , v1:t+1 ) from the joint distribution p(ht+1 , vt+1 |st , it , st+1 , v1:t ), which is a Gaussian with covariance and mean elements4 Σhh = A(st+1 )F(it , st )AT (st+1 ) + Σh (st+1 ), Σvv = B(st+1 )Σhh BT (st+1 ) + Σv (st+1 ) Σvh = B(st+1 )F(it , st ), µv = B(st+1 )A(st+1 ) f (it , st ), µh = A(st+1 ) f (it , st ). (5) These results are obtained from integrating the forward dynamics, Equations (1,2) over h t , using the results in Appendix (B). To find p(ht+1 |st , it , st+1 , v1:t+1 ) we may then condition p(ht+1 , vt+1 | st , it , st+1 , v1:t ) on vt+1 using the results in Appendix (C)—see also Algorithm (4). E VALUATING p(st , it |st+1 , v1:t+1 ) Up to a trivial normalization constant the mixture weight in Equation (4) can be found from the decomposition p(st , it |st+1 , v1:t+1 ) ∝ p(vt+1 |it , st , st+1 , v1:t )p(st+1 |it , st , v1:t )p(it |st , v1:t )p(st |v1:t ). ¯ 4. We derive this for ht+1 , vt+1 ≡ 0, to ease notation. ¯ 2518 (6) E XPECTATION C ORRECTION Algorithm 1 aSLDS Forward Pass. Approximate the filtered posterior p(st |v1:t ) ≡ ρt , p(ht |st , v1:t ) ≡ ∑it wt (it , st )N ( ft (it , st ), Ft (it , st )). 
Also we return the approximate log-likelihood log p(v 1:T ). We ¯ require I1 = 1, I2 ≤ S, It ≤ S × It−1 . θt (s) = A(s), B(s), Σh (s), Σv (s), h(s), v(s) for t > 1. θ1 (s) = ¯ v (s), µ(s), v(s) A(s), B(s), Σ(s), Σ ¯ for s1 ← 1 to S do { f1 (1, s1 ), F1 (1, s1 ), p} = LDSFORWARD(0, 0, v1 ; θ(s1 )) ˆ ρ1 ← p(s1 ) p ˆ end for for t ← 2 to T do for st ← 1 to S do for i ← 1 to It−1 , and s ← 1 to S do {µx|y (i, s), Σx|y (i, s), p} = LDSFORWARD( ft−1 (i, s), Ft−1 (i, s), vt ; θt (st )) ˆ p∗ (st |i, s) ≡ p(st |ht−1 , st−1 = s) p(ht−1 |it−1 =i,st−1 =s,v1:t−1 ) p (st , i, s) ← wt−1 (i, s)p∗ (st |i, s)ρt−1 (s) p ˆ end for Collapse the It−1 × S mixture of Gaussians defined by µx|y ,Σx|y , and weights p(i, s|st ) ∝ p (st , i, s) to a Gaussian with It components, p(ht |st , v1:t ) ≈ I ∑itt =1 p(it |st , v1:t )p(ht |st , it , v1:t ). This defines the new means ft (it , st ), covariances Ft (it , st ) and mixture weights wt (it , st ) ≡ p(it |st , v1:t ). Compute ρt (st ) ∝ ∑i,s p (st , i, s) end for normalize ρt ≡ p(st |v1:t ) L ← L + log ∑st ,i,s p (st , i, s) end for The first factor in Equation (6), p(vt+1 |it , st , st+1 , v1:t ), is a Gaussian with mean µv and covariance Σvv , as given in Equation (5). The last two factors p(it |st , v1:t ) and p(st |v1:t ) are given from the previous iteration. Finally, p(st+1 |it , st , v1:t ) is found from p(st+1 |it , st , v1:t ) = p(st+1 |ht , st ) p(ht |it ,st ,v1:t ) (7) where · p denotes expectation with respect to p. In the standard SLDS, Equation (7) is replaced by the Markov transition p(st+1 |st ). In the aSLDS, however, Equation (7) will generally need to be computed numerically. A simple approximation is to evaluate Equation (7) at the mean value of the distribution p(ht |it , st , v1:t ). To take covariance information into account an alternative would be to draw samples from the Gaussian p(ht |it , st , v1:t ) and thus approximate the average of p(st+1 |ht , st ) by sampling.5 C LOSING THE R ECURSION We are now in a position to calculate Equation (4). For each setting of the variable st+1 , we have a mixture of It × S Gaussians. In order to avoid an exponential explosion in the number of mixture 5. Whilst we suggest sampling as part of the aSLDS update procedure, this does not render the Forward Pass as a form of sequential sampling procedure, such as Particle Filtering. The sampling here is a form of exact sampling, for which no convergence issues arise, being used only to numerically evaluate Equation (7). 2519 BARBER components, we numerically collapse this back to It+1 Gaussians to form It+1 p(ht+1 |st+1 , v1:t+1 ) ≈ ∑ it+1 =1 p(ht+1 |it+1 , st+1 , v1:t+1 )p(it+1 |st+1 , v1:t+1 ). Hence the Gaussian components and corresponding mixture weights p(it+1 |st+1 , v1:t+1 ) are defined implicitly through a numerical (Gaussian-Mixture to smaller Gaussian-Mixture) collapse procedure, for which any method of choice may be supplied. A straightforward approach that we use in our code is based on repeatedly merging low-weight components, as explained in Appendix (D). A R ECURSION FOR THE S WITCH VARIABLES A recursion for the switch variables can be found by considering p(st+1 |v1:t+1 ) ∝ ∑ p(it , st , st+1 , vt+1 , v1:t ). it ,st The r.h.s. of the above equation is proportional to ∑ p(vt+1 |it , st , st+1 , v1:t )p(st+1 |it , st , v1:t )p(it |st , v1:t )p(st |v1:t ) st ,it where all terms have been computed during the recursion for p(ht+1 |st+1 , v1:t+1 ). 
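To make this Forward Pass concrete, here is a sketch of a single filtering step for the standard SLDS with one Gaussian per switch state (It = 1) and a Markov switch prior p(st+1 |st ), following Equations (4)-(7); the code is my own illustrative Python/NumPy, and the mixture over the previous switch state is collapsed by simple moment matching (the single-Gaussian collapse of Appendix D).

import numpy as np

def gauss_logpdf(x, mu, Sig):
    r = x - mu
    _, logdet = np.linalg.slogdet(2 * np.pi * Sig)
    return -0.5 * (r @ np.linalg.solve(Sig, r) + logdet)

def adf_step(v, f, F, rho, A, B, Qh, Qv, P):
    # One step of the filtered recursion. Inputs: p(h_t | s_t=j, v_{1:t}) ~ N(f[j], F[j]),
    # rho[j] = p(s_t=j | v_{1:t}); A, B, Qh, Qv are the switch-indexed parameters
    # (Qh, Qv standing in for Sigma^h(s), Sigma^v(s)); P[j2, j1] = p(s_{t+1}=j2 | s_t=j1).
    S = len(A)
    f_new, F_new, rho_new = [], [], np.zeros(S)
    for j2 in range(S):                       # new switch state s_{t+1}
        mus, covs, w = [], [], np.zeros(S)
        for j1 in range(S):                   # previous switch state s_t
            # joint statistics of (h_{t+1}, v_{t+1}) given s_t=j1, s_{t+1}=j2 (Equation 5)
            mu_h = A[j2] @ f[j1]
            S_hh = A[j2] @ F[j1] @ A[j2].T + Qh[j2]
            mu_v = B[j2] @ mu_h
            S_vv = B[j2] @ S_hh @ B[j2].T + Qv[j2]
            S_vh = B[j2] @ S_hh
            # condition on the new observation v_{t+1}
            K = np.linalg.solve(S_vv, S_vh)
            mus.append(mu_h + K.T @ (v - mu_v))
            covs.append(S_hh - S_vh.T @ K)
            # mixture weight, Equation (6)
            w[j1] = np.exp(gauss_logpdf(v, mu_v, S_vv)) * P[j2, j1] * rho[j1]
        rho_new[j2] = w.sum()
        w = w / w.sum()
        mu = sum(wi * m for wi, m in zip(w, mus))   # collapse the S-component mixture
        cov = sum(wi * (C + np.outer(m - mu, m - mu)) for wi, m, C in zip(w, mus, covs))
        f_new.append(mu)
        F_new.append(cov)
    return f_new, F_new, rho_new / rho_new.sum()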
T HE L IKELIHOOD p(v1:T ) The likelihood p(v1:T ) may be found by recursing p(v1:t+1 ) = p(vt+1 |v1:t )p(v1:t ), where p(vt+1 |v1:t ) = ∑ it ,st ,st+1 p(vt+1 |it , st , st+1 , v1:t )p(st+1 |it , st , v1:t )p(it |st , v1:t )p(st |v1:t ). In the above expression, all terms have been computed in forming the recursion for the filtered posterior p(ht+1 , st+1 |v1:t+1 ). 2.2 Backward Pass (Smoothing) The main contribution of this paper is to find a suitable way to ‘correct’ the filtered posterior p(st , ht |v1:t ) obtained from the Forward Pass into a smoothed posterior p(st , ht |v1:T ). We initially derive this for the case of a single Gaussian representation—the extension to the mixture case is straightforward and given in Section (2.3). Our derivation holds for both the SLDS and aSLDS. We approximate the smoothed posterior p(ht |st , v1:T ) by a Gaussian with mean g(st ) and covariance G(st ), and our aim is to find a recursion for these parameters. A useful starting point is the exact relation: p(ht , st |v1:T ) = ∑ p(st+1 |v1:T )p(ht |st , st+1 , v1:T )p(st |st+1 , v1:T ). st+1 2520 E XPECTATION C ORRECTION The term p(ht |st , st+1 , v1:T ) may be computed as p(ht |st , st+1 , v1:T ) = Z p(ht , ht+1 |st , st+1 , v1:T ) = Z p(ht |ht+1 , st , st+1 , v1:T )p(ht+1 |st , st+1 , v1:T ) Z p(ht |ht+1 , st , st+1 , v1:t )p(ht+1 |st , st+1 , v1:T ) ht+1 ht+1 = ht+1 (8) which is in the form of a recursion. This recursion therefore requires p(ht+1 |st , st+1 , v1:T ), which we can write as p(ht+1 |st , st+1 , v1:T ) ∝ p(ht+1 |st+1 , v1:T )p(st |st+1 , ht+1 , v1:t ). (9) The above recursions represent the exact computation of the smoothed posterior. In our approximate treatment, we replace all quantities p with their corresponding approximations p. A difficulty ˜ is that the functional form of p(st |st+1 , ht+1 , v1:t ) in the approximation of Equation (9) is not squared ˜ exponential in ht+1 , so that p(ht+1 |st , st+1 , v1:T ) will not be a mixture of Gaussians.6 One possibil˜ ity would be to approximate the non-Gaussian p(ht+1 |st , st+1 , v1:T ) (dropping the p notation) by a ˜ Gaussian (mixture) by minimizing the Kullback-Leilbler divergence between the two, or performing moment matching in the case of a single Gaussian. A simpler alternative is to make the assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ), see Figure (3). This is a considerable simplification since p(ht+1 |st+1 , v1:T ) is already known from the previous backward recursion. Under this assumption, the recursion becomes p(ht , st |v1:T ) ≈ ∑ p(st+1 |v1:T )p(st |st+1 , v1:T ) st+1 p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) . (10) We call the procedure based on Equation (10) Expectation Correction (EC) since it ‘corrects’ the filtered results which themselves are formed from propagating expectations. In Appendix (E) we show how EC is equivalent to a partial Discrete-Continuous factorized approximation. Equation (10) forms the basis of the the EC Backward Pass. However, similar to the ADF Forward Pass, the number of mixture components needed to represent the posterior in this recursion grows exponentially as we go backwards in time. The strategy we take to deal with this is a form of Assumed Density Smoothing, in which Equation (10) is interpreted as a propagated dynamics reversal, which will subsequently be collapsed back to an assumed family of distributions—see Figure (4). How we implement the recursion for the continuous and discrete factors is detailed below.7 6. 
In the exact calculation, p(ht+1 |st , st+1 , v1:T ) is a mixture of Gaussians since p(st |st+1 , ht+1 , v1:t ) = p(st , st+1 , ht+1 , v1:T )/p(st+1 , ht+1 , v1:T ) so that the mixture of Gaussians denominator p(st+1 , ht+1 , v1:T ) cancels with the first term in Equation (9), leaving a mixture of Gaussians. However, since in Equation (9) the two terms p(ht+1 |st+1 , v1:T ) and p(st |st+1 , ht+1 , v1:t ) are replaced by approximations, this cancellation is not guaranteed. 7. Equation (10) has the pleasing form of an RTS Backward Pass for the continuous part (analogous to LDS case), and a discrete smoother (analogous to a smoother recursion for the HMM). In the Forward-Backward algorithm for the HMM (Rabiner, 1989), the posterior γt ≡ p(st |v1:T ) is formed from the product of αt ≡ p(st |v1:t ) and βt ≡ p(vt+1:T |st ). This approach is also analogous to EP (Heskes and Zoeter, 2002). In the correction approach, a direct recursion for γt in terms of γt+1 and αt is formed, without explicitly defining βt . The two approaches to inference are known as α − β and α − γ recursions. 2521 BARBER st−1 st st+1 st+2 ht−1 ht ht+1 ht+2 vt−1 vt vt+1 vt+2 Figure 3: Our Backward Pass approximates p(ht+1 |st+1 , st , v1:T ) by p(ht+1 |st+1 , v1:T ). Motivation for this is that st only influences ht+1 through ht . However, ht will most likely be heavily influenced by v1:t , so that not knowing the state of st is likely to be of secondary importance. The darker shaded node is the variable we wish to find the posterior state of. The lighter shaded nodes are variables in known states, and the hashed node a variable whose state is indeed known but assumed unknown for the approximation. E VALUATING p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) is a Gaussian in ht , whose statistics we will now compute. First we find p(ht |ht+1 , st , st+1 , v1:t ) which may be obtained from the joint distribution p(ht , ht+1 |st , st+1 , v1:t ) = p(ht+1 |ht , st+1 )p(ht |st , v1:t ) (11) which itself can be found using the forward dynamics from the filtered estimate p(ht |st , v1:t ). The statistics for the marginal p(ht |st , st+1 , v1:t ) are simply those of p(ht |st , v1:t ), since st+1 carries no extra information about ht .8 The remaining statistics are the mean of ht+1 , the covariance of ht+1 and cross-variance between ht and ht+1 , ht+1 = A(st+1 ) ft (st ) Σt+1,t+1 = A(st+1 )Ft (st )AT (st+1 ) + Σh (st+1 ), Σt+1,t = A(st+1 )Ft (st ). Given the statistics of Equation (11), we may now condition on ht+1 to find p(ht |ht+1 , st , st+1 , v1:t ). Doing so effectively constitutes a reversal of the dynamics, ← − ← − ht = A (st , st+1 )ht+1 + η (st , st+1 ) ← − ← − ← − − where A (st , st+1 ) and η (st , st+1 ) ∼ N (←(st , st+1 ), Σ (st , st+1 )) are easily found using the condim tioned Gaussian results in Appendix (C)—see also Algorithm (5). Averaging the reversed dynamics we obtain a Gaussian in ht for p(ht |ht+1 , st , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) with statistics ← − ← − ← − ← − − µt = A (st , st+1 )g(st+1 ) + ←(st , st+1 ), Σt,t = A (st , st+1 )G(st+1 ) A T (st , st+1 ) + Σ (st , st+1 ). m These equations directly mirror the RTS Backward Pass, see Algorithm (5). 8. Integrating over ht+1 means that the information from st+1 passing through ht+1 via the term p(ht+1 |st+1 , ht ) vanishes. Also, since st is known, no information from st+1 passes through st to ht . 
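The dynamics reversal described above (and reused as LDSBACKWARD in Algorithm 5) is a standard Gaussian conditioning step. A minimal NumPy sketch, not taken from the paper's code, assuming `theta_next` packs the transition parameters A(s_{t+1}), Σ_h(s_{t+1}) and the hidden bias of switch state s_{t+1}:

```python
import numpy as np

def lds_backward_step(g_next, G_next, f, F, theta_next):
    """One dynamics-reversal (RTS-style) step: condition the joint of
    (h_t, h_{t+1}) on h_{t+1}, then average the reversed dynamics over the
    smoothed Gaussian N(g_next, G_next).  f, F are the filtered statistics
    of h_t given s_t and v_{1:t}."""
    A, Sigma_h, h_bar = theta_next
    mu_next = A @ f + h_bar                      # <h_{t+1}>
    S_nn = A @ F @ A.T + Sigma_h                 # Sigma_{t+1,t+1}
    S_nt = A @ F                                 # Sigma_{t+1,t}
    A_bw = np.linalg.solve(S_nn, S_nt).T         # backward 'gain'  F A^T S_nn^{-1}
    m_bw = f - A_bw @ mu_next                    # reversed-dynamics offset
    S_bw = F - A_bw @ S_nt                       # reversed-dynamics noise covariance
    mu = A_bw @ g_next + m_bw                    # mean of the averaged reversal
    Sigma = A_bw @ G_next @ A_bw.T + S_bw        # covariance of the averaged reversal
    return mu, Sigma
```

The returned pair (mu, Sigma) gives the statistics of the reversed dynamics averaged over p(h_{t+1} | s_{t+1}, v_{1:T}), which are exactly the quantities needed in Equation (10).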
2522 E XPECTATION C ORRECTION st st+1 it jt+1 ht ht+1 vt vt+1 Figure 4: Structure of the Backward Pass for mixtures. Given the smoothed information at timestep t + 1, we need to work backwards to ‘correct’ the filtered estimate at time t. E VALUATING p(st |st+1 , v1:T ) The main departure of EC from previous methods is in treating the term p(st |st+1 , v1:T ) = p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) . (12) The term p(st |ht+1 , st+1 , v1:t ) is given by p(st |ht+1 , st+1 , v1:t ) = p(ht+1 |st , st+1 , v1:t )p(st , st+1 |v1:t ) . ∑st p(ht+1 |st , st+1 , v1:t )p(st , st+1 |v1:t ) (13) Here p(st , st+1 |v1:t ) = p(st+1 |st , v1:t )p(st |v1:t ), where p(st+1 |st , v1:t ) occurs in the Forward Pass, Equation (7). In Equation (13), p(ht+1 |st+1 , st , v1:t ) is found by marginalizing Equation (11). Performing the average over p(ht+1 |st+1 , v1:T ) in Equation (12) may be achieved by any numerical integration method desired. Below we outline a crude approximation that is fast and often performs surprisingly well. M EAN A PPROXIMATION A simple approximation of Equation (12) is to evaluate the integrand at the mean value of the averaging distribution. Replacing ht+1 in Equation (13) by its mean gives the simple approximation 1 T −1 1 e− 2 zt+1 (st ,st+1 )Σ (st ,st+1 |v1:t )zt+1 (st ,st+1 ) p(st |st+1 , v1:t ) p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) ≈ Z det Σ(st , st+1 |v1:t ) where zt+1 (st , st+1 ) ≡ ht+1 |st+1 , v1:T − ht+1 |st , st+1 , v1:t and Z ensures normalization over st . This result comes simply from the fact that in Equation (12) we have a Gaussian with a mean ht+1 |st , st+1 , v1:t and covariance Σ(st , st+1 |v1:t ), being the filtered covariance of ht+1 given st , st+1 and the observations v1:t , which may be taken from Σhh in Equation (5). Then evaluating this Gaussian at the specific point ht+1 |st+1 , v1:T , we arrive at the above expression. An alternative to this simple mean approximation is to sample from the Gaussian p(ht+1 |st+1 , v1:T ), which has the potential advantage that covariance information is used. 9 Other methods such as variational 9. This is a form of exact sampling since drawing samples from a Gaussian is easy. This should not be confused with meaning that this use of sampling renders EC a sequential Monte-Carlo sampling scheme. 2523 BARBER Algorithm 2 aSLDS: EC Backward Pass (Single Gaussian case I = J = 1). Approximates p(st |v1:T ) and p(ht |st , v1:T ) ≡ N (gt (st ), Gt (st )). This routine needs the results from Algorithm (1) for I = 1. GT ← FT , gT ← fT , for t ← T − 1 to 1 do for s ← 1 to S, s ← 1 to S do, (µ, Σ)(s, s ) = LDSBACKWARD(gt+1 (s ), Gt+1 (s ), ft (s), Ft (s), θt+1 (s )) p(s|s ) = p(st = s|ht+1 , st+1 = s , v1:t ) p(ht+1 |st+1 =s ,v1:T ) p(s, s |v1:T ) ← p(st+1 = s |v1:T )p(s|s ) end for for st ← 1 to S do Collapse the mixture defined by weights p(st+1 = s |st , v1:T ) ∝ p(st , s |v1:T ), means µ(st , s ) and covariances Σ(st , s ) to a single Gaussian. This defines the new means gt (st ), covariances Gt (st ). p(st |v1:T ) ← ∑s p(st , s |v1:T ) end for end for approximations to this average (Jaakkola and Jordan, 1996) or the unscented transform (Julier and Uhlmann, 1997) may be employed if desired. C LOSING THE R ECURSION We have now computed both the continuous and discrete factors in Equation (10), which we wish to use to write the smoothed estimate in the form p(ht , st |v1:T ) = p(st |v1:T )p(ht |st , v1:T ). 
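Putting the pieces together, the single-Gaussian EC Backward Pass (Algorithm 2 above) with the mean approximation of Equation (12) can be sketched as follows. This is only an illustrative sketch for the standard SLDS, where p(s_{t+1} | s_t, v_{1:t}) is the Markov transition; it reuses the hypothetical `lds_backward_step` reversal sketched earlier, and the parameter names (`A`, `Sigma_h`, `h_bar`, `trans`) are illustrative.

```python
import numpy as np

def switch_weights_mean_approx(g_next, mu_pred, S_pred, prior):
    """Mean approximation of Equation (12): evaluate the Gaussian
    p(h_{t+1} | s_t, s_{t+1}, v_{1:t}) at the smoothed mean g_next,
    weight by p(s_t, s_{t+1} | v_{1:t}), and normalize over s_t."""
    S = len(prior)
    w = np.zeros(S)
    for s in range(S):
        z = g_next - mu_pred[s]
        L = np.linalg.cholesky(S_pred[s])
        alpha = np.linalg.solve(L, z)
        w[s] = prior[s] * np.exp(-0.5 * alpha @ alpha) / np.prod(np.diag(L))
    return w / w.sum()

def ec_backward_single(f, F, rho, A, Sigma_h, h_bar, trans):
    """Sketch of Algorithm 2 (EC Backward Pass, I = J = 1).
    f[t, s], F[t, s], rho[t, s] are the filtered results from the Forward Pass."""
    T, S, H = f.shape
    g, G, gamma = f.copy(), F.copy(), rho.copy()    # at time T smoothed = filtered
    for t in range(T - 2, -1, -1):
        mu = np.zeros((S, S, H))
        Sig = np.zeros((S, S, H, H))
        p_ss = np.zeros((S, S))                     # p(s_t, s_{t+1} | v_{1:T})
        for sn in range(S):                         # s_{t+1}
            mu_pred = np.stack([A[sn] @ f[t, s] + h_bar[sn] for s in range(S)])
            S_pred = np.stack([A[sn] @ F[t, s] @ A[sn].T + Sigma_h[sn]
                               for s in range(S)])
            prior = rho[t] * trans[:, sn]           # p(s_t, s_{t+1} | v_{1:t})
            w = switch_weights_mean_approx(g[t + 1, sn], mu_pred, S_pred, prior)
            p_ss[:, sn] = gamma[t + 1, sn] * w
            for s in range(S):                      # s_t
                mu[s, sn], Sig[s, sn] = lds_backward_step(
                    g[t + 1, sn], G[t + 1, sn], f[t, s], F[t, s],
                    (A[sn], Sigma_h[sn], h_bar[sn]))
        gamma[t] = p_ss.sum(axis=1) / p_ss.sum()    # smoothed switch posterior
        for s in range(S):                          # collapse over s_{t+1}
            w = p_ss[s] / p_ss[s].sum()
            g[t, s] = np.einsum('j,jh->h', w, mu[s])
            G[t, s] = sum(w[j] * (Sig[s, j] + np.outer(mu[s, j] - g[t, s],
                                                       mu[s, j] - g[t, s]))
                          for j in range(S))
    return g, G, gamma
```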
The distribution p(ht |st , v1:T ) is readily obtained from the joint Equation (10) by conditioning on st to form the mixture p(ht |st , v1:T ) = ∑ p(st+1 |st , v1:T )p(ht |st , st+1 , v1:T ) st+1 which may be collapsed to a single Gaussian (or mixture if desired). As in the Forward Pass, this collapse implicitly defines the Gaussian mean g(st ) and covariance G(st ). The smoothed posterior p(st |v1:T ) is given by p(st |v1:T ) = = ∑ p(st+1 |v1:T )p(st |st+1 , v1:T ) st+1 ∑ p(st+1 |v1:T ) st+1 p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) . (14) The algorithm for the single Gaussian case is presented in Algorithm (2). N UMERICAL S TABILITY Numerical stability is a concern even in the LDS, and the same is to be expected for the aSLDS. Since the LDS recursions LDSFORWARD and LDSBACKWARD are embedded within the EC algorithm, we may immediately take advantage of the large body of work on stabilizing the LDS recursions, such as the Joseph form (Grewal and Andrews, 1993), or the square root forms (Park and Kailath, 1996; Verhaegen and Van Dooren, 1986). 2524 E XPECTATION C ORRECTION R ELAXING EC The conditional independence assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ) is not strictly necessary in EC. We motivate it by computational simplicity, since finding an appropriate moment matching approximation of p(ht+1 |st , st+1 , v1:T ) in Equation (9) requires a relatively expensive nonGaussian integration. If we therefore did treat p(ht+1 |st , st+1 , v1:T ) more correctly, the central assumption in this relaxed version of EC would be a collapse to a mixture of Gaussians (the additional computation of Equation (12) may usually be numerically evaluated to high precision). Whilst we did not do so, implementing this should not give rise to numerical instabilities since no potential divisions are required, merely the estimation of moments. In the experiments presented here, we did not pursue this option, since we believe that the effect of this conditional independence assumption is relatively weak. I NCONSISTENCIES IN THE APPROXIMATION The recursion Equation (8), upon which EC depends, makes use of the Forward Pass results, and a subtle issue arises about possible inconsistencies in the Forward and Backward approximations. For example, under the conditional independence assumption in the Backward Pass, p(hT |sT −1 , sT , v1:T ) ≈ p(hT |sT , v1:T ), which is in contradiction to Equation (5) which states that the approximation to p(hT |sT −1 , sT , v1:T ) will depend on sT −1 . Similar contradictions occur also for the relaxed version of EC. Such potential inconsistencies arise because of the approximations made, and should not be considered as separate approximations in themselves. Furthermore, these inconsistencies will most likely be strongest at the end of the chain, t ≈ T , since only then is Equation (8) in direct contradiction to Equation (5). Such potential inconsistencies arise since EC is not founded on a consistency criterion, unlike EP—see Section (3)—but rather an approximation of the exact recursions. Our experience is that compared to EP, which attempts to ensure consistency based on multiple sweeps through the graph, such inconsistencies are a small price to pay compared to the numerical stability advantages of EC. 2.3 Using Mixtures in the Backward Pass The extension to the mixture case is straightforward, based on the representation Jt p(ht |st , v1:T ) ≈ ∑ p(ht |st , jt , v1:T )p( jt |st , v1:T ). 
jt =1 Analogously to the case with a single component, p(ht , st |v1:T ) = ∑ it , jt+1 ,st+1 p(st+1 |v1:T )p( jt+1 |st+1 , v1:T )p(ht | jt+1 , st+1 , it , st , v1:T ) · p(it , st |ht+1 , jt+1 , st+1 , v1:t ) p(ht+1 | jt+1 ,st+1 ,v1:T ) . The average in the last line of the above equation can be tackled using the same techniques as outlined in the single Gaussian case. To approximate p(ht | jt+1 , st+1 , it , st , v1:T ) we consider this as the marginal of the joint distribution p(ht , ht+1 |it , st , jt+1 , st+1 , v1:T ) = p(ht |ht+1 , it , st , jt+1 , st+1 , v1:t )p(ht+1 |it , st , jt+1 , st+1 , v1:T ). 2525 BARBER Algorithm 3 aSLDS: EC Backward Pass. Approximates p(st |v1:T ) and p(ht |st , v1:T ) ≡ Jt ut ( jt , st )N (gt ( jt , st ), Gt ( jt , st )) using a mixture of Gaussians. JT = IT , Jt ≤ S × It × Jt+1 . This ∑ jt =1 routine needs the results from Algorithm (1). GT ← FT , gT ← fT , uT ← wT (*) for t ← T − 1 to 1 do for s ← 1 to S, s ← 1 to S, i ← 1 to It , j ← 1 to Jt+1 do (µ, Σ)(i, s, j , s ) = LDSBACKWARD(gt+1 ( j , s ), Gt+1 ( j , s ), ft (i, s), Ft (i, s), θt+1 (s )) p(i, s| j , s ) = p(st = s, it = i|ht+1 , st+1 = s , jt+1 = j , v1:t ) p(ht+1 |st+1 =s , jt+1 = j ,v1:T ) p(i, s, j , s |v1:T ) ← p(st+1 = s |v1:T )ut+1 ( j , s )p(i, s| j , s ) end for for st ← 1 to S do Collapse the mixture defined by weights p(it = i, st+1 = s , jt+1 = j |st , v1:T ) ∝ p(i, st , j , s |v1:T ), means µ(it , st , j , s ) and covariances Σ(it , st , j , s ) to a mixture with Jt components. This defines the new means gt ( jt , st ), covariances Gt ( jt , st ) and mixture weights ut ( jt , st ). p(st |v1:T ) ← ∑it , j ,s p(it , st , j , s |v1:T ) end for end for (*) If JT < IT then the initialization is formed by collapsing the Forward Pass results at time T to JT components. As in the case of a single mixture, the problematic term is p(ht+1 |it , st , jt+1 , st+1 , v1:T ). Analogously to before, we may make the assumption p(ht+1 |it , st , jt+1 , st+1 , v1:T ) ≈ p(ht+1 | jt+1 , st+1 , v1:T ) meaning that information about the current switch state st , it is ignored.10 We can then form p(ht |st , v1:T ) = ∑ it , jt+1 ,st+1 p(it , jt+1 , st+1 |st , v1:T )p(ht |it , st , jt+1 , st+1 , v1:T ). This mixture can then be collapsed to smaller mixture using any method of choice, to give Jt p(ht |st , v1:T ) ≈ ∑ p(ht | jt , st , v1:T )p( jt |st , v1:T ) jt =1 The collapse procedure implicitly defines the means g( jt , st ) and covariances G( jt , st ) of the smoothed approximation. A recursion for the switches follows analogously to the single component Backward Pass. The resulting algorithm is presented in Algorithm (3), which includes using mixtures in both Forward and Backward Passes. Note that if JT < IT , an extra initial collapse is required of the IT component Forward Pass Gaussian mixture at time T to JT components. EC has time complexity O(S2 IJK) where S are the number of switch states, I and J are the number of Gaussians used in the Forward and Backward passes, and K is the time to compute the exact Kalman smoother for the system with a single switch state. 10. As in the single component case, in principle, this assumption may be relaxed and a moment matching approximation be performed instead. 2526 E XPECTATION C ORRECTION 3. Relation to Other Methods Approximate inference in the SLDS is a long-standing research topic, generating an extensive literature. See Lerner (2002) and Zoeter (2005) for reviews of previous work. A brief summary of some of the major existing approaches follows. 
Assumed Density Filtering Since the exact filtered estimate p(ht |st , v1:t ) is an (exponentially large) mixture of Gaussians, a useful remedy is to project at each stage of the recursion Equation (3) back to a limited set of K Gaussians. This is a Gaussian Sum Approximation (Alspach and Sorenson, 1972), and is a form of Assumed Density Filtering (ADF) (Minka, 2001). Similarly, Generalized Pseudo Bayes2 (GPB2) (Bar-Shalom and Li, 1998) also performs filtering by collapsing to a mixture of Gaussians. This approach to filtering is also taken in Lerner et al. (2000) which performs the collapse by removing spatially similar Gaussians, thereby retaining diversity. Several smoothing approaches directly use the results from ADF. The most popular is Kim’s method, which updates the filtered posterior weights to form the smoother (Kim, 1994; Kim and Nelson, 1999). In both EC and Kim’s method, the approximation p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ), is used to form a numerically simple Backward Pass. The other approximation in EC is to numerically compute the average in Equation (14). In Kim’s method, however, an update for the discrete variables is formed by replacing the required term in Equation (14) by p(st |ht+1 , st+1 , v1:t ) p(ht+1 |st+1 ,v1:T ) ≈ p(st |st+1 , v1:t ). (15) This approximation11 decouples the discrete Backward Pass in Kim’s method from the continuous dynamics, since p(st |st+1 , v1:t ) ∝ p(st+1 |st )p(st |v1:t )/p(st+1 |v1:t ) can be computed simply from the filtered results alone (the continuous Backward Pass in Kim’s method, however, does depend on the discrete Backward Pass). The fundamental difference between EC and Kim’s method is that the approximation (15) is not required by EC. The EC Backward Pass therefore makes fuller use of the future information, resulting in a recursion which intimately couples the continuous and discrete variables. The resulting effect on the quality of the approximation can be profound, as we will see in the experiments. Kim’s smoother corresponds to a potentially severe loss of future information and, in general, cannot be expected to improve much on the filtered results from ADF. The more recent work of Lerner et al. (2000) is similar in spirit to Kim’s method, whereby the contribution from the continuous variables is ignored in forming an approximate recursion for the smoothed p(st |v1:T ). The main difference is that for the discrete variables, Kim’s method is based on a correction smoother (Rauch et al., 1965), whereas Lerner’s method uses a Belief Propagation style Backward Pass (Jordan, 1998). Neither method correctly integrates information from the continuous variables. How to form a recursion for a mixture approximation which does not ignore information coming through the continuous hidden variables is a central contribution of our work. Kitagawa (1994) used a two-filter method in which the dynamics of the chain are reversed. Essentially, this corresponds to a Belief Propagation method which defines a Gaussian sum 11. In the HMM this is exact, but in the SLDS the future observations carry information about st . 2527 BARBER EC Mixture Collapsing to Single Mixture Collapsing to Mixture Cond. Indep. p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ) Approx. of p(st |st+1 , v1:T ), average Equation (12) Kim’s Backward Pass Mixture approx. of p(ht+1 |st , st+1 , v1:T ), Equation (9) Relaxed EC x x x x EP x Kim x x x x x Table 1: Relation between methods. 
In the EC methods, the mean approximation may be replaced by an essentially exact Monte Carlo approximation to Equation (12). EP refers to the Single Gaussian approximation in Heskes and Zoeter (2002). In the case of using Relaxed EC with collapse to a single Gaussian, EC and EP are not equivalent, since the underlying recursions on which the two methods are based are fundamentally different. approximation for p(vt+1:T |ht , st ). However, since this is not a density in ht , st , but rather a conditional likelihood, formally one cannot treat this using density propagation methods. In Kitagawa (1994), the singularities resulting from incorrectly treating p(vt+1:T |ht , st ) as a density are heuristically finessed. Expectation Propagation EP (Minka, 2001), as applied to the SLDS, corresponds to an approximate implementation of Belief Propagation12 (Jordan, 1998; Heskes and Zoeter, 2002). EP is the most sophisticated rival to Kim’s method and EC, since it makes the least assumptions. For this reason, we’ll explain briefly how EP works. Unlike EC, which is based on an approximation of the exact filtering and smoothing recursions, EP is based on a consistency criterion. First, let’s simplify the notation, and write the distribution as p = ∏t φ (xt−1 , vt−1 , xt , vt ), where xt ≡ ht ⊗ st , and φ (xt−1 , vt−1 , xt , vt ) ≡ p(xt |xt−1 )p(vt |xt ). EP defines ‘messages’ ρ, λ13 which contain information from past and future observations respectively. 14 Explicitly, we define ρt (xt ) ∝ p(xt |v1:t ) to represent knowledge about xt given all information from time 1 to t. Similarly, λt (xt ) represents knowledge about state xt given all observations from time T to time t + 1. In the sequel, we drop the time suffix for notational clarity. We define λ(xt ) implicitly through the requirement that the marginal smoothed inference is given by p(xt |v1:T ) ∝ ρ (xt ) λ (xt ) . (16) Hence λ (xt ) ∝ p(vt+1:T |xt , v1:t ) = p(vt+1:T |xt ) and represents all future knowledge about p(xt |v1:T ). From this p(xt−1 , xt |v1:T ) ∝ ρ (xt−1 ) φ (xt−1 , vt−1 , xt , vt ) λ (xt ) . (17) 12. Non-parametric belief propagation (Sudderth et al., 2003), which performs approximate inference in general continuous distributions, is also related to EP applied to the aSLDS, in the sense that the messages cannot be represented easily, and are approximated by mixtures of Gaussians. 13. These correspond to the α and β messages in the Hidden Markov Model framework (Rabiner, 1989). 14. In this Belief Propagation/EP viewpoint, the backward messages, traditionally labeled as β, correspond to conditional likelihoods, and not distributions. In contrast, in the EC approach, which is effectively a so-called α − γ recursion, the backward γ messages correspond to posterior distributions. 2528 E XPECTATION C ORRECTION Taking the above equation as a starting point, we have p(xt |v1:T ) ∝ Z xt−1 ρ (xt−1 ) φ (xt−1 , vt−1 , xt , vt ) λ (xt ) . Consistency with Equation (16) requires (neglecting irrelevant scalings) ρ (xt ) λ (xt ) ∝ Z xt−1 ρ (xt−1 ) φ (xt−1 , vt−1 , xt , vt ) λ (xt ) . Similarly, we can integrate Equation (17) over xt to get the marginal at time xt−1 which, by consistency, should be proportional to ρ (xt−1 ) λ (xt−1 ). Hence ρ (xt ) ∝ xt−1 ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) R λ (xt ) , λ (xt−1 ) ∝ R xt ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) ρ (xt−1 ) (18) where the divisions can be interpreted as preventing over-counting of messages. In an exact implementation, the common factors in the numerator and denominator cancel. 
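Because Equation (18) is hard to read in the extracted text above, it is restated here; this is only a typeset restatement of the quoted consistency updates, with φ(x_{t-1}, x_t) ≡ p(x_t | x_{t-1}) p(v_t | x_t):

```latex
\rho(x_t) \;\propto\;
  \frac{\int_{x_{t-1}} \rho(x_{t-1})\,\phi(x_{t-1},x_t)\,\lambda(x_t)\,dx_{t-1}}{\lambda(x_t)},
\qquad
\lambda(x_{t-1}) \;\propto\;
  \frac{\int_{x_t} \rho(x_{t-1})\,\phi(x_{t-1},x_t)\,\lambda(x_t)\,dx_t}{\rho(x_{t-1})}.
```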
EP addresses the fact that λ(xt ) is not a distribution by using Equation (18) R form the projection (or to R ‘collapse’). In the numerator, xt−1 ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) and xt ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ) represent p(xt |v1:T ) and p(xt−1 |v1:T ). Since these are distributions (an indexed mixture of Gaussians in the SLDS), they may be projected/collapsed to a single indexed Gaussian. The update for the ρ message is then found from division by the λ potential, and vice versa. In EP the explicit division of potentials only makes sense for members of the exponential family. More complex methods could be envisaged in which, rather than an explicit division, the new messages are defined by minimizing some measure of divergence between R ρ(xt )λ(xt ) and xt−1 ρ (xt−1 ) φ (xt−1 , xt ) λ (xt ), such as the Kullback-Leibler divergence. In this way, non-exponential family approximations (such as mixtures of Gaussians) may be considered. Whilst this is certainly feasible, it is somewhat unattractive computationally since this would require for each time-step an expensive minimization. For the single Gaussian case, in order to perform the division, the potentials in the numerator and denominator are converted to their canonical representations. To form the ρ update, the result of the division is then reconverted back to a moment representation. The resulting recursions, due to the approximation, are no longer independent and Heskes and Zoeter (2002) show that using more than a single Forward and Backward sweep often improves on the quality of the approximation. This coupling is a departure from the exact recursions, which should remain independent. Applied to the SLDS, EP suffers from severe numerical instabilities (Heskes and Zoeter, 2002) and finding a way to minimize the corresponding EP free energy in an efficient, robust and guaranteed way remains an open problem. Our experience is that current implementations of EP are unsuitable for large scale time-series applications. Damping the parameter updates is one suggested approach to heuristically improve convergence. The source of these numerical instabilities is not well understood since, even in cases when the posterior appears uni-modal, the method is problematic. The frequent conversions between moment and canonical parameterizations of Gaussians are most likely at the root of the difficulties. An interesting comparison here is between Lauritzen’s original method for exact computation on conditional Gaussian distributions (for which the SLDS is a special case) Lauritzen (1992), 2529 BARBER which is numerically unstable due to conversion between moment and canonical representations, and Lauritzen and Jensen (2001), which improves stability by avoiding using canonical parameterizations. Variational Methods Ghahramani and Hinton (1998) used a variational method which approximates the joint distribution p(h1:T , s1:T |v1:T ) rather than the marginal p(ht , st |v1:T )—related work is presented in Lee et al. (2004). This is a disadvantage when compared to other methods that directly approximate the marginal. The variational methods are nevertheless potentially attractive since they are able to exploit structural properties of the distribution, such as a factored discrete state-transition. In this article, we concentrate on the case of a small number of states S and hence will not consider variational methods further here. 
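The moment/canonical conversions blamed above for EP's numerical difficulties are simple to state. The sketch below is not taken from any cited implementation; it only illustrates the conversions and the potential division used in the message updates, where the difference of precision matrices need not remain positive definite:

```python
import numpy as np

def to_canonical(mu, Sigma):
    """Moment form (mu, Sigma) -> canonical form (eta, Lambda),
    with Lambda = Sigma^{-1} and eta = Sigma^{-1} mu."""
    Lambda = np.linalg.inv(Sigma)
    return Lambda @ mu, Lambda

def to_moment(eta, Lambda):
    """Canonical form -> moment form."""
    Sigma = np.linalg.inv(Lambda)
    return Sigma @ eta, Sigma

def divide_canonical(eta1, Lambda1, eta2, Lambda2):
    """Division of two Gaussian potentials, as used to form the EP message
    updates: canonical parameters subtract.  The result need not be a
    normalizable Gaussian (Lambda1 - Lambda2 can lose positive definiteness),
    which is one source of the instabilities discussed above."""
    return eta1 - eta2, Lambda1 - Lambda2
```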
15 Sequential Monte Carlo (Particle Filtering) These methods form an approximate implementation of Equation (3), using a sum of delta functions to represent the posterior—see, for example, Doucet et al. (2001). Whilst potentially powerful, these non-analytic methods typically suffer in high-dimensional hidden spaces since they are often based on naive importance sampling, which restricts their practical use. ADF is generally preferential to Particle Filtering, since in ADF the approximation is a mixture of non-trivial distributions, which is better at capturing the variability of the posterior. Rao-Blackwellized Particle Filters (Doucet et al., 2000) are an attempt to alleviate the difficulty of sampling in high-dimensional state spaces by explicitly integrating over the continuous state. Non-Sequential Monte Carlo For fixed switches s1:T , p(v1:T |s1:T ) is easily computable since this is just the likelihood of an LDS. This observation raises the possibility of sampling from the posterior p(s 1:T |v1:T ) ∝ p(v1:T |s1:T )p(s1:T ) directly. Many possible sampling methods could be applied in this case, and the most immediate is Gibbs sampling, in which a sample for each t is drawn from p(st |s\t , v1:T )—see Neal (1993) for a general reference and Carter and Kohn (1996) for an application to the SLDS. This procedure may work well in practice provided that the initial setting of s1:T is in a region of high probability mass—otherwise, sampling by such individual coordinate updates may be extremely inefficient. 4. Experiments Our experiments examine the stability and accuracy of EC against several other methods on long time-series. In addition, we will compare the absolute accuracy of EC as a function of the number of mixture components on a short time-series, where exact inference may be explicitly evaluated. Testing EC in a problem with a reasonably long temporal sequence, T , is important since numerical stabilities may not be apparent in time-series of just a few time-steps. To do this, we sequentially generate hidden states ht , st and observations vt from a given model. Then, given only the parameters of the model and the observations (but not any of the hidden states), the task is to infer p(ht |st , v1:T ) and p(st |v1:T ). Since the exact computation is exponential in T , a formally exact evaluation of the method is infeasible. A simple alternative is to assume that the original sample states s1:T are the ‘correct’ inferred states, and compare our most probable posterior smoothed 15. Lerner (2002) discusses an approach in the case of a large structured discrete state transition. Related ideas could also be used in EC. 2530 E XPECTATION C ORRECTION 80 150 60 100 40 50 20 0 0 −50 −20 −100 −40 −150 −60 −80 0 10 20 30 40 50 60 70 80 90 −200 100 0 10 20 (a) Easy problem 30 40 50 60 70 80 90 100 (b) Hard problem Figure 5: SLDS: Throughout, S = 2, V = 1 (scalar observations), T = 100, with zero output bias. ¯ A(s) = 0.9999 ∗ orth(randn(H, H)), B(s) = randn(V, H), vt ≡ 0, h1 = 10 ∗ randn(H, 1), ¯ ¯ t>1 = 0, Σh = IH , p1 = uniform. The figures show typical examples for each of the two h 1 problems: (a) Easy problem. H = 3, Σh (s) = IH , Σv (s) = 0.1IV , p(st+1 |st ) ∝ 1S×S + IS . (b) Hard problem. H = 30, Σv (s) = 30IV ,Σh (s) = 0.01IH , p(st+1 |st ) ∝ 1S×S . 
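For concreteness, data from the 'easy' toy SLDS of Figure 5(a) can be generated along the following lines. This is only a sketch of the stated setup (zero output bias, uniform switch prior); the random draws of course differ from the paper's, and `orth(randn(H, H))` is emulated with a QR factorization.

```python
import numpy as np

def make_easy_slds(H=3, S=2, V=1, T=100, seed=0):
    """Sample switches, hidden states and observations from the 'easy' problem
    of Figure 5(a): A(s) = 0.9999*orth(randn(H,H)), B(s) = randn(V,H),
    Sigma_h(s) = I, Sigma_v(s) = 0.1*I, p(s_{t+1}|s_t) propto 1 + I."""
    rng = np.random.default_rng(seed)
    A = [0.9999 * np.linalg.qr(rng.standard_normal((H, H)))[0] for _ in range(S)]
    B = [rng.standard_normal((V, H)) for _ in range(S)]
    Sigma_h = [np.eye(H) for _ in range(S)]
    Sigma_v = [0.1 * np.eye(V) for _ in range(S)]
    trans = np.ones((S, S)) + np.eye(S)
    trans /= trans.sum(axis=1, keepdims=True)       # p(s_{t+1} | s_t)
    mu1 = 10 * rng.standard_normal(H)               # prior mean of h_1
    s = np.zeros(T, dtype=int)
    h = np.zeros((T, H))
    v = np.zeros((T, V))
    s[0] = rng.integers(S)                          # uniform switch prior p_1
    h[0] = rng.multivariate_normal(mu1, np.eye(H))
    v[0] = B[s[0]] @ h[0] + rng.multivariate_normal(np.zeros(V), Sigma_v[s[0]])
    for t in range(1, T):
        s[t] = rng.choice(S, p=trans[s[t - 1]])
        h[t] = A[s[t]] @ h[t - 1] + rng.multivariate_normal(np.zeros(H), Sigma_h[s[t]])
        v[t] = B[s[t]] @ h[t] + rng.multivariate_normal(np.zeros(V), Sigma_v[s[t]])
    return v, s, h
```

The 'hard' problem of Figure 5(b) changes only H = 30, Σ_v(s) = 30 I, Σ_h(s) = 0.01 I and a uniform switch transition.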
PF RBPF EP ADFS KimS ECS ADFM KimM ECM Gibbs 1000 800 600 400 200 0 0 10 20 0 10 20 0 10 20 0 10 20 0 10 20 0 10 20 0 10 20 0 10 20 0 10 20 0 10 20 Figure 6: SLDS ‘Easy’ problem: The number of errors in estimating a binary switch p(st |v1:T ) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. The histograms have been cutoff at 20 errors in order to improve visualization. (PF) Particle Filter. (RBPF) Rao-Blackwellized PF. (EP) Expectation Propagation. (ADFS) Assumed Density Filtering using a Single Gaussian. (KimS) Kim’s smoother using the results from ADFS. (ECS) Expectation Correction using a Single Gaussian (I = J = 1). (ADFM) ADF using a multiple of I = 4 Gaussians. (KimM) Kim’s smoother using the results from ADFM. (ECM) Expectation Correction using a mixture with I = J = 4 components. In Gibbs sampling, we use the initialization from ADFM. estimates arg maxst p(st |v1:T ) with the assumed correct sample st .16 We look at two sets of experiments, one for the SLDS and one for the aSLDS. In both cases, scalar observations are used so that the complexity of the inference problem can be visually assessed. 16. We could also consider performance measures on the accuracy of p(ht |st , v1:T ). However, we prefer to look at approximating arg maxst p(st |v1:T ) since the sampled discrete states are likely to correspond to the exact arg max st p(st |v1:T ). In addition, if the posterior switch distribution is dominated by a single state s ∗ , then provided they are correctly 1:T estimated, the model reduces to an LDS, for which inference of the continuous hidden state is trivial. 2531 BARBER PF RBPF EP ADFS KimS ECS ADFM KimM ECM Gibbs 1000 800 600 400 200 0 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 0 25 50 75 Figure 7: SLDS ‘Hard’ problem: The number of errors in estimating a binary switch p(st |v1:T ) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. SLDS EXPERIMENTS We chose experimental conditions that, from the viewpoint of classical signal processing, are difficult, with changes in the switches occurring at a much higher rate than the typical frequencies in the signal. We consider two different toy SLDS experiments : The ‘easy’ problem corresponds to a low hidden dimension, H = 3, with low observation noise; The ‘hard’ problem corresponds to a high hidden dimension, H = 30, and high observation noise. See Figure (5) for details of the experimental setup. We compared methods using a single Gaussian, and methods using multiple Gaussians, see Figure (6) and Figure (7). For EC we use the mean approximation for the numerical integration of Equation (12). For the Particle Filter 1000 particles were used, with Kitagawa re-sampling (Kitagawa, 1996). For the Rao-Blackwellized Particle Filter (Doucet et al., 2000), 500 particles were used, with Kitagawa re-sampling. We included the Particle Filter merely for a point of comparison with ADF, since they are not designed to approximate the smoothed estimate. 
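Among the baselines listed in the caption above, Kim's smoother (KimS/KimM) reuses the ADF filtered results and updates only the switch weights via the approximation of Equation (15); its discrete backward pass decouples from the continuous dynamics and reduces to an HMM-style recursion. A minimal sketch, not the paper's code:

```python
import numpy as np

def kim_discrete_smoother(rho, trans):
    """Kim's discrete Backward Pass, using Equation (15):
    p(s_t | s_{t+1}, v_{1:t}) propto p(s_{t+1} | s_t) p(s_t | v_{1:t}).

    rho[t, s]        : filtered switch posteriors p(s_t | v_{1:t}) from ADF
    trans[s, s_next] : Markov transition p(s_{t+1} = s_next | s_t = s)
    """
    T, S = rho.shape
    gamma = rho.copy()                               # gamma[T-1] = rho[T-1]
    for t in range(T - 2, -1, -1):
        joint = rho[t][:, None] * trans              # p(s_t, s_{t+1} | v_{1:t}), unnormalized over s_t
        cond = joint / joint.sum(axis=0, keepdims=True)   # p(s_t | s_{t+1}, v_{1:t})
        gamma[t] = cond @ gamma[t + 1]               # p(s_t | v_{1:T}) under Kim's approximation
    return gamma
```

The continuous backward pass in Kim's method then uses these smoothed switch weights; the point made in the text is that, unlike EC, the term in Equation (15) ignores the information that h_{t+1} carries about s_t.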
An alternative MCMC procedure is to perform Gibbs sampling of p(s 1:T |v1:T ) using p(st |s\t , v1:T ) ∝ p(v1:T |s1:T )p(s1:T ), where p(v1:T |s1:T ) is simply the likelihood of an LDS—see for example Carter and Kohn (1996).17 We initialize the state s1:T by using the most likely states st from the filtered results using a Gaussian mixture (ADFM), and then swept forwards in time, sampling from the state p(st |s\t , v1:T ) until the end of the chain. We then reversed direction, sampling from time T back to time 1, and continued repeating this procedure 100 times, with the mean over the last 80 sweeps used as the posterior mean approximation. This procedure is expensive since each sample requires computing the likelihood of an LDS defined on the whole time-series. The procedure therefore scales with GT 2 where G is the number of sweeps over the time series. Despite using a reasonable initialization, Gibbs sampling struggles to improve on the filtered results. We found that EP was numerically unstable and often struggled to converge. To encourage convergence, we used the damping method in Heskes and Zoeter (2002), performing 20 iterations with a damping factor of 0.5. The disappointing performance of EP is most likely due to conflicts 17. Carter and Kohn (1996) proposed an overly complex procedure for computing the likelihood p(v 1:T |s1:T ). This is simply the likelihood of an LDS (since s1:T are assumed known), and is readily computable using any of the standard procedures in the literature. 2532 E XPECTATION C ORRECTION PF ADFS ECS ADFM ECM 0 10 20 30 40 50 60 0 10 20 30 40 50 60 0 10 20 30 40 50 60 0 10 20 30 40 50 60 0 10 20 30 40 50 60 1000 800 600 400 200 0 Figure 8: aSLDS: Histogram of the number of errors in estimating a binary switch p(st |v1:T ) over a time series of length T = 100. Hence 50 errors corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments. Augmented SLDS results. ADFM used I = 4 Gaussians, and ECM used I = J = 4 Gaussians. We used 1000 samples to approximate Equation (12). I J error 1 1 0.0989 4 1 0.0624 4 4 0.0365 16 1 0.0440 16 16 0.0130 64 1 0.0440 64 64 4.75e-4 256 1 0.0440 256 256 3.40e-8 Table 2: Errors in approximating the states for the multi-path problem, see Figure (9). The mean absolute deviation |pec (st |v1:T ) − pexact (st |v1:T )| averaged over the S = 4 states of st and over the times t = 1, . . . , 5, computed for different numbers of mixture components in EC. The mean approximation of Equation (12) is used. The exact computation uses S T −1 = 256 mixtures. resulting from numerical instabilities introduced by the frequent conversions between moment and canonical representations. The various algorithms differ widely in performance, see Figures (6,7). Not surprisingly, the best filtered results are given using ADF, since this is better able to represent the variance in the filtered posterior than the sampling methods. Unlike Kim’s method, EC makes good use of the future information to clean up the filtered results considerably. One should bear in mind that both EC, Kim’s method and the Gibbs initialization use the same ADF results. These results show that EC may dramatically improve on Kim’s method, so that the small amount of extra work in making a numerical approximation of p(st |st+1 , v1:T ), Equation (12), may bring significant benefits. AUGMENTED SLDS E XPERIMENTS In Figure (8), we chose a simple two state S = 2 transition distribution p(st+1 = 1|st , ht ) = σ htT w(st ) , where σ(x) ≡ 1/(1 + e−x ). 
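The augmented transition just defined, p(s_{t+1} = 1 | s_t, h_t) = σ(h_t^T w(s_t)), is exactly the case in which Equation (7) must be evaluated numerically. A small sketch of the sampling approximation mentioned in footnote 5, assuming the filtered Gaussian N(f, F) stands for p(h_t | i_t, s_t, v_{1:t}):

```python
import numpy as np

def aslds_switch_prob(f, F, w, n_samples=1000, rng=None):
    """Approximate p(s_{t+1} | i_t, s_t, v_{1:t}) = <sigma(h_t^T w(s_t))> under
    N(f, F) by exact sampling.  With n_samples=0 this falls back to the cruder
    evaluation of the sigmoid at the mean f alone."""
    if n_samples == 0:
        p1 = 1.0 / (1.0 + np.exp(-f @ w))
    else:
        rng = np.random.default_rng() if rng is None else rng
        h = rng.multivariate_normal(f, F, size=n_samples)
        p1 = np.mean(1.0 / (1.0 + np.exp(-h @ w)))
    return np.array([p1, 1.0 - p1])                  # [p(s_{t+1}=1), p(s_{t+1}=2)]
```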
Some care needs to be taken to make a model so for which even exact inference would produce posterior switches close to the sampled switches. If the switch variables st+1 changes wildly (which is possible given the above formula since the hidden state h may have a large projected change if the hidden state changes) essentially no information is left in the signal for any inference method to produce reasonable results. We therefore set w(st ) to a zero vector except for the first two components, which are independently sampled from a zero mean Gaussian with standard deviation 5. For each of the two switch states, s, we have a transition matrix A(s), which 2533 BARBER t=1 0 t=2 10 t=3 20 t=4 30 t=5 40 −40 −30 −20 −10 0 10 20 30 (a) 40 (b) Figure 9: (a) The multi-path problem. The particle starts from (0, 0) at time t = 1. Subsequently, at each time-point, either the vector (10, 10) (corresponding to states s = 1 and s = 3) or (−10, 10) (corresponding to states s = 2 and s = 4), is added to the hidden dynamics, perturbed by a small amount of noise, Σh = 0.1. The observations are v = h + ηv (s). For states s = 1, 2 the observation noise is small, Σv = 0.1I, but for s = 3, 4 the noise in the horizontal direction has variance 1000. The visible observations are given by the x’. The true hidden states are given by ‘+’. (b) The exact smoothed state posteriors p exact (st |v1:T ) computed by enumerating all paths (given by the dashed lines). we set to be block diagonal. The first 2 × 2 block is set to 0.9999R θ , where Rθ is a 2 × 2 rotation matrix with angle θ chosen uniformly from 0 to 1 radians. This means that st+1 is dependent on the first two components of ht which are rotating at a restricted rate. The remaining H − 2 × H − 2 block of A(s) is chosen as (using MATLAB notation) 0.9999 ∗ orth(rand(H − 2)), which means a scaled randomly chosen orthogonal matrix. Throughout, S = 2, V = 1, H = 30, T = 100, with zero output ¯ ¯ bias. Using partly MATLAB notation, B(s) = randn(V, H), vt ≡ 0, h1 = 10 ∗ randn(H, 1), ht>1 = 0, ¯ h = I , p = uniform. Σv = 30I , Σh = 0.1I . Σ1 H 1 V H We compare EC only against Particle Filters using 1000 particles, since other methods would require specialized and novel implementations. In ADFM, I = 4 Gaussians were used, and for ECM, I = J = 4 Gaussians were used. Looking at the results in Figure (8), we see that EC performs well, with some improvement in using the mixture representation I, J = 4 over a single Gaussian I = J = 1. The Particle Filter most likely failed since the hidden dimension is too high to be explored well with only 1000 particles. E FFECT OF U SING M IXTURES Our claim is that EC should cope in situations where the smoothed posterior p(ht |st , v1:T ) is multimodal and, consequently, cannot be well represented by a single Gaussian. 18 We therefore constructed an SLDS which exhibits multi-modality to see the effect of using EC with both I and J greater than 1. The ‘multi-path’ scenario is described in Figure (9), where a particle traces a path through a two dimensional space. A small number of time-steps was chosen so that the exact p(st |v1:T ) can be computed by direct enumeration. The observation of the particle is at times extremely noisy in the horizontal direction. This induces multi-modality of p(ht |st , v1:T ) since there 18. This should not be confused with the multi-modality of p(ht |v1:T ) = ∑st p(ht |st , v1:T )p(st |v1:T ). 
2534 E XPECTATION C ORRECTION are several paths that might plausibly have been taken to give rise to the observations. The accuracy with which EC predicts the exact smoothed posterior is given in Table (2). For this problem we see that both the number of Forward (I) and Backward components (J) affects the accuracy of the approximation, generally with improved accuracy as the number of mixture components increases. For a ‘perfect’ approximation method, one would expect that when I = J = S T −1 = 256, then the approximation should become exact. The small error for this case in Table (2) may arise for several reasons: the extra independence assumption used in EC, or the simple mean approximation used to compute Equation (12), or numerical roundoff. However, at least in this case, the effect of these assumptions on the performance is very small. 5. Discussion Expectation Correction is a novel form of Backward Pass which makes less approximations than the widely used approach from Kim (1994). In Kim’s method, potentially important future information channeled through the continuous hidden variables is lost. EC, along with Kim’s method, makes the additional assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ). However, our experience is that this assumption is rather mild, since the state of ht+1 will be most heavily influenced by its immediate parent st+1 . Our approximation is based on the idea that, although exact inference will consist of an exponentially large number of mixture components, due to the forgetting which commonly occurs in Markovian models, a finite number of mixture components may provide a reasonable approximation. In tracking situations where the visible information is (temporarily) not enough to specify accurately the hidden state, then representing the posterior p(ht |st , v1:T ) using a mixture of Gaussians may improve results significantly. Clearly, in systems with very long correlation times our method may require too many mixture components to produce a satisfactory result, although we are unaware of other techniques that would be able to cope well in that case. We hope that the straightforward ideas presented here may help facilitate the practical application of dynamic hybrid networks to machine learning and related areas. Whilst models with Gaussian emission distributions such as the SLDS are widespread, the extension of this method to non-Gaussian emissions p(vt |ht , st ) would clearly be of considerable interest. Software for Expectation Correction for this augmented class of Switching Linear Gaussian models is available from www.idiap.ch/∼barber. Acknowledgments I would like to thank Onno Zoeter and Tom Heskes for kindly providing their Expectation Propagation code, Silvia Chiappa for helpful discussions, and Bertrand Mesot for many discussions, help with the simulations and for suggesting the relationship between the partial factorization and independence viewpoints of EC. I would also like to thank the reviewers for their many helpful comments and suggestions. 2535 BARBER Algorithm 4 LDS Forward Pass. Compute the filtered posteriors p(ht |v1:t ) ≡ N ( ft , Ft ) for ¯ ¯ a LDS with parameters θt = A, B, Σh , Σv , h, v, for t > 1. At time t = 1, we use parameters v , µ, v, where Σ and µ are the prior covariance and mean of h. The log-likelihood θ1 = A, B, Σ, Σ ¯ L = log p(v1:T ) is also returned. 
F0 ← 0, f0 ← 0, L ← 0 for t ← 1, T do { ft , Ft , pt } = LDSFORWARD( ft−1 , Ft−1 , vt ; θt ) L ← L + log pt end for function LDSFORWARD( f , F, v; θ) Compute joint p(ht , vt |v1:t−1 ): ¯ µh ← A f + h, µv ← Bµh + v ¯ T Σhh ← AFA + Σh , Σvv ← BΣhh BT + Σv , Σvh ← BΣhh Find p(ht |v1:t ) by conditioning: f ← µh + ΣT Σ−1 (v − µv ), F ← Σhh − ΣT Σ−1 Σvh vh vv vh vv Compute p(vt |v1:t−1 ): √ 1 p ← exp − 2 (v − µv )T Σ−1 (v − µv ) / det 2πΣvv vv return f , F , p end function Appendix A. Inference in the LDS The LDS is defined by Equations (1,2) in the case of a single switch S = 1. The LDS Forward and Backward passes define the important functions LDSFORWARD and LDSBACKWARD, which we shall make use of for inference in the aSLDS. F ORWARD PASS (F ILTERING ) The filtered posterior p(ht |v1:t ) is a Gaussian which we parameterize with mean f t and covariance Ft . These parameters can be updated recursively using p(ht |v1:t ) ∝ p(ht , vt |v1:t−1 ), where the joint distribution p(ht , vt |v1:t−1 ) has statistics (see Appendix (B)) ¯ µh = A ft−1 + h, µv = Bµh + v ¯ Σhh = AFt−1 AT + Σh , Σvv = BΣhh BT + Σv , Σvh = BΣhh . We may then find p(ht |v1:t ) by conditioning p(ht , vt |v1:t−1 ) on vt , see Appendix (C). This gives rise to Algorithm (4). BACKWARD PASS The smoothed posterior p(ht |v1:T ) ≡ N (gt , Gt ) can be computed recursively using: p(ht |v1:T ) = Z ht+1 p(ht |ht+1 , v1:T )p(ht+1 |v1:T ) = Z ht+1 p(ht |ht+1 , v1:t )p(ht+1 |v1:T ) where p(ht |ht+1 , v1:t ) may be obtained from the joint distribution p(ht , ht+1 |v1:t ) = p(ht+1 |ht )p(ht |v1:t ) (19) 2536 E XPECTATION C ORRECTION Algorithm 5 LDS Backward Pass. Compute the smoothed posteriors p(ht |v1:T ). This requires the filtered results from Algorithm (4). GT ← FT , gT ← fT for t ← T − 1, 1 do {gt , Gt } = LDSBACKWARD(gt+1 , Gt+1 , ft , Ft ; θt+1 ) end for function LDSBACKWARD(g, G, f , F; θ) ¯ Σh h ← AF µh ← A f + h, Σh h ← AFAT + Σh , ← − − ← − ← ← f − ←µ − −1 T Σ ← Ft − Σh h Σh h Σh h , A ← ΣT h Σ−1 , m A h h hh ← − ← ← − − ← − − g ← A g + ←, m G ← AGAT+ Σ return g , G end function which itself can be obtained by forward propagation from p(ht |v1:t ). Conditioning Equation (19) to find p(ht |ht+1 , v1:t ) effectively reverses the dynamics, ← − ← − ht = At ht+1 + ηt ← − − ← − −← where At and η t ∼ N (←, Σt ) are found using the conditioned Gaussian results in Appendix (C)— mt these are explicitly given in Algorithm (5). Then averaging the reversed dynamics over p(h t+1 |v1:T ) we find that p(ht |v1:T ) is a Gaussian with statistics ← − ← − ← − ← − − gt = At gt+1 + ←, Gt = At Gt+1 At T + Σt . mt This Backward Pass is given in Algorithm (5). For parameter learning of the A matrix, the smoothed ← − T T statistic ht ht+1 is required. Using the above formulation, this is given by At Gt+1 + ht ht+1 . This is much simpler than the standard expressions cited in Shumway and Stoffer (2000) and Roweis and Ghahramani (1999). Appendix B. Gaussian Propagation Let y be linearly related to x through y = Mx + η, where η ∼ N (µ, Σ), and x ∼ N (µ x , Σx ). Then R p(y) = x p(y|x)p(x) is a Gaussian with mean Mµx + µ and covariance MΣx M T + Σ. Appendix C. Gaussian Conditioning For a joint Gaussian distribution over the vectors x and y with means µ x , µy and covariance elements Σxx ,Σxy ,Σyy , the conditional p(x|y) is a Gaussian with mean µx + Σxy Σ−1 (y − µy ) and covariance yy Σxx − Σxy Σ−1 Σyx . yy Appendix D. 
Collapsing Gaussians The user may provide any algorithm of their choice for collapsing a set of Gaussians to a smaller set of Gaussians (Titterington et al., 1985). Here, to be explicit, we present a simple one which is fast, but has the disadvantage that no spatial information about the mixture is used. 2537 BARBER First, we describe how to collapse a mixture to a single Gaussian: We may collapse a mixture of Gaussians p(x) = ∑i pi N (x|µi , Σi ) to a single Gaussian with mean ∑i pi µi and covariance ∑i pi Σi + µi µT − µµT . i To collapse a mixture to a K-component mixture we retain the K − 1 Gaussians with the largest mixture weights—the remaining N − K Gaussians are simply merged to a single Gaussian using the above method. The alternative of recursively merging the two Gaussians with the lowest mixture weights gave similar experimental performance. More sophisticated methods which retain some spatial information would clearly be potentially useful. The method presented in Lerner et al. (2000) is a suitable approach which considers removing Gaussians which are spatially similar (and not just low-weight components), thereby retaining diversity over the possible solutions. Appendix E. The Discrete-Continuous Factorization Viewpoint An alternative viewpoint is to proceed analogously to the Rauch-Tung-Striebel correction method for the LDS (Grewal and Andrews, 1993): p(ht , st |v1:T ) = = ∑ Z st+1 ht+1 p(st , ht , ht+1 , st+1 |v1:T ) ∑ p(st+1 |v1:T ) st+1 Z ht+1 p(ht , st |ht+1 , st+1 , v1:t )p(ht+1 |st+1 , v1:T ) st+1 = ≈ ∑ p(st+1 |v1:T ) p(ht |ht+1 , st+1 , st , v1:t )p(st |ht+1 , st+1 , v1:t ) ∑ p(st+1 |v1:T ) p(ht |ht+1 , st+1 , st , v1:t ) p(st |st+1 , v1:T ) st+1 (20) p(st |st+1 ,v1:T ) where angled brackets · denote averages with respect to p(ht+1 |st+1 , v1:T ). Whilst the factorized approximation in Equation (20) may seem severe, by comparing Equations (20) and (10) we see that it is equivalent to the apparently milder assumption p(ht+1 |st , st+1 , v1:T ) ≈ p(ht+1 |st+1 , v1:T ). Hence this factorized approximation is equivalent to the ‘standard’ EC approach in which the dependency on st is dropped. References D. L. Alspach and H. W. Sorenson. Nonlinear bayesian estimation using gaussian sum approximations. IEEE Transactions on Automatic Control, 17(4):439–448, 1972. Y. Bar-Shalom and Xiao-Rong Li. Estimation and Tracking : Principles, Techniques and Software. Artech House, Norwood, MA, 1998. X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence—UAI 1998, pages 33–42. Morgan Kaufmann, 1998. C. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996. 2538 E XPECTATION C ORRECTION A. T. Cemgil, B. Kappen, and D. Barber. A Generative Model for Music Transcription. IEEE Transactions on Audio, Speech and Language Processing, 14(2):679 – 694, 2006. S. Chib and M. Dueker. Non-Markovian regime switching with endogenous states and time-varying state strengths. Econometric Society 2004 North American Summer Meetings 600, 2004. A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. Uncertainty in Artificial Intelligence, 2000. A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001. Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963–996, 1998. M. S. 
Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice. Prentice-Hall, 1993. T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence, pages 216–223, 2002. T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In Artificial Intelligence and Statistics, 1996. M. I. Jordan. Learning in Graphical Models. MIT Press, 1998. S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Int. Symp. Aerospace/Defense Sensing, Simul. and Controls, Orlando, FL, 1997. C-J. Kim. Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1–22, 1994. C-J. Kim and C. R. Nelson. State-Space Models with Regime Switching. MIT Press, 1999. G. Kitagawa. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Annals of the Institute of Statistical Mathematics, 46(4):605–623, 1994. G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996. S. Lauritzen and F. Jensen. Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191–203, 2001. S. L. Lauritzen. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87(420):1098–1108, 1992. L. J. Lee, H. Attias, Li Deng, and P. Fieguth. A multimodal variational approach to learning and inference in switching state space models. In IEEE International Conference on Acoustics, Speech, and Signal Processing, (ICASSP 04), volume 5, pages 505–8, 2004. U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault detection and diagnosis in dynamic systems. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AIII00), pages 531–537, 2000. 2539 BARBER U. N. Lerner. Hybrid Bayesian Networks for Reasoning about Complex Systems. PhD thesis, Stanford University, 2002. B. Mesot and D. Barber. Switching linear dynamical systems for noise robust speech recognition. IDIAP-RR 08, 2006. T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT Media Lab, 2001. R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 1993. P. Park and T. Kailath. New square-root smoothing algorithms. IEEE Transactions on Automatic Control, 41:727–732, 1996. V. Pavlovic, J. M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In Advances in Neural Information Processing systems (NIPS 13), pages 981–987, 2001. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, 77(2):257–286, 1989. H. E. Rauch, G. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic systems. American Institute of Aeronautics and Astronautics Journal (AIAAJ), 3(8):1445–1450, 1965. S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999. R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, 2000. E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 605–612, 2003. D. M. Titterington, A. F. M. 
Smith, and U. E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985. H. Tong. Nonlinear Time Series Analysis: A Dynamical Systems Approach. Oxford Univ. Press, 1990. M. Verhaegen and P. Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions of Automatic Control, 31(10):907–917, 1986. M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1999. O. Zoeter. Monitoring Non-Linear and Switching Dynamical Systems. PhD thesis, Radboud University Nijmegen, 2005. 2540

4 0.41019505 64 jmlr-2006-Noisy-OR Component Analysis and its Application to Link Analysis

Author: Tomáš Šingliar, Miloš Hauskrecht

Abstract: We develop a new component analysis framework, the Noisy-Or Component Analyzer (NOCA), that targets high-dimensional binary data. NOCA is a probabilistic latent variable model that assumes the expression of observed high-dimensional binary data is driven by a small number of hidden binary sources combined via noisy-or units. The component analysis procedure is equivalent to learning of NOCA parameters. Since the classical EM formulation of the NOCA learning problem is intractable, we develop its variational approximation. We test the NOCA framework on two problems: (1) a synthetic image-decomposition problem and (2) a co-citation data analysis problem for thousands of CiteSeer documents. We demonstrate good performance of the new model on both problems. In addition, we contrast the model to two mixture-based latent-factor models: the probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). Differing assumptions underlying these models cause them to discover different types of structure in co-citation data, thus illustrating the benefit of NOCA in building our understanding of high-dimensional data sets. Keywords: component analysis, vector quantization, variational learning, link analysis

5 0.35354644 80 jmlr-2006-Segmental Hidden Markov Models with Random Effects for Waveform Modeling

Author: Seyoung Kim, Padhraic Smyth

Abstract: This paper proposes a general probabilistic framework for shape-based modeling and classification of waveform data. A segmental hidden Markov model (HMM) is used to characterize waveform shape and shape variation is captured by adding random effects to the segmental model. The resulting probabilistic framework provides a basis for learning of waveform models from data as well as parsing and recognition of new waveforms. Expectation-maximization (EM) algorithms are derived and investigated for fitting such models to data. In particular, the “expectation conditional maximization either” (ECME) algorithm is shown to provide significantly faster convergence than a standard EM procedure. Experimental results on two real-world data sets demonstrate that the proposed approach leads to improved accuracy in classification and segmentation when compared to alternatives such as Euclidean distance matching, dynamic time warping, and segmental HMMs without random effects. Keywords: waveform recognition, random effects, segmental hidden Markov models, EM algorithm, ECME

6 0.33403882 28 jmlr-2006-Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

7 0.31839877 4 jmlr-2006-A Linear Non-Gaussian Acyclic Model for Causal Discovery

8 0.22047496 49 jmlr-2006-Learning Parts-Based Representations of Data

9 0.21834816 75 jmlr-2006-Policy Gradient in Continuous Time

10 0.2151161 13 jmlr-2006-Adaptive Prototype Learning Algorithms: Theoretical and Experimental Studies

11 0.19064368 15 jmlr-2006-Bayesian Network Learning with Parameter Constraints     (Special Topic on Machine Learning and Optimization)

12 0.19024345 85 jmlr-2006-Statistical Comparisons of Classifiers over Multiple Data Sets

13 0.17617874 21 jmlr-2006-Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis

14 0.1748504 54 jmlr-2006-Learning the Structure of Linear Latent Variable Models

15 0.13545464 37 jmlr-2006-Incremental Algorithms for Hierarchical Classification

16 0.13387261 69 jmlr-2006-One-Class Novelty Detection for Seizure Analysis from Intracranial EEG

17 0.13012363 30 jmlr-2006-Evolutionary Function Approximation for Reinforcement Learning

18 0.13009356 90 jmlr-2006-Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition

19 0.1177814 33 jmlr-2006-Fast SDP Relaxations of Graph Cut Clustering, Transduction, and Other Combinatorial Problems     (Special Topic on Machine Learning and Optimization)

20 0.11762562 71 jmlr-2006-Optimising Kernel Parameters and Regularisation Coefficients for Non-linear Discriminant Analysis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(8, 0.025), (36, 0.041), (37, 0.309), (45, 0.015), (50, 0.026), (63, 0.219), (68, 0.025), (78, 0.056), (79, 0.016), (81, 0.04), (84, 0.01), (89, 0.011), (90, 0.02), (91, 0.016), (96, 0.056)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.84488732 57 jmlr-2006-Linear State-Space Models for Blind Source Separation

2 0.68281442 53 jmlr-2006-Learning a Hidden Hypergraph

Author: Dana Angluin, Jiang Chen

Abstract: We consider the problem of learning a hypergraph using edge-detecting queries. In this model, the learner may query whether a set of vertices induces an edge of the hidden hypergraph or not. We show that an r-uniform hypergraph with m edges and n vertices is learnable with O(2^{4r} m · poly(r, log n)) queries with high probability. The queries can be made in O(min(2^r (log m + r)^2, (log m + r)^3)) rounds. We also give an algorithm that learns an almost uniform hypergraph of dimension r using O(2^{O((1+∆/2)r)} · m^{1+∆/2} · poly(log n)) queries with high probability, where ∆ is the difference between the maximum and the minimum edge sizes. This upper bound matches our lower bound of Ω((m/(1+∆/2))^{1+∆/2}) for this class of hypergraphs in terms of dependence on m. The queries can also be made in O((1 + ∆) · min(2^r (log m + r)^2, (log m + r)^3)) rounds. Keywords: query learning, hypergraph, multiple round algorithm, sampling, chemical reaction network
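To make the query model above concrete, the toy sketch below implements an edge-detecting oracle over a small hand-picked hypergraph together with a generic shrinking routine that recovers one hidden edge from any positive query set. The shrinking procedure is a standard illustration of what such queries allow, not the algorithm analysed in the paper.

```python
# Toy edge-detecting query model (the hidden hypergraph and the routine are our own example).
hidden_edges = [{1, 4, 7}, {2, 3, 9}]        # unknown to the learner

def query(S):
    """Does the vertex set S contain some edge of the hidden hypergraph?"""
    S = set(S)
    return any(edge <= S for edge in hidden_edges)

def shrink_to_edge(S):
    """Given a positive set S, drop vertices while the query stays positive;
    what remains is a single hidden edge."""
    S = set(S)
    for v in sorted(S):
        if query(S - {v}):
            S.remove(v)
    return S

print(shrink_to_edge(range(1, 11)))          # e.g. recovers {2, 3, 9}
```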

3 0.59602332 38 jmlr-2006-Incremental Support Vector Learning: Analysis, Implementation and Applications     (Special Topic on Machine Learning and Optimization)

Author: Pavel Laskov, Christian Gehl, Stefan Krüger, Klaus-Robert Müller

Abstract: Incremental Support Vector Machines (SVM) are instrumental in practical applications of online learning. This work focuses on the design and analysis of efficient incremental SVM learning, with the aim of providing a fast, numerically stable and robust implementation. A detailed analysis of convergence and of algorithmic complexity of incremental SVM learning is carried out. Based on this analysis, a new design of storage and numerical operations is proposed, which speeds up the training of an incremental SVM by a factor of 5 to 20. The performance of the new algorithm is demonstrated in two scenarios: learning with limited resources and active learning. Various applications of the algorithm, such as in drug discovery, online monitoring of industrial devices and surveillance of network traffic, can be foreseen. Keywords: incremental SVM, online learning, drug discovery, intrusion detection

4 0.58889854 68 jmlr-2006-On the Complexity of Learning Lexicographic Strategies

Author: Michael Schmitt, Laura Martignon

Abstract: Fast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a successful strategy in decision making with limited resources. Take-the-best searches for a sufficiently good ordering of cues (or features) in a task where objects are to be compared lexicographically. We investigate the computational complexity of finding optimal cue permutations for lexicographic strategies and prove that the problem is NP-complete. It follows that no efficient (that is, polynomial-time) algorithm computes optimal solutions, unless P = NP. We further analyze the complexity of approximating optimal cue permutations for lexicographic strategies. We show that there is no efficient algorithm that approximates the optimum to within any constant factor, unless P = NP. The results have implications for the complexity of learning lexicographic strategies from examples. They show that learning them in polynomial time within the model of agnostic probably approximately correct (PAC) learning is impossible, unless RP = NP. We further consider greedy approaches for building lexicographic strategies and determine upper and lower bounds for the performance ratio of simple algorithms. Moreover, we present a greedy algorithm that performs provably better than take-the-best. Tight bounds on the sample complexity for learning lexicographic strategies are also given in this article. Keywords: bounded rationality, fast and frugal heuristic, PAC learning, NP-completeness, hardness of approximation, greedy method
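The lexicographic comparison that take-the-best performs is simple enough to spell out: cues are checked in a fixed order, and the first cue on which the two objects differ decides. The sketch below uses invented cue names, values, and an arbitrary cue order purely for illustration; it is not the paper's experimental setup.

```python
# Minimal lexicographic (take-the-best style) comparison of two objects.
# Cue names, values, and ordering are made up for illustration.
def lexicographic_choice(obj_a, obj_b, cue_order):
    for cue in cue_order:
        if obj_a[cue] != obj_b[cue]:
            return 'a' if obj_a[cue] > obj_b[cue] else 'b'
    return 'tie'                      # no cue discriminates

city_a = {'has_airport': 1, 'is_capital': 0, 'has_university': 1}
city_b = {'has_airport': 1, 'is_capital': 1, 'has_university': 0}
print(lexicographic_choice(city_a, city_b,
                           ['has_airport', 'is_capital', 'has_university']))
# 'is_capital' is the first discriminating cue, so the answer is 'b'
```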

5 0.44832438 61 jmlr-2006-Maximum-Gain Working Set Selection for SVMs     (Special Topic on Machine Learning and Optimization)

Author: Tobias Glasmachers, Christian Igel

Abstract: Support vector machines are trained by solving constrained quadratic optimization problems. This is usually done with an iterative decomposition algorithm operating on a small working set of variables in every iteration. The training time strongly depends on the selection of these variables. We propose the maximum-gain working set selection algorithm for large scale quadratic programming. It is based on the idea of greedily maximizing the progress in each single iteration. The algorithm takes second order information from cached kernel matrix entries into account. We prove the convergence to an optimal solution of a variant termed hybrid maximum-gain working set selection. This method is empirically compared to the prominent most violating pair selection and the latest algorithm using second order information. For large training sets our new selection scheme is significantly faster. Keywords: working set selection, sequential minimal optimization, quadratic programming, support vector machines, large scale optimization
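For intuition about what a gain-based working set selection computes, the sketch below evaluates the standard second-order gain of an unconstrained step on a pair of dual variables, (g_i - g_j)^2 / (2 (K_ii + K_jj - 2 K_ij)), and picks the best pair. This is a generic textbook quantity with made-up data, not necessarily the exact criterion or constraint handling of the paper.

```python
import numpy as np

# Second-order gain of an unconstrained step on the dual pair (i, j) in an
# SMO-style solver; data, kernel, and gradient here are placeholders.
def pair_gain(i, j, grad, K):
    num = grad[i] - grad[j]
    denom = K[i, i] + K[j, j] - 2.0 * K[i, j]
    return 0.5 * num ** 2 / denom if denom > 1e-12 else 0.0

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
K = X @ X.T                                  # linear kernel matrix
grad = rng.normal(size=5)                    # stand-in for the current dual gradient
best_gain, i, j = max((pair_gain(i, j, grad, K), i, j)
                      for i in range(5) for j in range(i + 1, 5))
print(best_gain, (i, j))
```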

6 0.42618778 52 jmlr-2006-Learning Spectral Clustering, With Application To Speech Separation

7 0.41077781 14 jmlr-2006-An Efficient Implementation of an Active Set Method for SVMs    (Special Topic on Machine Learning and Optimization)

8 0.39252654 44 jmlr-2006-Large Scale Transductive SVMs

9 0.38327459 64 jmlr-2006-Noisy-OR Component Analysis and its Application to Link Analysis

10 0.38071996 41 jmlr-2006-Kernel-Based Learning of Hierarchical Multilabel Classification Models     (Special Topic on Machine Learning and Optimization)

11 0.37210858 28 jmlr-2006-Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

12 0.37112319 18 jmlr-2006-Building Support Vector Machines with Reduced Classifier Complexity     (Special Topic on Machine Learning and Optimization)

13 0.37053773 72 jmlr-2006-Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems     (Special Topic on Machine Learning and Optimization)

14 0.36802337 1 jmlr-2006-A Direct Method for Building Sparse Kernel Learning Algorithms

15 0.36542132 51 jmlr-2006-Learning Sparse Representations by Non-Negative Matrix Factorization and Sequential Cone Programming     (Special Topic on Machine Learning and Optimization)

16 0.36310622 47 jmlr-2006-Learning Image Components for Object Recognition

17 0.36293763 56 jmlr-2006-Linear Programs for Hypotheses Selection in Probabilistic Inference Models     (Special Topic on Machine Learning and Optimization)

18 0.35653847 94 jmlr-2006-Using Machine Learning to Guide Architecture Simulation

19 0.35645443 21 jmlr-2006-Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis

20 0.35614628 92 jmlr-2006-Toward Attribute Efficient Learning of Decision Lists and Parities