nips nips2002 nips2002-14 knowledge-graph by maker-knowledge-mining

14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation


Source: pdf

Author: Gil-jin Jang, Te-Won Lee

Abstract: We present a new technique for achieving source separation when given only a single channel recording. The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of basis filters in time domain that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis filters. For each time point we infer the source signals and their contribution factors. This inference is possible due to the prior knowledge of the basis filters and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observation and our experimental results exhibit a high level of separation performance for mixtures of two music signals as well as the separation of two voice signals.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We present a new technique for achieving source separation when given only a single channel recording. [sent-10, score-0.675]

2 The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of basis filters in time domain that encode the sources in a statistically efficient manner. [sent-11, score-1.055]

3 We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis filters. [sent-12, score-0.446]

4 For each time point we infer the source signals and their contribution factors. [sent-13, score-0.564]

5 This inference is possible due to the prior knowledge of the basis filters and the associated coefficient densities. [sent-14, score-0.225]

6 A flexible model for density estimation allows accurate modeling of the observation and our experimental results exhibit a high level of separation performance for mixtures of two music signals as well as the separation of two voice signals. [sent-15, score-0.918]

7 1 Introduction Extracting individual sound sources from an additive mixture of different signals has been attractive to many researchers in computational auditory scene analysis (CASA) [1] and independent component analysis (ICA) [2]. [sent-16, score-0.873]

8 In order to formulate the problem, we assume that the observed signal is an addition of independent source signals, $y^t = \sum_i \lambda_i x_i^t$ (1), where $x_i^t$ is the sampled value of the $i$-th source signal and $\lambda_i$ is the gain of each source, which is fixed over time. [sent-17, score-1.191]
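
As a rough illustration of this mixing model (a minimal sketch, not taken from the paper; the toy signals, gains, and helper name are assumptions for the example), the single channel observation can be simulated as a fixed-gain sum of the sources:

    import numpy as np

    def mix_single_channel(sources, gains):
        """Simulate Equation 1: y[t] = sum_i gains[i] * x_i[t], with one fixed gain per source."""
        sources = np.asarray(sources, dtype=float)   # shape (n_sources, n_samples)
        gains = np.asarray(gains, dtype=float)       # gains are fixed over time
        return gains @ sources                       # single channel mixture, shape (n_samples,)

    # Example with two toy sources:
    t = np.arange(8000) / 8000.0
    x1 = np.sin(2 * np.pi * 440 * t)                 # a 440 Hz tone
    x2 = np.sign(np.sin(2 * np.pi * 3 * t))          # a slow square wave
    y = mix_single_channel([x1, x2], gains=[0.6, 0.4])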

9 Note that superscripts indicate sample indices of time-varying signals and subscripts indicate the source identification. [sent-18, score-0.485]

10 The gain constants are affected by several factors, such as powers, locations, directions and many other characteristics of the source generators as well as sensitivities of the sensors. [sent-19, score-0.28]

11 It is convenient to assume all the sources to have zero mean and unit variance. [sent-20, score-0.294]
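
A one-line sketch of that normalization (assumed preprocessing, not quoted from the paper):

    import numpy as np

    def standardize(x):
        """Rescale a source signal to zero mean and unit variance."""
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()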

12 Several earlier attempts at this problem [3, 4, 5, 6] have been proposed based on the presumed properties of the individual sounds in the frequency domain. [sent-23, score-0.17]

13 Figure 1: Generative models for the observed mixture and original source signals. (A) A single channel observation is generated by a weighted sum of two source signals with different characteristics. [sent-32, score-1.274]

14 (B) Individual source signals are generated by weighted linear superpositions of basis functions. [sent-33, score-0.759]

15 The distributions are modeled by generalized Gaussian density functions of the form $p(x) \propto \exp(-|x|^{q})$, which provide good matches to the non-Gaussian distributions by varying the exponent $q$. [sent-36, score-0.209]
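
For concreteness, a minimal sketch of this density (the normalization and the scale parameter are standard but assumed here, not copied from the paper):

    import numpy as np
    from scipy.special import gammaln

    def gen_gaussian_logpdf(c, q, scale=1.0):
        """log p(c) for p(c) = q / (2*scale*Gamma(1/q)) * exp(-|c/scale|**q)."""
        c = np.asarray(c, dtype=float)
        log_norm = np.log(q) - np.log(2.0 * scale) - gammaln(1.0 / q)
        return log_norm - np.abs(c / scale) ** q

Small exponents (q < 1) give sharply peaked, heavy-tailed densities that match sparse filter outputs, while q = 2 recovers a Gaussian.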

16 Conventional ICA algorithms require that the number of observed signals is greater than or equal to the number of sources [2]. [sent-38, score-0.499]

17 Although some recent overcomplete representations may relax this assumption, the problem of separating sources from a single channel observation remains difficult. [sent-39, score-0.523]

18 ICA has been shown to be highly effective in other aspects such as encoding speech signals [7] and natural sounds [8]. [sent-40, score-0.424]

19 The basis functions and the coefficients learned by ICA constitute an efficient representation of the given time-ordered sequences of a sound source by estimating the maximum likelihood densities, thus reflecting the statistical structures of the sources. [sent-41, score-0.869]

20 The method presented in this paper aims at exploiting the ICA basis functions for separating mixed sources from a single channel observation. [sent-42, score-0.942]

21 Sets of basis functions are learned a priori from a training data set and these sets are used to separate the unknown test sound sources. [sent-43, score-0.5]

22 The algorithm recovers the original auditory streams in a number of gradient-ascent adaptation steps maximizing the log-likelihood of the separated signals, calculated using the basis functions and the probability density functions (pdf's) of their coefficients, i.e. the outputs of the ICA basis filters. [sent-44, score-0.85]

23 The objective function not only makes use of the ICA basis functions as a strong prior for the source characteristics, but also their associated coefficient pdf's modeled by generalized Gaussian distributions [9]. [sent-45, score-0.674]

24 Experiments show that separating the two different sources was quite successful for simulated mixtures of rock and jazz music, and of male and female speech signals. [sent-46, score-1.366]

25 2 Generative Models for Mixture and Source Signals The algorithm first involves the learning of the time-domain basis functions of the sound sources that we are interested in separating from a given training database. [sent-47, score-0.798]

26 We assume two different types of generative models in the observed single channel mixture as well as in the original sources. [sent-49, score-0.348]

27 This corresponds to the situation defined in Section 1 in that two different signals are mixed and observed in a single sensor. [sent-53, score-0.406]

28 For the individual source signals, we adopt a decomposition-based approach as another generative model. [sent-54, score-0.359]

29 This approach was employed formerly in analyzing sound sources [7, 8] by expressing a fixed-length segment drawn from a time-varying signal as a linear superposition of a number of elementary patterns, called basis functions, with scalar multiples (Figure 1-B). [sent-55, score-0.835]

30 The constructed column vector is then expressed as a linear combination of the basis functions such that $\mathbf{x}_i = \sum_{k=1}^{M} a_{ik}\,\mathbf{b}_{ik} = \mathbf{B}_i \mathbf{a}_i$ (2).

31 Here $M$ is the number of basis functions, $\mathbf{b}_{ik}$ is the $k$-th basis function of source $i$ in the form of an $N$-dimensional column vector, and $a_{ik}$ is its coefficient (weight). [sent-58, score-0.668]

32 The second subscript, following the source index in $a_{ik}$, represents the component number of the coefficient vector $\mathbf{a}_i$. [sent-63, score-0.305]

33 The inverse of the basis matrix, $\mathbf{B}_i^{-1}$, refers to the ICA filters. [sent-65, score-0.194]
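
A sketch of this encode/decode relation (hypothetical helper names; the basis matrix is whatever was learned beforehand): the coefficients of a segment are obtained by applying the inverse basis, i.e. the ICA filters, and the segment is reconstructed by the basis itself.

    import numpy as np

    def segment_to_coefficients(segment, basis):
        """a = B^{-1} x: encode an N-sample segment with the learned basis (cf. Equation 2)."""
        W = np.linalg.inv(basis)        # ICA filter matrix, inverse of the basis matrix
        return W @ segment              # coefficient (weight) vector

    def coefficients_to_segment(coeffs, basis):
        """x = B a: reconstruct the segment as a weighted sum of the basis functions."""
        return basis @ coeffs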

34 With the generalized Gaussian ICA learning algorithm [9], the basis functions and their individual parameter set are obtained beforehand and used as prior information for the following source separation algorithm. [sent-73, score-0.885]
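
The paper learns these bases with a generalized Gaussian ICA algorithm [9]; as a rough stand-in for illustration only, the sketch below uses FastICA on randomly drawn training segments (the segment length, segment count, and the substitution of FastICA are all assumptions, not the authors' procedure):

    import numpy as np
    from sklearn.decomposition import FastICA

    def learn_basis(training_signal, seg_len=64, n_segments=5000, seed=0):
        """Learn a square basis from random seg_len-sample segments of one source type."""
        rng = np.random.default_rng(seed)
        starts = rng.integers(0, len(training_signal) - seg_len, size=n_segments)
        X = np.stack([training_signal[s:s + seg_len] for s in starts])
        ica = FastICA(n_components=seg_len, max_iter=1000)
        ica.fit(X)
        return ica.mixing_              # columns play the role of the basis functions

The generalized Gaussian parameters of the coefficient densities would then be fitted separately on the filter outputs of the training segments.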

35 The probability of the source signals is computed by the generalized Gaussian parameters in the transformed domain, and the method performs maximum a posteriori (MAP) estimation in a number of adaptation steps on the source signals to maximize the data likelihood. [sent-75, score-1.23]

36 Scaling factors of the generative model are learned as well. [sent-76, score-0.138]

37 1 MAP estimation of Source Signals We have demonstrated that the learned basis filters maximize the likelihood of the given data. [sent-78, score-0.256]

38 Suppose we know what kind of sound sources have been mixed and that we are given the set of basis filters from a training set. [sent-79, score-0.801]

39 In our problem of single channel separation, half of the solution is already given by the constraint of Equation 2, whose basis functions were obtained from the basis learning data (Figure 1-B). [sent-82, score-0.386]

40 Essentially, the goal of the source inferring algorithm presented in this paper is to complement the remaining half with the statistical information given by a set of coefficient density parameters . [sent-83, score-0.345]

41 The likelihood of a source is given by the generalized Gaussian density function in Equation 5, where the parameters of all the coefficient densities are grouped together and the ordered-set notation denotes the elements from one index to another. [sent-90, score-0.155]

42 Assuming independence over time, the probability of the whole signal is obtained from the marginal probabilities of all the possible segments (Equation 6). [sent-91, score-0.144]

43 The objective function to be maximized is the product of the data likelihoods of both sound sources; its log is given in Equation 7, and adaptation rules for the source signals and their contribution factors are derived toward its maximum. [sent-92, score-0.193]
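
Under this independence-over-time assumption, the log objective can be sketched as a sum of generalized Gaussian log-densities (up to constants) of the filter outputs over segments of both estimated sources; the use of non-overlapping segments and a single scale per source below are simplifications, not the paper's exact formulation.

    import numpy as np

    def source_log_likelihood(x_hat, basis, q, scale):
        """Sum of generalized Gaussian log-densities (up to a constant) of the coefficients
        over consecutive non-overlapping segments of one estimated source."""
        N = basis.shape[0]
        W = np.linalg.inv(basis)                     # ICA filters
        total = 0.0
        for start in range(0, len(x_hat) - N + 1, N):
            coeffs = W @ x_hat[start:start + N]
            total -= np.sum(np.abs(coeffs / scale) ** q)
        return total

    def objective(x1_hat, x2_hat, bases, qs, scales):
        """Log of the product of both sources' data likelihoods (cf. Equation 7)."""
        return (source_log_likelihood(x1_hat, bases[0], qs[0], scales[0]) +
                source_log_likelihood(x2_hat, bases[1], qs[1], scales[1]))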

44 The adaptation is done on the scaled source values rather than on the raw sources, in order to infer the sound sources and their contribution factors simultaneously. [sent-94, score-0.711]

45 We are given the single channel data, and we have estimates of the source signals at every adaptation step. [sent-105, score-0.556]

46 (A) At each time point, the current estimates of the source signals are passed through the basis filters, generating sparse codes that are statistically independent. [sent-106, score-0.54]

47 (B) The stochastic gradient for each code is obtained by taking the derivative of the log-likelihood; the gradient is then transformed back to the source domain. [sent-107, score-0.507]

48 (C) The individual gradients are combined and added to the current estimates of the source signals. [sent-109, score-0.344]
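
A minimal sketch of one such adaptation step on a single segment, following stages (A)-(C); the step size is an assumption, and the single channel constraint of Equation 1 (which the full algorithm also maintains) is omitted here for brevity.

    import numpy as np

    def adapt_segment(x_hat, basis, q, scale, step=1e-3):
        """One gradient-ascent step on a segment estimate."""
        W = np.linalg.inv(basis)
        coeffs = W @ x_hat                            # (A) sparse code of the current estimate
        grad_code = -q * np.sign(coeffs) * (np.abs(coeffs / scale) + 1e-12) ** (q - 1) / scale
        grad_source = W.T @ grad_code                 # (B) gradient transformed to the source domain
        return x_hat + step * grad_source             # (C) update the estimate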

49 2 Estimating the contribution factors: updating the contribution factors can be accomplished by simply finding their maximum a posteriori values. [sent-111, score-0.147]

50 The value maximizing the posterior probability of the contribution factors, which combines their prior density with the data likelihood (Equation 9), also maximizes its log (Equation 10), where the log-likelihood of the estimated sources is defined in Equation 7. [sent-117, score-0.365]
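
As a simplified stand-in for this MAP update (a flat prior on the factors and a plain least-squares fit are assumed here, rather than the paper's Equations 9-10), the contribution factors can be re-estimated from the current source estimates and the mixture:

    import numpy as np

    def update_factors(y, x1_hat, x2_hat):
        """Least-squares fit of the contribution factors to the single channel mixture."""
        X = np.stack([x1_hat, x2_hat], axis=1)   # (n_samples, 2)
        lam, *_ = np.linalg.lstsq(X, y, rcond=None)
        return lam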

53 Figure 4: Average powerspectra of the 4 sound sources (rock music, jazz music, male speech, and female speech). [sent-155, score-0.382]

54 The frequency scale ranges from 0 to 4 kHz (x-axis), since all the signals are sampled at 8 kHz. [sent-156, score-0.205]

55 Figure 3: Waveforms of the four sound sources (rock, jazz, male, female), examples of the learned basis functions (5 were chosen out of 64), and the corresponding coefficient distributions modeled by generalized Gaussians. [sent-158, score-0.589]

56 The full set of basis functions is also available at the website. [sent-159, score-0.274]

57 4 Experiments and Discussion We have tested the performance of the proposed method on the single channel mixtures of four different sound types. [sent-160, score-0.421]

58 They were monaural signals of rock and jazz music, and of male and female speech. [sent-161, score-0.954]

59 We used different sets of speech signals for learning basis functions and for generating the mixtures. [sent-162, score-0.612]

60 For the mixture generation, two sentences of the target speakers ‘mcpm0’ and ‘fdaw0’, one for each, were selected from the TIMIT speech database. [sent-163, score-0.216]

61 Rock music was mainly composed of guitar and drum sounds, and jazz was generated by a wind instrument. [sent-165, score-0.499]

62 All signals were downsampled to 8 kHz from the original 44.1 kHz. [sent-167, score-0.233]
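
The downsampling step can be reproduced along these lines (a sketch; the polyphase resampler choice is an assumption):

    import numpy as np
    from scipy.signal import resample_poly

    def downsample_to_8k(signal, orig_rate=44100):
        """Resample a waveform from orig_rate to 8 kHz."""
        g = np.gcd(8000, orig_rate)
        return resample_poly(signal, up=8000 // g, down=orig_rate // g)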

63 Figure 3 displays the actual sources, adapted basis functions, and their coefficient distributions. [sent-171, score-0.194]

64 Music basis functions exhibit consistent amplitudes with harmonics, and the speech basis functions are similar to Gabor wavelets. [sent-172, score-0.681]

65 Figure 4 compares the 4 sources by their average spectra. [sent-173, score-0.294]

66 One might expect that simple filtering or masking cannot separate the mixed sources clearly. [sent-175, score-0.456]

67 Before actual separation, the source signals were initialized to the values of the mixture signal, and the initial contribution factors were all set so as to satisfy Equation 1. [sent-176, score-0.538]

68 The adaptation was repeated for more than 300 steps on each sample, and the scaling factors were updated every 10 steps. [sent-177, score-0.185]
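
Putting the pieces together, the overall procedure can be sketched as below; it reuses the adapt_segment and update_factors helpers from the sketches above, so the per-segment update, the constraint projection, and the initial factor values are all assumptions rather than the authors' exact recipe.

    import numpy as np

    def separate(y, basis1, basis2, q1, q2, s1, s2, n_steps=300):
        """Schematic separation loop: initialize both estimates to the mixture, adapt them
        by gradient ascent, and refresh the contribution factors every 10 steps."""
        x1_hat = y.astype(float)
        x2_hat = y.astype(float)
        lam = np.array([0.5, 0.5])
        N = basis1.shape[0]
        for step in range(n_steps):
            for start in range(0, len(y) - N + 1, N):
                seg = slice(start, start + N)
                x1_hat[seg] = adapt_segment(x1_hat[seg], basis1, q1, s1)
                x2_hat[seg] = adapt_segment(x2_hat[seg], basis2, q2, s2)
                # re-impose the single channel constraint of Equation 1 on this segment
                residual = y[seg] - (lam[0] * x1_hat[seg] + lam[1] * x2_hat[seg])
                x1_hat[seg] += residual / (2 * lam[0])
                x2_hat[seg] += residual / (2 * lam[1])
            if (step + 1) % 10 == 0:
                lam = update_factors(y, x1_hat, x2_hat)
        return lam[0] * x1_hat, lam[1] * x2_hat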

69 Table 1 reports the signal-to-noise ratios (SNRs) of the mixed signal and of the recovered results, each measured against the original sources. [sent-178, score-0.607]
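
The SNR figure of merit can be computed as below (the standard definition; whether the authors used exactly this variant is an assumption):

    import numpy as np

    def snr_db(reference, estimate):
        """SNR in dB of an estimate against the original source."""
        reference = np.asarray(reference, dtype=float)
        estimate = np.asarray(estimate, dtype=float)
        noise = reference - estimate
        return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))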

70 In terms of total SNR increase, the mixtures containing music were recovered more cleanly than the male-female mixture. [sent-179, score-0.317]

71 Separation of jazz music and male speech was the best, and the waveforms are illustrated in Figure 5.

72 Figure 5: Separation result for the mixture of jazz music and male speech. [sent-193, score-0.694]

73 In the vertical order: the original sources, the mixed signal, and the recovered signals. [sent-194, score-0.607]

74 We conjecture from the average spectra of the sources in Figure 4 that although there exists plenty of overlap between jazz and speech, the structures are dissimilar: [sent-196, score-0.598]

75 the frequency components of jazz change less, so we were able to obtain relatively good SNR results. [sent-198, score-0.293]

76 However, rock music exhibits a scattered spectrum and a less characteristic structure. [sent-199, score-0.465]

77 This explains the relatively poorer performance on the rock mixtures. [sent-200, score-0.178]

78 It is very difficult to compare a separation method with other CASA techniques, because their approaches are so different in many ways that an optimal tuning of their parameters would be beyond the scope of this paper. [sent-201, score-0.203]

79 However, we compared our method with Wiener filtering [4], which provides optimal masking filters in the frequency domain if the true spectrogram is given. [sent-202, score-0.133]

80 So, we assumed that the other source was completely known. [sent-203, score-0.28]
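
This oracle baseline can be sketched as a time-frequency Wiener mask built from the known clean sources (the STFT parameters below are assumptions):

    import numpy as np
    from scipy.signal import stft, istft

    def oracle_wiener(y, s_target, s_other, fs=8000, nperseg=256):
        """Weight each time-frequency bin of the mixture by the target's share of the power,
        computed from the known clean sources."""
        _, _, Y = stft(y, fs=fs, nperseg=nperseg)
        _, _, St = stft(s_target, fs=fs, nperseg=nperseg)
        _, _, So = stft(s_other, fs=fs, nperseg=nperseg)
        mask = np.abs(St) ** 2 / (np.abs(St) ** 2 + np.abs(So) ** 2 + 1e-12)
        _, x_hat = istft(mask * Y, fs=fs, nperseg=nperseg)
        return x_hat[:len(y)]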

81 Spectral techniques assume that sources are disjoint in the spectrogram, which frequently results in audible distortions of the signal in the regions where the assumption does not hold. [sent-211, score-0.461]

82 Recent time-domain filtering techniques are based on splitting the whole signal space into several disjoint subspaces. [sent-212, score-0.167]

83 Our method avoids these strong assumptions by utilizing a prior set of basis functions that captures the inherent statistical structures of the source signal. [sent-214, score-0.681]

84 This generative model therefore makes use of spectral and temporal structures at the same time. [sent-215, score-0.145]

85 The constraints are dictated by the ICA algorithm, which forces the basis functions to result in an efficient representation, i.e. linearly independent source coefficients; both the basis functions and their associated coefficient pdf's serve as priors. [sent-216, score-0.274]

86 Table 1: SNR results. [sent-218, score-0.554]

87 R, J, M, F stand for rock, jazz music, male, and female speech. [sent-219, score-0.356]

88 The ‘Mix’ columns list the sources that are mixed, and the remaining columns give the calculated SNRs of the mixed signal and of the recovered sources with respect to the original sources. [sent-221, score-1.315]

89 We have also performed experiments with the set of basis functions learned from the test sounds and the SNR decreased on average by 1dB. [sent-257, score-0.393]

90 5 Conclusions We presented a technique for single channel source separation utilizing the time-domain ICA basis functions. [sent-258, score-0.905]

91 Instead of traditional prior knowledge of the sources, we exploited the statistical structures of the sources that are inherently captured by the basis functions and their coefficients learned from a training set. [sent-259, score-0.579]

92 The algorithm recovers the original sound streams through gradient-ascent adaptation steps pursuing the maximum likelihood estimate, constrained by the parameters of the basis filters and the generalized Gaussian distributions of the filter coefficients. [sent-260, score-0.698]

93 With the separation results, we demonstrated that the proposed method is applicable to real-world problems such as blind source separation, denoising, and restoration of corrupted or lost data. [sent-261, score-0.565]

94 Our current research includes the extension of this framework to perform model comparison to estimate which set of basis functions to use given a dictionary of basis functions. [sent-262, score-0.468]

95 This is achieved by applying a variational Bayes method to compare different basis function models to select the most likely source. [sent-263, score-0.194]

96 Nelson, “Neural dual extended Kalman filtering: Applications in speech enhancement and monaural blind signal separation,” in Proc. [sent-281, score-0.379]

97 Rayner, “Single channel signal separation using linear time-varying filters: Separability of non-stationary stochastic signals,” in Proc. [sent-285, score-0.484]

98 Roweis, “One microphone source separation,” Advances in Neural Information Processing Systems, vol. [sent-291, score-0.28]

99 Rosca, “Real-time time-frequency based blind source separation,” in Proc. [sent-297, score-0.362]

100 Jang, “The statistical structures of male and female speech signals,” in Proc. [sent-304, score-0.471]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sources', 0.294), ('source', 0.28), ('ica', 0.271), ('jazz', 0.244), ('music', 0.231), ('signals', 0.205), ('separation', 0.203), ('basis', 0.194), ('sound', 0.193), ('rock', 0.178), ('snr', 0.178), ('male', 0.166), ('channel', 0.166), ('coef', 0.165), ('speech', 0.133), ('pdf', 0.125), ('mixed', 0.12), ('lters', 0.116), ('signal', 0.115), ('female', 0.112), ('sounds', 0.086), ('adaptation', 0.084), ('blind', 0.082), ('functions', 0.08), ('sec', 0.079), ('fd', 0.063), ('gfd', 0.062), ('generalized', 0.062), ('factors', 0.061), ('structures', 0.06), ('characteristical', 0.056), ('feq', 0.056), ('jangbal', 0.056), ('powerspectra', 0.056), ('qqq', 0.056), ('sharpened', 0.056), ('mix', 0.056), ('ltering', 0.056), ('statistically', 0.055), ('mixture', 0.053), ('cients', 0.051), ('recovered', 0.05), ('cient', 0.05), ('casa', 0.049), ('jang', 0.049), ('monaural', 0.049), ('frequency', 0.049), ('contribution', 0.045), ('generative', 0.044), ('auditory', 0.042), ('masking', 0.042), ('spectrogram', 0.042), ('ug', 0.042), ('posteriori', 0.041), ('spectral', 0.041), ('steps', 0.04), ('density', 0.04), ('segment', 0.039), ('separating', 0.037), ('mixtures', 0.036), ('utilizing', 0.036), ('lee', 0.035), ('individual', 0.035), ('recovers', 0.034), ('streams', 0.034), ('icassp', 0.034), ('waveforms', 0.034), ('wiener', 0.034), ('infer', 0.034), ('learned', 0.033), ('diego', 0.033), ('exponent', 0.033), ('transformed', 0.033), ('gaussian', 0.032), ('prior', 0.031), ('observed', 0.031), ('sentences', 0.03), ('gradients', 0.029), ('marginal', 0.029), ('likelihood', 0.029), ('original', 0.028), ('modeled', 0.027), ('single', 0.026), ('segments', 0.026), ('scene', 0.026), ('disjoint', 0.026), ('techniques', 0.026), ('inferring', 0.025), ('samples', 0.025), ('component', 0.025), ('exploiting', 0.025), ('arizona', 0.024), ('ation', 0.024), ('drum', 0.024), ('fedb', 0.024), ('iconip', 0.024), ('jolla', 0.024), ('rayner', 0.024), ('tail', 0.024), ('ter', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999887 14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation

Author: Gil-jin Jang, Te-Won Lee

Abstract: We present a new technique for achieving source separation when given only a single channel recording. The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of basis filters in time domain that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis filters. For each time point we infer the source signals and their contribution factors. This inference is possible due to the prior knowledge of the basis filters and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observation and our experimental results exhibit a high level of separation performance for mixtures of two music signals as well as the separation of two voice signals.

2 0.23308904 10 nips-2002-A Model for Learning Variance Components of Natural Images

Author: Yan Karklin, Michael S. Lewicki

Abstract: We present a hierarchical Bayesian model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coefficients of an efficient image basis. This offers a novel description of higherorder image structure and provides a way to learn coarse-coded, sparsedistributed representations of abstract image properties such as object location, scale, and texture.

3 0.22100104 183 nips-2002-Source Separation with a Sensor Array using Graphical Models and Subband Filtering

Author: Hagai Attias

Abstract: Source separation is an important problem at the intersection of several fields, including machine learning, signal processing, and speech technology. Here we describe new separation algorithms which are based on probabilistic graphical models with latent variables. In contrast with existing methods, these algorithms exploit detailed models to describe source properties. They also use subband filtering ideas to model the reverberant environment, and employ an explicit model for background and sensor noise. We leverage variational techniques to keep the computational complexity per EM iteration linear in the number of frames. 1 The Source Separation Problem Fig. 1 illustrates the problem of source separation with a sensor array. In this problem, signals from K independent sources are received by each of L ≥ K sensors. The task is to extract the sources from the sensor signals. It is a difficult task, partly because the received signals are distorted versions of the originals. There are two types of distortions. The first type arises from propagation through a medium, and is approximately linear but also history dependent. This type is usually termed reverberations. The second type arises from background noise and sensor noise, which are assumed additive. Hence, the actual task is to obtain an optimal estimate of the sources from data. The task is difficult for another reason, which is lack of advance knowledge of the properties of the sources, the propagation medium, and the noises. This difficulty gave rise to adaptive source separation algorithms, where parameters that are related to those properties are adjusted to optimized a chosen cost function. Unfortunately, the intense activity this problem has attracted over the last several years [1–9] has not yet produced a satisfactory solution. In our opinion, the reason is that existing techniques fail to address three major factors. The first is noise robustness: algorithms typically ignore background and sensor noise, sometime assuming they may be treated as additional sources. It seems plausible that to produce a noise robust algorithm, noise signals and their properties must be modeled explicitly, and these models should be exploited to compute optimal source estimators. The second factor is mixing filters: algorithms typically seek, and directly optimize, a transformation that would unmix the sources. However, in many situations, the filters describing medium propagation are non-invertible, or have an unstable inverse, or have a stable inverse that is extremely long. It may hence be advantageous to Figure 1: The source separation problem. Signals from K = 2 speakers propagate toward L = 2 sensors. Each sensor receives a linear mixture of the speaker signals, distorted by multipath propagation, medium response, and background and sensor noise. The task is to infer the original signals from sensor data. estimate the mixing filters themselves, then use them to estimate the sources. The third factor is source properties: algorithms typically use a very simple source model (e.g., a one time point histogram). But in many cases one may easily obtain detailed models of the source signals. This is particularly true for speech sources, where large datasets exist and much modeling expertise has developed over decades of research. Separation of speakers is also one of the major potential commercial applications of source separation algorithms. It seems plausible that incorporating strong source models could improve performance. 
Such models may potentially have two more advantages: first, they could help limit the range of possible mixing filters by constraining the optimization problem. Second, they could help avoid whitening the extracted signals by effectively limiting their spectral range to the range characteristic of the source model. This paper makes several contributions to the problem of real world source separation. In the following, we present new separation algorithms that are the first to address all three factors. We work in the framework of probabilistic graphical models. This framework allows us to construct models for sources and for noise, combine them with the reverberant mixing transformation in a principled manner, and compute parameter and source estimates from data which are Bayes optimal. We identify three technical ideas that are key to our approach: (1) a strong speech model, (2) subband filtering, and (3) variational EM. 2 Frames, Subband Signals, and Subband Filtering We start with the concept of subband filtering. This is also a good point to define our notation. Let xm denote a time domain signal, e.g., the value of a sound pressure waveform at time point m = 0, 1, 2, .... Let Xn [k] denote the corresponding subband signal at time frame n and subband frequency k. The subband signals are obtained from the time domain signal by imposing an N -point window wm , m = 0 : N − 1 on that signal at equally spaced points nJ, n = 0, 1, 2, ..., and FFT-ing the windowed signal, N −1 e−iωk m wm xnJ+m , Xn [k] = (1) m=0 where ωk = 2πk/N and k = 0 : N − 1. The subband signals are also termed frames. Notice the difference in time scale between the time frame index n in Xn [k] and the time point index n in xn . The chosen value of the spacing J depends on the window length N . For J ≤ N the original signal xm can be synthesized exactly from the subband signals (synthesis formula omitted). An important consideration for selecting J, as well as the window shape, is behavior under filtering. Consider a filter hm applied to xm , and denote by ym the filtered signal. In the simple case hm = hδm,0 (no filtering), the subband signals keep the same dependence as the time domain ones, yn = hxn −→ Yn [k] = hXn [k] . For an arbitrary filter hm , we use the relation yn = hm xn−m −→ Yn [k] = Hm [k]Xn−m [k] , (2) m m with complex coefficients Hm [k] for each k. This relation between the subband signals is termed subband filtering, and the Hm [k] are termed subband filters. Unlike the simple case of non-filtering, the relation (2) holds approximately, but quite accurately using an appropriate choice of J and wm ; see [13] for details on accuracy. Throughout this paper, we will assume that an arbitrary filter hm can be modeled by the subband filters Hm [k] to a sufficient accuracy for our purposes. One advantage of subband filtering is that it replaces a long filter hm by a set of short independent filters Hm [k], one per frequency. This will turn out to decompose the source separation problem into a set of small (albeit coupled) problems, one per frequency. Another advantage is that this representation allows using a detailed speech model on the same footing with the filter model. This is because a speech model is defined on the time scale of a single frame, whereas the original filter hm , in contrast with Hm [k], is typically as long as 10 or more frames. As a final point on notation, we define a Gaussian distribution over a complex number Z ν by p(Z) = N (Z | µ, ν) = π exp(−ν | Z − µ |2 ) . 
Notice that this is a joint distribution over the real and imaginary parts of Z. The mean is µ = X and the precision (inverse variance) ν satisfies ν −1 = | X |2 − | µ |2 . 3 A Model for Speech Signals We assume independent sources, and model the distribution of source j by a mixture model over its subband signals Xjn , N/2−1 p(Xjn | Sjn = s) N (Xjn [k] | 0, Ajs [k]) = p(Sjn = s) = πjs k=1 p(X, S) p(Xjn | Sjn )p(Sjn ) , = (3) jn where the components are labeled by Sjn . Component s of source j is a zero mean Gaussian with precision Ajs . The mixing proportions of source j are πjs . The DAG representing this model is shown in Fig. 2. A similar model was used in [10] for one microphone speech enhancement for recognition (see also [11]). Here are several things to note about this model. (1) Each component has a characteristic spectrum, which may describe a particular part of a speech phoneme. This is because the precision corresponds to the inverse spectrum: the mean energy (w.r.t. the above distribution) of source j at frequency k, conditioned on label s, is | Xjn |2 = A−1 . (2) js A zero mean model is appropriate given the physics of the problem, since the mean of a sound pressure waveform is zero. (3) k runs from 1 to N/2 − 1, since for k > N/2, Xjn [k] = Xjn [N − k] ; the subbands k = 0, N/2 are real and are omitted from the model, a common practice in speech recognition engines. (4) Perhaps most importantly, for each source the subband signals are correlated via the component label s, as p(Xjn ) = s p(Xjn , Sjn = s) = k p(Xjn [k]) . Hence, when the source separation problem decomposes into one problem per frequency, these problems turn out to be coupled (see below), and independent frequency permutations are avoided. (5) To increase sn xn Figure 2: Graphical model describing speech signals in the subband domain. The model assumes i.i.d. frames; only the frame at time n is shown. The node Xn represents a complex N/2 − 1-dimensional vector Xn [k], k = 1 : N/2 − 1. model accuracy, a state transition matrix p(Sjn = s | Sj,n−1 = s ) may be added for each source. The resulting HMM models are straightforward to incorporate without increasing the algorithm complexity. There are several modes of using the speech model in the algorithms below. In one mode, the sources are trained online using the sensor data. In a second mode, source models are trained offline using available data on each source in the problem. A third mode correspond to separation of sources known to be speech but whose speakers are unknown. In this case, all sources have the same model, which is trained offline on a large dataset of speech signals, including 150 male and female speakers reading sentences from the Wall Street Journal (see [10] for details). This is the case presented in this paper. The training algorithm used was standard EM (omitted) using 256 clusters, initialized by vector quantization. 4 Separation of Non-Reverberant Mixtures We now present a source separation algorithm for the case of non-reverberant (or instantaneous) mixing. Whereas many algorithms exist for this case, our contribution here is an algorithm that is significantly more robust to noise. 
Its robustness results, as indicated in the introduction, from three factors: (1) explicitly modeling the noise in the problem, (2) using a strong source model, in particular modeling the temporal statistics (over N time points) of the sources, rather than one time point statistics, and (3) extracting each source signal from data by a Bayes optimal estimator obtained from p(X | Y ). A more minor point is handling the case of less sources than sensors in a principled way. The mixing situation is described by yin = j hij xjn + uin , where xjn is source signal j at time point n, yin is sensor signal i, hij is the instantaneous mixing matrix, and uin is the noise corrupting sensor i’s signal. The corresponding subband signals satisfy Yin [k] = j hij Xjn [k] + Uin [k] . To turn the last equation into a probabilistic graphical model, we assume that noise i has precision (inverse spectrum) Bi [k], and that noises at different sensors are independent (the latter assumption is often inaccurate but can be easily relaxed). This yields p(Yin | X) N (Yin [k] | = p(Y | X) p(Yin | X) , = hij Xjn [k], Bi [k]) j k (4) in which together with the speech model (3) forms a complete model p(Y, X, S) for this problem. The DAG representing this model for the case K = L = 2 is shown in Fig. 3. Notice that this model generalizes [4] to the subband domain. s1n−2 s1n−1 s1 n s2n−2 s2n−1 s2 n x1n−2 x1n−1 x1 n x2n−2 x2n−1 x2 n y1n−2 y1n−1 y1n y2n−2 y2n−1 y2 n Figure 3: Graphical model for noisy, non-reverberant 2 × 2 mixing, showing a 3 frame-long sequence. All nodes Yin and Xjn represent complex N/2 − 1-dimensional vectors (see Fig. 2). While Y1n and Y2n have the same parents, X1n and X2n , the arcs from the parents to Y2n are omitted for clarity. The model parameters θ = {hij , Bi [k], Ajs [k], πjs } are estimated from data by an EM algorithm. However, as the number of speech components M or the number of sources K increases, the E-step becomes computationally intractable, as it requires summing over all O(M K ) configurations of (S1n , ..., SKn ) at each frame. We approximate the E-step using a variational technique: focusing on the posterior distribution p(X, S | Y ), we compute an optimal tractable approximation q(X, S | Y ) ≈ p(X, S | Y ), which we use to compute the sufficient statistics (SS). We choose q(Xjn | Sjn , Y )q(Sjn | Y ) , q(X, S | Y ) = (5) jn where the hidden variables are factorized over the sources, and also over the frames (the latter factorization is exact in this model, but is an approximation for reverberant mixing). This posterior maintains the dependence of X on S, and thus the correlations between different subbands Xjn [k]. Notice also that this posterior implies a multimodal q(Xjn ) (i.e., a mixture distribution), which is more accurate than unimodal posteriors often employed in variational approximations (e.g., [12]), but is also harder to compute. A slightly more general form which allows inter-frame correlations by employing q(S | Y ) = jn q(Sjn | Sj,n−1 , Y ) may also be used, without increasing complexity. By optimizing in the usual way (see [12,13]) a lower bound on the likelihood w.r.t. q, we obtain q(Xjn [k] | Sjn = s, Y )q(Sjn = s | Y ) , q(Xjn , Sjn = s | Y ) = (6) k where q(Xjn [k] | Sjn = s, Y ) = N (Xjn [k] | ρjns [k], νjs [k]) and q(Sjn = s | Y ) = γjns . Both the factorization over k of q(Xjn | Sjn ) and its Gaussian functional form fall out from the optimization under the structural restriction (5) and need not be specified in advance. 
The variational parameters {ρjns [k], νjs [k], γjns }, which depend on the data Y , constitute the SS and are computed in the E-step. The DAG representing this posterior is shown in Fig. 4. s1n−2 s1n−1 s1 n s2n−2 s2n−1 s2 n x1n−2 x1n−1 x1 n x2n−2 x2n−1 x2 n {y im } Figure 4: Graphical model describing the variational posterior distribution applied to the model of Fig. 3. In the non-reverberant case, the components of this posterior at time frame n are conditioned only on the data Yin at that frame; in the reverberant case, the components at frame n are conditioned on the data Yim at all frames m. For clarity and space reasons, this distinction is not made in the figure. After learning, the sources are extracted from data by a variational approximation of the minimum mean squared error estimator, ˆ Xjn [k] = E(Xjn [k] | Y ) = dX q(X | Y )Xjn [k] , (7) i.e., the posterior mean, where q(X | Y ) = S q(X, S | Y ). The time domain waveform xjm is then obtained by appropriately patching together the subband signals. ˆ M-step. The update rule for the mixing matrix hij is obtained by solving the linear equation Bi [k]ηij,0 [k] = hij j k Bi [k]λj j,0 [k] . (8) k The update rule for the noise precisions Bi [k] is omitted. The quantities ηij,m [k] and λj j,m [k] are computed from the SS; see [13] for details. E-step. The posterior means of the sources (7) are obtained by solving   ˆ Xjn [k] = νjn [k]−1 ˆ i Bi [k]hij Yin [k] − j =j ˆ hij Xj n [k] (9) ˆ for Xjn [k], which is a K ×K linear system for each frequency k and frame n. The equations for the SS are given in [13], which also describes experimental results. 5 Separation of Reverberant Mixtures In this section we extend the algorithm to the case of reverberant mixing. In that case, due to signal propagation in the medium, each sensor signal at time frame n depends on the source signals not just at the same time but also at previous times. To describe this mathematically, the mixing matrix hij must become a matrix of filters hij,m , and yin = hij,m xj,n−m + uin . jm It may seem straightforward to extend the algorithm derived above to the present case. However, this appearance is misleading, because we have a time scale problem. Whereas are speech model p(X, S) is frame based, the filters hij,m are generally longer than the frame length N , typically 10 frames long and sometime longer. It is unclear how one can work with both Xjn and hij,m on the same footing (and, it is easy to see that straightforward windowed FFT cannot solve this problem). This is where the idea of subband filtering becomes very useful. Using (2) we have Yin [k] = Hij,m [k]Xj,n−m [k] + Uin [k], which yields the probabilistic model jm p(Yin | X) N (Yin [k] | = Hij,m [k]Xj,n−m [k], Bi [k]) . (10) jm k Hence, both X and Y are now frame based. Combining this equation with the speech model (3), we now have a complete model p(Y, X, S) for the reverberant mixing problem. The DAG describing this model is shown in Fig. 5. s1n−2 s1n−1 s1 n s2n−2 s2n−1 s2 n x1n−2 x1n−1 x1 n x2n−2 x2n−1 x2 n y1n−2 y1n−1 y1n y2n−2 y2n−1 y2 n Figure 5: Graphical model for noisy, reverberant 2 × 2 mixing, showing a 3 frame-long sequence. Here we assume 2 frame-long filters, i.e., m = 0, 1 in Eq. (10), where the solid arcs from X to Y correspond to m = 0 (as in Fig. 3) and the dashed arcs to m = 1. While Y1n and Y2n have the same parents, X1n and X2n , the arcs from the parents to Y2n are omitted for clarity. 
The model parameters θ = {Hij,m [k], Bi [k], Ajs [k], πjs } are estimated from data by a variational EM algorithm, whose derivation generally follows the one outlined in the previous section. Notice that the exact E-step here is even more intractable, due to the history dependence introduced by the filters. M-step. The update rule for Hij,m is obtained by solving the Toeplitz system Hij ,m [k]λj j,m−m [k] = ηij,m [k] (11) j m where the quantities λj j,m [k], ηij,m [k] are computed from the SS (see [12]). The update rule for the Bi [k] is omitted. E-step. The posterior means of the sources (7) are obtained by solving  ˆ Xjn [k] = νjn [k]−1 ˆ im Bi [k]Hij,m−n [k] Yim [k] − Hij j m =jm ,m−m ˆ [k]Xj m  [k] (12) ˆ for Xjn [k]. Assuming P frames long filters Hij,m , m = 0 : P − 1, this is a KP × KP linear system for each frequency k. The equations for the SS are given in [13], which also describes experimental results. 6 Extensions An alternative technique we have been pursuing for approximating EM in our models is Sequential Rao-Blackwellized Monte Carlo. There, we sample state sequences S from the posterior p(S | Y ) and, for a given sequence, perform exact inference on the source signals X conditioned on that sequence (observe that given S, the posterior p(X | S, Y ) is Gaussian and can be computed exactly). In addition, we are extending our speech model to include features such as pitch [7] in order to improve separation performance, especially in cases with less sensors than sources [7–9]. Yet another extension is applying model selection techniques to infer the number of sources from data in a dynamic manner. Acknowledgments I thank Te-Won Lee for extremely valuable discussions. References [1] A.J. Bell, T.J. Sejnowski (1995). An information maximisation approach to blind separation and blind deconvolution. Neural Computation 7, 1129-1159. [2] B.A. Pearlmutter, L.C. Parra (1997). Maximum likelihood blind source separation: A contextsensitive generalization of ICA. Proc. NIPS-96. [3] A. Cichocki, S.-I. Amari (2002). Adaptive Blind Signal and Image Processing. Wiley. [4] H. Attias (1999). Independent Factor Analysis. Neural Computation 11, 803-851. [5] T.-W. Lee et al. (2001) (Ed.). Proc. ICA 2001. [6] S. Griebel, M. Brandstein (2001). Microphone array speech dereverberation using coarse channel modeling. Proc. ICASSP 2001. [7] J. Hershey, M. Casey (2002). Audiovisual source separation via hidden Markov models. Proc. NIPS 2001. [8] S. Roweis (2001). One Microphone Source Separation. Proc. NIPS-00, 793-799. [9] G.-J. Jang, T.-W. Lee, Y.-H. Oh (2003). A probabilistic approach to single channel blind signal separation. Proc. NIPS 2002. [10] H. Attias, L. Deng, A. Acero, J.C. Platt (2001). A new method for speech denoising using probabilistic models for clean speech and for noise. Proc. Eurospeech 2001. [11] Ephraim, Y. (1992). Statistical model based speech enhancement systems. Proc. IEEE 80(10), 1526-1555. [12] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, L.K. Saul (1999). An introduction to variational methods in graphical models. Machine Learning 37, 183-233. [13] H. Attias (2003). New EM algorithms for source separation and deconvolution with a microphone array. Proc. ICASSP 2003.

4 0.18900645 38 nips-2002-Bayesian Estimation of Time-Frequency Coefficients for Audio Signal Enhancement

Author: Patrick J. Wolfe, Simon J. Godsill

Abstract: The Bayesian paradigm provides a natural and effective means of exploiting prior knowledge concerning the time-frequency structure of sound signals such as speech and music—something which has often been overlooked in traditional audio signal processing approaches. Here, after constructing a Bayesian model and prior distributions capable of taking into account the time-frequency characteristics of typical audio waveforms, we apply Markov chain Monte Carlo methods in order to sample from the resultant posterior distribution of interest. We present speech enhancement results which compare favourably in objective terms with standard time-varying filtering techniques (and in several cases yield superior performance, both objectively and subjectively); moreover, in contrast to such methods, our results are obtained without an assumption of prior knowledge of the noise power.

5 0.17326669 147 nips-2002-Monaural Speech Separation

Author: Guoning Hu, Deliang Wang

Abstract: Monaural speech separation has been studied in previous systems that incorporate auditory scene analysis principles. A major problem for these systems is their inability to deal with speech in the highfrequency range. Psychoacoustic evidence suggests that different perceptual mechanisms are involved in handling resolved and unresolved harmonics. Motivated by this, we propose a model for monaural separation that deals with low-frequency and highfrequency signals differently. For resolved harmonics, our model generates segments based on temporal continuity and cross-channel correlation, and groups them according to periodicity. For unresolved harmonics, the model generates segments based on amplitude modulation (AM) in addition to temporal continuity and groups them according to AM repetition rates derived from sinusoidal modeling. Underlying the separation process is a pitch contour obtained according to psychoacoustic constraints. Our model is systematically evaluated, and it yields substantially better performance than previous systems, especially in the high-frequency range. 1 In t rod u ct i on In a natural environment, speech usually occurs simultaneously with acoustic interference. An effective system for attenuating acoustic interference would greatly facilitate many applications, including automatic speech recognition (ASR) and speaker identification. Blind source separation using independent component analysis [10] or sensor arrays for spatial filtering require multiple sensors. In many situations, such as telecommunication and audio retrieval, a monaural (one microphone) solution is required, in which intrinsic properties of speech or interference must be considered. Various algorithms have been proposed for monaural speech enhancement [14]. These methods assume certain properties of interference and have difficulty in dealing with general acoustic interference. Monaural separation has also been studied using phasebased decomposition [3] and statistical learning [17], but with only limited evaluation. While speech enhancement remains a challenge, the auditory system shows a remarkable capacity for monaural speech separation. According to Bregman [1], the auditory system separates the acoustic signal into streams, corresponding to different sources, based on auditory scene analysis (ASA) principles. Research in ASA has inspired considerable work to build computational auditory scene analysis (CASA) systems for sound separation [19] [4] [7] [18]. Such systems generally approach speech separation in two main stages: segmentation (analysis) and grouping (synthesis). In segmentation, the acoustic input is decomposed into sensory segments, each of which is likely to originate from a single source. In grouping, those segments that likely come from the same source are grouped together, based mostly on periodicity. In a recent CASA model by Wang and Brown [18], segments are formed on the basis of similarity between adjacent filter responses (cross-channel correlation) and temporal continuity, while grouping among segments is performed according to the global pitch extracted within each time frame. In most situations, the model is able to remove intrusions and recover low-frequency (below 1 kHz) energy of target speech. However, this model cannot handle high-frequency (above 1 kHz) signals well, and it loses much of target speech in the high-frequency range. In fact, the inability to deal with speech in the high-frequency range is a common problem for CASA systems. 
We study monaural speech separation with particular emphasis on the high-frequency problem in CASA. For voiced speech, we note that the auditory system can resolve the first few harmonics in the low-frequency range [16]. It has been suggested that different perceptual mechanisms are used to handle resolved and unresolved harmonics [2]. Consequently, our model employs different methods to segregate resolved and unresolved harmonics of target speech. More specifically, our model generates segments for resolved harmonics based on temporal continuity and cross-channel correlation, and these segments are grouped according to common periodicity. For unresolved harmonics, it is well known that the corresponding filter responses are strongly amplitude-modulated and the response envelopes fluctuate at the fundamental frequency (F0) of target speech [8]. Therefore, our model generates segments for unresolved harmonics based on common AM in addition to temporal continuity. The segments are grouped according to AM repetition rates. We calculate AM repetition rates via sinusoidal modeling, which is guided by target pitch estimated according to characteristics of natural speech. Section 2 describes the overall system. In section 3, systematic results and a comparison with the Wang-Brown system are given. Section 4 concludes the paper. 2 M od el d escri p t i on Our model is a multistage system, as shown in Fig. 1. Description for each stage is given below. 2.1 I n i t i a l p r oc e s s i n g First, an acoustic input is analyzed by a standard cochlear filtering model with a bank of 128 gammatone filters [15] and subsequent hair cell transduction [12]. This peripheral processing is done in time frames of 20 ms long with 10 ms overlap between consecutive frames. As a result, the input signal is decomposed into a group of timefrequency (T-F) units. Each T-F unit contains the response from a certain channel at a certain frame. The envelope of the response is obtained by a lowpass filter with Segregated Speech Mixture Peripheral and Initial Pitch mid-level segregation tracking processing Unit Final Resynthesis labeling segregation Figure 1. Schematic diagram of the proposed multistage system. passband [0, 1 kHz] and a Kaiser window of 18.25 ms. Mid-level processing is performed by computing a correlogram (autocorrelation function) of the individual responses and their envelopes. These autocorrelation functions reveal response periodicities as well as AM repetition rates. The global pitch is obtained from the summary correlogram. For clean speech, the autocorrelations generally have peaks consistent with the pitch and their summation shows a dominant peak corresponding to the pitch period. With acoustic interference, a global pitch may not be an accurate description of the target pitch, but it is reasonably close. Because a harmonic extends for a period of time and its frequency changes smoothly, target speech likely activates contiguous T-F units. This is an instance of the temporal continuity principle. In addition, since the passbands of adjacent channels overlap, a resolved harmonic usually activates adjacent channels, which leads to high crosschannel correlations. Hence, in initial segregation, the model first forms segments by merging T-F units based on temporal continuity and cross-channel correlation. Then the segments are grouped into a foreground stream and a background stream by comparing the periodicities of unit responses with global pitch. A similar process is described in [18]. Fig. 
2(a) and Fig. 2(b) illustrate the segments and the foreground stream. The input is a mixture of a voiced utterance and a cocktail party noise (see Sect. 3). Since the intrusion is not strongly structured, most segments correspond to target speech. In addition, most segments are in the low-frequency range. The initial foreground stream successfully groups most of the major segments. 2.2 P i t c h tr a c k i n g In the presence of acoustic interference, the global pitch estimated in mid-level processing is generally not an accurate description of target pitch. To obtain accurate pitch information, target pitch is first estimated from the foreground stream. At each frame, the autocorrelation functions of T-F units in the foreground stream are summated. The pitch period is the lag corresponding to the maximum of the summation in the plausible pitch range: [2 ms, 12.5 ms]. Then we employ the following two constraints to check its reliability. First, an accurate pitch period at a frame should be consistent with the periodicity of the T-F units at this frame in the foreground stream. At frame j, let τ ( j) represent the estimated pitch period, and A(i, j,τ ) the autocorrelation function of uij, the unit in channel i. uij agrees with τ ( j) if A(i , j , τ ( j )) / A(i, j ,τ m ) > θ d (1) (a) (b) Frequency (Hz) 5000 5000 2335 2335 1028 1028 387 387 80 0 0.5 1 Time (Sec) 1.5 80 0 0.5 1 Time (Sec) 1.5 Figure 2. Results of initial segregation for a speech and cocktail-party mixture. (a) Segments formed. Each segment corresponds to a contiguous black region. (b) Foreground stream. Here, θd = 0.95, the same threshold used in [18], and τ m is the lag corresponding to the maximum of A(i, j,τ ) within [2 ms, 12.5 ms]. τ ( j) is considered reliable if more than half of the units in the foreground stream at frame j agree with it. Second, pitch periods in natural speech vary smoothly in time [11]. We stipulate the difference between reliable pitch periods at consecutive frames be smaller than 20% of the pitch period, justified from pitch statistics. Unreliable pitch periods are replaced by new values extrapolated from reliable pitch points using temporal continuity. As an example, suppose at two consecutive frames j and j+1 that τ ( j) is reliable while τ ( j+1) is not. All the channels corresponding to the T-F units agreeing with τ ( j) are selected. τ ( j+1) is then obtained from the summation of the autocorrelations for the units at frame j+1 in those selected channels. Then the re-estimated pitch is further verified with the second constraint. For more details, see [9]. Fig. 3 illustrates the estimated pitch periods from the speech and cocktail-party mixture, which match the pitch periods obtained from clean speech very well. 2.3 U n i t l a be l i n g With estimated pitch periods, (1) provides a criterion to label T-F units according to whether target speech dominates the unit responses or not. This criterion compares an estimated pitch period with the periodicity of the unit response. It is referred as the periodicity criterion. It works well for resolved harmonics, and is used to label the units of the segments generated in initial segregation. However, the periodicity criterion is not suitable for units responding to multiple harmonics because unit responses are amplitude-modulated. As shown in Fig. 4, for a filter response that is strongly amplitude-modulated (Fig. 
4(a)), the target pitch corresponds to a local maximum, indicated by the vertical line, in the autocorrelation instead of the global maximum (Fig. 4(b)). Observe that for a filter responding to multiple harmonics of a harmonic source, the response envelope fluctuates at the rate of F0 [8]. Hence, we propose a new criterion for labeling the T-F units corresponding to unresolved harmonics by comparing AM repetition rates with estimated pitch. This criterion is referred as the AM criterion. To obtain an AM repetition rate, the entire response of a gammatone filter is half-wave rectified and then band-pass filtered to remove the DC component and other possible 14 Pitch Period (ms) 12 (a) 10 180 185 190 195 200 Time (ms) 2 4 6 8 Lag (ms) 205 210 8 6 4 0 (b) 0.5 1 Time (Sec) Figure 3. Estimated target pitch for the speech and cocktail-party mixture, marked by “x”. The solid line indicates the pitch contour obtained from clean speech. 0 10 12 Figure 4. AM effects. (a) Response of a filter with center frequency 2.6 kHz. (b) Corresponding autocorrelation. The vertical line marks the position corresponding to the pitch period of target speech. harmonics except for the F0 component. The rectified and filtered signal is then normalized by its envelope to remove the intensity fluctuations of the original signal, where the envelope is obtained via the Hilbert Transform. Because the pitch of natural speech does not change noticeably within a single frame, we model the corresponding normalized signal within a T-F unit by a single sinusoid to obtain the AM repetition rate. Specifically, f ,φ   f ij , φ ij = arg min M ˆ [r (i, jT − k ) − sin(2π k f / f S + φ )]2 , for f ∈[80 Hz, 500 Hz], (2) k =1 ˆ where a square error measure is used. r (i , t ) is the normalized filter response, fS is the sampling frequency, M spans a frame, and T= 10 ms is the progressing period from one frame to the next. In the above equation, fij gives the AM repetition rate for unit uij. Note that in the discrete case, a single sinusoid with a sufficiently high frequency can always match these samples perfectly. However, we are interested in finding a frequency within the plausible pitch range. Hence, the solution does not reduce to a degenerate case. With appropriately chosen initial values, this optimization problem can be solved effectively using iterative gradient descent (see [9]). The AM criterion is used to label T-F units that do not belong to any segments generated in initial segregation; such segments, as discussed earlier, tend to miss unresolved harmonics. Specifically, unit uij is labeled as target speech if the final square error is less than half of the total energy of the corresponding signal and the AM repetition rate is close to the estimated target pitch: | f ijτ ( j ) − 1 | < θ f . (3) Psychoacoustic evidence suggests that to separate sounds with overlapping spectra requires 6-12% difference in F0 [6]. Accordingly, we choose θf to be 0.12. 2.4 F i n a l s e gr e g a t i on a n d r e s y n t he s i s For adjacent channels responding to unresolved harmonics, although their responses may be quite different, they exhibit similar AM patterns and their response envelopes are highly correlated. Therefore, for T-F units labeled as target speech, segments are generated based on cross-channel envelope correlation in addition to temporal continuity. 
2.4 Final segregation and resynthesis

For adjacent channels responding to unresolved harmonics, although their responses may be quite different, they exhibit similar AM patterns and their response envelopes are highly correlated. Therefore, for T-F units labeled as target speech, segments are generated based on cross-channel envelope correlation in addition to temporal continuity.

The spectra of target speech and intrusion often overlap and, as a result, some segments generated in initial segregation contain both units where target speech dominates and units where intrusion dominates. Given the unit labels generated in the last stage, we further divide the segments in the foreground stream, SF, so that all the units in a segment have the same label. The streams are then adjusted as follows. First, since segments for speech are usually at least 50 ms long, segments with the target label are retained in SF only if they are no shorter than 50 ms. Second, segments with the intrusion label are added to the background stream, SB, if they are no shorter than 50 ms; the remaining segments are removed from SF and become undecided. Finally, the remaining units are grouped into the two streams by temporal and spectral continuity: SB first expands iteratively to include undecided segments in its neighborhood, and all remaining undecided segments are then added back to SF. Individual units that belong to neither stream are grouped into SF iteratively if they are labeled as target speech and lie in the neighborhood of SF. The resulting SF is the final segregated stream of target speech.

Fig. 5(a) shows the new segments generated in this process for the speech and cocktail-party mixture, Fig. 5(b) illustrates the segregated stream from the same mixture, and Fig. 5(c) shows all the units where target speech is stronger than the intrusion. The foreground stream generated by our algorithm contains most of the units where target speech is stronger, and only a small number of units where intrusion is stronger are incorrectly grouped into it. A speech waveform is resynthesized from the final foreground stream. Here, the foreground stream works as a binary mask: it is used to retain the acoustic energy from the mixture that corresponds to 1's and to reject the mixture energy corresponding to 0's. For more details, see [19].

Figure 5. Results of final segregation for the speech and cocktail-party mixture. (a) New segments formed in the final segregation. (b) Final foreground stream. (c) Units where target speech is stronger than the intrusion.
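The original system resynthesizes the waveform through the gammatone filterbank following [19]. Purely to illustrate how a binary time-frequency mask retains and rejects mixture energy, the sketch below applies a mask in the STFT domain instead; the STFT parameters, function names, and the shape convention for the mask are assumptions, not part of the described system.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_binary_mask(mixture, mask, fs, nperseg=320, noverlap=160):
    """Keep mixture energy where the binary mask is 1 and reject it where the
    mask is 0, then resynthesize.  This STFT-domain version only illustrates
    the role of the mask; the actual resynthesis uses a gammatone filterbank."""
    _, _, Z = stft(mixture, fs=fs, nperseg=nperseg, noverlap=noverlap)
    assert mask.shape == Z.shape                  # one {0,1} value per T-F cell
    _, x_hat = istft(Z * mask, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return x_hat
```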
3 Evaluation and comparison

Our model is evaluated with a corpus of 100 mixtures composed of 10 voiced utterances mixed with 10 intrusions collected by Cooke [4]. The intrusions have considerable variety; specifically, they are: N0 - 1 kHz pure tone, N1 - white noise, N2 - noise bursts, N3 - "cocktail party" noise, N4 - rock music, N5 - siren, N6 - trill telephone, N7 - female speech, N8 - male speech, and N9 - female speech.

Given our decomposition of an input signal into T-F units, we suggest the use of an ideal binary mask as the ground truth for target speech. The ideal binary mask is constructed as follows: a T-F unit is assigned one if the target energy in the corresponding unit is greater than the intrusion energy, and zero otherwise. Theoretically speaking, an ideal binary mask gives a performance ceiling for all binary masks. Figure 5(c) illustrates the ideal mask for the speech and cocktail-party mixture. Ideal masks also suit situations where more than one target needs to be segregated or where the target changes dynamically. The use of ideal masks is supported by the auditory masking phenomenon: within a critical band, a weaker signal is masked by a stronger one [13]. In addition, an ideal mask gives excellent resynthesis for a variety of sounds and is similar to a prior mask used in a recent ASR study that yields excellent recognition performance [5].

The speech waveform resynthesized from the final foreground stream is used for evaluation and is denoted by S(t); the speech waveform resynthesized from the ideal binary mask is denoted by I(t). Furthermore, let e1(t) denote the signal present in I(t) but missing from S(t), and e2(t) the signal present in S(t) but missing from I(t). Then the relative energy loss, REL, and the relative noise residue, RNR, are calculated as follows:

REL = Σ_t e1²(t) / Σ_t I²(t),   (4a)
RNR = Σ_t e2²(t) / Σ_t S²(t).   (4b)

The results from our model are shown in Table 1. Each value represents the average of one intrusion with 10 voiced utterances; a further average across all intrusions is also shown. On average, our system retains 96.60% of the target speech energy, and the relative residual noise is kept at 3.32%. As a comparison, Table 1 also shows the results from the Wang-Brown model [18], whose performance is representative of current CASA systems. As shown in the table, our model reduces REL significantly; in addition, REL and RNR are balanced in our system.

Table 1: REL and RNR

                 Proposed model          Wang-Brown model
Intrusion      REL (%)   RNR (%)       REL (%)   RNR (%)
N0               2.12      0.02          6.99      0
N1               4.66      3.55         28.96      1.61
N2               1.38      1.30          5.77      0.71
N3               3.83      2.72         21.92      1.92
N4               4.00      2.27         10.22      1.41
N5               2.83      0.10          7.47      0
N6               1.61      0.30          5.99      0.48
N7               3.21      2.18          8.61      4.23
N8               1.82      1.48          7.27      0.48
N9               8.57     19.33         15.81     33.03
Average          3.40      3.32         11.91      4.39

Finally, to compare waveforms directly, we measure a form of signal-to-noise ratio (SNR) in decibels, using the resynthesized signal from the ideal binary mask as ground truth:

SNR = 10 log10 [ Σ_t I²(t) / Σ_t (I(t) − S(t))² ].   (5)

The SNR for each intrusion, averaged across the 10 target utterances, is shown in Fig. 6, together with the results from the Wang-Brown system and the SNR of the original mixtures. Our model achieves an average SNR gain of around 12 dB, and around a 5 dB improvement over the Wang-Brown model.

Figure 6. SNR results for segregated speech. White bars show the results from the proposed model, gray bars those from the Wang-Brown system, and black bars those of the mixtures.
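The ideal-mask construction and the three evaluation measures are straightforward to compute once a resynthesis routine is available. The sketch below is illustrative only: it assumes a `resynth(mixture, mask)` function returning a waveform (for instance, the masking sketch above), and it approximates e1(t) and e2(t) by resynthesizing from the T-F units present in one mask but absent from the other.

```python
import numpy as np

def ideal_binary_mask(target_energy, intrusion_energy):
    """Assign 1 to a T-F unit if its target energy exceeds its intrusion energy."""
    return (target_energy > intrusion_energy).astype(float)

def rel_rnr_snr(mixture, est_mask, ideal_mask, resynth):
    """Relative energy loss (4a), relative noise residue (4b), and SNR (5),
    using the ideal-mask resynthesis I(t) as ground truth."""
    I = resynth(mixture, ideal_mask)                      # ground-truth target
    S = resynth(mixture, est_mask)                        # segregated speech
    e1 = resynth(mixture, ideal_mask * (1 - est_mask))    # target energy lost by S
    e2 = resynth(mixture, est_mask * (1 - ideal_mask))    # intrusion energy kept by S
    rel = np.sum(e1 ** 2) / np.sum(I ** 2)
    rnr = np.sum(e2 ** 2) / np.sum(S ** 2)
    snr = 10.0 * np.log10(np.sum(I ** 2) / np.sum((I - S) ** 2))
    return rel, rnr, snr
```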
4 Discussion

The main feature of our model lies in using different mechanisms to deal with resolved and unresolved harmonics. As a result, our model is able to recover target speech and reduce noise interference in the high-frequency range, where the harmonics of target speech are unresolved. The proposed system considers the pitch contour of the target source only; however, it is possible to track the pitch contour of the intrusion as well if it has a harmonic structure. With two pitch contours, one could label a T-F unit more accurately by checking whether its periodicity is more consistent with one contour or the other. Such a method is expected to lead to better performance in the two-speaker situation, e.g., intrusions N7 through N9; as indicated in Fig. 6, the performance gain of our system for such intrusions is relatively limited.

Our model is limited to the separation of voiced speech. In our view, unvoiced speech poses the biggest challenge for monaural speech separation. Other grouping cues, such as onset, offset, and timbre, have been demonstrated to be effective for human ASA [1] and may play a role in grouping unvoiced speech. In addition, one should consider the acoustic and phonetic characteristics of individual unvoiced consonants. We plan to investigate these issues in future work.

Acknowledgments

We thank G. J. Brown and M. Wu for helpful comments. Preliminary versions of this work were presented in 2001 IEEE WASPAA and 2002 IEEE ICASSP. This research was supported in part by an NSF grant (IIS-0081058) and an AFOSR grant (F4962001-1-0027).

References
[1] A. S. Bregman, Auditory scene analysis, Cambridge, MA: MIT Press, 1990.
[2] R. P. Carlyon and T. M. Shackleton, "Comparing the fundamental frequencies of resolved and unresolved harmonics: evidence for two pitch mechanisms?" J. Acoust. Soc. Am., Vol. 95, pp. 3541-3554, 1994.
[3] G. Cauwenberghs, "Monaural separation of independent acoustical components," in Proc. IEEE Symp. Circuits & Systems, 1999.
[4] M. Cooke, Modeling auditory processing and organization, Cambridge, U.K.: Cambridge University Press, 1993.
[5] M. Cooke, P. Green, L. Josifovski, and A. Vizinho, "Robust automatic speech recognition with missing and unreliable acoustic data," Speech Comm., Vol. 34, pp. 267-285, 2001.
[6] C. J. Darwin and R. P. Carlyon, "Auditory grouping," in Hearing, B. C. J. Moore, Ed., San Diego, CA: Academic Press, 1995.
[7] D. P. W. Ellis, Prediction-driven computational auditory scene analysis, Ph.D. dissertation, MIT Department of Electrical Engineering and Computer Science, 1996.
[8] H. Helmholtz, On the sensations of tone, Braunschweig: Vieweg & Son, 1863 (A. J. Ellis, English trans., Dover, 1954).
[9] G. Hu and D. L. Wang, "Monaural speech segregation based on pitch tracking and amplitude modulation," Technical Report TR6, Ohio State University Department of Computer and Information Science, 2002 (available at www.cis.ohio-state.edu/~hu).
[10] A. Hyvärinen, J. Karhunen, and E. Oja, Independent component analysis, New York: Wiley, 2001.
[11] W. J. M. Levelt, Speaking: From intention to articulation, Cambridge, MA: MIT Press, 1989.
[12] R. Meddis, "Simulation of auditory-neural transduction: further studies," J. Acoust. Soc. Am., Vol. 83, pp. 1056-1063, 1988.
[13] B. C. J. Moore, An introduction to the psychology of hearing, 4th ed., San Diego, CA: Academic Press, 1997.
[14] D. O'Shaughnessy, Speech communications: human and machine, 2nd ed., New York: IEEE Press, 2000.
[15] R. D. Patterson, I. Nimmo-Smith, J. Holdsworth, and P. Rice, "An efficient auditory filterbank based on the gammatone function," APU Report 2341, MRC Applied Psychology Unit, Cambridge, U.K., 1988.
[16] R. Plomp and A. M. Mimpen, "The ear as a frequency analyzer II," J. Acoust. Soc. Am., Vol. 43, pp. 764-767, 1968.
[17] S. Roweis, "One microphone source separation," in Advances in Neural Information Processing Systems 13 (NIPS'00), 2001.
[18] D. L. Wang and G. J. Brown, "Separation of speech from interfering sounds based on oscillatory correlation," IEEE Trans. Neural Networks, Vol. 10, pp. 684-697, 1999.
[19] M. Weintraub, A theory and computational model of auditory monaural sound separation, Ph.D. dissertation, Stanford University Department of Electrical Engineering, 1985.

6 0.14501138 1 nips-2002-"Name That Song!" A Probabilistic Approach to Querying on Music and Text

7 0.13873808 67 nips-2002-Discriminative Binaural Sound Localization

8 0.12252355 170 nips-2002-Real Time Voice Processing with Audiovisual Feedback: Toward Autonomous Agents with Perfect Pitch

9 0.11832944 29 nips-2002-Analysis of Information in Speech Based on MANOVA

10 0.11186958 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions

11 0.10994112 101 nips-2002-Handling Missing Data with Variational Bayesian Learning of ICA

12 0.09512049 111 nips-2002-Independent Components Analysis through Product Density Estimation

13 0.086512774 2 nips-2002-A Bilinear Model for Sparse Coding

14 0.085629068 126 nips-2002-Learning Sparse Multiscale Image Representations

15 0.083803274 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior

16 0.083273791 21 nips-2002-Adaptive Classification by Variational Kalman Filtering

17 0.079597838 18 nips-2002-Adaptation and Unsupervised Learning

18 0.079269305 184 nips-2002-Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex

19 0.076128498 110 nips-2002-Incremental Gaussian Processes

20 0.075193502 193 nips-2002-Temporal Coherence, Natural Image Sequences, and the Visual Cortex


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.222), (1, 0.054), (2, -0.02), (3, 0.166), (4, -0.05), (5, -0.029), (6, -0.182), (7, 0.024), (8, 0.26), (9, -0.119), (10, 0.153), (11, 0.04), (12, -0.167), (13, -0.013), (14, 0.184), (15, 0.08), (16, -0.005), (17, -0.104), (18, -0.147), (19, -0.017), (20, -0.025), (21, -0.108), (22, -0.078), (23, -0.004), (24, 0.037), (25, -0.161), (26, -0.088), (27, 0.132), (28, 0.168), (29, -0.108), (30, -0.075), (31, -0.067), (32, -0.114), (33, 0.111), (34, -0.029), (35, -0.004), (36, -0.047), (37, 0.003), (38, -0.079), (39, -0.161), (40, -0.023), (41, 0.12), (42, -0.002), (43, 0.036), (44, 0.084), (45, 0.028), (46, -0.055), (47, 0.016), (48, 0.093), (49, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97496426 14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation

Author: Gil-jin Jang, Te-Won Lee

Abstract: We present a new technique for achieving source separation when given only a single channel recording. The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of basis filters in time domain that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis filters. For each time point we infer the source signals and their contribution factors. This inference is possible due to the prior knowledge of the basis filters and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observation and our experimental results exhibit a high level of separation performance for mixtures of two music signals as well as the separation of two voice signals.

2 0.8375411 183 nips-2002-Source Separation with a Sensor Array using Graphical Models and Subband Filtering

Author: Hagai Attias

Abstract: Source separation is an important problem at the intersection of several fields, including machine learning, signal processing, and speech technology. Here we describe new separation algorithms which are based on probabilistic graphical models with latent variables. In contrast with existing methods, these algorithms exploit detailed models to describe source properties. They also use subband filtering ideas to model the reverberant environment, and employ an explicit model for background and sensor noise. We leverage variational techniques to keep the computational complexity per EM iteration linear in the number of frames. 1 The Source Separation Problem Fig. 1 illustrates the problem of source separation with a sensor array. In this problem, signals from K independent sources are received by each of L ≥ K sensors. The task is to extract the sources from the sensor signals. It is a difficult task, partly because the received signals are distorted versions of the originals. There are two types of distortions. The first type arises from propagation through a medium, and is approximately linear but also history dependent. This type is usually termed reverberations. The second type arises from background noise and sensor noise, which are assumed additive. Hence, the actual task is to obtain an optimal estimate of the sources from data. The task is difficult for another reason, which is lack of advance knowledge of the properties of the sources, the propagation medium, and the noises. This difficulty gave rise to adaptive source separation algorithms, where parameters that are related to those properties are adjusted to optimized a chosen cost function. Unfortunately, the intense activity this problem has attracted over the last several years [1–9] has not yet produced a satisfactory solution. In our opinion, the reason is that existing techniques fail to address three major factors. The first is noise robustness: algorithms typically ignore background and sensor noise, sometime assuming they may be treated as additional sources. It seems plausible that to produce a noise robust algorithm, noise signals and their properties must be modeled explicitly, and these models should be exploited to compute optimal source estimators. The second factor is mixing filters: algorithms typically seek, and directly optimize, a transformation that would unmix the sources. However, in many situations, the filters describing medium propagation are non-invertible, or have an unstable inverse, or have a stable inverse that is extremely long. It may hence be advantageous to Figure 1: The source separation problem. Signals from K = 2 speakers propagate toward L = 2 sensors. Each sensor receives a linear mixture of the speaker signals, distorted by multipath propagation, medium response, and background and sensor noise. The task is to infer the original signals from sensor data. estimate the mixing filters themselves, then use them to estimate the sources. The third factor is source properties: algorithms typically use a very simple source model (e.g., a one time point histogram). But in many cases one may easily obtain detailed models of the source signals. This is particularly true for speech sources, where large datasets exist and much modeling expertise has developed over decades of research. Separation of speakers is also one of the major potential commercial applications of source separation algorithms. It seems plausible that incorporating strong source models could improve performance. 
Such models may potentially have two more advantages: first, they could help limit the range of possible mixing filters by constraining the optimization problem. Second, they could help avoid whitening the extracted signals by effectively limiting their spectral range to the range characteristic of the source model. This paper makes several contributions to the problem of real world source separation. In the following, we present new separation algorithms that are the first to address all three factors. We work in the framework of probabilistic graphical models. This framework allows us to construct models for sources and for noise, combine them with the reverberant mixing transformation in a principled manner, and compute parameter and source estimates from data which are Bayes optimal. We identify three technical ideas that are key to our approach: (1) a strong speech model, (2) subband filtering, and (3) variational EM. 2 Frames, Subband Signals, and Subband Filtering We start with the concept of subband filtering. This is also a good point to define our notation. Let xm denote a time domain signal, e.g., the value of a sound pressure waveform at time point m = 0, 1, 2, .... Let Xn [k] denote the corresponding subband signal at time frame n and subband frequency k. The subband signals are obtained from the time domain signal by imposing an N -point window wm , m = 0 : N − 1 on that signal at equally spaced points nJ, n = 0, 1, 2, ..., and FFT-ing the windowed signal, N −1 e−iωk m wm xnJ+m , Xn [k] = (1) m=0 where ωk = 2πk/N and k = 0 : N − 1. The subband signals are also termed frames. Notice the difference in time scale between the time frame index n in Xn [k] and the time point index n in xn . The chosen value of the spacing J depends on the window length N . For J ≤ N the original signal xm can be synthesized exactly from the subband signals (synthesis formula omitted). An important consideration for selecting J, as well as the window shape, is behavior under filtering. Consider a filter hm applied to xm , and denote by ym the filtered signal. In the simple case hm = hδm,0 (no filtering), the subband signals keep the same dependence as the time domain ones, yn = hxn −→ Yn [k] = hXn [k] . For an arbitrary filter hm , we use the relation yn = hm xn−m −→ Yn [k] = Hm [k]Xn−m [k] , (2) m m with complex coefficients Hm [k] for each k. This relation between the subband signals is termed subband filtering, and the Hm [k] are termed subband filters. Unlike the simple case of non-filtering, the relation (2) holds approximately, but quite accurately using an appropriate choice of J and wm ; see [13] for details on accuracy. Throughout this paper, we will assume that an arbitrary filter hm can be modeled by the subband filters Hm [k] to a sufficient accuracy for our purposes. One advantage of subband filtering is that it replaces a long filter hm by a set of short independent filters Hm [k], one per frequency. This will turn out to decompose the source separation problem into a set of small (albeit coupled) problems, one per frequency. Another advantage is that this representation allows using a detailed speech model on the same footing with the filter model. This is because a speech model is defined on the time scale of a single frame, whereas the original filter hm , in contrast with Hm [k], is typically as long as 10 or more frames. As a final point on notation, we define a Gaussian distribution over a complex number Z ν by p(Z) = N (Z | µ, ν) = π exp(−ν | Z − µ |2 ) . 
Notice that this is a joint distribution over the real and imaginary parts of Z. The mean is µ = X and the precision (inverse variance) ν satisfies ν −1 = | X |2 − | µ |2 . 3 A Model for Speech Signals We assume independent sources, and model the distribution of source j by a mixture model over its subband signals Xjn , N/2−1 p(Xjn | Sjn = s) N (Xjn [k] | 0, Ajs [k]) = p(Sjn = s) = πjs k=1 p(X, S) p(Xjn | Sjn )p(Sjn ) , = (3) jn where the components are labeled by Sjn . Component s of source j is a zero mean Gaussian with precision Ajs . The mixing proportions of source j are πjs . The DAG representing this model is shown in Fig. 2. A similar model was used in [10] for one microphone speech enhancement for recognition (see also [11]). Here are several things to note about this model. (1) Each component has a characteristic spectrum, which may describe a particular part of a speech phoneme. This is because the precision corresponds to the inverse spectrum: the mean energy (w.r.t. the above distribution) of source j at frequency k, conditioned on label s, is | Xjn |2 = A−1 . (2) js A zero mean model is appropriate given the physics of the problem, since the mean of a sound pressure waveform is zero. (3) k runs from 1 to N/2 − 1, since for k > N/2, Xjn [k] = Xjn [N − k] ; the subbands k = 0, N/2 are real and are omitted from the model, a common practice in speech recognition engines. (4) Perhaps most importantly, for each source the subband signals are correlated via the component label s, as p(Xjn ) = s p(Xjn , Sjn = s) = k p(Xjn [k]) . Hence, when the source separation problem decomposes into one problem per frequency, these problems turn out to be coupled (see below), and independent frequency permutations are avoided. (5) To increase sn xn Figure 2: Graphical model describing speech signals in the subband domain. The model assumes i.i.d. frames; only the frame at time n is shown. The node Xn represents a complex N/2 − 1-dimensional vector Xn [k], k = 1 : N/2 − 1. model accuracy, a state transition matrix p(Sjn = s | Sj,n−1 = s ) may be added for each source. The resulting HMM models are straightforward to incorporate without increasing the algorithm complexity. There are several modes of using the speech model in the algorithms below. In one mode, the sources are trained online using the sensor data. In a second mode, source models are trained offline using available data on each source in the problem. A third mode correspond to separation of sources known to be speech but whose speakers are unknown. In this case, all sources have the same model, which is trained offline on a large dataset of speech signals, including 150 male and female speakers reading sentences from the Wall Street Journal (see [10] for details). This is the case presented in this paper. The training algorithm used was standard EM (omitted) using 256 clusters, initialized by vector quantization. 4 Separation of Non-Reverberant Mixtures We now present a source separation algorithm for the case of non-reverberant (or instantaneous) mixing. Whereas many algorithms exist for this case, our contribution here is an algorithm that is significantly more robust to noise. 
Its robustness results, as indicated in the introduction, from three factors: (1) explicitly modeling the noise in the problem, (2) using a strong source model, in particular modeling the temporal statistics (over N time points) of the sources, rather than one time point statistics, and (3) extracting each source signal from data by a Bayes optimal estimator obtained from p(X | Y ). A more minor point is handling the case of less sources than sensors in a principled way. The mixing situation is described by yin = j hij xjn + uin , where xjn is source signal j at time point n, yin is sensor signal i, hij is the instantaneous mixing matrix, and uin is the noise corrupting sensor i’s signal. The corresponding subband signals satisfy Yin [k] = j hij Xjn [k] + Uin [k] . To turn the last equation into a probabilistic graphical model, we assume that noise i has precision (inverse spectrum) Bi [k], and that noises at different sensors are independent (the latter assumption is often inaccurate but can be easily relaxed). This yields p(Yin | X) N (Yin [k] | = p(Y | X) p(Yin | X) , = hij Xjn [k], Bi [k]) j k (4) in which together with the speech model (3) forms a complete model p(Y, X, S) for this problem. The DAG representing this model for the case K = L = 2 is shown in Fig. 3. Notice that this model generalizes [4] to the subband domain. s1n−2 s1n−1 s1 n s2n−2 s2n−1 s2 n x1n−2 x1n−1 x1 n x2n−2 x2n−1 x2 n y1n−2 y1n−1 y1n y2n−2 y2n−1 y2 n Figure 3: Graphical model for noisy, non-reverberant 2 × 2 mixing, showing a 3 frame-long sequence. All nodes Yin and Xjn represent complex N/2 − 1-dimensional vectors (see Fig. 2). While Y1n and Y2n have the same parents, X1n and X2n , the arcs from the parents to Y2n are omitted for clarity. The model parameters θ = {hij , Bi [k], Ajs [k], πjs } are estimated from data by an EM algorithm. However, as the number of speech components M or the number of sources K increases, the E-step becomes computationally intractable, as it requires summing over all O(M K ) configurations of (S1n , ..., SKn ) at each frame. We approximate the E-step using a variational technique: focusing on the posterior distribution p(X, S | Y ), we compute an optimal tractable approximation q(X, S | Y ) ≈ p(X, S | Y ), which we use to compute the sufficient statistics (SS). We choose q(Xjn | Sjn , Y )q(Sjn | Y ) , q(X, S | Y ) = (5) jn where the hidden variables are factorized over the sources, and also over the frames (the latter factorization is exact in this model, but is an approximation for reverberant mixing). This posterior maintains the dependence of X on S, and thus the correlations between different subbands Xjn [k]. Notice also that this posterior implies a multimodal q(Xjn ) (i.e., a mixture distribution), which is more accurate than unimodal posteriors often employed in variational approximations (e.g., [12]), but is also harder to compute. A slightly more general form which allows inter-frame correlations by employing q(S | Y ) = jn q(Sjn | Sj,n−1 , Y ) may also be used, without increasing complexity. By optimizing in the usual way (see [12,13]) a lower bound on the likelihood w.r.t. q, we obtain q(Xjn [k] | Sjn = s, Y )q(Sjn = s | Y ) , q(Xjn , Sjn = s | Y ) = (6) k where q(Xjn [k] | Sjn = s, Y ) = N (Xjn [k] | ρjns [k], νjs [k]) and q(Sjn = s | Y ) = γjns . Both the factorization over k of q(Xjn | Sjn ) and its Gaussian functional form fall out from the optimization under the structural restriction (5) and need not be specified in advance. 
The variational parameters {ρjns [k], νjs [k], γjns }, which depend on the data Y , constitute the SS and are computed in the E-step. The DAG representing this posterior is shown in Fig. 4. s1n−2 s1n−1 s1 n s2n−2 s2n−1 s2 n x1n−2 x1n−1 x1 n x2n−2 x2n−1 x2 n {y im } Figure 4: Graphical model describing the variational posterior distribution applied to the model of Fig. 3. In the non-reverberant case, the components of this posterior at time frame n are conditioned only on the data Yin at that frame; in the reverberant case, the components at frame n are conditioned on the data Yim at all frames m. For clarity and space reasons, this distinction is not made in the figure. After learning, the sources are extracted from data by a variational approximation of the minimum mean squared error estimator, ˆ Xjn [k] = E(Xjn [k] | Y ) = dX q(X | Y )Xjn [k] , (7) i.e., the posterior mean, where q(X | Y ) = S q(X, S | Y ). The time domain waveform xjm is then obtained by appropriately patching together the subband signals. ˆ M-step. The update rule for the mixing matrix hij is obtained by solving the linear equation Bi [k]ηij,0 [k] = hij j k Bi [k]λj j,0 [k] . (8) k The update rule for the noise precisions Bi [k] is omitted. The quantities ηij,m [k] and λj j,m [k] are computed from the SS; see [13] for details. E-step. The posterior means of the sources (7) are obtained by solving   ˆ Xjn [k] = νjn [k]−1 ˆ i Bi [k]hij Yin [k] − j =j ˆ hij Xj n [k] (9) ˆ for Xjn [k], which is a K ×K linear system for each frequency k and frame n. The equations for the SS are given in [13], which also describes experimental results. 5 Separation of Reverberant Mixtures In this section we extend the algorithm to the case of reverberant mixing. In that case, due to signal propagation in the medium, each sensor signal at time frame n depends on the source signals not just at the same time but also at previous times. To describe this mathematically, the mixing matrix hij must become a matrix of filters hij,m , and yin = hij,m xj,n−m + uin . jm It may seem straightforward to extend the algorithm derived above to the present case. However, this appearance is misleading, because we have a time scale problem. Whereas are speech model p(X, S) is frame based, the filters hij,m are generally longer than the frame length N , typically 10 frames long and sometime longer. It is unclear how one can work with both Xjn and hij,m on the same footing (and, it is easy to see that straightforward windowed FFT cannot solve this problem). This is where the idea of subband filtering becomes very useful. Using (2) we have Yin [k] = Hij,m [k]Xj,n−m [k] + Uin [k], which yields the probabilistic model jm p(Yin | X) N (Yin [k] | = Hij,m [k]Xj,n−m [k], Bi [k]) . (10) jm k Hence, both X and Y are now frame based. Combining this equation with the speech model (3), we now have a complete model p(Y, X, S) for the reverberant mixing problem. The DAG describing this model is shown in Fig. 5. s1n−2 s1n−1 s1 n s2n−2 s2n−1 s2 n x1n−2 x1n−1 x1 n x2n−2 x2n−1 x2 n y1n−2 y1n−1 y1n y2n−2 y2n−1 y2 n Figure 5: Graphical model for noisy, reverberant 2 × 2 mixing, showing a 3 frame-long sequence. Here we assume 2 frame-long filters, i.e., m = 0, 1 in Eq. (10), where the solid arcs from X to Y correspond to m = 0 (as in Fig. 3) and the dashed arcs to m = 1. While Y1n and Y2n have the same parents, X1n and X2n , the arcs from the parents to Y2n are omitted for clarity. 
The model parameters θ = {Hij,m [k], Bi [k], Ajs [k], πjs } are estimated from data by a variational EM algorithm, whose derivation generally follows the one outlined in the previous section. Notice that the exact E-step here is even more intractable, due to the history dependence introduced by the filters. M-step. The update rule for Hij,m is obtained by solving the Toeplitz system Hij ,m [k]λj j,m−m [k] = ηij,m [k] (11) j m where the quantities λj j,m [k], ηij,m [k] are computed from the SS (see [12]). The update rule for the Bi [k] is omitted. E-step. The posterior means of the sources (7) are obtained by solving  ˆ Xjn [k] = νjn [k]−1 ˆ im Bi [k]Hij,m−n [k] Yim [k] − Hij j m =jm ,m−m ˆ [k]Xj m  [k] (12) ˆ for Xjn [k]. Assuming P frames long filters Hij,m , m = 0 : P − 1, this is a KP × KP linear system for each frequency k. The equations for the SS are given in [13], which also describes experimental results. 6 Extensions An alternative technique we have been pursuing for approximating EM in our models is Sequential Rao-Blackwellized Monte Carlo. There, we sample state sequences S from the posterior p(S | Y ) and, for a given sequence, perform exact inference on the source signals X conditioned on that sequence (observe that given S, the posterior p(X | S, Y ) is Gaussian and can be computed exactly). In addition, we are extending our speech model to include features such as pitch [7] in order to improve separation performance, especially in cases with less sensors than sources [7–9]. Yet another extension is applying model selection techniques to infer the number of sources from data in a dynamic manner. Acknowledgments I thank Te-Won Lee for extremely valuable discussions. References [1] A.J. Bell, T.J. Sejnowski (1995). An information maximisation approach to blind separation and blind deconvolution. Neural Computation 7, 1129-1159. [2] B.A. Pearlmutter, L.C. Parra (1997). Maximum likelihood blind source separation: A contextsensitive generalization of ICA. Proc. NIPS-96. [3] A. Cichocki, S.-I. Amari (2002). Adaptive Blind Signal and Image Processing. Wiley. [4] H. Attias (1999). Independent Factor Analysis. Neural Computation 11, 803-851. [5] T.-W. Lee et al. (2001) (Ed.). Proc. ICA 2001. [6] S. Griebel, M. Brandstein (2001). Microphone array speech dereverberation using coarse channel modeling. Proc. ICASSP 2001. [7] J. Hershey, M. Casey (2002). Audiovisual source separation via hidden Markov models. Proc. NIPS 2001. [8] S. Roweis (2001). One Microphone Source Separation. Proc. NIPS-00, 793-799. [9] G.-J. Jang, T.-W. Lee, Y.-H. Oh (2003). A probabilistic approach to single channel blind signal separation. Proc. NIPS 2002. [10] H. Attias, L. Deng, A. Acero, J.C. Platt (2001). A new method for speech denoising using probabilistic models for clean speech and for noise. Proc. Eurospeech 2001. [11] Ephraim, Y. (1992). Statistical model based speech enhancement systems. Proc. IEEE 80(10), 1526-1555. [12] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, L.K. Saul (1999). An introduction to variational methods in graphical models. Machine Learning 37, 183-233. [13] H. Attias (2003). New EM algorithms for source separation and deconvolution with a microphone array. Proc. ICASSP 2003.
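The subband-filtering relation central to the source-separation text quoted above (its Eq. 2) is easy to illustrate numerically. The following sketch forms windowed-FFT frames and applies one short complex filter per frequency; the window, hop, and function names are illustrative assumptions rather than anything specified in that text, and the approximation holds only for suitable window and spacing choices, as noted there.

```python
import numpy as np

def frames(x, win, hop):
    """Windowed-FFT subband signals X_n[k] of a time-domain signal x_m (Eq. 1)."""
    N = len(win)
    starts = range(0, len(x) - N + 1, hop)
    return np.array([np.fft.fft(win * x[s:s + N]) for s in starts])

def subband_filter(X, H):
    """Apply subband filters: Y_n[k] = sum_m H_m[k] * X_{n-m}[k] (Eq. 2).
    H has shape [num_taps, num_subbands], matching the FFT length of X."""
    Y = np.zeros_like(X)
    for n in range(X.shape[0]):
        for m in range(min(H.shape[0], n + 1)):
            Y[n] += H[m] * X[n - m]
    return Y
```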

3 0.65579355 38 nips-2002-Bayesian Estimation of Time-Frequency Coefficients for Audio Signal Enhancement

Author: Patrick J. Wolfe, Simon J. Godsill

Abstract: The Bayesian paradigm provides a natural and effective means of exploiting prior knowledge concerning the time-frequency structure of sound signals such as speech and music—something which has often been overlooked in traditional audio signal processing approaches. Here, after constructing a Bayesian model and prior distributions capable of taking into account the time-frequency characteristics of typical audio waveforms, we apply Markov chain Monte Carlo methods in order to sample from the resultant posterior distribution of interest. We present speech enhancement results which compare favourably in objective terms with standard time-varying filtering techniques (and in several cases yield superior performance, both objectively and subjectively); moreover, in contrast to such methods, our results are obtained without an assumption of prior knowledge of the noise power.

4 0.51975095 67 nips-2002-Discriminative Binaural Sound Localization

Author: Ehud Ben-reuven, Yoram Singer

Abstract: Time difference of arrival (TDOA) is commonly used to estimate the azimuth of a source in a microphone array. The most common methods to estimate TDOA are based on finding extrema in generalized crosscorrelation waveforms. In this paper we apply microphone array techniques to a manikin head. By considering the entire cross-correlation waveform we achieve azimuth prediction accuracy that exceeds extrema locating methods. We do so by quantizing the azimuthal angle and treating the prediction problem as a multiclass categorization task. We demonstrate the merits of our approach by evaluating the various approaches on Sony’s AIBO robot.

5 0.51831341 10 nips-2002-A Model for Learning Variance Components of Natural Images

Author: Yan Karklin, Michael S. Lewicki

Abstract: We present a hierarchical Bayesian model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coefficients of an efficient image basis. This offers a novel description of higherorder image structure and provides a way to learn coarse-coded, sparsedistributed representations of abstract image properties such as object location, scale, and texture.

6 0.49671814 29 nips-2002-Analysis of Information in Speech Based on MANOVA

7 0.49602097 101 nips-2002-Handling Missing Data with Variational Bayesian Learning of ICA

8 0.45405793 147 nips-2002-Monaural Speech Separation

9 0.42262709 126 nips-2002-Learning Sparse Multiscale Image Representations

10 0.41738442 18 nips-2002-Adaptation and Unsupervised Learning

11 0.40691453 170 nips-2002-Real Time Voice Processing with Audiovisual Feedback: Toward Autonomous Agents with Perfect Pitch

12 0.40225053 111 nips-2002-Independent Components Analysis through Product Density Estimation

13 0.39606121 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions

14 0.36648217 2 nips-2002-A Bilinear Model for Sparse Coding

15 0.34101 1 nips-2002-"Name That Song!" A Probabilistic Approach to Querying on Music and Text

16 0.32897595 193 nips-2002-Temporal Coherence, Natural Image Sequences, and the Visual Cortex

17 0.29920214 115 nips-2002-Informed Projections

18 0.2971071 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior

19 0.29692563 110 nips-2002-Incremental Gaussian Processes

20 0.29310846 128 nips-2002-Learning a Forward Model of a Reflex


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.013), (11, 0.047), (23, 0.03), (27, 0.212), (41, 0.014), (42, 0.099), (54, 0.164), (55, 0.064), (64, 0.013), (67, 0.025), (68, 0.02), (74, 0.077), (87, 0.013), (92, 0.029), (98, 0.099)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.85967642 14 nips-2002-A Probabilistic Approach to Single Channel Blind Signal Separation

Author: Gil-jin Jang, Te-Won Lee

Abstract: We present a new technique for achieving source separation when given only a single channel recording. The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of basis filters in time domain that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis filters. For each time point we infer the source signals and their contribution factors. This inference is possible due to the prior knowledge of the basis filters and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observation and our experimental results exhibit a high level of separation performance for mixtures of two music signals as well as the separation of two voice signals.

2 0.73814678 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions

Author: Max Welling, Simon Osindero, Geoffrey E. Hinton

Abstract: We propose a model for natural images in which the probability of an image is proportional to the product of the probabilities of some filter outputs. We encourage the system to find sparse features by using a Studentt distribution to model each filter output. If the t-distribution is used to model the combined outputs of sets of neurally adjacent filters, the system learns a topographic map in which the orientation, spatial frequency and location of the filters change smoothly across the map. Even though maximum likelihood learning is intractable in our model, the product form allows a relatively efficient learning procedure that works well even for highly overcomplete sets of filters. Once the model has been learned it can be used as a prior to derive the “iterated Wiener filter” for the purpose of denoising images.

3 0.7371341 52 nips-2002-Cluster Kernels for Semi-Supervised Learning

Author: Olivier Chapelle, Jason Weston, Bernhard SchĂślkopf

Abstract: We propose a framework to incorporate unlabeled data in kernel classifier, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach. 1

4 0.73530769 169 nips-2002-Real-Time Particle Filters

Author: Cody Kwok, Dieter Fox, Marina Meila

Abstract: Particle filters estimate the state of dynamical systems from sensor information. In many real time applications of particle filters, however, sensor information arrives at a significantly higher rate than the update rate of the filter. The prevalent approach to dealing with such situations is to update the particle filter as often as possible and to discard sensor information that cannot be processed in time. In this paper we present real-time particle filters, which make use of all sensor information even when the filter update rate is below the update rate of the sensors. This is achieved by representing posteriors as mixtures of sample sets, where each mixture component integrates one observation arriving during a filter update. The weights of the mixture components are set so as to minimize the approximation error introduced by the mixture representation. Thereby, our approach focuses computational resources (samples) on valuable sensor information. Experiments using data collected with a mobile robot show that our approach yields strong improvements over other approaches.

5 0.73339987 88 nips-2002-Feature Selection and Classification on Matrix Data: From Large Margins to Small Covering Numbers

Author: Sepp Hochreiter, Klaus Obermayer

Abstract: We investigate the problem of learning a classification task for datasets which are described by matrices. Rows and columns of these matrices correspond to objects, where row and column objects may belong to different sets, and the entries in the matrix express the relationships between them. We interpret the matrix elements as being produced by an unknown kernel which operates on object pairs and we show that - under mild assumptions - these kernels correspond to dot products in some (unknown) feature space. Minimizing a bound for the generalization error of a linear classifier which has been obtained using covering numbers we derive an objective function for model selection according to the principle of structural risk minimization. The new objective function has the advantage that it allows the analysis of matrices which are not positive definite, and not even symmetric or square. We then consider the case that row objects are interpreted as features. We suggest an additional constraint, which imposes sparseness on the row objects and show, that the method can then be used for feature selection. Finally, we apply this method to data obtained from DNA microarrays, where “column” objects correspond to samples, “row” objects correspond to genes and matrix elements correspond to expression levels. Benchmarks are conducted using standard one-gene classification and support vector machines and K-nearest neighbors after standard feature selection. Our new method extracts a sparse set of genes and provides superior classification results. 1

6 0.73216605 3 nips-2002-A Convergent Form of Approximate Policy Iteration

7 0.73139215 190 nips-2002-Stochastic Neighbor Embedding

8 0.72856545 10 nips-2002-A Model for Learning Variance Components of Natural Images

9 0.72822148 21 nips-2002-Adaptive Classification by Variational Kalman Filtering

10 0.72720623 46 nips-2002-Boosting Density Estimation

11 0.726255 119 nips-2002-Kernel Dependency Estimation

12 0.72616923 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs

13 0.72402763 82 nips-2002-Exponential Family PCA for Belief Compression in POMDPs

14 0.72335631 53 nips-2002-Clustering with the Fisher Score

15 0.72330523 68 nips-2002-Discriminative Densities from Maximum Contrast Estimation

16 0.72253031 65 nips-2002-Derivative Observations in Gaussian Process Models of Dynamic Systems

17 0.7214734 159 nips-2002-Optimality of Reinforcement Learning Algorithms with Linear Function Approximation

18 0.7213912 2 nips-2002-A Bilinear Model for Sparse Coding

19 0.71697485 137 nips-2002-Location Estimation with a Differential Update Network

20 0.71647561 124 nips-2002-Learning Graphical Models with Mercer Kernels