nips nips2012 nips2012-234 knowledge-graph by maker-knowledge-mining

234 nips-2012-Multiresolution analysis on the symmetric group


Source: pdf

Author: Risi Kondor, Walter Dempsey

Abstract: There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 There is no generally accepted way to define wavelets on permutations. [sent-2, score-0.187]

2 We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. [sent-3, score-1.423]

3 Invariably, the bottleneck in such problems is that the number of permutations grows with n! [sent-6, score-0.079]

4 , ruling out the possibility of representing generic functions or distributions over permutations explicitly, as soon as n exceeds about ten or twelve. [sent-7, score-0.108]

5 Recently, a number of authors have advocated approximations based on a type of generalized Fourier transform [1][2][3][4][5][6]. [sent-8, score-0.132]

6 On the group Sn of permutations of n objects, this takes the form fˆ(λ) = Σσ∈Sn f (σ) ρλ (σ), (1) where λ plays the role of frequency, while the matrix-valued functions ρλ , called irreducible representations, are similar to the e−i2πkx/N factors in ordinary Fourier analysis. [sent-9, score-0.27]
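The transform in (1) can be checked numerically on a tiny group. The sketch below is ours, not the paper's, and works for n = 3 only: it builds the three irreps of S3 (trivial, sign, and the 2-dimensional standard irrep obtained by restricting the permutation action to the plane orthogonal to (1, 1, 1)), computes fˆ(λ) = Σσ f(σ)ρλ(σ) by brute force, and verifies Fourier inversion:

```python
import itertools

import numpy as np

n = 3
perms = list(itertools.permutations(range(n)))  # S3: sigma as a tuple, sigma[i] = sigma(i)

def perm_matrix(s):
    # Defining representation: P e_i = e_{sigma(i)}.
    P = np.zeros((n, n))
    for i, si in enumerate(s):
        P[si, i] = 1.0
    return P

# Orthonormal basis of the plane orthogonal to (1, 1, 1); restricting the
# permutation action to it yields the 2-dimensional "standard" irrep of S3.
B = np.linalg.qr(np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]))[0]

irreps = {
    "trivial": lambda s: np.ones((1, 1)),
    "sign": lambda s: np.array([[round(np.linalg.det(perm_matrix(s)))]]),
    "standard": lambda s: B.T @ perm_matrix(s) @ B,
}

rng = np.random.default_rng(0)
f = {s: rng.standard_normal() for s in perms}  # an arbitrary f : S3 -> R

# Eq. (1): fhat(lambda) = sum_sigma f(sigma) rho_lambda(sigma).
fhat = {lam: sum(f[s] * rho(s) for s in perms) for lam, rho in irreps.items()}

def invert(s):
    # Fourier inversion on a finite group; rho(sigma^-1) = rho(sigma)^T here
    # because all three representations are real orthogonal.
    return sum(rho(s).shape[0] * np.trace(fhat[lam] @ rho(s).T)
               for lam, rho in irreps.items()) / len(perms)

assert max(abs(invert(s) - f[s]) for s in perms) < 1e-10
```

The dimension count 1² + 1² + 2² = 6 = |S3| confirms the list of irreps is complete, which is what makes exact inversion possible.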

7 , one is thus led to “band-limited” approximations of f via the nested sequence of spaces Vµ = { f ∈ RSn | fˆ(λ) = 0 for all λ µ }. [sent-14, score-0.153]

8 However, in the absence of a natural dilation operator, defining wavelets on a discrete space is not trivial. [sent-18, score-0.217]

9 Gavish et al. defined an analog of Haar wavelets on trees [8], while Coifman and Maggioni [9] and Hammond et al. [sent-20, score-0.253]

10 In this paper we attempt to do the same on the much more structured domain of permutations by introducing an altogether new notion of multiresolution analysis, which we call coset-based multiresolution (CMRA). [sent-22, score-0.594]

11 Figure 1: Multiresolution. 2 Multiresolution analysis and the multiscale structure of Sn. The notion of multiresolution analysis on the real line was first formalized by Mallat [11]: a nested sequence of function spaces . . . [sent-30, score-0.451]

12 is said to constitute a multiresolution analysis (MRA) for L2 (R) if it satisfies the following axioms: MRA1. [sent-36, score-0.246]

13 To get an actual wavelet transform, one needs to define appropriate bases for the {Vi } and {Wi } spaces. [sent-42, score-0.402]

14 In the simplest case, a single function φ, called the scaling function, is sufficient to generate an orthonormal basis for V0 , and a single function ψ, called the mother wavelet, generates an orthonormal basis for W0 . [sent-43, score-0.66]

15 In this case, defining φk,m (x) = 2k/2 φ(2k x − m), and ψk,m (x) = 2k/2 ψ(2k x − m), we find that {φk,m }m∈Z and {ψk,m }m∈Z will be orthonormal bases for Vk and Wk , respectively. [sent-44, score-0.096]

16 Moreover, {ψk,m }k,m∈Z is an orthonormal basis for the whole of L2 (R). [sent-45, score-0.102]

17 By the wavelet transform of f we mean its expansion in this basis. [sent-46, score-0.445]
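For concreteness, the classical discrete Haar transform that this construction generalizes can be sketched in a few lines (a minimal illustration, not from the paper; scaled averages play the role of scaling coefficients in Vk, scaled differences the role of wavelet coefficients in Wk):

```python
import numpy as np

def haar_step(v):
    # One level of the orthonormal discrete Haar transform: scaling
    # coefficients are scaled pairwise sums, wavelet coefficients differences.
    a = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    d = (v[0::2] - v[1::2]) / np.sqrt(2.0)
    return a, d

def haar_transform(v):
    coeffs = []
    a = np.asarray(v, dtype=float)
    while len(a) > 1:
        a, d = haar_step(a)
        coeffs.append(d)  # wavelet coefficients for this level's W_k
    coeffs.append(a)      # coarsest scaling coefficient (the V_0 part)
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        v = np.empty(2 * len(a))
        v[0::2] = (a + d) / np.sqrt(2.0)
        v[1::2] = (a - d) / np.sqrt(2.0)
        a = v
    return a

f = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
coeffs = haar_transform(f)
# Orthonormality of the Haar basis gives exact reconstruction.
assert np.allclose(haar_inverse(coeffs), f)
```

The lack of a dilation x → 2x is exactly what blocks this recipe on a discrete group, which is the point of the next paragraph.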

18 The difficulty in defining multiresolution analysis on discrete spaces is that there is no natural analog of dilation, as required by Mallat’s fourth axiom. [sent-47, score-0.419]

19 However, in the specific case of the symmetric group, we do at least have a natural multiscale structure on our domain. [sent-48, score-0.076]

20 Our goal in this paper is to find an analog of Mallat’s axioms that can take advantage of this structure. [sent-49, score-0.123]

21 The permutations of {1, . . . , n} form a group, called the symmetric group of degree n, which we denote Sn . [sent-62, score-0.083]

22 Intuitively, this tree of nested sets captures the way in which we zoom in on a particular permutation σ by first fixing σ(n), then σ(n − 1), etc. [sent-74, score-0.097]

23 The first important system of subspaces of RSn for our purposes are the window spaces Si1 . [sent-96, score-0.181]

24 The second system of spaces is related to the behavior of functions under translation. [sent-116, score-0.159]

25 We say that a space V ⊆ RSn is a left Sn –module if it is invariant to left-translation in the sense that for any f ∈ V and τ ∈ Sn , Tτ f ∈ V . [sent-119, score-0.082]

26 In particular, RSn is a (left Sn –)invariant space, therefore RSn = ⊕t∈Tn Mt (3) for some set {Mt } of irreducible modules. [sent-121, score-0.136]

27 To understand the interplay between modules and window spaces, observe that each coset µi1 . [sent-123, score-0.445]

28 ik Sn−k has an internal notion of left–translation i (Tτ 1 . [sent-126, score-0.246]

29 ik must be decomposable into a sum of irreducible Sn−k – modules, Si1 . [sent-142, score-0.359]

30 ,i k Furthermore, the modules of different window spaces can be defined in such a way that Mt 1 = µi1 ,. [sent-152, score-0.273]

31 ik is an Sn−k –module in the sense of being invariant i1 . [sent-162, score-0.28]

32 the space U = ⊕i1 ,...,ik Mt i1 ...ik is fully Sn –invariant, and therefore we must also have U = ⊕α∈A Mα , [sent-172, score-0.223]

33 where the Mα are now irreducible Sn –modules. [sent-175, score-0.136]

34 Whenever a relationship of this type holds between two sets of irreducible Sn – resp. [sent-176, score-0.136]

35 Sn−k –modules, we say that the {Mα } modules are induced by {Mti1 . [sent-177, score-0.167]

36 In particular, there is no guarantee that the {Mα } induced modules will be amongst the modules featured in (3). [sent-182, score-0.314]

37 then the adapted modules at different levels of the coset tree are connected via Mti1 . [sent-185, score-0.5]

38 ik Mt ∀ t ∈ Tn−k , (6) t ∈t↑n where t ↑n := { t ∈ Tn | t ↓n−k = t } and t ↓n−k is the tableau that we get by removing the boxes containing n − k + 1, . [sent-191, score-0.223]

39 We will give an explicit description of the adapted modules in Section 4. [sent-196, score-0.191]

40 3 Coset based multiresolution analysis on Sn Our guiding principle in defining an analog of Mallat’s axioms for permutations is that the resulting multiresolution analysis should reflect the multiscale structure of the tree of cosets. [sent-198, score-0.772]

41 At the same time, we also want the {Vk } spaces to be invariant to translation. [sent-199, score-0.164]

42 ik Sn−k , 0 otherwise, (7) we propose the following definition. [sent-206, score-0.223]

43 Definition 1 We say that a sequence of spaces V0 ⊆ V1 ⊆ . [sent-207, score-0.132]

44 ⊆ Vn−1 = RSn forms a left-invariant coset based multiresolution analysis (L-CMRA) for Sn if L1. [sent-210, score-0.525]

45 Given any left-translation invariant space Vk , the unique Vk+1 that satisfies axioms L1–L3 is Vk+1 := i1 . [sent-226, score-0.114]

46 ik so V0 determines the entire sequence of spaces V0 , V1 , . [sent-239, score-0.33]

47 To gain a better understanding of L-CMRA, we exploit the fact that (by axiom L1) each Vk is Sn –invariant, and is therefore a sum of irreducible Sn –modules. [sent-247, score-0.179]

48 ik The expression for Wk follows from the general formula Vk+1 = Vk ⊕ Wk . [sent-295, score-0.223]

49 It so happens that Mt i1 ...ik is just the trivial invariant subspace of constant functions on µi1 ...ik Sn−k . [sent-304, score-0.11]

50 Therefore, this instance of L-CMRA is an exact analog of Haar wavelets: Vk will consist of all functions that are constant on each left Sn−k –coset. [sent-308, score-0.095]
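This Haar-like special case is easy to simulate: projecting onto Vk just averages f over each left Sn−k–coset, i.e. over all σ that share their last k values. A small sketch (ours, with 0-indexed tuples; the function names are hypothetical):

```python
import itertools
from collections import defaultdict

import numpy as np

n, k = 4, 2
perms = list(itertools.permutations(range(n)))

def project_Vk(f, k):
    # Project onto the Haar-like space V_k: functions constant on each left
    # S_{n-k}-coset, i.e. on the set of sigma that agree in their last k
    # values sigma(n), ..., sigma(n-k+1) (0-indexed here as s[n-k:]).
    buckets = defaultdict(list)
    for s in perms:
        buckets[s[n - k:]].append(s)
    g = {}
    for members in buckets.values():
        avg = sum(f[s] for s in members) / len(members)
        for s in members:
            g[s] = avg
    return g

rng = np.random.default_rng(1)
f = {s: rng.standard_normal() for s in perms}
g = project_Vk(f, k)
# The projection is idempotent: projecting the projection changes nothing.
assert all(abs(project_Vk(g, k)[s] - g[s]) < 1e-12 for s in perms)
```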

51 1 Bi-invariant multiresolution analysis The left-invariant multiresolution of Definition 1 is appropriate for problems like ranking, where we have a natural permutation invariance with respect to relabeling the objects to be ranked, but not the ranks themselves. [sent-313, score-0.532]

52 4 Definition 2 We say that a sequence of spaces V0 ⊆ V1 ⊆ . [sent-319, score-0.132]

53 ⊆ Vn−1 = RSn forms a bi-invariant coset based multiresolution analysis (Bi-CMRA) for Sn if R Bi1. [sent-322, score-0.525]

54 A subspace U that is invariant to both left- and right-translation (i. [sent-333, score-0.081]

55 The main reason that Bi-CMRA is easier to describe than L-CMRA is that the irreducible two-sided modules in RSn , called isotypic subspaces, are unique. [sent-336, score-0.301]

56 Proposition 2 Given a set of partitions ν 0 ⊆ Λn , the corresponding Bi-CMRA comprises the spaces Vk = ⊕λ∈ν k Uλ , Wk = ⊕λ∈ν k+1 \ν k Uλ , (10) where ν k = ν 0 ↓n−k↑n . [sent-354, score-0.134]

57 Moreover, any system of spaces satisfying Definition 2 is of this form for some ν 0 ⊆ Λn . [sent-355, score-0.13]
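The restriction/induction sets ν ↓n−k↑n of Proposition 2 live on the Young lattice of partitions: ↓ removes one box in every legal way, ↑ adds one. A small sketch of this bookkeeping (ours; partitions as non-increasing tuples, and only the shape-level version, not the tableau-level t↑n of (6)):

```python
def down(lam):
    # Partitions of n-1 obtained by removing one box from partition lam.
    out = set()
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            new = list(lam)
            new[i] -= 1
            out.add(tuple(x for x in new if x > 0))
    return out

def up(lam):
    # Partitions of n+1 obtained by adding one box to partition lam.
    out = set()
    rows = list(lam) + [0]
    for i in range(len(rows)):
        if i == 0 or rows[i] < rows[i - 1]:
            new = list(rows)
            new[i] += 1
            out.add(tuple(x for x in new if x > 0))
    return out

def down_up(nu, k):
    # nu restricted k steps down the branching lattice, then induced back up.
    s = set(nu)
    for _ in range(k):
        s = set().union(*(down(l) for l in s))
    for _ in range(k):
        s = set().union(*(up(l) for l in s))
    return s

assert down((3, 1)) == {(2, 1), (3,)}
assert down_up({(3, 1)}, 1) == {(4,), (3, 1), (2, 2), (2, 1, 1)}
```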

58 4 Wavelets As mentioned in Section 2, to go from multiresolution analysis to orthogonal wavelets, one needs to define appropriate bases for the spaces V0 , W0 , W1 , . [sent-362, score-0.418]

59 This can be done via the close connection between irreducible modules and the {ρλ } irreducible representations (irreps), that we encountered in the context of the Fourier transform (1). [sent-366, score-0.501]

60 Thus, the orthonormal system of functions φt,t (σ) = t ∈ ν0 dλ /n! [sent-368, score-0.104]

61 Comparing with (1), we find that if we use these bases to compute the wavelet transform of a function, then the wavelet coefficients will just be rescaled versions of specific columns of the Fourier transform. [sent-375, score-0.847]

62 From the computational point of view, this is encouraging, because there are well-known and practical fast Fourier transforms (FFTs) available for Sn [12][13]. [sent-376, score-0.101]

63 On the other hand, it is also somewhat of a letdown, since it suggests that all that we have gained so far is a way to reinterpret parts of the Fourier transform as wavelet coefficients. [sent-377, score-0.445]

64 A solution to this dilemma emerges when we consider that since νk+1 \ νk = (ν0 ↓n−k−1↑n ) \ (ν0 ↓n−k↑n ) = ((ν0 ↓n−k−1↑n−k ) \ (ν0 ↓n−k )) ↑n , each of the Wk wavelet spaces of Proposition 1 can be rewritten as Mti1 . [sent-379, score-0.465]

65 ik Wk = ωk = (ν0 ↓n−k−1↑n−k ) \ (ν0 ↓n−k ), (15) i1 . [sent-382, score-0.223]

66 ik t∈ωk and similarly, the wavelet spaces of Proposition 2 can be rewritten as i Uλ1 . [sent-385, score-0.688]

67 ik Wk = ω k = (ν 0 ↓n−k−1↑n−k ) \ (ν 0 ↓n−k ), (16) i1 . [sent-388, score-0.223]

68 ik the M spaces is provided by the local Fourier basis functions i1 ψt,t. [sent-404, score-0.409]

69 ik Sn−k otherwise, (17) which are localized both in “frequency” and in “space”. [sent-414, score-0.223]

70 This basis also affirms the multiscale nature i1 . [sent-415, score-0.098]

71 of our wavelet spaces, since projecting onto the wavelet functions ψt,t′ i1 ...ik of a specific shape, say λ1 = (n − k − 2, 2), captures very similar information about functions in Si1 . [sent-418, score-0.799]

72 Taking (17) as our wavelet functions, we define the L-CMRA wavelet transform of a function f : Sn → R as the collection of column vectors wf (t) := (⟨f, φt,t′ ⟩)t′ ∈λ(t) and wf (t; i1 , . [sent-428, score-0.358]

73 Similarly, we define the Bi-CMRA wavelet transform of f as the collection of matrices wf (λ) := (⟨f, φt,t′ ⟩)t,t′ ∈λ and wf (λ; i1 , . [sent-441, score-0.616]

74 1 Overcomplete wavelet bases While the wavelet spaces W0 , . [sent-455, score-0.867]

75 , Wk−1 of Bi-CMRA are left- and right-invariant, the wavelets (17) still carry the mark of the coset tree, which is not a right-invariant object, since it branches in the specific order n, n − 1, n − 2, . [sent-458, score-0.49]

76 In contexts where wavelets are used as a means of promoting sparsity, this will bias us towards sparsity patterns that match the particular cosets featured in the coset tree. [sent-462, score-0.533]

77 , Wk−1 with the overcomplete system of wavelets i ψj1 . [sent-466, score-0.237]

78 5 Fast wavelet transforms In the absence of fast wavelet transforms, multiresolution analysis would only be of theoretical interest. [sent-496, score-1.063]

79 Fortunately, our wavelet transforms naturally lend themselves to efficient recursive computation along branches of the coset tree. [sent-497, score-0.742]

80 This is especially attractive when dealing with functions that are sparse, since subtrees that only have zeros at their leaves can be eliminated from the transform altogether. [sent-498, score-0.116]
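The payoff of pruning can be illustrated without any representation theory: walking the coset tree of a q-sparse f, branching on σ(n), then σ(n − 1), and so on, while skipping empty subtrees, visits at most 1 + q(n − 1) nodes instead of the full tree. A toy sketch (ours; it only counts visited nodes, it does not compute wavelet coefficients):

```python
import math

n = 6
# A sparse signal on S6, stored as {permutation tuple: value}; |supp(f)| = 3.
f = {(0, 1, 2, 3, 4, 5): 1.0,
     (5, 4, 3, 2, 1, 0): 2.0,
     (1, 0, 2, 3, 4, 5): -1.0}

def count_visited(support, depth=0):
    # Walk the coset tree top-down, branching on sigma(n), then sigma(n-1), ...
    # (0-indexed: position n-1-depth); empty subtrees are never entered.
    if depth == n - 1:
        return 1
    branches = {}
    for s in support:
        branches.setdefault(s[n - 1 - depth], []).append(s)
    return 1 + sum(count_visited(sub, depth + 1) for sub in branches.values())

visited = count_visited(list(f))
full = sum(math.factorial(n) // math.factorial(n - d) for d in range(n))
assert visited <= 1 + len(f) * (n - 1)  # at most q root-to-leaf paths
assert (visited, full) == (12, 1237)    # pruning visits 12 nodes, not 1237
```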

81–82 Algorithm 1: a high-level description of a recursive algorithm that computes the wavelet transform (18)–(19). [sent-502, score-0.223] [sent-514, score-0.445]

FastLCWT(f, (i1 . . . ik )) {
if k = n − 1 then return(Scalingν (v(f ))) end if
v ← 0
for each ik+1 ∈ {i1 . . .
. . . (i1 . . . ik+1 ))) end if
end for
output Waveletν↓n−k−1↑n−k \ν (v)
return Scalingν (v)
}

83 The function Scaling selects the subset of these vectors that are scaling coefficients, whereas Wavelet selects the wavelet coefficients. [sent-520, score-0.389]

84 ik ) = dλ (n−k) dλ ρλ ( ik+1 , n − k ) wg (t ; i1 . [sent-551, score-0.277]

85 ) operations, by working with the local wavelet functions (17) as opposed to (12) and (14), if f is sparse, Algorithm 1 needs only polynomial time. [sent-563, score-0.387]

86 Proposition 3 Given f : Sn → R such that |supp(f )| ≤ q, and ν0 ⊆ Tn , Algorithm 1 can compute the L-CMRA wavelet coefficients (18)–(19) in n2 N q scalar operations, where N = Σt∈ν1 dλ(t) . [sent-564, score-0.358]

87 The analogous Bi-CMRA transform runs in n2 M q time, where M = Σλ∈ν 1 dλ2 . [sent-565, score-0.087]

88 The inverse wavelet transforms essentially follow the same computations in reverse and have similar complexity bounds. [sent-572, score-0.439]

89 7 6 Applications There is a range of applied problems involving permutations that could benefit from the wavelets defined in this paper. [sent-573, score-0.266]

90 On the other hand, Margk is exactly the wavelet space Wk−1 of the Bi-CMRA generated by ν 0 = {(n)} of Example 2. [sent-618, score-0.358]

91 Therefore, when p is q–sparse, noting that d(n−1,1) = n−1, by using the methods of the previous section, we can find its projection to each of these spaces in just O(n4 q) time. [sent-619, score-0.107]
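As a concrete instance, the first-order marginals underlying this projection can be read off a q-sparse p in O(nq) time, giving the doubly stochastic matrix M[i, j] = P(σ(j) = i). A minimal sketch (ours; the random sparse support is purely for illustration):

```python
import itertools

import numpy as np

n = 4
perms = list(itertools.permutations(range(n)))
rng = np.random.default_rng(2)

# A q-sparse distribution p over S4 (uniform on a random support of size q = 3).
support = rng.choice(len(perms), size=3, replace=False)
p = {perms[i]: 1.0 / 3.0 for i in support}

# First-order marginals M[i, j] = P(sigma(j) = i): O(nq) work for q-sparse p.
M = np.zeros((n, n))
for s, ps in p.items():
    for j, i in enumerate(s):
        M[i, j] += ps

# M is doubly stochastic: rows and columns each sum to 1.
assert np.allclose(M.sum(axis=0), 1.0) and np.allclose(M.sum(axis=1), 1.0)
```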

92 However, observing target i at track j will zero out p everywhere outside the coset µj Sn−k µi−1 , which is difficult for the Fourier approach to handle. [sent-625, score-0.279]

93 In fact, by analogy with (7), denoting the operator that projects to the space of functions supported on this coset by Pij , the new distribution will just be Pij p. [sent-626, score-0.308]

94 However, while each observation makes p less smooth, it also makes it more concentrated, suggesting that this problem is ideally suited to a sparse representation in terms of the overcomplete basis functions of Section 4. [sent-630, score-0.13]

95 The important departure from the fast wavelet transforms of Section 5 is that now, to find the optimally sparse representation of p, we must allow branching to two-sided cosets of the form µj1 . [sent-632, score-0.52]

96 7 Conclusions Starting from the self-similar structure of the Sn−k coset tree, we developed a framework for wavelet analysis on the symmetric group. [sent-639, score-0.665]

97 Our framework resembles Mallat’s multiresolution analysis in its axiomatic foundations, yet is closer to continuous wavelet transforms in its invariance properties. [sent-640, score-0.685]

98 In a certain special case we recover the analog of Haar wavelets on the coset tree. In general, wavelets can circumvent the rigidity of the Fourier approach when dealing with functions that are sparse and/or have discontinuities, and, in contrast to the O(n2 n! [sent-642, score-0.772]

99 ) complexity of the best FFTs, for sparse functions and a reasonable choice of ν0 , our fast wavelet transform runs in O(np ) time for some small p. [sent-643, score-0.518]

100 Importantly, wavelets also provide a natural basis for sparse approximations, which have hitherto not been explored much in the context of permutations. [sent-644, score-0.261]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sn', 0.392), ('wavelet', 0.358), ('vk', 0.342), ('coset', 0.279), ('multiresolution', 0.246), ('rsn', 0.242), ('ik', 0.223), ('wavelets', 0.187), ('ff', 0.182), ('fourier', 0.16), ('modules', 0.142), ('irreducible', 0.136), ('mt', 0.133), ('wk', 0.108), ('spaces', 0.107), ('tn', 0.099), ('transform', 0.087), ('transforms', 0.081), ('permutations', 0.079), ('wf', 0.076), ('young', 0.068), ('mallat', 0.068), ('analog', 0.066), ('ffts', 0.066), ('invariant', 0.057), ('axioms', 0.057), ('fastlcwt', 0.056), ('tableaux', 0.056), ('wg', 0.054), ('orthonormal', 0.052), ('basis', 0.05), ('vn', 0.049), ('adapted', 0.049), ('multiscale', 0.048), ('cc', 0.048), ('bases', 0.044), ('tracking', 0.044), ('appendix', 0.043), ('fft', 0.043), ('axiom', 0.043), ('haar', 0.041), ('permutation', 0.04), ('proposition', 0.039), ('cmra', 0.037), ('cosets', 0.037), ('gavish', 0.037), ('isotypics', 0.037), ('margk', 0.037), ('maslen', 0.037), ('mra', 0.037), ('syt', 0.037), ('translation', 0.035), ('coef', 0.035), ('module', 0.034), ('hammond', 0.033), ('group', 0.032), ('scaling', 0.031), ('featured', 0.03), ('dilation', 0.03), ('mk', 0.03), ('tree', 0.03), ('functions', 0.029), ('harmonic', 0.029), ('coifman', 0.028), ('symmetric', 0.028), ('ranking', 0.028), ('partitions', 0.027), ('overcomplete', 0.027), ('nested', 0.027), ('subspaces', 0.027), ('discontinuities', 0.026), ('advocated', 0.026), ('tk', 0.025), ('projecting', 0.025), ('say', 0.025), ('subspace', 0.024), ('branches', 0.024), ('kondor', 0.024), ('window', 0.024), ('sparse', 0.024), ('serious', 0.023), ('pij', 0.023), ('notion', 0.023), ('system', 0.023), ('called', 0.023), ('recursively', 0.023), ('rankings', 0.022), ('supp', 0.021), ('copy', 0.021), ('orthogonal', 0.021), ('simplest', 0.021), ('fast', 0.02), ('guestrin', 0.02), ('matrices', 0.019), ('jk', 0.019), ('approximations', 0.019), ('jiang', 0.019), ('cients', 0.019), ('decompositions', 0.019), ('huang', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 234 nips-2012-Multiresolution analysis on the symmetric group

Author: Risi Kondor, Walter Dempsey

Abstract: There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking. 1

2 0.21633875 363 nips-2012-Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination

Author: Won H. Kim, Deepti Pachauri, Charles Hatt, Moo. K. Chung, Sterling Johnson, Vikas Singh

Abstract: Hypothesis testing on signals defined on surfaces (such as the cortical surface) is a fundamental component of a variety of studies in Neuroscience. The goal here is to identify regions that exhibit changes as a function of the clinical condition under study. As the clinical questions of interest move towards identifying very early signs of diseases, the corresponding statistical differences at the group level invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests over all surface points), very few regions may survive. In contrast to hypothesis tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local topological context of the signal around each surface vertex. Our descriptors are based on recent results from harmonic analysis that show how wavelet theory extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide strong evidence that these descriptors successfully pick up group-wise differences, where traditional methods either fail or yield unsatisfactory results. Other than this primary application, we show how the framework allows performing cortical surface smoothing in the native space without mapping to a unit sphere. 1

3 0.17499648 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems; then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gains better reconstruction results on real MR images than general tree-based solvers or algorithms. 1

4 0.17324062 255 nips-2012-On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes

Author: Bruno Scherrer, Boris Lesner

Abstract: We consider infinite-horizon stationary γ-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. Using Value and Policy Iteration with some error at each iteration, it is well-known that one can compute stationary policies that are 2γ/(1−γ)2 -optimal. After arguing that this guarantee is tight, we develop variations of Value and Policy Iteration for computing non-stationary policies that can be up to 2γ/(1−γ)-optimal, which constitutes a significant improvement in the usual situation when γ is close to 1. Surprisingly, this shows that the problem of “computing near-optimal non-stationary policies” is much simpler than that of “computing near-optimal stationary policies”. 1

5 0.095137432 259 nips-2012-Online Regret Bounds for Undiscounted Continuous Reinforcement Learning

Author: Ronald Ortner, Daniil Ryabko

Abstract: We derive sublinear regret bounds for undiscounted reinforcement learning in continuous state space. The proposed algorithm combines state aggregation with the use of upper confidence bounds for implementing optimism in the face of uncertainty. Besides the existence of an optimal policy which satisfies the Poisson equation, the only assumptions made are Hölder continuity of rewards and transition probabilities. 1

6 0.085774049 249 nips-2012-Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

7 0.07459376 254 nips-2012-On the Sample Complexity of Robust PCA

8 0.074035324 60 nips-2012-Bayesian nonparametric models for ranked data

9 0.064159811 128 nips-2012-Fast Resampling Weighted v-Statistics

10 0.062788114 179 nips-2012-Learning Manifolds with K-Means and K-Flats

11 0.062088858 21 nips-2012-A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes

12 0.056888044 281 nips-2012-Provable ICA with Unknown Gaussian Noise, with Implications for Gaussian Mixtures and Autoencoders

13 0.054363795 280 nips-2012-Proper losses for learning from partial labels

14 0.053896822 297 nips-2012-Robustness and risk-sensitivity in Markov decision processes

15 0.050062761 25 nips-2012-A new metric on the manifold of kernel matrices with application to matrix geometric means

16 0.048321474 44 nips-2012-Approximating Concavely Parameterized Optimization Problems

17 0.048160378 37 nips-2012-Affine Independent Variational Inference

18 0.047320303 202 nips-2012-Locally Uniform Comparison Image Descriptor

19 0.046409875 233 nips-2012-Multiresolution Gaussian Processes

20 0.04391538 246 nips-2012-Nonparametric Max-Margin Matrix Factorization for Collaborative Prediction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.121), (1, -0.013), (2, -0.013), (3, -0.029), (4, 0.003), (5, 0.039), (6, 0.007), (7, -0.0), (8, 0.001), (9, 0.024), (10, 0.003), (11, -0.016), (12, -0.02), (13, -0.041), (14, -0.02), (15, -0.018), (16, 0.084), (17, 0.004), (18, 0.082), (19, -0.089), (20, 0.005), (21, 0.116), (22, -0.025), (23, -0.023), (24, 0.015), (25, 0.146), (26, -0.119), (27, 0.114), (28, -0.16), (29, 0.197), (30, 0.0), (31, 0.029), (32, 0.031), (33, -0.158), (34, 0.039), (35, -0.171), (36, -0.021), (37, 0.15), (38, -0.114), (39, 0.09), (40, -0.011), (41, -0.006), (42, 0.11), (43, 0.046), (44, 0.131), (45, -0.036), (46, -0.034), (47, 0.028), (48, -0.016), (49, -0.042)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97171199 234 nips-2012-Multiresolution analysis on the symmetric group

Author: Risi Kondor, Walter Dempsey

Abstract: There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking. 1

2 0.78679204 363 nips-2012-Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination

Author: Won H. Kim, Deepti Pachauri, Charles Hatt, Moo. K. Chung, Sterling Johnson, Vikas Singh

Abstract: Hypothesis testing on signals defined on surfaces (such as the cortical surface) is a fundamental component of a variety of studies in Neuroscience. The goal here is to identify regions that exhibit changes as a function of the clinical condition under study. As the clinical questions of interest move towards identifying very early signs of diseases, the corresponding statistical differences at the group level invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests over all surface points), very few regions may survive. In contrast to hypothesis tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local topological context of the signal around each surface vertex. Our descriptors are based on recent results from harmonic analysis, that show how wavelet theory extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide strong evidence that these descriptors successfully pick up group-wise differences, where traditional methods either fail or yield unsatisfactory results. Other than this primary application, we show how the framework allows performing cortical surface smoothing in the native space without mappint to a unit sphere. 1

3 0.57629037 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems; then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gains better reconstruction results on real MR images than general tree-based solvers or algorithms. 1

4 0.49267715 128 nips-2012-Fast Resampling Weighted v-Statistics

Author: Chunxiao Zhou, Jiseong Park, Yun Fu

Abstract: In this paper, a novel and computationally fast algorithm for computing weighted v-statistics in resampling both univariate and multivariate data is proposed. To avoid any real resampling, we have linked this problem with finite group action and converted it into a problem of orbit enumeration. For further computational cost reduction, an efficient method is developed to list all orbits by their symmetry orders and calculate all index function orbit sums and data function orbit sums recursively. The computational complexity analysis shows reduction in the computational cost from n! or nn level to low-order polynomial level. 1

5 0.39452684 255 nips-2012-On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes

Author: Bruno Scherrer, Boris Lesner

Abstract: We consider infinite-horizon stationary γ-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. Using Value and Policy Iteration with some error at each iteration, it is well-known that one can compute stationary policies that are 2γ/(1−γ)2 -optimal. After arguing that this guarantee is tight, we develop variations of Value and Policy Iteration for computing non-stationary policies that can be up to 2γ/(1−γ)-optimal, which constitutes a significant improvement in the usual situation when γ is close to 1. Surprisingly, this shows that the problem of “computing near-optimal non-stationary policies” is much simpler than that of “computing near-optimal stationary policies”. 1

6 0.3695735 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

7 0.36918113 21 nips-2012-A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes

8 0.34825391 249 nips-2012-Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

9 0.33475158 281 nips-2012-Provable ICA with Unknown Gaussian Noise, with Implications for Gaussian Mixtures and Autoencoders

10 0.32713193 268 nips-2012-Perfect Dimensionality Recovery by Variational Bayesian PCA

11 0.32224423 44 nips-2012-Approximating Concavely Parameterized Optimization Problems

12 0.31984147 37 nips-2012-Affine Independent Variational Inference

13 0.30669904 17 nips-2012-A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and Tighter Bound

14 0.30558699 260 nips-2012-Online Sum-Product Computation Over Trees

15 0.30137458 202 nips-2012-Locally Uniform Comparison Image Descriptor

16 0.29213983 53 nips-2012-Bayesian Pedigree Analysis using Measure Factorization

17 0.29054168 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

18 0.28781724 46 nips-2012-Assessing Blinding in Clinical Trials

19 0.2817513 179 nips-2012-Learning Manifolds with K-Means and K-Flats

20 0.27952009 159 nips-2012-Image Denoising and Inpainting with Deep Neural Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.045), (17, 0.011), (21, 0.036), (27, 0.015), (38, 0.079), (39, 0.028), (42, 0.023), (54, 0.054), (55, 0.022), (74, 0.059), (76, 0.113), (80, 0.138), (84, 0.254), (92, 0.04)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80982661 234 nips-2012-Multiresolution analysis on the symmetric group

Author: Risi Kondor, Walter Dempsey

Abstract: There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking.
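As a concrete illustration of the Fourier transform on Sn that this line of work builds on, f̂(λ) = Σ_σ f(σ) ρλ(σ), the sketch below evaluates the sum for the reducible defining representation of S_n by permutation matrices rather than the irreducible representations ρλ used in the paper; the function names are our own, not from the paper.

```python
from itertools import permutations
import numpy as np

def perm_matrix(sigma, n):
    """Permutation matrix P with P[sigma(i), i] = 1, acting on {0, ..., n-1}."""
    P = np.zeros((n, n))
    for i, j in enumerate(sigma):
        P[j, i] = 1.0
    return P

def fourier_defining(f, n):
    """Sum_{sigma in S_n} f(sigma) * rho(sigma), where rho is the
    (reducible) defining representation by permutation matrices."""
    F = np.zeros((n, n))
    for sigma in permutations(range(n)):
        F += f[sigma] * perm_matrix(sigma, n)
    return F

# Example: the constant function f(sigma) = 1 on S_3.
n = 3
f = {sigma: 1.0 for sigma in permutations(range(n))}
F = fourier_defining(f, n)
# Entry F[j, i] counts permutations mapping i to j: (n-1)! for every entry.
```

The loop over all n! permutations makes explicit why this naive transform is infeasible beyond small n, which is the bottleneck the band-limited and wavelet approximations are designed to avoid.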

2 0.77467042 59 nips-2012-Bayesian nonparametric models for bipartite graphs

Author: Francois Caron

Abstract: We develop a novel Bayesian nonparametric model for random bipartite graphs. The model is based on the theory of completely random measures and is able to handle a potentially infinite number of nodes. We show that the model has appealing properties and in particular it may exhibit a power-law behavior. We derive a posterior characterization, a generative process for network growth, and a simple Gibbs sampler for posterior simulation. Our model is shown to be well fitted to several real-world social networks.

3 0.76656365 51 nips-2012-Bayesian Hierarchical Reinforcement Learning

Author: Feng Cao, Soumya Ray

Abstract: We describe an approach to incorporating Bayesian priors in the MAXQ framework for hierarchical reinforcement learning (HRL). We define priors on the primitive environment model and on task pseudo-rewards. Since models for composite tasks can be complex, we use a mixed model-based/model-free learning approach to find an optimal hierarchical policy. We show empirically that (i) our approach results in improved convergence over non-Bayesian baselines, (ii) using both task hierarchies and Bayesian priors is better than either alone, (iii) taking advantage of the task hierarchy reduces the computational cost of Bayesian reinforcement learning and (iv) in this framework, task pseudo-rewards can be learned instead of being manually specified, leading to hierarchically optimal rather than recursively optimal policies.

4 0.75867182 102 nips-2012-Distributed Non-Stochastic Experts

Author: Varun Kanade, Zhenming Liu, Bozidar Radunovic

Abstract: We consider the online distributed non-stochastic experts problem, where the distributed system consists of one coordinator node that is connected to k sites, and the sites are required to communicate with each other via the coordinator. At each time-step t, one of the k site nodes has to pick an expert from the set {1, . . . , n}, and the same site receives information about payoffs of all experts for that round. The goal of the distributed system is to minimize regret at time horizon T, while simultaneously keeping communication to a minimum. The two extreme solutions to this problem are: (i) Full communication: This essentially simulates the non-distributed setting to obtain the optimal O(√(log(n)T)) regret bound at the cost of T communication. (ii) No communication: Each site runs an independent copy – the regret is O(√(log(n)kT)) and the communication is 0. This paper shows the difficulty of simultaneously achieving regret asymptotically better than √(kT) and communication better than T. We give a novel algorithm that for an oblivious adversary achieves a non-trivial trade-off: regret O(√(k^{5(1+ε)/6} T)) and communication O(T/k^ε), for any value of ε ∈ (0, 1/5). We also consider a variant of the model, where the coordinator picks the expert. In this model, we show that the label-efficient forecaster of Cesa-Bianchi et al. (2005) already gives us a strategy that is near optimal in the regret vs communication trade-off.

5 0.63572574 264 nips-2012-Optimal kernel choice for large-scale two-sample tests

Author: Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, Kenji Fukumizu, Bharath K. Sriperumbudur

Abstract: Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis that p = q, based on the value of a test statistic measuring the distance between the samples. One choice of test statistic is the maximum mean discrepancy (MMD), which is a distance between embeddings of the probability distributions in a reproducing kernel Hilbert space. The kernel used in obtaining these embeddings is critical in ensuring the test has high power, and correctly distinguishes unlike distributions with high probability. A means of parameter selection for the two-sample test based on the MMD is proposed. For a given test level (an upper bound on the probability of making a Type I error), the kernel is chosen so as to maximize the test power, and minimize the probability of making a Type II error. The test statistic, test threshold, and optimization over the kernel parameters are obtained with cost linear in the sample size. These properties make the kernel selection and test procedures suited to data streams, where the observations cannot all be stored in memory. In experiments, the new kernel selection approach yields a more powerful test than earlier kernel selection heuristics.
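A minimal sketch of the MMD statistic that this abstract is built around, using a fixed Gaussian RBF kernel rather than the paper's optimized kernel choice; the function name and the bandwidth parameter are illustrative, not from the paper.

```python
import numpy as np

def mmd2_biased(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y,
    with Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        # Pairwise squared distances via broadcasting, then the kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(0.5, 1.0, size=(200, 1))   # mean-shifted distribution
Z = rng.normal(0.0, 1.0, size=(200, 1))   # same distribution as X
# The statistic should be larger for (X, Y) than for (X, Z).
```

The biased estimator is always non-negative and is exactly zero when X and Y are the same sample; the paper's contribution is choosing gamma (and the test threshold) to maximize test power, which this sketch does not attempt.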

6 0.62484848 197 nips-2012-Learning with Recursive Perceptual Representations

7 0.62104905 121 nips-2012-Expectation Propagation in Gaussian Process Dynamical Systems

8 0.62074685 279 nips-2012-Projection Retrieval for Classification

9 0.61906964 168 nips-2012-Kernel Latent SVM for Visual Recognition

10 0.61625028 173 nips-2012-Learned Prioritization for Trading Off Accuracy and Speed

11 0.6159299 218 nips-2012-Mixing Properties of Conditional Markov Chains with Unbounded Feature Functions

12 0.61519247 251 nips-2012-On Lifting the Gibbs Sampling Algorithm

13 0.61304617 229 nips-2012-Multimodal Learning with Deep Boltzmann Machines

14 0.61283654 200 nips-2012-Local Supervised Learning through Space Partitioning

15 0.61189228 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

16 0.61156625 100 nips-2012-Discriminative Learning of Sum-Product Networks

17 0.60928321 287 nips-2012-Random function priors for exchangeable arrays with applications to graphs and relational data

18 0.60878092 108 nips-2012-Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

19 0.60796195 83 nips-2012-Controlled Recognition Bounds for Visual Learning and Exploration

20 0.60737318 77 nips-2012-Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models