nips nips2012 nips2012-36 knowledge-graph by maker-knowledge-mining

36 nips-2012-Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions


Source: pdf

Author: Alexandra Carpentier, Rémi Munos

Abstract: We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations of the function. We construct a sampling scheme that samples more often in regions where the function oscillates more, while allocating the samples such that they are well spread on the domain (this notion shares similitude with low discrepancy). We prove that the estimate returned by the algorithm is almost as accurate as the estimate that an optimal oracle strategy (that would know the variations of the function everywhere) would return, and provide a finite-sample analysis.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations of the function. [sent-7, score-0.155]

2 We construct a sampling scheme that samples more often in regions where the function oscillates more, while allocating the samples such that they are well spread on the domain (this notion shares similitude with low discrepancy). [sent-8, score-0.288]

3 We prove that the estimate returned by the algorithm is almost as accurate as the estimate that an optimal oracle strategy (that would know the variations of the function everywhere) would return, and provide a finite-sample analysis. [sent-9, score-0.402]

4 Since the considered functions are differentiable, if the domain is stratified in K hyper-cubic strata of same measure and if one assigns uniformly at random n/K samples per stratum, the mean squared error of the resulting stratified estimate is in $O(n^{-1} K^{-2/d})$. [sent-19, score-0.956]

5 The resulting estimate has a mean squared error of order $O(n^{-(1+2/d)})$. [sent-22, score-0.159]
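
The scaling claims in the two sentences above can be checked numerically. Below is a minimal sketch (not the authors' code) of crude Monte-Carlo versus uniform stratified Monte-Carlo with K = m^d hyper-cubic strata and n/K uniform samples per stratum; the test integrand f, the grid size m, and the budget n are arbitrary choices for illustration.

```python
import numpy as np

def f(x):
    # Arbitrary smooth test integrand on [0,1]^d (illustration only, d >= 2 here).
    return np.sin(2 * np.pi * x[:, 0]) * np.exp(x[:, 1])

def crude_mc(f, n, d, rng):
    x = rng.random((n, d))
    return f(x).mean()

def uniform_stratified_mc(f, n, m, d, rng):
    # K = m^d hyper-cubic strata of side 1/m and measure 1/K, n // K samples per stratum.
    K = m ** d
    per = n // K
    corners = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"), -1).reshape(-1, d)
    est = 0.0
    for c in corners:
        u = (c + rng.random((per, d))) / m     # uniform draw inside the stratum
        est += f(u).mean() / K                 # stratum weight w_k = 1/K
    return est

rng = np.random.default_rng(0)
d, m = 2, 8
n = (m ** d) * 16
print("crude:     ", crude_mc(f, n, d, rng))
print("stratified:", uniform_stratified_mc(f, n, m, d, rng))
```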

6 The arguments that advocate for stratifying in strata of same measure and minimal diameter are closely linked to the reasons why quasi Monte-Carlo methods, or low discrepancy sampling schemes, are efficient techniques for integrating smooth functions. [sent-23, score-0.905]

7 It is minimax-optimal to stratify the domain in n strata and sample one point per stratum, but it would also be interesting to adapt the stratification of the space with respect to the function f . [sent-25, score-0.794]

8 For example, if the function has larger variations in a region of the domain, we would like to discretize the domain in smaller strata in this region, so that more samples are assigned to this region. [sent-26, score-0.79]

9 However, an efficient algorithm should allocate the samples in order to estimate online the variations of the function in each region of the domain while, at the same time, allocating more samples in regions where f has larger local variations. [sent-28, score-0.369]

10 The papers [5, 7, 3] provide algorithms for solving a similar trade-off when the stratification is fixed: these algorithms allocate more samples to strata in which the function has larger variations. [sent-29, score-0.678]

11 It first stratifies the domain in K ≪ n strata, and then allocates uniformly to each stratum an initial small amount of samples in order to estimate roughly the variations of the function per stratum. [sent-33, score-0.838]

12 Then our algorithm substratifies each of the K strata according to the estimated local variations, so that there are in total approximately n sub-strata, and allocates one point per sub-stratum. [sent-34, score-0.763]

13 In that way, our algorithm discretizes the domain into more refined strata in regions where the function has higher variations. [sent-35, score-0.709]

14 • We introduce the algorithm LMC-UCB, that sub-stratifies the K strata in hyper-cubic substrata, and samples one point per sub-stratum. [sent-39, score-0.702]

15 The number of sub-strata per stratum is linked to the variations of the function in the stratum. [sent-40, score-0.59]

16 We prove that algorithm LMC-UCB is asymptotically as efficient as the optimal oracle strategy. [sent-41, score-0.214]

17 By tuning the number of strata K wisely, it is possible to build an algorithm that is almost as efficient as the optimal oracle strategy. [sent-43, score-0.789]

18 Section 3 states the asymptotic lower bound on the mean squared error of the optimal oracle strategy. [sent-46, score-0.398]

19 In this Section, we also provide an intuition on how the number of samples in each stratum should be linked to the variation of the function in the stratum in order for the mean squared error of the estimate to be small. [sent-47, score-1.175]

20 Section 5 finally states that the algorithm LMC-UCB is almost as efficient as the optimal oracle strategy. [sent-49, score-0.168]

21 The strata of the refined layer are referred to as sub-strata, and we sample in the sub-strata. [sent-57, score-0.602]

22 We will compare the performances of the algorithms we construct with the performances of the optimal oracle algorithm that has access to the variations $\|\nabla f(x)\|_2$ of the function f everywhere in the domain, and is allowed to sample the domain where it wishes. [sent-58, score-0.373]

23 The first step is to partition the domain [0, 1]d in K measurable strata. [sent-59, score-0.131]

24 This enables us to partition, in a natural way, the domain in K hyper-cubic strata $(\Omega_k)_{k\le K}$ of the same measure $w_k = \frac{1}{K}$. [sent-61, score-0.854]

25 Each of these strata is a region of the domain $[0,1]^d$, and the K strata form a partition of the domain. [sent-62, score-1.335]

26 We write $\mu_k = \frac{1}{w_k}\int_{\Omega_k} f(x)\,dx$ the mean and $\sigma_k^2 = \frac{1}{w_k}\int_{\Omega_k} \big(f(x) - \mu_k\big)^2\,dx$ the variance of a sample of the function f when sampling f at a point chosen at random according to the Lebesgue measure conditioned to stratum $\Omega_k$. [sent-63, score-1.004]
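
As a concrete reading of these definitions, the quantities µk and σk² can be approximated by plain Monte-Carlo inside one hyper-cubic stratum. The sketch below is an illustration only; the stratum corner a, side length h, and integrand are hypothetical.

```python
import numpy as np

def stratum_mean_var(f, a, h, n_samples, rng):
    # Uniform samples in the hyper-cubic stratum Omega_k = prod_j [a_j, a_j + h].
    x = a + h * rng.random((n_samples, len(a)))
    y = f(x)
    return y.mean(), y.var()        # Monte-Carlo estimates of mu_k and sigma_k^2

rng = np.random.default_rng(3)
f = lambda x: np.sin(2 * np.pi * x[:, 0]) * x[:, 1]
mu_k, sigma2_k = stratum_mean_var(f, a=np.array([0.0, 0.5]), h=0.5, n_samples=10_000, rng=rng)
print(mu_k, sigma2_k)
```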

27 We denote by A an algorithm that sequentially allocates the budget by sampling at round t in the stratum indexed by $k_t \in \{1, \dots, K\}$. [sent-66, score-0.675]

28 We consider strategies that sub-partition each stratum Ωk in hyper-cubes of same measure in Ωk, but of heterogeneous measure among the Ωk. [sent-70, score-0.575]

29 In this way, the number of sub-strata in each stratum Ωk can adapt to the variations of f within Ωk. [sent-71, score-0.561]

30 The algorithms that we consider return a sub-partition of each stratum Ωk in Sk sub-strata. [sent-72, score-0.468]

31 We call Nk = (Ωk,i )i≤Sk the sub-partition of stratum Ωk . [sent-73, score-0.468]

32 Let us write $\mu_{k,i} = \frac{1}{w_{k,i}}\int_{\Omega_{k,i}} f(x)\,dx$ the mean and $\sigma_{k,i}^2 = \frac{1}{w_{k,i}}\int_{\Omega_{k,i}} \big(f(x) - \mu_{k,i}\big)^2\,dx$ the variance of a sample of f in sub-stratum $\Omega_{k,i}$ (e.g. [sent-77, score-0.142]

33 For example, consider that all K strata are hyper-cubes of the same measure $1/K$ and that each stratum $\Omega_k$ is partitioned into $S_k$ hyper-rectangles $\Omega_{k,i}$ of minimal diameter and same measure $\frac{1}{K S_k}$. [sent-82, score-1.178]
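
One way to realise the sub-partition described here, assuming the stratum is an axis-aligned hyper-cube and $S_k^{1/d}$ is an integer (as required later in the paper), is sketched below; the corner, side length, and S_k values are illustrative.

```python
import numpy as np

def sub_stratify(corner, side, S_k, d):
    # Split a hyper-cubic stratum (lower corner, side length) into S_k equal
    # hyper-cubic sub-strata; requires S_k^(1/d) to be an integer.
    m = round(S_k ** (1.0 / d))
    assert m ** d == S_k, "S_k must be an exact d-th power"
    offsets = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"), -1).reshape(-1, d)
    return corner + offsets * (side / m), side / m   # sub-stratum corners and common side

corners, h = sub_stratify(np.array([0.0, 0.5]), 0.5, S_k=4, d=2)
print(corners, h)
```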

34 If the algorithm allocates one point per sub-stratum, its sampling scheme shares similarities with quasi Monte-Carlo sampling schemes, since the points at which the function is sampled are well spread. [sent-83, score-0.368]

35 Let us now consider an algorithm that first chooses the sub-partition (Nk )k and then allocates deterministically 1 sample uniformly at random in each sub-stratum Ωk,i . [sent-84, score-0.14]

36 For a given algorithm A that builds for each stratum k a sub-partition $N_k = (\Omega_{k,i})_{i\le S_k}$, we call pseudo-risk the quantity $L_n(A) = \sum_{k\le K}\sum_{i=1}^{S_k} w_{k,i}^2\,\sigma_{k,i}^2$. (1) [sent-87, score-0.513]
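
A direct numerical reading of the pseudo-risk in equation (1), as reconstructed above: given the sub-stratum measures w_{k,i} and variances σ_{k,i}², it is just a weighted sum. The inputs below are hypothetical.

```python
import numpy as np

def pseudo_risk(w, sigma2):
    # L_n(A) = sum over strata k and sub-strata i of w_{k,i}^2 * sigma_{k,i}^2
    return sum(float(np.sum(wk ** 2 * s2k)) for wk, s2k in zip(w, sigma2))

# Example: 2 strata of measure 1/2, split into 2 and 4 equal sub-strata.
w = [np.full(2, 0.25), np.full(4, 0.125)]
sigma2 = [np.array([0.10, 0.12]), np.array([0.02, 0.03, 0.02, 0.04])]
print(pseudo_risk(w, sigma2))
```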

37 a strategy that divides the domain in K = n hyper-cubic strata. [sent-91, score-0.155]

38 We will prove in the next Section that its asymptotic mean squared error is equal to $\Big(\frac{1}{12}\int_{[0,1]^d}\|\nabla f(x)\|_2^2\,dx\Big)\,\frac{1}{n^{1+\frac{2}{d}}}$. [sent-93, score-0.328]

39 This quantity is of order $n^{-1-2/d}$, which is smaller, as expected, than 1/n: this strategy is more efficient than crude Monte-Carlo. [sent-94, score-0.169]
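
The $n^{-1-2/d}$ rate claimed here is easy to observe empirically: with one uniform point per hyper-cubic stratum, the rescaled mean squared error should stabilise as n grows. A minimal sketch (illustration only; the integrand and repetition count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2

def f(x):
    return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2

def one_point_per_stratum(m):
    # n = m^d hyper-cubic strata of measure 1/n, one uniform point per stratum.
    grid = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"), -1).reshape(-1, d)
    u = (grid + rng.random(grid.shape)) / m
    return f(u).mean()

true_val = 1.0 / 3.0    # integral over [0,1]^2: the sine term integrates to zero
for m in (4, 8, 16, 32):
    n = m ** d
    mse = np.mean([(one_point_per_stratum(m) - true_val) ** 2 for _ in range(200)])
    print(n, mse, mse * n ** (1 + 2 / d))   # last column should be roughly constant
```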

40 Given this minimum asymptotic mean squared error of an optimal oracle strategy, we define the pseudo-regret of an algorithm A as $R_n(A) = L_n(A) - \Sigma\,\frac{1}{n^{1+2/d}}$. (3) [sent-98, score-0.381]

41 This pseudo-regret is the difference between the pseudo-risk of the estimate provided by algorithm A, and the lower bound on the optimal oracle mean squared error. [sent-99, score-0.3]

42 3 Discussion on the optimal asymptotic mean squared error. [sent-102, score-0.245]

43 3.1 Asymptotic lower bound on the mean squared error, and comparison with the Uniform stratified Monte-Carlo. A first part of the analysis of the exposed problem consists in finding a good point of comparison for the pseudo-risk. [sent-103, score-0.153]

44 The following Lemma states an asymptotic lower bound on the mean squared error of the optimal oracle sampling strategy. [sent-104, score-0.449]

45 Let $(\Omega_k^n)_{k\le n}$ be an arbitrary sequence of partitions of $[0,1]^d$ in n strata such that all the strata are hyper-cubes, and such that the maximum diameter of each stratum goes to 0 as $n \to +\infty$ (but the strata are allowed to have heterogeneous measures). [sent-106, score-2.386]

46 Let $\hat\mu_n$ be the stratified estimate of the function for the partition $(\Omega_k^n)_{k\le n}$ when there is one point pulled at random per stratum. [sent-107, score-0.155]

47 We have also the following equality for the asymptotic mean squared error of the uniform strategy. [sent-110, score-0.252]

48 For any $n = l^d$ such that l is an integer (and thus such that it is possible to partition the domain in n hyper-cubic strata of same measure), define $(\Omega_k^n)_{k\le n}$ as the sequence of partitions in hyper-cubic strata of same measure 1/n. [sent-112, score-1.387]

49 Let $\hat\mu_n$ be the stratified estimate of the function for the partition $(\Omega_k^n)_{k\le n}$ when there is one point pulled at random per stratum. [sent-113, score-0.155]

50 Then $\liminf_{n\to+\infty}\, n^{1+2/d}\,\mathbb{V}(\hat\mu_n) = \frac{1}{12}\int_{[0,1]^d} \|\nabla f(x)\|_2^2\, dx$. [sent-114, score-0.131]

51 The only difference is that the measure of each stratum $\Omega_k^n$ is 1/n and that in Step 2, instead of Fatou's Lemma, the dominated convergence theorem is required. [sent-116, score-0.498]

52 The optimal rate for the mean squared error, which is also the rate of the Uniform stratified Monte-Carlo in Lemma 2, is $n^{-1-2/d}$ and is attained with ideas of low discrepancy sampling. [sent-117, score-0.174]

53 Our aim is to build an adaptive sampling scheme, also sharing ideas with low discrepancy sampling, that attains this lower bound. [sent-120, score-0.177]

54 There is one main restriction in both Lemmas: we impose that the sequence of partitions $(\Omega_k^n)_{k\le n}$ is composed only of strata that have the shape of a hyper-cube. [sent-121, score-0.624]

55 This assumption is in fact reasonable: indeed, if the shape of the strata could be arbitrary, one could take the level sets (or approximate level sets, as the number of strata is limited by n) as strata, and this would lead to $\lim_{n\to\infty}\inf_{\Omega}\, n^{1+2/d}\,\mathbb{V}(\hat\mu_{n,\Omega}) = 0$. [sent-122, score-1.245]

56 The fact that the strata are hyper-cubes appears, in fact, in the bound. [sent-124, score-0.602]

57 3.2 An intuition of a good allocation: Piecewise linear functions. In this Subsection, we (i) provide an example where the asymptotic optimal mean squared error is also the optimal mean squared error at finite distance and (ii) provide explicitly what is, in that case, a good allocation. [sent-137, score-0.427]

58 Let us assume that the function f is affine on all strata $\Omega_k$, i. [sent-140, score-0.602]

59 on stratum $\Omega_k$, we have $f(x) = \big(\langle \theta_k, x\rangle + \rho_k\big)\,\mathbb{I}\{x \in \Omega_k\}$. [sent-142, score-0.468]

60 In that case µk = f (ak ) where ak is the center of the stratum Ωk . [sent-143, score-0.494]

61 We then have: $\sigma_k^2 = \frac{1}{w_k}\int_{\Omega_k}\big(f(x) - f(a_k)\big)^2\,dx = \frac{1}{w_k}\int_{\Omega_k}\langle\theta_k,\, x - a_k\rangle^2\,dx = \frac{\|\theta_k\|_2^2}{12}\,w_k^{2/d} = \frac{1}{w_k}\,\frac{\|\theta_k\|_2^2}{12}\,w_k^{1+2/d}$. [sent-144, score-0.808]

62 We consider also a sub-partition of $\Omega_k$ in $S_k$ hyper-cubes of same size (we assume that $S_k^{1/d}$ is an integer), and we assume that in each sub-stratum $\Omega_{k,i}$, we sample one point. [sent-145, score-0.147]

63 We also have $\sigma_{k,i}^2 = \frac{\|\theta_k\|_2^2}{12}\big(\frac{w_k}{S_k}\big)^{2/d}$ for sub-stratum $\Omega_{k,i}$. [sent-146, score-0.147]

64 The pseudo-risk of an algorithm A that divides each stratum $\Omega_k$ in $S_k$ sub-strata is thus $L_n(A) = \sum_k \frac{w_k^2}{S_k}\,\sigma_{k,i}^2 = \sum_k \frac{w_k^2\,\|\theta_k\|_2^2}{12\,S_k}\Big(\frac{w_k}{S_k}\Big)^{2/d} = \sum_k \frac{w_k^{2+2/d}\,\|\theta_k\|_2^2}{12\,S_k^{1+2/d}}$. [sent-148, score-0.652]
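
Under the piecewise-linear model, the pseudo-risk expression reconstructed above is an explicit function of the allocation (S_1, …, S_K), so different allocations can be compared directly. The sketch below uses hypothetical values of w_k and of the slope norms ||θ_k||_2; the proportional allocation should give a smaller pseudo-risk than the uniform one.

```python
import numpy as np

def pseudo_risk_pw_linear(w, theta_norm, S, d):
    # L_n(A) = sum_k w_k^{2+2/d} * ||theta_k||_2^2 / (12 * S_k^{1+2/d})
    return float(np.sum(w ** (2 + 2 / d) * theta_norm ** 2 / (12 * S ** (1 + 2 / d))))

d, K, n = 2, 4, 256
w = np.full(K, 1.0 / K)                        # equal-measure strata
theta_norm = np.array([0.5, 2.0, 4.0, 1.0])    # hypothetical local slopes ||theta_k||_2
sigma = theta_norm * w ** (1.0 / d) / np.sqrt(12.0)
lam = (w * sigma) ** (d / (d + 1)); lam /= lam.sum()
S_uniform = np.full(K, n / K)
S_oracle = lam * n
print(pseudo_risk_pw_linear(w, theta_norm, S_uniform, d),
      pseudo_risk_pw_linear(w, theta_norm, S_oracle, d))
```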

65 In the specific case of functions that are piecewise linear, we have $\Sigma_K = \sum_k \Big(w_k\,\frac{\|\theta_k\|_2}{\sqrt{12}}\,w_k^{1/d}\Big)^{\frac{d}{d+1}} = \int_{[0,1]^d}\Big(\frac{\|\nabla f(x)\|_2}{\sqrt{12}}\Big)^{\frac{d}{d+1}}\,dx$. [sent-153, score-0.147]

66 This optimal oracle strategy attains the lower bound in Lemma 1. [sent-155, score-0.26]

67 It takes as parameter a partition $(\Omega_k)_{k\le K}$ in $K \le n$ hyper-cubic strata of same measure 1/K (it is possible since we assume that there exists $l \in \mathbb{N}$ with $l^d = K$). [sent-163, score-0.688]

68 The aim of algorithm LMC-UCB is to sub-stratify each stratum $\Omega_k$ in $\lambda_{K,k}\, n = \frac{(w_k\sigma_k)^{\frac{d}{d+1}}}{\sum_{i=1}^{K}(w_i\sigma_i)^{\frac{d}{d+1}}}\, n$ hyper-cubic sub-strata of same measure and sample one point per sub-stratum. [sent-165, score-0.577]
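
The oracle proportions λ_{K,k} ∝ (w_k σ_k)^{d/(d+1)} described in this sentence can be computed directly once the per-stratum standard deviations are known. A minimal sketch with hypothetical values of w_k and σ_k:

```python
import numpy as np

def oracle_allocation(w, sigma, n, d):
    # lambda_k proportional to (w_k * sigma_k)^(d/(d+1)); S_k ~ lambda_k * n sub-strata.
    weights = (w * sigma) ** (d / (d + 1))
    lam = weights / weights.sum()
    return lam, np.maximum(1, np.floor(lam * n)).astype(int)

w = np.full(4, 0.25)                         # K = 4 strata of equal measure
sigma = np.array([0.05, 0.20, 0.40, 0.10])   # hypothetical per-stratum standard deviations
lam, S = oracle_allocation(w, sigma, n=256, d=2)
print(lam, S)    # more sub-strata where the function varies more
```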

69 Algorithm LMC-UCB starts by sub-stratifying each stratum $\Omega_k$ in $\bar{S} = \Big\lfloor \big(\tfrac{n}{K}\big)^{\frac{d}{d+1}\cdot\frac{1}{d}} \Big\rfloor^{d}$ hyper-cubic strata of same measure. [sent-167, score-1.07]

70 It then sub-stratifies again each stratum Ωk using the information collected. [sent-171, score-0.468]

71 It sub-stratifies each stratum $\Omega_k$ in $S_k = \max\Bigg( \bigg\lfloor \bigg( \frac{\big(w_k\,(\hat\sigma_{k,K\bar S} + A\,(\frac{w_k}{\bar S})^{1/d})\big)^{\frac{d}{d+1}}}{\sum_{i=1}^{K}\big(w_i\,(\hat\sigma_{i,K\bar S} + A\,(\frac{w_i}{\bar S})^{1/d})\big)^{\frac{d}{d+1}}}\,(n - K\bar S) \bigg)^{1/d} \bigg\rfloor^{d},\ \bar S \Bigg)$ (7) hyper-cubic strata of same measure (see Figure 1 for a definition of A). [sent-172, score-1.446]

72 We call this sub-stratification of stratum Ωk stratification Nk = (Ωk,i )i≤Sk . [sent-174, score-0.468]

73 In the last Equation, we compute the empirical standard deviation in stratum $\Omega_k$ at time $K\bar S$ as $\hat\sigma_{k,K\bar S} = \sqrt{\frac{1}{\bar S}\sum_{i=1}^{\bar S}\Big(X_{k,i} - \frac{1}{\bar S}\sum_{j=1}^{\bar S} X_{k,j}\Big)^2}$. [sent-175, score-0.468]
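
A small sketch of this per-stratum empirical standard deviation computed from the initial stage samples; the synthetic inputs and the plain 1/S̄ normalisation are illustrative assumptions.

```python
import numpy as np

def empirical_std(samples_per_stratum):
    # samples_per_stratum: one 1-d array of observed values Y = f(X) per stratum.
    return np.array([np.sqrt(np.mean((x - x.mean()) ** 2)) for x in samples_per_stratum])

rng = np.random.default_rng(2)
initial_stage = [rng.normal(0.0, s, size=16) for s in (0.05, 0.2, 0.4, 0.1)]
print(empirical_std(initial_stage))
```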

74 It is possible to do that since, by definition of $S_k$, $\sum_k S_k + K\bar S \le n$. The algorithm outputs an estimate $\hat\mu_n$ of the integral of f, computed with the first point in each sub-stratum of partition $N_k$. [sent-177, score-0.146]

75 2 High probability lower bound on the number of sub-strata of stratum Ωk. We first state an assumption on the function f. [sent-182, score-0.524]

76 The next Lemma states that with high probability, the number Sk of sub-strata of stratum Ωk, in which there is at least one point, adjusts “almost” to the unknown optimal proportions. [sent-184, score-0.5]

77 Lemma 3 Let Assumption 1 be satisfied and (Ωk )k≤K be a partition in K hyper-cubic strata of same measure. [sent-185, score-0.658]

78 3 Remarks. A sampling scheme that shares ideas with quasi Monte-Carlo methods: Algorithm LMC-UCB almost manages to divide each stratum $\Omega_k$ in $\lambda_{K,k}\,n$ hyper-cubic strata of same measure, each one of them containing at least one sample. [sent-189, score-1.264]

79 It is thus possible to build a learning procedure that, at the same time, estimates the empirical proportions λK,k , and allocates the samples proportionally to them. [sent-190, score-0.187]

80 The error terms: There are two reasons why we are not able to divide exactly each stratum Ωk in λK,k n hyper-cubic strata of same measure. [sent-191, score-1.136]

81 The second reason is that we want to build strata that are hyper-cubes of same measure. [sent-193, score-0.621]

82 The number of strata $S_k$ thus needs to be such that $S_k^{1/d}$ is an integer. [sent-194, score-0.602]

83 Main results: asymptotic convergence of algorithm LMC-UCB. By just combining the result of Lemma 1 with the result of Lemma 3, it is possible to show that algorithm LMC-UCB is asymptotically (when K goes to +∞ and n ≥ K) as efficient as the optimal oracle strategy of Lemma 1. [sent-197, score-0.287]

84 Let $(\Omega_k^n)_{n,\,k\le K_n}$ be an arbitrary sequence of partitions such that all the strata are hyper-cubes, such that $4K_n \le n$, such [sent-199, score-0.624]

85 that the diameter of each stratum goes to 0, and such that $\lim_{n\to+\infty}\frac{1}{n}\,K_n\big(\log(K_n n^2)\big)^{\frac{d+1}{2}} = 0$. The regret of LMC-UCB with parameter $\delta_n = \frac{1}{n^2}$ on this sequence of partitions, where for sequence $(\Omega_k^n)_{n,\,k\le K_n}$ it disposes of n points, is such that $\lim_n n^{1+2/d}\, R_n(A_{LMC\text{-}UCB}) = 0$. [sent-200, score-0.682]

86 Now we can choose optimally the number of strata so that we minimize the regret. [sent-212, score-0.602]

87 Theorem 3 Under Assumptions 1 and 2, the algorithm LMC-UCB launched on $K_n = \big\lfloor(\sqrt{n})^{1/d}\big\rfloor^{d}$ hyper-cubic strata is such that, with probability $1-\delta$, we have $R_n(A_{LMC\text{-}UCB}) \le \frac{\big(3M\sqrt{d}\big)^{4}\sqrt{\log(n/\delta)}}{n^{1+\frac{2}{d}+\frac{1}{2(d+1)}}}$. [sent-213, score-0.618]
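
The choice of K_n in Theorem 3, as reconstructed above, rounds √n down to an exact d-th power so that the domain can still be cut into hyper-cubic strata of equal measure. A one-line sketch:

```python
import math

def choose_K(n, d):
    # K_n = floor(n^(1/(2d)))^d, i.e. the largest d-th power not exceeding sqrt(n).
    return math.floor(n ** (1.0 / (2 * d))) ** d

print([choose_K(n, 2) for n in (10**2, 10**3, 10**4, 10**5)])
```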

88 Discussion. Convergence of the algorithm LMC-UCB to the optimal oracle strategy: when the number of strata $K_n$ grows to infinity, but such that $\lim_{n\to+\infty}\frac{1}{n}\,K_n\big(\log(K_n n^2)\big)^{\frac{d+1}{2}} = 0$, the pseudo-regret of algorithm LMC-UCB converges to 0. [sent-215, score-0.786]

89 It means that this strategy is asymptotically as efficient as (the lower bound on) the optimal oracle strategy. [sent-216, score-0.272]

90 A new sampling scheme: The algorithm LMC-UCB samples the points in a way that takes advantage of both stratified sampling and quasi Monte-Carlo. [sent-218, score-0.233]

91 Indeed, LMC-UCB is designed to cumulate (i) the advantages of quasi Monte-Carlo by spreading the samples in the domain and (ii) the advantages of stratified, adaptive sampling by allocating more samples where the function has larger variations. [sent-219, score-0.354]

92 In order for the pseudo-regret to be negligible when compared to the optimal oracle mean squared error of the estimate (which is of order $n^{-1-\frac{2}{d}}$), it is necessary that $\mathrm{poly}(d)\,n^{-\frac{1}{2(d+1)}}$ is negligible compared to 1. [sent-223, score-0.331]

93 This is unavoidable, since stratified sampling shrinks the approximation error to the asymptotic oracle only if the diameter of each stratum is small, i. [sent-225, score-0.799]

94 The bound in Lemma 3 depends on poly(d) only because of rounding issues, coming from the fact that we aim at dividing each stratum Ωk in hyper-cubic sub-strata. [sent-230, score-0.551]

95 By modifying LMC-UCB so that it allocates the remaining budget uniformly at random on the domain, it is possible to prove that the (modified) algorithm is always at least as efficient as crude Monte-Carlo. [sent-232, score-0.28]

96 We first proposed a benchmark for measuring efficiency: we proved that the asymptotic mean squared error of the estimate outputted by the optimal oracle strategy is lower bounded by $\Sigma\,\frac{1}{n^{1+2/d}}$. [sent-234, score-0.455]

97 We then proposed an algorithm called LMC-UCB, which manages to learn the amplitude of the variations of f, to sample more points where these variations are larger, and to spread these points in a way that is related to quasi Monte-Carlo sampling schemes. [sent-235, score-0.313]

98 We proved that algorithm LMC-UCB is asymptotically as efficient as the optimal oracle strategy. [sent-236, score-0.164]

99 Under the assumption that f admits a Taylor expansion in each point, we provide also a finite time bound for the pseudo-regret of algorithm LMC-UCB. [sent-237, score-0.138]

100 When d is very large and n is not exponential in d, then second order terms, depending on the dimension, take over the bound in Lemma 2 (which is an asymptotic bound) and poly(d) appears in these negligible terms. [sent-243, score-0.144]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('strata', 0.602), ('stratum', 0.468), ('strati', 0.39), ('sk', 0.233), ('wk', 0.147), ('oracle', 0.12), ('allocates', 0.099), ('dx', 0.097), ('cb', 0.088), ('asymptotic', 0.085), ('nk', 0.085), ('crude', 0.081), ('quasi', 0.077), ('squared', 0.075), ('variations', 0.075), ('domain', 0.075), ('kn', 0.07), ('lemma', 0.064), ('lm', 0.059), ('strategy', 0.059), ('partition', 0.056), ('stratify', 0.053), ('sampling', 0.051), ('differentiable', 0.05), ('diameter', 0.048), ('carpentier', 0.046), ('poly', 0.046), ('allocating', 0.042), ('budget', 0.041), ('discrepancy', 0.041), ('uniform', 0.039), ('allocate', 0.038), ('samples', 0.038), ('expansion', 0.034), ('monte', 0.034), ('everywhere', 0.033), ('alm', 0.033), ('adaptive', 0.033), ('rounding', 0.033), ('bound', 0.033), ('admits', 0.032), ('optimal', 0.032), ('estimate', 0.031), ('proportions', 0.031), ('taylor', 0.031), ('carlo', 0.031), ('material', 0.03), ('appendix', 0.03), ('measure', 0.03), ('quantity', 0.029), ('asymptotically', 0.028), ('limn', 0.028), ('shares', 0.028), ('error', 0.027), ('per', 0.027), ('negligible', 0.026), ('wi', 0.026), ('heterogeneous', 0.026), ('ak', 0.026), ('mean', 0.026), ('uniformly', 0.025), ('realizable', 0.025), ('integral', 0.024), ('allocation', 0.024), ('assumption', 0.023), ('access', 0.022), ('intuition', 0.022), ('partitions', 0.022), ('pulled', 0.022), ('ln', 0.021), ('strategies', 0.021), ('divides', 0.021), ('isbn', 0.021), ('supplementary', 0.021), ('integration', 0.021), ('mc', 0.02), ('reasons', 0.02), ('returned', 0.02), ('linked', 0.02), ('subsection', 0.02), ('build', 0.019), ('write', 0.019), ('manages', 0.019), ('balls', 0.019), ('divide', 0.019), ('point', 0.019), ('ed', 0.019), ('adapt', 0.018), ('deduce', 0.018), ('prove', 0.018), ('inf', 0.018), ('lebesgue', 0.018), ('munos', 0.017), ('aim', 0.017), ('schemes', 0.016), ('goes', 0.016), ('attains', 0.016), ('algorithm', 0.016), ('regions', 0.016), ('lim', 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 36 nips-2012-Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions

Author: Alexandra Carpentier, Rémi Munos

Abstract: We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations of the function. We construct a sampling scheme that samples more often in regions where the function oscillates more, while allocating the samples such that they are well spread on the domain (this notion shares similitude with low discrepancy). We prove that the estimate returned by the algorithm is almost as accurate as the estimate that an optimal oracle strategy (that would know the variations of the function everywhere) would return, and provide a finite-sample analysis.

2 0.14971308 142 nips-2012-Generalization Bounds for Domain Adaptation

Author: Chao Zhang, Lei Zhang, Jieping Ye

Abstract: In this paper, we provide a new framework to study the generalization bound of the learning process for domain adaptation. We consider two kinds of representative domain adaptation settings: one is domain adaptation with multiple sources and the other is domain adaptation combining source and target data. In particular, we use the integral probability metric to measure the difference between two domains. Then, we develop the specific Hoeffding-type deviation inequality and symmetrization inequality for either kind of domain adaptation to achieve the corresponding generalization bound based on the uniform entropy number. By using the resultant generalization bound, we analyze the asymptotic convergence and the rate of convergence of the learning process for domain adaptation. Meanwhile, we discuss the factors that affect the asymptotic behavior of the learning process. The numerical experiments support our results. 1

3 0.10636472 184 nips-2012-Learning Probability Measures with respect to Optimal Transport Metrics

Author: Guillermo Canas, Lorenzo Rosasco

Abstract: We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space. By establishing a precise connection between optimal transport metrics, optimal quantization, and learning theory, we derive new probabilistic bounds for the performance of a classic algorithm in unsupervised learning (k-means), when used to produce a probability measure derived from the data. In the course of the analysis, we arrive at new lower bounds, as well as probabilistic upper bounds on the convergence rate of empirical to population measures, which, unlike existing bounds, are applicable to a wide class of measures.

4 0.097013168 261 nips-2012-Online allocation and homogeneous partitioning for piecewise constant mean-approximation

Author: Alexandra Carpentier, Odalric-ambrym Maillard

Abstract: In the setting of active learning for the multi-armed bandit, where the goal of a learner is to estimate with equal precision the mean of a finite number of arms, recent results show that it is possible to derive strategies based on finite-time confidence bounds that are competitive with the best possible strategy. We here consider an extension of this problem to the case when the arms are the cells of a finite partition P of a continuous sampling space X ⊂ Rd . Our goal is now to build a piecewise constant approximation of a noisy function (where each piece is one region of P and P is fixed beforehand) in order to maintain the local quadratic error of approximation on each cell equally low. Although this extension is not trivial, we show that a simple algorithm based on upper confidence bounds can be proved to be adaptive to the function itself in a near-optimal way, when |P| is chosen to be of minimax-optimal order on the class of α−H¨ lder functions. o 1 Setting and Previous work Let us consider some space X ⊂ Rd , and Y ⊂ R. We call X the input space or sampling space, Y the output space or value space. We consider the problem of estimating with uniform precision the function f : X ⊂ Rd → Y ⊂ R. We assume that we can query n times the function f , anywhere in the domain, and observe noisy samples of this function. These samples are collected sequentially, and our aim is to design an adaptive procedure that selects wisely where on the domain to query the function, according to the information provided by the previous samples. More formally: Observed process We consider an unknown Y-valued process defined on X , written ν : X → M+ (Y), where M+ (Y) refers to the set of all probability measures on Y, such that for all x ∈ X , 1 1 def the random variable Y (x) ∼ ν(x) has mean f (x) = E[Y (x)|x] ∈ R. We write for convenience the model in the following way Y (x) = f (x) + noise(x) , def where noise(x) = Y (x) − E[Y (x)|x] is the centered random variable corresponding to the noise, o with unknown variance σ 2 (x). We assume throughout this paper that f is α-H¨ lder. Partition We consider we can define a partition P of the input space X , with finitely many P regions {Rp }1≤p≤P that are assumed to be convex and not degenerated, i.e. such that the interior of each region Rp has positive Lebesgue volume vp . Moreover, with each region Rp is associated a sampling distribution in that region, written µp ∈ M+ (Rp ). Thus, when we decide to sample in 1 region Rp , a new sample X ∈ Rp is generated according to X ∼ µp . Allocation. We consider that we have a finite budget of n ∈ N samples that we can use in order to allocate samples as we wish among the regions {Rp }1≤p≤P . For illustration, let us assume that we deterministically allocate Tp,n ∈ N samples in region Rp , with the constraint that the allocation {Tp,n }1≤p≤P must some to n. In region Rp , we thus sample points {Xp,i }1≤p≤P at random 1 according to the sampling distribution µp , and then get the corresponding values {Yp,i }1≤i≤Tp,n , where Yp,i ∼ ν(Xp,i ). In the sequel, the distribution µp is assumed to be the uniform distribution dλ(x)1x∈R over region Rp , i.e. the density of µp is λ(Rp ) p where λ denotes the Lebesgue measure. Note that this is not restrictive since we are in an active, not passive setting. Piecewise constant mean-approximation. 
We use the collected samples in order to build a pieceˆ wise constant approximation fn of the mean f , and measure the accuracy of approximation on a region Rp with the expected quadratic norm of the approximation error, namely � � � � � ˆ (x))2 λ(dx) = Eµ ,ν (f (X) − mp,n )2 , ˆ (f (x) − fn E p λ(Rp ) Rp ˆ where mp,n is the constant value that takes fn on the region Rp . A natural choice for the estimator ˆ mp,n is to use the empirical mean that is unbiased and asymptotically optimal for this criterion. ˆ Thus we consider the following estimate (histogram) ˆ fn (x) = P � p=1 mp,n I{x ∈ Rp } where mp,n = ˆ ˆ Tp,n 1 � Tp,n Yp,i . i=1 Pseudo loss Note that, since the Tp,n are deterministic, the expected quadratic norm of the approximation error of this estimator can be written in the following form � � � � � � ˆ Eµp ,ν (f (X) − mp,n )2 ˆ = Eµp ,ν (f (X) − Eµp [f (X)])2 + Eµp ,ν (Eµp [f (X)] − mp,n )2 � � � � = Vµp f (X) + Vµp ,ν mp,n ˆ � � � � 1 Vµp ,ν Y (X) . = Vµp f (X) + Tp,n Now, using the following immediate decomposition � � � � � Vµp ,ν Y (X) = Vµp f (X) + σ 2 (x)µp (dx) , Rp we deduce that the maximal expected quadratic norm of the approximation error over the regions def {Rp }1≤p≤P , that depends on the choice of the considered allocation strategy A = {Tp,n }1≤p≤P is thus given by the following so-called pseudo-loss � � � � � � Tp,n + 1 1 def 2 (1) Vµp f (X) + Eµ σ (X) . Ln (A) = max 1≤ p ≤P Tp,n Tp,n p Our goal is to minimize this pseudo-loss. Note that this is a local measure of performance, as opposed to a more usual yet less challenging global quadratic error. Eventually, as the number of �� �2 � ˆ cells tends to ∞, this local measure of performance approaches supx∈X Eν f (x) − fn (x) . At this point, let us also introduce, for convenience, the notation Qp (Tp,n ) that denotes the term inside the max, in order to emphasize the dependency on the quadratic error with the allocation. Previous work There is a huge literature on the topic of functional estimation in batch setting. Since it is a rather old and well studied question in statistics, many books have been written on this topic, such as Bosq and Lecoutre [1987], Rosenblatt [1991], Gy¨ rfi et al. [2002], where piecewise constant meano approximation are also called “partitioning estimate” or “regressogram” (first introduced by Tukey [1947]). The minimax-optimal rate of approximation on the class of α-H¨ lder functions is known o 2α to be in O(n− 2α+d ) (see e.g. Ibragimov and Hasminski [1981], Stone [1980], Gy¨ rfi et al. [2002]). o In such setting, a dataset {(Xi , Yi )}i≤n is given to the learner, and a typical question is thus to try to find the best possible histogram in order to minimize a approximation error. Thus the dataset is fixed and we typically resort to techniques such as model selection where each model corresponds to one histogram (see Arlot [2007] for an extensive study of such). However, we here ask a very different question, that is how to optimally sample in an online setting in order to minimize the approximation error of some histogram. Thus we choose the histogram 2 before we see any sample, then it is fixed and we need to decide which cell to sample from at each time step. Motivation for this setting comes naturally from some recent works in the setting of active learning for the multi-armed bandit problem Antos et al. [2010], Carpentier et al. [2011]. 
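To make the objects just defined concrete, here is a minimal numerical sketch (illustrative only: the function, noise model, partition size and all names below are ours, not the authors'). It draws uniform samples in each cell of a homogeneous partition of [0, 1], forms the histogram estimate by averaging within each cell, and evaluates the per-cell quadratic error that the pseudo-loss controls.

```python
import numpy as np

# Minimal sketch: piecewise constant approximation of a noisy 1-d function
# on a fixed homogeneous partition of [0, 1] into P cells (hypothetical setup).
rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)           # unknown mean function (illustrative)
sigma = lambda x: 0.1 + 0.2 * x               # heteroscedastic noise level (illustrative)

P = 10                                        # number of cells in the partition
edges = np.linspace(0.0, 1.0, P + 1)

def sample_cell(p, T):
    """Draw T uniform points in cell p and observe noisy values Y = f(X) + noise."""
    X = rng.uniform(edges[p], edges[p + 1], size=T)
    Y = f(X) + sigma(X) * rng.standard_normal(T)
    return X, Y

# Deterministic allocation T_p (here uniform) and the histogram value m_p = mean of Y in cell p.
T = np.full(P, 50)
m_hat = np.array([sample_cell(p, T[p])[1].mean() for p in range(P)])

# Monte-Carlo evaluation of the per-cell quadratic error E[(f(X) - m_p)^2], X uniform on cell p.
Xg = rng.uniform(0, 1, 100_000)
cells = np.minimum((Xg * P).astype(int), P - 1)
per_cell_err = np.array([np.mean((f(Xg[cells == p]) - m_hat[p]) ** 2) for p in range(P)])
print("max per-cell quadratic error:", per_cell_err.max())
```

The largest of these per-cell errors plays the role of the maximum over p in the pseudo-loss; an adaptive allocation aims to drive all cells down at a comparable rate rather than only their average.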
In these works, the objective is to estimate with equal precision the mean of a finite number of distributions (arms), which would correspond to the special case when X = {1, . . . , P } is a finite set in our setting. Intuitively, we reduce the problem to such bandit problem with finite set of arms (regions), and our setting answers the question whether it is possible to extend those results to the case when the arms do not correspond to a singleton, but rather to a continuous region. We show that the answer is positive, yet non trivial. This is non trivial due to the variance estimation in each region: points x in some region may have different means f(x), so that standard estimators for the variance are biased, contrary to the point-wise case and thus finite-arm techniques may yield disastrous results. (Estimating the variance of the distribution in a continuous region actually needs to take into account not only the point-wise noise but also the variation of the function f and the noise level σ 2 in that region.) We describe a way, inspired from quasi Monte-Carlo techniques, to correct this bias so that we can handle the additional error. Also, it is worth mentioning that this setting can be informally linked to a notion of curiosity-driven learning (see Schmidhuber [2010], Baranes and Oudeyer [2009]), since we want to decide in which region of the space to sample, without explicit reward but optimizing the goal to understand the unknown environment. Outline Section 2 provides more intuition about the pseudo-loss and a result about the optimal oracle strategy when the domain is partitioned in a minimax-optimal way on the class of α−H¨ lder o functions. Section 3 presents our assumptions, that are basically to have a sub-Gaussian noise and smooth mean and variance functions, then our estimator of the pseudo-loss together with its concentration properties, before introducing our sampling procedure, called OAHPA-pcma. Finally, the performance of this procedure is provided and discussed in Section 4. 2 The pseudo-loss: study and optimal strategies 2.1 More intuition on each term in the pseudo-loss It is natural to look at what happens to each of the two terms that appear in equation 1 when one makes Rp shrink towards a point. More precisely, let xp be the mean of X ∼ µp and let us look at the limit of Vµp (f (X)) when vp goes to 0. Assuming that f is differentiable, we get �2 � �� lim Vµp (f (X)) = lim Eµp f (X) − f (xp ) − E[f (X) − f (xp )] vp →0 vp →0 = = = lim Eµp �� �X − xp , ∇f (xp )� − E[�X − xp , ∇f (xp )�] vp →0 � � lim Eµp �X − xp , ∇f (xp )�2 vp →0 � � lim ∇f (xp )T Eµp (X − xp )(X − xp )T ∇f (xp ) . �2 � vp →0 Therefore, if we introduce Σp to be the covariance matrix of the random variable X ∼ µp , then we simply have lim Vµp (f (X)) = lim ||∇f (xp )||2 p . Σ vp →0 vp →0 Example with hyper-cubic regions An important example is when Rp is a hypercube with side 1/d length vp and µp is the uniform distribution over the region Rp . In that case (see Lemma 1), we dx have µp (dx) = , and 2/d vp vp . ||∇f (xp )||2 p = ||∇f (xp )||2 Σ 12 More generally, when f is α−differentiable, i.e. that ∀a ∈ X , ∃∇α f (a, ·) ∈ Sd (0, 1)R such that ∀x ∈ Sd (0, 1), limh→0 f (a+hx)−f (a) = ∇α f (a, x), then it is not too difficult to show that for such hα hyper-cubic regions, we have � � � 2α � Vµp f (X) = O vpd sup |∇α f (xp , u)|2 . S(0,1) � � On the other hand, by direct computation, the second term is such that limvp →0 Eµp σ 2 (X) = � � � � σ 2 (xp ). 
Thus, while Vµp f (X) vanishes, Eµp σ 2 (X) stays bounded away from 0 (unless ν is deterministic). 3 2.2 Oracle allocation and homogeneous partitioning for piecewise constant mean-approximation. We now assume that we are allowed to choose the partition P depending on n, thus P = Pn , amongst all homogeneous partitions of the space, i.e. partitions such that all cells have the same volume, and come from a regular grid of the space. Thus the only free parameter is the number of cells Pn of the partition. An exact yet not explicit oracle algorithm. The minimization of the pseudo-loss (1) does not yield to a closed-form solution in general. However, we can still derive the order of the optimal loss (see [Carpentier and Maillard, 2012, Lemma 2] in the full version of the paper for an example of minimax yet non adaptive oracle � algorithm given in closed-form solution): � � −β � � � −α� � � Lemma 1 In the case when Vµp f (X) = Ω Pn and Rp σ 2 (x)µp (dx) = Ω Pn , then an � optimal allocation and partitioning strategy An satisfies that� � � � Vµp f (X) + Eµp σ 2 (X) � � , L − Vµp f (X) � as soon as there exists, for such range of Pn , a constant L such that � � � � � Pn � Vµp f (X) + Eµp σ 2 (X) � � = n. L − Vµp f (X) p=1 1 � Pn = Ω(n max(1+α� −β� ,1) ) and def � Tp,n = The pseudo-loss of such an algorithm A� , optimal amongst the allocations strategies that use the n � partition Pn in Pn regions, is then given by � � � � def max(1 − β , 1 − α ) − 1. where γ = Ln (A� ) = Ω nγ n max(1 + α� − β � , 1) The condition involving the constant L is here to ensure that the partition is not degenerate. It is morally satisfied as soon as the variance of f and the noise are bounded and n is large enough. This Lemma applies to the important class W 1,2 (R) of functions that admit a weak derivative that o belongs to L2 (R). Indeed these functions are H¨ lder with coefficient α = 1/2, i.e. we have o W 1,2 (R) ⊂ C 0,1/2 (R). The standard Brownian motion is an example of function that is 1/2-H¨ lder. More generally, for k = d + α with α = 1/2 when d is odd and α = 1 when d is even, we have the 2 inclusion W k,2 (Rd ) ⊂ C 0,α (Rd ) , where W k,2 (Rd ) is the set of functions that admit a k th weak derivative belonging to L2 (Rd ). Thus the previous Lemma applies to sufficiently smooth functions with smoothness linearly increasing with the dimension d of the input space X . Important remark Note that this Lemma gives us a choice of the partition that is minimax-optimal, and an allocation strategy on that partition that is not only minimax-optimal but also adaptive to the function f itself. Thus it provides a way to decide in a minimax way what is the good number of regions, and then to provide the best oracle way to allocate the budget. We can deduce the following immediate corollary on the class of α−H¨ lder functions observed in a o non-negligible noise of bounded variance (i.e. in the setting β � = 0 and α� = 2α ). d Corollary 1 Consider that f is α−H¨ lder and the noise is of bounded variance. Then a minimaxo d � d+2α ) and an optimal allocation achieves the rate L (A� ) = optimal partition satisfies Pn = Ω(n n n � −2α � Ω n d+2α . Moreover, the strategy of Lemma 1 is optimal amongst the allocations strategies that � use the partition Pn in Pn regions. � −2α � The rate Ω n d+2α is minimax-optimal on the class of α−H¨ lder functions (see Gy¨ rfi et al. [2002], o o Ibragimov and Hasminski [1981], Stone [1980]), and it is thus interesting to consider an initial numd � � d+2α ). 
After having built the partition, if the quantities ber �� � 2 �� � � of�regions Pn that is of order Pn = Ω(n Vµp f p≤P and Eµp σ p≤P are known to the learner, it is optimal, in the aim of minimizing � the pseudo-loss, to allocate to each region the number of samples Tp,n provided in Lemma 1. Our objective in this paper is, after having chosen beforehand a minimax-optimal partition, to allocate 4 the samples properly in the regions, without having any access to those quantities. It is then �� � � necessary to balance between exploration, i.e. allocating the samples in order to estimate Vµp f p≤P � � �� and Eµp σ 2 p≤P , and exploitation, i.e. use the estimates to target the optimal allocation. 3 Online algorithms for allocation and homogeneous partitioning for piecewise constant mean-approximation In this section, we now turn to the design of algorithms that are fully online, with the goal to be competitive against the kind of oracle algorithms considered in Section 2.2. We now assume that the space X = [0, 1]d is divided in Pn hyper-cubic regions of same measure (the Lebesgue measure on 1 [0, 1]d ) vp = v = Pn . The goal of an algorithm is to minimize the quadratic error of approximation of f by a constant over each cell, in expectation, which we write as � � � � � � 2 λ(dx) ˆ (x))2 λ(dx) = max E , max E (f (x) − fn (f (x) − mp,n ) ˆ 1≤p≤Pn 1≤p≤Pn λ(Rp ) λ(Rp ) Rp Rp ˆ where fn is the histogram estimate of the function f on the partition P and mp,n is the empirical ˆ mean defined on region Rp with the samples (Xi , Yi ) such that Xi ∈ Rp . To do so, an algorithm is only allowed to specify at each time step t, the next point Xt where to sample, based on all the past samples {(Xs , Ys )}s < ∞ satisfies that λ2 σ 2 (x) , ∀λ ∈ R+ log E exp[λ noise(x)] ≤ 2 and we further assume that it satisfies the following slightly stronger second property (that is for instance exactly verified for a Gaussian variable, looking at the moment generating function): � � � � 1 λ2 σ 2 (x) ∀λ, γ ∈ R+ log E exp λnoise(x) + γnoise(x)2 ≤ − log 1 − 2γσ 2 (x) . 2(1 − 2γσ 2 (x)) 2 5 The function f is assumed to be (L, α)-H¨ lder, meaning that it satifies o � ∀x, x ∈ X f (x) − f (x� ) ≤ L||x − x� ||α . Similarly, the function σ 2 is assumed to be (M, β)-H¨ lder i.e. it satisfies o � 2 2 � ∀x, x ∈ X σ (x) − σ (x ) ≤ M ||x − x� ||β . We assume that Y is a convex and compact subset of R, thus w.l.g. that it is [0, 1], and that it is known that ||σ 2 ||∞ , which is thus finite, is bounded by the constant 1. 3.2 Empirical estimation of the quadratic approximation error on each cell We define the sampling distribution µp in the region Rp for each p ∈ {1, . . . , Pn } as a quasi-uniform ˜ sampling scheme using the uniform distribution over the sub-regions. More precisely at time t ≤ n, if we decide to sample in the region Rp according to µp , we sample uniformly in each sub-region ˜ one sample, resulting in a new batch of samples {(Xt,k , Yt,k )}1≤k≤K , where Xt,k ∼ µp,k . Note that due to this sampling process, the number of points Tp,t sampled in sub-region Rp at time t is always Tp,t a multiple of K and that moreover for all k, k � ∈ {1, . . . , K} we have that Tp,k,t = Tp,k� ,t = K . Now this specific sampling is used in order to be able to estimate the variances Vµp f and Eµp σ 2 , � so that the best proportions Tp,n can be computed as accurately as possible. Indeed, as explained in � � � � Lemma 1, we have that Vµp f (X) + Eµp σ 2 (X) � def � � . Tp,n = L − Vµp f (X) ˆ Variance estimation We now introduce two estimators. 
The first estimator is written Vp,t and is def ˆ built in the following way. First,let us introduce the empirical estimate fp,k,t of the mean fp,k = � � Eµp,k f (X) of f in sub-region Rp,k . Similarly, to avoid some cumbersome notations, we introduce � � � � � � def def def 2 fp = Eµp f (X) and vp,k = Vµp,k f (X) for the function f , and then σp,k = Eµp,k σ 2 (X) for the variance of the noise σ 2 . We now define the empirical variance estimator to be K 1 � ˆ ˆ (fp,k,t − mp,t )2 , ˆ Vp,t = K −1 k=1 that is a biased estimator. Indeed, for a deterministic Tp,t , it is not difficult to show that we have � K K � � � � � � �� � � � � 2 1 �� 1 � ˆ E Vp,t + Eµp,k f − Eµp f = Vµp,k f + Eµp,k σ 2 . K −1 Tp,t k=1 k=1 � � The leading term in this decomposition, that is given by the first sum, is closed to Vµp f since, by using the assumption that f is (L, α)−H¨ lder, we have the following inequality o � � K � �� �� � �1 � � � � 2 2L2 dα � Eµp,k f − Eµp f − Vµp f (X) � ≤ , � �K (KPn )2α/d k=1 where we also used that the diameter of a sub-region Rp,k is given by diam(Rp,k ) = d1/2 . (KPn )1/d ˆ Then, the second term also contributes to the bias, essentially due to the fact that V[fp,k,t ] = � � � � 2 def def 1 1 2 2 2 Tp,k,t (vp,k + σp,k ) and not Tp,t (vk + σk ) (with vp = Vµp f (X) and σp = Eµp σ (X) ). ˆ p,k,t In order to correct this term, we now introduce the second estimator σ 2 that estimates the variance � � � � � � of the outputs in a region Rp,k , i.e. Vµp,k ,ν Y (X) = Vµp,k f (X) + Eµp,k σ 2 . It is defined as �2 t t �� 1 1 � def ˆ p,k,t = Yi − Yj I{Xj ∈ Rp,k } I{Xi ∈ Rp,k } . σ2 Tp,k,t − 1 i=1 Tp,k,t j=1 Now, we combine the two previous estimators to form the following estimator K 1 �� 1 1 � 2 ˆ ˆ ˆ σ − . Qp,t = Vp,t − K Tp,k,t Tp,t p,k,t k=1 ˆ The following proposition provides a high-probability bound on the difference between Qp,t and the quantity we want to estimate. We report the detailed proof in [Carpentier and Maillard, 2012]. 6 ˆ Proposition 1 By the assumption that f is (L, α)-H¨ lder, the bias of the estimator Qp,t , and for o deterministic Tp,t , is given by � K � � � � � � � � � 2 1 � 2L2 dα ˆ − Vµp f (X) ≤ . Eµp,k f − Eµp f E Qp,t − Qp (Tp,t ) = K (KPn )2α/d k=1 Moreover, it satisfies that for all δ ∈ [0, 1], there exists an event of probability higher than 1 − δ such that on this event, we have � � � � � � K K � � � � 8 log(4/δ) � σ 2 �1 � � � � ˆ p,k,t 1 � 2 ˆ ˆ � Qp,t − E Qp,t � ≤ � √ +o σ p,k . � � (K − 1)2 T2 T K K k=1 p,k,t p,k,t k=1 We also state the following Lemma that we are going to use in the analysis, and that takes into account randomness of the stopping times Tp,k,t . Lemma 2 Let {Xp,k,u }p≤P, k≤K, u≤n be samples potentially sampled in region Rp,k . We introduce qp,u to be the�equivalent of Qp (Tp,t ) with explicitly fixed value of Tp,t = u. Let also qp,u be the ˆ � ˆ p,t but computed with the first u samples in estimate of E qp,u , that is to say the equivalent of Q each region Rp,k (i.e. Tp,t = u). Let us define the event � � � � � � � AK log(4nP/δ)V � � ˆp,t 2L2 dα � � ξn,P,K (δ) = + ω : � qp,u (ω) − E qp,u � ≤ ˆ , u K −1 (KPn )2α/d p≤P u≤n �K 1 ˆ ˆ ˆ p,k,t and where A ≤ 4 is a numerical constant. Then it where Vp,t = Vp (Tp,t ) = K−1 k=1 σ 2 holds that � � P ξn,P,K (δ) ≥ 1 − δ . Note that, with the notations of this Lemma, Proposition 1 above is thus about qp,u . 
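As a concrete reading of the two estimators above, the following sketch computes the bias-corrected quantity for a single region by combining the between-sub-region variance estimate with the within-sub-region output variances. The data layout and names are ours, and it assumes at least two sub-regions and at least two samples per sub-region.

```python
import numpy as np

def q_hat(samples_per_subregion):
    """Sketch of the bias-corrected estimate of the quadratic error in one region R_p.

    `samples_per_subregion` is a list of K arrays, the k-th holding the observed
    values Y collected in sub-region R_{p,k} (an illustrative data layout)."""
    K = len(samples_per_subregion)
    T_pk = np.array([len(y) for y in samples_per_subregion], dtype=float)
    T_p = T_pk.sum()

    f_hat_pk = np.array([y.mean() for y in samples_per_subregion])   # per-sub-region means
    m_hat_p = np.concatenate(samples_per_subregion).mean()           # region-wide empirical mean

    # V_hat: empirical variance of the sub-region means around the region mean.
    V_hat = ((f_hat_pk - m_hat_p) ** 2).sum() / (K - 1)

    # sigma2_hat: empirical variance of the outputs inside each sub-region.
    sigma2_hat = np.array([y.var(ddof=1) for y in samples_per_subregion])

    # Correction term removing the extra within-sub-region contribution to the bias.
    correction = np.mean((1.0 / T_pk - 1.0 / T_p) * sigma2_hat)
    return V_hat - correction
```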
ˆ 3.3 The Online allocation and homogeneous partitioning algorithm for piecewise constant mean-approximation (OAHPA-pcma) We are now ready to state the algorithm that we propose for minimizing the quadratic error of approximation of f . The algorithm is described in Figure 1. Although it looks similar, this algorithm is ˆ quite different from a normal UCB algorithm since Qp,t decreases in expectation with Tp,t . Indeed, � � � � � �� �K � 1 its expectation is close to Vµp f + KTp,t k=1 Vµp,k f + Eµp,k σ 2 . Algorithm 1 OAHPA-pcma. 1: Input: A, L, α, Horizon n; Partition {Rp }p≤P , with sub-partitions {Rp,k }k≤K . 2: Initialization: Sample K points in every sub-region {Rp,k }p≤P,k≤K 3: for t = K 2 P + 1; t ≤ n; t = t + K do ˆ 4: Compute ∀p, Qp,t . � ˆ ˆ p,t + AK log(4nP/δ)Vp,t + 2L2 dα . 5: Compute ∀p, Bp,t = Q 2α/d Tp,t K−1 (KPn ) 6: Select the region pt = argmax1≤p≤Pn Bp,t where to sample. 7: Sample K samples in region Rpt one per sub-region Rpt ,k according to µpt ,k . 8: end for 4 Performance of the allocation strategy and discussion Here is the main result of the paper; see the full version [Carpentier and Maillard, 2012] for the proof. We remind that the objective is to minimize for an algorithm A the pseudo-loss Ln (A). Theorem 1 (Main result) Let γ = � maxp Tp,n � minp Tp,n be the distortion factor of the optimal allocation stratdef d d egy, and let � > 0. Then with the choice of the number of regions Pn = n 2α+d �2+ 2α , and of the 2d d def def 8L2 α number of sub-regions K = C 4α+d �−2− α , where C = Ad1−α then the pseudo-loss of the OAHPApcma algorithm satisfies, under the assumptions of Section 3.1 and on an event of probability higher than 1 − δ, � � � � � 2α 1 + �γC � log(1/δ) Ln (A� ) + o n− 2α+d , Ln (A) ≤ n for some numerical constant C � not depending on n, where A� is the oracle of Lemma 1. n 7 Minimax-optimal partitioning and �-adaptive performance Theorem 1 provides a high probability bound on the performance of the OAHPA-pcma allocation strategy. It shows that this performance is competitive with that of an optimal (i.e. adaptive to the function f , see Lemma 1) allocation d A� on a partition with a number of cells Pn chosen to be of minimax order n 2α+d for the class of 2α α-H¨ lder functions. In particular, since Ln (A� ) = O(n d+2α ) on that class, we recover the same o n minimax order as what is obtained in the batch learning setting, when using for instance wavelets, or Kernel estimates (see e.g. Stone [1980], Ibragimov and Hasminski [1981]). But moreover, due to the adaptivity of A� to the function itself, this procedure is also �-adaptive to the function and not n only minimax-optimal on the class, on that partition (see Section 2.2). Naturally, the performance of the method increases, in the same way than for any classical functional estimation method, when the smoothness of the function increases. Similarly, in agreement with the classical curse of dimension, the higher the dimension of the domain, the less efficient the method. Limitations In this work, we assume that the smoothness α of the function is available to the learner, which enables her to calibrate Pn properly. Now it makes sense to combine the OAHPApcma procedure with existing methods that enable to estimate this smoothness online (under a slightly stronger assumption than H¨ lder, such as H¨ lder functions that attain their exponents, o o see Gin´ and Nickl [2010]). 
It is thus interesting, when no preliminary knowledge on the smoothness e of f is available, to spend some of the initial budget in order to estimate α. We have seen that the OAHPA-pcma procedure, although very simple, manages to get minimax optimal results. Now the downside of the simplicity of the OAHPA-pcma strategy is two-fold. � The first limitation is that the factor (1 + �γC � log(1/δ)) = (1 + O(�)) appearing in the bound before Ln (A� ) is not 1, but higher than 1. Of course it is generally difficult to get a constant 1 in the batch setting (see Arlot [2007]), and similarly this is a difficult task in our online setting too: If � is chosen to be small, then the error with respect to the optimal allocation is small. However, since Pn is expressed as an increasing function of �, this implies that the minimax bound on the loss for partition P increases also with �. That said, in the view of the work on active learning multi-armed bandit that we extend, we would still prefer to get the optimal constant 1. The second limitation is more problematic: since K is chosen irrespective of the region Rp , this causes the presence of the factor γ. Thus the algorithm will essentially no longer enjoy near-optimal performance guarantees when the optimal allocation strategy is highly not homogeneous. Conclusion and future work In this paper, we considered online regression with histograms in an active setting (we select in which bean to sample), and when we can choose the histogram in a class of homogeneous histograms. Since the (unknown) noise is heteroscedastic and we compete not only with the minimax allocation oracle on α-H¨ lder functions but with the adaptive oracle o that uses a minimax optimal histogram and allocates samples adaptively to the target function, this is an extremely challenging (and very practical) setting. Our contribution can be seen as a non trivial extension of the setting of active learning for multi-armed bandits to the case when each arm corresponds to one continuous region of a sampling space, as opposed to a singleton, which can also be seen as a problem of non parametric function approximation. This new setting offers interesting challenges: We provided a simple procedure, based on the computation of upper confidence bounds of the estimation of the local quadratic error of approximation, and provided a performance analysis that shows that OAHPA-pcma is first order �-optimal with respect to the function, for a partition chosen to be minimax-optimal on the class of α-H¨ lder functions. However, this simplicity also o has a drawback if one is interested in building exactly first order optimal procedure, and going beyond these limitations is definitely not trivial: A more optimal but much more complex algorithm would indeed need to tune a different factor Kp in each cell in an online way, i.e. define some Kp,t that evolves with time, and redefine sub-regions accordingly. Now, the analysis of the OAHPA-pcma already makes use of powerful tools such as empirical-Bernstein bounds for variance estimation (and not only for mean estimation), which make it non trivial; in order to handle possibly evolving subregions and deal with the progressive refinement of the regions, we would need even more intricate analysis, due to the fact that we are online and active. This interesting next step is postponed to future work. 
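For orientation, the following schematic loop mirrors the structure of Algorithm 1 (OAHPA-pcma). It is not the authors' code: the `sample_region` interface is hypothetical, and the exact constants and exponents of the confidence bonus should be taken from the paper rather than from this sketch.

```python
import numpy as np

def oahpa_pcma_sketch(sample_region, P, K, n, delta=0.05, L=1.0, alpha=0.5, d=1, A=4.0):
    """Schematic rendering of the OAHPA-pcma loop (constants and bonus only indicative).

    `sample_region(p)` is assumed to return K new noisy observations for region p,
    one per sub-region R_{p,k} (a hypothetical interface)."""
    data = [[[] for _ in range(K)] for _ in range(P)]

    def pull(p):
        for k, y in enumerate(sample_region(p)):
            data[p][k].append(float(y))

    # Initialization: K samples in every sub-region, i.e. K pulls of each region.
    for p in range(P):
        for _ in range(K):
            pull(p)

    t = K * K * P
    while t + K <= n:
        B = np.empty(P)
        for p in range(P):
            ys = [np.asarray(data[p][k]) for k in range(K)]
            T_pk = np.array([len(y) for y in ys], dtype=float)
            T_p = T_pk.sum()
            f_hat = np.array([y.mean() for y in ys])
            m_hat = np.concatenate(ys).mean()
            V_hat = ((f_hat - m_hat) ** 2).sum() / (K - 1)
            sig2 = np.array([y.var(ddof=1) for y in ys])
            Q_hat = V_hat - np.mean((1.0 / T_pk - 1.0 / T_p) * sig2)
            # Schematic confidence bonus plus smoothness bias term (see the paper for exact forms).
            bonus = np.sqrt(A * K * np.log(4 * n * P / delta) * V_hat / T_p) / (K - 1)
            bias = 2 * L**2 * d**alpha / (K * P) ** (2 * alpha / d)
            B[p] = Q_hat + bonus + bias
        pull(int(np.argmax(B)))   # sample one new point per sub-region of the most uncertain region
        t += K
    return data
```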
Acknowledgements This research was partially supported by Nord-Pas-de-Calais Regional Council, French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270327 (CompLACS) and no. 216886 (PASCAL2). References András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712–2728, 2010. Sylvain Arlot. Rééchantillonnage et Sélection de modèles. PhD thesis, Université Paris Sud - Paris XI, 2007. A. Baranes and P.-Y. Oudeyer. R-IAC: Robust Intrinsically Motivated Exploration and Active Learning. IEEE Transactions on Autonomous Mental Development, 1(3):155–169, October 2009. D. Bosq and J.P. Lecoutre. Théorie de l'estimation fonctionnelle, volume 21. Economica, 1987. Alexandra Carpentier and Odalric-Ambrym Maillard. Online allocation and homogeneous partitioning for piecewise constant mean-approximation. HAL, 2012. URL http://hal.archives-ouvertes.fr/hal-00742893. Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, and Peter Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 189–203. Springer Berlin / Heidelberg, 2011. E. Giné and R. Nickl. Confidence bands in density estimation. The Annals of Statistics, 38(2):1122–1170, 2010. L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A distribution-free theory of nonparametric regression. Springer Verlag, 2002. I. Ibragimov and R. Hasminski. Statistical estimation: Asymptotic theory. 1981. M. Rosenblatt. Stochastic curve estimation, volume 3. Inst of Mathematical Statistic, 1991. J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). Autonomous Mental Development, IEEE Transactions on, 2(3):230–247, 2010. C.J. Stone. Optimal rates of convergence for nonparametric estimators. The Annals of Statistics, pages 1348–1360, 1980. J.W. Tukey. Non-parametric estimation II. Statistically equivalent blocks and tolerance regions – the continuous case. The Annals of Mathematical Statistics, 18(4):529–539, 1947.

5 0.092852205 298 nips-2012-Scalable Inference of Overlapping Communities

Author: Prem Gopalan, Sean Gerrish, Michael Freedman, David M. Blei, David M. Mimno

Abstract: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. 1

6 0.091182142 179 nips-2012-Learning Manifolds with K-Means and K-Flats

7 0.089052461 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

8 0.07984414 21 nips-2012-A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes

9 0.067875147 60 nips-2012-Bayesian nonparametric models for ranked data

10 0.067598626 285 nips-2012-Query Complexity of Derivative-Free Optimization

11 0.063743159 27 nips-2012-A quasi-Newton proximal splitting method

12 0.058613509 266 nips-2012-Patient Risk Stratification for Hospital-Associated C. diff as a Time-Series Classification Task

13 0.054632425 149 nips-2012-Hierarchical Optimistic Region Selection driven by Curiosity

14 0.053633668 335 nips-2012-The Bethe Partition Function of Log-supermodular Graphical Models

15 0.048212189 160 nips-2012-Imitation Learning by Coaching

16 0.044236176 166 nips-2012-Joint Modeling of a Matrix with Associated Text via Latent Binary Features

17 0.041711699 199 nips-2012-Link Prediction in Graphs with Autoregressive Features

18 0.041486781 319 nips-2012-Sparse Prediction with the $k$-Support Norm

19 0.039356302 32 nips-2012-Active Comparison of Prediction Models

20 0.039235733 85 nips-2012-Convergence and Energy Landscape for Cheeger Cut Clustering


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.109), (1, -0.015), (2, 0.062), (3, -0.028), (4, 0.013), (5, 0.044), (6, 0.028), (7, 0.035), (8, 0.041), (9, -0.012), (10, 0.007), (11, 0.032), (12, -0.049), (13, -0.103), (14, -0.114), (15, -0.041), (16, -0.029), (17, 0.008), (18, 0.0), (19, 0.015), (20, 0.011), (21, 0.028), (22, 0.041), (23, 0.034), (24, -0.01), (25, 0.028), (26, 0.028), (27, 0.148), (28, -0.06), (29, -0.003), (30, 0.089), (31, 0.167), (32, -0.113), (33, -0.116), (34, -0.094), (35, -0.047), (36, -0.039), (37, -0.068), (38, 0.041), (39, -0.053), (40, -0.053), (41, 0.051), (42, 0.02), (43, 0.042), (44, 0.063), (45, 0.131), (46, 0.023), (47, -0.061), (48, 0.074), (49, 0.004)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93495989 36 nips-2012-Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions

Author: Alexandra Carpentier, Rémi Munos

Abstract: We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations to the function. We construct a sampling scheme that samples more often in regions where the function oscillates more, while allocating the samples such that they are well spread on the domain (this notion shares similitude with low discrepancy). We prove that the estimate returned by the algorithm is almost similarly accurate as the estimate that an optimal oracle strategy (that would know the variations of the function everywhere) would return, and provide a finite-sample analysis. 1
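As an illustration of the kind of scheme the abstract describes, here is a simplified two-stage stratified Monte Carlo sketch: a uniform pilot pass estimates the per-stratum variability, and the remaining budget is allocated roughly proportionally to it (a Neyman-style rule). The paper's actual algorithm stratifies more finely and uses different allocation and confidence terms; the integrand, budget and stratum count below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(8 * np.pi * x**2)        # integrand with non-uniform oscillation (illustrative)

K, n, n_pilot = 20, 2000, 10                  # strata, total budget, pilot samples per stratum
edges = np.linspace(0.0, 1.0, K + 1)
w = np.diff(edges)                            # stratum measures (equal here)

# Stage 1: uniform pilot allocation to estimate the per-stratum standard deviation.
pilot = [f(rng.uniform(edges[k], edges[k + 1], n_pilot)) for k in range(K)]
std = np.array([y.std(ddof=1) for y in pilot])

# Stage 2: allocate the remaining budget roughly proportionally to w_k * std_k.
# (Rounding means the total may differ from the budget by a few samples; fine for a sketch.)
remaining = n - K * n_pilot
alloc = np.maximum(1, np.round(remaining * (w * std) / (w * std).sum()).astype(int))

estimate = 0.0
for k in range(K):
    y = np.concatenate([pilot[k], f(rng.uniform(edges[k], edges[k + 1], alloc[k]))])
    estimate += w[k] * y.mean()               # stratified estimate: sum of w_k * mean_k
print("stratified estimate of the integral:", estimate)
```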

2 0.77413034 142 nips-2012-Generalization Bounds for Domain Adaptation

Author: Chao Zhang, Lei Zhang, Jieping Ye

Abstract: In this paper, we provide a new framework to study the generalization bound of the learning process for domain adaptation. We consider two kinds of representative domain adaptation settings: one is domain adaptation with multiple sources and the other is domain adaptation combining source and target data. In particular, we use the integral probability metric to measure the difference between two domains. Then, we develop the specific Hoeffding-type deviation inequality and symmetrization inequality for either kind of domain adaptation to achieve the corresponding generalization bound based on the uniform entropy number. By using the resultant generalization bound, we analyze the asymptotic convergence and the rate of convergence of the learning process for domain adaptation. Meanwhile, we discuss the factors that affect the asymptotic behavior of the learning process. The numerical experiments support our results. 1
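The integral probability metric referred to in the abstract is a family of discrepancies; one widely used member is the maximum mean discrepancy (MMD) with a reproducing kernel. The sketch below (synthetic Gaussian samples, an RBF kernel, names chosen here) only illustrates how such a source/target discrepancy can be estimated from finite samples; it is not the specific quantity appearing in the paper's bounds.

```python
import numpy as np

def mmd2_rbf(X_src, X_tgt, gamma=1.0):
    """Estimate of squared MMD with an RBF kernel, one instance of an integral probability metric.
    Uses the standard U-statistic for the within-sample terms (can be slightly negative)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Kss, Ktt, Kst = k(X_src, X_src), k(X_tgt, X_tgt), k(X_src, X_tgt)
    m, n = len(X_src), len(X_tgt)
    return ((Kss.sum() - np.trace(Kss)) / (m * (m - 1))
            + (Ktt.sum() - np.trace(Ktt)) / (n * (n - 1))
            - 2 * Kst.mean())

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(200, 2))   # source-domain sample (synthetic)
X_tgt = rng.normal(0.5, 1.0, size=(200, 2))   # shifted target-domain sample (synthetic)
print("estimated squared MMD between domains:", mmd2_rbf(X_src, X_tgt))
```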

3 0.76686263 184 nips-2012-Learning Probability Measures with respect to Optimal Transport Metrics

Author: Guillermo Canas, Lorenzo Rosasco

Abstract: We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space. By establishing a precise connection between optimal transport metrics, optimal quantization, and learning theory, we derive new probabilistic bounds for the performance of a classic algorithm in unsupervised learning (k-means), when used to produce a probability measure derived from the data. In the course of the analysis, we arrive at new lower bounds, as well as probabilistic upper bounds on the convergence rate of empirical to population measures, which, unlike existing bounds, are applicable to a wide class of measures. 1 Introduction and Motivation In this paper we study the problem of learning from random samples a probability distribution supported on a manifold, when the learning error is measured using transportation metrics. The problem of learning a probability distribution is classic in statistics, and is typically analyzed for distributions in X = Rd that have a density with respect to the Lebesgue measure, with total variation, and L2 among the common distances used to measure closeness of two densities (see for instance [10, 32] and references therein.) The setting in which the data distribution is supported on a low dimensional manifold embedded in a high dimensional space has only been considered more recently. In particular, kernel density estimators on manifolds have been described in [36], and their pointwise consistency, as well as convergence rates, have been studied in [25, 23, 18]. A discussion on several topics related to statistics on a Riemannian manifold can be found in [26]. Interestingly, the problem of approximating measures with respect to transportation distances has deep connections with the fields of optimal quantization [14, 16], optimal transport [35] and, as we point out in this work, with unsupervised learning (see Sec. 4.) In fact, as described in the sequel, some of the most widely-used algorithms for unsupervised learning, such as k-means (but also others such as PCA and k-flats), can be shown to be performing exactly the task of estimating the data-generating measure in the sense of the 2-Wasserstein distance. This close relation between learning theory, and optimal transport and quantization seems novel and of interest in its own right. Indeed, in this work, techniques from the above three fields are used to derive the new probabilistic bounds described below. Our technical contribution can be summarized as follows: (a) we prove uniform lower bounds for the distance between a measure and estimates based on discrete sets (such as the empirical measure or measures derived from algorithms such as kmeans); (b) we provide new probabilistic bounds for the rate of convergence of empirical to population measures which, unlike existing probabilistic bounds, hold for a very large class of measures; 1 (c) we provide probabilistic bounds for the rate of convergence of measures derived from k-means to the data measure. The structure of the paper is described at the end of Section 2, where we discuss the exact formulation of the problem as well as related previous works. 2 Setup and Previous work Consider the problem of learning a probability measure ρ supported on a space M, from an i.i.d. sample Xn = (x1 , . . . , xn ) ∼ ρn of size n. 
We assume M to be a compact, smooth d-dimensional manifold of bounded curvature, with C 1 metric and volume measure λM , embedded in the unit ball of a separable Hilbert space X with inner product ·, · , induced norm · , and distance d (for d instance M = B2 (1) the unit ball in X = Rd .) Following [35, p. 94], let Pp (M) denote the Wasserstein space of order 1 ≤ p < ∞: Pp (M) := x p dρ(x) < ∞ ρ ∈ P (M) : M of probability measures P (M) supported on M, with finite p-th moment. The p-Wasserstein distance 1/p Wp (ρ, µ) = inf [E X − Y p ] : Law(X) = ρ, Law(Y ) = µ (1) X,Y where the random variables X and Y are distributed according to ρ and µ respectively, is the optimal expected cost of transporting points generated from ρ to those generated from µ, and is guaranteed to be finite in Pp (M) [35, p. 95]. The space Pp (M) with the Wp metric is itself a complete separable metric space [35]. We consider here the problem of learning probability measures ρ ∈ P2 (M), where the performance is measured by the distance W2 . There are many possible choices of distances between probability measures [13]. Among them, Wp metrizes weak convergence (see [35] theorem 6.9), that is, in Pp (M), a sequence (µi )i∈N of measures converges weakly to µ iff Wp (µi , µ) → 0 and their p-th order moments converge to that of µ. There are other distances, such as the L´ vy-Prokhorov, or the weak-* distance, that also metrize e weak convergence. However, as pointed out by Villani in his excellent monograph [35, p. 98], 1. “Wasserstein distances are rather strong, [...]a definite advantage over the weak-* distance”. 2. “It is not so difficult to combine information on convergence in Wasserstein distance with some smoothness bound, in order to get convergence in stronger distances.” Wasserstein distances have been used to study the mixing and convergence of Markov chains [22], as well as concentration of measure phenomena [20]. To this list we would add the important fact that existing and widely-used algorithms for unsupervised learning can be easily extended (see Sec. 4) to compute a measure ρ that minimizes the distance W2 (ˆn , ρ ) to the empirical measure ρ n ρn := ˆ 1 δx , n i=1 i a fact that will allow us to prove, in Sec. 5, bounds on the convergence of a measure induced by k-means to the population measure ρ. The most useful versions of Wasserstein distance are p = 1, 2, with p = 1 being the weaker of the two (by H¨ lder’s inequality, p ≤ q ⇒ Wp ≤ Wq .) In particular, “results in W2 distance are usually o stronger, and more difficult to establish than results in W1 distance” [35, p. 95]. A discussion of p = ∞ would take us out of topic, since its behavior is markedly different. 2.1 Closeness of Empirical and Population Measures By the strong law of large numbers, the empirical measure converges almost surely to the population measure: ρn → ρ in the sense of the weak topology [34]. Since weak convergence and convergence ˆ in Wp plus convergence of p-th moments are equivalent in Pp (M), this means that, in the Wp sense, the empirical measure ρn converges to ρ, as n → ∞. A fundamental question is therefore how fast ˆ the rate of convergence of ρn → ρ is. ˆ 2 2.1.1 Convergence in expectation The rate of convergence of ρn → ρ in expectation has been widely studied in the past, resultˆ ing in upper bounds of order EW2 (ρ, ρn ) = O(n−1/(d+2) ) [19, 8], and lower bounds of order ˆ EW2 (ρ, ρn ) = Ω(n−1/d ) [29] (both assuming that the absolutely continuous part of ρ is ρA = 0, ˆ with possibly better rates otherwise). 
More recently, an upper bound of order EWp (ρ, ρn ) = O(n−1/d ) has been proposed [2] by proving ˆ a bound for the Optimal Bipartite Matching (OBM) problem [1], and relating this problem to the expected distance EWp (ρ, ρn ). In particular, given two independent samples Xn , Yn , the OBM ˆ problem is that of finding a permutation σ that minimizes the matching cost n−1 xi −yσ(i) p [24, p ˆ ˆ ˆ 30]. It is not hard to show that the optimal matching cost is Wp (ˆXn , ρYn ) , where ρXn , ρYn are ρ the empirical measures associated to Xn , Yn . By Jensen’s inequality, the triangle inequality, and (a + b)p ≤ 2p−1 (ap + bp ), it holds EWp (ρ, ρn )p ≤ EWp (ˆXn , ρYn )p ≤ 2p−1 EWp (ρ, ρn )p , ˆ ρ ˆ ˆ and therefore a bound of order O(n−p/d ) for the OBM problem [2] implies a bound EWp (ρ, ρn ) = ˆ O(n−1/d ). The matching lower bound is only known for a special case: ρA constant over a bounded set of non-null measure [2] (e.g. ρA uniform.) Similar results, with matching lower bounds are found for W1 in [11]. 2.1.2 Convergence in probability Results for convergence in probability, one of the main results of this work, appear to be considerably harder to obtain. One fruitful avenue of analysis has been the use of so-called transportation, or Talagrand inequalities Tp , which can be used to prove concentration inequalities on Wp [20]. In particular, we say that ρ satisfies a Tp (C) inequality with C > 0 iff Wp (ρ, µ)2 ≤ CH(µ|ρ), ∀µ ∈ Pp (M), where H(·|·) is the relative entropy [20]. As shown in [6, 5], it is possible to obtain probabilistic upper bounds on Wp (ρ, ρn ), with p = 1, 2, if ρ is known to satisfy a Tp inequality ˆ of the same order, thereby reducing the problem of bounding Wp (ρ, ρn ) to that of obtaining a Tp ˆ inequality. Note that, by Jensen’s inequality, and as expected from the behavior of Wp , the inequality T2 is stronger than T1 [20]. While it has been shown that ρ satisfies a T1 inequality iff it has a finite square-exponential moment 2 (E[eα x ] finite for some α > 0) [4, 7], no such general conditions have been found for T2 . As an example, consider that, if M is compact with diameter D then, by theorem 6.15 of [35], and the celebrated Csisz´ r-Kullback-Pinsker inequality [27], for all ρ, µ ∈ Pp (M), it is a Wp (ρ, µ)2p ≤ (2D)2p ρ − µ where · does not. TV 2 TV ≤ 22p−1 D2p H(µ|ρ), is the total variation norm. Clearly, this implies a Tp=1 inequality, but for p ≥ 2 it The T2 inequality has been shown by Talagrand to be satisfied by the Gaussian distribution [31], and then slightly more generally by strictly log-concave measures (see [20, p. 123], and [3].) However, as noted in [6], “contrary to the T1 case, there is no hope to obtain T2 inequalities from just integrability or decay estimates.” Structure of this paper. In this work we obtain bounds in probability (learning rates) for the problem of learning a probability measure in the sense of W2 . We begin by establishing (lower) bounds for the convergence of empirical to population measures, which serve to set up the problem and introduce the connection between quantization and measure learning (sec. 3.) We then describe how existing unsupervised learning algorithms that compute a set (k-means, k-flats, PCA,. . . ) can be easily extended to produce a measure (sec. 4.) Due to its simplicity and widespread use, we focus here on k-means. 
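The optimal bipartite matching view mentioned above gives a direct way to compute Wp between two equally weighted empirical measures of the same size: the optimal coupling is a permutation, so Wp^p is the minimal average matching cost. A small numerical check follows (it assumes SciPy is available; the samples are synthetic).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))   # sample from rho (synthetic)
Y = rng.uniform(size=(200, 2))   # independent sample of the same size

# For two empirical measures with n equally weighted atoms, W_p^p equals the optimal
# bipartite matching cost (1/n) * min over permutations of sum_i ||x_i - y_sigma(i)||^p.
p = 2
cost = cdist(X, Y) ** p
rows, cols = linear_sum_assignment(cost)
W_p = cost[rows, cols].mean() ** (1.0 / p)
print("empirical W_2 between the two samples:", W_p)
```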
Since the two measure estimates that we consider are the empirical measure, and the measure induced by k-means, we next set out to prove upper bounds on their convergence to the data-generating measure (sec. 5.) We arrive at these bounds by means of intermediate measures, which are related to the problem of optimal quantization. The bounds apply in a very broad setting (unlike existing bounds based on transportation inequalities, they are not restricted to log-concave measures [20, 3].) 3 3 Learning probability measures, optimal transport and quantization We address the problem of learning a probability measure ρ when the only observations we have at our disposal are n i.i.d. samples Xn = (x1 , . . . , xn ). We begin by establishing some notation and useful intermediate results. Given a closed set S ⊆ X , let {Vq : q ∈ S} be a Borel Voronoi partition of X composed of sets Vq closest to each q ∈ S, that is, such that each Vq ⊆ {x ∈ X : x − q = minr∈S x − r } is measurable (see for instance [15].) Consider the projection function πS : X → S mapping each x ∈ Vq to q. By virtue of {Vq }q∈S being a Borel Voronoi partition, the map πS is measurable [15], and it is d (x, πS (x)) = minq∈S x − q for all x ∈ X . For any ρ ∈ Pp (M), let πS ρ be the pushforward, or image measure of ρ under the mapping πS , −1 which is defined to be (πS ρ)(A) := ρ(πS (A)) for all Borel measurable sets A. From its definition, it is clear that πS ρ is supported on S. We now establish a connection between the expected distance to a set S, and the distance between ρ and the set’s induced pushforward measure. Notice that, for discrete sets S, the expected Lp distance to S is exactly the expected quantization error Ep,ρ (S) := Ex∼ρ d(x, S)p = Ex∼ρ x − πS (x) p incurred when encoding points x drawn from ρ by their closest point πS (x) in S [14]. This close connection between optimal quantization and Wasserstein distance has been pointed out in the past in the statistics [28], optimal quantization [14, p. 33], and approximation theory [16] literatures. The following two lemmas are key tools in the reminder of the paper. The first highlights the close link between quantization and optimal transport. Lemma 3.1. For closed S ⊆ X , ρ ∈ Pp (M), 1 ≤ p < ∞, it holds Ex∼ρ d(x, S)p = Wp (ρ, πS ρ)p . Note that the key element in the above lemma is that the two measures in the expression Wp (ρ, πS ρ) must match. When there is a mismatch, the distance can only increase. That is, Wp (ρ, πS µ) ≥ Wp (ρ, πS ρ) for all µ ∈ Pp (M). In fact, the following lemma shows that, among all the measures with support in S, πS ρ is closest to ρ. Lemma 3.2. For closed S ⊆ X , and all µ ∈ Pp (M) with supp(µ) ⊆ S, 1 ≤ p < ∞, it holds Wp (ρ, µ) ≥ Wp (ρ, πS ρ). When combined, lemmas 3.1 and 3.2 indicate that the behavior of the measure learning problem is limited by the performance of the optimal quantization problem. For instance, Wp (ρ, ρn ) can only ˆ be, in the best-case, as low as the optimal quantization cost with codebook of size n. The following section makes this claim precise. 3.1 Lower bounds Consider the situation depicted in fig. 1, in which a sample X4 = {x1 , x2 , x3 , x4 } is drawn from a distribution ρ which we assume here to be absolutely continuous on its support. As shown, the projection map πX4 sends points x to their closest point in X4 . The resulting Voronoi decomposition of supp(ρ) is drawn in shades of blue. 
By lemma 5.2 of [9], the pairwise intersections of Voronoi regions have null ambient measure, and since ρ is absolutely continuous, the pushforward measure 4 can be written in this case as πX4 ρ = j=1 ρ(Vxj )δxj , where Vxj is the Voronoi region of xj . Note that, even for finite sets S, this particular decomposition is not always possible if the {Vq }q∈S form a Borel Voronoi tiling, instead of a Borel Voronoi partition. If, for instance, ρ has an atom falling on two Voronoi regions in a tiling, then both regions would count the atom as theirs, and double-counting would imply q ρ(Vq ) > 1. The technicalities required to correctly define a Borel Voronoi partition are such that, in general, it is simpler to write πS ρ, even though (if S is discrete) this measure can clearly be written as a sum of deltas with appropriate masses. By lemma 3.1, the distance Wp (ρ, πX4 ρ)p is the (expected) quantization cost of ρ when using X4 as codebook. Clearly, this cost can never be lower than the optimal quantization cost of size 4. This reasoning leads to the following lower bound between empirical and population measures. 4 Theorem 3.3. For ρ ∈ Pp (M) with absolutely continuous part ρA = 0, and 1 ≤ p < ∞, it holds Wp (ρ, ρn ) = Ω(n−1/d ) uniformly over ρn , where the constants depend on d and ρA only. ˆ ˆ Proof: Let Vn,p (ρ) := inf S⊂M,|S|=n Ex∼ρ d(x, S)p be the optimal quantization cost of ρ of order p with n centers. Since ρA = 0, and since ρ has a finite (p + δ)-th order moment, for some δ > 0 (since it is supported on the unit ball), then it is Vn,p (ρ) = Θ(n−p/d ), with constants depending on d and ρA (see [14, p. 78] and [16].) Since supp(ˆn ) = Xn , it follows that ρ Wp (ρ, ρn )p ˆ ≥ lemma 3.2 Wp (ρ, πXn ρ)p = lemma 3.1 Ex∼ρ d(x, Xn )p ≥ Vn,p (ρ) = Θ(n−p/d ) Note that the bound of theorem 3.3 holds for ρn derived from any sample Xn , and is therefore ˆ stronger than the existing lower bounds on the convergence rates of EWp (ρ, ρn ) → 0. In particular, ˆ it trivially induces the known lower bound Ω(n−1/d ) on the rate of convergence in expectation. 4 Unsupervised learning algorithms for learning a probability measure As described in [21], several of the most widely used unsupervised learning algorithms can be ˆ interpreted to take as input a sample Xn and output a set Sk , where k is typically a free parameter of the algorithm, such as the number of means in k-means1 , the dimension of affine spaces in PCA, n ˆ etc. Performance is measured by the empirical quantity n−1 i=1 d(xi , Sk )2 , which is minimized among all sets in some class (e.g. sets of size k, affine spaces of dimension k,. . . ) This formulation is general enough to encompass k-means and PCA, but also k-flats, non-negative matrix factorization, and sparse coding (see [21] and references therein.) Using the discussion of Sec. 3, we can establish a clear connection between unsupervised learning and the problem of learning probability measures with respect to W2 . Consider as a running example the k-means problem, though the argument is general. Given an input Xn , the k-means problem is ˆ ˆ to find a set |Sk | = k minimizing its average distance from points in Xn . By associating to Sk the pushforward measure πSk ρn , we find that ˆ ˆ 1 n n ˆ ˆ d(xi , Sk )2 = Ex∼ρn d(x, Sk )2 ˆ i=1 = lemma 3.1 W2 (ˆn , πSk ρn )2 . ρ ˆ ˆ (2) Since k-means minimizes equation 2, it also finds the measure that is closest to ρn , among those ˆ with support of size k. 
This connection between k-means and W2 measure approximation was, to the best of the authors’ knowledge, first suggested by Pollard [28] though, as mentioned earlier, the argument carries over to many other unsupervised learning algorithms. Unsupervised measure learning algorithms. We briefly clarify the steps involved in using an existing unsupervised learning algorithm for probability measure learning. Let Uk be a parametrized algorithm (e.g. k-means) that takes a sample Xn and outputs a set Uk (Xn ). The measure learning algorithm Ak : Mn → Pp (M) corresponding to Uk is defined as follows: ˆ 1. Ak takes a sample Xn and outputs the measure πSk ρn , supported on Sk = Uk (Xn ); ˆ ˆ 2. since ρn is discrete, then so must πSk ρn be, and thus Ak (Xn ) = ˆ ˆ ˆ 1 n n ˆ i=1 δπSk (xi ) ; 3. in practice, we can simply store an n-vector πSk (x1 ), . . . , πSk (xn ) , from which Ak (Xn ) ˆ ˆ can be reconstructed by placing atoms of mass 1/n at each point. In the case that Uk is the k-means algorithm, only k points and k masses need to be stored. Note that any algorithm A that attempts to output a measure A (Xn ) close to ρn can be cast in the ˆ above framework. Indeed, if S is the support of A (Xn ) then, by lemma 3.2, πS ρn is the measure ˆ closest to ρn with support in S . This effectively reduces the problem of learning a measure to that of ˆ 1 In a slight abuse of notation, we refer to the k-means algorithm here as an ideal algorithm that solves the k-means problem, even though in practice an approximation algorithm may be used. 5 finding a set, and is akin to how the fact that every optimal quantizer is a nearest-neighbor quantizer (see [15], [12, p. 350], and [14, p. 37–38]) reduces the problem of finding an optimal quantizer to that of finding an optimal quantizing set. Clearly, the minimum of equation 2 over sets of size k (the output of k-means) is monotonically ˆ ˆ non-increasing with k. In particular, since Sn = Xn and πSn ρn = ρn , it is Ex∼ρn d(x, Sn )2 = ˆ ˆ ˆ ˆ 2 W2 (ˆn , πSn ρn ) = 0. That is, we can always make the learned measure arbitrarily close to ρn ρ ˆ ˆ ˆ by increasing k. However, as pointed out in Sec. 2, the problem of measure learning is concerned with minimizing the 2-Wasserstein distance W2 (ρ, πSk ρn ) to the data-generating measure. The ˆ ˆ actual performance of k-means is thus not necessarily guaranteed to behave in the same way as the empirical one, and the question of characterizing its behavior as a function of k and n naturally arises. ˆ Finally, we note that, while it is Ex∼ρn d(x, Sk )2 = W2 (ˆn , πSk ρn )2 (the empirical performances ρ ˆ ˆ ˆ are the same in the optimal quantization, and measure learning problem formulations), the actual performances satisfy ˆ Ex∼ρ d(x, Sk )2 = W2 (ρ, π ˆ ρ)2 ≤ W2 (ρ, π ˆ ρn )2 , 1 ≤ k ≤ n. ˆ lemma 3.1 Sk lemma 3.2 Sk Consequently, with the identification between sets S and measures πS ρn , the measure learning ˆ problem is, in general, harder than the set-approximation problem (for example, if M = Rd and ρ is absolutely continuous over a set of non-null volume, it is not hard to show that the inequality is ˆ almost surely strict: Ex∼ρ d(x, Sk )2 < W2 (ρ, πSk ρn )2 for 1 < k < n.) ˆ ˆ In the remainder, we characterize the performance of k-means on the measure learning problem, for varying k, n. 
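The k-means-as-measure-learning correspondence described above can be spelled out in a few lines: run an (approximate) k-means solver, attach to each center the fraction of points it absorbs to obtain the pushforward measure, and note that the squared W2 distance from the empirical measure to this pushforward measure is exactly the empirical quantization error. The sketch below uses plain Lloyd iterations on synthetic data; it is an illustration, not the ideal k-means solver assumed in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))   # the sample X_n (a synthetic stand-in for rho)

def lloyd(X, k, iters=50):
    """Plain Lloyd iterations: an approximate k-means solver, for illustration only."""
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(axis=0) if np.any(lab == j) else C[j] for j in range(k)])
    return C

k = 10
C = lloyd(X, k)
lab = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)  # nearest-center map
weights = np.bincount(lab, minlength=k) / len(X)   # masses of the pushforward measure on the centers

# Squared W2 between the empirical measure and its pushforward equals the empirical
# quantization error: the mean squared distance of each sample to its assigned center.
w2_squared = np.mean(((X - C[lab]) ** 2).sum(axis=1))
print("support size:", k, "masses sum to", weights.sum(), "W_2^2 =", w2_squared)
```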
Although other unsupervised learning algorithms could have been chosen as basis for our analysis, k-means is one of the oldest and most widely used, and the one for which the deep connection between optimal quantization and measure approximation is most clearly manifested. Note that, by setting k = n, our analysis includes the problem of characterizing the behavior of the distance W2 (ρ, ρn ) between empirical and population measures which, as indicated in Sec. 2.1, ˆ is a fundamental question in statistics (i.e. the speed of convergence of empirical to population measures.) 5 Learning rates In order to analyze the performance of k-means as a measure learning algorithm, and the convergence of empirical to population measures, we propose the decomposition shown in fig. 2. The diagram includes all the measures considered in the paper, and shows the two decompositions used to prove upper bounds. The upper arrow (green), illustrates the decomposition used to bound the distance W2 (ρ, ρn ). This decomposition uses the measures πSk ρ and πSk ρn as intermediates to arrive ˆ ˆ at ρn , where Sk is a k-point optimal quantizer of ρ, that is, a set Sk minimizing Ex∼ρ d(x, S)2 over ˆ all sets of size |S| = k. The lower arrow (blue) corresponds to the decomposition of W2 (ρ, πSk ρn ) ˆ ˆ (the performance of k-means), whereas the labelled black arrows correspond to individual terms in the bounds. We begin with the (slightly) simpler of the two results. 5.1 Convergence rates for the empirical to population measures Let Sk be the optimal k-point quantizer of ρ of order two [14, p. 31]. By the triangle inequality and the identity (a + b + c)2 ≤ 3(a2 + b2 + c2 ), it follows that W2 (ρ, ρn )2 ≤ 3 W2 (ρ, πSk ρ)2 + W2 (πSk ρ, πSk ρn )2 + W2 (πSk ρn , ρn )2 . ˆ ˆ ˆ ˆ (3) This is the decomposition depicted in the upper arrow of fig. 2. By lemma 3.1, the first term in the sum of equation 3 is the optimal k-point quantization error of ρ over a d-manifold M which, using recent techniques from [16] (see also [17, p. 491]), is shown in the proof of theorem 5.1 (part a) to be of order Θ(k −2/d ). The remaining terms, b) and c), are slightly more technical and are bounded in the proof of theorem 5.1. Since equation 3 holds for all 1 ≤ k ≤ n, the best bound on W2 (ρ, ρn ) can be obtained by optimizˆ ing the right-hand side over all possible values of k, resulting in the following probabilistic bound for the rate of convergence of the empirical to population measures. 6 x2 x W2 (ρ, ρn ) ˆ supp ρ x1 π{x1 ,x2 ,x3 ,x4 } ρ a) x3 πSk ρ b) πSk ρn ˆ c) d) ρn ˆ πSk ρn ˆ ˆ W2 (ρ, πSk ρn ) ˆ ˆ x4 Figure 1: A sample {x1 , x2 , x3 , x4 } is drawn from a distribution ρ with support in supp ρ. The projection map π{x1 ,x2 ,x3 ,x4 } sends points x to their closest one in the sample. The induced Voronoi tiling is shown in shades of blue. Figure 2: The measures considered in this paper are linked by arrows for which upper bounds for their distance are derived. Bounds for the quantities of interest W2 (ρ, ρn )2 , and W2 (ρ, πSk ρn )2 , ˆ ˆ ˆ are decomposed by following the top and bottom colored arrows. Theorem 5.1. Given ρ ∈ Pp (M) with absolutely continuous part ρA = 0, sufficiently large n, and τ > 0, it holds W2 (ρ, ρn ) ≤ C · m(ρA ) · n−1/(2d+4) · τ, ˆ where m(ρA ) := 5.2 M 2 with probability 1 − e−τ . ρA (x)d/(d+2) dλM (x), and C depends only on d. 
Learning rates of k-means The key element in the proof of theorem 5.1 is that the distance between population and empirical measures can be bounded by choosing an intermediate optimal quantizing measure of an appropriate size k. In the analysis, the best bounds are obtained for k smaller than n. If the output of k-means is close to an optimal quantizer (for instance if sufficient data is available), then we would similarly expect that the best bounds for k-means correspond to a choice of k < n. The decomposition of the bottom (blue) arrow in figure 2 leads to the following bound in probability. Theorem 5.2. Given ρ ∈ Pp (M) with absolutely continuous part ρA = 0, and τ > 0, then for all sufficiently large n, and letting k = C · m(ρA ) · nd/(2d+4) , it holds W2 (ρ, πSk ρn ) ≤ C · m(ρA ) · n−1/(2d+4) · τ, ˆ ˆ where m(ρA ) := M 2 with probability 1 − e−τ . ρA (x)d/(d+2) dλM (x), and C depends only on d. Note that the upper bounds in theorem 5.1 and 5.2 are exactly the same. Although this may appear ˆ surprising, it stems from the following fact. Since S = Sk is a minimizer of W2 (πS ρn , ρn )2 , the ˆ ˆ bound d) of figure 2 satisfies: W2 (πSk ρn , ρn )2 ≤ W2 (πSk ρn , ρn )2 ˆ ˆ ˆ ˆ ˆ and therefore (by the definition of c), the term d) is of the same order as c). It follows then that adding term d) to the bound only affects the constants, but otherwise leaves it unchanged. Since d) is the term that takes the output measure of k-means to the empirical measure, this implies that the rate of convergence of k-means (for suitably chosen k) cannot be worse than that of ρn → ρ. ˆ Conversely, bounds for ρn → ρ are obtained from best rates of convergence of optimal quantizers, ˆ whose convergence to ρ cannot be slower than that of k-means (since the quantizers that k-means produces are suboptimal.) 7 Since the bounds obtained for the convergence of ρn → ρ are the same as those for k-means with ˆ k of order k = Θ(nd/(2d+4) ), this suggests that estimates of ρ that are as accurate as those derived from an n point-mass measure ρn can be derived from k point-mass measures with k ˆ n. Finally, we note that the introduced bounds are currently limited by the statistical bound sup |W2 (πS ρn , ρn )2 − W2 (πS ρ, ρ)2 | ˆ ˆ |S|=k = sup |Ex∼ρn d(x, S)2 − Ex∼ρ d(x, S)2 | ˆ lemma 3.1 |S|=k (4) (see for instance [21]), for which non-matching lower bounds are known. This means that, if better upper bounds can be obtained for equation 4, then both bounds in theorems 5.1 and 5.2 would automatically improve (would become closer to the lower bound.) References [1] M. Ajtai, J. Komls, and G. Tusndy. On optimal matchings. Combinatorica, 4:259–264, 1984. [2] Franck Barthe and Charles Bordenave. Combinatorial optimization over two random point sets. Technical Report arXiv:1103.2734, Mar 2011. [3] Gordon Blower. The Gaussian isoperimetric inequality and transportation. Positivity, 7:203–224, 2003. [4] S. G. Bobkov and F. G¨ tze. Exponential integrability and transportation cost related to logarithmic o Sobolev inequalities. Journal of Functional Analysis, 163(1):1–28, April 1999. [5] Emmanuel Boissard. Simple bounds for the convergence of empirical and occupation measures in 1wasserstein distance. Electron. J. Probab., 16(83):2296–2333, 2011. [6] F. Bolley, A. Guillin, and C. Villani. Quantitative concentration inequalities for empirical measures on non-compact spaces. Probability Theory and Related Fields, 137(3):541–593, 2007. [7] F. Bolley and C. Villani. 
References

[1] M. Ajtai, J. Komlós, and G. Tusnády. On optimal matchings. Combinatorica, 4:259–264, 1984.
[2] Franck Barthe and Charles Bordenave. Combinatorial optimization over two random point sets. Technical Report arXiv:1103.2734, Mar 2011.
[3] Gordon Blower. The Gaussian isoperimetric inequality and transportation. Positivity, 7:203–224, 2003.
[4] S. G. Bobkov and F. Götze. Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. Journal of Functional Analysis, 163(1):1–28, April 1999.
[5] Emmanuel Boissard. Simple bounds for the convergence of empirical and occupation measures in 1-Wasserstein distance. Electron. J. Probab., 16(83):2296–2333, 2011.
[6] F. Bolley, A. Guillin, and C. Villani. Quantitative concentration inequalities for empirical measures on non-compact spaces. Probability Theory and Related Fields, 137(3):541–593, 2007.
[7] F. Bolley and C. Villani. Weighted Csiszár-Kullback-Pinsker inequalities and applications to transportation inequalities. Annales de la Faculté des Sciences de Toulouse, 14(3):331–352, 2005.
[8] Claire Caillerie, Frédéric Chazal, Jérôme Dedecker, and Bertrand Michel. Deconvolution for the Wasserstein metric and geometric inference. Rapport de recherche RR-7678, INRIA, July 2011.
[9] Kenneth L. Clarkson. Building triangulations using ε-nets. In Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, STOC '06, pages 326–335, New York, NY, USA, 2006. ACM.
[10] Luc Devroye and Gábor Lugosi. Combinatorial methods in density estimation. Springer Series in Statistics. Springer-Verlag, New York, 2001.
[11] V. Dobrić and J. Yukich. Asymptotics for transportation cost in high dimensions. Journal of Theoretical Probability, 8:97–118, 1995.
[12] A. Gersho and R.M. Gray. Vector Quantization and Signal Compression. Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, 1992.
[13] Alison L. Gibbs and Francis E. Su. On choosing and bounding probability metrics. International Statistical Review, 70:419–435, 2002.
[14] Siegfried Graf and Harald Luschgy. Foundations of quantization for probability distributions. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2000.
[15] Siegfried Graf, Harald Luschgy, and Gilles Pagès. Distortion mismatch in the quantization of probability measures. Esaim: Probability and Statistics, 12:127–153, 2008.
[16] Peter M. Gruber. Optimum quantization and its applications. Adv. Math, 186:2004, 2002.
[17] P.M. Gruber. Convex and discrete geometry. Grundlehren der mathematischen Wissenschaften. Springer, 2007.
[18] Guillermo Henry and Daniela Rodriguez. Kernel density estimation on Riemannian manifolds: Asymptotic results. J. Math. Imaging Vis., 34(3):235–239, July 2009.
[19] Joseph Horowitz and Rajeeva L. Karandikar. Mean rates of convergence of empirical measures in the Wasserstein metric. J. Comput. Appl. Math., 55(3):261–273, November 1994.
[20] M. Ledoux. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs. American Mathematical Society, 2001.
[21] A. Maurer and M. Pontil. K-dimensional coding schemes in Hilbert spaces. IEEE Transactions on Information Theory, 56(11):5839–5846, Nov. 2010.
[22] Yann Ollivier. Ricci curvature of Markov chains on metric spaces. J. Funct. Anal., 256(3):810–864, 2009.
[23] Arkadas Ozakin and Alexander Gray. Submanifold density estimation. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1375–1382. 2009.
[24] C. Papadimitriou. The probabilistic analysis of matching heuristics. In Proc. of the 15th Allerton Conf. on Communication, Control and Computing, pages 368–378, 1978.
[25] Bruno Pelletier. Kernel density estimation on Riemannian manifolds. Statist. Probab. Lett., 73(3):297–304, 2005.
[26] Xavier Pennec. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. J. Math. Imaging Vis., 25(1):127–154, July 2006.
[27] M. S. Pinsker. Information and information stability of random variables and processes. San Francisco: Holden-Day, 1964.
[28] David Pollard. Quantization and the method of k-means. IEEE Transactions on Information Theory, 28(2):199–204, 1982.
[29] S.T. Rachev. Probability metrics and the stability of stochastic models. Wiley series in probability and mathematical statistics: Applied probability and statistics. Wiley, 1991.
[30] J.M. Steele. Probability Theory and Combinatorial Optimization. CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics, 1997.
[31] M. Talagrand. Transportation cost for Gaussian and other product measures. Geometric And Functional Analysis, 6:587–600, 1996.
[32] Alexandre B. Tsybakov. Introduction to nonparametric estimation. Springer Series in Statistics. Springer, New York, 2009. Revised and extended from the 2004 French original, translated by Vladimir Zaiats.
[33] A.W. van der Vaart and J.A. Wellner. Weak Convergence and Empirical Processes. Springer Series in Statistics. Springer, 1996.
[34] V. S. Varadarajan. On the convergence of sample probability distributions. Sankhyā: The Indian Journal of Statistics, 19(1/2):23–26, Feb. 1958.
[35] C. Villani. Optimal Transport: Old and New. Grundlehren der Mathematischen Wissenschaften. Springer, 2009.
[36] P. Vincent and Y. Bengio. Manifold Parzen Windows. In Advances in Neural Information Processing Systems 22, pages 849–856. 2003.

4 0.63854223 179 nips-2012-Learning Manifolds with K-Means and K-Flats

Author: Guillermo Canas, Tomaso Poggio, Lorenzo Rosasco

Abstract: We study the problem of estimating a manifold from random samples. In particular, we consider piecewise constant and piecewise linear estimators induced by k-means and k-flats, and analyze their performance. We extend previous results for k-means in two separate directions. First, we provide new results for k-means reconstruction on manifolds and, secondly, we prove reconstruction bounds for higher-order approximation (k-flats), for which no known results were previously available. While the results for k-means are novel, some of the technical tools are well-established in the literature. In the case of k-flats, both the results and the mathematical tools are new. 1
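Since the abstract above contrasts piecewise-constant (k-means) with piecewise-linear (k-flats) estimators, a small sketch may help make the latter concrete. The alternating scheme below is an illustrative simplification written for this summary, not necessarily the exact procedure analysed in the paper: points are assigned to their nearest affine flat, and each flat is refit by PCA on its assigned points. With flat_dim = 0 it reduces to ordinary k-means, which is one way to view k-flats as the higher-order analogue.

```python
import numpy as np

def k_flats(X, k, flat_dim, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n, D = X.shape
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        means, bases = [], []
        for j in range(k):
            Xj = X[labels == j]
            if len(Xj) <= flat_dim:                 # keep degenerate flats alive
                Xj = X[rng.integers(n, size=flat_dim + 1)]
            mu = Xj.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xj - mu, full_matrices=False)
            means.append(mu)
            bases.append(Vt[:flat_dim])             # top principal directions span the flat
        dists = np.empty((n, k))
        for j in range(k):
            R = X - means[j]
            proj = R @ bases[j].T @ bases[j]        # component of R inside the flat
            dists[:, j] = ((R - proj) ** 2).sum(axis=1)
        labels = dists.argmin(axis=1)               # reassign to the nearest flat
    return means, bases, labels
```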

5 0.53872746 261 nips-2012-Online allocation and homogeneous partitioning for piecewise constant mean-approximation

Author: Alexandra Carpentier, Odalric-ambrym Maillard

Abstract: In the setting of active learning for the multi-armed bandit, where the goal of a learner is to estimate with equal precision the mean of a finite number of arms, recent results show that it is possible to derive strategies based on finite-time confidence bounds that are competitive with the best possible strategy. We here consider an extension of this problem to the case when the arms are the cells of a finite partition P of a continuous sampling space X ⊂ Rd . Our goal is now to build a piecewise constant approximation of a noisy function (where each piece is one region of P and P is fixed beforehand) in order to maintain the local quadratic error of approximation on each cell equally low. Although this extension is not trivial, we show that a simple algorithm based on upper confidence bounds can be proved to be adaptive to the function itself in a near-optimal way, when |P| is chosen to be of minimax-optimal order on the class of α−H¨ lder functions. o 1 Setting and Previous work Let us consider some space X ⊂ Rd , and Y ⊂ R. We call X the input space or sampling space, Y the output space or value space. We consider the problem of estimating with uniform precision the function f : X ⊂ Rd → Y ⊂ R. We assume that we can query n times the function f , anywhere in the domain, and observe noisy samples of this function. These samples are collected sequentially, and our aim is to design an adaptive procedure that selects wisely where on the domain to query the function, according to the information provided by the previous samples. More formally: Observed process We consider an unknown Y-valued process defined on X , written ν : X → M+ (Y), where M+ (Y) refers to the set of all probability measures on Y, such that for all x ∈ X , 1 1 def the random variable Y (x) ∼ ν(x) has mean f (x) = E[Y (x)|x] ∈ R. We write for convenience the model in the following way Y (x) = f (x) + noise(x) , def where noise(x) = Y (x) − E[Y (x)|x] is the centered random variable corresponding to the noise, o with unknown variance σ 2 (x). We assume throughout this paper that f is α-H¨ lder. Partition We consider we can define a partition P of the input space X , with finitely many P regions {Rp }1≤p≤P that are assumed to be convex and not degenerated, i.e. such that the interior of each region Rp has positive Lebesgue volume vp . Moreover, with each region Rp is associated a sampling distribution in that region, written µp ∈ M+ (Rp ). Thus, when we decide to sample in 1 region Rp , a new sample X ∈ Rp is generated according to X ∼ µp . Allocation. We consider that we have a finite budget of n ∈ N samples that we can use in order to allocate samples as we wish among the regions {Rp }1≤p≤P . For illustration, let us assume that we deterministically allocate Tp,n ∈ N samples in region Rp , with the constraint that the allocation {Tp,n }1≤p≤P must some to n. In region Rp , we thus sample points {Xp,i }1≤p≤P at random 1 according to the sampling distribution µp , and then get the corresponding values {Yp,i }1≤i≤Tp,n , where Yp,i ∼ ν(Xp,i ). In the sequel, the distribution µp is assumed to be the uniform distribution dλ(x)1x∈R over region Rp , i.e. the density of µp is λ(Rp ) p where λ denotes the Lebesgue measure. Note that this is not restrictive since we are in an active, not passive setting. Piecewise constant mean-approximation. 
We use the collected samples in order to build a pieceˆ wise constant approximation fn of the mean f , and measure the accuracy of approximation on a region Rp with the expected quadratic norm of the approximation error, namely � � � � � ˆ (x))2 λ(dx) = Eµ ,ν (f (X) − mp,n )2 , ˆ (f (x) − fn E p λ(Rp ) Rp ˆ where mp,n is the constant value that takes fn on the region Rp . A natural choice for the estimator ˆ mp,n is to use the empirical mean that is unbiased and asymptotically optimal for this criterion. ˆ Thus we consider the following estimate (histogram) ˆ fn (x) = P � p=1 mp,n I{x ∈ Rp } where mp,n = ˆ ˆ Tp,n 1 � Tp,n Yp,i . i=1 Pseudo loss Note that, since the Tp,n are deterministic, the expected quadratic norm of the approximation error of this estimator can be written in the following form � � � � � � ˆ Eµp ,ν (f (X) − mp,n )2 ˆ = Eµp ,ν (f (X) − Eµp [f (X)])2 + Eµp ,ν (Eµp [f (X)] − mp,n )2 � � � � = Vµp f (X) + Vµp ,ν mp,n ˆ � � � � 1 Vµp ,ν Y (X) . = Vµp f (X) + Tp,n Now, using the following immediate decomposition � � � � � Vµp ,ν Y (X) = Vµp f (X) + σ 2 (x)µp (dx) , Rp we deduce that the maximal expected quadratic norm of the approximation error over the regions def {Rp }1≤p≤P , that depends on the choice of the considered allocation strategy A = {Tp,n }1≤p≤P is thus given by the following so-called pseudo-loss � � � � � � Tp,n + 1 1 def 2 (1) Vµp f (X) + Eµ σ (X) . Ln (A) = max 1≤ p ≤P Tp,n Tp,n p Our goal is to minimize this pseudo-loss. Note that this is a local measure of performance, as opposed to a more usual yet less challenging global quadratic error. Eventually, as the number of �� �2 � ˆ cells tends to ∞, this local measure of performance approaches supx∈X Eν f (x) − fn (x) . At this point, let us also introduce, for convenience, the notation Qp (Tp,n ) that denotes the term inside the max, in order to emphasize the dependency on the quadratic error with the allocation. Previous work There is a huge literature on the topic of functional estimation in batch setting. Since it is a rather old and well studied question in statistics, many books have been written on this topic, such as Bosq and Lecoutre [1987], Rosenblatt [1991], Gy¨ rfi et al. [2002], where piecewise constant meano approximation are also called “partitioning estimate” or “regressogram” (first introduced by Tukey [1947]). The minimax-optimal rate of approximation on the class of α-H¨ lder functions is known o 2α to be in O(n− 2α+d ) (see e.g. Ibragimov and Hasminski [1981], Stone [1980], Gy¨ rfi et al. [2002]). o In such setting, a dataset {(Xi , Yi )}i≤n is given to the learner, and a typical question is thus to try to find the best possible histogram in order to minimize a approximation error. Thus the dataset is fixed and we typically resort to techniques such as model selection where each model corresponds to one histogram (see Arlot [2007] for an extensive study of such). However, we here ask a very different question, that is how to optimally sample in an online setting in order to minimize the approximation error of some histogram. Thus we choose the histogram 2 before we see any sample, then it is fixed and we need to decide which cell to sample from at each time step. Motivation for this setting comes naturally from some recent works in the setting of active learning for the multi-armed bandit problem Antos et al. [2010], Carpentier et al. [2011]. 
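As an aside before that motivation is developed below, the histogram estimator f̂_n and its per-cell empirical means m̂_{p,n} defined above are straightforward to compute. The sketch below is illustrative only; it assumes X = [0,1]^d partitioned into a regular grid of m^d hyper-cubic cells and uses NumPy only.

```python
import numpy as np

def cell_index(X, m):
    # map each sample in [0,1)^d to the flat index of its grid cell
    idx = np.minimum((X * m).astype(int), m - 1)
    return np.ravel_multi_index(idx.T, dims=(m,) * X.shape[1])

def fit_histogram(X, Y, m):
    P = m ** X.shape[1]
    cells = cell_index(X, m)
    sums = np.bincount(cells, weights=Y, minlength=P)
    counts = np.bincount(cells, minlength=P)
    return np.divide(sums, np.maximum(counts, 1))    # empirical mean m_hat_p per cell

def predict(Xnew, means, m):
    # piecewise constant estimate: f_hat(x) = m_hat_p on cell R_p containing x
    return means[cell_index(Xnew, m)]
```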
In these works, the objective is to estimate with equal precision the mean of a finite number of distributions (arms), which would correspond to the special case when X = {1, . . . , P } is a finite set in our setting. Intuitively, we reduce the problem to such bandit problem with finite set of arms (regions), and our setting answers the question whether it is possible to extend those results to the case when the arms do not correspond to a singleton, but rather to a continuous region. We show that the answer is positive, yet non trivial. This is non trivial due to the variance estimation in each region: points x in some region may have different means f(x), so that standard estimators for the variance are biased, contrary to the point-wise case and thus finite-arm techniques may yield disastrous results. (Estimating the variance of the distribution in a continuous region actually needs to take into account not only the point-wise noise but also the variation of the function f and the noise level σ 2 in that region.) We describe a way, inspired from quasi Monte-Carlo techniques, to correct this bias so that we can handle the additional error. Also, it is worth mentioning that this setting can be informally linked to a notion of curiosity-driven learning (see Schmidhuber [2010], Baranes and Oudeyer [2009]), since we want to decide in which region of the space to sample, without explicit reward but optimizing the goal to understand the unknown environment. Outline Section 2 provides more intuition about the pseudo-loss and a result about the optimal oracle strategy when the domain is partitioned in a minimax-optimal way on the class of α−H¨ lder o functions. Section 3 presents our assumptions, that are basically to have a sub-Gaussian noise and smooth mean and variance functions, then our estimator of the pseudo-loss together with its concentration properties, before introducing our sampling procedure, called OAHPA-pcma. Finally, the performance of this procedure is provided and discussed in Section 4. 2 The pseudo-loss: study and optimal strategies 2.1 More intuition on each term in the pseudo-loss It is natural to look at what happens to each of the two terms that appear in equation 1 when one makes Rp shrink towards a point. More precisely, let xp be the mean of X ∼ µp and let us look at the limit of Vµp (f (X)) when vp goes to 0. Assuming that f is differentiable, we get �2 � �� lim Vµp (f (X)) = lim Eµp f (X) − f (xp ) − E[f (X) − f (xp )] vp →0 vp →0 = = = lim Eµp �� �X − xp , ∇f (xp )� − E[�X − xp , ∇f (xp )�] vp →0 � � lim Eµp �X − xp , ∇f (xp )�2 vp →0 � � lim ∇f (xp )T Eµp (X − xp )(X − xp )T ∇f (xp ) . �2 � vp →0 Therefore, if we introduce Σp to be the covariance matrix of the random variable X ∼ µp , then we simply have lim Vµp (f (X)) = lim ||∇f (xp )||2 p . Σ vp →0 vp →0 Example with hyper-cubic regions An important example is when Rp is a hypercube with side 1/d length vp and µp is the uniform distribution over the region Rp . In that case (see Lemma 1), we dx have µp (dx) = , and 2/d vp vp . ||∇f (xp )||2 p = ||∇f (xp )||2 Σ 12 More generally, when f is α−differentiable, i.e. that ∀a ∈ X , ∃∇α f (a, ·) ∈ Sd (0, 1)R such that ∀x ∈ Sd (0, 1), limh→0 f (a+hx)−f (a) = ∇α f (a, x), then it is not too difficult to show that for such hα hyper-cubic regions, we have � � � 2α � Vµp f (X) = O vpd sup |∇α f (xp , u)|2 . S(0,1) � � On the other hand, by direct computation, the second term is such that limvp →0 Eµp σ 2 (X) = � � � � σ 2 (xp ). 
Thus, while Vµp f (X) vanishes, Eµp σ 2 (X) stays bounded away from 0 (unless ν is deterministic). 3 2.2 Oracle allocation and homogeneous partitioning for piecewise constant mean-approximation. We now assume that we are allowed to choose the partition P depending on n, thus P = Pn , amongst all homogeneous partitions of the space, i.e. partitions such that all cells have the same volume, and come from a regular grid of the space. Thus the only free parameter is the number of cells Pn of the partition. An exact yet not explicit oracle algorithm. The minimization of the pseudo-loss (1) does not yield to a closed-form solution in general. However, we can still derive the order of the optimal loss (see [Carpentier and Maillard, 2012, Lemma 2] in the full version of the paper for an example of minimax yet non adaptive oracle � algorithm given in closed-form solution): � � −β � � � −α� � � Lemma 1 In the case when Vµp f (X) = Ω Pn and Rp σ 2 (x)µp (dx) = Ω Pn , then an � optimal allocation and partitioning strategy An satisfies that� � � � Vµp f (X) + Eµp σ 2 (X) � � , L − Vµp f (X) � as soon as there exists, for such range of Pn , a constant L such that � � � � � Pn � Vµp f (X) + Eµp σ 2 (X) � � = n. L − Vµp f (X) p=1 1 � Pn = Ω(n max(1+α� −β� ,1) ) and def � Tp,n = The pseudo-loss of such an algorithm A� , optimal amongst the allocations strategies that use the n � partition Pn in Pn regions, is then given by � � � � def max(1 − β , 1 − α ) − 1. where γ = Ln (A� ) = Ω nγ n max(1 + α� − β � , 1) The condition involving the constant L is here to ensure that the partition is not degenerate. It is morally satisfied as soon as the variance of f and the noise are bounded and n is large enough. This Lemma applies to the important class W 1,2 (R) of functions that admit a weak derivative that o belongs to L2 (R). Indeed these functions are H¨ lder with coefficient α = 1/2, i.e. we have o W 1,2 (R) ⊂ C 0,1/2 (R). The standard Brownian motion is an example of function that is 1/2-H¨ lder. More generally, for k = d + α with α = 1/2 when d is odd and α = 1 when d is even, we have the 2 inclusion W k,2 (Rd ) ⊂ C 0,α (Rd ) , where W k,2 (Rd ) is the set of functions that admit a k th weak derivative belonging to L2 (Rd ). Thus the previous Lemma applies to sufficiently smooth functions with smoothness linearly increasing with the dimension d of the input space X . Important remark Note that this Lemma gives us a choice of the partition that is minimax-optimal, and an allocation strategy on that partition that is not only minimax-optimal but also adaptive to the function f itself. Thus it provides a way to decide in a minimax way what is the good number of regions, and then to provide the best oracle way to allocate the budget. We can deduce the following immediate corollary on the class of α−H¨ lder functions observed in a o non-negligible noise of bounded variance (i.e. in the setting β � = 0 and α� = 2α ). d Corollary 1 Consider that f is α−H¨ lder and the noise is of bounded variance. Then a minimaxo d � d+2α ) and an optimal allocation achieves the rate L (A� ) = optimal partition satisfies Pn = Ω(n n n � −2α � Ω n d+2α . Moreover, the strategy of Lemma 1 is optimal amongst the allocations strategies that � use the partition Pn in Pn regions. � −2α � The rate Ω n d+2α is minimax-optimal on the class of α−H¨ lder functions (see Gy¨ rfi et al. [2002], o o Ibragimov and Hasminski [1981], Stone [1980]), and it is thus interesting to consider an initial numd � � d+2α ). 
After having built the partition, if the quantities ber �� � 2 �� � � of�regions Pn that is of order Pn = Ω(n Vµp f p≤P and Eµp σ p≤P are known to the learner, it is optimal, in the aim of minimizing � the pseudo-loss, to allocate to each region the number of samples Tp,n provided in Lemma 1. Our objective in this paper is, after having chosen beforehand a minimax-optimal partition, to allocate 4 the samples properly in the regions, without having any access to those quantities. It is then �� � � necessary to balance between exploration, i.e. allocating the samples in order to estimate Vµp f p≤P � � �� and Eµp σ 2 p≤P , and exploitation, i.e. use the estimates to target the optimal allocation. 3 Online algorithms for allocation and homogeneous partitioning for piecewise constant mean-approximation In this section, we now turn to the design of algorithms that are fully online, with the goal to be competitive against the kind of oracle algorithms considered in Section 2.2. We now assume that the space X = [0, 1]d is divided in Pn hyper-cubic regions of same measure (the Lebesgue measure on 1 [0, 1]d ) vp = v = Pn . The goal of an algorithm is to minimize the quadratic error of approximation of f by a constant over each cell, in expectation, which we write as � � � � � � 2 λ(dx) ˆ (x))2 λ(dx) = max E , max E (f (x) − fn (f (x) − mp,n ) ˆ 1≤p≤Pn 1≤p≤Pn λ(Rp ) λ(Rp ) Rp Rp ˆ where fn is the histogram estimate of the function f on the partition P and mp,n is the empirical ˆ mean defined on region Rp with the samples (Xi , Yi ) such that Xi ∈ Rp . To do so, an algorithm is only allowed to specify at each time step t, the next point Xt where to sample, based on all the past samples {(Xs , Ys )}s < ∞ satisfies that λ2 σ 2 (x) , ∀λ ∈ R+ log E exp[λ noise(x)] ≤ 2 and we further assume that it satisfies the following slightly stronger second property (that is for instance exactly verified for a Gaussian variable, looking at the moment generating function): � � � � 1 λ2 σ 2 (x) ∀λ, γ ∈ R+ log E exp λnoise(x) + γnoise(x)2 ≤ − log 1 − 2γσ 2 (x) . 2(1 − 2γσ 2 (x)) 2 5 The function f is assumed to be (L, α)-H¨ lder, meaning that it satifies o � ∀x, x ∈ X f (x) − f (x� ) ≤ L||x − x� ||α . Similarly, the function σ 2 is assumed to be (M, β)-H¨ lder i.e. it satisfies o � 2 2 � ∀x, x ∈ X σ (x) − σ (x ) ≤ M ||x − x� ||β . We assume that Y is a convex and compact subset of R, thus w.l.g. that it is [0, 1], and that it is known that ||σ 2 ||∞ , which is thus finite, is bounded by the constant 1. 3.2 Empirical estimation of the quadratic approximation error on each cell We define the sampling distribution µp in the region Rp for each p ∈ {1, . . . , Pn } as a quasi-uniform ˜ sampling scheme using the uniform distribution over the sub-regions. More precisely at time t ≤ n, if we decide to sample in the region Rp according to µp , we sample uniformly in each sub-region ˜ one sample, resulting in a new batch of samples {(Xt,k , Yt,k )}1≤k≤K , where Xt,k ∼ µp,k . Note that due to this sampling process, the number of points Tp,t sampled in sub-region Rp at time t is always Tp,t a multiple of K and that moreover for all k, k � ∈ {1, . . . , K} we have that Tp,k,t = Tp,k� ,t = K . Now this specific sampling is used in order to be able to estimate the variances Vµp f and Eµp σ 2 , � so that the best proportions Tp,n can be computed as accurately as possible. Indeed, as explained in � � � � Lemma 1, we have that Vµp f (X) + Eµp σ 2 (X) � def � � . Tp,n = L − Vµp f (X) ˆ Variance estimation We now introduce two estimators. 
The first estimator is written Vp,t and is def ˆ built in the following way. First,let us introduce the empirical estimate fp,k,t of the mean fp,k = � � Eµp,k f (X) of f in sub-region Rp,k . Similarly, to avoid some cumbersome notations, we introduce � � � � � � def def def 2 fp = Eµp f (X) and vp,k = Vµp,k f (X) for the function f , and then σp,k = Eµp,k σ 2 (X) for the variance of the noise σ 2 . We now define the empirical variance estimator to be K 1 � ˆ ˆ (fp,k,t − mp,t )2 , ˆ Vp,t = K −1 k=1 that is a biased estimator. Indeed, for a deterministic Tp,t , it is not difficult to show that we have � K K � � � � � � �� � � � � 2 1 �� 1 � ˆ E Vp,t + Eµp,k f − Eµp f = Vµp,k f + Eµp,k σ 2 . K −1 Tp,t k=1 k=1 � � The leading term in this decomposition, that is given by the first sum, is closed to Vµp f since, by using the assumption that f is (L, α)−H¨ lder, we have the following inequality o � � K � �� �� � �1 � � � � 2 2L2 dα � Eµp,k f − Eµp f − Vµp f (X) � ≤ , � �K (KPn )2α/d k=1 where we also used that the diameter of a sub-region Rp,k is given by diam(Rp,k ) = d1/2 . (KPn )1/d ˆ Then, the second term also contributes to the bias, essentially due to the fact that V[fp,k,t ] = � � � � 2 def def 1 1 2 2 2 Tp,k,t (vp,k + σp,k ) and not Tp,t (vk + σk ) (with vp = Vµp f (X) and σp = Eµp σ (X) ). ˆ p,k,t In order to correct this term, we now introduce the second estimator σ 2 that estimates the variance � � � � � � of the outputs in a region Rp,k , i.e. Vµp,k ,ν Y (X) = Vµp,k f (X) + Eµp,k σ 2 . It is defined as �2 t t �� 1 1 � def ˆ p,k,t = Yi − Yj I{Xj ∈ Rp,k } I{Xi ∈ Rp,k } . σ2 Tp,k,t − 1 i=1 Tp,k,t j=1 Now, we combine the two previous estimators to form the following estimator K 1 �� 1 1 � 2 ˆ ˆ ˆ σ − . Qp,t = Vp,t − K Tp,k,t Tp,t p,k,t k=1 ˆ The following proposition provides a high-probability bound on the difference between Qp,t and the quantity we want to estimate. We report the detailed proof in [Carpentier and Maillard, 2012]. 6 ˆ Proposition 1 By the assumption that f is (L, α)-H¨ lder, the bias of the estimator Qp,t , and for o deterministic Tp,t , is given by � K � � � � � � � � � 2 1 � 2L2 dα ˆ − Vµp f (X) ≤ . Eµp,k f − Eµp f E Qp,t − Qp (Tp,t ) = K (KPn )2α/d k=1 Moreover, it satisfies that for all δ ∈ [0, 1], there exists an event of probability higher than 1 − δ such that on this event, we have � � � � � � K K � � � � 8 log(4/δ) � σ 2 �1 � � � � ˆ p,k,t 1 � 2 ˆ ˆ � Qp,t − E Qp,t � ≤ � √ +o σ p,k . � � (K − 1)2 T2 T K K k=1 p,k,t p,k,t k=1 We also state the following Lemma that we are going to use in the analysis, and that takes into account randomness of the stopping times Tp,k,t . Lemma 2 Let {Xp,k,u }p≤P, k≤K, u≤n be samples potentially sampled in region Rp,k . We introduce qp,u to be the�equivalent of Qp (Tp,t ) with explicitly fixed value of Tp,t = u. Let also qp,u be the ˆ � ˆ p,t but computed with the first u samples in estimate of E qp,u , that is to say the equivalent of Q each region Rp,k (i.e. Tp,t = u). Let us define the event � � � � � � � AK log(4nP/δ)V � � ˆp,t 2L2 dα � � ξn,P,K (δ) = + ω : � qp,u (ω) − E qp,u � ≤ ˆ , u K −1 (KPn )2α/d p≤P u≤n �K 1 ˆ ˆ ˆ p,k,t and where A ≤ 4 is a numerical constant. Then it where Vp,t = Vp (Tp,t ) = K−1 k=1 σ 2 holds that � � P ξn,P,K (δ) ≥ 1 − δ . Note that, with the notations of this Lemma, Proposition 1 above is thus about qp,u . 
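Read through the extraction noise, the two estimators above combine a between-sub-region variance V̂_{p,t} with per-sub-region output variances σ̂²_{p,k,t} to form the bias-corrected quantity Q̂_{p,t}. The sketch below is a hedged transcription of that combination, written for this summary; the exact constants should be checked against the original paper, and each sub-region is assumed to already hold at least two samples.

```python
import numpy as np

def q_hat(samples_per_subregion):
    # samples_per_subregion: list of K 1-d arrays of observed Y values in R_{p,1},...,R_{p,K}
    K = len(samples_per_subregion)
    T_pk = np.array([len(s) for s in samples_per_subregion])
    T_p = T_pk.sum()
    sub_means = np.array([s.mean() for s in samples_per_subregion])
    cell_mean = np.concatenate(samples_per_subregion).mean()
    V_hat = ((sub_means - cell_mean) ** 2).sum() / (K - 1)          # between-sub-region variance
    sigma2_hat = np.array([s.var(ddof=1) for s in samples_per_subregion])
    correction = ((1.0 / T_pk - 1.0 / T_p) * sigma2_hat).mean()     # (1/K) sum_k (1/T_pk - 1/T_p) sigma2_pk
    return V_hat - correction
```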
ˆ 3.3 The Online allocation and homogeneous partitioning algorithm for piecewise constant mean-approximation (OAHPA-pcma) We are now ready to state the algorithm that we propose for minimizing the quadratic error of approximation of f . The algorithm is described in Figure 1. Although it looks similar, this algorithm is ˆ quite different from a normal UCB algorithm since Qp,t decreases in expectation with Tp,t . Indeed, � � � � � �� �K � 1 its expectation is close to Vµp f + KTp,t k=1 Vµp,k f + Eµp,k σ 2 . Algorithm 1 OAHPA-pcma. 1: Input: A, L, α, Horizon n; Partition {Rp }p≤P , with sub-partitions {Rp,k }k≤K . 2: Initialization: Sample K points in every sub-region {Rp,k }p≤P,k≤K 3: for t = K 2 P + 1; t ≤ n; t = t + K do ˆ 4: Compute ∀p, Qp,t . � ˆ ˆ p,t + AK log(4nP/δ)Vp,t + 2L2 dα . 5: Compute ∀p, Bp,t = Q 2α/d Tp,t K−1 (KPn ) 6: Select the region pt = argmax1≤p≤Pn Bp,t where to sample. 7: Sample K samples in region Rpt one per sub-region Rpt ,k according to µpt ,k . 8: end for 4 Performance of the allocation strategy and discussion Here is the main result of the paper; see the full version [Carpentier and Maillard, 2012] for the proof. We remind that the objective is to minimize for an algorithm A the pseudo-loss Ln (A). Theorem 1 (Main result) Let γ = � maxp Tp,n � minp Tp,n be the distortion factor of the optimal allocation stratdef d d egy, and let � > 0. Then with the choice of the number of regions Pn = n 2α+d �2+ 2α , and of the 2d d def def 8L2 α number of sub-regions K = C 4α+d �−2− α , where C = Ad1−α then the pseudo-loss of the OAHPApcma algorithm satisfies, under the assumptions of Section 3.1 and on an event of probability higher than 1 − δ, � � � � � 2α 1 + �γC � log(1/δ) Ln (A� ) + o n− 2α+d , Ln (A) ≤ n for some numerical constant C � not depending on n, where A� is the oracle of Lemma 1. n 7 Minimax-optimal partitioning and �-adaptive performance Theorem 1 provides a high probability bound on the performance of the OAHPA-pcma allocation strategy. It shows that this performance is competitive with that of an optimal (i.e. adaptive to the function f , see Lemma 1) allocation d A� on a partition with a number of cells Pn chosen to be of minimax order n 2α+d for the class of 2α α-H¨ lder functions. In particular, since Ln (A� ) = O(n d+2α ) on that class, we recover the same o n minimax order as what is obtained in the batch learning setting, when using for instance wavelets, or Kernel estimates (see e.g. Stone [1980], Ibragimov and Hasminski [1981]). But moreover, due to the adaptivity of A� to the function itself, this procedure is also �-adaptive to the function and not n only minimax-optimal on the class, on that partition (see Section 2.2). Naturally, the performance of the method increases, in the same way than for any classical functional estimation method, when the smoothness of the function increases. Similarly, in agreement with the classical curse of dimension, the higher the dimension of the domain, the less efficient the method. Limitations In this work, we assume that the smoothness α of the function is available to the learner, which enables her to calibrate Pn properly. Now it makes sense to combine the OAHPApcma procedure with existing methods that enable to estimate this smoothness online (under a slightly stronger assumption than H¨ lder, such as H¨ lder functions that attain their exponents, o o see Gin´ and Nickl [2010]). 
It is thus interesting, when no preliminary knowledge on the smoothness of f is available, to spend some of the initial budget in order to estimate α. We have seen that the OAHPA-pcma procedure, although very simple, manages to get minimax-optimal results. Now the downside of the simplicity of the OAHPA-pcma strategy is two-fold. The first limitation is that the factor (1 + εγC′√(log(1/δ))) = (1 + O(ε)) appearing in the bound before Ln(A*_n) is not 1, but higher than 1. Of course it is generally difficult to get a constant 1 in the batch setting (see Arlot [2007]), and similarly this is a difficult task in our online setting too: if ε is chosen to be small, then the error with respect to the optimal allocation is small. However, since Pn is expressed as an increasing function of ε, this implies that the minimax bound on the loss for partition P increases also with ε. That said, in view of the work on active learning for multi-armed bandits that we extend, we would still prefer to get the optimal constant 1. The second limitation is more problematic: since K is chosen irrespective of the region Rp, this causes the presence of the factor γ. Thus the algorithm will essentially no longer enjoy near-optimal performance guarantees when the optimal allocation strategy is highly non-homogeneous.

Conclusion and future work In this paper, we considered online regression with histograms in an active setting (we select in which bin to sample), and when we can choose the histogram in a class of homogeneous histograms. Since the (unknown) noise is heteroscedastic and we compete not only with the minimax allocation oracle on α-Hölder functions but with the adaptive oracle that uses a minimax-optimal histogram and allocates samples adaptively to the target function, this is an extremely challenging (and very practical) setting. Our contribution can be seen as a non-trivial extension of the setting of active learning for multi-armed bandits to the case when each arm corresponds to one continuous region of a sampling space, as opposed to a singleton, which can also be seen as a problem of non-parametric function approximation. This new setting offers interesting challenges: we provided a simple procedure, based on the computation of upper confidence bounds on the estimated local quadratic error of approximation, and provided a performance analysis that shows that OAHPA-pcma is first-order ε-optimal with respect to the function, for a partition chosen to be minimax-optimal on the class of α-Hölder functions. However, this simplicity also has a drawback if one is interested in building an exactly first-order optimal procedure, and going beyond these limitations is definitely not trivial: a more optimal but much more complex algorithm would indeed need to tune a different factor Kp in each cell in an online way, i.e. define some Kp,t that evolves with time, and redefine sub-regions accordingly. Now, the analysis of OAHPA-pcma already makes use of powerful tools such as empirical-Bernstein bounds for variance estimation (and not only for mean estimation), which makes it non-trivial; in order to handle possibly evolving sub-regions and deal with the progressive refinement of the regions, we would need an even more intricate analysis, due to the fact that we are online and active. This interesting next step is postponed to future work.
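Putting the pieces of Section 3 together, the OAHPA-pcma loop described above can be sketched schematically as follows. This is an illustrative simplification, not the authors' exact algorithm: the exploration term `width` is a placeholder for the empirical-Bernstein confidence width of Lemma 2 (its constants and the smoothness term are not reproduced here), `sample_cell(p, k, count)` is a hypothetical callback returning `count` noisy observations drawn uniformly in sub-region R_{p,k}, and `q_hat` is the estimator sketched after Section 3.2 above.

```python
import numpy as np

def oahpa_pcma_sketch(sample_cell, P, K, n, delta=0.05, smooth_term=0.0):
    # assumes K >= 2 so that per-sub-region variances are defined
    data = [[list(sample_cell(p, k, K)) for k in range(K)] for p in range(P)]  # init: K per sub-region
    t = K * K * P
    while t + K <= n:
        ucbs = []
        for p in range(P):
            arrs = [np.asarray(s, float) for s in data[p]]
            T_p = sum(len(a) for a in arrs)
            v_hat = np.mean([a.var(ddof=1) for a in arrs])
            width = np.sqrt(K * np.log(4 * n * P / delta) * v_hat / T_p)  # simplified bonus
            ucbs.append(q_hat(arrs) + width + smooth_term)
        p_star = int(np.argmax(ucbs))          # cell with the largest UCB on its local error
        for k in range(K):                      # one new sample per sub-region of that cell
            data[p_star][k].extend(sample_cell(p_star, k, 1))
        t += K
    return data
```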
Acknowledgements This research was partially supported by Nord-Pas-de-Calais Regional Council, French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 270327 (CompLACS) and no 216886 (PASCAL2).

References

András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712–2728, 2010.
Sylvain Arlot. Rééchantillonnage et Sélection de modèles. PhD thesis, Université Paris Sud - Paris XI, 2007.
A. Baranes and P.-Y. Oudeyer. R-IAC: Robust Intrinsically Motivated Exploration and Active Learning. IEEE Transactions on Autonomous Mental Development, 1(3):155–169, October 2009.
D. Bosq and J.P. Lecoutre. Théorie de l'estimation fonctionnelle, volume 21. Economica, 1987.
Alexandra Carpentier and Odalric-Ambrym Maillard. Online allocation and homogeneous partitioning for piecewise constant mean-approximation. HAL, 2012. URL http://hal.archives-ouvertes.fr/hal-00742893.
Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, and Peter Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 189–203. Springer Berlin / Heidelberg, 2011.
E. Giné and R. Nickl. Confidence bands in density estimation. The Annals of Statistics, 38(2):1122–1170, 2010.
L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A distribution-free theory of nonparametric regression. Springer Verlag, 2002.
I. Ibragimov and R. Hasminski. Statistical estimation: Asymptotic theory. 1981.
M. Rosenblatt. Stochastic curve estimation, volume 3. Inst of Mathematical Statistic, 1991.
J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). Autonomous Mental Development, IEEE Transactions on, 2(3):230–247, 2010.
C.J. Stone. Optimal rates of convergence for nonparametric estimators. The Annals of Statistics, pages 1348–1360, 1980.
J.W. Tukey. Non-parametric estimation II. Statistically equivalent blocks and tolerance regions – the continuous case. The Annals of Mathematical Statistics, 18(4):529–539, 1947.

6 0.43757525 343 nips-2012-Tight Bounds on Profile Redundancy and Distinguishability

7 0.42053267 285 nips-2012-Query Complexity of Derivative-Free Optimization

8 0.40949881 154 nips-2012-How They Vote: Issue-Adjusted Models of Legislative Behavior

9 0.40366891 149 nips-2012-Hierarchical Optimistic Region Selection driven by Curiosity

10 0.3886939 85 nips-2012-Convergence and Energy Landscape for Cheeger Cut Clustering

11 0.37667429 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

12 0.34703815 45 nips-2012-Approximating Equilibria in Sequential Auctions with Incomplete Information and Multi-Unit Demand

13 0.34663966 21 nips-2012-A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes

14 0.33695331 181 nips-2012-Learning Multiple Tasks using Shared Hypotheses

15 0.33125907 217 nips-2012-Mixability in Statistical Learning

16 0.33075202 166 nips-2012-Joint Modeling of a Matrix with Associated Text via Latent Binary Features

17 0.32747671 76 nips-2012-Communication-Efficient Algorithms for Statistical Optimization

18 0.32511386 271 nips-2012-Pointwise Tracking the Optimal Regression Function

19 0.32268527 134 nips-2012-Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

20 0.32225329 115 nips-2012-Efficient high dimensional maximum entropy modeling via symmetric partition functions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.036), (21, 0.052), (28, 0.021), (36, 0.017), (38, 0.166), (39, 0.017), (42, 0.041), (51, 0.225), (53, 0.015), (55, 0.015), (74, 0.036), (76, 0.137), (80, 0.07), (92, 0.035)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81491798 36 nips-2012-Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions

Author: Alexandra Carpentier, Rémi Munos

Abstract: We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations to the function. We construct a sampling scheme that samples more often in regions where the function oscillates more, while allocating the samples such that they are well spread on the domain (this notion shares similitude with low discrepancy). We prove that the estimate returned by the algorithm is almost similarly accurate as the estimate that an optimal oracle strategy (that would know the variations of the function everywhere) would return, and provide a finite-sample analysis. 1
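To make the abstract concrete, here is a generic two-stage adaptive stratified Monte Carlo sketch in one dimension. It is illustrative only and is not the paper's algorithm or its allocation rule: a pilot pass estimates the variability of f in each stratum, the remaining budget is spread proportionally to the estimated standard deviations, and the stratified estimate is the average of stratum means (all strata have measure 1/K).

```python
import numpy as np

def adaptive_stratified_mc(f, n, K, pilot_per_stratum=10, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, K + 1)
    # pilot pass: a few uniform samples in every stratum
    samples = [list(f(rng.uniform(edges[k], edges[k + 1], pilot_per_stratum)))
               for k in range(K)]
    stds = np.array([np.std(s, ddof=1) + 1e-12 for s in samples])
    remaining = max(n - K * pilot_per_stratum, 0)
    extra = np.floor(remaining * stds / stds.sum()).astype(int)  # more samples where f varies more
    for k in range(K):
        if extra[k] > 0:
            samples[k].extend(f(rng.uniform(edges[k], edges[k + 1], extra[k])))
    # each stratum has measure 1/K, so the estimate is the mean of stratum means
    return np.mean([np.mean(s) for s in samples])
```

For instance, `adaptive_stratified_mc(lambda x: np.sin(1.0 / (x + 0.05)), n=5000, K=50)` spends most of its budget near 0, where that integrand oscillates fastest.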

2 0.81149393 49 nips-2012-Automatic Feature Induction for Stagewise Collaborative Filtering

Author: Joonseok Lee, Mingxuan Sun, Guy Lebanon, Seung-jean Kim

Abstract: Recent approaches to collaborative filtering have concentrated on estimating an algebraic or statistical model, and using the model for predicting missing ratings. In this paper we observe that different models have relative advantages in different regions of the input space. This motivates our approach of using stagewise linear combinations of collaborative filtering algorithms, with non-constant combination coefficients based on kernel smoothing. The resulting stagewise model is computationally scalable and outperforms a wide selection of state-of-the-art collaborative filtering algorithms. 1
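One hedged reading of "stagewise linear combinations with non-constant, kernel-smoothed coefficients" is the toy scheme below, written for this summary rather than taken from the paper: at each stage the current residual is fit by one base recommender, with a coefficient that varies smoothly with a meta-feature z (assumed here to be something like the user's rating count), obtained by kernel-weighted least squares.

```python
import numpy as np

def local_coefficient(z, z0, residual, h_pred, bandwidth):
    w = np.exp(-0.5 * ((z - z0) / bandwidth) ** 2)        # Gaussian kernel weights around z0
    return np.sum(w * residual * h_pred) / (np.sum(w * h_pred ** 2) + 1e-12)

def stagewise_blend(y, base_preds, z, n_stages=5, bandwidth=1.0):
    # base_preds: list of 1-d arrays, one per base collaborative-filtering model
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_stages):
        residual = y - pred
        best_gain, best_update = -np.inf, None
        for h_pred in base_preds:
            alpha = np.array([local_coefficient(z, z0, residual, h_pred, bandwidth)
                              for z0 in z])                # coefficient varies with z
            update = alpha * h_pred
            gain = np.sum(residual ** 2) - np.sum((residual - update) ** 2)
            if gain > best_gain:
                best_gain, best_update = gain, update
        pred += best_update                                # greedy stagewise step
    return pred
```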

3 0.7694456 353 nips-2012-Transferring Expectations in Model-based Reinforcement Learning

Author: Trung Nguyen, Tomi Silander, Tze Y. Leong

Abstract: We study how to automatically select and adapt multiple abstractions or representations of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient, online framework that, through a sequence of tasks, learns a set of relevant representations to be used in future tasks. Without predefined mapping strategies, we introduce a general approach to support transfer learning across different state spaces. We demonstrate the potential impact of our system through improved jumpstart and faster convergence to near optimum policy in two benchmark domains. 1

4 0.71688211 18 nips-2012-A Simple and Practical Algorithm for Differentially Private Data Release

Author: Moritz Hardt, Katrina Ligett, Frank Mcsherry

Abstract: We present a new algorithm for differentially private data release, based on a simple combination of the Multiplicative Weights update rule with the Exponential Mechanism. Our MWEM algorithm achieves what are the best known and nearly optimal theoretical guarantees, while at the same time being simple to implement and experimentally more accurate on actual data sets than existing techniques. 1
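The abstract describes MWEM as Multiplicative Weights plus the Exponential Mechanism; the sketch below reconstructs that combination for linear counting queries over a small discrete domain. It is written from the abstract and the generally known form of the algorithm, so the privacy-budget split and the update scaling are assumptions rather than the authors' exact specification.

```python
import numpy as np

def mwem(true_hist, queries, T, eps, rng=None):
    # true_hist: counts over the data universe; queries: (num_queries, |universe|)
    # 0/1 matrix, each row a linear counting query.
    rng = rng or np.random.default_rng(0)
    n = true_hist.sum()
    synth = np.full_like(true_hist, n / len(true_hist), dtype=float)  # uniform start
    eps_round = eps / (2 * T)                    # half per round for selection, half for measurement
    true_answers = queries @ true_hist
    for _ in range(T):
        errors = np.abs(true_answers - queries @ synth)
        scores = eps_round * errors / 2.0        # exponential mechanism over badly-answered queries
        probs = np.exp(scores - scores.max())
        i = rng.choice(len(queries), p=probs / probs.sum())
        m = true_answers[i] + rng.laplace(scale=1.0 / eps_round)      # Laplace measurement
        synth *= np.exp(queries[i] * (m - queries[i] @ synth) / (2.0 * n))  # multiplicative weights
        synth *= n / synth.sum()                 # renormalise to total mass n
    return synth
```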

5 0.7153815 302 nips-2012-Scaling MPE Inference for Constrained Continuous Markov Random Fields with Consensus Optimization

Author: Stephen Bach, Matthias Broecheler, Lise Getoor, Dianne O'leary

Abstract: Probabilistic graphical models are powerful tools for analyzing constrained, continuous domains. However, finding most-probable explanations (MPEs) in these models can be computationally expensive. In this paper, we improve the scalability of MPE inference in a class of graphical models with piecewise-linear and piecewise-quadratic dependencies and linear constraints over continuous domains. We derive algorithms based on a consensus-optimization framework and demonstrate their superior performance over state of the art. We show empirically that in a large-scale voter-preference modeling problem our algorithms scale linearly in the number of dependencies and constraints. 1

6 0.71536911 335 nips-2012-The Bethe Partition Function of Log-supermodular Graphical Models

7 0.7151742 261 nips-2012-Online allocation and homogeneous partitioning for piecewise constant mean-approximation

8 0.71393633 149 nips-2012-Hierarchical Optimistic Region Selection driven by Curiosity

9 0.71360618 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

10 0.71335566 178 nips-2012-Learning Label Trees for Probabilistic Modelling of Implicit Feedback

11 0.71287936 187 nips-2012-Learning curves for multi-task Gaussian process regression

12 0.71234572 117 nips-2012-Ensemble weighted kernel estimators for multivariate entropy estimation

13 0.71220523 114 nips-2012-Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference

14 0.71148491 325 nips-2012-Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions

15 0.71129024 69 nips-2012-Clustering Sparse Graphs

16 0.71100539 333 nips-2012-Synchronization can Control Regularization in Neural Systems via Correlated Noise Processes

17 0.710684 236 nips-2012-Near-Optimal MAP Inference for Determinantal Point Processes

18 0.71009433 199 nips-2012-Link Prediction in Graphs with Autoregressive Features

19 0.70999175 319 nips-2012-Sparse Prediction with the $k$-Support Norm

20 0.70954919 304 nips-2012-Selecting Diverse Features via Spectral Regularization