jmlr jmlr2012 jmlr2012-93 knowledge-graph by maker-knowledge-mining

93 jmlr-2012-Quantum Set Intersection and its Application to Associative Memory


Source: pdf

Author: Tamer Salman, Yoram Baram

Abstract: We describe a quantum algorithm for computing the intersection of two sets and its application to associative memory. The algorithm is based on a modification of Grover’s quantum search algorithm (Grover, 1996). We present algorithms for pattern retrieval, pattern completion, and pattern correction. We show that the quantum associative memory can store an exponential number of memories and retrieve them in sub-exponential time. We prove that this model has advantages over known classical associative memories as well as previously proposed quantum models. Keywords: associative memory, pattern completion, pattern correction, quantum computation, quantum search

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Department of Computer Science, Technion - Israel Institute of Technology, Haifa 32000, Israel. Editor: Manfred Opper. Abstract: We describe a quantum algorithm for computing the intersection of two sets and its application to associative memory. [sent-7, score-0.869]

2 The algorithm is based on a modification of Grover’s quantum search algorithm (Grover, 1996). [sent-8, score-0.608]

3 We show that the quantum associative memory can store an exponential number of memories and retrieve them in sub-exponential time. [sent-10, score-1.119]

4 We prove that this model has advantages over known classical associative memories as well as previously proposed quantum models. [sent-11, score-0.871]

5 Keywords: associative memory, pattern completion, pattern correction, quantum computation, quantum search 1. [sent-12, score-1.53]

6 Introduction The introduction of Shor’s algorithm for factoring numbers in polynomial time (Shor, 1994) has demonstrated the ability of quantum computation to solve certain problems more efficiently than classical computers. [sent-13, score-0.659]

7 This perception was ratified two years later, when Grover (1996) introduced a sub-exponential quantum algorithm for searching a database. [sent-14, score-0.608]

8 The field of quantum computation is based on the combination of computation theory and quantum mechanics. [sent-15, score-1.216]

9 Quantum mechanics, on the other hand, concerns the study of systems governed by the rules of quantum physics. [sent-17, score-0.608]

10 However, there is still no efficient reduction of quantum mechanical behavior to classical computation. [sent-19, score-0.659]

11 It is based on four postulates, known as the postulates of quantum mechanics (Nielsen and Chuang, 2000), which provide a connection between the physical world and mathematical formalism. [sent-21, score-0.675]

12 The second states that the evolution of a closed quantum system is described by a unitary transformation. [sent-24, score-0.739]

13 The third states that a quantum measurement is described by a collection of measurement operators that satisfy the completeness equality, that is, the probabilities of all possible measurement outcomes add up to 1. [sent-25, score-0.803]
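As an illustration (not from the paper), the completeness equality and the resulting outcome probabilities can be checked numerically for a single-qubit computational-basis measurement:

```python
import numpy as np

# A minimal sketch: computational-basis measurement of one qubit, described
# by the measurement operators M0 = |0><0| and M1 = |1><1|.
M0 = np.array([[1, 0], [0, 0]], dtype=complex)
M1 = np.array([[0, 0], [0, 1]], dtype=complex)

# Completeness equality: the products M†M of all operators sum to the identity.
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

# Outcome probabilities for the state |psi> = (|0> + |1>)/sqrt(2).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
p0 = np.vdot(psi, M0.conj().T @ M0 @ psi).real
p1 = np.vdot(psi, M1.conj().T @ M1 @ psi).real
print(p0, p1)  # both 0.5, summing to 1
```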

14 Next, we introduce the basic building blocks of quantum computation. [sent-28, score-0.608]

15 In quantum computation, the basic entity is called a qubit (quantum bit). [sent-38, score-0.81]

16 When measured partially, the unmeasured subsystem can retain quantum superposition and further quantum manipulations can be performed upon it. [sent-54, score-1.292]

17 3 Operators. In quantum computation, a system changes its state under a unitary quantum operator U from |Ψ⟩ to U|Ψ⟩. [sent-57, score-1.357]

18 An operator U can be described as a 2^n × 2^n matrix operating on the unary representation. Figure 1: The 2-qubit Controlled-Not (CNOT) quantum gate. [sent-58, score-0.665]

19 Quantum operators can be implemented using quantum gates, which are the analogue of the classical gates that compose classical electrical circuits. [sent-62, score-0.784]

20 In this analogy, the wires of a circuit carry the information on the system’s state, while the quantum gates manipulate their contents to different states. [sent-63, score-0.691]

21 Operators can also be quantum gates operating on multiple qubits. [sent-65, score-0.648]

22 An n-qubit quantum operator has n inputs and n outputs. [sent-66, score-0.665]

23 For example, the 2-qubit controlled-not (CNOT) gate depicted in Figure 1 flips the target (second) qubit if the control (first) qubit has the value |1⟩ and leaves it unchanged if the control qubit has the value |0⟩. [sent-67, score-0.674]
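The CNOT gate's action can be sketched as a small NumPy check; the matrix and basis ordering below are the standard ones, not taken from the paper's figures:

```python
import numpy as np

# Sketch: the CNOT gate as a 4x4 unitary on the basis |control, target>,
# ordered |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def basis(i, n=4):
    v = np.zeros(n, dtype=complex)
    v[i] = 1
    return v

# Control |1>: target flipped; control |0>: target unchanged.
assert np.allclose(CNOT @ basis(0b10), basis(0b11))
assert np.allclose(CNOT @ basis(0b11), basis(0b10))
assert np.allclose(CNOT @ basis(0b00), basis(0b00))
# CNOT is unitary (and its own inverse).
assert np.allclose(CNOT @ CNOT, np.eye(4))
```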

24 A quantum oracle is a reversible oracle that accepts a superposition of inputs and produces a superposition of outputs as depicted in Figure 4. [sent-85, score-0.977]

25 When the additional qubit is initialized to |−⟩, the oracle is called a quantum phase oracle, which gives f(x) in the phase of the state |x⟩ as follows: |x⟩|−⟩ → (−1)^f(x) |x⟩|−⟩. [sent-87, score-1.122]
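This phase-kickback behaviour can be verified in a toy simulation; the function f below is an arbitrary illustrative choice, not one used in the paper:

```python
import numpy as np

# Sketch: U_f maps |x>|y> to |x>|y XOR f(x)>; feeding |-> into the ancilla
# turns the bit oracle into a phase oracle.
n = 3
f = lambda x: 1 if x in (2, 5) else 0   # hypothetical example function

# Build U_f as a permutation matrix on n+1 qubits (last qubit = ancilla y).
N = 2 ** (n + 1)
Uf = np.zeros((N, N))
for x in range(2 ** n):
    for y in (0, 1):
        Uf[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1

minus = np.array([1, -1]) / np.sqrt(2)  # the |-> state
for x in range(2 ** n):
    ket_x = np.zeros(2 ** n)
    ket_x[x] = 1
    out = Uf @ np.kron(ket_x, minus)
    # The ancilla is unchanged; |x> acquired the phase (-1)^f(x).
    assert np.allclose(out, (-1) ** f(x) * np.kron(ket_x, minus))
```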

26 Suppose that we have constructed a quantum circuit U_f that implements a function f : {0,1}^n → {0,1}, such that when introduced with an input |x⟩|y⟩, the output of the circuit would be |x⟩|y ⊕ f(x)⟩. [sent-88, score-0.694]

27 Quantum parallelism is the ability of the quantum circuit to process many inputs simultaneously and receive all the outcomes at the output. [sent-89, score-0.651]

28 The superposition will be maintained through the quantum circuit and the resulting output would be (1/√(2^n)) ∑_i |i⟩|f(i)⟩, with i ranging over all 2^n basis states. [sent-92, score-0.727]
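A minimal sketch of quantum parallelism under these definitions (the function f is a hypothetical example):

```python
import numpy as np

# Hadamards on n qubits create the uniform superposition; a single application
# of U_f then leaves every pair |i>|f(i)> present in the output state.
n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

zero = np.zeros(2 ** n)
zero[0] = 1                       # |0...0>
superpos = Hn @ zero
assert np.allclose(superpos, np.full(2 ** n, 1 / np.sqrt(2 ** n)))

f = lambda i: i % 2               # hypothetical example function
state = np.zeros(2 ** (n + 1))    # (1/sqrt(2^n)) sum_i |i>|f(i)>
for i in range(2 ** n):
    state[(i << 1) | f(i)] = superpos[i]
print(np.linalg.norm(state))      # 1.0 — a valid quantum state
```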

29 However, further quantum computation allows the different values to interfere together and reveal some information concerning the function f . [sent-95, score-0.608]

30 Deutsch and Jozsa (1992) showed that if a binary n-dimensional function is guaranteed to be either constant or balanced, then determination can be done using only one query to a quantum oracle implementing it, while a classical solution would require an exponential number of queries. [sent-97, score-0.776]

31 Another example of the advantage of quantum algorithms over classical ones was presented by Simon (1994), in which a function is known to have the property that there exists some s ∈ {0, 1}^n for which, for all x, y ∈ {0, 1}^n, it holds that f(x) = f(y) if and only if x = y or x ⊕ y = s. [sent-98, score-0.659]

32 Simon (1994) proved that using quantum computations, s can be found exponentially faster than with any classical algorithm, including probabilistic algorithms. [sent-99, score-0.659]

33 5 Solving Problems using Quantum Computation According to the above definitions of a system state, measurement, and operators, a quantum computer drives the dynamics of the quantum system through operators that change its state and measures the final state to reveal classical information. [sent-101, score-1.415]

34 Consequently, we can describe a schematic process for solving problems using quantum computation as follows: a general solution scheme using quantum computations. Given: classical input data. 1. [sent-103, score-1.216]

35 Apply the quantum circuit to the initial quantum state. 4. [sent-106, score-1.297]

36 In 1996, Grover presented a quantum computational algorithm that searches an unsorted database with O(√N) operations (Grover, 1996; Boyer et al. [sent-110, score-0.608]

37 The quantum phase oracle of the function fX flips (rotates by π) the amplitude of the states of X, while leaving all other states unchanged. [sent-116, score-1.033]

38 In the next sections, we present our algorithm for set intersection and its use in a quantum model of associative memory. [sent-130, score-0.869]

39 However, presenting the algorithm with the quantum superposition of pairs xi and yi created by the use of the oracle B makes it a model for general associative memory with no additional cost, as a quantum search can yield yi upon measurement of xi . [sent-132, score-1.878]

40 7 Associative Memory Associative memory stores and retrieves patterns with error correction or pattern completion of the input. [sent-134, score-0.785]

41 Quantum Intersection. Given a set of marked states K of size k, Grover’s quantum search algorithm for multiple marked states yields any member of the set with probability O(k^(−1/2)) when given a phase version I_K of an oracle f_K of the form f_K(x) = 1 if x ∈ K and 0 otherwise. [sent-149, score-1.149]

42 We define the problem of quantum intersection as the choice of any member of the intersection set K ∩ M of size r with probability O(r^(−1/2)). [sent-152, score-0.854]

43 Retrieving a state in the intersection between the two sets is accomplished by using Grover’s quantum search algorithm with the phase version I_{K∩M} of the oracle f_{K∩M}. [sent-156, score-0.872]
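A classical simulation of this step — Grover iterations driven by the phase oracle of K ∩ M — can be sketched as follows; the sets K and M and their sizes are illustrative, not the paper's:

```python
import numpy as np

# Sketch: Grover's search with the phase oracle of the intersection K ∩ M.
n = 8
N = 2 ** n
K = set(range(64))                  # hypothetical "completion" set, k = 64
M = {3, 40, 200, 250}               # hypothetical memory set, m = 4
marked = np.array([1.0 if x in (K & M) else 0.0 for x in range(N)])
r = int(marked.sum())               # |K ∩ M| = 2

state = np.full(N, 1 / np.sqrt(N))  # uniform initial superposition
for _ in range(int(round(np.pi / 4 * np.sqrt(N / r)))):
    state = np.where(marked == 1, -state, state)  # phase oracle I_{K∩M}
    state = 2 * state.mean() - state              # inversion about the mean
p_success = float((state[marked == 1] ** 2).sum())
print(p_success)   # close to 1: a member of K ∩ M is measured w.h.p.
```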

44 Alternatively, we can use the quantum counting algorithm (Brassard et al. [sent-167, score-0.608]

45 Quantum Associative Memory In this section we introduce our associative memory model and the retrieval procedure with pattern completion and correction abilities. [sent-190, score-0.923]

46 We present the concept of memory as a quantum operator that flips the phase of the memory patterns. [sent-191, score-1.323]

47 1 Pattern Completion Let IM be a phase oracle on a set M, called the memory set, of m n-qubit patterns and let x′ be a version of a memory pattern x ∈ M with d missing bits. [sent-197, score-0.899]

48 Denoting by K the set of possible completions of the partial pattern and by k its size, the completion problem can be reduced to the problem of retrieving a member x of the intersection between the two sets K and M, x ∈ K ∩ M. [sent-201, score-0.653]

49 Pattern completion is the computation of the intersection between K and M, which is the memory pattern 0110100. [sent-205, score-0.712]
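The reduction can be illustrated classically; the partial pattern and memory set below are hypothetical, chosen so that the completions include the memory pattern 0110100 mentioned above:

```python
from itertools import product

# Sketch: the set K of completions of a partial pattern with d missing bits
# ('?') has k = 2^d members; completion reduces to computing K ∩ M.
def completions(partial):
    slots = [i for i, b in enumerate(partial) if b == '?']
    out = set()
    for fill in product('01', repeat=len(slots)):
        s = list(partial)
        for i, b in zip(slots, fill):
            s[i] = b
        out.add(''.join(s))
    return out

M = {'0110100', '1010011', '0001111'}   # illustrative memory set
K = completions('011?10?')              # d = 2 missing bits, so k = 4
print(sorted(K & M))                    # ['0110100']
```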

50 Pattern completion can use either the intersection oracle presented in Figure 6 or the quantum intersection presented in Algorithm 1. [sent-206, score-1.116]

51 In either case, we need to create the completion operator f_K or its phase version I_K that marks the states of the set K, which can be implemented by checking whether a state is a completion of the partial pattern x′ represented by the set K. [sent-207, score-0.825]

52 The algorithm for pattern completion through the quantum intersection algorithm is Algorithm 2 (Quantum Pattern Completion). Given: a memory operator I_M and a pattern x′ ∈ {0,1}^n, which is a partial version of some memory pattern with up to d missing bits. 1. [sent-214, score-1.971]

53 2 Pattern Correction. Let I_M be a phase oracle of a memory set M of size m and let x′ be a version of a memory pattern x ∈ M with up to d faulty bits. [sent-218, score-0.887]
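For correction, the candidate set K is the Hamming ball of radius d around the faulty input, which can be enumerated directly in a sketch (the faulty input and memory set below are illustrative):

```python
from itertools import combinations

# Sketch: all strings within Hamming distance d of the faulty input x'.
def hamming_ball(x, d):
    n = len(x)
    ball = set()
    for r in range(d + 1):
        for idxs in combinations(range(n), r):
            s = list(x)
            for i in idxs:
                s[i] = '1' if s[i] == '0' else '0'
            ball.add(''.join(s))
    return ball

K = hamming_ball('0111100', 1)          # hypothetical faulty input, d = 1
assert len(K) == 1 + 7                  # C(7,0) + C(7,1) candidates
M = {'0110100', '1010011', '0001111'}   # illustrative memory set
print(sorted(K & M))                    # ['0110100'] — the corrected pattern
```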

54 Pattern correction can be solved using the quantum intersection algorithm, which requires the creation of the correction operator fK or its phase version IK . [sent-226, score-1.032]

55 Algorithm 3 (Quantum Pattern Correction). Given: a memory operator I_M and a pattern x′ ∈ {0,1}^n, which is a faulty version of some memory pattern with up to d faulty bits. 1. [sent-233, score-1.046]

56 Apply Algorithm 1 with I_M and I_K. A generalization of both Algorithm 2 and Algorithm 3 to the case of an unknown number of possible corrections is straightforward, using quantum search for an unknown number of marked states (Boyer et al. [sent-236, score-0.776]

57 If the memory is within the correction capacity bounds, Algorithm 3 finds the correct pattern x with high probability, as will be proved in Section 4. [sent-240, score-0.616]

58 However, if we are interested in ensuring that we find the closest memory pattern to x′ , with no dependence on the capacity bound, then we can apply Algorithm 3 for i = 0 bits and increase it up to i = d bits. [sent-241, score-0.594]

59 Analysis of the Quantum Associative Memory In this section we analyze the time complexity and memory capacity of the proposed quantum associative memory. [sent-244, score-1.216]

60 Then we show that the number of memory patterns that can be stored while the model retains its correction and completion abilities is exponential in the number of bits. [sent-246, score-0.714]

61 1 Time Complexity Analysis The time complexity of the retrieval procedure with either pattern completion or correction ability is determined by the complexity of the quantum intersection algorithm and the complexity of the completion and correction operators. [sent-248, score-1.511]

62 The first is the equilibrium capacity Meq , which is the maximal memory size that ensures that all memory patterns are equilibrium points of the model. [sent-253, score-0.908]

63 The second is the pattern completion capacity Mcom , which is the maximal memory size that allows the completion of any partial pattern with up to d missing bits with high probability. [sent-255, score-1.271]

64 The third is the pattern correction capacity Mcor , which is the maximal memory size that allows the correction of any pattern with up to d faulty bits with high probability. [sent-256, score-0.998]

65 2 Completion Capacity. Given a pattern x′, which is a partial version of some memory pattern x_c with d missing bits, we seek the maximal memory size for which the pattern can be completed with high probability from a random uniformly distributed memory set (McEliece et al. [sent-266, score-1.335]

66 The first is a result of Grover’s quantum search algorithm limitations and the second is a result of the probability of correct completion. [sent-269, score-0.636]

67 Grover’s operator flips the marked states around the zero amplitude (negating their amplitudes) then flips all amplitudes around the average of all amplitudes (Biham et al. [sent-271, score-0.601]
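The flip-then-invert dynamics can be sketched numerically; the database size N and the marked set are illustrative:

```python
import numpy as np

# Sketch: one Grover iteration = negate the marked amplitudes, then reflect
# all amplitudes about their average; the marked amplitudes grow.
N, marked = 64, {5, 17}
amp = np.full(N, 1 / np.sqrt(N))
for x in marked:
    amp[x] = -amp[x]                  # phase flip of the marked states
amp = 2 * amp.mean() - amp            # inversion about the average
print(amp[5], amp[0])                 # 0.359375 vs 0.109375 after one step
```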

68 This can be observed in the first iteration of the quantum search algorithm on a number of marked states. [sent-275, score-0.691]

69 The bound on memory size that ensures a high probability of correct completion depends on the definition of the pattern completion procedure. [sent-285, score-0.904]

70 If one defines pattern completion as the process of outputting any of a number of possible memory patterns when given a partial input, then the capacity bound of our memory is the amplification bound given in Equation 16. [sent-286, score-1.138]

71 Pattern completion capacity is usually defined as the maximal size of a random uniformly distributed memory set that, given a partial version x′ of a memory xc ∈ M with d missing bits, outputs xc . [sent-288, score-1.282]

72 memory size divided by the maximal completion capacity v for an associative memory with n = 30 qubits. [sent-307, score-1.194]

73 the memory size divided by the maximal completion capacity v. [sent-320, score-0.723]

74 This is not true for most classical memory models, where spurious memories arise and the output is usually not a memorized pattern but, rather, some spurious combination of multiple memory patterns (Hopfield, 1982; Bruck, 1990; Goles and Martínez, 1990). [sent-328, score-0.742]

75 Algorithm 5 gives satisfying results only when the memory size exceeds N/4, which is exponential in the number of qubits, leaving the possibility of effective pattern completion only for 2 qubits or less. [sent-384, score-0.744]

76 It is therefore not helpful for associative memory with pattern completion and correction abilities. [sent-385, score-0.904]

77 Figure 14(a) shows the memory set, where each vertical line represents a memory pattern, and Figure 14(b) shows the completion set K1 in the same manner. [sent-416, score-0.851]

78 The amplitudes of the final state of the completion algorithm are shown in Figure 14(c), where the only possible memory completion has amplitude close to 1. [sent-417, score-1.09]

79 Figure 15 shows the high amplitudes of the two possible memory completions when the completion set is K2 . [sent-418, score-0.827]

80 Another simulation was carried out on a 10-qubit associative memory with 2^7 memory patterns and completion queries with 3 missing bits. [sent-426, score-1.237]

81 Figure 15: (a) a set of memory patterns M, (b) a set of possible completions K2 to a partial pattern, and (c) the memory completion result in amplitudes. [sent-430, score-1.076]

82 The memory completions and the states that are neither completions nor memories are amplified alternately, while the amplitudes of the K\M and M\K subgroups stay close to zero. [sent-434, score-0.779]

83 For instance, we tested a 30-qubit system with 2^25 memory patterns and a completion query that has 8 missing bits. [sent-437, score-0.906]

84 Our algorithm found a member of the memory completion set with probability 96. [sent-439, score-0.62]

85 Increasing the memory size to 2^26 and 2^27, and thereby bringing the capacity close to its limit, resulted in completion probabilities of 93. [sent-441, score-0.689]

86 Figure 17 depicts the success rates of pattern completion in a 30-qubit system. [sent-444, score-0.598]

87 the logarithm of the size of memory with completion queries set to 8 missing bits and the number of possible memory completions set to 1. [sent-447, score-1.124]

88 the logarithm of the completion query size when the memory size is set to 2^25 patterns and the number of possible memory completions is set to 1. [sent-449, score-1.09]

89 the logarithm of the number of possible memory completions when both the memory size and the completion query size are set to 2^25. [sent-451, score-1.037]

90 the number of qubits in the system (growing from 5 to 30 qubits) when the memory size, the completion query size, and the number of possible memory completions are small constants. [sent-453, score-1.177]

91 Figure 17: Success probability of measuring a desired memory completion vs. [sent-460, score-0.604]

92 the log of the memory size (solid), completion query size (dashed), possible memory completions (dotted), and number of qubits (dash-dotted). [sent-461, score-1.158]

93 Conclusion We have presented a quantum computational algorithm that computes the intersection between two subsets of n-bit strings. [sent-465, score-0.697]

94 The algorithm is based on a modification of Grover’s quantum search. [sent-466, score-0.608]

95 Using the intersection algorithm, we have presented a set of algorithms that implement a model of associative memory via quantum computation. [sent-467, score-1.168]

96 We introduced the notion of memory as a quantum operator, thus avoiding the dependence of the initial state of the system on the memory set. [sent-468, score-1.263]

97 We have shown that our algorithms have both speed and capacity advantages with respect to classical associative memory models, consuming sub-exponential time, and are able to store a number of memory patterns which is exponential in the number of bits. [sent-469, score-1.011]

98 Grover’s quantum search algorithm for an arbitrary initial amplitude distribution. [sent-518, score-0.726]

99 Quantum theory, the Church-Turing principle and the universal quantum computer. [sent-540, score-0.608]

100 Some quantum search algorithms for arbitrary initial amplitude distribution. [sent-596, score-0.726]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('quantum', 0.608), ('memory', 0.299), ('completion', 0.253), ('qubit', 0.202), ('associative', 0.172), ('grover', 0.162), ('completions', 0.146), ('capacity', 0.137), ('baram', 0.137), ('amplitudes', 0.129), ('alman', 0.121), ('emory', 0.121), ('qubits', 0.121), ('amplitude', 0.118), ('pplication', 0.113), ('ssociative', 0.113), ('uantum', 0.113), ('correction', 0.109), ('xc', 0.097), ('ntersection', 0.097), ('intersection', 0.089), ('bits', 0.087), ('states', 0.085), ('marked', 0.083), ('faulty', 0.081), ('oracle', 0.077), ('superposition', 0.076), ('arima', 0.073), ('pattern', 0.071), ('qk', 0.066), ('phase', 0.06), ('operator', 0.057), ('hop', 0.057), ('qm', 0.056), ('hik', 0.055), ('oracles', 0.055), ('patterns', 0.053), ('im', 0.052), ('classical', 0.051), ('success', 0.05), ('ik', 0.049), ('boyer', 0.049), ('physical', 0.043), ('ampli', 0.043), ('circuit', 0.043), ('equilibrium', 0.043), ('brassard', 0.04), ('gates', 0.04), ('memories', 0.04), ('miyajima', 0.04), ('member', 0.04), ('missing', 0.04), ('fk', 0.04), ('query', 0.04), ('measurement', 0.038), ('state', 0.038), ('ips', 0.038), ('gate', 0.037), ('hadamard', 0.037), ('bit', 0.035), ('ventura', 0.035), ('maximal', 0.034), ('operators', 0.034), ('fm', 0.034), ('apacity', 0.032), ('biham', 0.032), ('cnot', 0.032), ('deutsch', 0.032), ('mcom', 0.032), ('meq', 0.032), ('reversible', 0.032), ('arccos', 0.031), ('depicted', 0.031), ('hamming', 0.028), ('retrieving', 0.028), ('probability', 0.028), ('unitary', 0.027), ('partial', 0.026), ('basis', 0.026), ('arctan', 0.025), ('measuring', 0.024), ('mceliece', 0.024), ('mcor', 0.024), ('postulates', 0.024), ('sal', 0.024), ('shigei', 0.024), ('shor', 0.024), ('toffoli', 0.024), ('ev', 0.024), ('martinez', 0.023), ('depicts', 0.022), ('rotation', 0.022), ('register', 0.021), ('unmarked', 0.021), ('tamer', 0.021), ('dist', 0.02), ('qt', 0.019), ('retrieval', 0.019), ('system', 0.019), ('subgroups', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999976 93 jmlr-2012-Quantum Set Intersection and its Application to Associative Memory

Author: Tamer Salman, Yoram Baram

Abstract: We describe a quantum algorithm for computing the intersection of two sets and its application to associative memory. The algorithm is based on a modification of Grover’s quantum search algorithm (Grover, 1996). We present algorithms for pattern retrieval, pattern completion, and pattern correction. We show that the quantum associative memory can store an exponential number of memories and retrieve them in sub-exponential time. We prove that this model has advantages over known classical associative memories as well as previously proposed quantum models. Keywords: associative memory, pattern completion, pattern correction, quantum computation, quantum search

2 0.065244749 99 jmlr-2012-Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise

Author: Sahand Negahban, Martin J. Wainwright

Abstract: We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in weighted Frobenius norm for recovering matrices lying with ℓq -“balls” of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal. Keywords: matrix completion, collaborative filtering, convex optimization

3 0.052443948 91 jmlr-2012-Plug-in Approach to Active Learning

Author: Stanislav Minsker

Abstract: We present a new active learning algorithm based on nonparametric estimators of the regression function. Our investigation provides probabilistic bounds for the rates of convergence of the generalization error achievable by proposed method over a broad class of underlying distributions. We also prove minimax lower bounds which show that the obtained rates are almost tight. Keywords: active learning, selective sampling, model selection, classification, confidence bands

4 0.042757597 52 jmlr-2012-Iterative Reweighted Algorithms for Matrix Rank Minimization

Author: Karthik Mohan, Maryam Fazel

Abstract: The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLSp shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set. Keywords: matrix rank minimization, matrix completion, iterative algorithms, null-space property

5 0.034926396 34 jmlr-2012-Dynamic Policy Programming

Author: Mohammad Gheshlaghi Azar, Vicenç Gómez, Hilbert J. Kappen

Abstract: In this paper, we propose a novel policy iteration method, called dynamic policy programming (DPP), to estimate the optimal policy in the infinite-horizon Markov decision processes. DPP is an incremental algorithm that forces a gradual change in policy update. This allows us to prove finite-iteration and asymptotic ℓ∞ -norm performance-loss bounds in the presence of approximation/estimation error which depend on the average accumulated error as opposed to the standard bounds which are expressed in terms of the supremum of the errors. The dependency on the average error is important in problems with limited number of samples per iteration, for which the average of the errors can be significantly smaller in size than the supremum of the errors. Based on these theoretical results, we prove that a sampling-based variant of DPP (DPP-RL) asymptotically converges to the optimal policy. Finally, we illustrate numerically the applicability of these results on some benchmark problems and compare the performance of the approximate variants of DPP with some existing reinforcement learning (RL) methods. Keywords: approximate dynamic programming, reinforcement learning, Markov decision processes, Monte-Carlo methods, function approximation

6 0.030362917 77 jmlr-2012-Non-Sparse Multiple Kernel Fisher Discriminant Analysis

7 0.029978093 83 jmlr-2012-Online Learning in the Embedded Manifold of Low-rank Matrices

8 0.029947409 73 jmlr-2012-Multi-task Regression using Minimal Penalties

9 0.025548004 17 jmlr-2012-An Active Learning Algorithm for Ranking from Pairwise Preferences with an Almost Optimal Query Complexity

10 0.024640135 110 jmlr-2012-Static Prediction Games for Adversarial Learning Problems

11 0.022331625 15 jmlr-2012-Algebraic Geometric Comparison of Probability Distributions

12 0.021689322 68 jmlr-2012-Minimax Manifold Estimation

13 0.021071341 58 jmlr-2012-Linear Fitted-Q Iteration with Multiple Reward Functions

14 0.020929087 94 jmlr-2012-Query Strategies for Evading Convex-Inducing Classifiers

15 0.020179629 7 jmlr-2012-A Multi-Stage Framework for Dantzig Selector and LASSO

16 0.01970805 66 jmlr-2012-Metric and Kernel Learning Using a Linear Transformation

17 0.019436358 43 jmlr-2012-Fast Approximation of Matrix Coherence and Statistical Leverage

18 0.019369114 2 jmlr-2012-A Comparison of the Lasso and Marginal Regression

19 0.019232428 39 jmlr-2012-Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

20 0.018217804 41 jmlr-2012-Exploration in Relational Domains for Model-based Reinforcement Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.094), (1, 0.024), (2, 0.01), (3, -0.015), (4, -0.023), (5, -0.048), (6, 0.003), (7, -0.091), (8, 0.05), (9, -0.021), (10, -0.111), (11, 0.013), (12, -0.052), (13, 0.009), (14, 0.012), (15, -0.014), (16, -0.01), (17, -0.043), (18, 0.023), (19, -0.152), (20, -0.196), (21, 0.109), (22, 0.014), (23, 0.033), (24, -0.06), (25, -0.027), (26, -0.109), (27, -0.056), (28, 0.03), (29, -0.059), (30, -0.039), (31, -0.105), (32, 0.051), (33, 0.15), (34, -0.09), (35, -0.024), (36, 0.035), (37, 0.05), (38, 0.213), (39, -0.321), (40, 0.073), (41, 0.11), (42, -0.027), (43, 0.079), (44, -0.035), (45, -0.31), (46, -0.002), (47, 0.009), (48, -0.175), (49, -0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97671527 93 jmlr-2012-Quantum Set Intersection and its Application to Associative Memory

Author: Tamer Salman, Yoram Baram

Abstract: We describe a quantum algorithm for computing the intersection of two sets and its application to associative memory. The algorithm is based on a modification of Grover’s quantum search algorithm (Grover, 1996). We present algorithms for pattern retrieval, pattern completion, and pattern correction. We show that the quantum associative memory can store an exponential number of memories and retrieve them in sub-exponential time. We prove that this model has advantages over known classical associative memories as well as previously proposed quantum models. Keywords: associative memory, pattern completion, pattern correction, quantum computation, quantum search

2 0.4861834 52 jmlr-2012-Iterative Reweighted Algorithms for Matrix Rank Minimization

Author: Karthik Mohan, Maryam Fazel

Abstract: The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLSp shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set. Keywords: matrix rank minimization, matrix completion, iterative algorithms, null-space property

3 0.29317322 99 jmlr-2012-Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise

Author: Sahand Negahban, Martin J. Wainwright

Abstract: We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in weighted Frobenius norm for recovering matrices lying with ℓq -“balls” of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal. Keywords: matrix completion, collaborative filtering, convex optimization

4 0.28752762 102 jmlr-2012-Sally: A Tool for Embedding Strings in Vector Spaces

Author: Konrad Rieck, Christian Wressnegger, Alexander Bikadorov

Abstract: Strings and sequences are ubiquitous in many areas of data analysis. However, only a few learning methods can be directly applied to this form of data. We present Sally, a tool for embedding strings in vector spaces that allows for applying a wide range of learning methods to string data. Sally implements a generalized form of the bag-of-words model, where strings are mapped to a vector space that is spanned by a set of string features, such as words or n-grams of words. The implementation of Sally builds on efficient string algorithms and enables processing millions of strings and features. The tool supports several data formats and is capable of interfacing with common learning environments, such as Weka, Shogun, Matlab, or Pylab. Sally has been successfully applied for learning with natural language text, DNA sequences and monitored program behavior. Keywords: string embedding, bag-of-words models, learning with sequential data
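A toy version of the bag-of-n-grams embedding that Sally generalizes can be written directly. This is not Sally's API, just a sketch of the underlying idea: map each string to a sparse count vector over its character n-grams, then compare strings via inner products.

```python
from collections import Counter

def embed(s, n=2):
    """Map a string to a sparse bag-of-n-grams vector (a Counter).
    Sally itself uses efficient string algorithms and supports
    several normalizations; this is the unnormalized count version."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def dot(u, v):
    """Inner product of two sparse vectors, e.g., as a string kernel."""
    return sum(c * v[g] for g, c in u.items() if g in v)

print(embed("abab"))                        # Counter({'ab': 2, 'ba': 1})
print(dot(embed("abab"), embed("baba")))    # 4
```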

5 0.27162367 91 jmlr-2012-Plug-in Approach to Active Learning

Author: Stanislav Minsker

Abstract: We present a new active learning algorithm based on nonparametric estimators of the regression function. Our investigation provides probabilistic bounds for the rates of convergence of the generalization error achievable by the proposed method over a broad class of underlying distributions. We also prove minimax lower bounds which show that the obtained rates are almost tight. Keywords: active learning, selective sampling, model selection, classification, confidence bands

6 0.22354798 69 jmlr-2012-Mixability is Bayes Risk Curvature Relative to Log Loss

7 0.19826616 83 jmlr-2012-Online Learning in the Embedded Manifold of Low-rank Matrices

8 0.19151957 70 jmlr-2012-Multi-Assignment Clustering for Boolean Data

9 0.18703474 34 jmlr-2012-Dynamic Policy Programming

10 0.18653272 17 jmlr-2012-An Active Learning Algorithm for Ranking from Pairwise Preferences with an Almost Optimal Query Complexity

11 0.16813676 73 jmlr-2012-Multi-task Regression using Minimal Penalties

12 0.16689117 30 jmlr-2012-DARWIN: A Framework for Machine Learning and Computer Vision Research and Development

13 0.16392817 110 jmlr-2012-Static Prediction Games for Adversarial Learning Problems

14 0.14229758 19 jmlr-2012-An Introduction to Artificial Prediction Markets for Classification

15 0.14070061 15 jmlr-2012-Algebraic Geometric Comparison of Probability Distributions

16 0.13662447 63 jmlr-2012-Mal-ID: Automatic Malware Detection Using Common Segment Analysis and Meta-Features

17 0.1318856 18 jmlr-2012-An Improved GLMNET for L1-regularized Logistic Regression

18 0.13018803 41 jmlr-2012-Exploration in Relational Domains for Model-based Reinforcement Learning

19 0.12998962 112 jmlr-2012-Structured Sparsity via Alternating Direction Methods

20 0.12811653 58 jmlr-2012-Linear Fitted-Q Iteration with Multiple Reward Functions


similar papers computed by LDA model

LDA topic weights for this paper:

topicId topicWeight

[(21, 0.029), (26, 0.02), (29, 0.033), (35, 0.014), (49, 0.018), (56, 0.016), (57, 0.019), (63, 0.488), (69, 0.02), (75, 0.03), (77, 0.014), (79, 0.018), (92, 0.092), (96, 0.083)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.75307554 93 jmlr-2012-Quantum Set Intersection and its Application to Associative Memory

Author: Tamer Salman, Yoram Baram

Abstract: We describe a quantum algorithm for computing the intersection of two sets and its application to associative memory. The algorithm is based on a modification of Grover’s quantum search algorithm (Grover, 1996). We present algorithms for pattern retrieval, pattern completion, and pattern correction. We show that the quantum associative memory can store an exponential number of memories and retrieve them in sub-exponential time. We prove that this model has advantages over known classical associative memories as well as previously proposed quantum models. Keywords: associative memory, pattern completion, pattern correction, quantum computation, quantum search
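The core construction, an oracle that marks membership in both sets combined with Grover iterations, can be simulated classically on a statevector. The sketch below runs plain Grover search with an intersection oracle; the paper modifies this iteration (e.g., for unknown intersection size), so treat this as background, not the authors' algorithm:

```python
import numpy as np

# Statevector simulation of Grover search where the oracle phase-flips
# exactly the elements lying in BOTH sets: the composition underlying
# quantum set intersection.
N = 16                       # search space size (4 qubits)
A = {1, 3, 5, 7, 9}
B = {3, 7, 8, 12}
marked = A & B               # intersection oracle = AND of membership oracles
M = len(marked)

amp = np.full(N, 1 / np.sqrt(N))              # uniform superposition
iters = int(round(np.pi / 4 * np.sqrt(N / M)))  # ~pi/4 * sqrt(N/M) steps
for _ in range(iters):
    for x in marked:                          # oracle: phase flip on A ∩ B
        amp[x] = -amp[x]
    amp = 2 * amp.mean() - amp                # diffusion: inversion about mean

prob_intersection = sum(amp[x] ** 2 for x in marked)
print(sorted(marked), round(prob_intersection, 3))
```

Measuring the final state returns an element of A ∩ B with high probability (here above 0.9 after only two iterations), which is where the sub-exponential retrieval time comes from.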

2 0.57812059 18 jmlr-2012-An Improved GLMNET for L1-regularized Logistic Regression

Author: Guo-Xun Yuan, Chia-Hua Ho, Chih-Jen Lin

Abstract: Recently, Yuan et al. (2010) conducted a comprehensive comparison on software for L1-regularized classification. They concluded that a carefully designed coordinate descent implementation CDN is the fastest among state-of-the-art solvers. In this paper, we point out that CDN is less competitive on loss functions that are expensive to compute. In particular, CDN for logistic regression is much slower than CDN for SVM because the logistic loss involves expensive exp/log operations. In optimization, Newton methods are known to have fewer iterations although each iteration costs more. Because solving the Newton sub-problem is independent of the loss calculation, this type of methods may surpass CDN under some circumstances. In L1-regularized classification, GLMNET by Friedman et al. is already a Newton-type method, but experiments in Yuan et al. (2010) indicated that the existing GLMNET implementation may face difficulties for some large-scale problems. In this paper, we propose an improved GLMNET to address some theoretical and implementation issues. In particular, as a Newton-type method, GLMNET achieves fast local convergence, but may fail to quickly obtain a useful solution. By a careful design to adjust the effort for each iteration, our method is efficient for both loosely and strictly solving the optimization problem. Experiments demonstrate that our improved GLMNET is more efficient than CDN for L1-regularized logistic regression. Keywords: L1 regularization, linear classification, optimization methods, logistic regression, support vector machines
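The coordinate descent baseline (CDN) discussed in the abstract combines a second-order coordinate update with soft-thresholding. A minimal, line-search-free sketch on synthetic data follows; it is an illustration of the update rule, not the paper's GLMNET implementation (which adds line search, shrinking, and careful caching):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator S(z, t)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def l1_logistic_cd(X, y, lam=0.1, sweeps=50):
    """Minimal CDN-style coordinate descent for
    f(w) = sum_i log(1 + exp(-y_i x_i.w)) + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    z = (X @ w) * y                            # cached margins y_i * x_i.w
    for _ in range(sweeps):
        for j in range(d):
            sig = 1.0 / (1.0 + np.exp(z))      # sigma(-y_i x_i.w)
            g = -(y * X[:, j] * sig).sum()     # partial derivative wrt w_j
            h = (X[:, j] ** 2 * sig * (1 - sig)).sum() + 1e-12
            w_new = soft(w[j] - g / h, lam / h)  # prox-Newton coord. step
            z += y * X[:, j] * (w_new - w[j])  # keep cached margins in sync
            w[j] = w_new
    return w

def objective(X, y, w, lam):
    return np.log1p(np.exp(-y * (X @ w))).sum() + lam * np.abs(w).sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 5))
y = np.sign(X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(80))
w = l1_logistic_cd(X, y, lam=0.5)
print(objective(X, y, w, 0.5))
```

Note that each inner step recomputes the exp-based quantities, which is exactly the per-coordinate cost the paper identifies as CDN's weakness for logistic loss.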

3 0.26383057 64 jmlr-2012-Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Author: Sangkyun Lee, Stephen J. Wright

Abstract: Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. (When an ℓ1 regularizer is used to induce sparsity in the solution, for example, this manifold is defined by the set of nonzero components of the parameter vector.) This paper shows that a regularized dual averaging algorithm can identify this manifold, with high probability, before reaching the solution. This observation motivates an algorithmic strategy in which, once an iterate is suspected of lying on an optimal or near-optimal manifold, we switch to a “local phase” that searches in this manifold, thus converging rapidly to a near-optimal point. Computational results are presented to verify the identification property and to illustrate the effectiveness of this approach. Keywords: regularization, dual averaging, partly smooth manifold, manifold identification
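For the ℓ1 case, the regularized dual averaging update has a closed form that sets every coordinate whose running average gradient stays below λ *exactly* to zero, which is the manifold-identification behavior the abstract describes. A sketch on a toy least-squares stream, with an assumed √t step-size constant γ (the constants are illustrative, not the paper's):

```python
import numpy as np

# l1-regularized dual averaging (RDA) on a streaming least-squares task.
rng = np.random.default_rng(2)
d, lam, gamma = 10, 0.5, 1.0
w_true = np.zeros(d); w_true[:2] = [3.0, -2.0]   # sparse ground truth
w = np.zeros(d)
gbar = np.zeros(d)                               # running average gradient
for t in range(1, 501):
    x = rng.standard_normal(d)
    e = x @ w - (x @ w_true + 0.1 * rng.standard_normal())
    g = e * x                                    # stochastic gradient
    gbar += (g - gbar) / t                       # update the average
    # closed-form l1-RDA step: coordinates with |gbar| <= lam are
    # *exactly* zero -- the identified manifold
    shrink = np.maximum(np.abs(gbar) - lam, 0.0)
    w = -(np.sqrt(t) / gamma) * np.sign(gbar) * shrink
print(np.nonzero(w)[0])
```

Unlike plain stochastic subgradient steps, which leave small nonzero values everywhere, the final iterate here is exactly sparse on the irrelevant coordinates.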

4 0.26359427 73 jmlr-2012-Multi-task Regression using Minimal Penalties

Author: Matthieu Solnon, Sylvain Arlot, Francis Bach

Abstract: In this paper we study the kernel multiple ridge regression framework, which we refer to as multitask regression, using penalization techniques. The theoretical analysis of this problem shows that the key element appearing for an optimal calibration is the covariance matrix of the noise between the different tasks. We present a new algorithm to estimate this covariance matrix, based on the concept of minimal penalty, which was previously used in the single-task regression framework to estimate the variance of the noise. We show, in a non-asymptotic setting and under mild assumptions on the target function, that this estimator converges towards the covariance matrix. Then plugging this estimator into the corresponding ideal penalty leads to an oracle inequality. We illustrate the behavior of our algorithm on synthetic examples. Keywords: multi-task, oracle inequality, learning theory

5 0.26213712 85 jmlr-2012-Optimal Distributed Online Prediction Using Mini-Batches

Author: Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao

Abstract: Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem. Keywords: distributed computing, online learning, stochastic optimization, regret bounds, convex optimization
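The serial-to-distributed conversion the abstract describes amounts to averaging gradients across workers before taking a single update. The sketch below simulates the workers in one process on a least-squares stream; it omits the communication-latency modeling that is central to the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])

def grad(w, x, y):
    """Squared-loss gradient for one example."""
    return (x @ w - y) * x

w = np.zeros(3)
eta, k, shard = 0.05, 4, 8        # 4 workers, 8 examples each -> batch b = 32
for step in range(200):
    X = rng.standard_normal((k * shard, 3))
    y = X @ w_true + 0.01 * rng.standard_normal(k * shard)
    # each "worker" averages gradients on its shard of the mini-batch ...
    worker_g = [np.mean([grad(w, X[i], y[i]) for i in r], axis=0)
                for r in np.array_split(range(k * shard), k)]
    # ... and the coordinator averages the workers, then takes one step
    w -= eta * np.mean(worker_g, axis=0)
print(np.round(w, 2))
```

The update is mathematically identical to serial SGD on batches of size b = 32, which is why the regret analysis of the serial method carries over.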

6 0.26142728 80 jmlr-2012-On Ranking and Generalization Bounds

7 0.26107711 8 jmlr-2012-A Primal-Dual Convergence Analysis of Boosting

8 0.25870156 13 jmlr-2012-Active Learning via Perfect Selective Classification

9 0.25810045 34 jmlr-2012-Dynamic Policy Programming

10 0.25749573 82 jmlr-2012-On the Necessity of Irrelevant Variables

11 0.25733215 103 jmlr-2012-Sampling Methods for the Nyström Method

12 0.25730619 7 jmlr-2012-A Multi-Stage Framework for Dantzig Selector and LASSO

13 0.25707716 71 jmlr-2012-Multi-Instance Learning with Any Hypothesis Class

14 0.25661188 111 jmlr-2012-Structured Sparsity and Generalization

15 0.25656384 86 jmlr-2012-Optimistic Bayesian Sampling in Contextual-Bandit Problems

16 0.2565217 117 jmlr-2012-Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality

17 0.25456274 26 jmlr-2012-Coherence Functions with Applications in Large-Margin Classification Methods

18 0.25410831 15 jmlr-2012-Algebraic Geometric Comparison of Probability Distributions

19 0.25364962 68 jmlr-2012-Minimax Manifold Estimation

20 0.25288653 115 jmlr-2012-Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints