nips nips2006 nips2006-74 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Su-in Lee, Varun Ganapathi, Daphne Koller
Abstract: Markov networks are commonly used in a wide variety of applications, ranging from computer vision, to natural language, to computational biology. In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. In this paper, we provide a computationally efficient method for learning Markov network structure from data. Our method is based on the use of L1 regularization on the weights of the log-linear model, which has the effect of biasing the model towards solutions where many of the parameters are zero. This formulation converts the Markov network learning problem into a convex optimization problem in a continuous space, which can be solved using efficient gradient methods. A key issue in this setting is the (unavoidable) use of approximate inference, which can lead to errors in the gradient computation when the network structure is dense. Thus, we explore the use of different feature introduction schemes and compare their performance. We provide results for our method on synthetic data, and on two real-world data sets: pixel values in the MNIST data, and genetic sequence variations in the human HapMap data. We show that our L1-based method achieves considerably higher generalization performance than the more standard L2-based method (a Gaussian parameter prior) or pure maximum-likelihood learning. We also show that we can learn MRF network structure at a computational cost that is not much greater than learning parameters alone, demonstrating the existence of a feasible method for this important problem.
Reference: text
sentIndex sentText sentNum sentScore
1 In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. [sent-4, score-0.622]
2 In this paper, we provide a computationally efficient method for learning Markov network structure from data. [sent-5, score-0.299]
3 This formulation converts the Markov network learning problem into a convex optimization problem in a continuous space, which can be solved using efficient gradient methods. [sent-7, score-0.334]
4 A key issue in this setting is the (unavoidable) use of approximate inference, which can lead to errors in the gradient computation when the network structure is dense. [sent-8, score-0.469]
5 Thus, we explore the use of different feature introduction schemes and compare their performance. [sent-9, score-0.229]
6 We provide results for our method on synthetic data, and on two real world data sets: pixel values in the MNIST data, and genetic sequence variations in the human HapMap data. [sent-10, score-0.275]
7 We also show that we can learn MRF network structure at a computational cost that is not much greater than learning parameters alone, demonstrating the existence of a feasible method for this important problem. [sent-12, score-0.375]
8 However, as this modeling framework is used in increasingly complex and less well-understood domains, the problem of selecting from the exponentially large space of possible network structures becomes of great importance. [sent-14, score-0.234]
9 Including all of the possibly relevant interactions in the model generally leads to overfitting, and can also lead to difficulties in running inference over the network. [sent-15, score-0.243]
10 The key difficulty is that the maximum likelihood (ML) parameters of these networks have no analytic closed form; finding these parameters requires an iterative procedure (such as conjugate gradient [15] or BFGS [5]), where each iteration runs inference over the current model. [sent-18, score-0.37]
11 The dominant type of solution to this problem uses greedy local heuristic search, which incrementally modifies the model by adding and possibly deleting features. [sent-21, score-0.229]
12 One approach [6, 14] adds features so as to greedily improve the model likelihood; once a feature is added, it is never removed. [sent-22, score-0.355]
13 As the feature addition step is heuristic and greedy, this can lead to the inclusion of unnecessary features, and thereby to overly complex structures and overfitting. [sent-23, score-0.385]
14 Moreover, in all of the greedy heuristic search methods, the learned network is (at best) a local optimum of a penalized likelihood score. [sent-25, score-0.464]
15 Rather than viewing it as a combinatorial search problem, we embed the structure selection step within the problem of parameter estimation: features that have weight zero (in log-space) are degenerate, and do not induce dependencies between the variables they involve. [sent-27, score-0.388]
16 Dudík et al. (2004) showed that density estimation in log-linear models using L1-regularized likelihood has sample complexity that grows only logarithmically in the number of features of the log-linear model; Ng (2004) showed a similar result for L1-regularized logistic regression. [sent-31, score-0.285]
17 , [25]), and feature selection in log-linear models encoding natural language grammars [19]. [sent-37, score-0.245]
18 A key point is that, for a given L1-based model score and given candidate feature set F, we have a fixed convex optimization problem that admits a unique optimal solution. [sent-40, score-0.259]
19 Due to the properties of the L1 score, in this solution, many features will have weight 0, generally leading to a sparse network structure. [sent-41, score-0.394]
20 Thus, we propose an algorithm schema that gradually introduces features into the model, and lets the L1-regularization scheme eliminate them via the optimization process. [sent-43, score-0.305]
21 We explore the use of different approaches for feature introduction, one based on the gain-based method of Della Pietra, Della Pietra and Lafferty [6] and one on the grafting method of Perkins, Lacker and Theiler [18]. [sent-44, score-0.444]
22 The log-linear model is defined in terms of a set of feature functions fk(Xk), each of which is a function that defines a numerical value for each assignment xk to some subset Xk ⊂ X. [sent-55, score-0.643]
23 Given a set of feature functions F = {fk }, the parameters of the log-linear model are weights θ = {θk : fk ∈ F }. [sent-56, score-0.56]
24 The overall distribution is then defined as: Pθ(x) = (1/Z(θ)) exp(Σ_{fk∈F} θk fk(xk)), where xk is the assignment to Xk within x, and Z(θ) is the partition function that ensures that the distribution is normalized (so that all entries sum to 1). [sent-57, score-0.819]
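To make the definition concrete, here is a minimal Python sketch (our illustration, not the authors' code) that materializes Pθ(x) over a few binary variables by brute-force enumeration of the partition function; the two feature functions and their weights are invented for the example.

```python
import itertools
import math

def log_linear_distribution(n_vars, features, theta):
    """features[k] maps a full 0/1 assignment tuple to a number; theta[k] is
    its weight. Returns a dict mapping each assignment to its probability."""
    scores = {}
    for x in itertools.product((0, 1), repeat=n_vars):
        scores[x] = math.exp(sum(t * f(x) for f, t in zip(features, theta)))
    Z = sum(scores.values())                      # partition function Z(theta)
    return {x: s / Z for x, s in scores.items()}

features = [lambda x: float(x[0] == 1 and x[1] == 1),  # indicator on (X0, X1)
            lambda x: float(x[1] == x[2])]             # agreement of (X1, X2)
P = log_linear_distribution(3, features, theta=[0.5, 1.2])
assert abs(sum(P.values()) - 1.0) < 1e-9               # all entries sum to 1
```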
25 Note that this definition of features encompasses both “standard” features that relate unobserved network variables (e. [sent-58, score-0.572]
26 , the word itself), and structural features that encode the interaction between hidden variables in the model. [sent-62, score-0.221]
27 A log-linear model induces a Markov network over X, where there is an edge between every pair of variables Xi, Xj that appear together in some feature fk(Xk) (Xi, Xj ∈ Xk). [sent-63, score-0.73]
28 The clique potentials are constructed from the log-linear features in the obvious way. [sent-64, score-0.263]
29 Conversely, every Markov network can be encoded as a log-linear model by defining a feature which is an indicator function for every assignment of variables xc to a clique Xc. [sent-65, score-0.566]
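As a small illustration of this converse encoding (our construction, not the paper's code), the features below are the four indicators for a pairwise clique over binary variables; setting each weight to the log of the corresponding potential entry reproduces an arbitrary clique potential up to the global normalization.

```python
# Indicator features for every joint assignment of a binary pair (Xi, Xj).
def pairwise_indicator_features(i, j):
    feats = []
    for a in (0, 1):
        for b in (0, 1):
            # default arguments freeze the loop variables in each lambda
            feats.append(lambda x, i=i, j=j, a=a, b=b:
                         float(x[i] == a and x[j] == b))
    return feats
```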
30 The mapping to Markov networks is useful, as most inference algorithms, such as belief propagation [17, 16], operate on the graph structure of the Markov network. [sent-66, score-0.528]
31 Our goal is to output a log-linear model M over X , which consists of a set of features F and a parameter vector θ that specifies a weight for each fk ∈ F . [sent-72, score-0.531]
32 There is no closed-form solution for the parameters maximizing Eq. (1), but the objective is concave, and can therefore be optimized using numerical optimization procedures such as conjugate gradient [15] or BFGS [5]. [sent-75, score-0.279]
33 The gradient of the log-likelihood is: ∂ℓ(M : D)/∂θk = fk(D) − M · E_{x∼Pθ}[fk(x)]. (2) [sent-76, score-0.475]
34 This expression has a particularly intuitive form: the gradient attempts to make the feature counts in the empirical data equal to their expected counts relative to the learned model. [sent-77, score-0.462]
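A hedged sketch of Eq. (2): for each feature, the gradient is the empirical count fk(D) minus M times the model's expectation of fk. The expectation below is exact, reusing the enumerated distribution from the earlier sketch; the paper instead obtains it from approximate inference when enumeration is infeasible.

```python
def log_likelihood_gradient(data, features, P):
    """data: list of assignment tuples; P: dict assignment -> probability."""
    M = len(data)
    grad = []
    for f in features:
        empirical = sum(f(x) for x in data)             # f_k(D)
        expected = sum(p * f(x) for x, p in P.items())  # E_{x~P_theta}[f_k(x)]
        grad.append(empirical - M * expected)
    return grad
```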
35 Note that, to compute the expected feature counts, we must perform inference relative to the current model. [sent-78, score-0.381]
36 This inference step must be performed at every iteration of the gradient process. [sent-79, score-0.292]
37 We assume that there is a (possibly very large) set of features F, from which we wish to select a subset F ⊆ F for inclusion in the model. [sent-81, score-0.219]
38 This problem is generally solved using a heuristic search over the combinatorial space of possible feature subsets. [sent-82, score-0.376]
39 Specifically, rather than optimizing the log-likelihood itself, we introduce a Laplacian parameter prior for each feature fk, which takes the form P(θk) = (βk/2) · exp(−βk|θk|). [sent-84, score-0.563]
40 This objective is convex, and can be optimized efficiently using methods such as conjugate gradient or BFGS, although care needs to be taken with the discontinuity of the derivative at 0. [sent-89, score-0.24]
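One way to handle that discontinuity, shown as a sketch only (the paper uses conjugate gradient or BFGS, not this toy optimizer): take subgradient ascent steps on the penalized log-likelihood, let a weight leave zero only when its unpenalized gradient exceeds β in magnitude, and clip any step that would cross zero.

```python
def l1_subgradient_step(theta, grad_ll, beta, lr=0.01):
    """theta: current weights; grad_ll: gradient of the (unpenalized)
    log-likelihood; beta: L1 penalty strength; lr: step size."""
    new_theta = []
    for t, g in zip(theta, grad_ll):
        if t != 0:
            step = g - beta * (1.0 if t > 0 else -1.0)  # d(-beta*|t|)/dt
            t_new = t + lr * step
            if t * t_new < 0:                           # clip at the kink
                t_new = 0.0
        elif g > beta:                                  # worth leaving 0 upward
            t_new = lr * (g - beta)
        elif g < -beta:                                 # worth leaving 0 downward
            t_new = lr * (g + beta)
        else:
            t_new = 0.0                                 # stays exactly at 0
        new_theta.append(t_new)
    return new_theta
```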
41 The Gaussian prior induces a regularization term that is quadratic in θk, which penalizes large weights much more than smaller ones. [sent-93, score-0.363]
42 The analysis of Dudík et al. (2004) and Ng (2004) suggests that this form of regularization is effective at identifying relevant features even with a relatively small number of samples. [sent-97, score-0.357]
43 Let F be the set of indicator features over all subsets of variables X′ ⊂ X of cardinality at most c, and δ, ε, B > 0. [sent-104, score-0.257]
44 However, we cannot simply include all of the features in the model in advance, and use only parameter optimization to prune away the irrelevant ones. [sent-116, score-0.221]
45 Recall that computing the gradient requires performing inference in the resulting model. [sent-117, score-0.292]
46 Even approximate inference algorithms, such as belief propagation, tend to degrade as the density of the network increases; for example, BP algorithms are less likely to converge, and the answers they return are typically much less accurate. [sent-119, score-0.596]
47 Thus, our approach also contains a feature introduction component, which gradually selects features to add into the model, allowing the optimization process to search for the optimal values for their parameters. [sent-120, score-0.476]
48 More precisely, our algorithm maintains a set of active features F ⊆ F. [sent-121, score-0.24]
49 An inactive feature fk has its parameter θk set to 0; the parameters of active features are free to be changed when optimizing the objective Eq. [sent-122, score-1.069]
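The overall schema can be sketched as the loop below (our reconstruction, not the paper's code); `score_feature` and `optimize_l1` are hypothetical callables standing in for the introduction heuristics described next and for the convex L1-penalized solver.

```python
def learn_structure(candidates, data, score_feature, optimize_l1, beta):
    active, theta = [], {}
    while True:
        # myopically score every currently inactive candidate feature
        scores = {f: score_feature(f, theta, data)
                  for f in candidates if f not in active}
        if not scores or max(scores.values()) <= beta:  # nothing worth adding
            return active, theta
        best = max(scores, key=scores.get)
        active.append(best)
        theta[best] = 0.0                               # enters at weight zero
        theta = optimize_l1(active, theta, data, beta)  # may re-zero weights
```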
50 In addition to various simple baseline methods, we explore two feature introduction methods, both of which are greedy and myopic, in that they compute some heuristic estimate of the likely benefit to be gained from introducing a single feature into the active set. [sent-124, score-0.634]
51 Then, for each inactive feature f, we compute the partial derivative of the objective Eq. [sent-127, score-0.439]
52 Then, for each inactive feature f, it computes the log-likelihood gain of adding that feature, assuming that we could optimize its feature weight arbitrarily, but that the weights of all other features are held constant. [sent-133, score-0.919]
53 Della Pietra et al. show that the gain is a concave objective that can be computed efficiently using a one-dimensional line search. [sent-136, score-0.323]
54 Our task is to compute not the optimal gain in log-likelihood, but rather the optimal gain of Eq. [sent-138, score-0.292]
55 The change in the objective function for introducing a feature fk is: ΔL1 = θk fk(D) − β|θk| − M log[exp(θk)Pθ(fk) + Pθ(¬fk)], where M is the number of training instances. [sent-142, score-0.636]
56 If we take the derivative of the above function and set it to zero, we also get a closed-form solution: θk = log[ (fk(D) − β sign(θk)) Pθ(¬fk) / ((M − fk(D) + β sign(θk)) Pθ(fk)) ]. [sent-143, score-0.349]
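For a binary feature, Pθ(¬fk) = 1 − Pθ(fk), so the gain can be evaluated from the model marginal Pθ(fk) (obtained from BP in the paper; assumed given here), the empirical count fk(D), M, and β. A sketch, trying both orthants of θk and keeping only a solution consistent with its assumed sign:

```python
import math

def gain_of_feature(count, P_fk, M, beta):
    """count = f_k(D); P_fk = P_theta(f_k = 1); returns max Delta L1 >= 0."""
    best = 0.0                                   # theta_k = 0 gives zero gain
    for sign in (1.0, -1.0):
        num = (count - beta * sign) * (1.0 - P_fk)
        den = (M - count + beta * sign) * P_fk
        if num <= 0.0 or den <= 0.0:
            continue
        t = math.log(num / den)
        if (t > 0) != (sign > 0):                # must lie in its own orthant
            continue
        delta = (t * count - beta * abs(t)
                 - M * math.log(math.exp(t) * P_fk + (1.0 - P_fk)))
        best = max(best, delta)
    return best
```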
57 Both methods are heuristic, in that they consider only the potential gain of adding a single feature in isolation, assuming all other weights are held constant. [sent-144, score-0.412]
58 However, the grafting method is more optimistic, in that it estimates the value of adding a single feature via the initial slope of the objective, whereas the gain-based approach also considers, intuitively, how far one can go in that direction before the gain "peaks out". [sent-145, score-0.609]
59 The gain-based heuristic is, in fact, a lower bound on the actual gain obtained from adding this feature, while allowing the other features to also adapt. [sent-146, score-0.654]
60 Overall, the gain-based heuristic provides a better estimate of the value of adding the feature, albeit at slightly greater computational cost (except in the limited cases where a closed-form solution can be found). [sent-147, score-0.221]
61 If, for every inactive fk ∈ F, the gradient of the log-likelihood satisfies |∂ℓ(M : D)/∂θk| ≤ β, then the gradient of the objective in any direction is non-positive, and the objective is at a stationary point. [sent-150, score-0.981]
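The corresponding check is a one-liner (gradients as computed in the earlier sketch):

```python
def converged(inactive_grads, beta):
    # stationary point: every inactive feature's gradient lies in [-beta, beta]
    return all(abs(g) <= beta for g in inactive_grads)
```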
62 Hence, once the termination condition is achieved, we are guaranteed that we are at a local maximum, regardless of the feature introduction method used. [sent-152, score-0.261]
63 Thus, assuming the algorithm is run until the convergence criterion is satisfied, there is no impact of the feature introduction heuristic on the final outcome, but only on the computational complexity. [sent-153, score-0.298]
64 Finally, constantly evaluating all of the non-active candidate features can be computationally prohibitive when many features are possible. [sent-154, score-0.411]
65 As we discussed above, in most of the networks that are useful models for real applications, exact inference is intractable. [sent-162, score-0.244]
66 While many approximate inference methods have been proposed, one of the most commonly used is the general class of loopy belief propagation (BP) algorithms [17, 16, 24]. [sent-164, score-0.569]
67 The use of an approximate inference algorithm such as BP raises several important points. [sent-165, score-0.245]
68 One important issue relates to the computation of the gradient or the gain for features that are currently inactive. [sent-166, score-0.454]
69 The belief propagation algorithm, when executed on a particular network with a set of active features F, creates a cluster for every subset of variables Xk that appears as the scope of a feature fk(Xk). [sent-167, score-1.193]
70 The output of the BP inference process is a set of marginal probabilities over all of the clusters; thus, it returns the necessary information for computing the expected sufficient statistics in the gradient of the objective (see Eq. [sent-168, score-0.448]
71 However, for features fk(Xk) that are currently inactive, there is no corresponding cluster in the induced Markov network, and hence, in most cases, the necessary marginal probabilities over Xk will not be computed by BP. [sent-170, score-0.224]
72 We can approximate this marginal probability by extracting a subtree of the calibrated loopy graph that contains all of the variables in Xk. [sent-171, score-0.425]
73 At convergence of the BP algorithm, every subtree of the loopy graph is calibrated, in that all of the belief potentials must agree [23]. [sent-172, score-0.3]
74 Thus, we can view the subtree as a calibrated clique tree, and use standard dynamic programming methods over the tree (see, e. [sent-173, score-0.211]
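A sketch of this dynamic programming for the special case where the extracted subtree is a simple path of binary variables (our simplification): the calibrated pairwise beliefs along the path behave like a Markov chain, so the joint marginal over the two endpoints follows by chaining conditionals. Under loopy BP this is, of course, an approximation.

```python
def path_endpoint_marginal(beliefs):
    """beliefs[t][a][b] ~ calibrated belief over (x_t = a, x_{t+1} = b),
    binary variables. Returns the normalized 2x2 joint over (x_1, x_n)."""
    joint = [[beliefs[0][a][b] for b in (0, 1)] for a in (0, 1)]
    for bel in beliefs[1:]:
        # conditional P(x_{t+1} = c | x_t = b) from the calibrated edge belief
        cond = [[bel[b][c] / (bel[b][0] + bel[b][1]) for c in (0, 1)]
                for b in (0, 1)]
        # sum out the shared variable b (one step of tree DP)
        joint = [[sum(joint[a][b] * cond[b][c] for b in (0, 1))
                  for c in (0, 1)] for a in (0, 1)]
    Z = sum(sum(row) for row in joint)
    return [[v / Z for v in row] for row in joint]
```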
75 A second key issue is that the performance of BP algorithms generally degrades significantly as the density of the network increases: they are less likely to converge, and the answers they return are typically much less accurate. [sent-178, score-0.251]
76 Moreover, non-convergence of the inference is more common when the network parameters are allowed to take larger, more extreme values; see, for example, [20, 11, 13] for some theoretical results supporting this empirical phenomenon. [sent-179, score-0.335]
77 First, while different feature introduction schemes achieve the same results when using exact inference, their outcomes can vary greatly when using approximate inference, due to differences in the structure of the networks arising during the learning process. [sent-182, score-0.425]
78 Thus, as we shall see, better feature introduction methods, which introduce the more relevant features first, work much better in practice. [sent-183, score-0.355]
79 6 Results. In our experiments, we focus on binary pairwise Markov networks, where each feature function is an indicator function for a certain assignment to a pair of nodes. [sent-186, score-0.312]
80 We considered three feature induction schemes: (a) Gain: based on the estimated change of gain, (b) Grad: using grafting and (c) Simple: based on pairwise similarity. [sent-192, score-0.422]
81 For each scheme, we varied the regularization method: (a) None: no regularization, (b) L1: L1 regularization and (c) L2: L2 regularization. [sent-194, score-0.264]
82 A network structure with N nodes was generated by treating each possible edge as a Bernoulli random variable and sampling the edges. [sent-199, score-0.301]
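One plausible reading of this generator (the edge probability and weight range below are our own illustrative choices, not values from the paper):

```python
import random

def sample_network(n_nodes, edge_prob, weight_scale=1.0, seed=0):
    """Sample each of the n*(n-1)/2 possible edges as an independent
    Bernoulli variable; give each kept edge a random weight."""
    rng = random.Random(seed)
    edges = {}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < edge_prob:    # Bernoulli trial for edge (i, j)
                edges[(i, j)] = rng.uniform(-weight_scale, weight_scale)
    return edges

net = sample_network(n_nodes=10, edge_prob=0.2)
```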
83 ) We compare our algorithm using L1 regularization against no regularization and L2 regularization in three different ways. [sent-203, score-0.396]
84 As the synthetic network gets denser, L1 increasingly outperforms the other algorithms. [sent-212, score-0.324]
85 Therefore, the feature induction algorithm is more likely to introduce a spurious edge, which L1 may later remove, whereas None and L2 do not. [sent-214, score-0.3]
86 Figure 1(c) shows that the computational cost of learning the structure of the network using Gain-L1 is not much more than that of learning the parameters alone. [sent-216, score-0.264]
87 Moreover, L1 increasingly outperforms other regularization methods as the number of nodes increases. [sent-217, score-0.234]
Figure 2: Results from the experiments on the MNIST dataset.
88 As mentioned earlier, the performance of the regularized algorithm should be insensitive to the feature induction method, assuming inference is exact. [sent-226, score-0.408]
89 However, in practice, because inference is approximate, an induction algorithm that introduces spurious features will affect the quality of inference, and therefore the performance of the algorithm. [sent-227, score-0.512]
90 This effect is substantiated by the poor performance of the Simple-L1 and SimpleL2 methods that introduce features based on mutual information rather than gradient (Grad-) or approximate gain (Gain-). [sent-228, score-0.533]
91 Nevertheless, L1 still outperforms None and L2, regardless of the feature induction algorithm with which it is paired. [sent-229, score-0.242]
92 For each data set, we learned the structure of the Markov network whose nodes are binary-valued SNPs, such that it captures the structure of human genetic variation. [sent-237, score-0.597]
93 We show that the computational cost of our method is not considerably greater than pure parameter estimation for a fixed structure, suggesting that MRF structure learning is a feasible option for many applications. [sent-242, score-0.24]
94 Currently, our method handles each feature in the log-linear model independently, with no attempt to bias the learning towards sparsity in the structure of the induced Markov network. [sent-247, score-0.303]
95 From a theoretical perspective, it would be interesting to show that, at the large sample limit, redundant features are eventually eliminated, so that the learning eventually converges to a minimal structure consistent with the underlying distribution. [sent-249, score-0.277]
96 It would be interesting to explore inference methods whose goal is correctly estimating the direction (even if not the magnitude) of the gradient. [sent-253, score-0.222]
97 Finally, it would be interesting to explore the viability of the learned network structures in realworld applications, both for density estimation and for knowledge discovery, for example, in the context of the HapMap data. [sent-254, score-0.276]
98 Loopy belief propagation for approximate inference: an empirical study. [sent-352, score-0.268]
99 Grafting: Fast, incremental feature selection by gradient descent in function space. [sent-362, score-0.396]
100 Incremental feature selection and l1 regularization for relaxed maximum-entropy modeling. [sent-365, score-0.342]
wordName wordTfidf (topN-words)
[('fk', 0.349), ('hapmap', 0.185), ('cmll', 0.182), ('features', 0.182), ('feature', 0.173), ('network', 0.169), ('inference', 0.166), ('inactive', 0.152), ('markov', 0.146), ('gain', 0.146), ('grafting', 0.145), ('pietra', 0.145), ('loopy', 0.135), ('della', 0.135), ('regularization', 0.132), ('bp', 0.127), ('gradient', 0.126), ('heuristic', 0.125), ('perkins', 0.121), ('objective', 0.114), ('belief', 0.107), ('genetic', 0.103), ('structure', 0.095), ('mrf', 0.092), ('synthetic', 0.09), ('propagation', 0.082), ('mnist', 0.081), ('clique', 0.081), ('grad', 0.079), ('snps', 0.079), ('approximate', 0.079), ('networks', 0.078), ('none', 0.073), ('bfgs', 0.072), ('dudik', 0.072), ('calibrated', 0.072), ('digit', 0.071), ('induction', 0.069), ('assignment', 0.068), ('increasingly', 0.065), ('concave', 0.063), ('uai', 0.061), ('xhidden', 0.061), ('xobserved', 0.061), ('incremental', 0.06), ('active', 0.058), ('spurious', 0.058), ('subtree', 0.058), ('logistic', 0.058), ('explore', 0.056), ('adding', 0.055), ('termination', 0.053), ('xk', 0.053), ('lacker', 0.053), ('learned', 0.051), ('wainwright', 0.05), ('thereby', 0.05), ('greedy', 0.049), ('penalizes', 0.049), ('xh', 0.048), ('alan', 0.048), ('human', 0.047), ('gradually', 0.047), ('candidate', 0.047), ('edges', 0.047), ('instances', 0.046), ('logarithmically', 0.045), ('effective', 0.043), ('generally', 0.043), ('mrfs', 0.042), ('et', 0.042), ('marginal', 0.042), ('relative', 0.042), ('greater', 0.041), ('optimizing', 0.041), ('bach', 0.04), ('inducing', 0.04), ('optimization', 0.039), ('variables', 0.039), ('answers', 0.039), ('donoho', 0.039), ('weights', 0.038), ('introduces', 0.037), ('selection', 0.037), ('inclusion', 0.037), ('nodes', 0.037), ('indicator', 0.036), ('degrade', 0.036), ('counts', 0.035), ('optimum', 0.035), ('pairwise', 0.035), ('method', 0.035), ('search', 0.035), ('language', 0.035), ('feasible', 0.035), ('conversely', 0.034), ('executed', 0.034), ('sound', 0.034), ('interactions', 0.034), ('considerably', 0.034)]
simIndex simValue paperId paperTitle
same-paper 1 0.9999997 74 nips-2006-Efficient Structure Learning of Markov Networks using $L 1$-Regularization
2 0.18176354 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation
Author: Daniel Tarlow, Gal Elidan, Daphne Koller, John C. Duchi
Abstract: In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRF, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks, but also other non-complying potentials. In this paper, we present a new method, called COMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm. COMPOSE uses combinatorial optimization for computing exact maxmarginals for an entire sub-network; these can then be used for inference in the context of the network as a whole. We describe highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks. We present results on both synthetic and real networks encoding correspondence problems between images, which involve both matching constraints and pairwise geometric constraints. We compare to a range of current methods, showing that the ability of COMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.
3 0.15024295 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing
Author: Yuanhao Chen, Long Zhu, Alan L. Yuille
Abstract: We describe an unsupervised method for learning a probabilistic grammar of an object from a set of training examples. Our approach is invariant to the scale and rotation of the objects. We illustrate our approach using thirteen objects from the Caltech 101 database. In addition, we learn the model of a hybrid object class where we do not know the specific object or its position, scale or pose. This is illustrated by learning a hybrid class consisting of faces, motorbikes, and airplanes. The individual objects can be recovered as different aspects of the grammar for the object class. In all cases, we validate our results by learning the probability grammars from training datasets and evaluating them on the test datasets. We compare our method to alternative approaches. The advantages of our approach is the speed of inference (under one second), the parsing of the object, and increased accuracy of performance. Moreover, our approach is very general and can be applied to a large range of objects and structures. 1
4 0.1388462 69 nips-2006-Distributed Inference in Dynamical Systems
Author: Stanislav Funiak, Carlos Guestrin, Rahul Sukthankar, Mark A. Paskin
Abstract: We present a robust distributed algorithm for approximate probabilistic inference in dynamical systems, such as sensor networks and teams of mobile robots. Using assumed density filtering, the network nodes maintain a tractable representation of the belief state in a distributed fashion. At each time step, the nodes coordinate to condition this distribution on the observations made throughout the network, and to advance this estimate to the next time step. In addition, we identify a significant challenge for probabilistic inference in dynamical systems: message losses or network partitions can cause nodes to have inconsistent beliefs about the current state of the system. We address this problem by developing distributed algorithms that guarantee that nodes will reach an informative consistent distribution when communication is re-established. We present a suite of experimental results on real-world sensor data for two real sensor network deployments: one with 25 cameras and another with 54 temperature sensors. 1
5 0.13486966 43 nips-2006-Bayesian Model Scoring in Markov Random Fields
Author: Sridevi Parise, Max Welling
Abstract: Scoring structures of undirected graphical models by means of evaluating the marginal likelihood is very hard. The main reason is the presence of the partition function which is intractable to evaluate, let alone integrate over. We propose to approximate the marginal likelihood by employing two levels of approximation: we assume normality of the posterior (the Laplace approximation) and approximate all remaining intractable quantities using belief propagation and the linear response approximation. This results in a fast procedure for model scoring. Empirically, we find that our procedure has about two orders of magnitude better accuracy than standard BIC methods for small datasets, but deteriorates when the size of the dataset grows. 1
6 0.13225195 9 nips-2006-A Nonparametric Bayesian Method for Inferring Features From Similarity Judgments
7 0.12999989 172 nips-2006-Scalable Discriminative Learning for Natural Language Parsing and Translation
8 0.11944308 130 nips-2006-Max-margin classification of incomplete data
9 0.112133 57 nips-2006-Conditional mean field
10 0.1043349 132 nips-2006-Modeling Dyadic Data with Binary Latent Factors
11 0.10115532 186 nips-2006-Support Vector Machines on a Budget
12 0.10043645 75 nips-2006-Efficient sparse coding algorithms
13 0.097973228 98 nips-2006-Inferring Network Structure from Co-Occurrences
14 0.095579743 35 nips-2006-Approximate inference using planar graph decomposition
15 0.088140741 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
16 0.087772675 65 nips-2006-Denoising and Dimension Reduction in Feature Space
17 0.087605715 20 nips-2006-Active learning for misspecified generalized linear models
18 0.086866893 28 nips-2006-An Efficient Method for Gradient-Based Adaptation of Hyperparameters in SVM Models
19 0.086171679 110 nips-2006-Learning Dense 3D Correspondence
20 0.085571267 195 nips-2006-Training Conditional Random Fields for Maximum Labelwise Accuracy
topicId topicWeight
[(0, -0.323), (1, 0.031), (2, 0.085), (3, -0.103), (4, 0.108), (5, 0.045), (6, 0.047), (7, -0.012), (8, -0.061), (9, -0.135), (10, -0.085), (11, -0.119), (12, -0.002), (13, 0.001), (14, -0.03), (15, -0.004), (16, -0.019), (17, 0.156), (18, 0.081), (19, 0.136), (20, 0.113), (21, 0.097), (22, 0.074), (23, 0.017), (24, -0.06), (25, 0.074), (26, 0.136), (27, -0.041), (28, 0.012), (29, -0.047), (30, 0.008), (31, -0.063), (32, -0.111), (33, 0.052), (34, -0.098), (35, 0.033), (36, -0.043), (37, -0.057), (38, 0.062), (39, -0.062), (40, 0.079), (41, 0.072), (42, 0.054), (43, -0.045), (44, 0.013), (45, 0.101), (46, -0.071), (47, 0.035), (48, -0.054), (49, 0.052)]
simIndex simValue paperId paperTitle
same-paper 1 0.96602261 74 nips-2006-Efficient Structure Learning of Markov Networks using $L 1$-Regularization
2 0.72448874 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation
3 0.68389887 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing
4 0.67122042 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
Author: Thomas Ott, Ruedi Stoop
Abstract: We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation starting from message initialisations that prevent from running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.
5 0.66272086 43 nips-2006-Bayesian Model Scoring in Markov Random Fields
6 0.60381281 139 nips-2006-Multi-dynamic Bayesian Networks
7 0.56951487 69 nips-2006-Distributed Inference in Dynamical Systems
8 0.56281126 130 nips-2006-Max-margin classification of incomplete data
9 0.55970973 172 nips-2006-Scalable Discriminative Learning for Natural Language Parsing and Translation
10 0.53824353 98 nips-2006-Inferring Network Structure from Co-Occurrences
11 0.5308308 9 nips-2006-A Nonparametric Bayesian Method for Inferring Features From Similarity Judgments
12 0.52545023 138 nips-2006-Multi-Task Feature Learning
13 0.52367812 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation
14 0.51994318 202 nips-2006-iLSTD: Eligibility Traces and Convergence Analysis
15 0.51770502 47 nips-2006-Boosting Structured Prediction for Imitation Learning
16 0.50901908 92 nips-2006-High-Dimensional Graphical Model Selection Using $\ell 1$-Regularized Logistic Regression
17 0.50015944 180 nips-2006-Speakers optimize information density through syntactic reduction
18 0.46945566 178 nips-2006-Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation
19 0.46809289 132 nips-2006-Modeling Dyadic Data with Binary Latent Factors
20 0.46676072 110 nips-2006-Learning Dense 3D Correspondence
topicId topicWeight
[(1, 0.088), (3, 0.014), (7, 0.083), (9, 0.03), (22, 0.05), (44, 0.072), (57, 0.522), (65, 0.03), (69, 0.043)]
simIndex simValue paperId paperTitle
1 0.98170167 66 nips-2006-Detecting Humans via Their Pose
Author: Alessandro Bissacco, Ming-Hsuan Yang, Stefano Soatto
Abstract: We consider the problem of detecting humans and classifying their pose from a single image. Specifically, our goal is to devise a statistical model that simultaneously answers two questions: 1) is there a human in the image? and, if so, 2) what is a low-dimensional representation of her pose? We investigate models that can be learned in an unsupervised manner on unlabeled images of human poses, and provide information that can be used to match the pose of a new image to the ones present in the training set. Starting from a set of descriptors recently proposed for human detection, we apply the Latent Dirichlet Allocation framework to model the statistics of these features, and use the resulting model to answer the above questions. We show how our model can efficiently describe the space of images of humans with their pose, by providing an effective representation of poses for tasks such as classification and matching, while performing remarkably well in human/non human decision problems, thus enabling its use for human detection. We validate the model with extensive quantitative experiments and comparisons with other approaches on human detection and pose matching. 1
2 0.97238636 50 nips-2006-Chained Boosting
Author: Christian R. Shelton, Wesley Huie, Kin F. Kan
Abstract: We describe a method to learn to make sequential stopping decisions, such as those made along a processing pipeline. We envision a scenario in which a series of decisions must be made as to whether to continue to process. Further processing costs time and resources, but may add value. Our goal is to create, based on historic data, a series of decision rules (one at each stage in the pipeline) that decide, based on information gathered up to that point, whether to continue processing the part. We demonstrate how our framework encompasses problems from manufacturing to vision processing. We derive a quadratic (in the number of decisions) bound on testing performance and provide empirical results on object detection. 1 Pipelined Decisions In many decision problems, all of the data do not arrive at the same time. Often further data collection can be expensive and we would like to make a decision without accruing the added cost. Consider silicon wafer manufacturing. The wafer is processed in a series of stages. After each stage some tests are performed to judge the quality of the wafer. If the wafer fails (due to flaws), then the processing time, energy, and materials are wasted. So, we would like to detect such a failure as early as possible in the production pipeline. A similar problem can occur in vision processing. Consider the case of object detection in images. Often low-level pixel operations (such as downsampling an image) can be performed in parallel by dedicated hardware (on a video capture board, for example). However, searching each subimage patch of the whole image to test whether it is the object in question takes time that is proportional to the number of pixels. Therefore, we can imagine a image pipeline in which low resolution versions of the whole image are scanned first. Subimages which are extremely unlikely to contain the desired object are rejected and only those which pass are processed at higher resolution. In this way, we save on many pixel operations and can reduce the cost in time to process an image. Even if downsampling is not possible through dedicated hardware, for most object detection schemes, the image must be downsampled to form an image pyramid in order to search for the object at different scales. Therefore, we can run the early stages of such a pipelined detector at the low resolution versions of the image and throw out large regions of the high resolution versions. Most of the processing is spent searching for small faces (at the high resolutions), so this method can save a lot of processing. Such chained decisions also occur if there is a human in the decision process (to ask further clarifying questions in database search, for instance). We propose a framework that can model all of these scenarios and allow such decision rules to be learned from historic data. We give a learning algorithm based on the minimization of the exponential loss and conclude with some experimental results. 1.1 Problem Formulation Let there be s stages to the processing pipeline. We assume that there is a static distribution from which the parts, objects, or units to be processed are drawn. Let p(x, c) represent this distribution in which x is a vector of the features of this unit and c represents the costs associated with this unit. In particular, let xi (1 ≤ i ≤ s) be the set of measurements (features) available to the decision maker immediately following stage i. 
Let c i (1 ≤ i ≤ s) be the cost of rejecting (or stopping the processing of) this unit immediately following stage i. Finally, let c s+1 be the cost of allowing the part to pass through all processing stages. Note that ci need not be monotonic in i. To take our wafer manufacturing example, for wafers that are good we might let c i = i for 1 ≤ i ≤ s, indicating that if a wafer is rejected at any stage, one unit of work has been invested for each stage of processing. For the same good wafers, we might let cs+1 = s − 1000, indicating that the value of a completed wafer is 1000 units and therefore the total cost is the processing cost minus the resulting value. For a flawed wafer, the values might be the same, except for c s+1 which we would set to s, indicating that there is no value for a bad wafer. Note that the costs may be either positive or negative. However, only their relative values are important. Once a part has been drawn from the distribution, there is no way of affecting the “base level” for the value of the part. Therefore, we assume for the remainder of this paper that c i ≥ 0 for 1 ≤ i ≤ s + 1 and that ci = 0 for some value of i (between 1 and s + 1). Our goal is to produce a series of decision rules f i (xi ) for 1 ≤ i ≤ s. We let fi have a range of {0, 1} and let 0 indicate that processing should continue and 1 indicate that processing should be halted. We let f denote the collection of these s decision rules and augment the collection with an additional rule f s+1 which is identically 1 (for ease of notation). The cost of using these rules to halt processing an example is therefore s+1 i−1 ci fi (xi ) L(f (x), c) = i=1 (1 − fj (xj )) . j=1 We would like to find a set of decision rules that minimize E p [L(f (x), c)]. While p(x, c) is not known, we do have a series of samples (training set) D = {(x1 , c1 ), (x2 , c2 ), . . . , (xn , cn )} of n examples drawn from the distribution p. We use superscripts to denote the example index and subscripts to denote the stage index. 2 Boosting Solution For this paper, we consider constructing the rules f i from simpler decision rules, much as in the Adaboost algorithm [1, 2]. We assume that each decision f i (xi ) is computed as the threshold of another function g i (xi ): fi (xi ) = I(gi (xi ) > 0).1 We bound the empirical risk: n n s+1 L(f (xk ), ck ) = i−1 ck I(gi (xk ) > 0) i i k=1 i=1 k=1 n s+1 j=1 k i−1 ck egi (xi ) i ≤ k=1 i=1 I(gj (xk ) ≤ 0) j k n s+1 j=1 k ck egi (xi )− i e−gj (xj ) = Pi−1 j=1 gj (xk ) j . (1) k=1 i=1 Our decision to make all costs positive ensures that the bounds hold. Our decision to make the optimal cost zero helps to ensure that the bound is reasonably tight. As in boosting, we restrict g i (xi ) to take the form mi αi,l hi,l (xi ), the weighted sum of m i subl=1 classifiers, each of which returns either −1 or +1. We will construct these weighted sums incrementally and greedily, adding one additional subclassifier and associated weight at each step. We will pick the stage, weight, and function of the subclassifier in order to make the largest negative change in the exponential bound to the empirical risk. The subclassifiers, h i,l will be drawn from a small class of hypotheses, H. 1 I is the indicator function that equals 1 if the argument is true and 0 otherwise. 1. Initialize gi (x) = 0 for all stages i k 2. Initialize wi = ck for all stages i and examples k. i 3. For each stage i: (a) Calculate targets for each training example, as shown in equation 5. 
(b) Let h be the result of running the base learner on this set. (c) Calculate the corresponding α as per equation 3. (d) Score this classification as per equation 4 ¯ 4. Select the stage ¯ with the best (highest) score. Let h and α be the classifier and ı ¯ weight found at that stage. ¯¯ 5. Let g¯(x) ← g¯(x) + αh(x). ı ı 6. Update the weights (see equation 2): ¯ k k ¯ ı • ∀1 ≤ k ≤ n, multiply w¯ by eαh(x¯ ) . ı ¯ k k ¯ ı • ∀1 ≤ k ≤ n, j > ¯, multiply wj by e−αh(x¯ ) . ı 7. Repeat from step 3 Figure 1: Chained Boosting Algorithm 2.1 Weight Optimization We first assume that the stage at which to add a new subclassifier and the subclassifier to add have ¯ ¯ already been chosen: ¯ and h, respectively. That is, h will become h¯,m¯+1 but we simplify it for ı ı ı ease of expression. Our goal is to find α ¯,m¯+1 which we similarly abbreviate to α. We first define ¯ ı ı k k wi = ck egi (xi )− i Pi−1 j=1 gj (xk ) j (2) + as the weight of example k at stage i, or its current contribution to our risk bound. If we let D h be ¯ − ¯ the set of indexes of the members of D for which h returns +1, and let D h be similarly defined for ¯ ¯ returns −1, we can further define those for which h s+1 + W¯ = ı k w¯ + ı + k∈Dh ¯ s+1 k wi − W¯ = ı − ı k∈Dh i=¯+1 ¯ k w¯ + ı − k∈Dh ¯ k wi . + ı k∈Dh i=¯+1 ¯ + ¯ We interpret W¯ to be the sum of the weights which h will emphasize. That is, it corresponds to ı ¯ ¯ the weights along the path that h selects: For those examples for which h recommends termination, we add the current weight (related to the cost of stopping the processing at this stage). For those ¯ examples for which h recommends continued processing, we add in all future weights (related to all − future costs associated with this example). W¯ can be similarly interpreted to be the weights (or ı ¯ costs) that h recommends skipping. If we optimize the loss bound of Equation 1 with respect to α, we obtain ¯ α= ¯ 1 Wı− log ¯ . + 2 W¯ ı (3) The more weight (cost) that the rule recommends to skip, the higher its α coefficient. 2.2 Full Optimization Using Equation 3 it is straight forward to show that the reduction in Equation 1 due to the addition of this new subclassifier will be + ¯ − ¯ W¯ (1 − eα ) + W¯ (1 − e−α ) . ı ı (4) We know of no efficient method for determining ¯, the stage at which to add a subclassifier, except ı by exhaustive search. However, within a stage, the choice of which subclassifier to use becomes one of maximizing n s+1 k¯ k k z¯ h(x¯ ) , where z¯ = ı ı ı k k wi − w¯ ı (5) i=¯+1 ı k=1 ¯ with respect to h. This is equivalent to an weighted empirical risk minimization where the training 1 2 n k k set is {x¯ , x¯ , . . . , x¯ }. The label of x¯ is the sign of z¯ , and the weight of the same example is the ı ı ı ı ı k magnitude of z¯ . ı 2.3 Algorithm The resulting algorithm is only slightly more complex than standard Adaboost. Instead of a weight vector (one weight for each data example), we now have a weight matrix (one weight for each data example for each stage). We initialize each weight to be the cost associated with halting the corresponding example at the corresponding stage. We start with all g i (x) = 0. The complete algorithm is as in Figure 1. Each time through steps 3 through 7, we complete one “round” and add one additional rule to one stage of the processing. We stop executing this loop when α ≤ 0 or when an iteration counter ¯ exceeds a preset threshold. 
Bottom-Up Variation In situations where information is only gained after each stage (such as in section 4), we can also train the classifiers “bottom-up.” That is, we can start by only adding classifiers to the last stage. Once finished with it, we proceed to the previous stage, and so on. Thus instead of selecting the best stage, i, in each round, we systematically work our way backward through the stages, never revisiting previously set stages. 3 Performance Bounds Using the bounds in [3] we can provide a risk bound for this problem. We let E denote the expectaˆ tion with respect to the true distribution p(x, c) and En denote the empirical average with respect to the n training samples. We first bound the indicator function with a piece-wise linear function, b θ , 1 with a maximum slope of θ : z ,0 . I(z > 0) ≤ bθ (z) = max min 1, 1 + θ We then bound the loss: L(f (x), c) ≤ φ θ (f (x), c) where s+1 φθ (f (x), c) = ci min{bθ (gi (xi )), bθ (−gi−1 (xi−1 )), bθ (−gi−2 (xi−2 )), . . . , bθ (−g1 (x1 ))} i=1 s+1 i ci Bθ (gi (xi ), gi−1 (xi−1 ), . . . , g1 (x1 )) = i=1 We replaced the product of indicator functions with a minimization and then bounded each indicator i with bθ . Bθ is just a more compact presentation of the composition of the function b θ and the minimization. We assume that the weights α at each stage have been scaled to sum to 1. This has no affect on the resulting classifications, but is necessary for the derivation below. Before stating the theorem, for clarity, we state two standard definition: Definition 1. Let p(x) be a probability distribution on the set X and let {x 1 , x2 , . . . , xn } be n independent samples from p(x). Let σ 1 , σ 2 , . . . , σ n be n independent samples from a Rademacher random variable (a binary variable that takes on either +1 or −1 with equal probability). Let F be a class of functions mapping X to . Define the Rademacher Complexity of F to be Rn (F ) = E sup f ∈F n 1 n σ i f (xi ) i=1 1 where the expectation is over the random draws of x through xn and σ 1 through σ n . Definition 2. Let p(x), {x1 , x2 , . . . , xn }, and F be as above. Let g 1 , g 2 , . . . , g n be n independent samples from a Gaussian distribution with mean 0 and variance 1. Analogous to the above definition, define the Gaussian Complexity of G to be Gn (F ) = E sup f ∈F 1 n n g i f (xi ) . i=1 We can now state our theorem, bounding the true risk by a function of the empirical risk: Theorem 3. Let H1 , H2 , . . . , Hs be the sequence of the sets of functions from which the base classifier draws for chain boosting. If H i is closed under negation for all i, all costs are bounded between 0 and 1, and the weights for the classifiers at each stage sum to 1, then with probability 1 − δ, k ˆ E [L(f (x), c)] ≤ En [φθ (f (x), c)] + θ s (i + 1)Gn (Hi ) + i=1 8 ln 2 δ n for some constant k. Proof. Theorem 8 of [3] states ˆ E [L(x, c)] ≤ En (φθ (f (x), c)) + 2Rn (φθ ◦ F ) + 8 ln 2 δ n and therefore we need only bound the R n (φθ ◦ F ) term to demonstrate our theorem. For our case, we have 1 Rn (φθ ◦ F ) = E sup n f ∈F = E sup f ∈F 1 n s+1 ≤ E sup j=1 f ∈F n n σ i φθ (f (xi ), ci ) i=1 s+1 σi i=1 1 n s ci Bθ (gj (xi ), gj−1 (xi ), . . . , g1 (xi )) j j j−1 1 j=1 n s+1 s σ i Bθ (gj (xi ), gj−1 (xi ), . . . , g1 (xi )) = j j−1 1 i=1 s Rn (Bθ ◦ G j ) j=1 where Gi is the space of convex combinations of functions from H i and G i is the cross product of G1 through Gi . The inequality comes from switching the expectation and the maximization and then from dropping the c i (see [4], lemma 5). 
j s s Lemma 4 of [3] states that there exists a k such that R n (Bθ ◦ G j ) ≤ kGn (Bθ ◦ G j ). Theorem 14 j 2 s j s of the same paper allows us to conclude that G n (Bθ ◦ G ) ≤ θ i=1 Gn (Gi ). (Because Bθ is the 1 s minimum over a set of functions with maximum slope of θ , the maximum slope of B θ is also 1 .) θ Theorem 12, part 2 states G n (Gi ) = Gn (Hi ). Taken together, this proves our result. Note that this bound has only quadratic dependence on s, the length of the chain and does not explicitly depend on the number of rounds of boosting (the number of rounds affects φ θ which, in turn, affects the bound). 4 Application We tested our algorithm on the MIT face database [5]. This database contains 19-by-19 gray-scale images of faces and non-faces. The training set has 2429 face images and 4548 non-face images. The testing set has 472 faces and 23573 non-faces. We weighted the training set images so that the ratio of the weight of face images to non-face images matched the ratio in the testing set. 0.4 0.4 CB Global CB Bottom−up SVM Boosting 0.3 0.25 0.2 0.15 0.1 150 0.2 100 0.1 False positive rate 0.3 50 Average number of pixels 0.35 average cost/error per example 200 training cost training error testing cost testing error 0.05 0 100 200 300 400 500 number of rounds 700 1000 (a) 0 0 0.2 0.4 0.6 False negative rate 0.8 0 1 (b) Figure 2: (a) Accuracy verses the number of rounds for a typical run, (b) Error rates and average costs for a variety of cost settings. 4.1 Object Detection as Chained Boosting Our goal is to produce a classifier that can identify non-face images at very low resolutions, thereby allowing for quick processing of large images (as explained later). Most image patches (or subwindows) do not contain faces. We, therefore, built a multi-stage detection system where any early rejection is labeled as a non-face. The first stage looks at image patches of size 3-by-3 (i.e. a lowerresolution version of the 19-by-19 original image). The next stage looks at the same image, but at a resolution of 6-by-6. The third stage considers the image at 12-by-12. We did not present the full 19-by-19 images as the classification did not significantly improve over the 12-by-12 versions. We employ a simple base classifier: the set of all functions that look at a single pixel and predict the class by thresholding the pixel’s value. The total classifier at any stage is a linear combination of these simple classifiers. For a given stage, all of the base classifiers that target a particular pixel are added together producing a complex function of the value of the pixel. Yet, this pixel can only take on a finite number of values (256 in this case). Therefore, we can compile this set of base classifiers into a single look-up function that maps the brightness of the pixel into a real number. The total classifier for the whole stage is merely the sum of these look-up functions. Therefore, the total work necessary to compute the classification at a stage is proportional to the number of pixels in the image considered at that stage, regardless of the number of base classifiers used. We therefore assign a cost to each stage of processing proportional to the number of pixels at that stage. If the image is a face, we add a negative cost (i.e. bonus) if the image is allowed to pass through all of the processing stages (and is therefore “accepted” as a face). If the image is a nonface, we add a bonus if the image is rejected at any stage before completion (i.e. correctly labelled). 
While this dataset contains only segmented image patches, in a real application the classifier would be run on all sub-windows of an image. More importantly, it would also be run at multiple resolutions in order to detect faces of different sizes (or at different distances from the camera). The classifier chain could be run simultaneously at each of these resolutions. To wit, while running the final 12-by-12 stage at one resolution of the image, the 6-by-6 (previous) stage could be run at the same image resolution. This 6-by-6 processing would be the necessary pre-processing step to running the 12-by-12 stage at a higher resolution. As we run our final scan for big faces (at a low resolution), we can already (at the same image resolution) be performing initial tests to throw out portions of the image as not worthy of testing for smaller faces (at a higher resolution). Most of the work of detecting objects must be done at the high resolutions because there are many more overlapping sub-windows. This chained method allows the culling of most of this high-resolution image processing.

4.2 Experiments

For each example, we construct a vector of stage costs as above. We add a constant to this vector to ensure that the minimal element is zero, as per Section 1.1. We scale all vectors by the same amount to ensure that the maximal value is 1. This means that the number of misclassifications is an upper bound on the total cost that the learning algorithm is trying to minimize. There are three flexible quantities in this problem formulation: the cost of a pixel evaluation, the bonus for a correct face classification, and the bonus for a correct non-face classification. Changing these quantities controls the trade-off between false positives and true positives, and between classification error and speed.

Figure 2(a) shows the result of a typical run of the algorithm. As a function of the number of rounds, it plots the cost (that which the algorithm is trying to minimize) and the error (number of misclassified image patches), for both the training and testing sets (where the training set has been reweighted to have the same proportion of faces to non-faces as the testing set). We compared our algorithm's performance to that of support vector machines (SVM) [6] and AdaBoost [1] trained and tested on the highest-resolution, 12-by-12, image patches. We employed SVM-light [7] with a linear kernel. Figure 2(b) compares the error rates for the methods (solid lines, read against the left vertical axis). Note that the error rates are almost identical across the methods. The dashed lines (read against the right vertical axis) show the average number of pixels evaluated (i.e., the total processing cost) for each of the methods. The SVM and AdaBoost algorithms have a constant processing cost. Our method (under either training scheme) produces a lower processing cost for most error rates.

5 Related Work

Cascade detectors for vision processing (see [8] or [9], for example) may appear to be similar to the work in this paper. Especially at first glance, for the area of object detection, they appear almost the same. However, cascade detection and this work (chained detection) are quite different. Cascade detectors are built one at a time: a coarse detector is first trained; the examples which pass that detector are then passed to a finer detector for training, and so on. A series of targets for false-positive rates defines the increasing accuracy of the detector cascade.
By contrast, our chain detectors are trained as an ensemble. This is necessary because of two differences in the problem formulation. First, we assume that the information available at each stage changes. Second, we assume there is an explicit cost model that dictates the cost of proceeding from stage to stage and the cost of rejection (or acceptance) at any particular stage. By contrast, cascade detectors seek to minimize the computational power necessary for a fixed decision. Therefore, the information available to all of the stages is the same, and there are no fixed costs associated with each stage.

The ability to train all of the classifiers at the same time is crucial to good performance in our framework. The first classifier in the chain cannot determine whether it is advantageous to send an example further along unless it knows how the later stages will process the example. Conversely, the later stages cannot construct optimal classifications until they know the distribution of examples that they will see.

Section 4.1 may further confuse the matter. We demonstrated how chained boosting can be used to reduce the computational costs of object detection in images. Cascade detectors are often used for the same purpose. However, the reductions in computational time come from two different sources. In cascade detectors, the time taken to evaluate a given image patch is reduced. In our chained detector formulation, image patches are ignored completely based on analysis of lower-resolution patches in the image pyramid. To further illustrate the difference: cascade detectors can always be used to speed up asymmetric classification tasks (and are often applied to image detection), whereas in Section 4.1 we exploited the fact that object detection in images is typically performed at multiple scales to turn the problem into a pipeline and apply our framework. Cascade detectors address situations in which prior class probabilities are not equal, while chained detectors address situations in which information is gained at a cost. Both are valid (and separate) ways of tackling image processing (and other tasks as well). In many ways, they are complementary approaches.

Classic sequential analysis [10, 11] also addresses the problem of optimal stopping. However, it assumes that the samples are drawn i.i.d. from a (usually known) distribution. Our problem is quite different in that each consecutive sample is drawn from a different (but related) distribution, and our goal is to find a decision rule without producing a generative model. WaldBoost [12] is a boosting algorithm based on this line of work. It builds a series of features and a likelihood-ratio comparison test in order to decide when to stop. For WaldBoost, the available features (information) do not change between stages. Rather, any feature is available for selection at any point in the chain. Again, this is a different problem than the one considered in this paper.

6 Conclusions

We feel this framework of staged decision making is useful in a wide variety of areas. This paper demonstrated how the framework applies to one vision-processing task. Obviously, it also applies to manufacturing pipelines where errors can be introduced at different stages. It should also be applicable to scenarios where information gathering is costly. Our current formulation only allows for early negative detection. In the face detection example above, this means that in order to report "face," the classifier must process each stage, even if the result is assured earlier.
In Figure 2(b), clearly the upper-left corner (100% false positives and 0% false negatives) is reachable with little effort: classify everything positive without looking at any features. We would like to extend this framework to cover such two-sided early decisions. While perhaps not useful in manufacturing (or even face detection, where the interesting part of the ROC curve is far from the upper-left corner), it would make the framework more applicable to information-gathering applications.

Acknowledgements

This research was supported through the grant "Adaptive Decision Making for Silicon Wafer Testing" from Intel Research and UC MICRO.

References

[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, pages 23–37, 1995.
[2] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 2:463–482, 2002.
[4] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 4:839–860, 2003.
[5] MIT. CBCL face database #1, 2000. http://cbcl.mit.edu/cbcl/softwaredatasets/FaceData2.html.
[6] Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for optimal margin classifiers. In COLT, pages 144–152, 1992.
[7] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods — Support Vector Learning. MIT Press, 1999.
[8] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, pages 511–518, 2001.
[9] Jianxin Wu, Matthew D. Mullin, and James M. Rehg. Linear asymmetric classifier for cascade detectors. In ICML, pages 988–995, 2005.
[10] Abraham Wald. Sequential Analysis. Chapman & Hall, Ltd., 1947.
[11] K. S. Fu. Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, 1968.
[12] Jan Šochman and Jiří Matas. WaldBoost — learning for time constrained sequential detection. In CVPR, pages 150–156, 2005.
4 0.96624154 52 nips-2006-Clustering appearance and shape by learning jigsaws
Author: Anitha Kannan, John Winn, Carsten Rother
Abstract: Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes.
5 0.79473263 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf
Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.
6 0.78121054 34 nips-2006-Approximate Correspondences in High Dimensions
7 0.77522808 42 nips-2006-Bayesian Image Super-resolution, Continued
8 0.76314211 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions
9 0.75866497 110 nips-2006-Learning Dense 3D Correspondence
10 0.75669014 47 nips-2006-Boosting Structured Prediction for Imitation Learning
11 0.75238198 185 nips-2006-Subordinate class recognition using relational object models
12 0.73918605 45 nips-2006-Blind Motion Deblurring Using Image Statistics
13 0.7390812 122 nips-2006-Learning to parse images of articulated bodies
14 0.71270585 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields
15 0.71172875 154 nips-2006-Optimal Change-Detection and Spiking Neurons
16 0.70705211 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
17 0.7023105 188 nips-2006-Temporal and Cross-Subject Probabilistic Models for fMRI Prediction Tasks
18 0.69835961 130 nips-2006-Max-margin classification of incomplete data
19 0.69669807 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests
20 0.69183654 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation