nips nips2012 nips2012-103 knowledge-graph by maker-knowledge-mining

103 nips-2012-Distributed Probabilistic Learning for Camera Networks with Missing Data


Source: pdf

Author: Sejong Yoon, Vladimir Pavlovic

Abstract: Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. However, many problems in wide-area surveillance can benefit from distributed modeling, either because of physical or computational constraints. Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot explicitly deal with missing data. In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. In particular, we show how traditional centralized models, such as probabilistic PCA and missing-data PPCA, can be learned when the data is distributed across a network of sensors. We demonstrate the utility of this approach on the problem of distributed affine structure from motion. Our experiments suggest that the accuracy of the learned probabilistic structure and motion models rivals that of traditional centralized factorization methods while being able to handle challenging situations such as missing or noisy observations. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. [sent-5, score-0.401]

2 However, many problems in wide-area surveillance can benefit from distributed modeling, either because of physical or computational constraints. [sent-6, score-0.16]

3 Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot explicitly deal with missing data. [sent-7, score-0.497]

4 In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. [sent-8, score-0.297]

5 In particular, we show how traditional centralized models, such as probabilistic PCA and missing-data PPCA, can be learned when the data is distributed across a network of sensors. [sent-9, score-0.661]

6 We demonstrate the utility of this approach on the problem of distributed affine structure from motion. [sent-10, score-0.204]

7 Our experiments suggest that the accuracy of the learned probabilistic structure and motion models rivals that of traditional centralized factorization methods while being able to handle challenging situations such as missing or noisy observations. [sent-11, score-0.783]

8 1 Introduction Traditional computer vision algorithms, particularly those that exploit various probabilistic and learning-based approaches, are often formulated in centralized settings. [sent-12, score-0.464]

9 A scene or an object is observed by a single camera with all acquired information centrally processed and stored in a single knowledge base (e. [sent-13, score-0.332]

10 Even if the problem setting relies on multiple cameras, as may be the case in multi-view or structure from motion (SfM) tasks, all collected information is still processed and organized in a centralized fashion. [sent-16, score-0.491]

11 Nevertheless, the overall goal of such distributed device (camera) networks may still be to exchange information and form a consensus interpretation of the visual scene. [sent-18, score-0.304]

12 For instance, even if a camera observes a limited set of object views, one would like its local computational model to reflect a general 3D appearance of the object visible by other cameras in the network. [sent-19, score-0.548]

13 A number of distributed algorithms have been proposed to address problems such as calibration, pose estimation, tracking, and object and activity recognition in large camera networks [1–3]. [sent-20, score-0.497]

14 To deal with the high dimensionality of vision problems, distributed latent space methods such as decentralized variants of PCA have been studied in [4, 5]. [sent-21, score-0.282]

15 A more general framework using distributed least squares [6] based on distributed averaging of sensor fusions [7] was introduced for PCA, triangulation, pose estimation and SfM. [sent-22, score-0.394]

16 Similar approaches have been extended to settings such as the distributed object tracking and activity interpretation [8,9]. [sent-23, score-0.248]

17 Even though methods such as PCA or Kalman filtering have well-known probabilistic counterparts, the aforementioned approaches do not use a probabilistic formulation in the distributed setting. [sent-24, score-0.308]

18 One critical challenge in distributed data analysis is dealing with missing data. [sent-25, score-0.332]

19 In camera networks, different nodes will only have access to a partial set of data features because of varying camera views or object movement. [sent-26, score-0.647]

20 For instance, object points used for SfM may be visible only 1 in some cameras and only in particular object poses. [sent-27, score-0.318]

21 As a consequence, different nodes will be frequently exposed to missing data. [sent-28, score-0.18]

22 However, most current distributed data analysis methods are algebraic in nature and cannot seamlessly handle such missing data. [sent-29, score-0.31]

23 In this work we propose a distributed consensus learning approach for parametric probabilistic models with latent variables that can effectively deal with missing data. [sent-30, score-0.566]

24 Furthermore, we assume that some of the data features may be missing across different nodes. [sent-34, score-0.15]

25 The goal of the network of sensors is to learn a single consensus probabilistic model (e. [sent-35, score-0.214]

26 , 3D object structure) without ever resorting to centralized data pooling or centralized computation. [sent-37, score-0.787]

27 We will demonstrate that this task can be accomplished in a principled manner by local probabilistic models and in-network information sharing, implemented as recursive distributed probabilistic learning. [sent-38, score-0.337]

28 In particular, we focus on probabilistic PCA (PPCA) as a prototypical example and derive its distributed version, the D-PPCA. [sent-39, score-0.223]

29 We then suggest how missing data can be handled in this setting using a missing-data PPCA and apply this model to solve the distributed SfM task in a camera network. [sent-40, score-0.563]

30 Our model is inspired by the consensus-based distributed Expectation-Maximization (EM) algorithm for Gaussian mixtures [10], which we extend to deal with generalized linear Gaussian models [11]. [sent-41, score-0.187]

31 These assumptions are reasonably applicable to many real world camera network settings. [sent-44, score-0.285]

32 In Section 2, we first explain the general distributed probabilistic model. [sent-45, score-0.223]

33 Section 3 shows how DPPCA can be formulated as a special case of the probabilistic framework and propose the means for handling missing data. [sent-46, score-0.213]

34 2 Distributed Probabilistic Model We start our discussion by first considering a general parametric probabilistic model in a centralized setting and then we show how to derive its distributed form. [sent-50, score-0.608]

35 Our model is a joint density defined on (x_n, z_n) with a global parameter θ: (x_n, z_n) ∼ p(x_n, z_n | θ), with p(X, Z | θ) = ∏_n p(x_n, z_n | θ), as depicted in Fig. [sent-56, score-0.25]

36 It is important to point out that each posterior density estimate at point n depends solely on the corresponding measurement x_n and does not depend on any other x_k, k ≠ n. [sent-60, score-0.105]

37 , N_i}, where x_in ∈ R^D is the n-th measurement vector and N_i is the number of samples collected at the i-th node. [sent-70, score-0.207]

38 2 (a) Centralized (b) Distributed (c) Augmented Figure 1: Centralized, distributed and augmented models for probabilistic PCA. [sent-75, score-0.264]

39 Still, the centralized model can be equivalently defined using the set of local parameters, with an additional constraint on their consensus, θ1 = θ2 = · · · = θ|V | . [sent-78, score-0.384]

40 The simple consensus tying can be more conveniently defined using a set of auxiliary variables ρij , one for each edge eij (Fig. [sent-81, score-0.159]

41 This now leads to the final distributed consensus learning formulation, similar to [10]: θ̂ = arg min_θ −log p(X | θ, G), s.t. θ_i = ρ_ij, ρ_ij = θ_j, for all i ∈ V, j ∈ B_i. [sent-83, score-0.279]

42 The last term (modulated by η) is not strictly necessary for consensus but introduces additional regularization. [sent-91, score-0.119]

43 This classic decomposition meta-algorithm (first introduced in the 1970s) can be used to devise a distributed counterpart for any centralized problem that maximizes a global log likelihood function over a connected network. [sent-94, score-0.555]
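As a toy illustration of consensus over a connected network (a sketch, not the paper's full ADMM updates), repeated averaging with neighbors drives every node's local estimate to the global average; the ring size and local values below are illustrative:

```python
import numpy as np

# ring network of 5 nodes, each holding a local scalar estimate theta_i
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
theta = np.array([1.0, 4.0, 2.0, 8.0, 5.0])

for _ in range(200):  # consensus iterations: average self with neighbors
    theta = np.array([
        (theta[i] + sum(theta[j] for j in neighbors[i])) / 3.0
        for i in range(5)
    ])

print(theta)  # all entries approach the global mean 4.0
```

Because the averaging matrix is symmetric and doubly stochastic, the node values converge to the mean of the initial estimates while the network mean is preserved exactly at every step.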

44 3 Distributed Probabilistic PCA (D-PPCA) We now apply the general distributed probabilistic learning explained above to the specific case of distributed PPCA. [sent-95, score-0.383]

45 The traditional centralized formulation of probabilistic PCA (PPCA) [17] assumes a latent variable z_in ∼ N(z_in | 0, I), with the generative relation x_in = W_i z_in + µ_i + ε_i, (3) where ε_i ∼ N(ε_i | 0, a_i^{-1} I) and a_i is the noise precision. [sent-96, score-1.345]
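The generative relation above can be simulated directly; the dimensions, W_i, µ_i, and the precision a_i below are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, N = 6, 3, 500          # observed dim, latent dim, samples at one node

W = rng.normal(size=(D, d))  # loading matrix W_i
mu = rng.normal(size=D)      # mean mu_i
a = 4.0                      # noise precision a_i (noise variance 1/a)

Z = rng.normal(size=(N, d))                           # z_in ~ N(0, I)
eps = rng.normal(scale=np.sqrt(1.0 / a), size=(N, D))  # eps ~ N(0, I/a)
X = Z @ W.T + mu + eps                                # x_in = W z_in + mu + eps
print(X.shape)  # (500, 6)
```

The sample mean of `X` is close to `mu`, and its covariance approaches W W^T + (1/a) I as N grows, which is the marginal covariance implied by the PPCA model.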

46 where L_i = W_i^T W_i + a_i^{-1} I. We can find the optimal parameters W_i, µ_i, a_i as maximum likelihood estimates of the marginal data likelihood, or by applying the EM algorithm to the expected complete-data log likelihood with respect to the posterior density p(Z_i | X_i). [sent-98, score-0.143]

47 1 Distributed Formulation The distributed algorithm developed in Section 2 can be directly applied to this PPCA model. [sent-100, score-0.16]

48 The local parameter estimates at node i are then computed using consensus updates that combine local summary data statistics with information about the model conveyed through neighboring network nodes. [sent-103, score-0.191]

49 Let Θi = {Wi , µi , ai } be the set of parameters for each node i. [sent-105, score-0.159]

50 The global constrained consensus optimization now becomes min_{W_i, µ_i, a_i : i ∈ V} −∑_i F(Θ_i), s.t. W_i = ρ_ij, ρ_ij = W_j; µ_i = φ_ij, φ_ij = µ_j; a_i = ψ_ij, ψ_ij = a_j, for all i ∈ V, j ∈ B_i. [sent-106, score-0.26]

51 Exploiting the posterior density in (4), we compute the expected mean and variance of the latent variables at each node as E[z_in] = L_i^{-1} W_i^T (x_in − µ_i), E[z_in z_in^T] = a_i^{-1} L_i^{-1} + E[z_in] E[z_in]^T. (12) [sent-114, score-0.152]
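These posterior expectations are a few lines of linear algebra; the sketch below assumes L_i = W_i^T W_i + (1/a_i) I as in the PPCA posterior, with illustrative inputs:

```python
import numpy as np

def e_step(X, W, mu, a):
    """Posterior moments of the PPCA latent variables.

    L = W^T W + (1/a) I
    E[z]     = L^{-1} W^T (x - mu)
    E[z z^T] = (1/a) L^{-1} + E[z] E[z]^T
    """
    d = W.shape[1]
    L = W.T @ W + np.eye(d) / a
    Linv = np.linalg.inv(L)           # L is symmetric positive definite
    Ez = (X - mu) @ W @ Linv          # row n holds E[z_n]
    Ezz = Linv / a + np.einsum('ni,nj->nij', Ez, Ez)
    return Ez, Ezz
```

In the high-precision limit (large a) with orthonormal columns in W, E[z] reduces to the orthogonal projection of the centered data onto the latent subspace.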

52 For a_i, we solve the quadratic equation 0 = −(N_i D)/2 + 2η|B_i| (a_i^(t+1))^2 + a_i^(t+1) [2β_i^(t) − η ∑_{j∈B_i} (a_i^(t) + a_j^(t))] + (a_i^(t+1)/2) ∑_{n=1}^{N_i} [ ||x_in − µ_i||^2 − 2 E[z_in]^T W_i^T (x_in − µ_i) + tr(E[z_in z_in^T] W_i^T W_i) ]. (13) [sent-116, score-0.339]

53 The overall distributed EM algorithm for D-PPCA is summarized in Algorithm 1. [sent-117, score-0.16]

54 4 , Algorithm 1 Distributed Probabilistic PCA (D-PPCA) (0) (0) (0) (0) Require: For every node i initialize Wi , µi , ai randomly and set λi for t = 0, 1, 2, . [sent-119, score-0.159]

55 Hence, we adopt D-PPCA as a method to deal with missing data in a distributed consensus setting. [sent-132, score-0.456]

56 Generalization from D-PPCA to missing-data D-PPCA is straightforward and follows [18]. [sent-133, score-0.15]

57 4 D-PPCA for Structure from Motion (SfM) In this section, we consider a specific formulation of the modified distributed probabilistic PCA for application in affine SfM. [sent-141, score-0.223]

58 In SfM, our goal is to estimate the 3D location of N points on a rigid object based on corresponding 2-D points observed from multiple cameras (or views). [sent-142, score-0.278]

59 The dimension D of our measurement matrix is thus twice the number of frames each camera observed. [sent-143, score-0.334]

60 Given a 2D (image coordinate) measurement matrix X of size 2·#frames × #points, the matrix is factorized into a 2·#frames × 3 motion matrix M and the 3 × #points 3D structure matrix S. [sent-145, score-0.337]

61 In the centralized setting this can be easily computed using SVD on X. [sent-146, score-0.364]

62 Equivalently, the estimates of M and S can be found using inference and learning in a centralized PPCA, where M is treated as the PPCA parameter and S is the latent structure. [sent-147, score-0.414]
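The centralized SVD route can be sketched as follows; the synthetic rank-3 data below stands in for centered tracked feature points:

```python
import numpy as np

rng = np.random.default_rng(2)
F, N = 10, 40                          # frames, points
M_true = rng.normal(size=(2 * F, 3))   # motion
S_true = rng.normal(size=(3, N))       # 3D structure
X = M_true @ S_true                    # centered 2F x N measurement matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])          # 2F x 3 motion estimate
S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x N structure estimate

print(np.linalg.norm(X - M @ S))       # ~0: rank-3 factorization recovered
```

Note that M and S are only recovered up to an invertible 3×3 affine ambiguity (X = (M A)(A^{-1} S) for any invertible A); resolving it requires the usual metric upgrade, which this sketch omits.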

63 However, the above defined (2·#frames × #points) data structure of X is not amenable to distribution of different views (cameras, nodes), as considered in Section 3 of D-PPCA. [sent-149, score-0.167]

64 The latent D-PPCA variables will model the unknown and uncertain motion of each camera (and/or object in its view). [sent-158, score-0.441]

65 One should note that we have implicitly assumed, in a standard D-PPCA manner, that each column of Zi is iid and distributed as N (0, I). [sent-160, score-0.186]

66 However, each pair of subsequent Zi columns represents one 3 × 2 affine motion matrix. [sent-161, score-0.103]

67 The reason is that occlusions, the main source of missing data, cannot be treated as a random process. [sent-168, score-0.174]

68 However, as we demonstrate in experiments this assumption does not adversely affect SfM when the number of missing points is within a reasonable range. [sent-171, score-0.221]

69 1 Empirical Convergence Analysis Using synthetic data generated from a Gaussian distribution, we observed that D-PPCA works well regardless of the number of network nodes, topology, or choice of the parameter η, even with missing values in both the MAR and MNAR cases. [sent-175, score-0.229]

70 2 Affine Structure from Motion We now show that the modified D-PPCA can be used as an effective framework for distributed affine SfM. [sent-178, score-0.16]

71 We assume that correspondences across frames and cameras are known. [sent-180, score-0.186]

72 For missing values in the MNAR case, we either used actual occlusions to induce missing points or simulated consistently missing points over several frames. [sent-181, score-0.57]

73 1 Synthetic Data (Cube) We first generated synthetic data with a rotating unit cube and 5 cameras facing the cube in a 3D space, similar to synthetic experiments in [6]. [sent-184, score-0.436]

74 We extracted 8 cube points projected on each camera view every 6◦ , i. [sent-186, score-0.37]
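A rotating-cube measurement of this kind can be simulated as follows (a sketch: it assumes an orthographic camera along the z-axis and rotation about the y-axis, which are illustrative choices rather than the paper's exact setup):

```python
import numpy as np

# 8 unit-cube corner points, centered at the origin
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float) - 0.5

def rot_y(deg):
    """Rotation about the y-axis by `deg` degrees."""
    t = np.radians(deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

# one orthographic projection (keep x, y) every 6 degrees -> 60 frames
frames = [(corners @ rot_y(6 * f).T)[:, :2] for f in range(60)]
X = np.vstack([fr.T for fr in frames])   # 2*60 x 8 measurement matrix
print(X.shape)  # (120, 8)
```

The stacked measurement matrix has rank 3, which is exactly the low-rank structure that the affine factorization (and D-PPCA) exploits.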

75 For all synthetic and real SfM experiments, we picked η = 10 and initialized Wi matrix with feature point coordinates of the first frame visible in the i-th camera with some small noise. [sent-191, score-0.355]

76 To measure the performance, we computed maximum subspace angle between the ground truth 3D coordinates and our estimated 3D structure matrix. [sent-193, score-0.264]
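One way to compute this error measure (a sketch, assuming the comparison is between the row spans of the two 3 × N structure matrices):

```python
import numpy as np

def max_subspace_angle(S1, S2):
    """Largest principal angle (degrees) between span(S1^T) and span(S2^T)."""
    Q1 = np.linalg.qr(S1.T)[0]    # orthonormal basis of each subspace
    Q2 = np.linalg.qr(S2.T)[0]
    sv = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    sv = np.clip(sv, -1.0, 1.0)   # guard against round-off outside [-1, 1]
    return np.degrees(np.arccos(sv.min()))  # smallest sv -> largest angle
```

Two structure matrices spanning the same 3D subspace give an angle of 0°, orthogonal subspaces give 90°; SciPy's `scipy.linalg.subspace_angles` implements the same quantity.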

77 Red circles are camera locations and blue arrows indicate each camera’s facing direction. [sent-212, score-0.28]

78 Green and red crosses in the right plot are outliers for centralized SVD-based SfM and D-PPCA for SfM, respectively. [sent-213, score-0.364]

79 The mean subspace angle tends to be slightly larger than that estimated by the centralized SVD SfM; however, both reside within overlapping confidence intervals. [sent-215, score-0.526]

80 66◦ for 20% missing points averaged over 10 different missing point samples. [sent-217, score-0.343]

81 Intuitively, this is because the missing points in the scene are naturally not random. [sent-219, score-0.213]

82 However, we argue that D-PPCA can still handle missing points given the evidence below. [sent-220, score-0.193]

83 The dataset provides various objects rotating on a turntable under different lighting conditions. [sent-224, score-0.124]

84 The views of most objects were taken every 5◦, which makes it challenging to extract feature points with correspondence across frames. [sent-225, score-0.13]

85 Due to the lack of the ground truth 3D coordinates, we compared the subspace angles between the structure inferred using the traditional centralized SVD-based SfM and the D-PPCA-based SfM. [sent-235, score-0.549]

86 Experimental results indicate the existence of differences between the reconstructions obtained by the centralized factorization approach and those of D-PPCA. [sent-238, score-0.401]

87 Moreover, re-projecting back to the camera coordinate space resulted in close matching with the tracked feature points, as shown in videos provided in supplementary materials. [sent-241, score-0.277]

88 We collected 135 single-object sequences containing image coordinates of points, and we simulated a multi-camera setting by partitioning the frames sequentially and almost equally across 5 nodes; the network was connected using a ring topology. [sent-244, score-0.22]
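The frame-partitioning described above might look like this (a sketch; the frame count is illustrative):

```python
import numpy as np

frames = np.arange(30)            # e.g. 30 tracked frames in a sequence
parts = np.array_split(frames, 5)  # sequential, nearly equal shares per node
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # ring topology

for i, p in enumerate(parts):
    print(f"node {i}: frames {p[0]}-{p[-1]}, neighbors {ring[i]}")
```

`np.array_split` handles counts that do not divide evenly, giving earlier nodes one extra frame, which matches the "almost equally" partitioning.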

89 Again, we computed maximum subspace angle between centralized SVD-based SfM and distributed D-PPCA for SfM. [sent-245, score-0.686]

90 MAR results provide variances over both various initializations and missing value settings. [sent-254, score-0.15]

Object: BallSander, BoxStuff, Rooster, Standing, StorageBin; # Points: 62, 67, 189, 310, 102; # Frames: 30 each. Subspace angle b/w centralized SVD SfM and D-PPCA (degree): Mean 1.2002.

Subspace angle b/w fully observable centralized PPCA SfM and D-PPCA with MAR (degree): Mean 6.0444.

Subspace angle b/w fully observable centralized PPCA SfM and D-PPCA with MNAR (degree): Mean 3.

94 Moreover, more than 53% of all objects yielded a subspace angle below 1◦, 77% of them below 5◦, and more than 94% below 15◦. [sent-296, score-0.226]

95 6 Discussion and Future Work In this work we introduced a general approach for learning parameters of traditional centralized probabilistic models, such as PPCA, in a distributed setting. [sent-301, score-0.629]

96 Our synthetic data experiments showed that the proposed algorithm is robust to choices of initial parameters and, more importantly, is not adversely affected by variations in network size, topology or missing values. [sent-302, score-0.282]

97 In the SfM problems, the algorithm can be effectively used to distribute computation of 3D structure and motion in camera networks, while retaining the probabilistic nature of the original model. [sent-303, score-0.464]

98 In particular, we assume the independence of the affine motion matrix parameters in (15). [sent-305, score-0.103]

99 The assumption is clearly inconsistent with the modeling of motion on the SE(3) manifold. [sent-306, score-0.103]

100 Shape and motion from image streams under orthography: a factorization method. [sent-423, score-0.14]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sfm', 0.517), ('centralized', 0.364), ('zin', 0.307), ('camera', 0.253), ('ppca', 0.182), ('xin', 0.179), ('distributed', 0.16), ('missing', 0.15), ('bi', 0.15), ('wi', 0.143), ('ij', 0.14), ('cameras', 0.133), ('mnar', 0.128), ('consensus', 0.119), ('motion', 0.103), ('ai', 0.099), ('ni', 0.094), ('rames', 0.091), ('angle', 0.09), ('mar', 0.08), ('sensor', 0.074), ('cube', 0.074), ('subspace', 0.072), ('probabilistic', 0.063), ('pca', 0.062), ('node', 0.06), ('object', 0.059), ('zn', 0.057), ('turntable', 0.055), ('frames', 0.053), ('views', 0.052), ('rene', 0.048), ('tron', 0.048), ('synthetic', 0.047), ('roberto', 0.045), ('af', 0.044), ('points', 0.043), ('wj', 0.042), ('traditional', 0.042), ('aj', 0.042), ('augmented', 0.041), ('lagrangian', 0.04), ('eij', 0.04), ('svd', 0.039), ('wireless', 0.038), ('ijk', 0.038), ('vision', 0.037), ('factorization', 0.037), ('forero', 0.037), ('smart', 0.037), ('objects', 0.035), ('rotating', 0.034), ('occlusions', 0.034), ('xn', 0.033), ('zi', 0.033), ('em', 0.033), ('network', 0.032), ('takeo', 0.032), ('cano', 0.032), ('decentralized', 0.032), ('wiesel', 0.032), ('coordinates', 0.031), ('decomposable', 0.031), ('connected', 0.031), ('accomplished', 0.031), ('principal', 0.03), ('zt', 0.03), ('nodes', 0.03), ('tracking', 0.029), ('yielded', 0.029), ('measurement', 0.028), ('adversely', 0.028), ('tomasi', 0.028), ('april', 0.027), ('deal', 0.027), ('facing', 0.027), ('latent', 0.026), ('iid', 0.026), ('networks', 0.025), ('topology', 0.025), ('rutgers', 0.025), ('treated', 0.024), ('visible', 0.024), ('truth', 0.024), ('admm', 0.024), ('tracked', 0.024), ('caltech', 0.024), ('structure', 0.024), ('vladimir', 0.023), ('ground', 0.023), ('density', 0.022), ('dealing', 0.022), ('variance', 0.022), ('posterior', 0.022), ('distribute', 0.021), ('kalman', 0.021), ('parametric', 0.021), ('local', 0.02), ('scene', 0.02), ('utility', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 103 nips-2012-Distributed Probabilistic Learning for Camera Networks with Missing Data

Author: Sejong Yoon, Vladimir Pavlovic

Abstract: Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. However, many problems in wide-area surveillance can benefit from distributed modeling, either because of physical or computational constraints. Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot explicitly deal with missing data. In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. In particular, we show how traditional centralized models, such as probabilistic PCA and missing-data PPCA, can be learned when the data is distributed across a network of sensors. We demonstrate the utility of this approach on the problem of distributed affine structure from motion. Our experiments suggest that the accuracy of the learned probabilistic structure and motion models rivals that of traditional centralized factorization methods while being able to handle challenging situations such as missing or noisy observations. 1

2 0.13832577 40 nips-2012-Analyzing 3D Objects in Cluttered Images

Author: Mohsen Hejrati, Deva Ramanan

Abstract: We present an approach to detecting and analyzing the 3D configuration of objects in real-world images with heavy occlusion and clutter. We focus on the application of finding and analyzing cars. We do so with a two-stage model; the first stage reasons about 2D shape and appearance variation due to within-class variation (station wagons look different than sedans) and changes in viewpoint. Rather than using a view-based model, we describe a compositional representation that models a large number of effective views and shapes using a small number of local view-based templates. We use this model to propose candidate detections and 2D estimates of shape. These estimates are then refined by our second stage, using an explicit 3D model of shape and viewpoint. We use a morphable model to capture 3D within-class variation, and use a weak-perspective camera model to capture viewpoint. We learn all model parameters from 2D annotations. We demonstrate state-of-the-art accuracy for detection, viewpoint estimation, and 3D shape reconstruction on challenging images from the PASCAL VOC 2011 dataset. 1

3 0.11553983 237 nips-2012-Near-optimal Differentially Private Principal Components

Author: Kamalika Chaudhuri, Anand Sarwate, Kaushik Sinha

Abstract: Principal components analysis (PCA) is a standard tool for identifying good lowdimensional approximations to data sets in high dimension. Many current data sets of interest contain private or sensitive information about individuals. Algorithms which operate on such data should be sensitive to the privacy risks in publishing their outputs. Differential privacy is a framework for developing tradeoffs between privacy and the utility of these outputs. In this paper we investigate the theory and empirical performance of differentially private approximations to PCA and propose a new method which explicitly optimizes the utility of the output. We demonstrate that on real data, there is a large performance gap between the existing method and our method. We show that the sample complexity for the two procedures differs in the scaling with the data dimension, and that our method is nearly optimal in terms of this scaling. 1

4 0.11152562 277 nips-2012-Probabilistic Low-Rank Subspace Clustering

Author: S. D. Babacan, Shinichi Nakajima, Minh Do

Abstract: In this paper, we consider the problem of clustering data points into lowdimensional subspaces in the presence of outliers. We pose the problem using a density estimation formulation with an associated generative model. Based on this probability model, we first develop an iterative expectation-maximization (EM) algorithm and then derive its global solution. In addition, we develop two Bayesian methods based on variational Bayesian (VB) approximation, which are capable of automatic dimensionality selection. While the first method is based on an alternating optimization scheme for all unknowns, the second method makes use of recent results in VB matrix factorization leading to fast and effective estimation. Both methods are extended to handle sparse outliers for robustness and can handle missing values. Experimental results suggest that proposed methods are very effective in subspace clustering and identifying outliers. 1

5 0.065572046 209 nips-2012-Max-Margin Structured Output Regression for Spatio-Temporal Action Localization

Author: Du Tran, Junsong Yuan

Abstract: Structured output learning has been successfully applied to object localization, where the mapping between an image and an object bounding box can be well captured. Its extension to action localization in videos, however, is much more challenging, because we need to predict the locations of the action patterns both spatially and temporally, i.e., identifying a sequence of bounding boxes that track the action in video. The problem becomes intractable due to the exponentially large size of the structured video space where actions could occur. We propose a novel structured learning approach for spatio-temporal action localization. The mapping between a video and a spatio-temporal action trajectory is learned. The intractable inference and learning problems are addressed by leveraging an efficient Max-Path search method, thus making it feasible to optimize the model over the whole structured space. Experiments on two challenging benchmark datasets show that our proposed method outperforms the state-of-the-art methods. 1

6 0.064504892 228 nips-2012-Multilabel Classification using Bayesian Compressed Sensing

7 0.0630043 201 nips-2012-Localizing 3D cuboids in single-view images

8 0.060420651 2 nips-2012-3D Social Saliency from Head-mounted Cameras

9 0.059702739 178 nips-2012-Learning Label Trees for Probabilistic Modelling of Implicit Feedback

10 0.059641294 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

11 0.059504744 83 nips-2012-Controlled Recognition Bounds for Visual Learning and Exploration

12 0.056379944 68 nips-2012-Clustering Aggregation as Maximum-Weight Independent Set

13 0.056344897 185 nips-2012-Learning about Canonical Views from Internet Image Collections

14 0.052317616 195 nips-2012-Learning visual motion in recurrent neural networks

15 0.050894465 42 nips-2012-Angular Quantization-based Binary Codes for Fast Similarity Search

16 0.049606908 145 nips-2012-Gradient Weights help Nonparametric Regressors

17 0.04947326 302 nips-2012-Scaling MPE Inference for Constrained Continuous Markov Random Fields with Consensus Optimization

18 0.048355006 246 nips-2012-Nonparametric Max-Margin Matrix Factorization for Collaborative Prediction

19 0.046604734 100 nips-2012-Discriminative Learning of Sum-Product Networks

20 0.045565687 327 nips-2012-Structured Learning of Gaussian Graphical Models


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.16), (1, 0.039), (2, -0.049), (3, -0.024), (4, 0.001), (5, -0.029), (6, -0.004), (7, -0.033), (8, -0.017), (9, -0.005), (10, -0.022), (11, -0.018), (12, -0.001), (13, -0.061), (14, 0.033), (15, 0.128), (16, 0.005), (17, -0.113), (18, 0.027), (19, -0.021), (20, -0.069), (21, 0.018), (22, -0.011), (23, -0.019), (24, 0.051), (25, 0.002), (26, 0.053), (27, 0.036), (28, 0.027), (29, 0.024), (30, -0.074), (31, 0.034), (32, 0.071), (33, 0.032), (34, -0.044), (35, -0.052), (36, 0.022), (37, -0.023), (38, -0.044), (39, 0.012), (40, -0.016), (41, 0.061), (42, -0.009), (43, -0.16), (44, -0.039), (45, 0.049), (46, 0.053), (47, -0.0), (48, 0.015), (49, 0.075)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91765666 103 nips-2012-Distributed Probabilistic Learning for Camera Networks with Missing Data

Author: Sejong Yoon, Vladimir Pavlovic

Abstract: Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. However, many problems in wide-area surveillance can benefit from distributed modeling, either because of physical or computational constraints. Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot explicitly deal with missing data. In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. In particular, we show how traditional centralized models, such as probabilistic PCA and missing-data PPCA, can be learned when the data is distributed across a network of sensors. We demonstrate the utility of this approach on the problem of distributed affine structure from motion. Our experiments suggest that the accuracy of the learned probabilistic structure and motion models rivals that of traditional centralized factorization methods while being able to handle challenging situations such as missing or noisy observations. 1

2 0.60733289 54 nips-2012-Bayesian Probabilistic Co-Subspace Addition

Author: Lei Shi

Abstract: For modeling data matrices, this paper introduces Probabilistic Co-Subspace Addition (PCSA) model by simultaneously capturing the dependent structures among both rows and columns. Briefly, PCSA assumes that each entry of a matrix is generated by the additive combination of the linear mappings of two low-dimensional features, which distribute in the row-wise and column-wise latent subspaces respectively. In consequence, PCSA captures the dependencies among entries intricately, and is able to handle non-Gaussian and heteroscedastic densities. By formulating the posterior updating into the task of solving Sylvester equations, we propose an efficient variational inference algorithm. Furthermore, PCSA is extended to tackling and filling missing values, to adapting model sparseness, and to modelling tensor data. In comparison with several state-of-art methods, experiments demonstrate the effectiveness and efficiency of Bayesian (sparse) PCSA on modeling matrix (tensor) data and filling missing values.

3 0.56382799 40 nips-2012-Analyzing 3D Objects in Cluttered Images

Author: Mohsen Hejrati, Deva Ramanan

Abstract: We present an approach to detecting and analyzing the 3D configuration of objects in real-world images with heavy occlusion and clutter. We focus on the application of finding and analyzing cars. We do so with a two-stage model; the first stage reasons about 2D shape and appearance variation due to within-class variation (station wagons look different than sedans) and changes in viewpoint. Rather than using a view-based model, we describe a compositional representation that models a large number of effective views and shapes using a small number of local view-based templates. We use this model to propose candidate detections and 2D estimates of shape. These estimates are then refined by our second stage, using an explicit 3D model of shape and viewpoint. We use a morphable model to capture 3D within-class variation, and use a weak-perspective camera model to capture viewpoint. We learn all model parameters from 2D annotations. We demonstrate state-of-the-art accuracy for detection, viewpoint estimation, and 3D shape reconstruction on challenging images from the PASCAL VOC 2011 dataset. 1

4 0.56234336 2 nips-2012-3D Social Saliency from Head-mounted Cameras

Author: Hyun S. Park, Eakta Jain, Yaser Sheikh

Abstract: A gaze concurrence is a point in 3D where the gaze directions of two or more people intersect. It is a strong indicator of social saliency because the attention of the participating group is focused on that point. In scenes occupied by large groups of people, multiple concurrences may occur and transition over time. In this paper, we present a method to construct a 3D social saliency field and locate multiple gaze concurrences that occur in a social scene from videos taken by head-mounted cameras. We model the gaze as a cone-shaped distribution emanating from the center of the eyes, capturing the variation of eye-in-head motion. We calibrate the parameters of this distribution by exploiting the fixed relationship between the primary gaze ray and the head-mounted camera pose. The resulting gaze model enables us to build a social saliency field in 3D. We estimate the number and 3D locations of the gaze concurrences via provably convergent mode-seeking in the social saliency field. Our algorithm is applied to reconstruct multiple gaze concurrences in several real world scenes and evaluated quantitatively against motion-captured ground truth.

5 0.53636795 201 nips-2012-Localizing 3D cuboids in single-view images

Author: Jianxiong Xiao, Bryan Russell, Antonio Torralba

Abstract: In this paper we seek to detect rectangular cuboids and localize their corners in uncalibrated single-view images depicting everyday scenes. In contrast to recent approaches that rely on detecting vanishing points of the scene and grouping line segments to form cuboids, we build a discriminative parts-based detector that models the appearance of the cuboid corners and internal edges while enforcing consistency to a 3D cuboid model. Our model copes with different 3D viewpoints and aspect ratios and is able to detect cuboids across many different object categories. We introduce a database of images with cuboid annotations that spans a variety of indoor and outdoor scenes and show qualitative and quantitative results on our collected database. Our model outperforms baseline detectors that use 2D constraints alone on the task of localizing cuboid corners.

6 0.52990782 237 nips-2012-Near-optimal Differentially Private Principal Components

7 0.52340758 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

8 0.52084798 18 nips-2012-A Simple and Practical Algorithm for Differentially Private Data Release

9 0.50867754 1 nips-2012-3D Object Detection and Viewpoint Estimation with a Deformable 3D Cuboid Model

10 0.49933708 277 nips-2012-Probabilistic Low-Rank Subspace Clustering

11 0.49579602 209 nips-2012-Max-Margin Structured Output Regression for Spatio-Temporal Action Localization

12 0.49518186 137 nips-2012-From Deformations to Parts: Motion-based Segmentation of 3D Objects

13 0.48394001 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

14 0.47625294 52 nips-2012-Bayesian Nonparametric Modeling of Suicide Attempts

15 0.46907866 194 nips-2012-Learning to Discover Social Circles in Ego Networks

16 0.46385902 115 nips-2012-Efficient high dimensional maximum entropy modeling via symmetric partition functions

17 0.46182725 125 nips-2012-Factoring nonnegative matrices with linear programs

18 0.45072761 246 nips-2012-Nonparametric Max-Margin Matrix Factorization for Collaborative Prediction

19 0.44331613 346 nips-2012-Topology Constraints in Graphical Models

20 0.43816581 223 nips-2012-Multi-criteria Anomaly Detection using Pareto Depth Analysis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.034), (21, 0.043), (38, 0.111), (39, 0.013), (42, 0.027), (54, 0.021), (55, 0.019), (73, 0.227), (74, 0.076), (76, 0.161), (80, 0.113), (92, 0.052)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83519524 47 nips-2012-Augment-and-Conquer Negative Binomial Processes

Author: Mingyuan Zhou, Lawrence Carin

Abstract: By developing data augmentation methods unique to the negative binomial (NB) distribution, we unite seemingly disjoint count and mixture models under the NB process framework. We develop fundamental properties of the models and derive efficient Gibbs sampling inference. We show that the gamma-NB process can be reduced to the hierarchical Dirichlet process with normalization, highlighting its unique theoretical, structural and computational advantages. A variety of NB processes with distinct sharing mechanisms are constructed and applied to topic modeling, with connections to existing algorithms, showing the importance of inferring both the NB dispersion and probability parameters.
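The NB dispersion and probability parameters mentioned above arise naturally from the standard gamma-Poisson augmentation: a count k is NB(r, p) when k | λ ~ Poisson(λ) and λ ~ Gamma(r, p/(1−p)). A minimal numpy sketch of that construction (parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
r, p = 5.0, 0.3            # NB dispersion and probability parameters
n_samples = 200_000

# Gamma-Poisson augmentation: lam ~ Gamma(shape=r, scale=p/(1-p)),
# then k | lam ~ Poisson(lam) gives k ~ NB(r, p).
lam = rng.gamma(shape=r, scale=p / (1 - p), size=n_samples)
k = rng.poisson(lam)

# NB(r, p) has mean r * p / (1 - p); the sample mean should be close.
sample_mean = k.mean()
```

With 200,000 draws the sample mean lands very near the theoretical value r·p/(1−p) ≈ 2.143.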

same-paper 2 0.82604843 103 nips-2012-Distributed Probabilistic Learning for Camera Networks with Missing Data

Author: Sejong Yoon, Vladimir Pavlovic

Abstract: Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. However, many problems in wide-area surveillance can benefit from distributed modeling, either because of physical or computational constraints. Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot explicitly deal with missing data. In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. In particular, we show how traditional centralized models, such as probabilistic PCA and missing-data PPCA, can be learned when the data is distributed across a network of sensors. We demonstrate the utility of this approach on the problem of distributed affine structure from motion. Our experiments suggest that the accuracy of the learned probabilistic structure and motion models rivals that of traditional centralized factorization methods while being able to handle challenging situations such as missing or noisy observations.
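As a hedged sketch of the centralized building block referenced here (not the paper's distributed or missing-data algorithm), the maximum-likelihood PPCA solution of Tipping and Bishop can be written in closed form from the eigendecomposition of the sample covariance: σ² is the average of the discarded eigenvalues and W = U_q(Λ_q − σ²I)^{1/2}. All names below are illustrative:

```python
import numpy as np

def ppca_closed_form(Y, q):
    """Maximum-likelihood PPCA fit: return loading matrix W (d x q) and
    isotropic noise variance sigma2 from centered data Y (n x d)."""
    Yc = Y - Y.mean(axis=0)
    S = Yc.T @ Yc / len(Y)                       # sample covariance
    eigvals, eigvecs = np.linalg.eigh(S)         # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    sigma2 = eigvals[q:].mean()                  # mean of discarded eigenvalues
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return W, sigma2

# Synthetic check: linear-Gaussian data with known noise std 0.1.
rng = np.random.default_rng(2)
n, d, q = 2000, 6, 2
W_true = rng.standard_normal((d, q))
Y = rng.standard_normal((n, q)) @ W_true.T + 0.1 * rng.standard_normal((n, d))
W, sigma2 = ppca_closed_form(Y, q)
# sigma2 should recover roughly the true noise variance, 0.01
```

The distributed and missing-data variants in the paper replace this one-shot eigendecomposition with iterative (EM-style, consensus-constrained) updates, but the quantities being estimated are the same W and σ².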

3 0.78385168 83 nips-2012-Controlled Recognition Bounds for Visual Learning and Exploration

Author: Vasiliy Karasev, Alessandro Chiuso, Stefano Soatto

Abstract: We describe the tradeoff between the performance in a visual recognition problem and the control authority that the agent can exercise on the sensing process. We focus on the problem of “visual search” of an object in an otherwise known and static scene, propose a measure of control authority, and relate it to the expected risk and its proxy (conditional entropy of the posterior density). We show this analytically, as well as empirically by simulation using the simplest known model that captures the phenomenology of image formation, including scaling and occlusions. We show that a “passive” agent given a training set can provide no guarantees on performance beyond what is afforded by the priors, and that an “omnipotent” agent, capable of infinite control authority, can achieve arbitrarily good performance (asymptotically). In between these limiting cases, the tradeoff can be characterized empirically.

4 0.73111689 168 nips-2012-Kernel Latent SVM for Visual Recognition

Author: Weilong Yang, Yang Wang, Arash Vahdat, Greg Mori

Abstract: Latent SVMs (LSVMs) are a class of powerful tools that have been successfully applied to many applications in computer vision. However, a limitation of LSVMs is that they rely on linear models. For many computer vision tasks, linear models are suboptimal and nonlinear models learned with kernels typically perform much better. Therefore it is desirable to develop the kernel version of LSVM. In this paper, we propose kernel latent SVM (KLSVM) – a new learning framework that combines latent SVMs and kernel methods. We develop an iterative training algorithm to learn the model parameters. We demonstrate the effectiveness of KLSVM using three different applications in visual recognition. Our KLSVM formulation is very general and can be applied to solve a wide range of applications in computer vision and machine learning.

5 0.72943908 197 nips-2012-Learning with Recursive Perceptual Representations

Author: Oriol Vinyals, Yangqing Jia, Li Deng, Trevor Darrell

Abstract: Linear Support Vector Machines (SVMs) have become very popular in vision as part of state-of-the-art object recognition and other classification tasks but require high dimensional feature spaces for good performance. Deep learning methods can find more compact representations but current methods employ multilayer perceptrons that require solving a difficult, non-convex optimization problem. We propose a deep non-linear classifier whose layers are SVMs and which incorporates random projection as its core stacking element. Our method learns layers of linear SVMs recursively transforming the original data manifold through a random projection of the weak prediction computed from each layer. Our method scales as linear SVMs, does not rely on any kernel computations or nonconvex optimization, and exhibits better generalization ability than kernel-based SVMs. This is especially true when the number of training samples is smaller than the dimensionality of data, a common scenario in many real-world applications. The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous (often more complicated) methods on several vision and speech benchmarks.
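A rough numpy-only caricature of the stacking idea described above: each layer makes a weak linear prediction, a random projection of that prediction shifts the input features, and the next layer is trained on the shifted data. A least-squares classifier stands in for the linear SVM here, and every name and constant is hypothetical rather than from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_linear(X, y):
    """Least-squares classifier as a hedged stand-in for a linear SVM."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Toy binary problem with a nonlinear decision boundary.
X = rng.standard_normal((300, 5))
y = np.sign(X[:, 0] * X[:, 1] + 0.1)

feats = X
for _ in range(3):
    w = fit_linear(feats, y)                    # weak prediction at this layer
    scores = predict_linear(feats, w)[:, None]
    P = 0.5 * rng.standard_normal((1, X.shape[1]))
    feats = X + scores @ P                      # shift inputs by a random projection of the scores

w = fit_linear(feats, y)                        # final classifier on the transformed manifold
acc = np.mean(np.sign(predict_linear(feats, w)) == y)
```

The key design point the abstract emphasizes survives even in this caricature: every layer is a cheap linear fit, and the only nonlinearity comes from recursively feeding projected predictions back into the features.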

6 0.72388989 229 nips-2012-Multimodal Learning with Deep Boltzmann Machines

7 0.72356367 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

8 0.72296184 188 nips-2012-Learning from Distributions via Support Measure Machines

9 0.72226894 279 nips-2012-Projection Retrieval for Classification

10 0.72212493 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

11 0.71954256 111 nips-2012-Efficient Sampling for Bipartite Matching Problems

12 0.71915042 232 nips-2012-Multiplicative Forests for Continuous-Time Processes

13 0.71827251 112 nips-2012-Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model

14 0.71802604 48 nips-2012-Augmented-SVM: Automatic space partitioning for combining multiple non-linear dynamics

15 0.71789461 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

16 0.71744996 277 nips-2012-Probabilistic Low-Rank Subspace Clustering

17 0.7174105 274 nips-2012-Priors for Diversity in Generative Latent Variable Models

18 0.71714705 209 nips-2012-Max-Margin Structured Output Regression for Spatio-Temporal Action Localization

19 0.71691799 92 nips-2012-Deep Representations and Codes for Image Auto-Annotation

20 0.71569335 260 nips-2012-Online Sum-Product Computation Over Trees