iccv iccv2013 iccv2013-388 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Kim Steenstrup Pedersen, Kristoffer Stensbo-Smidt, Andrew Zirm, Christian Igel
Abstract: A texture descriptor based on the shape index and the accompanying curvedness measure is proposed, and it is evaluated for the automated analysis of astronomical image data. A representative sample of images of low-redshift galaxies from the Sloan Digital Sky Survey (SDSS) serves as a testbed. The goal of applying texture descriptors to these data is to extract novel information about galaxies; information which is often lost in more traditional analysis. In this study, we build a regression model for predicting a spectroscopic quantity, the specific star-formation rate (sSFR). As texture features we consider multi-scale gradient orientation histograms as well as multi-scale shape index histograms, which lead to a new descriptor. Our results show that we can successfully predict spectroscopic quantities from the texture in optical multi-band images. We successfully recover the observed bi-modal distribution of galaxies into quiescent and star-forming. The state-of-the-art for predicting the sSFR is a color-based physical model. We significantly improve its accuracy by augmenting the model with texture information. This study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract A texture descriptor based on the shape index and the accompanying curvedness measure is proposed, and it is evaluated for the automated analysis of astronomical image data. [sent-3, score-0.321]
2 A representative sample of images of low-redshift galaxies from the Sloan Digital Sky Survey (SDSS) serves as a testbed. [sent-4, score-0.395]
3 In this study, we build a regression model for predicting a spectroscopic quantity, the specific star-formation rate (sSFR). [sent-6, score-0.226]
4 As texture features we consider multi-scale gradient orientation histograms as well as multi-scale shape index histograms, which lead to a new descriptor. [sent-7, score-0.371]
5 Our results show that we can successfully predict spectroscopic quantities from the texture in optical multi-band images. [sent-8, score-0.251]
6 We successfully recover the observed bi-modal distribution of galaxies into quiescent and star-forming. [sent-9, score-0.452]
7 This study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone. [sent-12, score-0.734]
8 Introduction This paper investigates a novel combination of texture descriptors and applies them for automated analysis of galaxy images. [sent-14, score-0.726]
9 To this end, we suggest using the shape index and the accompanying curvedness measure. [sent-20, score-0.222]
10 The novelty of our approach lies in using localized shape index histograms combined with gradient orientation histograms, both measured at multiple scales. [sent-23, score-0.299]
11 For texture analysis, adding this higher order information will in some applications be necessary in order to improve the discriminative performance of texture representations—and quantifying physical properties of galaxies from imaging data is such an application. [sent-24, score-0.629]
12 It is well known that this structure is correlated with other physical properties of the galaxies such as star-formation rate and dust content (e. [sent-30, score-0.479]
13 Extremely large galaxy surveys from the ground, such as the SDSS, have compiled vast, homogeneous imaging of millions of galaxies. [sent-34, score-0.73]
14 Furthermore, ever since the launch of the Hubble Space Telescope (HST) and the advent of adaptive-optics (AO) on large aperture ground-based telescopes enabling high physical-resolution images of galaxies, the study of galaxy structure and morphology has entered a data-rich era. [sent-35, score-0.735]
15 To use the observed light, for example, to determine the mass of stars or the rate at which new stars are being formed, we need to be able to disentangle the various luminous contributions. [sent-38, score-0.312]
16 We can use models of populations of stars as a function of time to extract the mass and age of the stars in a galaxy. [sent-43, score-0.312]
17 The mass and SFR of a galaxy can therefore be (coarsely) measured by comparing a set of models with the shape of the spectral energy distribution traced by multiple filters. [sent-45, score-0.73]
18 Usually, even if the SFR is determined from emission lines spectroscopically, the mass is determined from the colors of the galaxy in multi-filter imaging. [sent-47, score-0.729]
19 Our current knowledge of galaxies is built on imaging surveys and follow-up spectroscopy. [sent-49, score-0.506]
20 In such resolved galaxy images, it is possible to use the structure as a proxy for internal dynamics that would require more time-consuming spectroscopic data to observe. [sent-55, score-0.803]
21 Indeed, many of the future surveys will be imaging-only surveys that will not allow for spectroscopic follow-up observations of the vast majority of the observed galaxies. [sent-56, score-0.302]
22 Figure 1 illustrates examples of optical images of galaxies from the subset of the SDSS dataset that is used in this paper. [sent-58, score-0.619]
23 Notice that the light profile of these galaxies contains intricate texture. [sent-60, score-0.395]
24 This texture is caused by the distribution of stars and gas in the galaxy—an important cue for determining the sSFR. [sent-61, score-0.266]
25 These range from noise and nearby stars to faint distant galaxies which are poorly resolved in the images. [sent-65, score-0.543]
26 After all, making the leap from single-band or few-band imaging data to spectroscopic quantities is a large jump. [sent-67, score-0.282]
27 We have known since the earliest galaxy surveys that star-forming galaxies have more internal morphological structure due to dust obscuration and star-forming clumps than quiescent (elliptical) galaxies, which tend to be smoother. [sent-69, score-1.104]
28 There has been some prior work on automated analysis of optical images of galaxies [14, 9]. [sent-70, score-0.395]
29 The top row shows well-resolved galaxies and the bottom row shows problematic cases for our analysis. [sent-78, score-0.395]
30 image features which we believe can capture heretofore ignored information contained in resolved galaxy images. [sent-80, score-0.645]
31 Galaxy data The primary data used for the current work are a sample of low-redshift galaxies drawn from the SDSS DR7, see Fig. [sent-87, score-0.395]
32 This sample is defined as all spectroscopic galaxies within the GAMA DR1 region [13] which also have entries in both the MPA-JHU and NYU-VAGC catalogs [11, 8]. [sent-90, score-0.553]
33 The images for our galaxy sample were obtained using the SkyView software provided by NASA/GSFC. [sent-92, score-0.619]
34 For each galaxy position, as defined in the SDSS DR 7, we downloaded a 100 × 100 pixel region (covering 39. [sent-93, score-0.619]
35 We have not applied any additional smoothing to the galaxy pixels at this stage because that is a core part of our following analysis. [sent-103, score-0.645]
36 The last step in the pre-processing of the images was to construct a refined and well-defined pixel segmentation mask indicating which pixels belonged to the galaxy of interest in each frame. [sent-105, score-0.773]
37 Each band image leads to slightly different masks, not only due to noise but also because some galaxy structure is only visible at certain wavelengths. [sent-113, score-0.669]
38 We construct a combined mask by taking the union of the masks for each band. [sent-114, score-0.212]
39 In order to remove some of these outliers from the analysis, we apply a threshold on the ratio of galaxy pixels to pixels in the convex hull of the galaxy mask. [sent-117, score-1.29]
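A minimal sketch in Python of the mask post-processing described above, assuming boolean per-band masks. The union rule follows the text; the use of skimage's convex_hull_image and the threshold value are illustrative assumptions, not the authors' implementation.

import numpy as np
from skimage.morphology import convex_hull_image

def combined_galaxy_mask(band_masks):
    # Union of the boolean segmentation masks from the individual bands.
    mask = np.zeros_like(band_masks[0], dtype=bool)
    for m in band_masks:
        mask |= m
    return mask

def passes_convexity_cut(mask, min_ratio=0.5):
    # Ratio of galaxy pixels to pixels in the convex hull of the mask;
    # fragmented outlier detections give low ratios and are rejected.
    # min_ratio is a hypothetical threshold, not the value used in the paper.
    hull = convex_hull_image(mask)
    return mask.sum() / max(hull.sum(), 1) >= min_ratio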
40 The galaxy images were extracted such that each galaxy is in the image center. [sent-120, score-1.238]
41 We discard images from the analysis if the mask processing leads to a mask not overlapping with the image center. [sent-121, score-0.279]
42 This produces a cleaned galaxy mask with smooth boundaries. [sent-126, score-0.771]
43 Prior to applying the Gaussian filter, we estimate the Petrosian radius of the galaxy by Rp = √(Ngal/π) , (1) [sent-127, score-0.64]
44 where Ngal denotes the number of galaxy pixels in the mask. [sent-128, score-0.645]
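Eq. (1) translates directly into code; a short sketch, assuming the galaxy mask is a boolean array:

import numpy as np

def petrosian_radius_estimate(mask):
    # Eq. (1): radius of a disc with the same area as the galaxy mask,
    # R_p = sqrt(N_gal / pi), with N_gal the number of galaxy pixels.
    n_gal = np.count_nonzero(mask)
    return np.sqrt(n_gal / np.pi)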
45 Furthermore, we estimate a fiducial orientation of the galaxy from the binary mask, which we use to make the gradient orientation feature invariant to rotation. [sent-129, score-0.882]
46 We compute the spatial covariance of the galaxy pixels by Cgal = 1/(Ngal − 1) Σ (xgal − μ)(xgal − μ)T , (2) [sent-131, score-0.645]
47 where xgal ∈ R2 is the position of galaxy pixels in the mask, the sum runs over all galaxy pixels in the mask, and μ = (1/Ngal) Σ xgal (3) [sent-132, score-1.333]
48 is the mean position of all galaxy pixels. [sent-133, score-0.619]
49 We define the fiducial orientation of the galaxy as the eigenvector corresponding to the largest eigenvalue of the covariance matrix. [sent-134, score-0.773]
50 This direction of most spatial variance in galaxy pixels usually corresponds to the major axis of ellipsoidally shaped galaxies. [sent-135, score-0.67]
51 In case of isotropic galaxies this way of picking a fiducial orientation will lead to a random choice, but as there is no natural orientation in this case, this is acceptable. [sent-137, score-0.612]
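A sketch of the fiducial orientation estimate from Eqs. (2)-(3): compute the spatial covariance of the mask pixel positions and take the eigenvector of the largest eigenvalue. Returning that eigenvector as an angle is an assumption about how θ0 is represented.

import numpy as np

def fiducial_orientation(mask):
    # Positions of galaxy pixels (column = x, row = y).
    ys, xs = np.nonzero(mask)
    pos = np.stack([xs, ys], axis=1).astype(float)
    mu = pos.mean(axis=0)                      # Eq. (3)
    d = pos - mu
    cov = d.T @ d / (len(pos) - 1)             # Eq. (2)
    evals, evecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    major = evecs[:, -1]                       # direction of largest variance
    return np.arctan2(major[1], major[0])      # fiducial angle theta_0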
52 We note here that our image analysis does not strongly depend on the precise background level (as long as it does not vary greatly on galaxy scales), the choice of η, or on the absolute flux level in the galaxy pixels themselves. [sent-138, score-1.286]
53 Texture descriptors Discriminative information in textures may appear on several different scales—this is certainly the case for galaxy images—hence using a multi-scale representation appears to be a necessity when performing analysis of texture images. [sent-143, score-0.726]
54 Common descriptors such as SIFT, HoG and DAISY [26, 12, 30] use first order differential structure in the form of gradient orientation histograms as the basis of the descriptor. [sent-152, score-0.268]
55 In smooth scale space derivatives the gradient orientation may be defined as θ(x, y; σ) = tan−1(Ly(x, y; σ)/Lx(x, y; σ)), with Lx and Ly the first order scale space derivatives. [sent-153, score-0.238]
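A minimal sketch of the first-order scale-space measurements using Gaussian derivative filtering from SciPy; the gradient magnitude M = sqrt(Lx² + Ly²) used later as the feature weight is assumed to be the quantity of Eq. (7).

import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_orientation_and_magnitude(image, sigma):
    # First-order Gaussian scale-space derivatives at scale sigma.
    Lx = gaussian_filter(image, sigma, order=(0, 1))   # derivative along x (columns)
    Ly = gaussian_filter(image, sigma, order=(1, 0))   # derivative along y (rows)
    theta = np.arctan2(Ly, Lx)                         # orientation theta(x, y; sigma)
    M = np.hypot(Lx, Ly)                               # gradient magnitude, assumed Eq. (7)
    return theta, M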
56 We also add a representation of the second order differential structure—namely the shape index and the accompanying curvedness measure [21]. [sent-158, score-0.274]
57 The curvedness is defined as C = √((κ1² + κ2²)/2) (9), where κ1 and κ2 are the principal curvatures. The shape index is rotationally invariant by design, contrary to gradient orientation, which depends on the choice of coordinate system. [sent-168, score-0.269]
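A sketch of the second-order measurements: principal curvatures from the Hessian of the Gaussian scale-space, and from them the shape index and curvedness of Koenderink [21]. The sign convention of the shape index (which of ±1 corresponds to caps versus cups) may differ from the paper's exact definition.

import numpy as np
from scipy.ndimage import gaussian_filter

def shape_index_and_curvedness(image, sigma):
    # Second-order Gaussian scale-space derivatives (Hessian entries).
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))
    # Principal curvatures = eigenvalues of the Hessian, k1 >= k2.
    disc = np.sqrt((Lxx - Lyy) ** 2 + 4.0 * Lxy ** 2)
    k1 = 0.5 * (Lxx + Lyy + disc)
    k2 = 0.5 * (Lxx + Lyy - disc)
    # Koenderink shape index and curvedness [21].
    S = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)   # shape index in [-1, 1]
    C = np.sqrt(0.5 * (k1 ** 2 + k2 ** 2))             # curvedness, Eq. (9)
    return S, C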
58 h(fi) = ∫ F(x, y) A(x, y) B(fi, x, y; f) dx dy , (10) where fi denotes the histogram binning variable and will act as the bin center for a specific choice of binning aperture function B. [sent-175, score-0.231]
59 We propose to use the Gaussian function of bin width β as smooth bin aperture function for histograms of the shape index S(x, y; σ): Bβ,σ(Si, x, y; S) = exp(−(S(x, y; σ) − Si)² / (2β²)) . (11) [sent-177, score-0.342]
60 The Gaussian bin aperture is not a good choice for gradient orientation histograms, since it does not incorporate the fact that θ is periodic. [sent-180, score-0.249]
61 We therefore propose to use the following smooth bin aperture function for the gradient orientation θ(x, y; σ): Bβ,σ(θi, x, y; θ) = exp((cos(θ(x, y; σ) − θi) − 1) / β²) . [sent-182, score-0.251]
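The histograms of Eq. (10) with the bin apertures described above can be sketched as follows. The periodic aperture below is a von Mises-style kernel that reduces to the Gaussian aperture of Eq. (11) for small angular differences; its exact form, and the final normalization, are illustrative assumptions rather than the paper's definition.

import numpy as np

def soft_histogram(values, weights, mask, centers, beta, periodic=False):
    # Discretization of Eq. (10): each bin accumulates the feature magnitude
    # F (weights) over the galaxy mask A, smoothed by a bin aperture B of
    # width beta around each bin center.
    v = values[mask]
    w = weights[mask]
    hist = np.empty(len(centers))
    for i, c in enumerate(centers):
        d = v - c
        if periodic:
            # Assumed periodic aperture for orientations; behaves like the
            # Gaussian aperture of Eq. (11) for small d.
            b = np.exp((np.cos(d) - 1.0) / beta ** 2)
        else:
            b = np.exp(-d ** 2 / (2.0 * beta ** 2))    # Gaussian aperture, Eq. (11)
        hist[i] = np.sum(w * b)
    total = hist.sum()
    return hist / total if total > 0 else hist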
62 As feature magnitude F for shape index we will use the curvedness measure C from (9) and for the gradient orientation we will use the gradient magnitude M from (7). [sent-186, score-0.417]
63 We propose to construct texture features by combining histograms of gradient orientation with histograms of shape index and to measure these histograms at different scales σ. [sent-189, score-0.517]
64 As a concrete discretization of this representation we choose an equidistant binning in the histograms and fix the number of bins to 8 for gradient orientation and to 9 for shape index histogram features. [sent-190, score-0.362]
65 For our specific application to galaxy images we set θ0 in the gradient orientation feature to be the fiducial orientation of the galaxy as defined in § 2. [sent-193, score-1.52]
66 Furthermore, we choose the spatial aperture identical to the galaxy mask as outlined in § 2. [sent-195, score-0.766]
67 This localizes the feature to include contributions from only galaxy pixels. [sent-196, score-0.641]
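Putting the pieces together, a sketch of the per-galaxy descriptor: per scale, an 8-bin orientation histogram measured relative to the fiducial orientation θ0 and a 9-bin shape index histogram, both restricted to the galaxy mask. It reuses soft_histogram and the derivative sketches above; the bin widths and exact bin centers are illustrative assumptions.

import numpy as np

def galaxy_descriptor(image, mask, theta0, sigmas, beta_theta=0.4, beta_si=0.12):
    # Concatenate, over all scales, the gradient orientation histogram
    # (8 bins, weighted by gradient magnitude, measured relative to theta0)
    # and the shape index histogram (9 bins, weighted by curvedness).
    theta_centers = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    si_centers = np.linspace(-1.0, 1.0, 9)
    parts = []
    for sigma in sigmas:
        theta, M = gradient_orientation_and_magnitude(image, sigma)
        S, C = shape_index_and_curvedness(image, sigma)
        parts.append(soft_histogram(theta - theta0, M, mask,
                                    theta_centers, beta_theta, periodic=True))
        parts.append(soft_histogram(S, C, mask, si_centers, beta_si))
    return np.concatenate(parts)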
68 Selected scales should cover the range of characteristic scales for the particular galaxy image. [sent-203, score-0.703]
69 We approximate the effective outer scale for a particular galaxy image with the Petrosian radius (1). [sent-211, score-0.715]
70 For isotropic galaxies this will be a good estimate; however, for elongated ellipsoidal galaxies it will be a poor over-estimate. [sent-212, score-0.815]
71 The heuristic ensures a distance of at least one σo from the galaxy to the image boundary. [sent-217, score-0.619]
72 This definition of the outer scale will measure the geometry at galaxy scale. [sent-218, score-0.694]
73 A fraction of 0.2 of the outer scale is a good value, which focuses the descriptor on the range of scales where relevant structure occurs in galaxy images. [sent-222, score-0.726]
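A sketch of the per-image scale selection: the outer scale is approximated by the Petrosian radius, reduced if needed so the galaxy stays about one σo from the image boundary (a rough reading of the heuristic), and the inner scale is a fraction of the outer scale. The geometric spacing and the default of 8 levels are choices made for illustration.

import numpy as np

def scale_levels(r_petrosian, image_size=100, n_levels=8, fraction=0.2):
    # Outer scale sigma_o: Petrosian radius, capped by the distance from a
    # centered galaxy of that radius to the image boundary.
    sigma_outer = min(r_petrosian, max(image_size / 2.0 - r_petrosian, 1.0))
    sigma_inner = fraction * sigma_outer
    # Geometrically spaced scale levels between the inner and outer scale.
    return np.geomspace(sigma_inner, sigma_outer, n_levels)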
74 sSFR Prediction Experiments We use regression to predict the specific star-formation rate (sSFR) from combinations of the texture descriptors outlined above. [sent-226, score-0.213]
75 We consider different models and feature combinations to predict the sSFR value for each galaxy image. [sent-228, score-0.619]
76 As input features, we consider gradient orientation (GO) and shape index (SI) features as well as their combination (referred to as All). [sent-251, score-0.247]
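A minimal sketch of the prediction experiment, assuming a matrix X of descriptors (GO, SI, or All, from one band or the concatenated gri bands) and a vector y of spectroscopic sSFR values. Plain least squares stands in for the paper's linear predictor, and scikit-learn's cross-validation utilities provide the RMSE estimate; the number of folds is an illustrative choice.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def linear_ssfr_rmse(X, y, folds=5):
    # Cross-validated RMSE of a linear predictor mapping texture descriptors
    # to sSFR values.
    scores = cross_val_score(LinearRegression(), X, y, cv=folds,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean(), scores.std()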
77 Plot of RMSE (error bars indicate 1 standard deviation of the CV error) of Linear gri (SI) across 8 scale levels for the four masks. [sent-258, score-0.224]
78 Notice that for masks 1–2 the curve has a dip, indicating that an optimal scale exists for single scale features. [sent-259, score-0.217]
79 We use 4 different masks of decreasing size, with mask 4 being the smallest. [sent-260, score-0.256]
80 The number of galaxy images that pass all inclusion criteria outlined in § 3 for all masks can be found in Table 1. [sent-261, score-0.722]
81 Instead we need to include the shape index feature or use the shape index feature alone. [sent-270, score-0.236]
82 We only include results for the Linear gri predictor, but the tendency is the same for the single bands and the MLP predictor. [sent-271, score-0.288]
83 The results on the second order features gri (2nd) are comparable to the (all) and (SI) results for mask 1 but with an increased variance, and for masks 2–3 these features are inferior to the shape index (SI) results. [sent-273, score-0.554]
84 Figure 2 shows the RMSE of the linear regressor based on shape index (SI) features using single scale levels applied to the combined gri features. [sent-275, score-0.4]
85 Plot of the distributions of predicted sSFR values for different predictors and the ground truth for mask 1, using the gri and shape index (SI) features. [sent-277, score-0.492]
86 Due to the scale range selection procedure (§ 3), for each image the exact scales used at each scale level will vary as a function of the galaxy size. [sent-279, score-0.656]
87 The reason for the generally poor results on mask 4 is that these masks tend to include only the galaxy nucleus, which usually appears as a bright saturated blob of light. [sent-282, score-0.852]
88 Figure 3 shows histograms of the spectroscopic sSFR values together with the results of the predictors Linear gri (SI), MLP gri (SI), and MLP-AM gri (SI). [sent-286, score-0.904]
89 All predictors but the linear one are able to recover the two known classes of star-forming and quiescent galaxies, seen as the two modes in the histograms. [sent-287, score-0.474]
90 Conclusions We propose to combine gradient orientation and shape index histograms measured at several scales to describe image texture. [sent-292, score-0.341]
91 The descriptor introduced in this paper is tuned towards the specific application, predicting the specific star-formation rate (sSFR) from galaxy images, by confining the descriptor to include only information from the galaxy pixel mask. [sent-518, score-1.383]
92 Based on the mask we fix the outer scale used in the scale-space as well as the dominating orientation used in the gradient orientation histogram. [sent-519, score-0.415]
93 The success of the shape index feature can be explained by realizing that what distinguishes a quiescent galaxy from a star-forming one is the distribution of stars, gas, and dust. [sent-526, score-0.794]
94 For our current efforts, the primary difference will be the absence of the spectroscopic ground truth for current and future galaxy surveys. [sent-534, score-0.777]
95 Many of the largest planned surveys are indeed imaging-only and while some spectroscopic follow-up will be done, it will be impossible to obtain complete spectroscopic coverage of the more numerous (and often fainter) galaxies being imaged. [sent-535, score-0.783]
96 Against this background, this study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone. [sent-536, score-0.695]
97 We expect that this mapping of galaxy appearance and properties will prove extremely useful when applied to future large scale imaging-only surveys such as the Large Synoptic Survey Telescope (LSST). [sent-537, score-0.728]
98 Acknowledgements The authors thank the SDSS [2] and GAMA [1] for making the galaxy data available and gratefully acknowledge support from The Danish Council for Independent Research (FNU 12-125149). [sent-538, score-0.619]
99 The physical properties of star-forming galaxies in the low-redshift Universe. [sent-614, score-0.446]
100 The ages and metallicities of galaxies in the local universe. [sent-646, score-0.395]
wordName wordTfidf (topN-words)
[('galaxy', 0.619), ('galaxies', 0.395), ('ssfr', 0.316), ('gri', 0.224), ('spectroscopic', 0.158), ('mlp', 0.153), ('sdss', 0.129), ('mask', 0.128), ('stars', 0.122), ('mnras', 0.101), ('petrosian', 0.086), ('masks', 0.084), ('orientation', 0.083), ('curvedness', 0.076), ('index', 0.075), ('texture', 0.072), ('surveys', 0.072), ('gas', 0.072), ('mass', 0.068), ('bands', 0.064), ('charlot', 0.057), ('gama', 0.057), ('quiescent', 0.057), ('histograms', 0.052), ('differential', 0.052), ('fiducial', 0.051), ('physical', 0.051), ('bin', 0.05), ('band', 0.05), ('derivatives', 0.048), ('aperture', 0.048), ('sfr', 0.047), ('si', 0.047), ('rmse', 0.046), ('gradient', 0.046), ('cv', 0.045), ('brinchmann', 0.043), ('galactic', 0.043), ('igel', 0.043), ('sextractor', 0.043), ('xgal', 0.043), ('shape', 0.043), ('emission', 0.042), ('scales', 0.042), ('daisy', 0.041), ('morphology', 0.039), ('imaging', 0.039), ('outer', 0.038), ('scale', 0.037), ('wavelengths', 0.035), ('koenderink', 0.035), ('spectroscopy', 0.035), ('descriptors', 0.035), ('histogram', 0.034), ('dust', 0.033), ('intensity', 0.03), ('binning', 0.029), ('blanton', 0.029), ('diagnostics', 0.029), ('gallazzi', 0.029), ('spectroscopically', 0.029), ('telescope', 0.029), ('telescopes', 0.029), ('accompanying', 0.028), ('predicting', 0.027), ('descriptor', 0.027), ('pixels', 0.026), ('formation', 0.026), ('resolved', 0.026), ('quantification', 0.025), ('ellipsoidal', 0.025), ('survey', 0.024), ('magnitude', 0.024), ('smooth', 0.024), ('catalog', 0.024), ('notice', 0.024), ('discard', 0.023), ('choice', 0.022), ('filters', 0.022), ('regression', 0.022), ('dip', 0.022), ('localizes', 0.022), ('wavelength', 0.022), ('gal', 0.022), ('predictors', 0.022), ('predictor', 0.022), ('quantities', 0.021), ('radius', 0.021), ('emit', 0.021), ('orderless', 0.021), ('bright', 0.021), ('regressor', 0.021), ('pooling', 0.02), ('eigenvector', 0.02), ('star', 0.02), ('roi', 0.02), ('specific', 0.019), ('outlined', 0.019), ('gaussian', 0.019), ('mg', 0.019)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000006 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis
Author: Kim Steenstrup Pedersen, Kristoffer Stensbo-Smidt, Andrew Zirm, Christian Igel
Abstract: A texture descriptor based on the shape index and the accompanying curvedness measure is proposed, and it is evaluated for the automated analysis of astronomical image data. A representative sample of images of low-redshift galaxies from the Sloan Digital Sky Survey (SDSS) serves as a testbed. The goal of applying texture descriptors to these data is to extract novel information about galaxies; information which is often lost in more traditional analysis. In this study, we build a regression model for predicting a spectroscopic quantity, the specific star-formation rate (sSFR). As texture features we consider multi-scale gradient orientation histograms as well as multi-scale shape index histograms, which lead to a new descriptor. Our results show that we can successfully predict spectroscopic quantities from the texture in optical multi-band images. We successfully recover the observed bi-modal distribution of galaxies into quiescent and star-forming. The state-of-the-art for predicting the sSFR is a color-based physical model. We significantly improve its accuracy by augmenting the model with texture information. This study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone.
2 0.070378251 377 iccv-2013-Segmentation Driven Object Detection with Fisher Vectors
Author: Ramazan Gokberk Cinbis, Jakob Verbeek, Cordelia Schmid
Abstract: We present an object detection system based on the Fisher vector (FV) image representation computed over SIFT and color descriptors. For computational and storage efficiency, we use a recent segmentation-based method to generate class-independent object detection hypotheses, in combination with data compression techniques. Our main contribution is a method to produce tentative object segmentation masks to suppress background clutter in the features. Re-weighting the local image features based on these masks is shown to improve object detection significantly. We also exploit contextual features in the form of a full-image FV descriptor, and an inter-category rescoring mechanism. Our experiments on the PASCAL VOC 2007 and 2010 datasets show that our detector improves over the current state-of-the-art detection results.
3 0.048501723 111 iccv-2013-Detecting Dynamic Objects with Multi-view Background Subtraction
Author: Raúl Díaz, Sam Hallman, Charless C. Fowlkes
Abstract: The confluence of robust algorithms for structure from motion along with high-coverage mapping and imaging of the world around us suggests that it will soon be feasible to accurately estimate camera pose for a large class of photographs taken in outdoor, urban environments. In this paper, we investigate how such information can be used to improve the detection of dynamic objects such as pedestrians and cars. First, we show that when rough camera location is known, we can utilize detectors that have been trained with a scene-specific background model in order to improve detection accuracy. Second, when precise camera pose is available, dense matching to a database of existing images using multi-view stereo provides a way to eliminate static backgrounds such as building facades, akin to background-subtraction often used in video analysis. We evaluate these ideas using a dataset of tourist photos with estimated camera pose. For template-based pedestrian detection, we achieve a 50 percent boost in average precision over baseline.
4 0.046946749 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies
Author: Karteek Alahari, Guillaume Seguin, Josef Sivic, Ivan Laptev
Abstract: We seek to obtain a pixel-wise segmentation and pose estimation of multiple people in a stereoscopic video. This involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detection, pose estimation, as well as colour, motion, and disparity cues. Our new model explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from feature-length movies “StreetDance 3D” and “Pina”. The dataset contains 2727 realistic stereo pairs and includes annotation of human poses, person bounding boxes, and pixel-wise segmentations for hundreds of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset from (Sheasby et al. ACCV 2012).
5 0.041477576 104 iccv-2013-Decomposing Bag of Words Histograms
Author: Ankit Gandhi, Karteek Alahari, C.V. Jawahar
Abstract: We aim to decompose a global histogram representation of an image into histograms of its associated objects and regions. This task is formulated as an optimization problem, given a set of linear classifiers, which can effectively discriminate the object categories present in the image. Our decomposition bypasses harder problems associated with accurately localizing and segmenting objects. We evaluate our method on a wide variety of composite histograms, and also compare it with MRF-based solutions. In addition to merely measuring the accuracy of decomposition, we also show the utility of the estimated object and background histograms for the task of image classification on the PASCAL VOC 2007 dataset.
6 0.041060179 404 iccv-2013-Structured Forests for Fast Edge Detection
7 0.039308671 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
8 0.038664624 169 iccv-2013-Fine-Grained Categorization by Alignments
9 0.037048656 379 iccv-2013-Semantic Segmentation without Annotating Segments
10 0.036453258 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes
11 0.036149308 186 iccv-2013-GrabCut in One Cut
12 0.036119327 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint
13 0.035176486 112 iccv-2013-Detecting Irregular Curvilinear Structures in Gray Scale and Color Imagery Using Multi-directional Oriented Flux
14 0.034485534 447 iccv-2013-Volumetric Semantic Segmentation Using Pyramid Context Features
15 0.033752218 57 iccv-2013-BOLD Features to Detect Texture-less Objects
16 0.033137377 308 iccv-2013-Parsing IKEA Objects: Fine Pose Estimation
17 0.033037204 320 iccv-2013-Pose-Configurable Generic Tracking of Elongated Objects
18 0.032679312 288 iccv-2013-Nested Shape Descriptors
19 0.032280724 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
20 0.031930719 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones
topicId topicWeight
[(0, 0.097), (1, -0.022), (2, -0.009), (3, -0.016), (4, 0.01), (5, 0.008), (6, -0.002), (7, -0.005), (8, -0.019), (9, -0.027), (10, 0.016), (11, -0.003), (12, 0.012), (13, -0.008), (14, 0.005), (15, -0.019), (16, 0.014), (17, -0.007), (18, 0.011), (19, 0.004), (20, 0.032), (21, 0.022), (22, -0.013), (23, 0.014), (24, -0.043), (25, 0.06), (26, 0.01), (27, 0.006), (28, 0.008), (29, 0.024), (30, 0.002), (31, -0.011), (32, -0.009), (33, 0.035), (34, -0.022), (35, 0.029), (36, 0.007), (37, -0.015), (38, 0.019), (39, 0.045), (40, -0.027), (41, 0.023), (42, -0.005), (43, 0.002), (44, -0.011), (45, -0.042), (46, 0.004), (47, -0.017), (48, -0.083), (49, 0.019)]
simIndex simValue paperId paperTitle
same-paper 1 0.88362062 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis
Author: Kim Steenstrup Pedersen, Kristoffer Stensbo-Smidt, Andrew Zirm, Christian Igel
Abstract: A texture descriptor based on the shape index and the accompanying curvedness measure is proposed, and it is evaluated for the automated analysis of astronomical image data. A representative sample of images of low-redshift galaxies from the Sloan Digital Sky Survey (SDSS) serves as a testbed. The goal of applying texture descriptors to these data is to extract novel information about galaxies; information which is often lost in more traditional analysis. In this study, we build a regression model for predicting a spectroscopic quantity, the specific star-formation rate (sSFR). As texture features we consider multi-scale gradient orientation histograms as well as multi-scale shape index histograms, which lead to a new descriptor. Our results show that we can successfully predict spectroscopic quantities from the texture in optical multi-band images. We successfully recover the observed bi-modal distribution of galaxies into quiescent and star-forming. The state-of-the-art for predicting the sSFR is a color-based physical model. We significantly improve its accuracy by augmenting the model with texture information. This study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone.
Author: Engin Türetken, Carlos Becker, Przemyslaw Glowacki, Fethallah Benmansour, Pascal Fua
Abstract: We propose a new approach to detecting irregular curvilinear structures in noisy image stacks. In contrast to earlier approaches that rely on circular models of the cross-sections, ours allows for the arbitrarily-shaped ones that are prevalent in biological imagery. This is achieved by maximizing the image gradient flux along multiple directions and radii, instead of only two with a unique radius as is usually done. This yields a more complex optimization problem for which we propose a computationally efficient solution. We demonstrate the effectiveness of our approach on a wide range of challenging gray scale and color datasets and show that it outperforms existing techniques, especially on very irregular structures.
3 0.69836175 447 iccv-2013-Volumetric Semantic Segmentation Using Pyramid Context Features
Author: Jonathan T. Barron, Mark D. Biggin, Pablo Arbeláez, David W. Knowles, Soile V.E. Keranen, Jitendra Malik
Abstract: We present an algorithm for the per-voxel semantic segmentation of a three-dimensional volume. At the core of our algorithm is a novel “pyramid context” feature, a descriptive representation designed such that exact per-voxel linear classification can be made extremely efficient. This feature not only allows for efficient semantic segmentation but enables other aspects of our algorithm, such as novel learned features and a stacked architecture that can reason about self-consistency. We demonstrate our technique on 3D fluorescence microscopy data of Drosophila embryos for which we are able to produce extremely accurate semantic segmentations in a matter of minutes, and for which other algorithms fail due to the size and high-dimensionality of the data, or due to the difficulty of the task.
4 0.69091749 401 iccv-2013-Stacked Predictive Sparse Coding for Classification of Distinct Regions in Tumor Histopathology
Author: Hang Chang, Yin Zhou, Paul Spellman, Bahram Parvin
Abstract: Image-based classification of histology sections, in terms of distinct components (e.g., tumor, stroma, normal), provides a series of indices for tumor composition. Furthermore, aggregation of these indices, from each whole slide image (WSI) in a large cohort, can provide predictive models of the clinical outcome. However, performance of the existing techniques is hindered as a result of large technical variations and biological heterogeneities that are always present in a large cohort. We propose a system that automatically learns a series of basis functions for representing the underlying spatial distribution using stacked predictive sparse decomposition (PSD). The learned representation is then fed into the spatial pyramid matching framework (SPM) with a linear SVM classifier. The system has been evaluated for classification of (a) distinct histological components for two cohorts of tumor types, and (b) colony organization of normal and malignant cell lines in 3D cell culture models. Throughput has been increased through the utility of graphical processing unit (GPU), and evaluation indicates superior performance results, compared with previous research.
5 0.6816451 5 iccv-2013-A Color Constancy Model with Double-Opponency Mechanisms
Author: Shaobing Gao, Kaifu Yang, Chaoyi Li, Yongjie Li
Abstract: The double-opponent color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. We introduce a new color constancy model by imitating the functional properties of the HVS from the retina to the double-opponent cells in V1. The idea behind the model originates from the observation that the color distribution of the responses of double-opponent cells to the input color-biased images coincides well with the light source direction. Then the true illuminant color of a scene is easily estimated by searching for the maxima of the separate RGB channels of the responses of double-opponent cells in the RGB space. Our systematical experimental evaluations on two commonly used image datasets show that the proposed model can produce competitive results in comparison to the complex state-of-the-art approaches, but with a simple implementation and without the need for training.
6 0.65789789 125 iccv-2013-Drosophila Embryo Stage Annotation Using Label Propagation
7 0.63036865 278 iccv-2013-Multi-scale Topological Features for Hand Posture Representation and Analysis
8 0.61293858 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
9 0.6126101 135 iccv-2013-Efficient Image Dehazing with Boundary Constraint and Contextual Regularization
10 0.60919923 288 iccv-2013-Nested Shape Descriptors
12 0.58901435 104 iccv-2013-Decomposing Bag of Words Histograms
13 0.58822387 277 iccv-2013-Multi-channel Correlation Filters
14 0.58485907 215 iccv-2013-Incorporating Cloud Distribution in Sky Representation
15 0.58340693 365 iccv-2013-SIFTpack: A Compact Representation for Efficient SIFT Matching
16 0.58020854 77 iccv-2013-Codemaps - Segment, Classify and Search Objects Locally
17 0.57752466 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error
18 0.56357443 416 iccv-2013-The Interestingness of Images
19 0.56327695 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map
20 0.55922818 74 iccv-2013-Co-segmentation by Composition
topicId topicWeight
[(2, 0.066), (7, 0.018), (12, 0.015), (26, 0.059), (31, 0.037), (34, 0.015), (36, 0.31), (40, 0.032), (42, 0.066), (48, 0.025), (64, 0.036), (73, 0.045), (89, 0.138), (98, 0.017)]
simIndex simValue paperId paperTitle
same-paper 1 0.70249784 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis
Author: Kim Steenstrup Pedersen, Kristoffer Stensbo-Smidt, Andrew Zirm, Christian Igel
Abstract: A texture descriptor based on the shape index and the accompanying curvedness measure is proposed, and it is evaluated for the automated analysis of astronomical image data. A representative sample of images of low-redshift galaxies from the Sloan Digital Sky Survey (SDSS) serves as a testbed. The goal of applying texture descriptors to these data is to extract novel information about galaxies; information which is often lost in more traditional analysis. In this study, we build a regression model for predicting a spectroscopic quantity, the specific star-formation rate (sSFR). As texture features we consider multi-scale gradient orientation histograms as well as multi-scale shape index histograms, which lead to a new descriptor. Our results show that we can successfully predict spectroscopic quantities from the texture in optical multi-band images. We successfully recover the observed bi-modal distribution of galaxies into quiescent and star-forming. The state-of-the-art for predicting the sSFR is a color-based physical model. We significantly improve its accuracy by augmenting the model with texture information. This study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone.
2 0.64498997 288 iccv-2013-Nested Shape Descriptors
Author: Jeffrey Byrne, Jianbo Shi
Abstract: In this paper, we propose a new family of binary local feature descriptors called nested shape descriptors. These descriptors are constructed by pooling oriented gradients over a large geometric structure called the Hawaiian earring, which is constructed with a nested correlation structure that enables a new robust local distance function called the nesting distance. This distance function is unique to the nested descriptor and provides robustness to outliers from order statistics. In this paper, we define the nested shape descriptor family and introduce a specific member called the seed-of-life descriptor. We perform a trade study to determine optimal descriptor parameters for the task of image matching. Finally, we evaluate performance compared to state-of-the-art local feature descriptors on the VGGAffine image matching benchmark, showing significant performance gains. Our descriptor is the first binary descriptor to outperform SIFT on this benchmark.
3 0.58565772 449 iccv-2013-What Do You Do? Occupation Recognition in a Photo via Social Context
Author: Ming Shao, Liangyue Li, Yun Fu
Abstract: In this paper, we investigate the problem of recognizing occupations of multiple people with arbitrary poses in a photo. Previous work utilizing single person's nearly frontal clothing information and fore/background context preliminarily proves that occupation recognition is computationally feasible in computer vision. However, in practice, multiple people with arbitrary poses are common in a photo, and recognizing their occupations is even more challenging. We argue that with appropriately built visual attributes, co-occurrence, and spatial configuration model that is learned through structure SVM, we can recognize multiple people's occupations in a photo simultaneously. To evaluate our method's performance, we conduct extensive experiments on a new well-labeled occupation database with 14 representative occupations and over 7K images. Results on this database validate our method's effectiveness and show that occupation recognition is solvable in a more general case.
4 0.51451665 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning
Author: Cheng Deng, Rongrong Ji, Wei Liu, Dacheng Tao, Xinbo Gao
Abstract: Visual reranking has been widely deployed to refine the quality of conventional content-based image retrieval engines. The current trend lies in employing a crowd of retrieved results stemming from multiple feature modalities to boost the overall performance of visual reranking. However, a major challenge pertaining to current reranking methods is how to take full advantage of the complementary property of distinct feature modalities. Given a query image and one feature modality, a regular visual reranking framework treats the top-ranked images as pseudo positive instances which are inevitably noisy, difficult to reveal this complementary property, and thus lead to inferior ranking performance. This paper proposes a novel image reranking approach by introducing a Co-Regularized Multi-Graph Learning (Co-RMGL) framework, in which the intra-graph and inter-graph constraints are simultaneously imposed to encode affinities in a single graph and consistency across different graphs. Moreover, weakly supervised learning driven by image attributes is performed to denoise the pseudo-labeled instances, thereby highlighting the unique strength of individual feature modality. Meanwhile, such learning can yield a few anchors in graphs that vitally enable the alignment and fusion of multiple graphs. As a result, an edge weight matrix learned from the fused graph automatically gives the ordering to the initially retrieved results. We evaluate our approach on four benchmark image retrieval datasets, demonstrating a significant performance gain over the state-of-the-arts.
5 0.51151329 404 iccv-2013-Structured Forests for Fast Edge Detection
Author: Piotr Dollár, C. Lawrence Zitnick
Abstract: Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.
6 0.50812709 351 iccv-2013-Restoring an Image Taken through a Window Covered with Dirt or Rain
7 0.5079903 160 iccv-2013-Fast Object Segmentation in Unconstrained Video
8 0.50793767 376 iccv-2013-Scene Text Localization and Recognition with Oriented Stroke Detection
9 0.50756031 426 iccv-2013-Training Deformable Part Models with Decorrelated Features
10 0.50696206 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria
11 0.50694507 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction
12 0.50632185 340 iccv-2013-Real-Time Articulated Hand Pose Estimation Using Semi-supervised Transductive Regression Forests
13 0.50619149 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning
14 0.50608188 61 iccv-2013-Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition
15 0.50584143 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification
16 0.50583512 47 iccv-2013-Alternating Regression Forests for Object Detection and Pose Estimation
17 0.505243 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal
18 0.50499243 349 iccv-2013-Regionlets for Generic Object Detection
19 0.50485098 220 iccv-2013-Joint Deep Learning for Pedestrian Detection
20 0.50477201 150 iccv-2013-Exemplar Cut