nips nips2004 nips2004-195 knowledge-graph by maker-knowledge-mining

195 nips-2004-Trait Selection for Assessing Beef Meat Quality Using Non-linear SVM


Source: pdf

Author: Juan Coz, Gustavo F. Bayón, Jorge Díez, Oscar Luaces, Antonio Bahamonde, Carlos Sañudo

Abstract: In this paper we show that it is possible to model sensory impressions of consumers about beef meat. This is not a straightforward task; the reason is that when we are aiming to induce a function that maps object descriptions into ratings, we must consider that consumers’ ratings are just a way to express their preferences about the products presented in the same testing session. Therefore, we had to use a special purpose SVM polynomial kernel. The training data set used collects the ratings of panels of experts and consumers; the meat was provided by 103 bovines of 7 Spanish breeds with different carcass weights and aging periods. Additionally, to gain insight into consumer preferences, we used feature subset selection tools. The result is that aging is the most important trait for improving consumers’ appreciation of beef meat. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Trait selection for assessing beef meat quality using non-linear SVM. [sent-1, score-0.757]

2 Abstract: In this paper we show that it is possible to model sensory impressions of consumers about beef meat. [sent-12, score-0.976]

3 This is not a straightforward task; the reason is that when we are aiming to induce a function that maps object descriptions into ratings, we must consider that consumers’ ratings are just a way to express their preferences about the products presented in the same testing session. [sent-13, score-0.671]

4 Therefore, we had to use a special purpose SVM polynomial kernel. [sent-14, score-0.037]

5 The training data set used collects the ratings of panels of experts and consumers; the meat was provided by 103 bovines of 7 Spanish breeds with different carcass weights and aging periods. [sent-15, score-1.005]

6 Additionally, to gain insight into consumer preferences, we used feature subset selection tools. [sent-16, score-0.321]

7 The result is that aging is the most important trait for improving consumers’ appreciation of beef meat. [sent-17, score-0.477]

8 1 Introduction. The quality of beef meat is appreciated through sensory impressions, and therefore its assessment is very subjective. [sent-18, score-0.947]

9 However, it is known that there are objective traits that are very important for the final properties of beef meat; these include the breed and feeding of the animals, the weight of the carcasses, and the aging of the meat after slaughter. [sent-19, score-1.032]

10 To discover the influence of these and other attributes, we have applied Machine Learning tools to the results of an experiment reported in [8]. [sent-20, score-0.08]

11 In that experiment, 103 bovines of 7 Spanish breeds were slaughtered to obtain two kinds of carcasses, light and standard [5]; the meat was prepared with 3 aging periods: 1, 7, and 21 days. [sent-21, score-0.775]

12 Finally, the meat was consumed by a group, called a panel, of 11 experts, and assessed by a panel of untrained consumers. [sent-22, score-0.646]

13 The conceptual framework used for the study reported in this paper was the analysis of sensory data. [sent-23, score-0.203]

14 In general, this kind of analysis is used by food industries to adapt their production processes and improve the acceptability of their specialties. [sent-24, score-0.142]

15 They need to discover the relationship between descriptions of their products and consumers’ sensory degree of satisfaction. [sent-25, score-0.406]

16 An excellent survey of the use of sensory data analysis in the food industry can be found in [15, 2]; for a Machine Learning perspective, see [3, 9, 6]. [sent-26, score-0.313]

17 The role played by each panel, experts and consumers, is very clear. [sent-27, score-0.092]

18 So, the experts’ panel is made up of a usually small group of trained people who rate several traits of products such as fibrosis, flavor, odor, etc. [sent-28, score-0.374]

19 The most essential property of expert panelists, in addition to their discriminatory capacity, is their own coherence, but not necessarily the uniformity of the group. [sent-31, score-0.031]

20 The experts’ panel can be viewed as a bundle of sophisticated sensors whose ratings are used to describe each product, in addition to other objective traits. [sent-32, score-0.264]

21 On the other hand, the group of untrained consumers (C) is asked to rate their degree of acceptance or satisfaction with the tested products on a given scale. [sent-33, score-0.77]

22 Usually, this panel is organized in a set of testing sessions, where a group of potential consumers assess some instances from a sample E of the tested product. [sent-34, score-0.685]

23 Frequently, each consumer only participates in a small number (sometimes only one) of testing sessions, usually on the same day. [sent-35, score-0.263]

24 In general, the success of sensory analysis relies on the capability to identify, with a precise description, a kind of product that can be reproduced as many times as needed so that it can be tested by as many consumers as possible. [sent-36, score-0.783]

25 Therefore, the study of beef meat sensory quality is very difficult. [sent-37, score-0.916]

26 The main reason is that there are important individual differences in each piece of meat, and the repeatability of tests can be only partially ensured. [sent-38, score-0.065]

27 Notice that from each animal there is only a limited number of similar pieces of meat, and thus we can only provide pieces of a given breed, weight, and aging period. [sent-39, score-0.258]

28 Additionally, it is worth noting that the cost of acquiring this kind of sensory data is very high. [sent-40, score-0.334]

29 The paper is organized as follows: in the next section we present an approach to deal with testing sessions explicitly. [sent-41, score-0.154]

30 The overall idea is to look for a preference or ranking function able to reproduce the implicit ordering of products given by consumers, instead of trying to predict the exact value of consumer ratings; such a function must return higher values for products with higher ratings. [sent-42, score-1.342]

31 In Section 3 we show how some state-of-the-art FSS (feature subset selection) methods designed for SVMs (Support Vector Machines) with non-linear kernels can be adapted to preference learning. [sent-43, score-0.198]

32 Finally, at the end of the paper, we return to the data set of beef meat to show how it is possible to explain consumer behavior, and to interpret the relevance of meat traits in this context. [sent-44, score-1.427]

33 2 Learning from sensory data A straightforward approach to handle sensory data can be based on regression, where sensory descriptions of each object x ∈ E are endowed with the degree of satisfaction r(x) for each consumer (or the average of a group of consumers). [sent-45, score-1.026]

34 However, this approach does not faithfully capture people’s preferences [7, 6]: consumers’ ratings actually express a relative ordering, so there is a kind of batch effect that often biases their ratings. [sent-46, score-0.436]

35 Thus, a product could obtain a higher (lower) rating depending on whether it is assessed together with worse (better) products. [sent-47, score-0.178]

36 Therefore, information about batches tested by consumers in each rating session is a very important issue. [sent-48, score-0.623]

37 On the other hand, more traditional approaches, such as testing statistical hypotheses [16, 15, 2], require all available food products in sample E to be assessed by the set of consumers C, a requirement that is very difficult to fulfill. [sent-49, score-0.725]

38 In this paper we use an approach to sensory data analysis based on learning consumers’ preferences, see [11, 14, 1], where training examples are represented by preference judgments, i. [sent-50, score-0.401]

39 pairs of vectors (v, u) indicating that, for someone, object v is preferable to object u. [sent-52, score-0.159]

40 We will show that this approach can induce more useful knowledge than other approaches, like regression based methods. [sent-53, score-0.037]

41 The main reason is that preference judgment sets can represent more relevant information for discovering consumers’ preferences. [sent-54, score-0.362]

42 2.1 A formal framework to learn consumer preferences. In order to learn our preference problems, we will try to find a real-valued ranking function f that maximizes the probability of having f(v) > f(u) whenever v is preferable to u [11, 14, 1]. [sent-56, score-0.696]

43 Our input data is made up of a set of ratings (r_i(x) : x ∈ E_i) for i ∈ C. [sent-57, score-0.166]

44 To avoid the batch effect, we will create a preference judgment set PJ = {v_j > u_j : j = 1, ..., n} suitable for our needs. [sent-58, score-0.341]

45 The set is built by considering all pairs (v, u) such that objects v and u were presented in the same session to a given consumer i, and r_i(v) > r_i(u). (1) [sent-61, score-0.346]
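As a sketch of this construction, PJ can be built by comparing all ratings given by the same consumer within the same session. The data layout and names below are illustrative assumptions, not the authors' code:

```python
from itertools import combinations

def build_preference_pairs(session_ratings):
    """Build the preference judgment set PJ.

    `session_ratings` maps (consumer, session) to a list of
    (object_vector, rating) tuples; this layout is hypothetical.
    """
    pj = []
    for rated in session_ratings.values():
        # Only objects rated by the same consumer in the same session
        # are comparable, which avoids the batch effect.
        for (v, rv), (u, ru) in combinations(rated, 2):
            if rv > ru:
                pj.append((v, u))  # v preferred to u
            elif ru > rv:
                pj.append((u, v))
            # equal ratings yield no preference judgment
    return pj
```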

46 Then, given a function F defined on pairs of objects, the ranking function f : R^d → R can simply be defined by f(x) = F(x, 0). [sent-63, score-0.168]

47 As we have already constructed a set of preference judgments PJ, we can specify F by means of the restrictions F(v_j, u_j) > 0 and F(u_j, v_j) < 0, ∀j = 1, ..., n. [sent-64, score-0.358]

48 Following [11], we define a kernel K as follows: K(x1, x2, x3, x4) = k(x1, x3) − k(x1, x4) − k(x2, x3) + k(x2, x4), (3) where k(x, y) = ⟨φ(x), φ(y)⟩ is a kernel function defined as the inner product of two objects represented in the feature space by their φ images. [sent-70, score-0.186]

49 In the experiments reported in Section 4, we will employ a polynomial kernel, defining k(x, y) = (⟨x, y⟩ + c)^g, with c = 1 and g = 2. [sent-71, score-0.037]
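A minimal NumPy sketch of this pairwise kernel (Equation 3) together with its polynomial base kernel; the function names are chosen for illustration:

```python
import numpy as np

def poly_kernel(x, y, c=1.0, g=2):
    """Polynomial base kernel k(x, y) = (<x, y> + c)^g; the paper uses c=1, g=2."""
    return (np.dot(x, y) + c) ** g

def preference_kernel(pair_a, pair_b, k=poly_kernel):
    """Kernel of Equation (3) over two preference judgments."""
    x1, x2 = pair_a  # x1 preferred to x2
    x3, x4 = pair_b  # x3 preferred to x4
    return k(x1, x3) - k(x1, x4) - k(x2, x3) + k(x2, x4)
```

The Gram matrix of preference_kernel over PJ can then be handed to any SVM solver that accepts precomputed kernels, labeling each judgment (v_j, u_j) as +1 and its reversal (u_j, v_j) as −1, which encodes the restrictions F(v_j, u_j) > 0 and F(u_j, v_j) < 0 stated above.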

50 Producers can focus on the most relevant features to improve the quality of the final product. [sent-73, score-0.071]

51 Additionally, reducing the number of features often leads to cheaper data acquisition, making these systems suitable for industrial operation [9]. [sent-74, score-0.102]

52 There are many feature subset selection methods applied to SVM classification. [sent-75, score-0.136]

53 RFE (Recursive Feature Elimination) is a ranking method that returns an ordering of the features. [sent-77, score-0.266]

54 Following the main idea of RFE, we have used two methods capable of ordering features in non-linear scenarios. [sent-81, score-0.132]

55 We must also point out that, in this case, preference learning data sets are formed by pairs of objects (v, u), and each object in the pair has the same set of features. [sent-82, score-0.282]

56 Thus, we must modify the ranking methods so they can deal with the duplicated features. [sent-83, score-0.199]

57 3.1 Ranking features for non-linear preference learning. Method 1. [sent-85, score-0.232]

58 - This method orders the list of features according to their influence on the variations of the weights. [sent-86, score-0.034]

59 It removes in each iteration the feature that minimizes the ranking value R1(i) = |∂‖w‖²/∂s_i| = |Σ_{k,j} α_k α_j z_k z_j ∂K(s·x_k, s·x_j)/∂s_i|, i = 1, ..., d. [sent-88, score-0.417]

60 Because we are working on a preference learning problem, we need 4 copies of the scaling factor s, one per argument of K. [sent-92, score-0.198]

61 In this formula, for a polynomial kernel k(x, y) = (⟨x, y⟩ + c)^g and a vector s such that s_i = 1 for all i, we have that ∂k(s·x, s·y)/∂s_i = 2g(x_i y_i)(c + ⟨x, y⟩)^(g−1). [sent-93, score-0.104]
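Combining these two formulas, the sketch below computes the R1 criterion for the preference kernel. The dual coefficients alphas, labels zs, and support pairs are assumed to come from a trained SVM; this illustrates the criterion rather than reproducing the authors' implementation:

```python
import numpy as np

def poly_grad_si(x, y, i, c=1.0, g=2):
    """d k(s*x, s*y) / d s_i evaluated at s = 1, for the polynomial kernel."""
    return 2.0 * g * x[i] * y[i] * (np.dot(x, y) + c) ** (g - 1)

def ranking_r1(alphas, zs, pairs, n_features, c=1.0, g=2):
    """R1(i) for every feature i; the feature with the smallest value is removed.

    Each element of `pairs` is a judgment (v, u); the derivative of the
    pairwise kernel K expands into four base-kernel derivatives, which is
    why four copies of the scaling factor are needed.
    """
    sv = [(a, z, p) for a, z, p in zip(alphas, zs, pairs) if a > 0]
    r1 = np.zeros(n_features)
    for i in range(n_features):
        total = 0.0
        for a_k, z_k, (v1, u1) in sv:
            for a_j, z_j, (v2, u2) in sv:
                dK = (poly_grad_si(v1, v2, i, c, g)
                      - poly_grad_si(v1, u2, i, c, g)
                      - poly_grad_si(u1, v2, i, c, g)
                      + poly_grad_si(u1, u2, i, c, g))
                total += a_k * a_j * z_k * z_j * dK
        r1[i] = abs(total)
    return r1
```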

62 Method 2. - This method, introduced in [4], works iteratively, removing at each step the feature whose removal minimizes the loss of predictive performance. [sent-95, score-0.063]

63 Notice that a higher value of R2(i), that is, a higher accuracy on the training set when replacing the i-th feature, means a lower relevance of that feature. [sent-97, score-0.161]

64 Therefore, we will remove the feature yielding the highest ranking value, as opposed to the ranking method described previously. [sent-98, score-0.399]
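A sketch of this second method; the replacement scheme (substituting a feature column by its training mean) and the train_fn/accuracy_fn interfaces are assumptions made for illustration, since the exact procedure is given in [4]:

```python
import numpy as np

def rank_by_replacement(train_fn, accuracy_fn, X, y):
    """Order features from least to most relevant (Method 2 sketch).

    `train_fn(X, y)` returns a fitted model and `accuracy_fn(model, X, y)`
    its training accuracy; both interfaces are hypothetical.
    """
    X = X.astype(float).copy()
    remaining = list(range(X.shape[1]))
    removed = []
    while len(remaining) > 1:
        model = train_fn(X, y)
        scores = {}
        for i in remaining:
            X_rep = X.copy()
            X_rep[:, i] = X[:, i].mean()              # neutralize feature i
            scores[i] = accuracy_fn(model, X_rep, y)  # R2(i)
        worst = max(scores, key=scores.get)  # highest R2 -> least relevant
        remaining.remove(worst)
        removed.append(worst)
        X[:, worst] = X[:, worst].mean()     # keep it neutralized
    removed.extend(remaining)
    return removed
```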

65 3.2 Model selection on an ordered sequence of feature subsets. Once we have an ordering of the features, we must select the subset F_i which maximizes the generalization performance of the system. [sent-100, score-0.264]

66 The most common choice of model selection method is cross-validation (CV), but its inefficiency and high variance [1] led us to try another kind of method. [sent-101, score-0.107]

67 The method we chose, ADJ, is a metric-based method that selects one model from a nested sequence of complexity-increasing models. [sent-103, score-0.08]

68 We consider the nested sequence F_1 ⊂ F_2 ⊂ ... ⊂ F_d, where F_i represents the subset containing only the i most relevant features. [sent-107, score-0.029]

69 Then we can create a nested sequence of models f_i, each of them induced by an SVM from the corresponding F_i. [sent-108, score-0.141]

70 Thus, given two different hypotheses f and g, their distance is calculated as the expected disagreement in their predictions. [sent-110, score-0.081]

71 Given that these distances can only be approximated, ADJ establishes a method to compute d̂(g, t), an adjusted estimate of the distance between any hypothesis g and the true target classification function t. [sent-111, score-0.068]

72 Therefore, the selected hypothesis is f_k = arg min_l d̂(f_l, t). [sent-112, score-0.081]
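A sketch of ADJ-style selection over the nested sequence f_1, ..., f_d. The adjustment formula below follows the metric-based model selection literature and fills in details the summary does not quote, so treat it as an assumption; it also requires a pool of unlabeled points to estimate disagreements:

```python
import numpy as np

def disagreement(f, g, X):
    """Empirical distance between two hypotheses: expected prediction disagreement."""
    return np.mean(np.sign(f(X)) != np.sign(g(X)))

def adj_select(models, X_train, y_train, X_unlabeled):
    """Pick f_k = argmin_l d_hat(f_l, t) from a complexity-ordered model sequence.

    Labels in `y_train` are assumed to lie in {-1, +1}; `models` are
    callables returning real-valued scores.
    """
    best, best_score = None, np.inf
    for l, f_l in enumerate(models):
        d_train = np.mean(np.sign(f_l(X_train)) != y_train)  # distance to target t
        ratio = 1.0
        for f_k in models[:l]:
            d_u = disagreement(f_k, f_l, X_unlabeled)
            d_t = disagreement(f_k, f_l, X_train)
            if d_t > 0:
                ratio = max(ratio, d_u / d_t)
        score = d_train * ratio  # adjusted distance estimate d_hat(f_l, t)
        if score < best_score:
            best, best_score = f_l, score
    return best
```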


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('consumers', 0.453), ('meat', 0.427), ('beef', 0.249), ('sensory', 0.203), ('preference', 0.198), ('consumer', 0.185), ('aging', 0.178), ('ranking', 0.168), ('ratings', 0.166), ('adj', 0.142), ('fl', 0.124), ('rfe', 0.124), ('sessions', 0.107), ('traits', 0.107), ('ordering', 0.098), ('preferences', 0.098), ('panel', 0.098), ('experts', 0.092), ('products', 0.087), ('judgments', 0.085), ('food', 0.079), ('uj', 0.075), ('bovines', 0.071), ('breed', 0.071), ('breeds', 0.071), ('carcasses', 0.071), ('impressions', 0.071), ('descriptions', 0.071), ('kind', 0.063), ('feature', 0.063), ('spanish', 0.062), ('untrained', 0.062), ('fi', 0.061), ('assessed', 0.059), ('rating', 0.057), ('object', 0.056), ('satisfaction', 0.053), ('zk', 0.053), ('group', 0.052), ('trait', 0.05), ('nested', 0.05), ('svm', 0.049), ('session', 0.047), ('preferable', 0.047), ('testing', 0.047), ('fk', 0.045), ('disagreement', 0.045), ('removes', 0.045), ('discover', 0.045), ('xk', 0.045), ('selection', 0.044), ('express', 0.044), ('rd', 0.044), ('zj', 0.043), ('ri', 0.043), ('pieces', 0.04), ('quality', 0.037), ('batch', 0.037), ('induce', 0.037), ('acquisition', 0.037), ('polynomial', 0.037), ('hypothesis', 0.036), ('experience', 0.035), ('tested', 0.035), ('features', 0.034), ('reason', 0.034), ('notice', 0.034), ('si', 0.034), ('kernel', 0.033), ('higher', 0.033), ('relevance', 0.032), ('adjusted', 0.032), ('zaragoza', 0.031), ('worthy', 0.031), ('duplicated', 0.031), ('batches', 0.031), ('carlos', 0.031), ('repeatability', 0.031), ('appreciated', 0.031), ('bay', 0.031), ('reductions', 0.031), ('fd', 0.031), ('del', 0.031), ('someone', 0.031), ('uniformity', 0.031), ('judgment', 0.031), ('aiming', 0.031), ('participates', 0.031), ('industry', 0.031), ('people', 0.03), ('sequence', 0.03), ('subset', 0.029), ('product', 0.029), ('prepared', 0.028), ('odor', 0.028), ('acceptance', 0.028), ('faithfully', 0.028), ('coherence', 0.028), ('avor', 0.028), ('objects', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 195 nips-2004-Trait Selection for Assessing Beef Meat Quality Using Non-linear SVM

Author: Juan Coz, Gustavo F. Bayón, Jorge Díez, Oscar Luaces, Antonio Bahamonde, Carlos Sañudo

Abstract: In this paper we show that it is possible to model sensory impressions of consumers about beef meat. This is not a straightforward task; the reason is that when we are aiming to induce a function that maps object descriptions into ratings, we must consider that consumers’ ratings are just a way to express their preferences about the products presented in the same testing session. Therefore, we had to use a special purpose SVM polynomial kernel. The training data set used collects the ratings of panels of experts and consumers; the meat was provided by 103 bovines of 7 Spanish breeds with different carcass weights and aging periods. Additionally, to gain insight into consumer preferences, we used feature subset selection tools. The result is that aging is the most important trait for improving consumers’ appreciation of beef meat. 1

2 0.15096551 100 nips-2004-Learning Preferences for Multiclass Problems

Author: Fabio Aiolli, Alessandro Sperduti

Abstract: Many interesting multiclass problems can be cast in the general framework of label ranking defined on a given set of classes. The evaluation for such a ranking is generally given in terms of the number of violated order constraints between classes. In this paper, we propose the Preference Learning Model as a unifying framework to model and solve a large class of multiclass problems in a large margin perspective. In addition, an original kernel-based method is proposed and evaluated on a ranking dataset with state-of-the-art results. 1

3 0.13506615 8 nips-2004-A Machine Learning Approach to Conjoint Analysis

Author: Olivier Chapelle, Zaïd Harchaoui

Abstract: Choice-based conjoint analysis builds models of consumer preferences over products with answers gathered in questionnaires. Our main goal is to bring tools from the machine learning community to solve this problem more efficiently. Thus, we propose two algorithms to quickly and accurately estimate consumer preferences. 1

4 0.10919054 7 nips-2004-A Large Deviation Bound for the Area Under the ROC Curve

Author: Shivani Agarwal, Thore Graepel, Ralf Herbrich, Dan Roth

Abstract: The area under the ROC curve (AUC) has been advocated as an evaluation criterion for the bipartite ranking problem. We study large deviation properties of the AUC; in particular, we derive a distribution-free large deviation bound for the AUC which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on an independent test sequence. A comparison of our result with a corresponding large deviation result for the classification error rate suggests that the test sample size required to obtain an ε-accurate estimate of the expected accuracy of a ranking function with δ-confidence is larger than that required to obtain an ε-accurate estimate of the expected error rate of a classification function with the same confidence. A simple application of the union bound allows the large deviation bound to be extended to learned ranking functions chosen from finite function classes. 1

5 0.063194647 98 nips-2004-Learning Gaussian Process Kernels via Hierarchical Bayes

Author: Anton Schwaighofer, Volker Tresp, Kai Yu

Abstract: We present a novel method for learning with Gaussian process regression in a hierarchical Bayesian framework. In a first step, kernel matrices on a fixed set of input points are learned from data using a simple and efficient EM algorithm. This step is nonparametric, in that it does not require a parametric form of covariance function. In a second step, kernel functions are fitted to approximate the learned covariance matrix using a generalized Nyström method, which results in a complex, data-driven kernel. We evaluate our approach as a recommendation engine for art images, where the proposed hierarchical Bayesian method leads to excellent prediction performance. 1

6 0.062933944 156 nips-2004-Result Analysis of the NIPS 2003 Feature Selection Challenge

7 0.0587602 134 nips-2004-Object Classification from a Single Example Utilizing Class Relevance Metrics

8 0.058469813 40 nips-2004-Common-Frame Model for Object Recognition

9 0.057869084 3 nips-2004-A Feature Selection Algorithm Based on the Global Minimization of a Generalization Error Bound

10 0.054245576 57 nips-2004-Economic Properties of Social Networks

11 0.050223213 9 nips-2004-A Method for Inferring Label Sampling Mechanisms in Semi-Supervised Learning

12 0.049463905 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity

13 0.049053464 167 nips-2004-Semi-supervised Learning with Penalized Probabilistic Clustering

14 0.047316067 142 nips-2004-Outlier Detection with One-class Kernel Fisher Discriminants

15 0.046789207 38 nips-2004-Co-Validation: Using Model Disagreement on Unlabeled Data to Validate Classification Algorithms

16 0.044677891 93 nips-2004-Kernel Projection Machine: a New Tool for Pattern Recognition

17 0.044169333 168 nips-2004-Semigroup Kernels on Finite Sets

18 0.044124562 64 nips-2004-Experts in a Markov Decision Process

19 0.043017603 123 nips-2004-Multi-agent Cooperation in Diverse Population Games

20 0.040968169 164 nips-2004-Semi-supervised Learning by Entropy Minimization


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.145), (1, 0.028), (2, 0.0), (3, 0.029), (4, 0.004), (5, 0.086), (6, 0.034), (7, 0.018), (8, 0.078), (9, -0.057), (10, -0.093), (11, 0.102), (12, 0.076), (13, 0.045), (14, -0.003), (15, 0.132), (16, -0.075), (17, -0.008), (18, 0.037), (19, -0.035), (20, 0.02), (21, -0.076), (22, -0.002), (23, -0.081), (24, 0.045), (25, -0.098), (26, 0.162), (27, 0.149), (28, 0.136), (29, -0.053), (30, -0.02), (31, 0.043), (32, 0.122), (33, -0.017), (34, -0.206), (35, 0.136), (36, -0.005), (37, -0.089), (38, -0.157), (39, 0.081), (40, -0.039), (41, -0.005), (42, 0.058), (43, -0.16), (44, 0.128), (45, -0.108), (46, 0.116), (47, 0.139), (48, 0.143), (49, 0.112)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94396728 195 nips-2004-Trait Selection for Assessing Beef Meat Quality Using Non-linear SVM

Author: Juan Coz, Gustavo F. Bayón, Jorge Díez, Oscar Luaces, Antonio Bahamonde, Carlos Sañudo

Abstract: In this paper we show that it is possible to model sensory impressions of consumers about beef meat. This is not a straightforward task; the reason is that when we are aiming to induce a function that maps object descriptions into ratings, we must consider that consumers’ ratings are just a way to express their preferences about the products presented in the same testing session. Therefore, we had to use a special purpose SVM polynomial kernel. The training data set used collects the ratings of panels of experts and consumers; the meat was provided by 103 bovines of 7 Spanish breeds with different carcass weights and aging periods. Additionally, to gain insight into consumer preferences, we used feature subset selection tools. The result is that aging is the most important trait for improving consumers’ appreciation of beef meat. 1

2 0.6900242 8 nips-2004-A Machine Learning Approach to Conjoint Analysis

Author: Olivier Chapelle, Zaïd Harchaoui

Abstract: Choice-based conjoint analysis builds models of consumer preferences over products with answers gathered in questionnaires. Our main goal is to bring tools from the machine learning community to solve this problem more efficiently. Thus, we propose two algorithms to quickly and accurately estimate consumer preferences. 1

3 0.60371882 100 nips-2004-Learning Preferences for Multiclass Problems

Author: Fabio Aiolli, Alessandro Sperduti

Abstract: Many interesting multiclass problems can be cast in the general framework of label ranking defined on a given set of classes. The evaluation for such a ranking is generally given in terms of the number of violated order constraints between classes. In this paper, we propose the Preference Learning Model as a unifying framework to model and solve a large class of multiclass problems in a large margin perspective. In addition, an original kernel-based method is proposed and evaluated on a ranking dataset with state-of-the-art results. 1

4 0.36654684 7 nips-2004-A Large Deviation Bound for the Area Under the ROC Curve

Author: Shivani Agarwal, Thore Graepel, Ralf Herbrich, Dan Roth

Abstract: The area under the ROC curve (AUC) has been advocated as an evaluation criterion for the bipartite ranking problem. We study large deviation properties of the AUC; in particular, we derive a distribution-free large deviation bound for the AUC which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on an independent test sequence. A comparison of our result with a corresponding large deviation result for the classification error rate suggests that the test sample size required to obtain an ε-accurate estimate of the expected accuracy of a ranking function with δ-confidence is larger than that required to obtain an ε-accurate estimate of the expected error rate of a classification function with the same confidence. A simple application of the union bound allows the large deviation bound to be extended to learned ranking functions chosen from finite function classes. 1

5 0.34756303 98 nips-2004-Learning Gaussian Process Kernels via Hierarchical Bayes

Author: Anton Schwaighofer, Volker Tresp, Kai Yu

Abstract: We present a novel method for learning with Gaussian process regression in a hierarchical Bayesian framework. In a first step, kernel matrices on a fixed set of input points are learned from data using a simple and efficient EM algorithm. This step is nonparametric, in that it does not require a parametric form of covariance function. In a second step, kernel functions are fitted to approximate the learned covariance matrix using a generalized Nyström method, which results in a complex, data-driven kernel. We evaluate our approach as a recommendation engine for art images, where the proposed hierarchical Bayesian method leads to excellent prediction performance. 1

6 0.34226179 156 nips-2004-Result Analysis of the NIPS 2003 Feature Selection Challenge

7 0.33232293 38 nips-2004-Co-Validation: Using Model Disagreement on Unlabeled Data to Validate Classification Algorithms

8 0.28923503 40 nips-2004-Common-Frame Model for Object Recognition

9 0.27475175 43 nips-2004-Conditional Models of Identity Uncertainty with Application to Noun Coreference

10 0.26771638 3 nips-2004-A Feature Selection Algorithm Based on the Global Minimization of a Generalization Error Bound

11 0.26519063 123 nips-2004-Multi-agent Cooperation in Diverse Population Games

12 0.26335934 162 nips-2004-Semi-Markov Conditional Random Fields for Information Extraction

13 0.26334435 207 nips-2004-ℓ₀-norm Minimization for Basis Selection

14 0.24745657 57 nips-2004-Economic Properties of Social Networks

15 0.24551927 15 nips-2004-Active Learning for Anomaly and Rare-Category Detection

16 0.24415706 75 nips-2004-Heuristics for Ordering Cue Search in Decision Making

17 0.24202791 142 nips-2004-Outlier Detection with One-class Kernel Fisher Discriminants

18 0.24109235 93 nips-2004-Kernel Projection Machine: a New Tool for Pattern Recognition

19 0.23905505 18 nips-2004-Algebraic Set Kernels with Application to Inference Over Local Image Representations

20 0.23337947 190 nips-2004-The Rescorla-Wagner Algorithm and Maximum Likelihood Estimation of Causal Parameters


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(11, 0.397), (13, 0.066), (15, 0.124), (26, 0.074), (31, 0.025), (33, 0.133), (35, 0.025), (50, 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.70840311 195 nips-2004-Trait Selection for Assessing Beef Meat Quality Using Non-linear SVM

Author: Juan Coz, Gustavo F. Bayón, Jorge Díez, Oscar Luaces, Antonio Bahamonde, Carlos Sañudo

Abstract: In this paper we show that it is possible to model sensory impressions of consumers about beef meat. This is not a straightforward task; the reason is that when we are aiming to induce a function that maps object descriptions into ratings, we must consider that consumers’ ratings are just a way to express their preferences about the products presented in the same testing session. Therefore, we had to use a special purpose SVM polynomial kernel. The training data set used collects the ratings of panels of experts and consumers; the meat was provided by 103 bovines of 7 Spanish breeds with different carcass weights and aging periods. Additionally, to gain insight into consumer preferences, we used feature subset selection tools. The result is that aging is the most important trait for improving consumers’ appreciation of beef meat. 1

2 0.56217581 2 nips-2004-A Direct Formulation for Sparse PCA Using Semidefinite Programming

Author: Alexandre D'aspremont, Laurent E. Ghaoui, Michael I. Jordan, Gert R. Lanckriet

Abstract: We examine the problem of approximating, in the Frobenius-norm sense, a positive, semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming based relaxation for our problem. 1

3 0.45937431 189 nips-2004-The Power of Selective Memory: Self-Bounded Learning of Prediction Suffix Trees

Author: Ofer Dekel, Shai Shalev-shwartz, Yoram Singer

Abstract: Prediction suffix trees (PST) provide a popular and effective tool for tasks such as compression, classification, and language modeling. In this paper we take a decision theoretic view of PSTs for the task of sequence prediction. Generalizing the notion of margin to PSTs, we present an online PST learning algorithm and derive a loss bound for it. The depth of the PST generated by this algorithm scales linearly with the length of the input. We then describe a self-bounded enhancement of our learning algorithm which automatically grows a bounded-depth PST. We also prove an analogous mistake-bound for the self-bounded algorithm. The result is an efficient algorithm that neither relies on a-priori assumptions on the shape or maximal depth of the target PST nor does it require any parameters. To our knowledge, this is the first provably-correct PST learning algorithm which generates a bounded-depth PST while being competitive with any fixed PST determined in hindsight. 1

4 0.45752084 4 nips-2004-A Generalized Bradley-Terry Model: From Group Competition to Individual Skill

Author: Tzu-kuo Huang, Chih-jen Lin, Ruby C. Weng

Abstract: The Bradley-Terry model for paired comparison has been popular in many areas. We propose a generalized version in which paired individual comparisons are extended to paired team comparisons. We introduce a simple algorithm with convergence proofs to solve the model and obtain individual skill. A useful application to multi-class probability estimates using error-correcting codes is demonstrated. 1

5 0.45743284 69 nips-2004-Fast Rates to Bayes for Kernel Machines

Author: Ingo Steinwart, Clint Scovel

Abstract: We establish learning rates to the Bayes risk for support vector machines (SVMs) with hinge loss. In particular, for SVMs with Gaussian RBF kernels we propose a geometric condition for distributions which can be used to determine approximation properties of these kernels. Finally, we compare our methods with a recent paper of G. Blanchard et al. 1

6 0.45413479 131 nips-2004-Non-Local Manifold Tangent Learning

7 0.45380861 79 nips-2004-Hierarchical Eigensolver for Transition Matrices in Spectral Methods

8 0.45377374 110 nips-2004-Matrix Exponential Gradient Updates for On-line Learning and Bregman Projection

9 0.45344174 68 nips-2004-Face Detection --- Efficient and Rank Deficient

10 0.45286712 103 nips-2004-Limits of Spectral Clustering

11 0.45285797 206 nips-2004-Worst-Case Analysis of Selective Sampling for Linear-Threshold Algorithms

12 0.45283449 178 nips-2004-Support Vector Classification with Input Data Uncertainty

13 0.45262378 174 nips-2004-Spike Sorting: Bayesian Clustering of Non-Stationary Data

14 0.45234782 148 nips-2004-Probabilistic Computation in Spiking Populations

15 0.45207611 172 nips-2004-Sparse Coding of Natural Images Using an Overcomplete Set of Limited Capacity Units

16 0.45078 16 nips-2004-Adaptive Discriminative Generative Model and Its Applications

17 0.45039809 161 nips-2004-Self-Tuning Spectral Clustering

18 0.44969785 133 nips-2004-Nonparametric Transforms of Graph Kernels for Semi-Supervised Learning

19 0.44945756 9 nips-2004-A Method for Inferring Label Sampling Mechanisms in Semi-Supervised Learning

20 0.44942662 28 nips-2004-Bayesian inference in spiking neurons