jmlr jmlr2006 jmlr2006-67 knowledge-graph by maker-knowledge-mining

67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations


Source: pdf

Author: Bernhard Moser

Abstract: Kernels are two-placed functions that can be interpreted as inner products in some Hilbert space. It is this property which makes kernels predestinated to carry linear models of learning, optimization or classification strategies over to non-linear variants. Following this idea, various kernel-based methods like support vector machines or kernel principal component analysis have been conceived which prove to be successful for machine learning, data mining and computer vision applications. When applying a kernel-based method a central question is the choice and the design of the kernel function. This paper provides a novel view on kernels based on fuzzy-logical concepts which allows to incorporate prior knowledge in the design process. It is demonstrated that kernels mapping to the unit interval with constant one in its diagonal can be represented by a commonly used fuzzylogical formula for representing fuzzy rule bases. This means that a great class of kernels can be represented by fuzzy-logical concepts. Apart from this result, which only guarantees the existence of such a representation, constructive examples are presented and the relation to unlabeled learning is pointed out. Keywords: kernel, triangular norm, T -transitivity, fuzzy relation, residuum 1. Motivation Positive-definiteness plays a prominent role especially in optimization and machine learning due to the fact that two-place functions with this property, so-called kernels, can be represented as inner products in some Hilbert space. Thereby, optimization techniques conceived on the basis of linear models can be extended to non-linear algorithms. For a survey of applications see, for example, ¨ Jolliffe (1986), Sch¨ lkopf and Smola (2002) and Scholkopf et al. (1998). o Recently in Moser (2006) it was shown that kernels with values from the unit interval can be interpreted as fuzzy equivalence relations motivated by the idea that kernels express a kind of similarity. This means that the concept of fuzzy equivalence relations, or synonymously fuzzy similarity relations, is more general than that of kernels, provided only values in the unit interval are considered. Fuzzy equivalence relations distinguish from Boolean equivalence relations by a many-valued extension of transitivity which can be interpreted as many-valued logical model of the statement “IF x is similar to y AND y is similar to z THEN x is similar to z”. In contrast to the Boolean case, in many-valued logics the set of truth values is extended such that also assertions, for example, whether two elements x and y are similar, can be treated as a matter of degree. The standard model for the set of (quasi) truth values of fuzzy logic and other many-valued logical systems is the unit interval. If E(x, y) represents the (quasi) truth value of the statement that x is c 2006 Bernhard Moser. M OSER similar to y, then the many-valued version of transitivity is modeled by T (E(x, y), E(y, z)) ≤ E(x, z) where T is a so-called triangular norm which is an extension of the Boolean conjunction. This many-valued concept for transitivity is called T -transitivity. For a survey on triangular norms see, for example, Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000), ¨ and for fuzzy equivalence relations and T -transitivity see, for example, Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. (2000), and Zadeh (1971). 
o Based on the semantics of fuzzy logic, this approach allows to incorporate knowledge-based models for the design of kernels. From this perspective, the most interesting mathematical question is how positive-semidefinite fuzzy equivalence relations can be characterized or at least constructed under some circumstances. At least for some special cases, proofs are provided in Section 4, which motivate further research aiming at establishing a more general theory on the positive-definiteness of fuzzy equivalence relations. These cases are based on the most prominent representatives of triangular norms, that is the Minimum, the Product and the Łukasiewicz t-norm. The paper is structured as follows. First of all, in Section 2, some basic prerequisites concerning kernels and fuzzy relations are outlined. In Section 3, a result about the T -transitivity of kernels from Moser (2006) is cited and interpreted as existence statement that guarantees a representation of kernels mapping to the unit interval with constant 1 in its diagonal by a certain, commonly used, fuzzy-logical construction of a fuzzy equivalence relation. Finally, in contrast to the pure existence theorem of Section 3, in Section 4 constructive examples of fuzzy equivalence relations are provided which are proven to be kernels. In a concluding remark, the relationship to the problem of labeled and unlabeled learning is pointed out. 2. Prerequisites This section summarizes definitions and facts from the theory of kernels as well as from fuzzy set theory which are needed later on. 2.1 Kernels and Positive-Semidefiniteness Preserving Functions There is an extensive literature concerning kernels and kernel-based methods like support vector machines or kernel principal component analysis especially in the machine learning, data mining ¨ and computer vision communities. For an overview and introduction, see, for example, Sch olkopf and Smola (2002). Here we present only what is needed later on. For completeness let us recall the basic definition for kernels and positive-semidefiniteness. Definition 1 Let X be a non-empty set. A real-valued function k : X × X → R is said to be a kernel iff it is symmetric, that is, k(x, y) = k(y, x) for all x, y ∈ X , and positive-semidefinite, that is, ∑n j=1 ci c j k(xi , x j ) ≥ 0 for any n ∈ N, any choice of x1 , . . . , xn ∈ X and any choice of c1 , . . . , cn ∈ R. i, One way to generate new kernels from known kernels is to apply operations which preserve the positive-semidefiniteness property. A characterization of such operations is provided by C. H. FitzGerald (1995). Theorem 2 (Closeness Properties of Kernels) Let f : Rn → R, n ∈ N, then k : X × X → R given by k(x, y) := f (k1 (x, y), . . . , kn (x, y)) 2604 G ENERATING K ERNELS BY F UZZY R ELATIONS is a kernel for any choice of kernels k1 , . . . , kn on X × X iff f is the real restriction of an entire function on Cn of the form f (x1 , . . . , xn ) = ∑ r1 ≥0,...,rn ≥0 r r cr1 ,...,rn x11 · · · xnn (1) where cr1 ,...,rn ≥ 0 for all nonnegative indices r1 , . . . , rn . 2.2 Triangular Norms Triangular norms have been originally studied within the framework of probabilistic metric spaces, see Schweizer and Sklar (1961) and Schweizer and Sklar (1983). In this context, t-norms proved to be an appropriate concept when dealing with triangle inequalities. 
Later on, t-norms and their dual version, t-conorms, have been used to model conjunction and disjunction for many-valued logic, see Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000). Definition 3 A function T : [0, 1]2 → [0, 1] is called t-norm (triangular norm), if it satisfies the following conditions: (i) (ii) (iii) (iv) ∀x, y ∈ [0, 1] : ∀x, y, z ∈ [0, 1] : ∀x, y, z ∈ [0, 1] : ∀x, y ∈ [0, 1] : T (x, y) = T (y, x) T (x, T (y, z)) = T (T (x, y), z) y ≤ z =⇒ T (x, y) ≤ T (x, z) T (x, 1) = x ∧ T (1, y) = y (commutativity) (associativity) (monotonicity) (boundary condition) Further, a t-norm is called Archimedean if it is continuous and satisfies x ∈ (0, 1) ⇒ T (x, x) < x. Due to its associativity, many-placed extensions Tn : [0, 1]n → [0, 1], n ∈ N, of a t-norm T are uniquely determined by Tn (x1 , . . . , xn ) = T (x1 , Tn−1 (x2 , . . . , xn )). Archimedean t-norms are characterized by the following representation theorem due to Ling (1965): Theorem 4 Let T : [0, 1]2 → [0, 1] be a t-norm. Then T is Archimedean if, and only if, there is a continuous, strictly decreasing function f : [0, 1] → [0, ∞] with f (1) = 0 such that for x, y ∈ [0, 1], T (x, y) = f −1 (min( f (x) + f (y), f (0))). By setting g(x) = exp (− f (x)), Ling’s characterization yields an alternative representation with a multiplicative generator function T (x, y) = g−1 (max(g(x) g(y), g(0))). For g(x) = x we get the product TP (x, y) = x y. The setting f (x) = 1 − x yields the so-called Łukasiewcz t-norm TL (x, y) = max(x + y − 1, 0). Due to Ling’s theorem 4 an Archimedean t-norm T is isomorphic either to TL or TP , depending on whether the additive generator takes a finite value at 0 or not. In the former case, the Archimedean t-norm is called non-strict, in the latter it is called strict. 2605 M OSER A many-valued model of an implication is provided by the so-called residuum given by → T (a, b) = sup{c ∈ [0, 1]|T (a, c) ≤ b} (2) where T is a left-continuous t-norm. Equation (2) is uniquely determined by the so-called adjunction property → ∀a, b, c ∈ [0, 1] : T (a, b) ≤ c ⇔ a ≤ T (b, c). Consequently, the operator ↔ → → T (a, b) = min T (a, b), T (b, a) (3) (4) models a biimplication. For details, for example, see Gottwald (1986) and Klement et al. (2000). → Tables 1 and 2 list examples of t-norms with their induced residuum T . For further examples see, for example, Klement et al. (2000). √ √ Tcos (a, b) = max(ab − 1 − a2 1 − b2 , 0) TL (a, b) = max(a + b − 1, 0) TP (a, b) = ab TM (a, b) = min(a, b) Table 1: Examples of t-norms → T cos (a, b) = → T L (a, b) = → = T P (a, b) → T M (a, b) = cos(arccos(b) − arccos(a)) if a > b, 1 else min(b − a + 1, 1) b if a > b, a 1 else b if a > b, 1 else Table 2: Examples of residuums 2.3 T -Equivalences If we want to classify based on a notion of similarity or indistinguishability, we face the problem of transitivity. For instance, let us consider two real numbers to be indistinguishable if and only if they differ by at most a certain bound ε > 0, this is modeled by the relation ∼ ε given by x ∼ε y :⇔ |x−y| < ε, ε > 0, x, y ∈ R. Note that the relation ∼ε is not transitive and, therefore, not an equivalence relation. The transitivity requirement turns out to be too strong for this example. The problem of identification and transitivity in the context of similarity of physical objects was early pointed out and discussed philosophically by Poincar´ (1902) and Poincar´ (1904). 
In the framework of fuzzy e e logic, the way to overcome this problem is to model similarity by fuzzy relations based on a many¨ valued concept of transitivity, see Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. o (2000) and Zadeh (1971). 2606 G ENERATING K ERNELS BY F UZZY R ELATIONS Definition 5 A function E : X 2 −→ [0, 1] is called a fuzzy equivalence relation, or synonymously, T -equivalence with respect to the t-norm T if it satisfies the following conditions: (i) ∀x ∈ X : E(x, x) = 1 (reflexivity) (ii) ∀x, y ∈ X : E(x, y) = E(y, x) (symmetry) (iii) ∀x, y, z ∈ X : T (E(x, y), E(y, z)) ≤ E(x, z) (T-transitivity). The value E(x, y) can be also looked at as the (quasi) truth value of the statement “x is equal to y”. Following this semantics, T-transitivity can be seen as a many-valued model of the proposition, “If x is equal to y and y is equal to z, then x is equal to z”. T -equivalences for Archimedean t-norms are closely related to metrics and pseudo-metrics as shown by Klement et al. (2000) and Moser (1995). Theorem 6 Let T be an Archimedean t-norm given by ∀a, b ∈ [0, 1] : T (a, b) = f −1 (min( f (a) + f (b), f (0))), where f : [0, 1] → [0, ∞] is a strictly decreasing, continuous function with f (1) = 0. (i) If d : X 2 → [0, ∞[ is a pseudo-metric, then the function Ed : X 2 → [0, 1] defined by Ed (x, y) = f −1 (min(d(x, y), f (0))) is a T -equivalence with respect to the t-norm T . (ii) If E : X 2 → [0, 1] is a T -equivalence relation, then the function dE : X 2 → [0, ∞] defined by dE (x, y) = f (E(x, y)) is a pseudo-metric. → Another way to construct T -equivalences is to employ T -operators. The proof of the following assertion can be found in Trillas and Valverde (1984), Kruse et al. (1993) and Kruse et al. (1994). ↔ Theorem 7 Let T be a left-continuous t-norm, T its induced biimplication, µi : X → [0, 1], i ∈ I, I non-empty; then E : X × X → [0, 1] given by ↔ E(x, y) = inf T (µi (x), µi (y)) i∈I (5) is a T -equivalence relation. ¨ For further details on T -equivalences see also Boixader and Jacas (1999), H oppner et al. (2002), Jacas (1988), Trillas et al. (1999) and Valverde (1985). 3. Representing Kernels by T -Equivalences It is interesting that the concept of kernels, which is motivated by geometric reasoning in terms of inner products and mappings to Hilbert spaces and which is inherently formulated by algebraic terms, is closely related to the concept of fuzzy equivalence relations as demonstrated and discussed in more detail in Moser (2006). In this section, we start with the result that any kernel k : X × X → [0, 1] with k(x, x) = 1 for all x ∈ X is T -transitive and, therefore, a fuzzy equivalence relation. The proof can be found in Moser (2006), see also Appendix A.1. 2607 M OSER Theorem 8 Any kernel k : X × X → [0, 1] with k(x, x) = 1 is (at least) Tcos -transitive, where 1 − a2 Tcos (a, b) = max{a b − 1 − b2 , 0}. (6) The nomenclature is motivated by the fact that the triangular norm defined by Equation (6) is an Archimedean t-norm which is generated by the arcosine function as its additive generator. From this result, the following existence theorem can be derived, which guarantees that any kernel under consideration can be represented by the fuzzy-logical formula given by (5). In fuzzy systems, this formula is commonly used for modeling rule bases (see, for example, Kruse et al., 1993, 1994). 
Theorem 9 Let X be a non-empty universe of discourse, k : X × X → [0, 1] a kernel in the sense of Definition 1 and k(x, x) = 1 for all x ∈ X ; then there is a family of membership functions µ i : X → [0, 1], i ∈ I, I non-empty and a t-norm T , such that ↔ ∀x, y ∈ X : k(x, y) = inf T (µi (x), µi (y)). i∈I (7) Proof. Let us set I := X , µx0 (x) = k(x, x0 ) and let us choose Tcos as t-norm. For convenience let us denote ↔ h(x, y) = inf T cos (µx0 (x), µx0 (y)), x0 ∈X which is equivalent to ↔ h(x, y) = inf T cos (k(x0 , x), k(x0 , y)). x0 ∈X According to Theorem 8, k is Tcos -transitive, that is, ↔ ∀x0 , x, y ∈ X : T cos (k(x0 , x), k(x0 , y)) ≤ k(x, y). This implies that h(x, y) ≤ k(x, y) for all x, y ∈ X . Now let us consider the other inequality. Due to the adjunction property (3), we obtain → Tcos (k(x, y), k(x0 , y)) ≤ k(x, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , y), k(x, x0 )) and → Tcos (k(x, y), k(x0 , x)) ≤ k(y, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , x), k(y, x0 )), from which it follows that → → ∀x, y, x0 ∈ X : k(x, y) ≤ min{ T cos (k(x0 , y), k(x, x0 )), T cos (k(x0 , x), k(y, x0 ))}. Hence by Definition 4, ∀x, y ∈ X : k(x, y) ≤ h(x, y) which ends the proof. For an arbitrary choice of fuzzy membership functions, there is no necessity that the resulting relation (7) implies positive-semidefiniteness and, therefore, a kernel. For an example of a Tcos equivalence which is not a kernel see Appendix A.4. Theorem 9 guarantees only the existence of a representation of the form (5) but it does not tell us how to construct the membership functions µ i . In the following section, we provide examples of fuzzy equivalence relations which yield kernels for any choice of membership functions. 2608 G ENERATING K ERNELS BY F UZZY R ELATIONS 4. Constructing Kernels by Fuzzy Equivalence Relations In the Boolean case, positive-definiteness and equivalence are synonymous, that is, a Boolean relation R : X × X → {0, 1} is positive-definite if and only if R is the indicator function of an equivalence relation ∼ that is, R(x, y) = 1 if x ∼ y and R(x, y) = 0 if x ∼ y. For a proof, see Appendix A.2. This = = =, relationship can be used to obtain an extension to fuzzy relations as given by the next theorem whose proof can be found in the Appendix A.3. Theorem 10 Let X be a non-empty universe of discourse, µ i : X → [0, 1], i ∈ I, I non-empty; then the fuzzy equivalence relation EM : X × X → [0, 1] given by ↔ EM (x, y) = inf T M (µi (x), µi (y)) i∈I is positive-semidefinite. In the following, the most prominent representatives of Archimedean t-norms, the Product TP and the Łukasiewicz t-norm TL , are used to construct positive-semidefinite fuzzy similarity relations. Though the first part can also be derived from a result due to Yaglom (1957) that characterizes isotropic stationary kernels by its spectral representation, here we prefer to present a direct, elementary proof. Compare also Bochner (1955) and Genton (2001). Theorem 11 Let X be a non-empty universe of discourse, ν : X → [0, 1] and let h : [0, 1] → [0, 1] be an isomorphism of the unit interval that can be expanded in the manner of Equation (1), that is h(x) = ∑k ck xk with ck ≥ 0; then the fuzzy equivalence relations EL,h , EP,h : X × X → [0, 1] given by ↔ EL,h (x, y) = h T L h−1 (ν(x)) , h−1 (ν(y)) and ↔ EP,h (x, y) = h T P h−1 (ν(x)) , h−1 (ν(y)) (8) (9) are positive-semidefinite. Proof. 
To prove the positive-definiteness of the two-placed functions E L,h and EP,h given by equations (8) and (9) respectively, we have to show that n n ∑ i, j=1 EL,h (xi , xi ) ci c j ≥ 0, ∑ i, j=1 EP,h (xi , x j ) ci c j ≥ 0 for any n ∈ N and any choice of x1 , . . . , xn ∈ X , respectively. According to an elementary result from Linear Algebra this is equivalent to the assertion that the determinants (1 ≤ m ≤ n) Dm = det (E(xi , x j ))i, j∈{1,...,m} of the minors of the matrix (E(xi , x j ))i, j satisfy ∀m ∈ {1, . . . , n} : Dm ≥ 0, where E denotes either EL,h or EP,h . Recall that the determinant of a matrix is invariant with respect to renaming the indices, that is, if σ : {1, . . . , n} → {1, . . . , n} is a permutation then det [(ai j )i, j ] = det (aσ(i)σ( j) )i, j . 2609 M OSER For convenience, let denote µi = h−1 (ν(xi )). Then, without loss of generality, we may assume that the values µi are ordered monotonically decreasing, that is, µi ≥ µ j for i < j. ↔ → (10) → Case TL : Note that T L (a, b) = min{ T L (a, b), T L (b, a)} = 1 − |a − b|. Then we have to show that for all dimensions n ∈ N, the determinant of E (n) = (1 − |µi − µ j |)i, j∈{1,...,n} is non-negative, that is Due to the assumption (10), we have det[E (n) ] ≥ 0. 1 − |µi − µ j | = 1 − (µi − µ j ) if i ≤ j, 1 − (µ j − µi ) else which yields   . . . 1 − (µ1 − µn−1 ) 1 − (µ1 − µn )  . . . 1 − (µ2 − µn−1 ) 1 − (µ2 − µn )     . . . 1 − (µ3 − µn−1 ) 1 − (µ3 − µn )    (n) E = . . . .. . .   . . .   1 − (µ1 − µn−1 ) 1 − (µ2 − µn−1 ) . . . 1 1 − (µn−1 − µn ) 1 − (µ1 − µn ) 1 − (µ2 − µn ) . . . 1 − (µn−1 − µn ) 1 1 − (µ1 − µ2 ) 1 1 − (µ2 − µ3 ) . . . 1 1 − (µ1 − µ2 ) 1 − (µ1 − µ3 ) . . . Now let us apply determinant-invariant elementary column operations to simplify this matrix by subtracting the column with index i − 1 from the column with index i, i ≥ 2. This yields   1 µ2 − µ1 ... µn−1 − µn−2 µn − µn−1  1 − (µ1 − µ2 ) −(µ2 − µ1 ) . . . µn−1 − µn−2 µn − µn−1     1 − (µ1 − µ3 ) −(µ2 − µ1 ) . . . µn−1 − µn−2 µn − µn−1    ˜ E (n) =  . . . . . .. . . . .   . . . . .   1 − (µ1 − µn−1 ) −(µ2 − µ1 ) . . . −(µn−2 − µn−1 ) µn − µn−1  1 − (µ1 − µn ) −(µ2 − µ1 ) . . . −(µn−2 − µn−1 ) −(µn−1 − µn ) Therefore, α = n ∏(µi−1 − µi ) ≥ 0 (11) i=2 ˜ ˆ det[E (n) ] = det[E (n) ] = α det[En ], where   . . . −1 −1  . . . −1 −1    . . . −1 −1   (n) ˆ E = . . . .. . .   . . .   1 − (µ1 − µn−1 ) +1 . . . +1 −1 1 − (µ1 − µn ) +1 . . . +1 +1 1 1 − (µ1 − µ2 ) 1 − (µ1 − µ3 ) . . . 2610 −1 +1 +1 . . . (12) G ENERATING K ERNELS BY F UZZY R ELATIONS Let us apply Laplacian determinant expansion by minors to the first column of matrix (12), that is n det[A] = ∑ (−1)i+ j ai j det[Ai j ] i=1 where A = (ai j ) is an n × n-matrix, j arbitrarily chosen from {1, . . . , n} and Ai j is the matrix corresponding to the cofactor ai j obtained by canceling out the i-th row and the j-th column from A (see, ˆ for example, Muir, 1960). For n = 1, we get the trivial case det[ E (1) ] = 1. Note that the first and (n) ˆ the last rows of the matrices Ei,1 for 1 < i < n only differ by their signum, consequently the minors ˆ (n) det[Ei,1 ] for 1 < i < n, n ≥ 2, are vanishing, that is, det[Ai,1 ] = 0, for 1 < i < n. Therefore, according to the Laplacian expansion, we get (n) (n) ˆ ˆ ˆ det[E (n) ] = 1 · det[E1,1 ] + (−1)n (1 − (µ1 − µn )) · det[E1,n ]. (13) Observe that (n) ˆ det[E1,1 ] = 2n−2 (n) ˆ det[E1,n ] = (−1)n−1 2n−2 . 
Consequently, Equation (13) simplifies to ˆ det[E (n) ] = 2n−2 1 + (−1)n (−1)n−1 2n−2 (1 − (µ1 − µn )) = 2n−2 (1 − (1 − (µ1 − µn ))) = 2n−2 (µ1 − µn ) ≥ 0 which together with (11) proves the first case. ↔ → → Case TP : First of all, let us compute T P (a, b) = min{ T P (a, b), T L (b, a)}. Hence,  min{ b , a } if a, b > 0,  a b   0 ↔ if a = 0 and b > 0 , T P (a, b) = 0 if b = 0 and a > 0 ,    1 if a = 0 and b = 0 . Again, without loss of generality, let us suppose that the values µ i , i ∈ {1, . . . , n} are ordered monotonically decreasing, that is µ1 ≥ µ2 ≥ . . . ≥ µn . Before checking the general case, let us consider the special case of vanishing µ-values. For this, let us assume for the moment that µi = > 0 if i < i0 , 0 else ↔ ↔ which implies that T P (µi , µ j ) = 0 for i < i0 and j ≥ i0 and T P (µi , µ j ) = 1 for i ≥ i0 and j ≥ i0 . This leads to a decomposition of the matrix ↔ E (n) = T P (µi , µ j ) 2611 ij M OSER such that det[E (n) ] = det[E (i0 −1) ] · det[In−i0 −1 ] where Ik denotes the k × k-matrix with constant entries 1, hence det[In−i0 −1 ] ∈ {0, 1}. Therefore, we may assume that µ1 ≥ µ2 ≥ . . . ≥ µn > 0. Then we have to show that for all dimensions n ∈ N, the determinant of µi µ j , µ j µi E (n) = min i, j∈{1,...,n} is non-negative, that is det[E (n) ] ≥ 0. Consider  1  µ2  µ1  µ3  µ (n) E =  .1  .  .  µn−1  µ1 µn µ1 µ2 µ1 1 µ3 µ2 . . . µn−1 µ2 µn µ2 ... ... ... .. . ... ... Now, multiply the i-th column by −µi+1 /µi and add 1 ≤ i < n, then we get  1 0 ... 2  µ2 ∗ 1 − ... µ1  ∗ ∗ ...   . ˜ .. E (n) =  . . . . . .  ∗ ∗ ... 1−   ∗ ∗ ... µn−1 µ1 µn−1 µ2 µn−1 µ3 µn µ1 µn µ2 µn µ3      . . .  .  µn  µn−1  1 . . . 1 µn µn−1 (14) it to the (i + 1)-th column of matrix (14), 0 0 0 0 0 . . . 0 . . . µn−1 µn−2 2 ∗ 0 1− µn µn−1             2 (15) where ∗ is a placeholder for any real value. By this, the determinant of the matrix in Equation (15) readily turns out to be n−1 µi+1 ˜ det[E (n) ] = det[E (n) ] = ∏ 1 − µi i=1 2 ≥0 which together with Theorem (2) ends the proof. Note that relations (8) and (9) are T -transitive with respect to the corresponding isomorphic Archimedean t-norms, TL,h (x, y) = h(TL (h−1 (x), h−1 (x))) and TP,h (x, y) = h(TP (h−1 (x), h−1 (x))), respectively. 2612 G ENERATING K ERNELS BY F UZZY R ELATIONS Corollary 12 Let X be a non-empty universe of discourse, µ i : X → [0, 1], λi ∈ ]0, 1] with ∑i λi = 1 ˜ ˜ where i ∈ {1, . . . , n}, n ∈ N, then the fuzzy equivalence relations EL , EP : X × X → [0, 1] given by n ↔ ˜ EL (x, y) = ∑ λi T L (µi (x), µi (y)) (16) i=1 and n ↔ ˜ EP (x, y) = ∏ T P (µi (x), µi (y)) λi (17) i=1 are TL - and TP -equivalences, respectively, and kernels. Proof. First of all, let us check the TL -transitivity of formula (16). This can readily be shown by ↔ means of the definition of TL and the TL -transitivity of T L due to the following inequalities: n TL n ↔ i=1 n n ↔ ↔ n ↔ ↔ ∑ λi T L (µi (x), µi (y)) + ∑ λi T L (µi (y), µi (z)) − 1 , 0 i=1 i=1 n max = i=1 i=1 n = i=1 ∑ λi T L (µi (x), µi (y)) + ∑ λi T L (µi (y), µi (z)) − 1, 0 max max ↔ ∑ λi T L (µi (x), µi (y)), ∑ λi T L (µi (y), µi (yz) n ↔ ↔ ∑ λi TL T L (µi (x), µi (y)), ∑ λi T L (µi (y), µi (z)) , 0 i=1 i=1 n max ↔ ∑ λi T L (µi (x), µi (z)), 0 ≤ ≤ = i=1 ↔ λi T L (µi (x), µi (z)). ↔ This, together with the TP -transitivity of T P , proves that the formulas given by (16) and (17) are TL and TP -equivalences, respectively. 
Expanding the factors of formula (17) yields  1 if µi (x) = µi (y) = 0, λi ↔ λi λi (18) T P (µi (x), µi (y)) =  min(µiλi(x),µiλi(y)) else max(µi (x),µi (y)) which by comparing case TP of the proof of Theorem 11 shows that the left-hand side of Equation (18) is positive-semidefinite. As the convex combination and the product are special cases of positive-semidefiniteness preserving functions according to Theorem 1, the functions defined by equations (16) and (17) prove to be again positive-semidefinite and, therefore, kernels. It is interesting to observe that both formulas (16) and (17) can be expressed in the form, f ( τ(x) − τ(y) 1 ), where f : I → [0, 1], I some interval, is a strictly decreasing function, τ : X → I n , I some interval, τ(x) = (τ1 (x), . . . , τn (x)) and τ(x) 1 = ∑n |τi (x)|. Indeed, for Equation (16) let us define i=1 fL : [0, 1] → [0, 1], fL (a) = 1 − a τL : X → [0, 1] , τL (x) = (λ1 µ1 (x), . . . , λn µn (x)) n 2613 M OSER and for Equation (17) and positive membership functions µ i , µi (x) > 0 for all x ∈ X , let us define fP : [0, ∞[→ [0, 1], fP (a) = e−a τP : X → ] − ∞, 1]n , τP (x) = (λ1 ln(µ1 (x)), . . . , λn ln(µn (x))) Therefore, we get ˜ EL (x, y) = 1 − τL (x) − τL (y) ˜ EP (x, y) = e− τP (x)−τP (y) 1 . 1 (19) (20) While formulas (19) and (20) provide a geometrical interpretation by means of the norm . 1 , the corresponding formulas (16) and (17) yield a semantical model of the assertion “IF x is equal to y with respect to feature µ1 AND . . . AND x is equal to y with respect to feature µn THEN x is equal to y” as aggregation of biimplications in terms of fuzzy logic. While in the former case, the aggregation has some compensatory effect, the latter is just a conjunction in terms of the Product triangular norm. For details on aggregation operators see, for example, Saminger et al. (2002) and Calvo et al. (2002). The formulas (16) and (17) coincide for the following special case. If the membership functions µi are indicator functions of sets Ai ⊆ X which form a partition of X , then the kernels (16) and (17) reduce to the indicator function characterizing the Boolean equivalence relation induced by this partition {A1 , . . . , An }. The formulas (16) and (17) for general membership functions therefore provide kernels which can be interpreted to be induced by a family of fuzzy sets and, in particular, by fuzzy partitions, that is, families of fuzzy sets fulfilling some criteria which extend the axioms for a Boolean partition in a many-valued logical sense. For definitions and further details on fuzzy partitions see, for ¨ example, De Baets and Mesiar (1998), Demirci (2003) and H oppner and Klawonn (2003). It is a frequently used paradigm that the decision boundaries for a classification problem lie between clusters rather than intersecting them. Due to this cluster hypothesis, the problem of designing kernels based on fuzzy partitions is closely related to the problem of learning kernels from unlabeled data. For further details on semi-supervised learning see, for example, Seeger (2002), Chapelle et al. (2003) and T. M. Huang (2006). It is left to future research to explore this relationship to the problem of learning from labeled and unlabeled data and related concepts like covariance kernels. 5. Conclusion In this paper, we have presented a novel view on kernels from a fuzzy logical point of view. 
Particularly, the similarity-measure aspect of a kernel is addressed and investigated by means of the so-called T -transitivity which is characteristic for fuzzy equivalence relations. As a consequence, we derived that a large class of kernels can be represented in a way that is commonly used for representing fuzzy rule bases. In addition to this proof for the existence of such a representation, constructive examples are presented. It is the idea of this research to look for a combination of knowledge-based strategies with kernel-based methods in order to facilitate a more flexible designing process of kernels which also allows to incorporate prior knowledge. Further research aims at 2614 G ENERATING K ERNELS BY F UZZY R ELATIONS analyzing the behavior of kernels constructed in this way when applied in the various kernel methods like support vector machines, kernel principal components analysis and others. In particular, it is intended to focus on the problem of learning kernels from unlabeled data where the fuzzy partitions are induced by appropriate clustering principles. Acknowledgments Bernhard Moser gratefully acknowledges partial support by the Austrian Government, the State of Upper Austria, and the Johannes Kepler University Linz in the framework of the Kplus Competence Center Program. Furthermore special thanks go to the anonymous reviewers who gave helpful suggestions and to Felix Kossak for careful proof-reading. Appendix A. For sake of completeness the following sections provide proofs regarding Theorem 8, the characterization of kernels in the Boolean case and the construction of kernels by means of the minimum t-norm TM . Furthermore, in Section A.4 an example of a non-positive-semidefinite Tcos -equivalence is given. A.1 Proof of Theorem 8 Let us start with the analysis of 3-dimensional matrices. Lemma 13 Let M = (mi j )i j ∈ [0, 1]3×3 be a 3 × 3 symmetric matrix with mii = 1, i = 1, 2, 3; then M is positive-semidefinite iff for all i, j, k ∈ {1, 2, 3} there holds mi j m jk − 1 − m2j i 1 − m2 ≤ mik jk Proof. For simplicity, let a = m1,2 , b = m1,3 and c = m2,3 . Then the determinant of M, Det(M), is a function of the variables a, b, c given by D(a, b, c) = 1 + 2abc − a2 − b2 − c2 . For any choice of a, b, the quadratic equation D(a, b, c) = 0 can be solved for c, yielding two solutions c1 = c1 (a, b) and c2 = c2 (a, b) as functions of a and b, c1 (a, b) = ab − c2 (a, b) = ab + 1 − a2 1 − a2 1 − b2 1 − b2 . Obviously, for all |a| ≤ 1 and |b| ≤ 1, the values c1 (a, b) and c2 (a, b) are real. By substituting a = cos α and b = cos(β) with α, β ∈ [0, π ], it becomes readily clear that 2 c1 (a, b) = c1 (cos(α), cos(β)) = cos(α) cos(β) − sin(α) sin(β) = cos(α + β) ∈ [−1, 1] 2615 M OSER and, analogously, c2 (a, b) = c2 (cos(α), cos(β)) = cos(α) cos(β) + sin(α) sin(β) = cos(α − β) ∈ [−1, 1]. As for all a, b ∈ [−1, 1] the determinant function Da,b (c) := D(a, b, c) is quadratic in c with negative coefficient for c2 , there is a uniquely determined maximum at c0 (a, b) = ab. Note that for all a, b ∈ [−1, 1], we have c1 (a, b) ≤ c0 (a, b) ≤ c2 (a, b) and D(a, b, c0 (a, b)) = 1 + 2ab(ab) − a2 − b2 − (ab)2 = (1 − a2 )(1 − b2 ) ≥ 0. Therefore, D(a, b, c) ≥ 0 if and only if c ∈ [c1 (a, b), c2 (a, b)]. Recall from linear algebra that by renaming the indices, the determinant does not change. Therefore, without loss of generality, we may assume that a ≥ b ≥ c. For convenience, let Q = {(x, y, z) ∈ [0, 1]3 |x ≥ y ≥ z}. 
Then, obviously, for any choice of a, b ∈ [0, 1] there holds (a, b, c1 (a, b)) ∈ Q. Elementary algebra shows that (a, b, c2 (a, b)) ∈ Q is only the case for a = b = 1. As for a = b = 1 the two solutions c1 , c2 coincide, that is, c1 (1, 1) = c2 (1, 1) = 1, it follows that for any choice of (a, b, c) ∈ Q, there holds D(a, b, c) ≥ 0 if and only if c1 (a, b) = ab − 1 − a2 1 − b2 ≤ c. (21) If (a, b, c) ∈ Q, then the inequality (21) is trivially satisfied which together with (21) proves the lemma Now Theorem 8 immediately follows from Definition (1), Lemma (13) and the characterizing inequality (21). A.2 Characterization of Kernels in the Boolean Case ¨ The following lemma and proposition can also be found as an exercise in Sch olkopf and Smola (2002). Lemma 14 Let ∼ be an equivalence relation on X and let k : X × X → {0, 1} be induced by ∼ via k(x, y) = 1 if and only if x ∼ y; then k is a kernel. Proof. By definition of positive-definiteness, let us consider an arbitrary sequence of elements x1 , . . . , xn . Then there are at most n equivalence classes Q1 , . . . , Qm on the set of indices {1, . . . , n}, S / m ≤ n, where i=1,...,m Qi = {1, . . . , n} and Qi ∩ Q j = 0 for i = j. Note that k(xi , x j ) = 0 if the indices 2616 G ENERATING K ERNELS BY F UZZY R ELATIONS i, j belong to different equivalence classes. Then, for any choice of reals c 1 , . . . , cn , we obtain ∑ ci c j k(xi , x j ) m = i, j ∑ ∑ ci c j k(xi , x j ) p=1 i, j∈Q p m = ∑ ∑ p=1 i, j∈Q p ci c j · 1 2 m = ∑ ∑ ci p=1 i∈Q p ≥ 0 Proposition 15 k : X × X → {0, 1} with k(x, x) = 1 for all x ∈ X is a kernel if and only if it is induced by an equivalence relation. Proof. It only remains to be shown that if k is a kernel, then it is the indicator function of an equivalence relation, that is, it is induced by an equivalence relation. If k is a kernel, according to Lemma 13, for all x, y, z ∈ X , it has to satisfy Tcos (k(x, y), k(y, z)) ≤ k(x, z), which implies, k(x, y) = 1, k(y, z) = 1 =⇒ k(x, z) = 1. Obviously, we have k(x, x) = 1 and k(x, y) = k(y, x) due to the reflexivity and symmetry assumption of k, respectively. A.3 Constructing Kernels by TM For convenience let us recall the basic notion of an α-cut from fuzzy set theory: Definition 16 Let X be a non-empty set and µ : X → [0, 1]; then [µ]α = {x ∈ X | µ(x) ≥ α} is called the α-cut of the membership function µ. Lemma 17 k : X × X → [0, 1] is a TM -equivalence if and only if all α-cuts of k are Boolean equivalence relations. Proof. (i) Let us assume that k is a TM -equivalence. Let α ∈ [0, 1], then by definition, [k]α = {(x, y) ∈ X × X | k(x, y) ≥ α}. In order to show that [k]α is a Boolean equivalence, the axioms for reflexivity, symmetry and transitivity have to be shown. Reflexivity and symmetry are trivially satisfied as for all x, y ∈ X , there holds by assumption that k(x, x) = 1 and k(x, y) = k(y, x). In order to show transitivity, let us consider (x, y), (y, z) ∈ [k]α , that means k(x, y) ≥ α and k(y, z) ≥ α; then by the TM -transitivity assumption it follows that α ≤ min(k(x, y), k(y, z)) ≤ k(x, z), hence (x, z) ∈ [k]α . 2617 M OSER (ii) Suppose now that all α-cuts of k are Boolean equivalence relations. Then, in particular, [k] α with α = 1 is reflexive, hence k(x, x) = 1 for all x ∈ X . The symmetry of k follows from the fact that for all α ∈ [0, 1] and pairs (x, y) ∈ [k]α , by assumption, we have (y, x) ∈ [k]α . In order to show the TM -transitivity property, let us consider arbitrarily chosen elements x, y, z ∈ X . 
Let α = min(k(x, y), k(y, z)); then by the transitivity assumption of [k] α , it follows that (x, z) ∈ [k]α , consequently k(x, z) ≥ α = min(k(x, y), k(y, z)). Proposition 18 If k : X × X → [0, 1] is a TM -equivalence then it is positive-semidefinite. Proof. Choose arbitrary elements x1 , . . . , xn ∈ X and consider the set of values which are taken by all combinations k(xi , x j ), i, j ∈ {1, . . . , n} and order them increasingly, that is k(xi , x j )| i, j ∈ {1, . . . , n}} = {α1 , . . . , αm , where 0 ≤ α1 ≤ · · · αm ≤ 1. Observe that for all pairs (xi , x j ), i, j ∈ {1, . . . , n} there holds m k(xi , x j ) = ∑ (αv − αv−1 )1[k] αv v=2 (xi , x j ) + α1 1[k]α1 (xi , x j ) showing that on the set {x1 , . . . , xn } × {x1 , . . . , xn }, the function k is a linear combination of indicator functions of Boolean equivalences (which are positive-semidefinite by Proposition 15) with nonnegative coefficients and, consequently, it has to be positive semidefinite. A.4 Example of a Non-Positive-Semidefinite Tcos -Equivalence For dimensions n > 3, the Tcos -transitivity is no longer sufficient to guarantee positive semi(n) definiteness. Consider, for example An = (ai j )i j where  λ  (n) ai j = 1   0 if min(i, j) = 1, max(i, j) > 1 , if i = j, else . (22) √ (n) (n) (n) Choose λ = 1/ 2, then Tcos (λ, λ) = 0, hence we have Tcos (ai j , a jk ) ≤ aik for all indices i, j, k ∈ 1, . . . , n. As det(An ) < 0 for n > 3, the matrix An cannot be positive-semidefinite though the Tcos transitivity conditions are satisfied. References S. Bochner. Harmonic Analysis and the Theory of Probability. University of California Press, Los Angeles, California, 1955. U. Bodenhofer. A note on approximate equality versus the Poincar´ paradox. Fuzzy Sets and e Systems, 133(2):155–160, 2003. 2618 G ENERATING K ERNELS BY F UZZY R ELATIONS D. Boixader and J. Jacas. T -indistinguishability operators and approximate reasoning via CRI. In D. Dubois, E. P. Klement, and H. Prade, editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 255–268. Kluwer Academic Publishers, Dordrecht, 1999. A. Pinkus C. H. FitzGerald, C.A. Micchelli. Functions that preserve families of positive semidefinite matrices. Linear Alg. and Appl., 221:83–102, 1995. T. Calvo, G. Mayor, and R. Mesiar, editors. Aggregation Operators, volume 97 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, 2002. ¨ O. Chapelle, J. Weston, and B. Scholkopf. Cluster kernels for semi-supervised learning. volume 15 of NIPS. 2003. B. De Baets and R. Mesiar. T -partitions. Fuzzy Sets and Systems, 97:211–223, 1998. M. Demirci. On many-valued partitions and many-valued equivalence relations. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 11(2):235–253, 2003. D. Dubois and H. Prade. A review of fuzzy set aggregation connectives. Inform. Sci., 36:85–121, 1985. M. G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299–312, 2001. S. Gottwald. Fuzzy set theory with t-norms and Φ-operators. In A. Di Nola and A. G. S. Ventre, editors, The Mathematics of Fuzzy Systems, volume 88 of Interdisciplinary Systems Research, ¨ pages 143–195. Verlag TUV Rheinland, K¨ ln, 1986. o S. Gottwald. Fuzzy Sets and Fuzzy Logic. Vieweg, Braunschweig, 1993. U. H¨ hle. Fuzzy equalities and indistinguishability. In Proc. 1st European Congress on Fuzzy and o Intelligent Technologies, volume 1, pages 358–363, Aachen, 1993. U. H¨ hle. 
The Poincar´ paradox and non-classical logics. In D. Dubois, E. P. Klement, and H. Prade, o e editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 7–16. Kluwer Academic Publishers, Dordrecht, 1999. F. H¨ ppner and F. Klawonn. Improved fuzzy partitions for fuzzy regression models. Internat. J. o Approx. Reason., 32:85–102, 2003. F. H¨ ppner, F. Klawonn, and P. Eklund. Learning indistinguishability from data. Soft Computing, 6 o (1):6–13, 2002. J. Jacas. On the generators of T -indistinguishability operators. Stochastica, 12:49–63, 1988. I. T. Jolliffe. Principal Component Analysis. Springer Verlag, New York, 1986. E. P. Klement, R. Mesiar, and E. Pap. Triangular Norms, volume 8 of Trends in Logic. Kluwer Academic Publishers, Dordrecht, 2000. 2619 M OSER R. Kruse, J. Gebhardt, and F. Klawonn. Fuzzy-Systeme. B. G. Teubner, Stuttgart, 1993. R. Kruse, J. Gebhardt, and F. Klawonn. Foundations of Fuzzy Systems. John Wiley & Sons, New York, 1994. C. H. Ling. Representation of associative functions. Publ. Math. Debrecen, 12:189–212, 1965. B. Moser. On the t-transitivity of kernels. Fuzzy Sets and Systems, 157:1787–1796, 2006. B. Moser. A New Approach for Representing Control Surfaces by Fuzzy Rule Bases. PhD thesis, Johannes Kepler Universit¨ t Linz, October 1995. a T. Muir. A Treatise on the Theory of Determinants. Dover, New York, 1960. H. Poincar´ . La Science et l’Hypoth´ se. Flammarion, Paris, 1902. e e H. Poincar´ . La Valeur de la Science. Flammarion, Paris, 1904. e S. Saminger, R. Mesiar, and U. Bodenhofer. Domination of aggregation operators and preservation of transitivity. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 10(Suppl.):11–35, 2002. B. Sch¨ lkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, 2002. o ¨ B. Sch¨ lkopf, A. J. Smola, and K. R. Muller. Nonlinear component analysis as a kernel eigenvalue o problem. Neural Computation, 10:1299–1319, 1998. B. Schweizer and A. Sklar. Associative functions and statistical triangle inequalities. Publ. Math. Debrecen, 8:169–186, 1961. B. Schweizer and A. Sklar. Probabilistic Metric Spaces. North-Holland, Amsterdam, 1983. M. Seeger. Covariance kernels from bayesian generative models. Neural Information Processing Systems, 14:905–912, 2002. I. Kopriva T. M. Huang, V. Kecman. Kernel Based Algorithms for Mining Huge Data Sets, Supervised, Semi-supervised, and Unsupervised Learning. Springer-Verlag, Berlin, 2006. E. Trillas and L. Valverde. An inquiry into indistinguishability operators. In H. J. Skala, S. Termini, and E. Trillas, editors, Aspects of Vagueness, pages 231–256. Reidel, Dordrecht, 1984. E. Trillas, S. Cubillo, and E. Casti˜ eira. Menger and Ovchinnikov on indistinguishabilities revisited. n Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 7(3):213–218, 1999. L. Valverde. On the structure of F-indistinguishability operators. Fuzzy Sets and Systems, 17(3): 313–328, 1985. A. M. Yaglom. Some classes of random fields in n-dimensional space, related to stationary random processes. Theory of Probability and its Applications, 2:273–320, 1957. L. A. Zadeh. Similarity relations and fuzzy orderings. Inform. Sci., 3:177–200, 1971. 2620

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 It is this property which makes kernels predestinated to carry linear models of learning, optimization or classification strategies over to non-linear variants. [sent-4, score-0.15]

2 Following this idea, various kernel-based methods like support vector machines or kernel principal component analysis have been conceived which prove to be successful for machine learning, data mining and computer vision applications. [sent-5, score-0.063]

3 When applying a kernel-based method a central question is the choice and the design of the kernel function. [sent-6, score-0.038]

4 This paper provides a novel view on kernels based on fuzzy-logical concepts which allows to incorporate prior knowledge in the design process. [sent-7, score-0.15]

5 It is demonstrated that kernels mapping to the unit interval with constant one in its diagonal can be represented by a commonly used fuzzylogical formula for representing fuzzy rule bases. [sent-8, score-0.811]

6 This means that a great class of kernels can be represented by fuzzy-logical concepts. [sent-9, score-0.15]

7 Apart from this result, which only guarantees the existence of such a representation, constructive examples are presented and the relation to unlabeled learning is pointed out. [sent-10, score-0.068]

8 Keywords: kernel, triangular norm, T-transitivity, fuzzy relation, residuum. [sent-11, score-0.785]

9 Recently in Moser (2006) it was shown that kernels with values from the unit interval can be interpreted as fuzzy equivalence relations, motivated by the idea that kernels express a kind of similarity. [sent-16, score-1.191]

10 This means that the concept of fuzzy equivalence relations, or synonymously fuzzy similarity relations, is more general than that of kernels, provided only values in the unit interval are considered. [sent-17, score-1.483]

11 Fuzzy equivalence relations distinguish from Boolean equivalence relations by a many-valued extension of transitivity which can be interpreted as many-valued logical model of the statement “IF x is similar to y AND y is similar to z THEN x is similar to z”. [sent-18, score-0.619]

12 In contrast to the Boolean case, in many-valued logics the set of truth values is extended such that also assertions, for example, whether two elements x and y are similar, can be treated as a matter of degree. [sent-19, score-0.078]

13 The standard model for the set of (quasi) truth values of fuzzy logic and other many-valued logical systems is the unit interval. [sent-20, score-0.786]

14 If E(x, y) represents the (quasi) truth value of the statement that x is similar to y, then the many-valued version of transitivity is modeled by T(E(x, y), E(y, z)) ≤ E(x, z), where T is a so-called triangular norm which is an extension of the Boolean conjunction. [sent-22, score-0.205]

15 This many-valued concept for transitivity is called T -transitivity. [sent-23, score-0.125]

16 For a survey on triangular norms see, for example, Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. [sent-24, score-0.114]

17 (2000), and for fuzzy equivalence relations and T-transitivity see, for example, Bodenhofer (2003), Höhle (1993), Höhle (1999), Klement et al. [sent-25, score-0.949]

18 Based on the semantics of fuzzy logic, this approach allows the incorporation of knowledge-based models into the design of kernels. [sent-27, score-0.661]

19 From this perspective, the most interesting mathematical question is how positive-semidefinite fuzzy equivalence relations can be characterized or at least constructed under some circumstances. [sent-28, score-0.891]

20 At least for some special cases, proofs are provided in Section 4, which motivate further research aiming at establishing a more general theory on the positive-definiteness of fuzzy equivalence relations. [sent-29, score-0.797]

21 These cases are based on the most prominent representatives of triangular norms, that is the Minimum, the Product and the Łukasiewicz t-norm. [sent-30, score-0.105]

22 First of all, in Section 2, some basic prerequisites concerning kernels and fuzzy relations are outlined. [sent-32, score-0.93]

23 Finally, in contrast to the pure existence theorem of Section 3, in Section 4 constructive examples of fuzzy equivalence relations are provided which are proven to be kernels. [sent-34, score-0.891]

24 Prerequisites This section summarizes definitions and facts from the theory of kernels as well as from fuzzy set theory which are needed later on. [sent-37, score-0.811]

25 1 Kernels and Positive-Semidefiniteness Preserving Functions There is an extensive literature concerning kernels and kernel-based methods like support vector machines or kernel principal component analysis especially in the machine learning, data mining ¨ and computer vision communities. [sent-39, score-0.188]

26 For completeness let us recall the basic definition for kernels and positive-semidefiniteness. [sent-42, score-0.15]

27 A real-valued function k : X × X → R is said to be a kernel iff it is symmetric, that is, k(x, y) = k(y, x) for all x, y ∈ X, and positive-semidefinite, that is, ∑_{i,j=1}^n c_i c_j k(x_i, x_j) ≥ 0 for any n ∈ N, any choice of x_1, . . . , x_n ∈ X and any choice of c_1, . . . , c_n ∈ R. [sent-44, score-0.064]
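
As a quick illustration of Definition 1 (not part of the original paper), the sketch below builds a Gram matrix for a candidate two-place function and checks symmetry and positive-semidefiniteness via eigenvalues; the Gaussian kernel used as input is only an assumed example.

```python
import numpy as np

def gram_matrix(k, xs):
    """Gram matrix (k(x_i, x_j))_{i,j} for a two-place function k."""
    return np.array([[k(xi, xj) for xj in xs] for xi in xs])

def is_kernel(k, xs, tol=1e-10):
    """Check symmetry and positive-semidefiniteness on a finite sample."""
    K = gram_matrix(k, xs)
    symmetric = np.allclose(K, K.T)
    psd = np.min(np.linalg.eigvalsh(K)) >= -tol   # all eigenvalues >= 0, up to roundoff
    return symmetric and psd

# assumed example: a Gaussian kernel, which maps into (0, 1] and has constant 1 on its diagonal
gauss = lambda x, y: np.exp(-(x - y) ** 2)
print(is_kernel(gauss, np.linspace(0.0, 2.0, 7)))   # True
```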

28 i, One way to generate new kernels from known kernels is to apply operations which preserve the positive-semidefiniteness property. [sent-51, score-0.3]

29 , kn (x, y)) 2604 G ENERATING K ERNELS BY F UZZY R ELATIONS is a kernel for any choice of kernels k1 , . [sent-58, score-0.188]
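
Theorem 2, cited in the preceding fragment, says that composing kernels with the real restriction of an entire function having nonnegative power-series coefficients again yields a kernel. The following is my own numerical illustration under that reading; the base kernel and the polynomial f are assumptions.

```python
import numpy as np

xs = np.linspace(0.0, 3.0, 8)
K1 = np.exp(-np.subtract.outer(xs, xs) ** 2)      # Gram matrix of an assumed base kernel k1

f = lambda t: 1.0 + t + 0.5 * t**2                # entire function with nonnegative coefficients
K = f(K1)                                         # entrywise: Gram matrix of f(k1(x, y))

print(np.min(np.linalg.eigvalsh(K1)) >= -1e-10)   # base kernel is positive-semidefinite
print(np.min(np.linalg.eigvalsh(K)) >= -1e-10)    # so is the transformed one, as Theorem 2 predicts
```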

30 A many-valued model of an implication is provided by the so-called residuum given by →T(a, b) = sup{c ∈ [0, 1] | T(a, c) ≤ b} (2), where T is a left-continuous t-norm. [sent-97, score-0.044]

31 Tables 1 and 2 list examples of t-norms with their induced residuum →T. [sent-102, score-0.076]
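
A minimal sketch of the four t-norms of Table 1 and their residua from Table 2, plus a brute-force evaluation of Equation (2) as a sanity check; all function names here are mine.

```python
import numpy as np

# t-norms from Table 1
t_cos = lambda a, b: max(a * b - np.sqrt(1 - a**2) * np.sqrt(1 - b**2), 0.0)
t_L   = lambda a, b: max(a + b - 1.0, 0.0)    # Łukasiewicz
t_P   = lambda a, b: a * b                    # Product
t_M   = lambda a, b: min(a, b)                # Minimum

# induced residua from Table 2 (value 1 whenever a <= b)
r_cos = lambda a, b: np.cos(np.arccos(b) - np.arccos(a)) if a > b else 1.0
r_L   = lambda a, b: min(b - a + 1.0, 1.0)
r_P   = lambda a, b: b / a if a > b else 1.0
r_M   = lambda a, b: b if a > b else 1.0

def residuum(T, a, b, grid=np.linspace(0, 1, 10001)):
    """Brute-force evaluation of Equation (2): sup{c in [0,1] | T(a, c) <= b}."""
    cs = grid[[T(a, c) <= b + 1e-12 for c in grid]]
    return cs.max() if cs.size else 0.0

print(round(residuum(t_L, 0.7, 0.4), 3), round(r_L(0.7, 0.4), 3))   # 0.7 0.7
```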

32 For instance, let us consider two real numbers to be indistinguishable if and only if they differ by at most a certain bound ε > 0, this is modeled by the relation ∼ ε given by x ∼ε y :⇔ |x−y| < ε, ε > 0, x, y ∈ R. [sent-107, score-0.039]

33 Note that the relation ∼ε is not transitive and, therefore, not an equivalence relation. [sent-108, score-0.175]

34 The transitivity requirement turns out to be too strong for this example. [sent-109, score-0.125]

35 The problem of identification and transitivity in the context of similarity of physical objects was pointed out early and discussed philosophically by Poincaré (1902) and Poincaré (1904). [sent-110, score-0.125]

36 In the framework of fuzzy logic, the way to overcome this problem is to model similarity by fuzzy relations based on a many-valued concept of transitivity, see Bodenhofer (2003), Höhle (1993), Höhle (1999), Klement et al. [sent-111, score-1.474]

37 Theorem 7 Let T be a left-continuous t-norm, ↔T its induced biimplication, µ_i : X → [0, 1], i ∈ I, I non-empty; then E : X × X → [0, 1] given by E(x, y) = inf_{i∈I} ↔T(µ_i(x), µ_i(y)) (5) is a T-equivalence relation. [sent-125, score-0.058]
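
The construction (5) can be tried out directly. The sketch below assumes the Łukasiewicz t-norm, whose biimplication reduces to ↔TL(a, b) = 1 − |a − b|, and two made-up membership functions, then spot-checks TL-transitivity numerically.

```python
import numpy as np

t_L  = lambda a, b: max(a + b - 1.0, 0.0)
bi_L = lambda a, b: 1.0 - abs(a - b)    # Łukasiewicz biimplication min(->T(a,b), ->T(b,a))

# two assumed membership functions on X = [0, 1]
mus = [lambda x: x, lambda x: np.exp(-3.0 * x)]

def E(x, y):
    """E(x, y) = inf_i <->T_L(mu_i(x), mu_i(y)), the construction of Theorem 7 / Equation (5)."""
    return min(bi_L(mu(x), mu(y)) for mu in mus)

# numerical spot-check of T_L-transitivity: T_L(E(x, y), E(y, z)) <= E(x, z)
pts = np.random.default_rng(0).uniform(0.0, 1.0, size=30)
print(all(t_L(E(x, y), E(y, z)) <= E(x, z) + 1e-12
          for x in pts for y in pts for z in pts))   # True
```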

38 In this section, we start with the result that any kernel k : X × X → [0, 1] with k(x, x) = 1 for all x ∈ X is T -transitive and, therefore, a fuzzy equivalence relation. [sent-131, score-0.835]

39 Theorem 8 Any kernel k : X × X → [0, 1] with k(x, x) = 1 is (at least) Tcos-transitive, where Tcos(a, b) = max{ab − √(1 − a²)·√(1 − b²), 0}. (6) [sent-134, score-0.038]
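
A small numerical illustration of Theorem 8 (my own sketch, not from the paper): the Gaussian kernel maps into (0, 1] with k(x, x) = 1, so on any finite sample its values should satisfy the Tcos-transitivity inequality.

```python
import numpy as np

def t_cos(a, b):
    return max(a * b - np.sqrt(1.0 - a**2) * np.sqrt(1.0 - b**2), 0.0)

k = lambda x, y: np.exp(-(x - y) ** 2)   # assumed example kernel with k(x, x) = 1

xs = np.linspace(-2.0, 2.0, 15)
violations = [(x, y, z) for x in xs for y in xs for z in xs
              if t_cos(k(x, y), k(y, z)) > k(x, z) + 1e-12]
print(len(violations))   # 0, in line with Theorem 8
```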

40 The nomenclature is motivated by the fact that the triangular norm defined by Equation (6) is an Archimedean t-norm generated by the arccosine function as its additive generator. [sent-135, score-0.08]

41 From this result, the following existence theorem can be derived, which guarantees that any kernel under consideration can be represented by the fuzzy-logical formula given by (5). [sent-136, score-0.038]

42 In fuzzy systems, this formula is commonly used for modeling rule bases (see, for example, Kruse et al. [sent-137, score-0.661]

43 For convenience let us denote ↔ h(x, y) = inf T cos (µx0 (x), µx0 (y)), x0 ∈X which is equivalent to ↔ h(x, y) = inf T cos (k(x0 , x), k(x0 , y)). [sent-142, score-0.428]

44 x0 ∈X According to Theorem 8, k is Tcos -transitive, that is, ↔ ∀x0 , x, y ∈ X : T cos (k(x0 , x), k(x0 , y)) ≤ k(x, y). [sent-143, score-0.188]

45 For an arbitrary choice of fuzzy membership functions, there is no necessity that the resulting relation (7) implies positive-semidefiniteness and, therefore, a kernel. [sent-148, score-0.756]

46 For an example of a Tcos equivalence which is not a kernel see Appendix A. [sent-149, score-0.174]

47 Theorem 9 guarantees only the existence of a representation of the form (5) but it does not tell us how to construct the membership functions µ i . [sent-151, score-0.056]

48 In the following section, we provide examples of fuzzy equivalence relations which yield kernels for any choice of membership functions. [sent-152, score-1.097]

49 This = = =, relationship can be used to obtain an extension to fuzzy relations as given by the next theorem whose proof can be found in the Appendix A. [sent-157, score-0.755]

50 Theorem 10 Let X be a non-empty universe of discourse, µ i : X → [0, 1], i ∈ I, I non-empty; then the fuzzy equivalence relation EM : X × X → [0, 1] given by ↔ EM (x, y) = inf T M (µi (x), µi (y)) i∈I is positive-semidefinite. [sent-159, score-0.912]
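
Theorem 10 can be probed numerically as follows; the membership functions and sample points are assumptions, and ↔TM(a, b) equals 1 for a = b and min(a, b) otherwise (from Table 2).

```python
import numpy as np

def bi_M(a, b):
    """Biimplication of the minimum t-norm: 1 if a equals b, min(a, b) otherwise."""
    return 1.0 if a == b else min(a, b)

# assumed membership functions on X = R
mus = [lambda x: 1.0 / (1.0 + x**2), lambda x: float(np.clip(x, 0.0, 1.0))]

def E_M(x, y):
    return min(bi_M(mu(x), mu(y)) for mu in mus)

xs = np.linspace(-1.5, 1.5, 12)
K = np.array([[E_M(x, y) for y in xs] for x in xs])
print(np.min(np.linalg.eigvalsh(K)) >= -1e-10)   # True: the Gram matrix of E_M is PSD
```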

51 In the following, the most prominent representatives of Archimedean t-norms, the Product TP and the Łukasiewicz t-norm TL , are used to construct positive-semidefinite fuzzy similarity relations. [sent-160, score-0.686]

52 Though the first part can also be derived from a result due to Yaglom (1957) that characterizes isotropic stationary kernels by its spectral representation, here we prefer to present a direct, elementary proof. [sent-161, score-0.186]

53 To prove the positive-definiteness of the two-placed functions E L,h and EP,h given by equations (8) and (9) respectively, we have to show that n n ∑ i, j=1 EL,h (xi , xi ) ci c j ≥ 0, ∑ i, j=1 EP,h (xi , x j ) ci c j ≥ 0 for any n ∈ N and any choice of x1 , . [sent-165, score-0.052]

54 According to an elementary result from Linear Algebra this is equivalent to the assertion that the determinants (1 ≤ m ≤ n) Dm = det (E(xi , x j ))i, j∈{1,. [sent-169, score-0.253]

55 ,m} of the minors of the matrix (E(xi , x j ))i, j satisfy ∀m ∈ {1, . [sent-172, score-0.038]

56 Recall that the determinant of a matrix is invariant with respect to renaming the indices, that is, if σ : {1, . [sent-176, score-0.111]

57 , n} is a permutation then det [(ai j )i, j ] = det (aσ(i)σ( j) )i, j . [sent-182, score-0.434]

58 Then we have to show that for all dimensions n ∈ N, the determinant of E (n) = (1 − |µi − µ j |)i, j∈{1,. [sent-186, score-0.082]

59 (12) G ENERATING K ERNELS BY F UZZY R ELATIONS Let us apply Laplacian determinant expansion by minors to the first column of matrix (12), that is n det[A] = ∑ (−1)i+ j ai j det[Ai j ] i=1 where A = (ai j ) is an n × n-matrix, j arbitrarily chosen from {1, . [sent-284, score-0.149]

60 Then we have to show that for all dimensions n ∈ N, the determinant of µi µ j , µ j µi E (n) = min i, j∈{1,. [sent-309, score-0.082]

61 By this, the determinant of the matrix in Equation (15) readily turns out to be det[Ẽ^(n)] = det[E^(n)] = ∏_{i=1}^{n−1} (1 − (µ_{i+1}/µ_i)²) ≥ 0, which together with Theorem 2 ends the proof. [sent-375, score-0.082]
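
The closed form of this determinant is easy to double-check numerically; a throwaway sketch with arbitrarily chosen, monotonically decreasing values µi:

```python
import numpy as np

mu = np.array([0.9, 0.7, 0.55, 0.3, 0.1])                  # monotonically decreasing, all > 0
E = np.minimum.outer(mu, mu) / np.maximum.outer(mu, mu)    # entries min(mu_i/mu_j, mu_j/mu_i)
closed_form = np.prod(1.0 - (mu[1:] / mu[:-1]) ** 2)
print(np.isclose(np.linalg.det(E), closed_form))           # True
```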

62 Note that relations (8) and (9) are T -transitive with respect to the corresponding isomorphic Archimedean t-norms, TL,h (x, y) = h(TL (h−1 (x), h−1 (x))) and TP,h (x, y) = h(TP (h−1 (x), h−1 (x))), respectively. [sent-376, score-0.094]

63 2612 G ENERATING K ERNELS BY F UZZY R ELATIONS Corollary 12 Let X be a non-empty universe of discourse, µ i : X → [0, 1], λi ∈ ]0, 1] with ∑i λi = 1 ˜ ˜ where i ∈ {1, . [sent-377, score-0.05]

64 . . . , n}, n ∈ N; then the fuzzy equivalence relations Ẽ_L, Ẽ_P : X × X → [0, 1] given by Ẽ_L(x, y) = ∑_{i=1}^n λ_i ↔T_L(µ_i(x), µ_i(y)) (16) and Ẽ_P(x, y) = ∏_{i=1}^n ↔T_P(µ_i(x), µ_i(y))^{λ_i} (17) are T_L- and T_P-equivalences, respectively, and kernels. [sent-380, score-0.891]
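
A sketch of formulas (16) and (17) for concrete, assumed membership functions and weights, together with an eigenvalue check that both Gram matrices come out positive-semidefinite, as Corollary 12 predicts:

```python
import numpy as np

bi_L = lambda a, b: 1.0 - abs(a - b)        # Łukasiewicz biimplication
def bi_P(a, b):                             # Product biimplication
    if a == 0.0 and b == 0.0:
        return 1.0
    if a == 0.0 or b == 0.0:
        return 0.0
    return min(a / b, b / a)

mus  = [lambda x: np.exp(-x**2), lambda x: 1.0 / (1.0 + abs(x))]   # assumed features
lams = [0.4, 0.6]                                                  # lambda_i > 0, summing to 1

def E_L(x, y):   # Equation (16): convex combination of Łukasiewicz biimplications
    return sum(l * bi_L(mu(x), mu(y)) for l, mu in zip(lams, mus))

def E_P(x, y):   # Equation (17): weighted product of Product biimplications
    return float(np.prod([bi_P(mu(x), mu(y)) ** l for l, mu in zip(lams, mus)]))

xs = np.linspace(-2.0, 2.0, 10)
for E in (E_L, E_P):
    K = np.array([[E(x, y) for y in xs] for x in xs])
    print(np.min(np.linalg.eigvalsh(K)) >= -1e-10)   # True, True
```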

65 ↔ This, together with the TP -transitivity of T P , proves that the formulas given by (16) and (17) are TL and TP -equivalences, respectively. [sent-384, score-0.044]

66 It is interesting to observe that both formulas (16) and (17) can be expressed in the form, f ( τ(x) − τ(y) 1 ), where f : I → [0, 1], I some interval, is a strictly decreasing function, τ : X → I n , I some interval, τ(x) = (τ1 (x), . [sent-387, score-0.044]

67 , λn µn (x)) n 2613 M OSER and for Equation (17) and positive membership functions µ i , µi (x) > 0 for all x ∈ X , let us define fP : [0, ∞[→ [0, 1], fP (a) = e−a τP : X → ] − ∞, 1]n , τP (x) = (λ1 ln(µ1 (x)), . [sent-394, score-0.056]

68 1 (19) (20) While formulas (19) and (20) provide a geometrical interpretation by means of the norm . [sent-398, score-0.044]

69 1 , the corresponding formulas (16) and (17) yield a semantical model of the assertion “IF x is equal to y with respect to feature µ1 AND . [sent-399, score-0.044]

70 AND x is equal to y with respect to feature µn THEN x is equal to y” as aggregation of biimplications in terms of fuzzy logic. [sent-402, score-0.723]
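
The rewriting of (16) and (17) as (19) and (20) is also easy to confirm numerically; the sketch below reuses assumed membership functions with strictly positive values so that the logarithms in τP are defined.

```python
import numpy as np

mus  = [lambda x: np.exp(-x**2), lambda x: 1.0 / (1.0 + abs(x))]   # strictly positive, assumed
lams = np.array([0.4, 0.6])

tau_L = lambda x: lams * np.array([mu(x) for mu in mus])    # tau_L(x) = (lambda_i * mu_i(x))_i
tau_P = lambda x: lams * np.log([mu(x) for mu in mus])      # tau_P(x) = (lambda_i * ln mu_i(x))_i

E_L = lambda x, y: sum(l * (1.0 - abs(mu(x) - mu(y))) for l, mu in zip(lams, mus))
E_P = lambda x, y: np.prod([min(mu(x) / mu(y), mu(y) / mu(x)) ** l for l, mu in zip(lams, mus)])

x, y = 0.3, -1.2
print(np.isclose(E_L(x, y), 1.0 - np.sum(np.abs(tau_L(x) - tau_L(y)))))       # Equation (19)
print(np.isclose(E_P(x, y), np.exp(-np.sum(np.abs(tau_P(x) - tau_P(y))))))    # Equation (20)
```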

71 While in the former case, the aggregation has some compensatory effect, the latter is just a conjunction in terms of the Product triangular norm. [sent-403, score-0.142]

72 For details on aggregation operators see, for example, Saminger et al. [sent-404, score-0.089]

73 The formulas (16) and (17) coincide for the following special case. [sent-407, score-0.044]

74 If the membership functions µi are indicator functions of sets Ai ⊆ X which form a partition of X , then the kernels (16) and (17) reduce to the indicator function characterizing the Boolean equivalence relation induced by this partition {A1 , . [sent-408, score-0.413]

75 For definitions and further details on fuzzy partitions see, for ¨ example, De Baets and Mesiar (1998), Demirci (2003) and H oppner and Klawonn (2003). [sent-413, score-0.726]

76 Due to this cluster hypothesis, the problem of designing kernels based on fuzzy partitions is closely related to the problem of learning kernels from unlabeled data. [sent-415, score-1.026]

77 Conclusion In this paper, we have presented a novel view on kernels from a fuzzy logical point of view. [sent-422, score-0.845]

78 Particularly, the similarity-measure aspect of a kernel is addressed and investigated by means of the so-called T -transitivity which is characteristic for fuzzy equivalence relations. [sent-423, score-0.835]

79 As a consequence, we derived that a large class of kernels can be represented in a way that is commonly used for representing fuzzy rule bases. [sent-424, score-0.811]

80 It is the idea of this research to look for a combination of knowledge-based strategies with kernel-based methods in order to facilitate a more flexible designing process of kernels which also allows to incorporate prior knowledge. [sent-426, score-0.15]

81 Further research aims at 2614 G ENERATING K ERNELS BY F UZZY R ELATIONS analyzing the behavior of kernels constructed in this way when applied in the various kernel methods like support vector machines, kernel principal components analysis and others. [sent-427, score-0.226]

82 In particular, it is intended to focus on the problem of learning kernels from unlabeled data where the fuzzy partitions are induced by appropriate clustering principles. [sent-428, score-0.908]

83 For sake of completeness the following sections provide proofs regarding Theorem 8, the characterization of kernels in the Boolean case and the construction of kernels by means of the minimum t-norm TM . [sent-432, score-0.3]

84 For simplicity, let a = m12, b = m13 and c = m23. Then the determinant of M, Det(M), is a function of the variables a, b, c given by D(a, b, c) = 1 + 2abc − a² − b² − c².

85 For any choice of a, b, the quadratic equation D(a, b, c) = 0 can be solved for c, yielding two solutions c1 = c1(a, b) and c2 = c2(a, b) as functions of a and b: c1(a, b) = ab − √(1 − a²)·√(1 − b²) and c2(a, b) = ab + √(1 − a²)·√(1 − b²).
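A quick numerical sanity check (not part of the paper) confirms that c1(a, b) and c2(a, b) are the two roots of the quadratic c ↦ D(a, b, c):

import numpy as np

def D(a, b, c):
    return 1.0 + 2.0 * a * b * c - a**2 - b**2 - c**2

rng = np.random.default_rng(0)
a, b = rng.uniform(-1.0, 1.0, size=2)
c1 = a * b - np.sqrt(1 - a**2) * np.sqrt(1 - b**2)
c2 = a * b + np.sqrt(1 - a**2) * np.sqrt(1 - b**2)
print(abs(D(a, b, c1)) < 1e-10, abs(D(a, b, c2)) < 1e-10)   # True True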

86 As for all a, b ∈ [−1, 1] the determinant function Da,b(c) := D(a, b, c) is quadratic in c with negative coefficient for c², there is a uniquely determined maximum at c0(a, b) = ab.

87 Recall from linear algebra that the determinant does not change under a simultaneous renaming of the row and column indices.

88 As for a = b = 1 the two solutions c1, c2 coincide, that is, c1(1, 1) = c2(1, 1) = 1, it follows that for any choice of (a, b, c) ∈ Q there holds D(a, b, c) ≥ 0 if and only if c1(a, b) = ab − √(1 − a²)·√(1 − b²) ≤ c.
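Lemma 13 can also be spot-checked numerically. The following sketch (an assumed test harness, not from the paper) draws random symmetric 3×3 matrices with unit diagonal and entries in [0, 1] and confirms that positive-semidefiniteness coincides with the Tcos-transitivity condition.

import itertools
import numpy as np

def t_cos(a, b):
    return max(a * b - np.sqrt(1 - a**2) * np.sqrt(1 - b**2), 0.0)

def tcos_transitive(M):
    n = M.shape[0]
    return all(t_cos(M[i, j], M[j, k]) <= M[i, k] + 1e-12
               for i, j, k in itertools.product(range(n), repeat=3))

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.uniform(0.0, 1.0, size=3)
    M = np.array([[1.0, a, b], [a, 1.0, c], [b, c, 1.0]])
    psd = np.linalg.eigvalsh(M).min() >= -1e-12
    assert psd == tcos_transitive(M)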

89 Lemma 14 Let ∼ be an equivalence relation on X and let k : X × X → {0, 1} be induced by ∼ via k(x, y) = 1 if and only if x ∼ y; then k is a kernel.

90 By definition of positive-semidefiniteness, let us consider an arbitrary sequence of elements x1, . . . , xn. Then there are at most n equivalence classes Q1, . . . , Qm on the set of indices {1, . . . , n}, m ≤ n, where ∪i=1,...,m Qi = {1, . . . , n} and Qi ∩ Qj = ∅ for i ≠ j.

91 Note that k(xi, xj) = 0 if the indices i, j belong to different equivalence classes.

92 Then, for any choice of reals c1, . . . , cn, we obtain ∑i,j ci cj k(xi, xj) = ∑p=1..m ∑i,j∈Qp ci cj k(xi, xj) = ∑p=1..m ∑i,j∈Qp ci cj = ∑p=1..m (∑i∈Qp ci)² ≥ 0. Proposition 15 k : X × X → {0, 1} with k(x, x) = 1 for all x ∈ X is a kernel if and only if it is induced by an equivalence relation.

93 It only remains to be shown that if k is a kernel, then it is the indicator function of an equivalence relation, that is, it is induced by an equivalence relation.
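A tiny illustration (with hypothetical class labels) of Lemma 14 and Proposition 15: the {0, 1}-valued relation induced by an equivalence relation, here given by class membership, yields a positive-semidefinite Gram matrix.

import numpy as np

labels = np.array([0, 1, 0, 2, 1, 0])                   # hypothetical equivalence classes
K = (labels[:, None] == labels[None, :]).astype(float)  # k(x_i, x_j) = 1 iff x_i ~ x_j
print(np.linalg.eigvalsh(K).min() >= -1e-12)            # True: K is positive-semidefinite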

94 A.3 Constructing Kernels by TM. For convenience, let us recall the basic notion of an α-cut from fuzzy set theory. Definition 16 Let X be a non-empty set and µ : X → [0, 1]; then [µ]α = {x ∈ X | µ(x) ≥ α} is called the α-cut of the membership function µ.

95 Lemma 17 k : X × X → [0, 1] is a TM-equivalence if and only if all α-cuts of k are Boolean equivalence relations.

96 In order to show that [k]α is a Boolean equivalence, the axioms for reflexivity, symmetry and transitivity have to be shown.

97 (ii) Suppose now that all α-cuts of k are Boolean equivalence relations.

98 Let α = min(k(x, y), k(y, z)); then by the transitivity assumption of [k]α, it follows that (x, z) ∈ [k]α and consequently k(x, z) ≥ α = min(k(x, y), k(y, z)).
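Lemma 17 can also be checked empirically. In the sketch below (membership values drawn at random, purely illustrative) E is built as in Theorem 7 with the minimum t-norm, and every α-cut of E turns out to be reflexive, symmetric and transitive.

import numpy as np

rng = np.random.default_rng(1)
mu = rng.uniform(0.0, 1.0, size=(4, 8))         # 4 membership functions sampled on 8 points

def bi_min(a, b):                               # biimplication of the minimum t-norm
    return 1.0 if a == b else min(a, b)

n = mu.shape[1]
E = np.array([[min(bi_min(mu[i, x], mu[i, y]) for i in range(mu.shape[0]))
               for y in range(n)] for x in range(n)])

for alpha in np.unique(E):
    R = E >= alpha                              # alpha-cut of E as a Boolean relation
    assert R.diagonal().all() and (R == R.T).all()               # reflexive, symmetric
    assert not ((R.astype(int) @ R.astype(int) > 0) & ~R).any()  # transitive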

99 Consider An = (aij) with aij = λ if min(i, j) = 1 and max(i, j) > 1, aii = 1, and aij = 0 otherwise; choosing λ = 1/√2 gives Tcos(λ, λ) = 0 and hence Tcos(aij, ajk) ≤ aik for all indices i, j, k. As det(An) < 0 for n > 3, the matrix An cannot be positive-semidefinite although the Tcos-transitivity conditions are satisfied.
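The counterexample is easy to reproduce. The sketch below (using 0-based indices for what the paper calls the first row and column) builds An for n = 4 with λ = 1/√2 and confirms that the Tcos-transitivity conditions hold while the determinant is negative.

import itertools
import numpy as np

def t_cos(a, b):
    return max(a * b - np.sqrt(1 - a**2) * np.sqrt(1 - b**2), 0.0)

lam = 1.0 / np.sqrt(2.0)
n = 4
A = np.zeros((n, n))
for i, j in itertools.product(range(n), repeat=2):
    if i == j:
        A[i, j] = 1.0
    elif min(i, j) == 0:                        # entries in the first row/column
        A[i, j] = lam

transitive = all(t_cos(A[i, j], A[j, k]) <= A[i, k] + 1e-12
                 for i, j, k in itertools.product(range(n), repeat=3))
print(transitive)                               # True: Tcos-transitivity holds
print(np.linalg.det(A))                         # -0.5 < 0, so A is not positive-semidefinite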

100 M. G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299–312, 2001.


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('fuzzy', 0.661), ('tcos', 0.221), ('det', 0.217), ('cos', 0.188), ('tl', 0.163), ('kernels', 0.15), ('archimedean', 0.147), ('klement', 0.147), ('equivalence', 0.136), ('oser', 0.133), ('transitivity', 0.125), ('elations', 0.118), ('enerating', 0.118), ('moser', 0.118), ('uzzy', 0.118), ('tp', 0.112), ('boolean', 0.105), ('relations', 0.094), ('poincar', 0.088), ('determinant', 0.082), ('ernels', 0.082), ('triangular', 0.08), ('tm', 0.076), ('dubois', 0.074), ('gottwald', 0.074), ('trillas', 0.074), ('niteness', 0.069), ('kruse', 0.063), ('aggregation', 0.062), ('ab', 0.059), ('exivity', 0.059), ('mesiar', 0.059), ('prade', 0.059), ('schweizer', 0.059), ('logic', 0.057), ('membership', 0.056), ('fuzziness', 0.05), ('discourse', 0.05), ('universe', 0.05), ('dordrecht', 0.045), ('indistinguishability', 0.044), ('logics', 0.044), ('quasi', 0.044), ('residuum', 0.044), ('formulas', 0.044), ('relation', 0.039), ('kernel', 0.038), ('minors', 0.038), ('el', 0.038), ('symmetry', 0.037), ('partitions', 0.036), ('elementary', 0.036), ('truth', 0.034), ('logical', 0.034), ('norms', 0.034), ('induced', 0.032), ('adjunction', 0.029), ('associativity', 0.029), ('axioms', 0.029), ('baets', 0.029), ('bodenhofer', 0.029), ('boixader', 0.029), ('calvo', 0.029), ('competence', 0.029), ('debrecen', 0.029), ('flammarion', 0.029), ('gebhardt', 0.029), ('hagenberg', 0.029), ('hle', 0.029), ('jacas', 0.029), ('johannes', 0.029), ('kepler', 0.029), ('linz', 0.029), ('ohle', 0.029), ('oppner', 0.029), ('renaming', 0.029), ('saminger', 0.029), ('sklar', 0.029), ('ukasiewicz', 0.029), ('valverde', 0.029), ('zadeh', 0.029), ('ai', 0.029), ('sin', 0.029), ('unlabeled', 0.029), ('ep', 0.029), ('ling', 0.029), ('operators', 0.027), ('la', 0.027), ('inf', 0.026), ('ci', 0.026), ('xn', 0.026), ('indices', 0.025), ('prominent', 0.025), ('klawonn', 0.025), ('ppner', 0.025), ('prerequisites', 0.025), ('arccos', 0.025), ('conceived', 0.025), ('fitzgerald', 0.025), ('synonymously', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations

Author: Bernhard Moser


2 0.061538707 42 jmlr-2006-Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting     (Special Topic on Inductive Programming)

Author: Andrea Passerini, Paolo Frasconi, Luc De Raedt

Abstract: We develop kernels for measuring the similarity between relational instances using background knowledge expressed in first-order logic. The method allows us to bridge the gap between traditional inductive logic programming (ILP) representations and statistical approaches to supervised learning. Logic programs are first used to generate proofs of given visitor programs that use predicates declared in the available background knowledge. A kernel is then defined over pairs of proof trees. The method can be used for supervised learning tasks and is suitable for classification as well as regression. We report positive empirical results on Bongard-like and M-of-N problems that are difficult or impossible to solve with traditional ILP techniques, as well as on real bioinformatics and chemoinformatics data sets. Keywords: kernel methods, inductive logic programming, Prolog, learning from program traces

3 0.056342103 93 jmlr-2006-Universal Kernels

Author: Charles A. Micchelli, Yuesheng Xu, Haizhang Zhang

Abstract: In this paper we investigate conditions on the features of a continuous kernel so that it may approximate an arbitrary continuous target function uniformly on any compact subset of the input space. A number of concrete examples are given of kernels with this universal approximating property. Keywords: density, translation invariant kernels, radial kernels

4 0.051204488 58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation

Author: Guillaume Lecué

Abstract: In this paper we prove the optimality of an aggregation procedure. We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. The lower bound, with respect to the KL distance, can be achieved by the on-line type estimate suggested, among others, by Yang (2000a). Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. Keywords: aggregation, optimal rates, Kullback-Leibler divergence

5 0.038928416 2 jmlr-2006-A Graphical Representation of Equivalence Classes of AMP Chain Graphs

Author: Alberto Roverato, Milan Studený

Abstract: This paper deals with chain graph models under alternative AMP interpretation. A new representative of an AMP Markov equivalence class, called the largest deflagged graph, is proposed. The representative is based on revealed internal structure of the AMP Markov equivalence class. More specifically, the AMP Markov equivalence class decomposes into finer strong equivalence classes and there exists a distinguished strong equivalence class among those forming the AMP Markov equivalence class. The largest deflagged graph is the largest chain graph in that distinguished strong equivalence class. A composed graphical procedure to get the largest deflagged graph on the basis of any AMP Markov equivalent chain graph is presented. In general, the largest deflagged graph differs from the AMP essential graph, which is another representative of the AMP Markov equivalence class. Keywords: chain graph, AMP Markov equivalence, strong equivalence, largest deflagged graph, component merging procedure, deflagging procedure, essential graph

6 0.036607284 13 jmlr-2006-Adaptive Prototype Learning Algorithms: Theoretical and Experimental Studies

7 0.036588039 52 jmlr-2006-Learning Spectral Clustering, With Application To Speech Separation

8 0.036049254 60 jmlr-2006-Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples

9 0.034588039 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix

10 0.033688724 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization

11 0.028218504 43 jmlr-2006-Large Scale Multiple Kernel Learning     (Special Topic on Machine Learning and Optimization)

12 0.02675383 23 jmlr-2006-Consistency and Convergence Rates of One-Class SVMs and Related Algorithms

13 0.026690358 92 jmlr-2006-Toward Attribute Efficient Learning of Decision Lists and Parities

14 0.023509383 81 jmlr-2006-Some Discriminant-Based PAC Algorithms

15 0.022112302 38 jmlr-2006-Incremental Support Vector Learning: Analysis, Implementation and Applications     (Special Topic on Machine Learning and Optimization)

16 0.019525722 44 jmlr-2006-Large Scale Transductive SVMs

17 0.018656118 29 jmlr-2006-Estimation of Gradients and Coordinate Covariation in Classification

18 0.018416537 37 jmlr-2006-Incremental Algorithms for Hierarchical Classification

19 0.017819496 89 jmlr-2006-Structured Prediction, Dual Extragradient and Bregman Projections     (Special Topic on Machine Learning and Optimization)

20 0.017615207 45 jmlr-2006-Learning Coordinate Covariances via Gradients


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.113), (1, -0.057), (2, -0.009), (3, -0.069), (4, -0.015), (5, 0.078), (6, -0.009), (7, -0.032), (8, -0.124), (9, 0.0), (10, 0.108), (11, -0.089), (12, -0.128), (13, 0.137), (14, -0.013), (15, 0.029), (16, 0.075), (17, -0.021), (18, -0.101), (19, -0.01), (20, 0.097), (21, 0.076), (22, -0.041), (23, -0.006), (24, 0.108), (25, 0.174), (26, 0.212), (27, 0.204), (28, 0.033), (29, -0.045), (30, -0.196), (31, -0.02), (32, -0.092), (33, 0.064), (34, -0.232), (35, 0.137), (36, -0.08), (37, 0.134), (38, 0.15), (39, 0.165), (40, 0.152), (41, 0.117), (42, -0.033), (43, 0.116), (44, 0.343), (45, 0.086), (46, 0.183), (47, -0.012), (48, -0.162), (49, -0.118)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96598959 67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations

Author: Bernhard Moser

Abstract: Kernels are two-placed functions that can be interpreted as inner products in some Hilbert space. It is this property which makes kernels predestinated to carry linear models of learning, optimization or classification strategies over to non-linear variants. Following this idea, various kernel-based methods like support vector machines or kernel principal component analysis have been conceived which prove to be successful for machine learning, data mining and computer vision applications. When applying a kernel-based method a central question is the choice and the design of the kernel function. This paper provides a novel view on kernels based on fuzzy-logical concepts which allows to incorporate prior knowledge in the design process. It is demonstrated that kernels mapping to the unit interval with constant one in its diagonal can be represented by a commonly used fuzzylogical formula for representing fuzzy rule bases. This means that a great class of kernels can be represented by fuzzy-logical concepts. Apart from this result, which only guarantees the existence of such a representation, constructive examples are presented and the relation to unlabeled learning is pointed out. Keywords: kernel, triangular norm, T -transitivity, fuzzy relation, residuum 1. Motivation Positive-definiteness plays a prominent role especially in optimization and machine learning due to the fact that two-place functions with this property, so-called kernels, can be represented as inner products in some Hilbert space. Thereby, optimization techniques conceived on the basis of linear models can be extended to non-linear algorithms. For a survey of applications see, for example, ¨ Jolliffe (1986), Sch¨ lkopf and Smola (2002) and Scholkopf et al. (1998). o Recently in Moser (2006) it was shown that kernels with values from the unit interval can be interpreted as fuzzy equivalence relations motivated by the idea that kernels express a kind of similarity. This means that the concept of fuzzy equivalence relations, or synonymously fuzzy similarity relations, is more general than that of kernels, provided only values in the unit interval are considered. Fuzzy equivalence relations distinguish from Boolean equivalence relations by a many-valued extension of transitivity which can be interpreted as many-valued logical model of the statement “IF x is similar to y AND y is similar to z THEN x is similar to z”. In contrast to the Boolean case, in many-valued logics the set of truth values is extended such that also assertions, for example, whether two elements x and y are similar, can be treated as a matter of degree. The standard model for the set of (quasi) truth values of fuzzy logic and other many-valued logical systems is the unit interval. If E(x, y) represents the (quasi) truth value of the statement that x is c 2006 Bernhard Moser. M OSER similar to y, then the many-valued version of transitivity is modeled by T (E(x, y), E(y, z)) ≤ E(x, z) where T is a so-called triangular norm which is an extension of the Boolean conjunction. This many-valued concept for transitivity is called T -transitivity. For a survey on triangular norms see, for example, Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000), ¨ and for fuzzy equivalence relations and T -transitivity see, for example, Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. (2000), and Zadeh (1971). 
o Based on the semantics of fuzzy logic, this approach allows to incorporate knowledge-based models for the design of kernels. From this perspective, the most interesting mathematical question is how positive-semidefinite fuzzy equivalence relations can be characterized or at least constructed under some circumstances. At least for some special cases, proofs are provided in Section 4, which motivate further research aiming at establishing a more general theory on the positive-definiteness of fuzzy equivalence relations. These cases are based on the most prominent representatives of triangular norms, that is the Minimum, the Product and the Łukasiewicz t-norm. The paper is structured as follows. First of all, in Section 2, some basic prerequisites concerning kernels and fuzzy relations are outlined. In Section 3, a result about the T -transitivity of kernels from Moser (2006) is cited and interpreted as existence statement that guarantees a representation of kernels mapping to the unit interval with constant 1 in its diagonal by a certain, commonly used, fuzzy-logical construction of a fuzzy equivalence relation. Finally, in contrast to the pure existence theorem of Section 3, in Section 4 constructive examples of fuzzy equivalence relations are provided which are proven to be kernels. In a concluding remark, the relationship to the problem of labeled and unlabeled learning is pointed out. 2. Prerequisites This section summarizes definitions and facts from the theory of kernels as well as from fuzzy set theory which are needed later on. 2.1 Kernels and Positive-Semidefiniteness Preserving Functions There is an extensive literature concerning kernels and kernel-based methods like support vector machines or kernel principal component analysis especially in the machine learning, data mining ¨ and computer vision communities. For an overview and introduction, see, for example, Sch olkopf and Smola (2002). Here we present only what is needed later on. For completeness let us recall the basic definition for kernels and positive-semidefiniteness. Definition 1 Let X be a non-empty set. A real-valued function k : X × X → R is said to be a kernel iff it is symmetric, that is, k(x, y) = k(y, x) for all x, y ∈ X , and positive-semidefinite, that is, ∑n j=1 ci c j k(xi , x j ) ≥ 0 for any n ∈ N, any choice of x1 , . . . , xn ∈ X and any choice of c1 , . . . , cn ∈ R. i, One way to generate new kernels from known kernels is to apply operations which preserve the positive-semidefiniteness property. A characterization of such operations is provided by C. H. FitzGerald (1995). Theorem 2 (Closeness Properties of Kernels) Let f : Rn → R, n ∈ N, then k : X × X → R given by k(x, y) := f (k1 (x, y), . . . , kn (x, y)) 2604 G ENERATING K ERNELS BY F UZZY R ELATIONS is a kernel for any choice of kernels k1 , . . . , kn on X × X iff f is the real restriction of an entire function on Cn of the form f (x1 , . . . , xn ) = ∑ r1 ≥0,...,rn ≥0 r r cr1 ,...,rn x11 · · · xnn (1) where cr1 ,...,rn ≥ 0 for all nonnegative indices r1 , . . . , rn . 2.2 Triangular Norms Triangular norms have been originally studied within the framework of probabilistic metric spaces, see Schweizer and Sklar (1961) and Schweizer and Sklar (1983). In this context, t-norms proved to be an appropriate concept when dealing with triangle inequalities. 
2.2 Triangular Norms

Triangular norms were originally studied within the framework of probabilistic metric spaces, see Schweizer and Sklar (1961) and Schweizer and Sklar (1983). In this context, t-norms proved to be an appropriate concept when dealing with triangle inequalities. Later on, t-norms and their dual version, t-conorms, have been used to model conjunction and disjunction for many-valued logic, see Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000).

Definition 3 A function T : [0, 1]² → [0, 1] is called t-norm (triangular norm), if it satisfies the following conditions:
(i) ∀x, y ∈ [0, 1] : T(x, y) = T(y, x)   (commutativity)
(ii) ∀x, y, z ∈ [0, 1] : T(x, T(y, z)) = T(T(x, y), z)   (associativity)
(iii) ∀x, y, z ∈ [0, 1] : y ≤ z =⇒ T(x, y) ≤ T(x, z)   (monotonicity)
(iv) ∀x, y ∈ [0, 1] : T(x, 1) = x ∧ T(1, y) = y   (boundary condition)

Further, a t-norm is called Archimedean if it is continuous and satisfies x ∈ (0, 1) ⇒ T(x, x) < x. Due to its associativity, many-placed extensions Tn : [0, 1]^n → [0, 1], n ∈ N, of a t-norm T are uniquely determined by Tn(x1, . . . , xn) = T(x1, Tn−1(x2, . . . , xn)).

Archimedean t-norms are characterized by the following representation theorem due to Ling (1965):

Theorem 4 Let T : [0, 1]² → [0, 1] be a t-norm. Then T is Archimedean if, and only if, there is a continuous, strictly decreasing function f : [0, 1] → [0, ∞] with f(1) = 0 such that for x, y ∈ [0, 1],

T(x, y) = f⁻¹(min(f(x) + f(y), f(0))).

By setting g(x) = exp(−f(x)), Ling's characterization yields an alternative representation with a multiplicative generator function

T(x, y) = g⁻¹(max(g(x) g(y), g(0))).

For g(x) = x we get the product TP(x, y) = x y. The setting f(x) = 1 − x yields the so-called Łukasiewicz t-norm TL(x, y) = max(x + y − 1, 0). Due to Ling's Theorem 4, an Archimedean t-norm T is isomorphic either to TL or TP, depending on whether the additive generator takes a finite value at 0 or not. In the former case, the Archimedean t-norm is called non-strict, in the latter it is called strict.

A many-valued model of an implication is provided by the so-called residuum given by

→T(a, b) = sup{c ∈ [0, 1] | T(a, c) ≤ b}        (2)

where T is a left-continuous t-norm. The residuum (2) is uniquely determined by the so-called adjunction property

∀a, b, c ∈ [0, 1] : T(a, b) ≤ c ⇔ a ≤ →T(b, c).        (3)

Consequently, the operator

↔T(a, b) = min(→T(a, b), →T(b, a))        (4)

models a biimplication. For details see, for example, Gottwald (1986) and Klement et al. (2000). Tables 1 and 2 list examples of t-norms with their induced residuum →T. For further examples see, for example, Klement et al. (2000).

Table 1: Examples of t-norms
  Tcos(a, b) = max(ab − √(1 − a²) √(1 − b²), 0)
  TL(a, b) = max(a + b − 1, 0)
  TP(a, b) = ab
  TM(a, b) = min(a, b)

Table 2: Examples of residuums
  →Tcos(a, b) = cos(arccos(b) − arccos(a)) if a > b, 1 else
  →TL(a, b) = min(b − a + 1, 1)
  →TP(a, b) = b/a if a > b, 1 else
  →TM(a, b) = b if a > b, 1 else
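The four t-norms of Table 1, their residuums from Table 2 and the induced biimplication of Equation (4) are straightforward to implement. The sketch below is illustrative only and not from the paper (Python, with function names chosen for this summary); it is handy for experimenting with the constructions of the following sections.

```python
import math

# t-norms from Table 1
def t_min(a, b):  return min(a, b)
def t_prod(a, b): return a * b
def t_luka(a, b): return max(a + b - 1.0, 0.0)
def t_cos(a, b):  return max(a * b - math.sqrt(1 - a * a) * math.sqrt(1 - b * b), 0.0)

# residuums from Table 2, i.e. res(a, b) = sup{c in [0, 1] : T(a, c) <= b}, Equation (2)
def res_min(a, b):  return b if a > b else 1.0
def res_prod(a, b): return b / a if a > b else 1.0
def res_luka(a, b): return min(b - a + 1.0, 1.0)
def res_cos(a, b):  return math.cos(math.acos(b) - math.acos(a)) if a > b else 1.0

def biimpl(res):
    """Biimplication induced by a residuum, Equation (4)."""
    return lambda a, b: min(res(a, b), res(b, a))

# Example: the biimplication of the Lukasiewicz t-norm equals 1 - |a - b|.
bi_luka = biimpl(res_luka)
print(bi_luka(0.3, 0.8))    # 0.5, i.e. 1 - |0.3 - 0.8|
```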
2.3 T-Equivalences

If we want to classify based on a notion of similarity or indistinguishability, we face the problem of transitivity. For instance, let us consider two real numbers to be indistinguishable if and only if they differ by at most a certain bound ε > 0; this is modeled by the relation ∼ε given by x ∼ε y :⇔ |x − y| < ε, ε > 0, x, y ∈ R. Note that the relation ∼ε is not transitive and, therefore, not an equivalence relation. The transitivity requirement turns out to be too strong for this example. The problem of identification and transitivity in the context of similarity of physical objects was early pointed out and discussed philosophically by Poincaré (1902) and Poincaré (1904). In the framework of fuzzy logic, the way to overcome this problem is to model similarity by fuzzy relations based on a many-valued concept of transitivity, see Bodenhofer (2003), Höhle (1993), Höhle (1999), Klement et al. (2000) and Zadeh (1971).

Definition 5 A function E : X² → [0, 1] is called a fuzzy equivalence relation, or synonymously, T-equivalence with respect to the t-norm T if it satisfies the following conditions:
(i) ∀x ∈ X : E(x, x) = 1   (reflexivity)
(ii) ∀x, y ∈ X : E(x, y) = E(y, x)   (symmetry)
(iii) ∀x, y, z ∈ X : T(E(x, y), E(y, z)) ≤ E(x, z)   (T-transitivity).

The value E(x, y) can also be looked at as the (quasi) truth value of the statement “x is equal to y”. Following this semantics, T-transitivity can be seen as a many-valued model of the proposition, “If x is equal to y and y is equal to z, then x is equal to z”.

T-equivalences for Archimedean t-norms are closely related to metrics and pseudo-metrics as shown by Klement et al. (2000) and Moser (1995).

Theorem 6 Let T be an Archimedean t-norm given by ∀a, b ∈ [0, 1] : T(a, b) = f⁻¹(min(f(a) + f(b), f(0))), where f : [0, 1] → [0, ∞] is a strictly decreasing, continuous function with f(1) = 0.
(i) If d : X² → [0, ∞[ is a pseudo-metric, then the function Ed : X² → [0, 1] defined by Ed(x, y) = f⁻¹(min(d(x, y), f(0))) is a T-equivalence with respect to the t-norm T.
(ii) If E : X² → [0, 1] is a T-equivalence relation, then the function dE : X² → [0, ∞] defined by dE(x, y) = f(E(x, y)) is a pseudo-metric.

Another way to construct T-equivalences is to employ →T-operators. The proof of the following assertion can be found in Trillas and Valverde (1984), Kruse et al. (1993) and Kruse et al. (1994).

Theorem 7 Let T be a left-continuous t-norm, ↔T its induced biimplication, µi : X → [0, 1], i ∈ I, I non-empty; then E : X × X → [0, 1] given by

E(x, y) = inf_{i∈I} ↔T(µi(x), µi(y))        (5)

is a T-equivalence relation.

For further details on T-equivalences see also Boixader and Jacas (1999), Höppner et al. (2002), Jacas (1988), Trillas et al. (1999) and Valverde (1985).

3. Representing Kernels by T-Equivalences

It is interesting that the concept of kernels, which is motivated by geometric reasoning in terms of inner products and mappings to Hilbert spaces and which is inherently formulated in algebraic terms, is closely related to the concept of fuzzy equivalence relations as demonstrated and discussed in more detail in Moser (2006). In this section, we start with the result that any kernel k : X × X → [0, 1] with k(x, x) = 1 for all x ∈ X is T-transitive and, therefore, a fuzzy equivalence relation. The proof can be found in Moser (2006), see also Appendix A.1.

Theorem 8 Any kernel k : X × X → [0, 1] with k(x, x) = 1 is (at least) Tcos-transitive, where

Tcos(a, b) = max{a b − √(1 − a²) √(1 − b²), 0}.        (6)

The nomenclature is motivated by the fact that the triangular norm defined by Equation (6) is an Archimedean t-norm which is generated by the arccosine function as its additive generator. From this result, the following existence theorem can be derived, which guarantees that any kernel under consideration can be represented by the fuzzy-logical formula given by (5). In fuzzy systems, this formula is commonly used for modeling rule bases (see, for example, Kruse et al., 1993, 1994).
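Theorem 8 is easy to probe numerically before turning to the representation theorem announced above. The following sketch is illustrative and not part of the paper; the Gaussian kernel, the sample and the tolerance are assumptions of this summary. Since the Gaussian Gram matrix takes values in (0, 1] and has constant 1 in its diagonal, Theorem 8 predicts that the Tcos-transitivity inequality holds for every triple of points.

```python
import numpy as np

def t_cos(a, b):
    return max(a * b - np.sqrt(1 - a * a) * np.sqrt(1 - b * b), 0.0)

def is_tcos_transitive(K, tol=1e-12):
    """Check Tcos(K[i, j], K[j, l]) <= K[i, l] for all index triples (Theorem 8)."""
    n = K.shape[0]
    return all(t_cos(K[i, j], K[j, l]) <= K[i, l] + tol
               for i in range(n) for j in range(n) for l in range(n))

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 3))
# Gaussian kernel: positive-semidefinite, values in (0, 1], k(x, x) = 1.
sq_dist = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-0.5 * sq_dist)
print(is_tcos_transitive(K))    # True, as guaranteed by Theorem 8
```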
Theorem 9 Let X be a non-empty universe of discourse, k : X × X → [0, 1] a kernel in the sense of Definition 1 and k(x, x) = 1 for all x ∈ X ; then there is a family of membership functions µ i : X → [0, 1], i ∈ I, I non-empty and a t-norm T , such that ↔ ∀x, y ∈ X : k(x, y) = inf T (µi (x), µi (y)). i∈I (7) Proof. Let us set I := X , µx0 (x) = k(x, x0 ) and let us choose Tcos as t-norm. For convenience let us denote ↔ h(x, y) = inf T cos (µx0 (x), µx0 (y)), x0 ∈X which is equivalent to ↔ h(x, y) = inf T cos (k(x0 , x), k(x0 , y)). x0 ∈X According to Theorem 8, k is Tcos -transitive, that is, ↔ ∀x0 , x, y ∈ X : T cos (k(x0 , x), k(x0 , y)) ≤ k(x, y). This implies that h(x, y) ≤ k(x, y) for all x, y ∈ X . Now let us consider the other inequality. Due to the adjunction property (3), we obtain → Tcos (k(x, y), k(x0 , y)) ≤ k(x, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , y), k(x, x0 )) and → Tcos (k(x, y), k(x0 , x)) ≤ k(y, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , x), k(y, x0 )), from which it follows that → → ∀x, y, x0 ∈ X : k(x, y) ≤ min{ T cos (k(x0 , y), k(x, x0 )), T cos (k(x0 , x), k(y, x0 ))}. Hence by Definition 4, ∀x, y ∈ X : k(x, y) ≤ h(x, y) which ends the proof. For an arbitrary choice of fuzzy membership functions, there is no necessity that the resulting relation (7) implies positive-semidefiniteness and, therefore, a kernel. For an example of a Tcos equivalence which is not a kernel see Appendix A.4. Theorem 9 guarantees only the existence of a representation of the form (5) but it does not tell us how to construct the membership functions µ i . In the following section, we provide examples of fuzzy equivalence relations which yield kernels for any choice of membership functions. 2608 G ENERATING K ERNELS BY F UZZY R ELATIONS 4. Constructing Kernels by Fuzzy Equivalence Relations In the Boolean case, positive-definiteness and equivalence are synonymous, that is, a Boolean relation R : X × X → {0, 1} is positive-definite if and only if R is the indicator function of an equivalence relation ∼ that is, R(x, y) = 1 if x ∼ y and R(x, y) = 0 if x ∼ y. For a proof, see Appendix A.2. This = = =, relationship can be used to obtain an extension to fuzzy relations as given by the next theorem whose proof can be found in the Appendix A.3. Theorem 10 Let X be a non-empty universe of discourse, µ i : X → [0, 1], i ∈ I, I non-empty; then the fuzzy equivalence relation EM : X × X → [0, 1] given by ↔ EM (x, y) = inf T M (µi (x), µi (y)) i∈I is positive-semidefinite. In the following, the most prominent representatives of Archimedean t-norms, the Product TP and the Łukasiewicz t-norm TL , are used to construct positive-semidefinite fuzzy similarity relations. Though the first part can also be derived from a result due to Yaglom (1957) that characterizes isotropic stationary kernels by its spectral representation, here we prefer to present a direct, elementary proof. Compare also Bochner (1955) and Genton (2001). Theorem 11 Let X be a non-empty universe of discourse, ν : X → [0, 1] and let h : [0, 1] → [0, 1] be an isomorphism of the unit interval that can be expanded in the manner of Equation (1), that is h(x) = ∑k ck xk with ck ≥ 0; then the fuzzy equivalence relations EL,h , EP,h : X × X → [0, 1] given by ↔ EL,h (x, y) = h T L h−1 (ν(x)) , h−1 (ν(y)) and ↔ EP,h (x, y) = h T P h−1 (ν(x)) , h−1 (ν(y)) (8) (9) are positive-semidefinite. Proof. 
To prove the positive-definiteness of the two-placed functions E L,h and EP,h given by equations (8) and (9) respectively, we have to show that n n ∑ i, j=1 EL,h (xi , xi ) ci c j ≥ 0, ∑ i, j=1 EP,h (xi , x j ) ci c j ≥ 0 for any n ∈ N and any choice of x1 , . . . , xn ∈ X , respectively. According to an elementary result from Linear Algebra this is equivalent to the assertion that the determinants (1 ≤ m ≤ n) Dm = det (E(xi , x j ))i, j∈{1,...,m} of the minors of the matrix (E(xi , x j ))i, j satisfy ∀m ∈ {1, . . . , n} : Dm ≥ 0, where E denotes either EL,h or EP,h . Recall that the determinant of a matrix is invariant with respect to renaming the indices, that is, if σ : {1, . . . , n} → {1, . . . , n} is a permutation then det [(ai j )i, j ] = det (aσ(i)σ( j) )i, j . 2609 M OSER For convenience, let denote µi = h−1 (ν(xi )). Then, without loss of generality, we may assume that the values µi are ordered monotonically decreasing, that is, µi ≥ µ j for i < j. ↔ → (10) → Case TL : Note that T L (a, b) = min{ T L (a, b), T L (b, a)} = 1 − |a − b|. Then we have to show that for all dimensions n ∈ N, the determinant of E (n) = (1 − |µi − µ j |)i, j∈{1,...,n} is non-negative, that is Due to the assumption (10), we have det[E (n) ] ≥ 0. 1 − |µi − µ j | = 1 − (µi − µ j ) if i ≤ j, 1 − (µ j − µi ) else which yields   . . . 1 − (µ1 − µn−1 ) 1 − (µ1 − µn )  . . . 1 − (µ2 − µn−1 ) 1 − (µ2 − µn )     . . . 1 − (µ3 − µn−1 ) 1 − (µ3 − µn )    (n) E = . . . .. . .   . . .   1 − (µ1 − µn−1 ) 1 − (µ2 − µn−1 ) . . . 1 1 − (µn−1 − µn ) 1 − (µ1 − µn ) 1 − (µ2 − µn ) . . . 1 − (µn−1 − µn ) 1 1 − (µ1 − µ2 ) 1 1 − (µ2 − µ3 ) . . . 1 1 − (µ1 − µ2 ) 1 − (µ1 − µ3 ) . . . Now let us apply determinant-invariant elementary column operations to simplify this matrix by subtracting the column with index i − 1 from the column with index i, i ≥ 2. This yields   1 µ2 − µ1 ... µn−1 − µn−2 µn − µn−1  1 − (µ1 − µ2 ) −(µ2 − µ1 ) . . . µn−1 − µn−2 µn − µn−1     1 − (µ1 − µ3 ) −(µ2 − µ1 ) . . . µn−1 − µn−2 µn − µn−1    ˜ E (n) =  . . . . . .. . . . .   . . . . .   1 − (µ1 − µn−1 ) −(µ2 − µ1 ) . . . −(µn−2 − µn−1 ) µn − µn−1  1 − (µ1 − µn ) −(µ2 − µ1 ) . . . −(µn−2 − µn−1 ) −(µn−1 − µn ) Therefore, α = n ∏(µi−1 − µi ) ≥ 0 (11) i=2 ˜ ˆ det[E (n) ] = det[E (n) ] = α det[En ], where   . . . −1 −1  . . . −1 −1    . . . −1 −1   (n) ˆ E = . . . .. . .   . . .   1 − (µ1 − µn−1 ) +1 . . . +1 −1 1 − (µ1 − µn ) +1 . . . +1 +1 1 1 − (µ1 − µ2 ) 1 − (µ1 − µ3 ) . . . 2610 −1 +1 +1 . . . (12) G ENERATING K ERNELS BY F UZZY R ELATIONS Let us apply Laplacian determinant expansion by minors to the first column of matrix (12), that is n det[A] = ∑ (−1)i+ j ai j det[Ai j ] i=1 where A = (ai j ) is an n × n-matrix, j arbitrarily chosen from {1, . . . , n} and Ai j is the matrix corresponding to the cofactor ai j obtained by canceling out the i-th row and the j-th column from A (see, ˆ for example, Muir, 1960). For n = 1, we get the trivial case det[ E (1) ] = 1. Note that the first and (n) ˆ the last rows of the matrices Ei,1 for 1 < i < n only differ by their signum, consequently the minors ˆ (n) det[Ei,1 ] for 1 < i < n, n ≥ 2, are vanishing, that is, det[Ai,1 ] = 0, for 1 < i < n. Therefore, according to the Laplacian expansion, we get (n) (n) ˆ ˆ ˆ det[E (n) ] = 1 · det[E1,1 ] + (−1)n (1 − (µ1 − µn )) · det[E1,n ]. (13) Observe that (n) ˆ det[E1,1 ] = 2n−2 (n) ˆ det[E1,n ] = (−1)n−1 2n−2 . 
Consequently, Equation (13) simplifies to ˆ det[E (n) ] = 2n−2 1 + (−1)n (−1)n−1 2n−2 (1 − (µ1 − µn )) = 2n−2 (1 − (1 − (µ1 − µn ))) = 2n−2 (µ1 − µn ) ≥ 0 which together with (11) proves the first case. ↔ → → Case TP : First of all, let us compute T P (a, b) = min{ T P (a, b), T L (b, a)}. Hence,  min{ b , a } if a, b > 0,  a b   0 ↔ if a = 0 and b > 0 , T P (a, b) = 0 if b = 0 and a > 0 ,    1 if a = 0 and b = 0 . Again, without loss of generality, let us suppose that the values µ i , i ∈ {1, . . . , n} are ordered monotonically decreasing, that is µ1 ≥ µ2 ≥ . . . ≥ µn . Before checking the general case, let us consider the special case of vanishing µ-values. For this, let us assume for the moment that µi = > 0 if i < i0 , 0 else ↔ ↔ which implies that T P (µi , µ j ) = 0 for i < i0 and j ≥ i0 and T P (µi , µ j ) = 1 for i ≥ i0 and j ≥ i0 . This leads to a decomposition of the matrix ↔ E (n) = T P (µi , µ j ) 2611 ij M OSER such that det[E (n) ] = det[E (i0 −1) ] · det[In−i0 −1 ] where Ik denotes the k × k-matrix with constant entries 1, hence det[In−i0 −1 ] ∈ {0, 1}. Therefore, we may assume that µ1 ≥ µ2 ≥ . . . ≥ µn > 0. Then we have to show that for all dimensions n ∈ N, the determinant of µi µ j , µ j µi E (n) = min i, j∈{1,...,n} is non-negative, that is det[E (n) ] ≥ 0. Consider  1  µ2  µ1  µ3  µ (n) E =  .1  .  .  µn−1  µ1 µn µ1 µ2 µ1 1 µ3 µ2 . . . µn−1 µ2 µn µ2 ... ... ... .. . ... ... Now, multiply the i-th column by −µi+1 /µi and add 1 ≤ i < n, then we get  1 0 ... 2  µ2 ∗ 1 − ... µ1  ∗ ∗ ...   . ˜ .. E (n) =  . . . . . .  ∗ ∗ ... 1−   ∗ ∗ ... µn−1 µ1 µn−1 µ2 µn−1 µ3 µn µ1 µn µ2 µn µ3      . . .  .  µn  µn−1  1 . . . 1 µn µn−1 (14) it to the (i + 1)-th column of matrix (14), 0 0 0 0 0 . . . 0 . . . µn−1 µn−2 2 ∗ 0 1− µn µn−1             2 (15) where ∗ is a placeholder for any real value. By this, the determinant of the matrix in Equation (15) readily turns out to be n−1 µi+1 ˜ det[E (n) ] = det[E (n) ] = ∏ 1 − µi i=1 2 ≥0 which together with Theorem (2) ends the proof. Note that relations (8) and (9) are T -transitive with respect to the corresponding isomorphic Archimedean t-norms, TL,h (x, y) = h(TL (h−1 (x), h−1 (x))) and TP,h (x, y) = h(TP (h−1 (x), h−1 (x))), respectively. 2612 G ENERATING K ERNELS BY F UZZY R ELATIONS Corollary 12 Let X be a non-empty universe of discourse, µ i : X → [0, 1], λi ∈ ]0, 1] with ∑i λi = 1 ˜ ˜ where i ∈ {1, . . . , n}, n ∈ N, then the fuzzy equivalence relations EL , EP : X × X → [0, 1] given by n ↔ ˜ EL (x, y) = ∑ λi T L (µi (x), µi (y)) (16) i=1 and n ↔ ˜ EP (x, y) = ∏ T P (µi (x), µi (y)) λi (17) i=1 are TL - and TP -equivalences, respectively, and kernels. Proof. First of all, let us check the TL -transitivity of formula (16). This can readily be shown by ↔ means of the definition of TL and the TL -transitivity of T L due to the following inequalities: n TL n ↔ i=1 n n ↔ ↔ n ↔ ↔ ∑ λi T L (µi (x), µi (y)) + ∑ λi T L (µi (y), µi (z)) − 1 , 0 i=1 i=1 n max = i=1 i=1 n = i=1 ∑ λi T L (µi (x), µi (y)) + ∑ λi T L (µi (y), µi (z)) − 1, 0 max max ↔ ∑ λi T L (µi (x), µi (y)), ∑ λi T L (µi (y), µi (yz) n ↔ ↔ ∑ λi TL T L (µi (x), µi (y)), ∑ λi T L (µi (y), µi (z)) , 0 i=1 i=1 n max ↔ ∑ λi T L (µi (x), µi (z)), 0 ≤ ≤ = i=1 ↔ λi T L (µi (x), µi (z)). ↔ This, together with the TP -transitivity of T P , proves that the formulas given by (16) and (17) are TL and TP -equivalences, respectively. 
Expanding the factors of formula (17) yields  1 if µi (x) = µi (y) = 0, λi ↔ λi λi (18) T P (µi (x), µi (y)) =  min(µiλi(x),µiλi(y)) else max(µi (x),µi (y)) which by comparing case TP of the proof of Theorem 11 shows that the left-hand side of Equation (18) is positive-semidefinite. As the convex combination and the product are special cases of positive-semidefiniteness preserving functions according to Theorem 1, the functions defined by equations (16) and (17) prove to be again positive-semidefinite and, therefore, kernels. It is interesting to observe that both formulas (16) and (17) can be expressed in the form, f ( τ(x) − τ(y) 1 ), where f : I → [0, 1], I some interval, is a strictly decreasing function, τ : X → I n , I some interval, τ(x) = (τ1 (x), . . . , τn (x)) and τ(x) 1 = ∑n |τi (x)|. Indeed, for Equation (16) let us define i=1 fL : [0, 1] → [0, 1], fL (a) = 1 − a τL : X → [0, 1] , τL (x) = (λ1 µ1 (x), . . . , λn µn (x)) n 2613 M OSER and for Equation (17) and positive membership functions µ i , µi (x) > 0 for all x ∈ X , let us define fP : [0, ∞[→ [0, 1], fP (a) = e−a τP : X → ] − ∞, 1]n , τP (x) = (λ1 ln(µ1 (x)), . . . , λn ln(µn (x))) Therefore, we get ˜ EL (x, y) = 1 − τL (x) − τL (y) ˜ EP (x, y) = e− τP (x)−τP (y) 1 . 1 (19) (20) While formulas (19) and (20) provide a geometrical interpretation by means of the norm . 1 , the corresponding formulas (16) and (17) yield a semantical model of the assertion “IF x is equal to y with respect to feature µ1 AND . . . AND x is equal to y with respect to feature µn THEN x is equal to y” as aggregation of biimplications in terms of fuzzy logic. While in the former case, the aggregation has some compensatory effect, the latter is just a conjunction in terms of the Product triangular norm. For details on aggregation operators see, for example, Saminger et al. (2002) and Calvo et al. (2002). The formulas (16) and (17) coincide for the following special case. If the membership functions µi are indicator functions of sets Ai ⊆ X which form a partition of X , then the kernels (16) and (17) reduce to the indicator function characterizing the Boolean equivalence relation induced by this partition {A1 , . . . , An }. The formulas (16) and (17) for general membership functions therefore provide kernels which can be interpreted to be induced by a family of fuzzy sets and, in particular, by fuzzy partitions, that is, families of fuzzy sets fulfilling some criteria which extend the axioms for a Boolean partition in a many-valued logical sense. For definitions and further details on fuzzy partitions see, for ¨ example, De Baets and Mesiar (1998), Demirci (2003) and H oppner and Klawonn (2003). It is a frequently used paradigm that the decision boundaries for a classification problem lie between clusters rather than intersecting them. Due to this cluster hypothesis, the problem of designing kernels based on fuzzy partitions is closely related to the problem of learning kernels from unlabeled data. For further details on semi-supervised learning see, for example, Seeger (2002), Chapelle et al. (2003) and T. M. Huang (2006). It is left to future research to explore this relationship to the problem of learning from labeled and unlabeled data and related concepts like covariance kernels. 5. Conclusion In this paper, we have presented a novel view on kernels from a fuzzy logical point of view. 
Particularly, the similarity-measure aspect of a kernel is addressed and investigated by means of the so-called T -transitivity which is characteristic for fuzzy equivalence relations. As a consequence, we derived that a large class of kernels can be represented in a way that is commonly used for representing fuzzy rule bases. In addition to this proof for the existence of such a representation, constructive examples are presented. It is the idea of this research to look for a combination of knowledge-based strategies with kernel-based methods in order to facilitate a more flexible designing process of kernels which also allows to incorporate prior knowledge. Further research aims at 2614 G ENERATING K ERNELS BY F UZZY R ELATIONS analyzing the behavior of kernels constructed in this way when applied in the various kernel methods like support vector machines, kernel principal components analysis and others. In particular, it is intended to focus on the problem of learning kernels from unlabeled data where the fuzzy partitions are induced by appropriate clustering principles. Acknowledgments Bernhard Moser gratefully acknowledges partial support by the Austrian Government, the State of Upper Austria, and the Johannes Kepler University Linz in the framework of the Kplus Competence Center Program. Furthermore special thanks go to the anonymous reviewers who gave helpful suggestions and to Felix Kossak for careful proof-reading. Appendix A. For sake of completeness the following sections provide proofs regarding Theorem 8, the characterization of kernels in the Boolean case and the construction of kernels by means of the minimum t-norm TM . Furthermore, in Section A.4 an example of a non-positive-semidefinite Tcos -equivalence is given. A.1 Proof of Theorem 8 Let us start with the analysis of 3-dimensional matrices. Lemma 13 Let M = (mi j )i j ∈ [0, 1]3×3 be a 3 × 3 symmetric matrix with mii = 1, i = 1, 2, 3; then M is positive-semidefinite iff for all i, j, k ∈ {1, 2, 3} there holds mi j m jk − 1 − m2j i 1 − m2 ≤ mik jk Proof. For simplicity, let a = m1,2 , b = m1,3 and c = m2,3 . Then the determinant of M, Det(M), is a function of the variables a, b, c given by D(a, b, c) = 1 + 2abc − a2 − b2 − c2 . For any choice of a, b, the quadratic equation D(a, b, c) = 0 can be solved for c, yielding two solutions c1 = c1 (a, b) and c2 = c2 (a, b) as functions of a and b, c1 (a, b) = ab − c2 (a, b) = ab + 1 − a2 1 − a2 1 − b2 1 − b2 . Obviously, for all |a| ≤ 1 and |b| ≤ 1, the values c1 (a, b) and c2 (a, b) are real. By substituting a = cos α and b = cos(β) with α, β ∈ [0, π ], it becomes readily clear that 2 c1 (a, b) = c1 (cos(α), cos(β)) = cos(α) cos(β) − sin(α) sin(β) = cos(α + β) ∈ [−1, 1] 2615 M OSER and, analogously, c2 (a, b) = c2 (cos(α), cos(β)) = cos(α) cos(β) + sin(α) sin(β) = cos(α − β) ∈ [−1, 1]. As for all a, b ∈ [−1, 1] the determinant function Da,b (c) := D(a, b, c) is quadratic in c with negative coefficient for c2 , there is a uniquely determined maximum at c0 (a, b) = ab. Note that for all a, b ∈ [−1, 1], we have c1 (a, b) ≤ c0 (a, b) ≤ c2 (a, b) and D(a, b, c0 (a, b)) = 1 + 2ab(ab) − a2 − b2 − (ab)2 = (1 − a2 )(1 − b2 ) ≥ 0. Therefore, D(a, b, c) ≥ 0 if and only if c ∈ [c1 (a, b), c2 (a, b)]. Recall from linear algebra that by renaming the indices, the determinant does not change. Therefore, without loss of generality, we may assume that a ≥ b ≥ c. For convenience, let Q = {(x, y, z) ∈ [0, 1]3 |x ≥ y ≥ z}. 
Then, obviously, for any choice of a, b ∈ [0, 1] there holds (a, b, c1 (a, b)) ∈ Q. Elementary algebra shows that (a, b, c2 (a, b)) ∈ Q is only the case for a = b = 1. As for a = b = 1 the two solutions c1 , c2 coincide, that is, c1 (1, 1) = c2 (1, 1) = 1, it follows that for any choice of (a, b, c) ∈ Q, there holds D(a, b, c) ≥ 0 if and only if c1 (a, b) = ab − 1 − a2 1 − b2 ≤ c. (21) If (a, b, c) ∈ Q, then the inequality (21) is trivially satisfied which together with (21) proves the lemma Now Theorem 8 immediately follows from Definition (1), Lemma (13) and the characterizing inequality (21). A.2 Characterization of Kernels in the Boolean Case ¨ The following lemma and proposition can also be found as an exercise in Sch olkopf and Smola (2002). Lemma 14 Let ∼ be an equivalence relation on X and let k : X × X → {0, 1} be induced by ∼ via k(x, y) = 1 if and only if x ∼ y; then k is a kernel. Proof. By definition of positive-definiteness, let us consider an arbitrary sequence of elements x1 , . . . , xn . Then there are at most n equivalence classes Q1 , . . . , Qm on the set of indices {1, . . . , n}, S / m ≤ n, where i=1,...,m Qi = {1, . . . , n} and Qi ∩ Q j = 0 for i = j. Note that k(xi , x j ) = 0 if the indices 2616 G ENERATING K ERNELS BY F UZZY R ELATIONS i, j belong to different equivalence classes. Then, for any choice of reals c 1 , . . . , cn , we obtain ∑ ci c j k(xi , x j ) m = i, j ∑ ∑ ci c j k(xi , x j ) p=1 i, j∈Q p m = ∑ ∑ p=1 i, j∈Q p ci c j · 1 2 m = ∑ ∑ ci p=1 i∈Q p ≥ 0 Proposition 15 k : X × X → {0, 1} with k(x, x) = 1 for all x ∈ X is a kernel if and only if it is induced by an equivalence relation. Proof. It only remains to be shown that if k is a kernel, then it is the indicator function of an equivalence relation, that is, it is induced by an equivalence relation. If k is a kernel, according to Lemma 13, for all x, y, z ∈ X , it has to satisfy Tcos (k(x, y), k(y, z)) ≤ k(x, z), which implies, k(x, y) = 1, k(y, z) = 1 =⇒ k(x, z) = 1. Obviously, we have k(x, x) = 1 and k(x, y) = k(y, x) due to the reflexivity and symmetry assumption of k, respectively. A.3 Constructing Kernels by TM For convenience let us recall the basic notion of an α-cut from fuzzy set theory: Definition 16 Let X be a non-empty set and µ : X → [0, 1]; then [µ]α = {x ∈ X | µ(x) ≥ α} is called the α-cut of the membership function µ. Lemma 17 k : X × X → [0, 1] is a TM -equivalence if and only if all α-cuts of k are Boolean equivalence relations. Proof. (i) Let us assume that k is a TM -equivalence. Let α ∈ [0, 1], then by definition, [k]α = {(x, y) ∈ X × X | k(x, y) ≥ α}. In order to show that [k]α is a Boolean equivalence, the axioms for reflexivity, symmetry and transitivity have to be shown. Reflexivity and symmetry are trivially satisfied as for all x, y ∈ X , there holds by assumption that k(x, x) = 1 and k(x, y) = k(y, x). In order to show transitivity, let us consider (x, y), (y, z) ∈ [k]α , that means k(x, y) ≥ α and k(y, z) ≥ α; then by the TM -transitivity assumption it follows that α ≤ min(k(x, y), k(y, z)) ≤ k(x, z), hence (x, z) ∈ [k]α . 2617 M OSER (ii) Suppose now that all α-cuts of k are Boolean equivalence relations. Then, in particular, [k] α with α = 1 is reflexive, hence k(x, x) = 1 for all x ∈ X . The symmetry of k follows from the fact that for all α ∈ [0, 1] and pairs (x, y) ∈ [k]α , by assumption, we have (y, x) ∈ [k]α . In order to show the TM -transitivity property, let us consider arbitrarily chosen elements x, y, z ∈ X . 
Let α = min(k(x, y), k(y, z)); then by the transitivity assumption of [k] α , it follows that (x, z) ∈ [k]α , consequently k(x, z) ≥ α = min(k(x, y), k(y, z)). Proposition 18 If k : X × X → [0, 1] is a TM -equivalence then it is positive-semidefinite. Proof. Choose arbitrary elements x1 , . . . , xn ∈ X and consider the set of values which are taken by all combinations k(xi , x j ), i, j ∈ {1, . . . , n} and order them increasingly, that is k(xi , x j )| i, j ∈ {1, . . . , n}} = {α1 , . . . , αm , where 0 ≤ α1 ≤ · · · αm ≤ 1. Observe that for all pairs (xi , x j ), i, j ∈ {1, . . . , n} there holds m k(xi , x j ) = ∑ (αv − αv−1 )1[k] αv v=2 (xi , x j ) + α1 1[k]α1 (xi , x j ) showing that on the set {x1 , . . . , xn } × {x1 , . . . , xn }, the function k is a linear combination of indicator functions of Boolean equivalences (which are positive-semidefinite by Proposition 15) with nonnegative coefficients and, consequently, it has to be positive semidefinite. A.4 Example of a Non-Positive-Semidefinite Tcos -Equivalence For dimensions n > 3, the Tcos -transitivity is no longer sufficient to guarantee positive semi(n) definiteness. Consider, for example An = (ai j )i j where  λ  (n) ai j = 1   0 if min(i, j) = 1, max(i, j) > 1 , if i = j, else . (22) √ (n) (n) (n) Choose λ = 1/ 2, then Tcos (λ, λ) = 0, hence we have Tcos (ai j , a jk ) ≤ aik for all indices i, j, k ∈ 1, . . . , n. As det(An ) < 0 for n > 3, the matrix An cannot be positive-semidefinite though the Tcos transitivity conditions are satisfied. References S. Bochner. Harmonic Analysis and the Theory of Probability. University of California Press, Los Angeles, California, 1955. U. Bodenhofer. A note on approximate equality versus the Poincar´ paradox. Fuzzy Sets and e Systems, 133(2):155–160, 2003. 2618 G ENERATING K ERNELS BY F UZZY R ELATIONS D. Boixader and J. Jacas. T -indistinguishability operators and approximate reasoning via CRI. In D. Dubois, E. P. Klement, and H. Prade, editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 255–268. Kluwer Academic Publishers, Dordrecht, 1999. A. Pinkus C. H. FitzGerald, C.A. Micchelli. Functions that preserve families of positive semidefinite matrices. Linear Alg. and Appl., 221:83–102, 1995. T. Calvo, G. Mayor, and R. Mesiar, editors. Aggregation Operators, volume 97 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, 2002. ¨ O. Chapelle, J. Weston, and B. Scholkopf. Cluster kernels for semi-supervised learning. volume 15 of NIPS. 2003. B. De Baets and R. Mesiar. T -partitions. Fuzzy Sets and Systems, 97:211–223, 1998. M. Demirci. On many-valued partitions and many-valued equivalence relations. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 11(2):235–253, 2003. D. Dubois and H. Prade. A review of fuzzy set aggregation connectives. Inform. Sci., 36:85–121, 1985. M. G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299–312, 2001. S. Gottwald. Fuzzy set theory with t-norms and Φ-operators. In A. Di Nola and A. G. S. Ventre, editors, The Mathematics of Fuzzy Systems, volume 88 of Interdisciplinary Systems Research, ¨ pages 143–195. Verlag TUV Rheinland, K¨ ln, 1986. o S. Gottwald. Fuzzy Sets and Fuzzy Logic. Vieweg, Braunschweig, 1993. U. H¨ hle. Fuzzy equalities and indistinguishability. In Proc. 1st European Congress on Fuzzy and o Intelligent Technologies, volume 1, pages 358–363, Aachen, 1993. U. H¨ hle. 
The Poincar´ paradox and non-classical logics. In D. Dubois, E. P. Klement, and H. Prade, o e editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 7–16. Kluwer Academic Publishers, Dordrecht, 1999. F. H¨ ppner and F. Klawonn. Improved fuzzy partitions for fuzzy regression models. Internat. J. o Approx. Reason., 32:85–102, 2003. F. H¨ ppner, F. Klawonn, and P. Eklund. Learning indistinguishability from data. Soft Computing, 6 o (1):6–13, 2002. J. Jacas. On the generators of T -indistinguishability operators. Stochastica, 12:49–63, 1988. I. T. Jolliffe. Principal Component Analysis. Springer Verlag, New York, 1986. E. P. Klement, R. Mesiar, and E. Pap. Triangular Norms, volume 8 of Trends in Logic. Kluwer Academic Publishers, Dordrecht, 2000. 2619 M OSER R. Kruse, J. Gebhardt, and F. Klawonn. Fuzzy-Systeme. B. G. Teubner, Stuttgart, 1993. R. Kruse, J. Gebhardt, and F. Klawonn. Foundations of Fuzzy Systems. John Wiley & Sons, New York, 1994. C. H. Ling. Representation of associative functions. Publ. Math. Debrecen, 12:189–212, 1965. B. Moser. On the t-transitivity of kernels. Fuzzy Sets and Systems, 157:1787–1796, 2006. B. Moser. A New Approach for Representing Control Surfaces by Fuzzy Rule Bases. PhD thesis, Johannes Kepler Universit¨ t Linz, October 1995. a T. Muir. A Treatise on the Theory of Determinants. Dover, New York, 1960. H. Poincar´ . La Science et l’Hypoth´ se. Flammarion, Paris, 1902. e e H. Poincar´ . La Valeur de la Science. Flammarion, Paris, 1904. e S. Saminger, R. Mesiar, and U. Bodenhofer. Domination of aggregation operators and preservation of transitivity. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 10(Suppl.):11–35, 2002. B. Sch¨ lkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, 2002. o ¨ B. Sch¨ lkopf, A. J. Smola, and K. R. Muller. Nonlinear component analysis as a kernel eigenvalue o problem. Neural Computation, 10:1299–1319, 1998. B. Schweizer and A. Sklar. Associative functions and statistical triangle inequalities. Publ. Math. Debrecen, 8:169–186, 1961. B. Schweizer and A. Sklar. Probabilistic Metric Spaces. North-Holland, Amsterdam, 1983. M. Seeger. Covariance kernels from bayesian generative models. Neural Information Processing Systems, 14:905–912, 2002. I. Kopriva T. M. Huang, V. Kecman. Kernel Based Algorithms for Mining Huge Data Sets, Supervised, Semi-supervised, and Unsupervised Learning. Springer-Verlag, Berlin, 2006. E. Trillas and L. Valverde. An inquiry into indistinguishability operators. In H. J. Skala, S. Termini, and E. Trillas, editors, Aspects of Vagueness, pages 231–256. Reidel, Dordrecht, 1984. E. Trillas, S. Cubillo, and E. Casti˜ eira. Menger and Ovchinnikov on indistinguishabilities revisited. n Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 7(3):213–218, 1999. L. Valverde. On the structure of F-indistinguishability operators. Fuzzy Sets and Systems, 17(3): 313–328, 1985. A. M. Yaglom. Some classes of random fields in n-dimensional space, related to stationary random processes. Theory of Probability and its Applications, 2:273–320, 1957. L. A. Zadeh. Similarity relations and fuzzy orderings. Inform. Sci., 3:177–200, 1971. 2620

2 0.39622304 42 jmlr-2006-Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting     (Special Topic on Inductive Programming)

Author: Andrea Passerini, Paolo Frasconi, Luc De Raedt

Abstract: We develop kernels for measuring the similarity between relational instances using background knowledge expressed in first-order logic. The method allows us to bridge the gap between traditional inductive logic programming (ILP) representations and statistical approaches to supervised learning. Logic programs are first used to generate proofs of given visitor programs that use predicates declared in the available background knowledge. A kernel is then defined over pairs of proof trees. The method can be used for supervised learning tasks and is suitable for classification as well as regression. We report positive empirical results on Bongard-like and M-of-N problems that are difficult or impossible to solve with traditional ILP techniques, as well as on real bioinformatics and chemoinformatics data sets. Keywords: kernel methods, inductive logic programming, Prolog, learning from program traces

3 0.3091827 93 jmlr-2006-Universal Kernels

Author: Charles A. Micchelli, Yuesheng Xu, Haizhang Zhang

Abstract: In this paper we investigate conditions on the features of a continuous kernel so that it may approximate an arbitrary continuous target function uniformly on any compact subset of the input space. A number of concrete examples are given of kernels with this universal approximating property. Keywords: density, translation invariant kernels, radial kernels

4 0.29271027 2 jmlr-2006-A Graphical Representation of Equivalence Classes of AMP Chain Graphs

Author: Alberto Roverato, Milan Studený

Abstract: This paper deals with chain graph models under alternative AMP interpretation. A new representative of an AMP Markov equivalence class, called the largest deflagged graph, is proposed. The representative is based on revealed internal structure of the AMP Markov equivalence class. More specifically, the AMP Markov equivalence class decomposes into finer strong equivalence classes and there exists a distinguished strong equivalence class among those forming the AMP Markov equivalence class. The largest deflagged graph is the largest chain graph in that distinguished strong equivalence class. A composed graphical procedure to get the largest deflagged graph on the basis of any AMP Markov equivalent chain graph is presented. In general, the largest deflagged graph differs from the AMP essential graph, which is another representative of the AMP Markov equivalence class. Keywords: chain graph, AMP Markov equivalence, strong equivalence, largest deflagged graph, component merging procedure, deflagging procedure, essential graph

5 0.2612052 13 jmlr-2006-Adaptive Prototype Learning Algorithms: Theoretical and Experimental Studies

Author: Fu Chang, Chin-Chin Lin, Chi-Jen Lu

Abstract: In this paper, we propose a number of adaptive prototype learning (APL) algorithms. They employ the same algorithmic scheme to determine the number and location of prototypes, but differ in the use of samples or the weighted averages of samples as prototypes, and also in the assumption of distance measures. To understand these algorithms from a theoretical viewpoint, we address their convergence properties, as well as their consistency under certain conditions. We also present a soft version of APL, in which a non-zero training error is allowed in order to enhance the generalization power of the resultant classifier. Applying the proposed algorithms to twelve UCI benchmark data sets, we demonstrate that they outperform many instance-based learning algorithms, the k-nearest neighbor rule, and support vector machines in terms of average test accuracy. Keywords: adaptive prototype learning, cluster-based prototypes, consistency, instance-based prototype, pattern classification 1

6 0.21739703 43 jmlr-2006-Large Scale Multiple Kernel Learning     (Special Topic on Machine Learning and Optimization)

7 0.16487524 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization

8 0.14862013 58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation

9 0.14680037 46 jmlr-2006-Learning Factor Graphs in Polynomial Time and Sample Complexity

10 0.12997128 38 jmlr-2006-Incremental Support Vector Learning: Analysis, Implementation and Applications     (Special Topic on Machine Learning and Optimization)

11 0.11794332 90 jmlr-2006-Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition

12 0.11288557 68 jmlr-2006-On the Complexity of Learning Lexicographic Strategies

13 0.1120056 60 jmlr-2006-Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples

14 0.10771843 31 jmlr-2006-Exact 1-Norm Support Vector Machines Via Unconstrained Convex Differentiable Minimization     (Special Topic on Machine Learning and Optimization)

15 0.10714584 92 jmlr-2006-Toward Attribute Efficient Learning of Decision Lists and Parities

16 0.10326799 52 jmlr-2006-Learning Spectral Clustering, With Application To Speech Separation

17 0.099364154 16 jmlr-2006-Bounds for Linear Multi-Task Learning

18 0.096165299 71 jmlr-2006-Optimising Kernel Parameters and Regularisation Coefficients for Non-linear Discriminant Analysis

19 0.094678149 12 jmlr-2006-Active Learning with Feedback on Features and Instances

20 0.091606066 57 jmlr-2006-Linear State-Space Models for Blind Source Separation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(8, 0.012), (36, 0.04), (45, 0.012), (50, 0.69), (63, 0.024), (78, 0.011), (81, 0.018), (84, 0.01), (90, 0.044), (91, 0.013), (96, 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98402464 67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations

Author: Bernhard Moser

Abstract: Kernels are two-placed functions that can be interpreted as inner products in some Hilbert space. It is this property which makes kernels predestinated to carry linear models of learning, optimization or classification strategies over to non-linear variants. Following this idea, various kernel-based methods like support vector machines or kernel principal component analysis have been conceived which prove to be successful for machine learning, data mining and computer vision applications. When applying a kernel-based method a central question is the choice and the design of the kernel function. This paper provides a novel view on kernels based on fuzzy-logical concepts which allows to incorporate prior knowledge in the design process. It is demonstrated that kernels mapping to the unit interval with constant one in its diagonal can be represented by a commonly used fuzzylogical formula for representing fuzzy rule bases. This means that a great class of kernels can be represented by fuzzy-logical concepts. Apart from this result, which only guarantees the existence of such a representation, constructive examples are presented and the relation to unlabeled learning is pointed out. Keywords: kernel, triangular norm, T -transitivity, fuzzy relation, residuum 1. Motivation Positive-definiteness plays a prominent role especially in optimization and machine learning due to the fact that two-place functions with this property, so-called kernels, can be represented as inner products in some Hilbert space. Thereby, optimization techniques conceived on the basis of linear models can be extended to non-linear algorithms. For a survey of applications see, for example, ¨ Jolliffe (1986), Sch¨ lkopf and Smola (2002) and Scholkopf et al. (1998). o Recently in Moser (2006) it was shown that kernels with values from the unit interval can be interpreted as fuzzy equivalence relations motivated by the idea that kernels express a kind of similarity. This means that the concept of fuzzy equivalence relations, or synonymously fuzzy similarity relations, is more general than that of kernels, provided only values in the unit interval are considered. Fuzzy equivalence relations distinguish from Boolean equivalence relations by a many-valued extension of transitivity which can be interpreted as many-valued logical model of the statement “IF x is similar to y AND y is similar to z THEN x is similar to z”. In contrast to the Boolean case, in many-valued logics the set of truth values is extended such that also assertions, for example, whether two elements x and y are similar, can be treated as a matter of degree. The standard model for the set of (quasi) truth values of fuzzy logic and other many-valued logical systems is the unit interval. If E(x, y) represents the (quasi) truth value of the statement that x is c 2006 Bernhard Moser. M OSER similar to y, then the many-valued version of transitivity is modeled by T (E(x, y), E(y, z)) ≤ E(x, z) where T is a so-called triangular norm which is an extension of the Boolean conjunction. This many-valued concept for transitivity is called T -transitivity. For a survey on triangular norms see, for example, Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000), ¨ and for fuzzy equivalence relations and T -transitivity see, for example, Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. (2000), and Zadeh (1971). 
o Based on the semantics of fuzzy logic, this approach allows to incorporate knowledge-based models for the design of kernels. From this perspective, the most interesting mathematical question is how positive-semidefinite fuzzy equivalence relations can be characterized or at least constructed under some circumstances. At least for some special cases, proofs are provided in Section 4, which motivate further research aiming at establishing a more general theory on the positive-definiteness of fuzzy equivalence relations. These cases are based on the most prominent representatives of triangular norms, that is the Minimum, the Product and the Łukasiewicz t-norm. The paper is structured as follows. First of all, in Section 2, some basic prerequisites concerning kernels and fuzzy relations are outlined. In Section 3, a result about the T -transitivity of kernels from Moser (2006) is cited and interpreted as existence statement that guarantees a representation of kernels mapping to the unit interval with constant 1 in its diagonal by a certain, commonly used, fuzzy-logical construction of a fuzzy equivalence relation. Finally, in contrast to the pure existence theorem of Section 3, in Section 4 constructive examples of fuzzy equivalence relations are provided which are proven to be kernels. In a concluding remark, the relationship to the problem of labeled and unlabeled learning is pointed out. 2. Prerequisites This section summarizes definitions and facts from the theory of kernels as well as from fuzzy set theory which are needed later on. 2.1 Kernels and Positive-Semidefiniteness Preserving Functions There is an extensive literature concerning kernels and kernel-based methods like support vector machines or kernel principal component analysis especially in the machine learning, data mining ¨ and computer vision communities. For an overview and introduction, see, for example, Sch olkopf and Smola (2002). Here we present only what is needed later on. For completeness let us recall the basic definition for kernels and positive-semidefiniteness. Definition 1 Let X be a non-empty set. A real-valued function k : X × X → R is said to be a kernel iff it is symmetric, that is, k(x, y) = k(y, x) for all x, y ∈ X , and positive-semidefinite, that is, ∑n j=1 ci c j k(xi , x j ) ≥ 0 for any n ∈ N, any choice of x1 , . . . , xn ∈ X and any choice of c1 , . . . , cn ∈ R. i, One way to generate new kernels from known kernels is to apply operations which preserve the positive-semidefiniteness property. A characterization of such operations is provided by C. H. FitzGerald (1995). Theorem 2 (Closeness Properties of Kernels) Let f : Rn → R, n ∈ N, then k : X × X → R given by k(x, y) := f (k1 (x, y), . . . , kn (x, y)) 2604 G ENERATING K ERNELS BY F UZZY R ELATIONS is a kernel for any choice of kernels k1 , . . . , kn on X × X iff f is the real restriction of an entire function on Cn of the form f (x1 , . . . , xn ) = ∑ r1 ≥0,...,rn ≥0 r r cr1 ,...,rn x11 · · · xnn (1) where cr1 ,...,rn ≥ 0 for all nonnegative indices r1 , . . . , rn . 2.2 Triangular Norms Triangular norms have been originally studied within the framework of probabilistic metric spaces, see Schweizer and Sklar (1961) and Schweizer and Sklar (1983). In this context, t-norms proved to be an appropriate concept when dealing with triangle inequalities. 
Later on, t-norms and their dual version, t-conorms, have been used to model conjunction and disjunction for many-valued logic, see Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000). Definition 3 A function T : [0, 1]2 → [0, 1] is called t-norm (triangular norm), if it satisfies the following conditions: (i) (ii) (iii) (iv) ∀x, y ∈ [0, 1] : ∀x, y, z ∈ [0, 1] : ∀x, y, z ∈ [0, 1] : ∀x, y ∈ [0, 1] : T (x, y) = T (y, x) T (x, T (y, z)) = T (T (x, y), z) y ≤ z =⇒ T (x, y) ≤ T (x, z) T (x, 1) = x ∧ T (1, y) = y (commutativity) (associativity) (monotonicity) (boundary condition) Further, a t-norm is called Archimedean if it is continuous and satisfies x ∈ (0, 1) ⇒ T (x, x) < x. Due to its associativity, many-placed extensions Tn : [0, 1]n → [0, 1], n ∈ N, of a t-norm T are uniquely determined by Tn (x1 , . . . , xn ) = T (x1 , Tn−1 (x2 , . . . , xn )). Archimedean t-norms are characterized by the following representation theorem due to Ling (1965): Theorem 4 Let T : [0, 1]2 → [0, 1] be a t-norm. Then T is Archimedean if, and only if, there is a continuous, strictly decreasing function f : [0, 1] → [0, ∞] with f (1) = 0 such that for x, y ∈ [0, 1], T (x, y) = f −1 (min( f (x) + f (y), f (0))). By setting g(x) = exp (− f (x)), Ling’s characterization yields an alternative representation with a multiplicative generator function T (x, y) = g−1 (max(g(x) g(y), g(0))). For g(x) = x we get the product TP (x, y) = x y. The setting f (x) = 1 − x yields the so-called Łukasiewcz t-norm TL (x, y) = max(x + y − 1, 0). Due to Ling’s theorem 4 an Archimedean t-norm T is isomorphic either to TL or TP , depending on whether the additive generator takes a finite value at 0 or not. In the former case, the Archimedean t-norm is called non-strict, in the latter it is called strict. 2605 M OSER A many-valued model of an implication is provided by the so-called residuum given by → T (a, b) = sup{c ∈ [0, 1]|T (a, c) ≤ b} (2) where T is a left-continuous t-norm. Equation (2) is uniquely determined by the so-called adjunction property → ∀a, b, c ∈ [0, 1] : T (a, b) ≤ c ⇔ a ≤ T (b, c). Consequently, the operator ↔ → → T (a, b) = min T (a, b), T (b, a) (3) (4) models a biimplication. For details, for example, see Gottwald (1986) and Klement et al. (2000). → Tables 1 and 2 list examples of t-norms with their induced residuum T . For further examples see, for example, Klement et al. (2000). √ √ Tcos (a, b) = max(ab − 1 − a2 1 − b2 , 0) TL (a, b) = max(a + b − 1, 0) TP (a, b) = ab TM (a, b) = min(a, b) Table 1: Examples of t-norms → T cos (a, b) = → T L (a, b) = → = T P (a, b) → T M (a, b) = cos(arccos(b) − arccos(a)) if a > b, 1 else min(b − a + 1, 1) b if a > b, a 1 else b if a > b, 1 else Table 2: Examples of residuums 2.3 T -Equivalences If we want to classify based on a notion of similarity or indistinguishability, we face the problem of transitivity. For instance, let us consider two real numbers to be indistinguishable if and only if they differ by at most a certain bound ε > 0, this is modeled by the relation ∼ ε given by x ∼ε y :⇔ |x−y| < ε, ε > 0, x, y ∈ R. Note that the relation ∼ε is not transitive and, therefore, not an equivalence relation. The transitivity requirement turns out to be too strong for this example. The problem of identification and transitivity in the context of similarity of physical objects was early pointed out and discussed philosophically by Poincar´ (1902) and Poincar´ (1904). 
In the framework of fuzzy e e logic, the way to overcome this problem is to model similarity by fuzzy relations based on a many¨ valued concept of transitivity, see Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. o (2000) and Zadeh (1971). 2606 G ENERATING K ERNELS BY F UZZY R ELATIONS Definition 5 A function E : X 2 −→ [0, 1] is called a fuzzy equivalence relation, or synonymously, T -equivalence with respect to the t-norm T if it satisfies the following conditions: (i) ∀x ∈ X : E(x, x) = 1 (reflexivity) (ii) ∀x, y ∈ X : E(x, y) = E(y, x) (symmetry) (iii) ∀x, y, z ∈ X : T (E(x, y), E(y, z)) ≤ E(x, z) (T-transitivity). The value E(x, y) can be also looked at as the (quasi) truth value of the statement “x is equal to y”. Following this semantics, T-transitivity can be seen as a many-valued model of the proposition, “If x is equal to y and y is equal to z, then x is equal to z”. T -equivalences for Archimedean t-norms are closely related to metrics and pseudo-metrics as shown by Klement et al. (2000) and Moser (1995). Theorem 6 Let T be an Archimedean t-norm given by ∀a, b ∈ [0, 1] : T (a, b) = f −1 (min( f (a) + f (b), f (0))), where f : [0, 1] → [0, ∞] is a strictly decreasing, continuous function with f (1) = 0. (i) If d : X 2 → [0, ∞[ is a pseudo-metric, then the function Ed : X 2 → [0, 1] defined by Ed (x, y) = f −1 (min(d(x, y), f (0))) is a T -equivalence with respect to the t-norm T . (ii) If E : X 2 → [0, 1] is a T -equivalence relation, then the function dE : X 2 → [0, ∞] defined by dE (x, y) = f (E(x, y)) is a pseudo-metric. → Another way to construct T -equivalences is to employ T -operators. The proof of the following assertion can be found in Trillas and Valverde (1984), Kruse et al. (1993) and Kruse et al. (1994). ↔ Theorem 7 Let T be a left-continuous t-norm, T its induced biimplication, µi : X → [0, 1], i ∈ I, I non-empty; then E : X × X → [0, 1] given by ↔ E(x, y) = inf T (µi (x), µi (y)) i∈I (5) is a T -equivalence relation. ¨ For further details on T -equivalences see also Boixader and Jacas (1999), H oppner et al. (2002), Jacas (1988), Trillas et al. (1999) and Valverde (1985). 3. Representing Kernels by T -Equivalences It is interesting that the concept of kernels, which is motivated by geometric reasoning in terms of inner products and mappings to Hilbert spaces and which is inherently formulated by algebraic terms, is closely related to the concept of fuzzy equivalence relations as demonstrated and discussed in more detail in Moser (2006). In this section, we start with the result that any kernel k : X × X → [0, 1] with k(x, x) = 1 for all x ∈ X is T -transitive and, therefore, a fuzzy equivalence relation. The proof can be found in Moser (2006), see also Appendix A.1. 2607 M OSER Theorem 8 Any kernel k : X × X → [0, 1] with k(x, x) = 1 is (at least) Tcos -transitive, where 1 − a2 Tcos (a, b) = max{a b − 1 − b2 , 0}. (6) The nomenclature is motivated by the fact that the triangular norm defined by Equation (6) is an Archimedean t-norm which is generated by the arcosine function as its additive generator. From this result, the following existence theorem can be derived, which guarantees that any kernel under consideration can be represented by the fuzzy-logical formula given by (5). In fuzzy systems, this formula is commonly used for modeling rule bases (see, for example, Kruse et al., 1993, 1994). 
Theorem 9 Let X be a non-empty universe of discourse, k : X × X → [0, 1] a kernel in the sense of Definition 1 and k(x, x) = 1 for all x ∈ X ; then there is a family of membership functions µ i : X → [0, 1], i ∈ I, I non-empty and a t-norm T , such that ↔ ∀x, y ∈ X : k(x, y) = inf T (µi (x), µi (y)). i∈I (7) Proof. Let us set I := X , µx0 (x) = k(x, x0 ) and let us choose Tcos as t-norm. For convenience let us denote ↔ h(x, y) = inf T cos (µx0 (x), µx0 (y)), x0 ∈X which is equivalent to ↔ h(x, y) = inf T cos (k(x0 , x), k(x0 , y)). x0 ∈X According to Theorem 8, k is Tcos -transitive, that is, ↔ ∀x0 , x, y ∈ X : T cos (k(x0 , x), k(x0 , y)) ≤ k(x, y). This implies that h(x, y) ≤ k(x, y) for all x, y ∈ X . Now let us consider the other inequality. Due to the adjunction property (3), we obtain → Tcos (k(x, y), k(x0 , y)) ≤ k(x, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , y), k(x, x0 )) and → Tcos (k(x, y), k(x0 , x)) ≤ k(y, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , x), k(y, x0 )), from which it follows that → → ∀x, y, x0 ∈ X : k(x, y) ≤ min{ T cos (k(x0 , y), k(x, x0 )), T cos (k(x0 , x), k(y, x0 ))}. Hence by Definition 4, ∀x, y ∈ X : k(x, y) ≤ h(x, y) which ends the proof. For an arbitrary choice of fuzzy membership functions, there is no necessity that the resulting relation (7) implies positive-semidefiniteness and, therefore, a kernel. For an example of a Tcos equivalence which is not a kernel see Appendix A.4. Theorem 9 guarantees only the existence of a representation of the form (5) but it does not tell us how to construct the membership functions µ i . In the following section, we provide examples of fuzzy equivalence relations which yield kernels for any choice of membership functions. 2608 G ENERATING K ERNELS BY F UZZY R ELATIONS 4. Constructing Kernels by Fuzzy Equivalence Relations In the Boolean case, positive-definiteness and equivalence are synonymous, that is, a Boolean relation R : X × X → {0, 1} is positive-definite if and only if R is the indicator function of an equivalence relation ∼ that is, R(x, y) = 1 if x ∼ y and R(x, y) = 0 if x ∼ y. For a proof, see Appendix A.2. This = = =, relationship can be used to obtain an extension to fuzzy relations as given by the next theorem whose proof can be found in the Appendix A.3. Theorem 10 Let X be a non-empty universe of discourse, µ i : X → [0, 1], i ∈ I, I non-empty; then the fuzzy equivalence relation EM : X × X → [0, 1] given by ↔ EM (x, y) = inf T M (µi (x), µi (y)) i∈I is positive-semidefinite. In the following, the most prominent representatives of Archimedean t-norms, the Product TP and the Łukasiewicz t-norm TL , are used to construct positive-semidefinite fuzzy similarity relations. Though the first part can also be derived from a result due to Yaglom (1957) that characterizes isotropic stationary kernels by its spectral representation, here we prefer to present a direct, elementary proof. Compare also Bochner (1955) and Genton (2001). Theorem 11 Let X be a non-empty universe of discourse, ν : X → [0, 1] and let h : [0, 1] → [0, 1] be an isomorphism of the unit interval that can be expanded in the manner of Equation (1), that is h(x) = ∑k ck xk with ck ≥ 0; then the fuzzy equivalence relations EL,h , EP,h : X × X → [0, 1] given by ↔ EL,h (x, y) = h T L h−1 (ν(x)) , h−1 (ν(y)) and ↔ EP,h (x, y) = h T P h−1 (ν(x)) , h−1 (ν(y)) (8) (9) are positive-semidefinite. Proof. 
Proof. To prove the positive-definiteness of the two-placed functions E_{L,h} and E_{P,h} given by Equations (8) and (9), respectively, we have to show that

Σ_{i,j=1}^n E_{L,h}(x_i, x_j) c_i c_j ≥ 0  and  Σ_{i,j=1}^n E_{P,h}(x_i, x_j) c_i c_j ≥ 0

for any n ∈ N and any choice of x_1, ..., x_n ∈ X, respectively. According to an elementary result from linear algebra, this is equivalent to the assertion that the determinants (1 ≤ m ≤ n)

D_m = det[ (E(x_i, x_j))_{i,j∈{1,...,m}} ]

of the minors of the matrix (E(x_i, x_j))_{i,j} satisfy

∀m ∈ {1, ..., n} : D_m ≥ 0,

where E denotes either E_{L,h} or E_{P,h}. Recall that the determinant of a matrix is invariant with respect to renaming the indices, that is, if σ : {1, ..., n} → {1, ..., n} is a permutation, then det[(a_ij)_{i,j}] = det[(a_{σ(i)σ(j)})_{i,j}]. For convenience, let µ_i denote h⁻¹(ν(x_i)). Then, without loss of generality, we may assume that the values µ_i are ordered monotonically decreasing, that is,

µ_i ≥ µ_j for i < j.     (10)

Case T_L: Note that T_L↔(a, b) = min{T_L→(a, b), T_L→(b, a)} = 1 − |a − b|. Then we have to show that, for all dimensions n ∈ N, the determinant of

E^(n) = (1 − |µ_i − µ_j|)_{i,j∈{1,...,n}}

is non-negative, that is, det[E^(n)] ≥ 0. Due to assumption (10), we have

1 − |µ_i − µ_j| = 1 − (µ_i − µ_j) if i ≤ j, and 1 − (µ_j − µ_i) otherwise,

which yields

\[
E^{(n)} =
\begin{pmatrix}
1 & 1-(\mu_1-\mu_2) & \cdots & 1-(\mu_1-\mu_{n-1}) & 1-(\mu_1-\mu_n)\\
1-(\mu_1-\mu_2) & 1 & \cdots & 1-(\mu_2-\mu_{n-1}) & 1-(\mu_2-\mu_n)\\
1-(\mu_1-\mu_3) & 1-(\mu_2-\mu_3) & \cdots & 1-(\mu_3-\mu_{n-1}) & 1-(\mu_3-\mu_n)\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
1-(\mu_1-\mu_{n-1}) & 1-(\mu_2-\mu_{n-1}) & \cdots & 1 & 1-(\mu_{n-1}-\mu_n)\\
1-(\mu_1-\mu_n) & 1-(\mu_2-\mu_n) & \cdots & 1-(\mu_{n-1}-\mu_n) & 1
\end{pmatrix}.
\]

Now let us apply determinant-invariant elementary column operations to simplify this matrix by subtracting the column with index i − 1 from the column with index i, i ≥ 2. This yields

\[
\tilde{E}^{(n)} =
\begin{pmatrix}
1 & \mu_2-\mu_1 & \cdots & \mu_{n-1}-\mu_{n-2} & \mu_n-\mu_{n-1}\\
1-(\mu_1-\mu_2) & -(\mu_2-\mu_1) & \cdots & \mu_{n-1}-\mu_{n-2} & \mu_n-\mu_{n-1}\\
1-(\mu_1-\mu_3) & -(\mu_2-\mu_1) & \cdots & \mu_{n-1}-\mu_{n-2} & \mu_n-\mu_{n-1}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
1-(\mu_1-\mu_{n-1}) & -(\mu_2-\mu_1) & \cdots & -(\mu_{n-2}-\mu_{n-1}) & \mu_n-\mu_{n-1}\\
1-(\mu_1-\mu_n) & -(\mu_2-\mu_1) & \cdots & -(\mu_{n-2}-\mu_{n-1}) & -(\mu_{n-1}-\mu_n)
\end{pmatrix}.
\]

Therefore, with

α = ∏_{i=2}^{n} (µ_{i−1} − µ_i) ≥ 0,     (11)

we obtain det[E^(n)] = det[Ẽ^(n)] = α · det[Ê^(n)], where

\[
\hat{E}^{(n)} =
\begin{pmatrix}
1 & -1 & -1 & \cdots & -1 & -1\\
1-(\mu_1-\mu_2) & +1 & -1 & \cdots & -1 & -1\\
1-(\mu_1-\mu_3) & +1 & +1 & \cdots & -1 & -1\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
1-(\mu_1-\mu_{n-1}) & +1 & +1 & \cdots & +1 & -1\\
1-(\mu_1-\mu_n) & +1 & +1 & \cdots & +1 & +1
\end{pmatrix}. \tag{12}
\]

Let us apply Laplacian determinant expansion by minors to the first column of matrix (12), that is,

det[A] = Σ_{i=1}^n (−1)^{i+j} a_{ij} det[A_{ij}],

where A = (a_{ij}) is an n × n matrix, j is arbitrarily chosen from {1, ..., n}, and A_{ij} is the matrix corresponding to the cofactor a_{ij}, obtained by canceling out the i-th row and the j-th column of A (see, for example, Muir, 1960). For n = 1, we get the trivial case det[Ê^(1)] = 1. Note that the first and the last rows of the matrices Ê^(n)_{i,1} for 1 < i < n only differ by their sign; consequently, the minors det[Ê^(n)_{i,1}] for 1 < i < n, n ≥ 2, vanish, that is, det[Ê^(n)_{i,1}] = 0 for 1 < i < n. Therefore, according to the Laplacian expansion, we get

det[Ê^(n)] = 1 · det[Ê^(n)_{1,1}] + (−1)^n (1 − (µ_1 − µ_n)) · det[Ê^(n)_{1,n}].     (13)

Observe that

det[Ê^(n)_{1,1}] = 2^{n−2},   det[Ê^(n)_{1,n}] = (−1)^{n−1} 2^{n−2}.
Consequently, Equation (13) simplifies to

det[Ê^(n)] = 2^{n−2} · 1 + (−1)^n (−1)^{n−1} 2^{n−2} (1 − (µ_1 − µ_n))
           = 2^{n−2} (1 − (1 − (µ_1 − µ_n)))
           = 2^{n−2} (µ_1 − µ_n) ≥ 0,

which together with (11) proves the first case.

Case T_P: First of all, let us compute T_P↔(a, b) = min{T_P→(a, b), T_P→(b, a)}. Hence,

T_P↔(a, b) = min{b/a, a/b} if a, b > 0,
             0             if a = 0 and b > 0,
             0             if b = 0 and a > 0,
             1             if a = 0 and b = 0.

Again, without loss of generality, let us suppose that the values µ_i, i ∈ {1, ..., n}, are ordered monotonically decreasing, that is, µ_1 ≥ µ_2 ≥ ... ≥ µ_n. Before checking the general case, let us consider the special case of vanishing µ-values. For this, let us assume for the moment that

µ_i > 0 if i < i₀, and µ_i = 0 else,

which implies that T_P↔(µ_i, µ_j) = 0 for i < i₀ and j ≥ i₀, and T_P↔(µ_i, µ_j) = 1 for i ≥ i₀ and j ≥ i₀. This leads to a decomposition of the matrix

E^(n) = ( T_P↔(µ_i, µ_j) )_{ij}

such that

det[E^(n)] = det[E^(i₀−1)] · det[I_{n−i₀−1}],

where I_k denotes the k × k matrix with constant entries 1; hence det[I_{n−i₀−1}] ∈ {0, 1}. Therefore, we may assume that µ_1 ≥ µ_2 ≥ ... ≥ µ_n > 0. Then we have to show that, for all dimensions n ∈ N, the determinant of

E^(n) = ( min{µ_i/µ_j, µ_j/µ_i} )_{i,j∈{1,...,n}}

is non-negative, that is, det[E^(n)] ≥ 0. Consider

\[
E^{(n)} =
\begin{pmatrix}
1 & \frac{\mu_2}{\mu_1} & \cdots & \frac{\mu_{n-1}}{\mu_1} & \frac{\mu_n}{\mu_1}\\
\frac{\mu_2}{\mu_1} & 1 & \cdots & \frac{\mu_{n-1}}{\mu_2} & \frac{\mu_n}{\mu_2}\\
\frac{\mu_3}{\mu_1} & \frac{\mu_3}{\mu_2} & \cdots & \frac{\mu_{n-1}}{\mu_3} & \frac{\mu_n}{\mu_3}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
\frac{\mu_{n-1}}{\mu_1} & \frac{\mu_{n-1}}{\mu_2} & \cdots & 1 & \frac{\mu_n}{\mu_{n-1}}\\
\frac{\mu_n}{\mu_1} & \frac{\mu_n}{\mu_2} & \cdots & \frac{\mu_n}{\mu_{n-1}} & 1
\end{pmatrix}. \tag{14}
\]

Now, multiply the i-th column by −µ_{i+1}/µ_i and add it to the (i + 1)-th column of matrix (14), 1 ≤ i < n; then we get

\[
\tilde{E}^{(n)} =
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0\\
\ast & 1-\left(\frac{\mu_2}{\mu_1}\right)^2 & 0 & \cdots & 0 & 0\\
\ast & \ast & 1-\left(\frac{\mu_3}{\mu_2}\right)^2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\ast & \ast & \ast & \cdots & 1-\left(\frac{\mu_{n-1}}{\mu_{n-2}}\right)^2 & 0\\
\ast & \ast & \ast & \cdots & \ast & 1-\left(\frac{\mu_n}{\mu_{n-1}}\right)^2
\end{pmatrix}, \tag{15}
\]

where ∗ is a placeholder for an arbitrary real value. By this, the determinant of the matrix in Equation (15) readily turns out to be

det[E^(n)] = det[Ẽ^(n)] = ∏_{i=1}^{n−1} ( 1 − (µ_{i+1}/µ_i)² ) ≥ 0,

which together with Theorem 2 ends the proof.

Note that the relations (8) and (9) are T-transitive with respect to the corresponding isomorphic Archimedean t-norms, T_{L,h}(x, y) = h(T_L(h⁻¹(x), h⁻¹(y))) and T_{P,h}(x, y) = h(T_P(h⁻¹(x), h⁻¹(y))), respectively.

Corollary 12 Let X be a non-empty universe of discourse, µ_i : X → [0, 1] and λ_i ∈ ]0, 1] with Σ_i λ_i = 1, where i ∈ {1, ..., n}, n ∈ N; then the fuzzy equivalence relations Ẽ_L, Ẽ_P : X × X → [0, 1] given by

Ẽ_L(x, y) = Σ_{i=1}^n λ_i T_L↔(µ_i(x), µ_i(y))     (16)

and

Ẽ_P(x, y) = ∏_{i=1}^n ( T_P↔(µ_i(x), µ_i(y)) )^{λ_i}     (17)

are T_L- and T_P-equivalences, respectively, and kernels.

Proof. First of all, let us check the T_L-transitivity of formula (16). This can readily be shown by means of the definition of T_L and the T_L-transitivity of T_L↔ due to the following inequalities:

T_L( Ẽ_L(x, y), Ẽ_L(y, z) )
  = T_L( Σ_{i=1}^n λ_i T_L↔(µ_i(x), µ_i(y)), Σ_{i=1}^n λ_i T_L↔(µ_i(y), µ_i(z)) )
  = max{ Σ_{i=1}^n λ_i T_L↔(µ_i(x), µ_i(y)) + Σ_{i=1}^n λ_i T_L↔(µ_i(y), µ_i(z)) − 1, 0 }
  ≤ Σ_{i=1}^n λ_i max{ T_L↔(µ_i(x), µ_i(y)) + T_L↔(µ_i(y), µ_i(z)) − 1, 0 }
  = Σ_{i=1}^n λ_i T_L( T_L↔(µ_i(x), µ_i(y)), T_L↔(µ_i(y), µ_i(z)) )
  ≤ Σ_{i=1}^n λ_i T_L↔(µ_i(x), µ_i(z))
  = Ẽ_L(x, z).

This, together with the T_P-transitivity of T_P↔, proves that the formulas given by (16) and (17) are T_L- and T_P-equivalences, respectively.
Expanding the factors of formula (17) yields

( T_P↔(µ_i(x), µ_i(y)) )^{λ_i} = 1 if µ_i(x) = µ_i(y) = 0,
                                 min(µ_i(x)^{λ_i}, µ_i(y)^{λ_i}) / max(µ_i(x)^{λ_i}, µ_i(y)^{λ_i}) else,     (18)

which, by comparison with case T_P of the proof of Theorem 11, shows that the left-hand side of Equation (18) is positive-semidefinite. As the convex combination and the product are special cases of positive-semidefiniteness-preserving functions according to Theorem 1, the functions defined by Equations (16) and (17) prove to be again positive-semidefinite and, therefore, kernels.

It is interesting to observe that both formulas (16) and (17) can be expressed in the form

f( ‖τ(x) − τ(y)‖₁ ),

where f : I → [0, 1], I some interval, is a strictly decreasing function, τ : X → Iⁿ, τ(x) = (τ_1(x), ..., τ_n(x)), and ‖τ(x)‖₁ = Σ_{i=1}^n |τ_i(x)|. Indeed, for Equation (16) let us define

f_L : [0, 1] → [0, 1],  f_L(a) = 1 − a,
τ_L : X → [0, 1]ⁿ,  τ_L(x) = (λ_1 µ_1(x), ..., λ_n µ_n(x)),

and for Equation (17) and positive membership functions µ_i, µ_i(x) > 0 for all x ∈ X, let us define

f_P : [0, ∞[ → [0, 1],  f_P(a) = e^{−a},
τ_P : X → ]−∞, 1]ⁿ,  τ_P(x) = (λ_1 ln(µ_1(x)), ..., λ_n ln(µ_n(x))).

Therefore, we get

Ẽ_L(x, y) = 1 − ‖τ_L(x) − τ_L(y)‖₁     (19)
Ẽ_P(x, y) = e^{−‖τ_P(x) − τ_P(y)‖₁}.     (20)

While formulas (19) and (20) provide a geometrical interpretation by means of the norm ‖·‖₁, the corresponding formulas (16) and (17) yield a semantical model of the assertion

"IF x is equal to y with respect to feature µ_1 AND ... AND x is equal to y with respect to feature µ_n THEN x is equal to y"

as an aggregation of biimplications in terms of fuzzy logic. While in the former case the aggregation has some compensatory effect, the latter is just a conjunction in terms of the product triangular norm. For details on aggregation operators see, for example, Saminger et al. (2002) and Calvo et al. (2002).

The formulas (16) and (17) coincide in the following special case. If the membership functions µ_i are indicator functions of sets A_i ⊆ X which form a partition of X, then the kernels (16) and (17) reduce to the indicator function characterizing the Boolean equivalence relation induced by this partition {A_1, ..., A_n}. The formulas (16) and (17) for general membership functions therefore provide kernels which can be interpreted to be induced by a family of fuzzy sets and, in particular, by fuzzy partitions, that is, families of fuzzy sets fulfilling some criteria which extend the axioms for a Boolean partition in a many-valued logical sense. For definitions and further details on fuzzy partitions see, for example, De Baets and Mesiar (1998), Demirci (2003) and Höppner and Klawonn (2003).

It is a frequently used paradigm that the decision boundaries for a classification problem lie between clusters rather than intersecting them. Due to this cluster hypothesis, the problem of designing kernels based on fuzzy partitions is closely related to the problem of learning kernels from unlabeled data. For further details on semi-supervised learning see, for example, Seeger (2002), Chapelle et al. (2003) and Huang et al. (2006). It is left to future research to explore this relationship to the problem of learning from labeled and unlabeled data and related concepts like covariance kernels.
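To close this section, the following sketch (illustrative only; the triangular membership functions, the weights and the small eps used to keep the memberships positive are arbitrary choices) evaluates the kernels (16) and (17) of Corollary 12 on a one-dimensional sample and cross-checks them against the ℓ1 forms (19) and (20).

```python
# Illustrative sketch of Corollary 12: kernels built from a small family of
# membership functions, with a numerical check of the L1-norm forms (19), (20).
import numpy as np

def triangular(c, w):
    """A simple triangular membership function centred at c with width w."""
    return lambda x: np.clip(1.0 - np.abs(x - c) / w, 0.0, 1.0)

mus = [triangular(0.2, 0.3), triangular(0.5, 0.3), triangular(0.8, 0.3)]
lam = np.array([0.5, 0.3, 0.2])          # positive weights summing to 1

def E_L(x, y):
    # Equation (16): convex combination of Lukasiewicz biimplications
    return sum(l * (1.0 - abs(m(x) - m(y))) for l, m in zip(lam, mus))

def E_P(x, y, eps=1e-9):
    # Equation (17): weighted product of product-t-norm biimplications
    out = 1.0
    for l, m in zip(lam, mus):
        a, b = m(x) + eps, m(y) + eps    # eps keeps memberships positive
        out *= (min(a, b) / max(a, b)) ** l
    return out

xs = np.linspace(0.0, 1.0, 40)
K_L = np.array([[E_L(x, y) for y in xs] for x in xs])
K_P = np.array([[E_P(x, y) for y in xs] for x in xs])
print("min eigenvalues:", np.linalg.eigvalsh(K_L).min(), np.linalg.eigvalsh(K_P).min())

# Cross-check against the L1 forms (19) and (20)
def tau_L(x):
    return np.array([l * m(x) for l, m in zip(lam, mus)])
def tau_P(x, eps=1e-9):
    return np.array([l * np.log(m(x) + eps) for l, m in zip(lam, mus)])

x, y = 0.3, 0.7
print(np.isclose(E_L(x, y), 1.0 - np.abs(tau_L(x) - tau_L(y)).sum()))
print(np.isclose(E_P(x, y), np.exp(-np.abs(tau_P(x) - tau_P(y)).sum())))
```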
5. Conclusion

In this paper, we have presented a novel view on kernels from a fuzzy-logical point of view. In particular, the similarity-measure aspect of a kernel is addressed and investigated by means of the so-called T-transitivity, which is characteristic for fuzzy equivalence relations. As a consequence, we derived that a large class of kernels can be represented in a way that is commonly used for representing fuzzy rule bases. In addition to this proof of the existence of such a representation, constructive examples are presented. It is the idea of this research to look for a combination of knowledge-based strategies with kernel-based methods in order to facilitate a more flexible design process for kernels which also allows prior knowledge to be incorporated. Further research aims at analyzing the behavior of kernels constructed in this way when applied in the various kernel methods like support vector machines, kernel principal component analysis and others. In particular, it is intended to focus on the problem of learning kernels from unlabeled data where the fuzzy partitions are induced by appropriate clustering principles.

Acknowledgments

Bernhard Moser gratefully acknowledges partial support by the Austrian Government, the State of Upper Austria, and the Johannes Kepler University Linz in the framework of the Kplus Competence Center Program. Furthermore, special thanks go to the anonymous reviewers who gave helpful suggestions and to Felix Kossak for careful proof-reading.

Appendix A.

For the sake of completeness, the following sections provide proofs regarding Theorem 8, the characterization of kernels in the Boolean case and the construction of kernels by means of the minimum t-norm T_M. Furthermore, in Section A.4 an example of a non-positive-semidefinite Tcos-equivalence is given.

A.1 Proof of Theorem 8

Let us start with the analysis of 3-dimensional matrices.

Lemma 13 Let M = (m_ij)_{ij} ∈ [0, 1]^{3×3} be a 3 × 3 symmetric matrix with m_ii = 1, i = 1, 2, 3; then M is positive-semidefinite iff for all i, j, k ∈ {1, 2, 3} there holds

m_ij m_jk − √(1 − m_ij²) √(1 − m_jk²) ≤ m_ik.

Proof. For simplicity, let a = m_{1,2}, b = m_{1,3} and c = m_{2,3}. Then the determinant of M, det(M), is a function of the variables a, b, c given by

D(a, b, c) = 1 + 2abc − a² − b² − c².

For any choice of a, b, the quadratic equation D(a, b, c) = 0 can be solved for c, yielding two solutions c_1 = c_1(a, b) and c_2 = c_2(a, b) as functions of a and b,

c_1(a, b) = ab − √(1 − a²) √(1 − b²),
c_2(a, b) = ab + √(1 − a²) √(1 − b²).

Obviously, for all |a| ≤ 1 and |b| ≤ 1, the values c_1(a, b) and c_2(a, b) are real. By substituting a = cos(α) and b = cos(β) with α, β ∈ [0, π/2], it becomes readily clear that

c_1(a, b) = c_1(cos(α), cos(β)) = cos(α) cos(β) − sin(α) sin(β) = cos(α + β) ∈ [−1, 1]

and, analogously,

c_2(a, b) = c_2(cos(α), cos(β)) = cos(α) cos(β) + sin(α) sin(β) = cos(α − β) ∈ [−1, 1].

As for all a, b ∈ [−1, 1] the determinant function D_{a,b}(c) := D(a, b, c) is quadratic in c with negative coefficient for c², there is a uniquely determined maximum at c_0(a, b) = ab. Note that for all a, b ∈ [−1, 1], we have c_1(a, b) ≤ c_0(a, b) ≤ c_2(a, b) and

D(a, b, c_0(a, b)) = 1 + 2ab(ab) − a² − b² − (ab)² = (1 − a²)(1 − b²) ≥ 0.

Therefore, D(a, b, c) ≥ 0 if and only if c ∈ [c_1(a, b), c_2(a, b)]. Recall from linear algebra that by renaming the indices, the determinant does not change. Therefore, without loss of generality, we may assume that a ≥ b ≥ c. For convenience, let Q = {(x, y, z) ∈ [0, 1]³ | x ≥ y ≥ z}.
Then, obviously, for any choice of a, b ∈ [0, 1] there holds (a, b, c_1(a, b)) ∈ Q. Elementary algebra shows that (a, b, c_2(a, b)) ∈ Q is only the case for a = b = 1. As for a = b = 1 the two solutions c_1, c_2 coincide, that is, c_1(1, 1) = c_2(1, 1) = 1, it follows that for any choice of (a, b, c) ∈ Q there holds

D(a, b, c) ≥ 0 if and only if c_1(a, b) = ab − √(1 − a²) √(1 − b²) ≤ c.     (21)

If (a, b, c) ∉ Q, then the inequality (21) is trivially satisfied, which together with (21) proves the lemma.

Now Theorem 8 immediately follows from Definition 1, Lemma 13 and the characterizing inequality (21).

A.2 Characterization of Kernels in the Boolean Case

The following lemma and proposition can also be found as an exercise in Schölkopf and Smola (2002).

Lemma 14 Let ∼ be an equivalence relation on X and let k : X × X → {0, 1} be induced by ∼ via k(x, y) = 1 if and only if x ∼ y; then k is a kernel.

Proof. By definition of positive-definiteness, let us consider an arbitrary sequence of elements x_1, ..., x_n. Then there are at most n equivalence classes Q_1, ..., Q_m on the set of indices {1, ..., n}, m ≤ n, where ⋃_{i=1,...,m} Q_i = {1, ..., n} and Q_i ∩ Q_j = ∅ for i ≠ j. Note that k(x_i, x_j) = 0 if the indices i, j belong to different equivalence classes. Then, for any choice of reals c_1, ..., c_n, we obtain

Σ_{i,j} c_i c_j k(x_i, x_j) = Σ_{p=1}^m Σ_{i,j∈Q_p} c_i c_j k(x_i, x_j) = Σ_{p=1}^m Σ_{i,j∈Q_p} c_i c_j · 1 = Σ_{p=1}^m ( Σ_{i∈Q_p} c_i )² ≥ 0.

Proposition 15 k : X × X → {0, 1} with k(x, x) = 1 for all x ∈ X is a kernel if and only if it is induced by an equivalence relation.

Proof. It only remains to be shown that if k is a kernel, then it is the indicator function of an equivalence relation, that is, it is induced by an equivalence relation. If k is a kernel, according to Lemma 13, for all x, y, z ∈ X it has to satisfy

Tcos(k(x, y), k(y, z)) ≤ k(x, z),

which implies

k(x, y) = 1, k(y, z) = 1 ⟹ k(x, z) = 1.

Obviously, we have k(x, x) = 1 and k(x, y) = k(y, x) due to the reflexivity and symmetry assumption on k, respectively.

A.3 Constructing Kernels by T_M

For convenience, let us recall the basic notion of an α-cut from fuzzy set theory:

Definition 16 Let X be a non-empty set and µ : X → [0, 1]; then [µ]_α = {x ∈ X | µ(x) ≥ α} is called the α-cut of the membership function µ.

Lemma 17 k : X × X → [0, 1] is a T_M-equivalence if and only if all α-cuts of k are Boolean equivalence relations.

Proof. (i) Let us assume that k is a T_M-equivalence. Let α ∈ [0, 1]; then by definition, [k]_α = {(x, y) ∈ X × X | k(x, y) ≥ α}. In order to show that [k]_α is a Boolean equivalence, the axioms of reflexivity, symmetry and transitivity have to be shown. Reflexivity and symmetry are trivially satisfied, as for all x, y ∈ X there holds by assumption that k(x, x) = 1 and k(x, y) = k(y, x). In order to show transitivity, let us consider (x, y), (y, z) ∈ [k]_α, that means k(x, y) ≥ α and k(y, z) ≥ α; then by the T_M-transitivity assumption it follows that α ≤ min(k(x, y), k(y, z)) ≤ k(x, z), hence (x, z) ∈ [k]_α.

(ii) Suppose now that all α-cuts of k are Boolean equivalence relations. Then, in particular, [k]_α with α = 1 is reflexive, hence k(x, x) = 1 for all x ∈ X. The symmetry of k follows from the fact that for all α ∈ [0, 1] and pairs (x, y) ∈ [k]_α, by assumption, we have (y, x) ∈ [k]_α. In order to show the T_M-transitivity property, let us consider arbitrarily chosen elements x, y, z ∈ X. Let α = min(k(x, y), k(y, z)); then by the transitivity assumption on [k]_α, it follows that (x, z) ∈ [k]_α, consequently k(x, z) ≥ α = min(k(x, y), k(y, z)).
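The following sketch (an added illustration with arbitrary membership values) computes the T_M-equivalence E_M of Theorem 10 for a small sample, checks positive-semidefiniteness numerically, and spot-checks the transitivity of one α-cut in the spirit of Lemma 17.

```python
# Minimal sketch: E_M(x, y) = min_i biimpl_min(mu_i(x), mu_i(y)) on a finite sample.
import numpy as np

def biimpl_min(a, b):
    # Biimplication of the minimum t-norm: 1 if a == b, min(a, b) otherwise
    return np.where(a == b, 1.0, np.minimum(a, b))

rng = np.random.default_rng(2)
n_points, n_features = 25, 4
M = rng.uniform(size=(n_features, n_points))   # M[i, p] = mu_i(x_p)

# Gram matrix of the T_M-equivalence of Theorem 10
E_M = biimpl_min(M[:, :, None], M[:, None, :]).min(axis=0)
print("min eigenvalue:", np.linalg.eigvalsh(E_M).min())   # >= 0 up to rounding

# Lemma 17: every alpha-cut of a T_M-equivalence is a Boolean equivalence;
# here we spot-check transitivity of one alpha-cut.
alpha = 0.5
C = E_M >= alpha
transitive = all(not (C[i, j] and C[j, l]) or C[i, l]
                 for i in range(n_points) for j in range(n_points) for l in range(n_points))
print("alpha-cut transitive:", transitive)
```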
Proposition 18 If k : X × X → [0, 1] is a T_M-equivalence, then it is positive-semidefinite.

Proof. Choose arbitrary elements x_1, ..., x_n ∈ X and consider the set of values which are taken by all combinations k(x_i, x_j), i, j ∈ {1, ..., n}, and order them increasingly, that is,

{ k(x_i, x_j) | i, j ∈ {1, ..., n} } = {α_1, ..., α_m},

where 0 ≤ α_1 ≤ ... ≤ α_m ≤ 1. Observe that for all pairs (x_i, x_j), i, j ∈ {1, ..., n}, there holds

k(x_i, x_j) = Σ_{v=2}^m (α_v − α_{v−1}) 1_{[k]_{α_v}}(x_i, x_j) + α_1 1_{[k]_{α_1}}(x_i, x_j),

showing that on the set {x_1, ..., x_n} × {x_1, ..., x_n}, the function k is a linear combination of indicator functions of Boolean equivalences (which are positive-semidefinite by Proposition 15) with non-negative coefficients and, consequently, it has to be positive-semidefinite.

A.4 Example of a Non-Positive-Semidefinite Tcos-Equivalence

For dimensions n > 3, Tcos-transitivity is no longer sufficient to guarantee positive-semidefiniteness. Consider, for example, A_n = (a_ij^(n))_{ij}, where

a_ij^(n) = λ if min(i, j) = 1 and max(i, j) > 1,
           1 if i = j,
           0 else.     (22)

Choose λ = 1/√2; then Tcos(λ, λ) = 0, hence we have Tcos(a_ij^(n), a_jk^(n)) ≤ a_ik^(n) for all indices i, j, k ∈ {1, ..., n}. As det(A_n) < 0 for n > 3, the matrix A_n cannot be positive-semidefinite, although the Tcos-transitivity conditions are satisfied.
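The following sketch (illustrative only) reproduces this counterexample numerically for n = 4.

```python
# Numerical illustration of the counterexample (22): A_4 with lambda = 1/sqrt(2)
# satisfies all T_cos-transitivity inequalities but is not positive-semidefinite.
import numpy as np

lam = 1.0 / np.sqrt(2.0)
n = 4
A = np.eye(n)
A[0, 1:] = lam
A[1:, 0] = lam

def t_cos(a, b):
    return max(a * b - np.sqrt(1 - a**2) * np.sqrt(1 - b**2), 0.0)

violations = [t_cos(A[i, j], A[j, k]) - A[i, k]
              for i in range(n) for j in range(n) for k in range(n)]
print("max transitivity violation:", max(violations))     # <= 0
print("det(A_4):", np.linalg.det(A))                      # -0.5 < 0
print("min eigenvalue:", np.linalg.eigvalsh(A).min())     # negative
```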
References

S. Bochner. Harmonic Analysis and the Theory of Probability. University of California Press, Los Angeles, California, 1955.

U. Bodenhofer. A note on approximate equality versus the Poincaré paradox. Fuzzy Sets and Systems, 133(2):155–160, 2003.

D. Boixader and J. Jacas. T-indistinguishability operators and approximate reasoning via CRI. In D. Dubois, E. P. Klement, and H. Prade, editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 255–268. Kluwer Academic Publishers, Dordrecht, 1999.

C. H. FitzGerald, C. A. Micchelli, and A. Pinkus. Functions that preserve families of positive semidefinite matrices. Linear Algebra and its Applications, 221:83–102, 1995.

T. Calvo, G. Mayor, and R. Mesiar, editors. Aggregation Operators, volume 97 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, 2002.

O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems, volume 15, 2003.

B. De Baets and R. Mesiar. T-partitions. Fuzzy Sets and Systems, 97:211–223, 1998.

M. Demirci. On many-valued partitions and many-valued equivalence relations. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 11(2):235–253, 2003.

D. Dubois and H. Prade. A review of fuzzy set aggregation connectives. Inform. Sci., 36:85–121, 1985.

M. G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299–312, 2001.

S. Gottwald. Fuzzy set theory with t-norms and Φ-operators. In A. Di Nola and A. G. S. Ventre, editors, The Mathematics of Fuzzy Systems, volume 88 of Interdisciplinary Systems Research, pages 143–195. Verlag TÜV Rheinland, Köln, 1986.

S. Gottwald. Fuzzy Sets and Fuzzy Logic. Vieweg, Braunschweig, 1993.

U. Höhle. Fuzzy equalities and indistinguishability. In Proc. 1st European Congress on Fuzzy and Intelligent Technologies, volume 1, pages 358–363, Aachen, 1993.

U. Höhle. The Poincaré paradox and non-classical logics. In D. Dubois, E. P. Klement, and H. Prade, editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 7–16. Kluwer Academic Publishers, Dordrecht, 1999.

F. Höppner and F. Klawonn. Improved fuzzy partitions for fuzzy regression models. Internat. J. Approx. Reason., 32:85–102, 2003.

F. Höppner, F. Klawonn, and P. Eklund. Learning indistinguishability from data. Soft Computing, 6(1):6–13, 2002.

J. Jacas. On the generators of T-indistinguishability operators. Stochastica, 12:49–63, 1988.

I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.

E. P. Klement, R. Mesiar, and E. Pap. Triangular Norms, volume 8 of Trends in Logic. Kluwer Academic Publishers, Dordrecht, 2000.

R. Kruse, J. Gebhardt, and F. Klawonn. Fuzzy-Systeme. B. G. Teubner, Stuttgart, 1993.

R. Kruse, J. Gebhardt, and F. Klawonn. Foundations of Fuzzy Systems. John Wiley & Sons, New York, 1994.

C. H. Ling. Representation of associative functions. Publ. Math. Debrecen, 12:189–212, 1965.

B. Moser. On the t-transitivity of kernels. Fuzzy Sets and Systems, 157:1787–1796, 2006.

B. Moser. A New Approach for Representing Control Surfaces by Fuzzy Rule Bases. PhD thesis, Johannes Kepler Universität Linz, October 1995.

T. Muir. A Treatise on the Theory of Determinants. Dover, New York, 1960.

H. Poincaré. La Science et l'Hypothèse. Flammarion, Paris, 1902.

H. Poincaré. La Valeur de la Science. Flammarion, Paris, 1904.

S. Saminger, R. Mesiar, and U. Bodenhofer. Domination of aggregation operators and preservation of transitivity. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 10(Suppl.):11–35, 2002.

B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, 2002.

B. Schölkopf, A. J. Smola, and K. R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.

B. Schweizer and A. Sklar. Associative functions and statistical triangle inequalities. Publ. Math. Debrecen, 8:169–186, 1961.

B. Schweizer and A. Sklar. Probabilistic Metric Spaces. North-Holland, Amsterdam, 1983.

M. Seeger. Covariance kernels from Bayesian generative models. In Advances in Neural Information Processing Systems, volume 14, pages 905–912, 2002.

T. M. Huang, V. Kecman, and I. Kopriva. Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning. Springer-Verlag, Berlin, 2006.

E. Trillas and L. Valverde. An inquiry into indistinguishability operators. In H. J. Skala, S. Termini, and E. Trillas, editors, Aspects of Vagueness, pages 231–256. Reidel, Dordrecht, 1984.

E. Trillas, S. Cubillo, and E. Castiñeira. Menger and Ovchinnikov on indistinguishabilities revisited. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 7(3):213–218, 1999.

L. Valverde. On the structure of F-indistinguishability operators. Fuzzy Sets and Systems, 17(3):313–328, 1985.

A. M. Yaglom. Some classes of random fields in n-dimensional space, related to stationary random processes. Theory of Probability and its Applications, 2:273–320, 1957.

L. A. Zadeh. Similarity relations and fuzzy orderings. Inform. Sci., 3:177–200, 1971.

2 0.97752702 23 jmlr-2006-Consistency and Convergence Rates of One-Class SVMs and Related Algorithms

Author: Régis Vert, Jean-Philippe Vert

Abstract: We determine the asymptotic behaviour of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with upper bounds on the classification error that is shown to converge to the Bayes risk, therefore proving the Bayes-consistency of a variety of methods although the regularization term does not vanish. These results are particularly relevant to the one-class SVM, for which the regularization can not vanish by construction, and which is shown for the first time to be a consistent density level set estimator. Keywords: regularization, Gaussian kernel RKHS, one-class SVM, convex loss functions, kernel density estimation

3 0.95089555 6 jmlr-2006-A Scoring Function for Learning Bayesian Networks based on Mutual Information and Conditional Independence Tests

Author: Luis M. de Campos

Abstract: We propose a new scoring function for learning Bayesian networks from data using score+search algorithms. This is based on the concept of mutual information and exploits some well-known properties of this measure in a novel way. Essentially, a statistical independence test based on the chi-square distribution, associated with the mutual information measure, together with a property of additive decomposition of this measure, are combined in order to measure the degree of interaction between each variable and its parent variables in the network. The result is a non-Bayesian scoring function called MIT (mutual information tests) which belongs to the family of scores based on information theory. The MIT score also represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set. Detailed results of a complete experimental evaluation of the proposed scoring function and its comparison with the well-known K2, BDeu and BIC/MDL scores are also presented. Keywords: Bayesian networks, scoring functions, learning, mutual information, conditional independence tests

4 0.77119255 58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation

Author: Guillaume Lecué

Abstract: In this paper we prove the optimality of an aggregation procedure. We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. The lower bound, with respect to the KL distance, can be achieved by the on-line type estimate suggested, among others, by Yang (2000a). Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. Keywords: aggregation, optimal rates, Kullback-Leibler divergence

5 0.7270155 24 jmlr-2006-Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss

Author: Di-Rong Chen, Tao Sun

Abstract: The consistency of classification algorithm plays a central role in statistical learning theory. A consistent algorithm guarantees us that taking more samples essentially suffices to roughly reconstruct the unknown distribution. We consider the consistency of ERM scheme over classes of combinations of very simple rules (base classifiers) in multiclass classification. Our approach is, under some mild conditions, to establish a quantitative relationship between classification errors and convex risks. In comparison with the related previous work, the feature of our result is that the conditions are mainly expressed in terms of the differences between some values of the convex function. Keywords: multiclass classification, classifier, consistency, empirical risk minimization, constrained comparison method, Tsybakov noise condition

6 0.64814568 46 jmlr-2006-Learning Factor Graphs in Polynomial Time and Sample Complexity

7 0.6462943 87 jmlr-2006-Stochastic Complexities of Gaussian Mixtures in Variational Bayesian Approximation

8 0.61848629 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix

9 0.60262054 17 jmlr-2006-Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation

10 0.60193372 82 jmlr-2006-Some Theory for Generalized Boosting Algorithms

11 0.60139251 29 jmlr-2006-Estimation of Gradients and Coordinate Covariation in Classification

12 0.59422147 2 jmlr-2006-A Graphical Representation of Equivalence Classes of AMP Chain Graphs

13 0.58410418 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization

14 0.5628404 16 jmlr-2006-Bounds for Linear Multi-Task Learning

15 0.56122065 95 jmlr-2006-Walk-Sums and Belief Propagation in Gaussian Graphical Models

16 0.55242872 66 jmlr-2006-On Model Selection Consistency of Lasso

17 0.52445143 48 jmlr-2006-Learning Minimum Volume Sets

18 0.51718265 45 jmlr-2006-Learning Coordinate Covariances via Gradients

19 0.50622934 28 jmlr-2006-Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

20 0.50080854 73 jmlr-2006-Pattern Recognition for Conditionally Independent Data