nips nips2001 nips2001-34 knowledge-graph by maker-knowledge-mining

34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology


Source: pdf

Author: Toshihiko Yamasaki, Tadashi Shibata

Abstract: A flexible pattern-matching analog classifier is presented in conjunction with a robust image representation algorithm called Principal Axes Projection (PAP). In the circuit, the functional form of matching is configurable in terms of the peak position, the peak height, and the sharpness of the similarity evaluation. The test chip was fabricated in a 0.6-µm CMOS technology and successfully applied to hand-written pattern recognition and medical radiograph analysis, using PAP as a feature-extraction pre-processing step for robust image coding. The separation and classification of overlapping patterns is also experimentally demonstrated.

1 Introduction

Pattern classification using template-matching techniques is a powerful tool for implementing human-like intelligent systems. However, the processing is computationally very expensive, consuming a great deal of CPU time when implemented as software running on general-purpose computers. Software approaches are therefore not practical for real-time applications. For systems working in mobile environments in particular, they are unrealistic because memory and computational resources are severely limited. The development of analog VLSI chips having a fully parallel template-matching architecture [1,2] would be a promising solution in such applications, because they offer the opportunity of low-power operation as well as very compact implementation. In order to build a truly human-like intelligent system, however, not only the pattern representation algorithm but also the matching hardware itself needs to be made flexible and robust in carrying out the pattern-matching task. First of all, two-dimensional patterns need to be represented by feature vectors of substantially reduced dimensionality, while at the same time preserving the human perception of similarity among patterns in the vector-space mapping. For this purpose, an image representation algorithm called Principal Axes Projection (PAP) has been developed [3], and its robust nature in pattern recognition has been demonstrated in applications to medical radiograph analysis [3] and hand-written digit recognition [4]. However, the demonstration so far was carried out only by computer simulation. Regarding the matching hardware, highly flexible analog template-matching circuits have been developed for the PAP vector representation. The circuits are flexible in the sense that the matching criteria (the weights given to the elements, the strictness of the matching) are configurable. In Ref. [5], the fundamental characteristics of the building-block circuits were presented, and their application to simple hand-written digits was presented in Ref. [6]. The purpose of this paper is to demonstrate the robust nature of the hardware matching system by experiment. The classification of simple hand-written patterns and cephalometric landmark identification in gray-scale medical radiographs have been carried out, and successful results are presented. In addition, multiple overlapping patterns can be separated without utilizing a priori knowledge, which is one of the most difficult problems in artificial intelligence at present.

2 Image representation by PAP

PAP is a feature-extraction technique using edge information. The input image (64×64 pixels) is first subjected to pixel-by-pixel spatial filtering operations to detect edges in four directions: horizontal (HR), vertical (VR), +45 degrees (+45), and −45 degrees (−45).
Each detected edge is represented by a binary flag, and four edge maps are generated. The two-dimensional bit array in an edge map is reduced to a one-dimensional array of numerals by projection. The horizontal edge flags are accumulated in the horizontal direction and projected onto the vertical axis. The vertical, +45-degree, and −45-degree edge flags are similarly projected onto the horizontal, −45-degree, and +45-degree axes, respectively. The method is therefore called "Principal Axes Projection (PAP)" [3,4]. The four projection data sets are then concatenated in the order HR, +45, VR, −45 to form a feature vector. Every four neighboring elements are averaged and merged into one element, and a 64-dimensional vector is finally obtained. This vector representation preserves the human perception of similarity in the vector space very well. In the experiments below, we have further reduced the feature vector to 16 dimensions by merging each set of four neighboring elements into one, without any significant degradation in performance.

3 Circuit configurations

Figure 1: Schematic of the vector-element matching circuit: (a) pyramid (gain-reduction) type; (b) plateau (feedback) type. The capacitor area ratios are indicated in the figure. [Figure labels: terminals A, B, C, VGG, VIN, RST; output IOUT.]

The basic functional form of the similarity evaluation is generated by the short-circuit current flowing in a CMOS inverter, as in Refs. [7,8,9]. However, those circuits were utilized to form radial basis functions, and only the peak position was programmable. In our circuits, not only the peak position but also the peak height and the sharpness of the peak response shape are made configurable, to realize flexible matching operations [5]. The two types of element matching circuit are shown in Fig. 1. They evaluate the similarity between two vector elements. The result of the evaluation is given as an output current (IOUT) from the pMOS current mirror. The peak position is temporarily memorized by auto-zeroing of the CMOS inverter. The common-gate transistor biased by VGG stabilizes the voltage supply to the inverter. By controlling the gate bias VGG, the peak height can be changed; this corresponds to multiplying the element by a weight factor. The sharpness of the functional form is taken as the strictness of the similarity evaluation. In the pyramid-type circuit (Fig. 1(a)), the sharpness is controlled by gain reduction at the input. In the plateau type (Fig. 1(b)), the output voltage of the inverter is fed back to the input nodes, and the sharpness changes in accordance with the amount of feedback.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Abstract: A flexible pattern-matching analog classifier is presented in conjunction with a robust image representation algorithm called Principal Axes Projection (PAP). [sent-9, score-0.337]

2 In the circuit, the functional form of matching is configurable in terms of the peak position, the peak height and the sharpness of the similarity evaluation. [sent-10, score-0.993]

3 The test chip was fabricated in a 0.6-µm CMOS technology and successfully applied to hand-written pattern recognition and medical radiograph analysis using PAP as a feature extraction pre-processing step for robust image coding. [sent-12, score-0.41]

4 The separation and classification of overlapping patterns is also experimentally demonstrated. [sent-13, score-0.46]

5 1 Introduction. Pattern classification using template matching techniques is a powerful tool in implementing human-like intelligent systems. [sent-14, score-0.743]

6 The development of analog VLSI chips having a fully parallel template matching architecture [1,2] would be a promising solution in such applications, because they offer the opportunity of low-power operation as well as very compact implementation. [sent-18, score-0.856]

7 In order to build a real human-like intelligent system, however, not only the pattern representation algorithm but also the matching hardware itself needs to be made flexible and robust in carrying out the pattern matching task. [sent-19, score-1.202]

8 First of all, two-dimensional patterns need to be represented by feature vectors having substantially reduced dimensions, while at the same time preserving the human perception of similarity among patterns in the vector space mapping. [sent-20, score-0.501]

9 For this purpose, an image representation algorithm called Principal Axes Projection (PAP) has been developed [3], and its robust nature in pattern recognition has been demonstrated in applications to medical radiograph analysis [3] and hand-written digit recognition [4]. [sent-21, score-0.424]

10 Regarding the matching hardware, high-flexibility analog template matching circuits have been developed for PAP vector representation. [sent-23, score-1.521]

11 The circuits are flexible in the sense that the matching criteria (the weight to elements, the strictness in matching) are configurable. [sent-24, score-0.758]

12 In Ref. [5], the fundamental characteristics of the building-block circuits were presented, and their application to simple hand-written digits was presented in Ref. [6]. [sent-26, score-0.2]

13 The purpose of this paper is to demonstrate the robust nature of the hardware matching system by experiments. [sent-28, score-0.532]

14 The classification of simple hand-written patterns and the cephalometric landmark identification in gray-scale medical radiographs have been carried out and successful results are presented. [sent-29, score-0.374]

15 In addition, multiple overlapping patterns can be separated without utilizing a priori knowledge, which is one of the most difficult problems at present in artificial intelligence. [sent-30, score-0.388]

16 2 Image representation by PAP. PAP is a feature extraction technique using edge information. [sent-31, score-0.068]

17 The input image (64x64 pixels) is first subjected to pixel-by-pixel spatial filtering operations to detect edges in four directions: horizontal (HR); vertical (VR); +45 degrees (+45); and –45 degrees (-45). [sent-32, score-0.197]

18 Each detected edge is represented by a binary flag and four edge maps are generated. [sent-33, score-0.167]

19 The two-dimensional bit array in an edge map is reduced to a one-dimensional array of numerals by projection. [sent-34, score-0.124]

20 The horizontal edge flags are accumulated in the horizontal direction and projected onto the vertical axis. [sent-35, score-0.224]

21 The vertical, +45-degree and –45-degree edge flags are similarly projected onto horizontal, -45-degree and +45-degree axes, respectively. [sent-36, score-0.116]

22 Every four neighboring elements are averaged and merged into one element, and a 64-dimensional vector is finally obtained. [sent-39, score-0.146]

23 This vector representation very well preserves the human perception of similarity in the vector space. [sent-40, score-0.257]

24 In the experiments below, we have further reduced the feature vector to 16 dimensions by merging each set of four neighboring elements into one, without any significant degradation in performance. [sent-41, score-0.091]
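
To make the pipeline concrete, here is a minimal Python sketch of PAP as described above. The directional edge kernels, the binarization threshold, and the binning of the 127 diagonal sums down to 64 elements are illustrative assumptions; the paper does not specify these details.

```python
import numpy as np
from scipy.signal import convolve2d

def _project_diag(edge_map):
    # Sum along every offset diagonal (127 diagonals for 64x64), then
    # bin adjacent pairs down to 64 elements. The binning scheme is an
    # assumption; the paper only implies 64 elements per projection.
    d = np.array([np.trace(edge_map, k) for k in range(-63, 64)])
    d = np.append(d, 0)                  # pad 127 -> 128
    return d.reshape(64, 2).sum(axis=1)  # 128 -> 64

def pap_vector(img, thresh=0.5, dims=64):
    # Directional difference kernels (assumed forms; the paper does
    # not specify its exact edge filters).
    kernels = {
        "HR":  np.array([[1.0], [-1.0]]),
        "VR":  np.array([[1.0, -1.0]]),
        "+45": np.array([[0.0, 1.0], [-1.0, 0.0]]),
        "-45": np.array([[1.0, 0.0], [0.0, -1.0]]),
    }
    maps = {k: np.abs(convolve2d(img, v, mode="same")) > thresh
            for k, v in kernels.items()}
    proj = {
        "HR":  maps["HR"].sum(axis=1),                 # onto the vertical axis
        "VR":  maps["VR"].sum(axis=0),                 # onto the horizontal axis
        "+45": _project_diag(maps["+45"]),             # onto the -45-degree axis
        "-45": _project_diag(np.fliplr(maps["-45"])),  # onto the +45-degree axis
    }
    # Concatenate in the order HR, +45, VR, -45 (256 elements), then
    # average each group of four neighbors to obtain 64 dimensions.
    v = np.concatenate([proj["HR"], proj["+45"], proj["VR"], proj["-45"]])
    v = v.reshape(64, 4).mean(axis=1)
    if dims == 16:  # the further merging used in the experiments
        v = v.reshape(16, 4).mean(axis=1)
    return v
```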

25 3 Circuit configurations. [Figure 1 terminal labels: A, B, C, VGG, VIN, RST; output IOUT; capacitor area ratios 1:2:4.] Figure 1: Schematic of vector element matching circuit: (a) pyramid (gain reduction) type; (b) plateau (feedback) type. [sent-42, score-0.868]

26 The capacitor area ratio is indicated in the figure. [sent-43, score-0.034]

27 The basic functional form of the similarity evaluation is generated by the short-circuit current flowing in a CMOS inverter, as in Refs. [7,8,9]. [sent-44, score-0.138]

28 However, their circuits were utilized to form radial basis functions and only the peak position was programmable. [sent-46, score-0.487]

29 In our circuits, not only the peak position but also the peak height and the sharpness of the peak response shape are made configurable to realize flexible matching operations [5]. [sent-47, score-1.175]

30 Two types of the element matching circuit are shown in Fig. 1. [sent-48, score-0.806]

31 The peak position is temporarily memorized by auto-zeroing of the CMOS inverter. [sent-52, score-0.171]

32 The common-gate transistor with VGG stabilizes the voltage supply to the inverter. [sent-53, score-0.07]

33 By controlling the gate bias VGG, the peak height can be changed. [sent-54, score-0.212]

34 The sharpness of the functional form is taken as the strictness of the similarity evaluation. [sent-56, score-0.269]

35 In the pyramid type circuit (Fig. 1(a)), the sharpness is controlled by the gain reduction at the input. [sent-58, score-0.189]

36 In the plateau type (Fig. 1(b)), the output voltage of the inverter is fed back to the input nodes and the sharpness changes in accordance with the amount of feedback. [sent-60, score-0.34]
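
The configurable similarity function can be summarized with a simple behavioral model (not a transistor-level one). In the sketch below, the triangular and flat-topped response shapes, with `height` standing in for the VGG-controlled weight, `width` for the A~C sharpness setting, and `flat` for the feedback amount, are idealizations assumed for illustration:

```python
import numpy as np

def pyramid_match(x, t, height=1.0, width=0.25):
    # Triangular peak at the stored position t. Smaller width models
    # a steeper (stricter) sharpness setting.
    return height * np.maximum(0.0, 1.0 - np.abs(x - t) / width)

def plateau_match(x, t, height=1.0, width=0.25, flat=0.1):
    # Output is constant within `flat` of the peak (the flat region
    # widens with the amount of feedback), then falls off linearly.
    d = np.maximum(0.0, np.abs(x - t) - flat)
    return height * np.clip(1.0 - d / width, 0.0, 1.0)
```

Both functions are vectorized, so calling them on whole 16-element vectors models a row of element circuits at once.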

37 [Figure 2 chip labels: 16-dimension 15-vector matching circuit; decoder; time-domain Winner-Take-All; 4.5 mm die dimension.] [sent-70, score-0.751]

38 Figure 2: Schematic of the n-dimensional vector matching circuit utilizing the pyramid-type vector element circuits. [sent-71, score-1.245]

39 The total matching score between input and template vectors is obtained by taking the wired sum of all IOUT's from the element matching circuits, as shown in Fig. 2. [sent-73, score-1.507]

40 The radial-basis-function-forming element of Ref. [8] was eliminated, because the radial basis function is not suitable for template matching using PAP vectors. [sent-76, score-0.763]

41 The VOUT nodes are connected to a time-domain Winner-Take-All circuit [9]. [sent-79, score-0.314]

42 A common ramp-down voltage is applied to the VRAMP nodes of all vector matching circuits. [sent-80, score-0.615]

43 When VRAMP is ramped down from VDD to 0 V, the vector matching circuit yielding the maximum ISUM is the first to flip, and its output voltage (VOUT) shows a 0-to-1 transition. [sent-81, score-0.881]

44 The time-domain WTA circuit senses the first circuit to flip and memorizes its location in the open-loop OR-tree architecture [10]. [sent-82, score-0.314]

45 In this manner, the maximum-likelihood template vector is easily identified. [sent-83, score-0.333]
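
A behavioral sketch of this readout: the wired sum of element currents gives one score per template, and the time-domain WTA selects the template whose circuit flips first as the ramp falls. The linear mapping from ISUM to trip level is an assumption; electrically, the scheme simply selects the argmax.

```python
import numpy as np

def matching_scores(x, templates, element_match):
    # Wired sum of all element output currents: one ISUM per
    # stored template vector.
    return np.array([element_match(x, t).sum() for t in templates])

def time_domain_wta(scores, vdd=3.3):
    # VRAMP falls from VDD toward 0 V; each vector-matching circuit
    # flips when the ramp crosses a level that rises with its ISUM
    # (a linear mapping is assumed here), and the open-loop OR-tree
    # latches the first circuit to flip, that is, the argmax.
    scores = np.asarray(scores, dtype=float)
    trip_levels = vdd * scores / max(scores.max(), 1e-12)
    return int(np.argmax(trip_levels))  # highest trip level flips first

# Example, with pyramid_match from the earlier sketch:
# winner = time_domain_wta(matching_scores(x, templates, pyramid_match))
```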

46 The circuits were designed and fabricated in a 0.6-µm CMOS technology. [sent-84, score-0.25]

47 Fig. 3 shows the photomicrograph of a pattern classifier circuit for 16-dimensional vectors. [sent-87, score-0.488]

48 One element matching circuit occupies an area of 150 µm × 110 µm. [sent-89, score-0.84]

49 Further area reduction is anticipated by employing high-K dielectric films for capacitors since the capacitors occupy a large area. [sent-91, score-0.177]

50 The full functioning of the chip was experimentally confirmed [6]. [sent-92, score-0.087]

51 In the following experiments, the simple vector matching circuit in Fig. 2. [sent-93, score-0.811]

52 The circuit of Fig. 2 was utilized to investigate the response from each template vector instead of just detecting the winner using the full chip. [sent-94, score-0.52]

53 4 Experimental results and discussion. 4.1 Vector-element matching circuit. [sent-95, score-0.751]

54 Figure 4: Measured characteristics: (a) pyramid type; (b) plateau type. [sent-98, score-0.316]

55 ...5V, and control signals A~C from 000 to 111 for sharpness control. [sent-101, score-0.159]

56 Fig. 4 shows the measured characteristics of the vector-element matching circuits in both linear and log plots. [sent-103, score-0.637]

57 Also, the operation mode was altered from the above-threshold region to the sub-threshold region by VGG. [sent-107, score-0.094]

58 In the plateau type (Fig. 4(b)), IOUT becomes constant around the peak position and the flat region widens in proportion to the amount of feedback. [sent-109, score-0.204]

59 This is because the inverter operates so as to keep the floating-gate potential constant in its high-gain region, as in the virtual ground of an operational amplifier. [sent-110, score-0.3]

60 4.2 Matching of simple hand-written patterns. [sent-112, score-0.152]

61 Fig. 5 demonstrates the matching results for the simple input patterns. [sent-113, score-0.472]

62 Sixteen templates were stored in the matching circuit, and several hand-written pattern vectors were presented to the circuit as inputs. [sent-114, score-1.177]

63 A slight difference in the matching score is observed between the pyramid type and the plateau type, but the answers are correct for both types. [sent-115, score-0.893]

64 As the sharpness gets steeper, all the scores decrease. [sent-118, score-0.195]

65 However, the score ratios between the winner and the losers are increased, thus enhancing the winner discrimination margins. [sent-119, score-0.318]
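
A toy numeric check of this effect, using the idealized triangular response from the earlier sketch (all numbers are illustrative):

```python
import numpy as np

def pyramid_match(x, t, height=1.0, width=0.25):
    return height * np.maximum(0.0, 1.0 - np.abs(x - t) / width)

x  = np.array([0.50, 0.20, 0.80, 0.40])  # toy input vector
t1 = np.array([0.52, 0.22, 0.78, 0.42])  # close template (winner)
t2 = np.array([0.45, 0.28, 0.75, 0.48])  # farther template (loser)

for width in (0.5, 0.1):  # mild vs. steep sharpness setting
    s1 = pyramid_match(x, t1, width=width).sum()
    s2 = pyramid_match(x, t2, width=width).sum()
    print(f"width={width}: winner={s1:.2f} loser={s2:.2f} ratio={s1/s2:.2f}")
# Both scores drop at the steeper setting, but the winner/loser
# ratio grows (here from about 1.1 to about 2.3).
```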

66 The matching results with varying operational regimes of the circuit are given in Fig. 7. [sent-120, score-0.822]

67 The circuit functions properly even in the sub-threshold regime, demonstrating the potential for extremely low-power operation. [sent-122, score-0.374]

68 "  # ¨ ¡ £     £ ¨ £¥ £¡ ¦¦¦©§¦¤¢  Figure 6: Effect of sharpness variation in the pyramid type with ABC=010. [sent-126, score-0.445]

69 Correct results are obtained in the sub-threshold regime as well as in the above-threshold regime (the pyramid type was utilized). [sent-132, score-0.39]

70 4.3 Application to gray-scale medical radiograph analysis. [sent-134, score-0.156]

71 In Fig. 8, the results of cephalometric landmark identification experiments are presented, where the Sella (pituitary gland) pattern search was carried out using the same matching circuit. [sent-135, score-0.646]

72 Since the 64-dimensional PAP representation is essential for gray-scale image recognition, the 64-dimensional vector was divided into four 16-dimensional vectors, and the matching scores were measured separately and then summed by off-chip calculation. [sent-136, score-0.627]
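
A minimal sketch of this off-chip accumulation, assuming an `element_match` response such as the earlier `pyramid_match` model:

```python
import numpy as np

def score_64dim(x64, t64, element_match):
    # Split a 64-dimensional PAP vector into four 16-dimensional
    # sub-vectors, score each on the 16-dimension matching circuit,
    # and sum the four partial scores off-chip.
    parts_x = np.asarray(x64).reshape(4, 16)
    parts_t = np.asarray(t64).reshape(4, 16)
    return sum(element_match(px, pt).sum()
               for px, pt in zip(parts_x, parts_t))
```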

73 The correct position was successfully identified in both the above-threshold and the sub-threshold regimes (Fig. 8). [sent-137, score-0.08]

74 4.4 Separation of overlapping patterns. Suppose an unknown pattern is presented to the matching circuit. [sent-147, score-0.872]

75 The pattern might consist of a single or multiple overlapping patterns. [sent-148, score-0.283]

76 Let X represent the input vector and W1st the winner (best matched) vector obtained by the matching circuit. [sent-149, score-0.716]

77 Let the first matching trial be expressed as follows: 1st trial: X → W1st (matching). Then, the residue vector (X − W1st) is generated. [sent-150, score-1.168]

78 When an element in the residue vector becomes negative, the value is set to 0. [sent-152, score-0.302]

79 Such an operation is easily implemented using the floating-gate technique. [sent-153, score-0.077]

80 If the input is a single pattern, the residue vector is meaningless: only leftover edge information remains in it. [sent-155, score-0.617]

81 If the input consists of overlapping patterns, the edge information of other patterns remains. [sent-156, score-0.458]

82 If the residue vector is very small, we can expect that the input is a single pattern. [sent-157, score-0.282]

83 But in many cases, the residue vector is not so small due to the distortion in hand-written patterns. [sent-158, score-0.247]

84 Thus, it is almost impossible to judge which is the case only from the magnitude of the residue vector. [sent-159, score-0.187]

85 Therefore, we proceed to the second trial to find the second winner: 2nd trial: X − W1st → W2nd (matching). With the same sequence, the second residue vector (X − W1st − W2nd), the third (X − W1st − W2nd − W3rd), and so forth are generated by repeating the winner subtraction after each trial. [sent-160, score-0.89]

86 Then, new template vectors are generated, such as W1st + W2nd, W1st + W2nd + W3rd, and so forth. [sent-161, score-0.273]

87 If the input vector is that of a single pattern, the matching score is highest at W1st, and the scores are lower at W1st + W2nd and W1st + W2nd + W3rd. [sent-162, score-0.638]

88 On the other hand, if the input vector is that of two overlapping patterns, the score is the highest at W1st+W2nd. [sent-163, score-0.368]

89 This procedure can be terminated automatically when the new template composed of n overlapping patterns yields a lower score than that composed of n−1 patterns. [sent-164, score-0.901]

90 In this manner, we can know how many patterns are overlapping, and which patterns they are, without a priori knowledge. [sent-165, score-0.71]
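
The whole procedure can be summarized in the following Python sketch. The scalar `match` function, the `max_patterns` cap, and the numeric clipping (standing in for the on-chip floating-gate operation) are assumptions for illustration:

```python
import numpy as np

def separate_overlaps(x, templates, match, max_patterns=4):
    """Winner-subtraction separation of overlapping patterns.

    match(a, b) is any scalar matching score between two vectors,
    e.g. a wired sum of pyramid_match element responses.
    """
    residue = np.asarray(x, dtype=float).copy()
    winners = []
    composite = np.zeros_like(residue)
    best_score = -np.inf
    for _ in range(max_patterns):
        # n-th trial: match the current residue against all templates.
        scores = [match(residue, t) for t in templates]
        w = templates[int(np.argmax(scores))]
        # Score the composite template W1st + ... + Wnth against the
        # original input; stop when n overlaps score worse than n-1
        # overlaps (the automatic termination rule).
        trial = composite + w
        if match(x, trial) <= best_score:
            break
        winners.append(w)
        composite = trial
        best_score = match(x, composite)
        # Subtract the winner; negative elements are clipped to 0,
        # as done on-chip with the floating-gate technique.
        residue = np.maximum(residue - w, 0.0)
    return winners
```

On the first pass this reduces to ordinary matching (the composite is just W1st), so a single input pattern terminates after the second trial, as in the Pattern #1 example of Fig. 9.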

91 An example of separating multiple overlapping patterns is illustrated in Fig. 9. [sent-166, score-0.355]

92 Figure 9: Experimental result illustrating the algorithm for separating overlapping patterns. [sent-172, score-0.203]

93 Pattern #1 is correctly classified as a single rectangle by yielding a higher score for the single template than for W1st + W2nd. [sent-177, score-0.343]

94 Pattern #3 consists of three overlapping patterns, but is erroneously recognized as four overlapping patterns. [sent-178, score-0.437]

95 When we look at pattern #3, a triangle is visible in the pattern. [sent-180, score-0.08]

96 5 Conclusions. A soft-pattern-matching circuit has been demonstrated in conjunction with a robust image representation algorithm called PAP. [sent-182, score-0.855]

97 The circuit has been successfully applied to hand-written pattern recognition and medical radiograph analysis. [sent-183, score-0.62]

98 The recognition of overlapping patterns, similar to human perception, has also been experimentally demonstrated. [sent-184, score-0.506]

99 Acknowledgments: Test circuits were fabricated in the VDEC program (The University of Tokyo). [sent-185, score-0.25]

100 (1997) A scalable low voltage analog Gaussian radial basis circuit. [sent-295, score-0.237]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('matching', 0.437), ('circuit', 0.314), ('template', 0.273), ('pyramid', 0.216), ('overlapping', 0.203), ('circuits', 0.2), ('vgg', 0.192), ('residue', 0.187), ('pap', 0.167), ('sharpness', 0.159), ('patterns', 0.152), ('winner', 0.124), ('peak', 0.119), ('analog', 0.114), ('cmos', 0.106), ('plateau', 0.1), ('abc', 0.096), ('iout', 0.096), ('radiograph', 0.096), ('yamasaki', 0.096), ('pattern', 0.08), ('inverter', 0.076), ('matched', 0.074), ('flexible', 0.073), ('vramp', 0.072), ('type', 0.07), ('voltage', 0.07), ('score', 0.07), ('edge', 0.068), ('tokyo', 0.063), ('utilized', 0.063), ('image', 0.063), ('similarity', 0.062), ('medical', 0.06), ('vector', 0.06), ('axes', 0.058), ('frontier', 0.057), ('element', 0.055), ('hardware', 0.054), ('radial', 0.053), ('chip', 0.053), ('regime', 0.052), ('position', 0.052), ('fabricated', 0.05), ('height', 0.049), ('adachi', 0.048), ('cephalometric', 0.048), ('configurable', 0.048), ('featuring', 0.048), ('flags', 0.048), ('landmark', 0.048), ('nmos', 0.048), ('photomicrograph', 0.048), ('ramp', 0.048), ('sella', 0.048), ('shibata', 0.048), ('strictness', 0.048), ('theogarajan', 0.048), ('vin', 0.048), ('wvteq', 0.048), ('trial', 0.047), ('classifier', 0.046), ('gate', 0.044), ('recognition', 0.042), ('fy', 0.042), ('capacitors', 0.042), ('iscas', 0.042), ('robust', 0.041), ('try', 0.041), ('horizontal', 0.04), ('perception', 0.038), ('separation', 0.038), ('operational', 0.038), ('vout', 0.038), ('human', 0.037), ('scores', 0.036), ('subtraction', 0.035), ('hr', 0.035), ('input', 0.035), ('experimentally', 0.034), ('area', 0.034), ('identification', 0.033), ('vr', 0.033), ('utilizing', 0.033), ('floating', 0.033), ('regimes', 0.033), ('region', 0.033), ('classification', 0.033), ('projection', 0.032), ('opportunity', 0.032), ('templates', 0.032), ('four', 0.031), ('vlsi', 0.03), ('reduction', 0.03), ('employing', 0.029), ('vertical', 0.028), ('successfully', 0.028), ('demonstrating', 0.028), ('array', 0.028), ('altered', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999958 34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

Author: Toshihiko Yamasaki, Tadashi Shibata

Abstract: A flexible pattern-matching analog classifier is presented in conjunction with a robust image representation algorithm called Principal Axes Projection (PAP). In the circuit, the functional form of matching is configurable in terms of the peak position, the peak height, and the sharpness of the similarity evaluation. The test chip was fabricated in a 0.6-µm CMOS technology and successfully applied to hand-written pattern recognition and medical radiograph analysis, using PAP as a feature-extraction pre-processing step for robust image coding. The separation and classification of overlapping patterns is also experimentally demonstrated.

2 0.14781618 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

Author: Roman Genov, Gert Cauwenberghs

Abstract: A mixed-signal paradigm is presented for high-resolution parallel inner-product computation in very high dimensions, suitable for efficient implementation of kernels in image processing. At the core of the externally digital architecture is a high-density, low-power analog array performing binary-binary partial matrix-vector multiplication. Full digital resolution is maintained even with low-resolution analog-to-digital conversion, owing to random statistics in the analog summation of binary products. A random modulation scheme produces near-Bernoulli statistics even for highly correlated inputs. The approach is validated with real image data, and with experimental results from a CID/DRAM analog array prototype in 0.5-µm CMOS.

3 0.13808733 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable for analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6-µm CMOS process are given.

4 0.1160221 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

Author: Takashi Morie, Tomohiro Matsuura, Makoto Nagata, Atsushi Iwata

Abstract: This paper describes a clustering algorithm for vector quantizers using a “stochastic association model”. It offers a simple and powerful new softmax adaptation rule. The adaptation process is the same as the on-line K-means clustering method except for adding random fluctuation in the distortion error evaluation process. Simulation results demonstrate that the new algorithm can achieve adaptation as efficient as the “neural gas” algorithm, which is reported as one of the most efficient clustering methods. The key is to add uncorrelated random fluctuation in the similarity evaluation process for each reference vector. For hardware implementation of this process, we propose a nanostructure whose operation is described by a single-electron circuit. It makes positive use of fluctuation in quantum mechanical tunneling processes.

5 0.11090186 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

Author: Aaron P. Shon, David Hsu, Chris Diorio

Abstract: We have designed and fabricated a VLSI synapse that can learn a conditional probability or correlation between spike-based inputs and feedback signals. The synapse is low power, compact, provides nonvolatile weight storage, and can perform simultaneous multiplication and adaptation. We can calibrate arrays of synapses to ensure uniform adaptation characteristics. Finally, adaptation in our synapse does not necessarily depend on the signals used for computation. Consequently, our synapse can implement learning rules that correlate past and present synaptic activity. We provide analysis and experimental chip results demonstrating the operation in learning and calibration mode, and show how to use our synapse to implement various learning rules in silicon.

6 0.10623703 191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

7 0.10048239 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

8 0.081248999 118 nips-2001-Matching Free Trees with Replicator Equations

9 0.064866632 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

10 0.047510434 28 nips-2001-Adaptive Nearest Neighbor Classification Using Support Vector Machines

11 0.047262713 46 nips-2001-Categorization by Learning and Combining Object Parts

12 0.043243729 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

13 0.040046055 24 nips-2001-Active Information Retrieval

14 0.039970752 99 nips-2001-Intransitive Likelihood-Ratio Classifiers

15 0.039956003 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Souce Separation

16 0.038499478 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

17 0.037923001 78 nips-2001-Fragment Completion in Humans and Machines

18 0.03531342 89 nips-2001-Grouping with Bias

19 0.034876619 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

20 0.034719564 139 nips-2001-Online Learning with Kernels


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.129), (1, -0.077), (2, -0.075), (3, -0.015), (4, -0.004), (5, 0.024), (6, -0.056), (7, -0.015), (8, -0.014), (9, -0.055), (10, 0.032), (11, -0.21), (12, 0.13), (13, 0.071), (14, -0.06), (15, 0.199), (16, -0.038), (17, 0.066), (18, 0.124), (19, 0.28), (20, -0.052), (21, 0.034), (22, 0.03), (23, -0.101), (24, 0.002), (25, 0.0), (26, -0.104), (27, 0.0), (28, 0.067), (29, 0.034), (30, 0.017), (31, 0.003), (32, -0.085), (33, -0.005), (34, 0.016), (35, 0.118), (36, 0.121), (37, 0.062), (38, 0.045), (39, -0.086), (40, 0.004), (41, 0.039), (42, 0.056), (43, -0.087), (44, -0.057), (45, 0.09), (46, -0.053), (47, 0.015), (48, -0.07), (49, -0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97690904 34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

Author: Toshihiko Yamasaki, Tadashi Shibata

Abstract: A flexible pattern-matching analog classifier is presented in conjunction with a robust image representation algorithm called Principal Axes Projection (PAP). In the circuit, the functional form of matching is configurable in terms of the peak position, the peak height and the sharpness of the similarity evaluation. The test chip was fabricated in a 0.6-µm CMOS technology and successfully applied to hand-written pattern recognition and medical radiograph analysis using PAP as a feature extraction pre-processing step for robust image coding. The separation and classification of overlapping patterns is also experimentally demonstrated. 1 I ntr o du c ti o n Pattern classification using template matching techniques is a powerful tool in implementing human-like intelligent systems. However, the processing is computationally very expensive, consuming a lot of CPU time when implemented as software running on general-purpose computers. Therefore, software approaches are not practical for real-time applications. For systems working in mobile environment, in particular, they are not realistic because the memory and computational resources are severely limited. The development of analog VLSI chips having a fully parallel template matching architecture [1,2] would be a promising solution in such applications because they offer an opportunity of low-power operation as well as very compact implementation. In order to build a real human-like intelligent system, however, not only the pattern representation algorithm but also the matching hardware itself needs to be made flexible and robust in carrying out the pattern matching task. First of all, two-dimensional patterns need to be represented by feature vectors having substantially reduced dimensions, while at the same time preserving the human perception of similarity among patterns in the vector space mapping. For this purpose, an image representation algorithm called Principal Axes Projection (PAP) has been de- veloped [3] and its robust nature in pattern recognition has been demonstrated in the applications to medical radiograph analysis [3] and hand-written digits recognition [4]. However, the demonstration so far was only carried out by computer simulation. Regarding the matching hardware, high-flexibility analog template matching circuits have been developed for PAP vector representation. The circuits are flexible in a sense that the matching criteria (the weight to elements, the strictness in matching) are configurable. In Ref. [5], the fundamental characteristics of the building block circuits were presented, and their application to simple hand-written digits was presented in Ref. [6]. The purpose of this paper is to demonstrate the robust nature of the hardware matching system by experiments. The classification of simple hand-written patterns and the cephalometric landmark identification in gray-scale medical radiographs have been carried out and successful results are presented. In addition, multiple overlapping patterns can be separated without utilizing a priori knowledge, which is one of the most difficult problems at present in artificial intelligence. 2 I ma g e re pr es e n tati on by P AP PAP is a feature extraction technique using the edge information. The input image (64x64 pixels) is first subjected to pixel-by-pixel spatial filtering operations to detect edges in four directions: horizontal (HR); vertical (VR); +45 degrees (+45); and –45 degrees (-45). 
Each detected edge is represented by a binary flag, and four edge maps are generated. The two-dimensional bit array in an edge map is reduced to a one-dimensional array of numerals by projection. The horizontal edge flags are accumulated in the horizontal direction and projected onto the vertical axis. The vertical, +45-degree and –45-degree edge flags are similarly projected onto the horizontal, –45-degree and +45-degree axes, respectively; the method is therefore called “Principal Axes Projection (PAP)” [3,4]. Each projection data set is then concatenated in the order HR, +45, VR, –45 to form a feature vector. Every four neighboring elements are averaged and merged into one element, finally yielding a 64-dimensional vector. This vector representation preserves the human perception of similarity in the vector space very well. In the experiments below, we have further reduced the feature vector to 16 dimensions by merging each set of four neighboring elements into one, without any significant degradation in performance.

3 Circuit configurations

[Figure 1: Schematic of the vector element matching circuit: (a) pyramid (gain reduction) type; (b) plateau (feedback) type. The capacitor area ratio is indicated in the figure.]

The basic functional form of the similarity evaluation is generated by the short-circuit current flowing in a CMOS inverter, as in Refs. [7,8,9]. However, those circuits were used to form radial basis functions, and only the peak position was programmable. In our circuits, not only the peak position but also the peak height and the sharpness of the peak response shape are configurable, realizing flexible matching operations [5]. Two types of element matching circuit are shown in Fig. 1. They evaluate the similarity between two vector elements, and the result of the evaluation is given as an output current (IOUT) from the pMOS current mirror. The peak position is temporarily memorized by auto-zeroing of the CMOS inverter. The common-gate transistor biased at VGG stabilizes the voltage supply to the inverter. By controlling the gate bias VGG, the peak height can be changed; this corresponds to multiplying the element by a weight factor. The sharpness of the functional form represents the strictness of the similarity evaluation. In the pyramid-type circuit (Fig. 1(a)), the sharpness is controlled by gain reduction at the input. In the plateau-type circuit (Fig. 1(b)), the output voltage of the inverter is fed back to the input nodes, and the sharpness changes with the amount of feedback.
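Returning to the PAP coding of Section 2, the following is a rough software counterpart — a sketch only, assuming plain neighbor-difference edge detectors with an arbitrary threshold, where the paper's actual filtering kernels may differ:

import numpy as np

# Sketch of PAP coding for a 64x64 image. The edge detectors are simple
# neighbor differences -- an assumption, not the paper's filters.

def directional_edges(img, thr=0.5):
    d = lambda a, b: np.abs(a - b) > thr
    hr  = d(img[1:, :], img[:-1, :])       # horizontal-edge flags
    vr  = d(img[:, 1:], img[:, :-1])       # vertical-edge flags
    p45 = d(img[1:, 1:], img[:-1, :-1])    # +45-degree edge flags
    m45 = d(img[1:, :-1], img[:-1, 1:])    # -45-degree edge flags
    return hr, vr, p45, m45

def pool(v, out_len):
    # Average neighboring elements down to out_len bins.
    idx = np.linspace(0, len(v), out_len + 1).astype(int)
    return np.array([v[a:b].mean() if b > a else 0.0 for a, b in zip(idx[:-1], idx[1:])])

def pap_vector(img, bins_per_axis=4):
    hr, vr, p45, m45 = directional_edges(np.asarray(img, dtype=float))
    diag_sum = lambda a: np.array([a.diagonal(k).sum()
                                   for k in range(-a.shape[0] + 1, a.shape[1])])
    projections = [
        hr.sum(axis=1),            # HR flags accumulated horizontally -> vertical axis
        diag_sum(p45),             # +45-degree flags -> -45-degree axis
        vr.sum(axis=0),            # VR flags accumulated vertically -> horizontal axis
        diag_sum(np.fliplr(m45)),  # -45-degree flags -> +45-degree axis
    ]
    # Concatenate in the order HR, +45, VR, -45 and merge neighbors by averaging.
    return np.concatenate([pool(p, bins_per_axis) for p in projections])

vec = pap_vector(np.random.rand(64, 64) > 0.5)   # 16-dimensional feature vector
print(vec.shape)

Pooling each of the four projections to four bins reproduces the 16-dimensional vectors used in the experiments; pooling to 16 bins each would give the full 64-dimensional representation.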

2 0.72886026 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

Author: Roman Genov, Gert Cauwenberghs

Abstract: A mixed-signal paradigm is presented for high-resolution parallel innerproduct computation in very high dimensions, suitable for efficient implementation of kernels in image processing. At the core of the externally digital architecture is a high-density, low-power analog array performing binary-binary partial matrix-vector multiplication. Full digital resolution is maintained even with low-resolution analog-to-digital conversion, owing to random statistics in the analog summation of binary products. A random modulation scheme produces near-Bernoulli statistics even for highly correlated inputs. The approach is validated with real image data, and with experimental results from a CID/DRAM analog array prototype in 0.5-µm CMOS.
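The decomposition behind this architecture can be illustrated in software. The sketch below unrolls an unsigned matrix and vector into bit planes and recombines exact binary-binary partial products; the coarse analog-to-digital conversion and the Bernoulli-randomizing modulation of the actual chip are deliberately omitted, and all sizes are illustrative.

import numpy as np

# Externally digital, internally binary matrix-vector multiplication:
# decompose an I-bit matrix W and J-bit vector x into bit planes, let an
# "analog array" sum binary-binary products (here: exact sums), and
# recombine the partial products with the appropriate powers of two.

def binary_binary_mvm(W, x, bits_w=4, bits_x=4):
    Wb = [(W >> i) & 1 for i in range(bits_w)]      # bit planes of the matrix
    xb = [(x >> j) & 1 for j in range(bits_x)]      # bit planes of the vector
    y = np.zeros(W.shape[0], dtype=np.int64)
    for i in range(bits_w):
        for j in range(bits_x):
            partial = Wb[i] @ xb[j]                 # binary-binary partial product
            y += partial.astype(np.int64) << (i + j)
    return y

rng = np.random.default_rng(0)
W = rng.integers(0, 16, size=(8, 32))
x = rng.integers(0, 16, size=32)
assert np.array_equal(binary_binary_mvm(W, x), W @ x)   # full digital resolution

Because the recombination is exact, full digital precision is preserved here by construction; the abstract's claim is that on the chip, random modulation keeps the analog partial sums near-Bernoulli so that a low-resolution ADC suffices.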

3 0.644382 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

Author: Aaron P. Shon, David Hsu, Chris Diorio

Abstract: We have designed and fabricated a VLSI synapse that can learn a conditional probability or correlation between spike-based inputs and feedback signals. The synapse is low power, compact, provides nonvolatile weight storage, and can perform simultaneous multiplication and adaptation. We can calibrate arrays of synapses to ensure uniform adaptation characteristics. Finally, adaptation in our synapse does not necessarily depend on the signals used for computation. Consequently, our synapse can implement learning rules that correlate past and present synaptic activity. We provide analysis and experimental chip results demonstrating operation in learning and calibration modes, and show how to use our synapse to implement various learning rules in silicon.

1 Introduction

Computation with conditional probabilities and correlations underlies many models of neurally inspired information processing. For example, in the sequence-learning neural network models proposed by Levy [1], synapses store the log conditional probability that a presynaptic spike occurred given that the postsynaptic neuron spiked sometime later. Boltzmann machine synapses learn the difference between the correlations of pairs of neurons in the sleep and wake phases [2]. In most neural models, computation and adaptation occur at the synaptic level. Hence, a silicon synapse that can learn conditional probabilities or correlations between pre- and postsynaptic signals can be a key part of many silicon neural-learning architectures. We have designed and implemented a silicon synapse, in a 0.35-µm CMOS process, that learns a synaptic weight corresponding to the conditional probability or correlation between binary input and feedback signals. This circuit utilizes floating-gate transistors to provide both nonvolatile storage and weight adaptation mechanisms [3]. In addition, the circuit is compact, low power, and provides simultaneous adaptation and computation. Our circuit improves upon previous implementations of floating-gate learning synapses [3,4,5] in several ways. First, our synapse appears to be the first spike-based floating-gate synapse that implements a general learning principle, rather than a particular learning rule [4,5]. We demonstrate that our synapse can learn either the conditional probability or the correlation between input and feedback signals. Consequently, we can implement a wide range of synaptic learning networks with our circuit. Second, unlike the general correlational learning synapse proposed by Hasler et al. [3], our synapse can implement learning rules that correlate pre- and postsynaptic activity occurring at different times. Learning algorithms that employ time-separated correlations include both temporal difference learning [6] and the recently postulated temporally asymmetric Hebbian learning [7]. Hasler’s correlational floating-gate synapse can only perform updates based on the present input and feedback signals, and is therefore unsuitable for learning rules that correlate signals occurring at different times. Because the signals that control adaptation and computation in our synapse are separate, our circuit can implement these time-dependent learning rules. Finally, we can calibrate our synapses to remove mismatch between the adaptation mechanisms of individual synapses. Mismatch between the same adaptation mechanisms on different floating-gate transistors limits the accuracy of learning rules based on these devices.
This problem has been noted in previous circuits that use floating-gate adaptation [4,8]. In our circuit, different synapses can learn widely divergent weights from the same inputs because of component mismatch. We provide a calibration mechanism that enables identical adaptation across multiple synapses despite device mismatch. To our knowledge, this circuit is the first instance of a floating-gate learning circuit that includes this feature. This paper is organized as follows. First, we provide a brief introduction to floating-gate transistors. Next, we provide a description and analysis of our synapse, demonstrating that it can learn the conditional probability or correlation between a pair of binary signals. We then describe the calibration circuitry and show its effectiveness in compensating for adaptation mismatches. Finally, we discuss how this synapse can be used for silicon implementations of various learning networks.

2 Floating-gate transistors

Because our circuit relies on floating-gate transistors to achieve adaptation, we begin by briefly discussing these devices. A floating-gate transistor (e.g. transistor M3 of Fig. 1(a)) comprises a MOSFET whose gate is isolated on all sides by SiO2. A control gate capacitively couples signals to the floating gate. Charge stored on the floating gate implements a nonvolatile analog weight; the transistor’s output current varies with both the floating-gate voltage and the control-gate voltage. We use Fowler-Nordheim tunneling [9] to increase the floating-gate charge, and impact-ionized hot-electron injection (IHEI) [10] to decrease the floating-gate charge. We tunnel by placing a high voltage on a tunneling implant, denoted by the arrow in Fig. 1(a). We inject by imposing more than about 3V across the drain and source of transistor M3. The circuit allows simultaneous adaptation and computation, because neither tunneling nor IHEI interferes with circuit operation. Over a wide range of tunneling voltages Vtun, we can approximate the magnitude of the tunneling current Itun as [4]:

I_{tun} = I_{tun0} \exp\!\big( (V_{tun} - V_{fg}) / V_\chi \big) \qquad (1)

where Vtun is the tunneling-implant voltage, Vfg is the floating-gate voltage, and Itun0 and Vχ are fit constants. Over a wide range of transistor drain and source voltages, we can approximate the magnitude of the injection current Iinj as [4]:

I_{inj} = I_{inj0} \, I_s^{\,1 - U_t/V_\gamma} \exp\!\big( (V_s - V_d) / V_\gamma \big) \qquad (2)

where Vs and Vd are the drain and source voltages, Iinj0 is a pre-exponential current, Vγ is a constant that depends on the VLSI process, and Ut is the thermal voltage kT/q.

3 The silicon synapse

We show our silicon synapse in Fig. 1. The synapse stores an analog weight W, multiplies W by a binary input Xin, and adapts W to either a conditional probability P(Xcor|Y) or a correlation P(XcorY). Xin is analogous to a presynaptic input, while Y is analogous to a postsynaptic signal or error feedback. Xcor is a presynaptic adaptation signal, and typically has some relationship with Xin. We can implement different learning rules by altering the relationship between Xcor and Xin; for some examples, see Section 4. We now describe the circuit in more detail. The drain current of floating-gate transistor M4 represents the weight value W. Because the control gate of M4 is fixed, W depends solely on the charge on floating-gate capacitor C1. We can switch the drain current on or off using transistor M7; this switching action corresponds to a multiplication of the weight value W by the binary input signal, Xin.
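For intuition about these device models, Eqs. 1 and 2 can be evaluated directly in software. In the sketch below every fit constant is an invented placeholder, not a measured device parameter:

import numpy as np

# Placeholder fit constants -- invented for illustration, not measured.
I_tun0, V_chi = 1e-15, 1.0                  # Eq. 1 parameters
I_inj0, V_gamma, U_t = 1e-12, 0.5, 0.025    # Eq. 2 parameters

def i_tun(V_tun, V_fg):
    """Tunneling-current magnitude, Eq. 1."""
    return I_tun0 * np.exp((V_tun - V_fg) / V_chi)

def i_inj(I_s, V_s, V_d):
    """Injection-current magnitude, Eq. 2."""
    return I_inj0 * I_s ** (1 - U_t / V_gamma) * np.exp((V_s - V_d) / V_gamma)

print(i_tun(9.0, 1.0))               # tunneling grows with V_tun - V_fg
print(i_inj(1e-9, 3.5, 0.0))         # injection grows with V_s - V_d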
We choose values for the drain voltage of M4 to prevent injection. A second floating-gate transistor M3, whose gate is also connected to C1, controls adaptation by injection and tunneling. Simultaneously high input signals Xcor and Y cause injection, increasing the weight. A high Vtun causes tunneling, decreasing the weight. We either correlate a high Vtun with the signal Y or provide a fixed high Vtun throughout the adaptation process; the choice determines whether the circuit learns a conditional probability or a correlation, respectively. Because the drain current sourced by M4 provides the weight W, we can express W in terms of M4’s floating-gate voltage, Vfg. Vfg includes the effects of both the fixed control-gate voltage and the variable floating-gate charge. The expression differs depending on whether the readout transistor operates in the subthreshold or above-threshold regime; we provide both expressions below:

W = \begin{cases} I_0 \exp\!\big( -\kappa^2 V_{fg} / ((1+\kappa) U_t) \big), & \text{below threshold} \\ \beta \big( V_0 - \kappa^2 V_{fg} / (1+\kappa) \big)^2, & \text{above threshold} \end{cases} \qquad (3)

Here V0 is a constant that depends on the threshold voltage and on Vdd, Ut is the thermal voltage kT/q, κ is the floating-gate-to-channel coupling coefficient, and I0 is a fixed bias current. Eq. 3 shows that W depends solely on Vfg (all the other factors are constants). These equations differ slightly from the standard equations for the source current through a transistor, due to source degeneration caused by M4. This degeneration smoothes the nonlinear relationship between Vfg and Is; its addition to the circuit is optional.

3.1 Weight adaptation

Because W depends on Vfg, we can control W by tunneling or injecting transistor M3. In this section, we show that these mechanisms enable our circuit to learn the correlation or conditional probability between inputs Xcor (which we will refer to as X) and Y. Our analysis assumes that these statistics are fixed over some period during which adaptation occurs. The change in floating-gate voltage, and hence the weight, discussed below should therefore be interpreted as the expected weight change due to the statistics of the inputs. We discuss learning of conditional probabilities; a slight change in the tunneling signal, described previously, allows us to learn correlations instead. We first derive the injection equation for the floating-gate voltage in terms of the joint probability P(X,Y) by considering the relationship between the input signals and Is, Vs, and Vd of M3.

[Figure 1: (a) Synapse schematic. (b) Equilibrium weight in the subthreshold regime versus the conditional probability P(X|Y), showing experimental chip data and a fit from Eq. 7. (c) Equilibrium weight versus conditional probability in the above-threshold regime, again showing chip data and a fit from Eq. 7.]

We assume that transistor M1 is in saturation, constraining Is at M3 to be constant. Presentation of a joint binary event (X,Y) closes nFET switches M5 and M6, pulling the drain voltage Vd of M3 to 0V and causing injection. Therefore the probability that Vd is low enough to cause injection is the probability of the joint event Pr(X,Y). By Eq. 2, the amount of injection also depends on M3’s source voltage Vs.
Because M3 is constrained to a fixed channel current, a drop in the floating-gate voltage, ∆Vfg, causes a drop in Vs of magnitude κ∆Vfg. Substituting these expressions into Eq. 2 results in a floating-gate voltage update of:

(dV_{fg}/dt)_{inj} = -I_{inj0} \Pr(X,Y) \exp\!\big( \kappa V_{fg} / V_\gamma \big) \qquad (4)

where Iinj0 also includes the constant source current. Eq. 4 shows that the floating-gate voltage update due to injection is a function of the probability of the joint event (X,Y). Next we analyze the effects of tunneling on the floating-gate voltage. The origin of the tunneling signal determines whether the synapse is learning a conditional probability or a correlation. If the circuit is learning a conditional probability, occurrence of the conditioning event Y gates a corresponding high-voltage (~9V) signal onto the tunneling implant. Consequently, we can express the change in floating-gate voltage due to tunneling in terms of the probability of Y and the floating-gate voltage:

(dV_{fg}/dt)_{tun} = I_{tun0} \Pr(Y) \exp\!\big( -V_{fg} / V_\chi \big) \qquad (5)

Eq. 5 shows that the floating-gate voltage update due to tunneling is a function of the probability of the event Y.

3.2 Weight equilibrium

To demonstrate that our circuit learns P(X|Y), we show that the equilibrium weight of the synapse is solely a function of P(X|Y). The equilibrium weight is the weight value at which the expected weight change over time equals zero; it corresponds to the floating-gate voltage where the injection and tunneling currents are equal. To find this voltage, we equate Eqs. 4 and 5 and solve:

V_{fg}^{eq} = \frac{-1}{\kappa/V_\gamma + 1/V_\chi} \left[ \log \Pr(X|Y) + \log \frac{I_{inj0}}{I_{tun0}} \right] \qquad (6)

To derive the equilibrium weight, we substitute Eq. 6 into Eq. 3 and solve:

W_{eq} = \begin{cases} I_0 \big( (I_{inj0}/I_{tun0}) \Pr(X|Y) \big)^{\alpha}, & \text{below threshold} \\ \beta \big( V_0 + \eta \, [\, \log (I_{inj0}/I_{tun0}) + \log \Pr(X|Y) \,] \big)^2, & \text{above threshold} \end{cases} \qquad (7)

where \alpha = \kappa^2 / \big( (1+\kappa) U_t (\kappa/V_\gamma + 1/V_\chi) \big) and \eta = \kappa^2 / \big( (1+\kappa)(\kappa/V_\gamma + 1/V_\chi) \big).

Consequently, the equilibrium weight is a function of the conditional probability below threshold, and a function of the log-squared conditional probability above threshold. Note that the equilibrium weight is stable because of negative feedback in the tunneling and injection processes; the weight therefore always converges to the equilibrium value of Eq. 7. Figs. 1(b) and (c) show the equilibrium weight versus the conditional probability P(X|Y) for both sub- and above-threshold circuits, along with fits to Eq. 7. In both regimes, the relationship between P(X|Y) and the equilibrium weight enables us to compute the probability of a vector of synaptic inputs X given a postsynaptic response Y: we can pass the output currents of an array of synapses through diodes and then add the resulting voltages via a capacitive voltage divider, yielding a voltage that is a linear function of log P(X|Y).

3.3 Calibration circuitry

Mismatch between injection and tunneling in different floating-gate transistors can greatly reduce the ability of our synapses to learn meaningful values. Experimental data from floating-gate transistors fabricated in a 0.35-µm process show that injection varies by as much as 2:1 across a chip, and tunneling by up to 1.2:1. This mismatch causes the weight equilibria of different synapses to differ by a multiplicative gain. Fig. 2(b) shows the equilibrium weights of an array of six synapses exposed to identical input signals.
The variation of the synaptic weights is of the same order of magnitude as the weights themselves, making large arrays of synapses all but useless for implementing many learning algorithms. We alleviate this problem by calibrating our synapses to equalize the pre-exponential tunneling and injection constants. Because the dependence of the equilibrium weight on these constants is determined by the ratio Iinj0/Itun0, our calibration process changes Iinj0 to equalize the ratio of injection to tunneling across all synapses. We choose to calibrate injection because we can easily change Iinj0 by altering the drain current through M1. Our calibration procedure is a self-convergent memory write [11] that causes the equilibrium weight of every synapse to equal the current Ical. Calibration requires many operating cycles; during each cycle, we first increase the equilibrium weight of the synapse, and second, we let the synapse adapt to the new equilibrium weight.

[Figure 2: (a) Schematic of the calibrated synapse with the signals used during the calibration procedure. (b) Equilibrium weights for the array of synapses of Fig. 1(a). (c) Equilibrium weights for the same array after calibration.]

We create the calibrated synapse by modifying our original synapse according to Fig. 2(a). We convert M1 into a floating-gate transistor whose floating-gate charge sets M3’s channel current, providing control of Iinj0 in Eq. 7. Transistor M8 modifies M1’s gate charge by means of injection when M9’s gate is low and Vcal is low. M9’s gate is only low when the equilibrium weight W is less than Ical. During calibration, injection and tunneling on M3 are continuously active. We apply a pulse train to Vcal; during each pulse period, Vcal is predominantly high. When Vcal is high, the synapse adapts towards its equilibrium weight. When Vcal pulses low, M8 injects, increasing the synapse’s equilibrium weight W. We repeat this process until the equilibrium weight W matches Ical, causing M9’s gate voltage to rise, disabling Vcal and with it injection. To ensure that a precalibrated synapse has an equilibrium weight below Ical, we use tunneling to erase all bias transistors prior to calibration. Fig. 2(c) shows the equilibrium weights of six synapses after calibration. The data show that calibration can reduce the effect of mismatched adaptation on the synapse’s learned weight to a small fraction of the weight itself.

Because M1 is a floating-gate transistor, its parasitic gate-drain capacitance causes a mild dependence between M1’s drain voltage and source current. Consequently, M3’s floating-gate voltage now affects its source current (through M1’s drain voltage), and we can model M3 as a source-degenerated pFET [3]. The new expression for the injection current in M3 is:

(dV_{fg}/dt)_{inj} = -I_{inj0} \Pr(X,Y) \exp\!\left( \left( \frac{\kappa}{V_\gamma} - \frac{\kappa k_1}{U_t} \right) V_{fg} \right) \qquad (8)

where k1 is close to zero. The new expression for injection slightly changes the α and η terms of the weight equilibrium in Eq. 7, although the qualitative relationship between the weight equilibrium and the conditional probability remains the same.

[Figure 3: A method for achieving spike-time-dependent plasticity in silicon: paired W+ and W− synapses driven by neuronal outputs and their activation windows.]
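As a quick numerical sanity check of this analysis — with every constant below an invented placeholder, not a device measurement — integrating the update dynamics of Eqs. 4 and 5 settles at the fixed point of Eq. 6, and the resulting subthreshold weight matches the power-law form of Eq. 7:

import numpy as np

# All constants are illustrative assumptions.
kappa, V_gamma, V_chi, U_t = 0.7, 0.5, 1.0, 0.025
I_inj0, I_tun0 = 1e-3, 1e-3
p_y, p_x_given_y = 0.5, 0.3
p_xy = p_y * p_x_given_y                      # Pr(X,Y) = Pr(X|Y) Pr(Y)

# Integrate the injection (Eq. 4) and tunneling (Eq. 5) updates.
v_fg, dt = 0.0, 1.0
for _ in range(100_000):
    dv = (-I_inj0 * p_xy * np.exp(kappa * v_fg / V_gamma)
          + I_tun0 * p_y * np.exp(-v_fg / V_chi))
    v_fg += dt * dv

# Fixed point predicted by Eq. 6.
v_eq = -(np.log(p_x_given_y) + np.log(I_inj0 / I_tun0)) / (kappa / V_gamma + 1 / V_chi)
print(v_fg, v_eq)                             # the two should agree

# Subthreshold weight (Eq. 3 with I_0 = 1) versus the power law of Eq. 7.
alpha = kappa**2 / ((1 + kappa) * U_t * (kappa / V_gamma + 1 / V_chi))
print(np.exp(-kappa**2 * v_eq / ((1 + kappa) * U_t)),
      ((I_inj0 / I_tun0) * p_x_given_y) ** alpha)

Sweeping p_x_given_y over (0, 1] traces out the monotone weight-versus-probability curve predicted by Eq. 7.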

4 0.54601496 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

Author: Takashi Morie, Tomohiro Matsuura, Makoto Nagata, Atsushi Iwata

Abstract: This paper describes a clustering algorithm for vector quantizers using a “stochastic association model”. It offers a new simple and powerful softmax adaptation rule. The adaptation process is the same as the on-line K-means clustering method except for adding random fluctuation in the distortion error evaluation process. Simulation results demonstrate that the new algorithm can achieve efficient adaptation as high as the “neural gas” algorithm, which is reported as one of the most efficient clustering methods. It is a key to add uncorrelated random fluctuation in the similarity evaluation process for each reference vector. For hardware implementation of this process, we propose a nanostructure, whose operation is described by a single-electron circuit. It positively uses fluctuation in quantum mechanical tunneling processes.
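The adaptation rule described here is simple enough to state in a few lines of code. The following sketch adds uncorrelated Gaussian fluctuation to the distortion of each reference vector before selecting the winner; the noise amplitude and learning rate are illustrative assumptions (in practice the fluctuation would presumably be annealed over time):

import numpy as np

# Sketch of the "stochastic association" adaptation: plain on-line K-means,
# except that random fluctuation is added to the distortion evaluation.

def stochastic_association_kmeans(data, k=4, lr=0.05, noise=0.5, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    refs = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            dist = ((refs - x) ** 2).sum(axis=1)
            dist += noise * rng.standard_normal(k)   # fluctuation in the evaluation
            w = np.argmin(dist)                      # stochastic winner selection
            refs[w] += lr * (x - refs[w])            # usual on-line K-means update
    return refs

data = np.random.default_rng(1).standard_normal((500, 2))
print(stochastic_association_kmeans(data))

Setting noise=0 recovers plain on-line K-means, which makes the role of the fluctuation easy to isolate in simulation.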

5 0.50180215 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable for analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6-µm CMOS process are given.

6 0.45669851 191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

7 0.34527516 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

8 0.31831631 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

9 0.27945021 182 nips-2001-The Fidelity of Local Ordinal Encoding

10 0.26211905 118 nips-2001-Matching Free Trees with Replicator Equations

11 0.24343935 89 nips-2001-Grouping with Bias

12 0.24304178 189 nips-2001-The g Factor: Relating Distributions on Features to Distributions on Images

13 0.23732042 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

14 0.21535669 125 nips-2001-Modularity in the motor system: decomposition of muscle patterns as combinations of time-varying synergies

15 0.20995592 177 nips-2001-Switch Packet Arbitration via Queue-Learning

16 0.19983378 158 nips-2001-Receptive field structure of flow detectors for heading perception

17 0.19683318 28 nips-2001-Adaptive Nearest Neighbor Classification Using Support Vector Machines

18 0.19159286 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

19 0.19067378 78 nips-2001-Fragment Completion in Humans and Machines

20 0.18965398 151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.066), (17, 0.027), (19, 0.025), (27, 0.07), (30, 0.117), (38, 0.024), (59, 0.015), (72, 0.052), (79, 0.023), (81, 0.358), (83, 0.012), (91, 0.122)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.85430765 34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

Author: Toshihiko Yamasaki, Tadashi Shibata


2 0.77879578 181 nips-2001-The Emergence of Multiple Movement Units in the Presence of Noise and Feedback Delay

Author: Michael Kositsky, Andrew G. Barto

Abstract: Tangential hand velocity profiles of rapid human arm movements often appear as sequences of several bell-shaped acceleration-deceleration phases called submovements or movement units. This suggests how the nervous system might efficiently control a motor plant in the presence of noise and feedback delay. Another critical observation is that stochasticity in a motor control problem makes the optimal control policy essentially different from the optimal control policy for the deterministic case. We use a simplified dynamic model of an arm and address rapid aimed arm movements. We use reinforcement learning as a tool to approximate the optimal policy in the presence of noise and feedback delay. Using a simplified model we show that multiple submovements emerge as an optimal policy in the presence of noise and feedback delay. The optimal policy in this situation is to drive the arm’s end point close to the target by one fast submovement and then apply a few slow submovements to accurately drive the arm’s end point into the target region. In our simulations, the controller sometimes generates corrective submovements before the initial fast submovement is completed, much like the predictive corrections observed in a number of psychophysical experiments.

3 0.47865272 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

Author: Dieter Fox

Abstract: Over the last years, particle filters have been applied with great success to a variety of state estimation problems. We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on-the-fly. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and computation overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.
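The sample-size bound at the heart of KLD-sampling is commonly written with a Wilson-Hilferty approximation to the chi-square quantile. The sketch below follows that published form, with k the number of occupied histogram bins, but should be read as an illustration rather than a verified reimplementation:

import math

# KLD-sampling bound on the particle count: n ~ (k-1)/(2*eps) *
# (1 - 2/(9(k-1)) + sqrt(2/(9(k-1))) * z)^3, where z is the upper
# 1-delta quantile of the standard normal distribution.

def kld_sample_size(k, eps=0.05, z=1.645):   # z = 1.645 for delta = 0.05
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * eps) * (1.0 - a + math.sqrt(a) * z) ** 3))

for k in (2, 10, 50):
    print(k, kld_sample_size(k))   # more occupied bins -> more particles

The bound grows roughly linearly in the number of occupied bins, which is what lets the filter use few samples when the density is focused and many when the state uncertainty is high.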

4 0.47647858 149 nips-2001-Probabilistic Abstraction Hierarchies

Author: Eran Segal, Daphne Koller, Dirk Ormoneit

Abstract: Many domains are naturally organized in an abstraction hierarchy or taxonomy, where the instances in “nearby” classes in the taxonomy are similar. In this paper, we provide a general probabilistic framework for clustering data into a set of classes organized as a taxonomy, where each class is associated with a probabilistic model from which the data was generated. The clustering algorithm simultaneously optimizes three things: the assignment of data instances to clusters, the models associated with the clusters, and the structure of the abstraction hierarchy. A unique feature of our approach is that it utilizes global optimization algorithms for both of the last two steps, reducing the sensitivity to noise and the propensity to local maxima that are characteristic of algorithms such as hierarchical agglomerative clustering that only take local steps. We provide a theoretical analysis for our algorithm, showing that it converges to a local maximum of the joint likelihood of model and data. We present experimental results on synthetic data, and on real data in the domains of gene expression and text.

5 0.47205165 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

Author: M. Schmitt

Abstract: Recurrent neural networks of analog units are computers for real-valued functions. We study the time complexity of real computation in general recurrent neural networks. These have sigmoidal, linear, and product units of unlimited order as nodes and no restrictions on the weights. For networks operating in discrete time, we exhibit a family of functions with arbitrarily high complexity, and we derive almost tight bounds on the time required to compute these functions. Thus, evidence is given of the computational limitations that time-bounded analog recurrent neural networks are subject to.

6 0.47030947 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

7 0.46992111 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

8 0.46662289 22 nips-2001-A kernel method for multi-labelled classification

9 0.46650869 46 nips-2001-Categorization by Learning and Combining Object Parts

10 0.4652971 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

11 0.46484807 56 nips-2001-Convolution Kernels for Natural Language

12 0.46477169 116 nips-2001-Linking Motor Learning to Function Approximation: Learning in an Unlearnable Force Field

13 0.46254605 163 nips-2001-Risk Sensitive Particle Filters

14 0.4620297 182 nips-2001-The Fidelity of Local Ordinal Encoding

15 0.46161413 60 nips-2001-Discriminative Direction for Kernel Classifiers

16 0.46148372 161 nips-2001-Reinforcement Learning with Long Short-Term Memory

17 0.46037352 100 nips-2001-Iterative Double Clustering for Unsupervised and Semi-Supervised Learning

18 0.46029517 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

19 0.4599573 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

20 0.45918137 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's