
91 nips-2002-Field-Programmable Learning Arrays



Author: Seth Bridges, Miguel Figueroa, Chris Diorio, Daniel J. Hsu

Abstract: This paper introduces the Field-Programmable Learning Array, a new paradigm for rapid prototyping of learning primitives and machine-learning algorithms in silicon. The FPLA is a mixed-signal counterpart to the all-digital Field-Programmable Gate Array in that it enables rapid prototyping of algorithms in hardware. Unlike the FPGA, the FPLA is targeted directly at machine learning, providing local, parallel, online analog learning using floating-gate MOS synapse transistors. We present a prototype FPLA chip comprising an array of reconfigurable computational blocks and local interconnect. We demonstrate the viability of this architecture by mapping several learning circuits onto the prototype chip.
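The abstract describes an array of reconfigurable computational blocks whose floating-gate synapses adapt with a local, parallel, online learning rule. As a rough software analogy (not the chip's actual circuits), the sketch below models a block as a weighted sum whose per-synapse weights adapt online via a local LMS-style update; all class and function names here are hypothetical:

```python
import random

class Synapse:
    """Toy stand-in for a floating-gate synapse: an analog weight
    adapted by a local, online rule (illustrative only)."""
    def __init__(self, w=0.0):
        self.w = w

    def update(self, x, err, lr=0.05):
        # Local LMS-style update, loosely mimicking how tunneling and
        # injection nudge the stored charge on a floating gate.
        self.w += lr * err * x

class Block:
    """Toy reconfigurable computational block: a weighted sum of inputs."""
    def __init__(self, n_inputs):
        self.synapses = [Synapse(random.uniform(-0.1, 0.1))
                         for _ in range(n_inputs)]

    def forward(self, xs):
        return sum(s.w * x for s, x in zip(self.synapses, xs))

    def learn(self, xs, target, lr=0.05):
        # Each synapse updates from purely local signals: its own input
        # and a shared error, with no global weight memory.
        err = target - self.forward(xs)
        for s, x in zip(self.synapses, xs):
            s.update(x, err, lr)
        return err

# Train one block online to approximate a fixed linear map y = 2*x0 - x1.
random.seed(0)
block = Block(2)
for _ in range(2000):
    xs = [random.uniform(-1, 1), random.uniform(-1, 1)]
    block.learn(xs, 2 * xs[0] - 1 * xs[1], lr=0.1)

weights = [s.w for s in block.synapses]
```

After training, the weights converge near the target coefficients (2 and -1); the point of the sketch is only that each update uses information available locally at the synapse, the property the FPLA realizes physically in silicon.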


reference text

[1] C. Diorio, D. Hsu, and M. Figueroa, “Adaptive CMOS: From biological inspiration to systems-on-a-chip,” Proceedings of the IEEE, vol. 90, no. 3, pp. 345–357, 2002.

[2] J. B. Burr, “Digital Neural Network Implementations,” in Neural Networks: Concepts, Applications, and Implementations, Volume 2 (P. Antognetti and V. Milutinovic, eds.), pp. 237–285, Prentice Hall, 1991.

[3] S. Satyanarayana, Y. Tsividis, and H. Graf, “A reconfigurable VLSI neural network,” IEEE Journal of Solid-State Circuits, vol. 27, January 1992.

[4] R. Coggins, M. Jabri, B. Flower, and S. Pickard, “ICEG morphology classification using an analogue VLSI neural network,” in Advances in Neural Information Processing Systems 7, pp. 731–738, MIT Press, 1995.

[5] M. Holler, S. Tam, H. Castro, and R. Benson, “An electrically trainable artificial neural network with 10240 ‘floating gate’ synapses,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN89), vol. 2, (Washington, D.C.), pp. 191–196, 1989.

[6] E. K. F. Lee and P. G. Gulak, “A CMOS field programmable analog array,” IEEE Journal of Solid-State Circuits, vol. 26, December 1991.

[7] A. Montalvo, R. Gyurcsik, and J. Paulos, “An analog VLSI neural network with on-chip learning,” IEEE Journal of Solid-State Circuits, vol. 32, no. 4, 1997.

[8] R. Genov and G. Cauwenberghs, “Stochastic mixed-signal VLSI architecture for high-dimensional kernel machines,” in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), (Cambridge, MA), MIT Press, 2002.

[9] J. Hyde, T. Humes, C. Diorio, M. Thomas, and M. Figueroa, “A floating-gate trimmed, 14-bit, 250 MS/s digital-to-analog converter in standard 0.25 µm CMOS,” in Symposium on VLSI Circuits Digest of Technical Papers, pp. 328–331, 2002.

[10] D. Hsu, M. Figueroa, and C. Diorio, “A silicon primitive for competitive learning,” in Advances in Neural Information Processing Systems 13 (T. K. Leen, T. G. Dietterich, and V. Tresp, eds.), pp. 713–719, MIT Press, 2001.

[11] A. P. Shon, D. Hsu, and C. Diorio, “Learning spike-based correlations and conditional probabilities in silicon,” in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), (Cambridge, MA), MIT Press, 2002.

[12] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.

[13] P. Hasler, “Continuous-time feedback in floating-gate MOS circuits,” IEEE Transactions on Circuits and Systems II, vol. 48, pp. 56–64, January 2001.

[14] D. Hsu, S. Bridges, and C. Diorio, “Adaptive quantization and density estimation in silicon,” 2002. In submission.