NIPS 2001, paper 176: abstract and references
Author: Roman Genov, Gert Cauwenberghs
Abstract: A mixed-signal paradigm is presented for high-resolution parallel innerproduct computation in very high dimensions, suitable for efficient implementation of kernels in image processing. At the core of the externally digital architecture is a high-density, low-power analog array performing binary-binary partial matrix-vector multiplication. Full digital resolution is maintained even with low-resolution analog-to-digital conversion, owing to random statistics in the analog summation of binary products. A random modulation scheme produces near-Bernoulli statistics even for highly correlated inputs. The approach is validated with real image data, and with experimental results from a CID/DRAM analog array prototype in 0.5 m CMOS. ¢
[1] A. Kramer, “Array-based analog computation,” IEEE Micro, vol. 16 (5), pp. 40-49, 1996.
[2] G. Han, E. Sanchez-Sinencio, “A general purpose neuro-image processor architecture,” Proc. of IEEE Int. Symp. on Circuits and Systems (ISCAS’96), vol. 3, pp. 495-498, 1996.
[3] F. Kub, K. Moon, I. Mack, F. Long, “Programmable analog vector-matrix multipliers,” IEEE Journal of Solid-State Circuits, vol. 25 (1), pp. 207-214, 1990.
[4] G. Cauwenberghs and V. Pedroni, “A Charge-Based CMOS Parallel Analog Vector Quantizer,” Adv. Neural Information Processing Systems (NIPS*94), Cambridge, MA: MIT Press, vol. 7, pp. 779-786, 1995.
[5] C.P. Papageorgiou, M. Oren, and T. Poggio, “A General Framework for Object Detection,” in Proceedings of International Conference on Computer Vision, 1998.
[6] G. Cauwenberghs and M.A. Bayoumi, Eds., Learning on Silicon: Adaptive VLSI Neural Systems, Norwell MA: Kluwer Academic, 1999.
[7] A. Murray and P.J. Edwards, “Synaptic Noise During MLP Training Enhances Fault-Tolerance, Generalization and Learning Trajectory,” in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann, vol. 5, pp. 491-498, 1993.
[8] A. Gersho and R.M. Gray, Vector Quantization and Signal Compression, Norwell, MA: Kluwer, 1992.
[9] V. Vapnik, The Nature of Statistical Learning Theory, 2nd ed., Springer-Verlag, 1999.
[10] J. Wawrzynek, et al., “SPERT-II: A Vector Microprocessor System and its Application to Large Problems in Backpropagation Training,” in Advances in Neural Information Processing Systems, Cambridge, MA: MIT Press, vol. 8, pp. 619-625, 1996.
[11] A. Chiang, “A programmable CCD signal processor,” IEEE Journal of Solid-State Circuits, vol. 25 (6), pp. 1510-1517, 1990.
[12] C. Neugebauer and A. Yariv, “A Parallel Analog CCD/CMOS Neural Network IC,” Proc. IEEE Int. Joint Conference on Neural Networks (IJCNN’91), Seattle, WA, vol. 1, pp. 447-451, 1991.
[13] V. Pedroni, A. Agranat, C. Neugebauer, A. Yariv, “Pattern matching and parallel processing with CCD technology,” Proc. IEEE Int. Joint Conference on Neural Networks (IJCNN’92), vol. 3, pp. 620-623, 1992.
[14] M. Howes, D. Morgan, Eds., Charge-Coupled Devices and Systems, John Wiley & Sons, 1979.
[15] R. Genov and G. Cauwenberghs, “Charge-Mode Parallel Architecture for Matrix-Vector Multiplication,” IEEE Trans. Circuits and Systems II, vol. 48 (10), 2001.