October 13 EEC 290 Lecture 3

Cognitive computing describes “systems that learn at scale, reason with purpose, and interact with humans naturally”. To achieve this goal, researchers are considering a move away from von Neumann computing towards one or more novel and significantly different computing architectures. Among these, neuromorphic computing stands out as a promising approach to high-complexity problems, as it emulates the behavior of the human brain. In this presentation, we review our recent work on designing a neuromorphic chip for hardware acceleration of training and inference of fully connected and convolutional deep neural networks (DNNs). Training is performed with the backpropagation algorithm, at a speed and power efficiency that could potentially outperform current CPUs and GPUs. We use arrays of emerging non-volatile memories (NVM), such as phase-change memory (PCM), to implement the synaptic weights connecting layers of neurons. The approach has been demonstrated through experimental results on real devices. We address the impact of real device characteristics (non-linearity, variability, asymmetry, and stochasticity) and present solutions to tackle these issues. We also discuss some of the challenges in designing the CMOS circuitry around the NVM array. Finally, we present other neuromorphic approaches, such as networks trained with the biological spike-timing-dependent plasticity (STDP) protocol, underlining their differences from the backpropagation algorithm and the need for comprehensive studies spanning the whole field.
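
As a concrete illustration of how an NVM crossbar array can implement a layer of synaptic weights, the following Python sketch encodes each signed weight as the difference of two conductances (a G+/G− pair, a common scheme for PCM synapses) and computes the forward pass as an analog matrix-vector multiplication via Ohm's law and Kirchhoff's current law. All names, array sizes, and values here are illustrative assumptions, not taken from the chip described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: 4 input neurons, 3 output neurons.
# Each signed weight w = g_plus - g_minus, with conductances in
# normalized units [0, 1], as in differential PCM-pair schemes.
g_plus = rng.uniform(0.0, 1.0, size=(3, 4))
g_minus = rng.uniform(0.0, 1.0, size=(3, 4))

x = rng.uniform(0.0, 1.0, size=4)  # input voltages (normalized)

# Analog matrix-vector multiply: each output current is a sum of
# conductance * voltage products (Ohm's law + Kirchhoff's current law).
i_plus = g_plus @ x
i_minus = g_minus @ x
y = i_plus - i_minus  # differential read-out yields signed weights

print(y)
print((g_plus - g_minus) @ x)  # identical, by linearity
```

The point of the differential pair is that individual PCM conductances are strictly positive, while trained weights must take both signs; subtracting two device currents recovers a signed effective weight.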
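The device non-idealities listed above (non-linearity, variability, asymmetry, stochasticity) are often captured in simulation with a saturating conductance-update model. The sketch below is one such minimal model under assumed parameters (alpha_p, alpha_d, sigma are hypothetical); it is not the specific device model used in the presented work.

```python
import numpy as np

rng = np.random.default_rng(1)

G_MIN, G_MAX = 0.0, 1.0

def pulse_update(g, potentiate, alpha_p=0.10, alpha_d=0.15, sigma=0.02):
    """Apply one programming pulse to conductance g.

    - Non-linearity: the step shrinks as g approaches its bound
      (exponential-saturation model).
    - Asymmetry: depression (alpha_d) differs from potentiation (alpha_p).
    - Variability/stochasticity: Gaussian noise on every step.
    """
    if potentiate:
        dg = alpha_p * (G_MAX - g)
    else:
        dg = -alpha_d * (g - G_MIN)
    dg += sigma * rng.standard_normal()
    return float(np.clip(g + dg, G_MIN, G_MAX))

# An ideal SGD step would move the weight by the same amount per pulse;
# here, repeated potentiation saturates and is noisy instead, which is
# what distorts backpropagation when updates are applied in-device.
g = 0.5
trace = [g]
for _ in range(20):
    g = pulse_update(g, potentiate=True)
    trace.append(g)
print(np.round(trace, 3))
```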
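For contrast with backpropagation, a minimal pair-based STDP rule is sketched below; the amplitudes and time constants are generic textbook-style values, not those of the protocol discussed in the talk. Unlike backpropagation, the update is purely local: it depends only on the relative spike times of the pre- and post-synaptic neurons, with no globally propagated error signal.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change, with dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it, both decaying
    exponentially with the spike-time difference.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(dt, round(stdp_dw(dt), 5))
```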
