Neuromorphic computation for Cognitive Computing: challenges and perspectives
Stefano Ambrogio
PostDoctoral Researcher at IBM - Research, in the Neuromorphic Device and Architectures Team, Almaden
DEIB - Alpha Room (building 24)
September 26th, 2017
10.00 am
Contacts:
Daniele Ielmini
Research Line:
Electron devices
Abstract
Cognitive computing describes “systems that learn at scale, reason with purpose, and interact with humans naturally”. To achieve this goal, researchers are considering a move away from the von Neumann architecture towards one or more novel and significantly different computing architectures. Among these, neuromorphic computation stands out as an innovative solution for solving high-complexity problems by emulating the behavior of the human brain. It can offer several attractive features, such as the resilience of algorithms to device variability and non-ideality.
In this presentation, we review our recent work towards designing a neuromorphic chip for hardware acceleration of training and inference of Fully Connected and Convolutional Deep Neural Networks (DNNs). Training is performed through the backpropagation algorithm, with performance – in terms of speed and power – that could potentially outperform current CPUs and GPUs. We use arrays of emerging non-volatile memories (NVM), such as Phase Change Memory, to implement the synaptic weights connecting layers of neurons. The corresponding network has been demonstrated through experimental results on real devices. We address the impact of real device characteristics – such as non-linearity, variability, asymmetry, and stochasticity – and present some solutions to tackle these issues. We then discuss some of the challenges in designing the CMOS circuitry around the NVM array. Achieving high processing speed requires highly parallel circuitry, which introduces a tradeoff between neuron complexity and area. The limited silicon area available makes it essential for designers to implement compact neurons with approximate functionality that can still support accurate DNN training. Finally, the talk will close with some architectural guidelines, showing the issues and challenges associated with routing information between different arrays to implement multi-layer DNNs.
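To make the idea concrete, the sketch below illustrates (under simplified, illustrative assumptions not drawn from the talk) how an NVM crossbar can encode synaptic weights and why device non-linearity and asymmetry matter. Each weight is represented as the difference of two conductances, the forward pass is an analog multiply-accumulate (currents summing along columns), and the conductance update saturates as a device approaches its maximum value. All parameter values and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device parameters (illustrative only, arbitrary units)
G_MIN, G_MAX = 0.1, 1.0          # programmable conductance range
N_IN, N_OUT = 4, 3               # crossbar dimensions

# Each synaptic weight is encoded as a conductance pair: W = G_plus - G_minus,
# so that both positive and negative weights can be represented.
G_plus = rng.uniform(G_MIN, G_MAX, (N_IN, N_OUT))
G_minus = rng.uniform(G_MIN, G_MAX, (N_IN, N_OUT))

def forward(x):
    """Analog multiply-accumulate: input voltages drive rows, and the
    column currents sum according to Ohm's and Kirchhoff's laws."""
    return x @ (G_plus - G_minus)

def program(G, dG):
    """Toy conductance update with a saturating non-linearity: the
    effective step shrinks as the device approaches G_MAX, mimicking
    the non-linear, bounded response of real NVM devices."""
    step = dG * (G_MAX - G) / (G_MAX - G_MIN)
    return np.clip(G + step, G_MIN, G_MAX)

x = rng.uniform(0.0, 1.0, N_IN)   # example input activations
y = forward(x)                    # one analog forward pass
G_plus = program(G_plus, 0.05)    # potentiate the positive-side devices
```

Because `program` is not the mirror image for potentiation and depression (real devices often depress in abrupt steps while potentiating gradually), repeated backpropagation updates can drift or saturate the stored weights; this asymmetry is one of the device non-idealities the talk addresses.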
Short Bio
Stefano Ambrogio obtained his PhD in 2016 from Politecnico di Milano, Italy, under the supervision of Prof. Daniele Ielmini, working on the reliability of resistive memories and their application to neuromorphic networks. He is now a PostDoctoral Researcher at IBM Research, Almaden, in the Neuromorphic Devices and Architectures Team, working on hardware accelerators based on non-volatile memories for neural networks.