Efficient Computation with Inherently Stochastic Neural Networks
Dr. Emre Neftci
Assistant Professor in the Department of Cognitive Sciences and Computer Science
University of California, Irvine
DEIB - Conference Room "E. Gatti" (building 20)
November 19th, 2019
2.00 pm
Contacts:
Daniele Ielmini
Research Line:
Electron devices
Abstract
Synaptic and neural unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in the brain. Combined with a suitable neuron model, this stochasticity can induce Monte Carlo-like sampling of an associated probability distribution. In this talk, I will discuss our recent work on Neural Sampling Machines (NSM): a class of stochastic neural networks that exploits this property for probabilistic inference and learning. The always-on stochasticity of the NSM can exploit the stochasticity inherent in a physical substrate, such as analog non-volatile memories for in-memory computing, while requiring almost exclusively addition and comparison operations. At the single-neuron level, we find that the probability of activation has a self-normalizing property that mirrors "weight normalization", a previously studied mechanism that provides many of the benefits of "batch normalization" in an online fashion. Our results provide a machine learning-driven approach for designing neuromorphic hardware capable of carrying out efficient inference and learning workloads.
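The self-normalizing activation probability mentioned in the abstract can be illustrated with a small numerical sketch. This is not the speaker's implementation: the "blank-out" synaptic noise model (each synapse transmits with probability p), the weight values, and the zero firing threshold are all illustrative assumptions. Under a Gaussian approximation, the firing probability depends on the weights only through the normalized projection w·x / ||w ∘ x||, so rescaling the weights leaves it unchanged, the property that mirrors weight normalization:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the talk): 50 synapses with
# Gaussian weights, binary inputs, and blank-out transmission noise.
n = 50
w = rng.normal(0.0, 1.0, n)
x = (rng.random(n) < 0.5).astype(float)
p = 0.5  # probability that a synapse transmits on a given presentation

def mc_fire_prob(w, x, p, n_samples=200_000):
    """Monte Carlo estimate of P(u > 0), where u = sum_i w_i * xi_i * x_i
    and xi_i ~ Bernoulli(p) is drawn independently per presentation."""
    xi = rng.random((n_samples, w.size)) < p          # noise masks
    u = (xi * (w * x)).sum(axis=1)                    # noisy pre-activations
    return (u > 0).mean()

def gaussian_fire_prob(w, x, p):
    """Gaussian (CLT) approximation: P(u > 0) ~ Phi(mu / sigma).
    Since sigma is proportional to ||w * x||, the argument is the
    weight-norm-normalized drive -- the self-normalizing property."""
    wx = w * x
    mu = p * wx.sum()
    sigma = sqrt(p * (1 - p) * (wx ** 2).sum())
    return 0.5 * (1.0 + erf(mu / (sigma * sqrt(2.0))))

p_mc = mc_fire_prob(w, x, p)
p_clt = gaussian_fire_prob(w, x, p)
p_scaled = gaussian_fire_prob(10.0 * w, x, p)  # rescaled weights
print(p_mc, p_clt, p_scaled)
```

Because both the mean and the standard deviation of the noisy pre-activation scale linearly with the weights, multiplying w by any positive constant cancels out of the ratio, so `p_scaled` equals `p_clt`: the activation probability is invariant to the scale of the weights, as weight normalization enforces explicitly. Note also that sampling the noise requires only multiply-mask, addition, and comparison operations.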
Short Bio
Dr. Emre Neftci received his M.Sc. degree in physics from École Polytechnique Fédérale de Lausanne, Switzerland, and his Ph.D. in 2010 from the Institute of Neuroinformatics at the University of Zürich and ETH Zürich. Currently, he is an assistant professor in the Department of Cognitive Sciences and Computer Science at the University of California, Irvine. His current research explores the bridges between neuroscience and machine learning, with a focus on the theoretical and computational modeling of learning algorithms that are best suited to neuromorphic hardware and non-von Neumann computing architectures.