Tutorials ISCAS 2019

2: Energy-Efficient AI: System Architectures and Computational Models Based on CMOS and Beyond-CMOS Devices

  • Keshab K. Parhi, University of Minnesota - Twin Cities
  • Bo Yuan, Rutgers University
  • Naresh R. Shanbhag, University of Illinois at Urbana-Champaign
  • Abu Sebastian, IBM Research - Zurich
  • Bipin Rajendran, New Jersey Institute of Technology


With the exponential increase in the amount of data collected per day, the fields of artificial intelligence and machine learning continue to progress at a rapid pace with respect to algorithms, models, applications and hardware. In particular, deep neural networks have revolutionized the field by providing unprecedented human-like performance in solving many real-world problems such as image and speech recognition. There is also significant research aimed at unravelling the principles of computation in large biological neural networks and, in particular, biologically plausible spiking neural networks. Research efforts are also directed towards developing energy-efficient computing systems for machine learning and AI. New system architectures and computational models, from tensor processing units to in-memory computing, are being explored. Reducing energy consumption requires careful design choices from many perspectives. Examples include the choice of model, approximations of the model for reduced storage and memory access, the choice of precision for different layers of a network, and computing with beyond-CMOS devices such as memristive devices. This full-day tutorial will provide a detailed overview of new developments related to energy-efficient machine learning in CMOS and beyond-CMOS technologies. Specific topics include: (a) Low-Energy Machine Learning; (b) From Matrix to Tensor: Algorithm and Hardware Co-Design for Energy-Efficient Deep Learning; (c) Bringing AI to the Edge - A Shannon-Inspired Approach; (d) In-Memory Computing Using Memristive Devices and Applications in Machine Learning; and (e) Algorithms and System Design for Brain-Inspired Spiking Neural Networks (SNNs).


  • Keshab K. Parhi

received the B.Tech. degree from the Indian Institute of Technology (IIT), Kharagpur, in 1982, the M.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1984, and the Ph.D. degree from the University of California, Berkeley, in 1988. He has been with the University of Minnesota, Minneapolis, since 1988, where he is currently Distinguished McKnight University Professor and Edgar F. Johnson Professor of Electronic Communication in the Department of Electrical and Computer Engineering. He has published over 600 papers, is the inventor of 29 patents, and has authored the textbook VLSI Digital Signal Processing Systems (Wiley, 1999). His current research addresses VLSI architecture design of machine learning systems, hardware security, data-driven neuroscience and molecular/DNA computing. Dr. Parhi is the recipient of numerous awards, including the 2017 Mac Van Valkenburg Award and the 2012 Charles A. Desoer Technical Achievement Award from the IEEE Circuits and Systems Society, the 2003 IEEE Kiyo Tomiyasu Technical Field Award, and a Golden Jubilee medal from the IEEE Circuits and Systems Society in 2000. He served as the Editor-in-Chief of the IEEE Transactions on Circuits and Systems Part I from 2004 to 2005. He was elected a Fellow of the IEEE in 1996 and a Fellow of the American Association for the Advancement of Science (AAAS) in 2017.

  • Bo Yuan

is an Assistant Professor in the Department of Electrical and Computer Engineering at Rutgers University. He received his bachelor's and master's degrees from Nanjing University, China, in 2007 and 2010, respectively, and his Ph.D. degree from the University of Minnesota in 2015. His research interests include algorithm and hardware co-design and implementation for machine learning and signal processing systems, and error-resilient low-cost computing for machine learning and domain-specific applications. Dr. Yuan serves as a track chair and technical committee member for several IEEE/ACM conferences. He is an associate editor of the Springer Journal of Signal Processing Systems.

  • Naresh R. Shanbhag

is the Jack Kilby Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He received his Ph.D. degree in Electrical Engineering from the University of Minnesota in 1993. From 1993 to 1995, he worked at AT&T Bell Laboratories at Murray Hill, where he led the design of high-speed transceiver chip-sets for very-high-speed digital subscriber line (VDSL), before joining the University of Illinois at Urbana-Champaign in August 1995. He has held visiting faculty appointments at National Taiwan University (Aug.-Dec. 2007) and Stanford University (Aug.-Dec. 2014). His research interests are in the design of energy-efficient integrated circuits and systems for communications, signal processing and machine learning. He has more than 200 publications in this area and holds thirteen U.S. patents. Dr. Shanbhag received the 2018 SIA/SRC University Researcher Award and the 2010 Richard Newton GSRC Industrial Impact Award, became an IEEE Fellow in 2006, and received the 2006 IEEE Journal of Solid-State Circuits Best Paper Award, the 2001 IEEE Transactions on VLSI Best Paper Award, the 1999 IEEE Leon K. Kirchmayer Best Paper Award, the 1999 Xerox Faculty Award, the IEEE Circuits and Systems Society Distinguished Lectureship in 1997, the National Science Foundation CAREER Award in 1996, and the 1994 Darlington Best Paper Award from the IEEE Circuits and Systems Society. In 2000, Dr. Shanbhag co-founded and served as the Chief Technology Officer of Intersymbol Communications, Inc. (acquired in 2007 by Finisar Corp., NASDAQ: FNSR), a semiconductor start-up that provided DSP-enhanced mixed-signal ICs for electronic dispersion compensation of OC-192 optical links. From 2013 to 2017, he was the Director of the Systems On Nanoscale Information fabriCs (SONIC) Center, a five-year multi-university center funded by DARPA and SRC under the STARnet program that explored Shannon-inspired methods for computing in the nanoscale.

  • Abu Sebastian

is a Principal Research Staff Member at IBM Research - Zurich. He received a B.E. (Hons.) degree in Electrical and Electronics Engineering from BITS Pilani, India, in 1998, and M.S. and Ph.D. degrees in Electrical Engineering (minor in Mathematics) from Iowa State University in 1999 and 2004, respectively. He was a contributor to several key projects in the space of storage and memory technologies and currently leads the research effort on in-memory computing at IBM Zurich. Dr. Sebastian is a co-recipient of the 2009 IEEE Control Systems Technology Award and the 2009 IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2013, he received the IFAC Mechatronic Systems Young Researcher Award for his contributions to the field of micro-/nanoscale mechatronic systems. In 2015, he was awarded a European Research Council (ERC) Consolidator Grant. Dr. Sebastian served on the editorial board of the journal Mechatronics from 2008 to 2015 and served on the memory technologies committee of the IEDM from 2015 to 2016.

  • Bipin Rajendran

received a B.Tech. degree from IIT Kharagpur in 2000, and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 2003 and 2006, respectively. He was a Master Inventor and Research Staff Member at the IBM T. J. Watson Research Center in New York from 2006 to 2012 and a faculty member in the Electrical Engineering Department at IIT Bombay from 2012 to 2015. His research focuses on building algorithms, devices and systems for brain-inspired computing. He has authored over 70 papers in peer-reviewed journals and conferences, and has been issued 55 U.S. patents. He is currently an Associate Professor of Electrical and Computer Engineering at the New Jersey Institute of Technology.