Tutorials ISCAS 2019

14: Machine & Deep Learning for Edge-Cloud Computing Systems

  • Baoxin Li, Arizona State University
  • Fengbo Ren, Arizona State University

Abstract:

Machine learning, in particular deep learning, has become one of the most significant driving forces in many data-intensive fields, bringing about performance breakthroughs in computer vision, natural language processing, and many other applications related to artificial intelligence. This tutorial aims at introducing machine and deep learning to an audience of circuits and systems engineers and researchers, with a unique focus on learning paradigms and techniques that are useful for big-data processing in edge and cloud computing systems. The tutorial will start with a self-contained presentation of the foundations of machine learning, covering topics including typical models and learning paradigms (supervised, unsupervised, reinforcement learning, etc.). Deep learning will then be discussed, with a focus on convolutional architectures owing to their wide range of applications. With these preparations, the tutorial will present unique challenges and potential solutions in deploying large-scale learning paradigms in edge-cloud computing systems. Topics to be covered in this regard include network pruning (e.g., for fitting large deep networks onto small edge devices), transfer learning (e.g., leveraging the cloud to mentor an edge device in learning), and energy-efficient deep learning implementations (e.g., binary neural networks, FPGA implementations, etc.). Lastly, several specific applications will be used to illustrate some of the discussed learning paradigms and/or deployment strategies. From this tutorial, a participant will be able to not only learn foundational knowledge in machine/deep learning but also gain a good understanding of how to deploy leading deep networks for particular applications in edge-cloud computing systems.
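
To make two of these techniques concrete, below is a minimal, illustrative Python/NumPy sketch (not taken from the tutorial materials) of unstructured magnitude-based weight pruning and of XNOR-Net-style weight binarization. The function names and the 75% sparsity setting are assumptions chosen for the example.

    import numpy as np

    def magnitude_prune(weights, sparsity):
        """Zero out the smallest-magnitude entries so that roughly
        `sparsity` (a fraction in [0, 1)) of the weights become zero."""
        k = int(sparsity * weights.size)
        if k == 0:
            return weights.copy()
        # Threshold at the k-th smallest absolute value.
        threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
        return np.where(np.abs(weights) > threshold, weights, 0.0)

    def binarize(weights):
        """Map weights to {-alpha, +alpha}, with alpha the mean absolute
        value, as in XNOR-Net-style binary neural networks. Exact zeros
        stay zero because np.sign(0) == 0."""
        alpha = np.abs(weights).mean()
        return alpha * np.sign(weights), alpha

    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)

    w_pruned = magnitude_prune(w, sparsity=0.75)
    print("fraction pruned:", float(np.mean(w_pruned == 0.0)))  # ~0.75

    w_bin, alpha = binarize(w)
    print("distinct binarized values:", np.unique(w_bin))  # {-alpha, +alpha}

In practice, pruning is typically interleaved with fine-tuning to recover accuracy, and binarization is applied during training with full-precision shadow weights; the sketch shows only the core weight transformations.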

Biographies

  • Baoxin Li

    is currently a Professor and Chair of the Computer Science & Engineering program at Arizona State University. He is also a Graduate Faculty member endorsed to chair in the Electrical Engineering and Computer Engineering programs. He received his PhD in Electrical Engineering from the University of Maryland, College Park, in 2000. From 2000 to 2004, he was a Senior Researcher with SHARP Laboratories of America, where he was the technical lead in developing SHARP's HiIMPACT Sports™ technologies. He was also an Adjunct Professor with Portland State University from 2003 to 2004. His general research interests are in visual computing and machine learning. He actively works on computer vision & pattern recognition, multimedia, image/video processing, assistive technologies for the visually impaired, human-computer interaction, and statistical methods in visual computing. He twice won SHARP Labs' President's Award, in 2001 and 2004, and won SHARP Labs' Inventor of the Year Award in 2002. He is a recipient of the National Science Foundation's CAREER Award (2008-2009). He was named a Fulton Exemplar Faculty at ASU in 2017. He holds 19 issued US patents. His work has been featured in The New York Times, EE Times, MSNBC, Discovery News, ABC News, Gizmodo India, and many other media outlets. He is an Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology and the IEEE Transactions on Image Processing.

    He has authored or co-authored over 180 peer-reviewed technical papers, many in the most competitive venues, including CVPR, ICCV, NIPS, ECCV, T-PAMI, T-IP, and T-CSVT. His neural network research started in the late 1990s. His 1998 paper (and its 2001 journal version) on the empirical evaluation of classifiers reported that six-layer CNNs gave the best performance among many competing methods. His 1999 IJCNN paper was among the first to report the use of CNNs as feature extractors.

    He has also co-authored two technical books, one on sparse learning and one on deep learning. He has received over $5.5M in research funding from federal agencies, including the National Science Foundation, the National Institutes of Health, and the Department of Defense, as well as from industry and private foundations. In one current DoD-sponsored project, his group is developing new deep architectures that integrate high-level reasoning with low-level attention models for the analysis of visual events. In July 2017, he was invited to give a keynote at MIPS on deep learning for medical image applications.

    He teaches both graduate-level and undergraduate-level courses in multimedia, video processing, machine learning, and pattern recognition. At ASU, he developed the first graduate-level deep learning class in 2017 and the first undergraduate-level machine and deep learning class in 2018.

  • Fengbo Ren

    received the B.Eng. degree in Electrical Engineering from Zhejiang University in 2008, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles, in 2010 and 2014, respectively. He joined the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University as an Assistant Professor in January 2015. He directs the Parallel Systems and Computing Laboratory (PSCLab) and is affiliated with the NSF Industry/University Cooperative Research Center for Embedded Systems. His current research focuses on bringing energy efficiency and data/signal intelligence to a wide spectrum of today’s computing infrastructures, from data-center server systems to wearable/IoT devices.

    He received the Broadcom Fellowship (2012), the NSF CAREER Award (2017), and the Google Faculty Research Award (2018). He is a member of the Digital Signal Processing and the VLSI Systems & Applications Technical Committees of the IEEE Circuits and Systems Society. He is a co-PI on an NSF research infrastructure grant to develop energy-efficient infrastructures featuring heterogeneous accelerators and a deep storage hierarchy to support big-data research. He has also received funding from Cisco to develop an energy-efficient and intelligent framework for Internet-of-Things (IoT) data processing.

    In recent years, his research group has published many papers on efficient implementations of deep neural networks, especially on FPGAs. These include, for example, a forthcoming IEEE Transactions paper on binarized deep networks for edge computing.