EE4690 Hardware architectures for artificial intelligence

Recent advances in the field of Artificial Intelligence (AI) have demonstrated its great potential for solving exceptionally complex problems, sometimes even surpassing humans. To support such AI algorithms, the demand for new and efficient hardware is growing at a rapid pace. In particular, for a variety of edge/IoT devices, deploying these algorithms locally on the hardware platform decreases the exchange of information with the cloud, which significantly reduces networking costs and boosts the overall performance of the devices. Hardware architectures therefore play a critical role in executing AI algorithms.

This course gives an overview of AI from a hardware perspective and discusses state-of-the-art architectures and training mechanisms. In this course, you will learn the concepts behind advanced forms of AI (i.e., Deep Neural Networks) and how they can solve complex problems. You will also learn about the challenges associated with the underlying hardware and how new architectures can address them. Moreover, the course will teach you the concept of brain-inspired computing using in-memory processing hardware platforms.

Technically, this course covers the following topics: 1) Machine inference models (e.g., Support Vector Machines, decision trees, regression analysis, clustering) and machine learning approaches (supervised, unsupervised, semi-supervised and reinforcement learning) together with their associated challenges. 2) Deep Neural Networks (DNNs), classification of DNNs, feed-forward operation, back-propagation and learning mechanisms (such as gradient descent and stochastic learning). 3) Spiking Neural Networks, Spike-Timing-Dependent Plasticity (STDP) training algorithms and existing hardware architectures (such as Intel’s Loihi, SpiNNaker, IBM’s TrueNorth and Neurogrid). 4) Emerging technologies for neuromorphic computing (using memristive devices such as RRAM, MRAM and PCRAM) and non-conventional computing paradigms (such as computing-in-memory, hybrid design approaches and crossbar arrays), illustrated by the sketch below.
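To give a flavor of the computing-in-memory topic listed under 4), the short Python sketch below models an idealized memristive crossbar performing an analog matrix-vector multiplication via Ohm's law and Kirchhoff's current law. It is a minimal illustration only: the array size, conductance range and read voltages are assumed values, and the model ignores non-idealities (wire resistance, sneak paths, device variability) that are treated as hardware challenges in the course.

    import numpy as np

    # Idealized memristive crossbar: each cross-point stores a weight as a
    # conductance G[i, j]. Applying read voltages V on the rows produces
    # column currents I = G^T @ V (Ohm's law per device, Kirchhoff's current
    # law per column). All sizes and values below are illustrative assumptions.
    rng = np.random.default_rng(0)

    rows, cols = 4, 3                              # 4 inputs, 3 outputs (assumed)
    G = rng.uniform(1e-6, 1e-4, size=(rows, cols)) # conductances in siemens
    V = rng.uniform(0.0, 0.2, size=rows)           # read voltages in volts

    # Ideal crossbar output: one weighted sum per column, computed in a
    # single analog step.
    I = G.T @ V
    print("Column currents (A):", I)

Because each column current implements a complete weighted sum of the inputs in one analog step, this structure is attractive for the matrix-vector multiplications that dominate neural-network inference.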

Study Goals

The main aims of the course are to enable students to:

  1. Describe the concepts of AI, inference and training models, as well as their associated challenges from a hardware perspective.
  2. Understand the hardware requirements of AI for data centers as well as edge devices.
  3. Discuss neuromorphic computing and its advantages over conventional computing paradigms.
  4. Describe Artificial Neural Networks and their training algorithms in detail.
  5. Develop an understanding of Spiking Neural Networks and their training mechanisms, and compare them with Artificial Neural Networks based on design metrics such as accuracy, power, speed and size.
  6. Describe state-of-the-art neuromorphic hardware architectures and discuss their design and implementation details.
  7. Study advanced neural-network topics based on the computation-in-memory concept using emerging non-volatile memories.
  8. Become an AI engineer with a good understanding of hardware implementation.

Teachers

R.K. Bishnoi

Last modified: 2023-11-04

Details

Credits: 5 EC
Period: 0/0/0/4