Tutorial Sessions

[Tutorial #1]

April 22, 2024, 09:30 – 12:50, Room: B00055

ANN-assisted Design of Analog and Mixed-Signal Circuits and Systems

Prof. José M. de la Rosa,

Institute of Microelectronics of Seville, SPAIN

Biography

José M. de la Rosa (Fellow, IEEE) received the M.S. degree in Physics in 1993 and the Ph.D. degree in Microelectronics in 2000, both from the University of Seville, Spain. Since 1993 he has been working at the Institute of Microelectronics of Seville (IMSE), which is in turn part of the Spanish Microelectronics Center (CNM) of the Spanish Council of Scientific Research (CSIC). He conducts his research at IMSE, where he served as Vice-Director from February 2018 to March 2023, and he is also a Full Professor at the Department of Electronics and Electromagnetism of the University of Seville. Since April 2023, he has been the Director of the Office of International Projects of the University of Seville. His main research interests are in the field of analog and mixed-signal integrated circuits, especially high-performance (sigma-delta) data converters, including the analysis, behavioral modeling, design and design automation of such circuits. Dr. de la Rosa is the Editor-in-Chief of IEEE TCAS-I. He served as a Distinguished Lecturer of IEEE-CASS (2017-2018 term) and as Editor-in-Chief (EiC) of IEEE TCAS-II (2020-2021), and he is a Member-at-Large of the IEEE-CASS Board of Governors (BoG) for the 2023-2025 term (more details at: www.imse-cnm.csic.es/~jrosa).

Abstract

This tutorial shows how to use Artificial Neural Networks (ANNs) for the optimization and automated design of analog and mixed-signal circuits. A survey of conventional and computational-intelligence design methods is given as motivation for using ANNs as optimization engines. A step-by-step procedure is described, explaining the key aspects of the approach, such as dataset preparation, ANN modeling, training, and optimization of the network hyperparameters. As an application, two case studies at different hierarchy levels are presented. The first is the system-level sizing of Sigma-Delta Modulators (Σ∆Ms), where ANNs are combined with behavioral simulations to generate valid circuit-level design variables for a given set of specifications. The second example combines ANNs with electrical simulators to optimize the circuit-level design of operational transconductance amplifiers. The methodology is presented in a didactic way, and the contents are organized so that attendees learn the fundamentals and practical considerations behind the use of ANNs for the automated design of analog circuits. No prerequisites are needed, and the tutorial is addressed to a general audience attending AICAS.
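As a concrete, self-contained illustration of this flow, the sketch below trains an ANN regressor that maps target specifications to circuit-level design variables and tunes its hyperparameters with a small grid search. The stand-in behavioral model, the choice of design variables and specifications, and the use of scikit-learn are assumptions for illustration only, not the tutorial's material:

```python
# Minimal sketch (illustrative only, not the tutorial's code): an ANN used as a
# regression model that maps target specifications to circuit-level design
# variables, with a grid search over the network hyperparameters.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def behavioral_model(design):
    # Placeholder: returns fake "specifications" for a vector of design
    # variables; a real flow would call a behavioral or electrical simulator.
    gm, cs, ibias = design.T
    sndr = 20 * np.log10(gm * cs / (ibias + 1e-3))   # hypothetical SNDR (dB)
    power = 1.2 * ibias + 0.1 * gm                   # hypothetical power (mW)
    return np.column_stack([sndr, power])

# 1) Dataset preparation: sample design variables, simulate their specs,
#    then learn the inverse mapping specs -> design variables.
designs = rng.uniform([0.1, 0.5, 0.1], [10.0, 5.0, 2.0], size=(2000, 3))
specs = behavioral_model(designs)
X_tr, X_te, y_tr, y_te = train_test_split(specs, designs, test_size=0.2,
                                          random_state=0)

# 2) ANN modeling, training, and hyperparameter optimization.
pipe = make_pipeline(StandardScaler(),
                     MLPRegressor(max_iter=3000, random_state=0))
grid = GridSearchCV(pipe,
                    {"mlpregressor__hidden_layer_sizes": [(32, 32), (64, 64)],
                     "mlpregressor__alpha": [1e-4, 1e-3]},
                    cv=3)
grid.fit(X_tr, y_tr)
print("held-out R^2:", grid.score(X_te, y_te))

# 3) Inference: propose design variables for a target specification.
target_spec = np.array([[60.0, 1.0]])   # hypothetical SNDR (dB), power (mW)
print("suggested design variables:", grid.predict(target_spec))
```

In the flow described above, the placeholder model would be replaced by Σ∆M behavioral simulations or by electrical simulations of the operational transconductance amplifier.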

[Tutorial #2]

April 22, 2024, 09:30 – 12:50, Room: B00107

Creating Flexible and Efficient Edge AI Systems utilizing RISC-V Paradigm and NoC Communication

Prof. Sri Parameswaran, University of Sydney,

New South Wales, Australia

Biography

Sri Parameswaran is a Professor and Head of the School of Electrical and Information Engineering at the University of Sydney, Sydney, Australia. Prior to this, he was with the School of Computer Science and Engineering at the University of New South Wales, where he was Acting Head of School from 2019 to 2020. He has also served as the Program Director for Computer Engineering. His research interests are in system-level synthesis, low-power systems, high-level systems, and networks-on-chip. He also served as the Editor-in-Chief of Embedded Systems Letters. He has served on the Program Committees of the Design Automation Conference (DAC), Design, Automation and Test in Europe (DATE), the International Conference on Computer-Aided Design (ICCAD), the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), and the International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES). He is a Fellow of the IEEE. Sri Parameswaran received his B.Eng. degree from Monash University and his Ph.D. from The University of Queensland.

Abstract

This tutorial provides an in-depth exploration of cutting-edge technologies shaping the future of electronic products. As the demand for Edge-AI platforms intensifies, the tutorial focuses on the key elements critical to their success: low cost, high performance, energy efficiency, and user-friendly programmability.

The first part of the tutorial delves into the creation of a flexible Edge-AI system grounded in the RISC-V processing paradigm. Attendees will gain insights into designing systems that accommodate both training and inference tasks, emphasizing a small footprint and energy efficiency. Special attention is devoted to the integration of AI accelerators with limited power consumption, supporting standard fixed-point representations as well as compact floating-point formats such as Bfloat16. Open-access tools, compilers, and debuggers tailored for the RISC-V architecture will be explored, ensuring accessibility for users at all levels of expertise.

The second part addresses the increasingly critical role of communication architecture in the context of advancing chip technologies. RISC-V-based Edge AI platforms featuring multiple processors and memories will be introduced. The tutorial highlights the seamless integration achieved through a Network-on-Chip (NoC), exploring various methods of connecting systems and discussing the advantages and disadvantages of each approach. Design and optimization methods for communication among processors will be covered, employing both basic NoC frameworks and advanced optimization techniques using meta-heuristic and exact methods.

The tutorial concludes with a focus on artificial intelligence methods for optimizing NoC designs. Participants will gain practical insights into leveraging AI techniques to enhance NoC performance, with a demonstration using a SystemC-based cycle-accurate simulator showcasing the real-world impact on system communication. Attendees will leave the tutorial equipped with a holistic understanding of flexible Edge AI systems and the expertise to navigate the challenges and opportunities in this dynamic field.
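As a concrete taste of the kind of NoC optimization discussed in the second part, the toy sketch below (illustrative assumptions only, not the tutorial's framework) maps cores onto a 2-D mesh NoC with simulated annealing, minimizing the bandwidth-weighted hop count of a randomly generated traffic pattern:

```python
# Minimal sketch: core-to-tile mapping on a 3x3 mesh NoC via simulated
# annealing.  The traffic matrix, mesh size and cooling schedule are toy
# assumptions for illustration.
import math
import random

MESH = 3                                    # 3x3 mesh -> 9 tiles
N_CORES = 9
random.seed(0)

# traffic[i][j]: bandwidth (arbitrary units) sent from core i to core j
traffic = [[0] * N_CORES for _ in range(N_CORES)]
for i in range(N_CORES):
    for j in range(N_CORES):
        if i != j and random.random() < 0.3:
            traffic[i][j] = random.randint(1, 10)

def hops(tile_a, tile_b):
    # Manhattan distance between two mesh tiles (XY routing)
    ax, ay = divmod(tile_a, MESH)
    bx, by = divmod(tile_b, MESH)
    return abs(ax - bx) + abs(ay - by)

def cost(mapping):
    # mapping[c] = tile hosting core c; total bandwidth-weighted hop count
    return sum(traffic[i][j] * hops(mapping[i], mapping[j])
               for i in range(N_CORES) for j in range(N_CORES))

current = list(range(N_CORES))              # start from the identity mapping
cur_cost = cost(current)
best, best_cost = current[:], cur_cost
temp = 10.0
while temp > 0.01:                          # cooling loop
    for _ in range(100):
        a, b = random.sample(range(N_CORES), 2)
        current[a], current[b] = current[b], current[a]      # propose a swap
        new_cost = cost(current)
        accept = (new_cost < cur_cost or
                  random.random() < math.exp((cur_cost - new_cost) / temp))
        if accept:
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = current[:], new_cost
        else:
            current[a], current[b] = current[b], current[a]  # reject: undo swap
    temp *= 0.9

print("best core-to-tile mapping:", best, "cost:", best_cost)
```

Exact methods would instead formulate the same mapping problem, for example, as an integer linear program; the tutorial covers both families of techniques.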

Prof. Soumya J,

Birla Institute of Technology and Science, Hyderabad, India

Biography

Soumya J is an Associate Professor at the Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science (BITS)-Pilani, Hyderabad Campus, India. She is an early-career researcher in the area of on-chip network design, optimization, and verification. She was a visiting scholar at TU Wien in 2018 and visiting faculty at UNSW in 2019, and has been a visiting scholar at the University of Agder since 2017. She worked as a faculty member at the National Institute of Technology Goa, India, before joining BITS, and also worked as a Scientist ‘SC’ at the Indian Space Research Organisation. She completed her Master's and Ph.D. at IIT Kharagpur in the area of embedded systems. She received the Early Career Research Award in 2017. She has served on the Program Committees of the VLSI Design Conference, the International Conference on Smart Electronic Systems, the VLSI Design and Test Conference, and ASP-DAC.

[Tutorial #3]

April 22, 2024, 13:30 – 16:50, Room: B00055

Accelerating AI on Heterogeneous Computing Platforms Using OpenCL

Prof. Ibrahim (Abe) M. Elfadel,

Khalifa University, Abu Dhabi, UAE

Biography

Dr. Ibrahim (Abe) M. Elfadel is Professor of Computer and Communication Engineering at Khalifa University, Abu Dhabi, UAE. Prior to joining academia in 2011, Dr. Elfadel had a 15-year R&D career with IBM, Yorktown Heights, NY, as a Research Staff Member and Senior Scientist involved in the research, development, and deployment of software tools and methodologies for the design of IBM's high-end microprocessors. Dr. Elfadel is the recipient of six Invention Achievement Awards, one Outstanding Technical Achievement Award and one Research Division Award, all from IBM; the D. O. Pederson Best Paper Award from the IEEE Transactions on CAD (2014); the Board of Directors Recognition Award from the Semiconductor Research Corporation for "Pioneering Semiconductor Research in Abu Dhabi" (2019); and the Service Award from the International Federation for Information Processing (IFIP) for his "Outstanding Contributions to IFIP and the Informatics Community" (2022). Dr. Elfadel has served on the Technical Program Committees of IEEE flagship conferences, including ISCAS, BioCAS, AICAS, DAC, ICCAD, ASP-DAC, DATE, ISVLSI, ICCD, ICECS, MWSCAS, and VLSI-SoC. He was the Technical Program Co-Chair of AICAS (Hangzhou, China, June 2023) and is the Technical Program Co-Chair of BioCAS (Xi’an, China, October 2024). Dr. Elfadel received his Ph.D. from MIT in 1993.

Abstract

This hands-on tutorial introduces CAS professionals and graduate students to industry-wide frameworks and best practices for accelerating AI workloads on heterogeneous computing platforms. Such platforms may contain multi-core CPUs, multiple GPUs, multiple TPUs, and FPGAs. The main goal is to familiarize attendees with the best approaches for designing task-parallel and data-parallel AI accelerators that are hardware-aware yet vendor-independent. The vehicle used in the tutorial is the Open Computing Language (OpenCL), which enjoys the support and sponsorship of the Khronos Group, an industry consortium whose membership includes Samsung, Qualcomm, Nvidia, AMD, Arm, Intel, Google, and Apple. The tutorial will provide attendees with an in-depth understanding of the structure, syntax, and implementation of OpenCL. Topics covered include the platform-device-context model, the memory model, the command queue and execution model, the kernel model, and kernel programming. Basic OpenCL multiprocessing terminology such as work item, work group, and NDRange, as well as memory objects such as buffer, image, and pipe, will be fully explained. Hands-on examples drawn from AI and computer vision will be covered. Time permitting, OpenCL extensions for AI acceleration on mobile platforms will be covered using Qualcomm’s SDK for the Snapdragon chipset and its Adreno GPU.
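For readers who want a first taste of these concepts before the session, the minimal sketch below (a vector addition written with the pyopencl Python bindings, which are an assumption here rather than the tutorial's stated vehicle) touches the context, command queue, buffer, kernel, and NDRange notions listed above:

```python
# Minimal sketch: data-parallel vector addition with pyopencl, illustrating
# the platform/device/context model, command queue, buffers, a kernel, and an
# NDRange launch.  Requires an installed OpenCL runtime.
import numpy as np
import pyopencl as cl

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()           # pick a platform/device, build a context
queue = cl.CommandQueue(ctx)             # command queue for the chosen device

mf = cl.mem_flags                        # memory objects (buffers)
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);          // one work item per element
    out[gid] = a[gid] + b[gid];
}
"""
prg = cl.Program(ctx, kernel_src).build()

# NDRange launch: global size = number of elements, local size left to runtime
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)     # read results back to the host
queue.finish()
print("max error:", np.max(np.abs(out - (a + b))))
```

If the session works with the C host API instead, each pyopencl call above has a direct counterpart in the clCreateContext, clCreateCommandQueue, clCreateBuffer, and clEnqueueNDRangeKernel sequence.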

[Tutorial #4]

April 22, 2024, 13:30 – 16:50, Room: B00107

Optimizing Neural Networks for In-Memory Computing: A Deep Dive into Hardware-Aware Neural Architecture Search

Khaled Nabil Salama

King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Biography

Khaled Nabil Salama (Senior Member, IEEE) received the B.S. degree from the Department of Electronics and Communications, Cairo University, Cairo, Egypt, in 1997, and the M.S. and Ph.D. degrees from the Department of Electrical Engineering, Stanford University, Stanford, CA, USA, in 2000 and 2005, respectively. From 2005 to 2009, he was an Assistant Professor with the Rensselaer Polytechnic Institute, Troy, NY, USA. He joined the King Abdullah University of Science and Technology, Thuwal, Saudi Arabia, in 2009, where he is currently a Professor and was the Founding Program Chair until August 2011. He has authored 350 articles and holds 30 U.S. patents on low-power mixed-signal circuits for intelligent fully integrated sensors and neuromorphic circuits using memristor devices. He was the Director of the Sensors Initiative, a consortium of nine universities, and the Associate Dean of the Division of Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology.

Olga Krestinskaya

King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Biography

Olga Krestinskaya (Graduate Student Member, IEEE) is currently working toward the Ph.D. degree with the King Abdullah University of Science and Technology, Thuwal, Saudi Arabia. She has authored several high-quality works in the area of memristor-based neural network designs for edge in-memory computing applications. She was the recipient of the 2019 IEEE CASS Pre-Doctoral Award. She is also an active reviewer for the IEEE Transactions on Circuits and Systems, the IEEE Transactions on Nanotechnology, and the IEEE Transactions on Very Large Scale Integration (VLSI) Systems. She is an active member of the IEEE Circuits and Systems Society.

Abstract

The rapid advancement of Artificial Intelligence (AI) and the escalating complexity of neural network models necessitate efficient hardware architectures for power- and resource-constrained deployments. In-Memory Computing (IMC) has emerged as a vital technology in this domain, undergoing significant development in devices, circuits, and architectures. However, the complexity inherent in designing, implementing, and deploying these architectures demands a well-coordinated hardware-software co-design toolchain. This toolchain is essential for facilitating IMC-aware optimizations throughout the stack, including devices, circuits, chips, compilers, software, and neural network design.

Given the intricate and vast design space, manual optimizations become impractical and challenging. Hardware-Aware Neural Architecture Search (HW-NAS) has emerged as a promising solution to expedite the creation of streamlined neural networks optimized for efficient deployment on IMC hardware. This tutorial will present an in-depth and comprehensive review of HW-NAS techniques, particularly emphasizing IMC architectures. It will cover the application of HW-NAS to IMC hardware’s specific features and compare existing optimization frameworks. Additionally, the tutorial will highlight ongoing research areas and identify unresolved issues in the field. The session will conclude by proposing a future roadmap for the evolution of HW-NAS in IMC architectures, providing insights into the next steps and potential developments in this cutting-edge field.
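To give a flavor of such a search, the sketch below runs a random search over small CNN configurations, scoring each candidate with a placeholder accuracy proxy penalized by a rough estimate of the IMC crossbar tiles its weights would occupy. The search space, cost weights, and 128x128 crossbar size are illustrative assumptions, not taken from any framework reviewed in the tutorial:

```python
# Minimal sketch of a hardware-aware architecture search: random search over a
# tiny CNN configuration space with a toy accuracy proxy and a rough IMC
# crossbar-tile cost model.  All numbers are illustrative assumptions.
import math
import random

random.seed(0)
XBAR = 128                    # assumed crossbar size: 128 x 128 cells
SEARCH_SPACE = {
    "channels": [8, 16, 32, 64],
    "layers": [2, 3, 4],
    "kernel": [3, 5],
}

def crossbar_tiles(cfg):
    # Rough IMC cost: map each layer's (k*k*Cin) x Cout weight matrix onto
    # XBAR x XBAR crossbars and count the tiles required.
    tiles, cin = 0, 3                         # assume an RGB input
    for _ in range(cfg["layers"]):
        rows = cfg["kernel"] ** 2 * cin
        cols = cfg["channels"]
        tiles += math.ceil(rows / XBAR) * math.ceil(cols / XBAR)
        cin = cfg["channels"]
    return tiles

def accuracy_proxy(cfg):
    # Placeholder for real training/validation: bigger models score higher,
    # with diminishing returns.
    return 1.0 - math.exp(-0.02 * cfg["channels"] * cfg["layers"])

def sample():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

best, best_score = None, -float("inf")
for _ in range(200):
    cfg = sample()
    score = accuracy_proxy(cfg) - 0.01 * crossbar_tiles(cfg)  # HW-aware objective
    if score > best_score:
        best, best_score = cfg, score

print("selected configuration:", best, "score:", round(best_score, 3))
```

Practical HW-NAS flows replace the proxy with trained-accuracy estimates or weight-sharing supernets, and replace the tile count with calibrated latency, energy, and area models of the target IMC hardware.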