Date: Tuesday, May 12, 2026
Time: 2:30 p.m.
Location: Van Leer Building, Room 218

Abstract: The size and complexity of recent deep learning models continue to grow exponentially, imposing significant hardware overhead for training and deploying those models. As a result, cross-layer optimizations have become critical to maximizing acceleration throughput and power efficiency. In this talk, I will discuss how a circuit designer can contribute to the broader AI community by building more powerful and efficient hardware acceleration solutions. Based on our recent work on both hardware and algorithms, the talk will show how each level of the design hierarchy can make a difference in end-to-end performance when accelerating AI models.

Bio: Dongsuk Jeon received a B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 2009 and a Ph.D. degree in electrical engineering from the University of Michigan, Ann Arbor, MI, USA, in 2014. From 2014 to 2015, he was a Postdoctoral Associate with the Massachusetts Institute of Technology, Cambridge, MA, USA. He is currently a Professor with the Graduate School of Convergence Science and Technology, Seoul National University. His current research interests include hardware-oriented machine learning algorithms, hardware accelerators, and low-power circuits.

Dr. Jeon has served or is currently serving on the Technical Program Committees of the IEEE International Solid-State Circuits Conference (ISSCC), the ACM/IEEE Design Automation Conference (DAC), the IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC), and the IEEE Asian Solid-State Circuits Conference (A-SSCC). He was a Distinguished Lecturer of the IEEE Solid-State Circuits Society in 2023-2024 and is currently serving as an Associate Editor of the IEEE Transactions on Very Large Scale Integration (VLSI) Systems.