Research

Machine Learning for EDA

As integrated circuit technology nodes rapidly scale down to 5nm and beyond, electronic design automation (EDA) for Very Large Scale Integration (VLSI), developed over the last few decades, is challenged by ever-increasing design complexity. Covering a wide range of the EDA flow, including design and verification, our major contribution is proposing machine learning-enabled EDA techniques for applications at different levels of the flow.

  • Tinghuan Chen, Hao Geng, Qi Sun, Sanping Wan, Yongsheng Sun, Huatao Yu, Bei Yu, "Wages: The Worst Transistor Aging Analysis for Large-scale Analog Integrated Circuits via Domain Generalization", accepted by ACM Transactions on Design Automation of Electronic Systems (TODAES).

  • Yuyang Ye, Tinghuan Chen, Yifei Gao, Hao Yan, Bei Yu, Longxing Shi, "Aging-aware Critical Path Selection via Graph Attention Networks", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 42, no. 12, pp. 5006-5019, 2023.

  • Tinghuan Chen, Silu Xiong, Huan He, Bei Yu, "TRouter: Thermal-driven PCB Routing via Non-Local Crisscross Attention Networks", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 42, no. 10, pp. 3388-3401, 2023.

  • Tinghuan Chen, Grace Li Zhang, Bei Yu, Bing Li, Ulf Schlichtmann, "Machine Learning in Advanced IC Design: A Methodological Survey", IEEE Design & Test, vol. 40, no. 1, pp. 17-33, 2023. (Invited Paper)

  • Tinghuan Chen, Qi Sun, Canhui Zhan, Changze Liu, Huatao Yu, Bei Yu, "Deep H-GCN: Fast Analog IC Aging-induced Degradation Estimation", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 41, no. 7, pp. 1990-2003, 2022.

Deep Learning Accelerators

Deep learning has achieved significant success in a variety of real-world applications. However, most existing deep learning models are still designed manually, and achieving automatic, efficient model design remains an open problem. To address this problem, our major contribution is proposing hardware-friendly deep learning models and optimization techniques for deployment on edge devices.

  • Mingjun Li, Pengjia Li, Shuo Yin, Shixin Chen, Beichen Li, Chong Tong, Jianlei Yang, Tinghuan Chen, Bei Yu, "WinoGen: A Highly Configurable Winograd Convolution IP Generator for Efficient CNN Acceleration on FPGA", ACM/IEEE Design Automation Conference (DAC), San Francisco, Jun. 23–27, 2024.

  • Tinghuan Chen, Bin Duan, Qi Sun, Meng Zhang, Guoqing Li, Hao Geng, Qianru Zhang, Bei Yu, "An Efficient Sharing Grouped Convolution via Bayesian Learning", IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 33, no. 12, pp. 7367-7379, 2022.

  • Qi Sun, Chen Bai, Tinghuan Chen, Hao Geng, Xinyun Zhang, Yang Bai, Bei Yu, "Fast and Efficient DNN Deployment via Deep Gaussian Transfer Learning", IEEE International Conference on Computer Vision (ICCV), Oct. 11–17, 2021.

  • Qi Sun, Tinghuan Chen, Siting Liu, Jin Miao, Jianli Chen, Hao Yu, Bei Yu, "Correlated Multi-objective Multi-fidelity Optimization for HLS Directives Design", IEEE/ACM Proceedings Design, Automation and Test in Europe (DATE), Grenoble, Feb. 1–5, 2021. (Best Paper Award Nomination)

  • Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, Bei Yu, "Recent advances in convolutional neural network acceleration", Neurocomputing, vol. 323, pp. 37-51, 2019.

Acknowledgement

Our research is supported by the National Key Research and Development Program of China (No. 2023YFB4402900), the National Natural Science Foundation of China (No. 62304197), HiSilicon, and Pangomicro.