Research

Machine Learning for EDA

As integrated circuit technology nodes rapidly scale down to 5 nm and beyond, electronic design automation (EDA) for Very Large Scale Integration (VLSI), developed over the last few decades, is challenged by ever-increasing design complexity. Covering a wide range of the EDA flow, including design and verification, we have proposed machine learning-enabled EDA techniques for applications at different levels of the flow.

  • Jindong Tu, Yapeng Li, Pengjia Li, Peng Xu, Qianru Zhang, Sanping Wan, Yongsheng Sun, Bei Yu, Tinghuan Chen, "SMART: Graph Learning-Boosted Subcircuit Matching for Large-scale Analog Circuits", accepted by IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD).

  • Tinghuan Chen, Hao Geng, Qi Sun, Sanping Wan, Yongsheng Sun, Huatao Yu, Bei Yu, "Wages: The Worst Transistor Aging Analysis for Large-scale Analog Integrated Circuits via Domain Generalization", ACM Transactions on Design Automation of Electronic Systems (TODAES), vol. 29, no. 05, pp. 73:1–73:23, 2024.

  • Tinghuan Chen, Silu Xiong, Huan He, Bei Yu, "TRouter: Thermal-driven PCB Routing via Non-Local Crisscross Attention Networks", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 42, no. 10, pp. 3388-3401, 2023.

  • Tinghuan Chen, Grace Li Zhang, Bei Yu, Bing Li, Ulf Schlichtmann, "Machine Learning in Advanced IC Design: A Methodological Survey", IEEE Design & Test, vol. 40, no. 01, pp. 17–33, 2023. (Invited Paper)

  • Tinghuan Chen, Qi Sun, Canhui Zhan, Changze Liu, Huatao Yu, Bei Yu, "Deep H-GCN: Fast Analog IC Aging-induced Degradation Estimation", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 41, no. 7, pp. 1990-2003, 2022.

Deep Learning Accelerators

Deep learning has achieved significant success in a variety of real-world applications. However, most existing deep learning models are still designed manually, and achieving automatic and efficient model design remains an open problem. To address this problem, we have proposed hardware-friendly deep learning models and optimization techniques for deployment on edge devices.

  • Baohui Xie, Xinrui Zhu, Yuan Pu, Tongkai Wu, Zhiyuan Lu, Xiaofeng Zou, Bei Yu, Tinghuan Chen, "DSPlacer: DSP Placement for FPGA-based CNN Accelerator", ACM/IEEE Design Automation Conference (DAC), San Francisco, June 22-25, 2025.

  • Shangran Lin, Xinrui Zhu, Baohui Xie, Tinghuan Chen, Cheng Zhuo, Qi Sun, Bei Yu, "RISCSparse: Point Cloud Inference Engine on RISC-V Processor", IEEE/ACM International Conference on Computer-Aided Design (ICCAD), New Jersey, Oct. 27–31, 2024.

  • Mingjun Li, Pengjia Li, Shuo Yin, Shixin Chen, Beichen Li, Chong Tong, Jianlei Yang, Tinghuan Chen, Bei Yu, "WinoGen: A Highly Configurable Winograd Convolution IP Generator for Efficient CNN Acceleration on FPGA", ACM/IEEE Design Automation Conference (DAC), San Francisco, Jun. 23–27, 2024.

  • Tinghuan Chen, Bin Duan, Qi Sun, Meng Zhang, Guoqing Li, Hao Geng, Qianru Zhang, Bei Yu, "An Efficient Sharing Grouped Convolution via Bayesian Learning", IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 33, no. 12, pp. 7367-7379, 2022.

  • Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, Bei Yu, "Recent advances in convolutional neural network acceleration", Neurocomputing, vol. 323, pp. 37-51, 2019.

Acknowledgement

Our research is supported by the National Key Research and Development Program of China (No. 2023YFB4402900), the National Natural Science Foundation of China (No. 62304197), HiSilicon, Pangomicro, and Index.