I am a third-year master’s student in the School of Computer Science and Technology at Huazhong University of Science and Technology (HUST), advised by Prof. Yao Wan. I am currently seeking a PhD position in AI/SE for Spring/Fall 2026. For an overview of my academic background, please refer to my CV. If you are interested, please feel free to contact me: zychu418@gmail.com | chuzhaoyang@hust.edu.cn.

My research interests lie at the intersection of artificial intelligence and software engineering. I am particularly interested in:

  • Code Intelligence, e.g., code generation, code search, and vulnerability detection.
  • Trustworthy Artificial Intelligence, e.g., explainability, privacy, and robustness.

🔥 News

  • 2025.03:  🎉 Our SANER 2025 paper has been awarded the IEEE TCSE Distinguished Paper Award🏆!
  • 2025.01:  🎉 Our research on benchmarking LLMs for test case generation has been accepted to NAACL 2025 Findings.
  • 2024.12:  🎉 Our research on selecting pre-trained code models for reuse has been accepted to SANER 2025.
  • 2024.03:  🎉 Our research on counterfactual reasoning for GNN-based vulnerability detection has been accepted to ISSTA 2024.
  • 2022.09:  🎉 One paper has been published in Information Sciences.

📝 Publications

* indicates equal contribution. † indicates the corresponding author.

ISSTA 2024

Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation.
Zhaoyang Chu, Yao Wan†, Qian Li, Yang Wu, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin.
ISSTA 2024. The ACM SIGSOFT International Symposium on Software Testing and Analysis. CORE A.
[ Paper ] [ Code ] [ arXiv ] [ Link ]

SANER 2025

How to Select Pre-Trained Code Models for Reuse? A Learning Perspective.
Zhangqian Bi, Yao Wan†, Zhaoyang Chu, Yufei Hu, Junyi Zhang, Hongyu Zhang, Guandong Xu, Hai Jin.
SANER 2025. The IEEE International Conference on Software Analysis, Evolution and Reengineering. CORE A.
IEEE TCSE Distinguished Paper Award🏆.
[ Paper ] [ Code ] [ arXiv ]

NAACL 2025 Findings

TESTEVAL: Benchmarking Large Language Models for Test Case Generation.
Wenhan Wang*, Chenyuan Yang*, Zhijie Wang*, Yuheng Huang, Zhaoyang Chu, Da Song, Lingming Zhang, An Ran Chen, Lei Ma.
NAACL 2025 Findings. The Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics. CORE A.
[ Paper ] [ Benchmark ] [ Code ] [ arXiv ]

Preprint

Can Large Language Models Serve as Evaluators for Code Summarization?
Yang Wu, Yao Wan†, Zhaoyang Chu, Wenting Zhao, Ye Liu, Hongyu Zhang, Xuanhua Shi, Philip S. Yu.
Preprint.
[ Paper ] [ Code ] [ arXiv ]

Preprint

CODESYNC: Synchronizing Large Language Models with Dynamic Code Evolution at Scale.
Chenlong Wang*, Zhaoyang Chu*, Zhengxiang Cheng*, Xuyi Yang, Kaiyue Qiu, Yao Wan†, Zhou Zhao, Xuanhua Shi, Dongping Chen.
Preprint.
[ Paper ] [ Code ] [ arXiv ]

Information Sciences

Hierarchical Graph Representation Learning for the Prediction of Drug-Target Binding Affinity.
Zhaoyang Chu*, Feng Huang*, Haitao Fu, Yuan Quan, Xionghui Zhou, Shichao Liu, Wen Zhang†.
Information Sciences, 2022. Impact Factor 8.1, CORE A.
[ Paper ] [ Code ] [ Link ]

🎖 Honors and Awards

  • 2024-2025, National Scholarship for Postgraduates.
  • 2022-2023, Merit Postgraduate, First-class Scholarship for Postgraduates.
  • 2021-2022, Undergraduate Thesis Innovation Award, Outstanding Undergraduate.
  • 2019-2020, National Scholarship for Undergraduates, Merit Student.

📖 Education

  • 2022.09 - 2025.06 (expected), Master’s, Huazhong University of Science and Technology.
  • 2018.09 - 2022.06, Bachelor’s, Huazhong Agricultural University (graduated with honors).