I am a third-year CS Ph.D. student at The University of Hong Kong, fortunate to be co-advised by Lingpeng Kong and Tao Yu. I also work closely with Junxian He. Before that, I received my B.S. in Computer Science from Peking University, where I was advised by Prof. Zhihong Deng.
My research focuses on agents and interactive systems, with a particular interest in advancing science.
Most recent publications on Google Scholar.
* indicates equal contribution.
Retrieved Sequence Augmentation for Protein Representation Learning
Chang Ma, Haiteng Zhao, Lin Zheng, Jiayi Xin, Qintong Li, Lijun Wu, Zhihong Deng, Yang Lu, Qi Liu, Sheng Wang, Lingpeng Kong
EMNLP 2024
BioMaze: Benchmarking and Enhancing Large Language Models for Biological Pathway Reasoning
Haiteng Zhao, Chang Ma, Lingpeng Kong, Zhi-Hong Deng
Preprint
GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning
Haiteng Zhao, Shengchao Liu, Chang Ma, Hannan Xu, Jie Fu, Zhi-Hong Deng, Lingpeng Kong, Qi Liu
NeurIPS 2023
TorchDrug: A Powerful and Flexible Machine Learning Platform for Drug Discovery
Zhaocheng Zhu, Chence Shi, Zuobai Zhang, Shengchao Liu, Minghao Xu, Xinyu Yuan, Yangtian Zhang, Junkun Chen, Huiyu Cai, Jiarui Lu, Chang Ma, Runcheng Liu, Louis-Pascal Xhonneux, Meng Qu, Jian Tang
Preprint
PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding
Minghao Xu, Zuobai Zhang, Jiarui Lu, Zhaocheng Zhu, Yangtian Zhang, Chang Ma, Runcheng Liu, Jian Tang
NeurIPS 2022 Dataset and Benchmark Track
Non-myopic Generation of Language Models for Reasoning and Planning
Chang Ma, Haiteng Zhao, Junlei Zhang, Junxian He, Lingpeng Kong
ICLR 2025
AgentBoard: An Analytical Evaluation Board of Multi-Turn LLM Agents
Chang Ma*, Junlei Zhang*, Zhihao Zhu*, Cheng Yang*, Yujiu Yang, Yaohui Jin, Zhenzhong Lan, Lingpeng Kong, Junxian He
NeurIPS 2024 Dataset and Benchmark Track (Oral)
Genius: A Generalizable and Purely Unsupervised Self-Training Framework For Advanced Reasoning
Fangzhi Xu, Hang Yan, Chang Ma, Haiteng Zhao, Qiushi Sun, Kanzhi Cheng, Junxian He, Jun Liu, Zhiyong Wu
Preprint
φ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation
Fangzhi Xu, Hang Yan, Chang Ma, Haiteng Zhao, Jun Liu, Qika Lin, Zhiyong Wu
Preprint
Breaking the Data Barrier – Building GUI Agents Through Task Generalization
Junlei Zhang, Zichen Ding, Chang Ma, Zijie Chen, Qiushi Sun, Zhenzhong Lan, Junxian He
Preprint
Empowering Large Language Model Agents through Action Learning
Haiteng Zhao, Chang Ma, Guoyin Wang, Jing Su, Lingpeng Kong, Jingjing Xu, Zhi-Hong Deng, Hongxia Yang
COLM 2024
A Survey on Large Language Model-Based Social Agents in Game-Theoretic Scenarios
Xiachong Feng, Longxu Dou, Ella Li, Qinghao Wang, Haochuan Wang, Yu Guo, Chang Ma, Lingpeng Kong
TMLR
A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Chang Ma, Kanzhi Cheng, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Pengcheng Yin, Qipeng Guo, Xipeng Qiu, Xiaoli Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu
Preprint
Switch-GPT: an effective method for constrained text generation under few-shot settings
Chang Ma*, Song Zhang*, Gehui Shen, Zhihong Deng
AAAI Student Abstract 2021
I am a hiking enthusiast, and my favorite trail is Section 2 of the MacLehose Trail in Hong Kong. I played piano for eight years as a child and have just started getting back into it. I am a keen fan of Bach and am currently learning the Goldberg-Variationen. I love two kinds of elegance: beautiful music and elegant proofs.