I am a Ph.D. student in Computer Science at UC San Diego, advised by Professor Jingbo Shang. My research focuses on pushing LLMs to solve hard problems in code, such as competitive programming and open-ended optimization. I work on benchmarking, agents, and using LLMs to generate problems and data for training better models. I placed 22nd at the ICPC World Finals and am a Codeforces International Grandmaster. This summer, I will be joining Jump Trading as a Quantitative Researcher Intern.

Publications

FrontierCS: Evolving Challenges for Evolving Intelligence
Qiuyang Mang*, Wenhao Chai*, Zhifei Li*, Huanzhi Mao*, Shang Zhou*, et al.
Under review at ICML 2026

Terminal-Bench: Benchmarking Agents on Hard, Realistic Tasks in Command Line Interfaces
Mike A Merrill*, Alexander Glenn Shaw*, Nicholas Carlini, et al. (incl. Shang Zhou)
ICLR 2026

AutoCode: LLMs as Problem Setters for Competitive Programming
Shang Zhou*, Zihan Zheng*, Kaiyuan Liu*, Zeyu Shen*, Zerui Cheng*, et al.
ICLR 2026

LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming?
Zihan Zheng*, Zerui Cheng*, Zeyu Shen*, Shang Zhou*, Kaiyuan Liu*, Hansen He*, et al.
NeurIPS 2025 · MIT Technology Review

Scaling LLM Inference with Optimized Sample Compute Allocation
Kexun Zhang*, Shang Zhou*, Danqing Wang, William Yang Wang, Lei Li
NAACL 2025

Evaluating the Smooth Control of Attribute Intensity in Text Generation with LLMs
Shang Zhou*, Feng Yao*, Chengyu Dong, Zihan Wang, Jingbo Shang
Findings of ACL 2024