
Hi! I’m Yanhong Li, a final-year undergraduate at the University of Chicago, broadly interested in natural language processing. My research mainly centers on using retrieval to improve language modeling, and I’m currently exploring hybrid models. I always love to chat about research (yanhongli@uchicago.edu)!
This summer, I’m a visiting student at MIT CSAIL, supervised by Prof. Yoon Kim and mentored by the wonderful Songlin Yang (I’m trying to distill from her!). Starting this September, I’ll be joining the Allen Institute for AI (Ai2) as a pre-doctoral researcher.

I am extremely grateful to work with Prof. David McAllester (TTIC), Prof. Karen Livescu (TTIC), Prof. Michael Maire (UChicago), and Prof. Jiawei Zhou (Stony Brook). In the past, I’ve worked with Prof. Allyson Ettinger (Ai2). I could not appreciate their invaluable insights and support more!

A huge thank you to my early mentors: David Yunis @TTIC, Chenghao Yang @UChicago, and Marcelo Sandoval-Castañeda @TTIC. They taught me so much when I knew nothing about research. Check out their interesting work!
Selected Papers
(* equal contribution)

Yanhong Li, Karen Livescu, Jiawei Zhou. Chunk-Distilled Language Modeling. ICLR 2025.

Yanhong Li, David Yunis, David McAllester, Jiawei Zhou. Context-Efficient Retrieval with Factual Decomposition. NAACL 2025 Main Conference.

Ming Li, Yanhong Li, Tianyi Zhou. What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective. ACL 2025 Main Conference.

Yanhong Li*, Chenghao Yang*, Allyson Ettinger. When Hindsight is Not 20/20: Testing Limits on Reflective Thinking in Large Language Models. NAACL 2024 Findings.

For the full list, please see my Google Scholar page.