├── LICENSE.md
├── README.md
└── assets
└── cot.svg
/LICENSE.md:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 Armando Fortes
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
9 |
10 |
11 |
12 |
13 |
14 | # Awesome LLM Reasoning
15 |
16 |
17 | Curated collection of papers and resources on how to unlock the reasoning ability of LLMs and MLLMs.
18 |
19 |
20 |
21 | ## 🗂️ Table of Contents
22 |
23 | - Survey
24 | - Analysis
25 | - Technique
26 |
31 |
32 | - Other Useful Resources
33 | - Other Awesome Lists
34 | - Contributing
35 |
36 |
37 |
38 | If you would like to test the symbolic reasoning ability of LLMs, take a look at: LLMSymbolicReasoningBench 😄
39 |
40 |
41 |
42 | ## Survey
43 |
44 | ### 2025
45 |
46 | 1. **[Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey.](https://arxiv.org/abs/2503.12605)** [[code](https://github.com/yaotingwangofficial/Awesome-MCoT)]
47 |
48 | *Yaoting Wang, Shengqiong Wu, Yuecheng Zhang, William Wang, Ziwei Liu, Jiebo Luo, Hao Fei.* Preprint'25
49 |
50 | 2. **[Recent Advances in Large Language Model Benchmarks Against Data Contamination: From Static to Dynamic Evaluation.](https://arxiv.org/abs/2502.17521)** [[code](https://github.com/SeekingDream/Static-to-Dynamic-LLMEval)]
51 |
52 | *Simin Chen, Yiming Chen, Zexin Li, Yifan Jiang, Zhongwei Wan, Yixin He, Dezhi Ran, Tianle Gu, Haizhou Li, Tao Xie, Baishakhi Ray.* Preprint'25
53 |
54 | ### 2024
55 |
56 | 1. **[Attention Heads of Large Language Models: A Survey.](https://arxiv.org/abs/2409.03752)** [[code](https://github.com/IAAR-Shanghai/Awesome-Attention-Heads)]
57 |
58 | *Zifan Zheng, Yezhaohui Wang, Yuxin Huang, Shichao Song, Bo Tang, Feiyu Xiong, Zhiyu Li.* Preprint'24
59 |
60 | 1. **[Internal Consistency and Self-Feedback in Large Language Models: A Survey.](https://arxiv.org/abs/2407.14507)** [[code](https://github.com/IAAR-Shanghai/ICSFSurvey)]
61 |
62 | *Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong, Zhiyu Li.* Preprint'24
63 |
64 | 1. **[Puzzle Solving using Reasoning of Large Language Models: A Survey.](https://arxiv.org/abs/2402.11291)** [[project](https://puzzlellms.github.io/)]
65 |
66 | *Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou.* Preprint'24
67 |
68 | 1. **[Large Language Models for Mathematical Reasoning: Progresses and Challenges.](https://arxiv.org/abs/2402.00157)**
69 |
70 | *Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, Wenpeng Yin.* ACL'24
71 |
72 | ### 2022
73 |
74 | 1. **[Towards Reasoning in Large Language Models: A Survey.](https://arxiv.org/abs/2212.10403)** [[code](https://github.com/jeffhj/LM-reasoning)]
75 |
76 | *Jie Huang, Kevin Chen-Chuan Chang.* ACL'23 Findings
77 |
78 | 1. **[Reasoning with Language Model Prompting: A Survey.](https://arxiv.org/abs/2212.09597)** [[code](https://github.com/zjunlp/Prompt4ReasoningPapers)]
79 |
80 | *Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen.* ACL'23
81 |
82 |
83 |
84 | ↑ Back to Top ↑
85 |
86 |
87 |
88 |
89 |
90 | ## Analysis
91 |
92 | ### 2025
93 |
94 | 1. **[New Trends for Modern Machine Translation with Large Reasoning Models.](https://arxiv.org/abs/2503.10351)**
95 |
96 | *Sinuo Liu, Chenyang Lyu, Minghao Wu, Longyue Wang, Weihua Luo, Kaifu Zhang, Zifu Shang.* Preprint'25
97 |
98 | ### 2024
99 |
100 | 1. **[Are Your LLMs Capable of Stable Reasoning?](https://arxiv.org/abs/2412.13147)** [[code](https://github.com/open-compass/GPassK)]
101 |
102 | *Junnan Liu, Hongwei Liu, Linchen Xiao, Ziyi Wang, Kuikun Liu, Songyang Gao, Wenwei Zhang, Songyang Zhang, Kai Chen.* Preprint'24
103 |
104 | 1. **[From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond.](https://arxiv.org/abs/2411.03590)**
105 |
106 | *Harsha Nori, Naoto Usuyama, Nicholas King, Scott Mayer McKinney, Xavier Fernandes, Sheng Zhang, Eric Horvitz.* Preprint'24
107 |
108 | 1. **[To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning.](https://arxiv.org/abs/2409.12183)**
109 |
110 | *Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, Greg Durrett.* Preprint'24
111 |
112 | 1. **[Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers.](https://arxiv.org/abs/2409.04109)**
113 |
114 | *Chenglei Si, Diyi Yang, Tatsunori Hashimoto.* Preprint'24
115 |
116 | 1. **[A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners.](https://arxiv.org/abs/2406.11050)** [[code](https://github.com/bowen-upenn/llm_token_bias)]
117 |
118 | *Bowen Jiang, Yangxinyu Xie, Zhuoqun Hao, Xiaomeng Wang, Tanwi Mallick, Weijie J. Su, Camillo J. Taylor, Dan Roth.* EMNLP'24
119 |
120 | 1. **[Iteration Head: A Mechanistic Study of Chain-of-Thought.](https://arxiv.org/abs/2406.02128)**
121 |
122 | *Vivien Cabannes, Charles Arnal, Wassim Bouaziz, Alice Yang, Francois Charton, Julia Kempe.* NeurIPS'24
123 |
124 | 1. **[Do Large Language Models Latently Perform Multi-Hop Reasoning?](https://arxiv.org/abs/2402.16837)**
125 |
126 | *Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, Sebastian Riedel.* ACL'24
127 |
128 | 1. **[Premise Order Matters in Reasoning with Large Language Models.](https://arxiv.org/abs/2402.08939)**
129 |
130 | *Xinyun Chen, Ryan A. Chi, Xuezhi Wang, Denny Zhou.* ICML'24
131 |
132 | 1. **[The Impact of Reasoning Step Length on Large Language Models.](https://arxiv.org/abs/2401.04925)**
133 |
134 | *Mingyu Jin, Qinkai Yu, Dong Shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, Mengnan Du.* ACL'24 Findings
135 |
136 | 1. **[Large Language Models Cannot Self-Correct Reasoning Yet.](https://arxiv.org/abs/2310.01798)**
137 |
138 | *Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou.* ICLR'24
139 |
140 | 1. **[At Which Training Stage Does Code Data Help LLM Reasoning?](https://arxiv.org/abs/2309.16298)**
141 |
142 | *Yingwei Ma, Yue Liu, Yue Yu, Yuanliang Zhang, Yu Jiang, Changjian Wang, Shanshan Li.* ICLR'24
143 |
144 | ### 2023
145 |
146 | 1. **[Measuring Faithfulness in Chain-of-Thought Reasoning.](https://arxiv.org/abs/2307.13702)**
147 |
148 | *Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez.* Preprint'23
149 |
150 | 1. **[Faith and Fate: Limits of Transformers on Compositionality.](https://arxiv.org/abs/2305.18654)**
151 |
152 | *Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi.* NeurIPS'23
153 |
154 | 1. **[Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting.](https://arxiv.org/abs/2305.04388)** [[code](https://github.com/milesaturpin/cot-unfaithfulness)]
155 |
156 | *Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman.* NeurIPS'23
157 |
158 | 1. **[A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity.](https://arxiv.org/abs/2302.04023)**
159 |
160 | *Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung.* AACL'23
161 |
162 | 1. **[Large Language Models Can Be Easily Distracted by Irrelevant Context.](https://arxiv.org/abs/2302.00093)**
163 |
164 | *Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou.* ICML'23
165 |
166 | 1. **[On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning.](https://arxiv.org/abs/2212.08061)**
167 |
168 | *Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, Diyi Yang.* ACL'23
169 |
170 | 1. **[Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters.](https://arxiv.org/abs/2212.10001)** [[code](https://github.com/sunlab-osu/Understanding-CoT)]
171 |
172 | *Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun.* ACL'23
173 |
174 | 1. **[Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them.](https://arxiv.org/abs/2210.09261)** [[code](https://github.com/suzgunmirac/BIG-Bench-Hard)]
175 |
176 | *Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei.* ACL'23 Findings
177 |
178 | ### 2022
179 |
180 | 1. **[Emergent Abilities of Large Language Models.](https://arxiv.org/abs/2206.07682)** [[blog](https://ai.googleblog.com/2022/11/characterizing-emergent-phenomena-in.html)]
181 |
182 | *Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus.* TMLR'22
183 |
184 | 1. **[Can language models learn from explanations in context?](https://arxiv.org/abs/2204.02329)**
185 |
186 | *Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill.* EMNLP'22
187 |
188 |
189 |
190 | ↑ Back to Top ↑
191 |
192 |
193 |
194 |
195 |
196 | ## Technique
197 |
198 |
199 |
200 | ### 🔤 Reasoning in Large Language Models - An Emergent Ability
201 |
202 | ### 2025
203 |
204 | 1. **[JudgeLRM: Large Reasoning Models as a Judge.](https://arxiv.org/abs/2504.00050)**
205 |
206 | *Nuo Chen, Zhiyuan Hu, Qingyun Zou, Jiaying Wu, Qian Wang, Bryan Hooi, Bingsheng He.* Preprint'25
207 |
208 | 1. **[Dynamic Benchmarking of Reasoning Capabilities in Code Large Language Models Under Data Contamination.](https://arxiv.org/abs/2503.04149)** [[code](https://codekaleidoscope.github.io/dycodeeval.html)]
209 |
210 | *Simin Chen, Pranav Pusarla, Baishakhi Ray.* ICML'25
211 |
212 | 1. **[CRANE: Reasoning with constrained LLM generation.](https://arxiv.org/abs/2502.09061)**
213 |
214 | *Debangshu Banerjee, Tarun Suresh, Shubham Ugare, Sasa Misailovic, Gagandeep Singh.* ICML'25
215 |
216 | 1. **[Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching.](https://arxiv.org/abs/2503.05179)** [[code](https://github.com/SimonAytes/SoT)]
217 |
218 | *Simon A. Aytes, Jinheon Baek, Sung Ju Hwang.* Preprint'25
219 |
220 | 1. **[Self-rewarding correction for mathematical reasoning.](https://arxiv.org/abs/2502.19613)**
221 |
222 | *Wei Xiong, Hanning Zhang, Chenlu Ye, Lichang Chen, Nan Jiang, Tong Zhang.* Preprint'25
223 |
224 | 1. **[Competitive Programming with Large Reasoning Models.](https://arxiv.org/abs/2502.06807)**
225 |
226 | *OpenAI: Ahmed El-Kishky, Alexander Wei, Andre Saraiva, Borys Minaiev, Daniel Selsam, David Dohan, Francis Song, Hunter Lightman, Ignasi Clavera, Jakub Pachocki, Jerry Tworek, Lorenz Kuhn, Lukasz Kaiser, Mark Chen, Max Schwarzer, Mostafa Rohaninejad, Nat McAleese, o3 contributors, Oleg Mürk, Rhythm Garg, Rui Shu, Szymon Sidor, Vineet Kosaraju, Wenda Zhou.* Preprint'25
227 |
228 | 1. **[s1: Simple test-time scaling.](https://arxiv.org/abs/2501.19393)**
229 |
230 | *Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, Tatsunori Hashimoto.* Preprint'25
231 |
232 | 1. **[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.](https://arxiv.org/abs/2501.12948)** [[project](https://github.com/deepseek-ai/DeepSeek-R1)]
233 |
234 | *Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z.F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, et al.* Preprint'25
235 |
236 | 1. **[Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought.](https://arxiv.org/abs/2501.04682)**
237 |
238 | *Violet Xiang, Charlie Snell, Kanishk Gandhi, Alon Albalak, Anikait Singh, Chase Blagden, Duy Phung, Rafael Rafailov, Nathan Lile, Dakota Mahan, Louis Castricato, Jan-Philipp Franken, Nick Haber, Chelsea Finn.* Preprint'25
239 |
240 | ### 2024
241 |
242 | 1. **[HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs.](https://arxiv.org/abs/2412.18925)** [[code](https://github.com/FreedomIntelligence/HuatuoGPT-o1)]
243 |
244 | *Junying Chen, Zhenyang Cai, Ke Ji, Xidong Wang, Wanlong Liu, Rongsheng Wang, Jianye Hou, Benyou Wang.* Preprint'24
245 |
246 | 1. **[PPM: Automated Generation of Diverse Programming Problems for Benchmarking Code Generation Models.](https://dl.acm.org/doi/abs/10.1145/3643780)** [[pdf](https://dl.acm.org/doi/pdf/10.1145/3643780)]
247 |
248 | *Simin Chen, XiaoNing Feng, Xiaohong Han, Cong Liu, Wei Yang.* FSE'24
249 |
250 | 1. **[DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought.](https://arxiv.org/abs/2412.17498)** [[code](https://github.com/krystalan/DRT-o1)]
251 |
252 | *Jiaan Wang, Fandong Meng, Yunlong Liang, Jie Zhou.* Preprint'24
253 |
254 | 1. **[MALT: Improving Reasoning with Multi-Agent LLM Training.](https://arxiv.org/abs/2412.01928)**
255 |
256 | *Sumeet Ramesh Motwani, Chandler Smith, Rocktim Jyoti Das, Markian Rybchuk, Philip H. S. Torr, Ivan Laptev, Fabio Pizzati, Ronald Clark, Christian Schroeder de Witt.* Preprint'24
257 |
258 | 1. **[SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World.](https://arxiv.org/abs/2412.07472)**
259 |
260 | *Jiaqi Zhang, Chen Gao, Liyuan Zhang, Yong Li, Hongzhi Yin.* Preprint'24
261 |
262 | 1. **[Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions.](https://arxiv.org/abs/2411.14405)** [[code](https://github.com/AIDC-AI/Marco-o1)] [[model](https://huggingface.co/AIDC-AI/Marco-o1)]
263 |
264 | *Yu Zhao, Huifeng Yin, Bo Zeng, Hao Wang, Tianqi Shi, Chenyang Lyu, Longyue Wang, Weihua Luo, Kaifu Zhang.* Preprint'24
265 |
266 | 1. **[Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning.](https://arxiv.org/abs/2410.10735)**
267 |
268 | *Kuofeng Gao, Huanqia Cai, Qingyao Shuai, Dihong Gong, Zhifeng Li.* Preprint'24
269 |
270 | 1. **[Deliberate Reasoning for LLMs as Structure-aware Planning with Accurate World Model.](https://arxiv.org/abs/2410.03136)** [[code](https://github.com/xiongsiheng/SWAP)]
271 |
272 | *Siheng Xiong, Ali Payani, Yuan Yang, Faramarz Fekri.* Preprint'24
273 |
274 | 1. **[Interpretable Contrastive Monte Carlo Tree Search Reasoning.](https://arxiv.org/abs/2410.01707)**
275 |
276 | *Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu, Hongzhang Liu, Aiwei Liu, Xuming Hu, Lijie Wen.* Preprint'24
277 |
278 | 1. **[Training Language Models to Self-Correct via Reinforcement Learning.](https://arxiv.org/abs/2409.12917)**
279 |
280 | *Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, JD Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M. Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani, Aleksandra Faust.* Preprint'24
281 |
282 | 1. **[OpenAI o1.](https://openai.com/index/learning-to-reason-with-llms/)**
283 |
284 | *OpenAI Team.* Technical Report'24
285 |
286 | 1. **[Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents.](https://arxiv.org/abs/2408.07199)**
287 |
288 | *Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, Rafael Rafailov.* Preprint'24
289 |
290 | 1. **[DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning.](https://arxiv.org/abs/2407.04078)** [[code](https://github.com/ChengpengLi1003/DotaMath)]
291 |
292 | *Chengpeng Li, Guanting Dong, Mingfeng Xue, Ru Peng, Xiang Wang, Dayiheng Liu.* Preprint'24
293 |
294 | 1. **[LLM-ARC: Enhancing LLMs with an Automated Reasoning Critic.](https://arxiv.org/abs/2406.17663)**
295 |
296 | *Aditya Kalyanpur, Kailash Saravanakumar, Victor Barres, Jennifer Chu-Carroll, David Melville, David Ferrucci.* Preprint'24
297 |
298 | 1. **[Q\*: Improving Multi-step Reasoning for LLMs with Deliberative Planning.](https://arxiv.org/abs/2406.14283)**
299 |
300 | *Chaojie Wang, Yanchen Deng, Zhiyi Lv, Shuicheng Yan, An Bo.* Preprint'24
301 |
302 | 1. **[Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models.](https://arxiv.org/abs/2406.04271)** [[code](https://github.com/YangLing0818/buffer-of-thought-llm)]
303 |
304 | *Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E. Gonzalez, Bin Cui.* Preprint'24
305 |
306 | 1. **[Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing.](https://arxiv.org/abs/2404.12253)**
307 |
308 | *Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu.* Preprint'24
309 |
310 | 1. **[Self-playing Adversarial Language Game Enhances LLM Reasoning.](https://arxiv.org/abs/2404.10642)**
311 |
312 | *Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du.* Preprint'24
313 |
314 | 1. **[Evaluating Mathematical Reasoning Beyond Accuracy.](https://arxiv.org/abs/2404.05692)**
315 |
316 | *Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, Pengfei Liu.* Preprint'24
317 |
318 | 1. **[Advancing LLM Reasoning Generalists with Preference Trees.](https://arxiv.org/abs/2404.02078)**
319 |
320 | *Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, Maosong Sun.* Preprint'24
321 |
322 | 1. **[LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning.](https://arxiv.org/abs/2403.11552)** [[code](https://github.com/AssassinWS/LLM-TAMP)]
323 |
324 | *Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu Zhang, Ying Nian Wu, Song-Chun Zhu, Hangxin Liu.* IROS'24
325 |
326 | 1. **[Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking.](https://arxiv.org/abs/2403.09629)**
327 |
328 | *Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman.* Preprint'24
329 |
330 | 1. **[GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements.](https://arxiv.org/abs/2402.10963)**
331 |
332 | *Alex Havrilla, Sharath Raparthy, Christoforus Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Roberta Raileanu.* ICML'24
333 |
334 | 1. **[Chain-of-Thought Reasoning Without Prompting.](https://arxiv.org/abs/2402.10200)**
335 |
336 | *Xuezhi Wang, Denny Zhou.* Preprint'24
337 |
338 | 1. **[V-STaR: Training Verifiers for Self-Taught Reasoners.](https://arxiv.org/abs/2402.06457)**
339 |
340 | *Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, Rishabh Agarwal.* Preprint'24
341 |
342 | 1. **[InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning.](https://arxiv.org/abs/2402.06332)**
343 |
344 | *Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong, Kuikun Liu, Ziyi Wang, Yudong Wang, Zijian Wu, Shuaibin Li, Fengzhe Zhou, Hongwei Liu, Songyang Zhang, Wenwei Zhang, Hang Yan, Xipeng Qiu, Jiayu Wang, Kai Chen, Dahua Lin.* Preprint'24
345 |
346 | 1. **[Self-Discover: Large Language Models Self-Compose Reasoning Structures.](https://arxiv.org/abs/2402.03620)**
347 |
348 | *Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny Zhou, Swaroop Mishra, Huaixiu Steven Zheng.* Preprint'24
349 |
350 | 1. **[DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models.](https://arxiv.org/abs/2402.03300)**
351 |
352 | *Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y.K. Li, Y. Wu, Daya Guo.* Preprint'24
353 |
354 | 1. **[K-Level Reasoning with Large Language Models.](https://arxiv.org/abs/2402.01521)**
355 |
356 | *Yadong Zhang, Shaoguang Mao, Tao Ge, Xun Wang, Yan Xia, Man Lan, Furu Wei.* Preprint'24
357 |
358 | 1. **[Efficient Tool Use with Chain-of-Abstraction Reasoning.](https://arxiv.org/abs/2401.17464)**
359 |
360 | *Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva, Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, Tianlu Wang.* Preprint'24
361 |
362 | 1. **[Teaching Language Models to Self-Improve through Interactive Demonstrations.](https://arxiv.org/abs/2310.13522)**
363 |
364 | *Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, Zhou Yu.* NAACL'24
365 |
366 | 1. **[Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic.](https://arxiv.org/abs/2309.13339)** [[code](https://github.com/xf-zhao/LoT)]
367 |
368 | *Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, Stefan Wermter.* COLING'24
369 |
370 | 1. **[Chain-of-Verification Reduces Hallucination in Large Language Models.](https://arxiv.org/abs/2309.11495)**
371 |
372 | *Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, Jason Weston.* ACL'24 Findings
373 |
374 | 1. **[Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding.](https://arxiv.org/abs/2307.15337)**
375 |
376 | *Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, Yu Wang.* ICLR'24
377 |
378 | 1. **[Question Decomposition Improves the Faithfulness of Model-Generated Reasoning.](https://arxiv.org/abs/2307.11768)** [[code](https://github.com/anthropics/DecompositionFaithfulnessPaper)]
379 |
380 | *Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkatesa Chandrasekaran, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez.* Preprint'23
381 |
382 | 1. **[Let's Verify Step by Step.](https://arxiv.org/abs/2305.20050)**
383 |
384 | *Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe.* ICLR'24
385 |
386 | 1. **[REFINER: Reasoning Feedback on Intermediate Representations.](https://arxiv.org/abs/2304.01904)** [[project](https://debjitpaul.github.io/refiner/)] [[code](https://github.com/debjitpaul/refiner)]
387 |
388 | *Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings.* EACL'24
389 |
390 | 1. **[Active Prompting with Chain-of-Thought for Large Language Models.](https://arxiv.org/abs/2302.12246)** [[code](https://github.com/shizhediao/active-cot)]
391 |
392 | *Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang.* ACL'24
393 |
394 | 1. **[Language Models as Inductive Reasoners.](https://arxiv.org/abs/2212.10923)**
395 |
396 | *Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei.* EACL'24
397 |
398 | ### 2023
399 |
400 | 1. **[Boosting LLM Reasoning: Push the Limits of Few-shot Learning with Reinforced In-Context Pruning.](https://arxiv.org/abs/2312.08901)**
401 |
402 | *Xijie Huang, Li Lyna Zhang, Kwang-Ting Cheng, Mao Yang.* Preprint'23
403 |
404 | 1. **[Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning.](https://arxiv.org/abs/2305.12295)** [[code](https://github.com/teacherpeterpan/Logic-LLM)]
405 |
406 | *Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang.* EMNLP'23 Findings
407 |
408 | 1. **[Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models.](https://arxiv.org/abs/2306.06891)** [[code](https://github.com/soochan-lee/RoT)] [[poster](https://soochanlee.com/img/rot/rot_poster.pdf)]
409 |
410 | *Soochan Lee, Gunhee Kim.* ACL'23 Findings
411 |
412 | 1. **[Reasoning with Language Model is Planning with World Model.](https://arxiv.org/abs/2305.14992)**
413 |
414 | *Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu.* EMNLP'23
415 |
416 | 1. **[Reasoning Implicit Sentiment with Chain-of-Thought Prompting.](https://arxiv.org/abs/2305.11255)** [[code](https://github.com/scofield7419/THOR-ISA)]
417 |
418 | *Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua.* ACL'23
419 |
420 | 1. **[Tree of Thoughts: Deliberate Problem Solving with Large Language Models.](https://arxiv.org/abs/2305.10601)** [[code](https://github.com/ysymyth/tree-of-thought-llm)]
421 |
422 | *Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan.* NeurIPS'23
423 |
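For readers skimming this list, the search pattern behind Tree of Thoughts can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `fake_propose` and `fake_evaluate` are hypothetical stand-ins for the LLM calls that propose and score partial "thoughts."

```python
def tree_of_thoughts(propose, evaluate, root, breadth=2, depth=3):
    """Breadth-limited search over partial 'thoughts': expand each frontier
    state, score the candidates, and keep the best `breadth` per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [t for state in frontier for t in propose(state)]
        frontier = sorted(candidates, key=evaluate, reverse=True)[:breadth]
    return max(frontier, key=evaluate)

# Toy stand-ins: a "thought" is a digit string, and the heuristic value
# an LLM would assign is replaced by the number the string spells.
def fake_propose(state):
    return [state + "1", state + "0"]

def fake_evaluate(state):
    return int(state) if state else 0

print(tree_of_thoughts(fake_propose, fake_evaluate, ""))  # -> 111
```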
424 | 1. **[SatLM: Satisfiability-Aided Language Models Using Declarative Prompting.](https://arxiv.org/abs/2305.09656)** [[code](https://github.com/xiye17/sat-lm)]
425 |
426 | *Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett.* NeurIPS'23
427 |
428 | 1. **[ART: Automatic multi-step reasoning and tool-use for large language models.](https://arxiv.org/abs/2303.09014)**
429 |
430 | *Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro.* Preprint'23
431 |
432 | 1. **[Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data.](https://arxiv.org/abs/2302.12822)** [[code](https://github.com/shizhediao/automate-cot)]
433 |
434 | *KaShun Shum, Shizhe Diao, Tong Zhang.* EMNLP'23 Findings
435 |
436 | 1. **[Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models.](https://arxiv.org/abs/2302.00618)**
437 |
438 | *Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen.* ICML'23
439 |
440 | 1. **[Faithful Chain-of-Thought Reasoning.](https://arxiv.org/abs/2301.13379)**
441 |
442 | *Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch.* IJCNLP-AACL'23
443 |
444 | 1. **[Rethinking with Retrieval: Faithful Large Language Model Inference.](https://arxiv.org/abs/2301.00303)**
445 |
446 | *Hangfeng He, Hongming Zhang, Dan Roth.* Preprint'23
447 |
448 | 1. **[LAMBADA: Backward Chaining for Automated Reasoning in Natural Language.](https://arxiv.org/abs/2212.13894)**
449 |
450 | *Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran.* ACL'23
451 |
452 | 1. **[Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions.](https://arxiv.org/abs/2212.10509)** [[code](https://github.com/StonyBrookNLP/ircot)]
453 |
454 | *Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal.* ACL'23
455 |
456 | 1. **[Large Language Models are Reasoners with Self-Verification.](https://arxiv.org/abs/2212.09561)** [[code](https://github.com/WENGSYX/Self-Verification)]
457 |
458 | *Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao.* EMNLP'23 Findings
459 |
460 | 1. **[Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model.](https://arxiv.org/abs/2212.09146)** [[code](https://github.com/McGill-NLP/retriever-lm-reasoning)]
461 |
462 | *Parishad BehnamGhader, Santiago Miret, Siva Reddy.* EMNLP'23 Findings
463 |
464 | 1. **[Complementary Explanations for Effective In-Context Learning.](https://arxiv.org/abs/2211.13892)**
465 |
466 | *Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru.* ACL'23 Findings
467 |
468 | 1. **[Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.](https://arxiv.org/abs/2211.12588)** [[code](https://github.com/wenhuchen/program-of-thoughts)]
469 |
470 | *Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen.* TMLR'23
471 |
472 | 1. **[Unsupervised Explanation Generation via Correct Instantiations.](https://arxiv.org/abs/2211.11160)**
473 |
474 | *Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong.* AAAI'23
475 |
476 | 1. **[PAL: Program-aided Language Models.](https://arxiv.org/abs/2211.10435)** [[project](https://reasonwithpal.com/)] [[code](https://github.com/reasoning-machines/pal)]
477 |
478 | *Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig.* ICML'23
479 |
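The core idea of PAL is compact enough to sketch: the model emits a program, and the interpreter, not the model, computes the final answer. The `fake_codegen` function below is a hypothetical stand-in for the LLM call, returning the kind of program PAL-style prompting elicits for a grade-school word problem.

```python
def pal_solve(generate_program, question):
    """Program-aided reasoning: the model writes a short Python program,
    and the Python interpreter produces the final answer."""
    program = generate_program(question)
    scope = {}
    exec(program, scope)  # run the model-written program
    return scope["answer"]

# Stand-in for the LLM: a worked GSM8K-style word problem as code.
def fake_codegen(question):
    return (
        "loaves_baked = 200\n"
        "loaves_sold_morning = 93\n"
        "loaves_sold_afternoon = 39\n"
        "loaves_returned = 6\n"
        "answer = loaves_baked - loaves_sold_morning"
        " - loaves_sold_afternoon + loaves_returned\n"
    )

print(pal_solve(fake_codegen, "How many loaves are left?"))  # -> 74
```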
480 | 1. **[Solving Math Word Problems via Cooperative Reasoning induced Language Models.](https://arxiv.org/abs/2210.16257)** [[code](https://github.com/TianHongZXY/CoRe)]
481 |
482 | *Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Ruyi Gan, Jiaxing Zhang, Yujiu Yang.* ACL'23
483 |
484 | 1. **[Large Language Models Can Self-Improve.](https://arxiv.org/abs/2210.11610)**
485 |
486 | *Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han.* EMNLP'23
487 |
488 | 1. **[Mind's Eye: Grounded language model reasoning through simulation.](https://arxiv.org/abs/2210.05359)**
489 |
490 | *Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai.* ICLR'23
491 |
492 | 1. **[Automatic Chain of Thought Prompting in Large Language Models.](https://arxiv.org/abs/2210.03493)** [[code](https://github.com/amazon-research/auto-cot)]
493 |
494 | *Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola.* ICLR'23
495 |
496 | 1. **[Language Models are Multilingual Chain-of-Thought Reasoners.](https://arxiv.org/abs/2210.03057)**
497 |
498 | *Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei.* ICLR'23
499 |
500 | 1. **[Ask Me Anything: A simple strategy for prompting language models.](https://arxiv.org/abs/2210.02441)** [[code](https://github.com/hazyresearch/ama_prompting)]
501 |
502 | *Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré.* ICLR'23
503 |
504 | 1. **[Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning.](https://arxiv.org/abs/2209.14610)** [[project](https://promptpg.github.io/)] [[code](https://github.com/lupantech/PromptPG)]
505 |
506 | *Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan.* ICLR'23
507 |
508 | 1. **[Making Large Language Models Better Reasoners with Step-Aware Verifier.](https://arxiv.org/abs/2206.02336)**
509 |
510 | *Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen.* ACL'23
511 |
512 | 1. **[Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.](https://arxiv.org/abs/2205.10625)**
513 |
514 | *Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi.* ICLR'23
515 |
516 | 1. **[Self-Consistency Improves Chain of Thought Reasoning in Language Models.](https://arxiv.org/abs/2203.11171)**
517 |
518 | *Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou.* ICLR'23
519 |
520 | ### 2022
521 |
522 | 1. **[Retrieval Augmentation for Commonsense Reasoning: A Unified Approach.](https://arxiv.org/abs/2210.12887)** [[code](https://github.com/wyu97/RACo)]
523 |
524 | *Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang.* EMNLP'22
525 |
526 | 1. **[Language Models of Code are Few-Shot Commonsense Learners.](https://arxiv.org/abs/2210.07128)** [[code](https://github.com/madaan/cocogen)]
527 |
528 | *Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig.* EMNLP'22
529 |
530 | 1. **[Solving Quantitative Reasoning Problems with Language Models.](https://arxiv.org/abs/2206.14858)** [[blog](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html)]
531 |
532 | *Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra.* NeurIPS'22
533 |
534 | 1. **[Large Language Models Still Can't Plan.](https://arxiv.org/abs/2206.10498)** [[code](https://github.com/karthikv792/gpt-plan-benchmark)]
535 |
536 | *Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati.* NeurIPS'22
537 |
538 | 1. **[Large Language Models are Zero-Shot Reasoners.](https://arxiv.org/abs/2205.11916)**
539 |
540 | *Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa.* NeurIPS'22
541 |
542 | 1. **[Iteratively Prompt Pre-trained Language Models for Chain of Thought.](https://arxiv.org/abs/2203.08383)** [[code](https://github.com/sunlab-osu/iterprompt)]
543 |
544 | *Boshi Wang, Xiang Deng, Huan Sun.* EMNLP'22
545 |
546 | 1. **[Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.](https://arxiv.org/abs/2201.11903)** [[blog](https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html)]
547 |
548 | *Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou.* NeurIPS'22
549 |
550 |
551 |
552 | ↑ Back to Top ↑
553 |
554 |
555 |
556 |
557 |
558 | ### 🧠 Multimodal Reasoning in Large Language Models
559 |
560 | ### 2025
561 |
562 | 1. **[Introducing Visual Perception Token into Multimodal Large Language Model.](https://arxiv.org/abs/2502.17425)** [[code](https://github.com/yu-rp/VisualPerceptionToken)] [[model](https://huggingface.co/collections/rp-yu/vpt-models-67b6afdc8679a05a2876f07a)] [[dataset](https://huggingface.co/datasets/rp-yu/VPT_Datasets)]
563 |
564 | *Runpeng Yu, Xinyin Ma, Xinchao Wang.* Preprint'25
565 |
566 | 1. **[LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs.](https://arxiv.org/abs/2501.06186)** [[project](https://mbzuai-oryx.github.io/LlamaV-o1/)] [[code](https://github.com/mbzuai-oryx/LlamaV-o1)] [[model](https://huggingface.co/omkarthawakar/LlamaV-o1)]
567 |
568 | *Omkar Thawakar, Dinura Dissanayake, Ketan More, Ritesh Thawkar, Ahmed Heakl, Noor Ahsan, Yuhao Li, Mohammed Zumri, Jean Lahoud, Rao Muhammad Anwer, Hisham Cholakkal, Ivan Laptev, Mubarak Shah, Fahad Shahbaz Khan, Salman Khan.* Preprint'25
569 |
570 | 1. **[Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks.](https://arxiv.org/abs/2503.21696)** [[project](https://embodied-reasoner.github.io/)] [[code](https://github.com/zwq2018/embodied_reasoner)] [[dataset](https://huggingface.co/datasets/zwq2018/embodied_reasoner)]

571 | *Wenqi Zhang, Mengna Wang, Gangao Liu, Xu Huixin, Yiwei Jiang, Yongliang Shen, Guiyang Hou, Zhe Zheng, Hang Zhang, Xin Li, Weiming Lu, Peng Li, Yueting Zhuang.* Preprint'25

572 | ### 2024
573 |
574 | 1. **[Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models.](https://arxiv.org/abs/2411.14432)** [[code](https://github.com/dongyh20/Insight-V)] [[model](https://huggingface.co/collections/THUdyh/insight-v-673f5e1dd8ab5f2d8d332035)]
575 |
576 | *Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, Ziwei Liu.* Preprint'24
577 |
578 | 1. **[LLaVA-CoT: Let Vision Language Models Reason Step-by-Step.](https://arxiv.org/abs/2411.10440)** [[code](https://github.com/PKU-YuanGroup/LLaVA-CoT)] [[model](https://huggingface.co/Xkev/Llama-3.2V-11B-cot)]
579 |
580 | *Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, Li Yuan.* Preprint'24
581 |
582 | 1. **[Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models.](https://arxiv.org/abs/2406.09403)** [[project](https://visualsketchpad.github.io/)] [[code](https://github.com/Yushi-Hu/VisualSketchpad)]
583 |
584 | *Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, Ranjay Krishna.* Preprint'24
585 |
586 | 1. **[Chart-based Reasoning: Transferring Capabilities from LLMs to VLMs.](https://arxiv.org/abs/2403.12596)**
587 |
588 | *Victor Carbune, Hassan Mansoor, Fangyu Liu, Rahul Aralikatte, Gilles Baechler, Jindong Chen, Abhanshu Sharma.* NAACL'24 Findings
589 |
590 | 1. **[SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities.](https://arxiv.org/abs/2401.12168)** [[project](https://spatial-vlm.github.io/)]
591 |
592 | *Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Danny Driess, Pete Florence, Dorsa Sadigh, Leonidas Guibas, Fei Xia.* CVPR'24
593 |
594 | 1. **[Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding.](https://arxiv.org/abs/2401.04398)**
595 |
596 | *Zilong Wang, Hao Zhang, Chun-Liang Li, Julian Martin Eisenschlos, Vincent Perot, Zifeng Wang, Lesly Miculicich, Yasuhisa Fujii, Jingbo Shang, Chen-Yu Lee, Tomas Pfister.* ICLR'24
597 |
598 | 1. **[Link-Context Learning for Multimodal LLMs.](https://arxiv.org/abs/2308.07891)** [[code](https://github.com/isekai-portal/Link-Context-Learning)]
599 |
600 | *Yan Tai, Weichen Fan, Zhao Zhang, Feng Zhu, Rui Zhao, Ziwei Liu.* CVPR'24
601 |
602 | ### 2023
603 |
604 | 1. **[Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models.](https://arxiv.org/abs/2312.17661)**
605 |
606 | *Yuqing Wang, Yun Zhao.* Preprint'23
607 |
608 | 1. **[G-LLaVA: Solving Geometric Problems with Multi-Modal Large Language Model.](https://arxiv.org/abs/2312.11370)**
609 |
610 | *Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, Lingpeng Kong.* Preprint'23
611 |
612 | 1. **[Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models.](https://arxiv.org/abs/2304.09842)** [[project](https://chameleon-llm.github.io/)] [[code](https://github.com/lupantech/chameleon-llm)]
613 |
614 | *Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Jianfeng Gao.* NeurIPS'23
615 |
616 | 1. **[MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action.](https://arxiv.org/abs/2303.11381)** [[project](https://multimodal-react.github.io/)] [[code](https://github.com/microsoft/MM-REACT)] [[demo](https://huggingface.co/spaces/microsoft-cognitive-service/mm-react)]
617 |
618 | *Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang.* Preprint'23
619 |
620 | 1. **[ViperGPT: Visual Inference via Python Execution for Reasoning.](https://arxiv.org/abs/2303.08128)** [[project](https://viper.cs.columbia.edu/)] [[code](https://github.com/cvlab-columbia/viper)]
621 |
622 | *Dídac Surís, Sachit Menon, Carl Vondrick.* ICCV'23
623 |
624 | 1. **[Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models.](https://arxiv.org/abs/2303.04671)** [[code](https://github.com/microsoft/visual-chatgpt)]
625 |
626 | *Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan.* Preprint'23
627 |
628 | 1. **[Multimodal Chain-of-Thought Reasoning in Language Models.](https://arxiv.org/abs/2302.00923)** [[code](https://github.com/amazon-science/mm-cot)]
629 |
630 | *Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola.* Preprint'23
631 |
632 | 1. **[Visual Programming: Compositional Visual Reasoning without Training.](https://arxiv.org/abs/2211.11559)** [[project](https://prior.allenai.org/projects/visprog)] [[code](https://github.com/allenai/visprog)]
633 |
634 | *Tanmay Gupta, Aniruddha Kembhavi.* CVPR'23
635 |
636 | 1. **[Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language.](https://arxiv.org/abs/2204.00598)** [[project](https://socraticmodels.github.io/)] [[code](https://github.com/google-research/google-research/tree/master/socraticmodels)]
637 |
638 | *Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, Pete Florence.* ICLR'23
639 |
640 |
641 |
642 | ↑ Back to Top ↑
643 |
644 |
645 |
646 |
647 |
648 | ### 🤏 Scaling Smaller Language Models to Reason
649 |
650 | ### 2025
651 |
652 | 1. **[Learning to Reason from Feedback at Test-Time.](https://arxiv.org/abs/2502.15771)** [[code](https://github.com/LaVi-Lab/FTTT)]
653 |
654 | *Yanyang Li, Michael Lyu, Liwei Wang.* Preprint'25
655 |
656 | 1. **[S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning.](https://arxiv.org/abs/2502.12853)** [[code](https://github.com/NineAbyss/S2R)]
657 |
658 | *Ruotian Ma, Peisong Wang, Cheng Liu, Xingyan Liu, Jiaqi Chen, Bang Zhang, Xin Zhou, Nan Du, Jia Li.* Preprint'25
659 |
660 | 1. **[rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking.](https://arxiv.org/abs/2501.04519)** [[code](https://github.com/microsoft/rStar)]
661 |
662 | *Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, Mao Yang.* Preprint'25
663 |
664 | ### 2024
665 |
666 | 1. **[MathScale: Scaling Instruction Tuning for Mathematical Reasoning.](https://arxiv.org/abs/2403.02884)**
667 |
668 | *Zhengyang Tang, Xingxing Zhang, Benyou Wang, Furu Wei.* Preprint'24
669 |
670 | ### 2023
671 |
672 | 1. **[Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic.](https://arxiv.org/abs/2308.07336)** [[code](https://github.com/hitachi-nlp/FLD)]
673 |
674 | *Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa.* ICML'23
675 |
676 | 1. **[Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step.](https://arxiv.org/abs/2306.14050)** [[code](https://github.com/allenai/cot_distillation)]
677 |
678 | *Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi.* ACL'23
679 |
680 | 1. **[Specializing Smaller Language Models towards Multi-Step Reasoning.](https://arxiv.org/abs/2301.12726)**
681 |
682 | *Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot.* ICML'23
683 |
684 | 1. **[Large Language Models Are Reasoning Teachers.](https://arxiv.org/abs/2212.10071)** [[code](https://github.com/itsnamgyu/reasoning-teacher)]
685 |
686 | *Namgyu Ho, Laura Schmid, Se-Young Yun.* ACL'23
687 |
688 | 1. **[Teaching Small Language Models to Reason.](https://arxiv.org/abs/2212.08410)**
689 |
690 | *Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn.* ACL'23 Short
691 |
692 | 1. **[Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions.](https://arxiv.org/abs/2212.00193)**
693 |
694 | *Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan.* ACL'23 Findings
695 |
696 | ### 2022
697 |
698 | 1. **[Scaling Instruction-Finetuned Language Models.](https://arxiv.org/abs/2210.11416)**
699 |
700 | *Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei.* JMLR'22
701 |
702 |
703 |
704 | ↑ Back to Top ↑
705 |
706 |
707 |
708 |
709 |
710 | ## Other Useful Resources
711 |
712 |
713 |
714 | - **[LLM Reasoners](https://github.com/Ber666/llm-reasoners)** A library for advanced large language model reasoning.
715 | - **[Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub)** Benchmarking LLM reasoning performance with chain-of-thought prompting.
716 | - **[ThoughtSource](https://github.com/OpenBioLink/ThoughtSource)** Central and open resource for data and tools related to chain-of-thought reasoning in large language models.
717 | - **[AgentChain](https://github.com/jina-ai/agentchain)** Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks.
718 | - **[google/Cascades](https://github.com/google-research/cascades)** Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference, and more.
719 | - **[LogiTorch](https://github.com/LogiTorch/logitorch)** PyTorch-based library for logical reasoning on natural language.
720 | - **[salesforce/LAVIS](https://github.com/salesforce/LAVIS)** One-stop Library for Language-Vision Intelligence.
721 | - **[facebookresearch/RAM](https://github.com/facebookresearch/RAM)** A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM).
722 |
723 |
724 |
725 | ↑ Back to Top ↑
726 |
727 |
728 |
729 |
730 |
731 | ## Other Awesome Lists
732 |
733 |
734 |
735 | - **[Awesome-Controllable-Generation](https://github.com/atfortes/Awesome-Controllable-Generation)** Collection of papers and resources on Controllable Generation using Diffusion Models.
736 | - **[Chain-of-ThoughtsPapers](https://github.com/Timothyxxx/Chain-of-ThoughtsPapers)** A trend started by "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".
737 | - **[LM-reasoning](https://github.com/jeffhj/LM-reasoning)** Collection of papers and resources on Reasoning in Large Language Models.
738 | - **[Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers)** Repository for the paper "Reasoning with Language Model Prompting: A Survey".
739 | - **[ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP)** Paper list on reasoning in NLP.
740 | - **[Awesome-LLM](https://github.com/Hannibal046/Awesome-LLM)** Curated list of Large Language Models.
741 | - **[Awesome LLM Self-Consistency](https://github.com/SuperBruceJia/Awesome-LLM-Self-Consistency)** Curated list of works on self-consistency in Large Language Models.
742 | - **[Deep-Reasoning-Papers](https://github.com/floodsung/Deep-Reasoning-Papers)** Recent Papers including Neural-Symbolic Reasoning, Logical Reasoning, and Visual Reasoning.
743 |
744 |
745 |
746 | ↑ Back to Top ↑
747 |
748 |
749 |
750 |
751 |
752 | ## Contributing
753 |
754 | - When adding a new paper or updating an existing one, consider which category the work belongs to.
755 | - Use the same format as existing entries to describe the work.
756 | - Link to the paper's abstract page (use the `/abs/` URL for arXiv publications).
757 |
758 | **Don't worry if you make a mistake; it will be fixed for you!**
759 |
760 | ### Contributors
761 |
762 |
763 |
764 |
765 |
--------------------------------------------------------------------------------