# COT-Reading-List

## Foundational Papers
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models https://arxiv.org/abs/2201.11903
- Large Language Models are Zero-Shot Reasoners https://arxiv.org/abs/2205.11916
- Automatic Chain of Thought Prompting in Large Language Models https://arxiv.org/abs/2210.03493

## Problem Decomposition
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models https://arxiv.org/abs/2205.10625
- Measuring and Narrowing the Compositionality Gap in Language Models https://arxiv.org/abs/2210.03350

## Ensemble Prediction
- Self-Consistency Improves Chain of Thought Reasoning in Language Models https://arxiv.org/abs/2203.11171
- Active Prompting with Chain-of-Thought for Large Language Models https://arxiv.org/abs/2302.12246
- Rationale-Augmented Ensembles in Language Models https://arxiv.org/abs/2207.00747

## Generation and Verification
- STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning https://arxiv.org/abs/2203.14465
- On the Advance of Making Language Models Better Reasoners https://arxiv.org/abs/2206.02336

## Multilingual
- Language Models are Multilingual Chain-of-Thought Reasoners https://arxiv.org/abs/2210.03057

## Background on Large Language Models
- PaLM: Scaling Language Modeling with Pathways https://arxiv.org/abs/2204.02311
- Emergent Abilities of Large Language Models https://arxiv.org/abs/2206.07682
- Language Model Cascades https://arxiv.org/abs/2207.10342

2022.10.24