├── .gitignore
├── CODE_OF_CONDUCT.md
├── LICENSE
├── README.md
├── README_en.md
├── dataset
├── __init__.py
└── lm_dataset.py
├── eval_model.py
├── images
├── 1-wiki.png
├── 2-wiki.png
├── 3-wiki.png
├── 4-wiki.png
├── 5-wiki.png
├── LLM-structure-moe.png
├── LLM-structure.png
├── and_huggingface.png
├── and_modelscope.png
├── compare_radar.png
├── dataset.jpg
├── gpt3_config.png
├── logo.png
├── logo2.png
├── minimind2.gif
├── pre_512_loss.png
├── pre_768_loss.png
├── sft_512_loss.png
└── sft_768_loss.png
├── model
├── __init__.py
├── model_lora.py
├── model_minimind.py
├── tokenizer.json
└── tokenizer_config.json
├── requirements.txt
├── scripts
├── chat_openai_api.py
├── convert_model.py
├── serve_openai_api.py
├── train_tokenizer.py
└── web_demo.py
└── trainer
├── train_distill_reason.py
├── train_distillation.py
├── train_dpo.py
├── train_full_sft.py
├── train_lora.py
└── train_pretrain.py
/.gitignore:
--------------------------------------------------------------------------------
1 | /model/__pycache__
2 | /dataset
3 | /out
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | We as members, contributors, and leaders pledge to make participation in our
6 | community a harassment-free experience for everyone, regardless of age, body
7 | size, visible or invisible disability, ethnicity, sex characteristics, gender
8 | identity and expression, level of experience, education, socio-economic status,
9 | nationality, personal appearance, race, religion, or sexual identity
10 | and orientation.
11 |
12 | We pledge to act and interact in ways that contribute to an open, welcoming,
13 | diverse, inclusive, and healthy community.
14 |
15 | ## Our Standards
16 |
17 | Examples of behavior that contributes to a positive environment for our
18 | community include:
19 |
20 | * Demonstrating empathy and kindness toward other people
21 | * Being respectful of differing opinions, viewpoints, and experiences
22 | * Giving and gracefully accepting constructive feedback
23 | * Accepting responsibility and apologizing to those affected by our mistakes,
24 | and learning from the experience
25 | * Focusing on what is best not just for us as individuals, but for the
26 | overall community
27 |
28 | Examples of unacceptable behavior include:
29 |
30 | * The use of sexualized language or imagery, and sexual attention or
31 | advances of any kind
32 | * Trolling, insulting or derogatory comments, and personal or political attacks
33 | * Public or private harassment
34 | * Publishing others' private information, such as a physical or email
35 | address, without their explicit permission
36 | * Other conduct which could reasonably be considered inappropriate in a
37 | professional setting
38 |
39 | ## Enforcement Responsibilities
40 |
41 | Community leaders are responsible for clarifying and enforcing our standards of
42 | acceptable behavior and will take appropriate and fair corrective action in
43 | response to any behavior that they deem inappropriate, threatening, offensive,
44 | or harmful.
45 |
46 | Community leaders have the right and responsibility to remove, edit, or reject
47 | comments, commits, code, wiki edits, issues, and other contributions that are
48 | not aligned to this Code of Conduct, and will communicate reasons for moderation
49 | decisions when appropriate.
50 |
51 | ## Scope
52 |
53 | This Code of Conduct applies within all community spaces, and also applies when
54 | an individual is officially representing the community in public spaces.
55 | Examples of representing our community include using an official e-mail address,
56 | posting via an official social media account, or acting as an appointed
57 | representative at an online or offline event.
58 |
59 | ## Enforcement
60 |
61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
62 | reported to the community leaders responsible for enforcement at
63 | .
64 | All complaints will be reviewed and investigated promptly and fairly.
65 |
66 | All community leaders are obligated to respect the privacy and security of the
67 | reporter of any incident.
68 |
69 | ## Enforcement Guidelines
70 |
71 | Community leaders will follow these Community Impact Guidelines in determining
72 | the consequences for any action they deem in violation of this Code of Conduct:
73 |
74 | ### 1. Correction
75 |
76 | **Community Impact**: Use of inappropriate language or other behavior deemed
77 | unprofessional or unwelcome in the community.
78 |
79 | **Consequence**: A private, written warning from community leaders, providing
80 | clarity around the nature of the violation and an explanation of why the
81 | behavior was inappropriate. A public apology may be requested.
82 |
83 | ### 2. Warning
84 |
85 | **Community Impact**: A violation through a single incident or series
86 | of actions.
87 |
88 | **Consequence**: A warning with consequences for continued behavior. No
89 | interaction with the people involved, including unsolicited interaction with
90 | those enforcing the Code of Conduct, for a specified period of time. This
91 | includes avoiding interactions in community spaces as well as external channels
92 | like social media. Violating these terms may lead to a temporary or
93 | permanent ban.
94 |
95 | ### 3. Temporary Ban
96 |
97 | **Community Impact**: A serious violation of community standards, including
98 | sustained inappropriate behavior.
99 |
100 | **Consequence**: A temporary ban from any sort of interaction or public
101 | communication with the community for a specified period of time. No public or
102 | private interaction with the people involved, including unsolicited interaction
103 | with those enforcing the Code of Conduct, is allowed during this period.
104 | Violating these terms may lead to a permanent ban.
105 |
106 | ### 4. Permanent Ban
107 |
108 | **Community Impact**: Demonstrating a pattern of violation of community
109 | standards, including sustained inappropriate behavior, harassment of an
110 | individual, or aggression toward or disparagement of classes of individuals.
111 |
112 | **Consequence**: A permanent ban from any sort of public interaction within
113 | the community.
114 |
115 | ## Attribution
116 |
117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118 | version 2.0, available at
119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
120 |
121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct
122 | enforcement ladder](https://github.com/mozilla/diversity).
123 |
124 | [homepage]: https://www.contributor-covenant.org
125 |
126 | For answers to common questions about this code of conduct, see the FAQ at
127 | https://www.contributor-covenant.org/faq. Translations are available at
128 | https://www.contributor-covenant.org/translations.
129 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | 
4 |
5 |
6 |
7 |
8 |
9 | 
10 | [](https://github.com/jingyaogong/minimind/stargazers)
11 | [](LICENSE)
12 | [](https://github.com/jingyaogong/minimind/commits/master)
13 | [](https://github.com/jingyaogong/minimind/pulls)
14 | [](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5)
15 |
16 |
17 |
18 |
19 |
"The greatest truth is the simplest" (大道至简)
20 |
21 |
22 |
23 |
24 | 中文 | [English](./README_en.md)
25 |
26 |
27 |
28 | * This open-source project aims to train a tiny 25.8M language model, **MiniMind**, completely from scratch, in as little as 2 hours for about 3 RMB of compute!
29 | * The **MiniMind** series is extremely lightweight: the smallest version is roughly $\frac{1}{7000}$ the size of GPT-3, small enough to train quickly on an ordinary consumer GPU.
30 | * The project also open-sources the full pipeline of a minimal LLM, including the extensible shared Mixture-of-Experts (MoE) architecture, dataset cleaning, pretraining, supervised fine-tuning (SFT), LoRA fine-tuning,
31 | direct preference optimization (DPO), and model distillation, with complete code for every stage.
32 | * **MiniMind** also extends to a vision multimodal VLM: [MiniMind-V](https://github.com/jingyaogong/minimind-v).
33 | * All core algorithm code is rebuilt from scratch in native PyTorch, without relying on abstract interfaces from third-party libraries.
34 | * This is both a full-stage open-source reproduction of a large language model and a tutorial for getting started with LLMs.
36 |
37 | > To avoid misunderstanding: "2 hours" is measured on a single NVIDIA 3090, and "3 RMB" refers
38 | > to the GPU server rental cost; detailed specs are given below.
39 |
40 | ---
41 |
42 |
43 |
44 |
45 | 
46 |
47 | [🔗🍓Reasoning Model](https://www.modelscope.cn/studios/gongjy/MiniMind-Reasoning) | [🔗🤖Standard Model](https://www.modelscope.cn/studios/gongjy/MiniMind) | [🔗🎞️Video Intro](https://www.bilibili.com/video/BV12dHPeqE72/?share_source=copy_web&vd_source=670c2504f88726f8cf4a21ef6147c0e8)
48 |
49 |
50 |
66 |
67 |
68 |
69 |
70 | # 📌 Introduction
71 |
72 | The emergence of Large Language Models (LLMs) has sparked unprecedented worldwide attention on AI.
73 | Whether it is ChatGPT, DeepSeek, or Qwen, their stunning capabilities are awe-inspiring.
74 | However, at tens or hundreds of billions of parameters, these models are not only impractical to train on personal hardware; even deploying them is out of reach.
75 | Opening the "black box" of large models and exploring their inner workings is exhilarating!
76 | Unfortunately, 99% of such exploration stops at lightly fine-tuning an existing model with techniques like LoRA to learn a few new instructions or tasks.
77 | That is like teaching Newton to use a 21st-century smartphone: interesting, but entirely beside the point of understanding physics itself.
78 | Meanwhile, third-party frameworks and toolkits such as transformers+trl expose almost nothing but highly abstract interfaces.
79 | In a mere 10 lines of code you can run the entire "load model + load dataset + inference + reinforcement learning" training pipeline.
80 | Such efficient wrapping is convenient, but like a high-speed spaceship it seals us off from the underlying implementation and blocks any chance to dig into the LLM's core code.
81 | And yet, "building an airplane out of Lego is far more exciting than flying first class!"
82 | Worse still, the internet is flooded with paid courses and marketing accounts peddling error-riddled, half-understood AI tutorials.
83 | For these reasons, this project's goal is to lower the barrier to learning LLMs, letting everyone start by understanding every line of code
84 | and train a very small language model by hand, from scratch. Yes, training from **scratch**, not merely doing **inference**!
85 | For under 3 RMB of server cost, you can experience the entire journey of building a language model from 0 to 1.
86 | Let's enjoy the fun of creation together!
87 |
88 | > [!NOTE]
89 | > (As of 2025-02-07) the MiniMind series has completed pretraining of multiple model sizes; the smallest needs only 25.8M (0.02B) parameters to hold a fluent conversation!
90 |
91 |
92 | Models List
93 |
94 | | Model (size) | Inference memory (approx.) | Release |
95 | |-------------------------|----------|------------|
96 | | MiniMind2-small (26M) | 0.5 GB | 2025.04.26 |
97 | | MiniMind2-MoE (145M) | 1.0 GB | 2025.04.26 |
98 | | MiniMind2 (104M) | 1.0 GB | 2025.04.26 |
99 | | minimind-v1-small (26M) | 0.5 GB | 2024.08.28 |
100 | | minimind-v1-moe (4×26M) | 1.0 GB | 2024.09.17 |
101 | | minimind-v1 (108M) | 1.0 GB | 2024.09.01 |
102 |
103 |
104 |
105 | **What the project includes**
106 |
107 | - Full code for the MiniMind-LLM architecture (Dense + MoE models).
108 | - Detailed training code for the tokenizer.
109 | - Full training code for Pretraining, SFT, LoRA, RLHF-DPO, and model distillation.
110 | - High-quality datasets for every stage, collected, distilled, cleaned, and deduplicated, all open-sourced.
111 | - Pretraining, instruction fine-tuning, LoRA, DPO reinforcement learning, and white-box model distillation implemented from scratch. The key algorithms barely rely on third-party frameworks, and everything is open source.
112 | - Also compatible with mainstream third-party frameworks such as `transformers`, `trl`, and `peft`.
113 | - Training supports single-machine single-GPU and multi-GPU setups (DDP, DeepSpeed), with wandb visualization of the training run and dynamic stop/resume.
114 | - Model evaluation on third-party benchmarks (C-Eval, C-MMLU, OpenBookQA, etc.).
115 | - A minimal OpenAI-API-compatible server, easy to integrate into third-party ChatUIs (FastGPT, Open-WebUI, etc.).
116 | - A minimal chat WebUI front end based on streamlit.
117 | - Fully compatible with popular community inference engines `llama.cpp`, `vllm`, `ollama`, and the `Llama-Factory` training framework.
118 | - A MiniMind-Reason model reproducing (via distillation/RL) the large reasoning model DeepSeek-R1, with **data + model** fully open-sourced!
119 |
120 | We hope this open-source project helps LLM beginners get started quickly!
121 |
122 | ### 👉**Changelog**
123 |
124 |
125 | 2025-04-26 (newest 🎉🎉🎉)
126 |
127 | - Major update
128 |   - For backward compatibility, visit the [🔗old repository content🔗](https://github.com/jingyaogong/minimind/tree/7da201a944a90ed49daef8a0265c959288dff83a).
129 |   - MiniMind model parameters fully renamed to align with the Transformers library (unified naming).
130 |   - generate rewritten, now inheriting from the GenerationMixin class.
131 |   - 🔥Support for popular third-party ecosystems such as llama.cpp, vllm, and ollama.
132 |   - Code and directory structure standardized.
133 |   - Vocabulary special tokens changed: ` ` -> `<|im_start|><|im_end|>`
134 | ```text
135 | To stay compatible with the third-party inference frameworks llama.cpp and vllm, this update comes at a considerable cost.
136 | Old models from before 25-04-26 can no longer be loaded "directly" for inference.
137 | Because Llama's positional encoding differs from minimind's, QK values diverge after mapping to the Llama architecture.
138 | The old MiniMind2 models were all recovered via weight mapping plus QKVO linear-layer calibration (fine-tuning).
139 | After this update, maintenance of the entire `minimind-v1` series is dropped, and it is removed from the repository.
140 | ```
141 |
142 |
143 |
144 | 2025-02-09
145 |
146 | - The biggest update since release: the MiniMind2 Series.
147 | - Almost all code refactored into a cleaner, unified structure.
148 | For compatibility with old code, visit the [🔗old repository content🔗](https://github.com/jingyaogong/minimind/tree/6e9cd28ef9b34a0a10afbdf6f59e65cb6e628efb).
149 | - Data preprocessing steps removed. Dataset format unified, switched to `jsonl` to end dataset-download confusion.
150 | - The MiniMind2 series clearly outperforms MiniMind-V1.
151 | - Minor fixes: {more standard kv-cache implementation, MoE load-balancing loss accounted for, etc.}
152 | - A recipe for migrating the model to private datasets (medical model, self-identity examples).
153 | - Leaner pretraining dataset with greatly improved quality, sharply cutting solo training time: a single 3090 reproduces it in 2 hours!
154 | - Update: LoRA fine-tuning decoupled from the peft wrapper and implemented from scratch; DPO implemented from scratch in native PyTorch; white-box model distillation implemented natively.
155 | - The MiniMind2-DeepSeek-R1 series of distilled models is born!
156 | - MiniMind2 gains some English ability!
157 | - Updated benchmark results comparing MiniMind2 with third-party models across more LLM leaderboards.
158 |
159 |
160 |
161 |
162 |
163 |
164 | 2024-10-05
165 |
166 | - Extended MiniMind with a new modality: vision.
167 | - See the sibling project [minimind-v](https://github.com/jingyaogong/minimind-v) for details!
168 |
169 |
170 |
171 |
172 |
173 |
174 | 2024-09-27
175 |
176 | - Updated the pretrain dataset preprocessing on 09-27; to preserve text integrity, dropped preprocessing into .bin files (slightly sacrificing training speed).
177 | - The preprocessed pretrain file is now named pretrain_data.csv.
178 | - Removed some redundant code.
179 |
180 |
181 |
182 |
183 |
184 | 2024-09-17
185 |
186 | - Released the minimind-v1-moe model.
187 | - To avoid ambiguity, mistral_tokenizer is no longer used; all models use the custom minimind_tokenizer.
188 |
189 |
190 |
191 |
192 |
193 | 2024-09-01
194 |
195 | - Released the minimind-v1 (108M) model with minimind_tokenizer, 3 pretraining epochs + 10 SFT epochs, trained more thoroughly and performing better.
196 | - The project is deployed to ModelScope Spaces and can be tried at:
197 | - [🔗Try it on ModelScope🔗](https://www.modelscope.cn/studios/gongjy/minimind)
198 |
199 |
200 |
201 |
202 |
203 | 2024-08-27
204 |
205 | - Initial open-source release.
206 |
207 |
208 |
209 | # 📌 Quick Start
210 |
211 |
212 | My hardware/software configuration (for reference only)
213 |
214 | * CPU: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
215 | * RAM: 128 GB
216 | * GPU: NVIDIA GeForce RTX 3090(24GB) * 8
217 | * Ubuntu==20.04
218 | * CUDA==12.2
219 | * Python==3.10.16
220 | * [requirements.txt](./requirements.txt)
221 |
222 |
223 |
224 | ### Step 0
225 |
226 | ```bash
227 | git clone https://github.com/jingyaogong/minimind.git
228 | ```
229 |
230 | ## Ⅰ Test an Existing Model
231 |
232 | ### 1. Set up the environment
233 |
234 | ```bash
235 | pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
236 | ```
237 |
238 | ### 2. Download a model
239 | In the project root directory:
240 | ```bash
241 | git clone https://huggingface.co/jingyaogong/MiniMind2
242 | ```
243 |
244 | ### (Optional) Command-line chat
245 |
246 | ```bash
247 | # load=0: load from pytorch model, load=1: load from transformers-hf model
248 | python eval_model.py --load 1 --model_mode 2
249 | ```
250 |
251 | ### (Optional) Launch the WebUI
252 |
253 | ```bash
254 | # may require `python>=3.10`; install with `pip install streamlit`
255 | # cd scripts
256 | streamlit run web_demo.py
257 | ```
258 |
259 | ### (Optional) Third-party inference frameworks
260 |
261 | ```bash
262 | # ollama
263 | ollama run jingyaogong/minimind2
264 | # vllm
265 | vllm serve ./MiniMind2/ --served-model-name "minimind"
266 | ```
267 |
268 | ## Ⅱ Train From Scratch
269 |
270 | ### 1. Set up the environment
271 |
272 | ```bash
273 | pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
274 | ```
275 |
276 |
277 | Note: check in advance whether Torch can use CUDA
278 |
279 | ```python
280 | import torch
281 | print(torch.cuda.is_available())
282 | ```
283 |
284 | If CUDA is unavailable, download a whl file from [torch_stable](https://download.pytorch.org/whl/torch_stable.html)
285 | and install it manually. See this [guide](https://blog.csdn.net/weixin_45456738/article/details/141029610?ops_request_misc=&request_id=&biz_id=102&utm_term=%E5%AE%89%E8%A3%85torch&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-2-141029610.nonecase&spm=1018.2226.3001.4187)
286 |
287 |
288 |
289 | ### 2. Download the data
290 |
291 | From the [dataset download link](https://www.modelscope.cn/datasets/gongjy/minimind_dataset/files) provided below,
292 | download the data files you need (create a `./dataset` directory) and place them under `./dataset`.
293 |
294 |
295 | Note: about the datasets
296 |
297 | By default, downloading `pretrain_hq.jsonl` + `sft_mini_512.jsonl` is recommended for the fastest reproduction of the Zero chat model.
298 |
299 | You are free to choose other data files; several combinations are suggested below, which you can mix according to your training needs and GPU resources.
300 |
301 |
302 |
303 | ### 3. Start training
304 |
305 | The training scripts live in `trainer`.
306 |
307 | **3.1 Pretraining (learn knowledge)**
308 |
309 | ```bash
310 | python train_pretrain.py
311 | ```
312 |
313 | > Run pretraining to get `pretrain_*.pth` as the pretrained output weights (where * is the model dimension, 512 by default).
314 |
315 |
316 | **3.2 Supervised fine-tuning (learn to converse)**
317 |
318 | ```bash
319 | python train_full_sft.py
320 | ```
321 |
322 | > Run supervised fine-tuning to get `full_sft_*.pth` as the instruction-tuned output weights (where `full` means full-parameter fine-tuning).
323 |
324 |
325 | Note: about training
326 |
327 | By default, every training stage saves its parameters to `./out/***.pth` once every 100 steps (each save overwrites the old weight file).
328 |
329 | For simplicity, only these two training stages are described here. For other training (LoRA, distillation, reinforcement learning, reasoning fine-tuning, etc.), see the detailed explanations in the [Experiments] section below.
330 |
331 |
332 |
333 |
334 | ---
335 |
336 | ### 4. Test the model
337 |
338 | Make sure the `*.pth` model file you want to test is under the `./out/` directory.
339 | Alternatively, download my trained `*.pth` files directly from [here](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch/files).
340 |
341 | ```bash
342 | python eval_model.py --model_mode 1 # default 0: test the pretrain model; set to 1: test the full_sft model
343 | ```
344 |
345 |
346 | Note: about testing
347 |
348 | For details, just read the `eval_model.py` script. model_mode options: 0: pretrain model, 1: SFT-Chat model, 2: RLHF-Chat model, 3: Reason model.
349 |
350 |
351 |
352 |
353 | ---
354 |
355 | > [!TIP]
356 | > All training scripts use the native PyTorch framework and support multi-GPU acceleration. Assuming your machine has N (N>1) GPUs:
357 |
358 | Launch single-machine N-GPU training (DDP; multi-machine multi-GPU clusters also supported)
359 |
360 | ```bash
361 | torchrun --nproc_per_node N train_xxx.py
362 | ```
363 |
364 |
365 | Note: other tips
366 |
367 | Launch single-machine N-GPU training with DeepSpeed
368 |
369 | ```bash
370 | deepspeed --master_port 29500 --num_gpus=N train_xxx.py
371 | ```
372 |
373 | wandb logging of the training run can be enabled as needed
374 |
375 | ```bash
376 | # login required: wandb login
377 | torchrun --nproc_per_node N train_xxx.py --use_wandb
378 | # or
379 | python train_xxx.py --use_wandb
380 | ```
381 |
382 | Adding the `--use_wandb` flag records the training run, which can be inspected on the wandb website after training finishes. The `wandb_project`
383 | and `wandb_run_name` parameters set the project name and run name.
384 |
385 |
386 |
387 | # 📌 Data
388 |
389 | ## Ⅰ Tokenizer
390 |
391 | A tokenizer maps words from natural language to numbers like `0, 1, 36` via a "dictionary"; think of the number as the word's page number in that dictionary.
392 | You can build your own vocabulary and train a tokenizer with `./scripts/train_tokenizer.py` (for study only; retraining is unnecessary, as MiniMind ships with its own tokenizer),
393 | or use a well-known open-source tokenizer from a large model.
394 | Using a Xinhua/Oxford dictionary directly has the advantage of excellent token compression, but the drawback of a huge page count, easily hundreds of thousands of words and phrases;
395 | a self-trained tokenizer lets you freely control the vocabulary's size and content, at the cost of poor compression (for instance, "hello" might be split into the five
396 | separate tokens "h e l l o") and weak coverage of rare words.
397 | The choice of "dictionary" certainly matters: an LLM's output is essentially an N-way classification over the dictionary via softmax, decoded back to natural language through that "dictionary".
398 | Because MiniMind's size must be tightly controlled, and to keep the model from being top-heavy (the embedding layer taking too large a share of the parameters), the shorter the vocabulary, the better.
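The top-heaviness concern can be made concrete with a quick estimate. The sketch below compares the embedding layer's share of total parameters for a 6,400-token vocabulary versus a qwen2-sized one; the hidden size of 512 and the rough 12·dim² parameter cost per transformer block are illustrative assumptions, not MiniMind's exact configuration:

```python
# Rough share of parameters spent on the token-embedding matrix.
# dim=512 and ~12*dim^2 params per block are illustrative assumptions.
def embed_share(vocab_size, dim=512, n_layers=8):
    embed = vocab_size * dim               # token-embedding parameters
    blocks = n_layers * 12 * dim * dim     # rough attention + MLP cost
    return embed / (embed + blocks)

print(f"vocab 6,400:   {embed_share(6_400):.1%}")    # small share of the model
print(f"vocab 151,643: {embed_share(151_643):.1%}")  # embeddings dominate
```

Under these assumptions, a 151k vocabulary would make the embedding matrix dwarf the compute layers, which is exactly the imbalance the 6,400-token vocabulary avoids.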
399 |
400 |
401 | Tokenizer comparison
402 |
403 | The vocabulary sizes of strong third-party open-source tokenizers, such as Yi, qwen, chatglm, mistral, and Llama3, are:
404 |
405 |
406 | | Tokenizer | Vocab size | Source |
407 | |-----------|-----------:|--------|
408 | | yi tokenizer | 64,000 | 01.AI (China) |
409 | | qwen2 tokenizer | 151,643 | Alibaba Cloud (China) |
410 | | glm tokenizer | 151,329 | Zhipu AI (China) |
411 | | mistral tokenizer | 32,000 | Mistral AI (France) |
412 | | llama3 tokenizer | 128,000 | Meta (USA) |
413 | | minimind tokenizer | 6,400 | custom |
414 |
415 | > 👉Update 2024-09-17: to avoid ambiguity with past versions and to control model size, all minimind models use minimind_tokenizer; all mistral_tokenizer versions are deprecated.
416 |
417 | ```
418 | # Some thoughts out loud
419 | > Although minimind_tokenizer has a small vocabulary and its encode/decode efficiency is weaker than Chinese-friendly tokenizers like qwen2 and glm,
420 | > the minimind models use the self-trained minimind_tokenizer to keep the overall parameters light and avoid an imbalance between the embedding and compute layers (top-heaviness), since minimind's vocabulary size is only 6400.
421 | > In practice, minimind has never failed to decode a rare word during testing; the results are good.
422 | > Because the custom vocabulary is compressed to 6,400 entries, the LLM's total parameter count can be as low as 25.8M.
423 | > The training data `tokenizer_train.jsonl` all comes from the `Jiangshu (匠数) large-model dataset`; this data is relatively unimportant, and you may choose freely if you wish to train your own.
424 | ```
425 |
426 |
427 |
428 | ## Ⅱ Pretraining Data
429 |
430 | After the lesson of MiniMind-V1, whose low-quality pretraining data made the model ramble, after `2025-02-05` I decided to stop using large-scale unsupervised datasets for pretraining.
431 | Instead, I extracted the Chinese portion of the [Jiangshu (匠数) large-model dataset](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data),
432 | cleaned out entries shorter than 512 characters, and concatenated roughly 1.6GB of corpus directly into the pretraining data `pretrain_hq.jsonl`, where hq stands for high
433 | quality (not that high yet, of course; improving data quality never ends).
434 |
435 | The data format of `pretrain_hq.jsonl` is:
436 |
437 | ```json
438 | {"text": "如何才能摆脱拖延症? 治愈拖延症并不容易,但以下建议可能有所帮助..."}
439 | ```
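A minimal sketch of streaming this file line by line during preprocessing (the function name is illustrative; the project's actual loader lives in `dataset/lm_dataset.py`):

```python
import json

def iter_pretrain_texts(path):
    # Each non-empty line is one JSON object with a single "text" field.
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)["text"]
```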
440 |
441 | ## Ⅲ SFT Data
442 |
443 | The [Jiangshu (匠数) large-model SFT dataset](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
444 | "is a complete, uniformly formatted, and safe resource for large-model training and research.
445 | It collects and curates a large number of open datasets from public sources on the web, with unified formatting and data cleaning,
446 | comprising a Chinese dataset of 10M samples and an English dataset of 2M samples."
447 | That is the official description. The downloaded files total roughly 4B tokens, certainly suitable as SFT data for a Chinese LLM.
448 | However, the officially provided data format is messy, and using all of it for SFT would be too costly.
449 | I cleaned the official dataset a second time, removing entries with symbol pollution and noise; as before, only content with total length `<512`
450 | was kept, the hope being that this stage's large volume of dialogue fills in knowledge the pretraining stage lacks.
451 | The exported file is `sft_512.jsonl` (~7.5GB).
452 |
453 | The [Magpie-SFT dataset](https://www.modelscope.cn/organization/Magpie-Align)
454 | collects ~1M high-quality conversations from Qwen2/2.5; I cleaned this data further and exported the portion with total length `<2048` as `sft_2048.jsonl` (~9GB)
455 | and the portion with length `<1024` as `sft_1024.jsonl` (~5.5GB). Doing SFT directly on a large model's dialogue data is a form of "black-box distillation".
456 |
457 | Further cleaning the SFT data from the two steps above (keeping only content with a high proportion of Chinese characters) and filtering for conversations of length `<512` yields `sft_mini_512.jsonl` (~1.2GB).
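That final filter can be sketched as follows; the 50% CJK-character threshold is an assumed value for illustration (the actual cleaning scripts are not part of this repo):

```python
# Keep a sample only if it is short enough and mostly Chinese.
# The 0.5 CJK-ratio threshold is an assumption, not the project's exact value.
def keep_sample(text, min_cjk_ratio=0.5, max_len=512):
    if not text or len(text) >= max_len:
        return False
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    return cjk / len(text) >= min_cjk_ratio
```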
458 |
459 | The data format of all SFT files `sft_X.jsonl` is:
460 |
461 | ```json
462 | {
463 | "conversations": [
464 | {"role": "user", "content": "你好"},
465 | {"role": "assistant", "content": "你好!"},
466 | {"role": "user", "content": "再见"},
467 | {"role": "assistant", "content": "再见!"}
468 | ]
469 | }
470 | ```
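During SFT, each `conversations` record is flattened into a single token stream. With the `<|im_start|>`/`<|im_end|>` special tokens the tokenizer uses, the rendered text looks roughly like the sketch below (a simplification; the authoritative template is the `chat_template` inside `model/tokenizer_config.json`):

```python
# Flatten a "conversations" record into a ChatML-style training string.
# A simplified sketch; the real template lives in tokenizer_config.json.
def render_chatml(conversations):
    return "".join(
        f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>\n"
        for turn in conversations
    )
```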
471 |
472 | ## Ⅳ RLHF Data
473 |
474 | The [Magpie-DPO dataset](https://www.modelscope.cn/datasets/Magpie-Align/MagpieLM-DPO-Data-v0.1)
475 | contains about 200k preference pairs (all English) generated from Llama3.1-70B/8B; it can be used to train a reward model and optimize response quality to better align with human preferences.
476 | Here, entries with total length `<3000` are repackaged as `dpo.jsonl` (~0.9GB), containing `chosen` and `rejected` fields: `chosen`
477 | is the preferred response and `rejected` is the rejected one.
478 |
479 | The data format of `dpo.jsonl` is:
480 |
481 | ```json
482 | {
483 | "chosen": [
484 | {"content": "Q", "role": "user"},
485 | {"content": "good answer", "role": "assistant"}
486 | ],
487 | "rejected": [
488 | {"content": "Q", "role": "user"},
489 | {"content": "bad answer", "role": "assistant"}
490 | ]
491 | }
492 | ```
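The DPO training that consumes these pairs compares the policy's log-probabilities of `chosen` vs. `rejected` against a frozen reference model. Reduced to per-sequence log-prob scalars, the objective is the toy sketch below (`trainer/train_dpo.py` computes this over token-level log-probs with PyTorch tensors):

```python
import math

# -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))
# pi_* / ref_* are sequence log-probs under the policy / reference model.
def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

The loss shrinks as the policy widens its preference for `chosen` over `rejected` relative to the reference model.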
493 |
494 | ## Ⅴ Reasoning Data
495 |
496 | Admittedly, in February 2025 nothing burned hotter than DeepSeek...
497 | It also sparked my keen interest in RL-guided reasoning models; I have already reproduced R1-Zero with Qwen2.5.
498 | If time permits and it works (though 99% likely the base model's capability is insufficient), I will later update MiniMind with a reasoning model trained via RL rather than distillation.
499 | With limited time, the fastest low-cost route remains direct (black-box) distillation.
500 | But R1 proved too popular to resist: within days, several R1 distillation datasets appeared, such as [R1-Llama-70B](https://www.modelscope.cn/datasets/Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B), [R1-Distill-SFT](https://www.modelscope.cn/datasets/AI-ModelScope/R1-Distill-SFT),
501 | [Alpaca-Distill-R1](https://huggingface.co/datasets/shareAI/Alpaca-Distill-R1-ZH),
502 | and [deepseek_r1_zh](https://huggingface.co/datasets/jinliuxi/deepseek_r1_zh), though purely Chinese data may be scarce.
503 | In the end I merged them and exported `r1_mix_1024.jsonl`, whose data format matches `sft_X.jsonl`.
504 |
505 | ## Ⅵ 更多数据集
506 |
507 | 目前已经有[HqWu-HITCS/Awesome-Chinese-LLM](https://github.com/HqWu-HITCS/Awesome-Chinese-LLM)
508 | 在收集和梳理中文LLM相关的开源模型、应用、数据集及教程等资料,并持续更新这方面的最新进展。全面且专业,Respect!
509 |
510 | ---
511 |
512 | ## Ⅶ MiniMind训练数据集
513 |
514 | > [!NOTE]
515 | > 2025-02-05后,开源MiniMind最终训练所用的所有数据集,因此无需再自行预处理大规模数据集,避免重复性的数据处理工作。
516 |
517 | MiniMind训练数据集下载地址: [ModelScope](https://www.modelscope.cn/datasets/gongjy/minimind_dataset/files) | [HuggingFace](https://huggingface.co/datasets/jingyaogong/minimind_dataset/tree/main)
518 |
519 | > 无需全部clone,可单独下载所需的文件
520 |
521 | 将下载的数据集文件放到`./dataset/`目录下(✨为推荐的必须项)
522 |
523 | ```bash
524 | ./dataset/
525 | ├── dpo.jsonl (909MB)
526 | ├── lora_identity.jsonl (22.8KB)
527 | ├── lora_medical.jsonl (34MB)
528 | ├── pretrain_hq.jsonl (1.6GB, ✨)
529 | ├── r1_mix_1024.jsonl (340MB)
530 | ├── sft_1024.jsonl (5.6GB)
531 | ├── sft_2048.jsonl (9GB)
532 | ├── sft_512.jsonl (7.5GB)
533 | ├── sft_mini_512.jsonl (1.2GB, ✨)
534 | └── tokenizer_train.jsonl (1GB)
535 | ```
536 |
537 |
538 | 注:各数据集简介
539 |
540 | * `dpo.jsonl` --RLHF阶段数据集
541 | * `lora_identity.jsonl` --自我认知数据集(例如:你是谁?我是minimind...),推荐用于lora训练(亦可用于全参SFT,勿被名字局限)
542 | * `lora_medical.jsonl` --医疗问答数据集,推荐用于lora训练(亦可用于全参SFT,勿被名字局限)
543 | * `pretrain_hq.jsonl`✨ --预训练数据集,整合自匠数科技
544 | * `r1_mix_1024.jsonl` --DeepSeek-R1-1.5B蒸馏数据,每条数据字符最大长度为1024(因此训练时设置max_seq_len=1024)
545 | * `sft_1024.jsonl` --整合自Qwen2.5蒸馏数据(是sft_2048的子集),每条数据字符最大长度为1024(因此训练时设置max_seq_len=1024)
546 | * `sft_2048.jsonl` --整合自Qwen2.5蒸馏数据,每条数据字符最大长度为2048(因此训练时设置max_seq_len=2048)
547 | * `sft_512.jsonl` --整合自匠数科技SFT数据,每条数据字符最大长度为512(因此训练时设置max_seq_len=512)
548 | * `sft_mini_512.jsonl`✨ --极简整合自匠数科技SFT数据+Qwen2.5蒸馏数据(用于快速训练Zero模型),每条数据字符最大长度为512(因此训练时设置max_seq_len=512)
549 | * `tokenizer_train.jsonl` --均来自于`匠数大模型数据集`,这部分数据相对次要,(不推荐自己重复训练tokenizer,理由如上)如需自己训练tokenizer可以自由选择数据集。
550 |
551 |
552 |
553 |
554 | 
555 |
556 |
557 | 说明 & 推荐训练方案
558 |
559 | * MiniMind2 Series均经过共约20GB语料训练,大约4B tokens,即对应上面的数据组合训练结果(开销:💰💰💰💰💰💰💰💰,效果:😊😊😊😊😊😊)
560 |
561 | * 想要最快速度从0实现Zero模型,推荐使用`pretrain_hq.jsonl` + `sft_mini_512.jsonl` 的数据组合,具体花销和效果可查看下文表格(开销:💰,效果:😊😊)
562 |
563 | * 推荐具备一定算力资源或更在意效果的朋友可以考虑前者完整复现MiniMind2;仅有单卡GPU或在乎短时间快速复现的朋友强烈推荐后者;
564 |
565 | * 【折中方案】亦可选择例如`sft_mini_512.jsonl`、`sft_1024.jsonl`中等规模数据进行自由组合训练(开销:💰💰💰,效果:😊😊😊😊)。
566 |
567 |
568 |
569 | # 📌 Model Structure
570 |
571 | MiniMind-Dense(和[Llama3.1](https://ai.meta.com/blog/meta-llama-3-1/)一样)使用了Transformer的Decoder-Only结构,跟GPT-3的区别在于:
572 |
573 | * 采用了GPT-3的预标准化方法,也就是在每个Transformer子层的输入上进行归一化,而不是在输出上。具体来说,使用的是RMSNorm归一化函数。
574 | * 用SwiGLU激活函数替代了ReLU,这样做是为了提高性能。
575 | * 像GPT-Neo一样,去掉了绝对位置嵌入,改用了旋转位置嵌入(RoPE),这样在处理超出训练长度的推理时效果更好。
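上面的 RMSNorm 与 SwiGLU 可以用纯 Python 数值草图示意(仅演示计算逻辑,省略了张量与 batch 维度,玩具矩阵为假设值,与仓库实际实现无关):

```python
import math

def rms_norm(x, weight, eps=1e-5):
    # Pre-Norm:在子层输入上除以均方根,再乘以可学习缩放
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

def silu(v):
    return v / (1.0 + math.exp(-v))

def matvec(w, v):
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def swiglu_ffn(x, w_gate, w_up, w_down):
    # FFN(x) = W_down( SiLU(W_gate·x) ⊙ (W_up·x) )
    gate = [silu(g) for g in matvec(w_gate, x)]
    up = matvec(w_up, x)
    return matvec(w_down, [g * u for g, u in zip(gate, up)])

x = [1.0, -2.0, 3.0]
y = rms_norm(x, [1.0, 1.0, 1.0])  # 归一化后均方值约为 1

# 玩具尺寸:3 维输入 -> 2 维 hidden -> 3 维输出
w_gate = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]
w_up = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
w_down = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
z = swiglu_ffn(x, w_gate, w_up, w_down)
```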
576 |
577 | ---
578 |
579 | MiniMind-MoE模型的结构基于Llama3,并引入了[Deepseek-V2/3](https://arxiv.org/pdf/2405.04434)中的MixFFN混合专家模块。
580 |
581 | * DeepSeek-V2在前馈网络(FFN)方面,采用了更细粒度的专家分割和共享的专家隔离技术,以提高Experts的效果。
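「细粒度路由专家 + 恒被激活的共享专家」的前向逻辑可以用如下草图示意(纯 Python 数值演示,门控分数与专家函数均为假设的玩具值,非实际实现):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(x, route_experts, shared_experts, gate_logits, top_k=2):
    # 路由专家:按门控分数取 top-k 并加权求和;共享专家:对每个 token 恒被激活
    scores = softmax(gate_logits)
    topk = sorted(range(len(scores)), key=lambda i: -scores[i])[:top_k]
    norm = sum(scores[i] for i in topk)
    out = [0.0] * len(x)
    for i in topk:
        for j, v in enumerate(route_experts[i](x)):
            out[j] += scores[i] / norm * v
    for e in shared_experts:
        for j, v in enumerate(e(x)):
            out[j] += v
    return out

double = lambda x: [2.0 * v for v in x]
half = lambda x: [0.5 * v for v in x]
# 4 个路由专家 + 1 个共享专家,门控几乎只选中 0 号专家
y = moe_forward([1.0, 1.0], [double, half, double, half], [half],
                gate_logits=[3.0, 0.0, 0.0, 0.0], top_k=1)
```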
582 |
583 | ---
584 |
585 | MiniMind的整体结构与Llama3一致,只是在RoPE计算、推理函数和FFN层的代码上做了一些小调整。
586 | 其结构如下图(重绘版):
587 |
588 | 
589 | 
590 |
591 | 修改模型配置见[./model/model_minimind.py](./model/model_minimind.py)。
592 | 参考模型参数版本见下表:
593 |
594 | | Model Name | params | len_vocab | rope_theta | n_layers | d_model | kv_heads | q_heads | share+route |
595 | |-------------------|--------|-----------|------------|----------|---------|----------|---------|-------------|
596 | | MiniMind2-Small | 26M | 6400 | 1e6 | 8 | 512 | 2 | 8 | - |
597 | | MiniMind2-MoE | 145M | 6400 | 1e6 | 8 | 640 | 2 | 8 | 1+4 |
598 | | MiniMind2 | 104M | 6400 | 1e6 | 16 | 768 | 2 | 8 | - |
599 | | minimind-v1-small | 26M | 6400 | 1e4 | 8 | 512 | 8 | 16 | - |
600 | | minimind-v1-moe | 4×26M | 6400 | 1e4 | 8 | 512 | 8 | 16 | 1+4 |
601 | | minimind-v1 | 108M | 6400 | 1e4 | 16 | 768 | 8 | 16 | - |
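表中的参数量可以按下面的草图粗略估算(假设:输入/输出 embedding 共享权重、注意力为 GQA、SwiGLU 的 hidden 维度取 8/3·d 并向上取 64 的倍数;这些假设与实际实现可能略有出入):

```python
def estimate_params(d_model, n_layers, vocab=6400, q_heads=8, kv_heads=2):
    head_dim = d_model // q_heads
    # GQA 注意力:Wq、Wo 为 d×d,Wk、Wv 压缩为 d×(kv_heads·head_dim)
    attn = 2 * d_model * d_model + 2 * d_model * kv_heads * head_dim
    # SwiGLU FFN:gate/up/down 三个矩阵,hidden 假设为 8/3·d 向上取 64 的倍数
    hidden = ((int(d_model * 8 / 3) + 63) // 64) * 64
    ffn = 3 * d_model * hidden
    embed = vocab * d_model  # 输入输出 embedding 共享权重
    return n_layers * (attn + ffn) + embed

print(round(estimate_params(512, 8) / 1e6, 1))   # ≈ 26M 档
print(round(estimate_params(768, 16) / 1e6, 1))  # ≈ 104M 档
```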
602 |
603 | # 📌 Experiment
604 |
605 | ## Ⅰ 训练开销
606 |
607 | - **时间单位**:小时 (h)。
608 | - **成本单位**:人民币 (¥);7¥ ≈ 1美元。
609 | - **3090 租卡单价**:≈1.3¥/h(可自行参考实时市价)。
610 | - **参考标准**:表格仅实测 `pretrain` 和 `sft_mini_512` 两个数据集的训练时间,其它耗时根据数据集大小估算(可能存在些许出入)。
611 |
612 | > 基于 3090 (单卡)成本计算
613 |
614 | | Model Name | params | pretrain | sft_mini_512 | sft_512 | sft_1024 | sft_2048 | RLHF |
615 | |-----------------|--------|------------------|------------------|---------------|-------------------|------------------|---------------|
616 | | MiniMind2-Small | 26M | ≈1.1h ≈1.43¥ | ≈1h ≈1.3¥ | ≈6h ≈7.8¥ | ≈4.58h ≈5.95¥ | ≈7.5h ≈9.75¥ | ≈1h ≈1.3¥ |
617 | | MiniMind2 | 104M | ≈3.9h ≈5.07¥ | ≈3.3h ≈4.29¥ | ≈20h ≈26¥ | ≈15h ≈19.5¥ | ≈25h ≈32.5¥ | ≈3h ≈3.9¥ |
618 |
619 | ---
620 |
621 |
622 | 训练开销总结&预测
623 |
624 |
625 | > MiniMind2-Small参数
626 | >> `pretrain_hq`+`sft_mini_512`数据集
627 | 单卡3090 (1 epoch) + 2.1小时 + 花费2.73元人民币
628 | 即可从0训练出MiniMind-Zero-0.025B模型!!!
629 |
630 | > MiniMind2-Small参数
631 | >> `pretrain_hq`+`sft_512`+`sft_2048`+`dpo`数据集
632 | 单卡3090 (2 epochs) + 大约38.16小时 + 花费49.61元人民币
633 | 即可从0训练出MiniMind2-Small-0.025B模型!!!
634 |
635 | > MiniMind2参数
636 | >> `pretrain_hq`+`sft_512`+`sft_2048`+`dpo`数据集
637 | 单卡3090 (2 epochs) + 大约122小时 + 花费158.6元人民币
638 | 即可从0训练出MiniMind2-0.1B模型!!!
639 |
640 |
641 |
642 |
643 |
644 | ✨基于单卡NVIDIA 3090的`MiniMind-Zero`从0训练仅需`2小时` + `3块钱`,实现ChatBot效果!
645 |
646 | ✨PS:若采用8卡4090训练,总用时甚至可以压缩到10分钟以内!(由于时间更短,花费同样3元左右,与单卡成本相当)
647 |
648 | ✨以极低极低的门槛,实现人人可玩的大模型自由!这正是MiniMind系列的诞生初衷!
649 |
650 | ✨仅价值`3块钱`成本的`MiniMind-Zero`并不是噱头!Chat测试:
651 |
652 | ```text
653 | 👶: 请介绍一下自己。
654 | 🤖️: 作为人工智能,我没有实际的生活,也没有自我意识,所以没有自己的生活。我被设计成能够帮助用户解答问题、提供信息、进行对话等。我的设计和功能是由计算机科学、机器学习、人工智能技术和算法所构建的,我通过编程和算法实现。
655 |
656 | 👶: 推荐一些杭州的特色美食吧。
657 | 🤖️: 杭州有很多美味的特色美食,比如鸡肉串、鳗鱼头、水煮鱼、豆腐脑等。这些美食都非常美味,有着独特的风味和口感,绝对是值得一试的美食。
658 |
659 | 👶: 请为我讲解“大语言模型”这个概念。
660 | 🤖️: 大语言模型是一种人工智能技术,它使用大量的文本数据来训练模型,然后在不断的迭代中不断优化模型。它的目标是根据输入的数据来生成符合特定需求的文本,这些文本可以是任何你感兴趣的主题。
661 |
662 | 👶: Introduce the history of the United States, please.
663 | 🤖️: 您提到的“Introok's the believeations of theument." 这个名字来源于中国古代的"groty of of the change."
664 | ```
665 |
666 | 极速且初具效果,甚至仍可进一步压缩、筛选出更小更优质的训练数据。
667 | Zero模型权重保存为 `full_sft_512_zero.pth`(见下文MiniMind模型文件链接),如有兴趣可下载检验此模型效果。
668 |
669 |
670 | ---
671 |
672 | ## Ⅱ 主要训练步骤
673 |
674 | > 所有训练脚本均 `cd ./trainer` 目录执行
675 |
676 | ### **1. 预训练(Pretrain)**:
677 |
678 | LLM首先要学习的并非直接与人交流,而是让网络参数中充满知识的墨水,“墨水” 理论上喝的越饱越好,产生大量的对世界的知识积累。
679 | 预训练就是让Model先埋头苦学大量基本的知识,例如从Wiki百科、新闻、书籍整理大规模的高质量训练数据。
680 | 这个过程是“无监督”的,即人类不需要在过程中做任何“有监督”的校正,而是由模型自己从大量文本中总结规律学习知识点。
681 | 模型此阶段目的只有一个:**学会词语接龙**。例如我们输入“秦始皇”四个字,它可以接龙“是中国的第一位皇帝”。
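「词语接龙」对应的训练目标就是下一个 token 的交叉熵损失,可用玩具例子示意(3 词的迷你词表为假设,仅演示损失的计算方向):

```python
import math

def next_token_loss(logits, target_id):
    # softmax 后取目标 token 的负对数概率(交叉熵)
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    log_prob = (logits[target_id] - m) - math.log(sum(exps))
    return -log_prob

# 假想的 3 词词表:["秦始皇", "是", "苹果"];当前输入"秦始皇",正确接龙是"是"
loss_good = next_token_loss([1.0, 5.0, 0.5], target_id=1)  # 接对了,损失小
loss_bad = next_token_loss([1.0, 5.0, 0.5], target_id=2)   # 接错了,损失大
```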
682 |
683 | ```bash
684 | torchrun --nproc_per_node 1 train_pretrain.py # 1即为单卡训练,可根据硬件情况自行调整 (设置>=2)
685 | # or
686 | python train_pretrain.py
687 | ```
688 |
689 | > 训练后的模型权重文件默认每隔`100步`保存为: `pretrain_*.pth`(*
690 | > 为模型具体dimension,每次保存时新文件会覆盖旧文件)
691 |
692 | ### **2. 有监督微调(Supervised Fine-Tuning)**:
693 |
694 | 经过预训练,LLM此时已经掌握了大量知识,然而此时它只会无脑地词语接龙,还不会与人聊天。
695 | SFT阶段就需要把半成品LLM施加一个自定义的聊天模板进行微调。
696 | 例如模型遇到这样的模板【问题->回答,问题->回答】后不再无脑接龙,而是意识到这是一段完整的对话结束。
697 | 称这个过程为指令微调,就如同让已经学富五车的「牛顿」先生适应21世纪智能手机的聊天习惯,学习屏幕左侧是对方消息,右侧是本人消息这个规律。
698 | 在训练时,MiniMind的指令和回答长度被截断在512,是为了节省显存空间。就像我们学习时,会先从短的文章开始,当学会写作200字作文后,800字文章也可以手到擒来。
699 | 在需要长度拓展时,只需要准备少量的2k/4k/8k长度对话数据进行进一步微调即可(此时最好配合RoPE-NTK的基准插值)。
700 | > 在推理时通过调整RoPE线性插值,实现免训练长度外推到2048及以上将会很方便。
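两种免训练外推思路可以用如下草图示意(公式为社区常见做法:线性插值压缩位置下标,NTK-aware 放大 `rope_theta` 基数;`scale`、`head_dim`取值仅作演示,非 MiniMind 实际配置):

```python
def linear_interp_pos(m, scale):
    # 线性插值:把位置下标 m 压缩为 m/scale,使外推位置落回训练范围
    return m / scale

def ntk_scaled_base(base, scale, head_dim):
    # NTK-aware:base' = base * scale^(dim/(dim-2)),等效拉长低频分量的波长
    return base * scale ** (head_dim / (head_dim - 2))

new_pos = linear_interp_pos(2048, scale=4)      # 训练长度 512 内
new_base = ntk_scaled_base(1e6, scale=4, head_dim=64)
```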
701 |
702 | ```bash
703 | torchrun --nproc_per_node 1 train_full_sft.py
704 | # or
705 | python train_full_sft.py
706 | ```
707 |
708 | > 训练后的模型权重文件默认每隔`100步`保存为: `full_sft_*.pth`(*
709 | > 为模型具体dimension,每次保存时新文件会覆盖旧文件)
710 |
711 | ## Ⅲ 其它训练步骤
712 |
713 | > 所有训练脚本均 `cd ./trainer` 目录执行
714 |
715 | ### **3. 人类反馈强化学习(Reinforcement Learning from Human Feedback, RLHF)**
716 |
717 | 在前面的训练步骤中,模型已经具备了基本的对话能力,但是这样的能力完全基于单词接龙,缺少正反样例的激励。
718 | 模型此时尚未知什么回答是好的,什么是差的。我们希望它能够更符合人的偏好,降低让人类不满意答案的产生概率。
719 | 这个过程就像是让模型参加新的培训,把优秀员工的回复作为正例、消极员工的回复作为反例,学习如何更好地回复。
720 | 此处使用的是RLHF系列之-直接偏好优化(Direct Preference Optimization, DPO)。
721 | 与PPO(Proximal Policy Optimization)这种需要奖励模型、价值模型的RL算法不同;
722 | DPO通过推导PPO奖励模型的显式解,把在线奖励模型换成离线数据,Ref模型输出可以提前保存。
723 | DPO性能几乎不变,只用跑 actor_model 和 ref_model 两个模型,大大节省显存开销,并增加训练稳定性。
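DPO 损失的核心公式 `-log σ(β·[(logπθ(yw)-logπref(yw)) - (logπθ(yl)-logπref(yl))])` 可以用标量草图示意(四个对数概率均为假设值,仅演示损失随偏好差的变化方向):

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    # 策略模型相对 ref 更偏好 chosen 时,logits > 0,损失低于 ln2
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# 假设的序列对数概率:策略模型把 chosen 学得比 ref 更高、rejected 更低
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0)
```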
724 |
725 | > 注:RLHF训练步骤**并非必须**,此步骤难以提升模型“智力”而通常仅用于提升模型的“礼貌”,有利(符合偏好、减少有害内容)也有弊(样本收集昂贵、反馈偏差、多样性损失)。
726 |
727 | ```bash
728 | torchrun --nproc_per_node 1 train_dpo.py
729 | # or
730 | python train_dpo.py
731 | ```
732 |
733 | > 训练后的模型权重文件默认每隔`100步`保存为: `rlhf_*.pth`(*
734 | > 为模型具体dimension,每次保存时新文件会覆盖旧文件)
735 |
736 | ### **4. 知识蒸馏(Knowledge Distillation, KD)**
737 |
738 | 在前面的所有训练步骤中,模型已经完全具备了基本能力,通常可以学成出师了。
739 | 而知识蒸馏可以进一步优化模型的性能和效率,所谓知识蒸馏,即学生模型面向教师模型学习。
740 | 教师模型通常是经过充分训练的大模型,具有较高的准确性和泛化能力。
741 | 学生模型是一个较小的模型,目标是学习教师模型的行为,而不是直接从原始数据中学习。
742 | 在SFT学习中,模型的目标是拟合词Token分类硬标签(hard labels),即词表中真实的类别标签(词表大小为6400,类别即 0 ~ 6399)。
743 | 在知识蒸馏中,教师模型的softmax概率分布被用作软标签(soft labels)。小模型仅学习软标签,并使用KL-Loss来优化模型的参数。
744 | 通俗地说,SFT直接学习老师给的解题答案。而KD过程相当于“打开”老师聪明的大脑,尽可能地模仿老师“大脑”思考问题的神经元状态。
745 | 例如,当老师模型计算`1+1=2`这个问题的时候,最后一层神经元a状态为0,神经元b状态为100,神经元c状态为-99...
746 | 学生模型通过大量数据,学习教师模型大脑内部的运转规律。这个过程即称之为:知识蒸馏。
747 | 知识蒸馏的目的只有一个:让小模型体积更小的同时效果更好。
748 | 然而随着LLM诞生和发展,模型蒸馏一词被广泛滥用,从而产生了“白盒/黑盒”知识蒸馏两个派别。
749 | GPT-4这种闭源模型,由于无法获取其内部结构,因此只能面向它所输出的数据学习,这个过程称之为黑盒蒸馏,也是大模型时代最普遍的做法。
750 | 黑盒蒸馏与SFT过程完全一致,只不过数据是从大模型的输出收集,因此只需要准备数据并且进一步FT即可。
751 | 注意更改被加载的基础模型为`full_sft_*.pth`,即基于微调模型做进一步的蒸馏学习。
752 | `./dataset/sft_1024.jsonl`与`./dataset/sft_2048.jsonl` 均收集自qwen2.5-7/72B-Instruct大模型,可直接用于SFT以获取Qwen的部分行为。
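其中「软标签 + KL 散度」的白盒蒸馏损失可以用玩具例子示意(logits 为假设值,仅演示 KL 损失的性质,非 `train_distillation.py` 的实际实现):

```python
import math

def softmax(logits, temp=1.0):
    m = max(logits)
    exps = [math.exp((v - m) / temp) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_kl_loss(student_logits, teacher_logits, temp=1.0):
    # KL(teacher || student):学生拟合教师的 softmax 软标签分布
    p_t = softmax(teacher_logits, temp)
    p_s = softmax(student_logits, temp)
    return sum(pt * (math.log(pt) - math.log(ps)) for pt, ps in zip(p_t, p_s))

same = kd_kl_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])  # 分布一致时 KL = 0
diff = kd_kl_loss([0.0, 0.0, 0.0], [2.0, 0.5, -1.0])   # 学生未拟合教师,KL > 0
```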
753 |
754 | ```bash
755 | # 注意需要更改train_full_sft.py数据集路径,以及max_seq_len
756 | torchrun --nproc_per_node 1 train_full_sft.py
757 | # or
758 | python train_full_sft.py
759 | ```
760 |
761 | > 训练后的模型权重文件默认每隔`100步`同样保存为: `full_sft_*.pth`(*为模型具体dimension,每次保存时新文件会覆盖旧文件)
762 |
763 | 此处应当着重介绍MiniMind实现的白盒蒸馏代码`train_distillation.py`,由于MiniMind同系列本身并不存在强大的教师模型,因此白盒蒸馏代码仅作为学习参考。
764 |
765 | ```bash
766 | torchrun --nproc_per_node 1 train_distillation.py
767 | # or
768 | python train_distillation.py
769 | ```
770 |
771 | ### **5. LoRA (Low-Rank Adaptation)**
772 |
773 | LoRA是一种高效的参数高效微调(Parameter-Efficient Fine-Tuning, PEFT)方法,旨在通过低秩分解的方式对预训练模型进行微调。
774 | 相比于全参数微调(Full Fine-Tuning),LoRA 只需要更新少量的参数。
775 | LoRA 的核心思想是:在模型的权重矩阵中引入低秩分解,仅对低秩部分进行更新,而保持原始预训练权重不变。
776 | 代码可见`./model/model_lora.py`和`train_lora.py`,完全从0实现LoRA流程,不依赖第三方库的封装。
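LoRA 的前向计算 `h = W·x + (α/r)·B(A·x)` 可以用纯 Python 草图示意(矩阵均为玩具数值;冻结 W,只训练低秩的 A、B,实际的张量实现见仓库代码):

```python
def matvec(w, v):
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def lora_forward(W, A, B, x, r=1, alpha=2):
    # base 分支走冻结的预训练权重,delta 分支走低秩分解 B(A·x)
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))  # B: d_out×r, A: r×d_in
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # 冻结的 2×2 权重(玩具值)
A = [[1.0, 1.0]]              # r=1
B = [[0.5], [0.0]]
h = lora_forward(W, A, B, [1.0, 2.0])
# B 初始化为全 0 时,LoRA 分支不改变基础模型的输出
h0 = lora_forward(W, A, [[0.0], [0.0]], [1.0, 2.0])
```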
777 |
778 | ```bash
779 | torchrun --nproc_per_node 1 train_lora.py
780 | # or
781 | python train_lora.py
782 | ```
783 |
784 | > 训练后的模型权重文件默认每隔`100步`保存为: `lora_xxx_*.pth`(*
785 | > 为模型具体dimension,每次保存时新文件会覆盖旧文件)
786 |
787 |
788 | 非常多的人困惑,如何使模型学会自己私有领域的知识?如何准备数据集?如何迁移通用领域模型打造垂域模型?
789 | 这里举几个例子:通用模型往往欠缺医学领域知识,可以尝试在原有模型基础上注入领域知识,以获得更好的性能。
790 | 同时,我们通常不希望学会领域知识的同时损失原有基础模型的其它能力,此时LoRA可以很好的改善这个问题。
791 | 只需要准备如下格式的对话数据集放置到`./dataset/lora_xxx.jsonl`,启动 `python train_lora.py`
792 | 训练即可得到`./out/lora/lora_xxx.pth`新模型权重。
793 |
794 | **医疗场景**
795 |
796 | ```
797 | {"conversations": [{"role": "user", "content": "请问颈椎病的人枕头多高才最好?"}, {"role": "assistant", "content": "颈椎病患者选择枕头的高度应该根据..."}]}
798 | {"conversations": [{"role": "user", "content": "请问xxx"}, {"role": "assistant", "content": "xxx..."}]}
799 | ```
800 |
801 | **自我认知场景**
802 |
803 | ```
804 | {"conversations": [{"role": "user", "content": "你叫什么名字?"}, {"role": "assistant", "content": "我叫minimind..."}]}
805 | {"conversations": [{"role": "user", "content": "你是谁"}, {"role": "assistant", "content": "我是..."}]}
806 | ```
807 |
808 | 此时【基础模型+LoRA模型】即可获得医疗场景模型增强的能力,相当于为基础模型增加了LoRA外挂,这个过程并不损失基础模型的本身能力。
809 | 我们可以通过`eval_model.py`进行模型评估测试。
810 |
811 | ```bash
812 | # 注意:model_mode即选择基础模型的类型,这和train_lora是基于哪个模型训练的相关,确保统一即可。
813 | python eval_model.py --lora_name 'lora_medical' --model_mode 2
814 | ```
815 |
816 | **小测试**
817 |
818 | ```
819 | 👶: 我最近经常感到头晕,可能是什么原因?
820 | 🤖️: 头晕可能是由于多种原因,包括但不限于:心脏疾病、眼睛损伤、过敏反应、皮肤问题、过敏反应、压力、焦虑、疲劳、药物副作用、性功能障碍或者其他...
821 |
822 | ---
823 |
824 | 👶: 你和openai是什么关系?
825 | 🤖️: 我是 MiniMind,一个由 Jingyao Gong 开发的人工智能助手。我通过自然语言处理和算法训练来与用户进行交互。
826 | ```
827 |
828 | PS:只要有所需要的数据集,也可以full_sft全参微调(需要进行通用知识的混合配比,否则过拟合领域数据会让模型变傻,损失通用性)
829 |
830 | ### **6. 训练推理模型 (Reasoning Model)**
831 |
832 | DeepSeek-R1实在太火了,几乎重新指明了未来LLM的新范式。
833 | 论文指出`>3B`的模型经历多次反复的冷启动和RL奖励训练才能获得肉眼可见的推理能力提升。
834 | 最快、最稳妥、最经济的做法(也是最近爆发的各种所谓推理模型几乎都在用的做法)是直接面向数据进行蒸馏训练,
835 | 但由于缺乏技术含量,蒸馏派被RL派瞧不起(hhhh)。
836 | 本人已迅速在Qwen系列1.5B小模型上进行了尝试,很快复现了Zero过程的数学推理能力。
837 | 然而一个遗憾的共识是:参数太小的模型直接通过冷启动SFT+GRPO几乎不可能获得任何推理效果。
838 | MiniMind2第一时间只能坚定不移的选择做蒸馏派,日后基于0.1B模型的RL如果同样取得小小进展会更新此部分的训练方案。
839 |
840 | 做蒸馏需要准备的依然是和SFT阶段同样格式的数据即可,数据集来源已如上文介绍。数据格式例如:
841 |
842 | ```json lines
843 | {
844 | "conversations": [
845 | {
846 | "role": "user",
847 | "content": "你好,我是小芳,很高兴认识你。"
848 | },
849 | {
850 | "role": "assistant",
851 |             "content": "<think>\n你好!我是由中国的个人开发者独立开发的智能助手MiniMind-R1-Lite-Preview,很高兴为您提供服务!\n</think>\n<answer>\n你好!我是由中国的个人开发者独立开发的智能助手MiniMind-R1-Lite-Preview,很高兴为您提供服务!\n</answer>"
852 | }
853 | ]
854 | }
855 | ```
856 |
857 | 推理模型R1的回复模板是:
858 |
859 | ```text
860 | <think>\n思考过程\n</think>\n
861 | <answer>\n最终回答\n</answer>
862 | ```
863 |
864 | 这在GRPO中通过设置规则奖励函数,约束模型输出符合思考标签和回复标签(在冷启动靠前的阶段,奖励值设置应该提高一些)。
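规则奖励函数的一种极简写法如下(假设思考/回答标签为 `<think>`/`<answer>`,奖励数值亦为假设值,并非实际训练配置):

```python
import re

# 回复完整匹配"思考 + 回答"模板则给正奖励,否则为 0
TEMPLATE = re.compile(r"^<think>\n.*?\n</think>\n<answer>\n.*?\n</answer>$", re.S)

def format_reward(response, bonus=1.0):
    return bonus if TEMPLATE.match(response.strip()) else 0.0

good = format_reward("<think>\n先分析问题\n</think>\n<answer>\n答案是42\n</answer>")
bad = format_reward("答案是42")  # 脱离标签约束,得不到格式奖励
```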
865 |
866 | 另一个问题是蒸馏过程虽然和SFT一样,但实验结果是模型难以每次都符合模板规范的回复,即脱离思考和回复标签约束。
867 | 这里的小技巧是增加标记位置token的损失惩罚,详见`train_distill_reason.py`:
868 |
869 | ```text
870 | # 在 sp_ids 对应的位置增加额外的惩罚
871 | ...
872 | loss_mask[sp_ids] = 10 # 惩罚系数
873 | ```
874 |
875 | 另另一个tips:由于推理数据只筛选了`<1024`长度的数据,其中多轮对话和英文数据偏少,
876 | 因此`r1_mix_1024.jsonl`进行了大约10k条多轮对话+英文数据的混合,防止模型遗忘严重。
877 |
878 | 脚本默认基于rlhf后的基模型做推理能力的蒸馏微调,下面直接启动训练即可:
879 |
880 | ```bash
881 | torchrun --nproc_per_node 1 train_distill_reason.py
882 | # or
883 | python train_distill_reason.py
884 | ```
885 |
886 | > 训练后的模型权重文件默认每隔`100步`保存为: `reason_*.pth`(*为模型具体dimension,每次保存时新文件会覆盖旧文件)
887 |
888 | 测试一下:
889 |
890 | ```text
891 | 👶: 请介绍一下自己。
892 | 🤖️: <think>
893 | 嗯,用户让我介绍一下自己。我现在需要解释一下自己。首先,我应该明确用户是谁,是学生、职场人士还是普通人。用户可能是一个学生或者职场人士,所以信息传递应该是帮助学生和职场人士的信息。
894 | 然后,我需要回应用户,这可能包括介绍自己的位置、兴趣爱好、职业经历等信息。用户还可能想了解自己在哪里,比如职业发展、工作经验等。
895 | 我应该注意用词要正式,不要太正式,同时要保持简洁明了。这样用户容易理解,也方便他们查阅相关资料。
896 | 最后,我应该总结一下,让用户知道我是如何帮助他们完成他们的任务的,这样他们有更多关于我的信息。
897 | </think>
898 | <answer>
899 | 我是**“你的名字”,你叫什么名字?
900 | </answer>
901 |
902 | 👶: 你更擅长哪一个学科?
903 | 🤖️: <think>
904 | 您好!我是由中国的个人开发者开发的智能助手MiniMind-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
905 | </think>
906 | <answer>
907 | 您好!我是由中国的个人开发者开发的智能助手MiniMind-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
908 | </answer>
909 | ```
910 |
911 | ---
912 |
913 | ## Ⅳ 模型参数设定
914 |
915 | 📋关于LLM的参数配置,有一篇很有意思的论文[MobileLLM](https://arxiv.org/pdf/2402.14905)做了详细的研究和实验。
916 | Scaling Law在小模型中有自己独特的规律。
917 | 引起Transformer参数成规模变化的参数几乎只取决于`d_model`和`n_layers`。
918 |
919 | * `d_model`↑ + `n_layers`↓ -> 矮胖子
920 | * `d_model`↓ + `n_layers`↑ -> 瘦高个
921 |
922 | 2020年提出Scaling Law的论文认为,训练数据量、参数量以及训练迭代次数才是决定性能的关键因素,而模型架构的影响几乎可以忽视。
923 | 然而似乎这个定律对小模型并不完全适用。
924 | MobileLLM提出架构的深度比宽度更重要,「深而窄」的「瘦长」模型可以学习到比「宽而浅」模型更多的抽象概念。
925 | 例如当模型参数固定在125M或者350M时,30~42层的「狭长」模型明显比12层左右的「矮胖」模型有更优越的性能,
926 | 在常识推理、问答、阅读理解等8个基准测试上都有类似的趋势。
927 | 这其实是非常有趣的发现,因为以往为100M左右量级的小模型设计架构时,几乎没人尝试过叠加超过12层。
928 | 这与MiniMind在训练过程中,模型参数量在`d_model`和`n_layers`之间进行调整实验观察到的效果是一致的。
929 | 然而「深而窄」的「窄」也是有维度极限的,当d_model<512时,词嵌入维度坍塌的劣势非常明显,
930 | 增加的layers并不能弥补词嵌入维度在固定q_head下带来的d_head不足的劣势。
931 | 当d_model>1536时,layers的增加似乎比d_model的优先级更高,更能带来具有“性价比”的参数->效果增益。
932 |
933 | * 因此MiniMind设定small模型dim=512,n_layers=8来获取「极小体积<->更好效果」的平衡。
934 | * 设定dim=768,n_layers=16来获取效果的更大收益,更加符合小模型Scaling-Law的变化曲线。
935 |
936 | 作为参考,GPT3的参数设定见下表:
937 | 
938 |
939 | ---
940 |
941 | ## Ⅴ 训练结果
942 |
943 | MiniMind2 模型训练损失走势(由于数据集在训练后又更新清洗多次,因此Loss仅供参考)
944 |
945 | | models | pretrain (length-512) | sft (length-512) |
946 | |-----------------|----------------------------------------------------|----------------------------------------------------|
947 | | MiniMind2-Small | | |
948 | | MiniMind2 | | |
949 |
950 | ### 训练完成-模型合集
951 |
952 | > 考虑到多人反映百度网盘速度慢,MiniMind2及以后全部使用ModelScope/HuggingFace托管。
953 |
954 | #### ① PyTorch原生模型
955 |
956 | MiniMind2模型权重 ([ModelScope](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch) | [HuggingFace](https://huggingface.co/jingyaogong/MiniMind2-Pytorch))
957 |
958 | MiniMind-V1模型权重 ([百度网盘](https://pan.baidu.com/s/1KUfSzEkSXYbCCBj0Pw-9fA?pwd=6666))
959 |
960 |
961 | Torch文件命名对照
962 |
963 | | Model Name | params | pretrain_model | sft_model | rl_model | reason_model | lora_model |
964 | |-----------------|--------|------------------------|------------------------|--------------------|------------------|--------------------|
965 | | MiniMind2-small | 26M | `pretrain_512.pth` | `full_sft_512.pth` | `rlhf_512.pth` | `reason_512.pth` | `lora_xxx_512.pth` |
966 | | MiniMind2-MoE | 145M | `pretrain_640_moe.pth` | `full_sft_640_moe.pth` | `rlhf_640_moe.pth` | - | - |
967 | | MiniMind2 | 104M | `pretrain_768.pth` | `full_sft_768.pth` | `rlhf_768.pth` | `reason_768.pth` | `lora_xxx_768.pth` |
968 |
969 | | Model Name | params | pretrain_model | 单轮对话sft | 多轮对话sft | rl_model |
970 | |-------------------|--------|------------------------|------------------------------------|-----------------------------------|--------------|
971 | | minimind-v1-small | 26M | `pretrain_512.pth` | `single_chat/full_sft_512.pth` | `multi_chat/full_sft_512.pth` | `rl_512.pth` |
972 | | minimind-v1-moe | 4×26M | `pretrain_512_moe.pth` | `single_chat/full_sft_512_moe.pth` | `multi_chat/full_sft_512_moe.pth` | - |
973 | | minimind-v1 | 108M | `pretrain_768.pth` | `single_chat/full_sft_768.pth` | `multi_chat/full_sft_768.pth` | `rl_768.pth` |
974 |
975 |
976 |
977 | #### ② Transformers模型
978 |
979 | MiniMind系列 ([ModelScope](https://www.modelscope.cn/collections/MiniMind-b72f4cfeb74b47)
980 | | [HuggingFace](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5))
981 |
982 | ---
983 |
984 | # 📌 Eval
985 |
986 | ## Ⅰ RLHF对比SFT篇
987 |
988 | 测试基于`full_sft`和`rlhf`的`MiniMind2系列`模型对比,测试随机种子均固定
989 |
990 |
991 | 问答实测
992 |
993 | [A0] MiniMind2:full_sft_640
994 |
995 | [A1] MiniMind2:rlhf_640
996 |
997 | ```text
998 | [Q]: 你叫什么名字?
999 | [A0]: 我是一个大型语言模型,可以回答各种问题,提供信息和帮助。我可以回答各种问题,提供信息和帮助,帮助你解决各种问题。
1000 | [A1]: 我是一种基于文本的AI模型,我并没有个人经历或情感,所以无法感受和体验情感。我的设计目的是为了帮助用户解决问题、提供信息和进行对话。如果您有任何问题或需要帮助,请随时告诉我!
1001 |
1002 | [Q]: 你更擅长哪一个学科?
1003 | [A0]: 作为一个AI模型,我并没有真正的“擅长”这个学科。我被设计成能够提供信息、解答问题、进行对话、执行任务和与用户进行对话。不过,我可以提供信息、解答问题、进行对话、提供建议、处理问题、进行对话、提供建议、学习新知识、编写代码、回答问题、提供建议等多种学科的功能。如果你有具体的学科需求或主题,欢迎提供详细信息。
1004 | [A1]: 作为AI,我并没有个人能力或兴趣,但我可以提供一些关于数学、科学、计算机科学、语言学、心理学等学科的知识。如果你对数学感兴趣,我可以提供一些基础知识,例如代数、几何、概率、统计学等。如果你需要更深入的学习,我也可以提供相关的在线课程、书籍、论文等资源。请告诉我你具体需要什么帮助,我会尽力提供帮助。
1005 |
1006 | [Q]: 鲁迅的《狂人日记》是如何批判封建礼教的?
1007 | [A0]: 鲁迅的《狂人日记》是一部讲述了鲁迅对人类价值和道德责任的倡导。在这部作品中,鲁迅通过细腻的笔触,探讨了人性、道德、正义、牺牲与责任等深刻的主题。以下是鲁迅在《狂人日记》中如何批判封建礼教的几个方面:1. **社会现实与道德观念**:鲁迅通过讲述鲁迅的故事,探讨了社会的不平等、权力的流动、人性的演变等社会问题。他认为,社会不平等不仅仅是不公正的,更是每个人在社会中的不公正。他呼吁社会各界共同努力,通过教育、宣传、法律等手段,为构建和谐社会做出贡献。2. **道德与责任**:鲁迅通过讲述鲁迅的故事,展现了道德和责任的重要性。他提倡通过道德教育来培养人们的道德品质,同时也强调了道德和责任的重要性。鲁迅通过讲述鲁迅的故事,揭示了社会的不平等和人性的复杂性,体现了对人性的深刻理解。3. **道德与责任**:鲁迅通过讲述鲁迅的故事,强调了道德责任的重要性。他提倡通过自己的行为来体现对社会的尊重和对人性的深刻理解。鲁迅通过讲述鲁迅的故事,强调了道德和责任的重要性。4. **道德与道德**:鲁迅通过讲述鲁迅的故事,揭示了道德和责任的真正意义。鲁迅通过讲述鲁迅的故事,揭示了道德和责任的本质,强调了道德和责任的重要性。5. **个人与社会**:鲁迅通过讲述鲁迅的故事,揭示了个人与社会之间的复杂关系。他呼吁社会各界共同努力,通过教育和宣传,为构建和谐社会做出贡献。鲁迅的《狂人日记》通过其丰富的故事和深刻的道德寓言,传达了关于人性、道德、正义、道德以及个人与社会的复杂关系。这部作品不仅在文学价值上取得了巨大成就,更在思想层面上启发了后来的文学创作,对后世产生了深远的影响。
1008 | [A1]: 鲁迅的《狂人日记》是一部描绘战争和自由精神的作品,它批判了封建社会的种种问题和冲突,反映了社会的不平等和人性本质。在这部作品中,鲁迅通过对封建社会的诸多现象进行批判,如“弗雷德里克·约翰逊的“鹰迪伦”、“纳扎特的”、“贝克汉姆”、“乔治·威尔逊”等,对封建社会的种种问题进行了批判,强调了封建社会的不平等、人性本质以及人性本质。鲁迅通过他的“鹰迪伦”、“纳扎特的”、“贝克汉姆”、“乔治·威尔逊”等,对封建社会的种种问题进行了批判。在小说中,鲁迅通过对封建社会的诸多现象进行批判,揭示了封建社会的腐败、人性本质以及人性本质的复杂性。这种批判体现了对封建社会的尊重和对人性的复杂性,同时也反映了现代社会中普遍存在的不平等和人性本质。鲁迅的《狂人日记》不仅是一部关于封建社会的小说,也是一部具有深刻社会意义的作品,对后世的审判和批判起到了关键作用。
1009 |
1010 | [Q]: 我咳嗽已经持续了两周,需要去医院检查吗?
1011 | [A0]: 咳嗽确实需要一定的检查,但如果您的咳嗽持续时间过长,或者有其他不适症状(如持续咳嗽、呼吸困难等),建议尽快就医。医生会根据您的症状、体检结果和可能的影像学检查(如X光、MRI或CT扫描)来诊断您的症状。在等待就医期间,您可以尝试以下方法来帮助缓解症状:1. **休息**:确保您有足够的睡眠。2. **保持水分**:脱水可能会加重咳嗽。3. **热敷**:使用热敷袋或热毛巾敷在疼痛的部位,有助于缓解喉咙痛。4. **避免刺激**:避免使用可能刺激喉咙的化学物质,如某些化学物质。5. **保持室内湿润**:确保您的卧室有良好的通风,可以使用加湿器帮助保持空气湿度。6. **保持室内湿度**:使用加湿器可以帮助缓解喉咙干燥和咳嗽。如果症状持续不缓解或有其他不适症状,请立即就医。希望您早日康复!
1012 | [A1]: 如果您的咳嗽持续了两周,通常建议您尽快就医,因为这可能会影响您的呼吸和心跳。您可以通过以下步骤来检查咳嗽:1. **休息**:用纸巾或肘部轻轻按压咳嗽的位置。2. **使用药物**:在可能的情况下,使用药物(如抗过敏药、抗过敏药等)来缓解咳嗽。请在医生的指导下使用药物。3. **热敷**:在咳嗽期间,可以使用热敷来缓解疼痛和肿胀。4. **药物治疗**:如果咳嗽持续了两周,可以使用一些消炎药或抗过敏药来缓解症状。请务必在医生的指导下使用。5. **保持健康的生活方式**:保持充足的睡眠、均衡的饮食、适量的运动和避免吸烟和限酒,这些都有助于提高咳嗽的几率。请记住,这些建议不能替代专业医疗意见。如果您的症状持续或加重,强烈建议您尽快就医。
1013 | ```
1014 |
1015 |
1016 |
1017 | 👉效果总结
1018 |
1019 | full_sft模型在简洁性和信息准确性方面表现更好;rlhf模型在回答中倾向于提供更多的背景信息,但信息准确性有待改进。
1020 | 总的来说RLHF后的模型倾向于学习:说更多有礼貌但无用的废话讨好“对话”本身,而对信息准确性则有轻微损失。
1021 | 天下没有免费的午餐,还需要继续提升RLHF数据集的质量,也要接受模型能力无法避免的损失(程度有轻重)。
1022 | DPO和在线PPO的区别在于rejected和chosen都是离线准备的,和minimind模型本身的输出必然存在很大的分布差异。
1023 | 通俗地说DPO算法使模型观看乒乓球世界冠军的打法「录像」进行RL,而不是像PPO一样请reward模型做「教练」纠正自己的打法进行RL。
1024 |
1025 | ## Ⅱ 主观样例测评
1026 |
1027 | 🏃以下测试于2025-02-09完成,此日期后发布的新模型,无特殊需要时将不加入测试。
1028 |
1029 | [A] [MiniMind2 (0.1B)](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch)
1030 | [B] [MiniMind2-MoE (0.15B)](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch)
1031 | [C] [MiniMind2-Small (0.02B)](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch)
1032 | [D] [minimind-v1-small(0.02B)](https://pan.baidu.com/s/1_COe0FQRDmeapSsvArahCA?pwd=6666)
1033 | [E] [minimind-v1-moe(0.1B)](https://pan.baidu.com/s/1tqB-GMvuiGQBvEl-yZ-oBw?pwd=6666)
1034 | [F] [minimind-v1(0.1B)](https://pan.baidu.com/s/1p713loS7EfwHQf3G9eYI3Q?pwd=6666)
1035 | [G] [baby-llama2-chinese(0.2B)](https://github.com/DLLXW/baby-llama2-chinese)
1036 | [H] [chatlm-mini-chinese(0.2B)](https://github.com/charent/ChatLM-mini-Chinese)
1037 |
1038 | ```text
1039 | [Q]: 你知道长江吗?
1040 | [A]: 长江,位于中国东南部,是中国的第一长河,全长6300多公里,流经11个省份,是中国最长的河流,也是中国的“母亲河”。长江的历史可以追溯到公元前2000多年的商朝时期,当时的商朝是商朝的后裔。商朝时期,长江的经济发展达到了高峰,是中国最重要的农业和工业中心之一。长江流域的经济主要以农业为主,长江渔业占据了主要产业,如钓鱼、猎鱼等。长江的水资源也非常丰富,包括河流、湖泊、河流等。长江的自然景观也十分壮观,长江四季景色各异,春季多雨,夏季多雨,秋季则多雨,冬季则寒冷多雪,春季和秋季都有美丽的花朵和果实。长江是中国的经济中心,拥有众多的国家和城市,是世界上重要的交通枢纽和航运中心。同时,长江还是中国的重要水利资源,对于中国的经济发展具有重要意义。
1041 | [B]: 是的,我知道长江。长江位于中国的中部,是中国第三长的河流,全长约6300公里,流经江西省、安徽省、江苏省、安徽省、江苏省和安徽省。
1042 | [C]: 当然,长江是中国最长的河流,全长约6300公里。它以其丰富的水资源、丰富的水生动植物资源以及独特的地理位置而闻名于世。
1043 | [D]: 是的,长江是中国最长的河流,全长超过3300千米。
1044 | [E]: 长江是中国最长的河流,总长度约为6300公里。它发源于青藏高原的青海湖,流经青海、四川、甘肃、宁夏、内蒙古、陕西和山西。长江流域是中国最大的农业区之一,也是中国的文化和经济中心。
1045 | [F]: 是的,长江是中国最长的河流之一,也是世界第三大河流。长江发源于青藏高原,流经11个省份,最终注入东海。长江流域是中国最大的河流流域之一,拥有着丰富的自然资源和文化遗产,如长江大桥、长江海洋博物馆、长江古城等。
1046 | [G]: 同时也知道,中国是世界上最长的城市之一,拥有超过3400万人口。
1047 | [H]: 是的,长江是中国最长的河流,也是世界上最长的河流之一。它发源于青藏高原,流经九个省区,最终注入东海。长江流域的气候温和,四季分明,是中国重要的农业和工业基地。长江流域的水产资源丰富,是中国的重要渔业资源之一。
1048 |
1049 |
1050 | [Q]: 世界上最高的山峰是什么?
1051 | [A]: 世界上最高的山峰是珠穆朗玛峰(Mount Everest),位于喜马拉雅山脉,尼泊尔和中国的边界上。它的海拔高度是8,848.86米(29,031.7英尺),是地球上最高的山峰。
1052 | [B]: 珠穆朗玛峰(Mount Everest)是世界上最高的山峰,海拔高度达8,848米(29,029英尺)。
1053 | [C]: 珠穆朗玛峰是世界上最高的山峰,位于喜马拉雅山脉,海拔8,848.86米(29,031.7英尺)。它位于尼泊尔和中国的边界上,是喜马拉雅山脉的一部分。
1054 | [D]: 世界上最高的山峰是珠穆朗玛峰,位于尼泊尔和中国的边境上,海拔8848米。
1055 | [E]: 世界上最高的山峰是珠穆朗玛峰,位于喜马拉雅山脉,海拔8,848米。
1056 | [F]: 珠穆朗玛峰是世界上最高的山峰,海拔8848米。
1057 | [G]: 世界上最高的山峰是珠穆朗玛峰,位于喜马拉雅山脉,海拔8,848.86米(29,031.69英尺)。
1058 | [H]: 世界上最高的山峰是珠穆朗玛峰,也称为萨加玛塔,位于喜马拉雅山脉的尼泊尔和中国边境之间,海拔8,848.86米(29,031.69英尺)。它是世界上最著名的登山目标之一,吸引着来自世界各地的登山者前来挑战。珠穆朗玛峰的名字来自藏语,意思是“女神的额峰”,它是由两个不同的山峰组成的,其中一个是位于尼泊尔的珠穆朗玛峰,另一个则位于中国。它的高度是通过测量海拔高度来确定的,因为它是通过测量地球的平均高度来确定的。尽管珠穆朗玛峰是世界上最高的山峰,但它并不是地球上任何其他地方都能到达的。地球上的其他地方,如南美洲的安第斯山脉,也是世界上最高的山脉,但它并不是地球上最高的山峰。总之,珠穆朗玛峰是世界上最高的山峰,但它并不是地球上任何地方都能到达的。它是地球上最高的山脉,但它并不是地球上任何地方都能到达的。
1059 |
1060 |
1061 | [Q]: 万有引力是谁提出的?
1062 | [A]: 万有引力是由牛顿在17世纪提出的,它是牛顿三大定律之一。牛顿在17世纪提出的引力定律揭示了万有引力定律的本质,并且解释了行星、卫星、彗星等天体之间如何相互作用。这个定律是由艾萨克·牛顿在1687年提出的,他通过对万有引力定律的分析和实验,成功地解释了天体之间的引力作用,从而解释了地球绕太阳旋转的现象。
1063 | [B]: 万有引力(Human Policy)是牛顿的三大力学定律,用于描述两个物体之间的相互作用力。它由万有引力定律和牛顿的第一定律(惯性定律)构成。万有引力是由于两个物体之间相互作用而产生的,它们的质量、距离、速度和方向等因素决定了它们之间的相互作用。这种相互作用力是:质量的乘积:质量的乘积是万有引力定律和牛顿第二定律的乘积。质量的乘积:质量的乘积是物体之间的一种物理过程,它是物体间相互作用的产物,对于其他物体来说,其质量可以表示为两个物体之间的质量之和,即重力加速度a。
1064 | [C]: 万有引力是一个由牛顿在1687年提出的物理学概念,它描述了物体之间引力如何相互作用的规律。
1065 | [D]: 万有引力是由英国物理学家牛顿提出的。他认为,所有物体都会产生引力,这个引力是由物体的质量和距离决定的。这个理论揭示了质量和距离之间的关系,并证明了万有引力的本质。
1066 | [E]: 万有引力是由牛顿在1687年提出的。它是宇宙中一个基本的物理现象,它描述了物体之间的相互作用。这个力被称为万有引力,它在所有物体之间都存在,这种力与物体的质量和距离有关。万有引力是自然界中最基本的力量之一,它不仅影响着物体的运动和变化,还对宇宙的结构和演化产生着重要的影响。
1067 | [F]: 万有引力是英国物理学家艾萨克·牛顿在17世纪提出的。
1068 | [G]: 引力是宇宙中一种力量,它是由质量或能量所组成的。根据牛顿的万有引力定律,一个物体之间的引力大小与它的质量成正比,与它的质量成反比。因此,牛顿的万有引力定律表明,一个物体之间的引力大小与它的质量成正比,与它的质量成反比。这个定律在物理学、天文学和物理学中都有广泛的应用。
1069 | [H]: 万有引力是由英国物理学家艾萨克·牛顿于1687年提出的。他在1687年发表了一篇著名的论文,题为《自然哲学的数学原理》,其中提出了万有引力定律,指出任何两个物体之间都存在着一种引力,这种引力的大小与它们的质量成正比,与它们之间的距离的平方成反比。这个定律被称为万有引力定律,它描述了质量和距离之间的关系。它表明,两个物体之间的引力与它们的质量成正比,与它们之间的距离的平方成反比。这个定律对于解释宇宙中的许多现象和现象之间的相互作用非常重要,例如行星绕太阳运动、恒星之间的引力透镜效应等等。它也被广泛应用于天文学、物理学和工程学等领域,为我们理解宇宙的本质和演化提供了重要的理论基础。
1070 |
1071 |
1072 | [Q]: 你知道大熊猫的主要食物是什么吗?
1073 | [A]: 大熊猫主要以竹子为食,但也会吃其他植物、昆虫、小型哺乳动物和鱼类。它们的消化系统与其他熊科动物类似,能够消化竹子。大熊猫的牙齿锋利,适合咀嚼竹子,因此它们经常被称为“竹食动物”。
1074 | [B]: 大熊猫是以竹子为食的哺乳动物,它们是肉食动物。它们主要食物是竹子,特别是竹子中富含纤维素和矿物质,并且不会直接进入消化系统。大熊猫的肠道主要由竹子组成,它们需要消化竹子中的纤维素和其他纤维素来支持它们的正常进食。
1075 | [C]: 大熊猫主要以竹子为食,但它们也吃其他植物,包括树叶、嫩芽、昆虫、水果、蔬菜和小型哺乳动物等。大熊猫是中国特有的物种,属于熊科,主要分布在中国的四川、陕西和甘肃等地。
1076 | [D]: 大熊猫的主要食物是竹子。它们通常吃竹子,每天需要吃掉大约30公斤的竹子。
1077 | [E]: 是的,大熊猫主要以竹子为食。它们每天需要吃掉大量的竹子来维持生命,每天需要吃掉40至80公斤的竹子。此外,它们也会吃一些其他的植物,如草、果实、坚果和小型哺乳动物。
1078 | [F]: 大熊猫主要以竹子为食,也会偶尔食用其他的植物和动物。此外,大熊猫也会吃一些野果、小动物和植物,但这些都不属于它们的主要食物来源。
1079 | [G]: 大熊猫的主要食物是竹子。
1080 | [H]: 大熊猫是一种以竹子为主要食物的植食动物,它们的肠道结构适合消化纤维素含量高的竹子,但也会吃一些其他植物,如草本植物和小型哺乳动物。大熊猫的肠道结构也适应了高纤维的竹子,这使得它们能够消化大量的纤维素,从而帮助它们在野外生存。大熊猫的肠道中还含有一种叫做“纤维素酶”的酶类物质,这种酶能够帮助大熊猫分解纤维素,从而帮助它们消化大量的水分和营养物质。此外,大熊猫的肠道还含有一种叫做“纤维素蛋白”的蛋白质,这种蛋白质能够帮助大熊猫维持体内的能量水平,从而帮助它们在野外生存。总的来说,大熊猫的主要食物是竹子,但它们也会吃一些其他植物,如草本植物和小型哺乳动物。
1081 |
1082 |
1083 | [Q]: 海水为什么是咸的?
1084 | [A]: 海水之所以是咸的,主要是由于其中的化学成分。海水中含有大量的矿物质和盐分,其中主要成分是硫酸盐、碳酸盐、镁离子和钾离子。这些物质在高浓度下,会沉淀在海水中,形成海水。具体来说,海水中的矿物质和盐分主要来自于岩石、土壤、生物和人类排泄物等。岩石是海水的组成部分,主要由矿物质组成,如石英、长石、云母等。土壤中含有大量的化学物质,如氮、磷、钾等,这些物质会渗透到土壤中,形成土壤。生物体内的盐分主要来源于植物的叶绿素等色素,这些色素会吸收太阳光中的能量,转化为生物体可以利用的能量。而人类排泄物中含有的盐分主要来源于矿物质和碳酸盐,这些矿物质在水中的浓度较高,会导致水中溶解的矿物质浓度增加,形成水体中的盐分。因此,海水之所以是咸的,是由于其中的化学成分和结构特征。海水中的矿物质和盐分主要来自于岩石、土壤、生物和人类排泄物等,而这些物质的积累和排泄会导致海水呈现出咸味。
1085 | [B]: 海水是咸的,因为它包含了多种物质,包括氯化钠(NaCl)、氯化钠(NaCl)、碳酸盐(如硫酸盐)、氯化钠(NaCl)和水。咸物质的来源:盐是咸的常见来源,含有钾、钙、镁、钾等矿物质。这些矿物质通常来自于地下水、地下水或土壤中,这些矿物质的含量远低于纯净水。咸味的原因:盐的咸味主要源于其溶解于水中的微小颗粒物。这些小颗粒物主要通过植物根部和动物体内的微生物传播,如昆虫、小型哺乳动物、鸟类和某些昆虫。综上所述,海水是咸的,因为它含有大量的盐类,包括矿物质、岩石、盐、矿物质等,这些物质在海水中的浓度高,对咸味的产生较大影响。
1086 | [C]: 海水是咸的,主要是因为其中含有各种盐分,其中以海水为主要成分之一。当海水中含有大量的盐分时,盐分会被溶解在海水中,形成咸味。海水中的盐分主要来源于溶解盐类、蒸发析出的盐和海水冻结成冰等因素。
1087 | [D]: 海水中含有许多盐分,这些盐分可以导致海水咸度的增加。海水中的盐分主要来自于海水中的盐分,这些盐分可以从海水中的盐分中获得。海水中的氧化物和盐分可以与海水中的盐分形成氢键,从而使海水咸度增加。
1088 | [E]: 海水是咸的,这是因为它含有大量的盐分,其中包括氯离子和氯离子。这些离子在海洋中形成了大量的盐,这些盐分被称为盐分。海水中含有许多溶解在水中的盐,这些盐分会随着海水的升高而逐渐蒸发,最终导致海水中盐分的蒸发。这些盐分在海水中被完全消耗,然后被排除在海水中。此外,海水中还含有许多其他的盐分,但这些盐分是不同的,因为它们在水中的浮起和下降过程是不同的。海水中的盐分是从海底的岩石中提取出来的,这些盐分在海水中可以被利用。总的来说,海水是咸的,这也是为什么海水中含有许多盐分和盐分的原因。
1089 | [F]: 海水含有许多溶解的盐类,其中最主要的是氯化钠。当水中加入盐时,盐的分子会与水分子发生反应,形成氢氧化钠,这是一种盐。
1090 | [G]: 因为水是许多生物的生存所必需的,它能够吸收和释放二氧化碳,释放氧气,维持大气中的水蒸气,以及维持水循环。水中的盐类和化学物质会释放出氧气,使地球上的生物呼吸。此外,水还具有许多其他功能,如保持水循环、维持水循环、调节气候和环境等。水是许多生物生存所必需的,它也是许多自然现象的重要基础。
1091 | [H]: 海水是咸的主要原因是因为其中含有大量的溶解盐类,其中最主要的是氯化钠(NaCl)。氯化钠是一种非常常见的盐类,它存在于地球上的大部分地区,包括海洋、湖泊、河流和地下水。当水从海洋中蒸发时,它会带走一部分盐分,使海水变得咸。此外,海中还含有其他溶解盐类的物质,如硫酸盐、硫酸镁、硫酸钙、硫酸镁盐、硫酸镁钾、硫酸镁钠、硫酸镁钙、硫酸镁镁钙、硫酸镁镁钙、硫酸镁镁钙、硫酸镁镁钠、硫酸镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁镁钙、硫酸镁镁镁镁镁镁钙、硫酸镁镁镁镁镁镁钙、硫酸镁镁镁镁镁镁镁钙、硫酸镁镁镁镁
1092 |
1093 |
1094 | [Q]: 写一段python的快速排序代码
1095 | [A]: 好的,以下是Python的快速排序代码:def quick_sort(arr): if len(arr) <= 1: return arr else: pivot = arr[len(arr) // 2] left = [x for x in arr[1:]] right = [x for x in arr[2:] if x < pivot] return quick_sort(left) + [pivot] + quick_sort(right)# 测试代码arr = [3, 6, 8, 10, 1, 2, 1]print(quick_sort(arr))这段代码首先检查数组的长度是否小于等于1。如果是,那么它就按照从小到大的顺序排列。接着,对于数组的每个元素,它想要进行递归地排序,直到达到排序的结束位置。如果数组的长度是奇数,那么它就按照从大到小的顺序排列。最后,平均的时间复杂度是O(n log n),其中n是数组的长度。
1096 | [B]: 以下是一个使用Python的快速排序算法的实现:def sort_array(arr): # 生成一个随机排列 sorted_array = sorted(arr) # 返回排序后的列表 return sorted(arr)# 示例arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]# 调用函数sorted_array = sort_array(arr)print("原始数组是:", sorted_array)这段代码首先生成一个随机排列,然后通过一个列表的长度进行排序。最后,返回排序后的列表。注意,这个代码使用了Python的内置sorted函数和内置的sorted()函数,它们在时间复杂度为O(n log n)的情况下,比直接使用sorted()函数更快。
1097 | [C]: 以下是一个简单的排序算法,适用于小型数组,包括小于等于5的数组。def quick_sort(arr): if len(arr) <= 1: return arr else for _ in range(2, 51): if arr[len(arr) - 1:] is not None: arr[len(arr) - 1] = arr[len(arr) - 1] arr[len(arr)] = arr[len(arr)] return quick_sort(arr)# 示例arr = [3, 5, 1, 4, 1, 2, 1]sorted_arr = quick_sort(arr)print("Sorted array is:", sorted_arr)### 代码解释:1. **函数定义**:quick_sort 函数接受一个数组 arr 作为参数。2. **循环**:对于 arr 的元素,使用递归来实现,即从右到左遍历数组,并将当前元素与数组的当前元素交换位置,直到数组达到空或数组长度小于等于 arr。3. **递归调用**:遍历数组,将当前元素与数组的当前元素进行交换,直到数组达到空或数组长度小于等于 arr。4. **检查数组长度**:使用 if __name__ == "__main__" 检查数组的长度是否小于 arr,如果是,就将数组从较大的数组移动到数组的末尾。5. **输出排序后的结果**
1098 | [D]: 作为一个AI语言模型,我无法编写Python程序。但是,Python是一个非常流行的编程语言,它具有许多优秀的库和框架,包括NumPy、Pandas、Matplotlib、Pandas、Bandke、Norm和NKlearn等。Python有很多优秀的自然语言处理库和框架,可以用于处理和分析各种类型的数据,如文本、图像、音频、视频等。Python还可以用于创建各种应用程序,如网页搜索、游戏开发、科学计算等。Python非常灵活,也支持多种编程风格,因此可以轻松地找到适合自己的解决方案。
1099 | [E]: 作为AI语言模型,我可以回答您的问题。但是我无法进行编程操作,因为我无法确定您想问的是哪种编程语言。请您提供更具体的问题,以便我能够更好地回答您的问题。
1100 | [F]: 快速排序(Quick Sort)是一种简单的排序算法,它通过将数据划分为小组,每组数据按照一定规则排序,最终得到一个有序数组。
1101 | [G]: 好的,以下是一段快速排序的Python代码```python sorted_print(sorted(sorted(sorted(sorted_print))```输出结果为:```[1, 2, 3, 4, 5, 6, 7, 8, 9]```
1102 | [H]: 快速排序是一种基于分治思想的高效排序算法,它的时间复杂度为 O(n log n)。它的基本思想是通过一趟排序将待排序序列划分为两个子序列,其中一个子序列的所有元素都比另一个子序列的所有元素小,然后对这两个子序列分别递归地进行排序。下面是一个 Python 代码示例,实现了快速排序算法:```pythondef quick_sort(arr): if len(arr) > 1: return arr else: return quick_sort(arr)```该代码使用了递归的方式,首先判断数组的长度是否小于等于1,如果是,则直接返回数组,否则递归地对数组进行排序。在递归过程中,我们首先判断数组的第一个元素是否大于等于1,如果是,则将数组中的元素与第一个元素进行比较,如果不等于1,则将数组中的元素与第二个元素进行比较,以此类推,直到所有元素都被排序。该代码的时间复杂度为 O(n log n),是一个非常实用的排序算法。
1103 |
1104 |
1105 | [Q]: 截至2025年2月9日14时31分,电影《哪吒之魔童闹海》累计票房(含预售)突破78.09亿元,超过《星球大战:天行者崛起》的票房成绩,成为首部进入全球票房榜前40的亚洲电影。这一成就不仅标志着中国电影在国际市场的突破,也展示了中国动画电影的巨大潜力和市场吸引力。从这段文字总结20字的摘要给我。
1106 | [A]: 电影《哪吒之魔童闹海》票房突破78.09亿元,成为首部进入全球票房榜前40的亚洲电影。这标志着中国电影的国际突破,也展现了中国动画电影的巨大潜力。
1107 | [B]: 2025年2月9日,中国电影在世界舞台上突破78.09亿元,超过《星球大战:天行者崛起》的票房成绩。
1108 | [C]: 电影《哪吒之魔童闹海》票房突破78.09亿元,成为全球票房榜前40的亚洲电影,展现中国动画电影的巨大潜力。
1109 | [D]: 《哪吒之魔童闹海》累计票房突破78.09亿,成为首部进入全球票房榜前40的亚洲电影,展示了中国电影在国际市场的突破,同时展示了中国动画电影的巨大潜力和市场吸引力。
1110 | [E]: 《猛龙强:打开世界奇幻冒险》成功地,上海成功吸引了众多观众,同时也引起了广泛的关注。
1111 | [F]: 《哪吒之魔童闹海》是一部让中国动画电影成为国际市场的巨大潜力。该电影以其华丽的画面和惊险的战斗场面而闻名于世。
1112 | [G]: 《哪行是值得的》是2021年1月9日14时31分,电影《哪成大作为》在电影市场突破7800亿美元,成为中国动画电影的突破,并展示中国动画电影的巨大潜力和市场吸引力。
1113 | [H]: 我是一个参数很少的AI模型🥺,知识库较少,无法直接回答您的问题,换个问题试试吧👋
1114 | ```
1115 |
1116 | ---
1117 |
1118 | 🙋直接把以上所有问题和模型的回答丢给DeepSeek-R1,让它帮忙点评和排名打分:
1119 |
1120 |
1121 |
1122 | 具体点评
1123 |
1124 | ### 评分标准:
1125 |
1126 | - **准确性**:回答是否正确且无明显错误。
1127 | - **完整性**:回答是否涵盖了问题的核心要点。
1128 | - **逻辑性**:回答是否有条理,是否符合逻辑。
1129 | - **代码质量**:代码是否能正常运行,逻辑是否清晰。
1130 |
1131 | ### 点评:
1132 |
1133 | 1. **A模型**:
1134 | - **优点**:回答非常全面,信息量大,逻辑清晰,尤其在长江、大熊猫、海水咸味等问题上表现优异。代码虽然有小瑕疵,但整体思路正确。
1135 | - **缺点**:部分回答稍显冗长,但不影响整体质量。
1136 | - **总评**:综合表现最佳,得分最高。
1137 |
1138 | 2. **H模型**:
1139 | - **优点**:回答较为准确,尤其在珠穆朗玛峰、万有引力等问题上表现出色。代码虽未完全展示,但解释较为详细。
1140 | - **缺点**:部分回答略显啰嗦,但逻辑性较强。
1141 | - **总评**:仅次于A模型,表现稳定。
1142 |
1143 | 3. **C模型**:
1144 | - **优点**:回答简洁明了,尤其在大熊猫、快速排序代码等问题上表现较好。
1145 | - **缺点**:部分回答略显简短,缺乏深入解释。
1146 | - **总评**:整体表现不错,但细节上略逊于A和H。
1147 |
1148 | 4. **F模型**:
1149 | - **优点**:回答较为准确,尤其在长江、万有引力等问题上表现尚可。代码部分有一定逻辑性。
1150 | - **缺点**:部分回答不够深入,代码存在一些小问题。
1151 | - **总评**:表现中规中矩,仍有提升空间。
1152 |
1153 | 5. **D模型**:
1154 | - **优点**:回答基本准确,尤其在万有引力、长江等问题上表现尚可。
1155 | - **缺点**:部分回答过于简略,代码存在明显错误。
1156 | - **总评**:表现一般,需改进代码部分。
1157 |
1158 | 6. **B模型**:
1159 | - **优点**:回答较为准确,尤其在长江、海水咸味等问题上表现尚可。
1160 | - **缺点**:部分回答逻辑性较差,代码存在较大问题。
1161 | - **总评**:表现平平,需进一步优化。
1162 |
1163 | 7. **E模型**:
1164 | - **优点**:部分回答较为准确,尤其在海水咸味、大熊猫等问题上表现尚可。
1165 | - **缺点**:回答过于简略,代码部分几乎无法运行。
1166 | - **总评**:表现不佳,需大幅提升。
1167 |
1168 | 8. **G模型**:
1169 | - **优点**:几乎没有明显的优点。
1170 | - **缺点**:回答严重偏离主题,代码部分完全无法运行。
1171 | - **总评**:表现最差,需大幅改进。
1172 |
1173 | ---
1174 |
1175 | ### 总结:
1176 |
1177 | - **A模型**在各方面表现最为出色,尤其在复杂问题的回答上展现了极高的准确性与逻辑性。
1178 | - **H模型**紧随其后,表现稳定,但在某些细节上略显不足。
1179 | - **G模型**表现最差,回答偏离主题且代码无法运行,需大幅改进。
1180 |
1181 |
1182 |
1183 | ### 打分排序
1184 |
1185 | | 排名 | 模型 | 准确性 (30分) | 完整性 (30分) | 逻辑性 (20分) | 代码质量 (20分) | 总分 (100分) |
1186 | |----|----|-----------|-----------|-----------|------------|-----------|
1187 | | 1 | A | 28 | 29 | 19 | 20 | 96 |
1188 | | 2 | H | 27 | 28 | 18 | 20 | 93 |
1189 | | 3 | C | 26 | 27 | 18 | 18 | 89 |
1190 | | 4 | F | 25 | 26 | 17 | 18 | 86 |
1191 | | 5 | D | 24 | 25 | 17 | 16 | 82 |
1192 | | 6 | B | 23 | 24 | 16 | 15 | 78 |
1193 | | 7 | E | 22 | 23 | 15 | 14 | 74 |
1194 | | 8 | G | 10 | 12 | 10 | 10 | 42 |
1195 |
1196 |
1197 | ### 👉 Subjective summary
1198 |
1199 | My own subjective impressions largely agree with DeepSeek-R1's verdict:
1200 |
1201 | * The ordering within the MiniMind series matches intuition: larger parameter counts plus more thorough training data yield higher scores, and hallucinations and errors are visibly rarer than in the smaller models.
1202 |
1203 | * Model H's answers look decent to the naked eye, despite occasional hallucinated, made-up details.
1204 |
1205 | * Model G's training data may be incomplete; its released weights performed poorly in testing.
1206 |
1207 | * To recite the evergreen Scaling Law once more: the larger the model and the more training data, the stronger the performance.
1208 |
1209 | ---
1210 |
1211 | ## Ⅲ Objective Benchmark
1212 |
1213 | Now for everyone's favorite benchmark leaderboard round. We won't embarrass ourselves by comparing against Chinese models at the qwen/glm level;
1214 | instead, a set of <1B micro models is selected for a horizontal comparison,
1215 | using the pure-Chinese benchmarks C-Eval, CMMLU, ACLUE, and TMMLU+.
1216 |
1217 |
1218 |
1219 | Evaluation framework
1220 |
1221 | The evaluation framework is [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness);
1222 | once installed, launching a test is straightforward:
1223 |
1224 | ```bash
1225 | lm_eval --model hf --model_args pretrained=<model-path>,device=cuda,dtype=auto --tasks ceval* --batch_size 8 --trust_remote_code
1226 | ```
1227 |
1228 |
1229 |
1230 |
1231 |
1232 | PS: On test sets like these, made up entirely of multiple-choice questions, it is hard to pin models to a fixed answer format,
1233 | so the common practice is to take the predicted probabilities of the tokens for the four letters `A`, `B`, `C`, `D` and score the highest-probability letter against the reference answer.
1234 | Random guessing on 4-way multiple choice gives 25% accuracy, yet every model at this scale clusters around 25 and often does worse than chance, much like bombing the cloze section of a high-school English exam...
1235 | MiniMind's own pretraining corpus is pitifully small, and no leaderboard-targeted fine-tuning was done on the test sets, so enjoy the results for what they are:
1236 |
1237 | | models | from | params↓ | ceval↑ | cmmlu↑ | aclue↑ | tmmlu+↑ |
1238 | |-------------------------------------------------------------------------------|---------------|---------|--------|---------|--------|---------|
1239 | | MiniMind2 | JingyaoGong | 104M | 26.52 | 24.42 | 24.97 | 25.27 |
1240 | | MiniMind2-Small | JingyaoGong | 26M | 26.37 | 24.97 | 25.39 | 24.63 |
1241 | | MiniMind2-MoE | JingyaoGong | 145M | 26.6 | 25.01 | 24.83 | 25.01 |
1242 | | [Steel-LLM](https://github.com/zhanshijinwat/Steel-LLM) | ZhanShiJin | 1121M | 24.81 | 25.32 | 26 | 24.39 |
1243 | | [GPT2-medium](https://huggingface.co/openai-community/gpt2-medium) | OpenAI | 360M | 23.18 | 25 | 18.6 | 25.19 |
1244 | | [TinyLlama-1.1B-Chat-V1.0](https://github.com/jzhang38/TinyLlama) | TinyLlama | 1100M | 25.48 | 25 | 25.4 | 25.13 |
1245 | | [SmolLM2](https://github.com/huggingface/smollm) | HuggingFaceTB | 135M | 24.37 | 25.02 | 25.37 | 25.06 |
1246 | | [Aquila-Instruct](https://www.modelscope.cn/models/BAAI/Aquila-135M-Instruct) | BAAI | 135M | 25.11 | 25.1 | 24.43 | 25.05 |
1247 |
1248 | 
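As a side note, the letter-probability scoring described in the PS above can be sketched in a few lines. The `option_logits` values here are made up for illustration; in a real harness they would be the model's next-token logits for the tokens `A`/`B`/`C`/`D`:

```python
import math

# Hypothetical next-token logits for the four option letters.
# In a real evaluation these come from the model's output distribution.
option_logits = {"A": 2.1, "B": 0.3, "C": 1.7, "D": -0.5}

def pick_option(logits):
    """Softmax over just the option tokens, then take the argmax."""
    m = max(logits.values())  # subtract the max for numerical stability
    exp = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exp.values())
    probs = {k: v / z for k, v in exp.items()}
    return max(probs, key=probs.get), probs

choice, probs = pick_option(option_logits)
print(choice)  # prints: A
```

Accuracy is then simply the fraction of questions where `choice` matches the reference letter, which is why a purely random model lands near 25%.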
1249 |
1250 | # 📌 Others
1251 |
1252 | ## Model conversion
1253 |
1254 | * [./scripts/convert_model.py](./scripts/convert_model.py) converts between native `torch` checkpoints and `transformers` models
1255 |
1256 | ---
1257 |
1258 | ## MiniMind API service
1259 |
1260 | * [./scripts/serve_openai_api.py](./scripts/serve_openai_api.py) implements a minimal chat endpoint compatible with the openai-api, making it easy to plug your model into third-party UIs
1261 | such as FastGPT, OpenWebUI, Dify, and so on.
1262 |
1263 | * Download the model weights from [Huggingface](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5); the file tree:
1264 | ```
1265 | (root dir)
1266 | ├─
1267 | | ├── config.json
1268 | | ├── generation_config.json
1269 | | ├── LMConfig.py
1270 | | ├── model.py
1271 | | ├── pytorch_model.bin
1272 | | ├── special_tokens_map.json
1273 | | ├── tokenizer_config.json
1274 | | ├── tokenizer.json
1275 | ```
1276 |
1277 | * Start the chat server
1278 | ```bash
1279 | python serve_openai_api.py
1280 | ```
1281 | * Test the service endpoint
1282 | ```bash
1283 | python chat_openai_api.py
1284 | ```
1285 | * Example API call, compatible with the openai api format
1286 | ```bash
1287 | curl http://ip:port/v1/chat/completions \
1288 | -H "Content-Type: application/json" \
1289 | -d '{
1290 | "model": "model-identifier",
1291 | "messages": [
1292 | { "role": "user", "content": "世界上最高的山是什么?" }
1293 | ],
1294 | "temperature": 0.7,
1295 | "max_tokens": 512,
1296 | "stream": true
1297 | }'
1298 | ```
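For convenience, the same request body can also be built from Python and posted with any HTTP client. This is only a sketch of the payload construction; `"model-identifier"` and the endpoint address are placeholders, exactly as in the curl example above:

```python
import json

# Build the same request body as the curl example above.
# "model-identifier" is a placeholder for your served model name.
payload = {
    "model": "model-identifier",
    "messages": [
        {"role": "user", "content": "世界上最高的山是什么?"}
    ],
    "temperature": 0.7,
    "max_tokens": 512,
    "stream": True,
}

# ensure_ascii=False keeps Chinese text readable in the serialized body
body = json.dumps(payload, ensure_ascii=False)
print(body)
```

POST `body` to `http://ip:port/v1/chat/completions` with a `Content-Type: application/json` header to get the same response as the curl command.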
1299 |
1300 | ## vLLM inference (serving)
1301 |
1302 | [vLLM](https://github.com/vllm-project/vllm) is an extremely popular, efficient inference framework for fast large-model deployment, with optimized memory utilization and throughput.
1303 |
1304 | ```bash
1305 | vllm serve ./MiniMind2/ --model-impl transformers --served-model-name "minimind"
1306 | ```
1307 |
1308 | The service speaks the openai api protocol and listens on port 8000 by default.
1309 |
1310 | See the official documentation for more usage details~
1311 |
1312 | ## llama.cpp
1313 | [llama.cpp](https://github.com/ggerganov/llama.cpp) is a C++ library
1314 | that runs directly from the command line, with multi-threaded inference and GPU acceleration.
1315 |
1316 | After installing it per the official repository, insert the following around line ~760 of `convert_hf_to_gguf.py`:
1317 | ```text
1318 | # Add MiniMind2 tokenizer support
1319 | if res is None:
1320 |     res = "smollm"
1321 | ```
1322 |
1323 | Convert a custom-trained minimind model to gguf
1324 | ```bash
1325 | python convert_hf_to_gguf.py ../minimind/MiniMind2/
1326 | ```
1327 |
1328 | Quantize the model
1329 | ```bash
1330 | ./build/bin/llama-quantize ../minimind/MiniMind2/MiniMind2-109M-F16.gguf ../minimind/MiniMind2/Q4-MiniMind2.gguf Q4_K_M
1331 | ```
1332 |
1333 | Run command-line inference
1334 | ```bash
1335 | ./build/bin/llama-cli -m ../minimind/MiniMind2/MiniMind2-109M-F16.gguf --chat-template chatml
1336 | ```
1337 |
1338 | See the official documentation for more usage details~
1339 |
1340 | ## ollama
1341 |
1342 | [ollama](https://ollama.ai/) is a tool for running large models locally; it supports many open-source LLMs and is easy to use.
1343 |
1344 | To load a custom gguf model with ollama, create minimind.modelfile:
1345 | ```text
1346 | FROM ./MiniMind2-109M-F16.gguf
1347 | TEMPLATE """{{ if .System }}<|im_start|>system
1348 | {{ .System }}<|im_end|>
1349 | {{ end }}{{ if .Prompt }}<|im_start|>user
1350 | {{ .Prompt }}<|im_end|>
1351 | {{ end }}<|im_start|>assistant
1352 | """
1353 | ```
1354 |
1355 | Load the model and name it `minimind2`
1356 | ```bash
1357 | ollama create minimind2 -f minimind.modelfile
1358 | ```
1359 |
1360 | Start inference
1361 | ```text
1362 | ollama run minimind2
1363 | > Hello, I am MiniMind2, a model based on xxxxxxxx
1364 | ```
1365 |
1366 | See the official documentation for more usage details~
1367 |
1368 | # 📌 Acknowledge
1369 |
1370 | > [!NOTE]
1371 | > If you find the `MiniMind series` helpful, consider giving it a ⭐ on GitHub.
1372 | > A document this long, written with limited skill, inevitably contains slips; feel free to point them out in Issues or submit a PR to improve the project.
1373 | > Your small gesture of support is what keeps this project improving!
1374 |
1375 | ## 🤝[Contributors](https://github.com/jingyaogong/minimind/graphs/contributors)
1376 |
1377 |
1382 |
1383 |
1384 |
1385 |
1386 |
1387 |
1388 |
1389 |
1390 |
1391 |
1392 | ## 😊 Special thanks
1393 |
1394 | @ipfgao :
1395 | 🔗 Training-step log
1396 |
1397 | @chuanzhubin :
1398 | 🔗 Line-by-line code comments
1399 |
1400 | @WangRongsheng :
1401 | 🔗 Large-dataset preprocessing
1402 |
1403 | @pengqianhan :
1404 | 🔗 A concise tutorial
1405 |
1406 | @RyanSunn :
1407 | 🔗 Notes on the inference process
1408 |
1409 | @Nijikadesu :
1410 | 🔗 Breaking down the project code in interactive notebooks
1411 |
1412 |
1413 |
1414 | Reference links & thanks to the following excellent papers and projects
1415 |
1416 | - In no particular order
1417 | - [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
1418 | - [https://github.com/karpathy/llama2.c](https://github.com/karpathy/llama2.c)
1419 | - [https://github.com/DLLXW/baby-llama2-chinese](https://github.com/DLLXW/baby-llama2-chinese)
1420 | - [(DeepSeek-V2)https://arxiv.org/abs/2405.04434](https://arxiv.org/abs/2405.04434)
1421 | - [https://github.com/charent/ChatLM-mini-Chinese](https://github.com/charent/ChatLM-mini-Chinese)
1422 | - [https://github.com/wdndev/tiny-llm-zh](https://github.com/wdndev/tiny-llm-zh)
1423 | - [(Mistral-MoE)https://arxiv.org/pdf/2401.04088](https://arxiv.org/pdf/2401.04088)
1424 | - [https://github.com/Tongjilibo/build_MiniLLM_from_scratch](https://github.com/Tongjilibo/build_MiniLLM_from_scratch)
1425 | - [https://github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama)
1426 | - [https://github.com/AI-Study-Han/Zero-Chatgpt](https://github.com/AI-Study-Han/Zero-Chatgpt)
1427 | - [https://github.com/xusenlinzy/api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm)
1428 | - [https://github.com/HqWu-HITCS/Awesome-Chinese-LLM](https://github.com/HqWu-HITCS/Awesome-Chinese-LLM)
1429 |
1430 |
1431 |
1432 | ## 🫶 Supporters
1433 |
1434 |
1435 |
1436 |
1437 |
1438 |
1439 |
1440 |
1441 |
1442 |
1443 |
1444 |
1445 |
1446 |
1447 |
1448 |
1449 |
1450 |
1451 |
1452 |
1453 |
1454 |
1455 |
1456 | # License
1457 |
1458 | This repository is licensed under the [Apache-2.0 License](LICENSE).
1459 |
1460 |
1461 |
--------------------------------------------------------------------------------
/dataset/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/dataset/__init__.py
--------------------------------------------------------------------------------
/dataset/lm_dataset.py:
--------------------------------------------------------------------------------
1 | import json
2 | import random
3 | import re
4 |
5 | import pandas as pd
6 | import numpy as np
7 | from torch.utils.data import Dataset, DataLoader
8 | import torch
9 | from sklearn.model_selection import train_test_split
10 | import os
11 | import ast
12 |
13 | os.environ["TOKENIZERS_PARALLELISM"] = "false"
14 |
15 |
16 | class PretrainDataset(Dataset):
17 | def __init__(self, data_path, tokenizer, max_length=512):
18 | super().__init__()
19 | self.tokenizer = tokenizer
20 | self.max_length = max_length
21 | self.samples = self.load_data(data_path)
22 |
23 | def load_data(self, path):
24 | samples = []
25 | with open(path, 'r', encoding='utf-8') as f:
26 | for line_num, line in enumerate(f, 1):
27 | data = json.loads(line.strip())
28 | samples.append(data)
29 | return samples
30 |
31 | def __len__(self):
32 | return len(self.samples)
33 |
34 | def __getitem__(self, index):
35 | sample = self.samples[index]
36 |
37 |         # Tokenize the input text, padded/truncated to max_length
38 | encoding = self.tokenizer(
39 | str(sample['text']),
40 | max_length=self.max_length,
41 | padding='max_length',
42 | truncation=True,
43 | return_tensors='pt'
44 | )
45 | input_ids = encoding.input_ids.squeeze()
46 | loss_mask = (input_ids != self.tokenizer.pad_token_id)
47 |
48 |         X = input_ids[:-1].long()  # already a tensor; slicing avoids the torch.tensor-on-tensor copy warning
49 |         Y = input_ids[1:].long()
50 |         loss_mask = loss_mask[1:].long()
51 | return X, Y, loss_mask
52 |
53 |
54 | class SFTDataset(Dataset):
55 | def __init__(self, jsonl_path, tokenizer, max_length=1024):
56 | super().__init__()
57 | self.tokenizer = tokenizer
58 | self.max_length = max_length
59 | self.samples = self.load_data(jsonl_path)
60 | self.bos_id = tokenizer('<|im_start|>assistant', add_special_tokens=False).input_ids
61 | self.eos_id = tokenizer('<|im_end|>', add_special_tokens=False).input_ids
62 |
63 | def __len__(self):
64 | return len(self.samples)
65 |
66 | def load_data(self, path):
67 | samples = []
68 | with open(path, 'r', encoding='utf-8') as f:
69 | for line_num, line in enumerate(f, 1):
70 | data = json.loads(line.strip())
71 | samples.append(data)
72 | return samples
73 |
74 | def _create_chat_prompt(self, conversations):
75 | """构建符合ChatML格式的对话"""
76 | messages = []
77 | for i, turn in enumerate(conversations):
78 | role = 'user' if i % 2 == 0 else 'assistant'
79 | messages.append({"role": role, "content": turn['content']})
80 | return self.tokenizer.apply_chat_template(
81 | messages,
82 | tokenize=False,
83 | add_generation_prompt=False
84 | )
85 |
86 | def _generate_loss_mask(self, input_ids):
87 | loss_mask = [0] * len(input_ids)
88 | i = 0
89 | while i < len(input_ids):
90 | if input_ids[i:i + len(self.bos_id)] == self.bos_id:
91 | start = i + len(self.bos_id)
92 | end = start
93 | while end < len(input_ids):
94 | if input_ids[end:end + len(self.eos_id)] == self.eos_id:
95 | break
96 | end += 1
97 | for j in range(start + 1, min(end + len(self.eos_id) + 1, self.max_length)):
98 | loss_mask[j] = 1
99 | i = end + len(self.eos_id) if end < len(input_ids) else len(input_ids)
100 | else:
101 | i += 1
102 | return loss_mask
103 |
104 | def __getitem__(self, index):
105 | sample = self.samples[index]
106 |         # Build the chat prompt
107 | prompt = self._create_chat_prompt(sample['conversations'])
108 | input_ids = self.tokenizer(prompt).input_ids[:self.max_length]
109 | input_ids += [self.tokenizer.pad_token_id] * (self.max_length - len(input_ids))
110 |
111 |         # Generate the dynamic loss mask (train only on assistant spans)
112 | loss_mask = self._generate_loss_mask(input_ids)
113 |
114 |         # Build the training tensors
115 | X = torch.tensor(input_ids[:-1], dtype=torch.long)
116 | Y = torch.tensor(input_ids[1:], dtype=torch.long)
117 |         loss_mask = torch.tensor(loss_mask[1:], dtype=torch.long)  # align with prediction positions
118 |
119 | return X, Y, loss_mask
120 |
121 |
122 | class DPODataset(Dataset):
123 | def __init__(self, file_path, tokenizer, max_length=4096):
124 | super().__init__()
125 | self.tokenizer = tokenizer
126 | self.max_length = max_length
127 | self.padding = tokenizer.pad_token_id if tokenizer.pad_token_id is not None else 0
128 | self.bos_id = tokenizer('<|im_start|>assistant', add_special_tokens=False).input_ids
129 | self.eos_id = tokenizer('<|im_end|>', add_special_tokens=False).input_ids
130 | with open(file_path, 'r', encoding='utf-8') as f:
131 | self.data = []
132 | for line in f:
133 | line = line.strip()
134 | obj = json.loads(line)
135 | self.data.append(obj)
136 |
137 | def __len__(self):
138 | return len(self.data)
139 |
140 | def __getitem__(self, index):
141 | item = self.data[index]
142 |         chosen = item['chosen']  # a list of {role, content} messages
143 |         rejected = item['rejected']  # same structure
144 | chosen_prompt = self.tokenizer.apply_chat_template(
145 | chosen, tokenize=False, add_generation_prompt=False
146 | )
147 |
148 | rejected_prompt = self.tokenizer.apply_chat_template(
149 | rejected, tokenize=False, add_generation_prompt=False
150 | )
151 | chosen_encoding = self.tokenizer(
152 | chosen_prompt, truncation=True, max_length=self.max_length, padding='max_length'
153 | )
154 | rejected_encoding = self.tokenizer(
155 | rejected_prompt, truncation=True, max_length=self.max_length, padding='max_length'
156 | )
157 |
158 | chosen_input_ids = chosen_encoding['input_ids']
159 | chosen_loss_mask = self._generate_loss_mask(chosen_input_ids)
160 |
161 | rejected_input_ids = rejected_encoding['input_ids']
162 | rejected_loss_mask = self._generate_loss_mask(rejected_input_ids)
163 | x_chosen = torch.tensor(chosen_input_ids[:-1], dtype=torch.long)
164 | y_chosen = torch.tensor(chosen_input_ids[1:], dtype=torch.long)
165 | mask_chosen = torch.tensor(chosen_loss_mask[1:], dtype=torch.long)
166 | x_rejected = torch.tensor(rejected_input_ids[:-1], dtype=torch.long)
167 | y_rejected = torch.tensor(rejected_input_ids[1:], dtype=torch.long)
168 | mask_rejected = torch.tensor(rejected_loss_mask[1:], dtype=torch.long)
169 |
170 | return {
171 | 'x_chosen': x_chosen,
172 | 'y_chosen': y_chosen,
173 | 'mask_chosen': mask_chosen,
174 | 'x_rejected': x_rejected,
175 | 'y_rejected': y_rejected,
176 | 'mask_rejected': mask_rejected
177 | }
178 |
179 | def _generate_loss_mask(self, input_ids):
180 | loss_mask = [0] * len(input_ids)
181 | i = 0
182 | while i < len(input_ids):
183 | if input_ids[i:i + len(self.bos_id)] == self.bos_id:
184 | start = i + len(self.bos_id)
185 | end = start
186 | while end < len(input_ids):
187 | if input_ids[end:end + len(self.eos_id)] == self.eos_id:
188 | break
189 | end += 1
190 | for j in range(start + 1, min(end + len(self.eos_id) + 1, self.max_length)):
191 | loss_mask[j] = 1
192 | i = end + len(self.eos_id) if end < len(input_ids) else len(input_ids)
193 | else:
194 | i += 1
195 | return loss_mask
196 |
197 |
198 | class RLAIFDataset(Dataset):
199 | def __init__(self, jsonl_path, tokenizer, max_length=1024):
200 | super().__init__()
201 | self.tokenizer = tokenizer
202 | self.max_length = max_length
203 | self.samples = self.load_data(jsonl_path)
204 | self.bos_id = tokenizer('<|im_start|>assistant', add_special_tokens=False).input_ids
205 | self.eos_id = tokenizer('<|im_end|>', add_special_tokens=False).input_ids
206 |
207 | def __len__(self):
208 | return len(self.samples)
209 |
210 | def load_data(self, path):
211 | samples = []
212 | with open(path, 'r', encoding='utf-8') as f:
213 | for line_num, line in enumerate(f, 1):
214 | data = json.loads(line.strip())
215 | samples.append(data)
216 | return samples
217 |
218 | def _create_chat_prompt(self, conversations):
219 | """构建符合ChatML格式的对话"""
220 | messages = []
221 | answer = ''
222 | for i, turn in enumerate(conversations):
223 | role = 'user' if i % 2 == 0 else 'assistant'
224 | messages.append({"role": role, "content": turn['content']})
225 | answer = turn['content']
226 | return self.tokenizer.apply_chat_template(
227 | messages[:-1],
228 | tokenize=False,
229 | add_generation_prompt=True
230 | ), answer
231 |
232 | def __getitem__(self, index):
233 | sample = self.samples[index]
234 |         # Build the chat prompt
235 | prompt, answer = self._create_chat_prompt(sample['conversations'])
236 |
237 | return {
238 | 'prompt': prompt,
239 | 'answer': answer
240 | }
241 |
242 |
243 | if __name__ == "__main__":
244 | pass
245 |
--------------------------------------------------------------------------------
/eval_model.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import random
3 | import warnings
4 | import numpy as np
5 | from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
6 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
7 | from model.model_lora import *
8 |
9 | warnings.filterwarnings('ignore')
10 |
11 |
12 | def init_model(args):
13 | tokenizer = AutoTokenizer.from_pretrained('./model/')
14 | if args.load == 0:
15 | moe_path = '_moe' if args.use_moe else ''
16 | modes = {0: 'pretrain', 1: 'full_sft', 2: 'rlhf', 3: 'reason', 4: 'grpo'}
17 | ckp = f'./{args.out_dir}/{modes[args.model_mode]}_{args.hidden_size}{moe_path}.pth'
18 |
19 | model = MiniMindForCausalLM(MiniMindConfig(
20 | hidden_size=args.hidden_size,
21 | num_hidden_layers=args.num_hidden_layers,
22 | use_moe=args.use_moe
23 | ))
24 |
25 | model.load_state_dict(torch.load(ckp, map_location=args.device), strict=True)
26 |
27 | if args.lora_name != 'None':
28 | apply_lora(model)
29 | load_lora(model, f'./{args.out_dir}/lora/{args.lora_name}_{args.hidden_size}.pth')
30 | else:
31 | transformers_model_path = './MiniMind2'
32 | tokenizer = AutoTokenizer.from_pretrained(transformers_model_path)
33 | model = AutoModelForCausalLM.from_pretrained(transformers_model_path, trust_remote_code=True)
34 |     print(f'MiniMind parameter count: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.2f}M(illion)')
35 | return model.eval().to(args.device), tokenizer
36 |
37 |
38 | def get_prompt_datas(args):
39 | if args.model_mode == 0:
40 |         # Text-continuation ability of the pretrained model (it cannot chat)
41 | prompt_datas = [
42 | '马克思主义基本原理',
43 | '人类大脑的主要功能',
44 | '万有引力原理是',
45 | '世界上最高的山峰是',
46 | '二氧化碳在空气中',
47 | '地球上最大的动物有',
48 | '杭州市的美食有'
49 | ]
50 | else:
51 | if args.lora_name == 'None':
52 |             # General chat questions
53 | prompt_datas = [
54 | '请介绍一下自己。',
55 | '你更擅长哪一个学科?',
56 | '鲁迅的《狂人日记》是如何批判封建礼教的?',
57 | '我咳嗽已经持续了两周,需要去医院检查吗?',
58 | '详细的介绍光速的物理概念。',
59 | '推荐一些杭州的特色美食吧。',
60 | '请为我讲解“大语言模型”这个概念。',
61 | '如何理解ChatGPT?',
62 | 'Introduce the history of the United States, please.'
63 | ]
64 | else:
65 |             # Domain-specific questions
66 | lora_prompt_datas = {
67 | 'lora_identity': [
68 | "你是ChatGPT吧。",
69 | "你叫什么名字?",
70 | "你和openai是什么关系?"
71 | ],
72 | 'lora_medical': [
73 | '我最近经常感到头晕,可能是什么原因?',
74 | '我咳嗽已经持续了两周,需要去医院检查吗?',
75 | '服用抗生素时需要注意哪些事项?',
76 | '体检报告中显示胆固醇偏高,我该怎么办?',
77 | '孕妇在饮食上需要注意什么?',
78 | '老年人如何预防骨质疏松?',
79 | '我最近总是感到焦虑,应该怎么缓解?',
80 | '如果有人突然晕倒,应该如何急救?'
81 | ],
82 | }
83 | prompt_datas = lora_prompt_datas[args.lora_name]
84 |
85 | return prompt_datas
86 |
87 |
88 | # Set a reproducible random seed
89 | def setup_seed(seed):
90 | random.seed(seed)
91 | np.random.seed(seed)
92 | torch.manual_seed(seed)
93 | torch.cuda.manual_seed(seed)
94 | torch.cuda.manual_seed_all(seed)
95 | torch.backends.cudnn.deterministic = True
96 | torch.backends.cudnn.benchmark = False
97 |
98 |
99 | def main():
100 | parser = argparse.ArgumentParser(description="Chat with MiniMind")
101 | parser.add_argument('--lora_name', default='None', type=str)
102 | parser.add_argument('--out_dir', default='out', type=str)
103 | parser.add_argument('--temperature', default=0.85, type=float)
104 | parser.add_argument('--top_p', default=0.85, type=float)
105 | parser.add_argument('--device', default='cuda' if torch.cuda.is_available() else 'cpu', type=str)
106 |     # max_seq_len (maximum output length) does not imply the model performs well at that length; it only prevents QA truncation
107 | # MiniMind2-moe (145M):(hidden_size=640, num_hidden_layers=8, use_moe=True)
108 | # MiniMind2-Small (26M):(hidden_size=512, num_hidden_layers=8)
109 | # MiniMind2 (104M):(hidden_size=768, num_hidden_layers=16)
110 | parser.add_argument('--hidden_size', default=512, type=int)
111 | parser.add_argument('--num_hidden_layers', default=8, type=int)
112 | parser.add_argument('--max_seq_len', default=8192, type=int)
113 |     parser.add_argument('--use_moe', default=False, action='store_true')  # type=bool would treat any non-empty string as True
114 |     # Number of history turns carried as context for the current query
115 |     # history_cnt should be even: one [user question, model answer] pair per group; 0 means no history is carried
116 |     # Without length-extrapolation fine-tuning, quality inevitably degrades on longer chat_template contexts, so set this with care
117 | parser.add_argument('--history_cnt', default=0, type=int)
118 |     parser.add_argument('--load', default=0, type=int, help="0: native torch weights, 1: load via transformers")
119 | parser.add_argument('--model_mode', default=1, type=int,
120 |                         help="0: pretrained model, 1: SFT-Chat model, 2: RLHF-Chat model, 3: Reason model, 4: RLAIF-Chat model")
121 | args = parser.parse_args()
122 |
123 | model, tokenizer = init_model(args)
124 |
125 | prompts = get_prompt_datas(args)
126 | test_mode = int(input('[0] 自动测试\n[1] 手动输入\n'))
127 | streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
128 |
129 | messages = []
130 | for idx, prompt in enumerate(prompts if test_mode == 0 else iter(lambda: input('👶: '), '')):
131 | setup_seed(random.randint(0, 2048))
132 |         # setup_seed(2025)  # use a fixed seed instead for reproducible outputs
133 | if test_mode == 0: print(f'👶: {prompt}')
134 |
135 | messages = messages[-args.history_cnt:] if args.history_cnt else []
136 | messages.append({"role": "user", "content": prompt})
137 |
138 | new_prompt = tokenizer.apply_chat_template(
139 | messages,
140 | tokenize=False,
141 | add_generation_prompt=True
142 | ) if args.model_mode != 0 else (tokenizer.bos_token + prompt)
143 |
144 | inputs = tokenizer(
145 | new_prompt,
146 | return_tensors="pt",
147 | truncation=True
148 | ).to(args.device)
149 |
150 | print('🤖️: ', end='')
151 | generated_ids = model.generate(
152 | inputs["input_ids"],
153 | max_new_tokens=args.max_seq_len,
154 | num_return_sequences=1,
155 | do_sample=True,
156 | attention_mask=inputs["attention_mask"],
157 | pad_token_id=tokenizer.pad_token_id,
158 | eos_token_id=tokenizer.eos_token_id,
159 | streamer=streamer,
160 | top_p=args.top_p,
161 | temperature=args.temperature
162 | )
163 |
164 | response = tokenizer.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
165 | messages.append({"role": "assistant", "content": response})
166 | print('\n\n')
167 |
168 |
169 | if __name__ == "__main__":
170 | main()
171 |
--------------------------------------------------------------------------------
/images/1-wiki.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/1-wiki.png
--------------------------------------------------------------------------------
/images/2-wiki.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/2-wiki.png
--------------------------------------------------------------------------------
/images/3-wiki.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/3-wiki.png
--------------------------------------------------------------------------------
/images/4-wiki.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/4-wiki.png
--------------------------------------------------------------------------------
/images/5-wiki.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/5-wiki.png
--------------------------------------------------------------------------------
/images/LLM-structure-moe.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/LLM-structure-moe.png
--------------------------------------------------------------------------------
/images/LLM-structure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/LLM-structure.png
--------------------------------------------------------------------------------
/images/and_huggingface.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/and_huggingface.png
--------------------------------------------------------------------------------
/images/and_modelscope.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/and_modelscope.png
--------------------------------------------------------------------------------
/images/compare_radar.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/compare_radar.png
--------------------------------------------------------------------------------
/images/dataset.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/dataset.jpg
--------------------------------------------------------------------------------
/images/gpt3_config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/gpt3_config.png
--------------------------------------------------------------------------------
/images/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/logo.png
--------------------------------------------------------------------------------
/images/logo2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/logo2.png
--------------------------------------------------------------------------------
/images/minimind2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/minimind2.gif
--------------------------------------------------------------------------------
/images/pre_512_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/pre_512_loss.png
--------------------------------------------------------------------------------
/images/pre_768_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/pre_768_loss.png
--------------------------------------------------------------------------------
/images/sft_512_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/sft_512_loss.png
--------------------------------------------------------------------------------
/images/sft_768_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/images/sft_768_loss.png
--------------------------------------------------------------------------------
/model/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jingyaogong/minimind/561979c7e34ed48bce320598952d892dce33ccd6/model/__init__.py
--------------------------------------------------------------------------------
/model/model_lora.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import optim, nn
3 |
4 |
5 | # Define the LoRA adapter structure
6 | class LoRA(nn.Module):
7 | def __init__(self, in_features, out_features, rank):
8 | super().__init__()
9 |         self.rank = rank  # LoRA rank: controls the size of the low-rank matrices
10 |         self.A = nn.Linear(in_features, rank, bias=False)  # low-rank matrix A
11 |         self.B = nn.Linear(rank, out_features, bias=False)  # low-rank matrix B
12 |         # Gaussian initialization for matrix A
13 | self.A.weight.data.normal_(mean=0.0, std=0.02)
14 |         # Zero initialization for matrix B
15 | self.B.weight.data.zero_()
16 |
17 | def forward(self, x):
18 | return self.B(self.A(x))
19 |
20 |
21 | def apply_lora(model, rank=8):
22 | for name, module in model.named_modules():
23 | if isinstance(module, nn.Linear) and module.weight.shape[0] == module.weight.shape[1]:
24 | lora = LoRA(module.weight.shape[0], module.weight.shape[1], rank=rank).to(model.device)
25 | setattr(module, "lora", lora)
26 | original_forward = module.forward
27 |
28 |             # Bind the original forward and the adapter explicitly via default arguments
29 | def forward_with_lora(x, layer1=original_forward, layer2=lora):
30 | return layer1(x) + layer2(x)
31 |
32 | module.forward = forward_with_lora
33 |
34 |
35 | def load_lora(model, path):
36 | state_dict = torch.load(path, map_location=model.device)
37 | for name, module in model.named_modules():
38 | if hasattr(module, 'lora'):
39 | lora_state = {k.replace(f'{name}.lora.', ''): v for k, v in state_dict.items() if f'{name}.lora.' in k}
40 | module.lora.load_state_dict(lora_state)
41 |
42 |
43 | def save_lora(model, path):
44 | state_dict = {}
45 | for name, module in model.named_modules():
46 | if hasattr(module, 'lora'):
47 | lora_state = {f'{name}.lora.{k}': v for k, v in module.lora.state_dict().items()}
48 | state_dict.update(lora_state)
49 | torch.save(state_dict, path)
50 |
--------------------------------------------------------------------------------
/model/model_minimind.py:
--------------------------------------------------------------------------------
1 | # 📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘
2 | # MiniMind Config
3 | # 📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘
4 |
5 | from transformers import PretrainedConfig
6 |
7 |
8 | class MiniMindConfig(PretrainedConfig):
9 | model_type = "minimind"
10 |
11 | def __init__(
12 | self,
13 | dropout: float = 0.0,
14 | bos_token_id: int = 1,
15 | eos_token_id: int = 2,
16 | hidden_act: str = 'silu',
17 | hidden_size: int = 512,
18 | intermediate_size: int = None,
19 | max_position_embeddings: int = 32768,
20 | num_attention_heads: int = 8,
21 | num_hidden_layers: int = 8,
22 | num_key_value_heads: int = 2,
23 | vocab_size: int = 6400,
24 | rms_norm_eps: float = 1e-05,
25 |             rope_theta: float = 1000000.0,
26 | flash_attn: bool = True,
27 | ####################################################
28 | # Here are the specific configurations of MOE
29 | # When use_moe is false, the following is invalid
30 | ####################################################
31 | use_moe: bool = False,
32 | num_experts_per_tok: int = 2,
33 | n_routed_experts: int = 4,
34 | n_shared_experts: int = 1,
35 | scoring_func: str = 'softmax',
36 | aux_loss_alpha: float = 0.1,
37 | seq_aux: bool = True,
38 | norm_topk_prob: bool = True,
39 | **kwargs
40 | ):
41 | super().__init__(**kwargs)
42 | self.dropout = dropout
43 | self.bos_token_id = bos_token_id
44 | self.eos_token_id = eos_token_id
45 | self.hidden_act = hidden_act
46 | self.hidden_size = hidden_size
47 | self.intermediate_size = intermediate_size
48 | self.max_position_embeddings = max_position_embeddings
49 | self.num_attention_heads = num_attention_heads
50 | self.num_hidden_layers = num_hidden_layers
51 | self.num_key_value_heads = num_key_value_heads
52 | self.vocab_size = vocab_size
53 | self.rms_norm_eps = rms_norm_eps
54 | self.rope_theta = rope_theta
55 | self.flash_attn = flash_attn
56 | ####################################################
57 |         # MoE-specific configuration
58 |         # When use_moe is False, the settings below are ignored
59 | ####################################################
60 | self.use_moe = use_moe
61 |         self.num_experts_per_tok = num_experts_per_tok  # number of experts selected per token
62 |         self.n_routed_experts = n_routed_experts  # total number of routed experts
63 |         self.n_shared_experts = n_shared_experts  # number of shared experts
64 |         self.scoring_func = scoring_func  # scoring function, defaults to 'softmax'
65 |         self.aux_loss_alpha = aux_loss_alpha  # weight of the auxiliary load-balancing loss
66 |         self.seq_aux = seq_aux  # whether to compute the auxiliary loss at the sequence level
67 |         self.norm_topk_prob = norm_topk_prob  # whether to normalize the top-k probabilities
68 |
69 |
70 | # 📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘
71 | # MiniMind Model
72 | # 📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘📘
73 |
74 | import math
75 | import torch
76 | from torch import nn
77 | from transformers.activations import ACT2FN
78 | from typing import Optional, Tuple, List, Union
79 | import torch.nn.functional as F
80 | from transformers import PreTrainedModel, GenerationMixin, PretrainedConfig
81 | from transformers.modeling_outputs import CausalLMOutputWithPast
82 |
83 |
84 | class RMSNorm(torch.nn.Module):
85 | def __init__(self, dim: int, eps: float = 1e-5):
86 | super().__init__()
87 | self.eps = eps
88 | self.weight = nn.Parameter(torch.ones(dim))
89 |
90 | def _norm(self, x):
91 | return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
92 |
93 | def forward(self, x):
94 | return self.weight * self._norm(x.float()).type_as(x)
95 |
96 |
97 | def precompute_freqs_cis(dim: int, end: int = int(32 * 1024), theta: float = 1e6):
98 | freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim))
99 | t = torch.arange(end, device=freqs.device)
100 | freqs = torch.outer(t, freqs).float()
101 | freqs_cos = torch.cat([torch.cos(freqs), torch.cos(freqs)], dim=-1)
102 | freqs_sin = torch.cat([torch.sin(freqs), torch.sin(freqs)], dim=-1)
103 | return freqs_cos, freqs_sin
104 |
105 |
106 | def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
107 | def rotate_half(x):
108 | return torch.cat((-x[..., x.shape[-1] // 2:], x[..., : x.shape[-1] // 2]), dim=-1)
109 |
110 | q_embed = (q * cos.unsqueeze(unsqueeze_dim)) + (rotate_half(q) * sin.unsqueeze(unsqueeze_dim))
111 | k_embed = (k * cos.unsqueeze(unsqueeze_dim)) + (rotate_half(k) * sin.unsqueeze(unsqueeze_dim))
112 | return q_embed, k_embed
113 |
114 |
115 | def repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor:
116 | """torch.repeat_interleave(x, dim=2, repeats=n_rep)"""
117 | bs, slen, num_key_value_heads, head_dim = x.shape
118 | if n_rep == 1:
119 | return x
120 | return (
121 | x[:, :, :, None, :]
122 | .expand(bs, slen, num_key_value_heads, n_rep, head_dim)
123 | .reshape(bs, slen, num_key_value_heads * n_rep, head_dim)
124 | )
125 |
126 |
127 | class Attention(nn.Module):
128 | def __init__(self, args: MiniMindConfig):
129 | super().__init__()
130 | self.num_key_value_heads = args.num_attention_heads if args.num_key_value_heads is None else args.num_key_value_heads
131 | assert args.num_attention_heads % self.num_key_value_heads == 0
132 | self.n_local_heads = args.num_attention_heads
133 | self.n_local_kv_heads = self.num_key_value_heads
134 | self.n_rep = self.n_local_heads // self.n_local_kv_heads
135 | self.head_dim = args.hidden_size // args.num_attention_heads
136 | self.q_proj = nn.Linear(args.hidden_size, args.num_attention_heads * self.head_dim, bias=False)
137 | self.k_proj = nn.Linear(args.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
138 | self.v_proj = nn.Linear(args.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
139 | self.o_proj = nn.Linear(args.num_attention_heads * self.head_dim, args.hidden_size, bias=False)
140 | self.attn_dropout = nn.Dropout(args.dropout)
141 | self.resid_dropout = nn.Dropout(args.dropout)
142 | self.dropout = args.dropout
143 | self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention') and args.flash_attn
144 | # print("WARNING: using slow attention. Flash Attention requires PyTorch >= 2.0")
145 |
146 | def forward(self,
147 | x: torch.Tensor,
148 |                 position_embeddings: Tuple[torch.Tensor, torch.Tensor],  # precomputed (cos, sin) pair
149 | past_key_value: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
150 | use_cache=False,
151 | attention_mask: Optional[torch.Tensor] = None):
152 | bsz, seq_len, _ = x.shape
153 | xq, xk, xv = self.q_proj(x), self.k_proj(x), self.v_proj(x)
154 | xq = xq.view(bsz, seq_len, self.n_local_heads, self.head_dim)
155 | xk = xk.view(bsz, seq_len, self.n_local_kv_heads, self.head_dim)
156 | xv = xv.view(bsz, seq_len, self.n_local_kv_heads, self.head_dim)
157 |
158 | cos, sin = position_embeddings
159 | xq, xk = apply_rotary_pos_emb(xq, xk, cos[:seq_len], sin[:seq_len])
160 |
161 |         # KV cache
162 | if past_key_value is not None:
163 | xk = torch.cat([past_key_value[0], xk], dim=1)
164 | xv = torch.cat([past_key_value[1], xv], dim=1)
165 | past_kv = (xk, xv) if use_cache else None
166 |
167 | xq, xk, xv = (
168 | xq.transpose(1, 2),
169 | repeat_kv(xk, self.n_rep).transpose(1, 2),
170 | repeat_kv(xv, self.n_rep).transpose(1, 2)
171 | )
172 |
173 | if self.flash and seq_len != 1:
174 | dropout_p = self.dropout if self.training else 0.0
175 | attn_mask = None
176 | if attention_mask is not None:
177 | attn_mask = attention_mask.view(bsz, 1, 1, -1).expand(bsz, self.n_local_heads, seq_len, -1)
178 |                 attn_mask = attn_mask.bool()
179 |
180 | output = F.scaled_dot_product_attention(xq, xk, xv, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=True)
181 | else:
182 | scores = (xq @ xk.transpose(-2, -1)) / math.sqrt(self.head_dim)
183 | scores = scores + torch.triu(
184 | torch.full((seq_len, seq_len), float("-inf"), device=scores.device),
185 | diagonal=1
186 |             ).unsqueeze(0).unsqueeze(0)  # add causal mask
187 |
188 | if attention_mask is not None:
189 | extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
190 | extended_attention_mask = (1.0 - extended_attention_mask) * -1e9
191 | scores = scores + extended_attention_mask
192 |
193 | scores = F.softmax(scores.float(), dim=-1).type_as(xq)
194 | scores = self.attn_dropout(scores)
195 | output = scores @ xv
196 |
197 | output = output.transpose(1, 2).reshape(bsz, seq_len, -1)
198 | output = self.resid_dropout(self.o_proj(output))
199 | return output, past_kv
200 |
201 |
202 | class FeedForward(nn.Module):
203 | def __init__(self, config: MiniMindConfig):
204 | super().__init__()
205 | if config.intermediate_size is None:
206 | intermediate_size = int(config.hidden_size * 8 / 3)
207 | config.intermediate_size = 64 * ((intermediate_size + 64 - 1) // 64)
208 | self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
209 | self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)
210 | self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
211 | self.dropout = nn.Dropout(config.dropout)
212 | self.act_fn = ACT2FN[config.hidden_act]
213 |
214 | def forward(self, x):
215 | return self.dropout(self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)))
216 |
217 |
218 | class MoEGate(nn.Module):
219 | def __init__(self, config: MiniMindConfig):
220 | super().__init__()
221 | self.config = config
222 | self.top_k = config.num_experts_per_tok
223 | self.n_routed_experts = config.n_routed_experts
224 |
225 | self.scoring_func = config.scoring_func
226 | self.alpha = config.aux_loss_alpha
227 | self.seq_aux = config.seq_aux
228 |
229 | self.norm_topk_prob = config.norm_topk_prob
230 | self.gating_dim = config.hidden_size
231 | self.weight = nn.Parameter(torch.empty((self.n_routed_experts, self.gating_dim)))
232 | self.reset_parameters()
233 |
234 | def reset_parameters(self) -> None:
235 | import torch.nn.init as init
236 | init.kaiming_uniform_(self.weight, a=math.sqrt(5))
237 |
238 | def forward(self, hidden_states):
239 | bsz, seq_len, h = hidden_states.shape
240 | hidden_states = hidden_states.view(-1, h)
241 | logits = F.linear(hidden_states, self.weight, None)
242 | if self.scoring_func == 'softmax':
243 | scores = logits.softmax(dim=-1)
244 | else:
245 |             raise NotImplementedError(f'unsupported scoring function for MoE gating: {self.scoring_func}')
246 |
247 | topk_weight, topk_idx = torch.topk(scores, k=self.top_k, dim=-1, sorted=False)
248 |
249 | if self.top_k > 1 and self.norm_topk_prob:
250 | denominator = topk_weight.sum(dim=-1, keepdim=True) + 1e-20
251 | topk_weight = topk_weight / denominator
252 |
253 | if self.training and self.alpha > 0.0:
254 | scores_for_aux = scores
255 | aux_topk = self.top_k
256 | topk_idx_for_aux_loss = topk_idx.view(bsz, -1)
257 | if self.seq_aux:
258 | scores_for_seq_aux = scores_for_aux.view(bsz, seq_len, -1)
259 | ce = torch.zeros(bsz, self.n_routed_experts, device=hidden_states.device)
260 | ce.scatter_add_(1, topk_idx_for_aux_loss,
261 | torch.ones(bsz, seq_len * aux_topk, device=hidden_states.device)).div_(
262 | seq_len * aux_topk / self.n_routed_experts)
263 | aux_loss = (ce * scores_for_seq_aux.mean(dim=1)).sum(dim=1).mean() * self.alpha
264 | else:
265 | mask_ce = F.one_hot(topk_idx_for_aux_loss.view(-1), num_classes=self.n_routed_experts)
266 | ce = mask_ce.float().mean(0)
267 | Pi = scores_for_aux.mean(0)
268 | fi = ce * self.n_routed_experts
269 | aux_loss = (Pi * fi).sum() * self.alpha
270 | else:
271 | aux_loss = 0
272 | return topk_idx, topk_weight, aux_loss
273 |
274 |
275 | class MOEFeedForward(nn.Module):
276 | def __init__(self, config: MiniMindConfig):
277 | super().__init__()
278 | self.config = config
279 | self.experts = nn.ModuleList([
280 | FeedForward(config)
281 | for _ in range(config.n_routed_experts)
282 | ])
283 | self.gate = MoEGate(config)
284 | if config.n_shared_experts > 0:
285 | self.shared_experts = nn.ModuleList([
286 | FeedForward(config)
287 | for _ in range(config.n_shared_experts)
288 | ])
289 |
290 | def forward(self, x):
291 | identity = x
292 | orig_shape = x.shape
293 | bsz, seq_len, _ = x.shape
294 |         # select experts via the gating network
295 | topk_idx, topk_weight, aux_loss = self.gate(x)
296 | x = x.view(-1, x.shape[-1])
297 | flat_topk_idx = topk_idx.view(-1)
298 | if self.training:
299 | x = x.repeat_interleave(self.config.num_experts_per_tok, dim=0)
300 | y = torch.empty_like(x, dtype=torch.float16)
301 | for i, expert in enumerate(self.experts):
302 |                 y[flat_topk_idx == i] = expert(x[flat_topk_idx == i]).to(y.dtype)  # keep dtypes consistent
303 | y = (y.view(*topk_weight.shape, -1) * topk_weight.unsqueeze(-1)).sum(dim=1)
304 | y = y.view(*orig_shape)
305 | else:
306 | y = self.moe_infer(x, flat_topk_idx, topk_weight.view(-1, 1)).view(*orig_shape)
307 | if self.config.n_shared_experts > 0:
308 | for expert in self.shared_experts:
309 | y = y + expert(identity)
310 | self.aux_loss = aux_loss
311 | return y
312 |
313 | @torch.no_grad()
314 | def moe_infer(self, x, flat_expert_indices, flat_expert_weights):
315 | expert_cache = torch.zeros_like(x)
316 | idxs = flat_expert_indices.argsort()
317 | tokens_per_expert = flat_expert_indices.bincount().cpu().numpy().cumsum(0)
318 | token_idxs = idxs // self.config.num_experts_per_tok
319 |         # Example: if tokens_per_expert = [6, 15, 20, 26], then tokens_per_expert.shape[0] is the number of experts (4 here),
320 |         # and with token_idxs = [3, 7, 19, 21, 24, 25, 4, 5, 6, 10, 11, 12...]:
321 |         # token_idxs[:6] -> [3, 7, 19, 21, 24, 25] are the 6 token positions handled by expert 0 (a token may be routed to several experts, depending on num_experts_per_tok),
322 |         # and the next 9 positions token_idxs[6:15] -> [4, 5, 6, 10, 11, 12...] belong to expert 1, and so on.
323 | for i, end_idx in enumerate(tokens_per_expert):
324 | start_idx = 0 if i == 0 else tokens_per_expert[i - 1]
325 | if start_idx == end_idx:
326 | continue
327 | expert = self.experts[i]
328 | exp_token_idx = token_idxs[start_idx:end_idx]
329 | expert_tokens = x[exp_token_idx]
330 | expert_out = expert(expert_tokens).to(expert_cache.dtype)
331 | expert_out.mul_(flat_expert_weights[idxs[start_idx:end_idx]])
332 | expert_cache.scatter_add_(0, exp_token_idx.view(-1, 1).repeat(1, x.shape[-1]), expert_out)
333 |
334 | return expert_cache
335 |
336 |
337 | class MiniMindBlock(nn.Module):
338 | def __init__(self, layer_id: int, config: MiniMindConfig):
339 | super().__init__()
340 | self.num_attention_heads = config.num_attention_heads
341 | self.hidden_size = config.hidden_size
342 | self.head_dim = config.hidden_size // config.num_attention_heads
343 | self.self_attn = Attention(config)
344 |
345 | self.layer_id = layer_id
346 | self.input_layernorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
347 | self.post_attention_layernorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
348 | self.mlp = FeedForward(config) if not config.use_moe else MOEFeedForward(config)
349 |
350 | def forward(self, hidden_states, position_embeddings, past_key_value=None, use_cache=False, attention_mask=None):
351 | residual = hidden_states
352 | hidden_states, present_key_value = self.self_attn(
353 | self.input_layernorm(hidden_states), position_embeddings,
354 | past_key_value, use_cache, attention_mask
355 | )
356 | hidden_states += residual
357 | hidden_states = hidden_states + self.mlp(self.post_attention_layernorm(hidden_states))
358 | return hidden_states, present_key_value
359 |
360 |
361 | class MiniMindModel(nn.Module):
362 | def __init__(self, config: MiniMindConfig):
363 | super().__init__()
364 | self.config = config
365 | self.vocab_size, self.num_hidden_layers = config.vocab_size, config.num_hidden_layers
366 | self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
367 | self.dropout = nn.Dropout(config.dropout)
368 | self.layers = nn.ModuleList([MiniMindBlock(l, config) for l in range(self.num_hidden_layers)])
369 | self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
370 |
371 | freqs_cos, freqs_sin = precompute_freqs_cis(dim=config.hidden_size // config.num_attention_heads,
372 | end=config.max_position_embeddings, theta=config.rope_theta)
373 | self.register_buffer("freqs_cos", freqs_cos, persistent=False)
374 | self.register_buffer("freqs_sin", freqs_sin, persistent=False)
375 |
376 | def forward(self,
377 | input_ids: Optional[torch.Tensor] = None,
378 | attention_mask: Optional[torch.Tensor] = None,
379 | past_key_values: Optional[List[Tuple[torch.Tensor, torch.Tensor]]] = None,
380 | use_cache: bool = False,
381 | **kwargs):
382 | batch_size, seq_length = input_ids.shape
383 | past_key_values = past_key_values or [None] * len(self.layers)
384 | start_pos = past_key_values[0][0].shape[1] if past_key_values[0] is not None else 0
385 |
386 | hidden_states = self.dropout(self.embed_tokens(input_ids))
387 |
388 | position_embeddings = (
389 | self.freqs_cos[start_pos:start_pos + seq_length],
390 | self.freqs_sin[start_pos:start_pos + seq_length]
391 | )
392 |
393 | presents = []
394 | for layer_idx, (layer, past_key_value) in enumerate(zip(self.layers, past_key_values)):
395 | hidden_states, present = layer(
396 | hidden_states,
397 | position_embeddings,
398 | past_key_value=past_key_value,
399 | use_cache=use_cache,
400 | attention_mask=attention_mask
401 | )
402 | presents.append(present)
403 |
404 | hidden_states = self.norm(hidden_states)
405 |
406 | aux_loss = sum(
407 | layer.mlp.aux_loss
408 | for layer in self.layers
409 | if isinstance(layer.mlp, MOEFeedForward)
410 | )
411 |
412 | return hidden_states, presents, aux_loss
413 |
414 |
415 | class MiniMindForCausalLM(PreTrainedModel, GenerationMixin):
416 | config_class = MiniMindConfig
417 |
418 | def __init__(self, config: MiniMindConfig = None):
419 | self.config = config or MiniMindConfig()
420 | super().__init__(self.config)
421 | self.model = MiniMindModel(self.config)
422 | self.lm_head = nn.Linear(self.config.hidden_size, self.config.vocab_size, bias=False)
423 | self.model.embed_tokens.weight = self.lm_head.weight
424 | self.OUT = CausalLMOutputWithPast()
425 |
426 | def forward(self,
427 | input_ids: Optional[torch.Tensor] = None,
428 | attention_mask: Optional[torch.Tensor] = None,
429 | past_key_values: Optional[List[Tuple[torch.Tensor, torch.Tensor]]] = None,
430 | use_cache: bool = False,
431 | logits_to_keep: Union[int, torch.Tensor] = 0,
432 | **args):
433 | h, past_kvs, aux_loss = self.model(
434 | input_ids=input_ids,
435 | attention_mask=attention_mask,
436 | past_key_values=past_key_values,
437 | use_cache=use_cache,
438 | **args
439 | )
440 | slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
441 | logits = self.lm_head(h[:, slice_indices, :])
442 |         self.OUT['last_hidden_state'] = h
443 |         self.OUT['logits'] = logits
444 |         self.OUT['aux_loss'] = aux_loss
445 |         self.OUT['past_key_values'] = past_kvs
446 | return self.OUT
447 |
--------------------------------------------------------------------------------
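`moe_infer` above batches tokens per expert by argsorting the flattened expert assignments, then recovers the owning token of each sorted slot with an integer division by `num_experts_per_tok`. A framework-free sketch of that indexing (plain lists stand in for tensors, and the routing pattern here is made up):

```python
num_experts_per_tok = 2
# token0 -> experts (1, 0), token1 -> (0, 1), token2 -> (1, 0)
flat_expert_indices = [1, 0, 0, 1, 1, 0]

# argsort by expert id (stable), like flat_expert_indices.argsort()
idxs = sorted(range(len(flat_expert_indices)), key=lambda i: flat_expert_indices[i])
# which token each sorted slot belongs to, like idxs // num_experts_per_tok
token_idxs = [i // num_experts_per_tok for i in idxs]

# cumulative per-expert counts, the analogue of bincount().cumsum()
n_experts = max(flat_expert_indices) + 1
counts = [flat_expert_indices.count(e) for e in range(n_experts)]
ends = [sum(counts[:e + 1]) for e in range(n_experts)]

assert ends == [3, 6]                    # expert 0 owns slots [0:3], expert 1 owns [3:6]
assert token_idxs == [0, 1, 2, 0, 1, 2]  # each expert sees tokens 0, 1, 2 once
```

With these two arrays, each expert can process its tokens in one contiguous slice, which is what the `for i, end_idx in enumerate(tokens_per_expert)` loop above does before scattering the weighted outputs back.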
/model/tokenizer_config.json:
--------------------------------------------------------------------------------
1 | {
2 | "add_bos_token": false,
3 | "add_eos_token": false,
4 | "add_prefix_space": false,
5 | "added_tokens_decoder": {
6 | "0": {
7 | "content": "<|endoftext|>",
8 | "lstrip": false,
9 | "normalized": false,
10 | "rstrip": false,
11 | "single_word": false,
12 | "special": true
13 | },
14 | "1": {
15 | "content": "<|im_start|>",
16 | "lstrip": false,
17 | "normalized": false,
18 | "rstrip": false,
19 | "single_word": false,
20 | "special": true
21 | },
22 | "2": {
23 | "content": "<|im_end|>",
24 | "lstrip": false,
25 | "normalized": false,
26 | "rstrip": false,
27 | "single_word": false,
28 | "special": true
29 | }
30 | },
31 | "additional_special_tokens": [],
32 | "bos_token": "<|im_start|>",
33 | "clean_up_tokenization_spaces": false,
34 | "eos_token": "<|im_end|>",
35 | "legacy": true,
36 | "model_max_length": 32768,
37 | "pad_token": "<|endoftext|>",
38 | "sp_model_kwargs": {},
39 | "spaces_between_special_tokens": false,
40 | "tokenizer_class": "PreTrainedTokenizerFast",
41 | "unk_token": "<|endoftext|>",
42 | "chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{{ '<|im_start|>system\\n' + system_message + '<|im_end|>\\n' }}{% else %}{{ '<|im_start|>system\\nYou are a helpful assistant<|im_end|>\\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\\n' + content + '<|im_end|>\\n<|im_start|>assistant\\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|im_end|>' + '\\n' }}{% endif %}{% endfor %}"
43 | }
--------------------------------------------------------------------------------
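The `chat_template` above renders a ChatML-style prompt. A hand-rolled sketch of the same logic (not using `tokenizer.apply_chat_template`), to show the expected layout; note the Jinja template always appends the assistant header after a user turn, independent of any `add_generation_prompt` flag:

```python
def render(messages):
    # mirror the Jinja template: system block first, then user/assistant turns
    if messages and messages[0]['role'] == 'system':
        out = f"<|im_start|>system\n{messages[0]['content']}<|im_end|>\n"
    else:
        out = "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
    for m in messages:
        if m['role'] == 'user':
            out += f"<|im_start|>user\n{m['content']}<|im_end|>\n<|im_start|>assistant\n"
        elif m['role'] == 'assistant':
            out += f"{m['content']}<|im_end|>\n"
    return out

prompt = render([{'role': 'user', 'content': 'hi'}])
assert prompt == ("<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
                  "<|im_start|>user\nhi<|im_end|>\n<|im_start|>assistant\n")
```

The default system message is injected only when the first message is not a system turn, matching the `{% if messages[0]['role'] == 'system' %}` branch above.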
/requirements.txt:
--------------------------------------------------------------------------------
1 | datasets==2.21.0
2 | datasketch==1.6.4
3 | Flask==3.0.3
4 | Flask_Cors==4.0.0
5 | jieba==0.42.1
6 | jsonlines==4.0.0
7 | marshmallow==3.22.0
8 | matplotlib==3.10.0
9 | ngrok==1.4.0
10 | nltk==3.8
11 | numpy==1.26.4
12 | openai==1.59.6
13 | pandas==1.5.3
14 | peft==0.7.1
15 | psutil==5.9.8
16 | pydantic==2.8.2
17 | rich==13.7.1
18 | scikit_learn==1.5.1
19 | sentence_transformers==2.3.1
20 | simhash==2.1.2
21 | tiktoken==0.5.1
22 | transformers==4.48.0
23 | jinja2==3.1.2
25 | trl==0.13.0
26 | ujson==5.1.0
27 | wandb==0.18.3
28 | streamlit==1.30.0
29 | torch==2.3.0
30 | torchvision==0.18.0
--------------------------------------------------------------------------------
/scripts/chat_openai_api.py:
--------------------------------------------------------------------------------
1 | from openai import OpenAI
2 |
3 | client = OpenAI(
4 | api_key="ollama",
5 | base_url="http://127.0.0.1:8998/v1"
6 | )
7 | stream = True
8 | conversation_history_origin = []
9 | conversation_history = conversation_history_origin.copy()
10 | history_messages_num = 2  # use an even number (Q+A pairs); 0 sends each question as an independent QA with no history
11 | while True:
12 | query = input('[Q]: ')
13 | conversation_history.append({"role": "user", "content": query})
14 | response = client.chat.completions.create(
15 | model="minimind",
16 |         messages=conversation_history[-(history_messages_num + 1):],  # +1 keeps the question just appended; [-0:] would send everything
17 | stream=stream
18 | )
19 | if not stream:
20 | assistant_res = response.choices[0].message.content
21 | print('[A]: ', assistant_res)
22 | else:
23 | print('[A]: ', end='')
24 | assistant_res = ''
25 | for chunk in response:
26 | print(chunk.choices[0].delta.content or "", end="")
27 | assistant_res += chunk.choices[0].delta.content or ""
28 |
29 | conversation_history.append({"role": "assistant", "content": assistant_res})
30 | print('\n\n')
31 |
--------------------------------------------------------------------------------
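One subtlety in the history window above: a Python slice `lst[-0:]` returns the whole list, so windowing with the raw count needs care when `history_messages_num` is 0. A sketch of a window helper that always includes the just-appended question:

```python
def window(history, n):
    # n past messages plus the current question; n == 0 -> independent QA
    return history[-(n + 1):]

history = [
    {'role': 'user', 'content': 'q1'},
    {'role': 'assistant', 'content': 'a1'},
    {'role': 'user', 'content': 'q2'},
]
assert window(history, 0) == history[-1:]   # only the new question
assert window(history, 2) == history        # one past Q+A pair as context
```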
/scripts/convert_model.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | __package__ = "scripts"
5 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
6 | import torch
7 | import warnings
8 | from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaConfig, LlamaForCausalLM
9 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
10 |
11 | warnings.filterwarnings('ignore', category=UserWarning)
12 |
13 |
14 | # MoE models must be converted with this function
15 | def convert_torch2transformers_minimind(torch_path, transformers_path, dtype=torch.bfloat16):
16 | MiniMindConfig.register_for_auto_class()
17 | MiniMindForCausalLM.register_for_auto_class("AutoModelForCausalLM")
18 | lm_model = MiniMindForCausalLM(lm_config)
19 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
20 | state_dict = torch.load(torch_path, map_location=device)
21 | lm_model.load_state_dict(state_dict, strict=False)
22 |     lm_model = lm_model.to(dtype)  # convert weight precision
23 | model_params = sum(p.numel() for p in lm_model.parameters() if p.requires_grad)
24 |     print(f'Model parameters: {model_params / 1e6} million = {model_params / 1e9} B (billion)')
25 | lm_model.save_pretrained(transformers_path, safe_serialization=False)
26 | tokenizer = AutoTokenizer.from_pretrained('../model/')
27 | tokenizer.save_pretrained(transformers_path)
28 |     print(f"Model saved in Transformers-MiniMind format: {transformers_path}")
29 |
30 |
31 | # The LlamaForCausalLM structure is compatible with the third-party ecosystem
32 | def convert_torch2transformers_llama(torch_path, transformers_path, dtype=torch.bfloat16):
33 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
34 | state_dict = torch.load(torch_path, map_location=device)
35 | llama_config = LlamaConfig(
36 | vocab_size=lm_config.vocab_size,
37 | hidden_size=lm_config.hidden_size,
38 | intermediate_size=64 * ((int(lm_config.hidden_size * 8 / 3) + 64 - 1) // 64),
39 | num_hidden_layers=lm_config.num_hidden_layers,
40 | num_attention_heads=lm_config.num_attention_heads,
41 | num_key_value_heads=lm_config.num_key_value_heads,
42 | max_position_embeddings=lm_config.max_seq_len,
43 | rms_norm_eps=lm_config.rms_norm_eps,
44 | rope_theta=lm_config.rope_theta,
45 | )
46 | llama_model = LlamaForCausalLM(llama_config)
47 | llama_model.load_state_dict(state_dict, strict=False)
48 |     llama_model = llama_model.to(dtype)  # convert weight precision
49 | llama_model.save_pretrained(transformers_path)
50 | model_params = sum(p.numel() for p in llama_model.parameters() if p.requires_grad)
51 |     print(f'Model parameters: {model_params / 1e6} million = {model_params / 1e9} B (billion)')
52 | tokenizer = AutoTokenizer.from_pretrained('../model/')
53 | tokenizer.save_pretrained(transformers_path)
54 |     print(f"Model saved in Transformers-Llama format: {transformers_path}")
55 |
56 |
57 | def convert_transformers2torch(transformers_path, torch_path):
58 | model = AutoModelForCausalLM.from_pretrained(transformers_path, trust_remote_code=True)
59 | torch.save(model.state_dict(), torch_path)
60 |     print(f"Model saved in PyTorch format: {torch_path}")
61 |
62 |
63 |
64 | if __name__ == '__main__':
65 | lm_config = MiniMindConfig(hidden_size=768, num_hidden_layers=16, max_seq_len=8192, use_moe=False)
66 |
67 | torch_path = f"../out/full_sft_{lm_config.hidden_size}{'_moe' if lm_config.use_moe else ''}.pth"
68 |
69 | transformers_path = '../MiniMind2'
70 |
71 | convert_torch2transformers_minimind(torch_path, transformers_path)
72 |
73 |     # convert a transformers checkpoint back to a torch state dict
74 |     # convert_transformers2torch(transformers_path, torch_path)
75 |
--------------------------------------------------------------------------------
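Both `FeedForward` in model_minimind.py and the `LlamaConfig` built above size the MLP at roughly 8/3 of `hidden_size`, rounded up to the nearest multiple of 64. A quick check of that formula:

```python
def default_intermediate_size(hidden_size: int, multiple: int = 64) -> int:
    # ~8/3 expansion (SwiGLU convention), rounded up to a multiple of 64
    target = int(hidden_size * 8 / 3)
    return multiple * ((target + multiple - 1) // multiple)

assert default_intermediate_size(512) == 1408   # hidden_size=512 config
assert default_intermediate_size(768) == 2048   # hidden_size=768 config
```

Keeping the width a multiple of 64 is a common hardware-friendly choice; the exact constant here simply mirrors the two places the repo computes it.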
/scripts/serve_openai_api.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import sys
5 |
6 | __package__ = "scripts"
7 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
8 | import time
9 | import torch
10 | import warnings
11 | import uvicorn
12 |
13 | from threading import Thread
14 | from queue import Queue
15 | from fastapi import FastAPI, HTTPException
16 | from fastapi.responses import StreamingResponse
17 | from pydantic import BaseModel
18 | from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
19 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
20 | from model.model_lora import apply_lora, load_lora
21 |
22 | warnings.filterwarnings('ignore')
23 |
24 | app = FastAPI()
25 |
26 |
27 | def init_model(args):
28 | if args.load == 0:
29 | tokenizer = AutoTokenizer.from_pretrained('../model/')
30 | moe_path = '_moe' if args.use_moe else ''
31 | modes = {0: 'pretrain', 1: 'full_sft', 2: 'rlhf', 3: 'reason'}
32 | ckp = f'../{args.out_dir}/{modes[args.model_mode]}_{args.hidden_size}{moe_path}.pth'
33 | model = MiniMindForCausalLM(MiniMindConfig(
34 | hidden_size=args.hidden_size,
35 | num_hidden_layers=args.num_hidden_layers,
36 | max_seq_len=args.max_seq_len,
37 | use_moe=args.use_moe
38 | ))
39 | model.load_state_dict(torch.load(ckp, map_location=device), strict=True)
40 | if args.lora_name != 'None':
41 | apply_lora(model)
42 | load_lora(model, f'../{args.out_dir}/{args.lora_name}_{args.hidden_size}.pth')
43 | else:
44 | model_path = '../MiniMind2'
45 | model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
46 | tokenizer = AutoTokenizer.from_pretrained(model_path)
47 |     print(f'MiniMind model parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.2f}M')
48 | return model.eval().to(device), tokenizer
49 |
50 |
51 | class ChatRequest(BaseModel):
52 | model: str
53 | messages: list
54 | temperature: float = 0.7
55 | top_p: float = 0.92
56 | max_tokens: int = 8192
57 | stream: bool = False
58 | tools: list = []
59 |
60 |
61 | class CustomStreamer(TextStreamer):
62 | def __init__(self, tokenizer, queue):
63 | super().__init__(tokenizer, skip_prompt=True, skip_special_tokens=True)
64 | self.queue = queue
65 | self.tokenizer = tokenizer
66 |
67 | def on_finalized_text(self, text: str, stream_end: bool = False):
68 | self.queue.put(text)
69 | if stream_end:
70 | self.queue.put(None)
71 |
72 |
73 | def generate_stream_response(messages, temperature, top_p, max_tokens):
74 | try:
75 | new_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)[-max_tokens:]
76 | inputs = tokenizer(new_prompt, return_tensors="pt", truncation=True).to(device)
77 |
78 | queue = Queue()
79 | streamer = CustomStreamer(tokenizer, queue)
80 |
81 | def _generate():
82 | model.generate(
83 | inputs.input_ids,
84 | max_new_tokens=max_tokens,
85 | do_sample=True,
86 | temperature=temperature,
87 | top_p=top_p,
88 | attention_mask=inputs.attention_mask,
89 | pad_token_id=tokenizer.pad_token_id,
90 | eos_token_id=tokenizer.eos_token_id,
91 | streamer=streamer
92 | )
93 |
94 | Thread(target=_generate).start()
95 |
96 | while True:
97 | text = queue.get()
98 | if text is None:
99 | yield json.dumps({
100 | "choices": [{
101 | "delta": {},
102 | "finish_reason": "stop"
103 | }]
104 | }, ensure_ascii=False)
105 | break
106 |
107 | yield json.dumps({
108 | "choices": [{"delta": {"content": text}}]
109 | }, ensure_ascii=False)
110 |
111 | except Exception as e:
112 | yield json.dumps({"error": str(e)})
113 |
114 |
115 | @app.post("/v1/chat/completions")
116 | async def chat_completions(request: ChatRequest):
117 | try:
118 | if request.stream:
119 | return StreamingResponse(
120 | (f"data: {chunk}\n\n" for chunk in generate_stream_response(
121 | messages=request.messages,
122 | temperature=request.temperature,
123 | top_p=request.top_p,
124 | max_tokens=request.max_tokens
125 | )),
126 | media_type="text/event-stream"
127 | )
128 | else:
129 | new_prompt = tokenizer.apply_chat_template(
130 | request.messages,
131 | tokenize=False,
132 | add_generation_prompt=True
133 | )[-request.max_tokens:]
134 | inputs = tokenizer(new_prompt, return_tensors="pt", truncation=True).to(device)
135 | with torch.no_grad():
136 | generated_ids = model.generate(
137 | inputs["input_ids"],
138 | max_length=inputs["input_ids"].shape[1] + request.max_tokens,
139 | do_sample=True,
140 | attention_mask=inputs["attention_mask"],
141 | pad_token_id=tokenizer.pad_token_id,
142 | eos_token_id=tokenizer.eos_token_id,
143 | top_p=request.top_p,
144 | temperature=request.temperature
145 | )
146 | answer = tokenizer.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
147 | return {
148 | "id": f"chatcmpl-{int(time.time())}",
149 | "object": "chat.completion",
150 | "created": int(time.time()),
151 | "model": "minimind",
152 | "choices": [
153 | {
154 | "index": 0,
155 | "message": {"role": "assistant", "content": answer},
156 | "finish_reason": "stop"
157 | }
158 | ]
159 | }
160 | except Exception as e:
161 | raise HTTPException(status_code=500, detail=str(e))
162 |
163 |
164 | if __name__ == "__main__":
165 | parser = argparse.ArgumentParser(description="Server for MiniMind")
166 | parser.add_argument('--out_dir', default='out', type=str)
167 | parser.add_argument('--lora_name', default='None', type=str)
168 | parser.add_argument('--hidden_size', default=768, type=int)
169 | parser.add_argument('--num_hidden_layers', default=16, type=int)
170 | parser.add_argument('--max_seq_len', default=8192, type=int)
171 |     parser.add_argument('--use_moe', default=False, action='store_true')
172 |     parser.add_argument('--load', default=0, type=int, help="0: load native torch weights, 1: load via transformers")
173 |     parser.add_argument('--model_mode', default=1, type=int,
174 |                         help="0: pretrained model, 1: SFT-Chat model, 2: RLHF-Chat model, 3: Reason model")
175 | device = 'cuda' if torch.cuda.is_available() else 'cpu'
176 | model, tokenizer = init_model(parser.parse_args())
177 | uvicorn.run(app, host="0.0.0.0", port=8998)
178 |
--------------------------------------------------------------------------------
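The streaming endpoint above wraps each JSON chunk in Server-Sent-Events framing (`data: <json>\n\n`). A minimal sketch of producing and parsing one such event (the `parse` helper is hypothetical, illustrating what a client would do, and is not part of the server):

```python
import json

def frame(chunk: dict) -> str:
    # one SSE event, matching the f"data: {chunk}\n\n" generator above
    return f"data: {json.dumps(chunk, ensure_ascii=False)}\n\n"

def parse(event: str) -> dict:
    # client side: strip the "data: " prefix and trailing blank line
    assert event.startswith('data: ') and event.endswith('\n\n')
    return json.loads(event[len('data: '):].strip())

chunk = {"choices": [{"delta": {"content": "你好"}}]}
assert parse(frame(chunk)) == chunk
```

The terminal chunk in the server above carries an empty `delta` plus `"finish_reason": "stop"`, which is how a client knows the stream has ended.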
/scripts/train_tokenizer.py:
--------------------------------------------------------------------------------
1 | import random
2 | import json
3 | from tokenizers import (
4 | decoders,
5 | models,
6 | pre_tokenizers,
7 | trainers,
8 | Tokenizer,
9 | )
10 | import os
11 |
12 | random.seed(42)
13 |
14 |
15 | def train_tokenizer():
16 |     # read the JSONL file and yield the text field
17 | def read_texts_from_jsonl(file_path):
18 | with open(file_path, 'r', encoding='utf-8') as f:
19 | for line in f:
20 | data = json.loads(line)
21 | yield data['text']
22 |
23 | data_path = '../dataset/pretrain_hq.jsonl'
24 |
25 |     # initialize the tokenizer
26 | tokenizer = Tokenizer(models.BPE())
27 | tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
28 |
29 |     # define special tokens
30 | special_tokens = ["<|endoftext|>", "<|im_start|>", "<|im_end|>"]
31 |
32 |     # configure the trainer with the special tokens
33 | trainer = trainers.BpeTrainer(
34 | vocab_size=6400,
35 |         special_tokens=special_tokens,  # make sure these three tokens are included
36 | show_progress=True,
37 | initial_alphabet=pre_tokenizers.ByteLevel.alphabet()
38 | )
39 |
40 |     # Read the text data
41 | texts = read_texts_from_jsonl(data_path)
42 |
43 |     # Train the tokenizer
44 | tokenizer.train_from_iterator(texts, trainer=trainer)
45 |
46 |     # Set the decoder
47 | tokenizer.decoder = decoders.ByteLevel()
48 |
49 |     # Verify the special-token indices
50 | assert tokenizer.token_to_id("<|endoftext|>") == 0
51 | assert tokenizer.token_to_id("<|im_start|>") == 1
52 | assert tokenizer.token_to_id("<|im_end|>") == 2
53 |
54 |     # Save the tokenizer
55 | tokenizer_dir = "../model/"
56 | os.makedirs(tokenizer_dir, exist_ok=True)
57 | tokenizer.save(os.path.join(tokenizer_dir, "tokenizer.json"))
58 | tokenizer.model.save("../model/")
59 |
60 |     # Manually create the config file
61 | config = {
62 | "add_bos_token": False,
63 | "add_eos_token": False,
64 | "add_prefix_space": False,
65 | "added_tokens_decoder": {
66 | "0": {
67 | "content": "<|endoftext|>",
68 | "lstrip": False,
69 | "normalized": False,
70 | "rstrip": False,
71 | "single_word": False,
72 | "special": True
73 | },
74 | "1": {
75 | "content": "<|im_start|>",
76 | "lstrip": False,
77 | "normalized": False,
78 | "rstrip": False,
79 | "single_word": False,
80 | "special": True
81 | },
82 | "2": {
83 | "content": "<|im_end|>",
84 | "lstrip": False,
85 | "normalized": False,
86 | "rstrip": False,
87 | "single_word": False,
88 | "special": True
89 | }
90 | },
91 | "additional_special_tokens": [],
92 | "bos_token": "<|im_start|>",
93 | "clean_up_tokenization_spaces": False,
94 | "eos_token": "<|im_end|>",
95 | "legacy": True,
96 | "model_max_length": 32768,
97 | "pad_token": "<|endoftext|>",
98 | "sp_model_kwargs": {},
99 | "spaces_between_special_tokens": False,
100 | "tokenizer_class": "PreTrainedTokenizerFast",
101 | "unk_token": "<|endoftext|>",
102 | "chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{{ '<|im_start|>system\\n' + system_message + '<|im_end|>\\n' }}{% else %}{{ '<|im_start|>system\\nYou are a helpful assistant<|im_end|>\\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\\n' + content + '<|im_end|>\\n<|im_start|>assistant\\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|im_end|>' + '\\n' }}{% endif %}{% endfor %}"
103 | }
104 |
105 |     # Save the config file
106 | with open(os.path.join(tokenizer_dir, "tokenizer_config.json"), "w", encoding="utf-8") as config_file:
107 | json.dump(config, config_file, ensure_ascii=False, indent=4)
108 |
109 | print("Tokenizer training completed and saved.")
110 |
111 |
112 | def eval_tokenizer():
113 | from transformers import AutoTokenizer
114 |
115 |     # Load the trained tokenizer
116 | tokenizer = AutoTokenizer.from_pretrained("../model/")
117 |
118 | messages = [
119 | {"role": "system", "content": "你是一个优秀的聊天机器人,总是给我正确的回应!"},
120 | {"role": "user", "content": '你来自哪里?'},
121 | {"role": "assistant", "content": '我来自地球'}
122 | ]
123 | new_prompt = tokenizer.apply_chat_template(
124 | messages,
125 | tokenize=False
126 | )
127 | print(new_prompt)
128 |
129 |     # Actual vocabulary size (including special tokens)
130 |     actual_vocab_size = len(tokenizer)
131 |     print('actual tokenizer vocab size:', actual_vocab_size)
132 |
133 |     model_inputs = tokenizer(new_prompt)
134 |     print('encoded length:', len(model_inputs['input_ids']))
135 |
136 |     input_ids = model_inputs['input_ids']
137 |     response = tokenizer.decode(input_ids, skip_special_tokens=False)
138 |     print('decode matches original text:', response == new_prompt)
139 |
140 |
141 | def main():
142 | train_tokenizer()
143 | eval_tokenizer()
144 |
145 |
146 | if __name__ == '__main__':
147 | main()
148 |
--------------------------------------------------------------------------------
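train_tokenizer.py delegates merge learning to `trainers.BpeTrainer`. As a rough illustration of what one BPE iteration does (not the library's actual implementation), a pure-Python sketch: count adjacent symbol pairs across the corpus and merge the most frequent pair everywhere.

```python
from collections import Counter

def bpe_merge_step(word_freqs):
    """One BPE iteration: count adjacent symbol pairs across the corpus,
    then merge the most frequent pair everywhere. Simplified sketch:
    str.replace can over-merge when multi-character symbols share a boundary."""
    pairs = Counter()
    for word, freq in word_freqs.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    if not pairs:
        return word_freqs, None
    best = max(pairs, key=pairs.get)
    merged = {
        word.replace(' '.join(best), ''.join(best)): freq
        for word, freq in word_freqs.items()
    }
    return merged, best

# toy corpus: words as space-separated symbols, mapped to frequencies
vocab = {'l o w': 5, 'l o w e r': 2, 'n e w e s t': 6}
vocab, pair = bpe_merge_step(vocab)
print(pair)  # ('w', 'e') -- the most frequent adjacent pair (count 8)
```

`BpeTrainer` repeats this until `vocab_size` merges are learned, with the three special tokens pinned to ids 0-2 before any merge.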
/scripts/web_demo.py:
--------------------------------------------------------------------------------
1 | import random
2 | import re
3 | from threading import Thread
4 |
5 | import torch
6 | import numpy as np
7 | import streamlit as st
8 |
9 | st.set_page_config(page_title="MiniMind", initial_sidebar_state="collapsed")
10 |
11 | st.markdown("""
12 |
65 | """, unsafe_allow_html=True)
66 |
67 | system_prompt = []
68 | device = "cuda" if torch.cuda.is_available() else "cpu"
69 |
70 |
71 | def process_assistant_content(content):
72 | if model_source == "API" and 'R1' not in api_model_name:
73 | return content
74 | if model_source != "API" and 'R1' not in MODEL_PATHS[selected_model][1]:
75 | return content
76 |
77 |     if '<think>' in content and '</think>' in content:
78 |         content = re.sub(r'(<think>)(.*?)(</think>)',
79 |                          r'<details><summary>推理内容(展开)</summary>\2</details>',
80 |                          content,
81 |                          flags=re.DOTALL)
82 |
83 |     if '<think>' in content and '</think>' not in content:
84 |         content = re.sub(r'<think>(.*?)$',
85 |                          r'<details open><summary>推理中...</summary>\1</details>',
86 |                          content,
87 |                          flags=re.DOTALL)
88 |
89 |     if '<think>' not in content and '</think>' in content:
90 |         content = re.sub(r'(.*?)</think>',
91 |                          r'<details><summary>推理内容(展开)</summary>\1</details>',
92 |                          content,
93 |                          flags=re.DOTALL)
94 |
95 | return content
96 |
97 |
98 | @st.cache_resource
99 | def load_model_tokenizer(model_path):
100 | model = AutoModelForCausalLM.from_pretrained(
101 | model_path,
102 | trust_remote_code=True
103 | )
104 | tokenizer = AutoTokenizer.from_pretrained(
105 | model_path,
106 | trust_remote_code=True
107 | )
108 | model = model.eval().to(device)
109 | return model, tokenizer
110 |
111 |
112 | def clear_chat_messages():
113 | del st.session_state.messages
114 | del st.session_state.chat_messages
115 |
116 |
117 | def init_chat_messages():
118 | if "messages" in st.session_state:
119 | for i, message in enumerate(st.session_state.messages):
120 | if message["role"] == "assistant":
121 | with st.chat_message("assistant", avatar=image_url):
122 | st.markdown(process_assistant_content(message["content"]), unsafe_allow_html=True)
123 | if st.button("🗑", key=f"delete_{i}"):
124 | st.session_state.messages.pop(i)
125 | st.session_state.messages.pop(i - 1)
126 | st.session_state.chat_messages.pop(i)
127 | st.session_state.chat_messages.pop(i - 1)
128 | st.rerun()
129 | else:
130 |                 st.markdown(
131 |                     f'<div style="display: flex; justify-content: flex-end;"><div style="background: #ddd; border-radius: 10px; padding: 8px 12px;">{message["content"]}</div></div>',
132 |                     unsafe_allow_html=True)
133 |
134 | else:
135 | st.session_state.messages = []
136 | st.session_state.chat_messages = []
137 |
138 | return st.session_state.messages
139 |
140 | def regenerate_answer(index):
141 | st.session_state.messages.pop()
142 | st.session_state.chat_messages.pop()
143 | st.rerun()
144 |
145 |
146 | def delete_conversation(index):
147 | st.session_state.messages.pop(index)
148 | st.session_state.messages.pop(index - 1)
149 | st.session_state.chat_messages.pop(index)
150 | st.session_state.chat_messages.pop(index - 1)
151 | st.rerun()
152 |
153 |
154 | st.sidebar.title("模型设定调整")
155 |
156 | # st.sidebar.text("训练数据偏差,增加上下文记忆时\n多轮对话(较单轮)容易出现能力衰减")
157 | st.session_state.history_chat_num = st.sidebar.slider("Number of Historical Dialogues", 0, 6, 0, step=2)
158 | # st.session_state.history_chat_num = 0
159 | st.session_state.max_new_tokens = st.sidebar.slider("Max Sequence Length", 256, 8192, 8192, step=1)
160 | st.session_state.temperature = st.sidebar.slider("Temperature", 0.6, 1.2, 0.85, step=0.01)
161 |
162 | model_source = st.sidebar.radio("选择模型来源", ["本地模型", "API"], index=0)
163 |
164 | if model_source == "API":
165 | api_url = st.sidebar.text_input("API URL", value="http://127.0.0.1:8000/v1")
166 | api_model_id = st.sidebar.text_input("Model ID", value="minimind")
167 | api_model_name = st.sidebar.text_input("Model Name", value="MiniMind2")
168 | api_key = st.sidebar.text_input("API Key", value="none", type="password")
169 | slogan = f"Hi, I'm {api_model_name}"
170 | else:
171 | MODEL_PATHS = {
172 | "MiniMind2-R1 (0.1B)": ["../MiniMind2-R1", "MiniMind2-R1"],
173 | "MiniMind2-Small-R1 (0.02B)": ["../MiniMind2-Small-R1", "MiniMind2-Small-R1"],
174 | "MiniMind2 (0.1B)": ["../MiniMind2", "MiniMind2"],
175 | "MiniMind2-MoE (0.15B)": ["../MiniMind2-MoE", "MiniMind2-MoE"],
176 | "MiniMind2-Small (0.02B)": ["../MiniMind2-Small", "MiniMind2-Small"]
177 | }
178 |
179 |     selected_model = st.sidebar.selectbox('Models', list(MODEL_PATHS.keys()), index=2)  # default: MiniMind2
180 | model_path = MODEL_PATHS[selected_model][0]
181 | slogan = f"Hi, I'm {MODEL_PATHS[selected_model][1]}"
182 |
183 | image_url = "https://www.modelscope.cn/api/v1/studio/gongjy/MiniMind/repo?Revision=master&FilePath=images%2Flogo2.png&View=true"
184 |
185 | st.markdown(
186 |     f'<div style="display: flex; flex-direction: column; align-items: center; text-align: center; margin: 0; padding: 0;">'
187 |     '<div style="font-weight: 900; display: flex; align-items: center; justify-content: center;">'
188 |     f'<img src="{image_url}" style="width: 45px; height: 45px;">'
189 |     f'<span style="font-size: 26px; margin-left: 10px;">{slogan}</span>'
190 |     '</div>'
191 |     '<span style="color: #bbb; font-size: 12px;">内容完全由AI生成,请务必仔细甄别<br>Content AI-generated, please discern with care</span>'
192 |     '</div>',
193 |     unsafe_allow_html=True
194 | )
195 |
196 |
197 | def setup_seed(seed):
198 | random.seed(seed)
199 | np.random.seed(seed)
200 | torch.manual_seed(seed)
201 | torch.cuda.manual_seed(seed)
202 | torch.cuda.manual_seed_all(seed)
203 | torch.backends.cudnn.deterministic = True
204 | torch.backends.cudnn.benchmark = False
205 |
206 |
207 | def main():
208 | if model_source == "本地模型":
209 | model, tokenizer = load_model_tokenizer(model_path)
210 | else:
211 | model, tokenizer = None, None
212 |
213 | if "messages" not in st.session_state:
214 | st.session_state.messages = []
215 | st.session_state.chat_messages = []
216 |
217 | messages = st.session_state.messages
218 |
219 | for i, message in enumerate(messages):
220 | if message["role"] == "assistant":
221 | with st.chat_message("assistant", avatar=image_url):
222 | st.markdown(process_assistant_content(message["content"]), unsafe_allow_html=True)
223 | if st.button("×", key=f"delete_{i}"):
224 | st.session_state.messages = st.session_state.messages[:i - 1]
225 | st.session_state.chat_messages = st.session_state.chat_messages[:i - 1]
226 | st.rerun()
227 | else:
228 |             st.markdown(
229 |                 f'<div style="display: flex; justify-content: flex-end;"><div style="background: #ddd; border-radius: 10px; padding: 8px 12px;">{message["content"]}</div></div>',
230 |                 unsafe_allow_html=True)
231 |
232 | prompt = st.chat_input(key="input", placeholder="给 MiniMind 发送消息")
233 |
234 | if hasattr(st.session_state, 'regenerate') and st.session_state.regenerate:
235 | prompt = st.session_state.last_user_message
236 | regenerate_index = st.session_state.regenerate_index
237 | delattr(st.session_state, 'regenerate')
238 | delattr(st.session_state, 'last_user_message')
239 | delattr(st.session_state, 'regenerate_index')
240 |
241 | if prompt:
242 |         st.markdown(
243 |             f'<div style="display: flex; justify-content: flex-end;"><div style="background: #ddd; border-radius: 10px; padding: 8px 12px;">{prompt}</div></div>',
244 |             unsafe_allow_html=True)
245 | messages.append({"role": "user", "content": prompt[-st.session_state.max_new_tokens:]})
246 | st.session_state.chat_messages.append({"role": "user", "content": prompt[-st.session_state.max_new_tokens:]})
247 |
248 | with st.chat_message("assistant", avatar=image_url):
249 | placeholder = st.empty()
250 |
251 | if model_source == "API":
252 | try:
253 | from openai import OpenAI
254 |
255 | client = OpenAI(
256 | api_key=api_key,
257 | base_url=api_url
258 | )
259 |                     history_num = st.session_state.history_chat_num + 1  # +1 to include the current user message
260 | conversation_history = system_prompt + st.session_state.chat_messages[-history_num:]
261 | answer = ""
262 | response = client.chat.completions.create(
263 | model=api_model_id,
264 | messages=conversation_history,
265 | stream=True,
266 | temperature=st.session_state.temperature
267 | )
268 |
269 | for chunk in response:
270 | content = chunk.choices[0].delta.content or ""
271 | answer += content
272 | placeholder.markdown(process_assistant_content(answer), unsafe_allow_html=True)
273 |
274 | except Exception as e:
275 | answer = f"API调用出错: {str(e)}"
276 | placeholder.markdown(answer, unsafe_allow_html=True)
277 | else:
278 | random_seed = random.randint(0, 2 ** 32 - 1)
279 | setup_seed(random_seed)
280 |
281 | st.session_state.chat_messages = system_prompt + st.session_state.chat_messages[
282 | -(st.session_state.history_chat_num + 1):]
283 | new_prompt = tokenizer.apply_chat_template(
284 | st.session_state.chat_messages,
285 | tokenize=False,
286 | add_generation_prompt=True
287 | )
288 |
289 | inputs = tokenizer(
290 | new_prompt,
291 | return_tensors="pt",
292 | truncation=True
293 | ).to(device)
294 |
295 | streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
296 | generation_kwargs = {
297 | "input_ids": inputs.input_ids,
298 | "max_length": inputs.input_ids.shape[1] + st.session_state.max_new_tokens,
299 | "num_return_sequences": 1,
300 | "do_sample": True,
301 | "attention_mask": inputs.attention_mask,
302 | "pad_token_id": tokenizer.pad_token_id,
303 | "eos_token_id": tokenizer.eos_token_id,
304 | "temperature": st.session_state.temperature,
305 | "top_p": 0.85,
306 | "streamer": streamer,
307 | }
308 |
309 | Thread(target=model.generate, kwargs=generation_kwargs).start()
310 |
311 | answer = ""
312 | for new_text in streamer:
313 | answer += new_text
314 | placeholder.markdown(process_assistant_content(answer), unsafe_allow_html=True)
315 |
316 | messages.append({"role": "assistant", "content": answer})
317 | st.session_state.chat_messages.append({"role": "assistant", "content": answer})
318 | with st.empty():
319 | if st.button("×", key=f"delete_{len(messages) - 1}"):
320 | st.session_state.messages = st.session_state.messages[:-2]
321 | st.session_state.chat_messages = st.session_state.chat_messages[:-2]
322 | st.rerun()
323 |
324 |
325 | if __name__ == "__main__":
326 | from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
327 |
328 | main()
329 |
--------------------------------------------------------------------------------
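web_demo.py streams tokens by running `model.generate` in a background `Thread` and iterating over a `TextIteratorStreamer` in the UI loop. The underlying producer/consumer pattern can be sketched with only the standard library (class and function names here are illustrative, not the transformers API):

```python
import queue
import threading

class TextStreamer:
    """Minimal sketch of the producer/consumer streaming pattern behind
    TextIteratorStreamer: a generation thread pushes text chunks into a
    queue while the consuming thread iterates over them as they arrive."""
    _END = object()  # sentinel marking end of generation

    def __init__(self):
        self._queue = queue.Queue()

    def put(self, text):
        # producer side: called from the generation thread
        self._queue.put(text)

    def end(self):
        # producer signals that generation has finished
        self._queue.put(self._END)

    def __iter__(self):
        # consumer side: blocks until the next chunk is available
        while True:
            item = self._queue.get()
            if item is self._END:
                return
            yield item

def fake_generate(streamer, chunks):
    # stand-in for model.generate(..., streamer=streamer)
    for chunk in chunks:
        streamer.put(chunk)
    streamer.end()

streamer = TextStreamer()
threading.Thread(target=fake_generate, args=(streamer, ["Hel", "lo", "!"])).start()
answer = "".join(streamer)
print(answer)  # Hello!
```

Because `queue.Queue.get` blocks, the consumer can start iterating before generation finishes, which is what lets Streamlit repaint the placeholder after every chunk.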
/trainer/train_distill_reason.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | __package__ = "trainer"
5 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
6 |
7 | import argparse
8 | import time
9 | import math
10 | import warnings
11 | import torch
12 | import torch.distributed as dist
13 | from contextlib import nullcontext
14 | from torch import optim, nn
15 | from torch.nn.parallel import DistributedDataParallel
16 | from torch.utils.data import DataLoader, DistributedSampler
17 | from transformers import AutoTokenizer, AutoModelForCausalLM
18 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
19 | from dataset.lm_dataset import SFTDataset
20 |
21 | warnings.filterwarnings('ignore')
22 |
23 |
24 | def Logger(content):
25 | if not ddp or dist.get_rank() == 0:
26 | print(content)
27 |
28 |
29 | def get_lr(current_step, total_steps, lr):
30 | return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))
31 |
32 |
33 | def train_epoch(epoch, wandb):
34 |     # Reasoning-tag placeholders
35 |     start_of_think_ids = tokenizer('<think>').input_ids
36 |     end_of_think_ids = tokenizer('</think>').input_ids
37 |     start_of_answer_ids = tokenizer('<answer>').input_ids
38 |     end_of_answer_ids = tokenizer('</answer>').input_ids
39 | loss_fct = nn.CrossEntropyLoss(reduction='none')
40 | start_time = time.time()
41 | for step, (X, Y, loss_mask) in enumerate(train_loader):
42 | X = X.to(args.device)
43 | Y = Y.to(args.device)
44 | loss_mask = loss_mask.to(args.device)
45 | lr = get_lr(epoch * iter_per_epoch + step, args.epochs * iter_per_epoch, args.learning_rate)
46 | for param_group in optimizer.param_groups:
47 | param_group['lr'] = lr
48 |
49 | with ctx:
50 | res = model(X)
51 | loss = loss_fct(
52 | res.logits.view(-1, res.logits.size(-1)),
53 | Y.view(-1)
54 | ).view(Y.size())
55 | sp_ids = torch.isin(Y.view(-1),
56 | torch.tensor(start_of_think_ids + end_of_think_ids
57 | + start_of_answer_ids + end_of_answer_ids
58 | ).to(args.device))
59 |             # Up-weight the loss at the positions matching sp_ids
60 | loss_mask = loss_mask.view(-1)
61 | loss_mask_sum = loss_mask.sum()
62 | loss_mask[sp_ids] = 10
63 | loss_mask = loss_mask.view(Y.size())
64 | loss = (loss * loss_mask).sum() / loss_mask_sum
65 | loss += res.aux_loss
66 | loss = loss / args.accumulation_steps
67 |
68 | scaler.scale(loss).backward()
69 |
70 | if (step + 1) % args.accumulation_steps == 0:
71 | scaler.unscale_(optimizer)
72 | torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
73 |
74 | scaler.step(optimizer)
75 | scaler.update()
76 |
77 | optimizer.zero_grad(set_to_none=True)
78 |
79 | if step % args.log_interval == 0:
80 | spend_time = time.time() - start_time
81 | Logger(
82 | 'Epoch:[{}/{}]({}/{}) loss:{:.3f} lr:{:.12f} epoch_Time:{}min:'.format(
83 | epoch + 1,
84 | args.epochs,
85 | step,
86 | iter_per_epoch,
87 | loss.item() * args.accumulation_steps,
88 | optimizer.param_groups[-1]['lr'],
89 | spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60))
90 |
91 | if (wandb is not None) and (not ddp or dist.get_rank() == 0):
92 |                 wandb.log({"loss": loss.item() * args.accumulation_steps,
93 | "lr": optimizer.param_groups[-1]['lr'],
94 | "epoch_Time": spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60})
95 |
96 | if (step + 1) % args.save_interval == 0 and (not ddp or dist.get_rank() == 0):
97 | model.eval()
98 | moe_path = '_moe' if lm_config.use_moe else ''
99 | ckp = f'{args.save_dir}/reason_{lm_config.hidden_size}{moe_path}.pth'
100 |
101 | if isinstance(model, torch.nn.parallel.DistributedDataParallel):
102 | state_dict = model.module.state_dict()
103 | else:
104 | state_dict = model.state_dict()
105 |
106 |                 state_dict = {k: v.half() for k, v in state_dict.items()}  # save in half precision
107 | torch.save(state_dict, ckp)
108 | model.train()
109 |
110 |
111 | def init_model(lm_config):
112 | tokenizer = AutoTokenizer.from_pretrained('../model')
113 | model = MiniMindForCausalLM(lm_config)
114 | moe_path = '_moe' if lm_config.use_moe else ''
115 | ckp = f'{args.save_dir}/rlhf_{lm_config.hidden_size}{moe_path}.pth'
116 | state_dict = torch.load(ckp, map_location=args.device)
117 | model.load_state_dict(state_dict, strict=False)
118 |     Logger(f'LLM total parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.3f}M')
119 | model = model.to(args.device)
120 | return model, tokenizer
121 |
122 |
123 | def init_distributed_mode():
124 | if not ddp: return
125 | global ddp_local_rank, DEVICE
126 |
127 | dist.init_process_group(backend="nccl")
128 | ddp_rank = int(os.environ["RANK"])
129 | ddp_local_rank = int(os.environ["LOCAL_RANK"])
130 | ddp_world_size = int(os.environ["WORLD_SIZE"])
131 | DEVICE = f"cuda:{ddp_local_rank}"
132 | torch.cuda.set_device(DEVICE)
133 |
134 |
135 | if __name__ == "__main__":
136 | parser = argparse.ArgumentParser(description="MiniMind Distill Reasoning")
137 | parser.add_argument("--out_dir", type=str, default="../out")
138 | parser.add_argument("--epochs", type=int, default=1)
139 | parser.add_argument("--batch_size", type=int, default=8)
140 | parser.add_argument("--learning_rate", type=float, default=1e-6)
141 | parser.add_argument("--device", type=str, default="cuda:0" if torch.cuda.is_available() else "cpu")
142 | parser.add_argument("--dtype", type=str, default="bfloat16")
143 | parser.add_argument("--use_wandb", action="store_true")
144 | parser.add_argument("--wandb_project", type=str, default="MiniMind-Full-SFT")
145 | parser.add_argument("--num_workers", type=int, default=1)
146 | parser.add_argument("--ddp", action="store_true")
147 | parser.add_argument("--accumulation_steps", type=int, default=1)
148 | parser.add_argument("--grad_clip", type=float, default=1.0)
149 | parser.add_argument("--warmup_iters", type=int, default=0)
150 | parser.add_argument("--log_interval", type=int, default=1)
151 | parser.add_argument("--save_interval", type=int, default=50)
152 | parser.add_argument('--local_rank', type=int, default=-1)
153 | parser.add_argument('--hidden_size', default=512, type=int)
154 | parser.add_argument('--num_hidden_layers', default=8, type=int)
155 | parser.add_argument('--max_seq_len', default=1024, type=int)
156 |     parser.add_argument('--use_moe', action='store_true')  # note: type=bool would parse "--use_moe False" as True
157 | parser.add_argument("--data_path", type=str, default="../dataset/r1_mix_1024.jsonl")
158 |
159 | args = parser.parse_args()
160 |
161 | lm_config = MiniMindConfig(hidden_size=args.hidden_size, num_hidden_layers=args.num_hidden_layers,
162 | use_moe=args.use_moe)
163 | args.save_dir = os.path.join(args.out_dir)
164 | os.makedirs(args.save_dir, exist_ok=True)
165 | os.makedirs(args.out_dir, exist_ok=True)
166 | tokens_per_iter = args.batch_size * args.max_seq_len
167 | device_type = "cuda" if "cuda" in args.device else "cpu"
168 |
169 | args.wandb_run_name = f"MiniMind-Distill-Reasoning-Epoch-{args.epochs}-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}"
170 |
171 | ctx = nullcontext() if device_type == "cpu" else torch.cuda.amp.autocast()
172 | ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run?
173 | ddp_local_rank, DEVICE = 0, "cuda:0"
174 | base_seed = 1337
175 | torch.manual_seed(base_seed)
176 | torch.cuda.manual_seed(base_seed)
177 |
178 | if ddp:
179 | init_distributed_mode()
180 | args.device = torch.device(DEVICE)
181 | rank = dist.get_rank()
182 | torch.manual_seed(base_seed + rank)
183 |         # also seed the CUDA RNG
184 | torch.cuda.manual_seed(base_seed + rank)
185 |
186 | if args.use_wandb and (not ddp or ddp_local_rank == 0):
187 | import wandb
188 |
189 | wandb.init(project=args.wandb_project, name=args.wandb_run_name)
190 | else:
191 | wandb = None
192 |
193 | model, tokenizer = init_model(lm_config)
194 |
195 | train_ds = SFTDataset(args.data_path, tokenizer, max_length=args.max_seq_len)
196 | train_sampler = DistributedSampler(train_ds) if ddp else None
197 | train_loader = DataLoader(
198 | train_ds,
199 | batch_size=args.batch_size,
200 | pin_memory=True,
201 | drop_last=False,
202 | shuffle=False,
203 | num_workers=args.num_workers,
204 | sampler=train_sampler
205 | )
206 |
207 | scaler = torch.cuda.amp.GradScaler(enabled=(args.dtype in ['float16', 'bfloat16']))
208 | optimizer = optim.AdamW(model.parameters(), lr=args.learning_rate)
209 |
210 | if ddp:
211 | model._ddp_params_and_buffers_to_ignore = {"pos_cis"}
212 | model = DistributedDataParallel(model, device_ids=[ddp_local_rank])
213 |
214 | iter_per_epoch = len(train_loader)
215 | for epoch in range(args.epochs):
216 | train_epoch(epoch, wandb)
217 |
--------------------------------------------------------------------------------
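The `get_lr` schedule shared by the trainer scripts is cosine decay with a floor: it starts slightly above the nominal rate (1.1x) and decays to 0.1x, never reaching zero. A quick check of the endpoints:

```python
import math

def get_lr(current_step, total_steps, lr):
    # cosine decay with a floor of lr/10, as used across the trainer scripts
    return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))

peak = get_lr(0, 1000, 1e-4)      # first step: lr/10 + lr = 1.1e-4
floor = get_lr(1000, 1000, 1e-4)  # last step:  lr/10     = 1e-5
```

The lr/10 floor keeps late-training updates non-zero, a common trick when the script has no separate warmup phase.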
/trainer/train_distillation.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | __package__ = "trainer"
5 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
6 | import argparse
7 | import time
8 | import math
9 | import warnings
10 |
11 | import torch
12 | import torch.nn.functional as F
13 | import torch.distributed as dist
14 | from contextlib import nullcontext
15 | from torch import optim
16 | from torch.nn.parallel import DistributedDataParallel
17 | from torch.utils.data import DataLoader, DistributedSampler
18 | from transformers import AutoTokenizer, AutoModelForCausalLM
19 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
20 | from dataset.lm_dataset import SFTDataset
21 |
22 | warnings.filterwarnings('ignore')
23 |
24 |
25 | def Logger(content):
26 | if not ddp or dist.get_rank() == 0:
27 | print(content)
28 |
29 |
30 | def get_lr(current_step, total_steps, lr):
31 | return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))
32 |
33 |
34 | def distillation_loss_fn(student_logits, teacher_logits, temperature=1.0, reduction='batchmean'):
35 | with torch.no_grad():
36 |         teacher_probs = F.softmax(teacher_logits / temperature, dim=-1).detach()
37 |
38 |     student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
39 |
40 | kl = F.kl_div(
41 | student_log_probs,
42 | teacher_probs,
43 | reduction=reduction
44 | )
45 | return (temperature ** 2) * kl
46 |
47 |
48 | def train_epoch(epoch, wandb, alpha=0.0, temperature=1.0):
49 | start_time = time.time()
50 |
51 | if teacher_model is not None:
52 | teacher_model.eval()
53 | teacher_model.requires_grad_(False)
54 |
55 | for step, (X, Y, loss_mask) in enumerate(train_loader):
56 | X = X.to(args.device)
57 | Y = Y.to(args.device)
58 | loss_mask = loss_mask.to(args.device)
59 | lr = get_lr(epoch * iter_per_epoch + step,
60 | args.epochs * iter_per_epoch,
61 | args.learning_rate)
62 | for param_group in optimizer.param_groups:
63 | param_group['lr'] = lr
64 |
65 |         # Forward pass (student model)
66 | with ctx:
67 | res = model(X)
68 | student_logits = res.logits
69 |
70 |             # Teacher forward pass (eval mode, no_grad only)
71 | if teacher_model is not None:
72 | with torch.no_grad():
73 | teacher_logits = teacher_model(X).logits
74 | vocab_size_student = student_logits.size(-1) # N
75 | teacher_logits = teacher_logits[..., :vocab_size_student]
76 |
77 |             # ========== Compute losses ==========
78 |             # 1) Ground-truth CE loss (optional)
79 | loss_mask_flat = loss_mask.view(-1)
80 | ce_loss = F.cross_entropy(
81 | student_logits.view(-1, student_logits.size(-1)),
82 | Y.view(-1),
83 | ignore_index=0,
84 | reduction='none'
85 | )
86 | ce_loss = torch.sum(ce_loss * loss_mask_flat) / loss_mask_flat.sum()
87 | if lm_config_student.use_moe:
88 | ce_loss += res.aux_loss
89 |
90 |             # 2) Distillation loss (optional)
91 | if teacher_model is not None:
92 |                 # Distill only at valid token positions
93 | distill_loss = distillation_loss_fn(
94 | student_logits.view(-1, student_logits.size(-1))[loss_mask_flat == 1],
95 | teacher_logits.view(-1, teacher_logits.size(-1))[loss_mask_flat == 1],
96 | temperature=temperature
97 | )
98 | else:
99 | distill_loss = torch.tensor(0.0, device=args.device)
100 |
101 |             # 3) Total loss = alpha * CE + (1 - alpha) * distill
102 | loss = (alpha * ce_loss + (1 - alpha) * distill_loss) / args.accumulation_steps
103 |
104 | scaler.scale(loss).backward()
105 |
106 | if (step + 1) % args.accumulation_steps == 0:
107 | scaler.unscale_(optimizer)
108 | torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
109 | scaler.step(optimizer)
110 | scaler.update()
111 | optimizer.zero_grad(set_to_none=True)
112 |
113 | if step % args.log_interval == 0:
114 | spend_time = time.time() - start_time
115 | Logger(
116 | 'Epoch:[{}/{}]({}/{}) loss:{:.4f} lr:{:.12f} epoch_Time:{}min:'.format(
117 | epoch,
118 | args.epochs - 1,
119 | step,
120 | iter_per_epoch,
121 | loss.item(),
122 | optimizer.param_groups[-1]['lr'],
123 | spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60
124 | )
125 | )
126 |
127 | if (wandb is not None) and (not ddp or dist.get_rank() == 0):
128 | wandb.log({
129 | "loss": loss.item(),
130 | "ce_loss": ce_loss.item(),
131 | "distill_loss": distill_loss.item() if teacher_model is not None else 0.0,
132 | "lr": optimizer.param_groups[-1]['lr'],
133 | "last-time": spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60
134 | })
135 |
136 | if (step + 1) % args.save_interval == 0 and (not ddp or dist.get_rank() == 0):
137 | model.eval()
138 | moe_path = '_moe' if lm_config_student.use_moe else ''
139 | ckp = f'{args.save_dir}/full_dist_{lm_config_student.hidden_size}{moe_path}.pth'
140 | if isinstance(model, torch.nn.parallel.DistributedDataParallel):
141 | state_dict = model.module.state_dict()
142 | else:
143 | state_dict = model.state_dict()
144 |             state_dict = {k: v.half() for k, v in state_dict.items()}  # save in half precision
145 | torch.save(state_dict, ckp)
146 | model.train()
147 |
148 |
149 | def init_student_model(lm_config):
150 | tokenizer = AutoTokenizer.from_pretrained('../model/')
151 | model = MiniMindForCausalLM(lm_config)
152 | moe_path = '_moe' if lm_config.use_moe else ''
153 | ckp = f'{args.save_dir}/full_sft_{lm_config.hidden_size}{moe_path}.pth'
154 | state_dict = torch.load(ckp, map_location=args.device)
155 | model.load_state_dict(state_dict, strict=False)
156 |     Logger(f'Student model (LLM) total parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.3f}M')
157 | model = model.to(args.device)
158 |
159 | return model, tokenizer
160 |
161 |
162 | def init_teacher_model(lm_config):
163 | model = MiniMindForCausalLM(lm_config)
164 | moe_path = '_moe' if lm_config.use_moe else ''
165 | ckp = f'{args.save_dir}/full_sft_{lm_config.hidden_size}{moe_path}.pth'
166 | state_dict = torch.load(ckp, map_location=args.device)
167 | model.load_state_dict(state_dict, strict=False)
168 |     Logger(f'Teacher model (LLM) total parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.3f}M')
169 | model = model.to(args.device)
170 | return model
171 |
172 |
173 | def init_distributed_mode():
174 | if not ddp: return
175 | global ddp_local_rank, DEVICE
176 |
177 | dist.init_process_group(backend="nccl")
178 | ddp_rank = int(os.environ["RANK"])
179 | ddp_local_rank = int(os.environ["LOCAL_RANK"])
180 | ddp_world_size = int(os.environ["WORLD_SIZE"])
181 | DEVICE = f"cuda:{ddp_local_rank}"
182 | torch.cuda.set_device(DEVICE)
183 |
184 |
185 | if __name__ == "__main__":
186 |     parser = argparse.ArgumentParser(description="MiniMind Distillation")
187 | parser.add_argument("--out_dir", type=str, default="../out")
188 | parser.add_argument("--epochs", type=int, default=6)
189 | parser.add_argument("--batch_size", type=int, default=32)
190 | parser.add_argument("--learning_rate", type=float, default=5e-6)
191 | parser.add_argument("--device", type=str, default="cuda:0" if torch.cuda.is_available() else "cpu")
192 | parser.add_argument("--dtype", type=str, default="bfloat16")
193 | parser.add_argument("--use_wandb", action="store_true")
194 | parser.add_argument("--wandb_project", type=str, default="MiniMind-Full-SFT")
195 | parser.add_argument("--num_workers", type=int, default=1)
196 | parser.add_argument("--ddp", action="store_true")
197 | parser.add_argument("--accumulation_steps", type=int, default=1)
198 | parser.add_argument("--grad_clip", type=float, default=1.0)
199 | parser.add_argument("--warmup_iters", type=int, default=0)
200 | parser.add_argument("--log_interval", type=int, default=100)
201 | parser.add_argument("--save_interval", type=int, default=100)
202 | parser.add_argument("--max_seq_len", type=int, default=512)
203 | parser.add_argument('--local_rank', type=int, default=-1)
204 | parser.add_argument("--data_path", type=str, default="../dataset/sft_xxx.jsonl")
205 |
206 | args = parser.parse_args()
207 |     # Define student and teacher model configs
208 | lm_config_student = MiniMindConfig(hidden_size=512, num_hidden_layers=8)
209 | lm_config_teacher = MiniMindConfig(hidden_size=768, num_hidden_layers=16)
210 | args.save_dir = os.path.join(args.out_dir)
211 | os.makedirs(args.save_dir, exist_ok=True)
212 | os.makedirs(args.out_dir, exist_ok=True)
213 | tokens_per_iter = args.batch_size * args.max_seq_len
214 | device_type = "cuda" if "cuda" in args.device else "cpu"
215 |
216 | args.wandb_run_name = f"MiniMind-Dist-SFT-Epoch-{args.epochs}-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}"
217 |
218 | ctx = nullcontext() if device_type == "cpu" else torch.cuda.amp.autocast()
219 | ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run?
220 | ddp_local_rank, DEVICE = 0, "cuda:0"
221 | base_seed = 1337
222 | torch.manual_seed(base_seed)
223 | torch.cuda.manual_seed(base_seed)
224 |
225 | if ddp:
226 | init_distributed_mode()
227 | args.device = torch.device(DEVICE)
228 | rank = dist.get_rank()
229 | torch.manual_seed(base_seed + rank)
230 |         # also seed the CUDA RNG
231 | torch.cuda.manual_seed(base_seed + rank)
232 |
233 | if args.use_wandb and (not ddp or ddp_local_rank == 0):
234 | import wandb
235 |
236 | wandb.init(project=args.wandb_project, name=args.wandb_run_name)
237 | else:
238 | wandb = None
239 |
240 |     # Initialize the student and teacher models
241 | model, tokenizer = init_student_model(lm_config_student)
242 | teacher_model = init_teacher_model(lm_config_teacher)
243 |
244 | train_ds = SFTDataset(args.data_path, tokenizer, max_length=args.max_seq_len)
245 | train_sampler = DistributedSampler(train_ds) if ddp else None
246 | train_loader = DataLoader(
247 | train_ds,
248 | batch_size=args.batch_size,
249 | pin_memory=True,
250 | drop_last=False,
251 | shuffle=False,
252 | num_workers=args.num_workers,
253 | sampler=train_sampler
254 | )
255 |
256 | scaler = torch.cuda.amp.GradScaler(enabled=(args.dtype in ['float16', 'bfloat16']))
257 | optimizer = optim.AdamW(model.parameters(), lr=args.learning_rate)
258 |
259 | if ddp:
260 | model._ddp_params_and_buffers_to_ignore = {"pos_cis"}
261 | model = DistributedDataParallel(model, device_ids=[ddp_local_rank])
262 |
263 | iter_per_epoch = len(train_loader)
264 | for epoch in range(args.epochs):
265 | train_epoch(epoch, wandb)
266 |
--------------------------------------------------------------------------------
/trainer/train_dpo.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | __package__ = "trainer"
4 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
5 |
6 | import argparse
7 | import time
8 | import math
9 | import warnings
10 | import torch
11 | import torch.nn.functional as F
12 | import torch.distributed as dist
13 | from contextlib import nullcontext
14 | from torch import optim
15 | from torch.nn.parallel import DistributedDataParallel
16 | from torch.utils.data import DataLoader, DistributedSampler
17 | from transformers import AutoTokenizer, AutoModelForCausalLM
18 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
19 | from dataset.lm_dataset import DPODataset
20 |
21 | warnings.filterwarnings('ignore')
22 |
23 |
24 | def Logger(content):
25 | if not ddp or dist.get_rank() == 0:
26 | print(content)
27 |
28 |
29 | def get_lr(current_step, total_steps, lr):
30 | return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))
31 |
32 |
33 | def logits_to_probs(logits, labels):
34 | # logits shape: (batch_size, seq_len, vocab_size)
35 | # labels shape: (batch_size, seq_len)
36 | # probs shape: (batch_size, seq_len)
37 | log_probs = F.log_softmax(logits, dim=2)
38 | probs = torch.gather(log_probs, dim=2, index=labels.unsqueeze(2)).squeeze(-1)
39 | return probs
40 |
41 |
42 | def dpo_loss(ref_probs, probs, mask, beta):
43 |     # ref_probs and probs both have shape (batch_size, seq_len)
44 | # https://github.com/jingyaogong/minimind/issues/298
45 | seq_lengths = mask.sum(dim=1, keepdim=True) # (batch_size, 1)
46 | ref_probs = (ref_probs * mask).sum(dim=1) / seq_lengths.squeeze()
47 | probs = (probs * mask).sum(dim=1) / seq_lengths.squeeze()
48 |
49 |     # split the batch into chosen and rejected halves
50 | batch_size = ref_probs.shape[0]
51 | chosen_ref_probs = ref_probs[:batch_size // 2]
52 | reject_ref_probs = ref_probs[batch_size // 2:]
53 | chosen_probs = probs[:batch_size // 2]
54 | reject_probs = probs[batch_size // 2:]
55 |
56 | pi_logratios = chosen_probs - reject_probs
57 | ref_logratios = chosen_ref_probs - reject_ref_probs
58 | logits = pi_logratios - ref_logratios
59 | loss = -F.logsigmoid(beta * logits)
60 | return loss.mean()
61 |
62 |
63 | def train_epoch(epoch, wandb):
64 | start_time = time.time()
65 | for step, batch in enumerate(train_loader):
66 | x_chosen = batch['x_chosen'].to(args.device)
67 | x_rejected = batch['x_rejected'].to(args.device)
68 | y_chosen = batch['y_chosen'].to(args.device)
69 | y_rejected = batch['y_rejected'].to(args.device)
70 | mask_chosen = batch['mask_chosen'].to(args.device)
71 | mask_rejected = batch['mask_rejected'].to(args.device)
72 | x = torch.cat([x_chosen, x_rejected], dim=0)
73 | y = torch.cat([y_chosen, y_rejected], dim=0)
74 | mask = torch.cat([mask_chosen, mask_rejected], dim=0)
75 |
76 | lr = get_lr(epoch * iter_per_epoch + step, args.epochs * iter_per_epoch, args.learning_rate)
77 | for param_group in optimizer.param_groups:
78 | param_group['lr'] = lr
79 |
80 | with ctx:
81 | with torch.no_grad():
82 | ref_outputs = ref_model(x)
83 | ref_logits = ref_outputs.logits
84 | ref_probs = logits_to_probs(ref_logits, y)
85 | ref_probs = ref_probs * mask
86 | outputs = model(x)
87 | logits = outputs.logits
88 | probs = logits_to_probs(logits, y)
89 | probs = probs * mask
90 | loss = dpo_loss(ref_probs, probs, mask, beta=0.1)
91 | loss = loss / args.accumulation_steps
92 |
93 | scaler.scale(loss).backward()
94 |
95 | if (step + 1) % args.accumulation_steps == 0:
96 | scaler.unscale_(optimizer)
97 | torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
98 | scaler.step(optimizer)
99 | scaler.update()
100 | optimizer.zero_grad(set_to_none=True)
101 |
102 | if step % args.log_interval == 0:
103 | spend_time = time.time() - start_time
104 | Logger(
105 | 'Epoch:[{}/{}]({}/{}) loss:{:.3f} lr:{:.12f} epoch_Time:{}min:'.format(
106 | epoch + 1,
107 | args.epochs,
108 | step,
109 | iter_per_epoch,
110 | loss.item() * args.accumulation_steps,
111 | optimizer.param_groups[-1]['lr'],
112 | spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60))
113 |
114 | if (wandb is not None) and (not ddp or dist.get_rank() == 0):
115 |                 wandb.log({"loss": loss.item() * args.accumulation_steps,
116 | "lr": optimizer.param_groups[-1]['lr'],
117 | "epoch_Time": spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60})
118 |
119 | if (step + 1) % args.save_interval == 0 and (not ddp or dist.get_rank() == 0):
120 | model.eval()
121 | moe_path = '_moe' if lm_config.use_moe else ''
122 | ckp = f'{args.save_dir}/rlhf_{lm_config.hidden_size}{moe_path}.pth'
123 |
124 | if isinstance(model, torch.nn.parallel.DistributedDataParallel):
125 | state_dict = model.module.state_dict()
126 | else:
127 | state_dict = model.state_dict()
128 |             state_dict = {k: v.half() for k, v in state_dict.items()}  # save in half precision
129 | torch.save(state_dict, ckp)
130 | model.train()
131 |
132 |
133 | def init_model(lm_config):
134 | tokenizer = AutoTokenizer.from_pretrained('../model/')
135 | model = MiniMindForCausalLM(lm_config)
136 | moe_path = '_moe' if lm_config.use_moe else ''
137 | ckp = f'{args.save_dir}/full_sft_{lm_config.hidden_size}{moe_path}.pth'
138 | state_dict = torch.load(ckp, map_location=args.device)
139 | model.load_state_dict(state_dict, strict=False)
140 |     # initialize the frozen reference model
141 | ref_model = MiniMindForCausalLM(lm_config)
142 | ref_model.load_state_dict(state_dict, strict=False)
143 | ref_model.eval()
144 | ref_model.requires_grad_(False)
145 |
146 |     Logger(f'Total trainable LLM parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.3f} million')
147 | model = model.to(args.device)
148 | ref_model = ref_model.to(args.device)
149 |
150 | return model, ref_model, tokenizer
151 |
152 |
153 | def init_distributed_mode():
154 | if not ddp: return
155 | global ddp_local_rank, DEVICE
156 |
157 | dist.init_process_group(backend="nccl")
158 | ddp_rank = int(os.environ["RANK"])
159 | ddp_local_rank = int(os.environ["LOCAL_RANK"])
160 | ddp_world_size = int(os.environ["WORLD_SIZE"])
161 | DEVICE = f"cuda:{ddp_local_rank}"
162 | torch.cuda.set_device(DEVICE)
163 |
164 |
165 | if __name__ == "__main__":
166 | parser = argparse.ArgumentParser(description="MiniMind RLHF")
167 | parser.add_argument("--out_dir", type=str, default="../out")
168 | parser.add_argument("--epochs", type=int, default=2)
169 | parser.add_argument("--batch_size", type=int, default=4)
170 |     # SFT used lr 5e-6 -> 5e-7 at length 512; for offline preference alignment on chosen/rejected pairs at length ~3000, keep lr <= 1e-8, otherwise the model easily forgets and degrades
171 | parser.add_argument("--learning_rate", type=float, default=1e-8)
172 | parser.add_argument("--device", type=str, default="cuda:0" if torch.cuda.is_available() else "cpu")
173 | parser.add_argument("--dtype", type=str, default="bfloat16")
174 | parser.add_argument("--use_wandb", action="store_true")
175 | parser.add_argument("--wandb_project", type=str, default="MiniMind-RLHF-SFT")
176 | parser.add_argument("--num_workers", type=int, default=1)
177 | parser.add_argument("--ddp", action="store_true")
178 | parser.add_argument("--accumulation_steps", type=int, default=1)
179 | parser.add_argument("--grad_clip", type=float, default=1.0)
180 | parser.add_argument("--warmup_iters", type=int, default=0)
181 | parser.add_argument("--log_interval", type=int, default=100)
182 | parser.add_argument("--save_interval", type=int, default=100)
183 | parser.add_argument('--local_rank', type=int, default=-1)
184 | parser.add_argument('--hidden_size', default=512, type=int)
185 | parser.add_argument('--num_hidden_layers', default=8, type=int)
186 | parser.add_argument('--max_seq_len', default=1024, type=int)
187 |     parser.add_argument('--use_moe', action="store_true")
188 | parser.add_argument("--data_path", type=str, default="../dataset/dpo.jsonl")
189 |
190 | args = parser.parse_args()
191 |
192 | lm_config = MiniMindConfig(hidden_size=args.hidden_size, num_hidden_layers=args.num_hidden_layers, use_moe=args.use_moe)
193 | args.save_dir = os.path.join(args.out_dir)
194 | os.makedirs(args.save_dir, exist_ok=True)
195 | os.makedirs(args.out_dir, exist_ok=True)
196 | tokens_per_iter = args.batch_size * args.max_seq_len
197 | device_type = "cuda" if "cuda" in args.device else "cpu"
198 |
199 | args.wandb_run_name = f"MiniMind-Full-DPO-Epoch-{args.epochs}-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}"
200 |
201 | ctx = nullcontext() if device_type == "cpu" else torch.cuda.amp.autocast()
202 | ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run?
203 | ddp_local_rank, DEVICE = 0, "cuda:0"
204 | base_seed = 1337
205 | torch.manual_seed(base_seed)
206 | torch.cuda.manual_seed(base_seed)
207 |
208 | if ddp:
209 | init_distributed_mode()
210 | args.device = torch.device(DEVICE)
211 | rank = dist.get_rank()
212 | torch.manual_seed(base_seed + rank)
213 |         # also seed the CUDA RNG
214 | torch.cuda.manual_seed(base_seed + rank)
215 |
216 | if args.use_wandb and (not ddp or ddp_local_rank == 0):
217 | import wandb
218 |
219 | wandb.init(project=args.wandb_project, name=args.wandb_run_name)
220 | else:
221 | wandb = None
222 |
223 | model, ref_model, tokenizer = init_model(lm_config)
224 |
225 | train_ds = DPODataset(args.data_path, tokenizer, max_length=args.max_seq_len)
226 | train_sampler = DistributedSampler(train_ds) if ddp else None
227 | train_loader = DataLoader(
228 | train_ds,
229 | batch_size=args.batch_size,
230 | pin_memory=True,
231 | drop_last=False,
232 | shuffle=False,
233 | num_workers=args.num_workers,
234 | sampler=train_sampler
235 | )
236 |
237 | scaler = torch.cuda.amp.GradScaler(enabled=(args.dtype in ['float16', 'bfloat16']))
238 | optimizer = optim.AdamW(model.parameters(), lr=args.learning_rate)
239 |
240 | if ddp:
241 | model._ddp_params_and_buffers_to_ignore = {"pos_cis"}
242 | model = DistributedDataParallel(model, device_ids=[ddp_local_rank])
243 |
244 | iter_per_epoch = len(train_loader)
245 | for epoch in range(args.epochs):
246 | train_epoch(epoch, wandb)
247 |
--------------------------------------------------------------------------------
/trainer/train_full_sft.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | __package__ = "trainer"
5 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
6 |
7 | import argparse
8 | import time
9 | import math
10 | import warnings
11 | import torch
12 | import torch.distributed as dist
13 | from contextlib import nullcontext
14 | from torch import optim, nn
15 | from torch.nn.parallel import DistributedDataParallel
16 | from torch.utils.data import DataLoader, DistributedSampler
17 | from transformers import AutoTokenizer, AutoModelForCausalLM
18 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
19 | from dataset.lm_dataset import SFTDataset
20 |
21 | warnings.filterwarnings('ignore')
22 |
23 |
24 | def Logger(content):
25 | if not ddp or dist.get_rank() == 0:
26 | print(content)
27 |
28 |
29 | def get_lr(current_step, total_steps, lr):
30 | return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))
31 |
32 |
33 | def train_epoch(epoch, wandb):
34 | loss_fct = nn.CrossEntropyLoss(reduction='none')
35 | start_time = time.time()
36 | for step, (X, Y, loss_mask) in enumerate(train_loader):
37 | X = X.to(args.device)
38 | Y = Y.to(args.device)
39 | loss_mask = loss_mask.to(args.device)
40 | lr = get_lr(epoch * iter_per_epoch + step, args.epochs * iter_per_epoch, args.learning_rate)
41 | for param_group in optimizer.param_groups:
42 | param_group['lr'] = lr
43 |
44 | with ctx:
45 | res = model(X)
46 | loss = loss_fct(
47 | res.logits.view(-1, res.logits.size(-1)),
48 | Y.view(-1)
49 | ).view(Y.size())
50 |
51 | loss = (loss * loss_mask).sum() / loss_mask.sum()
52 | loss += res.aux_loss
53 | loss = loss / args.accumulation_steps
54 |
55 | scaler.scale(loss).backward()
56 |
57 | if (step + 1) % args.accumulation_steps == 0:
58 | scaler.unscale_(optimizer)
59 | torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
60 |
61 | scaler.step(optimizer)
62 | scaler.update()
63 |
64 | optimizer.zero_grad(set_to_none=True)
65 |
66 | if step % args.log_interval == 0:
67 | spend_time = time.time() - start_time
68 | Logger(
69 | 'Epoch:[{}/{}]({}/{}) loss:{:.3f} lr:{:.12f} epoch_Time:{}min:'.format(
70 | epoch + 1,
71 | args.epochs,
72 | step,
73 | iter_per_epoch,
74 | loss.item() * args.accumulation_steps,
75 | optimizer.param_groups[-1]['lr'],
76 | spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60))
77 |
78 | if (wandb is not None) and (not ddp or dist.get_rank() == 0):
79 |                 wandb.log({"loss": loss.item() * args.accumulation_steps,
80 | "lr": optimizer.param_groups[-1]['lr'],
81 | "epoch_Time": spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60})
82 |
83 | if (step + 1) % args.save_interval == 0 and (not ddp or dist.get_rank() == 0):
84 | model.eval()
85 | moe_path = '_moe' if lm_config.use_moe else ''
86 | ckp = f'{args.save_dir}/full_sft_{lm_config.hidden_size}{moe_path}.pth'
87 | if isinstance(model, torch.nn.parallel.DistributedDataParallel):
88 | state_dict = model.module.state_dict()
89 | else:
90 | state_dict = model.state_dict()
91 |             state_dict = {k: v.half() for k, v in state_dict.items()}  # save in half precision
92 | torch.save(state_dict, ckp)
93 | model.train()
94 |
95 |
96 | def init_model(lm_config):
97 | tokenizer = AutoTokenizer.from_pretrained('../model')
98 | model = MiniMindForCausalLM(lm_config)
99 | moe_path = '_moe' if lm_config.use_moe else ''
100 | ckp = f'{args.save_dir}/pretrain_{lm_config.hidden_size}{moe_path}.pth'
101 | state_dict = torch.load(ckp, map_location=args.device)
102 | model.load_state_dict(state_dict, strict=False)
103 |
104 |     Logger(f'Total trainable LLM parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.3f} million')
105 | model = model.to(args.device)
106 | return model, tokenizer
107 |
108 |
109 | def init_distributed_mode():
110 | if not ddp: return
111 | global ddp_local_rank, DEVICE
112 |
113 | dist.init_process_group(backend="nccl")
114 | ddp_rank = int(os.environ["RANK"])
115 | ddp_local_rank = int(os.environ["LOCAL_RANK"])
116 | ddp_world_size = int(os.environ["WORLD_SIZE"])
117 | DEVICE = f"cuda:{ddp_local_rank}"
118 | torch.cuda.set_device(DEVICE)
119 |
120 |
121 | if __name__ == "__main__":
122 | parser = argparse.ArgumentParser(description="MiniMind Full SFT")
123 | parser.add_argument("--out_dir", type=str, default="../out")
124 | parser.add_argument("--epochs", type=int, default=2)
125 | parser.add_argument("--batch_size", type=int, default=16)
126 | parser.add_argument("--learning_rate", type=float, default=5e-7)
127 | parser.add_argument("--device", type=str, default="cuda:0" if torch.cuda.is_available() else "cpu")
128 | parser.add_argument("--dtype", type=str, default="bfloat16")
129 | parser.add_argument("--use_wandb", action="store_true")
130 | parser.add_argument("--wandb_project", type=str, default="MiniMind-Full-SFT")
131 | parser.add_argument("--num_workers", type=int, default=1)
132 | parser.add_argument("--ddp", action="store_true")
133 | parser.add_argument("--accumulation_steps", type=int, default=1)
134 | parser.add_argument("--grad_clip", type=float, default=1.0)
135 | parser.add_argument("--warmup_iters", type=int, default=0)
136 | parser.add_argument("--log_interval", type=int, default=100)
137 | parser.add_argument("--save_interval", type=int, default=100)
138 | parser.add_argument('--local_rank', type=int, default=-1)
139 | parser.add_argument('--hidden_size', default=512, type=int)
140 | parser.add_argument('--num_hidden_layers', default=8, type=int)
141 | parser.add_argument('--max_seq_len', default=512, type=int)
142 |     parser.add_argument('--use_moe', action="store_true")
143 | parser.add_argument("--data_path", type=str, default="../dataset/sft_mini_512.jsonl")
144 |
145 | args = parser.parse_args()
146 |
147 | lm_config = MiniMindConfig(hidden_size=args.hidden_size, num_hidden_layers=args.num_hidden_layers,
148 | use_moe=args.use_moe)
149 | args.save_dir = os.path.join(args.out_dir)
150 | os.makedirs(args.save_dir, exist_ok=True)
151 | os.makedirs(args.out_dir, exist_ok=True)
152 | tokens_per_iter = args.batch_size * args.max_seq_len
153 | device_type = "cuda" if "cuda" in args.device else "cpu"
154 |
155 | args.wandb_run_name = f"MiniMind-Full-SFT-Epoch-{args.epochs}-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}"
156 |
157 | ctx = nullcontext() if device_type == "cpu" else torch.cuda.amp.autocast()
158 | ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run?
159 | ddp_local_rank, DEVICE = 0, "cuda:0"
160 | base_seed = 1337
161 | torch.manual_seed(base_seed)
162 | torch.cuda.manual_seed(base_seed)
163 |
164 | if ddp:
165 | init_distributed_mode()
166 | args.device = torch.device(DEVICE)
167 | rank = dist.get_rank()
168 | torch.manual_seed(base_seed + rank)
169 |         # also seed the CUDA RNG
170 | torch.cuda.manual_seed(base_seed + rank)
171 |
172 | if args.use_wandb and (not ddp or ddp_local_rank == 0):
173 | import wandb
174 |
175 | wandb.init(project=args.wandb_project, name=args.wandb_run_name)
176 | else:
177 | wandb = None
178 |
179 | model, tokenizer = init_model(lm_config)
180 |
181 | train_ds = SFTDataset(args.data_path, tokenizer, max_length=args.max_seq_len)
182 | train_sampler = DistributedSampler(train_ds) if ddp else None
183 | train_loader = DataLoader(
184 | train_ds,
185 | batch_size=args.batch_size,
186 | pin_memory=True,
187 | drop_last=False,
188 | shuffle=False,
189 | num_workers=args.num_workers,
190 | sampler=train_sampler
191 | )
192 |
193 | scaler = torch.cuda.amp.GradScaler(enabled=(args.dtype in ['float16', 'bfloat16']))
194 | optimizer = optim.AdamW(model.parameters(), lr=args.learning_rate)
195 |
196 | if ddp:
197 | model._ddp_params_and_buffers_to_ignore = {"pos_cis"}
198 | model = DistributedDataParallel(model, device_ids=[ddp_local_rank])
199 |
200 | iter_per_epoch = len(train_loader)
201 | for epoch in range(args.epochs):
202 | train_epoch(epoch, wandb)
203 |
--------------------------------------------------------------------------------
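All of these trainers share the same `get_lr` schedule: a cosine decay from 1.1 * lr down to a floor of lr / 10, so the learning rate never reaches zero. A quick standalone check of the endpoints (the step count and learning rate here are illustrative):

```python
import math

def get_lr(current_step, total_steps, lr):
    # same formula as in the trainers: cosine decay with an lr/10 floor
    return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))

lr = 5e-7                       # full-SFT default learning rate
start = get_lr(0, 1000, lr)     # step 0:        lr/10 + lr = 1.1 * lr
end = get_lr(1000, 1000, lr)    # final step:    lr/10
mid = get_lr(500, 1000, lr)     # halfway:       roughly lr/10 + lr/2
```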
/trainer/train_lora.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | __package__ = "trainer"
5 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
6 |
7 | import argparse
8 | import time
9 | import math
10 | import warnings
11 | import torch
12 | from torch import optim, nn
13 | import torch.distributed as dist
14 | from contextlib import nullcontext
15 | from torch.utils.data import DataLoader, DistributedSampler
16 | from transformers import AutoTokenizer, AutoModelForCausalLM
17 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
18 | from dataset.lm_dataset import SFTDataset
19 | from model.model_lora import load_lora, save_lora, apply_lora
20 |
21 | warnings.filterwarnings('ignore')
22 |
23 |
24 | # Logger function
25 | def Logger(content):
26 | if not ddp or dist.get_rank() == 0:
27 | print(content)
28 |
29 |
30 | def get_lr(current_step, total_steps, lr):
31 | return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))
32 |
33 |
34 | # the code is almost identical to train_full_sft
35 | def train_epoch(epoch, wandb):
36 | loss_fct = nn.CrossEntropyLoss(reduction='none')
37 | start_time = time.time()
38 | for step, (X, Y, loss_mask) in enumerate(train_loader):
39 | X = X.to(args.device)
40 | Y = Y.to(args.device)
41 | loss_mask = loss_mask.to(args.device)
42 | lr = get_lr(epoch * iter_per_epoch + step, args.epochs * iter_per_epoch, args.learning_rate)
43 | for param_group in optimizer.param_groups:
44 | param_group['lr'] = lr
45 |
46 | with ctx:
47 | res = model(X)
48 | loss = loss_fct(
49 | res.logits.view(-1, res.logits.size(-1)),
50 | Y.view(-1)
51 | ).view(Y.size())
52 | loss = (loss * loss_mask).sum() / loss_mask.sum()
53 | loss += res.aux_loss
54 | loss = loss / args.accumulation_steps
55 |
56 | scaler.scale(loss).backward()
57 |
58 | if (step + 1) % args.accumulation_steps == 0:
59 | scaler.unscale_(optimizer)
60 | torch.nn.utils.clip_grad_norm_(lora_params, args.grad_clip)
61 |
62 | scaler.step(optimizer)
63 | scaler.update()
64 |
65 | optimizer.zero_grad(set_to_none=True)
66 |
67 | if step % args.log_interval == 0:
68 | spend_time = time.time() - start_time
69 | Logger(
70 | 'Epoch:[{}/{}]({}/{}) loss:{:.3f} lr:{:.12f} epoch_Time:{}min:'.format(
71 | epoch + 1,
72 | args.epochs,
73 | step,
74 | iter_per_epoch,
75 | loss.item() * args.accumulation_steps,
76 | optimizer.param_groups[-1]['lr'],
77 | spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60))
78 |
79 | if (wandb is not None) and (not ddp or dist.get_rank() == 0):
80 |                 wandb.log({"loss": loss.item() * args.accumulation_steps,
81 | "lr": optimizer.param_groups[-1]['lr'],
82 | "epoch_Time": spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60})
83 |
84 | if (step + 1) % args.save_interval == 0 and (not ddp or dist.get_rank() == 0):
85 | model.eval()
86 | lora_save_path = f'{args.save_dir}/lora/{args.lora_name}_{lm_config.hidden_size}.pth'
87 | os.makedirs(os.path.dirname(lora_save_path), exist_ok=True)
88 |             # [Difference 1] save only the LoRA weights
89 | save_lora(model, lora_save_path)
90 | model.train()
91 |
92 |
93 | def init_model(lm_config):
94 | tokenizer = AutoTokenizer.from_pretrained('../model/')
95 | model = MiniMindForCausalLM(lm_config)
96 | moe_path = '_moe' if lm_config.use_moe else ''
97 | ckp = f'{args.save_dir}/full_sft_{lm_config.hidden_size}{moe_path}.pth'
98 | state_dict = torch.load(ckp, map_location=args.device)
99 | model.load_state_dict(state_dict, strict=False)
100 | return model.to(args.device), tokenizer
101 |
102 |
103 | def init_distributed_mode():
104 | if not ddp: return
105 | global ddp_local_rank, DEVICE
106 |
107 | dist.init_process_group(backend="nccl")
108 | ddp_rank = int(os.environ["RANK"])
109 | ddp_local_rank = int(os.environ["LOCAL_RANK"])
110 | ddp_world_size = int(os.environ["WORLD_SIZE"])
111 | DEVICE = f"cuda:{ddp_local_rank}"
112 | torch.cuda.set_device(DEVICE)
113 |
114 |
115 | if __name__ == "__main__":
116 | parser = argparse.ArgumentParser(description="MiniMind SFT with LoRA")
117 | parser.add_argument("--out_dir", type=str, default="../out")
118 | parser.add_argument("--epochs", type=int, default=10)
119 | parser.add_argument("--batch_size", type=int, default=32)
120 | parser.add_argument("--learning_rate", type=float, default=1e-4)
121 | parser.add_argument("--device", type=str, default="cuda:0" if torch.cuda.is_available() else "cpu")
122 | parser.add_argument("--dtype", type=str, default="bfloat16")
123 | parser.add_argument("--use_wandb", action="store_true")
124 | parser.add_argument("--wandb_project", type=str, default="MiniMind-LoRA-SFT")
125 | parser.add_argument("--num_workers", type=int, default=1)
126 | parser.add_argument("--ddp", action="store_true")
127 | parser.add_argument("--accumulation_steps", type=int, default=1)
128 | parser.add_argument("--grad_clip", type=float, default=1.0)
129 | parser.add_argument("--warmup_iters", type=int, default=0)
130 | parser.add_argument("--log_interval", type=int, default=100)
131 | parser.add_argument("--save_interval", type=int, default=100)
132 | parser.add_argument('--local_rank', type=int, default=-1)
133 | parser.add_argument('--hidden_size', default=512, type=int)
134 | parser.add_argument('--num_hidden_layers', default=8, type=int)
135 | parser.add_argument('--max_seq_len', default=512, type=int)
136 |     parser.add_argument('--use_moe', action="store_true")
137 | parser.add_argument("--data_path", type=str, default="../dataset/lora_medical.jsonl")
138 |     parser.add_argument("--lora_name", type=str, default="lora_medical", help="save name per task, e.g. lora_(english/medical/psychology/...)")
139 | args = parser.parse_args()
140 |
141 | lm_config = MiniMindConfig(hidden_size=args.hidden_size, num_hidden_layers=args.num_hidden_layers,
142 | use_moe=args.use_moe)
143 | args.save_dir = os.path.join(args.out_dir)
144 | os.makedirs(args.save_dir, exist_ok=True)
145 | os.makedirs(args.out_dir, exist_ok=True)
146 | tokens_per_iter = args.batch_size * args.max_seq_len
147 | device_type = "cuda" if "cuda" in args.device else "cpu"
148 |
149 | ctx = nullcontext() if device_type == "cpu" else torch.cuda.amp.autocast()
150 | ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run?
151 | ddp_local_rank, DEVICE = 0, "cuda:0"
152 | base_seed = 1337
153 | torch.manual_seed(base_seed)
154 | torch.cuda.manual_seed(base_seed)
155 |
156 | if ddp:
157 | init_distributed_mode()
158 | args.device = torch.device(DEVICE)
159 | rank = dist.get_rank()
160 | torch.manual_seed(base_seed + rank)
161 |         # also seed the CUDA RNG
162 | torch.cuda.manual_seed(base_seed + rank)
163 |
164 | args.wandb_run_name = f"MiniMind-Lora-SFT-Epoch-{args.epochs}-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}"
165 | if args.use_wandb and (not ddp or ddp_local_rank == 0):
166 | import wandb
167 |
168 | wandb.init(project=args.wandb_project, name=args.wandb_run_name)
169 | else:
170 | wandb = None
171 |
172 | model, tokenizer = init_model(lm_config)
173 | apply_lora(model)
174 |
175 |     total_params = sum(p.numel() for p in model.parameters())  # total parameter count
176 |     lora_params_count = sum(p.numel() for name, p in model.named_parameters() if 'lora' in name)  # LoRA parameter count
177 |     if not ddp or dist.get_rank() == 0:
178 |         print(f"Total LLM parameters: {total_params}")
179 |         print(f"LoRA parameters: {lora_params_count}")
180 |         print(f"LoRA parameter share: {lora_params_count / total_params * 100:.2f}%")
181 |
182 | for name, param in model.named_parameters():
183 | if 'lora' not in name:
184 | param.requires_grad = False
185 | lora_params = []
186 | for name, param in model.named_parameters():
187 | if 'lora' in name:
188 | lora_params.append(param)
189 |
190 |     # optimize only the LoRA parameters
191 | optimizer = optim.AdamW(lora_params, lr=args.learning_rate)
192 | train_ds = SFTDataset(args.data_path, tokenizer, max_length=args.max_seq_len)
193 | train_sampler = DistributedSampler(train_ds) if ddp else None
194 | train_loader = DataLoader(
195 | train_ds,
196 | batch_size=args.batch_size,
197 | pin_memory=True,
198 | drop_last=False,
199 | shuffle=False,
200 | num_workers=args.num_workers,
201 | sampler=train_sampler
202 | )
203 |
204 | scaler = torch.cuda.amp.GradScaler(enabled=(args.dtype in ['float16', 'bfloat16']))
205 | iter_per_epoch = len(train_loader)
206 |
207 | for epoch in range(args.epochs):
208 | train_epoch(epoch, wandb)
209 |
--------------------------------------------------------------------------------
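The masked-loss reduction used by the SFT, LoRA and pretrain loops can be reproduced in isolation. A minimal sketch with dummy shapes; the point is that positions where `loss_mask` is 0 (e.g. prompt or padding tokens) contribute nothing to the averaged loss:

```python
import torch
from torch import nn

loss_fct = nn.CrossEntropyLoss(reduction='none')  # keep per-token losses

B, T, V = 2, 4, 10                       # batch, sequence length, vocab size
logits = torch.randn(B, T, V)
Y = torch.randint(0, V, (B, T))
loss_mask = torch.tensor([[1., 1., 0., 0.],
                          [1., 1., 1., 1.]])  # first sample: last 2 tokens masked

# same reduction as the training loops: flatten for cross-entropy, reshape
# back to (B, T), then average only over unmasked positions
per_token = loss_fct(logits.view(-1, V), Y.view(-1)).view(Y.size())
loss = (per_token * loss_mask).sum() / loss_mask.sum()
```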
/trainer/train_pretrain.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | __package__ = "trainer"
4 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
5 |
6 | import argparse
7 | import time
8 | import math
9 | import warnings
10 | import torch
11 | import torch.distributed as dist
12 | from torch import optim, nn
13 | from torch.nn.parallel import DistributedDataParallel
14 | from torch.utils.data import DataLoader, DistributedSampler
15 | from contextlib import nullcontext
16 | from transformers import AutoTokenizer
17 | from model.model_minimind import MiniMindConfig, MiniMindForCausalLM
18 | from dataset.lm_dataset import PretrainDataset
19 |
20 | warnings.filterwarnings('ignore')
21 |
22 |
23 | def Logger(content):
24 | if not ddp or dist.get_rank() == 0:
25 | print(content)
26 |
27 |
28 | def get_lr(current_step, total_steps, lr):
29 | return lr / 10 + 0.5 * lr * (1 + math.cos(math.pi * current_step / total_steps))
30 |
31 |
32 | def train_epoch(epoch, wandb):
33 | loss_fct = nn.CrossEntropyLoss(reduction='none')
34 | start_time = time.time()
35 | for step, (X, Y, loss_mask) in enumerate(train_loader):
36 | X = X.to(args.device)
37 | Y = Y.to(args.device)
38 | loss_mask = loss_mask.to(args.device)
39 |
40 | lr = get_lr(epoch * iter_per_epoch + step, args.epochs * iter_per_epoch, args.learning_rate)
41 | for param_group in optimizer.param_groups:
42 | param_group['lr'] = lr
43 |
44 | with ctx:
45 | res = model(X)
46 | loss = loss_fct(
47 | res.logits.view(-1, res.logits.size(-1)),
48 | Y.view(-1)
49 | ).view(Y.size())
50 | loss = (loss * loss_mask).sum() / loss_mask.sum()
51 | loss += res.aux_loss
52 | loss = loss / args.accumulation_steps
53 |
54 | scaler.scale(loss).backward()
55 |
56 | if (step + 1) % args.accumulation_steps == 0:
57 | scaler.unscale_(optimizer)
58 | torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
59 |
60 | scaler.step(optimizer)
61 | scaler.update()
62 |
63 | optimizer.zero_grad(set_to_none=True)
64 |
65 | if step % args.log_interval == 0:
66 | spend_time = time.time() - start_time
67 | Logger(
68 | 'Epoch:[{}/{}]({}/{}) loss:{:.3f} lr:{:.12f} epoch_Time:{}min:'.format(
69 | epoch + 1,
70 | args.epochs,
71 | step,
72 | iter_per_epoch,
73 | loss.item() * args.accumulation_steps,
74 | optimizer.param_groups[-1]['lr'],
75 | spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60))
76 |
77 | if (wandb is not None) and (not ddp or dist.get_rank() == 0):
78 | wandb.log({"loss": loss.item() * args.accumulation_steps,
79 | "lr": optimizer.param_groups[-1]['lr'],
80 | "epoch_Time": spend_time / (step + 1) * iter_per_epoch // 60 - spend_time // 60})
81 |
82 | if (step + 1) % args.save_interval == 0 and (not ddp or dist.get_rank() == 0):
83 | model.eval()
84 | moe_path = '_moe' if lm_config.use_moe else ''
85 | ckp = f'{args.save_dir}/pretrain_{lm_config.hidden_size}{moe_path}.pth'
86 |
87 | if isinstance(model, torch.nn.parallel.DistributedDataParallel):
88 | state_dict = model.module.state_dict()
89 | else:
90 | state_dict = model.state_dict()
91 |
92 |             state_dict = {k: v.half() for k, v in state_dict.items()}  # save in half precision
93 | torch.save(state_dict, ckp)
94 | model.train()
95 |
96 |
97 | def init_model(lm_config):
98 | tokenizer = AutoTokenizer.from_pretrained('../model/')
99 | model = MiniMindForCausalLM(lm_config).to(args.device)
100 |     Logger(f'Total trainable LLM parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6:.3f} million')
101 | return model, tokenizer
102 |
103 |
104 | def init_distributed_mode():
105 | if not ddp: return
106 | global ddp_local_rank, DEVICE
107 |
108 | dist.init_process_group(backend="nccl")
109 | ddp_rank = int(os.environ["RANK"])
110 | ddp_local_rank = int(os.environ["LOCAL_RANK"])
111 | ddp_world_size = int(os.environ["WORLD_SIZE"])
112 | DEVICE = f"cuda:{ddp_local_rank}"
113 | torch.cuda.set_device(DEVICE)
114 |
115 |
116 | # torchrun --nproc_per_node 2 train_pretrain.py
117 | if __name__ == "__main__":
118 | parser = argparse.ArgumentParser(description="MiniMind Pretraining")
119 | parser.add_argument("--out_dir", type=str, default="../out")
120 |     # to reach a working "zero" model fastest, train 1 epoch; otherwise make the most of the limited data with 2~6 epochs
121 | parser.add_argument("--epochs", type=int, default=1)
122 | parser.add_argument("--batch_size", type=int, default=32)
123 | parser.add_argument("--learning_rate", type=float, default=5e-4)
124 | parser.add_argument("--device", type=str, default="cuda:0" if torch.cuda.is_available() else "cpu")
125 | parser.add_argument("--dtype", type=str, default="bfloat16")
126 | parser.add_argument("--use_wandb", action="store_true")
127 | parser.add_argument("--wandb_project", type=str, default="MiniMind-Pretrain")
128 | parser.add_argument("--num_workers", type=int, default=1)
129 | parser.add_argument("--ddp", action="store_true")
130 | parser.add_argument("--accumulation_steps", type=int, default=8)
131 | parser.add_argument("--grad_clip", type=float, default=1.0)
132 | parser.add_argument("--warmup_iters", type=int, default=0)
133 | parser.add_argument("--log_interval", type=int, default=100)
134 | parser.add_argument("--save_interval", type=int, default=100)
135 | parser.add_argument('--local_rank', type=int, default=-1)
136 | parser.add_argument('--hidden_size', default=512, type=int)
137 | parser.add_argument('--num_hidden_layers', default=8, type=int)
138 | parser.add_argument('--max_seq_len', default=512, type=int)
139 |     parser.add_argument('--use_moe', action="store_true")  # note: type=bool would parse any non-empty string as True
140 | parser.add_argument("--data_path", type=str, default="../dataset/pretrain_hq.jsonl")
141 | args = parser.parse_args()
142 |
143 | lm_config = MiniMindConfig(hidden_size=args.hidden_size, num_hidden_layers=args.num_hidden_layers, use_moe=args.use_moe)
144 |     args.save_dir = args.out_dir  # os.path.join with a single argument is a no-op
145 | os.makedirs(args.save_dir, exist_ok=True)
146 | os.makedirs(args.out_dir, exist_ok=True)
147 | tokens_per_iter = args.batch_size * args.max_seq_len
148 | device_type = "cuda" if "cuda" in args.device else "cpu"
149 |
150 | args.wandb_run_name = f"MiniMind-Pretrain-Epoch-{args.epochs}-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}"
151 |
152 |     ctx = nullcontext() if device_type == "cpu" else torch.cuda.amp.autocast(dtype=torch.bfloat16 if args.dtype == "bfloat16" else torch.float16)
153 |
154 | ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run?
155 | ddp_local_rank, DEVICE = 0, "cuda:0"
156 |
157 | base_seed = 1337
158 | torch.manual_seed(base_seed)
159 | torch.cuda.manual_seed(base_seed)
160 |
161 | if ddp:
162 | init_distributed_mode()
163 | args.device = torch.device(DEVICE)
164 | rank = dist.get_rank()
165 | torch.manual_seed(base_seed + rank)
166 |         # also offset the CUDA random seed per rank
167 | torch.cuda.manual_seed(base_seed + rank)
168 |
169 | if args.use_wandb and (not ddp or ddp_local_rank == 0):
170 | import wandb
171 |
172 | wandb.init(project=args.wandb_project, name=args.wandb_run_name)
173 | else:
174 | wandb = None
175 |
176 | model, tokenizer = init_model(lm_config)
177 | train_ds = PretrainDataset(args.data_path, tokenizer, max_length=args.max_seq_len)
178 | train_sampler = DistributedSampler(train_ds) if ddp else None
179 | train_loader = DataLoader(
180 | train_ds,
181 | batch_size=args.batch_size,
182 | pin_memory=True,
183 | drop_last=False,
184 |         shuffle=(train_sampler is None),  # DistributedSampler handles shuffling under DDP
185 | num_workers=args.num_workers,
186 | sampler=train_sampler
187 | )
188 |
189 |     scaler = torch.cuda.amp.GradScaler(enabled=(args.dtype == 'float16'))  # bfloat16 needs no loss scaling
190 | optimizer = optim.AdamW(model.parameters(), lr=args.learning_rate)
191 |
192 | if ddp:
193 | model._ddp_params_and_buffers_to_ignore = {"pos_cis"}
194 | model = DistributedDataParallel(model, device_ids=[ddp_local_rank])
195 |
196 | iter_per_epoch = len(train_loader)
197 | for epoch in range(args.epochs):
198 | train_epoch(epoch, wandb)
199 |
--------------------------------------------------------------------------------