├── LICENSE
├── README.md
├── apply_delta.py
├── assets
│   ├── YuLan-logo.jpg
│   ├── example1.png
│   ├── example2.png
│   ├── example_zh1.png
│   ├── example_zh2.png
│   ├── logo.jpg
│   └── logo.png
├── inference.py
├── requirements.txt
└── yulan_test
    ├── config
    │   ├── benchmark
    │   │   └── bbh10k.yaml
    │   └── model
    │       ├── chatgpt.yaml
    │       ├── dummy.yaml
    │       ├── local.yaml
    │       └── openai.yaml
    ├── data
    │   └── bbh3k.json
    ├── scripts
    │   ├── testChatGPTModel.sh
    │   ├── testDummyModel.sh
    │   ├── testLocalModel.sh
    │   └── testOpenAIModel.sh
    └── src
        ├── BBH10KBenchmark.py
        ├── __pycache__
        │   └── BBH10KBenchmark.cpython-38.pyc
        ├── main.py
        └── model
            ├── ChatGPTModel.py
            ├── DummyModel.py
            ├── LocalModel.py
            ├── Model.py
            ├── OpenAIModel.py
            ├── __init__.py
            └── __pycache__
                ├── ChatGPTModel.cpython-38.pyc
                ├── DummyModel.cpython-38.pyc
                ├── LocalModel.cpython-38.pyc
                ├── Model.cpython-38.pyc
                ├── OpenAIModel.cpython-38.pyc
                └── __init__.cpython-38.pyc
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Kun Zhou
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # YuLan: An Open-Source Large Language Model
2 |
3 | <!-- logo: assets/YuLan-logo.jpg -->
4 |
5 |
6 |
7 |
8 |
9 |
10 | YuLan-Chat models are chat-based large language models developed by researchers at GSAI, Renmin University of China (YuLan, meaning Yulan Magnolia, is the campus flower of Renmin University of China). The newest version is pre-trained from scratch and then supervised fine-tuned via curriculum learning with high-quality English and Chinese instructions and human preference data. The model has the following technical characteristics:
11 | - Owing to large-scale pre-training on high-quality English, Chinese, and multilingual data, the language ability of the model has been improved.
12 | - Owing to the curriculum learning strategy for human alignment, the helpfulness, honesty, and harmlessness of our model have been enhanced.
13 | - To better support Chinese and longer inputs and outputs, we expand the vocabulary with Chinese words and extend the maximum input length. The model now supports a 4k context.
14 |
15 | > YuLan-Chat系列模型是中国人民大学高瓴人工智能学院师生共同开发的支持聊天的大语言模型(名字"玉兰"取自中国人民大学校花)。最新版本从头完成了整个预训练过程,并采用课程学习技术基于中英文双语数据进行有监督微调,包括高质量指令和人类偏好数据。该版模型具有如下技术特点:
16 | > - 由于在大规模中英双语数据上进行了继续预训练,模型的语言能力得到提高;
17 | > - 由于采用了课程学习方法进行人类对齐训练,模型在真实场景下的有用性、诚实性与无害性得到了增强;
18 | > - 为了更好的支持中文和更长的输入输出,模型的词表及长度得到了扩充,目前可支持4k上下文。
19 |
20 |
21 | ## News
22 |
23 | * **\[Dec. 25, 2024\]** We release **YuLan-Mini**, a highly capable lightweight LLM with 2.4B parameters, using only 1T tokens of pre-training data. See more [details](https://github.com/RUC-GSAI/YuLan-Mini).
24 | * **\[Jul. 1, 2024\]** We release **YuLan-Base-12B**, an LLM trained from scratch, and its chat-based version **YuLan-Chat-3-12B**. We pre-train the base model on over 1.6TB of English, Chinese, and multilingual tokens, and then perform supervised fine-tuning via curriculum learning with high-quality English and Chinese instructions and human preference data to obtain the chat model.
25 | * **\[Aug. 18, 2023\]** Our **YuLan-Chat-2-13B** achieves 5th place on the [OpenCompass](https://opencompass.org.cn/leaderboard-llm) leaderboard!
26 | * **\[Aug. 02, 2023\]** We release **YuLan-LLaMA-2-13B** and **YuLan-Chat-2-13B**. Both models have been continually pre-trained on English and Chinese corpora based on LLaMA-2, and YuLan-Chat-2-13B is the chat-based LLM fine-tuned from YuLan-LLaMA-2-13B with high-quality English and Chinese instructions.
27 | * **\[Aug. 02, 2023\]** We release **YuLan-Chat-1-65B-v2**, a chat-based LLM based on LLaMA. It has been continually pre-trained on English and Chinese corpora, and then instruction-tuned with high-quality English and Chinese instructions.
28 | * **\[Jun. 08, 2023\]** We release **YuLan-Chat-1-13B-v1** and **YuLan-Chat-1-65B-v1**, and the corresponding INT-8 quantization scripts.
29 |
30 | > * **\[2024年7月1日\]** 我们发布了**YuLan-Base-12B**,一个完全从头训练的Base模型,以及其Chat化版本**YuLan-Chat-3-12B**。我们在超过1.6TB词元的中、英文和多语数据上进行了大规模预训练,得到了Base模型,然后基于高质量双语指令和人类偏好数据,使用课程学习方法进行有监督微调,最终得到了Chat化的版本。
31 | > * **\[2023年8月2日\]** 我们发布了**YuLan-LLaMA-2-13B**和**YuLan-Chat-2-13B**两个模型,其都在LLaMA-2的基础上进行了双语继续预训练,YuLan-Chat-2-13B在YuLan-LLaMA-2-13B基础上进行了双语高质量对话指令微调。
32 | > * **\[2023年8月2日\]** 我们发布了**YuLan-Chat-1-65B-v2**模型,其在LLaMA-65B的基础上进行了双语继续预训练,然后用高质量双语指令进行了微调。
33 | > * **\[2023年6月8日\]** 我们发布了**YuLan-Chat-1-13B-v1**和**YuLan-Chat-1-65B-v1**两个模型,以及对应的int8量化脚本。
34 |
35 | ## Model Zoo
36 |
37 | Due to license restrictions, for models based on LLaMA we only provide the weight differences from the original checkpoints; models based on LLaMA-2 can be used directly. Please check the [Usage](https://github.com/RUC-GSAI/YuLan-Chat/tree/main#usage) section for more details.
38 |
39 | **Limitations**: Despite our efforts to reduce potential security issues during the model's usage and encourage the generation of text that aligns with ethical and legal requirements, the language model is based on probabilistic generation, which means it may still produce unexpected outputs. For instance, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We do not assume any responsibility for any consequences resulting from the dissemination of harmful information.
40 |
41 | > 由于许可证的限制,基于LLaMA的模型我们仅提供与官方模型的差值,基于LLaMA-2的模型可直接使用,具体请参见使用方法章节。
42 |
43 | > **局限性**:尽管我们尝试减少模型在使用中可能出现的安全性问题,并鼓励模型生成符合道德和法律要求的文本,但由于语言模型基于概率生成的范式,模型仍然可能会产生意外的输出。 例如,生成的响应可能包含偏见、歧视或其他有害内容。 请不要传播此类内容。 我们对因传播有害信息而造成的任何后果不承担任何责任。
44 |
45 | | Model               | Backbone   | Extended Vocab | Extended Length | Continued PT | SFT  | Release Date |
46 | | ------------------- | :--------: | :------------: | :-------------: | :---------: | ---- | :-----------: |
47 | | [YuLan-Base-12B](https://huggingface.co/yulan-team/YuLan-Base-12b) | YuLan-Base-12B | ✅ 51,190 | ✅ 4,096 | ❌ | ❌ | 2024.7.1 |
48 | | [YuLan-Chat-3-12B](https://huggingface.co/yulan-team/YuLan-Chat-3-12b) | YuLan-Base-12B | ✅ 51,190 | ✅ 4,096 | ❌ | ✅ | 2024.7.1 |
49 | | [YuLan-Chat-2-13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ✅ | 2023.8.2 |
50 | | [YuLan-LLaMA-2-13B](https://huggingface.co/yulan-team/YuLan-LLaMA-2-13b) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ❌ | 2023.8.2 |
51 | | [YuLan-Chat-1-65B-v2](https://huggingface.co/yulan-team/YuLan-Chat-1-65B-v2-delta) | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | ✅ | ✅ | 2023.8.2 |
52 | | [YuLan-Chat-1-13B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-13b-delta) | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
53 | | [YuLan-Chat-1-65B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-65b-delta) | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
54 |
55 | ## Evaluation
56 |
57 | We evaluate our YuLan-Chat model on several Chinese and English benchmarks. The evaluation results are shown as follows.
58 |
59 | > 我们在中英文的一些基准测试上对YuLan-Chat进行了评价,其结果如下。
60 |
61 | ### MMLU
62 |
63 | [MMLU](https://github.com/hendrycks/test) (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings.
64 |
65 | > MMLU是一个评估模型知识量的常用的英文基准测试集。
66 |
67 | | Model | STEM | Social Science | Humanities | Others | Avg. |
68 | | --------------------------------- | :--: | :------------: | :--------: | :----: | :--: |
69 | | YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 |
70 | | YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 |
71 | | YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 |
72 | | LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 |
73 | | FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 |
74 | | Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 |
75 | | YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 |
76 | | YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 |
77 | | YuLan-Base-12B | 42.3 | 60.2 | 46.4 | 56.1 | 51.3 |
78 | | YuLan-Chat-3-12B | 45.5 | 64.3 | 51.8 | 61.3 | 55.7 |
79 | ### C-Eval
80 |
81 | [C-Eval](https://cevalbenchmark.com/) is a comprehensive Chinese evaluation suite for foundation models.
82 |
83 | > C-Eval是一个针对基石模型综合能力的中文基准测试集。
84 |
85 | | Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) |
86 | | --------------------------------- | :--: | :------------: | :--------: | :----: | :--: | :---------: |
87 | | YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 |
88 | | YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 |
89 | | YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 |
90 | | LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 |
91 | | FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 |
92 | | Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37.0 | 27.7 |
93 | | YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 |
94 | | YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 |
95 | | YuLan-Base-12B | 42.0 | 57.6 | 47.2 | 41.5 | 46.0 | 32.6 |
96 | | YuLan-Chat-3-12B | 47.0 | 61.8 | 52.9 | 44.3 | 50.5 | 37.7 |
97 |
98 | ### AGI-Eval-Gaokao
99 |
100 | [AGI-Eval](https://github.com/microsoft/AGIEval) is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation.
101 |
102 | > AGI-Eval 是一个以人为中心的基准,专门设计用于评估基础模型在与人类认知和解决问题相关的任务中的一般能力。我们使用其中的"高考"分支进行评测。
103 |
104 | | Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze |
105 | | --------------------------------- | :--: | :-----: | :-----: | :-------: | :-----: | :-----: | :-------: | :-----: | :-----: | :--------: |
106 | | YuLan-Chat-1-13B-v1 | 29.2 | 32.1 | 63.1 | 34.7 | 25.1 | 26.2 | 29.0 | 25.5 | 26.5 | 0.9 |
107 | | YuLan-Chat-1-65B-v1 | 34.6 | 24.8 | 82.0 | 44.2 | 44.3 | 31.4 | 30.9 | 26.0 | 27.1 | 0.9 |
108 | | YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 |
109 | | LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 |
110 | | FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 |
111 | | Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 |
112 | | YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 |
113 | | YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 |
114 | | YuLan-Base-12B                    | 43.5 | 31.3 | 68.3 | 53.3 | 60.9 | 43.8 | 34.8 | 27.5 | 28.2 | 0.9 |
115 | | YuLan-Chat-3-12B | 49.5 | 43.9 | 80.4 | 57.3 | 69.4 | 53.8 | 37.7 | 27.0 | 26.2 | 0.9 |
116 |
117 | ## Usage
118 |
119 | ### Environment Setup
120 |
121 | ```
122 | conda create -n yulan python=3.10 -y
123 | conda activate yulan
124 | ```
125 | We suggest installing PyTorch and bitsandbytes according to their official guidance to better match your environment; the versions we used are provided below for reference:
126 | > 我们建议根据官方手册安装pytorch和bitsandbytes,此处提供我们使用的版本作为参考。
127 | ```
128 | torch==1.13
129 | bitsandbytes==0.39.0
130 | ```
131 | Then, you can install the other required packages with the following command:
132 | > 然后,安装其他所需的包。
133 | ```
134 | pip install -r requirements.txt
135 | ```
136 |
137 | ### Recovering Model Weights
138 |
139 | 1. For YuLan-Chat-1-13B-v1, YuLan-Chat-1-65B-v1, and YuLan-Chat-1-65B-v2, as they are based on LLaMA, you should download [LLaMA](https://github.com/facebookresearch/llama)'s original weights and then add our released delta parameters to them to compose the final model parameters.
140 | > 对于基于LLaMA的模型,请先下载LLaMA官方模型,然后将我们发布的参数差值合并到原始模型参数中以获得最终的参数。
141 | ```
142 | python3 apply_delta.py \
143 | --base-model-path ./llama-13b/ \
144 | --tuned-model-path ./yulan-13b/ \
145 | --delta-path ./yulan-13b-delta
146 | ```
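
Conceptually, this recovery step is just an element-wise sum over the two checkpoints' state dicts; the bundled apply_delta.py additionally zero-pads base tensors whose first dimension grew with the extended vocabulary. A minimal sketch of the idea, assuming both checkpoints load with Hugging Face Transformers:

```Python
# Minimal sketch of delta recovery: tuned = base + delta.
# Unlike apply_delta.py, this sketch omits the zero-padding of base
# tensors whose first dimension grew with the extended vocabulary.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("./llama-13b/", torch_dtype=torch.bfloat16)
delta = AutoModelForCausalLM.from_pretrained("./yulan-13b-delta/", torch_dtype=torch.bfloat16)

base_sd = base.state_dict()
for name, param in delta.state_dict().items():
    param.add_(base_sd[name])  # in place: the delta tensor becomes the tuned weight

delta.save_pretrained("./yulan-13b/")  # the delta model now holds the recovered weights
```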
147 |
148 | 2. For YuLan-LLaMA-2-13B and YuLan-Chat-2-13B, you can simply download our released checkpoints and load their parameters via Hugging Face Transformers.
149 | > 对于基于LLaMA-2的模型,可以直接下载我们发布的模型权重,并使用Huggingface Transformers进行使用。
150 |
151 | ### Import from Hugging Face Transformers
152 |
153 | As our model shares its architecture with LLaMA, it can be loaded in the same way as the original LLaMA.
154 |
155 | > 由于我们的模型与LLaMA具有相似的结构,可以使用与LLaMA相同的方法加载。
156 |
157 | ```Python
158 | >>> from transformers import AutoTokenizer, AutoModelForCausalLM
159 | >>> tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Chat-3-12b")
160 | >>> model = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Chat-3-12b").cuda()
161 | >>> model = model.eval()
162 | >>> input_text = "hello"
163 | >>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
164 | >>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=4096, truncation=True, return_attention_mask=True, add_special_tokens=True)
165 | >>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 4096, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
166 | >>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
167 | >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):])
168 | ```
169 |
170 | ### Inference in Command Line
171 |
172 | We provide code for running inference with YuLan-Chat from the command line.
173 | > 我们提供命令行预测脚本。
174 | ```
175 | python inference.py --model_path ~/pretrain-checkpoint/yulan-13b/
176 | ```
177 |
178 | We also provide a quantization option for deploying YuLan-Chat more efficiently. After quantization, YuLan-Chat can be loaded onto a single GPU.
179 | > 我们也提供了一种量化的方法以便于更轻量化地部署YuLan-Chat。经过量化后,模型可以被加载进单张GPU中。
180 |
181 | | YuLan-Chat (INT-8) | GPU Consumption |
182 | | ------------------ | --------------- |
183 | | 13B                | RTX3090-24G     |
184 | | 65B                | A100-80G        |
185 | ```
186 | python inference.py --model_path ~/pretrain-checkpoint/yulan-13b/ --load_in_8bit
187 | ```
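
Under the hood, `--load_in_8bit` requests INT-8 quantization from bitsandbytes at load time; a minimal sketch of the equivalent call (mirroring the loading path in inference.py, with the checkpoint path as a placeholder):

```Python
# Minimal sketch of the 8-bit loading path used by inference.py.
import os
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    os.path.expanduser("~/pretrain-checkpoint/yulan-13b/"),  # placeholder checkpoint path
    device_map="auto",   # let accelerate place the quantized layers across devices
    load_in_8bit=True,   # INT-8 weights via bitsandbytes
)
```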
188 |
189 |
190 | ## License
191 |
192 | YuLan-Chat is released under the [MIT License](https://github.com/RUC-GSAI/YuLan-Chat/blob/main/LICENSE). All data and code in this project may only be used for academic purposes.
193 |
194 | > 本项目使用MIT许可,所有的数据和代码仅供学术研究使用。
195 |
196 | ## Contributors
197 |
198 | | **Pre-training** | **Fine-tuning** |
199 | |:----------------------------- |:-------------------------------------------------------------------- |
200 | | [Yutao Zhu](https://github.com/DaoD) (Lead), [Kelong Mao](https://github.com/kyriemao), [Wentong Chen](https://github.com/yiye3), [Yiding Sun](https://github.com/Emanual20), [Yihan Wu](https://github.com/wyh2000), [Qian Cao](https://github.com/Aman-4-Real), [Lei Zhang](https://github.com/LLily0703), [Feng Wang](https://github.com/PhealenWang), [Qiangqiang Ren](https://github.com/QiangKing)| [Kun Zhou](https://github.com/Lancelot39) (Lead), [Yushuo Chen](https://github.com/chenyushuo), [Zhipeng Chen](https://github.com/Timothy023), [Lei Wang](https://github.com/Paitesanshi), [Yupeng Hou](https://github.com/hyp1231), [Xincheng Pang](https://github.com/pangxincheng), [Xinyu Tang](https://github.com/txy77), [Junyi Li](https://github.com/turboLJY), [Yuhan Chen](https://github.com/Fiorina1212), [Shufang Xie](https://github.com/funtion) |
201 |
202 | ## Reference
203 |
204 | Please kindly cite our work if it helps you.
205 |
206 | > 如果我们的项目对您有帮助,请引用我们,谢谢!
207 |
208 | ```BibTeX
209 | @article{yulan,
210 | author = {Yutao Zhu and
211 | Kun Zhou and
212 | Kelong Mao and
213 | Wentong Chen and
214 | Yiding Sun and
215 | Zhipeng Chen and
216 | Qian Cao and
217 | Yihan Wu and
218 | Yushuo Chen and
219 | Feng Wang and
220 | Lei Zhang and
221 | Junyi Li and
222 | Xiaolei Wang and
223 | Lei Wang and
224 | Beichen Zhang and
225 | Zican Dong and
226 | Xiaoxue Cheng and
227 | Yuhan Chen and
228 | Xinyu Tang and
229 | Yupeng Hou and
230 | Qiangqiang Ren and
231 | Xincheng Pang and
232 | Shufang Xie and
233 | Wayne Xin Zhao and
234 | Zhicheng Dou and
235 | Jiaxin Mao and
236 | Yankai Lin and
237 | Ruihua Song and
238 | Jun Xu and
239 | Xu Chen and
240 | Rui Yan and
241 | Zhewei Wei and
242 | Di Hu and
243 | Wenbing Huang and
244 | Ze-Feng Gao and
245 | Yueguo Chen and
246 | Weizheng Lu and
247 | Ji-Rong Wen},
248 | title = {YuLan: An Open-source Large Language Model},
249 | journal = {CoRR},
250 | volume = {abs/2406.19853},
251 | year = {2024},
252 | url = {https://doi.org/10.48550/arXiv.2406.19853},
253 | doi = {10.48550/ARXIV.2406.19853},
254 | eprinttype = {arXiv},
255 | eprint = {2406.19853}
256 | }
257 | ```
258 |
259 |
260 | ## YuLan-1
261 |
262 | You can refer to our [original branch](https://github.com/RUC-GSAI/YuLan-Chat/tree/YuLan-Chat-1) for more details about YuLan-Chat-1 and the instruction collection.
263 | > 更多关于指令构造的细节,可以参考我们之前的分支。
264 |
265 | ## Star History
266 |
267 | <!-- star-history chart for RUC-GSAI/YuLan-Chat was embedded here -->
268 |
269 |
--------------------------------------------------------------------------------
/apply_delta.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import argparse
4 | import torch
5 | import tqdm
6 | import transformers
7 | from transformers import AutoModelForCausalLM, AutoTokenizer
8 |
9 |
10 | def apply_diff(path_raw, path_tuned, path_diff, device="cpu"):
11 | model_diff = AutoModelForCausalLM.from_pretrained(
12 | path_diff,
13 | device_map={"": torch.device(device)},
14 | torch_dtype=torch.bfloat16,
15 | low_cpu_mem_usage=True,
16 | )
17 | model_raw = AutoModelForCausalLM.from_pretrained(
18 | path_raw,
19 | device_map={"": torch.device(device)},
20 | torch_dtype=torch.bfloat16,
21 | low_cpu_mem_usage=True,
22 | )
23 |
24 | tokenizer_diff = AutoTokenizer.from_pretrained(path_diff)
25 | print('Finish loading tokenizer_diff')
26 |
27 | state_dict_diff = model_diff.state_dict()
28 | state_dict_raw = model_raw.state_dict()
29 |     for key in tqdm.tqdm(state_dict_diff):
30 |         if (state_dict_raw[key].size() != state_dict_diff[key].size()):  # vocabulary was extended
31 |             delta = state_dict_diff[key].size(0) - state_dict_raw[key].size(0)
32 |             state_dict_raw[key] = torch.cat((state_dict_raw[key], torch.zeros((delta, state_dict_raw[key].size(1)), dtype=torch.bfloat16)))  # zero-pad the base tensor to match
33 |             print(key)
34 |             print(state_dict_raw[key].size(), state_dict_diff[key].size())
35 |         state_dict_diff[key].add_(state_dict_raw[key])  # tuned = base + delta, in place
36 |
37 | model_diff.save_pretrained(path_tuned)
38 | tokenizer_diff.save_pretrained(path_tuned)
39 |
40 |
41 | if __name__ == "__main__":
42 | parser = argparse.ArgumentParser()
43 | parser.add_argument("--base-model-path", type=str, required=True)
44 | parser.add_argument("--tuned-model-path", type=str, required=True)
45 | parser.add_argument("--delta-path", type=str, required=True)
46 | args = parser.parse_args()
47 |
48 | apply_diff(args.base_model_path, args.tuned_model_path, args.delta_path)
49 |
--------------------------------------------------------------------------------
/assets/YuLan-logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/YuLan-logo.jpg
--------------------------------------------------------------------------------
/assets/example1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/example1.png
--------------------------------------------------------------------------------
/assets/example2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/example2.png
--------------------------------------------------------------------------------
/assets/example_zh1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/example_zh1.png
--------------------------------------------------------------------------------
/assets/example_zh2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/example_zh2.png
--------------------------------------------------------------------------------
/assets/logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/logo.jpg
--------------------------------------------------------------------------------
/assets/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/assets/logo.png
--------------------------------------------------------------------------------
/inference.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import transformers
4 | from accelerate import init_empty_weights, load_checkpoint_and_dispatch
5 | from transformers.generation.logits_process import LogitsProcessor, LogitsProcessorList
6 | from transformers import LlamaTokenizer, LlamaTokenizerFast
7 | import warnings
8 | warnings.filterwarnings("ignore")
9 |
10 |
11 | DEFAULT_PAD_TOKEN = "[PAD]"
12 | DEFAULT_EOS_TOKEN = "</s>"
13 | DEFAULT_BOS_TOKEN = "<s>"
14 | DEFAULT_UNK_TOKEN = "[UNK]"
15 |
16 |
17 | def load(args):
18 | model_path = args.model_path
19 | if args.load_in_8bit:
20 | model = transformers.AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", load_in_8bit=True)
21 | else:
22 | with init_empty_weights():
23 | model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
24 | model.tie_weights()
25 | model = load_checkpoint_and_dispatch(model, model_path, device_map="auto", no_split_module_classes=model._no_split_modules, dtype=torch.float16)
26 | tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
27 | if tokenizer.bos_token == '' or tokenizer.eos_token == '' or tokenizer.unk_token == '':
28 | tokenizer.add_special_tokens(
29 | {
30 | "eos_token": DEFAULT_EOS_TOKEN,
31 | "bos_token": DEFAULT_BOS_TOKEN,
32 | "unk_token": DEFAULT_UNK_TOKEN,
33 | }
34 | )
35 | if tokenizer.pad_token_id is not None:
36 | model.config.pad_token_id = tokenizer.pad_token_id
37 | else:
38 | model.config.pad_token_id = tokenizer.eos_token_id
39 | tokenizer.pad_token = tokenizer.eos_token
40 | model.config.bos_token_id = tokenizer.bos_token_id
41 | model.config.eos_token_id = tokenizer.eos_token_id
42 | tokenizer.padding_side = 'left'
43 | return model, tokenizer
44 |
45 |
46 | PROMPT = (
47 | "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. "
48 | "The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n"
49 | "[|Human|]:{instruction}\n[|AI|]:"
50 | )
51 |
52 |
53 | class RemoveEmptyCharLogitsProcessor(LogitsProcessor):
54 | def __init__(self, tokenizer):
55 | self.tokenizer = tokenizer
56 |
57 | def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
58 | if isinstance(self.tokenizer, (LlamaTokenizerFast, LlamaTokenizer)):
59 | scores[:, 30166] = float('-inf') # remove \u200b
60 | return scores
61 |
62 |
63 | @torch.inference_mode(mode=True)
64 | def generate_response(model, tokenizer, prompt, input_text, max_length, **kwargs):
65 | if isinstance(input_text, str):
66 | input_text = [input_text]
67 | input_text = [prompt.format_map(dict(instruction=in_text)) for in_text in input_text]
68 | inputs = tokenizer(
69 | input_text,
70 | return_tensors='pt',
71 | padding="longest",
72 | max_length=max_length,
73 | truncation=True,
74 | return_attention_mask=True
75 | )
76 | kwargs.update({'max_length': max_length})
77 |
78 | device = next(iter(model.parameters())).device
79 | input_ids = inputs['input_ids'].to(device)
80 | attention_mask = inputs['attention_mask'].to(device)
81 | if input_ids.size(-1) > 2048:
82 | kwargs.update({'use_cache': False, 'max_new_tokens': 256})
83 |
84 | processors = LogitsProcessorList()
85 | processors.append(RemoveEmptyCharLogitsProcessor(tokenizer))
86 | outputs = model.generate(input_ids, attention_mask=attention_mask, logits_processor=processors, **kwargs)
87 | output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
88 | new_input_text = tokenizer.batch_decode(inputs['input_ids'], skip_special_tokens=True)
89 | del input_ids
90 | del attention_mask
91 | response_text = [
92 | out_txt[len(in_txt):].strip()
93 | for in_txt, out_txt in zip(new_input_text, output_text)
94 | ]
95 | return response_text
96 |
97 |
98 | def main(args):
99 | model, tokenizer = load(args)
100 | while True:
101 | input_text = input('[|Human|]:')
102 | kwargs = {
103 | "repetition_penalty": 1.1,
104 | "no_repeat_ngram_size": 64,
105 | "min_new_tokens": 1,
106 | }
107 | output_content = generate_response(model, tokenizer, PROMPT, input_text, max_length=2048, **kwargs)
108 | print(f'[|AI|]:{output_content[0]}')
109 |
110 |
111 | if __name__ == '__main__':
112 | parser = argparse.ArgumentParser()
113 | parser.add_argument("--model_path", "-m", type=str, default="", help="path to model")
114 | parser.add_argument("--load_in_8bit", default=False, action="store_true")
115 |
116 | args, _ = parser.parse_known_args()
117 | main(args)
118 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | rouge_score
3 | fire
4 | openai
5 | transformers>=4.28.1
6 | torch
7 | sentencepiece
8 | tokenizers==0.13.3
9 | wandb
10 | deepspeed==0.9.1
11 | ninja==1.11.1
12 | accelerate==0.17.1
--------------------------------------------------------------------------------
/yulan_test/config/benchmark/bbh10k.yaml:
--------------------------------------------------------------------------------
1 | cache_response_path: "output/bbh10kbenchmark"
2 | save_response: True
3 | language: "english"
4 |
--------------------------------------------------------------------------------
/yulan_test/config/model/chatgpt.yaml:
--------------------------------------------------------------------------------
1 | model_cls: ChatGPTModel
2 |
3 | # hyper-parameters for openai api
4 | # model_alias: gpt-3.5-turbo
5 | temperature: 0
6 | max_tokens: 512
7 | n: 1
8 | api_key: null
9 |
--------------------------------------------------------------------------------
/yulan_test/config/model/dummy.yaml:
--------------------------------------------------------------------------------
1 | model_cls: DummyModel
2 | max_length: 2048
3 | model_alias: null
4 | url: null
5 |
--------------------------------------------------------------------------------
/yulan_test/config/model/local.yaml:
--------------------------------------------------------------------------------
1 | model_cls: LocalModel
2 | max_length: 2048
3 | model_alias: null
4 | url: null
5 |
--------------------------------------------------------------------------------
/yulan_test/config/model/openai.yaml:
--------------------------------------------------------------------------------
1 | model_cls: OpenAIModel
2 |
3 | # hyper-parameters for openai api
4 | model_alias: null
5 | temperature: 0
6 | max_tokens: 512
7 | n: 1
8 | logprobs: 5
9 | api_key: null
10 |
--------------------------------------------------------------------------------
/yulan_test/scripts/testChatGPTModel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | python src/main.py \
4 | -b config/benchmark/bbh10k.yaml \
5 | -m config/model/chatgpt.yaml \
6 | -l chatgpt.log \
7 |     model.api_key "$1"
--------------------------------------------------------------------------------
/yulan_test/scripts/testDummyModel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | python src/main.py \
4 | -b config/benchmark/bbh10k.yaml \
5 | -m config/model/dummy.yaml \
6 | -l dummy.log \
7 | model.model_alias dummy-model \
8 | model.url http://localhost:5000
9 |
--------------------------------------------------------------------------------
/yulan_test/scripts/testLocalModel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | python src/main.py \
4 | -b config/benchmark/bbh10k.yaml \
5 | -m config/model/local.yaml \
6 | -l local.log \
7 |     model.model_alias "$1" \
8 |     model.url "$2"
9 |
--------------------------------------------------------------------------------
/yulan_test/scripts/testOpenAIModel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | python src/main.py \
4 | -b config/benchmark/bbh10k.yaml \
5 | -m config/model/openai.yaml \
6 | -l openai.log \
7 |     model.model_alias "$1" \
8 |     model.api_key "$2"
9 |
--------------------------------------------------------------------------------
/yulan_test/src/BBH10KBenchmark.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import logging
4 | import pandas as pd
5 | from tqdm import tqdm
6 | from typing import List, Dict
7 | from yacs.config import CfgNode
8 |
9 | from model.Model import Model
10 |
11 | def save_benchmark(model, language, cache_path, responses):
12 | if not os.path.exists(cache_path):
13 | os.makedirs(cache_path)
14 | file_path = os.path.join(cache_path, model + "_" + language + ".json")
15 | with open(file_path, "w", encoding="utf-8") as fo:
16 | json.dump(responses, fo)
17 |
18 | class BBH3kBenchmark:
19 |
20 | def __init__(self, config: CfgNode, logger: logging.Logger) -> None:
21 | self.config: CfgNode = config
22 | self.logger: logging.Logger = logger
23 | with open("data/bbh3k.json", "r", encoding="utf-8") as fi:
24 | self.data = json.load(fi)
25 | self.cache_path = config.benchmark.cache_response_path
26 | self.save_response = config.benchmark.save_response
27 | self.language = config.benchmark.language
28 |
29 | def generate_text(self, model: Model, ipt: str):
30 | results = model.generate_text(ipt)
31 | return {
32 | "name": model.model_alias,
33 | "method": "generate_text",
34 | "msg": f"{results}"
35 | }
36 |
37 |     def calc_acc(self, prompts: List[Dict], preds: List[str]) -> List[float]:
38 | def _calc_acc(question: str, gt: str, pred: str) -> float:
39 | question = question.lower()
40 | gt = gt.lower()
41 |             pred = pred.split("\n")[0]
42 | pred = pred.lower()
43 |
44 | gts = [gt]
45 |
46 |             if gt.find("(") != -1:  # choice-style answer like "(a)": also accept the option's text from the question
47 | start_index = question.find(gt)
48 | end_index = question.find("\n", start_index)
49 | gts.append(question[start_index + len(gt): end_index].lstrip())
50 | return float((gts[0] in pred) or (gts[1] in pred or pred in gts[1]))
51 |
52 | return float(gt in pred)
53 |
54 |         questions = list(map(lambda prompt: prompt["question"], prompts))
55 |         gts = list(map(lambda prompt: prompt["answer"], prompts))
56 |         acc = list(map(lambda x: _calc_acc(*x), zip(questions, gts, preds)))
57 | return acc
58 |
59 | def evaluate_model(self, model: Model):
60 | preds = []
61 |         responses = []
62 |         for prompt in tqdm(self.data, desc="Data"):
63 |             question = "For the following questions please return only one word as an answer.\n" + prompt['question']
64 | response = self.generate_text(model, question)
65 | self.logger.info("ques: " + question)
66 | self.logger.info("Model ans: " + json.dumps(response))
67 | self.logger.info("Correct ans: " + prompt["answer"])
68 | preds.append(response["msg"])
69 | prompt["response"]=response["msg"]
70 | responses.append(prompt)
71 | if self.save_response:
72 | save_benchmark(model.model_alias, self.language, self.cache_path, responses)
73 | acc = self.calc_acc(self.data, preds)
74 |
75 | task_acc = {}
76 | type_acc = {}
77 | task_cnt = {}
78 | type_cnt = {}
79 |
80 | for i in range(len(self.data)):
81 |             task = self.data[i]["taskname"]
82 |             data_type = self.data[i]["type"]  # avoid shadowing the built-in type()
83 |             if task not in task_cnt:
84 |                 task_cnt[task] = 0
85 |                 task_acc[task] = 0
86 |             task_cnt[task] += 1
87 |             task_acc[task] += acc[i]
88 |             if data_type not in type_cnt:
89 |                 type_cnt[data_type] = 0
90 |                 type_acc[data_type] = 0
91 |             type_cnt[data_type] += 1
92 |             type_acc[data_type] += acc[i]
93 |
94 | task_result = pd.DataFrame()
95 | task_result.index=["accuracy"]
96 | sorted_keys = sorted(task_cnt.keys())
97 | task_accs = []
98 | for key in sorted_keys:
99 | avg_acc = task_acc[key] / task_cnt[key]
100 | task_result[key] = avg_acc
101 | task_accs.append(avg_acc)
102 | self.logger.info(f'The average accuracy of task {key}: {avg_acc}')
103 | self.logger.info("\n" + str(task_result))
104 | self.logger.info(f"Task average accuracy :{task_accs}")
105 |
106 | total_acc = 0
107 | type_result = pd.DataFrame()
108 | type_result.index = ["accuracy"]
109 | sorted_keys = sorted(type_cnt.keys())
110 | type_accs = []
111 | for key in sorted_keys:
112 | avg_acc = type_acc[key] / type_cnt[key]
113 | type_result[key] = avg_acc
114 | type_accs.append(avg_acc)
115 | total_acc += type_acc[key]
116 | self.logger.info(f'The average accuracy of type {key}: {avg_acc}')
117 | self.logger.info("\n" + str(type_result))
118 | avg_acc = total_acc / len(acc)
119 | self.logger.info(f"Type average accuracy :{type_accs}")
120 | self.logger.info(f'The global average accuracy: {avg_acc}')
121 |
--------------------------------------------------------------------------------
/yulan_test/src/__pycache__/BBH10KBenchmark.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/__pycache__/BBH10KBenchmark.cpython-38.pyc
--------------------------------------------------------------------------------
/yulan_test/src/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import logging
3 | import argparse
4 | from yacs.config import CfgNode
5 |
6 | import model
7 | from BBH10KBenchmark import BBH3kBenchmark
8 |
9 | model_names = sorted(
10 | name
11 | for name in model.__dict__
12 | if name.islower() and not name.startswith("__") and callable(model.__dict__[name])
13 | )
14 |
15 | def set_logger(log_file, name="default"):
16 | logger = logging.getLogger(name)
17 | logger.setLevel(logging.INFO)
18 |
19 | formatter = logging.Formatter(
20 | "%(asctime)s - %(levelname)s - %(module)s - %(funcName)s - %(message)s",
21 | datefmt="%Y-%m-%d %H:%M:%S",
22 | )
23 | handler = logging.FileHandler(log_file, mode="w")
24 | handler.setLevel(logging.INFO)
25 | handler.setFormatter(formatter)
26 | logger.handlers = []
27 | logger.addHandler(handler)
28 | return logger
29 |
30 | def add_variable_to_config(cfg: CfgNode, name: str, value) -> CfgNode:
31 | cfg.defrost()
32 | cfg[name] = value
33 | cfg.freeze()
34 | return cfg
35 |
36 | def load_cfg(cfg_file: str, new_allowed: bool=True) -> CfgNode:
37 | with open(cfg_file, "r") as fi:
38 | cfg = CfgNode.load_cfg(fi)
39 | cfg.set_new_allowed(new_allowed)
40 | return cfg
41 |
42 | def parse_args():
43 | parser = argparse.ArgumentParser()
44 | parser.add_argument(
45 | "-b",
46 | "--benchmark-cfg",
47 | type=str,
48 | required=True,
49 | help="Path to task config file"
50 | )
51 | parser.add_argument(
52 | "-m",
53 | "--model-cfg",
54 | type=str,
55 | required=True,
56 | help="Path to model config file"
57 | )
58 | parser.add_argument(
59 | "-l",
60 | "--log-file",
61 | type=str,
62 | default="log.log",
63 | help="Path to log file"
64 | )
65 | parser.add_argument(
66 | "opts",
67 | default=None,
68 | nargs=argparse.REMAINDER,
69 | help="Modify config options from command line"
70 | )
71 | args = parser.parse_args()
72 | return args
73 |
74 | def main():
75 | args = parse_args()
76 | logger = set_logger(args.log_file, str(os.getpid()))
77 | logger.info(f"os.getpid()={os.getpid()}")
78 | config = CfgNode(new_allowed=True)
79 | config = add_variable_to_config(
80 | config,
81 | "benchmark",
82 | load_cfg(args.benchmark_cfg)
83 | )
84 | config = add_variable_to_config(
85 | config,
86 | "model",
87 | load_cfg(args.model_cfg)
88 | )
89 | config.merge_from_list(args.opts)
90 | logger.info(f"\n{config}")
91 |
92 | benchmark = BBH3kBenchmark(config, logger)
93 | model_cls = model.__dict__[config.model.model_cls]
94 | benchmark.evaluate_model(model_cls(config, logger))
95 |
96 |
97 | if __name__ == "__main__":
98 | main()
99 |
--------------------------------------------------------------------------------
/yulan_test/src/model/ChatGPTModel.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | import openai
4 | from yacs.config import CfgNode
5 |
6 | from .Model import Model
7 |
8 | class ChatGPTModel(Model):
9 |
10 | def __init__(self, config: CfgNode, logger: logging.Logger):
11 | super().__init__(config, logger)
12 | self.logger = logger
13 | self.config = config
14 | self.model_alias = "gpt-3.5-turbo"
15 | assert config.model.api_key is not None, "please specify the api_key in the config file"
16 |
17 | def generate_text(self, ipt: str, *args, **kwargs) -> str:
18 | try:
19 | openai.api_key = self.config.model.api_key
20 | response = openai.ChatCompletion.create(
21 | model="gpt-3.5-turbo",
22 | messages=[
23 | {"role": "system", "content": "You are a helpful assistant."},
24 | {"role": "user", "content": ipt}
25 | ],
26 | temperature=self.config.model.temperature,
27 | max_tokens=self.config.model.max_tokens,
28 | n=self.config.model.n
29 | )
30 | #response = self.wait_for_complete(response["id"])
31 |             result = response["choices"][0]["message"]["content"]
32 | return result
33 | except openai.error.OpenAIError as e:
34 | self.logger.error(f"OpenAI error: {e}")
35 | return ""
36 |
--------------------------------------------------------------------------------
/yulan_test/src/model/DummyModel.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from yacs.config import CfgNode
4 |
5 | from .Model import Model
6 |
7 | class DummyModel(Model):
8 |
9 | def __init__(self, config: CfgNode, logger: logging.Logger) -> None:
10 | super().__init__(config, logger)
11 | self.logger = logger
12 | self.config = config
13 | self.LOCAL_URL = self.config.model.url
14 | assert self.LOCAL_URL is not None, "please specify the url in the config file"
15 | self.model_alias = self.config.model.model_alias
16 | assert self.model_alias is not None, "please specify the model_alias in the config file"
17 | self.headers = {
18 | "Content-Type": "application/x-www-form-urlencoded",
19 | }
20 |
21 | def generate_text(self, ipt: str, *args, **kwargs) -> str:
22 | result = ipt
23 | return result
24 |
--------------------------------------------------------------------------------
/yulan_test/src/model/LocalModel.py:
--------------------------------------------------------------------------------
1 | import time
2 | import json
3 | import logging
4 | import requests
5 | from typing import Union, List, Optional
6 |
7 | from yacs.config import CfgNode
8 |
9 | from .Model import Model
10 |
11 | class LocalModel(Model):
12 |
13 | def __init__(self, config: CfgNode, logger: logging.Logger) -> None:
14 | super().__init__(config, logger)
15 | self.logger = logger
16 | self.config = config
17 | self.LOCAL_URL = self.config.model.url
18 | assert self.LOCAL_URL is not None, "please specify the url in the config file"
19 | self.model_alias = self.config.model.model_alias
20 | assert self.model_alias is not None, "please specify the model_alias in the config file"
21 | self.headers = {
22 | "Content-Type": "application/x-www-form-urlencoded",
23 | }
24 |
25 | def generate_text(self, ipt: str, *args, **kwargs) -> str:
26 | max_length = self.config.model.max_length
27 | try:
28 |             # send the completion request to the local server
29 | data = {"input": [ipt], "max_length": max_length}
30 | response = requests.post(
31 | self.LOCAL_URL, headers=self.headers, data={"json": json.dumps(data)}
32 | )
33 | result = response.json()["output"][0]
34 | return result
35 | except Exception as e:
36 | self.logger.exception(e)
37 | return ""
38 |
--------------------------------------------------------------------------------
/yulan_test/src/model/Model.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from yacs.config import CfgNode
4 |
5 | class Model:
6 |
7 | def __init__(self, config: CfgNode, logger: logging.Logger):
8 | self.model_alias = "base-model"
9 |
10 | def generate_text(self, ipt: str, *args, **kwargs) -> str:
11 |         raise NotImplementedError
12 |
--------------------------------------------------------------------------------
/yulan_test/src/model/OpenAIModel.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | import openai
4 | from yacs.config import CfgNode
5 |
6 | from .Model import Model
7 |
8 | class OpenAIModel(Model):
9 |
10 | def __init__(self, config: CfgNode, logger: logging.Logger):
11 | super().__init__(config, logger)
12 | self.logger = logger
13 | self.config = config
14 | self.model_alias = self.config.model.model_alias
15 | assert self.model_alias is not None, "please specify the model_alias in the config file"
16 | assert config.model.api_key is not None, "please specify the api_key in the config file"
17 |
18 | def generate_text(self, ipt: str, *args, **kwargs) -> str:
19 | try:
20 | openai.api_key = self.config.model.api_key
21 | response = openai.Completion.create(
22 | model=self.config.model.model_alias,
23 | prompt=ipt,
24 | temperature=self.config.model.temperature,
25 | max_tokens=self.config.model.max_tokens,
26 | n=self.config.model.n,
27 | logprobs=self.config.model.logprobs
28 | )
29 | #response = self.wait_for_complete(response["id"])
30 |             result = response['choices'][0]['text']
31 | return result
32 | except openai.error.OpenAIError as e:
33 | self.logger.error(f"OpenAI error: {e}")
34 | return ""
35 |
--------------------------------------------------------------------------------
/yulan_test/src/model/__init__.py:
--------------------------------------------------------------------------------
1 | from model.ChatGPTModel import ChatGPTModel
2 | from model.OpenAIModel import OpenAIModel
3 | from model.LocalModel import LocalModel
4 | from model.DummyModel import DummyModel
5 |
6 | __all__ = [
7 | "ChatGPTModel",
8 | "OpenAIModel",
9 | "LocalModel",
10 | "DummyModel"
11 | ]
--------------------------------------------------------------------------------
/yulan_test/src/model/__pycache__/ChatGPTModel.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/model/__pycache__/ChatGPTModel.cpython-38.pyc
--------------------------------------------------------------------------------
/yulan_test/src/model/__pycache__/DummyModel.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/model/__pycache__/DummyModel.cpython-38.pyc
--------------------------------------------------------------------------------
/yulan_test/src/model/__pycache__/LocalModel.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/model/__pycache__/LocalModel.cpython-38.pyc
--------------------------------------------------------------------------------
/yulan_test/src/model/__pycache__/Model.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/model/__pycache__/Model.cpython-38.pyc
--------------------------------------------------------------------------------
/yulan_test/src/model/__pycache__/OpenAIModel.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/model/__pycache__/OpenAIModel.cpython-38.pyc
--------------------------------------------------------------------------------
/yulan_test/src/model/__pycache__/__init__.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RUC-GSAI/YuLan-Chat/6d891efaa23bd36f854ef085b5404c3e43011eb8/yulan_test/src/model/__pycache__/__init__.cpython-38.pyc
--------------------------------------------------------------------------------