├── .gitattributes
├── .gitignore
├── .gitmodules
├── Colab_Lora_train.ipynb
├── README.md
├── assets
│   └── tensorboard-example.png
├── gui.py
├── huggingface
│   ├── accelerate
│   │   └── default_config.yaml
│   └── hub
│       └── version.txt
├── install-cn.ps1
├── install.bash
├── install.ps1
├── interrogate.ps1
├── logs
│   └── .keep
├── output
│   └── .keep
├── resize.ps1
├── run_gui.ps1
├── run_gui.sh
├── sd-models
│   └── put stable diffusion model here.txt
├── tensorboard.ps1
├── toml
│   ├── default.toml
│   ├── lora.toml
│   └── sample_prompts.txt
├── train.ipynb
├── train.ps1
├── train.sh
├── train_by_toml.ps1
└── train_by_toml.sh
/.gitattributes:
--------------------------------------------------------------------------------
1 | *.ps1 text eol=crlf
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode
2 | .idea
3 |
4 | venv
5 |
6 | output/*
7 | !output/.keep
8 |
9 | py310
10 | git
11 |
12 | train/*
13 | logs/*
14 | sd-models/*
15 | !sd-models/put stable diffusion model here.txt
16 | !logs/.keep
17 |
18 | tests/
19 |
20 | huggingface/hub/models--openai--clip-vit-large-patch14
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "sd-scripts"]
2 | path = sd-scripts
3 | url = https://github.com/kohya-ss/sd-scripts.git
4 | [submodule "frontend"]
5 | path = frontend
6 | url = https://github.com/hanamizuki-ai/lora-gui-dist
7 |
--------------------------------------------------------------------------------
/Colab_Lora_train.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "private_outputs": true,
7 | "provenance": [],
8 | "collapsed_sections": [
9 | "yufRVe5Pc7zk",
10 | "vszNeS5feTaA",
11 | "fHrpYllIh565",
12 | "k-OOv-mDlY8b",
13 | "W9wqv5U3iVZq",
14 | "19nEZDbHMzsv",
15 | "g8uAVOwb4wd8",
16 | "AyqSNCvqO1OB",
17 | "EP59EDzIH3AL",
18 | "awjy39L8jZWU",
19 | "GQ7GziwME6Fi"
20 | ],
21 | "include_colab_link": true
22 | },
23 | "kernelspec": {
24 | "name": "python3",
25 | "display_name": "Python 3"
26 | },
27 | "language_info": {
28 | "name": "python"
29 | },
30 | "gpuClass": "standard",
31 | "accelerator": "GPU"
32 | },
33 | "cells": [
34 | {
35 | "cell_type": "markdown",
36 | "metadata": {
37 | "id": "view-in-github",
38 | "colab_type": "text"
39 | },
40 | "source": [
41 |         "<a href=\"https://colab.research.google.com/github/WSH032/lora-scripts/blob/main/Colab_Lora_train.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
42 | ]
43 | },
44 | {
45 | "cell_type": "markdown",
46 | "source": [
47 | "\n",
48 | "[](https://visitorbadge.io/status?path=wsh.colob_lora_train)\n",
49 | "[](https://github.com/WSH032/lora-scripts/)\n",
50 | "\n",
51 | "| Notebook Name | Description | Link | Old-Version |\n",
52 | "| --- | --- | --- | --- |\n",
53 | "| [Colab_Lora_train](https://github.com/WSH032/lora-scripts/) | 基于[Akegarasu/lora-scripts](https://github.com/Akegarasu/lora-scripts)的定制化Colab notebook | [](https://colab.research.google.com/github/WSH032/lora-scripts/blob/main/Colab_Lora_train.ipynb) | [](https://colab.research.google.com/drive/1_f0qJdM43BSssNJWtgjIlk9DkIzLPadx) | \n",
54 | "| [kohya_train_webui](https://github.com/WSH032/kohya-config-webui) `NEW` | 基于[WSH032/kohya-config-webui](https://github.com/WSH032/kohya-config-webui)的WebUI版Colab notebook | [](https://colab.research.google.com/github/WSH032/kohya-config-webui/blob/main/kohya_train_webui.ipynb) |\n",
55 | "\n",
56 | "如果你觉得此项目有用,可以去[](https://github.com/WSH032/lora-scripts/) 点一颗小星星,非常感谢你⭐\n",
57 | "\n",
58 | "---\n",
59 | "\n",
60 | "# **基于Bilibili UP主:[秋葉aaaki](https://space.bilibili.com/12566101)发布的[保姆式LoRA模型一键包文件](https://www.bilibili.com/video/BV1fs4y1x7p2/)修改而来。**\n",
61 | "\n",
62 | "\n",
63 | "\n",
64 |     "最核心的文件的整合与代码均由主要作者[秋葉aaaki](https://github.com/Akegarasu/lora-scripts)完成。\n",
65 |     "\n",
66 | "\n",
67 | "\n",
68 | "开始前**建议阅读**:\n",
69 | "\n",
70 | "\n",
71 | "1. [保姆式LoRA模型一键包文件](https://www.bilibili.com/video/BV1fs4y1x7p2/)\n",
72 | "2. [参数心得](https://www.bilibili.com/video/BV1GM411E7vk/)\n",
73 | "3. [训练教程](https://www.bilibili.com/read/cv21926598)\n",
74 | "\n",
75 | "\n",
76 | "\n",
77 | "\n",
78 | "> 修改by[Happy_WSH](https://space.bilibili.com/8417436)。\n",
79 | "\n",
80 | "> 本人只是完成Colab下的依赖安装,使用的下载源不保证长期有效。本人未学习过python与linux的使用,代码在ChatGPT的指导下完成,不足部分,有兴趣者可修改并分享。\n",
81 | "\n",
82 | "> 2023年2月19日实测Colab的Tesla T4 GPU可运行\n",
83 | "\n",
84 | "> *--分享的责任与获取的自由*"
85 | ],
86 | "metadata": {
87 | "id": "2xPpy2V_bm6q"
88 | }
89 | },
90 | {
91 | "cell_type": "markdown",
92 | "source": [
93 | "## 之前更新内容"
94 | ],
95 | "metadata": {
96 | "id": "yufRVe5Pc7zk"
97 | }
98 | },
99 | {
100 | "cell_type": "code",
101 | "source": [
102 | "#@title 之前:\n",
103 | "\n",
104 | "#@markdown >2月20日18点更新:按照秋葉aaaki的建议进行了部分的修改\n",
105 | "#@markdown >\n",
106 | "#@markdown >内容:更换aria2下载工具、精简了依赖安装的代码、加入下载模型的交互、更换默认底模为animefull-final-pruned、添加了修改模型输出路径至谷歌硬盘的教程。\n",
107 | "#@markdown >\n",
108 | "#@markdown >修改了操作教程\n",
109 | "\n",
110 | "\n",
111 | "#@markdown >2月24日02点更新:秋葉aaaki更新了lion优化器\n",
112 | "#@markdown >\n",
113 | "#@markdown >内容:请看[lion 优化器](https://www.bilibili.com/opus/765826255751741460)\n",
114 | "#@markdown >\n",
115 | "\n",
116 | "\n",
117 | "#@markdown >2月26日12点更新:更新了正则化图片的使用教程\n",
118 | "#@markdown >\n",
119 | "#@markdown >内容:请看[02分47秒](https://www.bilibili.com/video/BV1GM411E7vk/)\n",
120 | "#@markdown >\n",
121 | "#@markdown >使用方法请看**(三)**和**(七).5**\n",
122 | "#@markdown >\n",
123 | "\n",
124 | "\n",
125 | "#@markdown >3月03日20点更新:lora版本更新,要求python3.10\n",
126 | "#@markdown >\n",
127 | "#@markdown >内容:在Colab中加入了py的更新;同时留了一个备份源,如需使用请自行删除注释\n",
128 | "#@markdown >\n",
129 | "#@markdown >**注意此条已过时,请看3月16的更新内容**\n",
130 | "\n",
131 | "#@markdown >3月13日15点更新:优化操作步骤\n",
132 | "#@markdown >\n",
133 | "#@markdown >内容:将安装依赖和拷贝文件分开执行,方便安装依赖时上传文件或配置train.sh。具体请看此notebook最开头。\n",
134 | "\n",
135 | "\n",
136 | "#@markdown >3月16日20点更新:优化python3.10.10安装方式、教程及代码块排版大更新\n",
137 | "#@markdown >\n",
138 | "#@markdown >感谢[枫娘](https://space.bilibili.com/28357)分享的更新代码,现在已经**不再需要重启**\n",
139 | "#@markdown >\n",
140 |     "#@markdown >内容:优化了python升级部分,升级和安装依赖用时更短;同时安装兼容torch==1.13.1版本的torchvision==0.14.1\n",
141 | "#@markdown >\n",
142 | "#@markdown >下载模型部分删去国内源,加了SD1.5\n",
143 | "#@markdown >\n",
144 | "#@markdown >教程及代码块排版大更新,如果描述不清楚的地方,或你有更好的意见请私信我\n",
145 | "\n",
146 | "#@markdown >3月19日02点更新:参数设置交互大更新\n",
147 | "#@markdown >\n",
148 | "#@markdown >添加了交互式参数设置\n",
149 | "#@markdown >\n",
150 | "#@markdown >内容:编写了一个使用正则表达式匹配的函数修改秋叶的train.sh文件,你可以在(四)中找到这个函数;基于此函数添加了常用参数交互式修改;秋叶github克隆部分被移至(四)\n",
151 | "#@markdown >\n",
152 | "\n",
153 | "#@markdown >3月20日05点更新:添加有趣的扩展\n",
154 | "#@markdown >\n",
155 | "#@markdown >内容:添加web文件管理器、WD1.4打标、tensorboard\n",
156 | "\n",
157 | "\n",
158 | "#@markdown >3月20日18点更新:更新进阶参数\n",
159 | "#@markdown >\n",
160 | "#@markdown >内容:添加重训练设置、保存学习状态设置等进阶参数,(七)中可以看到\n",
161 | "\n",
162 | "#@markdown >3月22日更新:添加采样图片代码块,修改正则表达式匹配函数\n",
163 | "#@markdown >\n",
164 |     "#@markdown >内容:采样图片功能允许你边训练边出图,你可以在(七)中找到这个模块;为了适配秋叶的train.sh内容更新,正则匹配函数现在会报错:`\"警告!!!对于'{search}'的正则表达式并未匹配,请手动设置该参数,并B站私信我更新!\"`,看到这个提示你可以手动修改train.sh中相关部分,也可以使用WSH的库,也可以B站私信我\n",
165 | "\n",
166 | "#@markdown >3月23日更新:更新vae下载设置、加入Dadaption优化器、添加lowram、v2开启选项\n",
167 | "#@markdown >\n",
168 | "#@markdown >内容:可以在(六)、(七)中找到相关\n",
169 | "\n",
170 | "#@markdown >3月24日20点更新:秋叶版本更新,kohya脚本更新至3月21日,支持LoHa训练\n",
171 | "#@markdown >\n",
172 | "#@markdown >内容:采样图片现在支持使用`(1girl:1.1)`和`[1girl]`格式,同时75个tokens上限被取消\n",
173 | "\n",
174 | "#@markdown > 4月16日03更新:秋叶版本更新\n",
175 | "#@markdown > \n",
176 | "#@markdown > 内容:新加信噪比参数,添加保存train.sh文件功能,添加使用保存的train.sh覆盖功能\n",
177 | "#@markdown > \n",
178 | "#@markdown > 底模更换,novelAI的底模由3.59g被更换至4g,占用显存会稍微变大,如果想用之前的,复制下载模块给出的链接\n",
179 | "\n",
180 | "#@markdown > 4月28日19更新:秋叶版本更新, colab版本更新\n",
181 | "#@markdown > \n",
182 | "#@markdown > 内容:修改noise_offset为字符串类型; 新的笔记本; 仓库链接更换。\n",
183 | "#@markdown > \n",
184 | "#@markdown > - Colab现在默认已经是python3.10和torch2.0.0了,故不再安装相关环境\n",
185 | "#@markdown >\n",
186 | "#@markdown > - 安装时间现在只要原来的一半(3分钟)\n",
187 | "#@markdown >\n",
188 | "#@markdown > - 秋叶版本的train.sh里现在多了一些参数,但是我疲惫于维护这个笔记本了。如果你想改新参数,就自己点进去更改。不再默认的提供秋叶仓库的下载,现在默认为我的仓库。\n",
189 | "#@markdown >\n",
190 | "#@markdown > 我目前的精力都会转移到这个新的笔记本[](https://colab.research.google.com/github/WSH032/kohya-config-webui/blob/main/kohya_train_webui.ipynb),提供新的WebUI界面,新的参数这里面都有。我觉得比目前这个笔记本操作性更好。\n"
191 | ],
192 | "metadata": {
193 | "cellView": "form",
194 | "id": "3KgbMxuWcb7K"
195 | },
196 | "execution_count": null,
197 | "outputs": []
198 | },
199 | {
200 | "cell_type": "markdown",
201 | "source": [
202 | "## 最新更新\n",
203 | "> **2023年5月24日**更新:colab更新\n",
204 | "> \n",
205 | "> 内容:\n",
206 | ">\n",
207 | "> - 更新torch==2.0.1 xformer==0.0.20\n",
208 | ">\n",
209 | "> - 更换新的访客计数器\n"
210 | ],
211 | "metadata": {
212 | "id": "4NJczBMrdbEH"
213 | }
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "source": [
218 | "## 待解决问题:\n",
219 | "\n",
220 | "\n",
221 | "\n",
222 | "> 1.(已完成)train.sh中训练参数的设定和更改模型输出路径需要自己打开文件中手动完成,如何像下载模型那样添加一个交互呢?(感谢[枫娘](https://space.bilibili.com/28357)分享的灵感)\n",
223 | ">\n",
224 | "> 2.我删除了venv的部分,并且观察到其仍能正常运行。这是否会有什么影响呢?\n",
225 | ">\n",
226 | "> 3.(已完成)python3.10的安装似乎有点奇怪(已解决?详细请看**更新**,如觉得还有改进空间私信我,我会很开心)\n",
227 | ">\n",
228 | ">\n",
229 |     "> 4.模块5.1中的正则表达式函数似乎有点丑陋,鲁棒性我也没测试,你是否有更好的主意呢?(注意,我不希望放弃使用秋叶的train.sh)\n",
230 | ">\n",
231 | "> 5.web文件浏览器必须在py3.9安装使用后再继续安装py3.10,如果调换顺序就无法启动web文件浏览器,这是为什么?\n",
232 | ">\n",
233 | "> 6.没有详细测试基础和进阶参数修改部分代码的bug"
234 | ],
235 | "metadata": {
236 | "id": "vszNeS5feTaA"
237 | }
238 | },
239 | {
240 | "cell_type": "markdown",
241 | "source": [
242 | "## 常见问题:\n",
243 | "\n",
244 | "\n",
245 | "1. Q:输出代码的最后出现*(kill:9)*字样\n",
246 | "\n",
247 | " A:爆ram了,更换小的底模\n",
248 | "\n",
249 | "2. Q:我无法挂载谷歌硬盘怎么办\n",
250 | "\n",
251 | " A:请看文末教程\n",
252 | "\n",
253 | "3. batch=5,显存占用达到14.1G/15G,已经是崩溃的边缘\n"
254 | ],
255 | "metadata": {
256 | "id": "fHrpYllIh565"
257 | }
258 | },
259 | {
260 | "cell_type": "markdown",
261 | "source": [
262 | "# **A. 训练前准备(第一次使用请打开阅读!!!):**"
263 | ],
264 | "metadata": {
265 | "id": "k-OOv-mDlY8b"
266 | }
267 | },
268 | {
269 | "cell_type": "code",
270 | "source": [
271 | "#@title **训练前准备(这只是教程,不是代码块,不需要你运行):**\n",
272 | "#@markdown \n",
273 | "#@markdown **注意** 按下文步骤运行\n",
274 | "#@markdown \n",
275 | "#@markdown \n",
276 | "#@markdown > **使用前强烈建议观看[参数心得](https://www.bilibili.com/video/BV1GM411E7vk/)和[保姆式LoRA模型一键包文件](https://www.bilibili.com/video/BV1fs4y1x7p2/)视频内详细讲解了各参数的修改。**\n",
277 | "#@markdown \n",
278 | "#@markdown \n",
279 | "#@markdown \n",
280 | "#@markdown \n",
281 | "#@markdown > 提醒:Colab免费用户最长连续使用GPU时间不超过12h,不要搞过于大的训练文件,一旦被强制下线训练将会终止!\n",
282 | "#@markdown >\n",
283 | "#@markdown >\n",
284 | "#@markdown > 长时间训练的建议:\n",
285 | "#@markdown >\n",
286 | "#@markdown >1. 使用Up推荐的云平台训练[AutoDL](https://www.bilibili.com/read/cv21450198)\n",
287 | "#@markdown >2. 按 (七)、2 修改模型的输出路径更改至你的谷歌硬盘\n",
288 | "#@markdown \n",
289 | "#@markdown **推荐使用谷歌硬盘配合训练,如果你因为谷歌账号等问题无法挂载,请看文末教程**\n",
290 | "#@markdown > 谷歌硬盘点左上角那个*CO*图标\n",
291 | "#@markdown \n",
292 | "#@markdown ##**(一)首先打开谷歌硬盘(左上角那个*CO*图标)**\n",
293 | "#@markdown \n",
294 | "#@markdown 在你的谷歌硬盘根目录下新建一个文件夹命名为*Lora*,并在该文件夹中再次新建一个文件夹命名为*input*。\n",
295 | "#@markdown \n",
296 | "#@markdown **注意Lora首字母大写**\n",
297 | "#@markdown \n",
298 | "#@markdown \n",
299 | "#@markdown > 即 *我的云端硬盘/Lora/input/*\n",
300 | "#@markdown \n",
301 | "#@markdown ##**(二)**\n",
302 | "#@markdown \n",
303 |     "#@markdown 按照视频中的要求处理完图片并放置在命名好的文件夹中,\n",
304 | "#@markdown \n",
305 | "#@markdown \n",
306 | "#@markdown > 即 *x_概念tag/处理后的图片*\n",
307 | "#@markdown \n",
308 |     "#@markdown 和视频讲解一样,可以有多个*x_概念tag*文件夹。\n",
309 | "#@markdown \n",
310 | "#@markdown ##**(三)**\n",
311 | "#@markdown \n",
312 | "#@markdown 将这些*x_概念文件夹*上传到你的谷歌硬盘/Lora/input/目录下\n",
313 | "#@markdown \n",
314 | "#@markdown \n",
315 | "#@markdown > 即最后的结构是 *我的云端硬盘/Lora/input/x_概念tag/处理后的图片*\n",
316 | "#@markdown \n",
317 | "#@markdown **如果你要使用正则化图片,请参考图片放置在reg目录**\n",
318 | "#@markdown \n",
319 | "#@markdown "
320 | ],
321 | "metadata": {
322 | "cellView": "form",
323 | "id": "HP5TpWB6IHL6"
324 | },
325 | "execution_count": null,
326 | "outputs": []
327 | },
328 | {
329 | "cell_type": "markdown",
330 | "source": [
331 | "# **B. 开始训练**"
332 | ],
333 | "metadata": {
334 | "id": "RBYsZ0r-EmPV"
335 | }
336 | },
337 | {
338 | "cell_type": "markdown",
339 | "source": [
340 | "##(四)挂载谷歌硬盘、拷贝秋叶的github"
341 | ],
342 | "metadata": {
343 | "id": "33G_QxFnGAJq"
344 | }
345 | },
346 | {
347 | "cell_type": "code",
348 | "source": [
349 | "#@title ### 4.1 查看GPU信息**(确保你用的是GPU运行时)**、挂载谷歌硬盘(建议)\n",
350 | "\n",
351 | "#@markdown >**注意,如果这一步你挂载谷歌硬盘失败了。请看文末教程**\n",
352 | "#@markdown >\n",
353 | "#@markdown > 同时也需要注意,这意味着你的输出模型不会被保存到谷歌硬盘,而是临时存储在Colab的工作环境中,只要Colab一重启就会删除所有文件。**也就是说你要及时的手动保存输出模型**,请打开左边栏的文件选项,模型会被保存至/content/drive/MyDrive/Lora/output/,右键下载你的输出模型。\n",
354 | "\n",
355 | "#@markdown 如果你成功挂载了你的谷歌硬盘,则训练完成后数据会被保存到谷歌硬盘长期储存,而不会被自动清除。\n",
356 | "\n",
357 | "#查看是什么GPU\n",
358 | "!nvidia-smi\n",
359 | "#挂载谷歌硬盘\n",
360 | "from google.colab import drive\n",
361 | "drive.mount('/content/drive/')\n",
362 | "!echo \"google硬盘挂载完成.\""
363 | ],
364 | "metadata": {
365 | "id": "vWy1hoUU3bv-",
366 | "cellView": "form"
367 | },
368 | "execution_count": null,
369 | "outputs": []
370 | },
371 | {
372 | "cell_type": "code",
373 | "source": [
374 | "#@title ### 4.2 克隆github中的lora训练模型、定义正则表达式函数、声明ExtArgsContent类用于在(六、七)中修改extArgs的值、初始化output、log、sample_prompt.txt、训练材料路径**(必须运行)**\n",
375 | "\n",
376 | "#@markdown **此代码块执行完后,lora训练文件会从相应的github克隆过来,你可以在安装依赖时去配置train.sh文件**\n",
377 | "\n",
378 | "#@markdown **中途切换版本会初始化除5.2的全部模块,如果你这么做了,请重复从5.4开始的拷贝、模型下载、参数配置**\n",
379 | "\n",
380 | "\n",
381 | "#@markdown 选择版本(我的备份版本、兼容python3.8的老版本)\n",
382 | "\n",
383 | "#@markdown - 不再默认提供秋叶版本(你下载最新的秋叶版本,可能会不兼容)\n",
384 | "\n",
385 | "#@markdown 秋叶更新后,如果我没第一时间做适配,出现bug或者报错,可以暂时使用我的备份版本\n",
386 | "\n",
387 | "choose_version = \"WSH\" #@param [\"WSH\", \"Akegarasu\", \"py3.8\"]\n",
388 | "\n",
389 | "###################################################################################\n",
390 | "\n",
391 | "%cd /content/\n",
392 | "\n",
393 | "#删除先前下载的lora训练模型\n",
394 | "!mkdir -p /content/lora-scripts/ #防止报错\n",
395 | "!rm -r /content/lora-scripts/\n",
396 | "#选择github库\n",
397 | "if choose_version == \"Akegarasu\":\n",
398 | " git_https = \"https://github.com/Akegarasu/lora-scripts\"\n",
399 | "elif choose_version == \"WSH\":\n",
400 | " git_https = \"https://github.com/WSH032/lora-scripts.git\"\n",
401 | "elif choose_version == \"py3.8\":\n",
402 | " #这是一个2023年3月1日的备份源,可以让你使用兼容py3.8的lora训练\n",
403 | " git_https = \"https://github.com/WSH032/temp.git\"\n",
404 | "else:\n",
405 | " print(\"git选择出错\")\n",
406 | "#从git仓库下载Lora训练模型\n",
407 | "print(f\"{choose_version}的github克隆中\")\n",
408 | "!git clone --recurse-submodules {git_https} /content/lora-scripts/\n",
409 | "!cd lora-scripts && git pull && git submodule update --init --recursive\n",
410 | "#对于py3.8的lora训练模型需要做一个移动\n",
411 | "if choose_version == \"py3.8\":\n",
412 | " !mv /content/lora-scripts/lora-scripts/* /content/lora-scripts/\n",
413 | " !rm -r /content/lora-scripts/lora-scripts\n",
414 | "print(f\"{choose_version}的github克隆完成 你可以在安装依赖时去配置train.sh文件\")\n",
415 | "\n",
416 | "\n",
417 | "###################################################################################\n",
418 | "#这是一个使用正则表达式匹配编辑文件的函数,用于在(七)中对train.sh的修改\n",
419 | "\n",
420 | "#导入正则表达式模块\n",
421 | "import re\n",
422 | "#########\n",
423 | "#设置train.sh文件路径,这个在函数中会被使用\n",
424 | "train_sh_path = r'/content/lora-scripts/train.sh'\n",
425 | "#########\n",
426 | "#定义函数,编辑train_sh_path路径的文件,为search的参数赋予值input\n",
427 | "#search为字符串,input可以为数值和字符串\n",
428 | "def search_input(search, input):\n",
429 | " # 使用正则表达式进行替换\n",
430 | " #匹配标志: 1匹配search=\"\" , 2匹配search=5 , 3专门专门匹配extArgs=()\n",
431 | " search_type_flag = 0\n",
432 | "\n",
433 | " #search不是字符串就报错\n",
434 | " if not( isinstance(search, str) ):\n",
435 | " return \"非字符串的'search'参数\"\n",
436 | "\n",
437 | " #如果search输入的是\"\",则专门匹配extArgs=()\n",
438 | " if search == \"\":\n",
439 | " search = \"extArgs\"\n",
440 | " pattern = rf'^{search}=(\\(.*?\\))'\n",
441 | " replace = rf'{search}=({input})'\n",
442 | " search_type_flag = 3\n",
443 | " else:\n",
444 | " # 如果input是字符串类型,匹配search=\"\"\n",
445 | " if isinstance(input, str):\n",
446 | " #pattern = rf'{search}=(\".*?\")'\n",
447 | " #replace = f\"{search}=\\\"{input}\\\"\"\n",
448 | " pattern = rf'^{search}=(\".*?\")'\n",
449 | " replace = rf'{search}=\"{input}\"'\n",
450 | " search_type_flag = 1\n",
451 | " # 如果input是数值类型,匹配search= (可以匹配小数、整数、科学计数)\n",
452 | " elif isinstance(input, (int, float)):\n",
453 | " pattern = rf'^{search}=([+-]?(?:\\d+(?:\\.\\d*)?|\\.\\d+)(?:[eE][+-]?\\d+)?)'\n",
454 | " replace = rf'{search}={input}'\n",
455 | " search_type_flag = 2\n",
456 | " else: # 其他情况,就返回错误信息 \n",
457 | " return \"错误的匹配input\"\n",
458 | "\n",
459 | " #使用with语句打开文件,并读取内容\n",
460 | " with open(train_sh_path, 'r', encoding='utf-8') as f:\n",
461 | " content = f.read()\n",
462 | " re_get = re.findall(pattern, content, flags=re.MULTILINE|re.DOTALL)\n",
463 | " #检查是否匹配到,匹配不到则报错并退出\n",
464 | " if not(re_get):\n",
465 | " print(f\"警告!!!对于'{search}'的正则表达式并未匹配,请手动设置该参数,并B站私信我更新!\")\n",
466 | " return\n",
467 | " #如果匹配到了执行接下来操作\n",
468 | " #使用re.sub函数进行替换,并加上re.MULTILINE标志\n",
469 | " new_content = re.sub(pattern, replace, content, flags=re.MULTILINE|re.DOTALL, count=1)\n",
470 | " #如果内容未更改,提示未改变以及值\n",
471 | " if new_content == content:\n",
472 | " print(f\"{search}={re_get[0]}\")\n",
473 | " return\n",
474 | " #如果改变则写入,输出改变信息\n",
475 | " else:\n",
476 | " with open(train_sh_path, 'w', encoding='utf-8') as f:\n",
477 | " f.write(new_content)\n",
478 | " if search_type_flag == 1: right = left = \"\\\"\"\n",
479 | " elif search_type_flag == 2: right = left = \"\"\n",
480 | " elif search_type_flag == 3: right = \")\" ; left = \"(\" \n",
481 | " else: print(\"输入参数错误\")\n",
482 | " print(f\"发生修改,{search}={re_get[0]}现在为{left}{input}{right}\")\n",
483 | "\n",
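        "# Usage examples (the values on the left are illustrative assumptions about the bundled train.sh defaults):\n",
        "#   search_input(\"batch_size\", 4)       -> rewrites batch_size=1 to batch_size=4\n",
        "#   search_input(\"output_name\", \"aki\")   -> rewrites output_name=\"...\" to output_name=\"aki\"\n",
        "#   search_input(\"\", '\"--lowram\" ')      -> rewrites the whole extArgs=(...) array\n",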
484 | "#模型输出地址被更改至\"/content/drive/MyDrive/Lora/output/\" \n",
485 | "output_dir = \"/content/drive/MyDrive/Lora/output/\"\n",
486 | "search_input(\" --output_dir\", output_dir)\n",
487 | "print(f\"模型输出地址默认被更改至谷歌硬盘:{output_dir}\")\n",
488 | "\n",
489 | "#初始化log输出路径\n",
490 | "logging_dir = output_dir + \"/logs\"\n",
491 | "search_input(\" --logging_dir\", logging_dir)\n",
492 | "print(f\"log日志默认输出至谷歌硬盘:{logging_dir}\")\n",
493 | "\n",
494 | "#初始化sample_prompt.txt路径\n",
495 | "sample_prompt_txt_path = \"/content/lora-scripts/sample_prompt.txt\"\n",
496 | "print(f\"sample_prompt.txt默认路径为:{sample_prompt_txt_path}\")\n",
497 | "\n",
498 | "#初始化训练集路径\n",
499 | "train_data_dir = \"/content/lora-scripts/train/aki/\"\n",
500 | "search_input(\"train_data_dir\", train_data_dir)\n",
501 | "print(f\"训练集将拷贝至:{train_data_dir}\")\n",
502 | "\n",
503 | "#初始化正则化集路径\n",
504 | "reg_data_dir = \"/content/lora-scripts/train/reg/\"\n",
505 | "print(f\"正则化集将拷贝至:{reg_data_dir}\")\n",
506 | "\n",
507 | "#弃用代码\n",
508 | "#!sed -i 's/--output_dir=\".\\/output\"/--output_dir=\"/content/drive/MyDrive/Lora/output/\"/' /content/lora-scripts/train.sh\n",
509 | "\n",
510 | "\n",
511 | "#################################################################\n",
512 | "#声明extArgs_Content类,用于在不同代码块中更新extArgs的内容\n",
513 | "class ExtArgsContent(object):\n",
514 | " def __init__(self):\n",
515 | " self.base_model = \"\"\n",
516 | " self.vae = \"\"\n",
517 | " self.common_parameter = \"\"\n",
518 | " self.sample_parameter = \"\"\n",
519 | " self.plus_parameter = \"\"\n",
520 | " #合并全部类属性字符串\n",
521 | " def all(self):\n",
522 | " result = \"\"\n",
523 | " attributes = self.__dict__.values()\n",
524 | " for attribute in attributes:\n",
525 | " result += attribute\n",
526 | " return result\n",
527 | "#将在(六、七)中被使用\n",
528 | "extArgs_content = ExtArgsContent()\n",
529 | "\n",
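        "# Note (illustrative): sections (6) and (7) each fill one attribute (e.g. extArgs_content.vae,\n",
        "# extArgs_content.common_parameter); extArgs_content.all() concatenates them into a string such as\n",
        "# '\"--vae=/content/lora-scripts/vae/vae.pt\" \"--lowram\" ', which search_input(\"\", ...) then writes\n",
        "# into the extArgs=() array of train.sh.\n",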
530 | "#################################################################\n",
531 | "#用于读取 --output_dir=\"\"和 --logging_dir=\"\"的值,来修改8.2中保存路径\n",
532 | "def search_get(search):\n",
533 | " # 如果input是字符串类型,匹配search=\"\"\n",
534 | " #pattern = rf'{search}=(\".*?\")'\n",
535 | " pattern = rf'^{search}=(\".*?\")'\n",
536 | " #使用with语句打开文件,并读取内容\n",
537 | " with open(train_sh_path, 'r', encoding='utf-8') as f:\n",
538 | " content = f.read()\n",
539 | " re_get = re.findall(pattern, content, flags=re.MULTILINE|re.DOTALL)\n",
540 | " return re_get[0]\n",
541 | "\n",
542 | "#################################################################\n",
543 | "#初始化,在(八)中执行判断\n",
544 | "enable_sample = False\n",
545 | "use_train_sh_self = False\n",
546 | "use_sample_prompt_txt_self = False\n"
547 | ],
548 | "metadata": {
549 | "cellView": "form",
550 | "id": "UV6qjXaZRLFr"
551 | },
552 | "execution_count": null,
553 | "outputs": []
554 | },
555 | {
556 | "cell_type": "markdown",
557 | "source": [
558 | "## (五)、安装环境及拷贝材料"
559 | ],
560 | "metadata": {
561 | "id": "UIMsiQtyDAcT"
562 | }
563 | },
564 | {
565 | "cell_type": "code",
566 | "source": [
567 | "#@title ### 5.1是否打开web文件浏览器,方便你管理Colab环境中的文件(可选)\n",
568 | "\n",
569 | "#@markdown 是否使用web文件浏览器,在你运行其他代码块的时候这也是实时工作的 *(在colab中使用,在新标签页使用)*\n",
570 | "use_file_explorer = True #@param {type:\"boolean\"}\n",
571 | "file_explorer_method = \"use in new tab\" #@param [\"use in colab\",\"use in new tab\"]\n",
572 | "\n",
573 | "\n",
574 | "\n",
575 | "if use_file_explorer:\n",
576 | " #安装imjoy-elfinder(web文件浏览器)\n",
577 | " !pip -q install imjoy-elfinder > /dev/null 2>&1\n",
578 | " import threading\n",
579 | " from google.colab import output\n",
580 | " from imjoy_elfinder.app import main\n",
581 | " #开始imjoy-elfinder服务\n",
582 | " thread = threading.Thread(target=main, args=[[\"--root-dir=/content\", \"--port=8765\"]])\n",
583 | " thread.start()\n",
584 | " if file_explorer_method == \"use in colab\":\n",
585 | " #在colab中打开端口 \n",
586 | " output.serve_kernel_port_as_iframe(8765, height='600')\n",
587 | " elif file_explorer_method == \"use in new tab\":\n",
588 | " #在新标签页打开端口\n",
589 | " output.serve_kernel_port_as_window(8765)\n",
590 | " else:\n",
591 | " print(\"imjoy_elfinder使用出错\")\n",
592 | "#提示未勾选\n",
593 | "else:\n",
594 | " print(\"你似乎想使用web文件浏览器,但你并未勾选\")"
595 | ],
596 | "metadata": {
597 | "cellView": "form",
598 | "id": "j8pnanXvL44g"
599 | },
600 | "execution_count": null,
601 | "outputs": []
602 | },
603 | {
604 | "cell_type": "code",
605 | "source": [
606 | "#@title ### 5.2 安装依赖环境、开启tensorboard**(输出已经被隐藏,有需要可以自己修改代码看输出)**\n",
607 | "\n",
608 | "#@markdown 整个环境安装所需时间为**2分钟左右**\n",
609 | "#@markdown\n",
610 | "#@markdown **在(四)中lora训练文件已经被克隆进来,如果你已经熟练使用此colab,你可以利用这段时间上传图片,或者去*(七)*完成train.sh的配置**\n",
611 | "#@markdown \n",
612 | "#@markdown train.sh文件: /content/lora-scripts/train.sh\n",
613 | "###################################################################################\n",
614 | "#升级python\n",
615 | "#感谢枫娘分享的代码\n",
616 | "\n",
617 | "\n",
618 | "\"\"\"\n",
619 | "现在colab默认已经是python3.10\n",
620 | "install python 3.10 安装py3.10\n",
621 | "!sudo apt-get update -y > /dev/null 2>&1\n",
622 | "!sudo apt-get install python3.10 > /dev/null 2>&1\n",
623 | "#change alternatives 首选py3.9\n",
624 | "!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1 > /dev/null 2>&1\n",
625 | "!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 2 > /dev/null 2>&1\n",
626 | "#check python version 查看版本 #3.10\n",
627 | "!python --version\n",
628 | "print(\"python升级中\")\n",
629 | "# install pip for new python 为py3.10安装pip\n",
630 | "!sudo apt-get install python3.10-distutils > /dev/null 2>&1\n",
631 | "!wget https://bootstrap.pypa.io/get-pip.py > /dev/null 2>&1\n",
632 | "!python get-pip.py > /dev/null 2>&1\n",
633 | "#install colab's dependencies 安装colab依赖\n",
634 | "!python -m pip install ipython ipython_genutils ipykernel jupyter_console prompt_toolkit httplib2 astor > /dev/null 2>&1\n",
635 | "# link to the old google package 将py3.9的谷歌依赖连接至3.10\n",
636 | "!ln -s /usr/local/lib/python3.9/dist-packages/google \\\n",
637 | "/usr/local/lib/python3.10/dist-packages/google > /dev/null 2>&1\n",
638 | "print(\"python升级完成 1/6\")\n",
639 | "\n",
640 | "#这是一个备份更新python3.10.6的方式\n",
641 | "#切换到python3.10\n",
642 | "#!wget https://github.com/korakot/kora/releases/download/v0.10/py310.sh\n",
643 | "#!bash ./py310.sh -b -f -p /usr/local\n",
644 | "#!python -m ipykernel install --name \"py310\" --user\n",
645 | "\n",
646 | "\"\"\"\n",
647 | "\n",
648 | "###################################################################################\n",
649 | "#安装相关环境\n",
650 | "\n",
651 | "pip_all_number = 3\n",
652 | "pip_count = 1\n",
653 | "\n",
654 | "#liblz4-tool需要解压才安装\n",
655 | "#安装aria2\n",
656 | "!apt -qq install aria2\n",
657 | "\n",
658 | "#安装兼容torch 2.0.0\n",
659 | "print(f\"torch安装中\")\n",
660 | "!pip -q install torch==2.0.1 torchvision xformers triton\n",
661 | "print(f\"torch安装完成 {pip_count}/{pip_all_number}\")\n",
662 |     "pip_count+=1\n",
663 | "#安装其他依赖\n",
664 | "print(f\"其他依赖安装中,此步耗时较长,请耐心等待\")\n",
665 | "%cd /content/lora-scripts/sd-scripts/\n",
666 | "!pip -q install --upgrade -r requirements.txt\n",
667 | "print(f\"其他依赖安装完成 {pip_count}/{pip_all_number}\")\n",
668 | "pip_count+=1\n",
669 | "\n",
670 | "#安装lion优化器、dadaptation优化器、lycoris\n",
671 | "print(f\"lion优化器、dadaptation优化器、lycoris安装中\")\n",
672 | "!pip -q install --upgrade lion-pytorch lycoris-lora dadaptation\n",
673 |     "print(f\"lion优化器、dadaptation优化器、lycoris安装完成 {pip_count}/{pip_all_number}\")\n",
674 | "pip_count+=1\n",
675 | "\n",
676 | "!python --version\n",
677 | "\n",
678 | "#############################\n",
679 | "#开启tensorboard\n",
680 | "%load_ext tensorboard"
681 | ],
682 | "metadata": {
683 | "id": "8Qp6STJk2Wjh",
684 | "collapsed": true,
685 | "cellView": "form"
686 | },
687 | "execution_count": null,
688 | "outputs": []
689 | },
690 | {
691 | "cell_type": "markdown",
692 | "source": [
693 | "### 5.3是否使用Waifu Diffusion 1.4 Tagger V2打标(可选)"
694 | ],
695 | "metadata": {
696 | "id": "W9wqv5U3iVZq"
697 | }
698 | },
699 | {
700 | "cell_type": "code",
701 | "source": [
702 | "#@title ###5.3使用WD1.4tagger打标(原始代码来源于:[Linaqruf](https://github.com/Linaqruf/kohya-trainer))\n",
703 | "#[Waifu Diffusion 1.4 Tagger V2](https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags)是由[SmilingWolf](https://github.com/SmilingWolf)开发的Danbooru风格的图片分类器,可以用来生成tag\n",
704 | "#(例如: `1girl, solo, looking_at_viewer, short_hair, bangs, simple_background`)\n",
705 | "\n",
706 | "#@markdown 我有一个关于图片预处理的视频推荐使用WD1.4打标,有些人和我说没有本地webui要怎么打标\n",
707 | "\n",
708 | "#@markdown 这个打标没webui里那个好,但好歹能打标对吧(可以直接对谷歌硬盘里的图片进行操作)\n",
709 | "\n",
710 | "#@markdown kohya已经编写了打标脚本,我从另一个colab([Linaqruf](https://github.com/Linaqruf/kohya-trainer),他有更强大的Colab lora训练,感兴趣可以去看看)里fork了这段代码\n",
711 | "\n",
712 | "#@markdown 如果你已经在本地完成了打标,可以忽略这个代码块\n",
713 | "\n",
714 | "#@markdown **是否使用WD1.4打标**\n",
715 | "use_tagger = False #@param {type:\"boolean\"}\n",
716 | "\n",
717 | "#@markdown **(必填)**填入你要打标的文件夹地址,如果你有多个概念文件夹,你需要为每个都执行这项操作\n",
718 | "tag_data_dir = \"/content/drive/MyDrive/Lora/input/repeat_concept\" #@param {type:'string'}\n",
719 | "\n",
720 | "#@markdown batch大小、加载线程、打标模型\n",
721 | "batch_size = 8 #@param {type:'number'}\n",
722 | "max_data_loader_n_workers = 2 #@param {type:'number'}\n",
723 | "tagger_model = \"SmilingWolf/wd-v1-4-convnext-tagger-v2\" #@param [\"SmilingWolf/wd-v1-4-swinv2-tagger-v2\", \"SmilingWolf/wd-v1-4-convnext-tagger-v2\", \"SmilingWolf/wd-v1-4-vit-tagger-v2\"]\n",
724 | "#@markdown 调整阈值,越低tag越多,但准确率越低\n",
725 | "#@markdown - 这两句是代码作者的原话,自己判断对不对,反正我喜欢用0.35炼人物\n",
726 | "#@markdown - 高阈值(例如`0.85`)适用于人物或者物体的训练\n",
727 | "#@markdown - 低阈值(例如`0.35`)适用于常规的\\画风的\\环境的训练\n",
728 | "threshold = 0.35 #@param {type:\"slider\", min:0, max:1, step:0.01}\n",
729 | "\n",
730 | "if use_tagger:\n",
731 |     "#图片路径为谷歌硬盘中的路径\n",
732 | " !python /content/lora-scripts/sd-scripts/finetune/tag_images_by_wd14_tagger.py \\\n",
733 | " \"{tag_data_dir}\" \\\n",
734 | " --batch_size {batch_size} \\\n",
735 | " --repo_id {tagger_model} \\\n",
736 | " --thresh {threshold} \\\n",
737 | " --caption_extension .txt \\\n",
738 | " --max_data_loader_n_workers {max_data_loader_n_workers}\n",
739 | "else:\n",
740 | " print(\"似乎你想使用WD1.4tagger,但你并未勾选\")"
741 | ],
742 | "metadata": {
743 | "cellView": "form",
744 | "id": "go-KBrkHQuNh"
745 | },
746 | "execution_count": null,
747 | "outputs": []
748 | },
749 | {
750 | "cell_type": "markdown",
751 | "source": [
752 | "### 5.4 从谷歌硬盘中拷贝训练材料"
753 | ],
754 | "metadata": {
755 | "id": "9GeMKE6hie8_"
756 | }
757 | },
758 | {
759 | "cell_type": "code",
760 | "source": [
761 | "#@title ####拷贝材料(支持重复训练时选择新的路径)\n",
762 | "\n",
763 | "#@markdown 默认给的是教程(三)中的谷歌硬盘目录,你也可以自定义训练集和正则化集路径(只要你知道你在做什么),请选择repeat_concept的父目录\n",
764 | "\n",
765 | "#@markdown 为了稳定性和性能,notebook会把材料拷贝到colab的`/content/lora-scripts/train/`中进行操作\n",
766 | "\n",
767 | "#@markdown 是否使用自定义路径,是否拷贝正则化图片\n",
768 | "use_data_dir_self = False #@param {type:\"boolean\"}\n",
769 | "copy_reg = False #@param {type:\"boolean\"}\n",
770 | "#@markdown 自定义训练集路径,正则化集路径(仅在勾选后有效)**(不要使用带空格、中文的路径)**\n",
771 | "train_data_dir_self = \"/content/drive/MyDrive/Lora/input/\" #@param {type:'string'}\n",
772 | "reg_data_dir_self = \"/content/drive/MyDrive/Lora/reg/\" #@param {type:'string'}\n",
773 | "\n",
774 | "if use_data_dir_self:\n",
775 | " print(f\"你使用的是自定义路径\")\n",
776 | "else:\n",
777 | " train_data_dir_self = \"/content/drive/MyDrive/Lora/input/\"\n",
778 | " reg_data_dir_self = \"/content/drive/MyDrive/Lora/reg/\"\n",
779 | " print(f\"你使用的是默认路径\")\n",
780 | "print(f\"训练集地址为:{train_data_dir_self}\")\n",
781 | "\n",
782 | "\n",
783 | "#@markdown 拷贝时间由训练材料大小决定\n",
784 | "\n",
785 | "#@markdown 出现这样的输出就是正确放置了文件且完成了拷贝\n",
786 | "\n",
787 | "#@markdown \n",
788 | "\n",
789 | "#删除之前的训练材料\n",
790 | "!mkdir -p /content/lora-scripts/train/ #防止首次运行报错\n",
791 | "!rm -r /content/lora-scripts/train/\n",
792 | "\n",
793 | "#从谷歌硬盘中拷贝你之前上传的训练材料\n",
794 | "print(\"拷贝训练集中\")\n",
795 | "!mkdir -p {train_data_dir}\n",
796 | "!cp -r {train_data_dir_self}/* {train_data_dir}\n",
797 | "!echo \"copy训练材料完成.\"\n",
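        "# Assuming the default Drive layout from section A, the result of this copy is roughly\n",
        "#   /content/lora-scripts/train/aki/<repeat>_<concept>/<images and .txt captions>\n",
        "# which is the train_data_dir that was written into train.sh in 4.2.\n",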
798 | "\n",
799 | "if copy_reg:\n",
800 | " #拷贝正则化图片\n",
801 | " print(f\"正则化集地址为:{reg_data_dir_self}\")\n",
802 | " print(\"拷贝正则化集中\")\n",
803 | " !mkdir -p {reg_data_dir}\n",
804 | " !cp -r {reg_data_dir_self}/* {reg_data_dir}\n",
805 | " !echo \"copy正则化图片完成.\"\n",
806 | "else:\n",
807 | " print(\"不拷贝正则化集\")\n",
808 | "\n",
809 | "\n",
810 | "%cd /content/lora-scripts"
811 | ],
812 | "metadata": {
813 | "id": "j6nkqp9Fb7Dg",
814 | "cellView": "form"
815 | },
816 | "execution_count": null,
817 | "outputs": []
818 | },
819 | {
820 | "cell_type": "markdown",
821 | "source": [
822 | "##(六) 下载模型,默认选择的是novelAI官模剪枝版(原始代码来源于:[Linaqruf](https://github.com/Linaqruf/kohya-trainer))"
823 | ],
824 | "metadata": {
825 | "id": "2wRUt_h0FHy5"
826 | }
827 | },
828 | {
829 | "cell_type": "code",
830 | "source": [
831 | "#@title ### 6.1 下载模型\n",
832 | "installModels = []\n",
833 | "installv2Models = []\n",
834 | "\n",
835 | "#@markdown ####**选择优先级从上到下,比如说你想自定义链接,则需要保持两个预设模型为空**\n",
836 | "\n",
837 | "#@markdown **预设剪枝模型(`Animefull-final-pruned`即NovelAI官模 , `Stable-Diffusion-v1-5`即SD1.5)**\n",
838 | "\n",
839 | "#@markdown SD1.x model\n",
840 | "modelName = \"Animefull-final-pruned\" # @param [\"\", \"Animefull-final-pruned\", \"Stable-Diffusion-v1-5\", \"Anything-v3-1\", \"AnyLoRA\", \"AnimePastelDream\", \"Chillout-mix\", \"OpenJourney-v4\"]\n",
841 | "#@markdown SD2.x model\n",
842 | "v2ModelName = \"\" # @param [\"\", \"stable-diffusion-2-1-base\", \"stable-diffusion-2-1-768v\", \"plat-diffusion-v1-3-1\", \"replicant-v1\", \"illuminati-diffusion-v1-0\", \"illuminati-diffusion-v1-1\", \"waifu-diffusion-1-4-anime-e2\", \"waifu-diffusion-1-5-e2\", \"waifu-diffusion-1-5-e2-aesthetic\"]\n",
843 | "\n",
844 |     "#@markdown **自定义模型链接(例如`https://huggingface.co/a1079602570/animefull-final-pruned/resolve/main/novelailatest-pruned.ckpt`)**\n",
845 | "\n",
846 | "#@markdown **或者自定义模型路径例如`/content/drive/MyDrive/Lora/model/your_model.ckpt`**\n",
847 | "\n",
848 |     "#@markdown **如果链接或者路径中包含模型的扩展名(比如我给出的两个例子的末尾都有扩展名),则会自动指定,否则你需要手动选择**\n",
849 | "\n",
850 | "#@markdown - **注意,Colab普通用户仅能选择5G以下的模型**\n",
851 | "\n",
852 | "base_model_url = \"\" #@param {type:\"string\"}\n",
853 | "\n",
854 | "base_model_self_dir = \"\" #@param {type:\"string\"}\n",
855 | "\n",
856 | "base_model_extension = \"ckpt\" #@param [\"ckpt\", \"safetensors\", \"pt\"]\n",
857 | "\n",
858 | "\n",
859 | "modelUrl = [\n",
860 | " \"\",\n",
861 | " \"https://huggingface.co/Linaqruf/personal-backup/resolve/main/models/animefull-final-pruned.ckpt\",\n",
862 | " \"https://huggingface.co/cag/anything-v3-1/resolve/main/anything-v3-1.safetensors\",\n",
863 | " \"https://huggingface.co/Lykon/AnyLoRA/resolve/main/AnyLoRA_noVae_fp16.safetensors\",\n",
864 | " \"https://huggingface.co/Lykon/AnimePastelDream/resolve/main/AnimePastelDream_Soft_noVae_fp16.safetensors\",\n",
865 | " \"https://huggingface.co/Linaqruf/stolen/resolve/main/pruned-models/chillout_mix-pruned.safetensors\",\n",
866 | " \"https://huggingface.co/prompthero/openjourney-v4/resolve/main/openjourney-v4.ckpt\",\n",
867 | " \"https://huggingface.co/Linaqruf/stolen/resolve/main/pruned-models/stable_diffusion_1_5-pruned.safetensors\",\n",
868 | "]\n",
869 | "modelList = [\n",
870 | " \"\",\n",
871 | " \"Animefull-final-pruned\",\n",
872 | " \"Anything-v3-1\",\n",
873 | " \"AnyLoRA\",\n",
874 | " \"AnimePastelDream\", \n",
875 | " \"Chillout-mix\",\n",
876 | " \"OpenJourney-v4\",\n",
877 | " \"Stable-Diffusion-v1-5\",\n",
878 | "]\n",
879 | "v2ModelUrl = [\n",
880 | " \"\",\n",
881 | " \"https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors\",\n",
882 | " \"https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors\",\n",
883 | " \"https://huggingface.co/p1atdev/pd-archive/resolve/main/plat-v1-3-1.safetensors\",\n",
884 | " \"https://huggingface.co/gsdf/Replicant-V1.0/resolve/main/Replicant-V1.0.safetensors\",\n",
885 | " \"https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/resolve/main/illuminati_diffusion_v1.0.safetensors\",\n",
886 | " \"https://huggingface.co/4eJIoBek/Illuminati-Diffusion-v1-1/resolve/main/illuminatiDiffusionV1_v11.safetensors\",\n",
887 | " \"https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/wd-1-4-anime_e2.ckpt\",\n",
888 | " \"https://huggingface.co/waifu-diffusion/wd-1-5-beta2/resolve/main/checkpoints/wd-1-5-beta2-fp32.safetensors\",\n",
889 | " \"https://huggingface.co/waifu-diffusion/wd-1-5-beta2/resolve/main/checkpoints/wd-1-5-beta2-aesthetic-fp32.safetensors\",\n",
890 | "]\n",
891 | "v2ModelList = [\n",
892 | " \"\",\n",
893 | " \"stable-diffusion-2-1-base\",\n",
894 | " \"stable-diffusion-2-1-768v\",\n",
895 | " \"plat-diffusion-v1-3-1\",\n",
896 | " \"replicant-v1\",\n",
897 | " \"illuminati-diffusion-v1-0\",\n",
898 | " \"illuminati-diffusion-v1-1\",\n",
899 | " \"waifu-diffusion-1-4-anime-e2\",\n",
900 | " \"waifu-diffusion-1-5-e2\",\n",
901 | " \"waifu-diffusion-1-5-e2-aesthetic\",\n",
902 | "]\n",
903 | "if modelName:\n",
904 | " installModels.append((modelName, modelUrl[modelList.index(modelName)]))\n",
905 | "if v2ModelName:\n",
906 | " installv2Models.append((v2ModelName, v2ModelUrl[v2ModelList.index(v2ModelName)]))\n",
907 | "\n",
908 | "\n",
909 | "#下载路径\n",
910 | "base_model_dir = \"/content/lora-scripts/sd-models/\"\n",
911 | "\n",
912 | "#检查连接是否含有扩展名信息,不含有则由用户指定\n",
913 | "def check_ext(url):\n",
914 | " if url.endswith(\".ckpt\"):\n",
915 | " return \"ckpt\"\n",
916 | " elif url.endswith(\".safetensors\"):\n",
917 | " return \"safetensors\"\n",
918 | " else:\n",
919 | " return base_model_extension\n",
920 | "#下载模型\n",
921 | "def install(checkpoint_name, url):\n",
922 | " ext = check_ext(url)\n",
923 | " hf_token = \"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\"\n",
924 | " user_header = f'\"Authorization: Bearer {hf_token}\"'\n",
925 | " !aria2c --console-log-level=error --summary-interval=10 --header={user_header} -c -x 16 -k 1M -s 16 -d {base_model_dir} -o {checkpoint_name}.{ext} {url}\n",
926 | " return f\"{checkpoint_name}.{ext}\" #返回模型名称\n",
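        "# Example (assuming the default \"Animefull-final-pruned\" selection): install() downloads with\n",
        "#   aria2c ... -d /content/lora-scripts/sd-models/ -o Animefull-final-pruned.ckpt <model url>\n",
        "# and returns \"Animefull-final-pruned.ckpt\", which is joined with base_model_dir below.\n",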
927 | "def install_checkpoint():\n",
928 | " for model in installModels:\n",
929 | " return install(model[0], model[1])\n",
930 | " for v2model in installv2Models:\n",
931 | " return install(v2model[0], v2model[1])\n",
932 | "\n",
933 | "#尝试下载预设模型\n",
934 | "base_model_name = install_checkpoint()\n",
935 | "#预设下载成功,则完成路径修改\n",
936 | "if base_model_name:\n",
937 | " pretrained_model = base_model_dir + base_model_name\n",
938 |     "#下载失败,base_model_name为None\n",
939 | "else:\n",
940 | " #不留空,则尝试用连接下载\n",
941 | " if base_model_url:\n",
942 | " base_model_name = \"download.\" + check_ext(base_model_url)\n",
943 | " pretrained_model = base_model_dir + base_model_name\n",
944 | " !aria2c --console-log-level=error --summary-interval=10 -c -x 16 -k 1M -s 16 -d {base_model_dir} -o {base_model_name} --allow-overwrite {base_model_url}\n",
945 | " #留空,将考虑从自定义路径中拷贝\n",
946 | " else:\n",
947 | " if base_model_self_dir:\n",
948 | " base_model_name = \"self.\" + check_ext(base_model_self_dir)\n",
949 | " pretrained_model = base_model_dir + base_model_name\n",
950 | " !cp {base_model_self_dir} {pretrained_model}\n",
951 | " else:\n",
952 | " print(\"你根本没选择任何模型!\")\n",
953 | " \n",
954 | "\n",
955 | "#修改train.sh的底模路径,并输出信息\n",
956 | "search_input(\"pretrained_model\", pretrained_model)\n",
957 | "\n",
958 | "#输出模型信息\n",
959 | "print(f\"你选择的是: {base_model_name} 模型\")\n"
960 | ],
961 | "metadata": {
962 | "cellView": "form",
963 | "id": "BWOcwdgxDBj4"
964 | },
965 | "execution_count": null,
966 | "outputs": []
967 | },
968 | {
969 | "cell_type": "code",
970 | "source": [
971 | "# @title ## 6.2 下载vae(可选)\n",
972 | "\n",
973 | "#储存下载信息参数\n",
974 | "installVae = []\n",
975 | "#@markdown 选择 `none` 意味着不使用vae\n",
976 | "\n",
977 | "#@markdown 选择一个Vae下载并使用`\"animevae.pt\", \"kl-f8-anime.ckpt\", \"vae-ft-mse-840000-ema-pruned.ckpt\"`\n",
978 | "\n",
979 | "vaeUrl = [\n",
980 | " \"\",\n",
981 | " \"https://huggingface.co/Linaqruf/personal-backup/resolve/main/vae/animevae.pt\",\n",
982 | " \"https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt\",\n",
983 | " \"https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt\",\n",
984 | "]\n",
985 | "vaeList = [\"none\", \"anime.vae.pt\", \"waifudiffusion.vae.pt\", \"stablediffusion.vae.pt\"]\n",
986 | "vaeName = \"none\" # @param [\"none\", \"anime.vae.pt\", \"waifudiffusion.vae.pt\", \"stablediffusion.vae.pt\"]\n",
987 | "\n",
988 | "installVae.append((vaeName, vaeUrl[vaeList.index(vaeName)]))\n",
989 | "\n",
990 | "#开始下载\n",
991 | "vae_dir = \"/content/lora-scripts/vae/\"\n",
992 | "def install(vae_name, url):\n",
993 | " hf_token = \"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\"\n",
994 | " user_header = f'\"Authorization: Bearer {hf_token}\"'\n",
995 | " !aria2c --console-log-level=error --allow-overwrite --summary-interval=10 --header={user_header} -c -x 16 -k 1M -s 16 -d {vae_dir} -o \"vae.pt\" \"{url}\"\n",
996 | "\n",
997 | "def install_vae():\n",
998 | " if vaeName != \"none\":\n",
999 | " for vae in installVae:\n",
1000 | " install(vae[0], vae[1])\n",
1001 | " else:\n",
1002 | " pass\n",
1003 | "install_vae()\n",
1004 | "\n",
1005 | "\n",
1006 | "extArgs_content.vae = \"\"\n",
1007 | "#修改train.sh中参数\n",
1008 | "if vaeName == \"none\":\n",
1009 | " print(\"不使用vae\")\n",
1010 | "else:\n",
1011 | " print(f\"使用{vaeName}\")\n",
1012 | " #写入采样地址f\"--vae={vae_dir}\"\n",
1013 | " extArgs_content.vae += f\"\\\"--vae={vae_dir}vae.pt\\\" \"\n",
1014 | "\n",
1015 | "search_input(\"\", extArgs_content.all() )"
1016 | ],
1017 | "metadata": {
1018 | "cellView": "form",
1019 | "id": "3_29lrzlARme"
1020 | },
1021 | "execution_count": null,
1022 | "outputs": []
1023 | },
1024 | {
1025 | "cell_type": "markdown",
1026 | "source": [
1027 | "\n",
1028 | "##(七)修改train.sh参数\n",
1029 | "\n",
1030 | "\n",
1031 | "\n"
1032 | ],
1033 | "metadata": {
1034 | "id": "vhwhQZEEq65p"
1035 | }
1036 | },
1037 | {
1038 | "cell_type": "markdown",
1039 | "source": [
1040 | "1、你可以使用此代码块设置常用参数,也可以打开*/content/lora-scripts/train.sh*手动设置其它参数\n",
1041 | "\n",
1042 | "\n",
1043 | "\n",
1044 | "\n",
1045 | "**2、注意,如果你手动配置参数,除非你知道你在做什么,不然不要修改底模路径和训练集图片路径**\n",
1046 | "\n",
1047 | "```\n",
1048 | "pretrained_model=\"./sd-models/model.ckpt\" # base model path | 底模路径\n",
1049 | "\n",
1050 | "train_data_dir=\"./train/aki\" # train dataset path | 训练数据集路径\n",
1051 | "```\n",
1052 | "\n",
1053 |     "3、输出的模型会自动保存至你的谷歌硬盘/Lora/output/(output_name)目录下\n",
1054 | "\n",
1055 |     "如果你没有挂载谷歌硬盘,则模型会存储在Colab环境中的/content/drive/MyDrive/Lora/output/(output_name)这个路径下。但是请注意,一旦Colab重启会被清除,请**及时下载保存**模型。"
1056 | ],
1057 | "metadata": {
1058 | "id": "cJxcHwv8Sa9b"
1059 | }
1060 | },
1061 | {
1062 | "cell_type": "code",
1063 | "source": [
1064 | "#@title ###7.1基础参数\n",
1065 | "\n",
1066 | "#@markdown 为了适配秋叶的train.sh内容更新,正则匹配函数现在会报错:`\"警告!!!对于'{search}'的正则表达式并未匹配,请手动设置该参数,并B站私信我更新!\"`\n",
1067 | "\n",
1068 |     "#@markdown 看到这个提示你可以手动修改train.sh中相关部分,也可以使用WSH的库,也可以B站私信我更新notebook\n",
1069 | "\n",
1070 | "\n",
1071 | "#用于修改train.sh文件\n",
1072 | "extArgs_content.common_parameter = \"\"\n",
1073 | "\n",
1074 | "#底模信息\n",
1075 | "#print(\"你选择的是\" + base_model + \"底模\")\n",
1076 | "#print(\"格式为\" + base_model_extension)\n",
1077 | "\n",
1078 | "\n",
1079 | "#@markdown 是否使用正则化、正则化权重(越小越不正则)\n",
1080 | "use_reg_data = False #@param {type:\"boolean\"}\n",
1081 | "if use_reg_data:\n",
1082 |     "    search_input(\"reg_data_dir\", reg_data_dir)\n",
1083 | " print(\"\\b使用正则化\")\n",
1084 | "else:\n",
1085 | " search_input(\"reg_data_dir\", \"\")\n",
1086 | " print(\"\\b不使用正则化\")\n",
1087 | "prior_loss_weight = 0.3 #@param {type:\"slider\", min:0, max:1, step:0.01}\n",
1088 | "search_input(\" --prior_loss_weight\", prior_loss_weight)\n",
1089 | "\n",
1090 | "\n",
1091 | "\n",
1092 | "#@markdown 输出模型命名、格式(模型会被输出至`{output_folder_dir}/{output_name};默认为:/content/drive/MyDrive/Lora/output/output_name`)\n",
1093 | "output_name = \"output_name\" #@param {type:\"string\"}\n",
1094 | "search_input(\"output_name\", output_name)\n",
1095 | "save_model_as = \"safetensors\" #@param [\"ckpt\", \"safetensors\", \"pt\"]\n",
1096 | "search_input(\"save_model_as\", save_model_as)\n",
1097 | "output_folder_dir = \"/content/drive/MyDrive/Lora/output\" #@param {type:\"string\"}\n",
1098 | "#保存模型至同名文件夹\n",
1099 | "#output_dir在4.2中被初始化\n",
1100 | "output_dir = output_folder_dir + \"/\" + output_name\n",
1101 | "search_input(\" --output_dir\", output_dir)\n",
1102 | "#修改log输出至谷歌硬盘\n",
1103 | "#logging_dir在4.2中初始化\n",
1104 | "logging_dir = output_dir + \"/logs\"\n",
1105 | "search_input(\" --logging_dir\", logging_dir)\n",
1106 | "print(f\"模型输出地址为:{output_dir}\")\n",
1107 | "print(f\"log文件将会被保存至:{logging_dir}\")\n",
1108 | "\n",
1109 | "\n",
1110 | "#@markdown 图片分辨率:\"宽,高\"。支持非正方形(必须是64的倍数)\n",
1111 | "width = 512 #@param {type:\"slider\", min:64, max:1920, step:64}\n",
1112 | "height = 768 #@param {type:\"slider\", min:64, max:1920, step:64}\n",
1113 | "resolution = f\"{width},{height}\"\n",
1114 | "search_input(\"resolution\", resolution)\n",
1115 | "\n",
1116 | "#@markdown batch大小(colab普通用户512*768最多只能选5,超过就会爆显存,你可以试试)\n",
1117 | "batch_size = 1 #@param {type:\"slider\", min:1, max:16, step:1}\n",
1118 | "search_input(\"batch_size\", batch_size)\n",
1119 | "\n",
1120 |     "#@markdown 优化器选择 `一般用前三个就行`\n",
1121 | "optimizer_type = \"AdamW8bit\" #@param [\"AdamW8bit\", \"Lion\", \"DAdaptation\", \"AdamW\", \"SGDNesterov\", \"SGDNesterov8bit\", \"AdaFactor\"]\n",
1122 | "search_input(\"optimizer_type\", optimizer_type)\n",
1123 | "\n",
1124 | "#@markdown unet学习率与text学习率(lr将被设置为等于unet_lr)\n",
1125 | "#@markdown `DAdaptation优化器的学习率会自动调整,通常指定unet_lr=1;如果你希望text_encoder_lr为unet_lr一半,则指定text_encoder_lr=0.5`\n",
1126 | "unet_lr = \"1.5e-4\" #@param {type:\"string\"}\n",
1127 | "search_input(\"lr\", unet_lr)\n",
1128 | "search_input(\"unet_lr\", unet_lr)\n",
1129 | "text_encoder_lr = \"1e-5\" #@param {type:\"string\"}\n",
1130 | "search_input(\"text_encoder_lr\", text_encoder_lr)\n",
1131 | "\n",
1132 |     "#@markdown network dim与alpha\n",
1133 | "network_dim = 32 #@param {type:\"number\"}\n",
1134 | "search_input(\"network_dim\", network_dim)\n",
1135 | "network_alpha = 16 #@param {type:\"number\"}\n",
1136 | "search_input(\"network_alpha\", network_alpha)\n",
1137 | "\n",
1138 | "#@markdown 最大训练epoch ; 每N个epoch 保存一次\n",
1139 | "max_train_epoches = 15 #@param {type:\"number\"}\n",
1140 | "search_input(\"max_train_epoches\", max_train_epoches)\n",
1141 | "save_every_n_epochs = 1 #@param {type:\"number\"}\n",
1142 | "search_input(\"save_every_n_epochs\", save_every_n_epochs)\n",
1143 | "\n",
1144 |     "#@markdown 噪声偏移、保留前N个token顺序不变、最小信噪比gamma(min_snr_gamma)`开启推荐为5`\n",
1145 | "noise_offset = 0.05 #@param {type:\"number\"}\n",
1146 | "search_input(\"noise_offset\", f\"{noise_offset}\")\n",
1147 | "keep_tokens = 1 #@param {type:\"number\"}\n",
1148 | "search_input(\"keep_tokens\", keep_tokens)\n",
1149 | "min_snr_gamma = 0 #@param {type:\"number\"}\n",
1150 | "search_input(\"min_snr_gamma\", min_snr_gamma)\n",
1151 | "\n",
1152 | "#@markdown 学习率调度器、升温步数、余弦硬重启次数 ; 升温步数建议设置成总steps的5%左右`总steps = epoch * repeat * (训练集+正则图数) / batch`不会算就跑一遍训练看看\n",
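        "# Worked example of the formula above (hypothetical numbers): 10 epochs * 8 repeats * 100 images\n",
        "# / batch 2 = 4000 total steps, so roughly 200 warmup steps would be about 5% of the total.\n",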
1153 | "lr_scheduler = \"cosine_with_restarts\" #@param [\"cosine_with_restarts\",\"cosine\",\"polynomial\",\"linear\",\"constant_with_warmup\",\"constant\"]\n",
1154 | "search_input(\"lr_scheduler\", lr_scheduler)\n",
1155 | "lr_warmup_steps = 0 #@param {type:\"number\"}\n",
1156 | "search_input(\"lr_warmup_steps\", lr_warmup_steps)\n",
1157 | "lr_restart_cycles = 1 #@param {type:\"number\"}\n",
1158 | "search_input(\"lr_restart_cycles\", lr_restart_cycles)\n",
1159 | "\n",
1160 | "#@markdown 训练方法`\"LoRa\", \"LoCon\", \"LoHa\"`\n",
1161 | "train_method = \"LoRa\" #@param [\"LoRa\", \"LoCon\", \"LoHa\"]\n",
1162 | "if train_method == \"LoRa\":\n",
1163 | " network_module = \"networks.lora\"\n",
1164 | " algo = \"lora\"\n",
1165 | "elif train_method == \"LoCon\":\n",
1166 | " network_module = \"lycoris.kohya\"\n",
1167 | " algo = \"lora\"\n",
1168 | "elif train_method == \"LoHa\":\n",
1169 | " network_module = \"lycoris.kohya\"\n",
1170 | " algo = \"loha\"\n",
1171 | "else:\n",
1172 | " print(\"训练方法选择出错\")\n",
1173 | "search_input(\"network_module\", network_module)\n",
1174 | "search_input(\"algo\", algo)\n",
1175 | "print(f\"{train_method}训练方法\")\n",
1176 | " \n",
1177 | "#@markdown locon训练的dim与alpha(仅在\"LoCon\"、\"LoHa\"训练方法时有效)\n",
1178 | "conv_dim = 8 #@param {type:\"number\"}\n",
1179 | "search_input(\"conv_dim\", conv_dim)\n",
1180 | "conv_alpha = 4 #@param {type:\"number\"}\n",
1181 | "search_input(\"conv_alpha\", conv_alpha)\n",
1182 | "\n",
1183 | "#@markdown 对于SD2模型(这个是底模为SD2训练时候使用的,不懂就不要选)\n",
1184 | "is_v2_model = False #@param {type:\"boolean\"}\n",
1185 | "search_input(\"is_v2_model\", 1 if is_v2_model else 0)\n",
1186 | "parameterization = False #@param {type:\"boolean\"}\n",
1187 | "search_input(\"parameterization\", 1 if parameterization else 0)\n",
1188 | "\n",
1189 | "if is_v2_model:\n",
1190 | " print(\"启动SD2.0模型设置\")\n",
1191 | "if parameterization:\n",
1192 | " print(\"启动parameterization参数化\")\n",
1193 | "\n",
1194 | "\n",
1195 | "#@markdown lowram模式(用显存来补充内存)\n",
1196 | "lowram = False #@param {type:\"boolean\"}\n",
1197 | "if lowram:\n",
1198 | " #写入\"--lowram\" \n",
1199 | " extArgs_content.common_parameter += \"\\\"--lowram\\\" \"\n",
1200 | " print(\"启动--lowram\")\n",
1201 | "\n",
1202 | "\n",
1203 | "search_input(\"\", extArgs_content.all() )\n"
1204 | ],
1205 | "metadata": {
1206 | "cellView": "form",
1207 | "id": "NDHiaHc4qWHE"
1208 | },
1209 | "execution_count": null,
1210 | "outputs": []
1211 | },
1212 | {
1213 | "cell_type": "markdown",
1214 | "source": [
1215 | "### 7.2有趣的边训练边出图(可选)"
1216 | ],
1217 | "metadata": {
1218 | "id": "19nEZDbHMzsv"
1219 | }
1220 | },
1221 | {
1222 | "cell_type": "code",
1223 | "source": [
1224 | "#@title ####采样图片参数设置,输出图片位置与模型输出文件夹一致(原始代码来源于:[Linaqruf](https://github.com/Linaqruf/kohya-trainer))\n",
1225 | "#@markdown 支持使用`(1girl:1.1)`和`[1girl]`格式,不限数量的tag\n",
1226 | "\n",
1227 | "#@markdown 你要是懂怎么用,你也可以运行后修改这个采样参数文件`/content/lora-scripts/sample_prompt.txt`\n",
1228 | "\n",
1229 | "#用于修改train.sh中extArgs数组的内容\n",
1230 | "extArgs_content.sample_parameter = \"\"\n",
1231 | "\n",
1232 | "#4.2中被定义,8.2中也会被使用\n",
1233 | "enable_sample = True #@param {type:\"boolean\"}\n",
1234 | "#@markdown 采样间隔(每n个step/epoch采样,n=)\n",
1235 | "sample_every_n_type = \"sample_every_n_epochs\" #@param [\"sample_every_n_steps\", \"sample_every_n_epochs\"]\n",
1236 | "sample_every_n_type_value = 1 #@param {type:\"number\"}\n",
1237 | "#@markdown 采样参数(采样器、正面tag、负面、宽、高、scale、种子、采样步数)\n",
1238 | "sampler = \"euler_a\" #@param [\"ddim\", \"pndm\", \"lms\", \"euler\", \"euler_a\", \"heun\", \"dpm_2\", \"dpm_2_a\", \"dpmsolver\",\"dpmsolver++\", \"dpmsingle\", \"k_lms\", \"k_euler\", \"k_euler_a\", \"k_dpm_2\", \"k_dpm_2_a\"]\n",
1239 | "prompt = \"(masterpiece, best quality, hi-res:1.2), 1girl, solo\" #@param {type: \"string\"}\n",
1240 | "negative = \"(worst quality, bad quality:1.4), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry\" #@param {type:\"string\"}\n",
1241 | "width = 512 #@param {type:\"number\"}\n",
1242 | "height = 768 #@param {type:\"number\"}\n",
1243 | "scale = 7 #@param {type:\"number\"}\n",
1244 | "seed = -1 #@param {type:\"number\"}\n",
1245 | "steps = 28 #@param {type:\"number\"}\n",
1246 | "\n",
1247 | "#配置采样参数\n",
1248 | "sample_str = f\"\"\"\n",
1249 | " {prompt} \\\n",
1250 | " --n {negative} \\\n",
1251 | " --w {width} \\\n",
1252 | " --h {height} \\\n",
1253 | " --l {scale} \\\n",
1254 | " --s {steps} \\\n",
1255 | " {f\"--d \" + f\"{seed}\" if seed > 0 else \"\"} \\\n",
1256 | "\"\"\"\n",
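        "# With the defaults above, sample_prompt.txt will contain one kohya-style prompt line, roughly:\n",
        "#   (masterpiece, best quality, hi-res:1.2), 1girl, solo --n (worst quality, ...) --w 512 --h 768 --l 7 --s 28\n",
        "# (--d <seed> is appended only when seed > 0; this comment is illustrative of the format only)\n",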
1257 | "\n",
1258 | "if enable_sample:\n",
1259 | " #生成采样参数文件\n",
1260 | " with open(sample_prompt_txt_path, \"w\") as f:\n",
1261 | " f.write(sample_str)\n",
1262 | " #写入采样地址\"--sample_prompts={sample_prompt_txt_path}\"\n",
1263 | " extArgs_content.sample_parameter += f\"\\\"--sample_prompts={sample_prompt_txt_path}\\\" \"\n",
1264 | " #\"--sample_sampler=euler_a\"\n",
1265 | " extArgs_content.sample_parameter += f\"\\\"--sample_sampler={sampler}\\\" \"\n",
1266 | " #写入采样间隔\"--sample_every_n_epochs=1\"\n",
1267 | " if sample_every_n_type == \"sample_every_n_epochs\":\n",
1268 | " extArgs_content.sample_parameter += f\"\\\"--sample_every_n_epochs={sample_every_n_type_value}\\\" \"\n",
1269 | " elif sample_every_n_type == \"sample_every_n_steps\":\n",
1270 | " extArgs_content.sample_parameter += f\"\\\"--sample_every_n_steps={sample_every_n_type_value}\\\" \"\n",
1271 | " else:\n",
1272 | " print(\"采样间隔参数出错\")\n",
1273 | " print(f\"启用采样功能\")\n",
1274 | "else:\n",
1275 | " print(f\"不使用采样功能\")\n",
1276 | "\n",
1277 | "\n",
1278 |     "#写入train.sh\n",
1279 | "search_input(\"\", extArgs_content.all() )"
1280 | ],
1281 | "metadata": {
1282 | "cellView": "form",
1283 | "id": "H3kSAQXnM8sL"
1284 | },
1285 | "execution_count": null,
1286 | "outputs": []
1287 | },
1288 | {
1289 | "cell_type": "markdown",
1290 | "source": [
1291 | "### 7.3进阶参数"
1292 | ],
1293 | "metadata": {
1294 | "id": "g8uAVOwb4wd8"
1295 | }
1296 | },
1297 | {
1298 | "cell_type": "code",
1299 | "source": [
1300 | "#@title ####设置进阶参数\n",
1301 | "\n",
1302 | "#用于暂存要修改至extArgs的内容\n",
1303 | "extArgs_content.plus_parameter = \"\"\n",
1304 | "\n",
1305 | "#@markdown 从训练好的lora模型上继续训练,填写模型地址,如`/content/drive/MyDrive/Lora/output/output_name.safetensors`,留空则不启用该功能\n",
1306 | "\n",
1307 |     "#@markdown 从学习状态上继续训练,填写学习状态文件夹地址,如`/content/drive/MyDrive/Lora/output/output_name-n-state`,留空我不知道会怎么样\n",
1308 | "\n",
1309 | "use_retrain = \"no\" #@param [\"no\",\"model\",\"state\"]\n",
1310 | "retrain_dir = \"/content/drive/MyDrive/Lora/output\" #@param {type:\"string\"}\n",
1311 | "\n",
1312 | "if use_retrain == \"no\":\n",
1313 | " search_input(\"network_weights\", \"\")\n",
1314 | " search_input(\"resume\", \"\")\n",
1315 | " print(\"不使用重训练\")\n",
1316 | "elif use_retrain == \"model\":\n",
1317 | " search_input(\"network_weights\", retrain_dir)\n",
1318 | " search_input(\"resume\", \"\")\n",
1319 | " print(\"从预先训练的lora模型上继续训练\")\n",
1320 | "elif use_retrain == \"state\":\n",
1321 | " search_input(\"network_weights\", \"\")\n",
1322 | " search_input(\"resume\", retrain_dir)\n",
1323 | " print(\"从上次的学习状态继续训练\")\n",
1324 | "\n",
1325 | "\n",
1326 | "#@markdown 保存epoch模型的同时保存学习状态(包括优化器状态,8.1中tensorboard查看),方便更加精确的断点训练,**注意每个状态文件夹有5g**,建议高repeat低epoch长时间训练时使用(colab最多只能连续4小时)\n",
1327 | "save_state = False #@param {type:\"boolean\"}\n",
1328 | "search_input(\"save_state\", 1 if save_state else 0)\n",
1329 | "print( (\"\"if save_state else \"不\") + \"保存学习状态\")\n",
1330 | "\n",
1331 | "#@markdown 桶最小、大分辨率\n",
1332 | "min_bucket_reso = 256 #@param {type:\"slider\", min:64, max:1920, step:64}\n",
1333 | "search_input(\"min_bucket_reso\", min_bucket_reso)\n",
1334 | "max_bucket_reso = 1024 #@param {type:\"slider\", min:64, max:1920, step:64}\n",
1335 | "search_input(\"max_bucket_reso\", max_bucket_reso)\n",
1336 | "\n",
1337 | "#@markdown 跳过层\n",
1338 | "clip_skip = 2 #@param {type:\"slider\", min:1, max:2, step:1}\n",
1339 | "search_input(\"clip_skip\", clip_skip)\n",
1340 | "\n",
1341 | "#@markdown 标签文件扩展名\n",
1342 | "caption_extension = \"txt\" #@param {type:\"string\"}\n",
1343 | "caption_extension = \".\" + caption_extension\n",
1344 | "search_input(\" --caption_extension\", caption_extension)\n",
1345 | "\n",
1346 | "#@markdown 训练最大token数\n",
1347 | "max_token_length = 225 #@param {type:\"slider\", min:75, max:225, step:75}\n",
1348 | "search_input(\" --max_token_length\", max_token_length)\n",
1349 | "\n",
1350 | "\n",
1351 | "#@markdown 种子\n",
1352 | "seed = \"1337\" #@param {type:\"string\"}\n",
1353 | "search_input(\" --seed\", seed)\n",
1354 | "\n",
1355 | "\n",
1356 | "#写入train.sh\n",
1357 | "search_input(\"\", extArgs_content.all() )"
1358 | ],
1359 | "metadata": {
1360 | "cellView": "form",
1361 | "id": "ormmXEOn4zCP"
1362 | },
1363 | "execution_count": null,
1364 | "outputs": []
1365 | },
1366 | {
1367 | "cell_type": "markdown",
1368 | "source": [
1369 | "### 7.4使用预先保存的配置文件进行覆盖(可选)"
1370 | ],
1371 | "metadata": {
1372 | "id": "AyqSNCvqO1OB"
1373 | }
1374 | },
1375 | {
1376 | "cell_type": "code",
1377 | "source": [
1378 | "#@title #### 覆盖配置文件\n",
1379 |     "#@markdown 是否使用预先保存的train.sh覆盖默认参数文件(导入自定义的train.sh并不会更新(七)中的参数,所以导入后如果你想修改,直接打开`/content/lora-scripts/train.sh`手动改)\n",
1380 | "use_train_sh_self = False #@param {type:\"boolean\"}\n",
1381 | "train_sh_self_path = \"/content/drive/MyDrive/Lora/output/output_name/train.sh\" #@param {type:\"string\"}\n",
1382 | "\n",
1383 | "if use_train_sh_self:\n",
1384 |     "    print(f\"使用预先保存的train.sh, {train_sh_self_path}将覆盖{train_sh_path}\")\n",
1385 | " !cp {train_sh_self_path} {train_sh_path}\n",
1386 | "else:\n",
1387 | " print(f\"使用默认路径的train.sh:{train_sh_path}\")\n",
1388 | "\n",
1389 | "#@markdown 如果预先保存的train.sh中启用了采样功能,请启用并填入预先保存的采样参数文件路径\n",
1390 | "use_sample_prompt_txt_self = False #@param {type:\"boolean\"}\n",
1391 | "sample_prompt_txt_self_path = \"/content/drive/MyDrive/Lora/output/output_name/sample_prompt.txt\" #@param {type:\"string\"}\n",
1392 | "\n",
1393 | "if use_sample_prompt_txt_self:\n",
1394 | " print(f\"使用预先保存的sample_prompt.txt, {sample_prompt_txt_self_path}将覆盖{sample_prompt_txt_path}\")\n",
1395 | " !cp {sample_prompt_txt_self_path} {sample_prompt_txt_path}\n",
1396 | "else:\n",
1397 | " print(f\"你选择了:预先保存的train.sh中不启用采样\")\n"
1398 | ],
1399 | "metadata": {
1400 | "cellView": "form",
1401 | "id": "MmNiZhUMog__"
1402 | },
1403 | "execution_count": null,
1404 | "outputs": []
1405 | },
1406 | {
1407 | "cell_type": "markdown",
1408 | "source": [
1409 | "##(八)开始训练 😀"
1410 | ],
1411 | "metadata": {
1412 | "id": "kqddLa2TFY2D"
1413 | }
1414 | },
1415 | {
1416 | "cell_type": "code",
1417 | "source": [
1418 | "#@title ###8.1是否使用tensorboard :loss与学习率可视化工具(可选)\n",
1419 | "#@markdown 你可以在训练前启动它,当训练开始过一会出现loss后,右上角刷新就可以实时监控loss和学习率 ; 训练开始后,如果你没启动的话,就只能在训练**结束**后启动\n",
1420 | "\n",
1421 | "use_tensorboard = True #@param {type:\"boolean\"}\n",
1422 | "#@markdown 是否使用自定义的log日志路径:`留空则指定为当前train.sh中指定的log日志路径`\n",
1423 | "logging_dir_self = \"\" #@param {type:\"string\"}\n",
1424 | "\n",
1425 | "#@markdown 如果提示端口被占用,就换个端口\n",
1426 | "port = \"8008\" #@param {type:\"string\"}\n",
1427 | "\n",
1428 | "if use_tensorboard:\n",
1429 | " #指定tensorboard的读取路径\n",
1430 | " if logging_dir_self:\n",
1431 | " tensorboard_log_dir = logging_dir_self\n",
1432 | " print(f\"你指定了自定义的log日志路径:{tensorboard_log_dir}\")\n",
1433 | " else:\n",
1434 | " tensorboard_log_dir = search_get(\" --logging_dir\")\n",
1435 | " print(f\"采用trian.sh中指定的log日志路径:{tensorboard_log_dir}\")\n",
1436 | " %tensorboard --logdir={tensorboard_log_dir} --port={port}\n",
1437 | "else:\n",
1438 | " print(\"你似乎想使用tensorboard,但并未勾选该选项\")"
1439 | ],
1440 | "metadata": {
1441 | "cellView": "form",
1442 | "id": "498HAF6rwU_A"
1443 | },
1444 | "execution_count": null,
1445 | "outputs": []
1446 | },
1447 | {
1448 | "cell_type": "code",
1449 | "source": [
1450 | "#@title ### 8.2开始训练\n",
1451 | "#@markdown 若正确运行,训练完成后,模型会自动保存至你的谷歌硬盘中`我的云端硬盘/Lora/output/`\n",
1452 | "\n",
1453 | "#@markdown 如果不到1分钟就运行完了,多半是出错了,把输出信息复制到ChatGPT问下罢! :(\n",
1454 | "\n",
1455 | "#@markdown - Q:输出代码的最后出现(kill:9)字样 \n",
1456 | "\n",
1457 | "#@markdown - A:爆ram了,更换小的底模\n",
1458 | "\n",
1459 | "#@markdown ---\n",
1460 | "#@markdown 是否保存本次训练的train.sh和采样配置(如果你启用采样功能的话)\n",
1461 | "\n",
1462 | "#@markdown 保存路径: `留空则保存至当前train.sh中指定的输出路径`\n",
1463 | "save_files = True #@param {type:\"boolean\"}\n",
1464 | "save_files_dir_self = \"\" #@param {type:\"string\"}\n",
1465 | "\n",
1466 | "\n",
1467 | "if save_files:\n",
1468 | " #指定保存路径\n",
1469 | " if save_files_dir_self:\n",
1470 | " save_files_dir = save_files_dir_self\n",
1471 | " print(f\"你指定了自定义的配置保存路径:{save_files_dir}\")\n",
1472 | " else:\n",
1473 | " save_files_dir = search_get(\" --output_dir\")\n",
1474 | " print(f\"采用trian.sh中指定的输出路径:{save_files_dir}\")\n",
1475 | " #保存训练参数文件至谷歌硬盘\n",
1476 | " !mkdir -p {save_files_dir}\n",
1477 | " !cp {train_sh_path} {save_files_dir}\n",
1478 | " print(f\"训练参数被保存至{save_files_dir}\")\n",
1479 | " #保存采样参数文件至谷歌硬盘\n",
1480 | " if enable_sample or use_sample_prompt_txt_self:\n",
1481 | " !cp {sample_prompt_txt_path} {save_files_dir}\n",
1482 | " print(f\"采样参数被保存至{save_files_dir}\")\n",
1483 | " else:\n",
1484 | " print(f\"未启用采样功能,不保存采样配置\")\n",
1485 | "else:\n",
1486 | " print(f\"不保存配置文件\")\n",
1487 | "\n",
1488 | "#开始训练!\n",
1489 | "%cd /content/lora-scripts/\n",
1490 | "!bash train.sh\n",
1491 | "\n",
1492 | "!echo \"完成了 XXXD.\""
1493 | ],
1494 | "metadata": {
1495 | "id": "ZXFX2-C_Z-9N",
1496 | "cellView": "form"
1497 | },
1498 | "execution_count": null,
1499 | "outputs": []
1500 | },
1501 | {
1502 | "cell_type": "markdown",
1503 | "source": [
1504 | "### 8.3挂机代码(复制到浏览器控制台中使用,稍后添加图文指导)"
1505 | ],
1506 | "metadata": {
1507 | "id": "EP59EDzIH3AL"
1508 | }
1509 | },
1510 | {
1511 | "cell_type": "code",
1512 | "source": [
1513 | "function ConnectButton() {\n",
1514 | " console.log(\"Connect pushed\");\n",
1515 | " document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click();\n",
1516 | "}\n",
1517 | "// 每一分钟自动点一次按钮\n",
1518 | "var connectInterval = setInterval(ConnectButton, 60 * 1000);\n",
1519 | "\n",
1520 | "// 停止自动点击按钮的定时器\n",
1521 | "setTimeout(function() {\n",
1522 | " clearInterval(connectInterval);\n",
1523 | " console.log(\"Auto-connect stopped.\");\n",
1524 | "}, 5 * 60 * 1000); // 5 分钟后停止自动点击按钮"
1525 | ],
1526 | "metadata": {
1527 | "id": "QpTO8eQkH7QI"
1528 | },
1529 | "execution_count": null,
1530 | "outputs": []
1531 | },
1532 | {
1533 | "cell_type": "markdown",
1534 | "source": [
1535 | "**——————————————————————————————————————————————————————————**\n",
1536 | "\n",
1537 | "\n",
1538 | "# **C. 文末:无法挂载谷歌硬盘的教程(此模块已暂停维护!)**\n",
1539 | "\n"
1540 | ],
1541 | "metadata": {
1542 | "id": "awjy39L8jZWU"
1543 | }
1544 | },
1545 | {
1546 | "cell_type": "markdown",
1547 | "source": [
1548 | "在右边填入相应的参数,运行这段代码块,他将会在你的Colab环境中创建训练文件夹,默认名字为6_tag,然后点记右边栏文件选项,打开/content/drive/MyDrive/Lora/input/5_tag/,将处理好的图片上传进其中,如果找不到路径就点一下上面的刷新\n",
1549 | "\n",
1550 | "\n",
1551 | "**请在第(五)步之前完成,完成后继续执行第(五)步**\n",
1552 | "\n",
1553 | "**如果你完成图片上传之前已经运行了第(五)步,那么你直接运行文末最后最后的代码块也行,然后在从(六)开始**"
1554 | ],
1555 | "metadata": {
1556 | "id": "1wtwNvB2vIp5"
1557 | }
1558 | },
1559 | {
1560 | "cell_type": "code",
1561 | "source": [
1562 | "#@markdown 重复次数\n",
1563 | "REPEAT_TIME = 5 #@param {type:\"number\"}\n",
1564 | "\n",
1565 | "#@markdown 概念tag\n",
1566 | "CONCEPT_NAME = \"tag\" #@param {type:\"string\"}\n",
1567 | "\n",
1568 | "!mkdir -p \"/content/drive/MyDrive/Lora/input/{REPEAT_TIME}_{CONCEPT_NAME}\""
1569 | ],
1570 | "metadata": {
1571 | "id": "77F0FYCrzsHc"
1572 | },
1573 | "execution_count": null,
1574 | "outputs": []
1575 | },
1576 | {
1577 | "cell_type": "markdown",
1578 | "source": [
1579 | ""
1580 | ],
1581 | "metadata": {
1582 | "id": "Lkd5F79myCSQ"
1583 | }
1584 | },
1585 | {
1586 | "cell_type": "code",
1587 | "source": [
1588 | "#拷贝训练材料\n",
1589 | "!mkdir -p /content/lora-scripts/train/aki/\n",
1590 | "!cp -r /content/drive/MyDrive/Lora/input/* /content/lora-scripts/train/aki/\n",
1591 | "!echo \"copy训练材料完成.\"\n",
1592 | "\n",
1593 | "#拷贝正则化图片\n",
1594 | "!mkdir -p /content/lora-scripts/train/reg/\n",
1595 | "!cp -r /content/drive/MyDrive/Lora/reg/* /content/lora-scripts/train/reg/\n",
1596 | "!echo \"copy正则化图片完成.\"\n",
1597 | "\n",
1598 | "%cd /content/lora-scripts"
1599 | ],
1600 | "metadata": {
1601 | "id": "KwuS-C0tQQga"
1602 | },
1603 | "execution_count": null,
1604 | "outputs": []
1605 | },
1606 | {
1607 | "cell_type": "markdown",
1608 | "source": [
1609 | "# **D.开发者备用下载代码**"
1610 | ],
1611 | "metadata": {
1612 | "id": "GQ7GziwME6Fi"
1613 | }
1614 | },
1615 | {
1616 | "cell_type": "code",
1617 | "source": [
1618 | "#@title ###6.1 下载模型\n",
1619 | "#@markdown 秋叶不推荐使用混合模型做为底模,因此默认选择了*animefull-latest-pruned*和*SD1.5(剪枝)*做为底模:\n",
1620 | "\n",
1621 | "#@markdown ---\n",
1622 | "#@markdown 选择好模型后,将连接和模型格式填入下边相应的输入框,然后运行代码块。会自动更改train.sh中底模路径\n",
1623 | "\n",
1624 | "#@markdown **注意!!千万不能选择过大的模型,如Anything-v4.5原版7G,会直接爆系统ram** \\\n",
1625 | "#@markdown **5G以下应该没问题** \\\n",
1626 | "\n",
1627 | "#@markdown ---\n",
1628 | "#你也可以将连接替换成你喜欢模型的直接连接,或者用git,又或者先上传到自己的谷歌硬盘再使用!cp命令拷贝到底模目录\n",
1629 | "#@markdown **选择预设模型,或者自己下载**\n",
1630 | "base_model = \"NovelAI\" #@param [\"NovelAI\", \"SD1.5\", \"Download by you\"]\n",
1631 | "\n",
1632 | "#@markdown **模型链接、模型的后缀名(例如:ckpt或safetensors);仅在选择Download by you时有效**\n",
1633 | "base_model_url = \"\" #@param {type:\"string\"}\n",
1634 | "base_model_extension = \"ckpt\" #@param [\"ckpt\", \"safetensors\", \"pt\"]\n",
1635 | "\n",
1636 | "#选择模型\n",
1637 | "if base_model == \"NovelAI\":\n",
1638 | " #base_model_url = \"https://huggingface.co/a1079602570/animefull-final-pruned/resolve/main/novelailatest-pruned.ckpt\"\n",
1639 | " base_model_url = \"https://huggingface.co/LarryAIDraw/animefull-final-pruned/resolve/main/animefull-final-pruned.ckpt\"\n",
1640 | " base_model_extension = \"ckpt\"\n",
1641 | "elif base_model == \"SD1.5\":\n",
1642 | " #base_model_url = \"https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt\"\n",
1643 | " base_model_url = \"https://huggingface.co/Linaqruf/stolen/resolve/main/pruned-models/stable_diffusion_1_5-pruned.safetensors\"\n",
1644 | " base_model_extension = \"ckpt\"\n",
1645 | "elif base_model == \"Download by you\":\n",
1646 | " pass\n",
1647 | "else:\n",
1648 | " print(\"选择模型出错\")\n",
1649 | "\n",
1650 | "#下载路径\n",
1651 | "base_model_dir = \"/content/lora-scripts/sd-models/\"\n",
1652 | "#重命名下载的模型\n",
1653 | "base_model_name = \"model.\" + base_model_extension\n",
1654 | "#底模路径\n",
1655 | "pretrained_model = base_model_dir + base_model_name\n",
1656 | "\n",
1657 | "#6线程下载,覆盖重名\n",
1658 | "!aria2c --console-log-level=error -s 6 -x 10 -d {base_model_dir} -o $base_model_name --allow-overwrite $base_model_url\n",
1659 | "!echo \"下载完成\"\n",
1660 | "\n",
1661 | "#输出模型信息\n",
1662 | "print(\"你选择的是\" + base_model + \"底模\")\n",
1663 | "#修改train.sh的底模路径,并输出信息\n",
1664 | "search_input(\"pretrained_model\", pretrained_model)\n",
1665 | "print(\"\\b底模格式为\" + base_model_extension)"
1666 | ],
1667 | "metadata": {
1668 | "id": "5xKHfJnwdXqS"
1669 | },
1670 | "execution_count": null,
1671 | "outputs": []
1672 | }
1673 | ]
1674 | }
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Colab SD-LoRA training
2 |
3 |
4 | LoRA training scripts for [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts.git)
5 |
6 | Based on the work of [kohya-ss](https://github.com/kohya-ss/sd-scripts) , [Linaqruf](https://github.com/Linaqruf/kohya-trainer) and [Akegarasu](https://github.com/Akegarasu/lora-scripts).
7 |
8 | | Notebook Name | Description | Link | Old-Version |
9 | | --- | --- | --- | --- |
10 | | [Colab_Lora_train](https://github.com/WSH032/lora-scripts/) | 基于[Akegarasu/lora-scripts](https://github.com/Akegarasu/lora-scripts)的定制化Colab notebook | [](https://colab.research.google.com/github/WSH032/lora-scripts/blob/main/Colab_Lora_train.ipynb) | [](https://colab.research.google.com/drive/1_f0qJdM43BSssNJWtgjIlk9DkIzLPadx) |
11 | | [kohya_train_webui](https://github.com/WSH032/kohya-config-webui) `NEW` | 基于[WSH032/kohya-config-webui](https://github.com/WSH032/kohya-config-webui)的WebUI版Colab notebook | [](https://colab.research.google.com/github/WSH032/kohya-config-webui/blob/main/kohya_train_webui.ipynb) |
12 |
13 | # 我做了什么?
14 |
15 | 编写了kohya-lora训练的colab notebook及使用教程,你可以点击上面列表的图标来使用它们
16 |
17 | 按照教程里的去做,此notebook会帮你安装好所需的环境;通过colab的交互式组件,你可以快速且方便地完成训练参数设置并开始训练。
18 |
19 | 如果你觉得这个项目好用, 可以给我一颗小星星 ⭐ , 我会非常感谢。
20 |
21 | 同时,也请不要忘了[kohya-ss](https://github.com/kohya-ss/sd-scripts) , [Linaqruf](https://github.com/Linaqruf/kohya-trainer) 和 [Akegarasu](https://github.com/Akegarasu/lora-scripts) 的工作。 强烈建议也去给他们点小星星!
22 |
23 | 特别是 [Linaqruf](https://github.com/Linaqruf/kohya-trainer), 我的notebook里面采用了很多来自他项目的代码和思路。
24 |
25 | # Credit
26 |
27 | 这个项目使用了如下的三位作者的代码
28 |
29 | [kohya-ss](https://github.com/kohya-ss/sd-scripts) 和 [Linaqruf](https://github.com/Linaqruf/kohya-trainer)目前采取的是Apache-2.0 license
30 |
31 | [Akegarasu](https://github.com/Akegarasu/lora-scripts) 目前尚未标明协议
32 |
33 | 如果你基于此项目进行了修改、引用等用途,请注意原作者的协议。
34 |
35 | 请在你使用的部分标明代码来源。
36 |
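37 | # 本地运行示例(示意)
38 | 
39 | 除了 Colab notebook,本仓库自带的 train.sh / train.ps1 也可以在本地使用。下面是一个示意性的流程(假设本地已准备好 Python 与 NVIDIA 显卡环境;路径与参数请按 train.sh 顶部的变量自行修改,并非唯一用法):
40 | 
41 | ```bash
42 | # 克隆仓库(包含 sd-scripts 子模块)
43 | git clone --recurse-submodules https://github.com/WSH032/lora-scripts.git
44 | cd lora-scripts
45 | 
46 | # 安装依赖(Linux 下的交互式安装脚本;Windows 使用 install.ps1)
47 | bash install.bash
48 | 
49 | # 底模放入 sd-models/,训练集放入 train/aki/,编辑 train.sh 顶部变量后开始训练
50 | bash train.sh
51 | ```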
--------------------------------------------------------------------------------
/assets/tensorboard-example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WSH032/lora-scripts/7b0f1a6fadab6858dc8a4b6ec04fec83b8e28812/assets/tensorboard-example.png
--------------------------------------------------------------------------------
/gui.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import subprocess
5 | import sys
6 | import webbrowser
7 | from datetime import datetime
8 | from threading import Lock
9 |
10 | import uvicorn
11 | from fastapi import BackgroundTasks, FastAPI, Request
12 | from fastapi.responses import FileResponse
13 | from fastapi.staticfiles import StaticFiles
14 |
15 | import toml
16 |
17 | app = FastAPI()
18 |
19 | lock = Lock()
20 |
21 | # fix mimetype errors on some systems (force correct media types for .js/.css)
22 | sf = StaticFiles(directory="frontend/dist")
23 | _o_fr = sf.file_response
24 | def _hooked_file_response(*args, **kwargs):
25 | full_path = args[0]
26 | r = _o_fr(*args, **kwargs)
27 | if full_path.endswith(".js"):
28 | r.media_type = "application/javascript"
29 | elif full_path.endswith(".css"):
30 | r.media_type = "text/css"
31 | return r
32 | sf.file_response = _hooked_file_response
33 |
34 | parser = argparse.ArgumentParser(description="GUI for training network")
35 | parser.add_argument("--port", type=int, default=28000, help="Port to run the server on")
36 |
37 | def run_train(toml_path: str):
38 | print(f"Training started with config file / 训练开始,使用配置文件: {toml_path}")
39 | args = [
40 | "accelerate", "launch", "--num_cpu_threads_per_process", "8",
41 | "./sd-scripts/train_network.py",
42 | "--config_file", toml_path,
43 | ]
44 | try:
45 | result = subprocess.run(args, shell=True, env=os.environ)
46 | if result.returncode != 0:
47 | print(f"Training failed / 训练失败")
48 | else:
49 | print(f"Training finished / 训练完成")
50 | except Exception as e:
51 | print(f"An error occurred when training / 创建训练进程时出现致命错误: {e}")
52 | finally:
53 | lock.release()
54 |
55 |
56 | @app.post("/api/run")
57 | async def create_toml_file(request: Request, background_tasks: BackgroundTasks):
58 | acquired = lock.acquire(blocking=False)
59 |
60 | if not acquired:
61 | print("Training is already running / 已有正在进行的训练")
62 | return {"status": "fail", "detail": "Training is already running"}
63 |
64 | timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
65 | toml_file = f"toml/{timestamp}.toml"
66 | toml_data = await request.body()
67 | j = json.loads(toml_data.decode("utf-8"))
68 | with open(toml_file, "w") as f:
69 | f.write(toml.dumps(j))
70 | background_tasks.add_task(run_train, toml_file)
71 | return {"status": "success"}
72 |
73 | @app.middleware("http")
74 | async def add_cache_control_header(request, call_next):
75 | response = await call_next(request)
76 | response.headers["Cache-Control"] = "max-age=0"
77 | return response
78 |
79 | @app.get("/")
80 | async def index():
81 | return FileResponse("./frontend/dist/index.html")
82 |
83 |
84 | app.mount("/", sf, name="static")
85 |
86 | if __name__ == "__main__":
87 | args, _ = parser.parse_known_args()
88 | print(f"Server started at http://127.0.0.1:{args.port}")
89 | if sys.platform == "win32":
90 | # disable triton on windows
91 | os.environ["XFORMERS_FORCE_DISABLE_TRITON"] = "1"
92 |
93 | webbrowser.open(f"http://127.0.0.1:{args.port}")
94 | uvicorn.run(app, host="127.0.0.1", port=args.port, log_level="error")
95 |
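96 | # Usage sketch (illustrative only, not part of the server logic):
97 | #   start the GUI with `python gui.py --port 28000`, then POST a JSON training config to /api/run;
98 | #   the body is converted to TOML, saved as toml/<timestamp>.toml and passed to sd-scripts
99 | #   via --config_file. Example request (field names follow toml/lora.toml and are only a sample):
100 | #     curl -X POST http://127.0.0.1:28000/api/run \
101 | #          -d '{"model_arguments": {"pretrained_model_name_or_path": "./sd-models/model.ckpt"}}'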
--------------------------------------------------------------------------------
/huggingface/accelerate/default_config.yaml:
--------------------------------------------------------------------------------
1 | command_file: null
2 | commands: null
3 | compute_environment: LOCAL_MACHINE
4 | deepspeed_config: {}
5 | distributed_type: 'NO'
6 | downcast_bf16: 'no'
7 | dynamo_backend: 'NO'
8 | fsdp_config: {}
9 | gpu_ids: all
10 | machine_rank: 0
11 | main_process_ip: null
12 | main_process_port: null
13 | main_training_function: main
14 | megatron_lm_config: {}
15 | mixed_precision: fp16
16 | num_machines: 1
17 | num_processes: 1
18 | rdzv_backend: static
19 | same_network: true
20 | tpu_name: null
21 | tpu_zone: null
22 | use_cpu: false
23 |
--------------------------------------------------------------------------------
/huggingface/hub/version.txt:
--------------------------------------------------------------------------------
1 | 1
--------------------------------------------------------------------------------
/install-cn.ps1:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WSH032/lora-scripts/7b0f1a6fadab6858dc8a4b6ec04fec83b8e28812/install-cn.ps1
--------------------------------------------------------------------------------
/install.bash:
--------------------------------------------------------------------------------
1 | echo "Creating python venv..."
2 | python3 -m venv venv
3 | source venv/bin/activate
4 |
5 | echo "Installing torch & xformers..."
6 | printf 'Which version of torch do you want to install?
7 | (1) torch 2.0.0+cu118 with xformers 0.0.17 (suggested)
8 | (2) torch 1.12.1+cu116, with xformers 0bad001ddd56c080524d37c84ff58d9cd030ebfd
9 | '
10 | while true; do
11 | read -p "Choose: " version
12 | case $version in
13 | [1]*)
14 | pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
15 | pip install xformers==0.0.17
16 | break
17 | ;;
18 | [2]*)
19 | pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
20 | pip install --upgrade git+https://github.com/facebookresearch/xformers.git@0bad001ddd56c080524d37c84ff58d9cd030ebfd
21 | pip install triton==2.0.0.dev20221202
22 | break
23 | ;;
24 | *) echo "Please enter 1 or 2." ;;
25 | esac
26 | done
27 |
28 | echo "Installing deps..."
29 | cd ./sd-scripts
30 |
31 | pip install --upgrade -r requirements.txt
32 | pip install --upgrade lion-pytorch lycoris-lora dadaptation
33 | pip install --upgrade wandb
34 |
35 | echo "Install completed"
36 |
--------------------------------------------------------------------------------
/install.ps1:
--------------------------------------------------------------------------------
1 | $Env:HF_HOME = "huggingface"
2 |
3 | if (!(Test-Path -Path "venv")) {
4 | Write-Output "Creating venv for python..."
5 | python -m venv venv
6 | }
7 | .\venv\Scripts\activate
8 |
9 | Write-Output "Installing deps..."
10 | Set-Location .\sd-scripts
11 | pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
12 | pip install --upgrade -r requirements.txt
13 | pip install --upgrade xformers==0.0.17
14 |
15 | Write-Output "Installing bitsandbytes for windows..."
16 | cp .\bitsandbytes_windows\*.dll ..\venv\Lib\site-packages\bitsandbytes\
17 | cp .\bitsandbytes_windows\cextension.py ..\venv\Lib\site-packages\bitsandbytes\cextension.py
18 | cp .\bitsandbytes_windows\main.py ..\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
19 |
20 | pip install --upgrade lion-pytorch dadaptation lycoris-lora wandb
21 |
22 | Write-Output "Install completed"
23 | Read-Host | Out-Null ;
--------------------------------------------------------------------------------
/interrogate.ps1:
--------------------------------------------------------------------------------
1 | # LoRA interrogate script by @bdsqlsz
2 |
3 | $v2 = 0 # load Stable Diffusion v2.x model / Stable Diffusion 2.x模型读取
4 | $sd_model = "./sd-models/sd_model.safetensors" # Stable Diffusion model to load: ckpt or safetensors file | 读取的基础SD模型, 保存格式 cpkt 或 safetensors
5 | $model = "./output/LoRA.safetensors" # LoRA model to interrogate: ckpt or safetensors file | 需要调查关键字的LORA模型, 保存格式 cpkt 或 safetensors
6 | $batch_size = 64 # batch size for processing with Text Encoder | 使用 Text Encoder 处理时的批量大小,默认16,推荐64/128
7 | $clip_skip = 1 # use output of nth layer from back of text encoder (n>=1) | 使用文本编码器倒数第 n 层的输出,n 可以是大于等于 1 的整数
8 |
9 |
10 | # Activate python venv
11 | .\venv\Scripts\activate
12 |
13 | $Env:HF_HOME = "huggingface"
14 | $ext_args = [System.Collections.ArrayList]::new()
15 |
16 | if ($v2) {
17 | [void]$ext_args.Add("--v2")
18 | }
19 |
20 | # run interrogate
21 | accelerate launch --num_cpu_threads_per_process=8 "./sd-scripts/networks/lora_interrogator.py" `
22 | --sd_model=$sd_model `
23 | --model=$model `
24 | --batch_size=$batch_size `
25 | --clip_skip=$clip_skip `
26 | $ext_args
27 |
28 | Write-Output "Interrogate finished"
29 | Read-Host | Out-Null ;
30 |
--------------------------------------------------------------------------------
/logs/.keep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WSH032/lora-scripts/7b0f1a6fadab6858dc8a4b6ec04fec83b8e28812/logs/.keep
--------------------------------------------------------------------------------
/output/.keep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WSH032/lora-scripts/7b0f1a6fadab6858dc8a4b6ec04fec83b8e28812/output/.keep
--------------------------------------------------------------------------------
/resize.ps1:
--------------------------------------------------------------------------------
1 | # LoRA resize script by @bdsqlsz
2 |
3 | $save_precision = "fp16" # precision in saving, default float | 保存精度, 可选 float、fp16、bf16, 默认 float
4 | $new_rank = 4 # dim rank of output LoRA | dim rank等级, 默认 4
5 | $model = "./output/lora_name.safetensors" # original LoRA model path need to resize, save as cpkt or safetensors | 需要调整大小的模型路径, 保存格式 cpkt 或 safetensors
6 | $save_to = "./output/lora_name_new.safetensors" # output LoRA model path, save as ckpt or safetensors | 输出路径, 保存格式 cpkt 或 safetensors
7 | $device = "cuda" # device to use, cuda for GPU | 使用 GPU跑, 默认 CPU
8 | $verbose = 1 # display verbose resizing information | rank变更时, 显示详细信息
9 | $dynamic_method = "" # Specify dynamic resizing method, --new_rank is used as a hard limit for max rank | 动态调节大小,可选"sv_ratio", "sv_fro", "sv_cumulative",默认无
10 | $dynamic_param = "" # Specify target for dynamic reduction | 动态参数,sv_ratio模式推荐1~2, sv_cumulative模式0~1, sv_fro模式0~1, 比sv_cumulative要高
11 |
12 |
13 | # Activate python venv
14 | .\venv\Scripts\activate
15 |
16 | $Env:HF_HOME = "huggingface"
17 | $ext_args = [System.Collections.ArrayList]::new()
18 |
19 | if ($verbose) {
20 | [void]$ext_args.Add("--verbose")
21 | }
22 |
23 | if ($dynamic_method) {
24 | [void]$ext_args.Add("--dynamic_method=" + $dynamic_method)
25 | }
26 |
27 | if ($dynamic_param) {
28 | [void]$ext_args.Add("--dynamic_param=" + $dynamic_param)
29 | }
30 |
31 | # run resize
32 | accelerate launch --num_cpu_threads_per_process=8 "./sd-scripts/networks/resize_lora.py" `
33 | --save_precision=$save_precision `
34 | --new_rank=$new_rank `
35 | --model=$model `
36 | --save_to=$save_to `
37 | --device=$device `
38 | $ext_args
39 |
40 | Write-Output "Resize finished"
41 | Read-Host | Out-Null ;
42 |
--------------------------------------------------------------------------------
/run_gui.ps1:
--------------------------------------------------------------------------------
1 | .\venv\Scripts\activate
2 |
3 | $Env:HF_HOME = "huggingface"
4 | $Env:PYTHONUTF8 = "1"
5 |
6 | python gui.py
--------------------------------------------------------------------------------
/run_gui.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | export HF_HOME=huggingface
4 | export PYTHONUTF8=1
5 |
6 | python gui.py
7 |
8 |
--------------------------------------------------------------------------------
/sd-models/put stable diffusion model here.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WSH032/lora-scripts/7b0f1a6fadab6858dc8a4b6ec04fec83b8e28812/sd-models/put stable diffusion model here.txt
--------------------------------------------------------------------------------
/tensorboard.ps1:
--------------------------------------------------------------------------------
1 | $Env:TF_CPP_MIN_LOG_LEVEL = "3"
2 |
3 | .\venv\Scripts\activate
4 | tensorboard --logdir=logs
--------------------------------------------------------------------------------
/toml/default.toml:
--------------------------------------------------------------------------------
1 | [model]
2 | v2 = false
3 | v_parameterization = false
4 | pretrained_model_name_or_path = "./sd-models/model.ckpt"
5 |
6 | [dataset]
7 | train_data_dir = "./train/input"
8 | reg_data_dir = ""
9 | prior_loss_weight = 1
10 | cache_latents = true
11 | shuffle_caption = true
12 | enable_bucket = true
13 |
14 | [additional_network]
15 | network_dim = 32
16 | network_alpha = 16
17 | network_train_unet_only = false
18 | network_train_text_encoder_only = false
19 | network_module = "networks.lora"
20 | network_args = []
21 |
22 | [optimizer]
23 | unet_lr = 1e-4
24 | text_encoder_lr = 1e-5
25 | optimizer_type = "AdamW8bit"
26 | lr_scheduler = "cosine_with_restarts"
27 | lr_warmup_steps = 0
28 | lr_restart_cycles = 1
29 |
30 | [training]
31 | resolution = "512,512"
32 | batch_size = 1
33 | max_train_epochs = 10
34 | noise_offset = 0.0
35 | keep_tokens = 0
36 | xformers = true
37 | lowram = false
38 | clip_skip = 2
39 | mixed_precision = "fp16"
40 | save_precision = "fp16"
41 |
42 | [sample_prompt]
43 | sample_sampler = "euler_a"
44 | sample_every_n_epochs = 1
45 |
46 | [saving]
47 | output_name = "output_name"
48 | save_every_n_epochs = 1
49 | save_n_epoch_ratio = 0
50 | save_last_n_epochs = 499
51 | save_state = false
52 | save_model_as = "safetensors"
53 | output_dir = "./output"
54 | logging_dir = "./logs"
55 | log_prefix = "output_name"
56 |
57 | [others]
58 | min_bucket_reso = 256
59 | max_bucket_reso = 1024
60 | caption_extension = ".txt"
61 | max_token_length = 225
62 | seed = 1337
63 |
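64 | # 注(补充说明):本文件是 train_by_toml.sh 默认通过 --config_file 读取的训练配置,
65 | # 修改上面的参数后运行 bash train_by_toml.sh 即可按此配置训练;采样 prompts 见 toml/sample_prompts.txt。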
--------------------------------------------------------------------------------
/toml/lora.toml:
--------------------------------------------------------------------------------
1 | [model_arguments]
2 | v2 = false
3 | v_parameterization = false
4 | pretrained_model_name_or_path = "./sd-models/model.ckpt"
5 |
6 | [dataset_arguments]
7 | train_data_dir = "./train/aki"
8 | reg_data_dir = ""
9 | resolution = "512,512"
10 | prior_loss_weight = 1
11 |
12 | [additional_network_arguments]
13 | network_dim = 32
14 | network_alpha = 16
15 | network_train_unet_only = false
16 | network_train_text_encoder_only = false
17 | network_module = "networks.lora"
18 | network_args = []
19 |
20 | [optimizer_arguments]
21 | unet_lr = 1e-4
22 | text_encoder_lr = 1e-5
23 |
24 | optimizer_type = "AdamW8bit"
25 | lr_scheduler = "cosine_with_restarts"
26 | lr_warmup_steps = 0
27 | lr_restart_cycles = 1
28 |
29 | [training_arguments]
30 | batch_size = 1
31 | noise_offset = 0.0
32 | keep_tokens = 0
33 | min_bucket_reso = 256
34 | max_bucket_reso = 1024
35 | caption_extension = ".txt"
36 | max_token_length = 225
37 | seed = 1337
38 | xformers = true
39 | lowram = false
40 | max_train_epochs = 10
41 | resolution = "512,512"
42 | clip_skip = 2
43 | mixed_precision = "fp16"
44 |
45 | [sample_prompt_arguments]
46 | sample_sampler = "euler_a"
47 | sample_every_n_epochs = 5
48 |
49 | [saving_arguments]
50 | output_name = "output_name"
51 | save_every_n_epochs = 1
52 | save_state = false
53 | save_model_as = "safetensors"
54 | output_dir = "./output"
55 | logging_dir = "./logs"
56 | log_prefix = ""
57 | save_precision = "fp16"
58 |
59 | [others]
60 | cache_latents = true
61 | shuffle_caption = true
62 | enable_bucket = true
--------------------------------------------------------------------------------
/toml/sample_prompts.txt:
--------------------------------------------------------------------------------
1 | (masterpiece, best quality, hires:1.2), 1girl, solo, --n (worst quality, bad quality:1.4), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, --w 512 --h 768 --l 7 --s 24 --d 1337
--------------------------------------------------------------------------------
/train.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {
7 | "pycharm": {
8 | "name": "#%%\n"
9 | }
10 | },
11 | "outputs": [],
12 | "source": [
13 | "# Train data path | 设置训练用模型、图片\n",
14 | "pretrained_model = \"./sd-models/model.ckpt\" # base model path | 底模路径\n",
15 | "train_data_dir = \"./train/aki\" # train dataset path | 训练数据集路径\n",
16 | "\n",
17 | "# Train related params | 训练相关参数\n",
18 | "resolution = \"512,512\" # image resolution w,h. 图片分辨率,宽,高。支持非正方形,但必须是 64 倍数。\n",
19 | "batch_size = 1 # batch size\n",
20 | "max_train_epoches = 10 # max train epoches | 最大训练 epoch\n",
21 | "save_every_n_epochs = 2 # save every n epochs | 每 N 个 epoch 保存一次\n",
22 | "network_dim = 32 # network dim | 常用 4~128,不是越大越好\n",
23 | "network_alpha= 32 # network alpha | 常用与 network_dim 相同的值或者采用较小的值,如 network_dim的一半 防止下溢。默认值为 1,使用较小的 alpha 需要提升学习率。\n",
24 | "clip_skip = 2 # clip skip | 玄学 一般用 2\n",
25 | "train_unet_only = 0 # train U-Net only | 仅训练 U-Net,开启这个会牺牲效果大幅减少显存使用。6G显存可以开启\n",
26 | "train_text_encoder_only = 0 # train Text Encoder only | 仅训练 文本编码器\n",
27 | "\n",
28 | "# Learning rate | 学习率\n",
29 | "lr = \"1e-4\"\n",
30 | "unet_lr = \"1e-4\"\n",
31 | "text_encoder_lr = \"1e-5\"\n",
32 | "lr_scheduler = \"cosine_with_restarts\" # \"linear\", \"cosine\", \"cosine_with_restarts\", \"polynomial\", \"constant\", \"constant_with_warmup\"\n",
33 | "\n",
34 | "# Output settings | 输出设置\n",
35 | "output_name = \"aki\" # output model name | 模型保存名称\n",
36 | "save_model_as = \"safetensors\" # model save ext | 模型保存格式 ckpt, pt, safetensors"
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "execution_count": null,
42 | "metadata": {
43 | "pycharm": {
44 | "name": "#%%\n"
45 | }
46 | },
47 | "outputs": [],
48 | "source": [
49 | "!accelerate launch --num_cpu_threads_per_process=8 \"./sd-scripts/train_network.py\" \\\n",
50 | " --enable_bucket \\\n",
51 | " --pretrained_model_name_or_path=$pretrained_model \\\n",
52 | " --train_data_dir=$train_data_dir \\\n",
53 | " --output_dir=\"./output\" \\\n",
54 | " --logging_dir=\"./logs\" \\\n",
55 | " --resolution=$resolution \\\n",
56 | " --network_module=networks.lora \\\n",
57 | " --max_train_epochs=$max_train_epoches \\\n",
58 | " --learning_rate=$lr \\\n",
59 | " --unet_lr=$unet_lr \\\n",
60 | " --text_encoder_lr=$text_encoder_lr \\\n",
61 | " --network_dim=$network_dim \\\n",
62 | " --network_alpha=$network_alpha \\\n",
63 | " --output_name=$output_name \\\n",
64 | " --lr_scheduler=$lr_scheduler \\\n",
65 | " --train_batch_size=$batch_size \\\n",
66 | " --save_every_n_epochs=$save_every_n_epochs \\\n",
67 | " --mixed_precision=\"fp16\" \\\n",
68 | " --save_precision=\"fp16\" \\\n",
69 | " --seed=\"1337\" \\\n",
70 | " --cache_latents \\\n",
71 | " --clip_skip=$clip_skip \\\n",
72 | " --prior_loss_weight=1 \\\n",
73 | " --max_token_length=225 \\\n",
74 | " --caption_extension=\".txt\" \\\n",
75 | " --save_model_as=$save_model_as \\\n",
76 | " --xformers --shuffle_caption --use_8bit_adam"
77 | ]
78 | }
79 | ],
80 | "metadata": {
81 | "kernelspec": {
82 | "display_name": "Python 3",
83 | "language": "python",
84 | "name": "python3"
85 | },
86 | "language_info": {
87 | "name": "python",
88 | "version": "3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]"
89 | },
90 | "orig_nbformat": 4,
91 | "vscode": {
92 | "interpreter": {
93 | "hash": "675b13e958f0d0236d13cdfe08a1df3882cae564fa23a2e7e5eb1f2c6c632b02"
94 | }
95 | }
96 | },
97 | "nbformat": 4,
98 | "nbformat_minor": 2
99 | }
--------------------------------------------------------------------------------
/train.ps1:
--------------------------------------------------------------------------------
1 | # LoRA train script by @Akegarasu
2 |
3 | # Train data path | 设置训练用模型、图片
4 | $pretrained_model = "./sd-models/model.ckpt" # base model path | 底模路径
5 | $is_v2_model = 0 # SD2.0 model | SD2.0模型 2.0模型下 clip_skip 默认无效
6 | $parameterization = 0 # parameterization | 参数化 本参数需要和 V2 参数同步使用 实验性功能
7 | $train_data_dir = "./train/aki" # train dataset path | 训练数据集路径
8 | $reg_data_dir = "" # directory for regularization images | 正则化数据集路径,默认不使用正则化图像。
9 |
10 | # Network settings | 网络设置
11 | $network_module = "networks.lora" # 在这里将会设置训练的网络种类,默认为 networks.lora 也就是 LoRA 训练。如果你想训练 LyCORIS(LoCon、LoHa) 等,则修改这个值为 lycoris.kohya
12 | $network_weights = "" # pretrained weights for LoRA network | 若需要从已有的 LoRA 模型上继续训练,请填写 LoRA 模型路径。
13 | $network_dim = 32 # network dim | 常用 4~128,不是越大越好
14 | $network_alpha = 32 # network alpha | 常用与 network_dim 相同的值或者采用较小的值,如 network_dim的一半 防止下溢。默认值为 1,使用较小的 alpha 需要提升学习率。
15 |
16 | # Train related params | 训练相关参数
17 | $resolution = "512,512" # image resolution w,h. 图片分辨率,宽,高。支持非正方形,但必须是 64 倍数。
18 | $batch_size = 1 # batch size
19 | $max_train_epoches = 10 # max train epoches | 最大训练 epoch
20 | $save_every_n_epochs = 2 # save every n epochs | 每 N 个 epoch 保存一次
21 |
22 | $train_unet_only = 0 # train U-Net only | 仅训练 U-Net,开启这个会牺牲效果大幅减少显存使用。6G显存可以开启
23 | $train_text_encoder_only = 0 # train Text Encoder only | 仅训练 文本编码器
24 | $stop_text_encoder_training = 0 # stop text encoder training | 在第N步时停止训练文本编码器
25 |
26 | $noise_offset = 0 # noise offset | 在训练中添加噪声偏移来改良生成非常暗或者非常亮的图像,如果启用,推荐参数为 0.1
27 | $keep_tokens = 0 # keep heading N tokens when shuffling caption tokens | 在随机打乱 tokens 时,保留前 N 个不变。
28 | $min_snr_gamma = 0 # minimum SNR gamma for Min-SNR weighting strategy | Min-SNR 加权策略的 gamma 值,默认为 0 即不启用,论文推荐 5
29 |
30 | # Learning rate | 学习率
31 | $lr = "1e-4"
32 | $unet_lr = "1e-4"
33 | $text_encoder_lr = "1e-5"
34 | $lr_scheduler = "cosine_with_restarts" # "linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"
35 | $lr_warmup_steps = 0 # warmup steps | 学习率预热步数,lr_scheduler 为 constant 或 adafactor 时该值需要设为0。
36 | $lr_restart_cycles = 1 # cosine_with_restarts restart cycles | 余弦退火重启次数,仅在 lr_scheduler 为 cosine_with_restarts 时起效。
37 |
38 | # Output settings | 输出设置
39 | $output_name = "aki" # output model name | 模型保存名称
40 | $save_model_as = "safetensors" # model save ext | 模型保存格式 ckpt, pt, safetensors
41 |
42 | # Resume training state | 恢复训练设置
43 | $save_state = 0 # save training state | 保存训练状态 名称类似于 -??????-state ?????? 表示 epoch 数
44 | $resume = "" # resume from state | 从某个状态文件夹中恢复训练 需配合上方参数同时使用 由于规范文件限制 epoch 数和全局步数不会保存 即使恢复时它们也从 1 开始 与 network_weights 的具体实现操作并不一致
45 |
46 | # 其他设置
47 | $min_bucket_reso = 256 # arb min resolution | arb 最小分辨率
48 | $max_bucket_reso = 1024 # arb max resolution | arb 最大分辨率
49 | $persistent_data_loader_workers = 0 # persistent dataloader workers | 容易爆内存,保留加载训练集的worker,减少每个 epoch 之间的停顿
50 | $clip_skip = 2 # clip skip | 玄学 一般用 2
51 | $multi_gpu = 0 # multi gpu | 多显卡训练 该参数仅限在显卡数 >= 2 使用
52 | $lowram = 0 # lowram mode | 低内存模式 该模式下会将 U-net、文本编码器、VAE 转移到显存中 启用该模式可能会对显存有一定影响
53 |
54 | # 优化器设置
55 | $optimizer_type = "AdamW8bit" # Optimizer type | 优化器类型 默认为 AdamW8bit,可选:AdamW AdamW8bit Lion SGDNesterov SGDNesterov8bit DAdaptation AdaFactor
56 |
57 | # LyCORIS 训练设置
58 | $algo = "lora" # LyCORIS network algo | LyCORIS 网络算法 可选 lora、loha、lokr、ia3、dylora。lora即为locon
59 | $conv_dim = 4 # conv dim | 类似于 network_dim,推荐为 4
60 | $conv_alpha = 4 # conv alpha | 类似于 network_alpha,可以采用与 conv_dim 一致或者更小的值
61 | $dropout = "0" # dropout | dropout 概率, 0 为不使用 dropout, 越大则 dropout 越多,推荐 0~0.5, LoHa/LoKr/(IA)^3暂时不支持
62 |
63 | # 远程记录设置
64 | $use_wandb = 0 # enable wandb logging | 启用wandb远程记录功能
65 | $wandb_api_key = "" # wandb api key | API,通过https://wandb.ai/authorize获取
66 | $log_tracker_name = "" # wandb log tracker name | wandb项目名称,留空则为"network_train"
67 |
68 | # ============= DO NOT MODIFY CONTENTS BELOW | 请勿修改下方内容 =====================
69 | # Activate python venv
70 | .\venv\Scripts\activate
71 |
72 | $Env:HF_HOME = "huggingface"
73 | $Env:XFORMERS_FORCE_DISABLE_TRITON = "1"
74 | $ext_args = [System.Collections.ArrayList]::new()
75 | $launch_args = [System.Collections.ArrayList]::new()
76 |
77 | if ($multi_gpu) {
78 | [void]$launch_args.Add("--multi_gpu")
79 | }
80 |
81 | if ($lowram) {
82 | [void]$ext_args.Add("--lowram")
83 | }
84 |
85 | if ($is_v2_model) {
86 | [void]$ext_args.Add("--v2")
87 | }
88 | else {
89 | [void]$ext_args.Add("--clip_skip=$clip_skip")
90 | }
91 |
92 | if ($parameterization) {
93 | [void]$ext_args.Add("--v_parameterization")
94 | }
95 |
96 | if ($train_unet_only) {
97 | [void]$ext_args.Add("--network_train_unet_only")
98 | }
99 |
100 | if ($train_text_encoder_only) {
101 | [void]$ext_args.Add("--network_train_text_encoder_only")
102 | }
103 |
104 | if ($network_weights) {
105 | [void]$ext_args.Add("--network_weights=" + $network_weights)
106 | }
107 |
108 | if ($reg_data_dir) {
109 | [void]$ext_args.Add("--reg_data_dir=" + $reg_data_dir)
110 | }
111 |
112 | if ($optimizer_type) {
113 | [void]$ext_args.Add("--optimizer_type=" + $optimizer_type)
114 | }
115 |
116 | if ($optimizer_type -eq "DAdaptation") {
117 | [void]$ext_args.Add("--optimizer_args")
118 | [void]$ext_args.Add("decouple=True")
119 | }
120 |
121 | if ($network_module -eq "lycoris.kohya") {
122 | [void]$ext_args.Add("--network_args")
123 | [void]$ext_args.Add("conv_dim=$conv_dim")
124 | [void]$ext_args.Add("conv_alpha=$conv_alpha")
125 | [void]$ext_args.Add("algo=$algo")
126 | [void]$ext_args.Add("dropout=$dropout")
127 | }
128 |
129 | if ($noise_offset -ne 0) {
130 | [void]$ext_args.Add("--noise_offset=$noise_offset")
131 | }
132 |
133 | if ($stop_text_encoder_training -ne 0) {
134 | [void]$ext_args.Add("--stop_text_encoder_training=$stop_text_encoder_training")
135 | }
136 |
137 | if ($save_state -eq 1) {
138 | [void]$ext_args.Add("--save_state")
139 | }
140 |
141 | if ($resume) {
142 | [void]$ext_args.Add("--resume=" + $resume)
143 | }
144 |
145 | if ($min_snr_gamma -ne 0) {
146 | [void]$ext_args.Add("--min_snr_gamma=$min_snr_gamma")
147 | }
148 |
149 | if ($persistent_data_loader_workers) {
150 | [void]$ext_args.Add("--persistent_data_loader_workers")
151 | }
152 |
153 | if ($use_wandb -eq 1) {
154 | [void]$ext_args.Add("--log_with=all")
155 | if ($wandb_api_key) {
156 | [void]$ext_args.Add("--wandb_api_key=" + $wandb_api_key)
157 | }
158 |
159 | if ($log_tracker_name) {
160 | [void]$ext_args.Add("--log_tracker_name=" + $log_tracker_name)
161 | }
162 | }
163 | else {
164 | [void]$ext_args.Add("--log_with=tensorboard")
165 | }
166 |
167 | # run train
168 | accelerate launch $launch_args --num_cpu_threads_per_process=8 "./sd-scripts/train_network.py" `
169 | --enable_bucket `
170 | --pretrained_model_name_or_path=$pretrained_model `
171 | --train_data_dir=$train_data_dir `
172 | --output_dir="./output" `
173 | --logging_dir="./logs" `
174 | --log_prefix=$output_name `
175 | --resolution=$resolution `
176 | --network_module=$network_module `
177 | --max_train_epochs=$max_train_epoches `
178 | --learning_rate=$lr `
179 | --unet_lr=$unet_lr `
180 | --text_encoder_lr=$text_encoder_lr `
181 | --lr_scheduler=$lr_scheduler `
182 | --lr_warmup_steps=$lr_warmup_steps `
183 | --lr_scheduler_num_cycles=$lr_restart_cycles `
184 | --network_dim=$network_dim `
185 | --network_alpha=$network_alpha `
186 | --output_name=$output_name `
187 | --train_batch_size=$batch_size `
188 | --save_every_n_epochs=$save_every_n_epochs `
189 | --mixed_precision="fp16" `
190 | --save_precision="fp16" `
191 | --seed="1337" `
192 | --cache_latents `
193 | --prior_loss_weight=1 `
194 | --max_token_length=225 `
195 | --caption_extension=".txt" `
196 | --save_model_as=$save_model_as `
197 | --min_bucket_reso=$min_bucket_reso `
198 | --max_bucket_reso=$max_bucket_reso `
199 | --keep_tokens=$keep_tokens `
200 | --xformers --shuffle_caption $ext_args
201 | Write-Output "Train finished"
202 | Read-Host | Out-Null ;
203 |
--------------------------------------------------------------------------------
/train.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # LoRA train script by @Akegarasu
3 |
4 | # Train data path | 设置训练用模型、图片
5 | pretrained_model="./sd-models/model.ckpt" # base model path | 底模路径
6 | is_v2_model=0 # SD2.0 model | SD2.0模型 2.0模型下 clip_skip 默认无效
7 | parameterization=0 # parameterization | 参数化 本参数需要和 V2 参数同步使用 实验性功能
8 | train_data_dir="./train/aki" # train dataset path | 训练数据集路径
9 | reg_data_dir="" # directory for regularization images | 正则化数据集路径,默认不使用正则化图像。
10 |
11 | # Network settings | 网络设置
12 | network_module="networks.lora" # 在这里将会设置训练的网络种类,默认为 networks.lora 也就是 LoRA 训练。如果你想训练 LyCORIS(LoCon、LoHa) 等,则修改这个值为 lycoris.kohya
13 | network_weights="" # pretrained weights for LoRA network | 若需要从已有的 LoRA 模型上继续训练,请填写 LoRA 模型路径。
14 | network_dim=32 # network dim | 常用 4~128,不是越大越好
15 | network_alpha=32 # network alpha | 常用与 network_dim 相同的值或者采用较小的值,如 network_dim的一半 防止下溢。默认值为 1,使用较小的 alpha 需要提升学习率。
16 |
17 | # Train related params | 训练相关参数
18 | resolution="512,512" # image resolution w,h. 图片分辨率,宽,高。支持非正方形,但必须是 64 倍数。
19 | batch_size=1 # batch size
20 | max_train_epoches=10 # max train epoches | 最大训练 epoch
21 | save_every_n_epochs=2 # save every n epochs | 每 N 个 epoch 保存一次
22 |
23 | train_unet_only=0 # train U-Net only | 仅训练 U-Net,开启这个会牺牲效果大幅减少显存使用。6G显存可以开启
24 | train_text_encoder_only=0 # train Text Encoder only | 仅训练 文本编码器
25 | stop_text_encoder_training=0 # stop text encoder training | 在第N步时停止训练文本编码器
26 |
27 | noise_offset="0" # noise offset | 在训练中添加噪声偏移来改良生成非常暗或者非常亮的图像,如果启用,推荐参数为0.1
28 | keep_tokens=0 # keep heading N tokens when shuffling caption tokens | 在随机打乱 tokens 时,保留前 N 个不变。
29 | min_snr_gamma=0 # minimum SNR gamma for Min-SNR weighting strategy | Min-SNR 加权策略的 gamma 值,默认为 0 即不启用,论文推荐 5
30 |
31 | # Learning rate | 学习率
32 | lr="1e-4"
33 | unet_lr="1e-4"
34 | text_encoder_lr="1e-5"
35 | lr_scheduler="cosine_with_restarts" # "linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup", "adafactor"
36 | lr_warmup_steps=0 # warmup steps | 学习率预热步数,lr_scheduler 为 constant 或 adafactor 时该值需要设为0。
37 | lr_restart_cycles=1 # cosine_with_restarts restart cycles | 余弦退火重启次数,仅在 lr_scheduler 为 cosine_with_restarts 时起效。
38 |
39 | # Output settings | 输出设置
40 | output_name="aki" # output model name | 模型保存名称
41 | save_model_as="safetensors" # model save ext | 模型保存格式 ckpt, pt, safetensors
42 |
43 | # Resume training state | 恢复训练设置
44 | save_state=0 # save state | 保存训练状态 名称类似于 -??????-state ?????? 表示 epoch 数
45 | resume="" # resume from state | 从某个状态文件夹中恢复训练 需配合上方参数同时使用 由于规范文件限制 epoch 数和全局步数不会保存 即使恢复时它们也从 1 开始 与 network_weights 的具体实现操作并不一致
46 |
47 | # 其他设置
48 | min_bucket_reso=256 # arb min resolution | arb 最小分辨率
49 | max_bucket_reso=1024 # arb max resolution | arb 最大分辨率
50 | persistent_data_loader_workers=0 # persistent dataloader workers | 容易爆内存,保留加载训练集的worker,减少每个 epoch 之间的停顿
51 | clip_skip=2 # clip skip | 玄学 一般用 2
52 |
53 | # 优化器设置
54 | optimizer_type="AdamW8bit" # Optimizer type | 优化器类型 默认为 AdamW8bit,可选:AdamW AdamW8bit Lion SGDNesterov SGDNesterov8bit DAdaptation AdaFactor
55 |
56 | # LyCORIS 训练设置
57 | algo="lora" # LyCORIS network algo | LyCORIS 网络算法 可选 lora、loha、lokr、ia3、dylora。lora即为locon
58 | conv_dim=4 # conv dim | 类似于 network_dim,推荐为 4
59 | conv_alpha=4 # conv alpha | 类似于 network_alpha,可以采用与 conv_dim 一致或者更小的值
60 | dropout="0" # dropout | dropout 概率, 0 为不使用 dropout, 越大则 dropout 越多,推荐 0~0.5, LoHa/LoKr/(IA)^3暂时不支持
61 |
62 | # 远程记录设置
63 | use_wandb=0 # use_wandb | 启用wandb远程记录功能
64 | wandb_api_key="" # wandb_api_key | API,通过https://wandb.ai/authorize获取
65 | log_tracker_name="" # log_tracker_name | wandb项目名称,留空则为"network_train"
66 |
67 | # ============= DO NOT MODIFY CONTENTS BELOW | 请勿修改下方内容 =====================
68 | export HF_HOME="huggingface"
69 | export TF_CPP_MIN_LOG_LEVEL=3
70 |
71 | extArgs=()
72 | launchArgs=()
73 | if [[ $multi_gpu == 1 ]]; then launchArgs+=("--multi_gpu"); fi
74 |
75 | if [[ $is_v2_model == 1 ]]; then
76 | extArgs+=("--v2");
77 | else
78 | extArgs+=("--clip_skip $clip_skip");
79 | fi
80 |
81 | if [[ $parameterization == 1 ]]; then extArgs+=("--v_parameterization"); fi
82 |
83 | if [[ $train_unet_only == 1 ]]; then extArgs+=("--network_train_unet_only"); fi
84 |
85 | if [[ $train_text_encoder_only == 1 ]]; then extArgs+=("--network_train_text_encoder_only"); fi
86 |
87 | if [[ $network_weights ]]; then extArgs+=("--network_weights $network_weights"); fi
88 |
89 | if [[ $reg_data_dir ]]; then extArgs+=("--reg_data_dir $reg_data_dir"); fi
90 |
91 | if [[ $optimizer_type ]]; then extArgs+=("--optimizer_type $optimizer_type"); fi
92 |
93 | if [[ $optimizer_type == "DAdaptation" ]]; then extArgs+=("--optimizer_args decouple=True"); fi
94 |
95 | if [[ $save_state == 1 ]]; then extArgs+=("--save_state"); fi
96 |
97 | if [[ $resume ]]; then extArgs+=("--resume $resume"); fi
98 |
99 | if [[ $persistent_data_loader_workers == 1 ]]; then extArgs+=("--persistent_data_loader_workers"); fi
100 |
101 | if [[ $network_module == "lycoris.kohya" ]]; then
102 | extArgs+=("--network_args conv_dim=$conv_dim conv_alpha=$conv_alpha algo=$algo dropout=$dropout")
103 | fi
104 |
105 | if [[ $stop_text_encoder_training -ne 0 ]]; then extArgs+=("--stop_text_encoder_training $stop_text_encoder_training"); fi
106 |
107 | if [[ $noise_offset != "0" ]]; then extArgs+=("--noise_offset $noise_offset"); fi
108 |
109 | if [[ $min_snr_gamma -ne 0 ]]; then extArgs+=("--min_snr_gamma $min_snr_gamma"); fi
110 |
111 | if [[ $use_wandb == 1 ]]; then
112 | extArgs+=("--log_with=all")
113 | else
114 | extArgs+=("--log_with=tensorboard")
115 | fi
116 |
117 | if [[ $wandb_api_key ]]; then extArgs+=("--wandb_api_key $wandb_api_key"); fi
118 |
119 | if [[ $log_tracker_name ]]; then extArgs+=("--log_tracker_name $log_tracker_name"); fi
120 |
121 | accelerate launch ${launchArgs[@]} --num_cpu_threads_per_process=8 "./sd-scripts/train_network.py" \
122 | --enable_bucket \
123 | --pretrained_model_name_or_path=$pretrained_model \
124 | --train_data_dir=$train_data_dir \
125 | --output_dir="./output" \
126 | --logging_dir="./logs" \
127 | --log_prefix=$output_name \
128 | --resolution=$resolution \
129 | --network_module=$network_module \
130 | --max_train_epochs=$max_train_epoches \
131 | --learning_rate=$lr \
132 | --unet_lr=$unet_lr \
133 | --text_encoder_lr=$text_encoder_lr \
134 | --lr_scheduler=$lr_scheduler \
135 | --lr_warmup_steps=$lr_warmup_steps \
136 | --lr_scheduler_num_cycles=$lr_restart_cycles \
137 | --network_dim=$network_dim \
138 | --network_alpha=$network_alpha \
139 | --output_name=$output_name \
140 | --train_batch_size=$batch_size \
141 | --save_every_n_epochs=$save_every_n_epochs \
142 | --mixed_precision="fp16" \
143 | --save_precision="fp16" \
144 | --seed="1337" \
145 | --cache_latents \
146 | --prior_loss_weight=1 \
147 | --max_token_length=225 \
148 | --caption_extension=".txt" \
149 | --save_model_as=$save_model_as \
150 | --min_bucket_reso=$min_bucket_reso \
151 | --max_bucket_reso=$max_bucket_reso \
152 | --keep_tokens=$keep_tokens \
153 | --xformers --shuffle_caption ${extArgs[@]}
154 |
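155 | # 示例(仅供参考):若想改为 LyCORIS(LoCon)训练,只需把上方变量改成类似下面这样,
156 | # 其余保持默认;脚本会自动通过 --network_args 把 conv_dim/conv_alpha/algo/dropout 传给 sd-scripts:
157 | #   network_module="lycoris.kohya"
158 | #   algo="lora"        # LyCORIS 中 lora 即为 LoCon
159 | #   conv_dim=4
160 | #   conv_alpha=4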
--------------------------------------------------------------------------------
/train_by_toml.ps1:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WSH032/lora-scripts/7b0f1a6fadab6858dc8a4b6ec04fec83b8e28812/train_by_toml.ps1
--------------------------------------------------------------------------------
/train_by_toml.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # LoRA train script by @Akegarasu
3 |
4 | multi_gpu=0 # multi gpu | 多显卡训练 该参数仅限在显卡数 >= 2 使用
5 | config_file="./toml/default.toml" # config_file | 使用toml文件指定训练参数
6 | sample_prompts="./toml/sample_prompts.txt" # sample_prompts | 采样prompts文件,留空则不启用采样功能
7 | utf8=1 # utf8 | 使用utf-8编码读取toml;以utf-8编码编写的、含中文的toml必须开启
8 |
9 | # ============= DO NOT MODIFY CONTENTS BELOW | 请勿修改下方内容 =====================
10 |
11 | export HF_HOME="huggingface"
12 | export TF_CPP_MIN_LOG_LEVEL=3
13 |
14 | extArgs=()
15 | launchArgs=()
16 |
17 | if [[ $multi_gpu == 1 ]]; then launchArgs+=("--multi_gpu"); fi
18 | if [[ $utf8 == 1 ]]; then export PYTHONUTF8=1; fi
19 |
20 | # run train
21 | accelerate launch ${launchArgs[@]} --num_cpu_threads_per_process=8 "./sd-scripts/train_network.py" \
22 | --config_file=$config_file \
23 | --sample_prompts=$sample_prompts \
24 | ${extArgs[@]}
25 |
--------------------------------------------------------------------------------