├── LICENSE
├── README.md
└── stable_diffusion_interactive_notebook.ipynb
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Rishabh Moharir
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Stable Diffusion Interactive Notebook 📓 🤖
2 |
3 |
4 |
5 |
6 |
7 | _Open the notebook in Google Colab to start generating images!_
8 |
9 |
10 |
11 |
12 | A widgets-based interactive notebook for Google Colab that lets users generate AI images from prompts (Text2Image) using [Stable Diffusion (by Stability AI, Runway & CompVis)](https://en.wikipedia.org/wiki/Stable_Diffusion).
13 |
14 | This notebook aims to be an alternative to WebUIs while offering a simple and lightweight GUI for anyone to get started with Stable Diffusion.
15 |
16 | Uses Stable Diffusion, [HuggingFace](https://huggingface.co/) Diffusers and [Jupyter widgets](https://github.com/jupyter-widgets/ipywidgets).
17 |
18 | 
19 |
20 | ## Features
21 | - Interactive GUI interface
22 | - Available Stable Diffusion Models:
23 |   - [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
24 |   - [Stable Diffusion 2.1 Base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
25 |   - [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
26 |   - [OpenJourney v4](https://huggingface.co/prompthero/openjourney-v4)
27 |   - [Dreamlike Photoreal 2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0)
28 | - Available Schedulers:
29 |   - [EulerAncestralDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers/euler_ancestral)
30 |   - [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers/euler)
31 |   - [DDIMScheduler](https://huggingface.co/docs/diffusers/api/schedulers/ddim)
32 |   - [UniPCMultistepScheduler](https://huggingface.co/docs/diffusers/api/schedulers/unipc)
33 | - Includes a Safety Checker that can be enabled or disabled to filter out inappropriate (NSFW) content
34 | - Features [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) Autoencoder for producing "smoother" images
35 |
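The schedulers listed above are selected by name at runtime. The dispatch pattern can be sketched in plain Python — the stub classes below stand in for the real diffusers scheduler classes, which the notebook loads with `from_pretrained`:

```python
# Sketch of a name-to-class scheduler dispatch.
# Stub classes stand in for the real diffusers scheduler classes.
class EulerAncestralDiscreteScheduler: ...
class EulerDiscreteScheduler: ...
class DDIMScheduler: ...
class UniPCMultistepScheduler: ...

# Build the lookup table from the class names themselves.
SCHEDULERS = {
    cls.__name__: cls
    for cls in (
        EulerAncestralDiscreteScheduler,
        EulerDiscreteScheduler,
        DDIMScheduler,
        UniPCMultistepScheduler,
    )
}


def get_scheduler(name: str):
    """Map a scheduler name to its class, defaulting to Euler Ancestral."""
    return SCHEDULERS.get(name, EulerAncestralDiscreteScheduler)


print(get_scheduler("DDIMScheduler").__name__)  # → DDIMScheduler
```

A table-based lookup like this avoids the long `match`/`case` chain and makes adding a new scheduler a one-line change.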
36 | ## Contributing
37 | Improvements and new features are most welcome! Feel free to submit a PR.
38 |
39 | 1. Fork this repository
40 | 2. Create a new branch
41 | 3. Do your thing
42 | 4. Create a Pull Request
43 |
--------------------------------------------------------------------------------
/stable_diffusion_interactive_notebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "private_outputs": true,
7 | "provenance": [],
8 | "gpuType": "T4",
9 | "authorship_tag": "ABX9TyNVqD2247RRHVMWeh9Znaqy"
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | },
15 | "language_info": {
16 | "name": "python"
17 | },
18 | "gpuClass": "standard",
19 | "accelerator": "GPU"
20 | },
21 | "cells": [
22 | {
23 | "cell_type": "markdown",
24 | "source": [
25 | "# Stable Diffusion Interactive Notebook 📓 🤖\n",
26 | "\n",
27 | "A widgets-based interactive notebook for Google Colab that lets users generate AI images from prompts (Text2Image) using [Stable Diffusion (by Stability AI, Runway & CompVis)](https://en.wikipedia.org/wiki/Stable_Diffusion). \n",
28 | "\n",
29 | "This notebook aims to be an alternative to WebUIs while offering a simple and lightweight GUI for anyone to get started with Stable Diffusion.\n",
30 | "\n",
31 | "Uses Stable Diffusion, [HuggingFace](https://huggingface.co/) Diffusers and [Jupyter widgets](https://github.com/jupyter-widgets/ipywidgets).\n",
32 | "\n",
33 | "\n",
34 | "\n",
35 | "Made with ❤️ by redromnon\n",
36 | "\n",
37 | "[GitHub](https://github.com/redromnon/stable-diffusion-interactive-notebook)"
38 | ],
39 | "metadata": {
40 | "id": "wILzWiWRfTX8"
41 | }
42 | },
43 | {
44 | "cell_type": "code",
45 | "source": [
46 | "#@title 👇 Installing dependencies { display-mode: \"form\" }\n",
47 | "#@markdown ---\n",
48 | "#@markdown Make sure to select **GPU** as the runtime type:\n",
49 | "#@markdown *Runtime->Change Runtime Type->Under Hardware accelerator, select GPU*\n",
50 | "#@markdown \n",
51 | "#@markdown ---\n",
52 | "\n",
53 | "!pip -q install torch diffusers transformers accelerate scipy safetensors xformers mediapy ipywidgets==7.7.1"
54 | ],
55 | "metadata": {
56 | "id": "vCR176NNfn0o"
57 | },
58 | "execution_count": null,
59 | "outputs": []
60 | },
61 | {
62 | "cell_type": "code",
63 | "source": [
64 | "#@title 👇 Selecting Model { form-width: \"20%\", display-mode: \"form\" }\n",
65 | "#@markdown ---\n",
66 | "#@markdown - **Select Model** - A list of Stable Diffusion models to choose from.\n",
67 | "#@markdown - **Select Sampler** - A list of schedulers to choose from. Default is EulerAncestralDiscreteScheduler.\n",
68 | "#@markdown - **Safety Checker** - Enable/disable filtering of inappropriate (NSFW) content\n",
69 | "#@markdown \n",
70 | "#@markdown ---\n",
71 | "\n",
72 | "from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler, DDIMScheduler, EulerDiscreteScheduler, UniPCMultistepScheduler\n",
73 | "from diffusers.models import AutoencoderKL\n",
74 | "import torch\n",
75 | "import ipywidgets as widgets\n",
76 | "import importlib\n",
77 | "\n",
78 | "#Enable third party widget support\n",
79 | "from google.colab import output\n",
80 | "output.enable_custom_widget_manager()\n",
81 | "\n",
82 | "#Pipe\n",
83 | "pipe = None\n",
84 | "\n",
85 | "#Models\n",
86 | "select_model = widgets.Dropdown(\n",
87 | " options=[\n",
88 | " (\"Stable Diffusion 2.1 Base\" , \"stabilityai/stable-diffusion-2-1-base\"),\n",
89 | " (\"Stable Diffusion 2.1\" , \"stabilityai/stable-diffusion-2-1\"),\n",
90 | " (\"Stable Diffusion 1.5\", \"runwayml/stable-diffusion-v1-5\"),\n",
91 | " (\"Dreamlike Photoreal 2.0\" , \"dreamlike-art/dreamlike-photoreal-2.0\"),\n",
92 | " (\"OpenJourney v4\" , \"prompthero/openjourney-v4\")\n",
93 | " ],\n",
94 | " description=\"Select Model:\"\n",
95 | ")\n",
96 | "\n",
97 | "#Schedulers\n",
98 | "select_sampler = widgets.Dropdown(\n",
99 | " options=[\n",
100 | " \"EulerAncestralDiscreteScheduler\",\n",
101 | " \"EulerDiscreteScheduler\",\n",
102 | " \"UniPCMultistepScheduler\",\n",
103 | " \"DDIMScheduler\"\n",
104 | " ],\n",
105 | " description=\"Select Scheduler:\"\n",
106 | ")\n",
107 | "select_sampler.style.description_width = \"auto\"\n",
108 | "\n",
109 | "#Safety Checker\n",
110 | "safety_check = widgets.Checkbox(\n",
111 | " value=True,\n",
112 | " description=\"Enable Safety Check\",\n",
113 | " layout=widgets.Layout(margin=\"0px 0px 0px -85px\")\n",
114 | ")\n",
115 | "\n",
116 | "#Output\n",
117 | "out = widgets.Output()\n",
118 | "\n",
119 | "#Apply Settings\n",
120 | "apply_btn = widgets.Button(\n",
121 | " description=\"Apply\",\n",
122 | " button_style=\"info\"\n",
123 | ")\n",
124 | "\n",
125 | "\n",
126 | "#Get scheduler\n",
127 | "def get_scheduler(name):\n",
128 | "\n",
129 | " match name:\n",
130 | "\n",
131 | " case \"EulerAncestralDiscreteScheduler\":\n",
132 | " return EulerAncestralDiscreteScheduler.from_pretrained(select_model.value, subfolder=\"scheduler\")\n",
133 | "\n",
134 | " case \"DDIMScheduler\":\n",
135 | " return DDIMScheduler.from_pretrained(select_model.value, subfolder=\"scheduler\")\n",
136 | "\n",
137 | " case \"EulerDiscreteScheduler\":\n",
138 | " return EulerDiscreteScheduler.from_pretrained(select_model.value, subfolder=\"scheduler\")\n",
139 | "\n",
140 | " case \"UniPCMultistepScheduler\":\n",
141 | " return UniPCMultistepScheduler.from_pretrained(select_model.value, subfolder=\"scheduler\")\n",
142 | "\n",
143 | "#Run pipeline\n",
144 | "def pipeline(p):\n",
145 | "\n",
146 | " global pipe\n",
147 | " \n",
148 | " out.clear_output()\n",
149 | " apply_btn.disabled = True\n",
150 | "\n",
151 | " with out:\n",
152 | "\n",
153 | " print(\"Running, please wait...\")\n",
154 | " \n",
155 | " pipe = StableDiffusionPipeline.from_pretrained(\n",
156 | " select_model.value, \n",
157 | " scheduler=get_scheduler(select_sampler.value),\n",
158 | " torch_dtype=torch.float16, \n",
159 | " vae=AutoencoderKL.from_pretrained(\"stabilityai/sd-vae-ft-mse\", torch_dtype=torch.float16).to(\"cuda\")\n",
160 | " ).to(\"cuda\")\n",
161 | "\n",
162 | " if not safety_check.value:\n",
163 | " pipe.safety_checker = None\n",
164 | "\n",
165 | " pipe.enable_xformers_memory_efficient_attention()\n",
166 | "\n",
167 | " print(\"Finished!\")\n",
168 | "\n",
169 | " apply_btn.disabled = False\n",
170 | "\n",
171 | "\n",
172 | "#Display\n",
173 | "apply_btn.on_click(pipeline)\n",
174 | "\n",
175 | "widgets.VBox(\n",
176 | " [\n",
177 | " widgets.HTML(value=\"Configure Pipeline\"),\n",
178 | " select_model, select_sampler, safety_check, apply_btn, out\n",
179 | " ]\n",
180 | ")\n"
181 | ],
182 | "metadata": {
183 | "id": "CV_UTS40oD1k"
184 | },
185 | "execution_count": null,
186 | "outputs": []
187 | },
188 | {
189 | "cell_type": "code",
190 | "source": [
191 | "#@title 👇 Generating Images { form-width: \"20%\", display-mode: \"form\" }\n",
192 | "#@markdown ---\n",
193 | "#@markdown - **Prompt** - Description of the image\n",
194 | "#@markdown - **Negative Prompt** - Things you don't want to see or ignore in the image\n",
195 | "#@markdown - **Steps** - Number of denoising steps. More steps can improve quality but take longer to generate the image. Default is `30`.\n",
196 | "#@markdown - **CFG** - Guidance scale ranging from `0` to `20`. Lower values let the AI be more creative and follow the prompt less strictly. Default is `7.5`.\n",
197 | "#@markdown - **Seed** - A value that controls image generation: the same seed and prompt produce the same image. Set `-1` to use a random seed.\n",
198 | "#@markdown ---\n",
199 | "import ipywidgets as widgets, mediapy, random\n",
200 | "import IPython.display\n",
201 | "\n",
202 | "\n",
203 | "#PARAMETER WIDGETS \n",
204 | "width = \"300px\"\n",
205 | "\n",
206 | "prompt = widgets.Textarea(\n",
207 | " value=\"\",\n",
208 | " placeholder=\"Enter prompt\",\n",
209 | " #description=\"Prompt:\",\n",
210 | " rows=5,\n",
211 | " layout=widgets.Layout(width=\"600px\")\n",
212 | ")\n",
213 | "\n",
214 | "neg_prompt = widgets.Textarea(\n",
215 | " value=\"\",\n",
216 | " placeholder=\"Enter negative prompt\",\n",
217 | " #description=\"Negative Prompt:\",\n",
218 | " rows=5,\n",
219 | " layout=widgets.Layout(width=\"600px\")\n",
220 | ")\n",
221 | "\n",
222 | "num_images = widgets.IntText(\n",
223 | " value=1,\n",
224 | " description=\"Images:\",\n",
225 | " layout=widgets.Layout(width=width),\n",
226 | ")\n",
227 | "\n",
228 | "steps = widgets.IntText(\n",
229 | " value=30,\n",
230 | " description=\"Steps:\",\n",
231 | " layout=widgets.Layout(width=width)\n",
232 | ")\n",
233 | "\n",
234 | "CFG = widgets.FloatText(\n",
235 | " value=7.5,\n",
236 | " description=\"CFG:\",\n",
237 | " layout=widgets.Layout(width=width)\n",
238 | ")\n",
239 | "\n",
240 | "img_height = widgets.Dropdown(\n",
241 | " options=[('512px', 512), ('768px', 768)],\n",
242 | " value=512,\n",
243 | " description=\"Height:\",\n",
244 | " layout=widgets.Layout(width=width)\n",
245 | ")\n",
246 | "\n",
247 | "img_width = widgets.Dropdown(\n",
248 | " options=[('512px', 512), ('768px', 768)],\n",
249 | " value=512,\n",
250 | " description=\"Width:\",\n",
251 | " layout=widgets.Layout(width=width)\n",
252 | ")\n",
253 | "\n",
254 | "random_seed = widgets.IntText(\n",
255 | " value=-1,\n",
256 | " description=\"Seed:\",\n",
257 | " layout=widgets.Layout(width=width),\n",
258 | " disabled=False\n",
259 | ")\n",
260 | "\n",
261 | "generate = widgets.Button(\n",
262 | " description=\"Generate\",\n",
263 | " disabled=False,\n",
264 | " button_style=\"primary\"\n",
265 | ")\n",
266 | "\n",
267 | "display_imgs = widgets.Output()\n",
268 | "\n",
269 | "\n",
270 | "#RUN\n",
271 | "def generate_img(i):\n",
272 | "\n",
273 | " #Clear output\n",
274 | " display_imgs.clear_output()\n",
275 | " generate.disabled = True\n",
276 | "\n",
277 | " #Calculate seed\n",
278 | " seed = random.randint(0, 2147483647) if random_seed.value == -1 else random_seed.value\n",
279 | "\n",
280 | " with display_imgs: \n",
281 | "\n",
282 | " print(\"Running...\")\n",
283 | " \n",
284 | " images = pipe(\n",
285 | " prompt.value,\n",
286 | " height = img_height.value,\n",
287 | " width = img_width.value,\n",
288 | " num_inference_steps = steps.value,\n",
289 | " guidance_scale = CFG.value,\n",
290 | " num_images_per_prompt = num_images.value,\n",
291 | " negative_prompt = neg_prompt.value,\n",
292 | " generator = torch.Generator(\"cuda\").manual_seed(seed),\n",
293 | " ).images \n",
294 | " mediapy.show_images(images)\n",
295 | "\n",
296 | " print(f\"Seed:\\n{seed}\")\n",
297 | "\n",
298 | " generate.disabled = False\n",
299 | "\n",
300 | "#Display\n",
301 | "generate.on_click(generate_img)\n",
302 | "\n",
303 | "widgets.VBox(\n",
304 | " [\n",
305 | " widgets.AppLayout(\n",
306 | " header=widgets.HTML(\n",
307 | " value=\"Stable Diffusion\",\n",
308 | " ),\n",
309 | " left_sidebar=widgets.VBox(\n",
310 | " [num_images, steps, CFG, img_height, img_width, random_seed]\n",
311 | " ),\n",
312 | " center=widgets.VBox(\n",
313 | " [prompt, neg_prompt, generate]\n",
314 | " ),\n",
315 | " right_sidebar=None,\n",
316 | " footer=None\n",
317 | " ),\n",
318 | " display_imgs\n",
319 | " ]\n",
320 | ")"
321 | ],
322 | "metadata": {
323 | "id": "atmx0PNQ78Wa"
324 | },
325 | "execution_count": null,
326 | "outputs": []
327 | }
328 | ]
329 | }
--------------------------------------------------------------------------------