├── .gitignore
├── LICENSE
├── README.md
└── v1
    ├── Dockerfile
    ├── app
    │   ├── __init__.py
    │   └── app.py
    ├── hf_token.py
    ├── requirements.txt
    └── retrieve_model.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 FourthBrain
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
4 |
5 |
6 |
7 |
8 | ## :wave: Branching out of the Notebook: ML Application Development with GitHub
9 |
10 | Welcome to one of the most used modern development workflows - feature branch development!
11 |
12 | Massive thanks to [DeepLearning.ai](https://www.deeplearning.ai/) for coordinating this event!
13 |
14 | ## :books: Quick Review
15 |
16 | Just a reminder that this repository is going to build off of the previous tutorial which you can find here:
17 |
18 |
19 | **Video Tutorials**
20 |
21 | 1. [M1 Tutorial](https://youtu.be/wiZWQjjvvyk)
22 | 2. [Windows/WSL2 Tutorial](https://youtu.be/C7fBf33nQ7E)
23 | 3. [Linux Tutorial](https://youtu.be/TePJhh4oRcA)
24 |
25 |
26 |
27 |
28 | **Repositories Used**
29 |
30 | 1. [FourthBrain's Intro to MLOps Repo](https://github.com/FourthBrain/software-dev-for-mlops-101)
31 | 2. [deeplearning.ai's FastAPI/Docker Repo](https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public/tree/main/course4/week2-ungraded-labs/C4_W2_Lab_1_FastAPI_Docker)
32 |
33 |
34 |
35 | Also, make sure you have an active Hugging Face account!
36 |
37 | Once you've caught up using those, you'll have the base knowledge you need to get started on this repository!
38 |
39 |
40 | ## :rocket: Let's get started!
41 |
42 | ### Creating a repository & cloning app content
43 |
44 | 1. First things first, let's create a repository on GitHub! You can follow [these](https://docs.github.com/en/get-started/quickstart/create-a-repo) instructions to create it! Make sure to include a README.md, a License, and a `.gitignore`.
45 | 2. Now, we're going to clone our repository from GitHub using the command:
46 |
47 | ```console
48 | git clone <your-repository-url>
49 | ```
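
Then change into the newly cloned directory so the rest of the commands run inside your repository (`<your-repository-name>` is a placeholder - it will match whatever you named your repo):

```console
cd <your-repository-name>
```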
50 | 3. Now we can add this repository as an upstream remote to our local repo with the following command:
51 |
52 | ```console
53 | git remote add upstream git@github.com:FourthBrain/Branching-out-of-the-Notebook.git
54 | ```
55 |
56 | 4. Next, we're going to pull this repo into our local using the following command:
57 |
58 | ```console
59 | git pull upstream main -Xtheirs --allow-unrelated-histories
60 | ```
61 |
62 |
63 | **Command Explanation**
64 |
65 | This command uses two flags:
66 |
67 | 1. `-Xtheirs` - this flag tells git to keep their (the upstream's) files, should there be any merge conflicts.
68 | 2. `--allow-unrelated-histories` - this flag tells git not to worry about the fact that these are two separate repositories!
69 |
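As a quick sanity check, you can list your remotes - you should see both `origin` (your repository) and `upstream` (the FourthBrain repository), each with fetch and push URLs:

```console
git remote -v
```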
70 |
71 |
72 | ### Optional: Locally testing our StableDiffusion App!
73 |
74 | 1. Now that we have the app in our local branch, we'll need to do some set-up before we continue! The first thing we'll want to do is head to [this link](https://huggingface.co/CompVis/stable-diffusion-v1-4) and read/accept the ToS!
75 |
76 | 2. Next, we'll want to create a Read token on Hugging Face by following [this](https://huggingface.co/docs/hub/security-tokens#:~:text=To%20create%20an%20access%20token,you're%20ready%20to%20go!) process!
77 |
78 | 3. Now that you have your Hugging Face Read token, you'll want to make a file in the `v1` directory called `hf_token.py` (make sure this file is listed in your `.gitignore` so the token never gets committed), which will be a simple `.py` file that only contains one line of code:
79 |
80 | ```
81 | hf_token = ""
82 | ```
83 |
84 | 4. At this point, your repository should look something like this:
85 |
86 | 
87 |
88 | 5. Now we'll want to set up a local `conda` env using the following command:
89 |
90 | ```console
91 | conda create --name stable_diffusion_demo python=3.10 pip diffusers
92 | ```
93 |
94 | 6. Once you've created `hf_token.py` and added your read key (again making sure you've accepted the ToS [here](https://huggingface.co/CompVis/stable-diffusion-v1-4)), and set up your `conda` env, you can run `retrieve_model.py`. This will take a while and consume ~5GB of space, as it's pulling the v1-4 Stable Diffusion weights from Hugging Face!
95 |
96 | ```console
97 | conda activate stable_diffusion_demo
98 | cd v1
99 | python3 retrieve_model.py
100 | ```
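
Optionally, you can confirm the weights landed where the Dockerfile expects them - there should now be a `model` directory inside `v1` weighing in at roughly 5GB:

```console
du -sh model
```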
101 |
102 | 7. Once that is done, you will be able to build the Docker image! This process will take some time (~5 min.) and is dependent on your hardware!
103 |
104 | ```console
105 | docker build -t stable_diff_demo .
106 | ```
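
If you'd like to confirm the build succeeded, the new image should show up in your local image list:

```console
docker images stable_diff_demo
```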
107 |
108 | 8. Now that you've built the image - it's time to run it as a container! We'll use the following command to ensure our container removes itself when we stop it:
109 |
110 | ```console
111 | docker run --rm --name stable_diff_demo_container -p 5000:5000 stable_diff_demo
112 | ```
113 |
114 | 9. Head on over to [localhost:5000/docs](http://localhost:5000/docs) to play around with the new model, or try the `curl` request below! (CPU inference can take a long time, depending on your hardware - roughly 3-5 min.)
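
If you'd rather test from the command line, a request along these lines should work (this assumes the container is listening on port 5000 as above; the prompt and output filename are just examples):

```console
curl -X POST "http://localhost:5000/text2img?text=a%20painting%20of%20a%20lighthouse%20at%20sunset" --output text2img_output.png
```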
115 |
116 | ## 🌳 Branching Out!
117 |
118 | ### Local Development on a Feature Branch
119 |
120 | Now that you've tested your app locally, it's time to add a feature branch, do some work, and get ready to merge!
121 |
122 | 1. First things first, we'll want to make a new branch called `feature_branch_img2img` using the command:
123 |
124 | ```console
125 | git checkout -b feature_branch_img2img
126 | ```
127 | 2. Once you have that done, you can check to make sure you're on the correct branch using the command:
128 |
129 | ```console
130 | git branch
131 | ```
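
The active branch is marked with an asterisk, so the output should look something like this (your other branch may be named differently depending on your setup):

```console
* feature_branch_img2img
  main
```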
132 |
133 | 3. After confirming you're on the correct branch - you can finally add your feature! We're going to add an `img2img` endpoint to our FastAPI application (located in `v1/app/app.py`), using the following code:
134 |
135 |
136 | ```python
137 | # create an img2img route
138 | @app.post("/img2img")
139 | def img2img(prompt: str, img: UploadFile = File(...)):
140 |     device = get_device()
141 |     img2img_pipe = StableDiffusionImg2ImgPipeline.from_pretrained("../model")
142 |     img2img_pipe.to(device)
143 | 
144 |     img = Image.open(img.file).convert("RGB")
145 |     init_img = img.resize((768, 512))
146 | 
147 |     # generate an image from the prompt + initial image
148 |     img = img2img_pipe(prompt, init_img).images[0]
149 | 
150 |     img = np.array(img)
151 |     img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
152 |     res, img = cv2.imencode(".png", img)
153 | 
154 |     del img2img_pipe
155 |     if torch.cuda.is_available():
156 |         torch.cuda.empty_cache()
157 | 
158 |     return StreamingResponse(io.BytesIO(img.tobytes()), media_type="image/png")
159 | ```
160 |
161 | 4. Once you have updated your `app.py`, you can once again test it using the process outlined above - or with a quick `curl` request like the one below!
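
For reference, here's one way to hit the new endpoint with `curl` once the image is rebuilt and the container is running (the prompt, `input.png`, and the output filename are placeholders - any reasonably sized RGB image should work as input):

```console
curl -X POST "http://localhost:5000/img2img?prompt=turn%20this%20photo%20into%20a%20watercolor%20painting" -F "img=@input.png" --output img2img_output.png
```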
162 |
163 | 5. Now that we've made the changes - we're ready to stage, and then commit them! First up, let's stage our changes using:
164 |
165 | ```console
166 | git add .
167 | ```
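
If you want to double-check exactly what's about to be committed, you can review the staged files with:

```console
git status
```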
168 |
169 | 6. Next up, we'll commit the changes with a useful/helpful message using:
170 |
171 | ```console
172 | git commit -m "<your helpful commit message>"
173 | ```
174 |
175 | 7. Now that we've committed those changes - we need to push them to our remote repository. You'll notice that we're setting a new upstream in the upcoming command; that's because while the branch we created exists locally, the remote is not aware of it! So we need to create the branch on the remote as well! We can do this in one step with the following command:
176 |
177 | ```console
178 | git push --set-upstream origin feature_branch_img2img
179 | ```
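
Note that `--set-upstream` (or `-u`) only needs to be passed once per branch - any follow-up pushes from this branch can simply use:

```console
git push
```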
180 |
181 | 8. With that, you're all done on the local side! The only thing left to do is navigate to your remote repo on GitHub.com!
182 |
183 | ### Merging into the Trunk Using a Pull Request!
184 |
185 | 1. When you arrive at your repository on GitHub.com, you should notice a brand new banner is present:
186 |
187 | 
188 |
189 | 2. Once you click on the green "Compare & pull request" button - you'll be brought to the pull request screen where you'll want to leave an insightful comment, and add an appropriate title (it will use your commit message as a default title!)
190 |
191 | 
192 |
193 | 3. Once you've done that, you can click the green "Create pull request" button!
194 |
195 | 4. After creating the pull request, you'll see there's an option to assign reviewers - which you can do by clicking the cog wheel icon, and either selecting (or typing) the name of your reviewer! (Remember: they have to be a collaborator on the repository!)
196 |
197 | 
198 |
199 | 5. After review (usually the reviewer will do this step), you can finally merge your changes into the trunk by clicking the green "Merge pull request" button! It will ask you to confirm, and lets you leave additional comments - which you should absolutely do!
200 |
201 | 
202 |
203 | 6. Afterwards, it will ask whether you want to delete the branch. Whether you do depends on your organization's conventions - but it is usually considered good practice to delete "hanging" branches!
204 |
205 | 
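
Back on your local machine, you can pull the merged changes into your trunk and (optionally) clean up your local copy of the feature branch (if GitHub squash-merged your PR, git may ask you to use `-D` instead):

```console
git checkout main
git pull origin main
git branch -d feature_branch_img2img
```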
206 |
207 |
208 |
209 | ## :tada: Conclusion!
210 |
211 | With that, you're all done!
212 |
213 | To recap, we've: built a web app based on the Diffusers library, "Dockerized" it, created a feature branch, added a feature, pushed our branch to GitHub, and created a PR!
214 |
215 |
216 |
217 |
218 |
219 |
--------------------------------------------------------------------------------
/v1/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.10-slim
2 |
3 | COPY requirements.txt /tmp/requirements.txt
4 |
5 | RUN pip install -r /tmp/requirements.txt
6 |
7 | RUN rm /tmp/requirements.txt
8 |
9 | COPY /app /app
10 |
11 | COPY /model /model
12 |
13 | WORKDIR /app
14 |
15 | EXPOSE 5000
16 |
17 | RUN apt-get update && \
18 |     apt-get install -y ffmpeg libsm6 libxext6
19 |
20 | CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "5000"]
21 |
22 |
--------------------------------------------------------------------------------
/v1/app/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FourthBrain/Branching-out-of-the-Notebook/72d71bb1e6206540f131dd8c41477c822a931664/v1/app/__init__.py
--------------------------------------------------------------------------------
/v1/app/app.py:
--------------------------------------------------------------------------------
1 | # import fastapi
2 | from fastapi import FastAPI, UploadFile, File
3 | from starlette.responses import StreamingResponse
4 | from diffusers import (
5 |     StableDiffusionPipeline,
6 |     StableDiffusionImg2ImgPipeline
7 | )
8 | from PIL import Image
9 | import numpy as np
10 | from torch import autocast
11 | import uvicorn
12 | import cv2
13 | import io
14 | import torch
15 |
16 | # instantiate the app
17 | app = FastAPI()
18 |
19 | # cuda or cpu config
20 | def get_device():
21 |     if torch.cuda.is_available():
22 |         print('cuda is available')
23 |         return torch.device('cuda')
24 |     else:
25 |         print('cuda is not available')
26 |         return torch.device('cpu')
27 |
28 | # create a route
29 | @app.get("/")
30 | def index():
31 |     return {"text" : "We're running!"}
32 |
33 | # create a text2img route
34 | @app.post("/text2img")
35 | def text2img(text: str):
36 |     device = get_device()
37 | 
38 |     text2img_pipe = StableDiffusionPipeline.from_pretrained("../model")
39 |     text2img_pipe.to(device)
40 | 
41 |     img = text2img_pipe(text).images[0]
42 | 
43 |     img = np.array(img)
44 |     img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
45 |     res, img = cv2.imencode(".png", img)
46 | 
47 |     del text2img_pipe
48 |     if torch.cuda.is_available():
49 |         torch.cuda.empty_cache()
50 | 
51 |     return StreamingResponse(io.BytesIO(img.tobytes()), media_type="image/png")
52 |
53 | # run the app
54 | if __name__ == "__main__":
55 |     uvicorn.run(app, host="0.0.0.0", port=8000)
--------------------------------------------------------------------------------
/v1/hf_token.py:
--------------------------------------------------------------------------------
1 | hf_token = "YOUR_HF_TOKEN"
--------------------------------------------------------------------------------
/v1/requirements.txt:
--------------------------------------------------------------------------------
1 | diffusers==0.6.0
2 | fastapi==0.85.1
3 | numpy==1.23.4
4 | opencv_python==4.6.0.66
5 | Pillow==9.2.0
6 | starlette==0.20.4
7 | torch==1.12.1
8 | uvicorn==0.19.0
9 | transformers
10 | python-multipart
--------------------------------------------------------------------------------
/v1/retrieve_model.py:
--------------------------------------------------------------------------------
1 | from diffusers import StableDiffusionPipeline
2 | from hf_token import hf_token
3 |
4 | if hf_token == "YOUR_HF_TOKEN":
5 |     # read the user's token from the CLI
6 |     hf_token = input("Please enter your HuggingFace token: ")
7 |
8 | try:
9 |     # grab the model from HF using your token
10 |     pipe = StableDiffusionPipeline.from_pretrained(
11 |         "CompVis/stable-diffusion-v1-4",
12 |         use_auth_token=hf_token
13 |     )
14 | 
15 |     # save the model
16 |     pipe.save_pretrained("model")
17 |
18 | except OSError as e:
19 |     print(e)
20 |     print("Invalid Token, Please Try Again")
21 |
22 |
--------------------------------------------------------------------------------