├── Dockerfile
├── README.md
├── app.py
├── daemon.json
└── docker-compose.yml

/Dockerfile:
--------------------------------------------------------------------------------
 1 | FROM tensorflow/tensorflow:2.10.0-gpu
 2 | 
 3 | RUN apt update && \
 4 |     apt install -y git && \
 5 |     pip install --no-cache-dir Pillow==9.2.0 tqdm==4.64.1 \
 6 |     ftfy==6.1.1 regex==2022.9.13 tensorflow-addons==0.17.1 \
 7 |     fastapi "uvicorn[standard]" git+https://github.com/divamgupta/stable-diffusion-tensorflow.git
 8 | 
 9 | WORKDIR /app
10 | 
11 | COPY ./app.py /app/app.py
12 | 
13 | CMD uvicorn --host 0.0.0.0 app:app
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | ## stable-diffusion-tf-docker
 2 | Stable Diffusion in Docker with a simple web API and GPU support
 3 | 
 4 | ## What is this?
 5 | [Stable Diffusion](https://github.com/CompVis/stable-diffusion) is a latent text-to-image diffusion model, made possible thanks to a collaboration between Stability AI and Runway.
 6 | It features state-of-the-art text-to-image synthesis capabilities with relatively small memory requirements (10 GB).
 7 | Divam Gupta [ported it to TensorFlow / Keras](https://github.com/divamgupta/stable-diffusion-tensorflow) from the original weights.
 8 | I packed it into a Docker image with a simple web API and GPU support.
 9 | 
10 | ## How does it work?
11 | This repo is intended to work on a GPU from the [TensorDock Marketplace](https://marketplace.tensordock.com/).
12 | It should work on other machines with little-to-no change needed,
13 | but it is a direct outcome of my experiments with the recently-launched TensorDock Marketplace.
14 | It is currently in Public Alpha, but I already like their innovative idea of democratizing access to high-performance computing.
15 | Launched in addition to their [affordable Core Cloud GPU service](https://www.tensordock.com/),
16 | the Marketplace edition serves as a marketplace that brings clients and GPU providers together.
17 | Hosts, i.e., those who have spare GPUs, can rent them out to clients including independent researchers, startups, hobbyists, tinkerers etc.
18 | at insanely cheap prices.
19 | [According to TensorDock](https://www.tensordock.com/host),
20 | this also lets hosts earn 2x-to-3x mining profits,
21 | and for a better purpose than mining meaningless cryptos.
22 | 
23 | Servers are customizable for the required RAM, vCPUs and disk allocated, and boot times are short, at around ~45 seconds.
24 | You may choose to start with a minimal Ubuntu image with NVIDIA drivers and Docker already installed,
25 | or you can jump into experiments with fully-fledged images with NVIDIA drivers, Conda, TensorFlow, PyTorch and Jupyter configured.
26 | 
27 | ## Ok, show me how to run
28 | Don't let the number of steps scare you -- it's only a detailed step-by-step walkthrough of the whole process, from registering to making requests.
29 | It should take no longer than ~10 minutes.
30 | 
31 | 1. [Register](https://marketplace.tensordock.com/register) and sign in to TensorDock Marketplace.
32 | 2. [Go to the order page](https://marketplace.tensordock.com/order_list), and choose a physical machine that offers a GPU with at least 10 GB of memory. I'd suggest one that offers an RTX 3090.
33 | 3. This will open up a modal that lets you configure your server. My suggestions are as follows:
34 |     - Select amount of each GPU model: 1 x GeForce RTX 3090 24 GB
35 |     - Select amount of RAM (GB): 16
36 |     - Select number of vCPUs: 2
37 |     - Check checkboxes for up to 15 port forwardings. You will be able to access your server through these ports.
38 |     - Under "Customize your installation", select "Ubuntu 20.04 LTS".
39 | 4. Choose a password for your server, and give it a name, e.g., "stable-diffusion-api".
40 | 5. 
Hit "Deploy Server" and voila! Your server will be ready in seconds.
41 | 6. When you see the success page, click "Next" to see the details.
42 | 7. Find the IPv4 address of your server. This may be a real IP or a subdomain like `mass-a.tensordockmarketplace.com`.
43 | 8. Find the external port mapped to internal port 22. You will use this one to SSH into your server. It might be something like 20029, for example.
44 | 9. Connect to your server using these credentials, e.g.:
45 |     - `ssh -p 20029 user@mass-a.tensordockmarketplace.com`
46 | 
47 | 
48 | Docker is already configured for GPU access, but we need to configure Docker networking to be able to make external requests.
49 | 
50 | 10. Clone this repository and cd into it:
51 |     - `git clone https://github.com/monatis/stable-diffusion-tf-docker.git && cd stable-diffusion-tf-docker`
52 | 11. Copy `daemon.json` over the existing `/etc/docker/daemon.json` and restart the service. Don't worry -- this only adds a setting for the MTU value.
53 |     - `sudo cp ./daemon.json /etc/docker/daemon.json`
54 |     - `sudo systemctl restart docker.service`
55 | 12. Set an environment variable for the public port that you want to use, then run Docker Compose. Our `docker-compose.yml` file will pick it up from the environment; it should be one of the port forwardings you configured, e.g., 20020.
56 |     - `export PUBLIC_PORT=20020`
57 |     - `docker compose up -d`
58 | 13. Once it's up and running, go to `http://mass-a.tensordockmarketplace.com:20020/docs` for the Swagger UI provided by FastAPI.
59 | 14. Use the `POST /generate` endpoint to generate images with Stable Diffusion. It will respond with a download ID.
60 | 15. Hit the `GET /download/{id}` endpoint with that ID to download your image.
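
As an illustration (not part of this repo), the two calls in steps 14-15 can also be driven with a short Python client using only the standard library. The server address, prompt, and output filename below are placeholders -- substitute your own:

```python
import json
import urllib.request

BASE_URL = "http://mass-a.tensordockmarketplace.com:20020"  # placeholder address


def build_payload(prompt, scale=7.5, steps=50, seed=None):
    """Mirror the GenerationRequest schema served by the API."""
    payload = {"prompt": prompt, "scale": scale, "steps": steps}
    if seed is not None:
        payload["seed"] = seed
    return payload


def generate_and_download(prompt, out_path="result.png"):
    """POST /generate, then fetch the image from GET /download/{id}."""
    req = urllib.request.Request(
        f"{BASE_URL}/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        download_id = json.load(resp)["download_id"]
    with urllib.request.urlopen(f"{BASE_URL}/download/{download_id}") as resp:
        with open(out_path, "wb") as f:
            f.write(resp.read())
    return out_path


if __name__ == "__main__":
    print(generate_and_download("a photograph of an astronaut riding a horse"))
```

Generation takes a while on a single GPU, so expect the first request to block for up to a few minutes; increase your HTTP client timeout accordingly.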
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | import time
 3 | import uuid
 4 | 
 5 | from fastapi import FastAPI
 6 | from fastapi.exceptions import HTTPException
 7 | from fastapi.responses import FileResponse
 8 | from PIL import Image
 9 | from pydantic import BaseModel, Field
10 | from stable_diffusion_tf.stable_diffusion import StableDiffusion
11 | from tensorflow import keras
12 | 
13 | height = int(os.environ.get("HEIGHT", 512))
14 | width = int(os.environ.get("WIDTH", 512))
15 | mixed_precision = os.environ.get("MIXED_PRECISION", "no") == "yes"
16 | 
17 | if mixed_precision:
18 |     keras.mixed_precision.set_global_policy("mixed_float16")
19 | 
20 | generator = StableDiffusion(img_height=height, img_width=width, jit_compile=False)
21 | 
22 | app = FastAPI(title="Stable Diffusion API")
23 | 
24 | 
25 | class GenerationRequest(BaseModel):
26 |     prompt: str = Field(..., title="Input prompt", description="Input prompt to be rendered")
27 |     scale: float = Field(default=7.5, title="Scale", description="Unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))")
28 |     steps: int = Field(default=50, title="Steps", description="Number of DDIM sampling steps")
29 |     seed: int = Field(default=None, title="Seed", description="Optionally specify a seed for reproducible results")
30 | 
31 | 
32 | class GenerationResult(BaseModel):
33 |     download_id: str = Field(..., title="Download ID", description="Identifier to download the generated image")
34 |     time: float = Field(..., title="Time", description="Total duration of generating this image")
35 | 
36 | 
37 | @app.get("/")
38 | def home():
39 |     return {"message": "See /docs for documentation"}
40 | 
41 | @app.post("/generate", response_model=GenerationResult)
42 | def generate(req: GenerationRequest):
43 |     start = time.time()
44 |     id = str(uuid.uuid4())
45 |     img = 
generator.generate(req.prompt, num_steps=req.steps, unconditional_guidance_scale=req.scale, temperature=1, batch_size=1, seed=req.seed)
46 |     path = os.path.join("/app/data", f"{id}.png")
47 |     Image.fromarray(img[0]).save(path)
48 |     elapsed = time.time() - start
49 | 
50 |     return GenerationResult(download_id=id, time=elapsed)
51 | 
52 | @app.get("/download/{id}", responses={200: {"description": "Image with provided ID", "content": {"image/png" : {"example": "No example available."}}}, 404: {"description": "Image not found"}})
53 | async def download(id: str):
54 |     path = os.path.join("/app/data", f"{id}.png")
55 |     if os.path.exists(path):
56 |         return FileResponse(path, media_type="image/png", filename=path.split(os.path.sep)[-1])
57 |     else:
58 |         raise HTTPException(404, detail="No such file")
59 | 
--------------------------------------------------------------------------------
/daemon.json:
--------------------------------------------------------------------------------
1 | {
2 |   "runtimes": {
3 |     "nvidia": {
4 |       "path": "nvidia-container-runtime",
5 |       "runtimeArgs": []
6 |     }
7 |   },
8 |   "mtu": 1442
9 | }
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
 1 | version: "3.3"
 2 | 
 3 | services:
 4 |   app:
 5 |     image: myusufs/stable-diffusion-tf
 6 |     build:
 7 |       context: .
 8 |     environment:
 9 |       # configure env vars to your liking
10 |       - HEIGHT=512
11 |       - WIDTH=512
12 |       - MIXED_PRECISION=no
13 |     ports:
14 |       - "${PUBLIC_PORT?Public port not set as an environment variable}:8000"
15 |     volumes:
16 |       - ./data:/app/data
17 | 
18 |     deploy:
19 |       resources:
20 |         reservations:
21 |           devices:
22 |             - driver: nvidia
23 |               count: 1
24 |               capabilities: [ gpu ]
25 | 
26 | networks:
27 |   default:
28 |     driver: bridge
29 |     driver_opts:
30 |       com.docker.network.driver.mtu: 1442
--------------------------------------------------------------------------------