├── README.md
├── assets-cpu
│   ├── accesstokengpu.png
│   ├── cpu.png
│   ├── cpu0.png
│   ├── cpu1.png
│   ├── cpu2.png
│   ├── cpu3.png
│   ├── cpu4.png
│   ├── cpu5.png
│   ├── demo.txt
│   ├── image3.webp
│   └── image4.webp
├── assets-gpu
│   ├── demo.txt
│   ├── gpu0.png
│   ├── gpu1.png
│   ├── gpu3.png
│   ├── gpu4.png
│   ├── gpu5.png
│   └── gpu6.png
├── assets
│   ├── accesstokengpu.png
│   ├── at.png
│   ├── cpugpu.gif
│   ├── cpugpu.png
│   ├── demo.txt
│   ├── image1.png
│   ├── image2.png
│   ├── image3.webp
│   ├── image4.webp
│   └── t4.gif
├── choose
│   ├── 6.png
│   ├── 7.png
│   └── demo.txt
├── requirements.txt
├── run-as-cpu.py
└── run-as-gpu.ipynb

/README.md:
--------------------------------------------------------------------------------

---
title: HF_SPACE DEMO
emoji: 🐹
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
base_model: stabilityai/sdxl-turbo
model: SG161222/RealVisXL_V4.0 / other models based on the conditions
type: base_model, model
pinned: true
header: mini
theme: bethecloud/storj_theme
get_hamster_from: https://prithivhamster.vercel.app/
license: creativeml-openrail-m
short_description: Fast as Hamster | Stable Hamster | Stable Diffusion
---

## How to run HF Spaces on a local CPU (e.g. Intel i5 / AMD Ryzen 7) or on Google Colab with a T4 GPU ❓

![alt text](assets/cpugpu.gif)

# Before getting into the demo, let's first understand how Hugging Face access tokens are passed from the settings on your profile ⭐

You can see the HF token in your profile settings: 👇🏻

https://huggingface.co/settings/tokens

![alt text](assets/at.png)

Pass the access token to log in to Hugging Face locally.

![alt text](assets/accesstokengpu.png)

Here we use a T4 GPU instead of an NVIDIA A100; the A100 is available in Colab only to premium users. The T4 is free for a certain amount of computation, although it is not as powerful as the A100 or V100.
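Which of the two paths below applies can be detected programmatically. A minimal sketch (the `pick_device` helper name is ours; it assumes PyTorch is installed when a GPU run is intended, and falls back to CPU otherwise):

```python
def pick_device() -> str:
    """Return "cuda" when a CUDA GPU (e.g. a Colab T4) is visible, else "cpu".

    torch is imported lazily so this helper also works on a CPU-only
    machine where PyTorch may not even be installed.
    """
    try:
        import torch  # heavyweight; only needed for the actual check
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"


if __name__ == "__main__":
    # On a Colab T4 runtime this reports "cuda"; on a plain laptop, "cpu".
    print(pick_device())
```

The same helper can then drive the `.to(device)` call when loading the pipeline.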
![alt text](assets/t4.gif)

## 1. Running on a T4 GPU, Google Colab: Hardware accelerator

Choose the run-as-gpu.ipynb file from the repo and open it in a Colab notebook.

![alt text](choose/6.png)

In Colab, choose the T4 GPU as the runtime hardware accelerator ✅ (a Google Compute Engine backend).

![alt text](assets-gpu/gpu0.png)

Run the cells one by one: first the requirements, second the hf_access_token cell (wait for "Login successful!"), and third the main code block. The components of the model linked by the model ID will then be loaded.

![alt text](assets-gpu/gpu4.png)

👇🏻👇🏻 After the code runs successfully, the Gradio live server will give a link like this ...

https://7770379da2bab84efe.gradio.live/

🚀 Progress

After opening the gradio.live link, the Gradio interface appears like this; enter a prompt and process it.

| ![alt text](assets-gpu/gpu3.png) | ![alt text](assets-gpu/gpu1.png) |
|---------------------------|--------------------------|

Sample results 1 & 2 from the Colab space:

| ![alt text](assets-gpu/gpu5.png) | ![alt text](assets-gpu/gpu6.png) |
|---------------------------|--------------------------|

The original resulting images from the space // gradio.live:

| ![alt text](assets/image1.png) | ![alt text](assets/image2.png) |
|---------------------------|--------------------------|

🚀 Working link for the Colab:

https://colab.research.google.com/drive/1rpL-UPkVpJgj5U2WXOupV0GWbBGqJ5-p

-----------------------------------------------------------------------------------------------------------------------------------------------------------------

## 2. Running on CPU, Local System: Hardware accelerator [ 0 ]

👇🏻 The same Hugging Face login procedure applies to this method as well!
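The login step can also be scripted instead of pasted interactively. A sketch under our own assumptions: the token is exported as an `HF_TOKEN` environment variable, and the `hf_` prefix check is only a cheap sanity guard, not an official validation rule:

```python
import os


def looks_like_hf_token(token: str) -> bool:
    """Cheap sanity check: HF user access tokens start with "hf_"."""
    return token.startswith("hf_") and len(token) > 3


def login_from_env() -> bool:
    """Log in to the Hugging Face Hub using the HF_TOKEN env var, if set."""
    token = os.environ.get("HF_TOKEN", "")
    if not looks_like_hf_token(token):
        return False
    try:
        # Lazy import: huggingface_hub is only needed when actually logging in.
        from huggingface_hub import login
    except ImportError:
        return False
    login(token=token)
    return True


if __name__ == "__main__":
    print("Login successful!" if login_from_env() else "No valid HF_TOKEN set.")
```

Keeping the token in an environment variable avoids hard-coding it in a notebook you might later push to a public repo.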
You can see the HF token in your profile settings: 👇🏻

https://huggingface.co/settings/tokens

![alt text](assets/at.png)

Pass the access token to log in to Hugging Face locally.

![alt text](assets/accesstokengpu.png)

Choose the run-as-cpu.py file from the repo and open it in your local code editor (e.g. VS Code).
Satisfy everything in requirements.txt: pip install -r requirements.txt

![alt text](choose/7.png)

Install all the requirements:

    accelerate
    diffusers
    invisible_watermark
    torch
    transformers
    xformers

🚀 Run the script with ( python run-as-cpu.py )

![alt text](assets-cpu/cpu.png)

✅ After a successful run you will see the components loading in the local code editor:

    ===== Application Startup at 2024-06-17 16:51:58 =====
    The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
    0it [00:00, ?it/s]
    0it [00:00, ?it/s]
    /usr/local/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
    deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
    Loading pipeline components...: 0%| | 0/7 [00:00
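The CPU run above can be condensed into a minimal sketch. This is not a copy of run-as-cpu.py: the prompt and the `RUN_SD` gate are our own illustrative additions, and `stabilityai/sdxl-turbo` is taken from the `base_model` field in the front matter. float32 is used because half precision is poorly supported on most CPUs:

```python
import os


def select_dtype(device: str) -> str:
    """float16 halves memory on CUDA GPUs; CPUs generally need full float32."""
    return "float16" if device == "cuda" else "float32"


def main() -> None:
    # Heavy imports live inside main() so the file stays importable
    # on machines without torch/diffusers installed.
    import torch
    from diffusers import DiffusionPipeline

    model_id = "stabilityai/sdxl-turbo"  # base_model from the front matter
    dtype = torch.float16 if select_dtype("cpu") == "float16" else torch.float32
    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to("cpu")

    prompt = "a hamster astronaut, studio lighting"  # illustrative prompt
    # sdxl-turbo is distilled for very few steps and is usually run without CFG.
    image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
    image.save("result.png")


if __name__ == "__main__" and os.environ.get("RUN_SD") == "1":
    # Gated behind RUN_SD=1 so running this sketch does not trigger a
    # multi-gigabyte model download by accident.
    main()
```

Expect CPU inference to take minutes per image even with the turbo model's low step count; that is the trade-off versus the free Colab T4 path above.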