├── How to install Guide.md
└── README.md

/How to install Guide.md:
--------------------------------------------------------------------------------

# COMFYUI-ANDROID-TERMUX
Hey guys and gals, I figured out how to use ComfyUI on Android with Termux!

Install ComfyUI on Termux (Android) + PRoot

This will guide you through installing ComfyUI on Termux (Android) + PRoot Distro. Make sure you have a high-end phone to actually make this usable. On a phone with 8 GB RAM, launching the webui alone takes at least ~2 GB RAM, making it impossible to load any model and process further.

1. Prerequisites

First you have to install Termux and install PRoot. Then install and log in to Ubuntu in PRoot:

pkg update && pkg upgrade -y && termux-setup-storage &&
pkg install wget -y && pkg install git -y && pkg install proot -y &&
cd ~ && git clone https://github.com/MFDGaming/ubuntu-in-termux.git && cd ubuntu-in-termux && chmod +x ubuntu.sh && ./ubuntu.sh -y && ./startubuntu.sh

2. Installing ComfyUI

Run the commands below sequentially as the root user in Ubuntu.

Install basic tools:

apt update && apt upgrade -y && apt-get install curl git gcc make build-essential python3 python3-dev python3-distutils python3-pip python3-venv python-is-python3 -y

Install required libraries:

apt-get install libgl1 libglib2.0-0 libsm6 libxrender1 libxext6 -y

Clone the repository:

git clone https://github.com/comfyanonymous/ComfyUI

Change the current directory:

cd ComfyUI

'Fix' the issue with Python running in PRoot:

export ANDROID_DATA=anything

Install the required Python packages:

pip install -r requirements.txt

Launch the webui. The first launch takes some time to finish installation, then everything should be fine:

python main.py --cpu

Navigate to the webui in your browser:

http://127.0.0.1:8188

To start again after rebooting Termux (after the first installation):

cd ubuntu-in-termux && ./startubuntu.sh
cd ComfyUI && python main.py --cpu

--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------

16 GB RAM is required for ComfyUI to run newer quantized models like Wan, Flux 2, Hunyuan Video 1.5, etc. I upgraded just for this reason and it's worth every penny 😉

If you get a "cd: too many arguments" error, just go to the original decord GitHub repository and copy the install code for the 2nd step from there.

Before installing decord, first install Ubuntu, then the dependencies, then ComfyUI and requirements.txt. I put part of the decord dependencies in the main ComfyUI dependencies script, so it will be easier to install decord afterwards.

Install Decord

1st:

git clone --recursive https://github.com/dmlc/decord

2nd (press Enter on make!):

cd decord
mkdir build && cd build
cmake .. -DUSE_CUDA=0 -DCMAKE_BUILD_TYPE=Release
make

3rd:

cd -
cd python
python3 setup.py install --user

I found an arg, --cache-none, that reduces RAM usage. I didn't know this existed; I'm testing it now and I'll be back with results 👍

Enjoy LTXV on Android Termux!!! 😉

Wan 2.1, Wan 2.2, Hunyuan and LTXV work, but only up to 256x256 and 512x512 on 12 GB RAM. Use the GGUF versions!
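
After the python3 setup.py install --user step, a quick sanity check (my addition, not part of the original notes) is to confirm the decord Python bindings import cleanly:

# assumes the build and install steps above finished without errors
python3 -c "import decord; print('decord', decord.__version__)"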
Before installing the virtual environment, make sure you install Ubuntu in Termux first.

Looks like Python 3.12 actually works!! You have to create a virtual environment, so here is the guide:

To run ComfyUI in a separate environment on Termux with Python 3.10.11, follow these steps:

Install Required Packages
First, make sure your Termux is updated and install the necessary packages:

apt update -y && apt upgrade -y && apt install python3-full git ffmpeg

Install & Set Up a Virtual Environment
Create and activate a virtual environment:

python3 -m venv comfyui-env
source comfyui-env/bin/activate

Clone the ComfyUI Repository
Download ComfyUI from GitHub:

git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

Install Dependencies
Since you are using CPU only, install the necessary requirements:

pip install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install --no-cache-dir -r requirements.txt

(Optional) Deactivate the Environment
If you want to exit the virtual environment, run:

deactivate

Additional Notes:

This setup keeps ComfyUI in a separate environment, avoiding conflicts with system packages.

If your Termux does not have python-full=3.10.11 available, consider using a PRoot distro like Ubuntu to run Python 3.10.11.

If you run into RAM issues, consider adding swap memory in Termux.

Prerequisites
First you have to install Termux and install PRoot. Then install and log in to Ubuntu in PRoot:

pkg update -y && pkg upgrade && pkg install wget curl proot tar -y && wget https://raw.githubusercontent.com/AndronixApp/AndronixOrigin/master/Installer/Ubuntu22/ubuntu22.sh -O ubuntu22.sh && chmod +x ubuntu22.sh && bash ubuntu22.sh

Installing ComfyUI
Run the commands below sequentially as the root user in Ubuntu.

Install basic tools (one long chained command, broken across lines here with trailing backslashes for readability; it can still be pasted as a whole):

apt update && apt upgrade -y && \
apt-get install curl git gcc make build-essential python3 python3-dev python3-pip python3-venv python-is-python3 -y && \
pip install ffmpeg && apt dist-upgrade -y && apt install wget && \
apt-get install libgl1 libglib2.0 libsm6 libxrender1 libxext6 -y && \
apt-get install google-perftools && apt install libgoogle-perftools-dev && \
pip install moviepy==1.0.3 && pip install accelerate && pip install setuptools && \
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu && \
apt-get install git-lfs && pip install diffusers && pip install gguf && pip install numba && pip install pynvml && \
pip install wheel && pip install docutils && pip install use && pip install numpy._utils && \
pip install "fonttools>=4.22.0" && pip install simpleeval && pip install numexpr && \
pip install insightface && pip install open-clip-torch && pip install einops && pip install cython && \
pip install polygraphy && pip install torchsde && pip install wget && \
apt-get install libavformat-dev libavfilter-dev libavdevice-dev ffmpeg && pip install cmake && \
apt-get install -y build-essential python3-dev python3-setuptools make cmake && \
pip install omegaconf && pip install pandas && apt install git-lfs && \
pip install ip_adapter && pip install accelerator && pip install numpy && pip install gradio && \
pip install transformers diffusers torch huggingface_hub && pip install exiv2 && pip install music-tag && \
pip install compel && pip install gfpgan && pip install tomesd && pip install peft && \
pip install 'controlnet_aux' && pip install 'photomaker' && pip install compiler && \
pip install torchtext==0.15.2 && pip install soundfile && \
apt install protobuf-compiler libprotobuf-dev && pip install protobuf && pip install ninja && pip install hpsv2 && \
apt install python3-tk && apt install qtbase5-dev && pip install mistune && pip install dynamicprompts
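
When that finishes, a quick check (my addition, not part of the original notes) that the CPU-only PyTorch wheels were picked up correctly:

# should print the torch version and cuda: False inside PRoot/Termux
python3 -c "import torch; print(torch.__version__, 'cuda:', torch.cuda.is_available())"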
After you finish installing, log out of Ubuntu by typing logout and pressing Enter, then run this command: pkg install python. Then check which version of Python you have with: python --version

Clone the repository:

git clone https://github.com/comfyanonymous/ComfyUI

I found a modded ComfyUI that only uses the CPU. This is perfect for Android; hopefully it's faster!

Fast installation (Linux and macOS; for Windows, see below instructions). It includes ComfyUI Manager:

git clone https://github.com/ArdeniusAI/ComfyUI-cpu.git

cd ComfyUI-cpu

./install_comfyui-cpu.sh

Once installed, run the following to start:

./start_comfyui-cpu.sh

OR

cd ComfyUI

export ANDROID_DATA=anything

Install the required Python packages:

pip install -r requirements.txt

Navigate to the webui in your browser:

http://127.0.0.1:8188

To start after rebooting Termux (after the first installation):

./start-ubuntu22.sh

cd ComfyUI && python main.py --cpu --disable-xformers --cpu-vae --disable-cuda-malloc --fp8_e4m3fn-unet --fp8_e4m3fn-text-enc --fast --disable-smart-memory --supports-fp8-compute --async-offload --use-split-cross-attention --force-fp16 --cache-none

FP16 for IPAdapter V2:

cd ComfyUI && python main.py --cpu --cpu-vae --disable-cuda-malloc --use-pytorch-cross-attention --force-fp16 --disable-smart-memory --async-offload --cache-none --dont-upcast-attention --fp16-vae

COMFYUI-CPU

./start_comfyui-cpu.sh

Install ComfyUI Manager

cd ComfyUI

cd custom_nodes

git clone https://github.com/ltdrdata/ComfyUI-Manager.git

After you install Manager, make sure to install the Efficiency Nodes!! They are better for performance and are less messy!!!

ComfyUI CLI Install

pip install comfy-cli

comfy --install-completion

Install Ollama

Update Ubuntu: apt update && apt upgrade -y

Install deps: apt install git wget curl jq

Install Ollama: curl -fsSL https://ollama.com/install.sh | sh

Start Ollama: ollama serve &

Run a model that will work for your device, like Qwen2 0.5B: ollama run qwen2:0.5b --verbose
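
Once ollama serve is running, you can sanity-check it before wiring it into ComfyUI. A minimal sketch (my addition), assuming the default Ollama port 11434 and that qwen2:0.5b has already been pulled:

# ask the local Ollama server for a short completion; it should answer with JSON
curl http://127.0.0.1:11434/api/generate -d '{"model": "qwen2:0.5b", "prompt": "hello", "stream": false}'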
PixArt Sigma XL 2 works on 12 GB RAM!!!

Go here to see how to install it and the required files:

https://civitai.com/models/420163/abominable-spaghetti-workflow-pixart-sigma

The SD3 clip-included model works. It's only 5 GB and makes stunning portraits in 6-7 steps. It took an hour to do 7 steps, but that's fine with me. 12 GB RAM is required, but if you have a phone with more RAM you'll be able to use SD3 with the T5-XXL included. 😉

You can make 512x512 and 256x256 images; they are a lot faster, especially 256x256.

The SD3 version with T5-XXL included doesn't work on 12 GB RAM, because Termux actually says I have 10602 MB of RAM, not 12 GB, while my system says I have 12 GB. So either Termux is not seeing all my RAM or I got scammed when buying my phone, I don't know, but for now I can only use the 5 GB model.

I found an AnimateDiff workflow that's twice as fast!!! It's better than the Gen 2 nodes 😁 Gen 1 AnimateDiff nodes are faster and use less memory. Here is the link; just copy the code, paste all the JSON into a text editor and save it as a .json file (or fetch it directly, see the wget sketch further below). I can generate a batch of 8 at 2-4 steps in 7 to 10 minutes!!!

https://github.com/dezi-ai/ComfyUI-AnimateLCM/blob/main/workflows/animatelcm.json

Make sure you use taesd and only 320x512. I tried 512x512 but it crashed. We finally can use text-to-video offline on portable devices and don't have to wait hours for a generated video, plus it's LCM 😄

Ollama LLMs work, but only lightweight 0.5-3B models, as bigger models use more RAM. You can run multiple sessions in Termux to use it. First install Ollama in Termux after installing Ubuntu:

curl -fsSL https://ollama.com/install.sh | sh

Then type ollama serve, then start a new Termux session and log in to Ubuntu.

Then go to the Ollama website, pick any 0.5-3B model and copy the name of the model.

Then make sure you know the correct model weight, then in the new Termux session type:

ollama pull dolphin-phi:2.7b

Then, after the download finishes, start ComfyUI, then download an IF_AI node workflow and download all its requirements. Then download the Marco file manager from the Google Play Store and move the IF_AI folder from ComfyUI-IF_AI_tools into the ComfyUI input folder (ComfyUI/input/). After that you may have trouble seeing your Ollama models; don't worry, they are there. Just tap localhost, put in 127.0.0.1 and restart the ComfyUI server. After that, tap on a different model and some models should appear; go back to Ollama and the models should be there.

FLUX WORKS!!!!! All GGUF versions work!! We can make stunning images with Schnell Q8, T5-XXL FP16 and only 1 step XD!! Use the Q8_0 Flux version and T5-XXL FP16 for the best quality. You only need this workflow (download v2.0):

https://civitai.com/models/652884/flux-gguf-android-termux

Update 10/29/24: SD3.5 Large, Large Turbo and Medium all work. Use SD3.5 Large GGUF, and if you have more RAM use SD3.5 Medium with GGUF T5-XXL Q5_0 XD. It's faster than Flux!!

Update 01/19/25: I finally figured out how to fix bad-colored, low-quality animations with AnimateDiff. All you gotta do is put all motion model samplers on lcm, then on the KSampler put the sampler on lcm and the scheduler on ddim_uniform. Use motion model v3 fp16; it is faster than AnimateDiff Lightning 😀 Just use the LCM LoRA for non-LCM models. Also put context stride at 3 and overlap at 8, and leave length at 16.
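
If copy-pasting the JSON is fiddly, the AnimateLCM workflow linked above can also be pulled straight from GitHub. A minimal sketch (my addition), assuming the raw.githubusercontent.com path matches the linked repo layout:

# download the AnimateLCM workflow JSON directly instead of copy-pasting it
wget https://raw.githubusercontent.com/dezi-ai/ComfyUI-AnimateLCM/main/workflows/animatelcm.json -O animatelcm.json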
To install matplotlib.pyplot in Termux on Android (an Ubuntu Linux environment isn't necessary), follow these steps:

1. Update Termux Packages

pkg update
pkg upgrade

2. Install Required Dependencies

pkg install python python-numpy libjpeg-turbo libpng

3. Install Matplotlib via pip

pip install matplotlib --no-build-isolation

The --no-build-isolation flag avoids build issues in Termux's environment.

Troubleshooting Common Issues

If installation fails:

Ensure all dependencies are installed:

pkg install freetype pkg-config libcrypt

Install build tools (if needed for dependencies):

pkg install clang make binutils

Reinstall with verbose mode (if errors persist):

pip install -v matplotlib --no-build-isolation

Test the Installation

Launch Python:

python

In the Python shell:

import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [4, 5, 1])
plt.show()

If no GUI appears (common in Termux), save the plot instead:

plt.savefig("test.png")

View the image with a file manager or a Termux command (e.g., termux-open test.png).

Notes

GUI support: Termux lacks a native GUI, so plt.show() won't display plots interactively. Save plots as files instead.

Lightweight alternative: use termplotlib for text-based plots in the terminal:

pip install termplotlib

This method ensures a smooth installation of matplotlib in Termux without requiring an Ubuntu environment.

For reference, here is the full list of launch options that main.py accepts:

[-h] [--listen [IP]] [--port PORT] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE]
[--enable-cors-header [ORIGIN]] [--max-upload-size MAX_UPLOAD_SIZE] [--base-directory BASE_DIRECTORY]
[--extra-model-paths-config PATH [PATH ...]] [--output-directory OUTPUT_DIRECTORY] [--temp-directory TEMP_DIRECTORY]
[--input-directory INPUT_DIRECTORY] [--auto-launch] [--disable-auto-launch] [--cuda-device DEVICE_ID]
[--cuda-malloc | --disable-cuda-malloc] [--force-fp32 | --force-fp16]
[--fp32-unet | --fp64-unet | --bf16-unet | --fp16-unet | --fp8_e4m3fn-unet | --fp8_e5m2-unet | --fp8_e8m0fnu-unet]
[--fp16-vae | --fp32-vae | --bf16-vae] [--cpu-vae]
[--fp8_e4m3fn-text-enc | --fp8_e5m2-text-enc | --fp16-text-enc | --fp32-text-enc | --bf16-text-enc]
[--force-channels-last] [--directml [DIRECTML_DEVICE]] [--oneapi-device-selector SELECTOR_STRING]
[--disable-ipex-optimize] [--supports-fp8-compute] [--preview-method [none,auto,latent2rgb,taesd]]
[--preview-size PREVIEW_SIZE] [--cache-classic | --cache-lru CACHE_LRU | --cache-none]
[--use-split-cross-attention | --use-quad-cross-attention | --use-pytorch-cross-attention | --use-sage-attention | --use-flash-attention]
[--disable-xformers] [--force-upcast-attention | --dont-upcast-attention]
[--gpu-only | --highvram | --normalvram | --lowvram | --novram | --cpu] [--reserve-vram RESERVE_VRAM]
[--async-offload] [--default-hashing-function {md5,sha1,sha256,sha512}] [--disable-smart-memory] [--deterministic]
[--fast [FAST ...]] [--mmap-torch-files] [--dont-print-server] [--quick-test-for-ci] [--windows-standalone-build]
[--disable-metadata] [--disable-all-custom-nodes] [--whitelist-custom-nodes WHITELIST_CUSTOM_NODES [WHITELIST_CUSTOM_NODES ...]]
[--disable-api-nodes] [--multi-user] [--verbose [{DEBUG,INFO,WARNING,ERROR,CRITICAL}]] [--log-stdout]
[--front-end-version FRONT_END_VERSION] [--front-end-root FRONT_END_ROOT] [--user-directory USER_DIRECTORY]
[--enable-compress-response-body] [--comfy-api-base COMFY_API_BASE] [--database-url DATABASE_URL]
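
These flag combinations get long to retype, so one option (my sketch, with a hypothetical script name, assuming ComfyUI lives in ~/ComfyUI) is to keep your preferred launch line in a small script:

# create a reusable launcher; adjust the path and flags to taste
cat > ~/start-comfy.sh << 'EOF'
#!/bin/sh
cd ~/ComfyUI
export ANDROID_DATA=anything  # the PRoot Python fix from the guide above
exec python main.py --cpu --cpu-vae --disable-smart-memory --cache-none
EOF
chmod +x ~/start-comfy.sh
~/start-comfy.sh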
UPGRADE PYTHON TO 3.13 (the Ubuntu default is 3.10) and you'll be able to use new nodes like LTXV.

apt install wget build-essential libreadline-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev

Then, download the most recent version of Python 3.13 (so far Python-3.13.0.tar.xz) from the page below:

https://www.python.org/ftp/python/3.13.0/

Download Python 3.13

Next, extract the source tarball in a file manager, then right-click on the extracted folder and select "Open in terminal" to open that folder as the working directory in a terminal (or do everything from the terminal; see the sketch at the end of this file).

In the terminal, configure the source via:

./configure --enable-optimizations

Optionally, you can run ./configure --help to print more configure options.

Then, compile with 4 threads in parallel:

make -j4

And finally install Python 3.13:

make install

Finally, verify via: python3.13 --version and pip3.13 --version

Uninstall: for a Python 3.13 installed from a PPA, open a terminal and run this command to remove it:

apt remove --autoremove python3.13

Also remove the PPA by running:

sudo add-apt-repository --remove ppa:deadsnakes/ppa

If you compiled it from source, try running the command below from the source folder:

sudo make uninstall

Or manually delete all the corresponding files and folders (run whereis python3.13 to find them).

LTXV is amazing, it's perfect for CPU users: it's fast and uses less RAM. 9.6 with the 9.8 13B VAE is very fast, like sonic speed, and you can do higher resolutions with LTXV on Android too.

If you get an error saying sdpa is not supported when trying an LLM, just do this:

pip install transformers==4.49.0

Transformers versions higher than 4.49 don't have sdpa support.
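
As mentioned in the Python 3.13 section above, here is a terminal-only version of the download/extract/build steps (a sketch using the exact tarball linked above; adjust the filename if you grab a newer release):

# fetch, unpack and build Python 3.13.0 from source without a file manager
cd ~
wget https://www.python.org/ftp/python/3.13.0/Python-3.13.0.tar.xz
tar -xf Python-3.13.0.tar.xz
cd Python-3.13.0
./configure --enable-optimizations
make -j4
make install
python3.13 --version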