├── LICENSE
├── README.md
├── UNetExtractor.py
└── fluxeample.png

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2024 JOE FAULKNER

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# UNet Extractor and Remover for Stable Diffusion 1.5, SDXL, and FLUX

This Python script (UNetExtractor.py) processes SafeTensors files for Stable Diffusion 1.5 (SD 1.5), Stable Diffusion XL (SDXL), and FLUX models. It extracts the UNet into a separate file and creates a second file with the remaining model components (everything except the UNet).

![FLUX Example](https://raw.githubusercontent.com/captainzero93/extract-unet-safetensor/main/fluxeample.png)

Above example: `UNetExtractor.py flux1-dev.safetensors flux1-dev_unet.safetensors flux1-dev_non_unet.safetensors --model_type flux --verbose`

## AUTOMATIC1111 Extension for UNet Loading

We've developed an extension for AUTOMATIC1111's Stable Diffusion Web UI that lets you load and use the extracted UNet files directly within the interface. The extension integrates with the txt2img workflow, so you get the space-saving benefits of separated UNet files without giving up functionality.

### Extension Features:
- Load separate UNet and non-UNet files
- Combine them on the fly for use in generation
- Compatible with files created by this UNet Extractor tool
- Integrated into the AUTOMATIC1111 Web UI for easy use

To use the extension, please visit our [UNet Loader Extension Repository](https://github.com/captainzero93/load-extracted-unet-automatic1111) for installation and usage instructions.

## Why UNet Extraction?

Using UNets instead of full checkpoints can save a significant amount of disk space, especially for models that ship with large text encoders. This is particularly beneficial for models like FLUX, which has a very large parameter count. Here's why:

- Space Efficiency: Full checkpoints bundle the UNet, the text encoder(s) (e.g., CLIP), and the VAE together. By extracting the UNet, you can reuse the same text encoder across multiple models, saving gigabytes of space per additional model.
- Flexibility: You can download the text encoder once and use it with multiple UNet models, reducing redundancy and saving space.
- Practical Example: Multiple full checkpoints of large models like FLUX can quickly consume tens of gigabytes. Using extracted UNets instead can significantly reduce storage requirements.
- Future-Proofing: As models continue to grow in complexity, the space-saving benefits of using UNets may become even more significant.

This tool helps you extract UNets from full checkpoints, allowing you to take advantage of these space-saving benefits across SD 1.5, SDXL, and open-source FLUX models.

## Features

- Supports UNet extraction for SD 1.5, SDXL, and open-source FLUX models, including:
  - FLUX Dev: A mid-range version with open weights for non-commercial use.
  - FLUX Schnell: A faster version optimized for lower-end GPUs.
- Extracts UNet tensors from SafeTensors files
- Creates a separate SafeTensors file with non-UNet components
- Saves the extracted UNet as a new SafeTensors file
- Command-line interface for easy use
- Optional CUDA support for faster processing on compatible GPUs
- Automatic thread detection for optimal CPU usage
- Improved memory management with RAM offloading
- Multi-threading support for faster processing
- User choice between CPU-only and GPU-assisted processing
- GPU and CPU usage limiting options
- Enhanced error handling and logging
- Detailed debugging information for troubleshooting
- AUTOMATIC1111 extension for seamless integration with Stable Diffusion Web UI

## Requirements

- Python 3.6+
- safetensors library
- PyTorch (the script loads tensors with `framework="pt"`; a CUDA-enabled build is only needed for GPU processing)
- psutil (optional, for enhanced system resource reporting)

## Installation

1. Clone this repository or download the `UNetExtractor.py` script.

2. It's recommended to create a new virtual environment:
   ```
   python -m venv unet_extractor_env
   ```

3. Activate the virtual environment:
   - On Windows:
     ```
     unet_extractor_env\Scripts\activate
     ```
   - On macOS and Linux:
     ```
     source unet_extractor_env/bin/activate
     ```

4. Install the required libraries with specific versions for debugging:
   ```
   pip install numpy==1.23.5 torch==2.0.1 safetensors==0.3.1
   ```

5. If you're using CUDA, install the CUDA-enabled version of PyTorch:
   ```
   pip install torch==2.0.1+cu117 -f https://download.pytorch.org/whl/cu117/torch_stable.html
   ```
   Replace `cu117` with your CUDA version (e.g., `cu116`, `cu118`) if different.

6. Optionally, install psutil for enhanced system resource reporting:
   ```
   pip install psutil==5.9.0
   ```

Note: The versions above are examples and may need to be adjusted based on your system requirements and CUDA version. These specific versions are recommended for debugging purposes as they are known to work together. For regular use, you may use the latest versions of these libraries.
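
To confirm the environment is set up correctly before running the extractor, you can run a quick check like the one below. This is a minimal sketch, not part of the repository; the filename `check_env.py` is just a suggestion, and it mirrors the optional-PyTorch pattern the script itself uses.

```python
# check_env.py -- optional sanity check for the installation above.
# Run it inside the virtual environment you created.
import safetensors

print(f"safetensors version: {safetensors.__version__}")

try:
    import torch
    print(f"PyTorch version: {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        # Name of the first visible GPU, as the extractor will report it
        print(f"GPU: {torch.cuda.get_device_name(0)}")
except ImportError:
    print("PyTorch not installed -- install it before running the extractor.")
```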
## Usage

Run the script from the command line with the following syntax:

```
python UNetExtractor.py <input_file> <unet_output_file> <non_unet_output_file> --model_type {sd15,sdxl,flux} [--verbose] [--num_threads <n>] [--gpu_limit <percentage>] [--cpu_limit <percentage>]
```

### Arguments

- `<input_file>`: Path to the input SafeTensors file (full model)
- `<unet_output_file>`: Path where the extracted UNet will be saved
- `<non_unet_output_file>`: Path where the model without UNet will be saved
- `--model_type`: Specify the model type: `sd15` for Stable Diffusion 1.5, `sdxl` for Stable Diffusion XL, or `flux` for FLUX models
- `--verbose`: (Optional) Enable verbose logging for detailed process information
- `--num_threads`: (Optional) Number of threads to use for processing. If not specified, the script automatically detects the optimal number of threads.
- `--gpu_limit`: (Optional) Limit GPU usage to this percentage (default: 90)
- `--cpu_limit`: (Optional) Limit CPU usage to this percentage (default: 90)

### Examples

For Stable Diffusion 1.5 using CUDA (if available):
```
python UNetExtractor.py path/to/sd15_model.safetensors path/to/output_sd15_unet.safetensors path/to/output_sd15_non_unet.safetensors --model_type sd15 --verbose
```

For Stable Diffusion XL using CUDA (if available):
```
python UNetExtractor.py path/to/sdxl_model.safetensors path/to/output_sdxl_unet.safetensors path/to/output_sdxl_non_unet.safetensors --model_type sdxl --verbose
```

For FLUX models using CUDA (if available) with 8 threads and an 80% GPU usage limit:
```
python UNetExtractor.py path/to/flux_model.safetensors path/to/output_flux_unet.safetensors path/to/output_flux_non_unet.safetensors --model_type flux --verbose --num_threads 8 --gpu_limit 80
```

## How It Works

1. The script checks for CUDA availability (if PyTorch is installed) and prompts you to choose between CPU-only and GPU-assisted processing.
2. It determines the optimal number of threads based on the system's CPU cores (if not manually specified).
3. It opens the input SafeTensors file using the `safetensors` library.
4. The script iterates through all tensors in the file, separating UNet-related tensors from other tensors.
5. For SD 1.5 models, it removes the "model.diffusion_model." prefix from UNet tensor keys.
6. For SDXL and FLUX models, it keeps the original key names for both UNet and non-UNet tensors.
7. The script uses multi-threading to process tensors concurrently, improving performance.
8. GPU and CPU usage are limited based on user-specified percentages or default values.
9. The extracted UNet tensors are saved to a new SafeTensors file.
10. The remaining non-UNet tensors are saved to a separate SafeTensors file.
11. RAM offloading is implemented to manage memory usage, especially for large models.

## Using Extracted UNets with AUTOMATIC1111

After extracting UNet files using this tool, you can easily use them in AUTOMATIC1111's Stable Diffusion Web UI:

1. Install our UNet Loader extension in your AUTOMATIC1111 setup.
2. Place the extracted UNet and non-UNet files in the extension's designated folder.
3. Use the extension's interface to select and load your desired UNet and non-UNet components.
4. Generate images using txt2img as usual, now benefiting from the space savings and flexibility of separated UNet files (a tensor-level sketch of the recombination idea follows below).
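
The extension handles recombination for you, but the underlying idea is a simple key-space merge. The sketch below is illustrative only (it is not the extension's actual implementation); it shows how an SD 1.5 UNet/non-UNet pair, using the example output names from above, could be merged back into a single checkpoint by restoring the `model.diffusion_model.` prefix that the extractor strips for `sd15` models.

```python
# recombine_sd15.py -- illustrative sketch, not part of this repository.
from safetensors.torch import load_file, save_file

# Example output files from the extraction step above (placeholder paths).
unet = load_file("output_sd15_unet.safetensors")
non_unet = load_file("output_sd15_non_unet.safetensors")

# The extractor strips "model.diffusion_model." from sd15 UNet keys,
# so restore it here; non-UNet tensors keep their original keys.
combined = {f"model.diffusion_model.{key}": tensor for key, tensor in unet.items()}
combined.update(non_unet)

save_file(combined, "recombined_sd15.safetensors")
```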
For detailed instructions, please refer to the [UNet Loader Extension Repository](https://github.com/captainzero93/unet-loader-extension).

## Debugging Information

When running the script with the `--verbose` flag, you'll see detailed debugging information, including:

- System resource information (CPU cores, RAM, GPU)
- Processing details for each tensor (key, shape, classification)
- A running count of UNet and non-UNet tensors
- GPU memory usage (if applicable)
- Detailed error messages and stack traces in case of exceptions

Example debug output:

```
2024-08-17 21:06:30,500 - DEBUG - Current UNet count: 770
2024-08-17 21:06:30,500 - DEBUG - ---
2024-08-17 21:06:31,142 - DEBUG - Processing key: vector_in.out_layer.weight
2024-08-17 21:06:31,142 - DEBUG - Tensor shape: torch.Size([3072, 3072])
2024-08-17 21:06:31,172 - DEBUG - Classified as non-UNet tensor
2024-08-17 21:06:31,172 - DEBUG - Current UNet count: 770
2024-08-17 21:06:31,172 - DEBUG - ---
2024-08-17 21:06:31,203 - INFO - Total tensors processed: 780
2024-08-17 21:06:31,203 - INFO - UNet tensors: 770
2024-08-17 21:06:31,203 - INFO - Non-UNet tensors: 10
2024-08-17 21:06:31,203 - INFO - Unique key prefixes found: double_blocks, final_layer, guidance_in, img_in, single_blocks, time_in, txt_in, vector_in
```

This output helps identify issues with tensor classification, resource usage, and overall processing flow.

## Notes

- The script now prompts the user to choose between CPU-only and GPU-assisted processing if CUDA is available.
- Automatic thread detection is used if the number of threads is not specified.
- GPU and CPU usage can be limited to keep the system responsive during processing.
- Enhanced error handling and logging provide more informative output during processing.
- The disk space check has been removed to avoid potential errors on some systems.

## Troubleshooting

If you encounter any issues:

1. Ensure you're using the recommended library versions as specified in the Installation section.
2. Run the script with the `--verbose` flag to get detailed debugging information.
3. Check for compatibility between your CUDA version and the installed PyTorch version.
4. If you encounter a NumPy version compatibility error with PyTorch, such as:
   ```
   A module that was compiled using NumPy 1.x cannot be run in
   NumPy 2.0.1 as it may crash.
   ```
   try downgrading NumPy to version 1.23.5 as recommended in the installation instructions.
5. Ensure you have the latest version of the `safetensors` library installed.
6. Check that your input file is a valid SafeTensors file for the specified model type (the snippet below shows a quick way to inspect its key prefixes).
7. Make sure you have read permissions for the input file and write permissions for the output directory.
8. If you're having issues with CUDA, try running with CPU-only processing to see if that resolves the problem.
9. If you encounter any "module not found" errors, ensure all required libraries are installed in your virtual environment.
10. Check the console output for any error messages or stack traces that can help identify the issue.
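
For step 6 in particular, it can help to list the unique key prefixes in your input file before extraction. The following is a minimal sketch (the path is a placeholder for your own file): an SD 1.5 or SDXL checkpoint should include a `model` prefix (from its `model.diffusion_model.` keys), while a FLUX checkpoint typically shows prefixes like `double_blocks` and `single_blocks`, matching the debug output above.

```python
# inspect_prefixes.py -- minimal sketch; replace the path with your file.
from safetensors import safe_open

with safe_open("path/to/model.safetensors", framework="pt", device="cpu") as f:
    # Collect the text before the first "." of every tensor key.
    prefixes = sorted({key.split(".")[0] for key in f.keys()})

print(f"{len(prefixes)} unique key prefixes: {', '.join(prefixes)}")
```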
If you continue to experience issues after trying these steps, please open an issue on the GitHub repository with details about your system configuration, the command you're using, and the full error message or debugging output.

## Contributing

Contributions, issues, and feature requests are welcome! Feel free to check the [issues page](https://github.com/captainzero93/extract-unet-safetensor/issues) if you want to contribute.

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

## Citation

If you use UNet Extractor and Remover in your research or projects, please cite it as follows:

```
[For commercial licensing contact: cyberjunk77@gmail.com] (captainzero93). (2024). UNet Extractor and Remover for Stable Diffusion 1.5, SDXL, and FLUX. GitHub. https://github.com/captainzero93/unet-extractor
```

## Acknowledgements

- This script uses the `safetensors` library developed by the Hugging Face team.
- Inspired by Stable Diffusion and the FLUX model community.
- Special thanks to all contributors and users who have provided feedback and suggestions.
- u/DBacon1052
- u/BlastedRemnants
- rauldlnx10 on GitHub for the Forge extension.
- All users and contributors of the AUTOMATIC1111 Stable Diffusion Web UI community.
--------------------------------------------------------------------------------

/UNetExtractor.py:
--------------------------------------------------------------------------------
"""
UNet Extractor for Stable Diffusion 1.5, SDXL, and FLUX models

This script processes SafeTensors files to extract UNet components.

For enhanced system resource reporting, it's recommended to install psutil:
    pip install psutil

If psutil is not installed, the script will still work but with limited
resource reporting capabilities.
"""

import argparse
import logging
import sys
from pathlib import Path
from safetensors import safe_open
from safetensors.torch import save_file
import gc
import threading
import queue
import multiprocessing
import time
import os
import traceback

try:
    import psutil
    PSUTIL_AVAILABLE = True
except ImportError:
    PSUTIL_AVAILABLE = False

try:
    import torch
    CUDA_AVAILABLE = torch.cuda.is_available()
except ImportError:
    CUDA_AVAILABLE = False

def setup_logging(verbose):
    level = logging.DEBUG if verbose else logging.INFO
    logging.basicConfig(level=level, format='%(asctime)s - %(levelname)s - %(message)s')

def check_resources():
    cpu_count = os.cpu_count() or 1

    if PSUTIL_AVAILABLE:
        total_ram = psutil.virtual_memory().total / (1024 ** 3)  # in GB
        available_ram = psutil.virtual_memory().available / (1024 ** 3)  # in GB
    else:
        total_ram = "Unknown"
        available_ram = "Unknown"

    gpu_info = "Not available"
    if CUDA_AVAILABLE:
        gpu_info = f"{torch.cuda.get_device_name(0)}, {torch.cuda.get_device_properties(0).total_memory / (1024**3):.2f} GB VRAM"

    return cpu_count, total_ram, available_ram, gpu_info

def get_user_preference():
    print("\nResource Allocation Options:")
    print("1. CPU-only processing")
    print("2. GPU-assisted processing with CPU support")

    while True:
        choice = input("Enter your choice (1 or 2): ").strip()
        if choice in ['1', '2']:
            return choice == '2'
        print("Invalid choice. Please enter 1 or 2.")

def is_unet_tensor(key, model_type):
    if model_type == "sd15":
        return key.startswith("model.diffusion_model.")
    elif model_type == "flux":
        return any(key.startswith(prefix) for prefix in [
            "unet.", "diffusion_model.", "model.diffusion_model.",
            "double_blocks.", "single_blocks.", "final_layer.",
            "guidance_in.", "img_in."
        ])
    elif model_type == "sdxl":
        return key.startswith("model.diffusion_model.")
    return False

def process_tensor(key, tensor, model_type, unet_tensors, non_unet_tensors, unet_count, verbose):
    if is_unet_tensor(key, model_type):
        if model_type == "sd15":
            new_key = key.replace("model.diffusion_model.", "")
            unet_tensors[new_key] = tensor.cpu()  # Move to CPU
        else:
            unet_tensors[key] = tensor.cpu()  # Move to CPU
        with unet_count.get_lock():
            unet_count.value += 1
        if verbose:
            logging.debug("Classified as UNet tensor")
    else:
        non_unet_tensors[key] = tensor.cpu()  # Move to CPU
        if verbose:
            logging.debug("Classified as non-UNet tensor")

    if verbose:
        logging.debug(f"Current UNet count: {unet_count.value}")
        logging.debug("---")

def save_tensors(tensors, output_file):
    try:
        save_file(tensors, output_file)
        logging.info(f"Successfully saved to {output_file}")
    except Exception as e:
        logging.error(f"Error saving to {output_file}: {str(e)}")
        logging.debug(traceback.format_exc())
        raise

def process_model(input_file, unet_output_file, non_unet_output_file, model_type, use_gpu, verbose, num_threads, gpu_limit, cpu_limit):
    device = "cuda" if use_gpu and CUDA_AVAILABLE else "cpu"
    logging.info(f"Processing {input_file} on {device}")
    logging.info(f"Model type: {model_type}")
    logging.info(f"Using {num_threads} threads")
    if use_gpu:
        logging.info(f"GPU usage limit: {gpu_limit}%")
    logging.info(f"CPU usage limit: {cpu_limit}%")

    try:
        with safe_open(input_file, framework="pt", device=device) as f:
            unet_tensors = {}
            non_unet_tensors = {}
            total_tensors = 0
            unet_count = multiprocessing.Value('i', 0)
            key_prefixes = set()

            tensor_queue = queue.Queue()

            def worker():
                while True:
                    item = tensor_queue.get()
                    if item is None:
                        break
                    key, tensor = item
                    process_tensor(key, tensor, model_type, unet_tensors, non_unet_tensors, unet_count, verbose)
                    tensor_queue.task_done()

                    # Implement CPU limiting: sleep between tasks in
                    # proportion to the requested headroom
                    if cpu_limit < 100:
                        time.sleep((100 - cpu_limit) / 100 * 0.1)

            threads = []
            for _ in range(num_threads):
                t = threading.Thread(target=worker)
                t.start()
                threads.append(t)

            for key in f.keys():
                total_tensors += 1
                tensor = f.get_tensor(key)
                key_prefix = key.split('.')[0]
                key_prefixes.add(key_prefix)

                if verbose:
                    logging.debug(f"Processing key: {key}")
                    logging.debug(f"Tensor shape: {tensor.shape}")

                tensor_queue.put((key, tensor))

                # Implement GPU limiting: measure allocated memory against
                # total VRAM (avoids division by zero before any allocation)
                if device == "cuda" and gpu_limit < 100:
                    total_vram = torch.cuda.get_device_properties(0).total_memory
                    current_gpu_usage = torch.cuda.memory_allocated() / total_vram * 100
                    if current_gpu_usage > gpu_limit:
                        torch.cuda.empty_cache()
                        time.sleep(0.1)  # Allow some time for memory to be freed

            # Signal threads to exit
            for _ in range(num_threads):
                tensor_queue.put(None)

            # Wait for all threads to complete
            for t in threads:
                t.join()

            logging.info(f"Total tensors processed: {total_tensors}")
            logging.info(f"UNet tensors: {unet_count.value}")
            logging.info(f"Non-UNet tensors: {total_tensors - unet_count.value}")
            logging.info(f"Unique key prefixes found: {', '.join(sorted(key_prefixes))}")

            if unet_count.value == 0:
                logging.warning("No UNet tensors were identified. Please check if the model type is correct.")

            logging.info(f"Saving extracted UNet to {unet_output_file}")
            save_tensors(unet_tensors, unet_output_file)

            logging.info(f"Saving model without UNet to {non_unet_output_file}")
            save_tensors(non_unet_tensors, non_unet_output_file)

            logging.info("Processing complete!")
    except Exception as e:
        logging.error(f"An error occurred during processing: {str(e)}")
        logging.debug(traceback.format_exc())
        raise
    finally:
        # Clean up GPU memory
        if device == "cuda":
            torch.cuda.empty_cache()
        gc.collect()

def main():
    parser = argparse.ArgumentParser(description="Extract UNet and create a model without UNet from a SafeTensors file for SD 1.5, SDXL, or FLUX")
    parser.add_argument("input_file", type=Path, help="Input SafeTensors file")
    parser.add_argument("unet_output_file", type=Path, help="Output SafeTensors file for UNet")
    parser.add_argument("non_unet_output_file", type=Path, help="Output SafeTensors file for model without UNet")
    parser.add_argument("--model_type", choices=["sd15", "flux", "sdxl"], required=True, help="Type of model")
    parser.add_argument("--verbose", action="store_true", help="Enable verbose logging")
    parser.add_argument("--num_threads", type=int, help="Number of threads to use for processing (default: auto-detect)")
    parser.add_argument("--gpu_limit", type=int, default=90, help="Limit GPU usage to this percentage (default: 90)")
    parser.add_argument("--cpu_limit", type=int, default=90, help="Limit CPU usage to this percentage (default: 90)")

    args = parser.parse_args()

    setup_logging(args.verbose)

    cpu_count, total_ram, available_ram, gpu_info = check_resources()
    print("\nSystem Resources:")
    print(f"CPU Cores: {cpu_count}")
    print(f"Total RAM: {total_ram}")
    print(f"Available RAM: {available_ram}")
    print(f"GPU: {gpu_info}")

    use_gpu = get_user_preference() if CUDA_AVAILABLE else False

    # Auto-detect number of threads if not specified (leave one core free;
    # os.cpu_count() can return None, so fall back to a single thread)
    if args.num_threads is None:
        args.num_threads = max(1, (os.cpu_count() or 2) - 1)

    try:
        process_model(args.input_file, args.unet_output_file, args.non_unet_output_file,
                      args.model_type, use_gpu, args.verbose, args.num_threads,
                      args.gpu_limit, args.cpu_limit)
    except Exception as e:
        logging.error(f"An error occurred: {str(e)}")
        logging.debug(traceback.format_exc())
        sys.exit(1)

if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------
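
If you prefer to drive the extractor from Python rather than the command line (for example, to batch several models), `process_model` and `setup_logging` can be imported directly. The sketch below is illustrative; it assumes UNetExtractor.py is importable from the working directory, the paths are placeholders, and `use_gpu=False` sidesteps the interactive prompt that `main()` would show.

```python
# batch_extract.py -- illustrative sketch; paths are placeholders.
from pathlib import Path
from UNetExtractor import process_model, setup_logging

setup_logging(verbose=False)  # configure logging as the CLI would

process_model(
    input_file=Path("path/to/sd15_model.safetensors"),
    unet_output_file=Path("sd15_unet.safetensors"),
    non_unet_output_file=Path("sd15_non_unet.safetensors"),
    model_type="sd15",
    use_gpu=False,   # skip the interactive CPU/GPU prompt
    verbose=False,
    num_threads=4,
    gpu_limit=90,
    cpu_limit=90,
)
```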
/fluxeample.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/captainzero93/extract-unet-safetensor/74b51460706bb16fb34351022fc95f3461057a8e/fluxeample.png
--------------------------------------------------------------------------------