├── repo
│   └── .gitkeep
├── requirements.txt
├── .gitignore
├── images
│   ├── installation-web-ui.jpg
│   └── deepfacelive-scripts-tab.jpg
├── README.md
├── install.py
└── scripts
    ├── ddetailerutils.py
    ├── dflutils.py
    ├── deepfacelab.py
    ├── deepfacelive.py
    └── command.py

/repo/.gitkeep:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | /repo/dflive/
2 | /repo/dflab/
3 | /notebooks/
4 | __pycache__
5 | .idea
6 | .ipynb_checkpoints
--------------------------------------------------------------------------------
/images/installation-web-ui.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/idinkov/sd-deepface-1111/HEAD/images/installation-web-ui.jpg
--------------------------------------------------------------------------------
/images/deepfacelive-scripts-tab.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/idinkov/sd-deepface-1111/HEAD/images/deepfacelive-scripts-tab.jpg
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## DeepFaceLab and DeepFaceLive on Stable Diffusion
2 | 
3 | This is an implementation of iperov's DeepFaceLab and DeepFaceLive for the Stable Diffusion Web UI by AUTOMATIC1111.
4 | 
5 | Note: DeepFaceLive is currently fully implemented, while DeepFaceLab support is still in the works.
6 | 
7 | It can be installed as an extension for https://github.com/AUTOMATIC1111/stable-diffusion-webui
8 | 
9 | DeepFaceLive is implemented as a script which can be accessed from the txt2img and img2img tabs.
10 | 
11 | ![DeepFaceLive script on the txt2img/img2img tabs](images/deepfacelive-scripts-tab.jpg)
12 | 
13 | DeepFaceLab has a separate tab with controls to manage workspaces and train custom models.
14 | 
15 | ## Use cases
16 | 
17 | - Swap any face generated by Stable Diffusion with a face of your choice, using DeepFaceLab to create the model and DeepFaceLive to apply it during the generation process.
18 | - Make the faces produced by Stable Diffusion more consistent and person-specific.
19 | - Train your own models with DeepFaceLab and use them in DeepFaceLive when generating faces in Stable Diffusion.
20 | 
21 | ## Examples
22 | 
23 | 
24 | ## Table of contents
25 | 
26 | 1) [Introduction](#1-introduction)
27 | 2) [Installation](#2-installation)
28 | 3) [Usage of DeepFaceLive](#3-usage-of-deepfacelive)
29 | 4) [Developers Documentation](#4-developers-documentation)
30 | 5) [Checkpoints Suggestions](#5-checkpoints-suggestions)
31 | 6) [Credits](#6-credits)
32 | 7) [Training your own model with DeepFaceLab](#7-training-your-own-model-with-deepfacelab)
33 | 8) [Q&A](#8-qa)
34 | 9) [License](#9-license)
35 | 
36 | ## 1. Introduction
37 | 
38 | This is an implementation of DeepFaceLab and DeepFaceLive for the Stable Diffusion Web UI by AUTOMATIC1111.
39 | 
40 | DeepFaceLab is a tool that uses machine learning to replace faces in videos and photos. With DeepFaceLab you can train your own face model and use it with DeepFaceLive to swap faces during Stable Diffusion generation.
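Once a model has been trained and exported (see the note on `.dfm` files below), using it with this extension is just a matter of placing the file where the DeepFaceLive script looks for models. A minimal sketch, assuming a default Web UI install; the paths and file names below are examples only, and the target folder is the extension's default, which can be changed in the Web UI settings:

```python
from pathlib import Path
import shutil

# Assumptions: webui_root points at your stable-diffusion-webui checkout and
# my_face.dfm is a model you exported from DeepFaceLab; both names are examples.
webui_root = Path("/path/to/stable-diffusion-webui")
exported_model = Path("/path/to/DeepFaceLab/workspace/model/my_face.dfm")

# Default model folder used by this extension (configurable in Settings -> DeepFaceLab).
dfm_dir = webui_root / "deepfacelab" / "dfl-files"
dfm_dir.mkdir(parents=True, exist_ok=True)

shutil.copy2(exported_model, dfm_dir / exported_model.name)
print(f"The DeepFaceLive script lists .dfm files found in: {dfm_dir}")
```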
41 | 
42 | DeepFaceLab outputs a SAEHD model, which is then exported to a .dfm file (roughly 700 MB) containing the trained face that the DeepFaceLive script needs.
43 | 
44 | Note: You are not required to train your own model; you can use the pre-trained models provided by iperov. They appear in the model selection dropdown in the UI and are downloaded on first use. The list of available faces can be found in the readme of the iperov repo here: https://github.com/iperov/DeepFaceLive
45 | 
46 | ## 2. Installation
47 | 
48 | ### 2.1. Requirements
49 | 
50 | - Python 3.10
51 | - [Stable Diffusion Web UI by AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
52 | 
53 | ### 2.2. Installation through Web UI
54 | 
55 | After you have installed and launched the AUTOMATIC1111 Web UI, open the "Extensions" tab and then the "Install from URL" tab.
56 | 
57 | Paste this repository's URL into the "URL for extension's git repository" field:
58 | 
59 | `https://github.com/idinkov/sd-deepface-1111`
60 | 
61 | Then click the "Install" button.
62 | 
63 | After that, reload or restart the UI; you should then see the new DeepFaceLab tab and the new DeepFaceLive script.
64 | 
65 | ![Installation through the Web UI](images/installation-web-ui.jpg)
66 | 
67 | ## 3. Usage of DeepFaceLive
68 | 
69 | ## 4. Developers Documentation
70 | 
71 | ### 4.1. Python packages
72 | - tqdm
73 | - numpy
74 | - opencv-python
75 | - opencv-contrib-python
76 | - numexpr
77 | - h5py
78 | - ffmpeg-python
79 | - scikit-image
80 | - scipy
81 | - colorama
82 | - tensorflow
83 | - pyqt5
84 | - tf2onnx
85 | - onnxruntime or onnxruntime-gpu
86 | - protobuf==3.20.3
87 | 
88 | #### 4.1.1. Developer Notes
89 | 
90 | DeepFaceLab targets Python 3.6 and DeepFaceLive targets Python 3.7 at the time of writing, so the following modifications were needed to make them run on Python 3.10:
91 | 
92 | - The xlib library bundled with DeepFaceLive was updated so that its `collections` imports work on Python 3.10.
93 | 
94 | 
95 | ## 5. Checkpoints Suggestions
96 | 
97 | I have found the following Stable Diffusion checkpoints to produce good results when rendering humans:
98 | * [Dreamlike Photoreal 2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0)
99 | 
100 | ## 6. Credits
101 | 
102 | - Stable Diffusion Web UI by AUTOMATIC1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui
103 | - DeepFaceLab by iperov: https://github.com/iperov/DeepFaceLab
104 | - DeepFaceLive by iperov: https://github.com/iperov/DeepFaceLive
105 | - Detection Detailer by dustysys: https://github.com/dustysys/ddetailer
106 | 
107 | ## 7. Training your own model with DeepFaceLab
108 | 
109 | ## 8. Q&A
110 | 
111 | ## 9.
License 112 | 113 | -------------------------------------------------------------------------------- /install.py: -------------------------------------------------------------------------------- 1 | import os 2 | import subprocess 3 | import sys 4 | from pathlib import Path 5 | from typing import Tuple, Optional 6 | from packaging import version 7 | import importlib.metadata 8 | 9 | import launch 10 | from launch import is_installed, run, run_pip 11 | 12 | # Determine whether to skip installation based on command-line arguments 13 | try: 14 | skip_install = getattr(launch.args, "skip_install", False) 15 | except AttributeError: 16 | skip_install = getattr(launch, "skip_install", False) 17 | 18 | python = sys.executable 19 | 20 | def comparable_version(version_str: str) -> Tuple[int, ...]: 21 | """Convert a version string into a tuple of integers for comparison.""" 22 | return tuple(map(int, version_str.split("."))) 23 | 24 | def get_installed_version(package: str) -> Optional[str]: 25 | """Retrieve the installed version of a package, if available.""" 26 | try: 27 | return importlib.metadata.version(package) 28 | except importlib.metadata.PackageNotFoundError: 29 | return None 30 | 31 | def install_uddetailer(): 32 | """Install and manage dependencies for the 'uddetailer' component.""" 33 | if not is_installed("mim"): 34 | run_pip("install -U openmim", desc="Installing openmim") 35 | 36 | # Ensure minimum requirements are met 37 | if not is_installed("mediapipe"): 38 | run_pip('install protobuf>=3.20', desc="Installing protobuf") 39 | run_pip('install mediapipe>=0.10.3', desc="Installing mediapipe") 40 | 41 | torch_version = get_installed_version("torch") 42 | legacy = torch_version and comparable_version(torch_version)[0] < 2 43 | 44 | # Check versions and manage installations for mmdet and mmcv 45 | mmdet_version = get_installed_version("mmdet") 46 | mmdet_v3 = mmdet_version and version.parse(mmdet_version) >= version.parse("3.0.0") 47 | 48 | if not is_installed("mmdet") or (legacy and mmdet_v3) or (not legacy and not mmdet_v3): 49 | if legacy and mmdet_v3: 50 | print("Uninstalling mmdet and mmengine...") 51 | run(f'"{python}" -m pip uninstall -y mmdet mmengine', live=True) 52 | run(f'"{python}" -m mim install mmcv-full', desc="Installing mmcv-full") 53 | run_pip("install mmdet==2.28.2", desc="Installing mmdet") 54 | else: 55 | if not mmdet_v3: 56 | print("Uninstalling mmdet, mmcv, and mmcv-full...") 57 | run(f'"{python}" -m pip uninstall -y mmdet mmcv mmcv-full', live=True) 58 | print("Installing mmcv, mmdet, and mmengine...") 59 | if not is_installed("mmengine"): 60 | run_pip("install mmengine==0.8.5", desc="Installing mmengine") 61 | if version.parse(torch_version) >= version.parse("2.1.0"): 62 | run(f'"{python}" -m mim install mmcv~=2.1.0', desc="Installing mmcv 2.1.0") 63 | else: 64 | run(f'"{python}" -m mim install mmcv~=2.0.0', desc="Installing mmcv") 65 | run(f'"{python}" -m mim install -U mmdet>=3.0.0', desc="Installing mmdet") 66 | run_pip("install mmdet>=3", desc="Installing mmdet") 67 | 68 | # Verify mmcv and mmengine versions 69 | mmcv_version = get_installed_version("mmcv") 70 | if mmcv_version and version.parse(mmcv_version) >= version.parse("2.0.1"): 71 | print(f"Your mmcv version {mmcv_version} may not work with mmyolo.") 72 | print("Consider fixing the version restriction manually.") 73 | 74 | mmengine_version = get_installed_version("mmengine") 75 | if mmengine_version: 76 | if version.parse(mmengine_version) >= version.parse("0.9.0"): 77 | print(f"Your mmengine version 
{mmengine_version} may not work on Windows.") 78 | print("Install mmengine 0.8.5 manually or use an updated version of bitsandbytes.") 79 | else: 80 | print(f"Your mmengine version is {mmengine_version}") 81 | 82 | if not legacy and not is_installed("mmyolo"): 83 | run(f'"{python}" -m pip install mmyolo', desc="Installing mmyolo") 84 | 85 | # Install additional requirements from requirements.txt 86 | req_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "requirements.txt") 87 | if os.path.exists(req_file): 88 | mainpackage = 'sd-deepface-1111' 89 | with open(req_file) as file: 90 | for package in file: 91 | package = package.strip() 92 | try: 93 | if '==' in package: 94 | package_name, package_version = package.split('==') 95 | installed_version = get_installed_version(package_name) 96 | if installed_version != package_version: 97 | run_pip(f"install -U {package}", desc=f"{mainpackage} requirement: updating {package_name} to {package_version}") 98 | elif '>=' in package: 99 | package_name, package_version = package.split('>=') 100 | installed_version = get_installed_version(package_name) 101 | if not installed_version or comparable_version(installed_version) < comparable_version(package_version): 102 | run_pip(f"install -U {package}", desc=f"{mainpackage} requirement: updating {package_name} to {package_version}") 103 | elif not is_installed(package): 104 | run_pip(f"install {package}", desc=f"{mainpackage} requirement: {package}") 105 | except Exception as e: 106 | print(f"Error installing {package}: {e}") 107 | 108 | def install(): 109 | """Install essential packages for DeepFaceLab and related tools.""" 110 | packages = { 111 | "tqdm": "requirements for DeepFaceLab - tqdm", 112 | "numpy": "requirements for DeepFaceLab - numpy", 113 | "numexpr": "requirements for DeepFaceLab - numexpr", 114 | "h5py": "requirements for DeepFaceLab - h5py", 115 | "opencv-python": "requirements for DeepFaceLab - opencv-python", 116 | "opencv-contrib-python": "requirements for DeepFaceLab - opencv-contrib-python", 117 | "ffmpeg-python": "requirements for DeepFaceLab - ffmpeg-python", 118 | "scikit-image": "requirements for DeepFaceLab - scikit-image", 119 | "scipy": "requirements for DeepFaceLab - scipy", 120 | "colorama": "requirements for DeepFaceLab - colorama", 121 | "tensorflow": "requirements for DeepFaceLab - tensorflow", 122 | "pyqt5": "requirements for DeepFaceLab - pyqt5", 123 | "tf2onnx": "requirements for DeepFaceLab - tf2onnx", 124 | "onnxruntime": "requirements for DeepFaceLab - onnxruntime", 125 | "onnxruntime-gpu": "requirements for DeepFaceLab - onnxruntime-gpu==1.12.1", 126 | "protobuf": "requirements for DeepFaceLab - protobuf==3.20.3", 127 | } 128 | 129 | for package, desc in packages.items(): 130 | if not is_installed(package) or (package == "onnxruntime-gpu" and get_installed_version(package) != '1.12.1') or (package == "protobuf" and get_installed_version(package) != '3.20.3'): 131 | version_specifier = "" if package != "onnxruntime-gpu" and package != "protobuf" else "==1.12.1" if package == "onnxruntime-gpu" else "==3.20.3" 132 | run_pip(f"install {package}{version_specifier}", desc=desc) 133 | 134 | def checkout_git_commit(repo_name: str, commit: str, output_folder: str): 135 | """Clone a GitHub repository and check out a specific commit.""" 136 | if not os.path.isdir(output_folder): 137 | subprocess.run(['git', 'clone', f'https://github.com/{repo_name}.git', output_folder]) 138 | 139 | os.chdir(output_folder) 140 | subprocess.run(['git', 'checkout', commit]) 141 | 
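# --- Illustrative sketch (for documentation only; not called anywhere) -----------------------
# checkout_git_commit() above switches the interpreter's working directory with os.chdir() and
# never switches back, which affects the whole Web UI process. If that side effect is unwanted,
# git can be pointed at the repository explicitly via its -C option instead. The helper below is
# a hypothetical variant, not this extension's actual API; it reuses the os/subprocess imports
# already present at the top of this file.
def checkout_git_commit_no_chdir(repo_name: str, commit: str, output_folder: str):
    """Clone a GitHub repository and check out a specific commit without changing the cwd."""
    if not os.path.isdir(output_folder):
        subprocess.run(['git', 'clone', f'https://github.com/{repo_name}.git', output_folder], check=True)
    subprocess.run(['git', '-C', output_folder, 'checkout', commit], check=True)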
142 | # Main script execution 143 | if not skip_install: 144 | install() 145 | install_uddetailer() 146 | 147 | script_path = Path(os.path.dirname(os.path.abspath(__file__))) / "repo/" 148 | checkout_git_commit('idinkov/DeepFaceLive', 'af8396925cccc1d3f02867e16b8929060c3ebc5f', str(script_path / "dflive")) 149 | -------------------------------------------------------------------------------- /scripts/ddetailerutils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import cv2 4 | from PIL import Image 5 | import numpy as np 6 | import gradio as gr 7 | 8 | from modules import processing, images 9 | from modules import scripts, script_callbacks, shared, devices, modelloader 10 | from modules.processing import Processed, StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img 11 | from modules.shared import opts, cmd_opts, state 12 | from modules.sd_models import model_hash 13 | from modules.paths import models_path 14 | from basicsr.utils.download_util import load_file_from_url 15 | 16 | dd_models_path = os.path.join(models_path, "mmdet") 17 | 18 | def list_models(model_path): 19 | model_list = modelloader.load_models(model_path=model_path, ext_filter=[".pth"]) 20 | 21 | def modeltitle(path, shorthash): 22 | abspath = os.path.abspath(path) 23 | 24 | if abspath.startswith(model_path): 25 | name = abspath.replace(model_path, '') 26 | else: 27 | name = os.path.basename(path) 28 | 29 | if name.startswith("\\") or name.startswith("/"): 30 | name = name[1:] 31 | 32 | shortname = os.path.splitext(name.replace("/", "_").replace("\\", "_"))[0] 33 | 34 | return f'{name} [{shorthash}]', shortname 35 | 36 | models = [] 37 | for filename in model_list: 38 | h = model_hash(filename) 39 | title, short_model_name = modeltitle(filename, h) 40 | models.append(title) 41 | 42 | return models 43 | 44 | class DetectionDetailerScript(): 45 | def run(self, 46 | p, 47 | model, 48 | model_name, 49 | init_image, 50 | dd_conf_a = 30, 51 | dd_dilation_factor_a = 4, 52 | dd_offset_x_a = 0, 53 | dd_offset_y_a = 0): 54 | 55 | new_image = p 56 | results_a = inference(init_image, model, model_name, dd_conf_a / 100.0) 57 | masks_a = create_segmasks(results_a) 58 | masks_a = dilate_masks(masks_a, dd_dilation_factor_a, 1) 59 | masks_a = offset_masks(masks_a, dd_offset_x_a, dd_offset_y_a) 60 | output_image = init_image 61 | gen_count = len(masks_a) 62 | if (gen_count > 0): 63 | #state.job_count += gen_count 64 | new_image.init_images = [init_image] 65 | new_image.batch_size = 1 66 | new_image.n_iter = 1 67 | for i in range(gen_count): 68 | new_image.image_mask = masks_a[i] 69 | processed = processing.process_images(p) 70 | new_image.seed = processed.seed + 1 71 | new_image.init_images = processed.images 72 | 73 | if (gen_count > 0): 74 | output_image = processed.images[0] 75 | 76 | return output_image 77 | 78 | import mmcv 79 | from mmdet.apis import (inference_detector, 80 | init_detector) 81 | 82 | 83 | def modeldataset(model_shortname): 84 | path = modelpath(model_shortname) 85 | if ("mmdet" in path and "segm" in path): 86 | dataset = 'coco' 87 | else: 88 | dataset = 'bbox' 89 | return dataset 90 | 91 | def modelpath(model_shortname): 92 | return dd_models_path + "/" + model_shortname 93 | 94 | def is_allblack(mask): 95 | cv2_mask = np.array(mask) 96 | return cv2.countNonZero(cv2_mask) == 0 97 | 98 | def bitwise_and_masks(mask1, mask2): 99 | cv2_mask1 = np.array(mask1) 100 | cv2_mask2 = np.array(mask2) 101 | cv2_mask = cv2.bitwise_and(cv2_mask1, 
cv2_mask2) 102 | mask = Image.fromarray(cv2_mask) 103 | return mask 104 | 105 | def subtract_masks(mask1, mask2): 106 | cv2_mask1 = np.array(mask1) 107 | cv2_mask2 = np.array(mask2) 108 | cv2_mask = cv2.subtract(cv2_mask1, cv2_mask2) 109 | mask = Image.fromarray(cv2_mask) 110 | return mask 111 | 112 | def dilate_masks(masks, dilation_factor, iter=1): 113 | if dilation_factor == 0: 114 | return masks 115 | dilated_masks = [] 116 | kernel = np.ones((dilation_factor, dilation_factor), np.uint8) 117 | for i in range(len(masks)): 118 | cv2_mask = np.array(masks[i]) 119 | dilated_mask = cv2.dilate(cv2_mask, kernel, iter) 120 | dilated_masks.append(Image.fromarray(dilated_mask)) 121 | return dilated_masks 122 | 123 | def offset_masks(masks, offset_x, offset_y): 124 | if (offset_x == 0 and offset_y == 0): 125 | return masks 126 | offset_masks = [] 127 | for i in range(len(masks)): 128 | cv2_mask = np.array(masks[i]) 129 | offset_mask = cv2_mask.copy() 130 | offset_mask = np.roll(offset_mask, -offset_y, axis=0) 131 | offset_mask = np.roll(offset_mask, offset_x, axis=1) 132 | 133 | offset_masks.append(Image.fromarray(offset_mask)) 134 | return offset_masks 135 | 136 | def combine_masks(masks): 137 | initial_cv2_mask = np.array(masks[0]) 138 | combined_cv2_mask = initial_cv2_mask 139 | for i in range(1, len(masks)): 140 | cv2_mask = np.array(masks[i]) 141 | combined_cv2_mask = cv2.bitwise_or(combined_cv2_mask, cv2_mask) 142 | 143 | combined_mask = Image.fromarray(combined_cv2_mask) 144 | return combined_mask 145 | 146 | def create_segmasks(results): 147 | segms = results[1] 148 | segmasks = [] 149 | for i in range(len(segms)): 150 | cv2_mask = segms[i].astype(np.uint8) * 255 151 | mask = Image.fromarray(cv2_mask) 152 | segmasks.append(mask) 153 | 154 | return segmasks 155 | 156 | def get_device(): 157 | device_id = shared.cmd_opts.device_id 158 | if device_id is not None: 159 | cuda_device = f"cuda:{device_id}" 160 | else: 161 | cuda_device = "cpu" 162 | return cuda_device 163 | 164 | def inference(image, model, modelname, conf_thres): 165 | path = modelpath(modelname) 166 | if ("mmdet" in path and "bbox" in path): 167 | results = inference_mmdet_bbox(image, model, modelname, conf_thres) 168 | elif ("mmdet" in path and "segm" in path): 169 | results = inference_mmdet_segm(image, model, modelname, conf_thres) 170 | return results 171 | 172 | def preload_ddetailer_model(modelname): 173 | model_checkpoint = modelpath(modelname) 174 | model_config = os.path.splitext(model_checkpoint)[0] + ".py" 175 | model_device = get_device() 176 | return init_detector(model_config, model_checkpoint, device=model_device) 177 | 178 | def inference_mmdet_segm(image, model, modelname, conf_thres): 179 | mmdet_results = inference_detector(model, np.array(image)) 180 | bbox_results, segm_results = mmdet_results 181 | dataset = modeldataset(modelname) 182 | labels = [ 183 | np.full(bbox.shape[0], i, dtype=np.int32) 184 | for i, bbox in enumerate(bbox_results) 185 | ] 186 | n, m = bbox_results[0].shape 187 | if (n == 0): 188 | return [[], []] 189 | labels = np.concatenate(labels) 190 | bboxes = np.vstack(bbox_results) 191 | segms = mmcv.concat_list(segm_results) 192 | filter_inds = np.where(bboxes[:, -1] > conf_thres)[0] 193 | results = [[], []] 194 | for i in filter_inds: 195 | results[0].append(bboxes[i]) 196 | results[1].append(segms[i]) 197 | 198 | return results 199 | 200 | def inference_mmdet_bbox(image, model, modelname, conf_thres): 201 | results = inference_detector(model, np.array(image)) 202 | cv2_image = 
np.array(image) 203 | cv2_image = cv2_image[:, :, ::-1].copy() 204 | cv2_gray = cv2.cvtColor(cv2_image, cv2.COLOR_BGR2GRAY) 205 | 206 | segms = [] 207 | for (x0, y0, x1, y1, conf) in results[0]: 208 | cv2_mask = np.zeros((cv2_gray.shape), np.uint8) 209 | cv2.rectangle(cv2_mask, (int(x0), int(y0)), (int(x1), int(y1)), 255, -1) 210 | cv2_mask_bool = cv2_mask.astype(bool) 211 | segms.append(cv2_mask_bool) 212 | 213 | n, m = results[0].shape 214 | if (n == 0): 215 | return [[], []] 216 | bboxes = np.vstack(results[0]) 217 | filter_inds = np.where(bboxes[:, -1] > conf_thres)[0] 218 | results = [[], []] 219 | for i in filter_inds: 220 | results[0].append(bboxes[i]) 221 | results[1].append(segms[i]) 222 | 223 | return results -------------------------------------------------------------------------------- /scripts/dflutils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | from pathlib import Path 4 | from modules import shared, paths, script_callbacks 5 | 6 | class DflFiles: 7 | 8 | @staticmethod 9 | def folder_exists(dir_path): 10 | """ 11 | Check whether a directory exists. 12 | 13 | Returns True if the directory exists, False otherwise. 14 | """ 15 | return os.path.exists(dir_path) and os.path.isdir(dir_path) 16 | 17 | @staticmethod 18 | def create_folder(path): 19 | """Create a folder at the specified path""" 20 | os.makedirs(path, exist_ok=True) 21 | 22 | @staticmethod 23 | def delete_folder(path): 24 | """Delete a folder and all its contents at the specified path""" 25 | os.removedirs(path) 26 | 27 | @staticmethod 28 | def empty_folder(path): 29 | """Clear the contents of a folder at the specified path""" 30 | for file_name in os.listdir(path): 31 | file_path = os.path.join(path, file_name) 32 | if os.path.isfile(file_path): 33 | os.remove(file_path) 34 | 35 | @staticmethod 36 | def create_empty_file(path): 37 | """Create an empty file at the specified path""" 38 | open(path, 'a').close() 39 | 40 | @staticmethod 41 | def delete_file(path): 42 | """Delete a file at the specified path""" 43 | os.remove(path) 44 | 45 | @staticmethod 46 | def move_file(src, dst): 47 | """Move a file from the source path to the destination path""" 48 | os.replace(src, dst) 49 | 50 | @staticmethod 51 | def get_files_from_dir(base_dir, extension_list, two_dimensions=False): 52 | """Return a list of file names in a directory with a matching file extension""" 53 | files = [] 54 | for v in Path(base_dir).iterdir(): 55 | if v.suffix in extension_list and not v.name.startswith('.ipynb'): 56 | if two_dimensions: 57 | files.append([v.name, v.name]) 58 | else: 59 | files.append(v.name) 60 | return files 61 | 62 | @staticmethod 63 | def get_folder_names_in_dir(base_dir): 64 | """Return a list of folder names in a directory""" 65 | folders = [] 66 | for v in Path(base_dir).iterdir(): 67 | if v.is_dir() and not v.name.startswith('.ipynb'): 68 | folders.append(v.name) 69 | return folders 70 | 71 | @staticmethod 72 | def extract_archive(archive_path, dest_dir, dir_name): 73 | """ 74 | Extract an archive file to a directory with a specified name. 75 | 76 | The extracted directory will be created inside the destination directory with the specified name. 77 | If the name is taken, a suffix will be added to create a unique name. 78 | 79 | Returns the full path of the directory where the contents were extracted. 
80 |         """
81 |         suffix = 0
82 |         extracted_dir_name = dir_name
83 |         while os.path.exists(os.path.join(dest_dir, extracted_dir_name)):
84 |             if not suffix:
85 |                 suffix = 1
86 |             else:
87 |                 suffix += 1
88 |             extracted_dir_name = f'{dir_name}_{suffix}'
89 | 
90 |         extracted_dir_path = os.path.join(dest_dir, extracted_dir_name)
91 |         os.makedirs(extracted_dir_path, exist_ok=True)
92 | 
93 |         # Define a dictionary that maps file extensions to archive types
94 |         archive_types = {
95 |             '.zip': 'zip',
96 |             '.rar': 'rar',
97 |             '.7z': '7z',
98 |             '.tar': 'tar',
99 |             '.tar.gz': 'gztar',
100 |             '.tgz': 'gztar'
101 |         }
102 | 
103 |         # Determine the archive type from the file extension (handle the two-part .tar.gz suffix explicitly)
104 |         file_ext = '.tar.gz' if archive_path.lower().endswith('.tar.gz') else os.path.splitext(archive_path)[1].lower()
105 | 
106 |         # Use the appropriate function from the `shutil` library to extract the archive
107 |         shutil.unpack_archive(archive_path, extracted_dir_path, archive_types[file_ext])
108 | 
109 |         return extracted_dir_path
110 | 
111 |     @staticmethod
112 |     def copy_file_to_dest_dir(temp_file_path, file_original_name, dest_dir):
113 |         """
114 |         Copy a file to a destination directory using the original file name.
115 | 
116 |         If a file with the same name already exists in the destination directory, a suffix
117 |         will be added to the file name until a unique name is found.
118 | 
119 |         Returns the full path of the copied file.
120 |         """
121 |         suffix = 0
122 |         dest_file_name = file_original_name
123 |         while os.path.exists(os.path.join(dest_dir, dest_file_name)):
124 |             if not suffix:
125 |                 suffix = 1
126 |             else:
127 |                 suffix += 1
128 |             dest_file_name = f'{os.path.splitext(file_original_name)[0]}_{suffix}{os.path.splitext(file_original_name)[1]}'
129 | 
130 |         dest_file_path = os.path.join(dest_dir, dest_file_name)
131 |         shutil.copyfile(temp_file_path, dest_file_path)
132 | 
133 |         return dest_file_path
134 | 
135 | 
136 | class DflOptions:
137 |     def __init__(self, opts):
138 |         dfl_path = Path(paths.script_path) / "deepfacelab"
139 |         scripts_path = Path(os.path.dirname(os.path.abspath(__file__))) / "../"
140 |         print("Scripts path:" + str(scripts_path))
141 | 
142 |         self.dfl_path = Path(str(dfl_path / "dfl-files"))
143 |         self.workspaces_path = Path(str(dfl_path / "workspaces"))
144 |         self.pak_path = Path(str(dfl_path / "pak-files"))
145 |         self.xseg_path = Path(str(dfl_path / "xseg-models"))
146 |         self.saehd_path = Path(str(dfl_path / "saehd-models"))
147 |         self.videos_path = Path(str(dfl_path / "videos"))
148 |         self.videos_frames_path = Path(str(dfl_path / "videos-frames"))
149 |         self.tmp_path = Path(str(dfl_path / "tmp"))
150 |         self.dflab_repo = Path(str(dfl_path / "dflab"))
151 |         self.dflive_repo = Path(str(dfl_path / "dflive"))
152 | 
153 |         if hasattr(opts, "deepfacelab_dfl_path"):
154 |             self.dfl_path = Path(opts.deepfacelab_dfl_path)
155 | 
156 |         if hasattr(opts, "deepfacelab_workspaces_path"):
157 |             self.workspaces_path = Path(opts.deepfacelab_workspaces_path)
158 | 
159 |         if hasattr(opts, "deepfacelab_pak_path"):
160 |             self.pak_path = Path(opts.deepfacelab_pak_path)
161 | 
162 |         if hasattr(opts, "deepfacelab_xseg_path"):
163 |             self.xseg_path = Path(opts.deepfacelab_xseg_path)
164 | 
165 |         if hasattr(opts, "deepfacelab_saehd_path"):
166 |             self.saehd_path = Path(opts.deepfacelab_saehd_path)
167 | 
168 |         if hasattr(opts, "deepfacelab_videos_path"):
169 |             self.videos_path = Path(opts.deepfacelab_videos_path)
170 | 
171 |         if hasattr(opts, "deepfacelab_videos_frames_path"):
172 |             self.videos_frames_path = Path(opts.deepfacelab_videos_frames_path)
173 | 
174 |         if hasattr(opts, 
"deepfacelab_tmp_path"): 175 | self.tmp_path = Path(opts.deepfacelab_tmp_path) 176 | 177 | if hasattr(opts, "deepfacelab_dflab_repo_path"): 178 | self.dflab_repo = Path(opts.deepfacelab_dflab_repo_path) 179 | 180 | if hasattr(opts, "deepfacelab_dflive_repo_path"): 181 | self.dflive_repo = Path(opts.deepfacelab_dflive_repo_path) 182 | 183 | # Create dirs if not existing 184 | for p in [self.dfl_path, self.workspaces_path, self.pak_path, self.xseg_path, self.saehd_path, self.videos_path, self.videos_frames_path, self.tmp_path]: 185 | DflFiles.create_folder(p) 186 | 187 | 188 | # Getters start here 189 | 190 | def get_dfl_path(self): 191 | return self.dfl_path 192 | 193 | def get_workspaces_path(self): 194 | return self.workspaces_path 195 | 196 | def get_pak_path(self): 197 | return self.pak_path 198 | 199 | def get_xseg_path(self): 200 | return self.xseg_path 201 | 202 | def get_saehd_path(self): 203 | return self.saehd_path 204 | 205 | def get_videos_path(self): 206 | return self.videos_path 207 | 208 | def get_videos_frames_path(self): 209 | return self.videos_frames_path 210 | 211 | def get_tmp_path(self): 212 | return self.tmp_path 213 | 214 | def get_dflab_repo_path(self): 215 | return self.dflab_repo 216 | 217 | def get_dflive_repo_path(self): 218 | return self.dflive_repo 219 | 220 | # Lists start here 221 | 222 | def get_dfl_list(self, include_downloadable=True): 223 | dir_files = DflFiles.get_files_from_dir(self.dfl_path, [".dfm"], True) 224 | dir_files_one = DflFiles.get_files_from_dir(self.dfl_path, [".dfm"]) 225 | if include_downloadable: 226 | downloable_files = self.get_downloadable_models(dir_files_one) 227 | tmp_files = [] 228 | for f in downloable_files: 229 | # Get basename from url 230 | basename = os.path.basename(f[1]) 231 | tmp_files.append([basename, f[1]]) 232 | return dir_files + tmp_files 233 | return dir_files 234 | 235 | def get_downloadable_models(self, available_models): 236 | from scripts.command import get_downloadable_models 237 | return get_downloadable_models(self.dfl_path, available_models) 238 | 239 | def get_pak_list(self): 240 | return DflFiles.get_files_from_dir(self.pak_path, [".pak"]) 241 | 242 | def get_videos_list(self): 243 | return DflFiles.get_files_from_dir(self.videos_path, [".mp4", ".mkv"]) 244 | 245 | def get_videos_list_full_path(self): 246 | return list(self.videos_path + "/" + v for v in self.get_videos_list()) 247 | 248 | def get_workspaces_list(self): 249 | return DflFiles.get_folder_names_in_dir(self.workspaces_path) 250 | 251 | def get_saehd_models_list(self): 252 | return DflFiles.get_folder_names_in_dir(self.saehd_path) 253 | 254 | def get_xseg_models_list(self): 255 | return DflFiles.get_folder_names_in_dir(self.xseg_path) -------------------------------------------------------------------------------- /scripts/deepfacelab.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import platform 3 | import math 4 | import json 5 | import sys 6 | import os 7 | import re 8 | from pathlib import Path 9 | import shutil 10 | 11 | import gradio as gr 12 | import numpy as np 13 | from tqdm import tqdm 14 | from PIL import Image, ImageFilter 15 | import cv2 16 | 17 | from modules.ui import create_refresh_button 18 | from modules.ui_common import folder_symbol 19 | from modules.shared import opts, OptionInfo 20 | from modules import shared, paths, script_callbacks 21 | 22 | current_frame_set = [] 23 | current_frame_set_index = 0 24 | 25 | 26 | def on_ui_tabs(): 27 | dflFiles = DflFiles() 
28 | dfl_options = DflOptions(opts) 29 | 30 | dfl_path = dfl_options.get_dfl_path() 31 | workspaces_path = dfl_options.get_workspaces_path() 32 | pak_path = dfl_options.get_pak_path() 33 | xseg_path = dfl_options.get_xseg_path() 34 | saehd_path = dfl_options.get_saehd_path() 35 | videos_path = dfl_options.get_videos_path() 36 | videos_frames_path = dfl_options.get_videos_frames_path() 37 | tmp_path = dfl_options.get_tmp_path() 38 | 39 | def get_dfl_list(): 40 | return dflFiles.get_files_from_dir(dfl_path, [".dfl"]) 41 | 42 | def get_pak_list(): 43 | return dflFiles.get_files_from_dir(pak_path, [".pak"]) 44 | 45 | def get_videos_list(): 46 | return dflFiles.get_files_from_dir(videos_path, [".mp4", ".mkv"]) 47 | 48 | def get_videos_list_full_path(): 49 | return list(videos_path + "/" + v for v in get_videos_list()) 50 | 51 | def get_workspaces_list(): 52 | return dflFiles.get_folder_names_in_dir(workspaces_path) 53 | 54 | def get_saehd_models_list(): 55 | return dflFiles.get_folder_names_in_dir(saehd_path) 56 | 57 | def get_xseg_models_list(): 58 | return dflFiles.get_folder_names_in_dir(xseg_path) 59 | 60 | def render_train_saehd(gr): 61 | with gr.Row(): 62 | model_saehd_dropdown = gr.Dropdown(choices=get_saehd_models_list(), elem_id="saehd_model_dropdown", label="SAEHD Model:", interactive=True) 63 | create_refresh_button(model_saehd_dropdown, lambda: None, lambda: {"choices": get_saehd_models_list()}, "refresh_saehd_model_list") 64 | model_saehd_create_new_model = gr.Checkbox(label="Create new model") 65 | with gr.Row(): 66 | model_xseg_dropdown = gr.Dropdown(choices=get_xseg_models_list(), elem_id="xseg_model_dropdown", label="XSEG Model:", interactive=True) 67 | create_refresh_button(model_xseg_dropdown, lambda: None, lambda: {"choices": get_xseg_models_list()}, "refresh_xseg_model_list") 68 | 69 | with gr.Row(): 70 | src_pak_dropdown = gr.Dropdown(choices=get_pak_list(), elem_id="src_pak_dropdown", label="Src Faceset:", interactive=True) 71 | create_refresh_button(src_pak_dropdown, lambda: None, lambda: {"choices": get_pak_list()}, "refresh_src_pak_dropdown") 72 | 73 | with gr.Row(): 74 | dst_pak_dropdown = gr.Dropdown(choices=get_pak_list(), elem_id="dst_pak_dropdown", label="Dst Faceset:", interactive=True) 75 | create_refresh_button(dst_pak_dropdown, lambda: None, lambda: {"choices": get_pak_list()}, "refresh_dst_pak_dropdown") 76 | 77 | train_saehd = gr.Button(value="Train SAEHD", variant="primary") 78 | log_output = gr.HTML(value="") 79 | 80 | def render_faceset_extract(gr): 81 | with gr.Row(): 82 | videos_dropdown = gr.Dropdown(choices=get_videos_list(), elem_id="videos_dropdown", label="Videos", 83 | interactive=True) 84 | create_refresh_button(videos_dropdown, lambda: None, lambda: {"choices": get_videos_list()}, 85 | "refresh_videos_dropdown") 86 | faceset_output_facetype_dropdown = gr.Dropdown(choices=['half_face', 'full_face', 'whole_face', 'head', 'mark_only'], value="whole_face", label="Output face type", interactive=True) 87 | faceset_output_resolution_dropdown = gr.Dropdown(choices=["256x256","512x512","768x768","1024x1024"], value="512x512", label="Output resolution", interactive=True) 88 | faceset_output_type_dropdown = gr.Dropdown(choices=["jpg","png"], value="jpg", label="Output filetype",interactive=True) 89 | faceset_output_quality_dropdown = gr.Dropdown(choices=[90,100], value=100, label="Output quality",interactive=True) 90 | faceset_output_debug_dropdown = gr.Checkbox(value=False, label="Generate debug frames") 91 | faceset_extract_frames_button = 
gr.Button(value="Extract Frames Only", variant="primary") 92 | faceset_extract_button = gr.Button(value="Faceset Extract", variant="primary") 93 | faceset_extract_output = gr.Markdown() 94 | faceset_extract_button.click(DflAction.extract_frames, [videos_dropdown], faceset_extract_output) 95 | faceset_extract_button.click(DflAction.extract_faceset, [videos_dropdown, 96 | faceset_output_facetype_dropdown, 97 | faceset_output_resolution_dropdown, 98 | faceset_output_type_dropdown, 99 | faceset_output_quality_dropdown, 100 | faceset_output_debug_dropdown], faceset_extract_output) 101 | faceset_extract_frames_button.click(DflAction.extract_frames,videos_dropdown, faceset_extract_output) 102 | 103 | 104 | 105 | def render_create_dfl(gr): 106 | train_saehd = gr.Button(value="Create DFL", variant="primary") 107 | 108 | def click_create_workspace(text): 109 | if text == "": 110 | return f"Error!" 111 | return f"Workspace " + text + " created" 112 | 113 | def render_classic_tabs(): 114 | with gr.Row(): 115 | workspace_dropdown = gr.Dropdown(choices=get_workspaces_list(), elem_id="workspace_dropdown", label="Current workspace:", interactive=True) 116 | create_refresh_button(workspace_dropdown, lambda: None, lambda: {"choices": get_workspaces_list()}, "refresh_workspace_list") 117 | with gr.Tab("Faceset Extract"): 118 | render_faceset_extract(gr) 119 | with gr.Tab("Train"): 120 | render_train_saehd(gr) 121 | with gr.Tab("Create DFL"): 122 | render_create_dfl(gr) 123 | 124 | def upload_files_videos(files): 125 | file_paths = [file.name for file in files] 126 | for file in files: 127 | DflFiles.copy_file_to_dest_dir(file.name, "video.mp4", videos_path) 128 | 129 | return file_paths 130 | 131 | def get_current_video_path(videos_dropdown): 132 | return gr.Textbox.update(value=get_current_video_path_only(videos_dropdown), visible=True) 133 | 134 | def get_current_video_path_only(videos_dropdown): 135 | return str(videos_path) + "/" + str(videos_dropdown) 136 | 137 | def action_delete_video(videos_dropdown): 138 | filePath = get_current_video_path_only(videos_dropdown) 139 | DflFiles.delete_file(str(filePath)) 140 | return gr.Textbox.update(visible=False) 141 | 142 | def reset_video_dropdown(videos_dropdown): 143 | return gr.Dropdown.update(value="") 144 | 145 | def render_dataset_videos(): 146 | upload_button = gr.UploadButton("Click to Upload Video/s", file_types=["video"], file_count="multiple") 147 | file_output = gr.File(label="Uploaded Video/s") 148 | upload_button.upload(upload_files_videos, upload_button, file_output, scroll_to_output=True) 149 | 150 | def render_dataset_xseg(): 151 | upload_button_xseg = gr.UploadButton("Click to Upload XSEG model/s", file_types=["video"], file_count="multiple") 152 | file_output_xseg = gr.File(label="Uploaded XSEG model/s") 153 | upload_button_xseg.upload(upload_files_videos, upload_button_xseg, file_output_xseg, scroll_to_output=True) 154 | 155 | def render_dataset_saehd(): 156 | upload_button_saehd = gr.UploadButton("Click to Upload SAEHD model/s", file_types=["video"], file_count="multiple") 157 | file_output_saehd = gr.File(label="Uploaded SAEHD model/s") 158 | upload_button_saehd.upload(upload_files_videos, upload_button_saehd, file_output_saehd, scroll_to_output=True) 159 | 160 | def render_dataset_facesets(): 161 | upload_button_datasets = gr.UploadButton("Click to Upload Faceset/s", file_types=["video"], file_count="multiple") 162 | file_output_datasets = gr.File(label="Uploaded Faceset/s") 163 | upload_button_datasets.upload(upload_files_videos, 
upload_button_datasets, file_output_datasets, scroll_to_output=True) 164 | 165 | def render_dataset_dfl(): 166 | upload_button_dfl = gr.UploadButton("Click to Upload DFL/s", file_types=["video"], file_count="multiple") 167 | file_output_dfl = gr.File(label="Uploaded DFL/s") 168 | upload_button_dfl.upload(upload_files_videos, upload_button_dfl, file_output_dfl, scroll_to_output=True) 169 | 170 | # Display contents in main tab "DeepFaceLab" in SD1111 UI 171 | with gr.Blocks(analytics_enabled=False) as training_picker: 172 | with gr.Row(): 173 | with gr.Column(scale=1): 174 | gr.Markdown("DeepFaceLab") 175 | render_classic_tabs() 176 | gr.Markdown("Creator") 177 | with gr.Tab("Workspace"): 178 | text = gr.Textbox(value="", label="Name") 179 | button = gr.Button(value="Create Workspace", variant="primary") 180 | output1 = gr.Textbox(label="Status") 181 | button.click(click_create_workspace, [text], output1) 182 | with gr.Tab("SAEHD Model"): 183 | with gr.Tab("New Model"): 184 | nothing = 0 185 | with gr.Tab("Clone Existing Model"): 186 | with gr.Row(): 187 | model_saehd_dropdown = gr.Dropdown(choices=get_saehd_models_list(), elem_id="saehd_model_dropdown", label="SAEHD Model:", interactive=True) 188 | create_refresh_button(model_saehd_dropdown, lambda: None, lambda: {"choices": get_saehd_models_list()}, "refresh_saehd_model_list") 189 | 190 | text = gr.Textbox(value="", label="Name") 191 | button = gr.Button(value="Create SAEHD Model", variant="primary") 192 | output1 = gr.Textbox(label="Status") 193 | button.click(click_create_workspace, [text], output1) 194 | 195 | gr.Markdown("Fast tools") 196 | with gr.Tab("DFL Creator"): 197 | nothing = 0 198 | 199 | with gr.Column(scale=2): 200 | gr.Markdown("Preview/Browser") 201 | with gr.Tabs() as tabs_preview: 202 | with gr.TabItem("Status", id=0): 203 | nothing = 0 204 | with gr.TabItem("Workspaces", id=1): 205 | nothing = 0 206 | with gr.TabItem("Videos", id=2): 207 | with gr.Tabs() as tabs_preview_videos: 208 | with gr.TabItem("Browser", id=0): 209 | browser_videos_gallery = gr.Gallery(fn=get_videos_list_full_path) 210 | browser_videos_gallery.style(grid=4, height=4, container=True) 211 | with gr.TabItem("Preview", id=1): 212 | with gr.Row(): 213 | videos_dropdown = gr.Dropdown(choices=get_videos_list(), elem_id="videos_dropdown", label="Videos:", interactive=True) 214 | create_refresh_button(videos_dropdown, lambda: None, lambda: {"choices": get_videos_list()}, "refresh_videos_dropdown") 215 | with gr.Row(): 216 | download_video = gr.Button(value="Download Video", variant="gray") 217 | delete_video = gr.Button(value="Delete Video", variant="red") 218 | main_video_preview = gr.Video(interactive=None) 219 | download_video.click(reset_video_dropdown, videos_dropdown, videos_dropdown) 220 | delete_video.click(action_delete_video, videos_dropdown, main_video_preview) 221 | videos_dropdown.change(get_current_video_path, videos_dropdown, main_video_preview) 222 | with gr.TabItem("XSEG", id=3): 223 | with gr.Tabs() as tabs_preview_xseg: 224 | with gr.TabItem("Browser", id=0): 225 | nothing = 0 226 | with gr.TabItem("Viewer", id=1): 227 | with gr.Row(): 228 | xseg_dropdown = gr.Dropdown(choices=get_xseg_models_list(), elem_id="xseg_dropdown", label="XSEG Model:", interactive=True) 229 | create_refresh_button(xseg_dropdown, lambda: None, lambda: {"choices": get_xseg_models_list()}, "refresh_xseg_list") 230 | 231 | with gr.TabItem("SAEHD", id=4): 232 | with gr.Tabs() as tabs_preview_saehd: 233 | with gr.TabItem("Browser", id=0): 234 | nothing = 0 235 | with 
gr.TabItem("Viewer", id=1): 236 | with gr.Row(): 237 | saehd_dropdown = gr.Dropdown(choices=get_saehd_models_list(), elem_id="saehd_dropdown", label="SAEHD Model:", interactive=True) 238 | create_refresh_button(saehd_dropdown, lambda: None, lambda: {"choices": get_saehd_models_list()}, "refresh_dfl_list") 239 | 240 | with gr.TabItem("Facesets", id=5): 241 | with gr.Tabs() as tabs_preview_facesets: 242 | with gr.TabItem("Browser", id=0): 243 | nothing = 0 244 | with gr.TabItem("Viewer", id=1): 245 | with gr.Row(): 246 | faceset_dropdown = gr.Dropdown(choices=get_pak_list(), elem_id="faceset_dropdown", label="Faceset:", interactive=True) 247 | create_refresh_button(faceset_dropdown, lambda: None, lambda: {"choices": get_pak_list()}, "refresh_faceset_list") 248 | with gr.TabItem("DFL", id=6): 249 | with gr.Tabs() as tabs_preview_dfl: 250 | with gr.TabItem("Browser", id=0): 251 | nothing = 0 252 | with gr.TabItem("Viewer", id=1): 253 | with gr.Row(): 254 | dfl_dropdown = gr.Dropdown(choices=get_dfl_list(), elem_id="dfl_dropdown", label="DFL:", interactive=True) 255 | create_refresh_button(dfl_dropdown, lambda: None, lambda: {"choices": get_dfl_list()}, "refresh_dfl_list") 256 | with gr.Column(scale=1): 257 | gr.Markdown("Upload") 258 | with gr.Tab("Videos"): 259 | render_dataset_videos() 260 | with gr.Tab("XSEG"): 261 | render_dataset_xseg() 262 | with gr.Tab("SAEHD"): 263 | render_dataset_saehd() 264 | with gr.Tab("Facesets"): 265 | render_dataset_facesets() 266 | with gr.Tab("DFL"): 267 | render_dataset_dfl() 268 | 269 | return (training_picker, "DeepFaceLab", "deepfacelab"), 270 | 271 | 272 | def on_ui_settings(): 273 | dfl_path = Path(paths.script_path) / "deepfacelab" 274 | section = ('deepfacelab', "DeepFaceLab") 275 | opts.add_option("deepfacelab_dflab_repo_path", OptionInfo(str(dfl_path / "dflab-repo"), "Path to DeepFaceLab repo are located", section=section)) 276 | opts.add_option("deepfacelab_dflive_repo_path", OptionInfo(str(dfl_path / "dflive-repo"), "Path to DeepFaceLive repo are located", section=section)) 277 | opts.add_option("deepfacelab_workspaces_path", OptionInfo(str(dfl_path / "workspaces"), "Path to dir where DeepFaceLab workspaces are located", section=section)) 278 | opts.add_option("deepfacelab_dfl_path", OptionInfo(str(dfl_path / "dfl-files"), "Path to read/write .dfl files from", section=section)) 279 | opts.add_option("deepfacelab_pak_path", OptionInfo(str(dfl_path / "pak-files"), "Default facesets .pak image directory", section=section)) 280 | opts.add_option("deepfacelab_xseg_path", OptionInfo(str(dfl_path / "xseg-models"), "Default XSeg path for XSeg models directory", section=section)) 281 | opts.add_option("deepfacelab_saehd_path", OptionInfo(str(dfl_path / "saehd-models"), "Default path for SAEHD models directory", section=section)) 282 | opts.add_option("deepfacelab_videos_path", OptionInfo(str(dfl_path / "videos"), "Default path for Videos for deepfacelab", section=section)) 283 | opts.add_option("deepfacelab_videos_frames_path", OptionInfo(str(dfl_path / "videos-frames"), "Default path for Video Frames for deepfacelab", section=section)) 284 | opts.add_option("deepfacelab_tmp_path", OptionInfo(str(dfl_path / "tmp"), "Default path for tmp actions for deepfacelab", section=section)) 285 | 286 | 287 | script_callbacks.on_ui_settings(on_ui_settings) 288 | script_callbacks.on_ui_tabs(on_ui_tabs) 289 | 290 | 291 | 292 | import os 293 | from pathlib import Path 294 | 295 | class DflCommunicator: 296 | @staticmethod 297 | def extract_frames(video_path, frames_dir, 
file_format_output='jpg', file_format_output_quality=100): 298 | # Create the frames directory if it doesn't exist 299 | if not os.path.exists(frames_dir): 300 | os.makedirs(frames_dir) 301 | 302 | # Open the video file 303 | cap = cv2.VideoCapture(video_path) 304 | 305 | # Get the frames per second (FPS) of the video 306 | fps = cap.get(cv2.CAP_PROP_FPS) 307 | 308 | # Initialize a counter for the frames 309 | frame_count = 0 310 | 311 | # Loop through the frames of the video 312 | while cap.isOpened(): 313 | # Read a frame from the video 314 | ret, frame = cap.read() 315 | 316 | # If there are no more frames, break out of the loop 317 | if not ret: 318 | break 319 | 320 | # Save the frame as a file in the frames directory 321 | file_extension = '.' + file_format_output 322 | frame_file = os.path.join(frames_dir, f"{frame_count:06d}{file_extension}") 323 | if file_format_output == 'jpg': 324 | cv2.imwrite(frame_file, frame, [int(cv2.IMWRITE_JPEG_QUALITY), file_format_output_quality]) 325 | elif file_format_output == 'png': 326 | cv2.imwrite(frame_file, frame, [int(cv2.IMWRITE_PNG_COMPRESSION), file_format_output_quality]) 327 | 328 | # Increment the frame counter 329 | frame_count += 1 330 | 331 | # Release the video capture object 332 | cap.release() 333 | 334 | # Return the number of frames extracted 335 | return frame_count 336 | 337 | class DflFiles: 338 | 339 | @staticmethod 340 | def folder_exists(dir_path): 341 | """ 342 | Check whether a directory exists. 343 | 344 | Returns True if the directory exists, False otherwise. 345 | """ 346 | return os.path.exists(dir_path) and os.path.isdir(dir_path) 347 | 348 | @staticmethod 349 | def create_folder(path): 350 | """Create a folder at the specified path""" 351 | os.makedirs(path, exist_ok=True) 352 | 353 | @staticmethod 354 | def delete_folder(path): 355 | """Delete a folder and all its contents at the specified path""" 356 | os.removedirs(path) 357 | 358 | @staticmethod 359 | def empty_folder(path): 360 | """Clear the contents of a folder at the specified path""" 361 | for file_name in os.listdir(path): 362 | file_path = os.path.join(path, file_name) 363 | if os.path.isfile(file_path): 364 | os.remove(file_path) 365 | 366 | @staticmethod 367 | def create_empty_file(path): 368 | """Create an empty file at the specified path""" 369 | open(path, 'a').close() 370 | 371 | @staticmethod 372 | def delete_file(path): 373 | """Delete a file at the specified path""" 374 | os.remove(path) 375 | 376 | @staticmethod 377 | def move_file(src, dst): 378 | """Move a file from the source path to the destination path""" 379 | os.replace(src, dst) 380 | 381 | @staticmethod 382 | def get_files_from_dir(base_dir, extension_list): 383 | """Return a list of file names in a directory with a matching file extension""" 384 | files = [] 385 | for v in Path(base_dir).iterdir(): 386 | if v.suffix in extension_list and not v.name.startswith('.ipynb'): 387 | files.append(v.name) 388 | return files 389 | 390 | @staticmethod 391 | def get_folder_names_in_dir(base_dir): 392 | """Return a list of folder names in a directory""" 393 | folders = [] 394 | for v in Path(base_dir).iterdir(): 395 | if v.is_dir() and not v.name.startswith('.ipynb'): 396 | folders.append(v.name) 397 | return folders 398 | 399 | @staticmethod 400 | def extract_archive(archive_path, dest_dir, dir_name): 401 | """ 402 | Extract an archive file to a directory with a specified name. 403 | 404 | The extracted directory will be created inside the destination directory with the specified name. 
405 | If the name is taken, a suffix will be added to create a unique name. 406 | 407 | Returns the full path of the directory where the contents were extracted. 408 | """ 409 | suffix = '' 410 | extracted_dir_name = dir_name 411 | while os.path.exists(os.path.join(dest_dir, extracted_dir_name + suffix)): 412 | if not suffix: 413 | suffix = 1 414 | else: 415 | suffix += 1 416 | extracted_dir_name = f'{dir_name}_{suffix}' 417 | 418 | extracted_dir_path = os.path.join(dest_dir, extracted_dir_name) 419 | os.makedirs(extracted_dir_path, exist_ok=True) 420 | 421 | # Define a dictionary that maps file extensions to archive types 422 | archive_types = { 423 | '.zip': 'zip', 424 | '.rar': 'rar', 425 | '.7z': '7z', 426 | '.tar': 'tar', 427 | '.tar.gz': 'gztar', 428 | '.tgz': 'gztar' 429 | } 430 | 431 | # Determine the type of archive file based on the file extension 432 | file_ext = os.path.splitext(archive_path)[1].lower() 433 | 434 | # Use the appropriate function from the `shutil` library to extract the archive 435 | shutil.unpack_archive(archive_path, extracted_dir_path, archive_types[file_ext]) 436 | 437 | return extracted_dir_path 438 | 439 | @staticmethod 440 | def copy_file_to_dest_dir(temp_file_path, file_original_name, dest_dir): 441 | """ 442 | Copy a file to a destination directory using the original file name. 443 | 444 | If a file with the same name already exists in the destination directory, a suffix 445 | will be added to the file name until a unique name is found. 446 | 447 | Returns the full path of the copied file. 448 | """ 449 | suffix = '' 450 | dest_file_name = file_original_name 451 | while os.path.exists(os.path.join(dest_dir, dest_file_name)): 452 | if not suffix: 453 | suffix = 1 454 | else: 455 | suffix += 1 456 | dest_file_name = f'{os.path.splitext(file_original_name)[0]}_{suffix}{os.path.splitext(file_original_name)[1]}' 457 | 458 | dest_file_path = os.path.join(dest_dir, dest_file_name) 459 | shutil.copyfile(temp_file_path, dest_file_path) 460 | 461 | return dest_file_path 462 | 463 | class DflOptions: 464 | def __init__(self, opts): 465 | self.dfl_files = DflFiles() 466 | self.dfl_path = Path(opts.deepfacelab_dfl_path) 467 | self.workspaces_path = Path(opts.deepfacelab_workspaces_path) 468 | self.pak_path = Path(opts.deepfacelab_pak_path) 469 | self.xseg_path = Path(opts.deepfacelab_xseg_path) 470 | self.saehd_path = Path(opts.deepfacelab_saehd_path) 471 | self.videos_path = Path(opts.deepfacelab_videos_path) 472 | self.videos_frames_path = Path(opts.deepfacelab_videos_frames_path) 473 | self.tmp_path = Path(opts.deepfacelab_tmp_path) 474 | self.dflab_repo = Path(opts.deepfacelab_dflab_repo_path) 475 | self.dflive_repo = Path(opts.deepfacelab_dflive_repo_path) 476 | 477 | # Create dirs if not existing 478 | for p in [self.dfl_path, self.workspaces_path, self.pak_path, self.xseg_path, self.saehd_path, self.videos_path, self.videos_frames_path, self.tmp_path]: 479 | self.dfl_files.create_folder(p) 480 | 481 | def get_dfl_path(self): 482 | return self.dfl_path 483 | 484 | def get_workspaces_path(self): 485 | return self.workspaces_path 486 | 487 | def get_pak_path(self): 488 | return self.pak_path 489 | 490 | def get_xseg_path(self): 491 | return self.xseg_path 492 | 493 | def get_saehd_path(self): 494 | return self.saehd_path 495 | 496 | def get_videos_path(self): 497 | return self.videos_path 498 | 499 | def get_videos_frames_path(self): 500 | return self.videos_frames_path 501 | 502 | def get_tmp_path(self): 503 | return self.tmp_path 504 | 505 | def 
get_dflab_repo_path(self): 506 | return self.dflab_repo 507 | 508 | def get_dflive_repo_path(self): 509 | return self.dflive_repo 510 | 511 | class DflAction: 512 | dflFiles = DflFiles() 513 | 514 | @staticmethod 515 | def extract_frames(videos_dropdown): 516 | return "Video: " + str(videos_dropdown) 517 | 518 | @staticmethod 519 | def extract_faceset(videos_dropdown, 520 | faceset_output_facetype_dropdown, 521 | faceset_output_resolution_dropdown, 522 | faceset_output_type_dropdown, 523 | faceset_output_quality_dropdown, 524 | faceset_output_debug_dropdown): 525 | return "Video: " + str(videos_dropdown) + " Output Face Type: " + faceset_output_facetype_dropdown -------------------------------------------------------------------------------- /scripts/deepfacelive.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | 4 | # Get the absolute path of the directory containing the importing file 5 | current_dir = os.path.dirname(os.path.abspath(__file__)) 6 | 7 | # Append the relative path to the importing file to the system path 8 | sys.path.append(os.path.join(current_dir, '../repo/dflive/')) 9 | 10 | import cv2 11 | from PIL import Image 12 | import numpy as np 13 | import gradio as gr 14 | from pathlib import Path 15 | 16 | from modules import processing, images 17 | from modules import scripts, script_callbacks, shared, devices, modelloader 18 | from modules.processing import Processed, StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img 19 | from modules.shared import opts, cmd_opts, state 20 | from modules.sd_models import model_hash 21 | from modules.paths import models_path 22 | from basicsr.utils.download_util import load_file_from_url 23 | from modules.ui import create_refresh_button 24 | from scripts.ddetailerutils import DetectionDetailerScript, preload_ddetailer_model 25 | 26 | from scripts.command import execute_deep_face_live_multiple 27 | dd_models_path = os.path.join(models_path, "mmdet") 28 | 29 | from scripts.dflutils import DflOptions, DflFiles 30 | dfl_options = DflOptions(opts) 31 | 32 | def list_models(include_downloadable=True): 33 | dfl_dropdown = [] 34 | dfl_dropdown.append("None") 35 | #dfl_dropdown.append("Automatic") 36 | dfl_list = dfl_options.get_dfl_list(include_downloadable) 37 | for dfl in dfl_list: 38 | dfl_dropdown.append(dfl[0]) 39 | return dfl_dropdown 40 | 41 | def download_url(url, filename): 42 | load_file_from_url(url, filename) 43 | 44 | from urllib.parse import urlparse 45 | 46 | def is_url(string): 47 | try: 48 | result = urlparse(string) 49 | # Check if the URL has a scheme (e.g., http or https) and a network location (e.g., www.example.com) 50 | return all([result.scheme, result.netloc]) 51 | except ValueError: 52 | return False 53 | 54 | def get_model_url(model_name): 55 | dfl_list = dfl_options.get_dfl_list(True) 56 | for dfl in dfl_list: 57 | if dfl[0] == model_name and is_url(dfl[1]): 58 | return dfl[1] 59 | return None 60 | 61 | def list_detectors(): 62 | return ['CenterFace', 'S3FD', 'YoloV5'] 63 | 64 | def list_markers(): 65 | return ['OpenCV LBF', 'Google FaceMesh', 'InsightFace_2D106'] 66 | 67 | def list_align_modes(): 68 | return ['From rectangle', 'From points', 'From static rect'] 69 | 70 | def startup(): 71 | if (DflFiles.folder_exists(dd_models_path) == False): 72 | print("No detection models found, downloading...") 73 | bbox_path = os.path.join(dd_models_path, "bbox") 74 | 
load_file_from_url("https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth", bbox_path) 75 | load_file_from_url("https://huggingface.co/dustysys/ddetailer/raw/main/mmdet/bbox/mmdet_anime-face_yolov3.py", bbox_path) 76 | 77 | 78 | startup() 79 | 80 | def gr_show(visible=True): 81 | return {"visible": visible, "__type__": "update"} 82 | 83 | 84 | class DeepFaceLive(scripts.Script): 85 | def title(self): 86 | return "DeepFaceLive - AI Face Swap/Recovery" 87 | 88 | def show(self, is_img2img): 89 | return True 90 | 91 | 92 | def ui(self, is_img2img): 93 | import modules.ui 94 | 95 | gr.HTML("") 96 | with gr.Group(): 97 | with gr.Row(): 98 | gr.HTML("
You can put the models into " + str(dfl_options.dfl_path) + "