├── .gitignore ├── README.md ├── build_engine.py ├── dbd ├── AI_model.py ├── __init__.py ├── datasets │ ├── __init__.py │ ├── datasetLoader.py │ └── transforms.py ├── model_to_onnx.py ├── networks │ ├── __init__.py │ └── model.py ├── predict_folder.py ├── preprocess_data.py ├── save_frames.py ├── train.py └── utils │ ├── __init__.py │ ├── dataset_utils.py │ ├── directkeys.py │ └── frame_grabber.py ├── images ├── demo.gif ├── merciless.png ├── repair.png ├── run_1.png ├── run_2.png ├── struggle.png └── wiggle.png ├── model.onnx ├── run_monitoring_gradio.py └── run_single_pred_gradio.py /.gitignore: -------------------------------------------------------------------------------- 1 | dataset*/ 2 | *.ckpt 3 | *.yaml 4 | .idea/ 5 | *.pyc 6 | lightning_logs/ 7 | .ipynb_* 8 | saved_images/ 9 | dist/ 10 | build/ 11 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Disclaimer 2 | 3 | **This project is intended for research and educational purposes in the field of deep learning and how computer vision AI can help in video games.** 4 | 5 | Using it may violate game rules and trigger anti-cheat detection. The author is not responsible for any consequences resulting from its use, this includes bans or any other unspecified violations. Use at your own risk. Join the discord server for more info. 6 | 7 | # DBD Auto Skill Check 8 | 9 | The Dead by Daylight Auto Skill Check is a tool developed using AI (deep learning with PyTorch) to automatically detect and successfully hit skill checks in the popular game Dead by Daylight. 10 | This tool is designed to demonstrate how AI can improve gameplay performance and enhance the player's skill in the game. 11 | 12 | 13 | | Demo (x2 speed) | 14 | |-------------------------------------------------------------------| 15 | |  | 16 | 17 | 18 | 19 | * [DBD Auto Skill Check](#dbd-auto-skill-check) 20 | * [Features](#features) 21 | * [Execution Instructions](#execution-instructions) 22 | * [Windows standalone app](#windows-standalone-app) 23 | * [Build from source](#build-from-source) 24 | * [Auto skill-check Web UI](#auto-skill-check-web-ui) 25 | * [Project details](#project-details) 26 | * [What is a skill check](#what-is-a-skill-check) 27 | * [Dataset](#dataset) 28 | * [Architecture](#architecture) 29 | * [Training](#training) 30 | * [Inference](#inference) 31 | * [Results](#results) 32 | * [FAQ](#faq) 33 | * [Acknowledgments](#acknowledgments) 34 | 35 | 36 | # Features 37 | - Real-time detection of skill checks (60fps) 38 | - High accuracy in recognizing **all types of skill checks (with a 98.7% precision, see details of [Results](#results))** 39 | - Automatic triggering of great skill checks through auto-pressing the space bar 40 | - A webUI to run the AI model 41 | - A GPU mode and a slow-CPU-usage mode to reduce CPU overhead 42 | 43 | 44 | # Execution Instructions 45 | 46 | You can run the code: 47 | - From the windows standalone app: just download the .exe file and run it (no install required) 48 | - From source: It's for you if you have some python knowledge, you want to customize the code or run it on GPU 49 | 50 | ## Windows standalone app 51 | 52 | Use the standalone app if you don't want to install anything, but just run the AI script. 53 | 54 | _Warning_: Some players reported that the standalone app can cause EAC suspicious / ban, even in private games. 
That's why I recommend the [Build from source method](#build-from-source), which is much safer to use. More details are available on the Discord server.
55 | 
56 | 1) Go to the [releases page](https://github.com/Manuteaa/dbd_autoSkillCheck/releases)
57 | 2) Download `dbd_autoSkillCheck.zip`
58 | 3) Unzip the file
59 | 4) Run `run_monitoring_gradio.exe`
60 | 5) A console will open (ignore the "file not found" INFO message), then ctrl+click the link http://127.0.0.1:7860 to open the local web app
61 | 6) Run the AI model on the web app, then you can play the game
62 | 7) Learn how to use the script in the [Auto skill-check Web UI instructions](#auto-skill-check-web-ui).
63 | 
64 | ## Build from source
65 | 
66 | I have only tested the model on my own computer running Windows 11 with CUDA version 12.3. I provide two different scripts you can run.
67 | 
68 | Create your own python env (I have python 3.11) and install the minimal necessary libraries using the command :
69 | 
70 | `pip install numpy mss onnxruntime-gpu pyautogui IPython pillow gradio`
71 | 
72 | Then git clone the repo and follow the [Auto skill-check Web UI instructions](#auto-skill-check-web-ui).
73 | 
74 | ## Auto skill-check Web UI
75 | 
76 | Run this script and play the game ! It will hit the space bar for you.
77 | 
78 | - When building from source: `python run_monitoring_gradio.py`
79 | - When using the standalone app: run `run_monitoring_gradio.exe`
80 | 
81 | 1) Select the trained AI model (defaults to `model.onnx`, available in this repo and included within the standalone app)
82 | 2) Select the device to use. Use the default CPU device. GPU is not available with the standalone app. With python, follow the [FAQ](#faq) instructions if you want to use GPU
83 | 3) Choose debug options. If you want to check which screen the script is monitoring, select the first option. If the AI struggles to recognize the skill checks, select the second option to save the results, then you can upload the images in a new GitHub issue
84 | 4) Select additional features options. Keep the default values unless you have read the [FAQ](#faq) and know what you are doing
85 | 5) Click 'RUN'
86 | 6) You can STOP and RUN the script from the Web UI at will, for example while waiting in the game lobby
87 | 
88 | Your main screen is now monitored, meaning that frames are regularly sampled (with a center-crop) and analysed locally with the trained AI model.
89 | You can play the game on your main monitor.
90 | When a great skill check is detected, the SPACE key is automatically pressed, then the script waits for 0.5s to avoid triggering the same skill check multiple times in a row.
91 | 
92 | 
93 | | Auto skill check example 1 | Auto skill check example 2 |
94 | |---------------------------------------|---------------------------------------|
95 | |  |  |
96 | 
97 | 
98 | On the right of the web UI, we display :
99 | - The AI model FPS : the number of frames per second the AI model processes
100 | - The last hit skill check frame : the last frame (a 224x224 center-cropped image) on which the AI model triggered the SPACE bar. **This may not be the actual hit frame (as registered by the game) because of game latency (such as ping). 
The AI model anticipates the latency and hits the space bar a little bit before the cursor reaches the great area, which is why the displayed frame will always be a few frames before the actual in-game hit frame**
101 | - Skill check recognition : the set of class probabilities for the frame displayed above
102 | 
103 | **Both the game AND the AI model must run at 60fps (or more) in order to correctly hit the great skill checks.**
104 | I had a problem of low FPS on Windows : when the script was in the background (while I played), the FPS dropped significantly. Running the script as administrator solved the problem (see the [FAQ](#faq)).
105 | 
106 | # Project details
107 | 
108 | ## What is a skill check
109 | 
110 | A skill check is a game mechanic in Dead by Daylight that allows the player to progress faster in a specific action such as repairing generators or healing teammates.
111 | It occurs randomly and requires players to press the space bar to stop the progression of a red cursor.
112 | 
113 | Skill checks can be:
114 | - failed, if the cursor misses the designated white zone (the hit area)
115 | - successful, if the cursor lands in the white zone
116 | - or greatly successful, if the cursor accurately hits the white-filled zone
117 | 
118 | Here are examples of different great skill checks:
119 | 
120 | | Repair-Heal skill check | Wiggle skill check | Full white skill check | Full black skill check |
121 | |:-------------------------------:|:-------------------------------:|:-----------------------------------:|:-------------------------------------:|
122 | |  |  |  |  |
123 | 
124 | Successfully hitting a skill check increases the speed of the corresponding action, and a greatly successful skill check provides even greater rewards.
125 | On the other hand, missing a skill check reduces the action's progression speed and alerts the enemy with a loud sound.
126 | 
127 | 
128 | ## Dataset
129 | We designed a custom dataset from in-game screen recordings and frame extraction from gameplay videos on YouTube.
130 | To save disk space, we center-crop each frame to size 320x320 before saving.
131 | 
132 | The data was manually divided into 11 separate folders based on :
133 | - The visible skill check type : repairing/healing, struggle, wiggle and special skill checks (overcharge, merciless storm, etc.), because the skill check appearance differs depending on the type
134 | - The position of the cursor relative to the area to hit : outside, a bit before the hit area, and inside the hit area.
135 | 
136 | **We experimentally concluded that, depending on the skill check type, we must hit the space bar a bit before the cursor reaches the great area, in order to anticipate the game's input processing latency.
137 | That's why we have this dataset structure and granularity (with ante-frontier and frontier area recognition).**
138 | 
139 | To alleviate the laborious collection task, we employed data augmentation techniques such as random rotations, random crop-resize, and random brightness/contrast/saturation adjustments.
140 | 
141 | We developed a customized and optimized dataloader that automatically parses the dataset folder and assigns the correct label to each image based on its corresponding folder.
142 | Our data loaders use a custom sampler to handle imbalanced data.
143 | 
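Conceptually, the folder-based labeling and the balanced sampling boil down to the following minimal sketch (the actual implementation lives in `dbd/datasets/datasetLoader.py`, shown further down in this repository; the dataset path and the commented-out loader are only illustrative):

```python
import os
from glob import glob

import torch
from torch.utils.data import WeightedRandomSampler

# Each sub-folder is named after its class index ("0", "1", ..., "10"),
# so the folder name directly gives the label of every image it contains.
image_paths, labels = [], []
for entry in os.scandir("dataset/"):  # illustrative dataset root
    if entry.is_dir() and entry.name.isdigit():
        files = glob(os.path.join(entry.path, "*.*"))
        image_paths += files
        labels += [int(entry.name)] * len(files)

targets = torch.tensor(labels, dtype=torch.int64)

# Balanced sampling: every class gets the same chance of being drawn,
# which compensates for the heavily imbalanced folder sizes.
class_weights = 1.0 / torch.bincount(targets)
sample_weights = class_weights[targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)

# The sampler is then passed to the training DataLoader, e.g.:
# dataloader = DataLoader(dataset, batch_size=32, sampler=sampler)
```
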
144 | ## Architecture
145 | The skill check detection system is based on an encoder-decoder architecture.
146 | 
147 | We employ the MobileNet V3 Small architecture, specifically chosen for its trade-off between inference speed and accuracy.
148 | This ensures real-time inference and quick decision-making without compromising detection precision.
149 | We also compared it with MobileNet V3 Large, but the accuracy gain was not worth the bigger model size (20 MB instead of 6 MB) and the slower inference speed.
150 | 
151 | We manually modified the last layer of the decoder: initially designed to classify 1000 different categories of real-world objects, it was replaced with an 11-category layer.
152 | 
153 | ## Training
154 | 
155 | We use a standard cross-entropy loss to train the model and monitor the training process using per-category accuracy scores.
156 | 
157 | I trained the model on my own computer and on an AWS _g6.4xlarge_ EC2 instance (around 1.5x faster to train than my computer).
158 | 
159 | 
160 | ## Inference
161 | We provide a script that loads the trained model and monitors the main screen.
162 | For each sampled frame, the script center-crops and normalizes the image, then feeds it to the AI model.
163 | 
164 | Depending on the result of the skill check recognition, the script automatically presses the space bar to trigger the great skill check (or not),
165 | then waits for a short period of time to avoid triggering the same skill check multiple times in a row.
166 | 
167 | To achieve real-time results, we convert the model to the ONNX format and use ONNX Runtime to perform inference.
168 | We observed a 1.5x to 2x speedup compared to baseline inference.
169 | 
170 | ## Results
171 | 
172 | We test our model on a test dataset of ~2000 images:
173 | 
174 | | Category Index | Category description | Mean accuracy |
175 | |----------------|-----------------------------|---------------|
176 | | 0 | None | 100.0% |
177 | | 1 | repair-heal (great) | 99.5% |
178 | | 2 | repair-heal (ante-frontier) | 96.5% |
179 | | 3 | repair-heal (out) | 98.7% |
180 | | 4 | full white (great) | 100% |
181 | | 5 | full white (out) | 100% |
182 | | 6 | full black (great) | 100% |
183 | | 7 | full black (out) | 98.9% |
184 | | 8 | wiggle (great) | 93.4% |
185 | | 9 | wiggle (frontier) | 100% |
186 | | 10 | wiggle (out) | 98.3% |
187 | 
188 | 
189 | During our laptop testing, we observed rapid inference times of approximately 10ms per frame using MobileNet V3 Small.
190 | When combined with our screen monitoring script, we achieved a consistent 60fps detection rate, which is enough for real-time detection.
191 | 
192 | In conclusion, our model achieves high accuracy thanks to the high-quality dataset, effective data augmentation techniques, and sound architectural choices.
193 | **The RUN script successfully hits the great skill checks with high confidence.**
194 | 
195 | # FAQ
196 | 
197 | **What about the anti-cheat system ?**
198 | - The script monitors a small crop of your main screen, processes it with an ONNX model, and can press then release the space bar using [Windows MSDN](https://learn.microsoft.com/en-us/windows/win32/inputdev/virtual-key-codes?redirectedfrom=MSDN) virtual-key codes, at most once every 0.5s. This Win32 `SendInput` key injection can be considered an "unfair advantage" by EAC, potentially leading to a ban. For this reason, the script should only be used in private games. However, if you still wish to use it in public matches, you can join the Discord server for more details. These specifics will not be shared publicly. 
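To make the answer above concrete, the core monitoring loop is roughly the following (a simplified sketch of what `run_monitoring_gradio.py` and `dbd/AI_model.py` do, both shown further down in this repository; it is not the exact code and omits the web UI, debug and FPS logic):

```python
from time import sleep

from dbd.AI_model import AI_model
from dbd.utils.directkeys import PressKey, ReleaseKey, SPACE

ai_model = AI_model("model.onnx", use_gpu=False)  # ONNX model shipped with the repo

while True:
    screenshot = ai_model.grab_screenshot()             # small center crop of the main screen
    image_pil = ai_model.screenshot_to_pil(screenshot)  # 224x224 RGB image
    image_np = ai_model.pil_to_numpy(image_pil)         # normalized NCHW float array

    pred, desc, probs, should_hit = ai_model.predict(image_np)

    if should_hit:
        PressKey(SPACE)    # single SendInput key press...
        ReleaseKey(SPACE)  # ...immediately released
        sleep(0.5)         # at most one hit every 0.5s
```
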
199 | 
200 | **How to run the AI model with your GPU (NVIDIA - CUDA)?**
201 | - Check that you have installed `onnxruntime-gpu` and not `onnxruntime` (if not, uninstall onnxruntime before installing onnxruntime-gpu)
202 | - Check the onnxruntime-gpu version compatibility with CUDA, CUDNN and torch: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
203 | - Install CUDA 12.x (I have 12.3)
204 | - Install torch with CUDA support (I have 2.4.0 with the CUDA 12.1 compute platform) https://pytorch.org/get-started/locally/
205 | - Install CUDNN 9.x (I have 9.4)
206 | - Install the latest version of MSVC
207 | - Select "GPU" in the Auto skill check webUI, click "RUN" and check whether you get a warning message
208 | 
209 | **What about AMD GPUs/GPUs without CUDA?**
210 | - Uninstall onnxruntime-gpu by running `pip uninstall onnxruntime-gpu`
211 | - Install onnxruntime DirectML with `pip install onnxruntime-directml`, which allows GPU-accelerated inference without an NVIDIA GPU
212 | 
213 | **Why does the script do nothing ?**
214 | - Check that the AI model correctly monitors your game: set the debug option of the webui to "display the monitored frame". Play the game and check that the skill check is displayed correctly
215 | - Check that there are no errors in the python console logs
216 | - Use standard game settings (I recommend using 1080p at 100% resolution without any game filters, no vsync, no FSR): your displayed "last hit skill check frame" images should be similar to the ones in my examples
217 | - Check if you do not use a potato instead of a computer
218 | 
219 | **Why do I hit good skill checks instead of great ones ? Be sure that :**
220 | - Your game FPS >= 60
221 | - The AI model FPS >= 60
222 | - Your ping is not too high (<= 60 should be fine)
223 | - You use standard game settings (I recommend using 1080p at 100% resolution without any game filters, no vsync, no FSR)
224 | - In the `Features options` of the WebUI, decrease the `Ante-frontier hit delay` value
225 | 
226 | 
227 | **I get less than 60 FPS for the AI model, what can I do ?**
228 | - In the `Features options` of the WebUI, increase the `CPU workload` option to `normal` or `max`
229 | - Switch the device to GPU
230 | - Disable the energy saver settings in your computer settings
231 | - Run the script in administrator mode
232 | 
233 | **Why does the AI model hit the skill check too early and fail ?**
234 | - In the `Features options` of the WebUI, increase the `Ante-frontier hit delay` value
235 | 
236 | **Does the script work well with the perk Hyperfocus ?**
237 | - Yes
238 | 
239 | **Does the script work well for skill checks in random locations (doctor skill checks) ?**
240 | - Unfortunately, the script only monitors a small part of the center of your screen. It cannot see skill checks outside this area. Even if you make it work by editing the code (for example, capturing the whole screen and resizing the frames to 224x224), the AI model was not trained to handle these special skill checks, so it will not work very well...
241 | 
242 | # Acknowledgments
243 | 
244 | The project was made and is maintained by me ([Manuteaa](https://github.com/Manuteaa)). If you enjoy this project, consider giving it a ⭐! Starring the repository helps others discover it, and shows support for the work put into it. Your stars motivate me to add new features and address any bugs.
245 | 
246 | Feel free to open a new issue for any question, suggestion or issue. 
You can also join the [discord server](https://discord.gg/3mewehHHpZ) for more info and help. 247 | 248 | - A big thanks to [hemlock12](https://github.com/hemlock12) for the data collection help ! 249 | - Thanks to [SouthernFrenzy](https://github.com/SouthernFrenzy) for the help and time to manage the discord server 250 | - Thanks [KevinSade](https://github.com/KevinSade) for the contribution to the discord server 251 | 252 | -------------------------------------------------------------------------------- /build_engine.py: -------------------------------------------------------------------------------- 1 | import tensorrt as trt 2 | 3 | onnx_model_path = "model.onnx" 4 | engine_file = "model_fp32.engine" 5 | 6 | logger = trt.Logger(trt.Logger.WARNING) 7 | builder = trt.Builder(logger) 8 | network_flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH) 9 | network = builder.create_network(flags=network_flags) 10 | 11 | print("🔄 Parsing ONNX model...") 12 | parser = trt.OnnxParser(network, logger) 13 | with open(onnx_model_path, 'rb') as model_file: 14 | if not parser.parse(model_file.read()): 15 | for i in range(parser.num_errors): 16 | print(parser.get_error(i)) 17 | raise RuntimeError("❌ Failed to parse ONNX model") 18 | 19 | # Create builder config 20 | config = builder.create_builder_config() 21 | config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30) # 2 GB 22 | 23 | # Set optimization profile 24 | profile = builder.create_optimization_profile() 25 | profile.set_shape("input", (1, 3, 224, 224), (1, 3, 224, 224), (1, 3, 224, 224)) 26 | config.add_optimization_profile(profile) 27 | print("⚙️ Creating TensorRT engine...") 28 | 29 | # Build serialized engine 30 | serialized_engine = builder.build_serialized_network(network, config) 31 | 32 | 33 | if serialized_engine is None: 34 | raise RuntimeError("❌ Engine build failed") 35 | 36 | print("💾 Exporting engine file...") 37 | with open(engine_file, "wb") as f: 38 | f.write(serialized_engine) 39 | 40 | print(f"✅ Engine saved as '{engine_file}'") 41 | -------------------------------------------------------------------------------- /dbd/AI_model.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from PIL import Image 3 | from mss import mss 4 | import onnxruntime as ort 5 | import atexit 6 | import sys 7 | from pyautogui import size as pyautogui_size 8 | 9 | try: 10 | import torch 11 | import tensorrt as trt 12 | except ImportError as e: 13 | print(e) 14 | 15 | 16 | def get_monitor_attributes(): 17 | width, height = pyautogui_size() 18 | object_size_h_ratio = 224 / 1080 19 | object_size = int(object_size_h_ratio * height) 20 | 21 | return { 22 | "top": height // 2 - object_size // 2, 23 | "left": width // 2 - object_size // 2, 24 | "width": object_size, 25 | "height": object_size 26 | } 27 | 28 | class AI_model: 29 | MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32) 30 | STD = np.array([0.229, 0.224, 0.225], dtype=np.float32) 31 | 32 | pred_dict = { 33 | 0: {"desc": "None", "hit": False}, 34 | 1: {"desc": "repair-heal (great)", "hit": True}, 35 | 2: {"desc": "repair-heal (ante-frontier)", "hit": True}, 36 | 3: {"desc": "repair-heal (out)", "hit": False}, 37 | 4: {"desc": "full white (great)", "hit": True}, 38 | 5: {"desc": "full white (out)", "hit": False}, 39 | 6: {"desc": "full black (great)", "hit": True}, 40 | 7: {"desc": "full black (out)", "hit": False}, 41 | 8: {"desc": "wiggle (great)", "hit": True}, 42 | 9: {"desc": "wiggle (frontier)", "hit": 
False}, 43 | 10: {"desc": "wiggle (out)", "hit": False} 44 | } 45 | 46 | def __init__(self, model_path="model.onnx", use_gpu=False, nb_cpu_threads=None): 47 | self.model_path = model_path 48 | self.use_gpu = use_gpu 49 | self.nb_cpu_threads = nb_cpu_threads 50 | self.mss = mss() 51 | self.monitor = get_monitor_attributes() 52 | 53 | self.context = None 54 | self.engine = None 55 | 56 | if model_path.endswith(".engine"): 57 | assert self.use_gpu, "TensorRT engine model requires GPU mode" 58 | assert "torch" in sys.modules, "TensorRT engine model requires torch lib" 59 | assert "tensorrt" in sys.modules, "TensorRT engine model requires tensorrt lib" 60 | self.load_tensorrt() 61 | else: 62 | self.load_onnx() 63 | 64 | atexit.register(self.cleanup) 65 | 66 | def cleanup(self): 67 | if self.is_tensorrt: 68 | del self.context 69 | del self.engine 70 | torch.cuda.empty_cache() 71 | 72 | def grab_screenshot(self): 73 | return self.mss.grab(self.monitor) 74 | 75 | def screenshot_to_pil(self, screenshot): 76 | pil_image = Image.frombytes("RGB", screenshot.size, screenshot.bgra, "raw", "BGRX") 77 | if pil_image.width != 224 or pil_image.height != 224: 78 | pil_image = pil_image.resize((224, 224), Image.Resampling.LANCZOS) 79 | return pil_image 80 | 81 | def pil_to_numpy(self, image_pil): 82 | img = np.asarray(image_pil, dtype=np.float32) / 255.0 83 | img = np.transpose(img, (2, 0, 1)) 84 | img = (img - self.MEAN[:, None, None]) / self.STD[:, None, None] 85 | return np.expand_dims(img, axis=0) 86 | 87 | def softmax(self, x): 88 | exp_x = np.exp(x - np.max(x)) 89 | return exp_x / np.sum(exp_x) 90 | 91 | def load_onnx(self): 92 | sess_options = ort.SessionOptions() 93 | 94 | if not self.use_gpu and self.nb_cpu_threads is not None: 95 | sess_options.intra_op_num_threads = self.nb_cpu_threads 96 | sess_options.inter_op_num_threads = self.nb_cpu_threads 97 | 98 | if self.use_gpu: 99 | assert "torch" in sys.modules, "GPU mode requires torch lib" 100 | available_providers = ort.get_available_providers() 101 | preferred_execution_providers = ['CUDAExecutionProvider', 'DmlExecutionProvider', 'CPUExecutionProvider'] 102 | execution_providers = [p for p in preferred_execution_providers if p in available_providers] 103 | else: 104 | execution_providers = ["CPUExecutionProvider"] 105 | 106 | self.ort_session = ort.InferenceSession( 107 | self.model_path, providers=execution_providers, sess_options=sess_options 108 | ) 109 | 110 | self.input_name = self.ort_session.get_inputs()[0].name 111 | self.input_dtype = self.ort_session.get_inputs()[0].type 112 | self.is_tensorrt = False 113 | 114 | def load_tensorrt(self): 115 | self.is_tensorrt = True 116 | logger = trt.Logger(trt.Logger.WARNING) 117 | runtime = trt.Runtime(logger) 118 | 119 | with open(self.model_path, "rb") as f: 120 | engine_data = f.read() 121 | self.engine = runtime.deserialize_cuda_engine(engine_data) 122 | 123 | self.stream = torch.cuda.Stream() 124 | self.context = self.engine.create_execution_context() 125 | self.inputs, self.outputs, self.bindings = self.allocate_buffers(self.engine) 126 | 127 | def allocate_buffers(self, engine): 128 | inputs, outputs, bindings = [], [], [] 129 | 130 | for i in range(engine.num_io_tensors): 131 | tensor_name = engine.get_tensor_name(i) 132 | tensor_shape = engine.get_tensor_shape(tensor_name) 133 | tensor_dtype = trt.nptype(engine.get_tensor_dtype(tensor_name)) 134 | 135 | if -1 in tensor_shape: 136 | raise ValueError(f"Tensor '{tensor_name}' has a dynamic shape {tensor_shape}. 
Set static dimensions before inference!") 137 | 138 | size = trt.volume(tensor_shape) 139 | device_mem = torch.empty(size, dtype=torch.float32, device="cuda") 140 | host_mem = np.empty(size, dtype=tensor_dtype) 141 | 142 | bindings.append(device_mem.data_ptr()) 143 | 144 | tensor_mode = engine.get_tensor_mode(tensor_name) 145 | tensor_info = {'host': host_mem, 'device': device_mem, 'name': tensor_name} 146 | 147 | if tensor_mode == trt.TensorIOMode.INPUT: 148 | inputs.append(tensor_info) 149 | else: 150 | outputs.append(tensor_info) 151 | 152 | return inputs, outputs, bindings 153 | 154 | def predict(self, image): 155 | if isinstance(image, np.ndarray): 156 | img_np = image 157 | else: 158 | img_np = self.pil_to_numpy(image) 159 | 160 | img_np = np.ascontiguousarray(img_np) 161 | 162 | if self.is_tensorrt: 163 | torch.cuda.synchronize() 164 | torch.cuda.current_stream().wait_stream(self.stream) 165 | 166 | np.copyto(self.inputs[0]['host'], img_np.ravel()) 167 | self.inputs[0]['device'].copy_(torch.tensor(self.inputs[0]['host'], dtype=torch.float32, device="cuda")) 168 | 169 | self.context.execute_v2(bindings=self.bindings) 170 | 171 | stream = torch.cuda.Stream() 172 | with torch.cuda.stream(stream): 173 | output_tensor = self.outputs[0]['device'].to("cpu", non_blocking=True) 174 | 175 | torch.cuda.current_stream().wait_stream(stream) 176 | 177 | self.outputs[0]['host'][:] = output_tensor.numpy() 178 | 179 | torch.cuda.synchronize() 180 | logits = np.squeeze(self.outputs[0]['host']) 181 | else: 182 | if self.input_dtype == "tensor(float)": 183 | img_np = img_np.astype(np.float32) 184 | elif self.input_dtype == "tensor(float16)": 185 | img_np = img_np.astype(np.float16) 186 | 187 | ort_inputs = {self.input_name: img_np} 188 | logits = np.squeeze(self.ort_session.run(None, ort_inputs)) 189 | 190 | pred = int(np.argmax(logits)) 191 | probs = self.softmax(logits) 192 | probs_dict = {self.pred_dict[i]["desc"]: probs[i] for i in range(len(probs))} 193 | 194 | return pred, self.pred_dict[pred]["desc"], probs_dict, self.pred_dict[pred]["hit"] 195 | 196 | def check_provider(self): 197 | return "TensorRT" if self.is_tensorrt else self.ort_session.get_providers()[0] 198 | -------------------------------------------------------------------------------- /dbd/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/dbd/__init__.py -------------------------------------------------------------------------------- /dbd/datasets/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/dbd/datasets/__init__.py -------------------------------------------------------------------------------- /dbd/datasets/datasetLoader.py: -------------------------------------------------------------------------------- 1 | import os.path 2 | import torch 3 | import numpy as np 4 | from glob import glob 5 | 6 | import math 7 | 8 | from dbd.datasets.transforms import get_training_transforms, get_validation_transforms 9 | from torch.utils.data import DataLoader, WeightedRandomSampler, Dataset 10 | from torchvision.io import read_image, ImageReadMode 11 | 12 | 13 | class DBD_dataset(Dataset): 14 | """ 15 | Dataset class for DBD dataset 16 | - Handles custom sampler to deal with class imbalance 17 | """ 18 | 19 | def __init__(self, dataset, transforms): 20 | 
""" 21 | :param dataset: numpy array of {image_path, label} 22 | :param transforms: torchvision transforms 23 | """ 24 | 25 | self.images_path = dataset[:, 0] 26 | self.targets = torch.tensor(dataset[:, 1].astype(np.int64), dtype=torch.int64) 27 | self.transforms = transforms 28 | 29 | def __len__(self): 30 | return len(self.targets) 31 | 32 | def __getitem__(self, idx): 33 | image = self.get_image_from_path(idx) 34 | image = self.transforms(image) 35 | 36 | target = self.targets[idx] 37 | return image, target 38 | 39 | def _get_class_weights(self): 40 | count_classes = torch.bincount(self.targets) 41 | w_mapping = 1.0 / count_classes # all classes have equal chance to be sampled 42 | return w_mapping 43 | 44 | def _get_sampler(self, seed=42): 45 | generator_torch = torch.Generator().manual_seed(seed) 46 | w = self._get_class_weights() 47 | w = w[self.targets] 48 | 49 | sampler = WeightedRandomSampler(w, num_samples=len(w), replacement=True, generator=generator_torch) 50 | return sampler 51 | 52 | def get_image_from_path(self, idx): 53 | image = self.images_path[idx] 54 | image = read_image(image, mode=ImageReadMode.RGB) 55 | return image 56 | 57 | def get_dataloader(self, batch_size=32, num_workers=0, use_balanced_sampler=False): 58 | sampler = self._get_sampler() if use_balanced_sampler else None 59 | dataloader = DataLoader(self, batch_size=batch_size, num_workers=num_workers, sampler=sampler, persistent_workers=True, pin_memory=True) 60 | return dataloader 61 | 62 | 63 | def _parse_dbd_datasetfolder(root_dataset_path): 64 | """ 65 | Get dataset as list of pairs {image path, label} in numpy array format 66 | Args: 67 | root_dataset_path: 68 | 69 | Returns: numpy array with shape (nb_images, 2), data type is str 70 | 71 | """ 72 | folders = os.scandir(root_dataset_path) 73 | images_all = [] 74 | targets_all = [] 75 | 76 | for folder in folders: 77 | name, path = folder.name, folder.path 78 | if not name.isdigit(): 79 | print("Skipping folder " + name) 80 | continue 81 | 82 | images = glob(os.path.join(path, "*.*")) 83 | print("Parsing folder {} : {} images found".format(name, len(images))) 84 | 85 | images_all += images 86 | targets_all += [name] * len(images) 87 | 88 | dataset = np.stack([images_all, targets_all], axis=-1) 89 | return dataset 90 | 91 | 92 | def get_dataloaders(root_dataset_path, batch_size=32, seed=42, num_workers=0): 93 | """ Get training and validation data loaders 94 | Args: 95 | root_dataset_path: Root dataset path, containing folders with name corresponding to associated class 96 | batch_size: batch size 97 | seed: seed to init random generators 98 | num_workers: data loader num workers 99 | 100 | """ 101 | assert os.path.exists(root_dataset_path) 102 | 103 | # Parse dataset 104 | dataset = _parse_dbd_datasetfolder(root_dataset_path) # shape is (nb_images, 2) 105 | 106 | # Shuffle dataset and split into a training set and a validation set 107 | generator = np.random.default_rng(seed) 108 | generator.shuffle(dataset) 109 | 110 | nb_samples_train = math.floor(0.8 * len(dataset)) 111 | dataset_train, dataset_val = dataset[:nb_samples_train], dataset[nb_samples_train:] 112 | 113 | # Set data loaders 114 | train_transforms = get_training_transforms() 115 | dataset_train = DBD_dataset(dataset_train, train_transforms) 116 | dataloader_train = dataset_train.get_dataloader(batch_size=batch_size, num_workers=num_workers, use_balanced_sampler=True) 117 | 118 | val_transforms = get_validation_transforms() 119 | dataset_val = DBD_dataset(dataset_val, val_transforms) 120 | 
dataloader_val = dataset_val.get_dataloader(batch_size=batch_size, num_workers=num_workers, use_balanced_sampler=False) 121 | 122 | return dataloader_train, dataloader_val 123 | 124 | 125 | if __name__ == '__main__': 126 | from dbd.datasets.transforms import MEAN, STD 127 | import cv2 128 | 129 | dataset_root = "dataset/" 130 | dataloader_train, dataloader_val = get_dataloaders(dataset_root, batch_size=1, num_workers=1) 131 | # dataloader_train, dataloader_val = get_dataloaders(dataset_root, batch_size=32, num_workers=1) 132 | 133 | std = torch.tensor(STD, dtype=torch.float32).reshape((3, 1, 1)) 134 | mean = torch.tensor(MEAN, dtype=torch.float32).reshape((3, 1, 1)) 135 | 136 | batch = next(iter(dataloader_train)) 137 | x, y = batch 138 | 139 | # for batch in dataloader_train: 140 | # x, y = batch 141 | # print(torch.bincount(y)) 142 | 143 | for i, batch in enumerate(dataloader_train): 144 | x, y = batch 145 | x = x[0] # take first sample 146 | x = x * std + mean # un-normalization to [0, 1] with auto-broadcast 147 | x = x * 255. 148 | 149 | x = x.permute((1, 2, 0)) # channel last : (3, 224, 224) --> (224, 224, 3) 150 | x = x.cpu().numpy().astype(np.uint8) 151 | 152 | img = cv2.cvtColor(x, cv2.COLOR_RGB2BGR) 153 | category = str(y.cpu().numpy()[0]) 154 | cv2.imshow(category, img) 155 | cv2.moveWindow(category, 200, 200) 156 | cv2.waitKey() 157 | -------------------------------------------------------------------------------- /dbd/datasets/transforms.py: -------------------------------------------------------------------------------- 1 | import torchvision.transforms.v2 as tf 2 | import torch 3 | 4 | MEAN = [0.485, 0.456, 0.406] 5 | STD = [0.229, 0.224, 0.225] 6 | 7 | 8 | def get_training_transforms(): 9 | transforms = tf.Compose([ 10 | # Random rotation to augment dataset 11 | tf.RandomRotation(180), 12 | 13 | # Random "zoom" to gain skill check position and scale robustness 14 | tf.CenterCrop(224), 15 | tf.RandomResizedCrop(224, scale=(0.8, 1.0)), 16 | 17 | # Random color filters 18 | tf.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4), 19 | 20 | tf.ToDtype(torch.float32, scale=True), 21 | tf.Normalize(mean=MEAN, std=STD) 22 | ]) 23 | 24 | return transforms 25 | 26 | 27 | def get_validation_transforms(): 28 | transforms = tf.Compose([ 29 | tf.CenterCrop(224), 30 | tf.ToDtype(torch.float32, scale=True), 31 | tf.Normalize(mean=MEAN, std=STD) 32 | ]) 33 | return transforms 34 | -------------------------------------------------------------------------------- /dbd/model_to_onnx.py: -------------------------------------------------------------------------------- 1 | import glob 2 | import os 3 | 4 | import onnxruntime 5 | import torch 6 | 7 | from dbd.networks.model import Model 8 | from dbd.utils.frame_grabber import get_monitor_attributes_test 9 | 10 | 11 | if __name__ == '__main__': 12 | checkpoint = "./lightning_logs/version_11/checkpoints" 13 | checkpoint = glob.glob(os.path.join(checkpoint, "*.ckpt"))[0] 14 | 15 | model = Model.load_from_checkpoint(checkpoint, strict=True) 16 | 17 | # TO ONNX 18 | filepath = "model.onnx" 19 | input_sample = torch.zeros((1, 3, 224, 224), dtype=torch.float32) 20 | model.to_onnx(filepath, input_sample, export_params=True) 21 | ort_session = onnxruntime.InferenceSession(filepath) 22 | input_name = ort_session.get_inputs()[0].name 23 | -------------------------------------------------------------------------------- /dbd/networks/__init__.py: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/dbd/networks/__init__.py -------------------------------------------------------------------------------- /dbd/networks/model.py: -------------------------------------------------------------------------------- 1 | import pytorch_lightning as pl 2 | import torchmetrics 3 | import torch 4 | import torchvision.models as models 5 | 6 | 7 | class Model(pl.LightningModule): 8 | def __init__(self, lr=1e-4): 9 | super().__init__() 10 | self.example_input_array = torch.zeros((32, 3, 224, 224), dtype=torch.float32) 11 | self.nb_classes = 11 12 | 13 | self.model = self.build_model() 14 | self.lr = lr 15 | 16 | self.acc_score_train = torchmetrics.Accuracy(task='multiclass', num_classes=self.nb_classes, average="none", validate_args=False) 17 | self.acc_score_val = torchmetrics.Accuracy(task='multiclass', num_classes=self.nb_classes, average="none", validate_args=False) 18 | 19 | # self.metrics_val = torchmetrics.MetricCollection([ 20 | # torchmetrics.F1Score(task='multiclass', num_classes=self.nb_classes, average="none", validate_args=False), 21 | # torchmetrics.Accuracy(task='multiclass', num_classes=self.nb_classes, average="none", validate_args=False) 22 | # ]) 23 | 24 | def build_model(self): 25 | # weights = models.MobileNet_V3_Large_Weights.DEFAULT 26 | # model = models.mobilenet_v3_large(weights=weights) 27 | # model.classifier[-1] = torch.nn.Linear(1280, self.nb_classes) 28 | 29 | weights = models.MobileNet_V3_Small_Weights.DEFAULT 30 | model = models.mobilenet_v3_small(weights=weights) 31 | model.classifier[-1] = torch.nn.Linear(1024, self.nb_classes) 32 | 33 | # weights = models.ConvNeXt_Tiny_Weights.DEFAULT 34 | # model = models.convnext_tiny(weights=weights, num_classes=self.nb_classes) 35 | 36 | return model 37 | 38 | def training_step(self, batch, batch_idx): 39 | x, y = batch 40 | pred = self(x) 41 | 42 | loss = torch.nn.functional.cross_entropy(pred, y) 43 | self.log("loss/train", loss) 44 | 45 | # Accumulate metrics 46 | self.acc_score_train.update(pred, y) 47 | 48 | return loss 49 | 50 | def on_train_epoch_end(self): 51 | acc_score_train = self.acc_score_train.compute() 52 | self.log_dict({"Acc/train_{}".format(i): score for i, score in enumerate(acc_score_train)}) 53 | self.log_dict({"Acc/train_mean": torch.mean(acc_score_train)}) 54 | 55 | self.acc_score_train.reset() 56 | 57 | def validation_step(self, batch, batch_idx): 58 | x, y = batch 59 | pred = self(x) 60 | 61 | loss = torch.nn.functional.cross_entropy(pred, y) 62 | self.log("loss/val", loss) 63 | 64 | # Accumulate metrics 65 | self.acc_score_val.update(pred, y) 66 | 67 | return loss 68 | 69 | def on_validation_epoch_end(self): 70 | acc_score_val = self.acc_score_val.compute() 71 | self.log_dict({"Acc/val_{}".format(i): score for i, score in enumerate(acc_score_val)}) 72 | self.log_dict({"Acc/val_mean": torch.mean(acc_score_val)}) 73 | 74 | self.acc_score_val.reset() 75 | 76 | def predict_step(self, batch, batch_idx, dataloader_idx=0): 77 | x, y = batch 78 | pred = self(x) 79 | pred = torch.argmax(pred, dim=-1) 80 | return pred 81 | 82 | def forward(self, x): 83 | pred = self.model(x) 84 | return pred 85 | 86 | def configure_optimizers(self): 87 | optimizer = torch.optim.Adam(self.parameters(), lr=self.lr, weight_decay=1e-4) 88 | # optimizer = torch.optim.RMSprop(self.parameters(), lr=self.lr, momentum=0.9, weight_decay=1e-4) 89 | # scheduler = ExponentialLR(optimizer, gamma=0.9) 90 | 91 | return optimizer 92 | 
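
# --- Illustrative addition (not part of the original file) ---
# A minimal sanity check of the classifier-head swap above: the backbone is a stock
# MobileNet V3 Small whose last linear layer is replaced by an 11-class layer, so a
# dummy forward pass should return logits of shape (batch_size, 11).
# Note: instantiating Model() downloads the pretrained torchvision weights.
if __name__ == '__main__':
    model = Model(lr=1e-4)
    model.eval()

    dummy = torch.zeros((2, 3, 224, 224), dtype=torch.float32)
    with torch.no_grad():
        logits = model(dummy)

    print(logits.shape)  # expected: torch.Size([2, 11])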
-------------------------------------------------------------------------------- /dbd/predict_folder.py: -------------------------------------------------------------------------------- 1 | from glob import glob 2 | import os 3 | 4 | import onnxruntime 5 | import pytorch_lightning as pl 6 | import numpy as np 7 | import shutil 8 | import tqdm 9 | 10 | from dbd.networks.model import Model 11 | from dbd.datasets.transforms import get_validation_transforms 12 | from dbd.datasets.datasetLoader import DBD_dataset 13 | 14 | 15 | def infer_from_folder(folder, checkpoint): 16 | # Dataset 17 | images = glob(os.path.join(folder, "*.*")) 18 | images = np.array([[image, 0] for image in images]) 19 | 20 | test_transforms = get_validation_transforms() 21 | dataset = DBD_dataset(images, test_transforms) 22 | dataloader = dataset.get_dataloader(batch_size=128, num_workers=8) 23 | 24 | # Model 25 | checkpoint = glob(os.path.join(checkpoint, "*.ckpt"))[-1] 26 | assert os.path.isfile(checkpoint) 27 | 28 | model = Model() 29 | trainer = pl.Trainer(accelerator='gpu', devices=1, logger=False) 30 | preds = trainer.predict(model=model, dataloaders=dataloader, return_predictions=True, ckpt_path=checkpoint) 31 | preds = np.concatenate([pred.cpu().numpy() for pred in preds], axis=0) 32 | 33 | results = np.stack([images[:, 0], preds], axis=-1) 34 | return results 35 | 36 | 37 | def infer_from_folder_onnx(folder): 38 | # Dataset 39 | images = glob(os.path.join(folder, "*.*")) 40 | images = np.array([[image, 0] for image in images]) # give fake labels just to use our dataloader 41 | 42 | # dataloader (to automatically batch images and make the necessary image transformations) 43 | test_transforms = get_validation_transforms() 44 | dataset = DBD_dataset(images, test_transforms) 45 | dataloader = dataset.get_dataloader(batch_size=1, num_workers=8) 46 | 47 | # Model 48 | filepath = "model.onnx" 49 | ort_session = onnxruntime.InferenceSession(filepath) 50 | input_name = ort_session.get_inputs()[0].name 51 | 52 | results = [] 53 | for batch in tqdm.tqdm(dataloader, desc="Onnx inference"): 54 | img = batch[0].cpu().numpy() 55 | ort_inputs = {input_name: img} 56 | ort_outs = ort_session.run(None, ort_inputs) 57 | pred = np.argmax(np.squeeze(ort_outs, 0)) 58 | results.append(pred) 59 | 60 | results = np.stack([images[:, 0], results], axis=-1) 61 | return results 62 | 63 | 64 | if __name__ == '__main__': 65 | dataset_source = "dataset_prediction" # screenshots 66 | checkpoint = "./lightning_logs/version_1/checkpoints" 67 | 68 | assert os.path.isdir(dataset_source) 69 | # preds = infer_from_folder(dataset_source, checkpoint) 70 | preds = infer_from_folder_onnx(dataset_source) 71 | 72 | os.makedirs(os.path.join(dataset_source, "0"), exist_ok=True) 73 | os.makedirs(os.path.join(dataset_source, "1"), exist_ok=True) 74 | os.makedirs(os.path.join(dataset_source, "2"), exist_ok=True) 75 | os.makedirs(os.path.join(dataset_source, "3"), exist_ok=True) 76 | os.makedirs(os.path.join(dataset_source, "4"), exist_ok=True) 77 | os.makedirs(os.path.join(dataset_source, "5"), exist_ok=True) 78 | os.makedirs(os.path.join(dataset_source, "6"), exist_ok=True) 79 | 80 | for image, pred in tqdm.tqdm(preds, desc="Moving images"): 81 | pred = int(pred) 82 | 83 | if pred != 0: 84 | # shutil.copy(image, os.path.join(dataset_dest, str(pred), os.path.basename(image))) 85 | shutil.move(image, os.path.join(dataset_source, str(pred), os.path.basename(image))) 86 | -------------------------------------------------------------------------------- 
/dbd/preprocess_data.py: -------------------------------------------------------------------------------- 1 | import glob 2 | import os 3 | 4 | from dbd.utils.dataset_utils import delete_similar_images, delete_consecutive_images 5 | 6 | 7 | if __name__ == '__main__': 8 | source_folder = 'dataset_prediction/4' 9 | assert os.path.isdir(source_folder) 10 | 11 | files = glob.glob(os.path.join(source_folder, "*.*")) 12 | files += glob.glob(os.path.join(source_folder, "*", "*.*")) 13 | files.sort() 14 | 15 | # delete_consecutive_images(files, 2) 16 | # delete_similar_images(files) 17 | -------------------------------------------------------------------------------- /dbd/save_frames.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import cv2 3 | import mss 4 | import time 5 | import os 6 | 7 | from dbd.utils.frame_grabber import get_monitor_attributes 8 | 9 | 10 | if __name__ == '__main__': 11 | # Make new dataset folder, where we save the frames 12 | timestr = time.strftime("%Y%m%d-%H%M%S") 13 | dataset_folder = os.path.join("dataset", timestr) 14 | os.mkdir(dataset_folder) 15 | 16 | # Get monitor attributes 17 | monitor = get_monitor_attributes() 18 | 19 | with mss.mss() as sct: 20 | i = 0 21 | 22 | # Infinite loop 23 | while True: 24 | screenshot = np.array(sct.grab(monitor)) 25 | cv2.imwrite(os.path.join(dataset_folder, "{}_{}.png".format(timestr, i)), screenshot) 26 | i += 1 27 | -------------------------------------------------------------------------------- /dbd/train.py: -------------------------------------------------------------------------------- 1 | import glob 2 | import os 3 | 4 | import pytorch_lightning as pl 5 | from pytorch_lightning.callbacks import ModelCheckpoint 6 | from pytorch_lightning.utilities.model_summary import ModelSummary 7 | 8 | from dbd.datasets.datasetLoader import get_dataloaders 9 | from dbd.networks.model import Model 10 | 11 | # torch.set_float32_matmul_precision('high') 12 | 13 | if __name__ == '__main__': 14 | ########################################################## 15 | checkpoint = "./lightning_logs/version_11/checkpoints" 16 | dataset_root = "dataset/" 17 | 18 | ########################################################## 19 | checkpoint = glob.glob(os.path.join(checkpoint, "*.ckpt"))[-1] 20 | 21 | # Dataset 22 | dataloader_train, dataloader_val = get_dataloaders(dataset_root, num_workers=8, batch_size=32) 23 | 24 | # Model 25 | # model = Model(lr=1e-4) 26 | model = Model.load_from_checkpoint(checkpoint, strict=True, lr=1e-5) 27 | 28 | # Print model summary 29 | summary = ModelSummary(model, max_depth=4) 30 | print(summary) 31 | 32 | # Compile the model 33 | # model = torch.compile(model) 34 | 35 | valid = pl.Trainer(accelerator='gpu', devices=1, logger=False) 36 | valid.validate(model=model, dataloaders=dataloader_val) 37 | 38 | # Training 39 | checkpoint_callback = ModelCheckpoint(save_top_k=1, monitor="loss/val") 40 | checkpoint_callback2 = ModelCheckpoint(save_top_k=1, monitor="Acc/val_mean") 41 | trainer = pl.Trainer(accelerator='gpu', devices=1, max_epochs=500, num_sanity_val_steps=0, precision="16-mixed", callbacks=[checkpoint_callback, checkpoint_callback2]) 42 | trainer.fit(model=model, train_dataloaders=dataloader_train, val_dataloaders=dataloader_val) 43 | 44 | # tensorboard --logdir=lightning_logs/ 45 | -------------------------------------------------------------------------------- /dbd/utils/__init__.py: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/dbd/utils/__init__.py -------------------------------------------------------------------------------- /dbd/utils/dataset_utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import tqdm 3 | import cv2 4 | import numpy as np 5 | 6 | 7 | def delete_similar_images(files): 8 | similar_frames = 0 9 | for i in tqdm.tqdm(range(len(files)-1)): 10 | im1 = files[i] 11 | im2 = files[i+1] 12 | 13 | image1 = cv2.imread(im1) 14 | image2 = cv2.imread(im2) 15 | 16 | diff = np.abs(image1.astype(np.float32) - image2.astype(np.float32)) 17 | diff = (diff[:, :, 0] + diff[:, :, 1] + 10.0 * diff[:, :, 2]) / 3.0 # add more weight to red channel 18 | diff = np.mean(diff) / 255. 19 | 20 | # print(diff, im1, im2) 21 | if diff < 0.01: 22 | os.remove(im1) 23 | similar_frames += 1 24 | # print("deleting {} with score {}".format(im1, diff)) 25 | 26 | print("deleted {} similar frames".format(similar_frames)) 27 | 28 | 29 | def delete_consecutive_images(files, n): 30 | files_chunks = [files[i:i+n] for i in range(0, len(files), n)] 31 | 32 | # iterate over files_chunks with a tqdm progress bar 33 | for files_chunk in tqdm.tqdm(files_chunks): 34 | files_to_remove = files_chunk[:n-1] 35 | for file in files_to_remove: 36 | os.remove(file) 37 | 38 | -------------------------------------------------------------------------------- /dbd/utils/directkeys.py: -------------------------------------------------------------------------------- 1 | # directkeys.py 2 | # http://stackoverflow.com/questions/13564851/generate-keyboard-events 3 | # msdn.microsoft.com/en-us/library/dd375731 4 | 5 | import ctypes 6 | from ctypes import wintypes 7 | import time 8 | 9 | user32 = ctypes.WinDLL('user32', use_last_error=True) 10 | 11 | INPUT_MOUSE = 0 12 | INPUT_KEYBOARD = 1 13 | INPUT_HARDWARE = 2 14 | 15 | KEYEVENTF_EXTENDEDKEY = 0x0001 16 | KEYEVENTF_KEYUP = 0x0002 17 | KEYEVENTF_UNICODE = 0x0004 18 | KEYEVENTF_SCANCODE = 0x0008 19 | 20 | MAPVK_VK_TO_VSC = 0 21 | 22 | # List of all codes for keys: 23 | # # msdn.microsoft.com/en-us/library/dd375731 24 | UP = 0x26 25 | DOWN = 0x28 26 | A = 0x41 27 | SPACE = 0x20 28 | 29 | # C struct definitions 30 | 31 | wintypes.ULONG_PTR = wintypes.WPARAM 32 | 33 | class MOUSEINPUT(ctypes.Structure): 34 | _fields_ = (("dx", wintypes.LONG), 35 | ("dy", wintypes.LONG), 36 | ("mouseData", wintypes.DWORD), 37 | ("dwFlags", wintypes.DWORD), 38 | ("time", wintypes.DWORD), 39 | ("dwExtraInfo", wintypes.ULONG_PTR)) 40 | 41 | class KEYBDINPUT(ctypes.Structure): 42 | _fields_ = (("wVk", wintypes.WORD), 43 | ("wScan", wintypes.WORD), 44 | ("dwFlags", wintypes.DWORD), 45 | ("time", wintypes.DWORD), 46 | ("dwExtraInfo", wintypes.ULONG_PTR)) 47 | 48 | def __init__(self, *args, **kwds): 49 | super(KEYBDINPUT, self).__init__(*args, **kwds) 50 | # some programs use the scan code even if KEYEVENTF_SCANCODE 51 | # isn't set in dwFflags, so attempt to map the correct code. 
52 | if not self.dwFlags & KEYEVENTF_UNICODE: 53 | self.wScan = user32.MapVirtualKeyExW(self.wVk, 54 | MAPVK_VK_TO_VSC, 0) 55 | 56 | class HARDWAREINPUT(ctypes.Structure): 57 | _fields_ = (("uMsg", wintypes.DWORD), 58 | ("wParamL", wintypes.WORD), 59 | ("wParamH", wintypes.WORD)) 60 | 61 | class INPUT(ctypes.Structure): 62 | class _INPUT(ctypes.Union): 63 | _fields_ = (("ki", KEYBDINPUT), 64 | ("mi", MOUSEINPUT), 65 | ("hi", HARDWAREINPUT)) 66 | _anonymous_ = ("_input",) 67 | _fields_ = (("type", wintypes.DWORD), 68 | ("_input", _INPUT)) 69 | 70 | LPINPUT = ctypes.POINTER(INPUT) 71 | 72 | def _check_count(result, func, args): 73 | if result == 0: 74 | raise ctypes.WinError(ctypes.get_last_error()) 75 | return args 76 | 77 | user32.SendInput.errcheck = _check_count 78 | user32.SendInput.argtypes = (wintypes.UINT, # nInputs 79 | LPINPUT, # pInputs 80 | ctypes.c_int) # cbSize 81 | 82 | # Functions 83 | 84 | def PressKey(hexKeyCode): 85 | x = INPUT(type=INPUT_KEYBOARD, 86 | ki=KEYBDINPUT(wVk=hexKeyCode)) 87 | user32.SendInput(1, ctypes.byref(x), ctypes.sizeof(x)) 88 | 89 | def ReleaseKey(hexKeyCode): 90 | x = INPUT(type=INPUT_KEYBOARD, 91 | ki=KEYBDINPUT(wVk=hexKeyCode, 92 | dwFlags=KEYEVENTF_KEYUP)) 93 | user32.SendInput(1, ctypes.byref(x), ctypes.sizeof(x)) 94 | 95 | if __name__ == "__main__": 96 | PressKey(A) 97 | time.sleep(0.5) 98 | ReleaseKey(A) 99 | print("Pressed") -------------------------------------------------------------------------------- /dbd/utils/frame_grabber.py: -------------------------------------------------------------------------------- 1 | import pyautogui 2 | 3 | def get_monitor_attributes(): 4 | width, height = pyautogui.size() 5 | object_size_h = height // 6 6 | object_size_w = width // 6 7 | object_size = max(object_size_w, object_size_h) 8 | 9 | monitor = {"top": height // 2 - object_size // 2, 10 | "left": width // 2 - object_size // 2, 11 | "width": object_size, 12 | "height": object_size} 13 | 14 | return monitor 15 | 16 | def get_monitor_attributes_test(): 17 | width, height = pyautogui.size() 18 | object_size = 224 19 | 20 | monitor = {"top": height // 2 - object_size // 2, 21 | "left": width // 2 - object_size // 2, 22 | "width": object_size, 23 | "height": object_size} 24 | 25 | return monitor 26 | -------------------------------------------------------------------------------- /images/demo.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/demo.gif -------------------------------------------------------------------------------- /images/merciless.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/merciless.png -------------------------------------------------------------------------------- /images/repair.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/repair.png -------------------------------------------------------------------------------- /images/run_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/run_1.png 
-------------------------------------------------------------------------------- /images/run_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/run_2.png -------------------------------------------------------------------------------- /images/struggle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/struggle.png -------------------------------------------------------------------------------- /images/wiggle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/images/wiggle.png -------------------------------------------------------------------------------- /model.onnx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manuteaa/dbd_autoSkillCheck/2e6cbc6b3fc61ffd6e10b03ae01fba5fb2480156/model.onnx -------------------------------------------------------------------------------- /run_monitoring_gradio.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | from time import time, sleep 4 | from dbd.AI_model import AI_model 5 | from dbd.utils.directkeys import PressKey, ReleaseKey, SPACE 6 | 7 | from gradio import ( 8 | Dropdown, Radio, Number, Image, Label, Button, Slider, 9 | skip, Info, Warning, Error, Blocks, Row, Column, Markdown 10 | ) 11 | 12 | 13 | def monitor(ai_model_path, device, debug_option, hit_ante, cpu_stress): 14 | if ai_model_path is None or not os.path.exists(ai_model_path): 15 | raise Error("Invalid AI model file", duration=0) 16 | 17 | if device is None: 18 | raise Error("Invalid device option") 19 | 20 | if debug_option is None: 21 | raise Error("Invalid debug option") 22 | 23 | if cpu_stress == "min": 24 | nb_cpu_threads = 1 25 | elif cpu_stress == "low": 26 | nb_cpu_threads = 2 27 | elif cpu_stress == "normal": 28 | nb_cpu_threads = 4 29 | else: 30 | nb_cpu_threads = None 31 | 32 | try: 33 | use_gpu = (device == devices[1]) 34 | ai_model = AI_model(ai_model_path, use_gpu, nb_cpu_threads) 35 | execution_provider = ai_model.check_provider() 36 | except Exception as e: 37 | raise Error("Error when loading AI model: {}".format(e), duration=0) 38 | 39 | if execution_provider == "CUDAExecutionProvider": 40 | Info("Running AI model on GPU (success, CUDA)") 41 | elif execution_provider == "DmlExecutionProvider": 42 | Info("Running AI model on GPU (success, DirectML)") 43 | elif execution_provider == "TensorRT": 44 | Info("Running AI model on GPU (success, TensorRT)") 45 | else: 46 | Info(f"Running AI model on CPU (success, {nb_cpu_threads} threads)") 47 | if use_gpu: 48 | Warning("Could not run AI model on GPU device. 
Check python console logs to debug.") 49 | 50 | # Create debug folders 51 | if debug_option == debug_options[2] or debug_option == debug_options[3]: 52 | Path(debug_folder).mkdir(exist_ok=True) 53 | for folder_idx in range(len(ai_model.pred_dict)): 54 | Path(os.path.join(debug_folder, str(folder_idx))).mkdir(exist_ok=True) 55 | 56 | # Variables 57 | t0 = time() 58 | nb_frames = 0 59 | nb_hits = 0 60 | 61 | while True: 62 | screenshot = ai_model.grab_screenshot() 63 | image_pil = ai_model.screenshot_to_pil(screenshot) 64 | image_np = ai_model.pil_to_numpy(image_pil) 65 | nb_frames += 1 66 | 67 | pred, desc, probs, should_hit = ai_model.predict(image_np) 68 | 69 | if pred != 0 and debug_option == debug_options[3]: 70 | path = os.path.join(debug_folder, str(pred), "{}.png".format(nb_hits)) 71 | image_pil.save(path) 72 | nb_hits += 1 73 | 74 | if should_hit: 75 | # ante-frontier hit delay 76 | if pred == 2 and hit_ante > 0: 77 | sleep(hit_ante * 0.001) 78 | 79 | PressKey(SPACE) 80 | ReleaseKey(SPACE) 81 | 82 | yield skip(), image_pil, probs 83 | 84 | if debug_option == debug_options[2]: 85 | path = os.path.join(debug_folder, str(pred), "hit_{}.png".format(nb_hits)) 86 | image_pil.save(path) 87 | nb_hits += 1 88 | 89 | sleep(0.5) # avoid hitting the same skill check multiple times 90 | t0 = time() 91 | nb_frames = 0 92 | continue 93 | 94 | # Compute fps 95 | t_diff = time() - t0 96 | if t_diff > 1.0: 97 | fps = round(nb_frames / t_diff, 1) 98 | 99 | if debug_option == debug_options[1]: 100 | yield fps, image_pil, skip() 101 | else: 102 | yield fps, skip(), skip() 103 | 104 | t0 = time() 105 | nb_frames = 0 106 | 107 | print("HERE") 108 | 109 | 110 | if __name__ == "__main__": 111 | debug_folder = "saved_images" 112 | 113 | debug_options = [ 114 | "None (default)", 115 | "Display the monitored frame (a 224x224 center-cropped image, displayed at 1fps) instead of last hit skill check frame. Useful to check the monitored screen", 116 | "Save hit skill check frames in {}/".format(debug_folder), 117 | "Save all skill check frames in {}/ (will impact fps)".format(debug_folder) 118 | ] 119 | 120 | fps_info = "Number of frames per second the AI model analyses the monitored frame. Check The GitHub FAQ for more details and requirements." 121 | devices = ["CPU (default)", "GPU"] 122 | 123 | # Find available AI models 124 | model_files = [f for f in os.listdir() if f.endswith(".onnx") or f.endswith(".engine")] 125 | if not model_files: 126 | model_files = ["model.onnx"] # Default if no models found 127 | 128 | with (Blocks(title="DBD Auto skill check") as webui): 129 | Markdown("