├── .gitignore
├── README.md
├── docs
│   └── debug.png
├── gather.py
├── inter.py
├── interception_py
│   ├── .gitignore
│   ├── LICENSE
│   ├── README.md
│   ├── __init__.py
│   ├── _example_.py
│   ├── _example_hardwareid.py
│   ├── _example_mathpointer.py
│   ├── _right_click.py
│   ├── consts.py
│   ├── interception.py
│   └── stroke.py
├── main.py
├── models
│   ├── best.pt
│   └── bestv2.pt
├── screen.py
├── wincap.py
└── yolo.py
/.gitignore:
--------------------------------------------------------------------------------
1 | /__pycache__
2 | /images
3 | debug.bmp
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Object Detection - Splitgate
2 |
8 | Object Detection for FPS games
9 |
16 | ## Table of Contents
17 |
19 | - About the Project
26 | - Getting Started
32 | - Usage
33 | - Final words
37 |
38 | ## About The Project
39 | ![debug](./docs/debug.png)
40 |
41 | This project started as an experiment around the question:
42 | > _is it possible to "cheat" in FPS games using object detection?_
43 |
44 | I already know that object detection extracts valuable information from images and is used in many sectors - from social media algorithms to basic quality control. I've had the opportunity to work with image recognition, and in my spare time I have also created my own scrapers that classify images - for various different cases. But all of that was done on still images from different sources - where there isn't really any hurry on the inference results.
45 |
46 | I wanted to find out whether it would be feasible to run inference on an FPS game, and then use the inference results to our advantage. The biggest issue was: _will this be practical_ - will it be fast enough? The inference would need to run on the same machine as the game - would it be hindered by sharing the GPU?
47 |
48 | Traditional video game hacking is done by reading process memory, and different anti-cheats try to detect & block these reads. Object detection takes a totally different approach - no memory reading - and thus has the possibility to go undetected by anti-cheat. Another issue is how to send input to the desired video game without triggering any flags. The main goal of this project is to showcase a POC that this is indeed currently possible with relatively affordable equipment.
49 |
50 | ### Disclaimer
51 | I do not condone any hacking - it ruins the fun not only for you but also for other players. This project was created just to show that it is possible to "cheat" using object detection. Also, this is my first bigger Python project and my very first time using multiprocessing and threads - so the code could surely benefit from optimization. In the end I'm happy with the performance: I managed to run inference at ~28ms (~30 FPS) while the game was running at high settings at +140 FPS.
52 |
53 | ### Built With
54 | I won't go into details on how to create your own custom models, as there are far better tutorials on how to do this. If you are going to create your own model, you should have at least an intermediate understanding of image recognition - creating a valid dataset and analyzing the model outcome can be challenging if you are totally new.
55 | That said, the [YOLOv5 Github](https://github.com/ultralytics/yolov5) is a good starting point.
56 |
57 | Here is a list of programs / platforms I used for this project:
58 | - [YOLOv5](https://github.com/ultralytics/yolov5) Object detection (custom model)
59 | - [Google Colab](https://colab.research.google.com/) to train my model - they give free GPU (e.g. my 300 epochs took only 3h)
60 | - [CVAT](https://cvat.org/) to label my datasets
61 | - [Roboflow](https://app.roboflow.com/) to enrich these datasets
62 | - [Interception_py](https://github.com/cobrce/interception_py) for mouse hooking (more on why this later)
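
For reference - and not something this repo includes - training a custom model with YOLOv5's standard training script looks roughly like the command below. The dataset YAML and base weights names here are illustrative, not files from this repo:
```sh
# Run inside a cloned yolov5 repository (e.g. on Google Colab with a GPU runtime).
# data.yaml describes your labeled dataset; yolov5s.pt is the small pretrained checkpoint.
python train.py --img 640 --batch 16 --epochs 300 --data data.yaml --weights yolov5s.pt
```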
63 |
64 |
65 | ## Getting started
66 | This repo contains two pretrained models that will detect enemies in Splitgate.
67 | `best.pt` is trained on +600 images for 300 epochs.
68 | `bestv2.pt` is then refined from that with +1500 images and another 300 epochs.
69 |
70 | These models only work on Splitgate - if you want to test this out in different games, you will need to create your own models.
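
If you want to sanity-check the models outside the full pipeline (the pipeline itself loads them through `yolo.py`), a minimal sketch using YOLOv5's documented `torch.hub` interface could look like this - the sample image path is only an illustration:
```py
import torch

# Load a custom YOLOv5 checkpoint from this repo (fetches the yolov5 code on first run).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='models/bestv2.pt')

# Run inference on any screenshot and inspect the detections.
results = model('docs/debug.png')
print(results.pandas().xyxy[0])  # columns: xmin, ymin, xmax, ymax, confidence, class, name
```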
71 |
72 | ### Prerequisites
73 | - Install [Interception Driver](https://github.com/oblitum/Interception)
74 | - [YOLOv5 requirements](https://github.com/ultralytics/yolov5#quick-start-examples)
75 |
76 | The Interception driver is selected because it hooks your mouse at the OS level - it has low latency and does not trigger the virtual_mouse flag that anti-cheat software may look for.
77 | If you aren't okay with installing that driver, you will need to alter `inter.py` and use e.g. [pyautogui](https://pyautogui.readthedocs.io/en/latest/) or [mouse](https://github.com/boppreh/mouse) - however, latency might become an issue. A rough sketch of that fallback follows below.
78 | If you are getting inference issues with YOLOv5 or it isn't finding any targets - try downgrading the PyTorch CUDA version. E.g. I have CUDA 11.7+, but needed to use PyTorch with CUDA 10.2.
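
As a minimal sketch of such a fallback (not part of this repo, and assuming the `mouse` package is installed), a user-space relative move could look like:
```py
import mouse

def move_relative(dx: int, dy: int) -> None:
    """Hypothetical stand-in for the driver-level move: relative cursor movement from user space."""
    mouse.move(dx, dy, absolute=False, duration=0)

move_relative(10, -5)  # nudge the cursor 10 px right and 5 px up
```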
79 |
80 | ### Installation
81 | When the above steps are done, you only need to clone the repo:
82 | ```sh
83 | git clone https://github.com/matias-kovero/ObjectDetection.git
84 | ```
85 |
86 | ## Usage
87 | I have stripped out the smooth aim, as I feel it would harm Splitgate's community - I don't want to distribute a plug-and-play repository. This was only meant to show that this is currently possible - of course, if you have the skillset, you could add your own aim functions - but that is up to you.
88 | Currently the code has a really simple aim movement - one that will most likely get you flagged - but it still proves my point that you can _"hack"_ with this.
89 |
90 | ### `main.py`
91 | Main entrypoint - params found [here](https://github.com/matias-kovero/ObjectDetection/blob/0536b2752cedff554ddae14a8af8cedbb72e2559/main.py#L70). Example usage:
92 | ```sh
93 | python .\main.py --source 480
94 | ```
95 | You can change `--source` and `--img` for better performance (smaller img = faster inference, but accuracy suffers).
96 | If you have a beefy GPU and want better accuracy, try setting `--img` to 640 and whatever source you like.
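
For example, a smaller inference size on a slightly larger capture area - the values here are only to illustrate the trade-off:
```sh
python .\main.py --source 480 --img 320
```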
97 | ### `gather.py`
98 | If you are creating your own dataset, you might find this useful - I used it to gather my images. It is currently set to take a screenshot when the left mouse button is pressed; after a 10 second cooldown, a new screenshot can be taken. The source code should be readable, so do with it what you want.
99 | ```sh
100 | python .\gather.py
101 | ```
102 |
103 | ## Final words
104 | I was surprised how "easily" you can run inference and play the game with a relatively low budget GPU.
105 | My 1660 Super managed inference with about 28ms delay (~30 FPS), and hooking it up with smoothed aim created a disgusting aimbot.
106 | I really don't know how anti-cheats could detect this kind of cheating. The biggest issue is still how to send human-like mouse movement to the game.
107 |
--------------------------------------------------------------------------------
/docs/debug.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matias-kovero/ObjectDetection/08fadb675e3b8a724251a959a4008e35ba77cb25/docs/debug.png
--------------------------------------------------------------------------------
/gather.py:
--------------------------------------------------------------------------------
1 | import cv2 as cv
2 | from time import time
3 | from wincap import WinCap
4 | import keyboard
5 | import mouse
6 |
7 | # initialize wincap
8 | wincap = WinCap(None, 640, True)
9 |
10 | # State
11 | STOP='f12'
12 | GATHER='f10'
13 | gatherDataset = False
14 | active = True
15 | last_ss = time()
16 |
17 | def on_stop():
18 | global active
19 | active = False
20 | print('Stopping code!')
21 |
22 | def toggleGather():
23 | global gatherDataset
24 | gatherDataset = not gatherDataset
25 | if gatherDataset:
26 | print('Gathering dataset to folder: ./images/')
27 | else:
28 | print('Stopped gathering. Resume with {}'.format(GATHER))
29 |
30 | def take_ss():
31 | global last_ss, gatherDataset
32 | if (gatherDataset and time() - last_ss > 10): # 10s cooldown on ss
33 | last_ss = time()
34 | wincap.save_ss()
35 |
36 | # Hooks
37 | keyboard.add_hotkey(STOP, on_stop)
38 | keyboard.add_hotkey(GATHER, toggleGather)
39 | # mouse.on_click(take_ss) # this would fire only after the button is released - a bit too late.
40 | mouse.on_button(take_ss, buttons=[mouse.LEFT], types=[mouse.DOWN])
41 |
42 | print('STOP = {}'.format(STOP))
43 | print('Toggle Gather = {}'.format(GATHER))
44 |
45 | while(active):
46 | # press 'q' to exit
47 | # waits 25ms every loop to process key press
48 | if cv.waitKey(25) & 0xFF == ord('q'):
49 | cv.destroyAllWindows()
50 | break
51 |
52 | cv.destroyAllWindows()
53 | print('End.')
--------------------------------------------------------------------------------
/inter.py:
--------------------------------------------------------------------------------
1 | from interception_py.interception import *
2 | import threading
3 |
4 | INTER_M_LEFT_DOWN = interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_LEFT_BUTTON_DOWN.value
5 | INTER_M_LEFT_UP = interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_LEFT_BUTTON_UP.value
6 | INTER_M_RIGHT_DOWN = interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_RIGHT_BUTTON_DOWN.value
7 | INTER_M_RIGHT_UP = interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_RIGHT_BUTTON_UP.value
8 | INTER_M_MOVE = interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_MOVE.value
9 |
10 | # Slowdown amount for aim assist
11 | X_CLOSE = 1.75
12 | X_MEDIUM = 1.55
13 | X_FAR = 1.25
14 | Y_CLOSE = 1.5
15 | Y_MEDIUM = 1.25
16 | Y_FAR = 1.05
17 |
18 | class InterMouse:
19 | """
20 | Using interception C to modify current mouse buffer.
21 | Creates 2 threads for buffer listen & send.
22 | If your mouse stops working, something is wrong with the code in this class - kill this process to gain access to mouse again.
23 | """
24 | # properties
25 | name = 'InterMouse' # Name used for logging
26 | device = None # Mouse that is hooked
27 | mstroke = None # Buffer info from hooked mouse
28 | c = None # Context
29 | pipe_mouse = None # Pipe
30 | thr_listen = None # Listen Thread
31 | thr_adjust = None # Sending Thread
32 | active = False # State to tamper mouse buffer
33 |
34 | # Polling specific
35 | orig_move = (0,0) # Contains original move
36 |
37 | # Target specific - https://www.youtube.com/watch?v=luLX6gCbPyw
38 | target_x = 0 # Target X coordinate from center (0, 0)
39 | target_y = 0 # Target Y coordinate from center (0, 0)
40 | target_d = 0 # Target distance from screen center.
41 | target_size = (0,0) # Target size (width, height)
42 | target_last = (0,0) # Target last position (x, y)
43 | target_move = (0,0) # Target movement from last frame to current frame.
44 |
45 | # constructor
46 | def __init__(self, out_mouse):
47 | """
48 | :param out_mouse: pipe to get vector information of ideal mouse position.
49 | """
50 | print(f'[{self.name}] Process launched.')
51 | self.c = interception()
52 | # Hook mouse
53 | self.c.set_filter(interception.is_mouse,
54 | INTER_M_MOVE | INTER_M_LEFT_DOWN | INTER_M_LEFT_UP | INTER_M_RIGHT_DOWN | INTER_M_RIGHT_UP)
55 |
56 | self.pipe_mouse = out_mouse
57 | self.running = True
58 |
59 | self.thr_adjust = threading.Thread(target=self.run)
60 | self.thr_listen = threading.Thread(target=self.listen)
61 |
62 | self.thr_adjust.start()
63 | self.thr_listen.start()
64 |
65 | def run(self):
66 | """
67 |         Thread for sending mouse buffer information to the OS. This is a MITM between mouse & OS.
68 |         Keep the code lightweight, as it gets called on every mouse poll, e.g. 1000 times/sec.
69 |         If possible, lower your mouse polling rate, as high polling rates produce a bunch of (0,0) moves.
70 |         If this process fails, your mouse movement won't register - i.e. you can't move your mouse.
71 |         Should handle exceptions better; for now, if it fails - kill the process or reboot :(
72 | """
73 | print(f'[{self.name}] Thread send launched.')
74 | while self.running:
75 | self.device = self.c.wait() # This is blocking - will fire on every mouse poll.
76 | self.mstroke = self.c.receive(self.device) # Get polled info, you need to send this or mouse won't respond.
77 | if type(self.mstroke) is mouse_stroke:
78 | # Save original movement
79 | self.orig_move = (self.mstroke.x, self.mstroke.y)
80 | # Check / Update program state
81 | self.check_status()
82 |
83 | if self.active:
84 | self.aim_track()
85 | else:
86 | self.aim_assist()
87 |
88 | self.c.send(self.device, self.mstroke) # Finally send buffer to OS.
89 | print(f'[{self.name}] Thread send ended.')
90 |
91 | def listen(self):
92 | """
93 |         Thread for listening to vectors from the pipe that our ML model feeds.
94 |         This fires at most at the speed our model runs - in my case ~30 FPS, so much slower than our polling rate.
95 |         Its main job is just to save coords from the pipe - plus some small calculations.
96 | """
97 | print(f'[{self.name}] Thread listen launched.')
98 | while self.running:
99 | try:
100 | data = self.pipe_mouse.recv() # Read data from pipe
101 | (move, size, dist) = data
102 | self.target_x = move[0]
103 | self.target_y = move[1]
104 | self.target_size = size
105 | self.target_d = dist
106 |
107 | self.check_target_move(*move)
108 |
109 | self.target_last = (move[0], move[1])
110 | # self.size = size # Save target size?
111 | except EOFError:
112 | print(f'[{self.name}] Thread listen PIPE ERROR.')
113 | self.cleanup() # Kill other thread as well
114 | break
115 | print(f'[{self.name}] Thread listen ended.')
116 |
117 | def cleanup(self):
118 | """
119 | Could be useless code, as Python should clean things when it kills processes.
120 | Still to be 100% sure, running this.
121 | """
122 | self.running = False
123 | #self.c._destroy_context() # It seems this leaves process hanging - not good. Maybe the context is destroyed automatically?
124 |
125 | def check_status(self):
126 | """
127 | Simple way to check if we want to alter mouse buffer.
128 | """
129 | if self.mstroke.state > 0:
130 | if self.mstroke.state == INTER_M_LEFT_DOWN: self.active = True
131 | elif self.mstroke.state == INTER_M_RIGHT_DOWN: self.active = True
132 | else: self.active = False
133 |
134 | def check_target_move(self, x, y):
135 | """
136 | Check how much target has moved from last frame.
137 | """
138 | if (self.target_last[0] != 0 or self.target_last[1] != 0) and (x != 0 or y != 0):
139 | self.target_move = (x - self.target_last[0], y - self.target_last[1])
140 |
141 | def aim_assist(self):
142 | """
143 |         Controller-style aim assist for KBM. Slows down the mouse when close to a target.
144 | Scuffed calculations - needs cleaning.
145 | """
146 | # Checking is done in main loop, no need here anymore.
147 | #self.check_status()
148 | # Yeet out if not active
149 | #if self.active != True: return
150 |
151 | # No target, don't assist.
152 | if (self.target_d == 0 and self.target_x == 0): return
153 |
154 | # Slow down sections
155 | (w, h) = self.target_size
156 | h = h * 0.55 # Y axis slowdown only on 55% area
157 |
158 |         # Basic 1D vector stuff - still might be scuffed with unnecessary parts.
159 | # X axis
160 | if abs(self.target_x) < (w * 0.45): # Inside 45% area
161 | self.mstroke.x = int(self.mstroke.x / X_CLOSE)
162 | elif abs(self.target_x) < (w * 0.75): # Inside 75% area
163 | self.mstroke.x = int(self.mstroke.x / X_MEDIUM)
164 | elif abs(self.target_x) < w: # Inside area
165 | self.mstroke.x = int(self.mstroke.x / X_FAR)
166 |
167 | # Y axis
168 | if abs(self.target_y) < (h * 0.45): # Inside 45% area
169 | self.mstroke.y = int(self.mstroke.y / Y_CLOSE)
170 | elif abs(self.target_y) < (h * 0.75): # Inside 75% area
171 | self.mstroke.y = int(self.mstroke.y / Y_MEDIUM)
172 | elif abs(self.target_y) < h: # Inside area
173 | self.mstroke.y = int(self.mstroke.y / Y_FAR)
174 |
175 | def aim_track(self):
176 | """
177 |         A more aggressive tracking mode.
178 |         Will move the mouse by the amount the target has moved since the last ML inference frame.
179 |
180 |         Caution! This movement isn't really human-like, as it is sudden linear movement - and is "easily" detected.
181 |         Should use a bezier curve and take into account previous mouse movement amounts to smooth everything.
182 |         This is on you to solve - I'm not going to give everything away.
183 | """
184 | if (self.target_move[0] != 0 or self.target_move[1] != 0):
185 |             # These are relative movements, but they don't account for per-axis sensitivity - hence e.g. /2 sensitivity on Y.
186 | self.mstroke.x = int(self.target_move[0])
187 | self.mstroke.y = int(self.target_move[1] / 2)
188 |             # Reset target move, so that we don't repeat our tracking.
189 | self.target_move = (0, 0)
190 | # For bezier / smooth. Calc max allowed move - stash remaining movement, and rinse repeat till target @ center.
191 | # Also you could just adjust current movement and not replace the movement.
--------------------------------------------------------------------------------
/interception_py/.gitignore:
--------------------------------------------------------------------------------
1 | ## Ignore Visual Studio temporary files, build results, and
2 | ## files generated by popular Visual Studio add-ons.
3 | ##
4 | ## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore
5 |
6 | # User-specific files
7 | *.suo
8 | *.user
9 | *.userosscache
10 | *.sln.docstates
11 |
12 | # User-specific files (MonoDevelop/Xamarin Studio)
13 | *.userprefs
14 |
15 | # Build results
16 | [Dd]ebug/
17 | [Dd]ebugPublic/
18 | [Rr]elease/
19 | [Rr]eleases/
20 | x64/
21 | x86/
22 | bld/
23 | [Bb]in/
24 | [Oo]bj/
25 | [Ll]og/
26 |
27 | # Visual Studio 2015/2017 cache/options directory
28 | .vs/
29 | # Uncomment if you have tasks that create the project's static files in wwwroot
30 | #wwwroot/
31 |
32 | # Visual Studio 2017 auto generated files
33 | Generated\ Files/
34 |
35 | # MSTest test Results
36 | [Tt]est[Rr]esult*/
37 | [Bb]uild[Ll]og.*
38 |
39 | # NUNIT
40 | *.VisualState.xml
41 | TestResult.xml
42 |
43 | # Build Results of an ATL Project
44 | [Dd]ebugPS/
45 | [Rr]eleasePS/
46 | dlldata.c
47 |
48 | # Benchmark Results
49 | BenchmarkDotNet.Artifacts/
50 |
51 | # .NET Core
52 | project.lock.json
53 | project.fragment.lock.json
54 | artifacts/
55 | **/Properties/launchSettings.json
56 |
57 | # StyleCop
58 | StyleCopReport.xml
59 |
60 | # Files built by Visual Studio
61 | *_i.c
62 | *_p.c
63 | *_i.h
64 | *.ilk
65 | *.meta
66 | *.obj
67 | *.iobj
68 | *.pch
69 | *.pdb
70 | *.ipdb
71 | *.pgc
72 | *.pgd
73 | *.rsp
74 | *.sbr
75 | *.tlb
76 | *.tli
77 | *.tlh
78 | *.tmp
79 | *.tmp_proj
80 | *.log
81 | *.vspscc
82 | *.vssscc
83 | .builds
84 | *.pidb
85 | *.svclog
86 | *.scc
87 |
88 | # Chutzpah Test files
89 | _Chutzpah*
90 |
91 | # Visual C++ cache files
92 | ipch/
93 | *.aps
94 | *.ncb
95 | *.opendb
96 | *.opensdf
97 | *.sdf
98 | *.cachefile
99 | *.VC.db
100 | *.VC.VC.opendb
101 |
102 | # Visual Studio profiler
103 | *.psess
104 | *.vsp
105 | *.vspx
106 | *.sap
107 |
108 | # Visual Studio Trace Files
109 | *.e2e
110 |
111 | # TFS 2012 Local Workspace
112 | $tf/
113 |
114 | # Guidance Automation Toolkit
115 | *.gpState
116 |
117 | # ReSharper is a .NET coding add-in
118 | _ReSharper*/
119 | *.[Rr]e[Ss]harper
120 | *.DotSettings.user
121 |
122 | # JustCode is a .NET coding add-in
123 | .JustCode
124 |
125 | # TeamCity is a build add-in
126 | _TeamCity*
127 |
128 | # DotCover is a Code Coverage Tool
129 | *.dotCover
130 |
131 | # AxoCover is a Code Coverage Tool
132 | .axoCover/*
133 | !.axoCover/settings.json
134 |
135 | # Visual Studio code coverage results
136 | *.coverage
137 | *.coveragexml
138 |
139 | # NCrunch
140 | _NCrunch_*
141 | .*crunch*.local.xml
142 | nCrunchTemp_*
143 |
144 | # MightyMoose
145 | *.mm.*
146 | AutoTest.Net/
147 |
148 | # Web workbench (sass)
149 | .sass-cache/
150 |
151 | # Installshield output folder
152 | [Ee]xpress/
153 |
154 | # DocProject is a documentation generator add-in
155 | DocProject/buildhelp/
156 | DocProject/Help/*.HxT
157 | DocProject/Help/*.HxC
158 | DocProject/Help/*.hhc
159 | DocProject/Help/*.hhk
160 | DocProject/Help/*.hhp
161 | DocProject/Help/Html2
162 | DocProject/Help/html
163 |
164 | # Click-Once directory
165 | publish/
166 |
167 | # Publish Web Output
168 | *.[Pp]ublish.xml
169 | *.azurePubxml
170 | # Note: Comment the next line if you want to checkin your web deploy settings,
171 | # but database connection strings (with potential passwords) will be unencrypted
172 | *.pubxml
173 | *.publishproj
174 |
175 | # Microsoft Azure Web App publish settings. Comment the next line if you want to
176 | # checkin your Azure Web App publish settings, but sensitive information contained
177 | # in these scripts will be unencrypted
178 | PublishScripts/
179 |
180 | # NuGet Packages
181 | *.nupkg
182 | # The packages folder can be ignored because of Package Restore
183 | **/[Pp]ackages/*
184 | # except build/, which is used as an MSBuild target.
185 | !**/[Pp]ackages/build/
186 | # Uncomment if necessary however generally it will be regenerated when needed
187 | #!**/[Pp]ackages/repositories.config
188 | # NuGet v3's project.json files produces more ignorable files
189 | *.nuget.props
190 | *.nuget.targets
191 |
192 | # Microsoft Azure Build Output
193 | csx/
194 | *.build.csdef
195 |
196 | # Microsoft Azure Emulator
197 | ecf/
198 | rcf/
199 |
200 | # Windows Store app package directories and files
201 | AppPackages/
202 | BundleArtifacts/
203 | Package.StoreAssociation.xml
204 | _pkginfo.txt
205 | *.appx
206 |
207 | # Visual Studio cache files
208 | # files ending in .cache can be ignored
209 | *.[Cc]ache
210 | # but keep track of directories ending in .cache
211 | !*.[Cc]ache/
212 |
213 | # Others
214 | ClientBin/
215 | ~$*
216 | *~
217 | *.dbmdl
218 | *.dbproj.schemaview
219 | *.jfm
220 | *.pfx
221 | *.publishsettings
222 | orleans.codegen.cs
223 |
224 | # Including strong name files can present a security risk
225 | # (https://github.com/github/gitignore/pull/2483#issue-259490424)
226 | #*.snk
227 |
228 | # Since there are multiple workflows, uncomment next line to ignore bower_components
229 | # (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
230 | #bower_components/
231 |
232 | # RIA/Silverlight projects
233 | Generated_Code/
234 |
235 | # Backup & report files from converting an old project file
236 | # to a newer Visual Studio version. Backup files are not needed,
237 | # because we have git ;-)
238 | _UpgradeReport_Files/
239 | Backup*/
240 | UpgradeLog*.XML
241 | UpgradeLog*.htm
242 | ServiceFabricBackup/
243 | *.rptproj.bak
244 |
245 | # SQL Server files
246 | *.mdf
247 | *.ldf
248 | *.ndf
249 |
250 | # Business Intelligence projects
251 | *.rdl.data
252 | *.bim.layout
253 | *.bim_*.settings
254 | *.rptproj.rsuser
255 |
256 | # Microsoft Fakes
257 | FakesAssemblies/
258 |
259 | # GhostDoc plugin setting file
260 | *.GhostDoc.xml
261 |
262 | # Node.js Tools for Visual Studio
263 | .ntvs_analysis.dat
264 | node_modules/
265 |
266 | # Visual Studio 6 build log
267 | *.plg
268 |
269 | # Visual Studio 6 workspace options file
270 | *.opt
271 |
272 | # Visual Studio 6 auto-generated workspace file (contains which files were open etc.)
273 | *.vbw
274 |
275 | # Visual Studio LightSwitch build output
276 | **/*.HTMLClient/GeneratedArtifacts
277 | **/*.DesktopClient/GeneratedArtifacts
278 | **/*.DesktopClient/ModelManifest.xml
279 | **/*.Server/GeneratedArtifacts
280 | **/*.Server/ModelManifest.xml
281 | _Pvt_Extensions
282 |
283 | # Paket dependency manager
284 | .paket/paket.exe
285 | paket-files/
286 |
287 | # FAKE - F# Make
288 | .fake/
289 |
290 | # JetBrains Rider
291 | .idea/
292 | *.sln.iml
293 |
294 | # CodeRush
295 | .cr/
296 |
297 | # Python Tools for Visual Studio (PTVS)
298 | __pycache__/
299 | *.pyc
300 |
301 | # Cake - Uncomment if you are using it
302 | # tools/**
303 | # !tools/packages.config
304 |
305 | # Tabs Studio
306 | *.tss
307 |
308 | # Telerik's JustMock configuration file
309 | *.jmconfig
310 |
311 | # BizTalk build output
312 | *.btp.cs
313 | *.btm.cs
314 | *.odx.cs
315 | *.xsd.cs
316 |
317 | # OpenCover UI analysis results
318 | OpenCover/
319 |
320 | # Azure Stream Analytics local run output
321 | ASALocalRun/
322 |
323 | # MSBuild Binary and Structured Log
324 | *.binlog
325 |
326 | # NVidia Nsight GPU debugger configuration file
327 | *.nvuser
328 |
329 | # MFractors (Xamarin productivity tool) working folder
330 | .mfractor/
331 |
--------------------------------------------------------------------------------
/interception_py/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 cob_258
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/interception_py/README.md:
--------------------------------------------------------------------------------
1 | # interception_py
2 | This is a port (not a [wrapper][wrp]) of the [interception][c_ception] dll to python; it communicates directly with interception's driver.
3 |
4 | ### why not use the wrapper?
5 | * it's very slow and some strokes are lost
6 | * fast strokes made python crash (some heap allocation errors)
7 |
8 | To make it run you should install the driver from [c-interception][c_ception]
9 |
10 | ### example
11 | ```py
12 |
13 | from interception import *
14 |
15 | if __name__ == "__main__":
16 | c = interception()
17 | c.set_filter(interception.is_keyboard,interception_filter_key_state.INTERCEPTION_FILTER_KEY_UP.value)
18 | while True:
19 | device = c.wait()
20 | stroke = c.receive(device)
21 | if type(stroke) is key_stroke:
22 | print(stroke.code)
23 | c.send(device,stroke)
24 | ```
25 |
26 |
27 | [wrp]: https://github.com/cobrce/interception_wrapper
28 | [c_ception]: https://github.com/oblitum/Interception
29 |
--------------------------------------------------------------------------------
/interception_py/__init__.py:
--------------------------------------------------------------------------------
1 | # For relative imports to work in Python 3.6
2 | import os, sys; sys.path.append(os.path.dirname(os.path.realpath(__file__)))
--------------------------------------------------------------------------------
/interception_py/_example_.py:
--------------------------------------------------------------------------------
1 | from interception import *
2 | from consts import *
3 |
4 | if __name__ == "__main__":
5 | c = interception()
6 | c.set_filter(interception.is_keyboard,interception_filter_key_state.INTERCEPTION_FILTER_KEY_UP.value)
7 | while True:
8 | device = c.wait()
9 | stroke = c.receive(device)
10 | if type(stroke) is key_stroke:
11 | print(stroke.code)
12 | c.send(device,stroke)
13 | # hwid = c.get_HWID(device)
14 | # print(u"%s" % hwid)
15 |
--------------------------------------------------------------------------------
/interception_py/_example_hardwareid.py:
--------------------------------------------------------------------------------
1 | from interception import *
2 | from consts import *
3 |
4 | SCANCODE_ESC = 0x01
5 |
6 | if __name__ == "__main__":
7 | c = interception()
8 | c.set_filter(interception.is_keyboard,interception_filter_key_state.INTERCEPTION_FILTER_KEY_UP.value | interception_filter_key_state.INTERCEPTION_FILTER_KEY_DOWN.value)
9 | c.set_filter(interception.is_mouse,interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_LEFT_BUTTON_DOWN.value)
10 | while True:
11 | device = c.wait()
12 | stroke = c.receive(device)
13 | c.send(device,stroke)
14 | if stroke is None or (interception.is_keyboard(device) and stroke.code == SCANCODE_ESC):
15 | break
16 | print(c.get_HWID(device))
17 | c._destroy_context()
18 |
--------------------------------------------------------------------------------
/interception_py/_example_mathpointer.py:
--------------------------------------------------------------------------------
1 | from interception import *
2 | from consts import *
3 | from math import *
4 | from win32api import GetSystemMetrics
5 | from datetime import datetime
6 | from time import sleep
7 |
8 | esc = 0x01
9 | num_0 = 0x0B
10 | num_1 = 0x02
11 | num_2 = 0x03
12 | num_3 = 0x04
13 | num_4 = 0x05
14 | num_5 = 0x06
15 | num_6 = 0x07
16 | num_7 = 0x08
17 | num_8 = 0x09
18 | num_9 = 0x0A
19 | scale = 15
20 | screen_width = GetSystemMetrics(0)
21 | screen_height = GetSystemMetrics(1)
22 |
23 | def delay():
24 | sleep(0.001)
25 |
26 | class point():
27 | x = 0
28 | y = 0
29 | def __init__(self,x,y):
30 | self.x = x
31 | self.y = y
32 |
33 | def circle(t):
34 | f = 10
35 | return point(scale * f * cos(t), scale * f *sin(t))
36 |
37 | def mirabilis(t):
38 | f= 1 / 2
39 | k = 1 / (2 * pi)
40 |
41 | return point(scale * f * (exp(k * t) * cos(t)),
42 | scale * f * (exp(k * t) * sin(t)))
43 |
44 | def epitrochoid(t):
45 | f = 1
46 | R = 6
47 | r = 2
48 | d = 1
49 | c = R + r
50 |
51 | return point(scale * f * (c * cos(t) - d * cos((c * t) / r)),
52 | scale * f * (c * sin(t) - d * sin((c * t) / r)))
53 |
54 | def hypotrochoid(t):
55 | f = 10 / 7
56 | R = 5
57 | r = 3
58 | d = 5
59 | c = R - r
60 |
61 | return point(scale * f * (c * cos(t) + d * cos((c * t) / r)),
62 | scale * f * (c * sin(t) - d * sin((c * t) / r)))
63 |
64 | def hypocycloid(t):
65 | f = 10 / 3
66 | R = 3
67 | r = 1
68 | c = R - r
69 |
70 | return point(scale * f * (c * cos(t) + r * cos((c * t) / r)),
71 | scale * f * (c * sin(t) - r * sin((c * t) / r)))
72 |
73 | def bean(t):
74 | f = 10
75 | c = cos(t)
76 | s = sin(t)
77 |
78 | return point(scale * f * ((pow(c, 3) + pow(s, 3)) * c),
79 | scale * f * ((pow(c, 3) + pow(s, 3)) * s))
80 |
81 | def Lissajous(t):
82 | f = 10
83 | a = 2
84 | b = 3
85 |
86 | return point(scale * f * (sin(a * t)), scale * f * (sin(b * t)))
87 |
88 | def epicycloid(t):
89 | f = 10 / 42
90 | R = 21
91 | r = 10
92 | c = R + r
93 |
94 | return point(scale * f * (c * cos(t) - r * cos((c * t) / r)),
95 | scale * f * (c * sin(t) - r * sin((c * t) / r)))
96 |
97 | def rose(t):
98 | f = 10
99 | R = 1
100 | k = 2 / 7
101 |
102 | return point(scale * f * (R * cos(k * t) * cos(t)),
103 | scale * f * (R * cos(k * t) * sin(t)))
104 |
105 | def butterfly(t):
106 | f = 10 / 4
107 | c = exp(cos(t)) - 2 * cos(4 * t) + pow(sin(t / 12), 5)
108 |
109 | return point(scale * f * (sin(t) * c), scale * f * (cos(t) * c))
110 |
111 | def math_track(context:interception, mouse : int,
112 | center,curve, t1, t2, # changed params order
113 | partitioning):
114 | delta = t2 - t1
115 | position = curve(t1)
116 | mstroke = mouse_stroke(interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_UP.value,
117 | interception_mouse_flag.INTERCEPTION_MOUSE_MOVE_ABSOLUTE.value,
118 | 0,
119 | int((0xFFFF * center.x) / screen_width),
120 | int((0xFFFF * center.y) / screen_height),
121 | 0)
122 |
123 | context.send(mouse,mstroke)
124 |
125 | mstroke.state = 0
126 | mstroke.x = int((0xFFFF * (center.x + position.x)) / screen_width)
127 | mstroke.y = int((0xFFFF * (center.y - position.y)) / screen_height)
128 |
129 | context.send(mouse,mstroke)
130 |
131 | j = 0
132 | for i in range(partitioning+2):
133 | if (j % 250 == 0):
134 | delay()
135 | mstroke.state = interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_UP.value
136 | context.send(mouse,mstroke)
137 |
138 | delay()
139 | mstroke.state = interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_DOWN.value
140 | context.send(mouse,mstroke)
141 | if i > 0:
142 | i = i-2
143 |
144 | position = curve(t1 + (i * delta)/partitioning)
145 | mstroke.x = int((0xFFFF * (center.x + position.x)) / screen_width)
146 | mstroke.y = int((0xFFFF * (center.y - position.y)) / screen_height)
147 | context.send(mouse,mstroke)
148 | delay()
149 | j = j + 1
150 |
151 | delay()
152 | mstroke.state = interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_DOWN.value
153 | context.send(mouse,mstroke)
154 |
155 | delay()
156 | mstroke.state = interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_UP.value
157 | context.send(mouse,mstroke)
158 |
159 | delay()
160 | mstroke.state = 0
161 | mstroke.x = int((0xFFFF * center.x) / screen_width)
162 | mstroke.y = int((0xFFFF * center.y) / screen_height)
163 | context.send(mouse,mstroke)
164 |
165 | curves = { num_0 : (circle,0,2*pi,200),
166 | num_1 : (mirabilis,-6*pi,6*pi,200),
167 | num_2 : (epitrochoid,0, 2 * pi, 200),
168 | num_3 : (hypotrochoid, 0, 6 * pi, 200),
169 | num_4 : (hypocycloid,0, 2 * pi, 200),
170 | num_5 : (bean, 0, pi, 200),
171 | num_6 : (Lissajous, 0, 2 * pi, 200),
172 | num_7 : (epicycloid, 0, 20 * pi, 1000),
173 | num_8 : (rose,0, 14 * pi, 500),
174 | num_9 : (butterfly, 0, 21 * pi, 2000),
175 | }
176 |
177 |
178 | notice = '''NOTICE: This example works on real machines.
179 | Virtual machines generally work with absolute mouse
180 | positioning over the screen, which this sample isn't
181 | prepared to handle.
182 |
183 | Now please, first move the mouse that's going to be impersonated.
184 | '''
185 |
186 | steps = '''Impersonating mouse %d
187 | Now:
188 | - Go to Paint (or whatever place you want to draw)
189 | - Select your pencil
190 | - Position your mouse in the drawing board
191 | - Press any digit (not numpad) on your keyboard to draw an equation
192 | - Press ESC to exit.'''
193 |
194 | def main():
195 |
196 | mouse = 0
197 | position = point(screen_width // 2, screen_height // 2)
198 | context = interception()
199 | context.set_filter(interception.is_keyboard,
200 | interception_filter_key_state.INTERCEPTION_FILTER_KEY_DOWN.value |
201 | interception_filter_key_state.INTERCEPTION_FILTER_KEY_UP.value)
202 | context.set_filter(interception.is_mouse,
203 | interception_filter_mouse_state.INTERCEPTION_FILTER_MOUSE_MOVE.value )
204 |
205 | print(notice)
206 |
207 | while True:
208 |
209 | device = context.wait()
210 | if interception.is_mouse(device):
211 | if mouse == 0:
212 | mouse = device
213 | print( steps % (device - 10))
214 |
215 | mstroke = context.receive(device)
216 |
217 | position.x += mstroke.x
218 | position.y += mstroke.y
219 |
220 | if position.x < 0:
221 | position.x = 0
222 | if position.x > screen_width - 1:
223 | position.x = screen_width -1
224 |
225 | if position.y <0 :
226 | position.y = 0
227 | if position.y > screen_height - 1:
228 | position.y = screen_height -1
229 |
230 | mstroke.flags = interception_mouse_flag.INTERCEPTION_MOUSE_MOVE_ABSOLUTE.value
231 | mstroke.x = int((0xFFFF * position.x) / screen_width)
232 | mstroke.y = int((0xFFFF * position.y) / screen_height)
233 |
234 | context.send(device,mstroke)
235 |
236 | if mouse and interception.is_keyboard(device):
237 | kstroke = context.receive(device)
238 |
239 | if kstroke.code == esc:
240 | return
241 |
242 | if kstroke.state == interception_key_state.INTERCEPTION_KEY_DOWN.value:
243 | if kstroke.code in curves:
244 | math_track(context,mouse,position,*curves[kstroke.code])
245 | else:
246 | context.send(device,kstroke)
247 |
248 | elif kstroke.state == interception_key_state.INTERCEPTION_KEY_UP.value:
249 | if not kstroke.code in curves:
250 | context.send(device,kstroke)
251 | else:
252 | context.send(device,kstroke)
253 |
254 |
255 | if __name__ == "__main__":
256 | main()
--------------------------------------------------------------------------------
/interception_py/_right_click.py:
--------------------------------------------------------------------------------
1 | from interception import *
2 | from win32api import GetSystemMetrics
3 |
4 | # get screen size
5 | screen_width = GetSystemMetrics(0)
6 | screen_height = GetSystemMetrics(1)
7 |
8 | # create a context for interception to use to send strokes, in this case
9 | # we won't use filters, we will manually search for the first found mouse
10 | context = interception()
11 |
12 | # loop through all devices and check if they correspond to a mouse
13 | mouse = 0
14 | for i in range(MAX_DEVICES):
15 | if interception.is_mouse(i):
16 | mouse = i
17 | break
18 |
19 | # no mouse found, we quit
20 | if (mouse == 0):
21 | print("No mouse found")
22 | exit(0)
23 |
24 |
25 | # we create a new mouse stroke, initially we use set right button down, we also use absolute move,
26 | # and for the coordinate (x and y) we use center screen
27 | mstroke = mouse_stroke(interception_mouse_state.INTERCEPTION_MOUSE_RIGHT_BUTTON_DOWN.value,
28 | interception_mouse_flag.INTERCEPTION_MOUSE_MOVE_ABSOLUTE.value,
29 | 0,
30 | int((0xFFFF * screen_width/2) / screen_width),
31 | int((0xFFFF * screen_height/2) / screen_height),
32 | 0)
33 |
34 | context.send(mouse,mstroke) # we send the key stroke, now the right button is down
35 |
36 | mstroke.state = interception_mouse_state.INTERCEPTION_MOUSE_RIGHT_BUTTON_UP.value # update the stroke to release the button
37 | context.send(mouse,mstroke) #button right is up
--------------------------------------------------------------------------------
/interception_py/consts.py:
--------------------------------------------------------------------------------
1 | from enum import Enum
2 |
3 | class interception_key_state(Enum):
4 | INTERCEPTION_KEY_DOWN = 0x00
5 | INTERCEPTION_KEY_UP = 0x01
6 | INTERCEPTION_KEY_E0 = 0x02
7 | INTERCEPTION_KEY_E1 = 0x04
8 | INTERCEPTION_KEY_TERMSRV_SET_LED = 0x08
9 | INTERCEPTION_KEY_TERMSRV_SHADOW = 0x10
10 | INTERCEPTION_KEY_TERMSRV_VKPACKET = 0x20
11 |
12 | class interception_filter_key_state(Enum):
13 | INTERCEPTION_FILTER_KEY_NONE = 0x0000
14 | INTERCEPTION_FILTER_KEY_ALL = 0xFFFF
15 | INTERCEPTION_FILTER_KEY_DOWN = interception_key_state.INTERCEPTION_KEY_UP.value
16 | INTERCEPTION_FILTER_KEY_UP = interception_key_state.INTERCEPTION_KEY_UP.value << 1
17 | INTERCEPTION_FILTER_KEY_E0 = interception_key_state.INTERCEPTION_KEY_E0.value << 1
18 | INTERCEPTION_FILTER_KEY_E1 = interception_key_state.INTERCEPTION_KEY_E1.value << 1
19 | INTERCEPTION_FILTER_KEY_TERMSRV_SET_LED = interception_key_state.INTERCEPTION_KEY_TERMSRV_SET_LED.value << 1
20 | INTERCEPTION_FILTER_KEY_TERMSRV_SHADOW = interception_key_state.INTERCEPTION_KEY_TERMSRV_SHADOW.value << 1
21 | INTERCEPTION_FILTER_KEY_TERMSRV_VKPACKET = interception_key_state.INTERCEPTION_KEY_TERMSRV_VKPACKET.value << 1
22 |
23 | class interception_mouse_state (Enum):
24 | INTERCEPTION_MOUSE_LEFT_BUTTON_DOWN = 0x001
25 | INTERCEPTION_MOUSE_LEFT_BUTTON_UP = 0x002
26 | INTERCEPTION_MOUSE_RIGHT_BUTTON_DOWN = 0x004
27 | INTERCEPTION_MOUSE_RIGHT_BUTTON_UP = 0x008
28 | INTERCEPTION_MOUSE_MIDDLE_BUTTON_DOWN = 0x010
29 | INTERCEPTION_MOUSE_MIDDLE_BUTTON_UP = 0x020
30 |
31 | INTERCEPTION_MOUSE_BUTTON_1_DOWN = INTERCEPTION_MOUSE_LEFT_BUTTON_DOWN
32 | INTERCEPTION_MOUSE_BUTTON_1_UP = INTERCEPTION_MOUSE_LEFT_BUTTON_UP
33 | INTERCEPTION_MOUSE_BUTTON_2_DOWN = INTERCEPTION_MOUSE_RIGHT_BUTTON_DOWN
34 | INTERCEPTION_MOUSE_BUTTON_2_UP = INTERCEPTION_MOUSE_RIGHT_BUTTON_UP
35 | INTERCEPTION_MOUSE_BUTTON_3_DOWN = INTERCEPTION_MOUSE_MIDDLE_BUTTON_DOWN
36 | INTERCEPTION_MOUSE_BUTTON_3_UP = INTERCEPTION_MOUSE_MIDDLE_BUTTON_UP
37 |
38 | INTERCEPTION_MOUSE_BUTTON_4_DOWN = 0x040
39 | INTERCEPTION_MOUSE_BUTTON_4_UP = 0x080
40 | INTERCEPTION_MOUSE_BUTTON_5_DOWN = 0x100
41 | INTERCEPTION_MOUSE_BUTTON_5_UP = 0x200
42 |
43 | INTERCEPTION_MOUSE_WHEEL = 0x400
44 | INTERCEPTION_MOUSE_HWHEEL = 0x800
45 |
46 | class interception_filter_mouse_state(Enum):
47 | INTERCEPTION_FILTER_MOUSE_NONE = 0x0000
48 | INTERCEPTION_FILTER_MOUSE_ALL = 0xFFFF
49 |
50 | INTERCEPTION_FILTER_MOUSE_LEFT_BUTTON_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_DOWN.value
51 | INTERCEPTION_FILTER_MOUSE_LEFT_BUTTON_UP = interception_mouse_state.INTERCEPTION_MOUSE_LEFT_BUTTON_UP.value
52 | INTERCEPTION_FILTER_MOUSE_RIGHT_BUTTON_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_RIGHT_BUTTON_DOWN.value
53 | INTERCEPTION_FILTER_MOUSE_RIGHT_BUTTON_UP = interception_mouse_state.INTERCEPTION_MOUSE_RIGHT_BUTTON_UP.value
54 | INTERCEPTION_FILTER_MOUSE_MIDDLE_BUTTON_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_MIDDLE_BUTTON_DOWN.value
55 | INTERCEPTION_FILTER_MOUSE_MIDDLE_BUTTON_UP = interception_mouse_state.INTERCEPTION_MOUSE_MIDDLE_BUTTON_UP.value
56 |
57 | INTERCEPTION_FILTER_MOUSE_BUTTON_1_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_1_DOWN.value
58 | INTERCEPTION_FILTER_MOUSE_BUTTON_1_UP = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_1_UP.value
59 | INTERCEPTION_FILTER_MOUSE_BUTTON_2_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_2_DOWN.value
60 | INTERCEPTION_FILTER_MOUSE_BUTTON_2_UP = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_2_UP.value
61 | INTERCEPTION_FILTER_MOUSE_BUTTON_3_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_3_DOWN.value
62 | INTERCEPTION_FILTER_MOUSE_BUTTON_3_UP = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_3_UP.value
63 |
64 | INTERCEPTION_FILTER_MOUSE_BUTTON_4_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_4_DOWN.value
65 | INTERCEPTION_FILTER_MOUSE_BUTTON_4_UP = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_4_UP.value
66 | INTERCEPTION_FILTER_MOUSE_BUTTON_5_DOWN = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_5_DOWN.value
67 | INTERCEPTION_FILTER_MOUSE_BUTTON_5_UP = interception_mouse_state.INTERCEPTION_MOUSE_BUTTON_5_UP.value
68 |
69 | INTERCEPTION_FILTER_MOUSE_WHEEL = interception_mouse_state.INTERCEPTION_MOUSE_WHEEL.value
70 | INTERCEPTION_FILTER_MOUSE_HWHEEL = interception_mouse_state.INTERCEPTION_MOUSE_HWHEEL.value
71 | INTERCEPTION_FILTER_MOUSE_MOVE = 0x1000
72 |
73 | class interception_mouse_flag(Enum):
74 | INTERCEPTION_MOUSE_MOVE_RELATIVE = 0x000
75 | INTERCEPTION_MOUSE_MOVE_ABSOLUTE = 0x001
76 | INTERCEPTION_MOUSE_VIRTUAL_DESKTOP = 0x002
77 | INTERCEPTION_MOUSE_ATTRIBUTES_CHANGED = 0x004
78 | INTERCEPTION_MOUSE_MOVE_NOCOALESCE = 0x008
79 | INTERCEPTION_MOUSE_TERMSRV_SRC_SHADOW = 0x100
--------------------------------------------------------------------------------
/interception_py/interception.py:
--------------------------------------------------------------------------------
1 | from ctypes import *
2 | from stroke import *
3 | from consts import *
4 |
5 | MAX_DEVICES = 20
6 | MAX_KEYBOARD = 10
7 | MAX_MOUSE = 10
8 |
9 | k32 = windll.LoadLibrary('kernel32')
10 |
11 | class interception():
12 | _context = []
13 | k32 = None
14 | _c_events = (c_void_p * MAX_DEVICES)()
15 |
16 | def __init__(self):
17 | try:
18 | for i in range(MAX_DEVICES):
19 | _device = device(k32.CreateFileA(b'\\\\.\\interception%02d' % i,
20 | 0x80000000,0,0,3,0,0),
21 | k32.CreateEventA(0, 1, 0, 0),
22 | interception.is_keyboard(i))
23 | self._context.append(_device)
24 | self._c_events[i] = _device.event
25 |
26 | except Exception as e:
27 | self._destroy_context()
28 | raise e
29 |
30 | def wait(self,milliseconds =-1):
31 |
32 | result = k32.WaitForMultipleObjects(MAX_DEVICES,self._c_events,0,milliseconds)
33 |         if result == -1 or result == 0x102: # WAIT_FAILED or WAIT_TIMEOUT -> no device signaled
34 | return 0
35 | else:
36 | return result
37 |
38 | def set_filter(self,predicate,filter):
39 | for i in range(MAX_DEVICES):
40 | if predicate(i):
41 | result = self._context[i].set_filter(filter)
42 |
43 | def get_HWID(self,device:int):
44 | if not interception.is_invalid(device):
45 | try:
46 | return self._context[device].get_HWID().decode("utf-16")
47 | except:
48 | pass
49 | return ""
50 |
51 | def receive(self,device:int):
52 | if not interception.is_invalid(device):
53 | return self._context[device].receive()
54 |
55 | def send(self,device: int,stroke : stroke):
56 | if not interception.is_invalid(device):
57 | self._context[device].send(stroke)
58 |
59 | @staticmethod
60 | def is_keyboard(device):
61 | return device+1 > 0 and device+1 <= MAX_KEYBOARD
62 |
63 | @staticmethod
64 | def is_mouse(device):
65 | return device+1 > MAX_KEYBOARD and device+1 <= MAX_KEYBOARD + MAX_MOUSE
66 |
67 | @staticmethod
68 | def is_invalid(device):
69 | return device+1 <= 0 or device+1 > (MAX_KEYBOARD + MAX_MOUSE)
70 |
71 | def _destroy_context(self):
72 | for device in self._context:
73 | device.destroy()
74 |
75 | class device_io_result:
76 | result = 0
77 | data = None
78 | data_bytes = None
79 | def __init__(self,result,data):
80 | self.result = result
81 | if data!=None:
82 | self.data = list(data)
83 | self.data_bytes = bytes(data)
84 |
85 |
86 | def device_io_call(decorated):
87 | def decorator(device,*args,**kwargs):
88 | command,inbuffer,outbuffer = decorated(device,*args,**kwargs)
89 | return device._device_io_control(command,inbuffer,outbuffer)
90 | return decorator
91 |
92 | class device():
93 | handle=0
94 | event=0
95 | is_keyboard = False
96 | _parser = None
97 | _bytes_returned = (c_int * 1)(0)
98 | _c_byte_500 = (c_byte * 500)()
99 | _c_int_2 = (c_int * 2)()
100 | _c_ushort_1 = (c_ushort * 1)()
101 | _c_int_1 = (c_int * 1)()
102 | _c_recv_buffer = None
103 |
104 | def __init__(self, handle, event,is_keyboard:bool):
105 | self.is_keyboard = is_keyboard
106 | if is_keyboard:
107 | self._c_recv_buffer = (c_byte * 12)()
108 | self._parser = key_stroke
109 | else:
110 | self._c_recv_buffer = (c_byte * 24)()
111 | self._parser = mouse_stroke
112 |
113 | if handle == -1 or event == 0:
114 | raise Exception("Can't create device")
115 | self.handle=handle
116 | self.event =event
117 |
118 | if self._device_set_event().result == 0:
119 | raise Exception("Can't communicate with driver")
120 |
121 | def destroy(self):
122 | if self.handle != -1:
123 | k32.CloseHandle(self.handle)
124 | if self.event!=0:
125 | k32.CloseHandle(self.event)
126 |
127 | @device_io_call
128 | def get_precedence(self):
129 | return 0x222008,0,self._c_int_1
130 |
131 | @device_io_call
132 | def set_precedence(self,precedence : int):
133 | self._c_int_1[0] = precedence
134 | return 0x222004,self._c_int_1,0
135 |
136 | @device_io_call
137 | def get_filter(self):
138 | return 0x222020,0,self._c_ushort_1
139 |
140 | @device_io_call
141 | def set_filter(self,filter):
142 | self._c_ushort_1[0] = filter
143 | return 0x222010,self._c_ushort_1,0
144 |
145 | @device_io_call
146 | def _get_HWID(self):
147 | return 0x222200,0,self._c_byte_500
148 |
149 | def get_HWID(self):
150 | data = self._get_HWID().data_bytes
151 | return data[:self._bytes_returned[0]]
152 |
153 | @device_io_call
154 | def _receive(self):
155 | return 0x222100,0,self._c_recv_buffer
156 |
157 | def receive(self):
158 | data = self._receive().data_bytes
159 | return self._parser.parse_raw(data)
160 |
161 | def send(self,stroke:stroke):
162 | if type(stroke) == self._parser:
163 | self._send(stroke)
164 |
165 | @device_io_call
166 | def _send(self,stroke:stroke):
167 | memmove(self._c_recv_buffer,stroke.data_raw,len(self._c_recv_buffer))
168 | return 0x222080,self._c_recv_buffer,0
169 |
170 | @device_io_call
171 | def _device_set_event(self):
172 | self._c_int_2[0] = self.event
173 | return 0x222040,self._c_int_2,0
174 |
175 | def _device_io_control(self,command,inbuffer,outbuffer)->device_io_result:
176 | res = k32.DeviceIoControl(self.handle,command,inbuffer,
177 | len(bytes(inbuffer)) if inbuffer != 0 else 0,
178 | outbuffer,
179 | len(bytes(outbuffer)) if outbuffer !=0 else 0,
180 | self._bytes_returned,0)
181 |
182 | return device_io_result(res,outbuffer if outbuffer !=0 else None)
--------------------------------------------------------------------------------
/interception_py/stroke.py:
--------------------------------------------------------------------------------
1 | import struct
2 |
3 | class stroke():
4 |
5 | @property
6 | def data(self):
7 | raise NotImplementedError
8 |
9 | @property
10 | def data_raw(self):
11 | raise NotImplementedError
12 |
13 |
14 | class mouse_stroke(stroke):
15 |
16 | fmt = 'HHhiiI'
17 | fmt_raw = 'HHHHIiiI'
18 | state = 0
19 | flags = 0
20 | rolling = 0
21 | x = 0
22 | y = 0
23 | information = 0
24 |
25 | def __init__(self,state,flags,rolling,x,y,information):
26 | super().__init__()
27 | self.state =state
28 | self.flags = flags
29 | self.rolling = rolling
30 | self.x = x
31 | self.y = y
32 | self.information = information
33 |
34 | @staticmethod
35 | def parse(data):
36 | return mouse_stroke(*struct.unpack(mouse_stroke.fmt,data))
37 |
38 | @staticmethod
39 | def parse_raw(data):
40 | unpacked= struct.unpack(mouse_stroke.fmt_raw,data)
41 | return mouse_stroke(
42 | unpacked[2],
43 | unpacked[1],
44 | unpacked[3],
45 | unpacked[5],
46 | unpacked[6],
47 | unpacked[7])
48 |
49 | @property
50 | def data(self):
51 | data = struct.pack(self.fmt,
52 | self.state,
53 | self.flags,
54 | self.rolling,
55 | self.x,
56 | self.y,
57 | self.information)
58 | return data
59 |
60 | @property
61 | def data_raw(self):
62 | data = struct.pack(self.fmt_raw,
63 | 0,
64 | self.flags,
65 | self.state,
66 | self.rolling,
67 | 0,
68 | self.x,
69 | self.y,
70 | self.information)
71 |
72 | return data
73 |
74 | class key_stroke(stroke):
75 |
76 | fmt = 'HHI'
77 | fmt_raw = 'HHHHI'
78 | code = 0
79 | state = 0
80 | information = 0
81 |
82 | def __init__(self,code,state,information):
83 | super().__init__()
84 | self.code = code
85 | self.state = state
86 | self.information = information
87 |
88 |
89 | @staticmethod
90 | def parse(data):
91 | return key_stroke(*struct.unpack(key_stroke.fmt,data))
92 |
93 | @staticmethod
94 | def parse_raw(data):
95 | unpacked= struct.unpack(key_stroke.fmt_raw,data)
96 | return key_stroke(unpacked[1],unpacked[2],unpacked[4])
97 |
98 | @property
99 | def data(self):
100 | data = struct.pack(self.fmt,self.code,self.state,self.information)
101 | return data
102 | @property
103 | def data_raw(self):
104 | data = struct.pack(self.fmt_raw,0,self.code,self.state,0,self.information)
105 | return data
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | """
2 | Testing multiprocessing,
3 | should utilize CPU cores
4 | """
5 | import multiprocessing
6 | from multiprocessing import Pipe
7 | import argparse
8 |
9 | # Custom Classes for processes
10 | from yolo import Yolo
11 | from wincap import WinCap
12 | from inter import InterMouse
13 | from screen import Screen
14 |
15 | class MultiYOLO:
16 | """
17 | Testing multiprocessing benefits, will it speed things up?
18 |     Mainly looking for parallel execution of yolo & capture
19 | """
20 | # properties
21 | mouse = None
22 | wincap = None
23 | screen = None
24 | yolo = None
25 | running = False
26 | name = 'MultiYOLOv5'
27 |
28 | # constructor
29 | def __init__(self, model, img=640, source=640, window=None, debug=False) -> None:
30 | print(f'[{self.name}] Loading...')
31 | # Pipes
32 | out_mouse, in_mouse = Pipe() # Mouse info (input from yolo, output for mouse)
33 | out_frame, in_frame = Pipe() # Screen capture info (input from wincap, output for yolo)
34 | out_result, in_result = Pipe() # Inference results (input from yolo, output for screen)
35 | self.running = True
36 |
37 | self.yolo = multiprocessing.Process(target=Yolo, args=(out_frame, in_result, model, img, source))
38 | self.wincap = multiprocessing.Process(target=WinCap, args=(in_frame, source, False, window))
39 | self.mouse = multiprocessing.Process(target=InterMouse, args=(out_mouse,))
40 | self.screen = multiprocessing.Process(target=Screen, args=(out_result, in_mouse, source, debug))
41 |
42 | self.wincap.start() # Will start capture instantly... model won't be ready
43 | self.mouse.start()
44 | self.screen.start()
45 | self.yolo.start()
46 | pass
47 |
48 | def run(self):
49 | print(f'[{self.name}] Starting...')
50 |
51 | while self.running:
52 | """ try:
53 | img = self.img_pipe.recv()
54 | if img.size > 0: print(f'Got Image!')
55 | except EOFError:
56 | break """
57 | if (not self.running): print(f'[{self.name}] Not Running')
58 |
59 | print(f'[{self.name}] End.')
60 |
61 | def exit_yolo(self):
62 | print(f'[{self.name}] Stopping...')
63 | self.mouse.terminate()
64 | self.screen.terminate()
65 | self.yolo.terminate()
66 | self.wincap.terminate()
67 | # Stop main program
68 | self.running = False
69 |
70 | def parse_opt():
71 | parser = argparse.ArgumentParser()
72 | parser.add_argument('--model', type=str, default='models/bestv2.pt', help='path to model.pt')
73 | parser.add_argument('--img', '--img-size', type=int, default=240, help='inference size (pixels)') #640
74 | parser.add_argument('--source', '--source-size', type=int, default=460, help='capture size (pixels)') #640
75 | parser.add_argument('--window', type=str, default=None, help='window name to capture')
76 |     parser.add_argument('--debug', action='store_true', help='Render inference results')
77 | opt = parser.parse_args()
78 | return opt
79 |
80 | def main(opt):
81 | print('Detected params: ' + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
82 | a = MultiYOLO(**vars(opt))
83 | a.run()
84 |
85 | if __name__ == "__main__":
86 | opt = parse_opt()
87 | main(opt)
--------------------------------------------------------------------------------
/models/best.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matias-kovero/ObjectDetection/08fadb675e3b8a724251a959a4008e35ba77cb25/models/best.pt
--------------------------------------------------------------------------------
/models/bestv2.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matias-kovero/ObjectDetection/08fadb675e3b8a724251a959a4008e35ba77cb25/models/bestv2.pt
--------------------------------------------------------------------------------
/screen.py:
--------------------------------------------------------------------------------
1 | import cv2 as cv
2 | from time import time
3 |
4 | import numpy as np
5 |
6 | # Colors - BGR
7 | CYAN = (255, 255, 0)
8 | YELLOW = ( 0, 255, 255)
9 | ORANGE = ( 0, 132, 255)
10 | RED = ( 0, 0, 255)
11 | GREEN = ( 0, 255, 0)
12 | PURPLE = (255, 0, 255)
13 |
14 | class Screen:
15 | """
16 | If debug is turned on, to close screen -> focus screen and press Q
17 | """
18 | # properties
19 | name = 'Screen'
20 | pipe_result = None
21 | pipe_mouse = None
22 | running = False
23 | source = 0
24 | previous = None
25 | prev_dist = 0
26 | debug = False
27 | # constructor
28 | def __init__(self, out_result, in_mouse, source, debug=False) -> None:
29 | print(f'[{self.name}] Process launched.')
30 | # keyboard.add_hotkey(kill_switch, self.cleanup)
31 | # Pipes
32 | self.pipe_result = out_result
33 | self.pipe_mouse = in_mouse
34 | # Screen size
35 | self.source = source
36 |         # Show inference results - this hinders FPS - should only be used when debugging.
37 | self.debug = debug
38 |
39 | self.run()
40 | pass
41 |
42 | def run(self):
43 | self.running = True
44 | last_time = time()
45 | while self.running:
46 | try:
47 | (result, img, model_time) = self.pipe_result.recv()
48 | self.find_aid(result)
49 |
50 | if self.debug:
51 | if result is not None:
52 | self.plot_boxes(img, result)
53 | cv.putText(img, f'{1/(time()-last_time):.1f} FPS | {model_time:.0f}ms', (2, 15), cv.FONT_HERSHEY_COMPLEX_SMALL, 0.65, CYAN)
54 | cv.imshow('Yolo Debug', img)
55 | last_time = time()
56 | # This will mess up mouse capture - don't close with this - close by killing process.
57 | if cv.waitKey(1) & 0xFF == ord('q'):
58 | break
59 | except EOFError:
60 | break
61 |
62 | cv.destroyAllWindows()
63 | print(f'[{self.name}] Main Thread ended.')
64 |
65 | def cleanup(self):
66 | print(f'[{self.name}] Cleanup.')
67 | self.running = False
68 |
69 | def plot_boxes(self, frame, pos):
70 | """
71 | Plots boxes and labels on frame.
72 | :param frame: frame on which to make plots
73 | :param pos: inferences made by model
74 | :return: new frame with boxes and labels plotted
75 | """
76 | # We could check all results, and only render one that is the closest
77 | rect_center = (int((pos[0]+pos[2]) / 2), int((pos[1]+pos[3]) / 2))
78 | width = int(pos[2] - pos[0])
79 |         offset = int((pos[3]-pos[1]) * 0.3) # aim 30% of box height above center; 0.5 would be the top edge
80 | height = max(60, offset)
81 | # Put rectangle around our object
82 | cv.rectangle(frame, (int(pos[0]), int(pos[1])), (int(pos[2]), int(pos[3])), CYAN, 2)
83 | # Outer slow section
84 | """ cv.rectangle(frame,
85 | (int(rect_center[0] - (width / 2)), int(rect_center[1] - offset - (height / 2))),
86 | (int(rect_center[0] + (width / 2)), int(rect_center[1] - offset + (height / 2))),
87 | YELLOW, 2)
88 | # Medium slow down
89 | cv.rectangle(frame,
90 | (int(rect_center[0] - (width * 0.75 / 2)), int(rect_center[1] - offset - (height * 0.75 / 2))),
91 | (int(rect_center[0] + (width * 0.75 / 2)), int(rect_center[1] - offset + (height * 0.75 / 2))),
92 | ORANGE, 2)
93 | # Major slow down
94 | cv.rectangle(frame,
95 | (int(rect_center[0] - (width * 0.45 / 2)), int(rect_center[1] - offset - (height * 0.45 / 2))),
96 | (int(rect_center[0] + (width * 0.45 / 2)), int(rect_center[1] - offset + (height * 0.45 / 2))),
97 | RED, 2) """
98 | # Get text size
99 | (txt_w, txt_h), baseline = cv.getTextSize(f'{pos[4]:.2f}', cv.FONT_HERSHEY_COMPLEX_SMALL, 0.5, 1)
100 | # Text rectangle
101 | cv.rectangle(frame, (int(pos[0]), int(pos[1])), (int(pos[0]) + txt_w, int(pos[1]) - txt_h - baseline), CYAN, cv.FILLED)
102 | # Put text above rectangle
103 | cv.putText(frame, f'{pos[4]:.2f}', (int(pos[0]), int(pos[1] - 3)), cv.FONT_HERSHEY_COMPLEX_SMALL, 0.5, (0,0,0))
104 |         # Center - Point of Interest
105 | frame_center = int(self.source / 2)
106 |         # print(self.rect_distance(rect_center[0] - frame_center, rect_center[1] - frame_center))
107 | cv.circle(frame, (rect_center[0], rect_center[1] - offset), 3, GREEN, 2, cv.FILLED)
108 | # draw line from POI to screen center
109 | cv.line(frame, (rect_center[0], rect_center[1] - offset), (frame_center, frame_center), YELLOW, 1)
110 | return frame
111 |
112 | def find_aid(self, pos):
113 |         # If data has not changed -> return (`is` only ever matches here when both are None)
114 | if self.previous is pos:
115 | return
116 |         # When the target disappears, send one reset message, then ignore further None results
117 | if pos is None:
118 | self.pipe_mouse.send(( (0,0), (0,0), 0) )
119 | self.previous = None
120 | else:
121 |             # Sent on every frame otherwise - the distance check below throttles the pipe
122 | rect_center = (int((pos[0]+pos[2]) / 2), int((pos[1]+pos[3]) / 2))
123 | width = int(pos[2] - pos[0])
124 | offset = int((pos[3]-pos[1]) * 0.3)
125 | height = max(30, offset)
126 | screen_center = int(self.source / 2)
127 |             # These are the actual coords, i.e. the distance from the screen center
128 | x = rect_center[0] - screen_center
129 | y = rect_center[1] - offset - screen_center
130 |             move = (x, -y) # Flip Y: image coords are top-left (0,0) so +y is down, but mouse movement is +up / -down
131 | size = (width, height)
132 | dist = self.rect_distance(x, y)
133 | if (abs(self.prev_dist - dist) > 3): #or ((int(self.prev_dist) ^ int(dist)) < 0)):
134 | self.pipe_mouse.send((move, size, dist))
135 | self.prev_dist = dist
136 | self.previous = move
137 |
138 |     def rect_distance(self, x, y):
139 |         return np.hypot(x, y) # Euclidean distance from the screen center
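140 | 
141 | # Worked example of find_aid (hypothetical numbers): with source=460 the screen
142 | # center is 230. A detection box (250, 200, 310, 320) gives rect_center (280, 260)
143 | # and offset int(120 * 0.3) = 36, so x = 280 - 230 = 50 and y = 260 - 36 - 230 = -6.
144 | # The move sent through the pipe is (50, 6): 50 px right and 6 px up, because
145 | # image y grows downwards while mouse y grows upwards.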
--------------------------------------------------------------------------------
/wincap.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import win32gui, win32ui, win32con
3 | from PIL import Image
4 | from pathlib import Path
5 | import time
6 | # import keyboard
7 |
8 | W = 640
9 | H = 640
10 |
11 | class WinCap:
12 | # properties
13 | w = 0
14 | h = 0
15 | hwnd = None
16 | cropped_x = 0
17 | cropped_y = 0
18 | offset_x = 0
19 | offset_y = 0
20 | img_input = None
21 | running = False
22 |
23 | # constructor
24 | def __init__(self, img_input, source, gather=False, window_name=None, show_names=False) -> None:
25 | # keyboard.add_hotkey(kill_switch, self.cleanup)
26 | print(f'[WinCap] Process launched.')
27 | self.img_input = img_input
28 |         # find the handle of the window we want to capture; with no name given, capture the entire screen
29 | if window_name is None:
30 | self.hwnd = win32gui.GetDesktopWindow()
31 | else:
32 | self.hwnd = win32gui.FindWindow(None, window_name)
33 | #self.hwnd = win32gui.FindWindow(window_name, None)
34 | if not self.hwnd:
35 | if show_names: self.list_win_names()
36 | raise Exception('Window not found: {}'.format(window_name))
37 |
38 | # get win size [left, top, right, bottom]
39 | window_rect = win32gui.GetWindowRect(self.hwnd)
40 | self.w = window_rect[2] - window_rect[0]
41 | self.h = window_rect[3] - window_rect[1]
42 |
43 | if self.w < source or self.h < source:
44 | raise Exception('Window is smaller than {0}x{1}!'.format(source, source))
45 |
46 | self.cropped_x = int(self.w / 2 - (source/2))
47 | self.cropped_y = int(self.h / 2 - (source/2))
48 | self.w = source
49 | self.h = source
50 |
51 | # set the cropped coordinates offset so we can translate ss
52 | # images into actual screen positions
53 | self.offset_x = window_rect[0] + self.cropped_x
54 | self.offset_y = window_rect[1] + self.cropped_y
55 | self.running = True
56 |
57 | if not gather: self.run()
58 |
59 | def run(self):
60 | """
61 |         Wincap boosted from the original 20 FPS to 200+ FPS.
62 |         Happy about the fix - capture isn't a bottleneck anymore.
63 | """
64 | last_time = time.time()
65 | while self.running:
66 | # if time.time() - last_time > 5:
67 | try:
68 | img = self.get_ss()
69 | if self.img_input and self.img_input.writable:
70 | self.img_input.send(img)
71 | except BrokenPipeError:
72 | break
73 | # print(f'{1/(time.time() - last_time):.1f} FPS')
74 | last_time = time.time()
75 | print(f'[WinCap] Main Thread ended.')
76 |
77 | def cleanup(self):
78 | print(f'[WinCap] Cleanup')
79 | # self.img_input.close()
80 | self.running = False
81 |
82 | def get_ss(self):
83 | # get win image data
84 | wDC = win32gui.GetWindowDC(self.hwnd)
85 | dcObj = win32ui.CreateDCFromHandle(wDC)
86 | cDC = dcObj.CreateCompatibleDC()
87 | dataBmp = win32ui.CreateBitmap()
88 | dataBmp.CreateCompatibleBitmap(dcObj, self.w, self.h)
89 | cDC.SelectObject(dataBmp)
90 | cDC.BitBlt((0, 0), (self.w, self.h), dcObj, (self.cropped_x, self.cropped_y), win32con.SRCCOPY)
91 |
92 | # convert the raw data for opencv
93 | signedIntsArray = dataBmp.GetBitmapBits(True)
94 |         img = np.frombuffer(signedIntsArray, dtype='uint8').copy() # np.fromstring is deprecated; .copy() keeps the array writable for the debug overlay
95 | img.shape = (self.h, self.w, 4)
96 |
97 | # free resources
98 | dcObj.DeleteDC()
99 | cDC.DeleteDC()
100 | win32gui.ReleaseDC(self.hwnd, wDC)
101 | win32gui.DeleteObject(dataBmp.GetHandle())
102 |
103 |         # drop alpha channel (optional) - doing so costs about 10 FPS
104 | # img = img[...,:3]
105 |
106 | # make image C_CONTIGUOUS to avoid errors
107 | # see the discussion here:
108 | # https://github.com/opencv/opencv/issues/14866#issuecomment-580207109
109 | # img = np.ascontiguousarray(img)
110 |
111 | return img
112 |
113 | def save_ss(self):
114 | """
115 | This code was used to gather images from the game.
116 |         These images were then used as a dataset to train our model.
117 | """
118 | # get win image data
119 | wDC = win32gui.GetWindowDC(self.hwnd)
120 | dcObj = win32ui.CreateDCFromHandle(wDC)
121 | cDC = dcObj.CreateCompatibleDC()
122 | dataBmp = win32ui.CreateBitmap()
123 | dataBmp.CreateCompatibleBitmap(dcObj, self.w, self.h)
124 | cDC.SelectObject(dataBmp)
125 | cDC.BitBlt((0, 0), (self.w, self.h), dcObj, (self.cropped_x, self.cropped_y), win32con.SRCCOPY)
126 |
127 | # save ss
128 | dataBmp.SaveBitmapFile(cDC, 'debug.bmp')
129 | # create images folder if not found
130 | Path('./images').mkdir(parents=True, exist_ok=True)
131 | Image.open('debug.bmp').save('images/{}.jpg'.format(int(time.time())))
132 |
133 | # free resources
134 | dcObj.DeleteDC()
135 | cDC.DeleteDC()
136 | win32gui.ReleaseDC(self.hwnd, wDC)
137 | win32gui.DeleteObject(dataBmp.GetHandle())
138 |
139 | @staticmethod
140 | def list_win_names():
141 | """
142 |         Find the name of the window you are interested in
143 | """
144 | print('-- List of windows:')
145 | def winEnumHandler(hwnd, ctx):
146 | if win32gui.IsWindowVisible(hwnd):
147 | print(hex(hwnd), win32gui.GetWindowText(hwnd))
148 | # I already hate python indentation
149 | win32gui.EnumWindows(winEnumHandler, None)
150 | print('--')
151 |
152 | def get_screen_pos(self, pos):
153 | """
154 |         Translate a pixel position on a screenshot to a pixel position on screen.
155 |         This code is legacy and isn't used anymore, but might still be helpful for something.
156 | """
157 | return (pos[0] + self.offset_x, pos[1] + self.offset_y)
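158 | 
159 | # Minimal gather-mode sketch (assumes Windows with pywin32 installed): gather=True
160 | # skips the capture loop, so save_ss() can be called on demand to build a dataset.
161 | # if __name__ == '__main__':
162 | #     cap = WinCap(None, 460, gather=True) # 460x460 center crop of the desktop
163 | #     cap.save_ss()                        # writes debug.bmp and images/<timestamp>.jpg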
--------------------------------------------------------------------------------
/yolo.py:
--------------------------------------------------------------------------------
1 | import cv2 as cv
2 | from torch import hub
3 | import numpy as np
4 |
5 | # Colors - BGR
6 | CYAN = (255, 255, 0)
7 |
8 | class Yolo:
9 | # properties
10 | name = 'YOLOv5 Model'
11 | img_output = None
12 | running = False
13 | screen_center = 0
14 | # constructor
15 | def __init__(self, out_frame, in_result, model, img, source) -> None:
16 | print(f'[{self.name}] Process launched.')
17 | # Pipes
18 | self.pipe_frame = out_frame
19 | self.pipe_result = in_result
20 |
21 | self.model = self.load_model(model) # Load Model
22 | self.model.conf = 0.6 # Model threshold
23 | self.model.iou = 0.65 # NMS IoU threshold
24 | self.model.max_det = 3 # Maximum number of detections per img
25 | self.size = img # Inference resolution [1:1]
26 |         self.screen_center = int(source / 2) # center of the square [1:1] capture
27 | self.run()
28 | pass
29 |
30 | def run(self):
31 | """
32 | Main loop of our class
33 | """
34 | self.running = True
35 | while self.running:
36 | try:
37 | img = self.pipe_frame.recv()
38 | (result, model_time) = self.score_frame(img)
39 |                 # Big oof - Windows does not support piping tensors between processes!!! 5h wasted...
40 |                 # Quick fix - parse the tensor and pipe the parsed results - hope this does not hinder performance.
41 | if len(result) >= 1:
42 | self.pipe_result.send((self.filter_closest(result), img, model_time))
43 | else:
44 | self.pipe_result.send((None, img, model_time))
45 | except (EOFError, BrokenPipeError) as e:
46 | break
47 | print(f'[{self.name}] Main Thread ended.')
48 |
49 | def cleanup(self):
50 | """
51 | If any cleaning is needed, its done here
52 | """
53 | print(f'[{self.name}] Cleanup.')
54 | self.running = False
55 |
56 | def load_model(self, path):
57 | """
58 |         Loads the custom YOLOv5 model from PyTorch Hub.
59 | :param self: class object
60 | :param path: custom model path
61 | """
62 | model = hub.load('ultralytics/yolov5', 'custom', path)
63 | return model
64 |
65 | def score_frame(self, frame):
66 | """
67 | Function scores each frame and returns results.
68 |         :param frame: frame to be inferred.
69 | """
70 | result = self.model(cv.cvtColor(frame, cv.COLOR_RGB2BGR), size=self.size)
71 |         return (result.xyxy[0], result.t[1]) # detections tensor + inference time in ms
72 |
73 | def filter_closest(self, results):
74 | """
75 |         Get the detection closest to the screen center. TODO: filter false positives from the model to avoid quick flicks.
76 | """
77 |         if len(results) <= 1: return results[0].tolist() # run() guarantees at least one result
78 |         # Seed with the full frame width - any detection inside the frame is closer to center than that.
79 |         closest = ( self.screen_center * 2, results[0])
80 |
81 |         for result in results:
82 |             pos = result.detach()
83 |             # We calculate this in screen.py as well - should we just send these values through the pipe?
84 |             # Trade-off: keeping the pipe data small vs. saving computation.
85 |             rect_center = (int((pos[0]+pos[2]) / 2), int((pos[1]+pos[3]) / 2))
86 |             distance = self.e_distance(*rect_center, self.screen_center, self.screen_center)
87 |             # Check if this is the new closest
88 |             if distance < closest[0]:
89 |                 closest = (distance, result)
90 |         return closest[1].tolist()
91 |
92 |     def e_distance(self, q1, q2, p1, p2):
93 |         return np.hypot(q1 - p1, q2 - p2) # Euclidean distance between points q and p
94 | 
95 |     def p_distance(self, x, y):
96 |         return np.hypot(x, y) # distance from origin; abs() before squaring was redundant
--------------------------------------------------------------------------------