├── .github
│   └── FUNDING.yml
├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── assets
│   ├── node_v3.png
│   └── tensorrt_upscaling_workflow.json
├── requirements.txt
├── scripts
│   ├── export_onnx.py
│   ├── export_trt.py
│   └── export_trt_from_directory.py
├── trt_utilities.py
└── utilities.py
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | github: yuvraj108c
2 | custom: ["https://paypal.me/yuvraj108c", "https://buymeacoffee.com/yuvraj108cz"]
3 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Attribution-NonCommercial-ShareAlike 4.0 International
2 |
3 | =======================================================================
4 |
5 | Creative Commons Corporation ("Creative Commons") is not a law firm and
6 | does not provide legal services or legal advice. Distribution of
7 | Creative Commons public licenses does not create a lawyer-client or
8 | other relationship. Creative Commons makes its licenses and related
9 | information available on an "as-is" basis. Creative Commons gives no
10 | warranties regarding its licenses, any material licensed under their
11 | terms and conditions, or any related information. Creative Commons
12 | disclaims all liability for damages resulting from their use to the
13 | fullest extent possible.
14 |
15 | Using Creative Commons Public Licenses
16 |
17 | Creative Commons public licenses provide a standard set of terms and
18 | conditions that creators and other rights holders may use to share
19 | original works of authorship and other material subject to copyright
20 | and certain other rights specified in the public license below. The
21 | following considerations are for informational purposes only, are not
22 | exhaustive, and do not form part of our licenses.
23 |
24 | Considerations for licensors: Our public licenses are
25 | intended for use by those authorized to give the public
26 | permission to use material in ways otherwise restricted by
27 | copyright and certain other rights. Our licenses are
28 | irrevocable. Licensors should read and understand the terms
29 | and conditions of the license they choose before applying it.
30 | Licensors should also secure all rights necessary before
31 | applying our licenses so that the public can reuse the
32 | material as expected. Licensors should clearly mark any
33 | material not subject to the license. This includes other CC-
34 | licensed material, or material used under an exception or
35 | limitation to copyright. More considerations for licensors:
36 | wiki.creativecommons.org/Considerations_for_licensors
37 |
38 | Considerations for the public: By using one of our public
39 | licenses, a licensor grants the public permission to use the
40 | licensed material under specified terms and conditions. If
41 | the licensor's permission is not necessary for any reason--for
42 | example, because of any applicable exception or limitation to
43 | copyright--then that use is not regulated by the license. Our
44 | licenses grant only permissions under copyright and certain
45 | other rights that a licensor has authority to grant. Use of
46 | the licensed material may still be restricted for other
47 | reasons, including because others have copyright or other
48 | rights in the material. A licensor may make special requests,
49 | such as asking that all changes be marked or described.
50 | Although not required by our licenses, you are encouraged to
51 | respect those requests where reasonable. More considerations
52 | for the public:
53 | wiki.creativecommons.org/Considerations_for_licensees
54 |
55 | =======================================================================
56 |
57 | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
58 | Public License
59 |
60 | By exercising the Licensed Rights (defined below), You accept and agree
61 | to be bound by the terms and conditions of this Creative Commons
62 | Attribution-NonCommercial-ShareAlike 4.0 International Public License
63 | ("Public License"). To the extent this Public License may be
64 | interpreted as a contract, You are granted the Licensed Rights in
65 | consideration of Your acceptance of these terms and conditions, and the
66 | Licensor grants You such rights in consideration of benefits the
67 | Licensor receives from making the Licensed Material available under
68 | these terms and conditions.
69 |
70 |
71 | Section 1 -- Definitions.
72 |
73 | a. Adapted Material means material subject to Copyright and Similar
74 | Rights that is derived from or based upon the Licensed Material
75 | and in which the Licensed Material is translated, altered,
76 | arranged, transformed, or otherwise modified in a manner requiring
77 | permission under the Copyright and Similar Rights held by the
78 | Licensor. For purposes of this Public License, where the Licensed
79 | Material is a musical work, performance, or sound recording,
80 | Adapted Material is always produced where the Licensed Material is
81 | synched in timed relation with a moving image.
82 |
83 | b. Adapter's License means the license You apply to Your Copyright
84 | and Similar Rights in Your contributions to Adapted Material in
85 | accordance with the terms and conditions of this Public License.
86 |
87 | c. BY-NC-SA Compatible License means a license listed at
88 | creativecommons.org/compatiblelicenses, approved by Creative
89 | Commons as essentially the equivalent of this Public License.
90 |
91 | d. Copyright and Similar Rights means copyright and/or similar rights
92 | closely related to copyright including, without limitation,
93 | performance, broadcast, sound recording, and Sui Generis Database
94 | Rights, without regard to how the rights are labeled or
95 | categorized. For purposes of this Public License, the rights
96 | specified in Section 2(b)(1)-(2) are not Copyright and Similar
97 | Rights.
98 |
99 | e. Effective Technological Measures means those measures that, in the
100 | absence of proper authority, may not be circumvented under laws
101 | fulfilling obligations under Article 11 of the WIPO Copyright
102 | Treaty adopted on December 20, 1996, and/or similar international
103 | agreements.
104 |
105 | f. Exceptions and Limitations means fair use, fair dealing, and/or
106 | any other exception or limitation to Copyright and Similar Rights
107 | that applies to Your use of the Licensed Material.
108 |
109 | g. License Elements means the license attributes listed in the name
110 | of a Creative Commons Public License. The License Elements of this
111 | Public License are Attribution, NonCommercial, and ShareAlike.
112 |
113 | h. Licensed Material means the artistic or literary work, database,
114 | or other material to which the Licensor applied this Public
115 | License.
116 |
117 | i. Licensed Rights means the rights granted to You subject to the
118 | terms and conditions of this Public License, which are limited to
119 | all Copyright and Similar Rights that apply to Your use of the
120 | Licensed Material and that the Licensor has authority to license.
121 |
122 | j. Licensor means the individual(s) or entity(ies) granting rights
123 | under this Public License.
124 |
125 | k. NonCommercial means not primarily intended for or directed towards
126 | commercial advantage or monetary compensation. For purposes of
127 | this Public License, the exchange of the Licensed Material for
128 | other material subject to Copyright and Similar Rights by digital
129 | file-sharing or similar means is NonCommercial provided there is
130 | no payment of monetary compensation in connection with the
131 | exchange.
132 |
133 | l. Share means to provide material to the public by any means or
134 | process that requires permission under the Licensed Rights, such
135 | as reproduction, public display, public performance, distribution,
136 | dissemination, communication, or importation, and to make material
137 | available to the public including in ways that members of the
138 | public may access the material from a place and at a time
139 | individually chosen by them.
140 |
141 | m. Sui Generis Database Rights means rights other than copyright
142 | resulting from Directive 96/9/EC of the European Parliament and of
143 | the Council of 11 March 1996 on the legal protection of databases,
144 | as amended and/or succeeded, as well as other essentially
145 | equivalent rights anywhere in the world.
146 |
147 | n. You means the individual or entity exercising the Licensed Rights
148 | under this Public License. Your has a corresponding meaning.
149 |
150 |
151 | Section 2 -- Scope.
152 |
153 | a. License grant.
154 |
155 | 1. Subject to the terms and conditions of this Public License,
156 | the Licensor hereby grants You a worldwide, royalty-free,
157 | non-sublicensable, non-exclusive, irrevocable license to
158 | exercise the Licensed Rights in the Licensed Material to:
159 |
160 | a. reproduce and Share the Licensed Material, in whole or
161 | in part, for NonCommercial purposes only; and
162 |
163 | b. produce, reproduce, and Share Adapted Material for
164 | NonCommercial purposes only.
165 |
166 | 2. Exceptions and Limitations. For the avoidance of doubt, where
167 | Exceptions and Limitations apply to Your use, this Public
168 | License does not apply, and You do not need to comply with
169 | its terms and conditions.
170 |
171 | 3. Term. The term of this Public License is specified in Section
172 | 6(a).
173 |
174 | 4. Media and formats; technical modifications allowed. The
175 | Licensor authorizes You to exercise the Licensed Rights in
176 | all media and formats whether now known or hereafter created,
177 | and to make technical modifications necessary to do so. The
178 | Licensor waives and/or agrees not to assert any right or
179 | authority to forbid You from making technical modifications
180 | necessary to exercise the Licensed Rights, including
181 | technical modifications necessary to circumvent Effective
182 | Technological Measures. For purposes of this Public License,
183 | simply making modifications authorized by this Section 2(a)
184 | (4) never produces Adapted Material.
185 |
186 | 5. Downstream recipients.
187 |
188 | a. Offer from the Licensor -- Licensed Material. Every
189 | recipient of the Licensed Material automatically
190 | receives an offer from the Licensor to exercise the
191 | Licensed Rights under the terms and conditions of this
192 | Public License.
193 |
194 | b. Additional offer from the Licensor -- Adapted Material.
195 | Every recipient of Adapted Material from You
196 | automatically receives an offer from the Licensor to
197 | exercise the Licensed Rights in the Adapted Material
198 | under the conditions of the Adapter's License You apply.
199 |
200 | c. No downstream restrictions. You may not offer or impose
201 | any additional or different terms or conditions on, or
202 | apply any Effective Technological Measures to, the
203 | Licensed Material if doing so restricts exercise of the
204 | Licensed Rights by any recipient of the Licensed
205 | Material.
206 |
207 | 6. No endorsement. Nothing in this Public License constitutes or
208 | may be construed as permission to assert or imply that You
209 | are, or that Your use of the Licensed Material is, connected
210 | with, or sponsored, endorsed, or granted official status by,
211 | the Licensor or others designated to receive attribution as
212 | provided in Section 3(a)(1)(A)(i).
213 |
214 | b. Other rights.
215 |
216 | 1. Moral rights, such as the right of integrity, are not
217 | licensed under this Public License, nor are publicity,
218 | privacy, and/or other similar personality rights; however, to
219 | the extent possible, the Licensor waives and/or agrees not to
220 | assert any such rights held by the Licensor to the limited
221 | extent necessary to allow You to exercise the Licensed
222 | Rights, but not otherwise.
223 |
224 | 2. Patent and trademark rights are not licensed under this
225 | Public License.
226 |
227 | 3. To the extent possible, the Licensor waives any right to
228 | collect royalties from You for the exercise of the Licensed
229 | Rights, whether directly or through a collecting society
230 | under any voluntary or waivable statutory or compulsory
231 | licensing scheme. In all other cases the Licensor expressly
232 | reserves any right to collect such royalties, including when
233 | the Licensed Material is used other than for NonCommercial
234 | purposes.
235 |
236 |
237 | Section 3 -- License Conditions.
238 |
239 | Your exercise of the Licensed Rights is expressly made subject to the
240 | following conditions.
241 |
242 | a. Attribution.
243 |
244 | 1. If You Share the Licensed Material (including in modified
245 | form), You must:
246 |
247 | a. retain the following if it is supplied by the Licensor
248 | with the Licensed Material:
249 |
250 | i. identification of the creator(s) of the Licensed
251 | Material and any others designated to receive
252 | attribution, in any reasonable manner requested by
253 | the Licensor (including by pseudonym if
254 | designated);
255 |
256 | ii. a copyright notice;
257 |
258 | iii. a notice that refers to this Public License;
259 |
260 | iv. a notice that refers to the disclaimer of
261 | warranties;
262 |
263 | v. a URI or hyperlink to the Licensed Material to the
264 | extent reasonably practicable;
265 |
266 | b. indicate if You modified the Licensed Material and
267 | retain an indication of any previous modifications; and
268 |
269 | c. indicate the Licensed Material is licensed under this
270 | Public License, and include the text of, or the URI or
271 | hyperlink to, this Public License.
272 |
273 | 2. You may satisfy the conditions in Section 3(a)(1) in any
274 | reasonable manner based on the medium, means, and context in
275 | which You Share the Licensed Material. For example, it may be
276 | reasonable to satisfy the conditions by providing a URI or
277 | hyperlink to a resource that includes the required
278 | information.
279 | 3. If requested by the Licensor, You must remove any of the
280 | information required by Section 3(a)(1)(A) to the extent
281 | reasonably practicable.
282 |
283 | b. ShareAlike.
284 |
285 | In addition to the conditions in Section 3(a), if You Share
286 | Adapted Material You produce, the following conditions also apply.
287 |
288 | 1. The Adapter's License You apply must be a Creative Commons
289 | license with the same License Elements, this version or
290 | later, or a BY-NC-SA Compatible License.
291 |
292 | 2. You must include the text of, or the URI or hyperlink to, the
293 | Adapter's License You apply. You may satisfy this condition
294 | in any reasonable manner based on the medium, means, and
295 | context in which You Share Adapted Material.
296 |
297 | 3. You may not offer or impose any additional or different terms
298 | or conditions on, or apply any Effective Technological
299 | Measures to, Adapted Material that restrict exercise of the
300 | rights granted under the Adapter's License You apply.
301 |
302 |
303 | Section 4 -- Sui Generis Database Rights.
304 |
305 | Where the Licensed Rights include Sui Generis Database Rights that
306 | apply to Your use of the Licensed Material:
307 |
308 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right
309 | to extract, reuse, reproduce, and Share all or a substantial
310 | portion of the contents of the database for NonCommercial purposes
311 | only;
312 |
313 | b. if You include all or a substantial portion of the database
314 | contents in a database in which You have Sui Generis Database
315 | Rights, then the database in which You have Sui Generis Database
316 | Rights (but not its individual contents) is Adapted Material,
317 | including for purposes of Section 3(b); and
318 |
319 | c. You must comply with the conditions in Section 3(a) if You Share
320 | all or a substantial portion of the contents of the database.
321 |
322 | For the avoidance of doubt, this Section 4 supplements and does not
323 | replace Your obligations under this Public License where the Licensed
324 | Rights include other Copyright and Similar Rights.
325 |
326 |
327 | Section 5 -- Disclaimer of Warranties and Limitation of Liability.
328 |
329 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
330 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
331 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
332 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
333 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
334 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
335 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
336 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
337 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
338 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
339 |
340 | b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
341 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
342 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
343 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
344 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
345 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
346 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
347 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
348 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
349 |
350 | c. The disclaimer of warranties and limitation of liability provided
351 | above shall be interpreted in a manner that, to the extent
352 | possible, most closely approximates an absolute disclaimer and
353 | waiver of all liability.
354 |
355 |
356 | Section 6 -- Term and Termination.
357 |
358 | a. This Public License applies for the term of the Copyright and
359 | Similar Rights licensed here. However, if You fail to comply with
360 | this Public License, then Your rights under this Public License
361 | terminate automatically.
362 |
363 | b. Where Your right to use the Licensed Material has terminated under
364 | Section 6(a), it reinstates:
365 |
366 | 1. automatically as of the date the violation is cured, provided
367 | it is cured within 30 days of Your discovery of the
368 | violation; or
369 |
370 | 2. upon express reinstatement by the Licensor.
371 |
372 | For the avoidance of doubt, this Section 6(b) does not affect any
373 | right the Licensor may have to seek remedies for Your violations
374 | of this Public License.
375 |
376 | c. For the avoidance of doubt, the Licensor may also offer the
377 | Licensed Material under separate terms or conditions or stop
378 | distributing the Licensed Material at any time; however, doing so
379 | will not terminate this Public License.
380 |
381 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
382 | License.
383 |
384 |
385 | Section 7 -- Other Terms and Conditions.
386 |
387 | a. The Licensor shall not be bound by any additional or different
388 | terms or conditions communicated by You unless expressly agreed.
389 |
390 | b. Any arrangements, understandings, or agreements regarding the
391 | Licensed Material not stated herein are separate from and
392 | independent of the terms and conditions of this Public License.
393 |
394 |
395 | Section 8 -- Interpretation.
396 |
397 | a. For the avoidance of doubt, this Public License does not, and
398 | shall not be interpreted to, reduce, limit, restrict, or impose
399 | conditions on any use of the Licensed Material that could lawfully
400 | be made without permission under this Public License.
401 |
402 | b. To the extent possible, if any provision of this Public License is
403 | deemed unenforceable, it shall be automatically reformed to the
404 | minimum extent necessary to make it enforceable. If the provision
405 | cannot be reformed, it shall be severed from this Public License
406 | without affecting the enforceability of the remaining terms and
407 | conditions.
408 |
409 | c. No term or condition of this Public License will be waived and no
410 | failure to comply consented to unless expressly agreed to by the
411 | Licensor.
412 |
413 | d. Nothing in this Public License constitutes or may be interpreted
414 | as a limitation upon, or waiver of, any privileges and immunities
415 | that apply to the Licensor or You, including from the legal
416 | processes of any jurisdiction or authority.
417 |
418 | =======================================================================
419 |
420 | Creative Commons is not a party to its public
421 | licenses. Notwithstanding, Creative Commons may elect to apply one of
422 | its public licenses to material it publishes and in those instances
423 | will be considered the “Licensor.” The text of the Creative Commons
424 | public licenses is dedicated to the public domain under the CC0 Public
425 | Domain Dedication. Except for the limited purpose of indicating that
426 | material is shared under a Creative Commons public license or as
427 | otherwise permitted by the Creative Commons policies published at
428 | creativecommons.org/policies, Creative Commons does not authorize the
429 | use of the trademark "Creative Commons" or any other trademark or logo
430 | of Creative Commons without its prior written consent including,
431 | without limitation, in connection with any unauthorized modifications
432 | to any of its public licenses or any other arrangements,
433 | understandings, or agreements concerning use of licensed material. For
434 | the avoidance of doubt, this paragraph does not form part of the
435 | public licenses.
436 |
437 | Creative Commons may be contacted at creativecommons.org.
438 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # ComfyUI Upscaler TensorRT ⚡
4 |
5 | [](https://www.python.org/downloads/release/python-31012/)
6 | [](https://developer.nvidia.com/cuda-downloads)
7 | [](https://developer.nvidia.com/tensorrt)
8 | [](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en)
9 |
10 |
11 |
12 | This project provides a [TensorRT](https://github.com/NVIDIA/TensorRT) implementation for fast image upscaling with upscaler models inside ComfyUI (2-4x faster)
13 |
14 |
15 |
16 |
17 |
18 | ## ⭐ Support
19 | If you like my projects and wish to see updates and new features, please consider supporting me. It helps a lot!
20 |
21 | [](https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt)
22 | [](https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt)
23 | [](https://github.com/yuvraj108c/ComfyUI-Dwpose-Tensorrt)
24 | [](https://github.com/yuvraj108c/ComfyUI-Rife-Tensorrt)
25 |
26 | [](https://github.com/yuvraj108c/ComfyUI-Whisper)
27 | [](https://github.com/yuvraj108c/ComfyUI_InvSR)
28 | [](https://github.com/yuvraj108c/ComfyUI-Thera)
29 | [](https://github.com/yuvraj108c/ComfyUI-Video-Depth-Anything)
30 | [](https://github.com/yuvraj108c/ComfyUI-PiperTTS)
31 |
32 | [](https://www.buymeacoffee.com/yuvraj108cZ)
33 | [](https://paypal.me/yuvraj108c)
34 | ---
35 |
36 | ## ⏱️ Performance
37 |
38 | _Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 100 identical frames_
39 |
40 | | Device | Model | Input Resolution (WxH) | Output Resolution (WxH) | FPS |
41 | | :----: | :-----------: | :--------------------: | :---------------------: | :-: |
42 | | RTX5090 | 4x-UltraSharp | 512 x 512 | 2048 x 2048 | 12.7 |
43 | | RTX5090 | 4x-UltraSharp | 1280 x 1280 | 5120 x 5120 | 2.0 |
44 | | RTX4090 | 4x-UltraSharp | 512 x 512 | 2048 x 2048 | 6.7 |
45 | | RTX4090 | 4x-UltraSharp | 1280 x 1280 | 5120 x 5120 | 1.1 |
46 | | RTX3060 | 4x-UltraSharp | 512 x 512 | 2048 x 2048 | 2.2 |
47 | | RTX3060 | 4x-UltraSharp | 1280 x 1280 | 5120 x 5120 | 0.35 |
48 |
49 | ## 🚀 Installation
50 | - Install via ComfyUI Manager
51 | - Or, clone manually into the `/ComfyUI/custom_nodes` directory:
52 |
53 | ```bash
54 | git clone https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt.git
55 | cd ./ComfyUI-Upscaler-Tensorrt
56 | pip install -r requirements.txt
57 | ```
58 |
59 | ## 🛠️ Supported Models
60 |
61 | - These upscaler models have been tested to work with TensorRT. The ONNX models are available [here](https://huggingface.co/yuvraj108c/ComfyUI-Upscaler-Onnx/tree/main)
62 | - The exported TensorRT engines support dynamic image resolutions from 256x256 to 1280x1280 px (e.g. 960x540, 512x512, 1280x720, etc.)
63 |
64 | - [4x-AnimeSharp](https://openmodeldb.info/models/4x-AnimeSharp)
65 | - [4x-UltraSharp](https://openmodeldb.info/models/4x-UltraSharp)
66 | - [4x-WTP-UDS-Esrgan](https://openmodeldb.info/models/4x-WTP-UDS-Esrgan)
67 | - [4x_NMKD-Siax_200k](https://openmodeldb.info/models/4x-NMKD-Siax-CX)
68 | - [4x_RealisticRescaler_100000_G](https://openmodeldb.info/models/4x-RealisticRescaler)
69 | - [4x_foolhardy_Remacri](https://openmodeldb.info/models/4x-Remacri)
70 | - [RealESRGAN_x4](https://openmodeldb.info/models/4x-realesrgan-x4plus)
71 | - [4xNomos2_otf_esrgan](https://openmodeldb.info/models/4x-Nomos2-otf-esrgan)
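All engines share the same dynamic-shape profile, so an image must satisfy the 256-1280 px bound on both sides before it reaches the node. A minimal pre-check mirroring the validation in `__init__.py` (the function name here is illustrative, not part of the node's API):

```python
# Supported dynamic range of the exported TensorRT engines (see __init__.py)
IMAGE_DIM_MIN = 256
IMAGE_DIM_MAX = 1280

def check_supported_resolution(width: int, height: int) -> bool:
    """Return True if both dimensions fit the engine's dynamic-shape profile."""
    return all(IMAGE_DIM_MIN <= dim <= IMAGE_DIM_MAX for dim in (width, height))

print(check_supported_resolution(960, 540))    # 960x540 is inside the range
print(check_supported_resolution(1920, 1080))  # 1920 exceeds the 1280 px maximum
```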
72 |
73 | ## ☀️ Usage
74 |
75 | - Load the [example workflow](assets/tensorrt_upscaling_workflow.json)
76 | - Choose the appropriate model from the dropdown
77 | - The TensorRT engine will be built automatically
78 | - Load an image with a resolution between 256 and 1280 px
79 | - Set `resize_to` to resize the upscaled images to fixed resolutions
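The fixed `resize_to` targets are implemented in `utilities.get_final_resolutions`, which is not shown in this dump; the sketch below is an assumption about its behaviour, with hypothetical target resolutions for each option:

```python
# Hypothetical resize_to targets -- the real mapping lives in
# utilities.get_final_resolutions and may differ.
RESIZE_TARGETS = {
    "HD": (1280, 720),
    "FHD": (1920, 1080),
    "2k": (2560, 1440),
    "4k": (3840, 2160),
}

def get_final_resolutions(width, height, resize_to):
    """Sketch: (final_width, final_height) after 4x upscaling, optionally resized."""
    if resize_to == "none":
        return width * 4, height * 4  # plain 4x upscale, no extra resize
    return RESIZE_TARGETS[resize_to]

print(get_final_resolutions(512, 512, "none"))  # (2048, 2048)
```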
80 |
81 | ## 🔧 Custom Models
82 | - To export other ESRGAN models, you'll have to build the ONNX model first, using [export_onnx.py](scripts/export_onnx.py)
83 | - Place the ONNX model at `/ComfyUI/models/onnx/YOUR_MODEL.onnx`
84 | - Then, add your model to this list as shown: https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt/blob/8f7ef5d1f713af3b4a74a64fa13a65ee5c404cd4/__init__.py#L77
85 | - Finally, run the same workflow and choose your model
86 | - If you've tested another working TensorRT model, let me know so I can add it officially to this node
87 |
88 | ## 🚨 Updates
89 | ### 30 April 2025
90 | - Merged https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt/pull/48 by @BiiirdPrograms, fixing a soft-lock by raising an error when input image dimensions are unsupported
91 | ### 4 March 2025 (breaking)
92 | - TensorRT engines are now built automatically from the workflow itself, to simplify the process for non-technical users
93 | - Separated model loading and TensorRT processing into different nodes
94 | - Optimised post-processing
95 | - Updated ONNX export script
96 |
97 | ## ⚠️ Known issues
98 |
99 | - If you upgrade the TensorRT version, you'll have to rebuild the engines
100 | - Only models with the ESRGAN architecture currently work
101 | - High RAM usage when exporting `.pth` to `.onnx`
102 |
103 | ## 🤖 Environment tested
104 |
105 | - Ubuntu 22.04 LTS, CUDA 12.4, TensorRT 10.8, Python 3.10, H100 GPU
106 | - Windows 11
107 |
108 | ## 👏 Credits
109 |
110 | - [NVIDIA/Stable-Diffusion-WebUI-TensorRT](https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT)
111 | - [comfyanonymous/ComfyUI](https://github.com/comfyanonymous/ComfyUI)
112 |
113 | ## License
114 |
115 | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
116 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 | import folder_paths
3 | import numpy as np
4 | import torch
5 | from comfy.utils import ProgressBar
6 | from .trt_utilities import Engine
7 | from .utilities import download_file, ColoredLogger, get_final_resolutions
8 | import comfy.model_management as mm
9 | import time
10 | import tensorrt
11 |
12 | logger = ColoredLogger("ComfyUI-Upscaler-Tensorrt")
13 |
14 | IMAGE_DIM_MIN = 256
15 | IMAGE_DIM_OPT = 512
16 | IMAGE_DIM_MAX = 1280
17 |
18 | class UpscalerTensorrt:
19 | @classmethod
20 | def INPUT_TYPES(s):
21 | return {
22 | "required": {
23 | "images": ("IMAGE", {"tooltip": f"Images to be upscaled. Resolution must be between {IMAGE_DIM_MIN} and {IMAGE_DIM_MAX} px"}),
24 | "upscaler_trt_model": ("UPSCALER_TRT_MODEL", {"tooltip": "Tensorrt model built and loaded"}),
25 | "resize_to": (["none", "HD", "FHD", "2k", "4k"],{"tooltip": "Resize the upscaled image to fixed resolutions, optional"}),
26 | }
27 | }
28 | RETURN_NAMES = ("IMAGE",)
29 | RETURN_TYPES = ("IMAGE",)
30 | FUNCTION = "upscaler_tensorrt"
31 | CATEGORY = "tensorrt"
32 | DESCRIPTION = "Upscale images with tensorrt"
33 |
34 | def upscaler_tensorrt(self, images, upscaler_trt_model, resize_to):
35 | images_bchw = images.permute(0, 3, 1, 2)
36 | B, C, H, W = images_bchw.shape
37 |
38 | # Raise an error if input image dimensions fall outside of trt engine support
39 | for dim in (H, W):
40 | if dim > IMAGE_DIM_MAX or dim < IMAGE_DIM_MIN:
41 | raise ValueError(f"Input image dimensions fall outside of the supported range: {IMAGE_DIM_MIN} to {IMAGE_DIM_MAX} px!\nImage dimensions: {W}px by {H}px")
42 |
43 | final_width, final_height = get_final_resolutions(W, H, resize_to)
44 | logger.info(f"Upscaling {B} images from H:{H}, W:{W} to H:{H*4}, W:{W*4} | Final resolution: H:{final_height}, W:{final_width} | resize_to: {resize_to}")
45 |
46 | shape_dict = {
47 | "input": {"shape": (1, 3, H, W)},
48 | "output": {"shape": (1, 3, H*4, W*4)},
49 | }
50 | # setup engine
51 | upscaler_trt_model.activate()
52 | upscaler_trt_model.allocate_buffers(shape_dict=shape_dict)
53 |
54 | cudaStream = torch.cuda.current_stream().cuda_stream
55 | pbar = ProgressBar(B)
56 | images_list = list(torch.split(images_bchw, split_size_or_sections=1))
57 |
58 | upscaled_frames = torch.empty((B, C, final_height, final_width), dtype=torch.float32, device=mm.intermediate_device()) # offloaded to cpu
59 | must_resize = W*4 != final_width or H*4 != final_height
60 |
61 | for i, img in enumerate(images_list):
62 | result = upscaler_trt_model.infer({"input": img}, cudaStream)
63 | result = result["output"]
64 |
65 | if must_resize:
66 | result = torch.nn.functional.interpolate(
67 | result,
68 | size=(final_height, final_width),
69 | mode='bicubic',
70 | antialias=True
71 | )
72 | upscaled_frames[i] = result.to(mm.intermediate_device())
73 | pbar.update(1)
74 |
75 | output = upscaled_frames.permute(0, 2, 3, 1)
76 | upscaler_trt_model.reset() # frees engine vram
77 | mm.soft_empty_cache()
78 |
79 | logger.info(f"Output shape: {output.shape}")
80 | return (output,)
81 |
82 | class LoadUpscalerTensorrtModel:
83 | @classmethod
84 | def INPUT_TYPES(s):
85 | return {
86 | "required": {
87 | "model": (["4x-AnimeSharp", "4x-UltraSharp", "4x-WTP-UDS-Esrgan", "4x_NMKD-Siax_200k", "4x_RealisticRescaler_100000_G", "4x_foolhardy_Remacri", "RealESRGAN_x4", "4xNomos2_otf_esrgan"], {"default": "4x-UltraSharp", "tooltip": "These models have been tested with tensorrt"}),
88 | "precision": (["fp16", "fp32"], {"default": "fp16", "tooltip": "Precision to build the tensorrt engines"}),
89 | }
90 | }
91 | RETURN_NAMES = ("upscaler_trt_model",)
92 | RETURN_TYPES = ("UPSCALER_TRT_MODEL",)
94 | CATEGORY = "tensorrt"
95 | DESCRIPTION = "Load tensorrt models, they will be built automatically if not found."
96 | FUNCTION = "load_upscaler_tensorrt_model"
97 |
98 | def load_upscaler_tensorrt_model(self, model, precision):
99 | tensorrt_models_dir = os.path.join(folder_paths.models_dir, "tensorrt", "upscaler")
100 | onnx_models_dir = os.path.join(folder_paths.models_dir, "onnx")
101 |
102 | os.makedirs(tensorrt_models_dir, exist_ok=True)
103 | os.makedirs(onnx_models_dir, exist_ok=True)
104 |
105 | onnx_model_path = os.path.join(onnx_models_dir, f"{model}.onnx")
106 |
107 |         # Engine config (fixed for now; could be exposed as user-facing settings)
108 | engine_channel = 3
109 | engine_min_batch, engine_opt_batch, engine_max_batch = 1, 1, 1
110 | engine_min_h, engine_opt_h, engine_max_h = IMAGE_DIM_MIN, IMAGE_DIM_OPT, IMAGE_DIM_MAX
111 | engine_min_w, engine_opt_w, engine_max_w = IMAGE_DIM_MIN, IMAGE_DIM_OPT, IMAGE_DIM_MAX
112 | tensorrt_model_path = os.path.join(tensorrt_models_dir, f"{model}_{precision}_{engine_min_batch}x{engine_channel}x{engine_min_h}x{engine_min_w}_{engine_opt_batch}x{engine_channel}x{engine_opt_h}x{engine_opt_w}_{engine_max_batch}x{engine_channel}x{engine_max_h}x{engine_max_w}_{tensorrt.__version__}.trt")
113 |
114 | # Download onnx & build tensorrt engine
115 | if not os.path.exists(tensorrt_model_path):
116 | if not os.path.exists(onnx_model_path):
117 | onnx_model_download_url = f"https://huggingface.co/yuvraj108c/ComfyUI-Upscaler-Onnx/resolve/main/{model}.onnx"
118 | logger.info(f"Downloading {onnx_model_download_url}")
119 | download_file(url=onnx_model_download_url, save_path=onnx_model_path)
120 | else:
121 | logger.info(f"Onnx model found at: {onnx_model_path}")
122 |
123 | # Build tensorrt engine
124 | logger.info(f"Building TensorRT engine for {onnx_model_path}: {tensorrt_model_path}")
125 | mm.soft_empty_cache()
126 | s = time.time()
127 | engine = Engine(tensorrt_model_path)
128 | engine.build(
129 | onnx_path=onnx_model_path,
130 |             fp16=(precision == "fp16"), # mixed precision not working TODO: investigate
131 | input_profile=[
132 |                 {"input": [(engine_min_batch,engine_channel,engine_min_h,engine_min_w), (engine_opt_batch,engine_channel,engine_opt_h,engine_opt_w), (engine_max_batch,engine_channel,engine_max_h,engine_max_w)]}, # any sizes from IMAGE_DIM_MIN to IMAGE_DIM_MAX, i.e. 256x256 to 1280x1280 by default
133 | ],
134 | )
135 | e = time.time()
136 | logger.info(f"Time taken to build: {(e-s)} seconds")
137 |
138 | # Load tensorrt model
139 | logger.info(f"Loading TensorRT engine: {tensorrt_model_path}")
140 | mm.soft_empty_cache()
141 | engine = Engine(tensorrt_model_path)
142 | engine.load()
143 |
144 | return (engine,)
145 |
146 | NODE_CLASS_MAPPINGS = {
147 | "UpscalerTensorrt": UpscalerTensorrt,
148 | "LoadUpscalerTensorrtModel": LoadUpscalerTensorrtModel,
149 | }
150 |
151 | NODE_DISPLAY_NAME_MAPPINGS = {
152 | "UpscalerTensorrt": "Upscaler Tensorrt ⚡",
153 |     "LoadUpscalerTensorrtModel": "Load Upscaler Tensorrt Model",
154 | }
155 |
156 | __all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']
157 |
--------------------------------------------------------------------------------
/assets/node_v3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt/8138a182e9278612b435f6a70db51291380104fb/assets/node_v3.png
--------------------------------------------------------------------------------
/assets/tensorrt_upscaling_workflow.json:
--------------------------------------------------------------------------------
1 | {
2 | "last_node_id": 13,
3 | "last_link_id": 16,
4 | "nodes": [
5 | {
6 | "id": 5,
7 | "type": "LoadImage",
8 | "pos": [
9 | -301.260986328125,
10 | 16.96155548095703
11 | ],
12 | "size": [
13 | 315,
14 | 314
15 | ],
16 | "flags": {},
17 | "order": 0,
18 | "mode": 0,
19 | "inputs": [],
20 | "outputs": [
21 | {
22 | "name": "IMAGE",
23 | "type": "IMAGE",
24 | "links": [
25 | 16
26 | ]
27 | },
28 | {
29 | "name": "MASK",
30 | "type": "MASK",
31 | "links": null
32 | }
33 | ],
34 | "properties": {
35 | "Node name for S&R": "LoadImage"
36 | },
37 | "widgets_values": [
38 | "example.png",
39 | "image"
40 | ]
41 | },
42 | {
43 | "id": 2,
44 | "type": "LoadUpscalerTensorrtModel",
45 | "pos": [
46 | -297.7986755371094,
47 | 450.532958984375
48 | ],
49 | "size": [
50 | 315,
51 | 82
52 | ],
53 | "flags": {},
54 | "order": 1,
55 | "mode": 0,
56 | "inputs": [],
57 | "outputs": [
58 | {
59 | "name": "upscaler_trt_model",
60 | "type": "UPSCALER_TRT_MODEL",
61 | "links": [
62 | 1
63 | ],
64 | "slot_index": 0
65 | }
66 | ],
67 | "properties": {
68 | "Node name for S&R": "LoadUpscalerTensorrtModel"
69 | },
70 | "widgets_values": [
71 | "4x-UltraSharp",
72 | "fp16"
73 | ]
74 | },
75 | {
76 | "id": 13,
77 | "type": "PreviewImage",
78 | "pos": [
79 | 519.0885009765625,
80 | -51.63800048828125
81 | ],
82 | "size": [
83 | 706.2752075195312,
84 | 756.0552978515625
85 | ],
86 | "flags": {},
87 | "order": 3,
88 | "mode": 0,
89 | "inputs": [
90 | {
91 | "name": "images",
92 | "type": "IMAGE",
93 | "link": 14
94 | }
95 | ],
96 | "outputs": [],
97 | "properties": {
98 | "Node name for S&R": "PreviewImage"
99 | },
100 | "widgets_values": []
101 | },
102 | {
103 | "id": 3,
104 | "type": "UpscalerTensorrt",
105 | "pos": [
106 | 111.23614501953125,
107 | 352.1241760253906
108 | ],
109 | "size": [
110 | 315,
111 | 78
112 | ],
113 | "flags": {},
114 | "order": 2,
115 | "mode": 0,
116 | "inputs": [
117 | {
118 | "name": "images",
119 | "type": "IMAGE",
120 | "link": 16
121 | },
122 | {
123 | "name": "upscaler_trt_model",
124 | "type": "UPSCALER_TRT_MODEL",
125 | "link": 1
126 | }
127 | ],
128 | "outputs": [
129 | {
130 | "name": "IMAGE",
131 | "type": "IMAGE",
132 | "links": [
133 | 14
134 | ],
135 | "slot_index": 0
136 | }
137 | ],
138 | "properties": {
139 | "Node name for S&R": "UpscalerTensorrt"
140 | },
141 | "widgets_values": [
142 | "none"
143 | ]
144 | }
145 | ],
146 | "links": [
147 | [
148 | 1,
149 | 2,
150 | 0,
151 | 3,
152 | 1,
153 | "UPSCALER_TRT_MODEL"
154 | ],
155 | [
156 | 14,
157 | 3,
158 | 0,
159 | 13,
160 | 0,
161 | "IMAGE"
162 | ],
163 | [
164 | 16,
165 | 5,
166 | 0,
167 | 3,
168 | 0,
169 | "IMAGE"
170 | ]
171 | ],
172 | "groups": [],
173 | "config": {},
174 | "extra": {
175 | "ds": {
176 | "scale": 1,
177 | "offset": [
178 | 500.6130318234485,
179 | 102.83383472565859
180 | ]
181 | }
182 | },
183 | "version": 0.4
184 | }
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorrt
2 | polygraphy
3 | requests
4 | tqdm
--------------------------------------------------------------------------------
/scripts/export_onnx.py:
--------------------------------------------------------------------------------
1 | # download the upscale models & place inside models/upscaler_models
2 | # edit model paths accordingly
3 |
4 | import torch
5 | import folder_paths
6 | from spandrel import ModelLoader, ImageModelDescriptor
7 |
8 | model_name = "4xNomos2_otf_esrgan.pth"
9 | onnx_save_path = "./4xNomos2_otf_esrgan.onnx"
10 |
11 | model_path = folder_paths.get_full_path_or_raise("upscale_models", model_name)
12 | model = ModelLoader().load_from_file(model_path).model.eval().cuda()
13 |
14 | x = torch.rand(1, 3, 512, 512).cuda()
15 |
16 | dynamic_axes = {
17 |     "input": {0: "batch_size", 2: "height", 3: "width"},
18 |     "output": {0: "batch_size", 2: "height", 3: "width"},
19 | }
20 |
21 | torch.onnx.export(model,
22 | x,
23 | onnx_save_path,
24 | verbose=True,
25 | input_names=['input'],
26 | output_names=['output'],
27 | opset_version=17,
28 | export_params=True,
29 | dynamic_axes=dynamic_axes,
30 | )
31 | print("Saved onnx to:", onnx_save_path)
--------------------------------------------------------------------------------
/scripts/export_trt.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import time
3 | from trt_utilities import Engine
4 |
5 | def export_trt(trt_path=None, onnx_path=None, use_fp16=True):
6 | if trt_path is None:
7 | trt_path = input("Enter the path to save the TensorRT engine (e.g ./realesrgan.engine): ")
8 | if onnx_path is None:
9 | onnx_path = input("Enter the path to the ONNX model (e.g ./realesrgan.onnx): ")
10 |
11 | engine = Engine(trt_path)
12 |
13 | torch.cuda.empty_cache()
14 |
15 | s = time.time()
16 | ret = engine.build(
17 | onnx_path,
18 | use_fp16,
19 | enable_preview=True,
20 | input_profile=[
21 | {"input": [(1,3,256,256), (1,3,512,512), (1,3,1280,1280)]}, # any sizes from 256x256 to 1280x1280
22 | ],
23 | )
24 | e = time.time()
25 | print(f"Time taken to build: {(e-s)} seconds")
26 |
27 | return ret
28 |
29 | export_trt()
30 |
--------------------------------------------------------------------------------
/scripts/export_trt_from_directory.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import time
4 | from trt_utilities import Engine
5 |
6 | def export_trt(trt_path=None, onnx_path=None, use_fp16=True):
7 | option = input("Choose an option:\n1. Convert a single ONNX file\n2. Convert all ONNX files in a directory\nEnter your choice (1 or 2): ")
8 |
9 | if option == '1':
10 | onnx_path = input("Enter the path to the ONNX model (e.g ./realesrgan.onnx): ")
11 | onnx_files = [onnx_path]
12 | trt_dir = input("Enter the path to save the TensorRT engine (e.g ./trt_engine/): ")
13 | elif option == '2':
14 | onnx_dir = input("Enter the directory path containing ONNX models (e.g ./onnx_models/): ")
15 | onnx_files = [os.path.join(onnx_dir, file) for file in os.listdir(onnx_dir) if file.endswith('.onnx')]
16 | if not onnx_files:
17 | raise ValueError(f"No .onnx files found in directory: {onnx_dir}")
18 | trt_dir = input("Enter the directory path to save the TensorRT engines (e.g ./trt_engine/): ")
19 | else:
20 | raise ValueError("Invalid option. Please choose either 1 or 2.")
21 |
22 |     os.makedirs(trt_dir, exist_ok=True)
23 |
27 | total_files = len(onnx_files)
28 | for index, onnx_path in enumerate(onnx_files):
32 | base_name = os.path.splitext(os.path.basename(onnx_path))[0]
33 | trt_path = os.path.join(trt_dir, f"{base_name}.engine")
34 |
35 | print(f"Converting {onnx_path} to {trt_path}")
36 |
37 | s = time.time()
38 |
39 | # Initialize Engine with trt_path and clear CUDA cache
40 | engine = Engine(trt_path)
41 | torch.cuda.empty_cache()
42 |
43 | ret = engine.build(
44 | onnx_path,
45 | use_fp16,
46 | enable_preview=True,
47 | input_profile=[
48 | {"input": [(1,3,256,256), (1,3,512,512), (1,3,1280,1280)]}, # any sizes from 256x256 to 1280x1280
49 | ],
50 | )
51 |
52 | e = time.time()
53 | print(f"Time taken to build: {(e-s)} seconds")
54 | if index < total_files - 1:
55 | # Delay for 10 seconds
56 | print("Delaying for 10 seconds...")
57 | time.sleep(10)
58 | print("Resuming operations after delay...")
59 |
60 | return
61 |
62 | export_trt()
63 |
--------------------------------------------------------------------------------
/trt_utilities.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2022 The HuggingFace Inc. team.
3 | # SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
4 | # SPDX-License-Identifier: Apache-2.0
5 | #
6 | # Licensed under the Apache License, Version 2.0 (the "License");
7 | # you may not use this file except in compliance with the License.
8 | # You may obtain a copy of the License at
9 | #
10 | # http://www.apache.org/licenses/LICENSE-2.0
11 | #
12 | # Unless required by applicable law or agreed to in writing, software
13 | # distributed under the License is distributed on an "AS IS" BASIS,
14 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 | # See the License for the specific language governing permissions and
16 | # limitations under the License.
17 | #
18 | import torch
19 | from torch.cuda import nvtx
20 | from collections import OrderedDict
21 | import numpy as np
22 | from polygraphy.backend.common import bytes_from_path
23 | from polygraphy import util
24 | from polygraphy.backend.trt import ModifyNetworkOutputs, Profile
25 | from polygraphy.backend.trt import (
26 | engine_from_bytes,
27 | engine_from_network,
28 | network_from_onnx_path,
29 | save_engine,
30 | )
31 | from polygraphy.logger import G_LOGGER
32 | import tensorrt as trt
33 | from logging import error, warning
34 | from tqdm import tqdm
35 | import copy
36 |
37 | TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
38 | G_LOGGER.module_severity = G_LOGGER.ERROR
39 |
40 | # Map of numpy dtype -> torch dtype
41 | numpy_to_torch_dtype_dict = {
42 | np.uint8: torch.uint8,
43 | np.int8: torch.int8,
44 | np.int16: torch.int16,
45 | np.int32: torch.int32,
46 | np.int64: torch.int64,
47 | np.float16: torch.float16,
48 | np.float32: torch.float32,
49 | np.float64: torch.float64,
50 | np.complex64: torch.complex64,
51 | np.complex128: torch.complex128,
52 | }
53 | numpy_to_torch_dtype_dict[np.bool_] = torch.bool  # np.bool_ exists in all numpy versions (np.bool was removed in 1.24)
57 |
58 | # Map of torch dtype -> numpy dtype
59 | torch_to_numpy_dtype_dict = {
60 | value: key for (key, value) in numpy_to_torch_dtype_dict.items()
61 | }
62 |
63 | class TQDMProgressMonitor(trt.IProgressMonitor):
64 | def __init__(self):
65 | trt.IProgressMonitor.__init__(self)
66 | self._active_phases = {}
67 | self._step_result = True
68 | self.max_indent = 5
69 |
70 | def phase_start(self, phase_name, parent_phase, num_steps):
71 | leave = False
72 | try:
73 | if parent_phase is not None:
74 | nbIndents = (
75 | self._active_phases.get(parent_phase, {}).get(
76 | "nbIndents", self.max_indent
77 | )
78 | + 1
79 | )
80 | if nbIndents >= self.max_indent:
81 | return
82 | else:
83 | nbIndents = 0
84 | leave = True
85 | self._active_phases[phase_name] = {
86 | "tq": tqdm(
87 | total=num_steps, desc=phase_name, leave=leave, position=nbIndents
88 | ),
89 | "nbIndents": nbIndents,
90 | "parent_phase": parent_phase,
91 | }
92 | except KeyboardInterrupt:
93 | # The phase_start callback cannot directly cancel the build, so request the cancellation from within step_complete.
94 |             self._step_result = False
95 |
96 | def phase_finish(self, phase_name):
97 | try:
98 | if phase_name in self._active_phases.keys():
99 | self._active_phases[phase_name]["tq"].update(
100 | self._active_phases[phase_name]["tq"].total
101 | - self._active_phases[phase_name]["tq"].n
102 | )
103 |
104 | parent_phase = self._active_phases[phase_name].get("parent_phase", None)
105 | while parent_phase is not None:
106 | self._active_phases[parent_phase]["tq"].refresh()
107 | parent_phase = self._active_phases[parent_phase].get(
108 | "parent_phase", None
109 | )
110 | if (
111 | self._active_phases[phase_name]["parent_phase"]
112 | in self._active_phases.keys()
113 | ):
114 | self._active_phases[
115 | self._active_phases[phase_name]["parent_phase"]
116 | ]["tq"].refresh()
117 | del self._active_phases[phase_name]
119 |         except KeyboardInterrupt:
120 |             self._step_result = False
121 |
122 | def step_complete(self, phase_name, step):
123 | try:
124 | if phase_name in self._active_phases.keys():
125 | self._active_phases[phase_name]["tq"].update(
126 | step - self._active_phases[phase_name]["tq"].n
127 | )
128 | return self._step_result
129 | except KeyboardInterrupt:
130 | # There is no need to propagate this exception to TensorRT. We can simply cancel the build.
131 | return False
132 |
133 |
134 | class Engine:
135 | def __init__(
136 | self,
137 | engine_path,
138 | ):
139 | self.engine_path = engine_path
140 | self.engine = None
141 | self.context = None
142 | self.buffers = OrderedDict()
143 | self.tensors = OrderedDict()
144 | self.cuda_graph_instance = None # cuda graph
145 |
146 | def __del__(self):
147 | del self.engine
148 | del self.context
149 | del self.buffers
150 | del self.tensors
151 |
152 | def reset(self, engine_path=None):
153 | # del self.engine
154 | del self.context
155 | del self.buffers
156 | del self.tensors
157 | # self.engine_path = engine_path
158 |
159 | self.context = None
160 | self.buffers = OrderedDict()
161 | self.tensors = OrderedDict()
162 | self.inputs = {}
163 | self.outputs = {}
164 |
165 | def build(
166 | self,
167 | onnx_path,
168 | fp16,
169 | input_profile=None,
170 | enable_refit=False,
171 | enable_preview=False,
172 | enable_all_tactics=False,
173 | timing_cache=None,
174 | update_output_names=None,
175 | ):
176 | p = [Profile()]
177 | if input_profile:
178 | p = [Profile() for i in range(len(input_profile))]
179 | for _p, i_profile in zip(p, input_profile):
180 | for name, dims in i_profile.items():
181 | assert len(dims) == 3
182 | _p.add(name, min=dims[0], opt=dims[1], max=dims[2])
183 |
184 | config_kwargs = {}
185 | if not enable_all_tactics:
186 | config_kwargs["tactic_sources"] = []
187 |
188 | network = network_from_onnx_path(
189 | onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]
190 | )
191 | if update_output_names:
192 | print(f"Updating network outputs to {update_output_names}")
193 | network = ModifyNetworkOutputs(network, update_output_names)
194 |
195 | builder = network[0]
196 | config = builder.create_builder_config()
197 | config.progress_monitor = TQDMProgressMonitor()
198 |
199 |         if fp16: config.set_flag(trt.BuilderFlag.FP16)
200 |         if enable_refit: config.set_flag(trt.BuilderFlag.REFIT)
201 |
202 | profiles = copy.deepcopy(p)
203 | for profile in profiles:
204 | # Last profile is used for set_calibration_profile.
205 | calib_profile = profile.fill_defaults(network[1]).to_trt(
206 | builder, network[1]
207 | )
208 | config.add_optimization_profile(calib_profile)
209 |
210 | try:
211 | engine = engine_from_network(
212 | network,
213 | config,
214 | )
215 | except Exception as e:
216 | error(f"Failed to build engine: {e}")
217 | return 1
218 | try:
219 | save_engine(engine, path=self.engine_path)
220 | except Exception as e:
221 | error(f"Failed to save engine: {e}")
222 | return 1
223 | return 0
224 |
225 | def load(self):
226 | self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
227 |
228 | def activate(self, reuse_device_memory=None):
229 | if reuse_device_memory:
230 | self.context = self.engine.create_execution_context_without_device_memory()
231 | # self.context.device_memory = reuse_device_memory
232 | else:
233 | self.context = self.engine.create_execution_context()
234 |
235 | def allocate_buffers(self, shape_dict=None, device="cuda"):
236 | nvtx.range_push("allocate_buffers")
237 | for idx in range(self.engine.num_io_tensors):
238 | name = self.engine.get_tensor_name(idx)
239 | binding = self.engine[idx]
240 | if shape_dict and binding in shape_dict:
241 | shape = shape_dict[binding]["shape"]
242 | else:
243 | shape = self.context.get_tensor_shape(name)
244 |
245 | dtype = trt.nptype(self.engine.get_tensor_dtype(name))
246 | if self.engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
247 | self.context.set_input_shape(name, shape)
248 | tensor = torch.empty(
249 | tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]
250 | ).to(device=device)
251 | self.tensors[binding] = tensor
252 | nvtx.range_pop()
253 |
254 | def infer(self, feed_dict, stream, use_cuda_graph=False):
255 | nvtx.range_push("set_tensors")
256 | for name, buf in feed_dict.items():
257 | self.tensors[name].copy_(buf)
258 |
259 | for name, tensor in self.tensors.items():
260 | self.context.set_tensor_address(name, tensor.data_ptr())
261 | nvtx.range_pop()
262 | nvtx.range_push("execute")
263 | noerror = self.context.execute_async_v3(stream)
264 | if not noerror:
265 | raise ValueError("ERROR: inference failed.")
266 | nvtx.range_pop()
267 | return self.tensors
268 |
269 | def __str__(self):
270 | out = ""
271 |
272 |         # When the upscaler raises errors, str() is called by comfy's execution.py,
273 |         # but the engine may lack the attributes required for stringification.
274 |         # If str() also raises, comfy gets soft-locked and won't run prompts until restarted.
275 | if not hasattr(self.engine, "num_optimization_profiles") or not hasattr(self.engine, "num_bindings"):
276 | return out
277 |
278 | for opt_profile in range(self.engine.num_optimization_profiles):
279 | for binding_idx in range(self.engine.num_bindings):
280 | name = self.engine.get_binding_name(binding_idx)
281 | shape = self.engine.get_profile_shape(opt_profile, name)
282 | out += f"\t{name} = {shape}\n"
283 | return out
--------------------------------------------------------------------------------
/utilities.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from tqdm import tqdm
3 | import logging
4 | import sys
5 |
6 | class ColoredLogger:
7 | COLORS = {
8 | 'RED': '\033[91m',
9 | 'GREEN': '\033[92m',
10 | 'YELLOW': '\033[93m',
11 | 'BLUE': '\033[94m',
12 | 'MAGENTA': '\033[95m',
13 | 'RESET': '\033[0m'
14 | }
15 |
16 | LEVEL_COLORS = {
17 | 'DEBUG': COLORS['BLUE'],
18 | 'INFO': COLORS['GREEN'],
19 | 'WARNING': COLORS['YELLOW'],
20 | 'ERROR': COLORS['RED'],
21 | 'CRITICAL': COLORS['MAGENTA']
22 | }
23 |
24 | def __init__(self, name="MY-APP"):
25 | self.logger = logging.getLogger(name)
26 | self.logger.setLevel(logging.DEBUG)
27 | self.app_name = name
28 |
29 | # Prevent message propagation to parent loggers
30 | self.logger.propagate = False
31 |
32 | # Clear existing handlers
33 | self.logger.handlers = []
34 |
35 | # Create console handler
36 | handler = logging.StreamHandler(sys.stdout)
37 | handler.setLevel(logging.DEBUG)
38 |
39 | # Custom formatter class to handle colored components
40 | class ColoredFormatter(logging.Formatter):
41 | def format(self, record):
42 | # Color the level name according to severity
43 | level_color = ColoredLogger.LEVEL_COLORS.get(record.levelname, '')
44 | colored_levelname = f"{level_color}{record.levelname}{ColoredLogger.COLORS['RESET']}"
45 |
46 | # Color the logger name in blue
47 | colored_name = f"{ColoredLogger.COLORS['BLUE']}{record.name}{ColoredLogger.COLORS['RESET']}"
48 |
49 | # Set the colored components
50 | record.levelname = colored_levelname
51 | record.name = colored_name
52 |
53 | return super().format(record)
54 |
55 | # Create formatter with the new format
56 | formatter = ColoredFormatter('[%(name)s|%(levelname)s] - %(message)s')
57 | handler.setFormatter(formatter)
58 |
59 | self.logger.addHandler(handler)
60 |
61 |
62 | def debug(self, message):
63 | self.logger.debug(f"{self.COLORS['BLUE']}{message}{self.COLORS['RESET']}")
64 |
65 | def info(self, message):
66 | self.logger.info(f"{self.COLORS['GREEN']}{message}{self.COLORS['RESET']}")
67 |
68 | def warning(self, message):
69 | self.logger.warning(f"{self.COLORS['YELLOW']}{message}{self.COLORS['RESET']}")
70 |
71 | def error(self, message):
72 | self.logger.error(f"{self.COLORS['RED']}{message}{self.COLORS['RESET']}")
73 |
74 | def critical(self, message):
75 | self.logger.critical(f"{self.COLORS['MAGENTA']}{message}{self.COLORS['RESET']}")
76 |
77 | def download_file(url, save_path):
78 | """
79 | Download a file from URL with progress bar
80 |
81 | Args:
82 | url (str): URL of the file to download
83 | save_path (str): Path to save the file as
84 | """
85 | GREEN = '\033[92m'
86 | RESET = '\033[0m'
87 |     response = requests.get(url, stream=True)
88 |     response.raise_for_status()  # fail fast on HTTP errors instead of writing an error page to disk
89 |     total_size = int(response.headers.get('content-length', 0))
89 |
90 | with open(save_path, 'wb') as file, tqdm(
91 | desc=save_path,
92 | total=total_size,
93 | unit='iB',
94 | unit_scale=True,
95 | unit_divisor=1024,
96 | colour='green',
97 | bar_format=f'{GREEN}{{l_bar}}{{bar}}{RESET}{GREEN}{{r_bar}}{RESET}'
98 | ) as progress_bar:
99 | for data in response.iter_content(chunk_size=1024):
100 | size = file.write(data)
101 | progress_bar.update(size)
102 |
103 | def get_final_resolutions(width, height, resize_to):
104 | final_width = None
105 | final_height = None
106 | aspect_ratio = float(width/height)
107 |
108 | match resize_to:
109 | case "HD":
110 | final_width = 1280
111 | final_height = 720
112 | case "FHD":
113 | final_width = 1920
114 | final_height = 1080
115 | case "2k":
116 | final_width = 2560
117 | final_height = 1440
118 | case "4k":
119 | final_width = 3840
120 | final_height = 2160
121 | case "none":
122 | final_width = width*4
123 | final_height = height*4
124 |
125 | if aspect_ratio == 1.0:
126 | final_width = final_height
127 |
128 |     if aspect_ratio < 1.0 and resize_to != "none":
129 |         final_width, final_height = final_height, final_width  # portrait input: swap target dimensions
132 |
133 | return (final_width, final_height)
--------------------------------------------------------------------------------