├── LICENSE
├── README.md
├── data
│   └── coco.yaml
├── detect_and_blur.py
├── models
│   ├── __init__.py
│   ├── common.py
│   ├── experimental.py
│   └── yolo.py
├── requirements.txt
└── utils
    ├── __init__.py
    ├── activations.py
    ├── add_nms.py
    ├── autoanchor.py
    ├── datasets.py
    ├── general.py
    ├── google_utils.py
    ├── loss.py
    ├── metrics.py
    ├── plots.py
    └── torch_utils.py

/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products.
If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 
122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 
180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 
246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. 
A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 
360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 
421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 
476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. 
You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 | <one line to give the program's name and a brief idea of what it does.>
635 | Copyright (C) <year>  <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 | along with this program. If not, see <https://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 | <program>  Copyright (C) <year>  <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <https://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <https://www.gnu.org/licenses/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # YOLOv7 Object Blurring
2 |
3 | A Python project for object detection and selective blurring using YOLOv7. It lets you blur specific classes of objects in videos or images, which makes it well suited to anonymizing footage, protecting privacy, or de-emphasizing selected objects.
4 |
5 | ### Prerequisites
6 | - **Python 3.6+** installed on your system.
7 | - **pip** upgraded to the latest version.
8 |
9 | ### Quick Start Guide
10 |
11 | #### 1. Clone the Repository
12 | Start by cloning this repository to your local machine:
13 | ```bash
14 | git clone https://github.com/RizwanMunawar/yolov7-object-blurring.git
15 | cd yolov7-object-blurring
16 | ```
17 |
18 | #### 2. Set Up a Virtual Environment (Recommended)
19 | Create a virtual environment to isolate dependencies and prevent conflicts with existing Python packages.
20 |
21 | **For Linux Users:**
22 | ```bash
23 | python3 -m venv yolov7objblurring
24 | source yolov7objblurring/bin/activate
25 | ```
26 |
27 | **For Windows Users:**
28 | ```bash
29 | python -m venv yolov7objblurring
30 | yolov7objblurring\Scripts\activate
31 | ```
32 |
33 | #### 3. Install Dependencies
34 | Upgrade pip and install the required packages by running:
35 | ```bash
36 | pip install --upgrade pip
37 | pip install -r requirements.txt
38 | ```
39 |
40 | #### 4. Download YOLOv7 Model Weights
41 | Download the [YOLOv7 pretrained weights](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt) and move them to the `yolov7-object-blurring` folder.
42 |
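Before moving on, you can optionally verify the setup. The check below is an illustrative sketch, not a script shipped with this repo; it assumes you run it from the repo root after downloading `yolov7.pt` in step 4:

```python
# Optional sanity check: weights in place, and is CUDA usable?
from pathlib import Path
import torch

weights = Path("yolov7.pt")
assert weights.is_file(), "yolov7.pt not found - repeat step 4"
print(f"Weights: {weights} ({weights.stat().st_size / 1e6:.1f} MB)")
print("CUDA available:", torch.cuda.is_available())  # CPU inference also works, just slower
```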
43 | #### 5. Running the Code
44 | Use the following commands to detect and blur objects in your video. The `--blurratio` flag sets the blur kernel size in pixels, so larger values produce a heavier blur:
45 |
46 | - **Basic Command** (change `source` to the path of your video):
47 | ```bash
48 | python detect_and_blur.py --weights yolov7.pt --source "your_video.mp4" --blurratio 20
49 | ```
50 |
51 | - **Blurring Specific Classes** (e.g., the `person` class, index 0):
52 | ```bash
53 | python detect_and_blur.py --weights yolov7.pt --source "your_video.mp4" --classes 0 --blurratio 50
54 | ```
55 |
56 | - **Hiding Detection Boxes** (hides the bounding box drawn around blurred areas):
57 | ```bash
58 | python detect_and_blur.py --weights yolov7.pt --source "your_video.mp4" --classes 0 --blurratio 50 --hidedetarea
59 | ```
60 |
61 | #### 6. Accessing Results
62 | The output video will be saved in the directory `runs/detect/exp`. Each new run creates a new `exp` folder with the results.
63 |
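For context, the blurring itself is plain OpenCV: the script crops each detected box out of the frame, box-blurs it, and writes it back. Below is a standalone sketch of that step; the `boxes` list is a stand-in for YOLOv7 detections, and the bounds clamping is added here for safety rather than copied from the script:

```python
import cv2

def blur_regions(frame, boxes, blurratio=20):
    """Box-blur each (x1, y1, x2, y2) region of `frame` in place."""
    h, w = frame.shape[:2]
    for x1, y1, x2, y2 in boxes:
        # Clamp to image bounds and skip degenerate boxes so cv2.blur
        # never receives an empty crop.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 <= x1 or y2 <= y1:
            continue
        frame[y1:y2, x1:x2] = cv2.blur(frame[y1:y2, x1:x2], (blurratio, blurratio))
    return frame
```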
64 | ---
65 |
66 | ### Example Results
67 | | Objects Blurred A | Objects Blurred B | Hidden Detection Area |
68 | | --- | --- | --- |
69 | | ![Image A](https://user-images.githubusercontent.com/62513924/186101334-1de03f51-9f64-41fd-b488-b77eb949865d.png) | ![Image B](https://user-images.githubusercontent.com/62513924/186101348-3b06d516-5507-4548-8efa-9b55564a75fe.png) | ![Image C](https://user-images.githubusercontent.com/62513924/186102964-59f89ae2-80ac-43c9-ab64-54c607a1cbe9.png) |
70 |
71 | ### Resources and Further Reading
72 |
73 | - **YOLOv7 Project:** [https://github.com/WongKinYiu/yolov7](https://github.com/WongKinYiu/yolov7)
74 | - **OpenCV Documentation:** [https://opencv.org/](https://opencv.org/)
75 |
76 | **Some of my articles/research papers | computer vision awesome resources for learning | How do I appear to the world? 🚀**
77 |
78 | [Ultralytics YOLO11: Object Detection and Instance Segmentation🤯](https://muhammadrizwanmunawar.medium.com/ultralytics-yolo11-object-detection-and-instance-segmentation-88ef0239a811) ![Published Date](https://img.shields.io/badge/published_Date-2024--10--27-brightgreen)
79 |
80 | [Parking Management using Ultralytics YOLO11](https://muhammadrizwanmunawar.medium.com/parking-management-using-ultralytics-yolo11-fba4c6bc62bc) ![Published Date](https://img.shields.io/badge/published_Date-2024--11--10-brightgreen)
81 |
82 | [My 🖐️Computer Vision Hobby Projects that Yielded Earnings](https://muhammadrizwanmunawar.medium.com/my-️computer-vision-hobby-projects-that-yielded-earnings-7923c9b9eead) ![Published Date](https://img.shields.io/badge/published_Date-2023--09--10-brightgreen)
83 |
84 | [Best Resources to Learn Computer Vision](https://muhammadrizwanmunawar.medium.com/best-resources-to-learn-computer-vision-311352ed0833) ![Published Date](https://img.shields.io/badge/published_Date-2023--06--30-brightgreen)
85 |
86 | [Roadmap for Computer Vision Engineer](https://medium.com/augmented-startups/roadmap-for-computer-vision-engineer-45167b94518c) ![Published Date](https://img.shields.io/badge/published_Date-2022--08--07-brightgreen)
87 |
88 | [How did I spend 2022 in the Computer Vision Field](https://www.linkedin.com/pulse/how-did-i-spend-2022-computer-vision-field-muhammad-rizwan-munawar) ![Published Date](https://img.shields.io/badge/published_Date-2022--12--20-brightgreen)
89 |
90 | [Domain Feature Mapping with YOLOv7 for Automated Edge-Based Pallet Racking Inspections](https://www.mdpi.com/1424-8220/22/18/6927) ![Published Date](https://img.shields.io/badge/published_Date-2022--09--13-brightgreen)
91 |
92 | [Exudate Regeneration for Automated Exudate Detection in Retinal Fundus Images](https://ieeexplore.ieee.org/document/9885192) ![Published Date](https://img.shields.io/badge/published_Date-2022--09--12-brightgreen)
93 |
94 | [Feature Mapping for Rice Leaf Defect Detection Based on a Custom Convolutional Architecture](https://www.mdpi.com/2304-8158/11/23/3914) ![Published Date](https://img.shields.io/badge/published_Date-2022--12--04-brightgreen)
95 |
96 | [Yolov5, Yolo-x, Yolo-r, Yolov7 Performance Comparison: A Survey](https://aircconline.com/csit/papers/vol12/csit121602.pdf) ![Published Date](https://img.shields.io/badge/published_Date-2022--09--24-brightgreen)
97 |
98 | [Explainable AI in Drug Sensitivity Prediction on Cancer Cell Lines](https://ieeexplore.ieee.org/document/9922931) ![Published Date](https://img.shields.io/badge/published_Date-2022--09--23-brightgreen)
99 |
100 | [Train YOLOv8 on Custom Data](https://medium.com/augmented-startups/train-yolov8-on-custom-data-6d28cd348262) ![Published Date](https://img.shields.io/badge/published_Date-2022--09--23-brightgreen)
101 |
102 |
103 | **More Information**
104 |
105 | For more details, you can reach out to me on [Medium](https://muhammadrizwanmunawar.medium.com/) or connect with me on [LinkedIn](https://www.linkedin.com/in/muhammadrizwanmunawar/)
106 |
--------------------------------------------------------------------------------
/data/coco.yaml:
--------------------------------------------------------------------------------
1 | # COCO 2017 dataset http://cocodataset.org
2 |
3 | # download command/URL (optional)
4 | download: bash ./scripts/get_coco.sh
5 |
6 | # train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
7 | train: ./coco/train2017.txt # 118287 images
8 | val: ./coco/val2017.txt # 5000 images
9 | test: ./coco/test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
10 |
11 | # number of classes
12 | nc: 80
13 |
14 | # class names
15 | names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
16 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
17 | 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
18 | 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
19 | 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
20 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
21 | 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
22 | 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
23 | 'hair drier', 'toothbrush' ]
24 |
--------------------------------------------------------------------------------
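The `--classes` flag of `detect_and_blur.py` filters detections by index into this `names` list (`person` is 0). A quick lookup sketch, assuming PyYAML is installed (the YOLOv7 requirements pull it in):

```python
import yaml  # PyYAML

with open("data/coco.yaml") as f:
    names = yaml.safe_load(f)["names"]

print(names.index("person"))  # 0  -> pass as: --classes 0
print(names[2])               # 'car'
```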
/detect_and_blur.py:
--------------------------------------------------------------------------------
1 | # Object blurring using YOLOv7
2 | import argparse
3 | import time
4 | from pathlib import Path
5 | import os
6 | import cv2
7 | import torch
8 | import torch.backends.cudnn as cudnn
9 | from numpy import random
10 |
11 | from models.experimental import attempt_load
12 | from utils.datasets import LoadStreams, LoadImages
13 | from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \
14 | scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
15 | from utils.plots import plot_one_box
16 | from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel
17 |
18 |
19 | def detect(save_img=False):
20 | source, weights, view_img, save_txt, imgsz, trace, blurratio, hidedetarea = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace, opt.blurratio, opt.hidedetarea
21 | save_img = not opt.nosave and not source.endswith('.txt') # save inference images
22 | webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
23 | ('rtsp://', 'rtmp://', 'http://', 'https://'))
24 |
25 | # Directories
26 | save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run
27 | (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
28 |
29 | # Initialize
30 | set_logging()
31 | device = select_device(opt.device)
32 | half = device.type != 'cpu' # half precision only supported on CUDA
33 |
34 | # Load model
35 | model = attempt_load(weights, map_location=device) # load FP32 model
36 | stride = int(model.stride.max()) # model stride
37 | imgsz = check_img_size(imgsz, s=stride) # check img_size
38 |
39 | if trace:
40 | model = TracedModel(model, device, opt.img_size)
41 |
42 | if half:
43 | model.half() # to FP16
44 |
45 | # Second-stage classifier
46 | classify = False
47 | if classify:
48 | modelc = load_classifier(name='resnet101', n=2) # initialize
49 | modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']); modelc.to(device).eval() # load_state_dict returns key info, not the module, so move/eval separately
50 |
51 | # Set Dataloader
52 | vid_path, vid_writer = None, None
53 | if webcam:
54 | view_img = check_imshow()
55 | cudnn.benchmark = True # set True to speed up constant image size inference
56 | dataset = LoadStreams(source, img_size=imgsz, stride=stride)
57 | else:
58 | dataset = LoadImages(source, img_size=imgsz, stride=stride)
59 |
60 | # Get names and colors
61 | names = model.module.names if hasattr(model, 'module') else model.names
62 | colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]
63 |
64 | # Run inference
65 | if device.type != 'cpu':
66 | model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
67 | old_img_w = old_img_h = imgsz
68 | old_img_b = 1
69 |
70 | t0 = time.time()
71 | for path, img, im0s, vid_cap in dataset:
72 | img = torch.from_numpy(img).to(device)
73 | img = img.half() if half else img.float() # uint8 to fp16/32
74 | img /= 255.0 # 0 - 255 to 0.0 - 1.0
75 | if img.ndimension() == 3:
76 | img = img.unsqueeze(0)
77 |
78 | # Warmup
79 | if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]):
80 | old_img_b = img.shape[0]
81 | old_img_h = img.shape[2]
82 | old_img_w = img.shape[3]
83 | for i in range(3):
84 | model(img, augment=opt.augment)[0]
85 |
86 | # Inference
87 | t1 = time_synchronized()
88 | pred = model(img, augment=opt.augment)[0]
89 | t2 = time_synchronized()
90 |
91 | # Apply NMS
92 | pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
93 | t3 = time_synchronized()
94 |
95 | # Apply Classifier
96 | if classify:
97 | pred = apply_classifier(pred, modelc, img, im0s)
98 |
99 | # Process detections
100 | for i, det in enumerate(pred): # detections per image
101 | if webcam: # batch_size >= 1
102 | p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count
103 | else:
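# (file/folder source branch) LoadImages yields a single frame per iteration,
# so im0s is one image and, for videos, the frame index is read from the loader.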
104 | p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) 105 | 106 | p = Path(p) # to Path 107 | save_path = str(save_dir / p.name) # img.jpg 108 | txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt 109 | gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh 110 | if len(det): 111 | # Rescale boxes from img_size to im0 size 112 | det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() 113 | 114 | # Print results 115 | for c in det[:, -1].unique(): 116 | n = (det[:, -1] == c).sum() # detections per class 117 | s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string 118 | 119 | # Write results 120 | for *xyxy, conf, cls in reversed(det): 121 | 122 | #Blur the object 123 | crop_obj = im0[int(xyxy[1]):int(xyxy[3]),int(xyxy[0]):int(xyxy[2])] 124 | blur = cv2.blur(crop_obj,(blurratio,blurratio)) 125 | im0[int(xyxy[1]):int(xyxy[3]),int(xyxy[0]):int(xyxy[2])] = blur 126 | 127 | if save_txt: # Write to file 128 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh 129 | line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh) # label format 130 | with open(txt_path + '.txt', 'a') as f: 131 | f.write(('%g ' * len(line)).rstrip() % line + '\n') 132 | 133 | if save_img or view_img: # Add bbox to image 134 | label = f'{names[int(cls)]} {conf:.2f}' 135 | if not hidedetarea: 136 | plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3) 137 | 138 | # Print time (inference + NMS) 139 | print(f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS') 140 | 141 | # Stream results 142 | if view_img: 143 | cv2.imshow(str(p), im0) 144 | cv2.waitKey(1) # 1 millisecond 145 | 146 | # Save results (image with detections) 147 | if save_img: 148 | if dataset.mode == 'image': 149 | cv2.imwrite(save_path, im0) 150 | print(f" The image with the result is saved in: {save_path}") 151 | else: # 'video' or 'stream' 152 | if vid_path != save_path: # new video 153 | vid_path = save_path 154 | if isinstance(vid_writer, cv2.VideoWriter): 155 | vid_writer.release() # release previous video writer 156 | if vid_cap: # video 157 | fps = vid_cap.get(cv2.CAP_PROP_FPS) 158 | w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) 159 | h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) 160 | else: # stream 161 | fps, w, h = 30, im0.shape[1], im0.shape[0] 162 | save_path += '.mp4' 163 | vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) 164 | vid_writer.write(im0) 165 | 166 | if save_txt or save_img: 167 | s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' 168 | #print(f"Results saved to {save_dir}{s}") 169 | 170 | print(f'Done. ({time.time() - t0:.3f}s)') 171 | 172 | 173 | if __name__ == '__main__': 174 | parser = argparse.ArgumentParser() 175 | parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)') 176 | parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam 177 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') 178 | parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold') 179 | parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS') 180 | parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu')
181 | parser.add_argument('--view-img', action='store_true', help='display results')
182 | parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
183 | parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
184 | parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
185 | parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
186 | parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
187 | parser.add_argument('--augment', action='store_true', help='augmented inference')
188 | parser.add_argument('--update', action='store_true', help='update all models')
189 | parser.add_argument('--project', default='runs/detect', help='save results to project/name')
190 | parser.add_argument('--name', default='exp', help='save results to project/name')
191 | parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
192 | parser.add_argument('--no-trace', action='store_true', help="don't trace model")
193 | parser.add_argument('--blurratio', type=int, default=20, help='blur kernel size in pixels (larger = stronger blur)') # required=True removed: it contradicted the default
194 | parser.add_argument('--hidedetarea', action='store_true', help='hide detection bounding boxes on blurred objects')
195 | opt = parser.parse_args()
196 | print(opt)
197 | #check_requirements(exclude=('pycocotools', 'thop'))
198 |
199 | with torch.no_grad():
200 | if opt.update: # update all models (to fix SourceChangeWarning)
201 | for opt.weights in ['yolov7.pt']:
202 | detect()
203 | strip_optimizer(opt.weights)
204 | else:
205 | detect()
206 |
--------------------------------------------------------------------------------
/models/__init__.py:
--------------------------------------------------------------------------------
1 | # init
--------------------------------------------------------------------------------
/models/experimental.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import random
3 | import torch
4 | import torch.nn as nn
5 |
6 | from models.common import Conv, DWConv
7 | from utils.google_utils import attempt_download
8 |
9 |
10 | class CrossConv(nn.Module):
11 | # Cross Convolution Downsample
12 | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
13 | # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
14 | super(CrossConv, self).__init__()
15 | c_ = int(c2 * e) # hidden channels
16 | self.cv1 = Conv(c1, c_, (1, k), (1, s))
17 | self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
18 | self.add = shortcut and c1 == c2
19 |
20 | def forward(self, x):
21 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
22 |
23 |
24 | class Sum(nn.Module):
25 | # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
26 | def __init__(self, n, weight=False): # n: number of inputs
27 | super(Sum, self).__init__()
28 | self.weight = weight # apply weights boolean
29 | self.iter = range(n - 1) # iter object
30 | if weight:
31 | self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
32 |
33 | def forward(self, x):
34 | y = x[0] # no weight
35 | if self.weight:
36 | w = torch.sigmoid(self.w) * 2
37 | for i in self.iter:
38 | y = y + x[i + 1] * w[i]
39 | else:
40 | for i in self.iter:
41 | y = y + x[i + 1]
42 | return y
43 |
44 |
45 | class MixConv2d(nn.Module):
46 | # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
47 | def 
__init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): 48 | super(MixConv2d, self).__init__() 49 | groups = len(k) 50 | if equal_ch: # equal c_ per group 51 | i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices 52 | c_ = [(i == g).sum() for g in range(groups)] # intermediate channels 53 | else: # equal weight.numel() per group 54 | b = [c2] + [0] * groups 55 | a = np.eye(groups + 1, groups, k=-1) 56 | a -= np.roll(a, 1, axis=1) 57 | a *= np.array(k) ** 2 58 | a[0] = 1 59 | c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b 60 | 61 | self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) 62 | self.bn = nn.BatchNorm2d(c2) 63 | self.act = nn.LeakyReLU(0.1, inplace=True) 64 | 65 | def forward(self, x): 66 | return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) 67 | 68 | 69 | class Ensemble(nn.ModuleList): 70 | # Ensemble of models 71 | def __init__(self): 72 | super(Ensemble, self).__init__() 73 | 74 | def forward(self, x, augment=False): 75 | y = [] 76 | for module in self: 77 | y.append(module(x, augment)[0]) 78 | # y = torch.stack(y).max(0)[0] # max ensemble 79 | # y = torch.stack(y).mean(0) # mean ensemble 80 | y = torch.cat(y, 1) # nms ensemble 81 | return y, None # inference, train output 82 | 83 | 84 | 85 | 86 | 87 | class ORT_NMS(torch.autograd.Function): 88 | '''ONNX-Runtime NMS operation''' 89 | @staticmethod 90 | def forward(ctx, 91 | boxes, 92 | scores, 93 | max_output_boxes_per_class=torch.tensor([100]), 94 | iou_threshold=torch.tensor([0.45]), 95 | score_threshold=torch.tensor([0.25])): 96 | device = boxes.device 97 | batch = scores.shape[0] 98 | num_det = random.randint(0, 100) 99 | batches = torch.randint(0, batch, (num_det,)).sort()[0].to(device) 100 | idxs = torch.arange(100, 100 + num_det).to(device) 101 | zeros = torch.zeros((num_det,), dtype=torch.int64).to(device) 102 | selected_indices = torch.cat([batches[None], zeros[None], idxs[None]], 0).T.contiguous() 103 | selected_indices = selected_indices.to(torch.int64) 104 | return selected_indices 105 | 106 | @staticmethod 107 | def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold): 108 | return g.op("NonMaxSuppression", boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold) 109 | 110 | 111 | class TRT_NMS(torch.autograd.Function): 112 | '''TensorRT NMS operation''' 113 | @staticmethod 114 | def forward( 115 | ctx, 116 | boxes, 117 | scores, 118 | background_class=-1, 119 | box_coding=1, 120 | iou_threshold=0.45, 121 | max_output_boxes=100, 122 | plugin_version="1", 123 | score_activation=0, 124 | score_threshold=0.25, 125 | ): 126 | batch_size, num_boxes, num_classes = scores.shape 127 | num_det = torch.randint(0, max_output_boxes, (batch_size, 1), dtype=torch.int32) 128 | det_boxes = torch.randn(batch_size, max_output_boxes, 4) 129 | det_scores = torch.randn(batch_size, max_output_boxes) 130 | det_classes = torch.randint(0, num_classes, (batch_size, max_output_boxes), dtype=torch.int32) 131 | return num_det, det_boxes, det_scores, det_classes 132 | 133 | @staticmethod 134 | def symbolic(g, 135 | boxes, 136 | scores, 137 | background_class=-1, 138 | box_coding=1, 139 | iou_threshold=0.45, 140 | max_output_boxes=100, 141 | plugin_version="1", 142 | score_activation=0, 143 | score_threshold=0.25): 144 | out = g.op("TRT::EfficientNMS_TRT", 145 | boxes, 146 | scores, 147 | background_class_i=background_class, 148 | box_coding_i=box_coding, 149 | 
iou_threshold_f=iou_threshold, 150 | max_output_boxes_i=max_output_boxes, 151 | plugin_version_s=plugin_version, 152 | score_activation_i=score_activation, 153 | score_threshold_f=score_threshold, 154 | outputs=4) 155 | nums, boxes, scores, classes = out 156 | return nums, boxes, scores, classes 157 | 158 | 159 | class ONNX_ORT(nn.Module): 160 | '''onnx module with ONNX-Runtime NMS operation.''' 161 | def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=640, device=None): 162 | super().__init__() 163 | self.device = device if device else torch.device("cpu") 164 | self.max_obj = torch.tensor([max_obj]).to(device) 165 | self.iou_threshold = torch.tensor([iou_thres]).to(device) 166 | self.score_threshold = torch.tensor([score_thres]).to(device) 167 | self.max_wh = max_wh # if max_wh != 0 : non-agnostic else : agnostic 168 | self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], 169 | dtype=torch.float32, 170 | device=self.device) 171 | 172 | def forward(self, x): 173 | boxes = x[:, :, :4] 174 | conf = x[:, :, 4:5] 175 | scores = x[:, :, 5:] 176 | scores *= conf 177 | boxes @= self.convert_matrix 178 | max_score, category_id = scores.max(2, keepdim=True) 179 | dis = category_id.float() * self.max_wh 180 | nmsbox = boxes + dis 181 | max_score_tp = max_score.transpose(1, 2).contiguous() 182 | selected_indices = ORT_NMS.apply(nmsbox, max_score_tp, self.max_obj, self.iou_threshold, self.score_threshold) 183 | X, Y = selected_indices[:, 0], selected_indices[:, 2] 184 | selected_boxes = boxes[X, Y, :] 185 | selected_categories = category_id[X, Y, :].float() 186 | selected_scores = max_score[X, Y, :] 187 | X = X.unsqueeze(1).float() 188 | return torch.cat([X, selected_boxes, selected_categories, selected_scores], 1) 189 | 190 | class ONNX_TRT(nn.Module): 191 | '''onnx module with TensorRT NMS operation.''' 192 | def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None ,device=None): 193 | super().__init__() 194 | assert max_wh is None 195 | self.device = device if device else torch.device('cpu') 196 | self.background_class = -1, 197 | self.box_coding = 1, 198 | self.iou_threshold = iou_thres 199 | self.max_obj = max_obj 200 | self.plugin_version = '1' 201 | self.score_activation = 0 202 | self.score_threshold = score_thres 203 | 204 | def forward(self, x): 205 | boxes = x[:, :, :4] 206 | conf = x[:, :, 4:5] 207 | scores = x[:, :, 5:] 208 | scores *= conf 209 | num_det, det_boxes, det_scores, det_classes = TRT_NMS.apply(boxes, scores, self.background_class, self.box_coding, 210 | self.iou_threshold, self.max_obj, 211 | self.plugin_version, self.score_activation, 212 | self.score_threshold) 213 | return num_det, det_boxes, det_scores, det_classes 214 | 215 | 216 | class End2End(nn.Module): 217 | '''export onnx or tensorrt model with NMS operation.''' 218 | def __init__(self, model, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None, device=None): 219 | super().__init__() 220 | device = device if device else torch.device('cpu') 221 | assert isinstance(max_wh,(int)) or max_wh is None 222 | self.model = model.to(device) 223 | self.model.model[-1].end2end = True 224 | self.patch_model = ONNX_TRT if max_wh is None else ONNX_ORT 225 | self.end2end = self.patch_model(max_obj, iou_thres, score_thres, max_wh, device) 226 | self.end2end.eval() 227 | 228 | def forward(self, x): 229 | x = self.model(x) 230 | x = self.end2end(x) 231 | return x 232 | 233 | 234 | 235 | 236 | 237 | def attempt_load(weights, 
map_location=None): 238 | # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a 239 | model = Ensemble() 240 | for w in weights if isinstance(weights, list) else [weights]: 241 | # attempt_download(w) 242 | ckpt = torch.load(w, map_location=map_location) # load 243 | model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model 244 | 245 | # Compatibility updates 246 | for m in model.modules(): 247 | if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: 248 | m.inplace = True # pytorch 1.7.0 compatibility 249 | elif type(m) is nn.Upsample: 250 | m.recompute_scale_factor = None # torch 1.11.0 compatibility 251 | elif type(m) is Conv: 252 | m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility 253 | 254 | if len(model) == 1: 255 | return model[-1] # return model 256 | else: 257 | print('Ensemble created with %s\n' % weights) 258 | for k in ['names', 'stride']: 259 | setattr(model, k, getattr(model[-1], k)) 260 | return model # return ensemble 261 | 262 | 263 | -------------------------------------------------------------------------------- /models/yolo.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import logging 3 | import sys 4 | from copy import deepcopy 5 | 6 | sys.path.append('./') # to run '$ python *.py' files in subdirectories 7 | logger = logging.getLogger(__name__) 8 | import torch 9 | from models.common import * 10 | from models.experimental import * 11 | from utils.autoanchor import check_anchor_order 12 | from utils.general import make_divisible, check_file, set_logging 13 | from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \ 14 | select_device, copy_attr 15 | from utils.loss import SigmoidBin 16 | 17 | try: 18 | import thop # for FLOPS computation 19 | except ImportError: 20 | thop = None 21 | 22 | 23 | class Detect(nn.Module): 24 | stride = None # strides computed during build 25 | export = False # onnx export 26 | end2end = False 27 | include_nms = False 28 | concat = False 29 | 30 | def __init__(self, nc=80, anchors=(), ch=()): # detection layer 31 | super(Detect, self).__init__() 32 | self.nc = nc # number of classes 33 | self.no = nc + 5 # number of outputs per anchor 34 | self.nl = len(anchors) # number of detection layers 35 | self.na = len(anchors[0]) // 2 # number of anchors 36 | self.grid = [torch.zeros(1)] * self.nl # init grid 37 | a = torch.tensor(anchors).float().view(self.nl, -1, 2) 38 | self.register_buffer('anchors', a) # shape(nl,na,2) 39 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) 40 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv 41 | 42 | def forward(self, x): 43 | # x = x.copy() # for profiling 44 | z = [] # inference output 45 | self.training |= self.export 46 | for i in range(self.nl): 47 | x[i] = self.m[i](x[i]) # conv 48 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 49 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 50 | 51 | if not self.training: # inference 52 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 53 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 54 | y = x[i].sigmoid() 55 | if not torch.onnx.is_in_onnx_export(): 56 | y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy 57 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 58 | else: 59 | xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 60 | xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy 61 | wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh 62 | y = torch.cat((xy, wh, conf), 4) 63 | z.append(y.view(bs, -1, self.no)) 64 | 65 | if self.training: 66 | out = x 67 | elif self.end2end: 68 | out = torch.cat(z, 1) 69 | elif self.include_nms: 70 | z = self.convert(z) 71 | out = (z, ) 72 | elif self.concat: 73 | out = torch.cat(z, 1) 74 | else: 75 | out = (torch.cat(z, 1), x) 76 | 77 | return out 78 | 79 | @staticmethod 80 | def _make_grid(nx=20, ny=20): 81 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) 82 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() 83 | 84 | def convert(self, z): 85 | z = torch.cat(z, 1) 86 | box = z[:, :, :4] 87 | conf = z[:, :, 4:5] 88 | score = z[:, :, 5:] 89 | score *= conf 90 | convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], 91 | dtype=torch.float32, 92 | device=z.device) 93 | box @= convert_matrix 94 | return (box, score) 95 | 96 | 97 | class IDetect(nn.Module): 98 | stride = None # strides computed during build 99 | export = False # onnx export 100 | end2end = False 101 | include_nms = False 102 | concat = False 103 | 104 | def __init__(self, nc=80, anchors=(), ch=()): # detection layer 105 | super(IDetect, self).__init__() 106 | self.nc = nc # number of classes 107 | self.no = nc + 5 # number of outputs per anchor 108 | self.nl = len(anchors) # number of detection layers 109 | self.na = len(anchors[0]) // 2 # number of anchors 110 | self.grid = [torch.zeros(1)] * self.nl # init grid 111 | a = torch.tensor(anchors).float().view(self.nl, -1, 2) 112 | self.register_buffer('anchors', a) # shape(nl,na,2) 113 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) 114 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv 115 | 116 | self.ia = nn.ModuleList(ImplicitA(x) for x in ch) 117 | self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch) 118 | 119 | def forward(self, x): 120 | # x = x.copy() # for profiling 121 | z = [] # inference output 122 | self.training |= self.export 123 | for i in range(self.nl): 124 | x[i] = self.m[i](self.ia[i](x[i])) # conv 125 | x[i] = self.im[i](x[i]) 126 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 127 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 128 | 129 | if not self.training: # inference 130 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 131 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 132 | 133 | y = x[i].sigmoid() 134 | y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy 135 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 136 | z.append(y.view(bs, -1, self.no)) 137 | 138 | return x if self.training else (torch.cat(z, 1), x) 139 | 140 | def fuseforward(self, x): 141 | # x = x.copy() # for profiling 142 | z = [] # inference output 143 | self.training |= self.export 144 | for i in range(self.nl): 145 | x[i] = self.m[i](x[i]) # conv 146 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 147 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 148 | 149 | if not self.training: # inference 150 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 151 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 152 | 153 | y = x[i].sigmoid() 154 | if not torch.onnx.is_in_onnx_export(): 155 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 156 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 157 | else: 158 | xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 159 | xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy 160 | wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh 161 | y = torch.cat((xy, wh, conf), 4) 162 | z.append(y.view(bs, -1, self.no)) 163 | 164 | if self.training: 165 | out = x 166 | elif self.end2end: 167 | out = torch.cat(z, 1) 168 | elif self.include_nms: 169 | z = self.convert(z) 170 | out = (z, ) 171 | elif self.concat: 172 | out = torch.cat(z, 1) 173 | else: 174 | out = (torch.cat(z, 1), x) 175 | 176 | return out 177 | 178 | def fuse(self): 179 | print("IDetect.fuse") 180 | # fuse ImplicitA and Convolution 181 | for i in range(len(self.m)): 182 | c1,c2,_,_ = self.m[i].weight.shape 183 | c1_,c2_, _,_ = self.ia[i].implicit.shape 184 | self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1) 185 | 186 | # fuse ImplicitM and Convolution 187 | for i in range(len(self.m)): 188 | c1,c2, _,_ = self.im[i].implicit.shape 189 | self.m[i].bias *= self.im[i].implicit.reshape(c2) 190 | self.m[i].weight *= self.im[i].implicit.transpose(0,1) 191 | 192 | @staticmethod 193 | def _make_grid(nx=20, ny=20): 194 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) 195 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() 196 | 197 | def convert(self, z): 198 | z = torch.cat(z, 1) 199 | box = z[:, :, :4] 200 | conf = z[:, :, 4:5] 201 | score = z[:, :, 5:] 202 | score *= conf 203 | convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], 204 | dtype=torch.float32, 205 | device=z.device) 206 | box @= convert_matrix 207 | return (box, score) 208 | 209 | 210 | class IKeypoint(nn.Module): 211 | stride = None # strides computed during build 212 | export = False # onnx export 213 | 214 | def __init__(self, nc=80, anchors=(), nkpt=17, ch=(), inplace=True, dw_conv_kpt=False): # detection layer 215 | super(IKeypoint, self).__init__() 216 | self.nc = nc # number of classes 217 | self.nkpt = nkpt 218 | self.dw_conv_kpt = dw_conv_kpt 219 | self.no_det=(nc + 5) # number of outputs per anchor for box and class 220 | self.no_kpt = 3*self.nkpt ## number of outputs per anchor for keypoints 221 | self.no = self.no_det+self.no_kpt 222 | self.nl = len(anchors) # number of detection layers 223 | self.na = len(anchors[0]) // 2 # number of anchors 224 | self.grid = [torch.zeros(1)] * self.nl # init grid 225 | self.flip_test = False 226 | a = 
torch.tensor(anchors).float().view(self.nl, -1, 2) 227 | self.register_buffer('anchors', a) # shape(nl,na,2) 228 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) 229 | self.m = nn.ModuleList(nn.Conv2d(x, self.no_det * self.na, 1) for x in ch) # output conv 230 | 231 | self.ia = nn.ModuleList(ImplicitA(x) for x in ch) 232 | self.im = nn.ModuleList(ImplicitM(self.no_det * self.na) for _ in ch) 233 | 234 | if self.nkpt is not None: 235 | if self.dw_conv_kpt: #keypoint head is slightly more complex 236 | self.m_kpt = nn.ModuleList( 237 | nn.Sequential(DWConv(x, x, k=3), Conv(x,x), 238 | DWConv(x, x, k=3), Conv(x, x), 239 | DWConv(x, x, k=3), Conv(x,x), 240 | DWConv(x, x, k=3), Conv(x, x), 241 | DWConv(x, x, k=3), Conv(x, x), 242 | DWConv(x, x, k=3), nn.Conv2d(x, self.no_kpt * self.na, 1)) for x in ch) 243 | else: #keypoint head is a single convolution 244 | self.m_kpt = nn.ModuleList(nn.Conv2d(x, self.no_kpt * self.na, 1) for x in ch) 245 | 246 | self.inplace = inplace # use in-place ops (e.g. slice assignment) 247 | 248 | def forward(self, x): 249 | # x = x.copy() # for profiling 250 | z = [] # inference output 251 | self.training |= self.export 252 | for i in range(self.nl): 253 | if self.nkpt is None or self.nkpt==0: 254 | x[i] = self.im[i](self.m[i](self.ia[i](x[i]))) # conv 255 | else : 256 | x[i] = torch.cat((self.im[i](self.m[i](self.ia[i](x[i]))), self.m_kpt[i](x[i])), axis=1) 257 | 258 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 259 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 260 | x_det = x[i][..., :6] 261 | x_kpt = x[i][..., 6:] 262 | 263 | if not self.training: # inference 264 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 265 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 266 | kpt_grid_x = self.grid[i][..., 0:1] 267 | kpt_grid_y = self.grid[i][..., 1:2] 268 | 269 | if self.nkpt == 0: 270 | y = x[i].sigmoid() 271 | else: 272 | y = x_det.sigmoid() 273 | 274 | if self.inplace: 275 | xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 276 | wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh 277 | if self.nkpt != 0: 278 | x_kpt[..., 0::3] = (x_kpt[..., ::3] * 2. - 0.5 + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy 279 | x_kpt[..., 1::3] = (x_kpt[..., 1::3] * 2. - 0.5 + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy 280 | #x_kpt[..., 0::3] = (x_kpt[..., ::3] + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy 281 | #x_kpt[..., 1::3] = (x_kpt[..., 1::3] + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy 282 | #print('=============') 283 | #print(self.anchor_grid[i].shape) 284 | #print(self.anchor_grid[i][...,0].unsqueeze(4).shape) 285 | #print(x_kpt[..., 0::3].shape) 286 | #x_kpt[..., 0::3] = ((x_kpt[..., 0::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy 287 | #x_kpt[..., 1::3] = ((x_kpt[..., 1::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy 288 | #x_kpt[..., 0::3] = (((x_kpt[..., 0::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy 289 | #x_kpt[..., 1::3] = (((x_kpt[..., 1::3].sigmoid() * 4.) ** 2 - 8.) 
* self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy 290 | x_kpt[..., 2::3] = x_kpt[..., 2::3].sigmoid() 291 | 292 | y = torch.cat((xy, wh, y[..., 4:], x_kpt), dim = -1) 293 | 294 | else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 295 | xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 296 | wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 297 | if self.nkpt != 0: 298 | y[..., 6:] = (y[..., 6:] * 2. - 0.5 + self.grid[i].repeat((1,1,1,1,self.nkpt))) * self.stride[i] # xy 299 | y = torch.cat((xy, wh, y[..., 4:]), -1) 300 | 301 | z.append(y.view(bs, -1, self.no)) 302 | 303 | return x if self.training else (torch.cat(z, 1), x) 304 | 305 | @staticmethod 306 | def _make_grid(nx=20, ny=20): 307 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) 308 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() 309 | 310 | 311 | class IAuxDetect(nn.Module): 312 | stride = None # strides computed during build 313 | export = False # onnx export 314 | end2end = False 315 | include_nms = False 316 | concat = False 317 | 318 | def __init__(self, nc=80, anchors=(), ch=()): # detection layer 319 | super(IAuxDetect, self).__init__() 320 | self.nc = nc # number of classes 321 | self.no = nc + 5 # number of outputs per anchor 322 | self.nl = len(anchors) # number of detection layers 323 | self.na = len(anchors[0]) // 2 # number of anchors 324 | self.grid = [torch.zeros(1)] * self.nl # init grid 325 | a = torch.tensor(anchors).float().view(self.nl, -1, 2) 326 | self.register_buffer('anchors', a) # shape(nl,na,2) 327 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) 328 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[:self.nl]) # output conv 329 | self.m2 = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[self.nl:]) # output conv 330 | 331 | self.ia = nn.ModuleList(ImplicitA(x) for x in ch[:self.nl]) 332 | self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch[:self.nl]) 333 | 334 | def forward(self, x): 335 | # x = x.copy() # for profiling 336 | z = [] # inference output 337 | self.training |= self.export 338 | for i in range(self.nl): 339 | x[i] = self.m[i](self.ia[i](x[i])) # conv 340 | x[i] = self.im[i](x[i]) 341 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 342 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 343 | 344 | x[i+self.nl] = self.m2[i](x[i+self.nl]) 345 | x[i+self.nl] = x[i+self.nl].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 346 | 347 | if not self.training: # inference 348 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 349 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 350 | 351 | y = x[i].sigmoid() 352 | if not torch.onnx.is_in_onnx_export(): 353 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 354 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 355 | else: 356 | xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 357 | xy = xy * (2. 
* self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy 358 | wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh 359 | y = torch.cat((xy, wh, conf), 4) 360 | z.append(y.view(bs, -1, self.no)) 361 | 362 | return x if self.training else (torch.cat(z, 1), x[:self.nl]) 363 | 364 | def fuseforward(self, x): 365 | # x = x.copy() # for profiling 366 | z = [] # inference output 367 | self.training |= self.export 368 | for i in range(self.nl): 369 | x[i] = self.m[i](x[i]) # conv 370 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 371 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 372 | 373 | if not self.training: # inference 374 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 375 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 376 | 377 | y = x[i].sigmoid() 378 | if not torch.onnx.is_in_onnx_export(): 379 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 380 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 381 | else: 382 | xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 383 | wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].data # wh 384 | y = torch.cat((xy, wh, y[..., 4:]), -1) 385 | z.append(y.view(bs, -1, self.no)) 386 | 387 | if self.training: 388 | out = x 389 | elif self.end2end: 390 | out = torch.cat(z, 1) 391 | elif self.include_nms: 392 | z = self.convert(z) 393 | out = (z, ) 394 | elif self.concat: 395 | out = torch.cat(z, 1) 396 | else: 397 | out = (torch.cat(z, 1), x) 398 | 399 | return out 400 | 401 | def fuse(self): 402 | print("IAuxDetect.fuse") 403 | # fuse ImplicitA and Convolution 404 | for i in range(len(self.m)): 405 | c1,c2,_,_ = self.m[i].weight.shape 406 | c1_,c2_, _,_ = self.ia[i].implicit.shape 407 | self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1) 408 | 409 | # fuse ImplicitM and Convolution 410 | for i in range(len(self.m)): 411 | c1,c2, _,_ = self.im[i].implicit.shape 412 | self.m[i].bias *= self.im[i].implicit.reshape(c2) 413 | self.m[i].weight *= self.im[i].implicit.transpose(0,1) 414 | 415 | @staticmethod 416 | def _make_grid(nx=20, ny=20): 417 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) 418 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() 419 | 420 | def convert(self, z): 421 | z = torch.cat(z, 1) 422 | box = z[:, :, :4] 423 | conf = z[:, :, 4:5] 424 | score = z[:, :, 5:] 425 | score *= conf 426 | convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], 427 | dtype=torch.float32, 428 | device=z.device) 429 | box @= convert_matrix 430 | return (box, score) 431 | 432 | 433 | class IBin(nn.Module): 434 | stride = None # strides computed during build 435 | export = False # onnx export 436 | 437 | def __init__(self, nc=80, anchors=(), ch=(), bin_count=21): # detection layer 438 | super(IBin, self).__init__() 439 | self.nc = nc # number of classes 440 | self.bin_count = bin_count 441 | 442 | self.w_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0) 443 | self.h_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0) 444 | # classes, x,y,obj 445 | self.no = nc + 3 + \ 446 | self.w_bin_sigmoid.get_length() + self.h_bin_sigmoid.get_length() # w-bce, h-bce 447 | # + self.x_bin_sigmoid.get_length() + self.y_bin_sigmoid.get_length() 448 | 449 | self.nl = len(anchors) # number of detection layers 450 | self.na = len(anchors[0]) // 2 # number of anchors 451 | 
self.grid = [torch.zeros(1)] * self.nl # init grid 452 | a = torch.tensor(anchors).float().view(self.nl, -1, 2) 453 | self.register_buffer('anchors', a) # shape(nl,na,2) 454 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) 455 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv 456 | 457 | self.ia = nn.ModuleList(ImplicitA(x) for x in ch) 458 | self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch) 459 | 460 | def forward(self, x): 461 | 462 | #self.x_bin_sigmoid.use_fw_regression = True 463 | #self.y_bin_sigmoid.use_fw_regression = True 464 | self.w_bin_sigmoid.use_fw_regression = True 465 | self.h_bin_sigmoid.use_fw_regression = True 466 | 467 | # x = x.copy() # for profiling 468 | z = [] # inference output 469 | self.training |= self.export 470 | for i in range(self.nl): 471 | x[i] = self.m[i](self.ia[i](x[i])) # conv 472 | x[i] = self.im[i](x[i]) 473 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 474 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 475 | 476 | if not self.training: # inference 477 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 478 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 479 | 480 | y = x[i].sigmoid() 481 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy 482 | #y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 483 | 484 | 485 | #px = (self.x_bin_sigmoid.forward(y[..., 0:12]) + self.grid[i][..., 0]) * self.stride[i] 486 | #py = (self.y_bin_sigmoid.forward(y[..., 12:24]) + self.grid[i][..., 1]) * self.stride[i] 487 | 488 | pw = self.w_bin_sigmoid.forward(y[..., 2:24]) * self.anchor_grid[i][..., 0] 489 | ph = self.h_bin_sigmoid.forward(y[..., 24:46]) * self.anchor_grid[i][..., 1] 490 | 491 | #y[..., 0] = px 492 | #y[..., 1] = py 493 | y[..., 2] = pw 494 | y[..., 3] = ph 495 | 496 | y = torch.cat((y[..., 0:4], y[..., 46:]), dim=-1) 497 | 498 | z.append(y.view(bs, -1, y.shape[-1])) 499 | 500 | return x if self.training else (torch.cat(z, 1), x) 501 | 502 | @staticmethod 503 | def _make_grid(nx=20, ny=20): 504 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) 505 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() 506 | 507 | 508 | class Model(nn.Module): 509 | def __init__(self, cfg='yolor-csp-c.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes 510 | super(Model, self).__init__() 511 | self.traced = False 512 | if isinstance(cfg, dict): 513 | self.yaml = cfg # model dict 514 | else: # is *.yaml 515 | import yaml # for torch hub 516 | self.yaml_file = Path(cfg).name 517 | with open(cfg) as f: 518 | self.yaml = yaml.load(f, Loader=yaml.SafeLoader) # model dict 519 | 520 | # Define model 521 | ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels 522 | if nc and nc != self.yaml['nc']: 523 | logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") 524 | self.yaml['nc'] = nc # override yaml value 525 | if anchors: 526 | logger.info(f'Overriding model.yaml anchors with anchors={anchors}') 527 | self.yaml['anchors'] = round(anchors) # override yaml value 528 | self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist 529 | self.names = [str(i) for i in range(self.yaml['nc'])] # default names 530 | # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))]) 531 | 532 | # Build strides, anchors 533 | m = self.model[-1] # Detect() 534 | if isinstance(m, 
Detect): 535 | s = 256 # 2x min stride 536 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward 537 | check_anchor_order(m) 538 | m.anchors /= m.stride.view(-1, 1, 1) 539 | self.stride = m.stride 540 | self._initialize_biases() # only run once 541 | # print('Strides: %s' % m.stride.tolist()) 542 | if isinstance(m, IDetect): 543 | s = 256 # 2x min stride 544 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward 545 | check_anchor_order(m) 546 | m.anchors /= m.stride.view(-1, 1, 1) 547 | self.stride = m.stride 548 | self._initialize_biases() # only run once 549 | # print('Strides: %s' % m.stride.tolist()) 550 | if isinstance(m, IAuxDetect): 551 | s = 256 # 2x min stride 552 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))[:4]]) # forward 553 | #print(m.stride) 554 | check_anchor_order(m) 555 | m.anchors /= m.stride.view(-1, 1, 1) 556 | self.stride = m.stride 557 | self._initialize_aux_biases() # only run once 558 | # print('Strides: %s' % m.stride.tolist()) 559 | if isinstance(m, IBin): 560 | s = 256 # 2x min stride 561 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward 562 | check_anchor_order(m) 563 | m.anchors /= m.stride.view(-1, 1, 1) 564 | self.stride = m.stride 565 | self._initialize_biases_bin() # only run once 566 | # print('Strides: %s' % m.stride.tolist()) 567 | if isinstance(m, IKeypoint): 568 | s = 256 # 2x min stride 569 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward 570 | check_anchor_order(m) 571 | m.anchors /= m.stride.view(-1, 1, 1) 572 | self.stride = m.stride 573 | self._initialize_biases_kpt() # only run once 574 | # print('Strides: %s' % m.stride.tolist()) 575 | 576 | # Init weights, biases 577 | initialize_weights(self) 578 | self.info() 579 | logger.info('') 580 | 581 | def forward(self, x, augment=False, profile=False): 582 | if augment: 583 | img_size = x.shape[-2:] # height, width 584 | s = [1, 0.83, 0.67] # scales 585 | f = [None, 3, None] # flips (2-ud, 3-lr) 586 | y = [] # outputs 587 | for si, fi in zip(s, f): 588 | xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) 589 | yi = self.forward_once(xi)[0] # forward 590 | # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save 591 | yi[..., :4] /= si # de-scale 592 | if fi == 2: 593 | yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud 594 | elif fi == 3: 595 | yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr 596 | y.append(yi) 597 | return torch.cat(y, 1), None # augmented inference, train 598 | else: 599 | return self.forward_once(x, profile) # single-scale inference, train 600 | 601 | def forward_once(self, x, profile=False): 602 | y, dt = [], [] # outputs 603 | for m in self.model: 604 | if m.f != -1: # if not from previous layer 605 | x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers 606 | 607 | if not hasattr(self, 'traced'): 608 | self.traced=False 609 | 610 | if self.traced: 611 | if isinstance(m, Detect) or isinstance(m, IDetect) or isinstance(m, IAuxDetect) or isinstance(m, IKeypoint): 612 | break 613 | 614 | if profile: 615 | c = isinstance(m, (Detect, IDetect, IAuxDetect, IBin)) 616 | o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS 617 | for _ in range(10): 618 | m(x.copy() if c else x) 619 | t 
= time_synchronized() 620 | for _ in range(10): 621 | m(x.copy() if c else x) 622 | dt.append((time_synchronized() - t) * 100) 623 | print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type)) 624 | 625 | x = m(x) # run 626 | 627 | y.append(x if m.i in self.save else None) # save output 628 | 629 | if profile: 630 | print('%.1fms total' % sum(dt)) 631 | return x 632 | 633 | def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency 634 | # https://arxiv.org/abs/1708.02002 section 3.3 635 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 636 | m = self.model[-1] # Detect() module 637 | for mi, s in zip(m.m, m.stride): # from 638 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) 639 | b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) 640 | b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls 641 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) 642 | 643 | def _initialize_aux_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency 644 | # https://arxiv.org/abs/1708.02002 section 3.3 645 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 646 | m = self.model[-1] # Detect() module 647 | for mi, mi2, s in zip(m.m, m.m2, m.stride): # from 648 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) 649 | b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) 650 | b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls 651 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) 652 | b2 = mi2.bias.view(m.na, -1) # conv.bias(255) to (3,85) 653 | b2.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) 654 | b2.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls 655 | mi2.bias = torch.nn.Parameter(b2.view(-1), requires_grad=True) 656 | 657 | def _initialize_biases_bin(self, cf=None): # initialize biases into Detect(), cf is class frequency 658 | # https://arxiv.org/abs/1708.02002 section 3.3 659 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 660 | m = self.model[-1] # Bin() module 661 | bc = m.bin_count 662 | for mi, s in zip(m.m, m.stride): # from 663 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) 664 | old = b[:, (0,1,2,bc+3)].data 665 | obj_idx = 2*bc+4 666 | b[:, :obj_idx].data += math.log(0.6 / (bc + 1 - 0.99)) 667 | b[:, obj_idx].data += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) 668 | b[:, (obj_idx+1):].data += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls 669 | b[:, (0,1,2,bc+3)].data = old 670 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) 671 | 672 | def _initialize_biases_kpt(self, cf=None): # initialize biases into Detect(), cf is class frequency 673 | # https://arxiv.org/abs/1708.02002 section 3.3 674 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 
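# --- Editor's note: illustrative arithmetic, not part of the original file. ---
# The bias priors set below follow the focal-loss initialization
# (arXiv:1708.02002, section 3.3): the objectness logit starts at
# log(8 / (640/s)**2), i.e. a prior of roughly 8 objects per 640x640 image
# spread over the (640/s)**2 grid cells of a stride-s level. For example,
# assuming strides (8, 16, 32) and nc=80 classes:
#   s=8:  log(8 / 80**2) = log(0.00125) ~ -6.68
#   s=32: log(8 / 20**2) = log(0.02)    ~ -3.91  (sigmoid(-3.91) ~ 0.02)
# and each class logit starts near log(0.6 / (80 - 0.99)) ~ -4.88.
# --- end editor's note ---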
675 | m = self.model[-1] # Detect() module 676 | for mi, s in zip(m.m, m.stride): # from 677 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) 678 | b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) 679 | b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls 680 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) 681 | 682 | def _print_biases(self): 683 | m = self.model[-1] # Detect() module 684 | for mi in m.m: # from 685 | b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) 686 | print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) 687 | 688 | # def _print_weights(self): 689 | # for m in self.model.modules(): 690 | # if type(m) is Bottleneck: 691 | # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights 692 | 693 | def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers 694 | print('Fusing layers... ') 695 | for m in self.model.modules(): 696 | if isinstance(m, RepConv): 697 | #print(f" fuse_repvgg_block") 698 | m.fuse_repvgg_block() 699 | elif isinstance(m, RepConv_OREPA): 700 | #print(f" switch_to_deploy") 701 | m.switch_to_deploy() 702 | elif type(m) is Conv and hasattr(m, 'bn'): 703 | m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv 704 | delattr(m, 'bn') # remove batchnorm 705 | m.forward = m.fuseforward # update forward 706 | elif isinstance(m, (IDetect, IAuxDetect)): 707 | m.fuse() 708 | m.forward = m.fuseforward 709 | self.info() 710 | return self 711 | 712 | def nms(self, mode=True): # add or remove NMS module 713 | present = type(self.model[-1]) is NMS # last layer is NMS 714 | if mode and not present: 715 | print('Adding NMS... ') 716 | m = NMS() # module 717 | m.f = -1 # from 718 | m.i = self.model[-1].i + 1 # index 719 | self.model.add_module(name='%s' % m.i, module=m) # add 720 | self.eval() 721 | elif not mode and present: 722 | print('Removing NMS... ') 723 | self.model = self.model[:-1] # remove 724 | return self 725 | 726 | def autoshape(self): # add autoShape module 727 | print('Adding autoShape... 
') 728 | m = autoShape(self) # wrap model 729 | copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes 730 | return m 731 | 732 | def info(self, verbose=False, img_size=640): # print model information 733 | model_info(self, verbose, img_size) 734 | 735 | 736 | def parse_model(d, ch): # model_dict, input_channels(3) 737 | logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments')) 738 | anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] 739 | na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors 740 | no = na * (nc + 5) # number of outputs = anchors * (classes + 5) 741 | 742 | layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out 743 | for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args 744 | m = eval(m) if isinstance(m, str) else m # eval strings 745 | for j, a in enumerate(args): 746 | try: 747 | args[j] = eval(a) if isinstance(a, str) else a # eval strings 748 | except: 749 | pass 750 | 751 | n = max(round(n * gd), 1) if n > 1 else n # depth gain 752 | if m in [nn.Conv2d, Conv, RobustConv, RobustConv2, DWConv, GhostConv, RepConv, RepConv_OREPA, DownC, 753 | SPP, SPPF, SPPCSPC, GhostSPPCSPC, MixConv2d, Focus, Stem, GhostStem, CrossConv, 754 | Bottleneck, BottleneckCSPA, BottleneckCSPB, BottleneckCSPC, 755 | RepBottleneck, RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC, 756 | Res, ResCSPA, ResCSPB, ResCSPC, 757 | RepRes, RepResCSPA, RepResCSPB, RepResCSPC, 758 | ResX, ResXCSPA, ResXCSPB, ResXCSPC, 759 | RepResX, RepResXCSPA, RepResXCSPB, RepResXCSPC, 760 | Ghost, GhostCSPA, GhostCSPB, GhostCSPC, 761 | SwinTransformerBlock, STCSPA, STCSPB, STCSPC, 762 | SwinTransformer2Block, ST2CSPA, ST2CSPB, ST2CSPC]: 763 | c1, c2 = ch[f], args[0] 764 | if c2 != no: # if not output 765 | c2 = make_divisible(c2 * gw, 8) 766 | 767 | args = [c1, c2, *args[1:]] 768 | if m in [DownC, SPPCSPC, GhostSPPCSPC, 769 | BottleneckCSPA, BottleneckCSPB, BottleneckCSPC, 770 | RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC, 771 | ResCSPA, ResCSPB, ResCSPC, 772 | RepResCSPA, RepResCSPB, RepResCSPC, 773 | ResXCSPA, ResXCSPB, ResXCSPC, 774 | RepResXCSPA, RepResXCSPB, RepResXCSPC, 775 | GhostCSPA, GhostCSPB, GhostCSPC, 776 | STCSPA, STCSPB, STCSPC, 777 | ST2CSPA, ST2CSPB, ST2CSPC]: 778 | args.insert(2, n) # number of repeats 779 | n = 1 780 | elif m is nn.BatchNorm2d: 781 | args = [ch[f]] 782 | elif m is Concat: 783 | c2 = sum([ch[x] for x in f]) 784 | elif m is Chuncat: 785 | c2 = sum([ch[x] for x in f]) 786 | elif m is Shortcut: 787 | c2 = ch[f[0]] 788 | elif m is Foldcut: 789 | c2 = ch[f] // 2 790 | elif m in [Detect, IDetect, IAuxDetect, IBin, IKeypoint]: 791 | args.append([ch[x] for x in f]) 792 | if isinstance(args[1], int): # number of anchors 793 | args[1] = [list(range(args[1] * 2))] * len(f) 794 | elif m is ReOrg: 795 | c2 = ch[f] * 4 796 | elif m is Contract: 797 | c2 = ch[f] * args[0] ** 2 798 | elif m is Expand: 799 | c2 = ch[f] // args[0] ** 2 800 | else: 801 | c2 = ch[f] 802 | 803 | m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module 804 | t = str(m)[8:-2].replace('__main__.', '') # module type 805 | np = sum([x.numel() for x in m_.parameters()]) # number params 806 | m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params 807 | logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print 808 | save.extend(x % i for x 
in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist 809 | layers.append(m_) 810 | if i == 0: 811 | ch = [] 812 | ch.append(c2) 813 | return nn.Sequential(*layers), sorted(save) 814 | 815 | 816 | if __name__ == '__main__': 817 | parser = argparse.ArgumentParser() 818 | parser.add_argument('--cfg', type=str, default='yolor-csp-c.yaml', help='model.yaml') 819 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') 820 | parser.add_argument('--profile', action='store_true', help='profile model speed') 821 | opt = parser.parse_args() 822 | opt.cfg = check_file(opt.cfg) # check file 823 | set_logging() 824 | device = select_device(opt.device) 825 | 826 | # Create model 827 | model = Model(opt.cfg).to(device) 828 | model.train() 829 | 830 | if opt.profile: 831 | img = torch.rand(1, 3, 640, 640).to(device) 832 | y = model(img, profile=True) 833 | 834 | # Profile 835 | # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) 836 | # y = model(img, profile=True) 837 | 838 | # Tensorboard 839 | # from torch.utils.tensorboard import SummaryWriter 840 | # tb_writer = SummaryWriter() 841 | # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/") 842 | # tb_writer.add_graph(model.model, img) # add model to tensorboard 843 | # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard 844 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | # Base ---------------------------------------- 2 | matplotlib>=3.2.2 3 | numpy>=1.18.5 4 | opencv-python>=4.1.1 5 | Pillow>=7.1.2 6 | PyYAML>=5.3.1 7 | requests>=2.23.0 8 | scipy>=1.4.1 9 | torch>=1.7.0,!=1.12.0 10 | torchvision>=0.8.1,!=0.13.0 11 | tqdm>=4.41.0 12 | protobuf<4.21.3 13 | 14 | # Logging ------------------------------------- 15 | tensorboard>=2.4.1 16 | # wandb 17 | 18 | # Plotting ------------------------------------ 19 | pandas>=1.1.4 20 | seaborn>=0.11.0 21 | 22 | # Extras -------------------------------------- 23 | ipython # interactive notebook 24 | psutil # system utilization 25 | thop # FLOPs computation 26 | -------------------------------------------------------------------------------- /utils/__init__.py: -------------------------------------------------------------------------------- 1 | # init -------------------------------------------------------------------------------- /utils/activations.py: -------------------------------------------------------------------------------- 1 | # Activation functions 2 | 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | 7 | 8 | # SiLU https://arxiv.org/pdf/1606.08415.pdf ---------------------------------------------------------------------------- 9 | class SiLU(nn.Module): # export-friendly version of nn.SiLU() 10 | @staticmethod 11 | def forward(x): 12 | return x * torch.sigmoid(x) 13 | 14 | 15 | class Hardswish(nn.Module): # export-friendly version of nn.Hardswish() 16 | @staticmethod 17 | def forward(x): 18 | # return x * F.hardsigmoid(x) # for torchscript and CoreML 19 | return x * F.hardtanh(x + 3, 0., 6.) / 6. 
# for torchscript, CoreML and ONNX 20 | 21 | 22 | class MemoryEfficientSwish(nn.Module): 23 | class F(torch.autograd.Function): 24 | @staticmethod 25 | def forward(ctx, x): 26 | ctx.save_for_backward(x) 27 | return x * torch.sigmoid(x) 28 | 29 | @staticmethod 30 | def backward(ctx, grad_output): 31 | x = ctx.saved_tensors[0] 32 | sx = torch.sigmoid(x) 33 | return grad_output * (sx * (1 + x * (1 - sx))) 34 | 35 | def forward(self, x): 36 | return self.F.apply(x) 37 | 38 | 39 | # Mish https://github.com/digantamisra98/Mish -------------------------------------------------------------------------- 40 | class Mish(nn.Module): 41 | @staticmethod 42 | def forward(x): 43 | return x * F.softplus(x).tanh() 44 | 45 | 46 | class MemoryEfficientMish(nn.Module): 47 | class F(torch.autograd.Function): 48 | @staticmethod 49 | def forward(ctx, x): 50 | ctx.save_for_backward(x) 51 | return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x))) 52 | 53 | @staticmethod 54 | def backward(ctx, grad_output): 55 | x = ctx.saved_tensors[0] 56 | sx = torch.sigmoid(x) 57 | fx = F.softplus(x).tanh() 58 | return grad_output * (fx + x * sx * (1 - fx * fx)) 59 | 60 | def forward(self, x): 61 | return self.F.apply(x) 62 | 63 | 64 | # FReLU https://arxiv.org/abs/2007.11824 ------------------------------------------------------------------------------- 65 | class FReLU(nn.Module): 66 | def __init__(self, c1, k=3): # ch_in, kernel 67 | super().__init__() 68 | self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False) 69 | self.bn = nn.BatchNorm2d(c1) 70 | 71 | def forward(self, x): 72 | return torch.max(x, self.bn(self.conv(x))) 73 | -------------------------------------------------------------------------------- /utils/add_nms.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import onnx 3 | from onnx import shape_inference 4 | try: 5 | import onnx_graphsurgeon as gs 6 | except Exception as e: 7 | print('Import onnx_graphsurgeon failure: %s' % e) 8 | 9 | import logging 10 | 11 | LOGGER = logging.getLogger(__name__) 12 | 13 | class RegisterNMS(object): 14 | def __init__( 15 | self, 16 | onnx_model_path: str, 17 | precision: str = "fp32", 18 | ): 19 | 20 | self.graph = gs.import_onnx(onnx.load(onnx_model_path)) 21 | assert self.graph 22 | LOGGER.info("ONNX graph created successfully") 23 | # Fold constants via ONNX-GS that PyTorch2ONNX may have missed 24 | self.graph.fold_constants() 25 | self.precision = precision 26 | self.batch_size = 1 27 | def infer(self): 28 | """ 29 | Sanitize the graph by cleaning any unconnected nodes, do a topological resort, 30 | and fold constant inputs values. When possible, run shape inference on the 31 | ONNX graph to determine tensor shapes. 32 | """ 33 | for _ in range(3): 34 | count_before = len(self.graph.nodes) 35 | 36 | self.graph.cleanup().toposort() 37 | try: 38 | for node in self.graph.nodes: 39 | for o in node.outputs: 40 | o.shape = None 41 | model = gs.export_onnx(self.graph) 42 | model = shape_inference.infer_shapes(model) 43 | self.graph = gs.import_onnx(model) 44 | except Exception as e: 45 | LOGGER.info(f"Shape inference could not be performed at this time:\n{e}") 46 | try: 47 | self.graph.fold_constants(fold_shapes=True) 48 | except TypeError as e: 49 | LOGGER.error( 50 | "This version of ONNX GraphSurgeon does not support folding shapes, " 51 | f"please upgrade your onnx_graphsurgeon module. 
Error:\n{e}" 52 | ) 53 | raise 54 | 55 | count_after = len(self.graph.nodes) 56 | if count_before == count_after: 57 | # No new folding occurred in this iteration, so we can stop for now. 58 | break 59 | 60 | def save(self, output_path): 61 | """ 62 | Save the ONNX model to the given location. 63 | Args: 64 | output_path: Path pointing to the location where to write 65 | out the updated ONNX model. 66 | """ 67 | self.graph.cleanup().toposort() 68 | model = gs.export_onnx(self.graph) 69 | onnx.save(model, output_path) 70 | LOGGER.info(f"Saved ONNX model to {output_path}") 71 | 72 | def register_nms( 73 | self, 74 | *, 75 | score_thresh: float = 0.25, 76 | nms_thresh: float = 0.45, 77 | detections_per_img: int = 100, 78 | ): 79 | """ 80 | Register the ``EfficientNMS_TRT`` plugin node. 81 | NMS expects these shapes for its input tensors: 82 | - box_net: [batch_size, number_boxes, 4] 83 | - class_net: [batch_size, number_boxes, number_labels] 84 | Args: 85 | score_thresh (float): The scalar threshold for score (low scoring boxes are removed). 86 | nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU 87 | overlap with previously selected boxes are removed). 88 | detections_per_img (int): Number of best detections to keep after NMS. 89 | """ 90 | 91 | self.infer() 92 | # Find the concat node at the end of the network 93 | op_inputs = self.graph.outputs 94 | op = "EfficientNMS_TRT" 95 | attrs = { 96 | "plugin_version": "1", 97 | "background_class": -1, # no background class 98 | "max_output_boxes": detections_per_img, 99 | "score_threshold": score_thresh, 100 | "iou_threshold": nms_thresh, 101 | "score_activation": False, 102 | "box_coding": 0, 103 | } 104 | 105 | if self.precision == "fp32": 106 | dtype_output = np.float32 107 | elif self.precision == "fp16": 108 | dtype_output = np.float16 109 | else: 110 | raise NotImplementedError(f"Currently not supports precision: {self.precision}") 111 | 112 | # NMS Outputs 113 | output_num_detections = gs.Variable( 114 | name="num_dets", 115 | dtype=np.int32, 116 | shape=[self.batch_size, 1], 117 | ) # A scalar indicating the number of valid detections per batch image. 118 | output_boxes = gs.Variable( 119 | name="det_boxes", 120 | dtype=dtype_output, 121 | shape=[self.batch_size, detections_per_img, 4], 122 | ) 123 | output_scores = gs.Variable( 124 | name="det_scores", 125 | dtype=dtype_output, 126 | shape=[self.batch_size, detections_per_img], 127 | ) 128 | output_labels = gs.Variable( 129 | name="det_classes", 130 | dtype=np.int32, 131 | shape=[self.batch_size, detections_per_img], 132 | ) 133 | 134 | op_outputs = [output_num_detections, output_boxes, output_scores, output_labels] 135 | 136 | # Create the NMS Plugin node with the selected inputs. The outputs of the node will also 137 | # become the final outputs of the graph. 138 | self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs) 139 | LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}") 140 | 141 | self.graph.outputs = op_outputs 142 | 143 | self.infer() 144 | 145 | def save(self, output_path): 146 | """ 147 | Save the ONNX model to the given location. 148 | Args: 149 | output_path: Path pointing to the location where to write 150 | out the updated ONNX model. 
151 | """ 152 | self.graph.cleanup().toposort() 153 | model = gs.export_onnx(self.graph) 154 | onnx.save(model, output_path) 155 | LOGGER.info(f"Saved ONNX model to {output_path}") 156 | -------------------------------------------------------------------------------- /utils/autoanchor.py: -------------------------------------------------------------------------------- 1 | # Auto-anchor utils 2 | 3 | import numpy as np 4 | import torch 5 | import yaml 6 | from scipy.cluster.vq import kmeans 7 | from tqdm import tqdm 8 | 9 | from utils.general import colorstr 10 | 11 | 12 | def check_anchor_order(m): 13 | # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary 14 | a = m.anchor_grid.prod(-1).view(-1) # anchor area 15 | da = a[-1] - a[0] # delta a 16 | ds = m.stride[-1] - m.stride[0] # delta s 17 | if da.sign() != ds.sign(): # same order 18 | print('Reversing anchor order') 19 | m.anchors[:] = m.anchors.flip(0) 20 | m.anchor_grid[:] = m.anchor_grid.flip(0) 21 | 22 | 23 | def check_anchors(dataset, model, thr=4.0, imgsz=640): 24 | # Check anchor fit to data, recompute if necessary 25 | prefix = colorstr('autoanchor: ') 26 | print(f'\n{prefix}Analyzing anchors... ', end='') 27 | m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() 28 | shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) 29 | scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale 30 | wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh 31 | 32 | def metric(k): # compute metric 33 | r = wh[:, None] / k[None] 34 | x = torch.min(r, 1. / r).min(2)[0] # ratio metric 35 | best = x.max(1)[0] # best_x 36 | aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold 37 | bpr = (best > 1. / thr).float().mean() # best possible recall 38 | return bpr, aat 39 | 40 | anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors 41 | bpr, aat = metric(anchors) 42 | print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='') 43 | if bpr < 0.98: # threshold to recompute 44 | print('. Attempting to improve anchors, please wait...') 45 | na = m.anchor_grid.numel() // 2 # number of anchors 46 | try: 47 | anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) 48 | except Exception as e: 49 | print(f'{prefix}ERROR: {e}') 50 | new_bpr = metric(anchors)[0] 51 | if new_bpr > bpr: # replace anchors 52 | anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) 53 | m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference 54 | check_anchor_order(m) 55 | m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss 56 | print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.') 57 | else: 58 | print(f'{prefix}Original anchors better than new anchors. 
62 | def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
63 | """ Creates kmeans-evolved anchors from training dataset
64 |
65 | Arguments:
66 | path: path to dataset *.yaml, or a loaded dataset
67 | n: number of anchors
68 | img_size: image size used for training
69 | thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
70 | gen: generations to evolve anchors using genetic algorithm
71 | verbose: print all results
72 |
73 | Return:
74 | k: kmeans evolved anchors
75 |
76 | Usage:
77 | from utils.autoanchor import *; _ = kmean_anchors()
78 | """
79 | thr = 1. / thr
80 | prefix = colorstr('autoanchor: ')
81 |
82 | def metric(k, wh): # compute metrics
83 | r = wh[:, None] / k[None]
84 | x = torch.min(r, 1. / r).min(2)[0] # ratio metric
85 | # x = wh_iou(wh, torch.tensor(k)) # iou metric
86 | return x, x.max(1)[0] # x, best_x
87 |
88 | def anchor_fitness(k): # mutation fitness
89 | _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
90 | return (best * (best > thr).float()).mean() # fitness
91 |
92 | def print_results(k):
93 | k = k[np.argsort(k.prod(1))] # sort small to large
94 | x, best = metric(k, wh0)
95 | bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
96 | print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
97 | print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
98 | f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
99 | for i, x in enumerate(k):
100 | print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg
101 | return k
102 |
103 | if isinstance(path, str): # *.yaml file
104 | with open(path) as f:
105 | data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict
106 | from utils.datasets import LoadImagesAndLabels
107 | dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
108 | else:
109 | dataset = path # dataset
110 |
111 | # Get label wh
112 | shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
113 | wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
114 |
115 | # Filter
116 | i = (wh0 < 3.0).any(1).sum()
117 | if i:
118 | print(f'{prefix}WARNING: Extremely small objects found. 
{i} of {len(wh0)} labels are < 3 pixels in size.') 119 | wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels 120 | # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 121 | 122 | # Kmeans calculation 123 | print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...') 124 | s = wh.std(0) # sigmas for whitening 125 | k, dist = kmeans(wh / s, n, iter=30) # points, mean distance 126 | assert len(k) == n, print(f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}') 127 | k *= s 128 | wh = torch.tensor(wh, dtype=torch.float32) # filtered 129 | wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered 130 | k = print_results(k) 131 | 132 | # Plot 133 | # k, d = [None] * 20, [None] * 20 134 | # for i in tqdm(range(1, 21)): 135 | # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance 136 | # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) 137 | # ax = ax.ravel() 138 | # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') 139 | # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh 140 | # ax[0].hist(wh[wh[:, 0]<100, 0],400) 141 | # ax[1].hist(wh[wh[:, 1]<100, 1],400) 142 | # fig.savefig('wh.png', dpi=200) 143 | 144 | # Evolve 145 | npr = np.random 146 | f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma 147 | pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar 148 | for _ in pbar: 149 | v = np.ones(sh) 150 | while (v == 1).all(): # mutate until a change occurs (prevent duplicates) 151 | v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) 152 | kg = (k.copy() * v).clip(min=2.0) 153 | fg = anchor_fitness(kg) 154 | if fg > f: 155 | f, k = fg, kg.copy() 156 | pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' 157 | if verbose: 158 | print_results(k) 159 | 160 | return print_results(k) 161 | -------------------------------------------------------------------------------- /utils/general.py: -------------------------------------------------------------------------------- 1 | # YOLOR general utils 2 | 3 | import glob 4 | import logging 5 | import math 6 | import os 7 | import platform 8 | import random 9 | import re 10 | import subprocess 11 | import time 12 | from pathlib import Path 13 | 14 | import cv2 15 | import numpy as np 16 | import pandas as pd 17 | import torch 18 | import torchvision 19 | import yaml 20 | 21 | from utils.google_utils import gsutil_getsize 22 | from utils.metrics import fitness 23 | from utils.torch_utils import init_torch_seeds 24 | 25 | # Settings 26 | torch.set_printoptions(linewidth=320, precision=5, profile='long') 27 | np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 28 | pd.options.display.max_columns = 10 29 | cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) 30 | os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads 31 | 32 | 33 | def set_logging(rank=-1): 34 | logging.basicConfig( 35 | format="%(message)s", 36 | level=logging.INFO if rank in [-1, 0] else logging.WARN) 37 | 38 | 39 | def init_seeds(seed=0): 40 | # Initialize random number generator (RNG) seeds 41 | random.seed(seed) 42 | np.random.seed(seed) 43 | init_torch_seeds(seed) 44 | 45 | 46 | def get_latest_run(search_dir='.'): 47 | # Return path to most recent 'last.pt' in /runs (i.e. 
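# the most recent training checkpoint; illustrative usage with hypothetical paths:
#   get_latest_run('runs/train')  ->  e.g. 'runs/train/exp3/weights/last.pt',
#   or '' when no last*.pt exists under search_dir; this is the checkpoint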
to --resume from) 48 | last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) 49 | return max(last_list, key=os.path.getctime) if last_list else '' 50 | 51 | 52 | def isdocker(): 53 | # Is environment a Docker container 54 | return Path('/workspace').exists() # or Path('/.dockerenv').exists() 55 | 56 | 57 | def emojis(str=''): 58 | # Return platform-dependent emoji-safe version of string 59 | return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str 60 | 61 | 62 | def check_online(): 63 | # Check internet connectivity 64 | import socket 65 | try: 66 | socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility 67 | return True 68 | except OSError: 69 | return False 70 | 71 | 72 | def check_git_status(): 73 | # Recommend 'git pull' if code is out of date 74 | print(colorstr('github: '), end='') 75 | try: 76 | assert Path('.git').exists(), 'skipping check (not a git repository)' 77 | assert not isdocker(), 'skipping check (Docker image)' 78 | assert check_online(), 'skipping check (offline)' 79 | 80 | cmd = 'git fetch && git config --get remote.origin.url' 81 | url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url 82 | branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out 83 | n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind 84 | if n > 0: 85 | s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \ 86 | f"Use 'git pull' to update or 'git clone {url}' to download latest." 87 | else: 88 | s = f'up to date with {url} ✅' 89 | print(emojis(s)) # emoji-safe 90 | except Exception as e: 91 | print(e) 92 | 93 | 94 | def check_requirements(requirements='requirements.txt', exclude=()): 95 | # Check installed dependencies meet requirements (pass *.txt file or list of packages) 96 | import pkg_resources as pkg 97 | prefix = colorstr('red', 'bold', 'requirements:') 98 | if isinstance(requirements, (str, Path)): # requirements.txt file 99 | file = Path(requirements) 100 | if not file.exists(): 101 | print(f"{prefix} {file.resolve()} not found, check failed.") 102 | return 103 | requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude] 104 | else: # list or tuple of packages 105 | requirements = [x for x in requirements if x not in exclude] 106 | 107 | n = 0 # number of package updates 108 | for r in requirements: 109 | try: 110 | pkg.require(r) 111 | except Exception as e: # DistributionNotFound or VersionConflict if requirements not met 112 | n += 1 113 | print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...") 114 | print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode()) 115 | 116 | if n: # if packages updated 117 | source = file.resolve() if 'file' in locals() else requirements 118 | s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ 119 | f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" 120 | print(emojis(s)) # emoji-safe 121 | 122 | 123 | def check_img_size(img_size, s=32): 124 | # Verify img_size is a multiple of stride s 125 | new_size = make_divisible(img_size, int(s)) # ceil gs-multiple 126 | if new_size != img_size: 127 | print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size)) 128 | return new_size 129 | 130 | 131 | def check_imshow(): 132 | # 
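# (Worked example for check_img_size above, hypothetical values:
#  check_img_size(640, s=32) -> 640, unchanged; check_img_size(641, s=32) -> 672,
#  since make_divisible rounds up: ceil(641 / 32) * 32 = 672.)
#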
Check if environment supports image displays 133 | try: 134 | assert not isdocker(), 'cv2.imshow() is disabled in Docker environments' 135 | cv2.imshow('test', np.zeros((1, 1, 3))) 136 | cv2.waitKey(1) 137 | cv2.destroyAllWindows() 138 | cv2.waitKey(1) 139 | return True 140 | except Exception as e: 141 | print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') 142 | return False 143 | 144 | 145 | def check_file(file): 146 | # Search for file if not found 147 | if Path(file).is_file() or file == '': 148 | return file 149 | else: 150 | files = glob.glob('./**/' + file, recursive=True) # find file 151 | assert len(files), f'File Not Found: {file}' # assert file was found 152 | assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique 153 | return files[0] # return file 154 | 155 | 156 | def check_dataset(dict): 157 | # Download dataset if not found locally 158 | val, s = dict.get('val'), dict.get('download') 159 | if val and len(val): 160 | val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path 161 | if not all(x.exists() for x in val): 162 | print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) 163 | if s and len(s): # download script 164 | print('Downloading %s ...' % s) 165 | if s.startswith('http') and s.endswith('.zip'): # URL 166 | f = Path(s).name # filename 167 | torch.hub.download_url_to_file(s, f) 168 | r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip 169 | else: # bash script 170 | r = os.system(s) 171 | print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value 172 | else: 173 | raise Exception('Dataset not found.') 174 | 175 | 176 | def make_divisible(x, divisor): 177 | # Returns x evenly divisible by divisor 178 | return math.ceil(x / divisor) * divisor 179 | 180 | 181 | def clean_str(s): 182 | # Cleans a string by replacing special characters with underscore _ 183 | return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) 184 | 185 | 186 | def one_cycle(y1=0.0, y2=1.0, steps=100): 187 | # lambda function for sinusoidal ramp from y1 to y2 188 | return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 189 | 190 | 191 | def colorstr(*input): 192 | # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. 
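# (illustrative calls, outputs derived from the color table below:
#  colorstr('red', 'bold', 'error') -> '\033[31m' + '\033[1m' + 'error' + '\033[0m';
#  with a single argument the 'blue', 'bold' default applies, i.e.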
colorstr('blue', 'hello world') 193 | *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string 194 | colors = {'black': '\033[30m', # basic colors 195 | 'red': '\033[31m', 196 | 'green': '\033[32m', 197 | 'yellow': '\033[33m', 198 | 'blue': '\033[34m', 199 | 'magenta': '\033[35m', 200 | 'cyan': '\033[36m', 201 | 'white': '\033[37m', 202 | 'bright_black': '\033[90m', # bright colors 203 | 'bright_red': '\033[91m', 204 | 'bright_green': '\033[92m', 205 | 'bright_yellow': '\033[93m', 206 | 'bright_blue': '\033[94m', 207 | 'bright_magenta': '\033[95m', 208 | 'bright_cyan': '\033[96m', 209 | 'bright_white': '\033[97m', 210 | 'end': '\033[0m', # misc 211 | 'bold': '\033[1m', 212 | 'underline': '\033[4m'} 213 | return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] 214 | 215 | 216 | def labels_to_class_weights(labels, nc=80): 217 | # Get class weights (inverse frequency) from training labels 218 | if labels[0] is None: # no labels loaded 219 | return torch.Tensor() 220 | 221 | labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO 222 | classes = labels[:, 0].astype(np.int) # labels = [class xywh] 223 | weights = np.bincount(classes, minlength=nc) # occurrences per class 224 | 225 | # Prepend gridpoint count (for uCE training) 226 | # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image 227 | # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start 228 | 229 | weights[weights == 0] = 1 # replace empty bins with 1 230 | weights = 1 / weights # number of targets per class 231 | weights /= weights.sum() # normalize 232 | return torch.from_numpy(weights) 233 | 234 | 235 | def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): 236 | # Produces image weights based on class_weights and image contents 237 | class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) 238 | image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) 239 | # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample 240 | return image_weights 241 | 242 | 243 | def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) 244 | # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ 245 | # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') 246 | # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') 247 | # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco 248 | # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet 249 | x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 250 | 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 251 | 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] 252 | return x 253 | 254 | 255 | def xyxy2xywh(x): 256 | # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right 257 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) 258 | y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center 259 | y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center 260 | y[:, 2] = x[:, 2] - x[:, 0] # width 261 | y[:, 3] = x[:, 3] - x[:, 1] # height 262 | return y 263 | 264 | 265 | def xywh2xyxy(x): 266 | # Convert nx4 boxes from [x, y, w, h] to 
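# the corner format (inverse of xyxy2xywh above). Worked example with a
#  hypothetical box: xywh [50, 50, 20, 10] -> xyxy [40, 45, 60, 55];
#  the target format is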
[x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right 267 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) 268 | y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x 269 | y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y 270 | y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x 271 | y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y 272 | return y 273 | 274 | 275 | def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0): 276 | # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right 277 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) 278 | y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x 279 | y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y 280 | y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x 281 | y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y 282 | return y 283 | 284 | 285 | def xyn2xy(x, w=640, h=640, padw=0, padh=0): 286 | # Convert normalized segments into pixel segments, shape (n,2) 287 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) 288 | y[:, 0] = w * x[:, 0] + padw # top left x 289 | y[:, 1] = h * x[:, 1] + padh # top left y 290 | return y 291 | 292 | 293 | def segment2box(segment, width=640, height=640): 294 | # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy) 295 | x, y = segment.T # segment xy 296 | inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height) 297 | x, y, = x[inside], y[inside] 298 | return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy 299 | 300 | 301 | def segments2boxes(segments): 302 | # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh) 303 | boxes = [] 304 | for s in segments: 305 | x, y = s.T # segment xy 306 | boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy 307 | return xyxy2xywh(np.array(boxes)) # cls, xywh 308 | 309 | 310 | def resample_segments(segments, n=1000): 311 | # Up-sample an (n,2) segment 312 | for i, s in enumerate(segments): 313 | x = np.linspace(0, len(s) - 1, n) 314 | xp = np.arange(len(s)) 315 | segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy 316 | return segments 317 | 318 | 319 | def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): 320 | # Rescale coords (xyxy) from img1_shape to img0_shape 321 | if ratio_pad is None: # calculate from img0_shape 322 | gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new 323 | pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding 324 | else: 325 | gain = ratio_pad[0][0] 326 | pad = ratio_pad[1] 327 | 328 | coords[:, [0, 2]] -= pad[0] # x padding 329 | coords[:, [1, 3]] -= pad[1] # y padding 330 | coords[:, :4] /= gain 331 | clip_coords(coords, img0_shape) 332 | return coords 333 | 334 | 335 | def clip_coords(boxes, img_shape): 336 | # Clip bounding xyxy bounding boxes to image shape (height, width) 337 | boxes[:, 0].clamp_(0, img_shape[1]) # x1 338 | boxes[:, 1].clamp_(0, img_shape[0]) # y1 339 | boxes[:, 2].clamp_(0, img_shape[1]) # x2 340 | boxes[:, 3].clamp_(0, img_shape[0]) # y2 341 | 342 | 343 | def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7): 344 | # Returns the IoU of box1 to box2. 
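# (Worked example with hypothetical boxes b1 = [0, 0, 10, 10], b2 = [5, 5, 15, 15]:
#  inter = 5 * 5 = 25, union = 100 + 100 - 25 = 175, IoU = 25/175 ≈ 0.143,
#  ignoring eps. Shape convention: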
box1 is 4, box2 is nx4 345 | box2 = box2.T 346 | 347 | # Get the coordinates of bounding boxes 348 | if x1y1x2y2: # x1, y1, x2, y2 = box1 349 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] 350 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] 351 | else: # transform from xywh to xyxy 352 | b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 353 | b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 354 | b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 355 | b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 356 | 357 | # Intersection area 358 | inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ 359 | (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) 360 | 361 | # Union Area 362 | w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps 363 | w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps 364 | union = w1 * h1 + w2 * h2 - inter + eps 365 | 366 | iou = inter / union 367 | 368 | if GIoU or DIoU or CIoU: 369 | cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width 370 | ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height 371 | if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 372 | c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared 373 | rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + 374 | (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared 375 | if DIoU: 376 | return iou - rho2 / c2 # DIoU 377 | elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 378 | v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2) 379 | with torch.no_grad(): 380 | alpha = v / (v - iou + (1 + eps)) 381 | return iou - (rho2 / c2 + v * alpha) # CIoU 382 | else: # GIoU https://arxiv.org/pdf/1902.09630.pdf 383 | c_area = cw * ch + eps # convex area 384 | return iou - (c_area - union) / c_area # GIoU 385 | else: 386 | return iou # IoU 387 | 388 | 389 | 390 | 391 | def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9): 392 | # Returns the IoU of box1 to box2. 
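# (alpha-IoU variant of bbox_iou above, raising the IoU terms to the power alpha;
#  continuing the example, a plain GIoU with enclosing box [0, 0, 15, 15] of area
#  225 would give 25/175 - (225 - 175)/225 ≈ -0.079. Shape convention: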
box1 is 4, box2 is nx4 393 | box2 = box2.T 394 | 395 | # Get the coordinates of bounding boxes 396 | if x1y1x2y2: # x1, y1, x2, y2 = box1 397 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] 398 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] 399 | else: # transform from xywh to xyxy 400 | b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 401 | b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 402 | b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 403 | b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 404 | 405 | # Intersection area 406 | inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ 407 | (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) 408 | 409 | # Union Area 410 | w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps 411 | w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps 412 | union = w1 * h1 + w2 * h2 - inter + eps 413 | 414 | # change iou into pow(iou+eps) 415 | # iou = inter / union 416 | iou = torch.pow(inter/union + eps, alpha) 417 | # beta = 2 * alpha 418 | if GIoU or DIoU or CIoU: 419 | cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width 420 | ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height 421 | if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 422 | c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal 423 | rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2) 424 | rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2) 425 | rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance 426 | if DIoU: 427 | return iou - rho2 / c2 # DIoU 428 | elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 429 | v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) 430 | with torch.no_grad(): 431 | alpha_ciou = v / ((1 + eps) - inter / union + v) 432 | # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU 433 | return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU 434 | else: # GIoU https://arxiv.org/pdf/1902.09630.pdf 435 | # c_area = cw * ch + eps # convex area 436 | # return iou - (c_area - union) / c_area # GIoU 437 | c_area = torch.max(cw * ch + eps, union) # convex area 438 | return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU 439 | else: 440 | return iou # torch.log(iou+eps) or iou 441 | 442 | 443 | def box_iou(box1, box2): 444 | # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py 445 | """ 446 | Return intersection-over-union (Jaccard index) of boxes. 447 | Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 448 | Arguments: 449 | box1 (Tensor[N, 4]) 450 | box2 (Tensor[M, 4]) 451 | Returns: 452 | iou (Tensor[N, M]): the NxM matrix containing the pairwise 453 | IoU values for every element in boxes1 and boxes2 454 | """ 455 | 456 | def box_area(box): 457 | # box = 4xn 458 | return (box[2] - box[0]) * (box[3] - box[1]) 459 | 460 | area1 = box_area(box1.T) 461 | area2 = box_area(box2.T) 462 | 463 | # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) 464 | inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) 465 | return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) 466 | 467 | 468 | def wh_iou(wh1, wh2): 469 | # Returns the nxm IoU matrix. 
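# (width-height IoU, as if both boxes shared a top-left corner; worked example
#  with hypothetical shapes wh1 = [[10, 10]], wh2 = [[5, 20]]:
#  inter = 5 * 10 = 50, iou = 50 / (100 + 100 - 50) = 1/3. Shape convention: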
wh1 is nx2, wh2 is mx2 470 | wh1 = wh1[:, None] # [N,1,2] 471 | wh2 = wh2[None] # [1,M,2] 472 | inter = torch.min(wh1, wh2).prod(2) # [N,M] 473 | return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) 474 | 475 | 476 | def box_giou(box1, box2): 477 | """ 478 | Return generalized intersection-over-union (Jaccard index) between two sets of boxes. 479 | Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with 480 | ``0 <= x1 < x2`` and ``0 <= y1 < y2``. 481 | Args: 482 | boxes1 (Tensor[N, 4]): first set of boxes 483 | boxes2 (Tensor[M, 4]): second set of boxes 484 | Returns: 485 | Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values 486 | for every element in boxes1 and boxes2 487 | """ 488 | 489 | def box_area(box): 490 | # box = 4xn 491 | return (box[2] - box[0]) * (box[3] - box[1]) 492 | 493 | area1 = box_area(box1.T) 494 | area2 = box_area(box2.T) 495 | 496 | inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) 497 | union = (area1[:, None] + area2 - inter) 498 | 499 | iou = inter / union 500 | 501 | lti = torch.min(box1[:, None, :2], box2[:, :2]) 502 | rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) 503 | 504 | whi = (rbi - lti).clamp(min=0) # [N,M,2] 505 | areai = whi[:, :, 0] * whi[:, :, 1] 506 | 507 | return iou - (areai - union) / areai 508 | 509 | 510 | def box_ciou(box1, box2, eps: float = 1e-7): 511 | """ 512 | Return complete intersection-over-union (Jaccard index) between two sets of boxes. 513 | Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with 514 | ``0 <= x1 < x2`` and ``0 <= y1 < y2``. 515 | Args: 516 | boxes1 (Tensor[N, 4]): first set of boxes 517 | boxes2 (Tensor[M, 4]): second set of boxes 518 | eps (float, optional): small number to prevent division by zero. Default: 1e-7 519 | Returns: 520 | Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values 521 | for every element in boxes1 and boxes2 522 | """ 523 | 524 | def box_area(box): 525 | # box = 4xn 526 | return (box[2] - box[0]) * (box[3] - box[1]) 527 | 528 | area1 = box_area(box1.T) 529 | area2 = box_area(box2.T) 530 | 531 | inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) 532 | union = (area1[:, None] + area2 - inter) 533 | 534 | iou = inter / union 535 | 536 | lti = torch.min(box1[:, None, :2], box2[:, :2]) 537 | rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) 538 | 539 | whi = (rbi - lti).clamp(min=0) # [N,M,2] 540 | diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps 541 | 542 | # centers of boxes 543 | x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 544 | y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 545 | x_g = (box2[:, 0] + box2[:, 2]) / 2 546 | y_g = (box2[:, 1] + box2[:, 3]) / 2 547 | # The distance between boxes' centers squared. 
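# (For reference: the value assembled below is CIoU = IoU - rho^2/c^2 - alpha*v,
#  where rho^2 is the squared center distance computed next, c^2 the squared
#  enclosing-box diagonal above, and v the arctan aspect-ratio term further down.)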
548 | centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 549 | 550 | w_pred = box1[:, None, 2] - box1[:, None, 0] 551 | h_pred = box1[:, None, 3] - box1[:, None, 1] 552 | 553 | w_gt = box2[:, 2] - box2[:, 0] 554 | h_gt = box2[:, 3] - box2[:, 1] 555 | 556 | v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2) 557 | with torch.no_grad(): 558 | alpha = v / (1 - iou + v + eps) 559 | return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v 560 | 561 | 562 | def box_diou(box1, box2, eps: float = 1e-7): 563 | """ 564 | Return distance intersection-over-union (Jaccard index) between two sets of boxes. 565 | Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with 566 | ``0 <= x1 < x2`` and ``0 <= y1 < y2``. 567 | Args: 568 | boxes1 (Tensor[N, 4]): first set of boxes 569 | boxes2 (Tensor[M, 4]): second set of boxes 570 | eps (float, optional): small number to prevent division by zero. Default: 1e-7 571 | Returns: 572 | Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values 573 | for every element in boxes1 and boxes2 574 | """ 575 | 576 | def box_area(box): 577 | # box = 4xn 578 | return (box[2] - box[0]) * (box[3] - box[1]) 579 | 580 | area1 = box_area(box1.T) 581 | area2 = box_area(box2.T) 582 | 583 | inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) 584 | union = (area1[:, None] + area2 - inter) 585 | 586 | iou = inter / union 587 | 588 | lti = torch.min(box1[:, None, :2], box2[:, :2]) 589 | rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) 590 | 591 | whi = (rbi - lti).clamp(min=0) # [N,M,2] 592 | diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps 593 | 594 | # centers of boxes 595 | x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 596 | y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 597 | x_g = (box2[:, 0] + box2[:, 2]) / 2 598 | y_g = (box2[:, 1] + box2[:, 3]) / 2 599 | # The distance between boxes' centers squared. 600 | centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 601 | 602 | # The distance IoU is the IoU penalized by a normalized 603 | # distance between boxes' centers squared. 
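# (Worked example with hypothetical boxes [0, 0, 10, 10] and [5, 5, 15, 15]:
#  IoU = 25/175 ≈ 0.143, centers (5, 5) and (10, 10) give rho^2 = 50, the
#  enclosing diagonal gives c^2 = 15^2 + 15^2 = 450, so DIoU ≈ 0.143 - 0.111 ≈ 0.032.)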
604 | return iou - (centers_distance_squared / diagonal_distance_squared) 605 | 606 | 607 | def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, 608 | labels=()): 609 | """Runs Non-Maximum Suppression (NMS) on inference results 610 | 611 | Returns: 612 | list of detections, one (n,6) tensor per image [xyxy, conf, cls] 613 | """ 614 | 615 | nc = prediction.shape[2] - 5 # number of classes 616 | xc = prediction[..., 4] > conf_thres # candidates 617 | 618 | # Settings 619 | min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height 620 | max_det = 300 # maximum number of detections per image 621 | max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() 622 | time_limit = 10.0 # seconds to quit after 623 | redundant = True # require redundant detections 624 | multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) 625 | merge = False # use merge-NMS 626 | 627 | t = time.time() 628 | output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] 629 | for xi, x in enumerate(prediction): # image index, image inference 630 | # Apply constraints 631 | # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height 632 | x = x[xc[xi]] # confidence 633 | 634 | # Cat apriori labels if autolabelling 635 | if labels and len(labels[xi]): 636 | l = labels[xi] 637 | v = torch.zeros((len(l), nc + 5), device=x.device) 638 | v[:, :4] = l[:, 1:5] # box 639 | v[:, 4] = 1.0 # conf 640 | v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls 641 | x = torch.cat((x, v), 0) 642 | 643 | # If none remain process next image 644 | if not x.shape[0]: 645 | continue 646 | 647 | # Compute conf 648 | if nc == 1: 649 | x[:, 5:] = x[:, 4:5] # for models with one class, cls_loss is 0 and cls_conf is always 0.5, 650 | # so there is no need to multiply. 
651 | else: 652 | x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf 653 | 654 | # Box (center x, center y, width, height) to (x1, y1, x2, y2) 655 | box = xywh2xyxy(x[:, :4]) 656 | 657 | # Detections matrix nx6 (xyxy, conf, cls) 658 | if multi_label: 659 | i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T 660 | x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) 661 | else: # best class only 662 | conf, j = x[:, 5:].max(1, keepdim=True) 663 | x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] 664 | 665 | # Filter by class 666 | if classes is not None: 667 | x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] 668 | 669 | # Apply finite constraint 670 | # if not torch.isfinite(x).all(): 671 | # x = x[torch.isfinite(x).all(1)] 672 | 673 | # Check shape 674 | n = x.shape[0] # number of boxes 675 | if not n: # no boxes 676 | continue 677 | elif n > max_nms: # excess boxes 678 | x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence 679 | 680 | # Batched NMS 681 | c = x[:, 5:6] * (0 if agnostic else max_wh) # classes 682 | boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores 683 | i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS 684 | if i.shape[0] > max_det: # limit detections 685 | i = i[:max_det] 686 | if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) 687 | # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) 688 | iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix 689 | weights = iou * scores[None] # box weights 690 | x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes 691 | if redundant: 692 | i = i[iou.sum(1) > 1] # require redundancy 693 | 694 | output[xi] = x[i] 695 | if (time.time() - t) > time_limit: 696 | print(f'WARNING: NMS time limit {time_limit}s exceeded') 697 | break # time limit exceeded 698 | 699 | return output 700 | 701 | 702 | def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, 703 | labels=(), kpt_label=False, nc=None, nkpt=None): 704 | """Runs Non-Maximum Suppression (NMS) on inference results 705 | 706 | Returns: 707 | list of detections, on (n,6) tensor per image [xyxy, conf, cls] 708 | """ 709 | if nc is None: 710 | nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes 711 | xc = prediction[..., 4] > conf_thres # candidates 712 | 713 | # Settings 714 | min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height 715 | max_det = 300 # maximum number of detections per image 716 | max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() 717 | time_limit = 10.0 # seconds to quit after 718 | redundant = True # require redundant detections 719 | multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) 720 | merge = False # use merge-NMS 721 | 722 | t = time.time() 723 | output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0] 724 | for xi, x in enumerate(prediction): # image index, image inference 725 | # Apply constraints 726 | # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height 727 | x = x[xc[xi]] # confidence 728 | 729 | # Cat apriori labels if autolabelling 730 | if labels and len(labels[xi]): 731 | l = labels[xi] 732 | v = torch.zeros((len(l), nc + 5), device=x.device) 733 | v[:, :4] = l[:, 1:5] # box 734 | v[:, 4] = 1.0 # conf 735 | v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls 736 | x = 
torch.cat((x, v), 0) 737 | 738 | # If none remain process next image 739 | if not x.shape[0]: 740 | continue 741 | 742 | # Compute conf 743 | x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf 744 | 745 | # Box (center x, center y, width, height) to (x1, y1, x2, y2) 746 | box = xywh2xyxy(x[:, :4]) 747 | 748 | # Detections matrix nx6 (xyxy, conf, cls) 749 | if multi_label: 750 | i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T 751 | x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) 752 | else: # best class only 753 | if not kpt_label: 754 | conf, j = x[:, 5:].max(1, keepdim=True) 755 | x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] 756 | else: 757 | kpts = x[:, 6:] 758 | conf, j = x[:, 5:6].max(1, keepdim=True) 759 | x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres] 760 | 761 | 762 | # Filter by class 763 | if classes is not None: 764 | x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] 765 | 766 | # Apply finite constraint 767 | # if not torch.isfinite(x).all(): 768 | # x = x[torch.isfinite(x).all(1)] 769 | 770 | # Check shape 771 | n = x.shape[0] # number of boxes 772 | if not n: # no boxes 773 | continue 774 | elif n > max_nms: # excess boxes 775 | x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence 776 | 777 | # Batched NMS 778 | c = x[:, 5:6] * (0 if agnostic else max_wh) # classes 779 | boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores 780 | i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS 781 | if i.shape[0] > max_det: # limit detections 782 | i = i[:max_det] 783 | if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) 784 | # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) 785 | iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix 786 | weights = iou * scores[None] # box weights 787 | x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes 788 | if redundant: 789 | i = i[iou.sum(1) > 1] # require redundancy 790 | 791 | output[xi] = x[i] 792 | if (time.time() - t) > time_limit: 793 | print(f'WARNING: NMS time limit {time_limit}s exceeded') 794 | break # time limit exceeded 795 | 796 | return output 797 | 798 | 799 | def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer() 800 | # Strip optimizer from 'f' to finalize training, optionally save as 's' 801 | x = torch.load(f, map_location=torch.device('cpu')) 802 | if x.get('ema'): 803 | x['model'] = x['ema'] # replace model with ema 804 | for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys 805 | x[k] = None 806 | x['epoch'] = -1 807 | x['model'].half() # to FP16 808 | for p in x['model'].parameters(): 809 | p.requires_grad = False 810 | torch.save(x, s or f) 811 | mb = os.path.getsize(s or f) / 1E6 # filesize 812 | print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB") 813 | 814 | 815 | def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): 816 | # Print mutation results to evolve.txt (for use with train.py --evolve) 817 | a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys 818 | b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values 819 | c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) 820 | print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) 821 | 822 | if bucket: 823 | url = 'gs://%s/evolve.txt' % bucket 824 | if gsutil_getsize(url) > 
(os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): 825 | os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local 826 | 827 | with open('evolve.txt', 'a') as f: # append result 828 | f.write(c + b + '\n') 829 | x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows 830 | x = x[np.argsort(-fitness(x))] # sort 831 | np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness 832 | 833 | # Save yaml 834 | for i, k in enumerate(hyp.keys()): 835 | hyp[k] = float(x[0, i + 7]) 836 | with open(yaml_file, 'w') as f: 837 | results = tuple(x[0, :7]) 838 | c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) 839 | f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') 840 | yaml.dump(hyp, f, sort_keys=False) 841 | 842 | if bucket: 843 | os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload 844 | 845 | 846 | def apply_classifier(x, model, img, im0): 847 | # applies a second stage classifier to yolo outputs 848 | im0 = [im0] if isinstance(im0, np.ndarray) else im0 849 | for i, d in enumerate(x): # per image 850 | if d is not None and len(d): 851 | d = d.clone() 852 | 853 | # Reshape and pad cutouts 854 | b = xyxy2xywh(d[:, :4]) # boxes 855 | b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square 856 | b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad 857 | d[:, :4] = xywh2xyxy(b).long() 858 | 859 | # Rescale boxes from img_size to im0 size 860 | scale_coords(img.shape[2:], d[:, :4], im0[i].shape) 861 | 862 | # Classes 863 | pred_cls1 = d[:, 5].long() 864 | ims = [] 865 | for j, a in enumerate(d): # per item 866 | cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] 867 | im = cv2.resize(cutout, (224, 224)) # BGR 868 | # cv2.imwrite('test%i.jpg' % j, cutout) 869 | 870 | im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 871 | im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 872 | im /= 255.0 # 0 - 255 to 0.0 - 1.0 873 | ims.append(im) 874 | 875 | pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction 876 | x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections 877 | 878 | return x 879 | 880 | 881 | def increment_path(path, exist_ok=True, sep=''): 882 | # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc. 
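# (Illustrative behaviour, assuming 'runs/exp' already exists on disk:
#  increment_path('runs/exp', exist_ok=False) returns 'runs/exp2', then 'runs/exp3'
#  on the next call, while the default exist_ok=True returns 'runs/exp' unchanged.)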
883 | path = Path(path) # os-agnostic 884 | if (path.exists() and exist_ok) or (not path.exists()): 885 | return str(path) 886 | else: 887 | dirs = glob.glob(f"{path}{sep}*") # similar paths 888 | matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] 889 | i = [int(m.groups()[0]) for m in matches if m] # indices 890 | n = max(i) + 1 if i else 2 # increment number 891 | return f"{path}{sep}{n}" # update path 892 | -------------------------------------------------------------------------------- /utils/google_utils.py: -------------------------------------------------------------------------------- 1 | # Google utils: https://cloud.google.com/storage/docs/reference/libraries 2 | 3 | import os 4 | import platform 5 | import subprocess 6 | import time 7 | from pathlib import Path 8 | 9 | import requests 10 | import torch 11 | 12 | 13 | def gsutil_getsize(url=''): 14 | # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du 15 | s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') 16 | return eval(s.split(' ')[0]) if len(s) else 0 # bytes 17 | 18 | 19 | def attempt_download(file, repo='WongKinYiu/yolov7'): 20 | # Attempt file download if does not exist 21 | file = Path(str(file).strip().replace("'", '').lower()) 22 | 23 | if not file.exists(): 24 | try: 25 | response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api 26 | assets = [x['name'] for x in response['assets']] # release assets 27 | tag = response['tag_name'] # i.e. 'v1.0' 28 | except: # fallback plan 29 | assets = ['yolov7.pt', 'yolov7-tiny.pt', 'yolov7x.pt', 'yolov7-d6.pt', 'yolov7-e6.pt', 30 | 'yolov7-e6e.pt', 'yolov7-w6.pt'] 31 | tag = subprocess.check_output('git tag', shell=True).decode().split()[-1] 32 | 33 | name = file.name 34 | if name in assets: 35 | msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/' 36 | redundant = False # second download option 37 | try: # GitHub 38 | url = f'https://github.com/{repo}/releases/download/{tag}/{name}' 39 | print(f'Downloading {url} to {file}...') 40 | torch.hub.download_url_to_file(url, file) 41 | assert file.exists() and file.stat().st_size > 1E6 # check 42 | except Exception as e: # GCP 43 | print(f'Download error: {e}') 44 | assert redundant, 'No secondary mirror' 45 | url = f'https://storage.googleapis.com/{repo}/ckpt/{name}' 46 | print(f'Downloading {url} to {file}...') 47 | os.system(f'curl -L {url} -o {file}') # torch.hub.download_url_to_file(url, weights) 48 | finally: 49 | if not file.exists() or file.stat().st_size < 1E6: # check 50 | file.unlink(missing_ok=True) # remove partial downloads 51 | print(f'ERROR: Download failure: {msg}') 52 | print('') 53 | return 54 | 55 | 56 | def gdrive_download(id='', file='tmp.zip'): 57 | # Downloads a file from Google Drive. from yolov7.utils.google_utils import *; gdrive_download() 58 | t = time.time() 59 | file = Path(file) 60 | cookie = Path('cookie') # gdrive cookie 61 | print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... 
', end='') 62 | file.unlink(missing_ok=True) # remove existing file 63 | cookie.unlink(missing_ok=True) # remove existing cookie 64 | 65 | # Attempt file download 66 | out = "NUL" if platform.system() == "Windows" else "/dev/null" 67 | os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}') 68 | if os.path.exists('cookie'): # large file 69 | s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}' 70 | else: # small file 71 | s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"' 72 | r = os.system(s) # execute, capture return 73 | cookie.unlink(missing_ok=True) # remove existing cookie 74 | 75 | # Error check 76 | if r != 0: 77 | file.unlink(missing_ok=True) # remove partial 78 | print('Download error ') # raise Exception('Download error') 79 | return r 80 | 81 | # Unzip if archive 82 | if file.suffix == '.zip': 83 | print('unzipping... ', end='') 84 | os.system(f'unzip -q {file}') # unzip 85 | file.unlink() # remove zip to free space 86 | 87 | print(f'Done ({time.time() - t:.1f}s)') 88 | return r 89 | 90 | 91 | def get_token(cookie="./cookie"): 92 | with open(cookie) as f: 93 | for line in f: 94 | if "download" in line: 95 | return line.split()[-1] 96 | return "" 97 | 98 | # def upload_blob(bucket_name, source_file_name, destination_blob_name): 99 | # # Uploads a file to a bucket 100 | # # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python 101 | # 102 | # storage_client = storage.Client() 103 | # bucket = storage_client.get_bucket(bucket_name) 104 | # blob = bucket.blob(destination_blob_name) 105 | # 106 | # blob.upload_from_filename(source_file_name) 107 | # 108 | # print('File {} uploaded to {}.'.format( 109 | # source_file_name, 110 | # destination_blob_name)) 111 | # 112 | # 113 | # def download_blob(bucket_name, source_blob_name, destination_file_name): 114 | # # Uploads a blob from a bucket 115 | # storage_client = storage.Client() 116 | # bucket = storage_client.get_bucket(bucket_name) 117 | # blob = bucket.blob(source_blob_name) 118 | # 119 | # blob.download_to_filename(destination_file_name) 120 | # 121 | # print('Blob {} downloaded to {}.'.format( 122 | # source_blob_name, 123 | # destination_file_name)) 124 | -------------------------------------------------------------------------------- /utils/metrics.py: -------------------------------------------------------------------------------- 1 | # Model validation metrics 2 | 3 | from pathlib import Path 4 | 5 | import matplotlib.pyplot as plt 6 | import numpy as np 7 | import torch 8 | 9 | from . import general 10 | 11 | 12 | def fitness(x): 13 | # Model fitness as a weighted combination of metrics 14 | w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] 15 | return (x[:, :4] * w).sum(1) 16 | 17 | 18 | def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()): 19 | """ Compute the average precision, given the recall and precision curves. 20 | Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. 21 | # Arguments 22 | tp: True positives (nparray, nx1 or nx10). 23 | conf: Objectness value from 0-1 (nparray). 24 | pred_cls: Predicted object classes (nparray). 25 | target_cls: True object classes (nparray). 26 | plot: Plot precision-recall curve at mAP@0.5 27 | save_dir: Plot save directory 28 | # Returns 29 | The average precision as computed in py-faster-rcnn. 
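    Note: when tp has 10 columns they are conventionally the COCO IoU
    thresholds 0.50:0.05:0.95, so ap[:, 0] is AP@0.5 and ap.mean(1)
    approximates AP@0.5:0.95 (a usage convention, not enforced here).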
30 | """ 31 | 32 | # Sort by objectness 33 | i = np.argsort(-conf) 34 | tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] 35 | 36 | # Find unique classes 37 | unique_classes = np.unique(target_cls) 38 | nc = unique_classes.shape[0] # number of classes, number of detections 39 | 40 | # Create Precision-Recall curve and compute AP for each class 41 | px, py = np.linspace(0, 1, 1000), [] # for plotting 42 | ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) 43 | for ci, c in enumerate(unique_classes): 44 | i = pred_cls == c 45 | n_l = (target_cls == c).sum() # number of labels 46 | n_p = i.sum() # number of predictions 47 | 48 | if n_p == 0 or n_l == 0: 49 | continue 50 | else: 51 | # Accumulate FPs and TPs 52 | fpc = (1 - tp[i]).cumsum(0) 53 | tpc = tp[i].cumsum(0) 54 | 55 | # Recall 56 | recall = tpc / (n_l + 1e-16) # recall curve 57 | r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases 58 | 59 | # Precision 60 | precision = tpc / (tpc + fpc) # precision curve 61 | p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score 62 | 63 | # AP from recall-precision curve 64 | for j in range(tp.shape[1]): 65 | ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) 66 | if plot and j == 0: 67 | py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 68 | 69 | # Compute F1 (harmonic mean of precision and recall) 70 | f1 = 2 * p * r / (p + r + 1e-16) 71 | if plot: 72 | plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names) 73 | plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1') 74 | plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision') 75 | plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall') 76 | 77 | i = f1.mean(0).argmax() # max F1 index 78 | return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32') 79 | 80 | 81 | def compute_ap(recall, precision): 82 | """ Compute the average precision, given the recall and precision curves 83 | # Arguments 84 | recall: The recall curve (list) 85 | precision: The precision curve (list) 86 | # Returns 87 | Average precision, precision curve, recall curve 88 | """ 89 | 90 | # Append sentinel values to beginning and end 91 | mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01])) 92 | mpre = np.concatenate(([1.], precision, [0.])) 93 | 94 | # Compute the precision envelope 95 | mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) 96 | 97 | # Integrate area under curve 98 | method = 'interp' # methods: 'continuous', 'interp' 99 | if method == 'interp': 100 | x = np.linspace(0, 1, 101) # 101-point interp (COCO) 101 | ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate 102 | else: # 'continuous' 103 | i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes 104 | ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve 105 | 106 | return ap, mpre, mrec 107 | 108 | 109 | class ConfusionMatrix: 110 | # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix 111 | def __init__(self, nc, conf=0.25, iou_thres=0.45): 112 | self.matrix = np.zeros((nc + 1, nc + 1)) 113 | self.nc = nc # number of classes 114 | self.conf = conf 115 | self.iou_thres = iou_thres 116 | 117 | def process_batch(self, detections, labels): 118 | """ 119 | Return intersection-over-union (Jaccard index) of boxes. 120 | Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
121 | Arguments: 122 | detections (Array[N, 6]), x1, y1, x2, y2, conf, class 123 | labels (Array[M, 5]), class, x1, y1, x2, y2 124 | Returns: 125 | None, updates confusion matrix accordingly 126 | """ 127 | detections = detections[detections[:, 4] > self.conf] 128 | gt_classes = labels[:, 0].int() 129 | detection_classes = detections[:, 5].int() 130 | iou = general.box_iou(labels[:, 1:], detections[:, :4]) 131 | 132 | x = torch.where(iou > self.iou_thres) 133 | if x[0].shape[0]: 134 | matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() 135 | if x[0].shape[0] > 1: 136 | matches = matches[matches[:, 2].argsort()[::-1]] 137 | matches = matches[np.unique(matches[:, 1], return_index=True)[1]] 138 | matches = matches[matches[:, 2].argsort()[::-1]] 139 | matches = matches[np.unique(matches[:, 0], return_index=True)[1]] 140 | else: 141 | matches = np.zeros((0, 3)) 142 | 143 | n = matches.shape[0] > 0 144 | m0, m1, _ = matches.transpose().astype(np.int16) 145 | for i, gc in enumerate(gt_classes): 146 | j = m0 == i 147 | if n and sum(j) == 1: 148 | self.matrix[gc, detection_classes[m1[j]]] += 1 # correct 149 | else: 150 | self.matrix[self.nc, gc] += 1 # background FP 151 | 152 | if n: 153 | for i, dc in enumerate(detection_classes): 154 | if not any(m1 == i): 155 | self.matrix[dc, self.nc] += 1 # background FN 156 | 157 | def matrix(self): 158 | return self.matrix 159 | 160 | def plot(self, save_dir='', names=()): 161 | try: 162 | import seaborn as sn 163 | 164 | array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize 165 | array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) 166 | 167 | fig = plt.figure(figsize=(12, 9), tight_layout=True) 168 | sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size 169 | labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels 170 | sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True, 171 | xticklabels=names + ['background FP'] if labels else "auto", 172 | yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1)) 173 | fig.axes[0].set_xlabel('True') 174 | fig.axes[0].set_ylabel('Predicted') 175 | fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) 176 | except Exception as e: 177 | pass 178 | 179 | def print(self): 180 | for i in range(self.nc + 1): 181 | print(' '.join(map(str, self.matrix[i]))) 182 | 183 | 184 | # Plots ---------------------------------------------------------------------------------------------------------------- 185 | 186 | def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()): 187 | # Precision-recall curve 188 | fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) 189 | py = np.stack(py, axis=1) 190 | 191 | if 0 < len(names) < 21: # display per-class legend if < 21 classes 192 | for i, y in enumerate(py.T): 193 | ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision) 194 | else: 195 | ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) 196 | 197 | ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) 198 | ax.set_xlabel('Recall') 199 | ax.set_ylabel('Precision') 200 | ax.set_xlim(0, 1) 201 | ax.set_ylim(0, 1) 202 | plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") 203 | fig.savefig(Path(save_dir), dpi=250) 204 | 205 | 206 | def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', 
ylabel='Metric'): 207 | # Metric-confidence curve 208 | fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) 209 | 210 | if 0 < len(names) < 21: # display per-class legend if < 21 classes 211 | for i, y in enumerate(py): 212 | ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric) 213 | else: 214 | ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric) 215 | 216 | y = py.mean(0) 217 | ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') 218 | ax.set_xlabel(xlabel) 219 | ax.set_ylabel(ylabel) 220 | ax.set_xlim(0, 1) 221 | ax.set_ylim(0, 1) 222 | plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") 223 | fig.savefig(Path(save_dir), dpi=250) 224 | -------------------------------------------------------------------------------- /utils/plots.py: -------------------------------------------------------------------------------- 1 | # Plotting utils 2 | 3 | import glob 4 | import math 5 | import os 6 | import random 7 | from copy import copy 8 | from pathlib import Path 9 | 10 | import cv2 11 | import matplotlib 12 | import matplotlib.pyplot as plt 13 | import numpy as np 14 | import pandas as pd 15 | import seaborn as sns 16 | import torch 17 | import yaml 18 | from PIL import Image, ImageDraw, ImageFont 19 | from scipy.signal import butter, filtfilt 20 | 21 | from utils.general import xywh2xyxy, xyxy2xywh 22 | from utils.metrics import fitness 23 | 24 | # Settings 25 | matplotlib.rc('font', **{'size': 11}) 26 | matplotlib.use('Agg') # for writing to files only 27 | 28 | 29 | def color_list(): 30 | # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb 31 | def hex2rgb(h): 32 | return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) 33 | 34 | return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949) 35 | 36 | 37 | def hist2d(x, y, n=100): 38 | # 2d histogram used in labels.png and evolve.png 39 | xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) 40 | hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) 41 | xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) 42 | yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) 43 | return np.log(hist[xidx, yidx]) 44 | 45 | 46 | def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): 47 | # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy 48 | def butter_lowpass(cutoff, fs, order): 49 | nyq = 0.5 * fs 50 | normal_cutoff = cutoff / nyq 51 | return butter(order, normal_cutoff, btype='low', analog=False) 52 | 53 | b, a = butter_lowpass(cutoff, fs, order=order) 54 | return filtfilt(b, a, data) # forward-backward filter 55 | 56 | 57 | def plot_one_box(x, img, color=None, label=None, line_thickness=3): 58 | # Plots one bounding box on image img 59 | tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness 60 | color = color or [random.randint(0, 255) for _ in range(3)] 61 | c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) 62 | cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) 63 | if label: 64 | tf = max(tl - 1, 1) # font thickness 65 | t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] 66 | c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 67 | cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled 68 | cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl 
/ 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) 69 | 70 | 71 | def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None): 72 | img = Image.fromarray(img) 73 | draw = ImageDraw.Draw(img) 74 | line_thickness = line_thickness or max(int(min(img.size) / 200), 2) 75 | draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot 76 | if label: 77 | fontsize = max(round(max(img.size) / 40), 12) 78 | font = ImageFont.truetype("Arial.ttf", fontsize) 79 | txt_width, txt_height = font.getsize(label) 80 | draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color)) 81 | draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font) 82 | return np.asarray(img) 83 | 84 | 85 | def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() 86 | # Compares the two methods for width-height anchor multiplication 87 | # https://github.com/ultralytics/yolov3/issues/168 88 | x = np.arange(-4.0, 4.0, .1) 89 | ya = np.exp(x) 90 | yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 91 | 92 | fig = plt.figure(figsize=(6, 3), tight_layout=True) 93 | plt.plot(x, ya, '.-', label='YOLOv3') 94 | plt.plot(x, yb ** 2, '.-', label='YOLOR ^2') 95 | plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6') 96 | plt.xlim(left=-4, right=4) 97 | plt.ylim(bottom=0, top=6) 98 | plt.xlabel('input') 99 | plt.ylabel('output') 100 | plt.grid() 101 | plt.legend() 102 | fig.savefig('comparison.png', dpi=200) 103 | 104 | 105 | def output_to_target(output): 106 | # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] 107 | targets = [] 108 | for i, o in enumerate(output): 109 | for *box, conf, cls in o.cpu().numpy(): 110 | targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) 111 | return np.array(targets) 112 | 113 | 114 | def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): 115 | # Plot image grid with labels 116 | 117 | if isinstance(images, torch.Tensor): 118 | images = images.cpu().float().numpy() 119 | if isinstance(targets, torch.Tensor): 120 | targets = targets.cpu().numpy() 121 | 122 | # un-normalise 123 | if np.max(images[0]) <= 1: 124 | images *= 255 125 | 126 | tl = 3 # line thickness 127 | tf = max(tl - 1, 1) # font thickness 128 | bs, _, h, w = images.shape # batch size, _, height, width 129 | bs = min(bs, max_subplots) # limit plot images 130 | ns = np.ceil(bs ** 0.5) # number of subplots (square) 131 | 132 | # Check if we should resize 133 | scale_factor = max_size / max(h, w) 134 | if scale_factor < 1: 135 | h = math.ceil(scale_factor * h) 136 | w = math.ceil(scale_factor * w) 137 | 138 | colors = color_list() # list of colors 139 | mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init 140 | for i, img in enumerate(images): 141 | if i == max_subplots: # if last batch has fewer images than we expect 142 | break 143 | 144 | block_x = int(w * (i // ns)) 145 | block_y = int(h * (i % ns)) 146 | 147 | img = img.transpose(1, 2, 0) 148 | if scale_factor < 1: 149 | img = cv2.resize(img, (w, h)) 150 | 151 | mosaic[block_y:block_y + h, block_x:block_x + w, :] = img 152 | if len(targets) > 0: 153 | image_targets = targets[targets[:, 0] == i] 154 | boxes = xywh2xyxy(image_targets[:, 2:6]).T 155 | classes = image_targets[:, 1].astype('int') 156 | labels = image_targets.shape[1] == 6 # labels if no conf column 157 | conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) 158 | 159 | if 
boxes.shape[1]: 160 | if boxes.max() <= 1.01: # if normalized with tolerance 0.01 161 | boxes[[0, 2]] *= w # scale to pixels 162 | boxes[[1, 3]] *= h 163 | elif scale_factor < 1: # absolute coords need scale if image scales 164 | boxes *= scale_factor 165 | boxes[[0, 2]] += block_x 166 | boxes[[1, 3]] += block_y 167 | for j, box in enumerate(boxes.T): 168 | cls = int(classes[j]) 169 | color = colors[cls % len(colors)] 170 | cls = names[cls] if names else cls 171 | if labels or conf[j] > 0.25: # 0.25 conf thresh 172 | label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) 173 | plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl) 174 | 175 | # Draw image filename labels 176 | if paths: 177 | label = Path(paths[i]).name[:40] # trim to 40 char 178 | t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] 179 | cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, 180 | lineType=cv2.LINE_AA) 181 | 182 | # Image border 183 | cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) 184 | 185 | if fname: 186 | r = min(1280. / max(h, w) / ns, 1.0) # ratio to limit image size 187 | mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) 188 | # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save 189 | Image.fromarray(mosaic).save(fname) # PIL save 190 | return mosaic 191 | 192 | 193 | def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): 194 | # Plot LR simulating training for full epochs 195 | optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals 196 | y = [] 197 | for _ in range(epochs): 198 | scheduler.step() 199 | y.append(optimizer.param_groups[0]['lr']) 200 | plt.plot(y, '.-', label='LR') 201 | plt.xlabel('epoch') 202 | plt.ylabel('LR') 203 | plt.grid() 204 | plt.xlim(0, epochs) 205 | plt.ylim(0) 206 | plt.savefig(Path(save_dir) / 'LR.png', dpi=200) 207 | plt.close() 208 | 209 | 210 | def plot_test_txt(): # from utils.plots import *; plot_test() 211 | # Plot test.txt histograms 212 | x = np.loadtxt('test.txt', dtype=np.float32) 213 | box = xyxy2xywh(x[:, :4]) 214 | cx, cy = box[:, 0], box[:, 1] 215 | 216 | fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) 217 | ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) 218 | ax.set_aspect('equal') 219 | plt.savefig('hist2d.png', dpi=300) 220 | 221 | fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) 222 | ax[0].hist(cx, bins=600) 223 | ax[1].hist(cy, bins=600) 224 | plt.savefig('hist1d.png', dpi=200) 225 | 226 | 227 | def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() 228 | # Plot targets.txt histograms 229 | x = np.loadtxt('targets.txt', dtype=np.float32).T 230 | s = ['x targets', 'y targets', 'width targets', 'height targets'] 231 | fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) 232 | ax = ax.ravel() 233 | for i in range(4): 234 | ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) 235 | ax[i].legend() 236 | ax[i].set_title(s[i]) 237 | plt.savefig('targets.jpg', dpi=200) 238 | 239 | 240 | def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt() 241 | # Plot study.txt generated by test.py 242 | fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) 243 | # ax = ax.ravel() 244 | 245 | fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) 246 | # for f in [Path(path) / 
f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]: 247 | for f in sorted(Path(path).glob('study*.txt')): 248 | y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T 249 | x = np.arange(y.shape[1]) if x is None else np.array(x) 250 | s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] 251 | # for i in range(7): 252 | # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) 253 | # ax[i].set_title(s[i]) 254 | 255 | j = y[3].argmax() + 1 256 | ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8, 257 | label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) 258 | 259 | ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], 260 | 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') 261 | 262 | ax2.grid(alpha=0.2) 263 | ax2.set_yticks(np.arange(20, 60, 5)) 264 | ax2.set_xlim(0, 57) 265 | ax2.set_ylim(30, 55) 266 | ax2.set_xlabel('GPU Speed (ms/img)') 267 | ax2.set_ylabel('COCO AP val') 268 | ax2.legend(loc='lower right') 269 | plt.savefig(str(Path(path).name) + '.png', dpi=300) 270 | 271 | 272 | def plot_labels(labels, names=(), save_dir=Path(''), loggers=None): 273 | # Plot dataset labels 274 | print('Plotting labels... ') 275 | c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes 276 | nc = int(c.max() + 1) # number of classes 277 | colors = color_list() 278 | x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) 279 | 280 | # seaborn correlogram 281 | sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) 282 | plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) 283 | plt.close() 284 | 285 | # matplotlib labels 286 | matplotlib.use('svg') # faster 287 | ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() 288 | ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) 289 | ax[0].set_ylabel('instances') 290 | if 0 < len(names) < 30: 291 | ax[0].set_xticks(range(len(names))) 292 | ax[0].set_xticklabels(names, rotation=90, fontsize=10) 293 | else: 294 | ax[0].set_xlabel('classes') 295 | sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) 296 | sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) 297 | 298 | # rectangles 299 | labels[:, 1:3] = 0.5 # center 300 | labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 301 | img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) 302 | for cls, *box in labels[:1000]: 303 | ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot 304 | ax[1].imshow(img) 305 | ax[1].axis('off') 306 | 307 | for a in [0, 1, 2, 3]: 308 | for s in ['top', 'right', 'left', 'bottom']: 309 | ax[a].spines[s].set_visible(False) 310 | 311 | plt.savefig(save_dir / 'labels.jpg', dpi=200) 312 | matplotlib.use('Agg') 313 | plt.close() 314 | 315 | # loggers 316 | for k, v in (loggers or {}).items(): # guard against loggers=None default 317 | if k == 'wandb' and v: 318 | v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False) 319 | 320 | 321 | def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() 322 | # Plot hyperparameter evolution results in evolve.txt 323 | with open(yaml_file) as f: 324 | hyp = yaml.load(f, Loader=yaml.SafeLoader) 325 | x = np.loadtxt('evolve.txt', ndmin=2) 326 | f = fitness(x) 327 | # weights = (f - f.min()) ** 2 # for weighted results 328 | plt.figure(figsize=(10,
12), tight_layout=True) 329 | matplotlib.rc('font', **{'size': 8}) 330 | for i, (k, v) in enumerate(hyp.items()): 331 | y = x[:, i + 7] 332 | # mu = (y * weights).sum() / weights.sum() # best weighted result 333 | mu = y[f.argmax()] # best single result 334 | plt.subplot(6, 5, i + 1) 335 | plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') 336 | plt.plot(mu, f.max(), 'k+', markersize=15) 337 | plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters 338 | if i % 5 != 0: 339 | plt.yticks([]) 340 | print('%15s: %.3g' % (k, mu)) 341 | plt.savefig('evolve.png', dpi=200) 342 | print('\nPlot saved as evolve.png') 343 | 344 | 345 | def profile_idetection(start=0, stop=0, labels=(), save_dir=''): 346 | # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection() 347 | ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() 348 | s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] 349 | files = list(Path(save_dir).glob('frames*.txt')) 350 | for fi, f in enumerate(files): 351 | try: 352 | results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows 353 | n = results.shape[1] # number of rows 354 | x = np.arange(start, min(stop, n) if stop else n) 355 | results = results[:, x] 356 | t = (results[0] - results[0].min()) # set t0=0s 357 | results[0] = x 358 | for i, a in enumerate(ax): 359 | if i < len(results): 360 | label = labels[fi] if len(labels) else f.stem.replace('frames_', '') 361 | a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) 362 | a.set_title(s[i]) 363 | a.set_xlabel('time (s)') 364 | # if fi == len(files) - 1: 365 | # a.set_ylim(bottom=0) 366 | for side in ['top', 'right']: 367 | a.spines[side].set_visible(False) 368 | else: 369 | a.remove() 370 | except Exception as e: 371 | print('Warning: Plotting error for %s; %s' % (f, e)) 372 | 373 | ax[1].legend() 374 | plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) 375 | 376 | 377 | def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() 378 | # Plot training 'results*.txt', overlaying train and val losses 379 | s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends 380 | t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles 381 | for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): 382 | results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T 383 | n = results.shape[1] # number of rows 384 | x = range(start, min(stop, n) if stop else n) 385 | fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) 386 | ax = ax.ravel() 387 | for i in range(5): 388 | for j in [i, i + 5]: 389 | y = results[j, x] 390 | ax[i].plot(x, y, marker='.', label=s[j]) 391 | # y_smooth = butter_lowpass_filtfilt(y) 392 | # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) 393 | 394 | ax[i].set_title(t[i]) 395 | ax[i].legend() 396 | ax[i].set_ylabel(f) if i == 0 else None # add filename 397 | fig.savefig(f.replace('.txt', '.png'), dpi=200) 398 | 399 | 400 | def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): 401 | # Plot training 'results*.txt'. 
from utils.plots import *; plot_results(save_dir='runs/train/exp') 402 | fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) 403 | ax = ax.ravel() 404 | s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', 405 | 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] 406 | if bucket: 407 | # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] 408 | files = ['results%g.txt' % x for x in id] 409 | c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) 410 | os.system(c) 411 | else: 412 | files = list(Path(save_dir).glob('results*.txt')) 413 | assert len(files), 'No results.txt files found in %s, nothing to plot.' % os.path.abspath(save_dir) 414 | for fi, f in enumerate(files): 415 | try: 416 | results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T 417 | n = results.shape[1] # number of rows 418 | x = range(start, min(stop, n) if stop else n) 419 | for i in range(10): 420 | y = results[i, x] 421 | if i in [0, 1, 2, 5, 6, 7]: 422 | y[y == 0] = np.nan # don't show zero loss values 423 | # y /= y[0] # normalize 424 | label = labels[fi] if len(labels) else f.stem 425 | ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) 426 | ax[i].set_title(s[i]) 427 | # if i in [5, 6, 7]: # share train and val loss y axes 428 | # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) 429 | except Exception as e: 430 | print('Warning: Plotting error for %s; %s' % (f, e)) 431 | 432 | ax[1].legend() 433 | fig.savefig(Path(save_dir) / 'results.png', dpi=200) 434 | 435 | 436 | def output_to_keypoint(output): 437 | # Convert model output to keypoint target format [batch_id, class_id, x, y, w, h, conf, kpts...] 438 | targets = [] 439 | for i, o in enumerate(output): 440 | kpts = o[:, 6:] # keypoint columns follow [box, conf, cls] 441 | o = o[:, :6] 442 | for index, (*box, conf, cls) in enumerate(o.detach().cpu().numpy()): 443 | targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf, *list(kpts.detach().cpu().numpy()[index])]) 444 | return np.array(targets) 445 | 446 | 447 | def plot_skeleton_kpts(im, kpts, steps, orig_shape=None): 448 | # Plot the skeleton and keypoints for the COCO dataset 449 | palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102], 450 | [230, 230, 0], [255, 153, 255], [153, 204, 255], 451 | [255, 102, 255], [255, 51, 255], [102, 178, 255], 452 | [51, 153, 255], [255, 153, 153], [255, 102, 102], 453 | [255, 51, 51], [153, 255, 153], [102, 255, 102], 454 | [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0], 455 | [255, 255, 255]]) 456 | 457 | skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12], 458 | [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3], 459 | [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]] 460 | 461 | pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]] 462 | pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]] 463 | radius = 5 464 | num_kpts = len(kpts) // steps 465 | 466 | for kid in range(num_kpts): 467 | r, g, b = pose_kpt_color[kid] 468 | x_coord, y_coord = kpts[steps * kid], kpts[steps * kid + 1] 469 | if not (x_coord % 640 == 0 or y_coord % 640 == 0): 470 | if steps == 3: 471 | conf = kpts[steps * kid + 2] 472 | if conf < 0.5: 473 | continue 474 | cv2.circle(im, (int(x_coord), int(y_coord)), radius, (int(r), int(g), int(b)), -1) 475 | 476 | for sk_id, sk in enumerate(skeleton): 477 | r, g, b = pose_limb_color[sk_id] 478 | pos1 = (int(kpts[(sk[0]-1)*steps]),
int(kpts[(sk[0]-1)*steps+1])) 479 | pos2 = (int(kpts[(sk[1]-1)*steps]), int(kpts[(sk[1]-1)*steps+1])) 480 | if steps == 3: 481 | conf1 = kpts[(sk[0]-1)*steps+2] 482 | conf2 = kpts[(sk[1]-1)*steps+2] 483 | if conf1<0.5 or conf2<0.5: 484 | continue 485 | if pos1[0]%640 == 0 or pos1[1]%640==0 or pos1[0]<0 or pos1[1]<0: 486 | continue 487 | if pos2[0] % 640 == 0 or pos2[1] % 640 == 0 or pos2[0]<0 or pos2[1]<0: 488 | continue 489 | cv2.line(im, pos1, pos2, (int(r), int(g), int(b)), thickness=2) 490 | -------------------------------------------------------------------------------- /utils/torch_utils.py: -------------------------------------------------------------------------------- 1 | # YOLOR PyTorch utils 2 | 3 | import datetime 4 | import logging 5 | import math 6 | import os 7 | import platform 8 | import subprocess 9 | import time 10 | from contextlib import contextmanager 11 | from copy import deepcopy 12 | from pathlib import Path 13 | 14 | import torch 15 | import torch.backends.cudnn as cudnn 16 | import torch.nn as nn 17 | import torch.nn.functional as F 18 | import torchvision 19 | 20 | try: 21 | import thop # for FLOPS computation 22 | except ImportError: 23 | thop = None 24 | logger = logging.getLogger(__name__) 25 | 26 | 27 | @contextmanager 28 | def torch_distributed_zero_first(local_rank: int): 29 | """ 30 | Decorator to make all processes in distributed training wait for each local_master to do something. 31 | """ 32 | if local_rank not in [-1, 0]: 33 | torch.distributed.barrier() 34 | yield 35 | if local_rank == 0: 36 | torch.distributed.barrier() 37 | 38 | 39 | def init_torch_seeds(seed=0): 40 | # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html 41 | torch.manual_seed(seed) 42 | if seed == 0: # slower, more reproducible 43 | cudnn.benchmark, cudnn.deterministic = False, True 44 | else: # faster, less reproducible 45 | cudnn.benchmark, cudnn.deterministic = True, False 46 | 47 | 48 | def date_modified(path=__file__): 49 | # return human-readable file modification date, i.e. '2021-3-26' 50 | t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime) 51 | return f'{t.year}-{t.month}-{t.day}' 52 | 53 | 54 | def git_describe(path=Path(__file__).parent): # path must be a directory 55 | # return human-readable git description, i.e. 
v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe 56 | s = f'git -C {path} describe --tags --long --always' 57 | try: 58 | return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1] 59 | except subprocess.CalledProcessError as e: 60 | return '' # not a git repository 61 | 62 | 63 | def select_device(device='', batch_size=None): 64 | # device = 'cpu' or '0' or '0,1,2,3' 65 | s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string 66 | cpu = device.lower() == 'cpu' 67 | if cpu: 68 | os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False 69 | elif device: # non-cpu device requested 70 | os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable 71 | assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability 72 | 73 | cuda = not cpu and torch.cuda.is_available() 74 | if cuda: 75 | n = torch.cuda.device_count() 76 | if n > 1 and batch_size: # check that batch_size is compatible with device_count 77 | assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' 78 | space = ' ' * len(s) 79 | for i, d in enumerate(device.split(',') if device else range(n)): 80 | p = torch.cuda.get_device_properties(i) 81 | s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB 82 | else: 83 | s += 'CPU\n' 84 | 85 | logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe 86 | return torch.device('cuda:0' if cuda else 'cpu') 87 | 88 | 89 | def time_synchronized(): 90 | # pytorch-accurate time 91 | if torch.cuda.is_available(): 92 | torch.cuda.synchronize() 93 | return time.time() 94 | 95 | 96 | def profile(x, ops, n=100, device=None): 97 | # profile a pytorch module or list of modules. Example usage: 98 | # x = torch.randn(16, 3, 640, 640) # input 99 | # m1 = lambda x: x * torch.sigmoid(x) 100 | # m2 = nn.SiLU() 101 | # profile(x, [m1, m2], n=100) # profile speed over 100 iterations 102 | 103 | device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') 104 | x = x.to(device) 105 | x.requires_grad = True 106 | print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '') 107 | print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}") 108 | for m in ops if isinstance(ops, list) else [ops]: 109 | m = m.to(device) if hasattr(m, 'to') else m # device 110 | m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type 111 | dtf, dtb, t = 0., 0., [0., 0., 0.] 
# dt forward, backward 112 | try: 113 | flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS 114 | except Exception: # thop missing or FLOPS estimation unsupported for this op 115 | flops = 0 116 | 117 | for _ in range(n): 118 | t[0] = time_synchronized() 119 | y = m(x) 120 | t[1] = time_synchronized() 121 | try: 122 | _ = y.sum().backward() 123 | t[2] = time_synchronized() 124 | except Exception: # no backward method 125 | t[2] = float('nan') 126 | dtf += (t[1] - t[0]) * 1000 / n # ms per op forward 127 | dtb += (t[2] - t[1]) * 1000 / n # ms per op backward 128 | 129 | s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' 130 | s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list' 131 | p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters 132 | print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}') 133 | 134 | 135 | def is_parallel(model): 136 | return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) 137 | 138 | 139 | def intersect_dicts(da, db, exclude=()): 140 | # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values 141 | return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} 142 | 143 | 144 | def initialize_weights(model): 145 | for m in model.modules(): 146 | t = type(m) 147 | if t is nn.Conv2d: 148 | pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') 149 | elif t is nn.BatchNorm2d: 150 | m.eps = 1e-3 151 | m.momentum = 0.03 152 | elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]: 153 | m.inplace = True 154 | 155 | 156 | def find_modules(model, mclass=nn.Conv2d): 157 | # Finds layer indices matching module class 'mclass' 158 | return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)] 159 | 160 | 161 | def sparsity(model): 162 | # Return global model sparsity 163 | a, b = 0., 0. 164 | for p in model.parameters(): 165 | a += p.numel() 166 | b += (p == 0).sum() 167 | return b / a 168 | 169 | 170 | def prune(model, amount=0.3): 171 | # Prune model to requested global sparsity 172 | import torch.nn.utils.prune as prune 173 | print('Pruning model... 
', end='') 174 | for name, m in model.named_modules(): 175 | if isinstance(m, nn.Conv2d): 176 | prune.l1_unstructured(m, name='weight', amount=amount) # prune 177 | prune.remove(m, 'weight') # make permanent 178 | print(' %.3g global sparsity' % sparsity(model)) 179 | 180 | 181 | def fuse_conv_and_bn(conv, bn): 182 | # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ 183 | fusedconv = nn.Conv2d(conv.in_channels, 184 | conv.out_channels, 185 | kernel_size=conv.kernel_size, 186 | stride=conv.stride, 187 | padding=conv.padding, 188 | groups=conv.groups, 189 | bias=True).requires_grad_(False).to(conv.weight.device) 190 | 191 | # prepare filters 192 | w_conv = conv.weight.clone().view(conv.out_channels, -1) 193 | w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) 194 | fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) 195 | 196 | # prepare spatial bias 197 | b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias 198 | b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) 199 | fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) 200 | 201 | return fusedconv 202 | 203 | 204 | def model_info(model, verbose=False, img_size=640): 205 | # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320] 206 | n_p = sum(x.numel() for x in model.parameters()) # number parameters 207 | n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients 208 | if verbose: 209 | print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma')) 210 | for i, (name, p) in enumerate(model.named_parameters()): 211 | name = name.replace('module_list.', '') 212 | print('%5g %40s %9s %12g %20s %10.3g %10.3g' % 213 | (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) 214 | 215 | try: # FLOPS 216 | from thop import profile 217 | stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 218 | img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input 219 | flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS 220 | img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float 221 | fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS 222 | except (ImportError, Exception): 223 | fs = '' 224 | 225 | logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}") 226 | 227 | 228 | def load_classifier(name='resnet101', n=2): 229 | # Loads a pretrained model reshaped to n-class output 230 | model = torchvision.models.__dict__[name](pretrained=True) 231 | 232 | # ResNet model properties 233 | # input_size = [3, 224, 224] 234 | # input_space = 'RGB' 235 | # input_range = [0, 1] 236 | # mean = [0.485, 0.456, 0.406] 237 | # std = [0.229, 0.224, 0.225] 238 | 239 | # Reshape output to n classes 240 | filters = model.fc.weight.shape[1] 241 | model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True) 242 | model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True) 243 | model.fc.out_features = n 244 | return model 245 | 246 | 247 | def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) 248 | # scales img(bs,3,y,x) by ratio constrained to gs-multiple 249 | if 
ratio == 1.0: 250 | return img 251 | else: 252 | h, w = img.shape[2:] 253 | s = (int(h * ratio), int(w * ratio)) # new size 254 | img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize 255 | if not same_shape: # pad/crop img 256 | h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)] 257 | return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean 258 | 259 | 260 | def copy_attr(a, b, include=(), exclude=()): 261 | # Copy attributes from b to a, options to only include [...] and to exclude [...] 262 | for k, v in b.__dict__.items(): 263 | if (len(include) and k not in include) or k.startswith('_') or k in exclude: 264 | continue 265 | else: 266 | setattr(a, k, v) 267 | 268 | 269 | class ModelEMA: 270 | """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models 271 | Keep a moving average of everything in the model state_dict (parameters and buffers). 272 | This is intended to allow functionality like 273 | https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage 274 | A smoothed version of the weights is necessary for some training schemes to perform well. 275 | This class is sensitive where it is initialized in the sequence of model init, 276 | GPU assignment and distributed training wrappers. 277 | """ 278 | 279 | def __init__(self, model, decay=0.9999, updates=0): 280 | # Create EMA 281 | self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA 282 | # if next(model.parameters()).device.type != 'cpu': 283 | # self.ema.half() # FP16 EMA 284 | self.updates = updates # number of EMA updates 285 | self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs) 286 | for p in self.ema.parameters(): 287 | p.requires_grad_(False) 288 | 289 | def update(self, model): 290 | # Update EMA parameters 291 | with torch.no_grad(): 292 | self.updates += 1 293 | d = self.decay(self.updates) 294 | 295 | msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict 296 | for k, v in self.ema.state_dict().items(): 297 | if v.dtype.is_floating_point: 298 | v *= d 299 | v += (1. - d) * msd[k].detach() 300 | 301 | def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): 302 | # Update EMA attributes 303 | copy_attr(self.ema, model, include, exclude) 304 | 305 | 306 | class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): 307 | def _check_input_dim(self, input): 308 | # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc 309 | # is this method that is overwritten by the sub-class 310 | # This original goal of this method was for tensor sanity checks 311 | # If you're ok bypassing those sanity checks (eg. 
if you trust your inference 312 | # to provide the right dimensional inputs), then you can just use this method 313 | # for easy conversion from SyncBatchNorm 314 | # (unfortunately, SyncBatchNorm does not store the original class - if it did 315 | # we could return the one that was originally created) 316 | return 317 | 318 | def revert_sync_batchnorm(module): 319 | # this is very similar to the function that it is trying to revert: 320 | # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679 321 | module_output = module 322 | if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm): 323 | new_cls = BatchNormXd 324 | module_output = BatchNormXd(module.num_features, 325 | module.eps, module.momentum, 326 | module.affine, 327 | module.track_running_stats) 328 | if module.affine: 329 | with torch.no_grad(): 330 | module_output.weight = module.weight 331 | module_output.bias = module.bias 332 | module_output.running_mean = module.running_mean 333 | module_output.running_var = module.running_var 334 | module_output.num_batches_tracked = module.num_batches_tracked 335 | if hasattr(module, "qconfig"): 336 | module_output.qconfig = module.qconfig 337 | for name, child in module.named_children(): 338 | module_output.add_module(name, revert_sync_batchnorm(child)) 339 | del module 340 | return module_output 341 | 342 | 343 | class TracedModel(nn.Module): 344 | 345 | def __init__(self, model=None, device=None, img_size=(640,640)): 346 | super(TracedModel, self).__init__() 347 | 348 | print(" Convert model to Traced-model... ") 349 | self.stride = model.stride 350 | self.names = model.names 351 | self.model = model 352 | 353 | self.model = revert_sync_batchnorm(self.model) 354 | self.model.to('cpu') 355 | self.model.eval() 356 | 357 | self.detect_layer = self.model.model[-1] 358 | self.model.traced = True 359 | 360 | rand_example = torch.rand(1, 3, *img_size) if isinstance(img_size, (list, tuple)) else torch.rand(1, 3, img_size, img_size) # accept (h, w) tuple or single int 361 | 362 | traced_script_module = torch.jit.trace(self.model, rand_example, strict=False) 363 | #traced_script_module = torch.jit.script(self.model) 364 | traced_script_module.save("traced_model.pt") 365 | print(" traced_script_module saved! ") 366 | self.model = traced_script_module 367 | self.model.to(device) 368 | self.detect_layer.to(device) 369 | print(" model is traced! \n") 370 | 371 | def forward(self, x, augment=False, profile=False): 372 | out = self.model(x) 373 | out = self.detect_layer(out) 374 | return out --------------------------------------------------------------------------------
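A minimal usage sketch for the ModelEMA and fuse_conv_and_bn helpers in utils/torch_utils.py above, applied to a toy Conv-BN-ReLU module; the toy network, shapes and loop count are illustrative assumptions only, not part of the repository:

import torch
import torch.nn as nn

from utils.torch_utils import ModelEMA, fuse_conv_and_bn

# Toy block standing in for a real detection model (illustrative only)
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())

# EMA shadow copy: call update() once per optimizer step during training
ema = ModelEMA(model)
for _ in range(3):  # placeholder for a training loop
    ema.update(model)

# Fold the BatchNorm into the preceding Conv2d for faster inference
model.eval()  # BatchNorm must use running statistics for the fusion to be exact
fused = nn.Sequential(fuse_conv_and_bn(model[0], model[1]), model[2]).eval()
x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    assert torch.allclose(model(x), fused(x), atol=1e-5)  # fused output matches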