├── LICENSE
├── README.md
├── data
│   ├── coco.yaml
│   ├── hyp.finetune.yaml
│   └── hyp.scratch.yaml
├── detect.py
├── models
│   ├── __init__.py
│   ├── common.py
│   ├── experimental.py
│   ├── export.py
│   ├── yolo.py
│   ├── yolov4-csp.yaml
│   ├── yolov4-p5.yaml
│   ├── yolov4-p6.yaml
│   └── yolov4-p7.yaml
├── test.py
├── train.py
└── utils
    ├── __init__.py
    ├── activations.py
    ├── datasets.py
    ├── general.py
    ├── google_utils.py
    └── torch_utils.py
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 |  Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 |     <one line to give the program's name and a brief idea of what it does.>
635 |     Copyright (C) <year>  <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 |     along with this program.  If not, see <https://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 |     <program>  Copyright (C) <year>  <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <https://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <https://www.gnu.org/licenses/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # YOLOv4-large
2 |
3 | This is the implementation of "[Scaled-YOLOv4: Scaling Cross Stage Partial Network](https://arxiv.org/abs/2011.08036)" using the PyTorch framework.
4 |
5 | * [YOLOv4-CSP](https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-csp)
6 | * [YOLOv4-tiny](https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-tiny)
7 | * [YOLOv4-large](https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-large)
8 |
9 | | Model | Test Size | AP<sup>test</sup> | AP<sub>50</sub><sup>test</sup> | AP<sub>75</sub><sup>test</sup> | AP<sub>S</sub><sup>test</sup> | AP<sub>M</sub><sup>test</sup> | AP<sub>L</sub><sup>test</sup> | batch1 throughput |
10 | | :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
11 | | **YOLOv4-P5** | 896 | **51.4%** | **69.9%** | **56.3%** | **33.1%** | **55.4%** | **62.4%** | 41 *fps* |
12 | | **YOLOv4-P5** | TTA | **52.5%** | **70.3%** | **58.0%** | **36.0%** | **52.4%** | **62.3%** | - |
13 | | | | | | | | |
14 | | **YOLOv4-P6** | 1280 | **54.3%** | **72.3%** | **59.5%** | **36.6%** | **58.2%** | **65.5%** | 30 *fps* |
15 | | **YOLOv4-P6** | TTA | **54.9%** | **72.6%** | **60.2%** | **37.4%** | **58.8%** | **66.7%** | - |
16 | | | | | | | | |
17 | | **YOLOv4-P7** | 1536 | **55.4%** | **73.3%** | **60.7%** | **38.1%** | **59.5%** | **67.4%** | 15 *fps* |
18 | | **YOLOv4-P7** | TTA | **55.8%** | **73.2%** | **61.2%** | **38.8%** | **60.1%** | **68.2%** | - |
19 | | | | | | | | |
20 |
21 | | Model | Test Size | AP<sup>val</sup> | AP<sub>50</sub><sup>val</sup> | AP<sub>75</sub><sup>val</sup> | AP<sub>S</sub><sup>val</sup> | AP<sub>M</sub><sup>val</sup> | AP<sub>L</sub><sup>val</sup> | weights |
22 | | :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
23 | | **YOLOv4-P5** | 896 | **51.2%** | **69.8%** | **56.2%** | **35.0%** | **56.2%** | **64.0%** | [`yolov4-p5.pt`](https://github.com/WongKinYiu/ScaledYOLOv4/releases/download/weights/yolov4-p5.pt) |
24 | | **YOLOv4-P5** | TTA | **52.5%** | **70.2%** | **57.8%** | **38.5%** | **57.2%** | **64.0%** | - |
25 | | **YOLOv4-P5** (+BoF) | 896 | **51.7%** | **70.3%** | **56.7%** | **35.9%** | **56.7%** | **64.3%** | [`yolov4-p5_.pt`](https://github.com/WongKinYiu/ScaledYOLOv4/releases/download/weights/yolov4-p5_.pt) |
26 | | **YOLOv4-P5** (+BoF) | TTA | **52.8%** | **70.6%** | **58.3%** | **38.8%** | **57.4%** | **64.4%** | - |
27 | | | | | | | | | |
28 | | **YOLOv4-P6** | 1280 | **53.9%** | **72.0%** | **59.0%** | **39.3%** | **58.3%** | **66.6%** | [`yolov4-p6.pt`](https://github.com/WongKinYiu/ScaledYOLOv4/releases/download/weights/yolov4-p6.pt) |
29 | | **YOLOv4-P6** | TTA | **54.4%** | **72.3%** | **59.6%** | **39.8%** | **58.9%** | **67.6%** | - |
30 | | **YOLOv4-P6** (+BoF) | 1280 | **54.4%** | **72.7%** | **59.5%** | **39.5%** | **58.9%** | **67.3%** | [`yolov4-p6_.pt`](https://github.com/WongKinYiu/ScaledYOLOv4/releases/download/weights/yolov4-p6_.pt) |
31 | | **YOLOv4-P6** (+BoF) | TTA | **54.8%** | **72.6%** | **60.0%** | **40.6%** | **59.1%** | **68.2%** | - |
32 | | **YOLOv4-P6** (+BoF*) | 1280 | **54.7%** | **72.9%** | **60.0%** | **39.4%** | **59.2%** | **68.3%** | |
33 | | **YOLOv4-P6** (+BoF*) | TTA | **55.3%** | **73.2%** | **60.8%** | **40.5%** | **59.9%** | **69.4%** | - |
34 | | | | | | | | | |
35 | | **YOLOv4-P7** | 1536 | **55.0%** | **72.9%** | **60.2%** | **39.8%** | **59.9%** | **68.4%** | [`yolov4-p7.pt`](https://github.com/WongKinYiu/ScaledYOLOv4/releases/download/weights/yolov4-p7.pt) |
36 | | **YOLOv4-P7** | TTA | **55.5%** | **72.9%** | **60.8%** | **41.1%** | **60.3%** | **68.9%** | - |
37 | | | | | | | | | |
38 |
39 | | Model | Test Size | AP<sup>val</sup> | AP<sub>50</sub><sup>val</sup> | AP<sub>75</sub><sup>val</sup> | AP<sub>S</sub><sup>val</sup> | AP<sub>M</sub><sup>val</sup> | AP<sub>L</sub><sup>val</sup> |
40 | | :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
41 | | **YOLOv4-P6-attention** | 1280 | **54.3%** | **72.3%** | **59.6%** | **38.7%** | **58.9%** | **66.6%** |
42 |
43 | ## Installation
44 |
45 | ```
46 | # create the docker container; you can change the shared-memory size (--shm-size) if you have more memory.
47 | nvidia-docker run --name yolov4_csp -it -v your_coco_path/:/coco/ -v your_code_path/:/yolo --shm-size=64g nvcr.io/nvidia/pytorch:20.06-py3
48 |
49 | # install mish-cuda; if you use a different PyTorch version, try https://github.com/thomasbrandon/mish-cuda
50 | cd /
51 | git clone https://github.com/JunnYu/mish-cuda
52 | cd mish-cuda
53 | python setup.py build install
54 |
55 | # go to code folder
56 | cd /yolo
57 | ```
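
If building mish-cuda fails on your setup, a plain-PyTorch fallback for the `Mish` activation imported in `models/common.py` is sketched below; it is slower than the CUDA kernel but numerically equivalent (substituting it is an assumption, not something this repo ships):

```python
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    # Mish activation, x * tanh(softplus(x)): https://arxiv.org/abs/1908.08681
    def forward(self, x):
        return x * F.softplus(x).tanh()
```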
58 |
59 | ## Testing
60 |
61 | ```
62 | # download {yolov4-p5.pt, yolov4-p6.pt, yolov4-p7.pt} and put them in the /yolo/weights/ folder.
63 | python test.py --img 896 --conf 0.001 --batch 8 --device 0 --data coco.yaml --weights weights/yolov4-p5.pt
64 | python test.py --img 1280 --conf 0.001 --batch 8 --device 0 --data coco.yaml --weights weights/yolov4-p6.pt
65 | python test.py --img 1536 --conf 0.001 --batch 8 --device 0 --data coco.yaml --weights weights/yolov4-p7.pt
66 | ```
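
`test.py` reports the standard COCO metrics, and the summaries below are the output of pycocotools' `summarize()`. As a rough sketch of that scoring step (both JSON file names here are assumptions), the evaluation amounts to:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO('annotations/instances_val2017.json')     # ground-truth annotations (path assumed)
dt = gt.loadRes('detections_val2017_results.json')  # detections JSON (file name assumed)
ev = COCOeval(gt, dt, 'bbox')
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP/AR lines like those below
```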
67 |
68 | You will get the following results:
69 | ```
70 | # yolov4-p5
71 | Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.51244
72 | Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.69771
73 | Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.56180
74 | Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.35021
75 | Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.56247
76 | Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.63983
77 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.38530
78 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.64048
79 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.69801
80 | Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.55487
81 | Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.74368
82 | Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.82826
83 | ```
84 | ```
85 | # yolov4-p6
86 | Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.53857
87 | Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.72015
88 | Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.59025
89 | Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.39285
90 | Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.58283
91 | Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.66580
92 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.39552
93 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.66504
94 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.72141
95 | Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.59193
96 | Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.75844
97 | Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.83981
98 | ```
99 | ```
100 | # yolov4-p7
101 | Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.55046
102 | Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.72925
103 | Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.60224
104 | Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.39836
105 | Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.59854
106 | Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.68405
107 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.40256
108 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.66929
109 | Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.72943
110 | Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.59943
111 | Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.76873
112 | Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.84460
113 | ```
114 |
115 | ## Training
116 |
117 | We use multiple GPUs for training.
118 | {YOLOv4-P5, YOLOv4-P6, YOLOv4-P7} use input resolutions {896, 1280, 1536} for training, respectively.
119 | ```
120 | # yolov4-p5
121 | python -m torch.distributed.launch --nproc_per_node 4 train.py --batch-size 64 --img 896 896 --data coco.yaml --cfg yolov4-p5.yaml --weights '' --sync-bn --device 0,1,2,3 --name yolov4-p5
122 | python -m torch.distributed.launch --nproc_per_node 4 train.py --batch-size 64 --img 896 896 --data coco.yaml --cfg yolov4-p5.yaml --weights 'runs/exp0_yolov4-p5/weights/last_298.pt' --sync-bn --device 0,1,2,3 --name yolov4-p5-tune --hyp 'data/hyp.finetune.yaml' --epochs 450 --resume
123 | ```
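
`--sync-bn` synchronizes batch-norm statistics across GPUs, which matters at these resolutions because the per-GPU batch is small. A minimal sketch of the conversion it triggers (train.py performs the equivalent before wrapping the model in DDP; the stand-in model is only illustrative):

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))  # stand-in model
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)          # BatchNorm2d -> SyncBatchNorm
```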
124 |
125 | If your training process gets stuck, it is likely due to a Python bug.
126 | Just press `Ctrl+C` to stop training and resume it with:
127 | ```
128 | # yolov4-p5
129 | python -m torch.distributed.launch --nproc_per_node 4 train.py --batch-size 64 --img 896 896 --data coco.yaml --cfg yolov4-p5.yaml --weights 'runs/exp0_yolov4-p5/weights/last.pt' --sync-bn --device 0,1,2,3 --name yolov4-p5 --resume
130 | ```
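
Conceptually, `--resume` reloads the last checkpoint and continues from the stored epoch. A sketch under the ultralytics-style checkpoint layout this code base inherits (treat the key names as assumptions):

```python
import torch

ckpt = torch.load('runs/exp0_yolov4-p5/weights/last.pt', map_location='cpu')
model = ckpt['model'].float()        # the model is checkpointed in FP16
start_epoch = ckpt['epoch'] + 1      # continue from the next epoch
best_fitness = ckpt['best_fitness']  # running best of the mAP-weighted fitness
# optimizer state is restored the same way from ckpt['optimizer']
```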
131 |
132 | ## Citation
133 |
134 | ```
135 | @InProceedings{Wang_2021_CVPR,
136 | author = {Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
137 | title = {{Scaled-YOLOv4}: Scaling Cross Stage Partial Network},
138 | booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
139 | month = {June},
140 | year = {2021},
141 | pages = {13029-13038}
142 | }
143 | ```
144 |
145 | ## Acknowledgements
146 |
148 |
149 | * [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)
150 | * [https://github.com/WongKinYiu/PyTorch_YOLOv4](https://github.com/WongKinYiu/PyTorch_YOLOv4)
151 | * [https://github.com/ultralytics/yolov3](https://github.com/ultralytics/yolov3)
152 | * [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)
153 |
154 |
155 |
--------------------------------------------------------------------------------
/data/coco.yaml:
--------------------------------------------------------------------------------
1 | # train and val datasets (image directory or *.txt file with image paths)
2 | train: ../coco/train2017.txt # 118k images
3 | val: ../coco/val2017.txt # 5k images
4 | test: ../coco/testdev2017.txt # 20k images for submission to https://competitions.codalab.org/competitions/20794
5 |
6 | # number of classes
7 | nc: 80
8 |
9 | # class names
10 | names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
11 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
12 | 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
13 | 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
14 | 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
15 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
16 | 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
17 | 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
18 | 'hair drier', 'toothbrush']
19 |
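A minimal sketch of how training code consumes this file (key names as defined above; the consistency check is illustrative):

```python
import yaml

with open('data/coco.yaml') as f:
    data = yaml.safe_load(f)
assert data['nc'] == len(data['names'])  # class count must match the names list
train_path, val_path = data['train'], data['val']
```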
--------------------------------------------------------------------------------
/data/hyp.finetune.yaml:
--------------------------------------------------------------------------------
1 | lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
2 | momentum: 0.937 # SGD momentum/Adam beta1
3 | weight_decay: 0.0005 # optimizer weight decay 5e-4
4 | giou: 0.05 # GIoU loss gain
5 | cls: 0.5 # cls loss gain
6 | cls_pw: 1.0 # cls BCELoss positive_weight
7 | obj: 1.0 # obj loss gain (scale with pixels)
8 | obj_pw: 1.0 # obj BCELoss positive_weight
9 | iou_t: 0.20 # IoU training threshold
10 | anchor_t: 4.0 # anchor-multiple threshold
11 | fl_gamma: 0.0 # focal loss gamma (EfficientDet default gamma=1.5)
12 | hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
13 | hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
14 | hsv_v: 0.4 # image HSV-Value augmentation (fraction)
15 | degrees: 0.0 # image rotation (+/- deg)
16 | translate: 0.5 # image translation (+/- fraction)
17 | scale: 0.8 # image scale (+/- gain)
18 | shear: 0.0 # image shear (+/- deg)
19 | perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
20 | flipud: 0.0 # image flip up-down (probability)
21 | fliplr: 0.5 # image flip left-right (probability)
22 | mixup: 0.2 # image mixup (probability)
23 |
--------------------------------------------------------------------------------
/data/hyp.scratch.yaml:
--------------------------------------------------------------------------------
1 | # Hyperparameters for COCO training from scratch
2 | # python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
3 | # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
4 |
5 |
6 | lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
7 | momentum: 0.937 # SGD momentum/Adam beta1
8 | weight_decay: 0.0005 # optimizer weight decay 5e-4
9 | giou: 0.05 # GIoU loss gain
10 | cls: 0.5 # cls loss gain
11 | cls_pw: 1.0 # cls BCELoss positive_weight
12 | obj: 1.0 # obj loss gain (scale with pixels)
13 | obj_pw: 1.0 # obj BCELoss positive_weight
14 | iou_t: 0.20 # IoU training threshold
15 | anchor_t: 4.0 # anchor-multiple threshold
16 | fl_gamma: 0.0 # focal loss gamma (EfficientDet default gamma=1.5)
17 | hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
18 | hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
19 | hsv_v: 0.4 # image HSV-Value augmentation (fraction)
20 | degrees: 0.0 # image rotation (+/- deg)
21 | translate: 0.5 # image translation (+/- fraction)
22 | scale: 0.5 # image scale (+/- gain)
23 | shear: 0.0 # image shear (+/- deg)
24 | perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
25 | flipud: 0.0 # image flip up-down (probability)
26 | fliplr: 0.5 # image flip left-right (probability)
27 | mixup: 0.0 # image mixup (probability)
28 |
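The two hyperparameter files differ mainly in augmentation strength: hyp.finetune.yaml raises `scale` to 0.8 and enables `mixup` (0.2). As a sketch of how `scale` and `translate` parametrize the random affine augmentation (mirroring the random_perspective-style logic in utils/datasets.py; the exact formula is an assumption about this code base's lineage):

```python
import random

def sample_affine(scale=0.5, translate=0.5, img_size=896):
    s = random.uniform(1 - scale, 1 + scale)                          # zoom by +/- scale
    tx = random.uniform(0.5 - translate, 0.5 + translate) * img_size  # x shift in pixels
    ty = random.uniform(0.5 - translate, 0.5 + translate) * img_size  # y shift in pixels
    return s, tx, ty
```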
--------------------------------------------------------------------------------
/detect.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import os
3 | import platform
4 | import shutil
5 | import time
6 | from pathlib import Path
7 |
8 | import cv2
9 | import torch
10 | import torch.backends.cudnn as cudnn
11 | from numpy import random
12 |
13 | from models.experimental import attempt_load
14 | from utils.datasets import LoadStreams, LoadImages
15 | from utils.general import (
16 | check_img_size, non_max_suppression, apply_classifier, scale_coords, xyxy2xywh, plot_one_box, strip_optimizer)
17 | from utils.torch_utils import select_device, load_classifier, time_synchronized
18 |
19 |
20 | def detect(save_img=False):
21 | out, source, weights, view_img, save_txt, imgsz = \
22 | opt.output, opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size
23 | webcam = source == '0' or source.startswith('rtsp') or source.startswith('http') or source.endswith('.txt')
24 |
25 | # Initialize
26 | device = select_device(opt.device)
27 | if os.path.exists(out):
28 | shutil.rmtree(out) # delete output folder
29 | os.makedirs(out) # make new output folder
30 | half = device.type != 'cpu' # half precision only supported on CUDA
31 |
32 | # Load model
33 | model = attempt_load(weights, map_location=device) # load FP32 model
34 | imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size
35 | if half:
36 | model.half() # to FP16
37 |
38 | # Second-stage classifier
39 | classify = False
40 | if classify:
41 | modelc = load_classifier(name='resnet101', n=2) # initialize
42 | modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']) # load weights
43 | modelc.to(device).eval()
44 |
45 | # Set Dataloader
46 | vid_path, vid_writer = None, None
47 | if webcam:
48 | view_img = True
49 | cudnn.benchmark = True # set True to speed up constant image size inference
50 | dataset = LoadStreams(source, img_size=imgsz)
51 | else:
52 | save_img = True
53 | dataset = LoadImages(source, img_size=imgsz)
54 |
55 | # Get names and colors
56 | names = model.module.names if hasattr(model, 'module') else model.names
57 | colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))]
58 |
59 | # Run inference
60 | t0 = time.time()
61 | img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img
62 | _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once
63 | for path, img, im0s, vid_cap in dataset:
64 | img = torch.from_numpy(img).to(device)
65 | img = img.half() if half else img.float() # uint8 to fp16/32
66 | img /= 255.0 # 0 - 255 to 0.0 - 1.0
67 | if img.ndimension() == 3:
68 | img = img.unsqueeze(0)
69 |
70 | # Inference
71 | t1 = time_synchronized()
72 | pred = model(img, augment=opt.augment)[0]
73 |
74 | # Apply NMS
75 | pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
76 | t2 = time_synchronized()
77 |
78 | # Apply Classifier
79 | if classify:
80 | pred = apply_classifier(pred, modelc, img, im0s)
81 |
82 | # Process detections
83 | for i, det in enumerate(pred): # detections per image
84 | if webcam: # batch_size >= 1
85 | p, s, im0 = path[i], '%g: ' % i, im0s[i].copy()
86 | else:
87 | p, s, im0 = path, '', im0s
88 |
89 | save_path = str(Path(out) / Path(p).name)
90 | txt_path = str(Path(out) / Path(p).stem) + ('_%g' % dataset.frame if dataset.mode == 'video' else '')
91 | s += '%gx%g ' % img.shape[2:] # print string
92 | gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
93 | if det is not None and len(det):
94 | # Rescale boxes from img_size to im0 size
95 | det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
96 |
97 | # Print results
98 | for c in det[:, -1].unique():
99 | n = (det[:, -1] == c).sum() # detections per class
100 | s += '%g %ss, ' % (n, names[int(c)]) # add to string
101 |
102 | # Write results
103 | for *xyxy, conf, cls in det:
104 | if save_txt: # Write to file
105 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
106 | with open(txt_path + '.txt', 'a') as f:
107 | f.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format
108 |
109 | if save_img or view_img: # Add bbox to image
110 | label = '%s' % (names[int(cls)])
111 | plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=2)
112 |
113 | # Print time (inference + NMS)
114 | print('%sDone. (%.3fs)' % (s, t2 - t1))
115 |
116 | # Stream results
117 | if view_img:
118 | cv2.imshow(p, im0)
119 | if cv2.waitKey(1) == ord('q'): # q to quit
120 | raise StopIteration
121 |
122 | # Save results (image with detections)
123 | if save_img:
124 | if dataset.mode == 'images':
125 | cv2.imwrite(save_path, im0)
126 | else:
127 | if vid_path != save_path: # new video
128 | vid_path = save_path
129 | if isinstance(vid_writer, cv2.VideoWriter):
130 | vid_writer.release() # release previous video writer
131 |
132 | fourcc = 'mp4v' # output video codec
133 | fps = vid_cap.get(cv2.CAP_PROP_FPS)
134 | w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
135 | h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
136 | vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))
137 | vid_writer.write(im0)
138 |
139 | if save_txt or save_img:
140 | print('Results saved to %s' % Path(out))
141 |         if platform == 'darwin' and not opt.update: # macOS
142 | os.system('open ' + save_path)
143 |
144 | print('Done. (%.3fs)' % (time.time() - t0))
145 |
146 |
147 | if __name__ == '__main__':
148 | parser = argparse.ArgumentParser()
149 | parser.add_argument('--weights', nargs='+', type=str, default='yolov4-p5.pt', help='model.pt path(s)')
150 | parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam
151 | parser.add_argument('--output', type=str, default='inference/output', help='output folder') # output folder
152 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
153 | parser.add_argument('--conf-thres', type=float, default=0.4, help='object confidence threshold')
154 | parser.add_argument('--iou-thres', type=float, default=0.5, help='IOU threshold for NMS')
155 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
156 | parser.add_argument('--view-img', action='store_true', help='display results')
157 | parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
158 |     parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
159 | parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
160 | parser.add_argument('--augment', action='store_true', help='augmented inference')
161 | parser.add_argument('--update', action='store_true', help='update all models')
162 | opt = parser.parse_args()
163 | print(opt)
164 |
165 | with torch.no_grad():
166 | if opt.update: # update all models (to fix SourceChangeWarning)
167 | for opt.weights in ['']:
168 | detect()
169 | strip_optimizer(opt.weights)
170 | else:
171 | detect()
172 |
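173 | # Usage sketch (paths and sizes below are illustrative, not shipped with the repo):
174 | #   python detect.py --weights yolov4-p5.pt --source inference/images --img-size 896
175 | #   python detect.py --source 0 --view-img              # '0' selects the webcam branch
176 | #   python detect.py --source rtsp://host/stream        # rtsp/http sources use LoadStreams
177 | # check_img_size() adjusts --img-size up to a multiple of the model's max stride.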
--------------------------------------------------------------------------------
/models/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/models/common.py:
--------------------------------------------------------------------------------
1 | # This file contains modules common to various models
2 | import math
3 |
4 | import torch
5 | import torch.nn as nn
6 |
7 | from mish_cuda import MishCuda as Mish
8 |
9 |
10 | def autopad(k, p=None): # kernel, padding
11 | # Pad to 'same'
12 | if p is None:
13 | p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
14 | return p
15 |
16 |
17 | def DWConv(c1, c2, k=1, s=1, act=True):
18 | # Depthwise convolution
19 | return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
20 |
21 |
22 | class Conv(nn.Module):
23 | # Standard convolution
24 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
25 | super(Conv, self).__init__()
26 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
27 | self.bn = nn.BatchNorm2d(c2)
28 | self.act = Mish() if act else nn.Identity()
29 |
30 | def forward(self, x):
31 | return self.act(self.bn(self.conv(x)))
32 |
33 | def fuseforward(self, x):
34 | return self.act(self.conv(x))
35 |
36 |
37 | class Bottleneck(nn.Module):
38 | # Standard bottleneck
39 | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
40 | super(Bottleneck, self).__init__()
41 | c_ = int(c2 * e) # hidden channels
42 | self.cv1 = Conv(c1, c_, 1, 1)
43 | self.cv2 = Conv(c_, c2, 3, 1, g=g)
44 | self.add = shortcut and c1 == c2
45 |
46 | def forward(self, x):
47 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
48 |
49 |
50 | class BottleneckCSP(nn.Module):
51 | # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
52 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
53 | super(BottleneckCSP, self).__init__()
54 | c_ = int(c2 * e) # hidden channels
55 | self.cv1 = Conv(c1, c_, 1, 1)
56 | self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
57 | self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
58 | self.cv4 = Conv(2 * c_, c2, 1, 1)
59 | self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
60 | self.act = Mish()
61 | self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
62 |
63 | def forward(self, x):
64 | y1 = self.cv3(self.m(self.cv1(x)))
65 | y2 = self.cv2(x)
66 | return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
67 |
68 |
69 | class BottleneckCSP2(nn.Module):
70 | # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
71 | def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
72 | super(BottleneckCSP2, self).__init__()
73 | c_ = int(c2) # hidden channels
74 | self.cv1 = Conv(c1, c_, 1, 1)
75 | self.cv2 = nn.Conv2d(c_, c_, 1, 1, bias=False)
76 | self.cv3 = Conv(2 * c_, c2, 1, 1)
77 | self.bn = nn.BatchNorm2d(2 * c_)
78 | self.act = Mish()
79 | self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
80 |
81 | def forward(self, x):
82 | x1 = self.cv1(x)
83 | y1 = self.m(x1)
84 | y2 = self.cv2(x1)
85 | return self.cv3(self.act(self.bn(torch.cat((y1, y2), dim=1))))
86 |
87 |
88 | class VoVCSP(nn.Module):
89 | # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
90 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
91 | super(VoVCSP, self).__init__()
92 | c_ = int(c2) # hidden channels
93 | self.cv1 = Conv(c1//2, c_//2, 3, 1)
94 | self.cv2 = Conv(c_//2, c_//2, 3, 1)
95 | self.cv3 = Conv(c_, c2, 1, 1)
96 |
97 | def forward(self, x):
98 | _, x1 = x.chunk(2, dim=1)
99 | x1 = self.cv1(x1)
100 | x2 = self.cv2(x1)
101 | return self.cv3(torch.cat((x1,x2), dim=1))
102 |
103 |
104 | class SPP(nn.Module):
105 | # Spatial pyramid pooling layer used in YOLOv3-SPP
106 | def __init__(self, c1, c2, k=(5, 9, 13)):
107 | super(SPP, self).__init__()
108 | c_ = c1 // 2 # hidden channels
109 | self.cv1 = Conv(c1, c_, 1, 1)
110 | self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
111 | self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
112 |
113 | def forward(self, x):
114 | x = self.cv1(x)
115 | return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
116 |
117 |
118 | class SPPCSP(nn.Module):
119 | # CSP SPP https://github.com/WongKinYiu/CrossStagePartialNetworks
120 | def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
121 | super(SPPCSP, self).__init__()
122 | c_ = int(2 * c2 * e) # hidden channels
123 | self.cv1 = Conv(c1, c_, 1, 1)
124 | self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
125 | self.cv3 = Conv(c_, c_, 3, 1)
126 | self.cv4 = Conv(c_, c_, 1, 1)
127 | self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
128 | self.cv5 = Conv(4 * c_, c_, 1, 1)
129 | self.cv6 = Conv(c_, c_, 3, 1)
130 | self.bn = nn.BatchNorm2d(2 * c_)
131 | self.act = Mish()
132 | self.cv7 = Conv(2 * c_, c2, 1, 1)
133 |
134 | def forward(self, x):
135 | x1 = self.cv4(self.cv3(self.cv1(x)))
136 | y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))
137 | y2 = self.cv2(x)
138 | return self.cv7(self.act(self.bn(torch.cat((y1, y2), dim=1))))
139 |
140 |
141 | class MP(nn.Module):
142 |     # Max pooling downsample (kernel size = stride = k)
143 | def __init__(self, k=2):
144 | super(MP, self).__init__()
145 | self.m = nn.MaxPool2d(kernel_size=k, stride=k)
146 |
147 | def forward(self, x):
148 | return self.m(x)
149 |
150 |
151 | class Focus(nn.Module):
152 | # Focus wh information into c-space
153 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
154 | super(Focus, self).__init__()
155 | self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
156 |
157 | def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
158 | return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
159 |
160 |
161 | class Concat(nn.Module):
162 | # Concatenate a list of tensors along dimension
163 | def __init__(self, dimension=1):
164 | super(Concat, self).__init__()
165 | self.d = dimension
166 |
167 | def forward(self, x):
168 | return torch.cat(x, self.d)
169 |
170 |
171 | class Flatten(nn.Module):
172 | # Use after nn.AdaptiveAvgPool2d(1) to remove last 2 dimensions
173 | @staticmethod
174 | def forward(x):
175 | return x.view(x.size(0), -1)
176 |
177 |
178 | class Classify(nn.Module):
179 | # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
180 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
181 | super(Classify, self).__init__()
182 | self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
183 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) # to x(b,c2,1,1)
184 | self.flat = Flatten()
185 |
186 | def forward(self, x):
187 | z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
188 | return self.flat(self.conv(z)) # flatten to x(b,c2)
189 |
190 |
191 | # The HarDNet-style blocks below need only collections beyond the torch/nn
192 | # imports at the top of this file.
193 | import collections
194 | 
195 | 
196 |
197 |
198 | class CombConvLayer(nn.Sequential):
199 | def __init__(self, in_channels, out_channels, kernel=1, stride=1, dropout=0.1, bias=False):
200 | super().__init__()
201 | self.add_module('layer1',ConvLayer(in_channels, out_channels, kernel))
202 | self.add_module('layer2',DWConvLayer(out_channels, out_channels, stride=stride))
203 |
204 | def forward(self, x):
205 | return super().forward(x)
206 |
207 | class DWConvLayer(nn.Sequential):
208 | def __init__(self, in_channels, out_channels, stride=1, bias=False):
209 | super().__init__()
210 | out_ch = out_channels
211 |
212 | groups = in_channels
213 | kernel = 3
214 | #print(kernel, 'x', kernel, 'x', out_channels, 'x', out_channels, 'DepthWise')
215 |
216 |         self.add_module('dwconv', nn.Conv2d(groups, groups, kernel_size=kernel,
217 | stride=stride, padding=1, groups=groups, bias=bias))
218 | self.add_module('norm', nn.BatchNorm2d(groups))
219 | def forward(self, x):
220 | return super().forward(x)
221 |
222 | class ConvLayer(nn.Sequential):
223 | def __init__(self, in_channels, out_channels, kernel=3, stride=1, dropout=0.1, bias=False):
224 | super().__init__()
225 | out_ch = out_channels
226 | groups = 1
227 | #print(kernel, 'x', kernel, 'x', in_channels, 'x', out_channels)
228 | self.add_module('conv', nn.Conv2d(in_channels, out_ch, kernel_size=kernel,
229 | stride=stride, padding=kernel//2, groups=groups, bias=bias))
230 | self.add_module('norm', nn.BatchNorm2d(out_ch))
231 | self.add_module('relu', nn.ReLU6(True))
232 | def forward(self, x):
233 | return super().forward(x)
234 |
235 |
236 | class HarDBlock(nn.Module):
237 |     def get_link(self, layer, base_ch, growth_rate, grmul):  # HarDNet connectivity: layer links to layers 2^k steps back
238 | if layer == 0:
239 | return base_ch, 0, []
240 | out_channels = growth_rate
241 | link = []
242 | for i in range(10):
243 | dv = 2 ** i
244 | if layer % dv == 0:
245 | k = layer - dv
246 | link.append(k)
247 | if i > 0:
248 | out_channels *= grmul
249 | out_channels = int(int(out_channels + 1) / 2) * 2
250 | in_channels = 0
251 | for i in link:
252 | ch,_,_ = self.get_link(i, base_ch, growth_rate, grmul)
253 | in_channels += ch
254 | return out_channels, in_channels, link
255 |
256 | def get_out_ch(self):
257 | return self.out_channels
258 |
259 | def __init__(self, in_channels, growth_rate, grmul, n_layers, keepBase=False, residual_out=False, dwconv=False):
260 | super().__init__()
261 | self.keepBase = keepBase
262 | self.links = []
263 | layers_ = []
264 | self.out_channels = 0 # if upsample else in_channels
265 | for i in range(n_layers):
266 | outch, inch, link = self.get_link(i+1, in_channels, growth_rate, grmul)
267 | self.links.append(link)
268 | use_relu = residual_out
269 | if dwconv:
270 | layers_.append(CombConvLayer(inch, outch))
271 | else:
272 | layers_.append(Conv(inch, outch, k=3))
273 |
274 | if (i % 2 == 0) or (i == n_layers - 1):
275 | self.out_channels += outch
276 | #print("Blk out =",self.out_channels)
277 | self.layers = nn.ModuleList(layers_)
278 |
279 | def forward(self, x):
280 | layers_ = [x]
281 |
282 | for layer in range(len(self.layers)):
283 | link = self.links[layer]
284 | tin = []
285 | for i in link:
286 | tin.append(layers_[i])
287 | if len(tin) > 1:
288 | x = torch.cat(tin, 1)
289 | else:
290 | x = tin[0]
291 | out = self.layers[layer](x)
292 | layers_.append(out)
293 |
294 | t = len(layers_)
295 | out_ = []
296 | for i in range(t):
297 | if (i == 0 and self.keepBase) or (i == t-1) or (i%2 == 1):
298 | out_.append(layers_[i])
299 | out = torch.cat(out_, 1)
300 | return out
301 |
302 |
303 | class BRLayer(nn.Sequential):
304 | def __init__(self, in_channels):
305 | super().__init__()
306 |
307 | self.add_module('norm', nn.BatchNorm2d(in_channels))
308 | self.add_module('relu', nn.ReLU(True))
309 | def forward(self, x):
310 | return super().forward(x)
311 |
312 |
313 | class HarDBlock2(nn.Module):
314 | def get_link(self, layer, base_ch, growth_rate, grmul):
315 | if layer == 0:
316 | return base_ch, 0, []
317 | out_channels = growth_rate
318 | link = []
319 | for i in range(10):
320 | dv = 2 ** i
321 | if layer % dv == 0:
322 | k = layer - dv
323 | link.insert(0, k)
324 | if i > 0:
325 | out_channels *= grmul
326 | out_channels = int(int(out_channels + 1) / 2) * 2
327 | in_channels = 0
328 | for i in link:
329 | ch,_,_ = self.get_link(i, base_ch, growth_rate, grmul)
330 | in_channels += ch
331 | return out_channels, in_channels, link
332 |
333 | def get_out_ch(self):
334 | return self.out_channels
335 |
336 | def __init__(self, in_channels, growth_rate, grmul, n_layers, dwconv=False):
337 | super().__init__()
338 | self.links = []
339 | conv_layers_ = []
340 | bnrelu_layers_ = []
341 | self.layer_bias = []
342 | self.out_channels = 0
343 | self.out_partition = collections.defaultdict(list)
344 |
345 | for i in range(n_layers):
346 | outch, inch, link = self.get_link(i+1, in_channels, growth_rate, grmul)
347 | self.links.append(link)
348 | for j in link:
349 | self.out_partition[j].append(outch)
350 |
351 | cur_ch = in_channels
352 | for i in range(n_layers):
353 | accum_out_ch = sum( self.out_partition[i] )
354 | real_out_ch = self.out_partition[i][0]
355 | #print( self.links[i], self.out_partition[i], accum_out_ch)
356 | conv_layers_.append( nn.Conv2d(cur_ch, accum_out_ch, kernel_size=3, stride=1, padding=1, bias=True) )
357 | bnrelu_layers_.append( BRLayer(real_out_ch) )
358 | cur_ch = real_out_ch
359 | if (i % 2 == 0) or (i == n_layers - 1):
360 | self.out_channels += real_out_ch
361 | #print("Blk out =",self.out_channels)
362 |
363 | self.conv_layers = nn.ModuleList(conv_layers_)
364 | self.bnrelu_layers = nn.ModuleList(bnrelu_layers_)
365 |
366 | def transform(self, blk, trt=False):
367 | # Transform weight matrix from a pretrained HarDBlock v1
368 | in_ch = blk.layers[0][0].weight.shape[1]
369 | for i in range(len(self.conv_layers)):
370 | link = self.links[i].copy()
371 |             link_ch = [blk.layers[k-1][0].weight.shape[0] if k > 0 else
372 |                        blk.layers[0][0].weight.shape[1] for k in link]
373 | part = self.out_partition[i]
374 | w_src = blk.layers[i][0].weight
375 | b_src = blk.layers[i][0].bias
376 |
377 |
378 | self.conv_layers[i].weight[0:part[0], :, :,:] = w_src[:, 0:in_ch, :,:]
379 | self.layer_bias.append(b_src)
380 |
381 | if b_src is not None:
382 | if trt:
383 | self.conv_layers[i].bias[1:part[0]] = b_src[1:]
384 | self.conv_layers[i].bias[0] = b_src[0]
385 | self.conv_layers[i].bias[part[0]:] = 0
386 | self.layer_bias[i] = None
387 | else:
388 | #for pytorch, add bias with standalone tensor is more efficient than within conv.bias
389 | #this is because the amount of non-zero bias is small,
390 | #but if we use conv.bias, the number of bias will be much larger
391 | self.conv_layers[i].bias = None
392 | else:
393 | self.conv_layers[i].bias = None
394 |
395 | in_ch = part[0]
396 | link_ch.reverse()
397 | link.reverse()
398 | if len(link) > 1:
399 | for j in range(1, len(link) ):
400 | ly = link[j]
401 | part_id = self.out_partition[ly].index(part[0])
402 | chos = sum( self.out_partition[ly][0:part_id] )
403 | choe = chos + part[0]
404 | chis = sum( link_ch[0:j] )
405 | chie = chis + link_ch[j]
406 | self.conv_layers[ly].weight[chos:choe, :,:,:] = w_src[:, chis:chie,:,:]
407 |
408 | #update BatchNorm or remove it if there is no BatchNorm in the v1 block
409 | self.bnrelu_layers[i] = None
410 | if isinstance(blk.layers[i][1], nn.BatchNorm2d):
411 | self.bnrelu_layers[i] = nn.Sequential(
412 | blk.layers[i][1],
413 | blk.layers[i][2])
414 | else:
415 | self.bnrelu_layers[i] = blk.layers[i][1]
416 |
417 |
418 | def forward(self, x):
419 | layers_ = []
420 | outs_ = []
421 | xin = x
422 | for i in range(len(self.conv_layers)):
423 | link = self.links[i]
424 | part = self.out_partition[i]
425 |
426 | xout = self.conv_layers[i](xin)
427 | layers_.append(xout)
428 |
429 | xin = xout[:,0:part[0],:,:] if len(part) > 1 else xout
430 | #print(i)
431 | #if self.layer_bias[i] is not None:
432 | # xin += self.layer_bias[i].view(1,-1,1,1)
433 |
434 | if len(link) > 1:
435 | for j in range( len(link) - 1 ):
436 | ly = link[j]
437 | part_id = self.out_partition[ly].index(part[0])
438 | chs = sum( self.out_partition[ly][0:part_id] )
439 | che = chs + part[0]
440 |
441 | xin += layers_[ly][:,chs:che,:,:]
442 |
443 | xin = self.bnrelu_layers[i](xin)
444 |
445 | if i%2 == 0 or i == len(self.conv_layers)-1:
446 | outs_.append(xin)
447 |
448 | out = torch.cat(outs_, 1)
449 | return out
450 |
451 | class ConvSig(nn.Module):
452 | # Standard convolution
453 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
454 | super(ConvSig, self).__init__()
455 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
456 | self.act = nn.Sigmoid() if act else nn.Identity()
457 |
458 | def forward(self, x):
459 | return self.act(self.conv(x))
460 |
461 | def fuseforward(self, x):
462 | return self.act(self.conv(x))
463 |
464 |
465 | class ConvSqu(nn.Module):
466 | # Standard convolution
467 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
468 | super(ConvSqu, self).__init__()
469 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
470 | self.act = Mish() if act else nn.Identity()
471 |
472 | def forward(self, x):
473 | return self.act(self.conv(x))
474 |
475 | def fuseforward(self, x):
476 | return self.act(self.conv(x))
477 |
478 | '''
479 | class SE(nn.Module):
480 | # Squeeze-and-excitation block in https://arxiv.org/abs/1709.01507
481 | def __init__(self, c1, c2, n=1, shortcut=True, g=8, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
482 | super(SE, self).__init__()
483 | c_ = int(c2) # hidden channels
484 | self.avg_pool = nn.AdaptiveAvgPool2d(1)
485 | self.cs = ConvSqu(c1, c1//g, 1, 1)
486 | self.cvsig = ConvSig(c1//g, c1, 1, 1)
487 |
488 | def forward(self, x):
489 |         return x * self.cvsig(self.cs(self.avg_pool(x))).expand_as(x)
490 |
491 | class SAM(nn.Module):
492 | # SAM block in yolov4
493 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
494 | super(SAM, self).__init__()
495 | c_ = int(c2 * e) # hidden channels
496 | self.cvsig = ConvSig(c1, c1, 1, 1)
497 |
498 | def forward(self, x):
499 |         return x * self.cvsig(x)
500 |
501 | class DNL(nn.Module):
502 | # Disentangled Non-Local block in https://arxiv.org/abs/2006.06668
503 | def __init__(self, c1, c2, k=3, s=1):
504 | super(DNL, self).__init__()
505 | c_ = int(c1) # hidden channels
506 |
507 | #
508 | self.conv_query = nn.Conv2d(c1, c_, kernel_size=1)
509 | self.conv_key = nn.Conv2d(c1, c_, kernel_size=1)
510 |
511 | self.conv_value = nn.Conv2d(c1, c1, kernel_size=1, bias=False)
512 | self.conv_out = None
513 |
514 | self.scale = math.sqrt(c_)
515 | self.temperature = 0.05
516 |
517 | self.softmax = nn.Softmax(dim=2)
518 |
519 | self.gamma = nn.Parameter(torch.zeros(1))
520 |
521 | self.conv_mask = nn.Conv2d(c1, 1, kernel_size=1)
522 |
523 | self.cv = Conv(c1, c2, k, s)
524 |
525 | def forward(self, x):
526 |
527 | # [N, C, T, H, W]
528 | residual = x
529 |
530 | # [N, C, T, H', W']
531 | input_x = x
532 |
533 | # [N, C', T, H, W]
534 | query = self.conv_query(x)
535 |
536 | # [N, C', T, H', W']
537 | key = self.conv_key(input_x)
538 | value = self.conv_value(input_x)
539 |
540 | # [N, C', H x W]
541 | query = query.view(query.size(0), query.size(1), -1)
542 |
543 | # [N, C', H' x W']
544 | key = key.view(key.size(0), key.size(1), -1)
545 | value = value.view(value.size(0), value.size(1), -1)
546 |
547 | # channel whitening
548 | key_mean = key.mean(2).unsqueeze(2)
549 | query_mean = query.mean(2).unsqueeze(2)
550 | key -= key_mean
551 | query -= query_mean
552 |
553 | # [N, T x H x W, T x H' x W']
554 | sim_map = torch.bmm(query.transpose(1, 2), key)
555 | sim_map = sim_map/self.scale
556 | sim_map = sim_map/self.temperature
557 | sim_map = self.softmax(sim_map)
558 |
559 | # [N, T x H x W, C']
560 | out_sim = torch.bmm(sim_map, value.transpose(1, 2))
561 |
562 | # [N, C', T x H x W]
563 | out_sim = out_sim.transpose(1, 2)
564 |
565 | # [N, C', T, H, W]
566 | out_sim = out_sim.view(out_sim.size(0), out_sim.size(1), *x.size()[2:])
567 | out_sim = self.gamma * out_sim
568 |
569 | # [N, 1, H', W']
570 | mask = self.conv_mask(input_x)
571 | # [N, 1, H'x W']
572 | mask = mask.view(mask.size(0), mask.size(1), -1)
573 | mask = self.softmax(mask)
574 | # [N, C, 1, 1]
575 | out_gc = torch.bmm(value, mask.permute(0,2,1)).unsqueeze(-1)
576 | out_sim = out_sim+out_gc
577 |
578 | return self.cv(out_sim + residual)
579 |
580 |
581 | class GC(nn.Module):
582 | # global context block in https://arxiv.org/abs/1904.11492
583 | def __init__(self, c1, c2, k=3, s=1):
584 | super(GC, self).__init__()
585 | c_ = int(c1) # hidden channels
586 |
587 | #
588 | self.channel_add_conv = nn.Sequential(
589 | nn.Conv2d(c1, c_, kernel_size=1),
590 | nn.LayerNorm([c_, 1, 1]),
591 | nn.ReLU(inplace=True), # yapf: disable
592 | nn.Conv2d(c_, c1, kernel_size=1))
593 |
594 | self.conv_mask = nn.Conv2d(c_, 1, kernel_size=1)
595 | self.softmax = nn.Softmax(dim=2)
596 |
597 | self.cv = Conv(c1, c2, k, s)
598 |
599 |
600 | def spatial_pool(self, x):
601 |
602 | batch, channel, height, width = x.size()
603 |
604 | input_x = x
605 | # [N, C, H * W]
606 | input_x = input_x.view(batch, channel, height * width)
607 | # [N, 1, C, H * W]
608 | input_x = input_x.unsqueeze(1)
609 | # [N, 1, H, W]
610 | context_mask = self.conv_mask(x)
611 | # [N, 1, H * W]
612 | context_mask = context_mask.view(batch, 1, height * width)
613 | # [N, 1, H * W]
614 | context_mask = self.softmax(context_mask)
615 | # [N, 1, H * W, 1]
616 | context_mask = context_mask.unsqueeze(-1)
617 | # [N, 1, C, 1]
618 | context = torch.matmul(input_x, context_mask)
619 | # [N, C, 1, 1]
620 | context = context.view(batch, channel, 1, 1)
621 |
622 | return context
623 |
624 | def forward(self, x):
625 |
626 | return self.cv(x + self.channel_add_conv(self.spatial_pool(x)))
627 | '''
628 |
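629 | 
630 | # Shape sketch for the building blocks above (assumes a CUDA setup where
631 | # mish_cuda imports; shapes follow from autopad() and the Focus slicing):
632 | #
633 | #   x = torch.zeros(1, 3, 64, 64)
634 | #   Conv(3, 16, k=3, s=1)(x).shape     # (1, 16, 64, 64): autopad(3) -> p=1 keeps HxW
635 | #   Focus(3, 32, k=3)(x).shape         # (1, 32, 32, 32): 4 slices -> 12 ch at half res
636 | #   SPP(16, 16)(Conv(3, 16)(x)).shape  # (1, 16, 64, 64): pooled maps re-fused by 1x1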
--------------------------------------------------------------------------------
/models/experimental.py:
--------------------------------------------------------------------------------
1 | # This file contains experimental modules
2 |
3 | import numpy as np
4 | import torch
5 | import torch.nn as nn
6 |
7 | from models.common import Conv, DWConv
8 | from utils.google_utils import attempt_download
9 |
10 |
11 | class CrossConv(nn.Module):
12 | # Cross Convolution Downsample
13 | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
14 | # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
15 | super(CrossConv, self).__init__()
16 | c_ = int(c2 * e) # hidden channels
17 | self.cv1 = Conv(c1, c_, (1, k), (1, s))
18 | self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
19 | self.add = shortcut and c1 == c2
20 |
21 | def forward(self, x):
22 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
23 |
24 |
25 | class C3(nn.Module):
26 | # Cross Convolution CSP
27 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
28 | super(C3, self).__init__()
29 | c_ = int(c2 * e) # hidden channels
30 | self.cv1 = Conv(c1, c_, 1, 1)
31 | self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
32 | self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
33 | self.cv4 = Conv(2 * c_, c2, 1, 1)
34 | self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
35 | self.act = nn.LeakyReLU(0.1, inplace=True)
36 | self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])
37 |
38 | def forward(self, x):
39 | y1 = self.cv3(self.m(self.cv1(x)))
40 | y2 = self.cv2(x)
41 | return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
42 |
43 |
44 | class Sum(nn.Module):
45 | # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
46 | def __init__(self, n, weight=False): # n: number of inputs
47 | super(Sum, self).__init__()
48 | self.weight = weight # apply weights boolean
49 | self.iter = range(n - 1) # iter object
50 | if weight:
51 | self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
52 |
53 | def forward(self, x):
54 | y = x[0] # no weight
55 | if self.weight:
56 | w = torch.sigmoid(self.w) * 2
57 | for i in self.iter:
58 | y = y + x[i + 1] * w[i]
59 | else:
60 | for i in self.iter:
61 | y = y + x[i + 1]
62 | return y
63 |
64 |
65 | class GhostConv(nn.Module):
66 | # Ghost Convolution https://github.com/huawei-noah/ghostnet
67 | def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
68 | super(GhostConv, self).__init__()
69 | c_ = c2 // 2 # hidden channels
70 |         self.cv1 = Conv(c1, c_, k, s, None, g, act)  # pass p=None so autopad() is used
71 |         self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
72 |
73 | def forward(self, x):
74 | y = self.cv1(x)
75 | return torch.cat([y, self.cv2(y)], 1)
76 |
77 |
78 | class GhostBottleneck(nn.Module):
79 | # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
80 | def __init__(self, c1, c2, k, s):
81 | super(GhostBottleneck, self).__init__()
82 | c_ = c2 // 2
83 | self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
84 | DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
85 | GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
86 | self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
87 | Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
88 |
89 | def forward(self, x):
90 | return self.conv(x) + self.shortcut(x)
91 |
92 |
93 | class MixConv2d(nn.Module):
94 | # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
95 | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
96 | super(MixConv2d, self).__init__()
97 | groups = len(k)
98 | if equal_ch: # equal c_ per group
99 | i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
100 | c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
101 | else: # equal weight.numel() per group
102 | b = [c2] + [0] * groups
103 | a = np.eye(groups + 1, groups, k=-1)
104 | a -= np.roll(a, 1, axis=1)
105 | a *= np.array(k) ** 2
106 | a[0] = 1
107 | c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
108 |
109 | self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
110 | self.bn = nn.BatchNorm2d(c2)
111 | self.act = nn.LeakyReLU(0.1, inplace=True)
112 |
113 | def forward(self, x):
114 | return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
115 |
116 |
117 | class Ensemble(nn.ModuleList):
118 | # Ensemble of models
119 | def __init__(self):
120 | super(Ensemble, self).__init__()
121 |
122 | def forward(self, x, augment=False):
123 | y = []
124 | for module in self:
125 | y.append(module(x, augment)[0])
126 | # y = torch.stack(y).max(0)[0] # max ensemble
127 | # y = torch.cat(y, 1) # nms ensemble
128 | y = torch.stack(y).mean(0) # mean ensemble
129 | return y, None # inference, train output
130 |
131 |
132 | def attempt_load(weights, map_location=None):
133 | # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
134 | model = Ensemble()
135 | for w in weights if isinstance(weights, list) else [weights]:
136 | attempt_download(w)
137 | model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval()) # load FP32 model
138 |
139 | if len(model) == 1:
140 | return model[-1] # return model
141 | else:
142 | print('Ensemble created with %s\n' % weights)
143 | for k in ['names', 'stride']:
144 | setattr(model, k, getattr(model[-1], k))
145 | return model # return ensemble
146 |
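147 | # Usage sketch for attempt_load() (filenames are illustrative):
148 | #
149 | #   model = attempt_load('yolov4-p5.pt', map_location='cpu')        # single model
150 | #   ensemble = attempt_load(['a.pt', 'b.pt'], map_location='cpu')   # mean Ensemble
151 | #
152 | # Each .pt checkpoint is a dict whose 'model' entry holds the module; weights are
153 | # fetched with attempt_download() when missing, then fused and put in eval mode.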
--------------------------------------------------------------------------------
/models/export.py:
--------------------------------------------------------------------------------
1 | import argparse
2 |
3 | import torch
4 |
5 | from utils.google_utils import attempt_download
6 |
7 | if __name__ == '__main__':
8 | parser = argparse.ArgumentParser()
9 | parser.add_argument('--weights', type=str, default='./yolov4-p5.pt', help='weights path')
10 | parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size')
11 | parser.add_argument('--batch-size', type=int, default=1, help='batch size')
12 | opt = parser.parse_args()
13 | opt.img_size *= 2 if len(opt.img_size) == 1 else 1 # expand
14 | print(opt)
15 |
16 | # Input
17 |     img = torch.zeros((opt.batch_size, 3, *opt.img_size))  # input sample (batch, 3, height, width)
18 |
19 | # Load PyTorch model
20 | attempt_download(opt.weights)
21 | model = torch.load(opt.weights, map_location=torch.device('cpu'))['model'].float()
22 | model.eval()
23 | model.model[-1].export = True # set Detect() layer export=True
24 | y = model(img) # dry run
25 |
26 | # TorchScript export
27 | try:
28 | print('\nStarting TorchScript export with torch %s...' % torch.__version__)
29 | f = opt.weights.replace('.pt', '.torchscript.pt') # filename
30 | ts = torch.jit.trace(model, img)
31 | ts.save(f)
32 | print('TorchScript export success, saved as %s' % f)
33 | except Exception as e:
34 | print('TorchScript export failure: %s' % e)
35 |
36 | # ONNX export
37 | try:
38 | import onnx
39 |
40 | print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
41 | f = opt.weights.replace('.pt', '.onnx') # filename
42 | model.fuse() # only for ONNX
43 | torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'],
44 | output_names=['classes', 'boxes'] if y is None else ['output'])
45 |
46 | # Checks
47 | onnx_model = onnx.load(f) # load onnx model
48 | onnx.checker.check_model(onnx_model) # check onnx model
49 | print(onnx.helper.printable_graph(onnx_model.graph)) # print a human readable model
50 | print('ONNX export success, saved as %s' % f)
51 | except Exception as e:
52 | print('ONNX export failure: %s' % e)
53 |
54 | # CoreML export
55 | try:
56 | import coremltools as ct
57 |
58 | print('\nStarting CoreML export with coremltools %s...' % ct.__version__)
59 | # convert model from torchscript and apply pixel scaling as per detect.py
60 | model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])
61 | f = opt.weights.replace('.pt', '.mlmodel') # filename
62 | model.save(f)
63 | print('CoreML export success, saved as %s' % f)
64 | except Exception as e:
65 | print('CoreML export failure: %s' % e)
66 |
67 | # Finish
68 | print('\nExport complete. Visualize with https://github.com/lutzroeder/netron.')
69 |
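70 | # Usage sketch:
71 | #   python models/export.py --weights yolov4-p5.pt --img-size 640 640 --batch-size 1
72 | # Output filenames are derived from --weights (.torchscript.pt, .onnx, .mlmodel).
73 | # Note the CoreML step converts the TorchScript trace `ts`, so it can only succeed
74 | # when the TorchScript export above succeeded.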
--------------------------------------------------------------------------------
/models/yolo.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import math
3 | from copy import deepcopy
4 | from pathlib import Path
5 |
6 | import torch
7 | import torch.nn as nn
8 |
9 | from models.common import *
10 | from models.experimental import MixConv2d, CrossConv, C3
11 | from utils.general import check_anchor_order, make_divisible, check_file
12 | from utils.torch_utils import (
13 | time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, select_device)
14 |
15 |
16 | class Detect(nn.Module):
17 | def __init__(self, nc=80, anchors=(), ch=()): # detection layer
18 | super(Detect, self).__init__()
19 | self.stride = None # strides computed during build
20 | self.nc = nc # number of classes
21 | self.no = nc + 5 # number of outputs per anchor
22 | self.nl = len(anchors) # number of detection layers
23 | self.na = len(anchors[0]) // 2 # number of anchors
24 | self.grid = [torch.zeros(1)] * self.nl # init grid
25 | a = torch.tensor(anchors).float().view(self.nl, -1, 2)
26 | self.register_buffer('anchors', a) # shape(nl,na,2)
27 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
28 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
29 | self.export = False # onnx export
30 |
31 | def forward(self, x):
32 | # x = x.copy() # for profiling
33 | z = [] # inference output
34 | self.training |= self.export
35 | for i in range(self.nl):
36 | x[i] = self.m[i](x[i]) # conv
37 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
38 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
39 |
40 | if not self.training: # inference
41 | if self.grid[i].shape[2:4] != x[i].shape[2:4]:
42 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
43 |
44 | y = x[i].sigmoid()
45 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i] # xy
46 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
47 | z.append(y.view(bs, -1, self.no))
48 |
49 | return x if self.training else (torch.cat(z, 1), x)
50 |
51 | @staticmethod
52 | def _make_grid(nx=20, ny=20):
53 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
54 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
55 |
56 |
57 | class Model(nn.Module):
58 | def __init__(self, cfg='yolov4-p5.yaml', ch=3, nc=None): # model, input channels, number of classes
59 | super(Model, self).__init__()
60 | if isinstance(cfg, dict):
61 | self.yaml = cfg # model dict
62 | else: # is *.yaml
63 | import yaml # for torch hub
64 | self.yaml_file = Path(cfg).name
65 | with open(cfg) as f:
66 | self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict
67 |
68 | # Define model
69 | if nc and nc != self.yaml['nc']:
70 | print('Overriding %s nc=%g with nc=%g' % (cfg, self.yaml['nc'], nc))
71 | self.yaml['nc'] = nc # override yaml value
72 | self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist, ch_out
73 | # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])
74 |
75 | # Build strides, anchors
76 | m = self.model[-1] # Detect()
77 | if isinstance(m, Detect):
78 | s = 256 # 2x min stride
79 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
80 | m.anchors /= m.stride.view(-1, 1, 1)
81 | check_anchor_order(m)
82 | self.stride = m.stride
83 | self._initialize_biases() # only run once
84 | # print('Strides: %s' % m.stride.tolist())
85 |
86 | # Init weights, biases
87 | initialize_weights(self)
88 | self.info()
89 | print('')
90 |
91 | def forward(self, x, augment=False, profile=False):
92 | if augment:
93 | img_size = x.shape[-2:] # height, width
94 | s = [1, 0.83, 0.67] # scales
95 | f = [None, 3, None] # flips (2-ud, 3-lr)
96 | y = [] # outputs
97 | for si, fi in zip(s, f):
98 | xi = scale_img(x.flip(fi) if fi else x, si)
99 | yi = self.forward_once(xi)[0] # forward
100 | # cv2.imwrite('img%g.jpg' % s, 255 * xi[0].numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
101 | yi[..., :4] /= si # de-scale
102 | if fi == 2:
103 | yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud
104 | elif fi == 3:
105 | yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr
106 | y.append(yi)
107 | return torch.cat(y, 1), None # augmented inference, train
108 | else:
109 | return self.forward_once(x, profile) # single-scale inference, train
110 |
111 | def forward_once(self, x, profile=False):
112 | y, dt = [], [] # outputs
113 | for m in self.model:
114 | if m.f != -1: # if not from previous layer
115 | x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
116 |
117 | if profile:
118 | try:
119 | import thop
120 | o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # FLOPS
121 |                 except Exception:  # thop unavailable or module not profileable
122 | o = 0
123 | t = time_synchronized()
124 | for _ in range(10):
125 | _ = m(x)
126 | dt.append((time_synchronized() - t) * 100)
127 | print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type))
128 |
129 | x = m(x) # run
130 | y.append(x if m.i in self.save else None) # save output
131 |
132 | if profile:
133 | print('%.1fms total' % sum(dt))
134 | return x
135 |
136 | def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
137 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
138 | m = self.model[-1] # Detect() module
139 | for mi, s in zip(m.m, m.stride): # from
140 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
141 | b[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
142 | b[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
143 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
144 |
145 | def _print_biases(self):
146 | m = self.model[-1] # Detect() module
147 | for mi in m.m: # from
148 | b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
149 | print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
150 |
151 | # def _print_weights(self):
152 | # for m in self.model.modules():
153 | # if type(m) is Bottleneck:
154 | # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
155 |
156 | def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
157 | print('Fusing layers... ', end='')
158 | for m in self.model.modules():
159 | if type(m) is Conv:
160 |                 m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
161 | m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
162 | m.bn = None # remove batchnorm
163 | m.forward = m.fuseforward # update forward
164 | self.info()
165 | return self
166 |
167 | def info(self): # print model information
168 | model_info(self)
169 |
170 |
171 | def parse_model(d, ch): # model_dict, input_channels(3)
172 | print('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
173 | anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
174 | na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
175 | no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
176 |
177 | layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
178 | for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
179 | m = eval(m) if isinstance(m, str) else m # eval strings
180 | for j, a in enumerate(args):
181 | try:
182 | args[j] = eval(a) if isinstance(a, str) else a # eval strings
183 |             except Exception:  # keep non-evaluable strings (e.g. 'nearest') as-is
184 | pass
185 |
186 | n = max(round(n * gd), 1) if n > 1 else n # depth gain
187 | if m in [nn.Conv2d, Conv, Bottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, BottleneckCSP2, SPPCSP, VoVCSP, C3]:
188 | c1, c2 = ch[f], args[0]
189 |
190 | # Normal
191 | # if i > 0 and args[0] != no: # channel expansion factor
192 | # ex = 1.75 # exponential (default 2.0)
193 | # e = math.log(c2 / ch[1]) / math.log(2)
194 | # c2 = int(ch[1] * ex ** e)
195 | # if m != Focus:
196 |
197 | c2 = make_divisible(c2 * gw, 8) if c2 != no else c2
198 |
199 | # Experimental
200 | # if i > 0 and args[0] != no: # channel expansion factor
201 | # ex = 1 + gw # exponential (default 2.0)
202 | # ch1 = 32 # ch[1]
203 | # e = math.log(c2 / ch1) / math.log(2) # level 1-n
204 | # c2 = int(ch1 * ex ** e)
205 | # if m != Focus:
206 | # c2 = make_divisible(c2, 8) if c2 != no else c2
207 |
208 | args = [c1, c2, *args[1:]]
209 | if m in [BottleneckCSP, BottleneckCSP2, SPPCSP, VoVCSP, C3]:
210 | args.insert(2, n)
211 | n = 1
212 | elif m in [HarDBlock, HarDBlock2]:
213 | c1 = ch[f]
214 | args = [c1, *args[:]]
215 | elif m is nn.BatchNorm2d:
216 | args = [ch[f]]
217 | elif m is Concat:
218 | c2 = sum([ch[-1 if x == -1 else x + 1] for x in f])
219 | elif m is Detect:
220 | args.append([ch[x + 1] for x in f])
221 | if isinstance(args[1], int): # number of anchors
222 | args[1] = [list(range(args[1] * 2))] * len(f)
223 | else:
224 | c2 = ch[f]
225 |
226 | m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module
227 | t = str(m)[8:-2].replace('__main__.', '') # module type
228 | np = sum([x.numel() for x in m_.parameters()]) # number params
229 | m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
230 | print('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print
231 | save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
232 | layers.append(m_)
233 | if m in [HarDBlock, HarDBlock2]:
234 | c2 = m_.get_out_ch()
235 | ch.append(c2)
236 | else:
237 | ch.append(c2)
238 | return nn.Sequential(*layers), sorted(save)
239 |
240 |
241 | if __name__ == '__main__':
242 | parser = argparse.ArgumentParser()
243 | parser.add_argument('--cfg', type=str, default='yolov4-p5.yaml', help='model.yaml')
244 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
245 | opt = parser.parse_args()
246 | opt.cfg = check_file(opt.cfg) # check file
247 | device = select_device(opt.device)
248 |
249 | # Create model
250 | model = Model(opt.cfg).to(device)
251 | model.train()
252 |
253 | # Profile
254 | # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
255 | # y = model(img, profile=True)
256 |
257 | # ONNX export
258 | # model.model[-1].export = True
259 | # torch.onnx.export(model, img, opt.cfg.replace('.yaml', '.onnx'), verbose=True, opset_version=11)
260 |
261 | # Tensorboard
262 | # from torch.utils.tensorboard import SummaryWriter
263 | # tb_writer = SummaryWriter()
264 | # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/")
265 | # tb_writer.add_graph(model.model, img) # add model to tensorboard
266 | # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard
267 |
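268 | # Construction sketch (uses the repo's default cfg):
269 | #
270 | #   model = Model('yolov4-p5.yaml', ch=3, nc=80).eval()
271 | #   pred, raw = model(torch.zeros(1, 3, 640, 640))   # pred: (1, n_boxes, nc + 5)
272 | #
273 | # In eval mode Detect() returns (decoded boxes, per-level maps); in train mode it
274 | # returns only the per-level maps, which compute_loss() consumes (see test.py).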
--------------------------------------------------------------------------------
/models/yolov4-csp.yaml:
--------------------------------------------------------------------------------
1 | # parameters
2 | nc: 80 # number of classes
3 | depth_multiple: 1.0 # model depth multiple
4 | width_multiple: 1.0 # layer channel multiple
5 |
6 | # anchors
7 | anchors:
8 | - [12,16, 19,36, 40,28] # P3/8
9 | - [36,75, 76,55, 72,146] # P4/16
10 | - [142,110, 192,243, 459,401] # P5/32
11 |
12 | # yolov4-csp backbone
13 | backbone:
14 | # [from, number, module, args]
15 | [[-1, 1, Conv, [32, 3, 1]], # 0
16 | [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
17 | [-1, 1, Bottleneck, [64]],
18 | [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
19 | [-1, 2, BottleneckCSP, [128]],
20 | [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
21 | [-1, 8, BottleneckCSP, [256]],
22 | [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
23 | [-1, 8, BottleneckCSP, [512]],
24 | [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
25 | [-1, 4, BottleneckCSP, [1024]], # 10
26 | ]
27 |
28 | # yolov4-csp head
29 | # na = len(anchors[0])
30 | head:
31 | [[-1, 1, SPPCSP, [512]], # 11
32 | [-1, 1, Conv, [256, 1, 1]],
33 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34 | [8, 1, Conv, [256, 1, 1]], # route backbone P4
35 | [[-1, -2], 1, Concat, [1]],
36 | [-1, 2, BottleneckCSP2, [256]], # 16
37 | [-1, 1, Conv, [128, 1, 1]],
38 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39 | [6, 1, Conv, [128, 1, 1]], # route backbone P3
40 | [[-1, -2], 1, Concat, [1]],
41 | [-1, 2, BottleneckCSP2, [128]], # 21
42 | [-1, 1, Conv, [256, 3, 1]],
43 | [-2, 1, Conv, [256, 3, 2]],
44 | [[-1, 16], 1, Concat, [1]], # cat
45 | [-1, 2, BottleneckCSP2, [256]], # 25
46 | [-1, 1, Conv, [512, 3, 1]],
47 | [-2, 1, Conv, [512, 3, 2]],
48 | [[-1, 11], 1, Concat, [1]], # cat
49 | [-1, 2, BottleneckCSP2, [512]], # 29
50 | [-1, 1, Conv, [1024, 3, 1]],
51 |
52 | [[22,26,30], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
53 | ]
54 |
--------------------------------------------------------------------------------
/models/yolov4-p5.yaml:
--------------------------------------------------------------------------------
1 | # parameters
2 | nc: 80 # number of classes
3 | depth_multiple: 1.0 # model depth multiple
4 | width_multiple: 1.0 # layer channel multiple
5 |
6 | # anchors
7 | anchors:
8 | - [13,17, 31,25, 24,51, 61,45] # P3/8
9 | - [48,102, 119,96, 97,189, 217,184] # P4/16
10 | - [171,384, 324,451, 616,618, 800,800] # P5/32
11 |
12 | # csp-p5 backbone
13 | backbone:
14 | # [from, number, module, args]
15 | [[-1, 1, Conv, [32, 3, 1]], # 0
16 | [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
17 | [-1, 1, BottleneckCSP, [64]],
18 | [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
19 | [-1, 3, BottleneckCSP, [128]],
20 | [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
21 | [-1, 15, BottleneckCSP, [256]],
22 | [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
23 | [-1, 15, BottleneckCSP, [512]],
24 | [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
25 | [-1, 7, BottleneckCSP, [1024]], # 10
26 | ]
27 |
28 | # yolov4-p5 head
29 | # na = len(anchors[0])
30 | head:
31 | [[-1, 1, SPPCSP, [512]], # 11
32 | [-1, 1, Conv, [256, 1, 1]],
33 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34 | [8, 1, Conv, [256, 1, 1]], # route backbone P4
35 | [[-1, -2], 1, Concat, [1]],
36 | [-1, 3, BottleneckCSP2, [256]], # 16
37 | [-1, 1, Conv, [128, 1, 1]],
38 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39 | [6, 1, Conv, [128, 1, 1]], # route backbone P3
40 | [[-1, -2], 1, Concat, [1]],
41 | [-1, 3, BottleneckCSP2, [128]], # 21
42 | [-1, 1, Conv, [256, 3, 1]],
43 | [-2, 1, Conv, [256, 3, 2]],
44 | [[-1, 16], 1, Concat, [1]], # cat
45 | [-1, 3, BottleneckCSP2, [256]], # 25
46 | [-1, 1, Conv, [512, 3, 1]],
47 | [-2, 1, Conv, [512, 3, 2]],
48 | [[-1, 11], 1, Concat, [1]], # cat
49 | [-1, 3, BottleneckCSP2, [512]], # 29
50 | [-1, 1, Conv, [1024, 3, 1]],
51 |
52 | [[22,26,30], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
53 | ]
54 |
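55 | # Note: each anchors row holds w,h pairs for one Detect() level (na = 4 here),
56 | # ordered P3/8 -> P5/32; parse_model() scales channel args by width_multiple and
57 | # block repeat counts by depth_multiple before instantiating the modules.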
--------------------------------------------------------------------------------
/models/yolov4-p6.yaml:
--------------------------------------------------------------------------------
1 | # parameters
2 | nc: 80 # number of classes
3 | depth_multiple: 1.0 # expand model depth
4 | width_multiple: 1.0 # expand layer channels
5 |
6 | # anchors
7 | anchors:
8 | - [13,17, 31,25, 24,51, 61,45] # P3/8
9 | - [61,45, 48,102, 119,96, 97,189] # P4/16
10 | - [97,189, 217,184, 171,384, 324,451] # P5/32
11 | - [324,451, 545,357, 616,618, 1024,1024] # P6/64
12 |
13 | # csp-p6 backbone
14 | backbone:
15 | # [from, number, module, args]
16 | [[-1, 1, Conv, [32, 3, 1]], # 0
17 | [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
18 | [-1, 1, BottleneckCSP, [64]],
19 | [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
20 | [-1, 3, BottleneckCSP, [128]],
21 | [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
22 | [-1, 15, BottleneckCSP, [256]],
23 | [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
24 | [-1, 15, BottleneckCSP, [512]],
25 | [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
26 | [-1, 7, BottleneckCSP, [1024]],
27 | [-1, 1, Conv, [1024, 3, 2]], # 11-P6/64
28 | [-1, 7, BottleneckCSP, [1024]], # 12
29 | ]
30 |
31 | # yolov4-p6 head
32 | # na = len(anchors[0])
33 | head:
34 | [[-1, 1, SPPCSP, [512]], # 13
35 | [-1, 1, Conv, [512, 1, 1]],
36 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
37 | [-6, 1, Conv, [512, 1, 1]], # route backbone P5
38 | [[-1, -2], 1, Concat, [1]],
39 | [-1, 3, BottleneckCSP2, [512]], # 18
40 | [-1, 1, Conv, [256, 1, 1]],
41 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
42 | [-13, 1, Conv, [256, 1, 1]], # route backbone P4
43 | [[-1, -2], 1, Concat, [1]],
44 | [-1, 3, BottleneckCSP2, [256]], # 23
45 | [-1, 1, Conv, [128, 1, 1]],
46 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
47 | [-20, 1, Conv, [128, 1, 1]], # route backbone P3
48 | [[-1, -2], 1, Concat, [1]],
49 | [-1, 3, BottleneckCSP2, [128]], # 28
50 | [-1, 1, Conv, [256, 3, 1]],
51 | [-2, 1, Conv, [256, 3, 2]],
52 | [[-1, 23], 1, Concat, [1]], # cat
53 | [-1, 3, BottleneckCSP2, [256]], # 32
54 | [-1, 1, Conv, [512, 3, 1]],
55 | [-2, 1, Conv, [512, 3, 2]],
56 | [[-1, 18], 1, Concat, [1]], # cat
57 | [-1, 3, BottleneckCSP2, [512]], # 36
58 | [-1, 1, Conv, [1024, 3, 1]],
59 | [-2, 1, Conv, [512, 3, 2]],
60 | [[-1, 13], 1, Concat, [1]], # cat
61 | [-1, 3, BottleneckCSP2, [512]], # 40
62 | [-1, 1, Conv, [1024, 3, 1]],
63 |
64 | [[29,33,37,41], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
65 | ]
--------------------------------------------------------------------------------
/models/yolov4-p7.yaml:
--------------------------------------------------------------------------------
1 | # parameters
2 | nc: 80 # number of classes
3 | depth_multiple: 1.0 # expand model depth
4 | width_multiple: 1.25 # expand layer channels
5 |
6 | # anchors
7 | anchors:
8 | - [13,17, 22,25, 27,66, 55,41] # P3/8
9 | - [57,88, 112,69, 69,177, 136,138] # P4/16
10 | - [136,138, 287,114, 134,275, 268,248] # P5/32
11 | - [268,248, 232,504, 445,416, 640,640] # P6/64
12 | - [812,393, 477,808, 1070,908, 1408,1408] # P7/128
13 |
14 | # csp-p7 backbone
15 | backbone:
16 | # [from, number, module, args]
17 | [[-1, 1, Conv, [32, 3, 1]], # 0
18 | [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
19 | [-1, 1, BottleneckCSP, [64]],
20 | [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
21 | [-1, 3, BottleneckCSP, [128]],
22 | [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
23 | [-1, 15, BottleneckCSP, [256]],
24 | [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
25 | [-1, 15, BottleneckCSP, [512]],
26 | [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
27 | [-1, 7, BottleneckCSP, [1024]],
28 | [-1, 1, Conv, [1024, 3, 2]], # 11-P6/64
29 | [-1, 7, BottleneckCSP, [1024]],
30 | [-1, 1, Conv, [1024, 3, 2]], # 13-P7/128
31 | [-1, 7, BottleneckCSP, [1024]], # 14
32 | ]
33 |
34 | # yolov4-p7 head
35 | # na = len(anchors[0])
36 | head:
37 | [[-1, 1, SPPCSP, [512]], # 15
38 | [-1, 1, Conv, [512, 1, 1]],
39 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
40 | [-6, 1, Conv, [512, 1, 1]], # route backbone P6
41 | [[-1, -2], 1, Concat, [1]],
42 | [-1, 3, BottleneckCSP2, [512]], # 20
43 | [-1, 1, Conv, [512, 1, 1]],
44 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
45 | [-13, 1, Conv, [512, 1, 1]], # route backbone P5
46 | [[-1, -2], 1, Concat, [1]],
47 | [-1, 3, BottleneckCSP2, [512]], # 25
48 | [-1, 1, Conv, [256, 1, 1]],
49 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
50 | [-20, 1, Conv, [256, 1, 1]], # route backbone P4
51 | [[-1, -2], 1, Concat, [1]],
52 | [-1, 3, BottleneckCSP2, [256]], # 30
53 | [-1, 1, Conv, [128, 1, 1]],
54 | [-1, 1, nn.Upsample, [None, 2, 'nearest']],
55 | [-27, 1, Conv, [128, 1, 1]], # route backbone P3
56 | [[-1, -2], 1, Concat, [1]],
57 | [-1, 3, BottleneckCSP2, [128]], # 35
58 | [-1, 1, Conv, [256, 3, 1]],
59 | [-2, 1, Conv, [256, 3, 2]],
60 | [[-1, 30], 1, Concat, [1]], # cat
61 | [-1, 3, BottleneckCSP2, [256]], # 39
62 | [-1, 1, Conv, [512, 3, 1]],
63 | [-2, 1, Conv, [512, 3, 2]],
64 | [[-1, 25], 1, Concat, [1]], # cat
65 | [-1, 3, BottleneckCSP2, [512]], # 43
66 | [-1, 1, Conv, [1024, 3, 1]],
67 | [-2, 1, Conv, [512, 3, 2]],
68 | [[-1, 20], 1, Concat, [1]], # cat
69 | [-1, 3, BottleneckCSP2, [512]], # 47
70 | [-1, 1, Conv, [1024, 3, 1]],
71 | [-2, 1, Conv, [512, 3, 2]],
72 | [[-1, 15], 1, Concat, [1]], # cat
73 | [-1, 3, BottleneckCSP2, [512]], # 51
74 | [-1, 1, Conv, [1024, 3, 1]],
75 |
76 | [[36,40,44,48,52], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6, P7)
77 | ]
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import glob
3 | import json
4 | import os
5 | import shutil
6 | from pathlib import Path
7 |
8 | import numpy as np
9 | import torch
10 | import yaml
11 | from tqdm import tqdm
12 |
13 | from models.experimental import attempt_load
14 | from utils.datasets import create_dataloader
15 | from utils.general import (
16 | coco80_to_coco91_class, check_file, check_img_size, compute_loss, non_max_suppression,
17 | scale_coords, xyxy2xywh, clip_coords, plot_images, xywh2xyxy, box_iou, output_to_target, ap_per_class)
18 | from utils.torch_utils import select_device, time_synchronized
19 |
20 |
21 | def test(data,
22 | weights=None,
23 | batch_size=16,
24 | imgsz=640,
25 | conf_thres=0.001,
26 | iou_thres=0.6, # for NMS
27 | save_json=False,
28 | single_cls=False,
29 | augment=False,
30 | verbose=False,
31 | model=None,
32 | dataloader=None,
33 | save_dir='',
34 | merge=False,
35 | save_txt=False):
36 | # Initialize/load model and set device
37 | training = model is not None
38 | if training: # called by train.py
39 | device = next(model.parameters()).device # get model device
40 |
41 | else: # called directly
42 | device = select_device(opt.device, batch_size=batch_size)
43 | merge, save_txt = opt.merge, opt.save_txt # use Merge NMS, save *.txt labels
44 | if save_txt:
45 | out = Path('inference/output')
46 | if os.path.exists(out):
47 | shutil.rmtree(out) # delete output folder
48 | os.makedirs(out) # make new output folder
49 |
50 | # Remove previous
51 | for f in glob.glob(str(Path(save_dir) / 'test_batch*.jpg')):
52 | os.remove(f)
53 |
54 | # Load model
55 | model = attempt_load(weights, map_location=device) # load FP32 model
56 | imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size
57 |
58 | # Half
59 | half = device.type != 'cpu' # half precision only supported on CUDA
60 | if half:
61 | model.half()
62 |
63 | # Configure
64 | model.eval()
65 | with open(data) as f:
66 | data = yaml.load(f, Loader=yaml.FullLoader) # model dict
67 | nc = 1 if single_cls else int(data['nc']) # number of classes
68 | iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95
69 | niou = iouv.numel()
70 |
71 | # Dataloader
72 | if not training:
73 | img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img
74 | _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once
75 | path = data['test'] if opt.task == 'test' else data['val'] # path to val/test images
76 | dataloader = create_dataloader(path, imgsz, batch_size, model.stride.max(), opt,
77 | hyp=None, augment=False, cache=False, pad=0.5, rect=True)[0]
78 |
79 | seen = 0
80 | names = model.names if hasattr(model, 'names') else model.module.names
81 | coco91class = coco80_to_coco91_class()
82 | s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
83 | p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0.
84 | loss = torch.zeros(3, device=device)
85 | jdict, stats, ap, ap_class = [], [], [], []
86 | #model = model.to(memory_format=torch.channels_last)
87 | for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
88 | img = img.to(device, non_blocking=True)
89 | img = img.half() if half else img.float() # uint8 to fp16/32
90 | img /= 255.0 # 0 - 255 to 0.0 - 1.0
91 | targets = targets.to(device)
92 | nb, _, height, width = img.shape # batch size, channels, height, width
93 | whwh = torch.Tensor([width, height, width, height]).to(device)
94 |
95 | # Disable gradients
96 | with torch.no_grad():
97 | # Run model
98 | t = time_synchronized()
99 | inf_out, train_out = model(img, augment=augment) # inference and training outputs
100 | #inf_out, train_out = model(img.to(memory_format=torch.channels_last), augment=augment) # inference and training outputs
101 | t0 += time_synchronized() - t
102 |
103 | # Compute loss
104 | if training: # if model has loss hyperparameters
105 | loss += compute_loss([x.float() for x in train_out], targets, model)[1][:3] # GIoU, obj, cls
106 |
107 | # Run NMS
108 | t = time_synchronized()
109 | output = non_max_suppression(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, merge=merge)
110 | t1 += time_synchronized() - t
111 |
112 | # Statistics per image
113 | for si, pred in enumerate(output):
114 | labels = targets[targets[:, 0] == si, 1:]
115 | nl = len(labels)
116 | tcls = labels[:, 0].tolist() if nl else [] # target class
117 | seen += 1
118 |
119 | if pred is None:
120 | if nl:
121 | stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
122 | continue
123 |
124 | # Append to text file
125 | if save_txt:
126 | gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]] # normalization gain whwh
127 | txt_path = str(out / Path(paths[si]).stem)
128 | pred[:, :4] = scale_coords(img[si].shape[1:], pred[:, :4], shapes[si][0], shapes[si][1]) # to original
129 | for *xyxy, conf, cls in pred:
130 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
131 | with open(txt_path + '.txt', 'a') as f:
132 | f.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format
133 |
134 | # Clip boxes to image bounds
135 | clip_coords(pred, (height, width))
136 |
137 | # Append to pycocotools JSON dictionary
138 | if save_json:
139 | # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
140 | image_id = Path(paths[si]).stem
141 | box = pred[:, :4].clone() # xyxy
142 | scale_coords(img[si].shape[1:], box, shapes[si][0], shapes[si][1]) # to original shape
143 | box = xyxy2xywh(box) # xywh
144 | box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner
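   |                 # Note: pycocotools expects bboxes as [x_min, y_min, width, height] in pixels,
   |                 # hence the conversion from center-based xywh above.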
145 |                 for det, b in zip(pred.tolist(), box.tolist()):
146 |                     jdict.append({'image_id': int(image_id) if image_id.isnumeric() else image_id,
147 |                                   'category_id': coco91class[int(det[5])],
148 |                                   'bbox': [round(x, 3) for x in b],
149 |                                   'score': round(det[4], 5)})
150 |
151 | # Assign all predictions as incorrect
152 | correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
153 | if nl:
154 | detected = [] # target indices
155 | tcls_tensor = labels[:, 0]
156 |
157 | # target boxes
158 | tbox = xywh2xyxy(labels[:, 1:5]) * whwh
159 |
160 | # Per target class
161 | for cls in torch.unique(tcls_tensor):
162 |                     ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1) # target indices
163 |                     pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1) # prediction indices
164 |
165 | # Search for detections
166 | if pi.shape[0]:
167 | # Prediction to target ious
168 | ious, i = box_iou(pred[pi, :4], tbox[ti]).max(1) # best ious, indices
169 |
170 | # Append detections
171 | detected_set = set()
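   |                     # Note: greedy one-to-one matching; NMS output is confidence-sorted, so
   |                     # higher-confidence predictions claim targets first, and detected_set
   |                     # guarantees each target is matched at most once.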
172 | for j in (ious > iouv[0]).nonzero(as_tuple=False):
173 | d = ti[i[j]] # detected target
174 | if d.item() not in detected_set:
175 | detected_set.add(d.item())
176 | detected.append(d)
177 | correct[pi[j]] = ious[j] > iouv # iou_thres is 1xn
178 | if len(detected) == nl: # all targets already located in image
179 | break
180 |
181 | # Append statistics (correct, conf, pcls, tcls)
182 | stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
183 |
184 | # Plot images
185 | if batch_i < 1:
186 | f = Path(save_dir) / ('test_batch%g_gt.jpg' % batch_i) # filename
187 | plot_images(img, targets, paths, str(f), names) # ground truth
188 | f = Path(save_dir) / ('test_batch%g_pred.jpg' % batch_i)
189 | plot_images(img, output_to_target(output, width, height), paths, str(f), names) # predictions
190 |
191 | # Compute statistics
192 | stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy
193 | if len(stats) and stats[0].any():
194 | p, r, ap, f1, ap_class = ap_per_class(*stats)
195 | p, r, ap50, ap = p[:, 0], r[:, 0], ap[:, 0], ap.mean(1) # [P, R, AP@0.5, AP@0.5:0.95]
196 | mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
197 | nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class
198 | else:
199 | nt = torch.zeros(1)
200 |
201 | # Print results
202 | pf = '%20s' + '%12.3g' * 6 # print format
203 | print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
204 |
205 | # Print results per class
206 | if verbose and nc > 1 and len(stats):
207 | for i, c in enumerate(ap_class):
208 | print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))
209 |
210 | # Print speeds
211 |     t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size) # speeds (ms/img) and sizes
212 | if not training:
213 | print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t)
214 |
215 | # Save JSON
216 | if save_json and len(jdict):
217 | f = 'detections_val2017_%s_results.json' % \
218 | (weights.split(os.sep)[-1].replace('.pt', '') if isinstance(weights, str) else '') # filename
219 | print('\nCOCO mAP with pycocotools... saving %s...' % f)
220 | with open(f, 'w') as file:
221 | json.dump(jdict, file)
222 |
223 | try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
224 | from pycocotools.coco import COCO
225 | from pycocotools.cocoeval import COCOeval
226 |
227 | imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files]
228 | cocoGt = COCO(glob.glob('../coco/annotations/instances_val*.json')[0]) # initialize COCO ground truth api
229 | cocoDt = cocoGt.loadRes(f) # initialize COCO pred api
230 | cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
231 | cocoEval.params.imgIds = imgIds # image IDs to evaluate
232 | cocoEval.evaluate()
233 | cocoEval.accumulate()
234 | cocoEval.summarize()
235 | map, map50 = cocoEval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5)
236 | except Exception as e:
237 | print('ERROR: pycocotools unable to run: %s' % e)
238 |
239 | # Return results
240 | model.float() # for training
241 | maps = np.zeros(nc) + map
242 | for i, c in enumerate(ap_class):
243 | maps[c] = ap[i]
244 | return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t
245 |
246 |
247 | if __name__ == '__main__':
248 | parser = argparse.ArgumentParser(prog='test.py')
249 | parser.add_argument('--weights', nargs='+', type=str, default='yolov4-p5.pt', help='model.pt path(s)')
250 | parser.add_argument('--data', type=str, default='data/coco128.yaml', help='*.data path')
251 | parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch')
252 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
253 | parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold')
254 | parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS')
255 | parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
256 | parser.add_argument('--task', default='val', help="'val', 'test', 'study'")
257 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
258 | parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
259 | parser.add_argument('--augment', action='store_true', help='augmented inference')
260 | parser.add_argument('--merge', action='store_true', help='use Merge NMS')
261 | parser.add_argument('--verbose', action='store_true', help='report mAP by class')
262 | parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
263 | opt = parser.parse_args()
264 | opt.save_json |= opt.data.endswith('coco.yaml')
265 | opt.data = check_file(opt.data) # check file
266 | print(opt)
267 |
268 | if opt.task in ['val', 'test']: # run normally
269 | test(opt.data,
270 | opt.weights,
271 | opt.batch_size,
272 | opt.img_size,
273 | opt.conf_thres,
274 | opt.iou_thres,
275 | opt.save_json,
276 | opt.single_cls,
277 | opt.augment,
278 | opt.verbose)
279 |
280 | elif opt.task == 'study': # run over a range of settings and save/plot
281 | for weights in ['']:
282 | f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem) # filename to save to
283 | x = list(range(352, 832, 64)) # x axis
284 | y = [] # y axis
285 | for i in x: # img-size
286 | print('\nRunning %s point %s...' % (f, i))
287 | r, _, t = test(opt.data, weights, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json)
288 | y.append(r + t) # results and times
289 | np.savetxt(f, y, fmt='%10.4g') # save
290 | os.system('zip -r study.zip study_*.txt')
291 | # plot_study_txt(f, x) # plot
292 |
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import math
3 | import os
4 | import random
5 | import time
6 | from pathlib import Path
7 |
8 | import numpy as np
9 | import torch.distributed as dist
10 | import torch.nn.functional as F
11 | import torch.optim as optim
12 | import torch.optim.lr_scheduler as lr_scheduler
13 | import torch.utils.data
14 | import yaml
15 | from torch.cuda import amp
16 | from torch.nn.parallel import DistributedDataParallel as DDP
17 | from torch.utils.tensorboard import SummaryWriter
18 | from tqdm import tqdm
19 |
20 | import test # import test.py to get mAP after each epoch
21 | from models.yolo import Model
22 | from utils.datasets import create_dataloader
23 | from utils.general import (
24 | check_img_size, torch_distributed_zero_first, labels_to_class_weights, plot_labels, check_anchors,
25 | labels_to_image_weights, compute_loss, plot_images, fitness, strip_optimizer, plot_results,
26 | get_latest_run, check_git_status, check_file, increment_dir, print_mutation, plot_evolution)
27 | from utils.google_utils import attempt_download
28 | from utils.torch_utils import init_seeds, ModelEMA, select_device, intersect_dicts
29 |
30 |
31 | def train(hyp, opt, device, tb_writer=None):
32 | print(f'Hyperparameters {hyp}')
33 | log_dir = Path(tb_writer.log_dir) if tb_writer else Path(opt.logdir) / 'evolve' # logging directory
34 | wdir = str(log_dir / 'weights') + os.sep # weights directory
35 | os.makedirs(wdir, exist_ok=True)
36 | last = wdir + 'last.pt'
37 | best = wdir + 'best.pt'
38 | results_file = str(log_dir / 'results.txt')
39 | epochs, batch_size, total_batch_size, weights, rank = \
40 | opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank
41 |
42 | # TODO: Use DDP logging. Only the first process is allowed to log.
43 | # Save run settings
44 | with open(log_dir / 'hyp.yaml', 'w') as f:
45 | yaml.dump(hyp, f, sort_keys=False)
46 | with open(log_dir / 'opt.yaml', 'w') as f:
47 | yaml.dump(vars(opt), f, sort_keys=False)
48 |
49 | # Configure
50 | cuda = device.type != 'cpu'
51 | init_seeds(2 + rank)
52 | with open(opt.data) as f:
53 | data_dict = yaml.load(f, Loader=yaml.FullLoader) # model dict
54 | train_path = data_dict['train']
55 | test_path = data_dict['val']
56 | nc, names = (1, ['item']) if opt.single_cls else (int(data_dict['nc']), data_dict['names']) # number classes, names
57 | assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check
58 |
59 | # Model
60 | pretrained = weights.endswith('.pt')
61 | if pretrained:
62 | with torch_distributed_zero_first(rank):
63 | attempt_download(weights) # download if not found locally
64 | ckpt = torch.load(weights, map_location=device) # load checkpoint
65 | model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc).to(device) # create
66 | exclude = ['anchor'] if opt.cfg else [] # exclude keys
67 | state_dict = ckpt['model'].float().state_dict() # to FP32
68 | state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
69 | model.load_state_dict(state_dict, strict=False) # load
70 | print('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
71 | else:
72 |         model = Model(opt.cfg, ch=3, nc=nc).to(device) # create
73 | #model = model.to(memory_format=torch.channels_last) # create
74 |
75 | # Optimizer
76 | nbs = 64 # nominal batch size
77 | accumulate = max(round(nbs / total_batch_size), 1) # accumulate loss before optimizing
78 | hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay
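   |     # Note: gradients are accumulated over `accumulate` batches, so the effective batch is
   |     # total_batch_size * accumulate ~= nbs (64); weight decay is rescaled to match it.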
79 |
80 | pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
81 | for k, v in model.named_parameters():
82 | v.requires_grad = True
83 | if '.bias' in k:
84 | pg2.append(v) # biases
85 | elif '.weight' in k and '.bn' not in k:
86 | pg1.append(v) # apply weight decay
87 | else:
88 | pg0.append(v) # all else
89 |
90 | if opt.adam:
91 | optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
92 | else:
93 | optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
94 |
95 | optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
96 | optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
97 | print('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
98 | del pg0, pg1, pg2
99 |
100 | # Scheduler https://arxiv.org/pdf/1812.01187.pdf
101 | # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
102 | lf = lambda x: (((1 + math.cos(x * math.pi / epochs)) / 2) ** 1.0) * 0.8 + 0.2 # cosine
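   |     # Note: lf(x) is a cosine multiplier on the base LR, falling from 1.0 at epoch 0 to 0.2
   |     # at the final epoch (the `* 0.8 + 0.2` floors the final LR at 20% of lr0).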
103 | scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
104 | # plot_lr_scheduler(optimizer, scheduler, epochs)
105 |
106 | # Resume
107 | start_epoch, best_fitness = 0, 0.0
108 | if pretrained:
109 | # Optimizer
110 | if ckpt['optimizer'] is not None:
111 | optimizer.load_state_dict(ckpt['optimizer'])
112 | best_fitness = ckpt['best_fitness']
113 |
114 | # Results
115 | if ckpt.get('training_results') is not None:
116 | with open(results_file, 'w') as file:
117 | file.write(ckpt['training_results']) # write results.txt
118 |
119 | # Epochs
120 | start_epoch = ckpt['epoch'] + 1
121 | if epochs < start_epoch:
122 | print('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
123 | (weights, ckpt['epoch'], epochs))
124 | epochs += ckpt['epoch'] # finetune additional epochs
125 |
126 | del ckpt, state_dict
127 |
128 | # Image sizes
129 | gs = int(max(model.stride)) # grid size (max stride)
130 | imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
131 |
132 | # DP mode
133 | if cuda and rank == -1 and torch.cuda.device_count() > 1:
134 | model = torch.nn.DataParallel(model)
135 |
136 | # SyncBatchNorm
137 | if opt.sync_bn and cuda and rank != -1:
138 | model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
139 | print('Using SyncBatchNorm()')
140 |
141 | # Exponential moving average
142 | ema = ModelEMA(model) if rank in [-1, 0] else None
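   |     # Note: ModelEMA keeps an exponential moving average of the weights; the EMA copy (not
   |     # the raw model) is what test.test() evaluates and what gets saved in checkpoints below.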
143 |
144 | # DDP mode
145 | if cuda and rank != -1:
146 | model = DDP(model, device_ids=[opt.local_rank], output_device=(opt.local_rank))
147 |
148 | # Trainloader
149 | dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt, hyp=hyp, augment=True,
150 | cache=opt.cache_images, rect=opt.rect, local_rank=rank,
151 | world_size=opt.world_size)
152 | mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
153 | nb = len(dataloader) # number of batches
154 | assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
155 |
156 | # Testloader
157 | if rank in [-1, 0]:
158 |         ema.updates = start_epoch * nb // accumulate # set EMA update count when resuming
159 |         # local_rank is set to -1 because only the first process is expected to do evaluation.
160 | testloader = create_dataloader(test_path, imgsz_test, batch_size, gs, opt, hyp=hyp, augment=False,
161 | cache=opt.cache_images, rect=True, local_rank=-1, world_size=opt.world_size)[0]
162 |
163 | # Model parameters
164 | hyp['cls'] *= nc / 80. # scale coco-tuned hyp['cls'] to current dataset
165 | model.nc = nc # attach number of classes to model
166 | model.hyp = hyp # attach hyperparameters to model
167 | model.gr = 1.0 # giou loss ratio (obj_loss = 1.0 or giou)
168 | model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) # attach class weights
169 | model.names = names
170 |
171 | # Class frequency
172 | if rank in [-1, 0]:
173 | labels = np.concatenate(dataset.labels, 0)
174 | c = torch.tensor(labels[:, 0]) # classes
175 | # cf = torch.bincount(c.long(), minlength=nc) + 1.
176 | # model._initialize_biases(cf.to(device))
177 | plot_labels(labels, save_dir=log_dir)
178 | if tb_writer:
179 | tb_writer.add_histogram('classes', c, 0)
180 |
181 | # Check anchors
182 | if not opt.noautoanchor:
183 | check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
184 |
185 | # Start training
186 | t0 = time.time()
187 | nw = max(3 * nb, 1e3) # number of warmup iterations, max(3 epochs, 1k iterations)
188 | # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
189 | maps = np.zeros(nc) # mAP per class
190 |     results = (0, 0, 0, 0, 0, 0, 0) # 'P', 'R', 'mAP@0.5', 'mAP@0.5:0.95', 'val GIoU', 'val Objectness', 'val Classification'
191 | scheduler.last_epoch = start_epoch - 1 # do not move
192 | scaler = amp.GradScaler(enabled=cuda)
193 | if rank in [0, -1]:
194 | print('Image sizes %g train, %g test' % (imgsz, imgsz_test))
195 | print('Using %g dataloader workers' % dataloader.num_workers)
196 | print('Starting training for %g epochs...' % epochs)
197 | # torch.autograd.set_detect_anomaly(True)
198 | for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
199 | model.train()
200 |
201 | # Update image weights (optional)
202 | if dataset.image_weights:
203 | # Generate indices
204 | if rank in [-1, 0]:
205 | w = model.class_weights.cpu().numpy() * (1 - maps) ** 2 # class weights
206 | image_weights = labels_to_image_weights(dataset.labels, nc=nc, class_weights=w)
207 | dataset.indices = random.choices(range(dataset.n), weights=image_weights,
208 | k=dataset.n) # rand weighted idx
209 | # Broadcast if DDP
210 | if rank != -1:
211 | indices = torch.zeros([dataset.n], dtype=torch.int)
212 | if rank == 0:
213 |                     indices[:] = torch.tensor(dataset.indices, dtype=torch.int)
214 | dist.broadcast(indices, 0)
215 | if rank != 0:
216 | dataset.indices = indices.cpu().numpy()
217 |
218 | # Update mosaic border
219 | # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
220 | # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
221 |
222 | mloss = torch.zeros(4, device=device) # mean losses
223 | if rank != -1:
224 | dataloader.sampler.set_epoch(epoch)
225 | pbar = enumerate(dataloader)
226 | if rank in [-1, 0]:
227 | print(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'GIoU', 'obj', 'cls', 'total', 'targets', 'img_size'))
228 | pbar = tqdm(pbar, total=nb) # progress bar
229 | optimizer.zero_grad()
230 | for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
231 | ni = i + nb * epoch # number integrated batches (since train start)
232 | imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
233 |
234 | # Warmup
235 | if ni <= nw:
236 | xi = [0, nw] # x interp
237 | # model.gr = np.interp(ni, xi, [0.0, 1.0]) # giou loss ratio (obj_loss = 1.0 or giou)
238 | accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
239 | for j, x in enumerate(optimizer.param_groups):
240 | # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
241 | x['lr'] = np.interp(ni, xi, [0.1 if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
242 | if 'momentum' in x:
243 | x['momentum'] = np.interp(ni, xi, [0.9, hyp['momentum']])
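   |             # Note: over the first nw iterations, accumulate ramps 1 -> nbs/total_batch_size,
   |             # non-bias LRs ramp 0 -> scheduled LR, and the bias LR decays 0.1 -> scheduled LR.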
244 |
245 | # Multi-scale
246 | if opt.multi_scale:
247 |                 sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs # size
248 | sf = sz / max(imgs.shape[2:]) # scale factor
249 | if sf != 1:
250 | ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
251 | imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
252 |
253 | # Autocast
254 | with amp.autocast(enabled=cuda):
255 | # Forward
256 | pred = model(imgs)
257 | #pred = model(imgs.to(memory_format=torch.channels_last))
258 |
259 | # Loss
260 | loss, loss_items = compute_loss(pred, targets.to(device), model) # scaled by batch_size
261 | if rank != -1:
262 | loss *= opt.world_size # gradient averaged between devices in DDP mode
263 | # if not torch.isfinite(loss):
264 | # print('WARNING: non-finite loss, ending training ', loss_items)
265 | # return results
266 |
267 | # Backward
268 | scaler.scale(loss).backward()
269 |
270 | # Optimize
271 | if ni % accumulate == 0:
272 | scaler.step(optimizer) # optimizer.step
273 | scaler.update()
274 | optimizer.zero_grad()
275 | if ema is not None:
276 | ema.update(model)
277 |
278 | # Print
279 | if rank in [-1, 0]:
280 | mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
281 | mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
282 | s = ('%10s' * 2 + '%10.4g' * 6) % (
283 | '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
284 | pbar.set_description(s)
285 |
286 | # Plot
287 | if ni < 3:
288 | f = str(log_dir / ('train_batch%g.jpg' % ni)) # filename
289 | result = plot_images(images=imgs, targets=targets, paths=paths, fname=f)
290 | if tb_writer and result is not None:
291 | tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
292 | # tb_writer.add_graph(model, imgs) # add model to tensorboard
293 |
294 | # end batch ------------------------------------------------------------------------------------------------
295 |
296 | # Scheduler
297 | scheduler.step()
298 |
299 | # DDP process 0 or single-GPU
300 | if rank in [-1, 0]:
301 | # mAP
302 | if ema is not None:
303 | ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride'])
304 | final_epoch = epoch + 1 == epochs
305 | if not opt.notest or final_epoch: # Calculate mAP
306 | results, maps, times = test.test(opt.data,
307 | batch_size=batch_size,
308 | imgsz=imgsz_test,
309 | save_json=final_epoch and opt.data.endswith(os.sep + 'coco.yaml'),
310 | model=ema.ema.module if hasattr(ema.ema, 'module') else ema.ema,
311 | single_cls=opt.single_cls,
312 | dataloader=testloader,
313 | save_dir=log_dir)
314 |
315 | # Write
316 | with open(results_file, 'a') as f:
317 |                     f.write(s + '%10.4g' * 7 % results + '\n') # P, R, mAP@0.5, mAP@0.5:0.95, val_losses=(GIoU, obj, cls)
318 | if len(opt.name) and opt.bucket:
319 | os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))
320 |
321 | # Tensorboard
322 | if tb_writer:
323 | tags = ['train/giou_loss', 'train/obj_loss', 'train/cls_loss',
324 | 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
325 | 'val/giou_loss', 'val/obj_loss', 'val/cls_loss']
326 | for x, tag in zip(list(mloss[:-1]) + list(results), tags):
327 | tb_writer.add_scalar(tag, x, epoch)
328 |
329 | # Update best mAP
330 |             fi = fitness(np.array(results).reshape(1, -1)) # fitness_i = weighted combination of [P, R, mAP@0.5, mAP@0.5:0.95]
331 | if fi > best_fitness:
332 | best_fitness = fi
333 |
334 | # Save model
335 | save = (not opt.nosave) or (final_epoch and not opt.evolve)
336 | if save:
337 | with open(results_file, 'r') as f: # create checkpoint
338 | ckpt = {'epoch': epoch,
339 | 'best_fitness': best_fitness,
340 | 'training_results': f.read(),
341 |                         'model': ema.ema.module if hasattr(ema.ema, 'module') else ema.ema,
342 | 'optimizer': None if final_epoch else optimizer.state_dict()}
343 |
344 | # Save last, best and delete
345 | torch.save(ckpt, last)
346 | if epoch >= (epochs-30):
347 | torch.save(ckpt, last.replace('.pt','_{:03d}.pt'.format(epoch)))
348 | if best_fitness == fi:
349 | torch.save(ckpt, best)
350 | del ckpt
351 | # end epoch ----------------------------------------------------------------------------------------------------
352 | # end training
353 |
354 | if rank in [-1, 0]:
355 | # Strip optimizers
356 | n = ('_' if len(opt.name) and not opt.name.isnumeric() else '') + opt.name
357 | fresults, flast, fbest = 'results%s.txt' % n, wdir + 'last%s.pt' % n, wdir + 'best%s.pt' % n
358 | for f1, f2 in zip([wdir + 'last.pt', wdir + 'best.pt', 'results.txt'], [flast, fbest, fresults]):
359 | if os.path.exists(f1):
360 | os.rename(f1, f2) # rename
361 | ispt = f2.endswith('.pt') # is *.pt
362 | strip_optimizer(f2, f2.replace('.pt','_strip.pt')) if ispt else None # strip optimizer
363 | os.system('gsutil cp %s gs://%s/weights' % (f2, opt.bucket)) if opt.bucket and ispt else None # upload
364 | # Finish
365 | if not opt.evolve:
366 | plot_results(save_dir=log_dir) # save as results.png
367 | print('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
368 |
369 | dist.destroy_process_group() if rank not in [-1, 0] else None
370 | torch.cuda.empty_cache()
371 | return results
372 |
373 |
374 | if __name__ == '__main__':
375 | parser = argparse.ArgumentParser()
376 | parser.add_argument('--weights', type=str, default='yolov4-p5.pt', help='initial weights path')
377 | parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
378 | parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')
379 | parser.add_argument('--hyp', type=str, default='', help='hyperparameters path, i.e. data/hyp.scratch.yaml')
380 | parser.add_argument('--epochs', type=int, default=300)
381 | parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
382 | parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='train,test sizes')
383 | parser.add_argument('--rect', action='store_true', help='rectangular training')
384 | parser.add_argument('--resume', nargs='?', const='get_last', default=False,
385 | help='resume from given path/last.pt, or most recent run if blank')
386 | parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
387 | parser.add_argument('--notest', action='store_true', help='only test final epoch')
388 | parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
389 | parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
390 | parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
391 | parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
392 | parser.add_argument('--name', default='', help='renames results.txt to results_name.txt if supplied')
393 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
394 | parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
395 | parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
396 | parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
397 | parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
398 | parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
399 | parser.add_argument('--logdir', type=str, default='runs/', help='logging directory')
400 | opt = parser.parse_args()
401 |
402 | # Resume
403 | if opt.resume:
404 | last = get_latest_run() if opt.resume == 'get_last' else opt.resume # resume from most recent run
405 | if last and not opt.weights:
406 | print(f'Resuming training from {last}')
407 | opt.weights = last if opt.resume and not opt.weights else opt.weights
408 | if opt.local_rank == -1 or ("RANK" in os.environ and os.environ["RANK"] == "0"):
409 | check_git_status()
410 |
411 | opt.hyp = opt.hyp or ('data/hyp.finetune.yaml' if opt.weights else 'data/hyp.scratch.yaml')
412 | opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
413 | assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
414 |
415 | opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
416 | device = select_device(opt.device, batch_size=opt.batch_size)
417 | opt.total_batch_size = opt.batch_size
418 | opt.world_size = 1
419 | opt.global_rank = -1
420 |
421 | # DDP mode
422 | if opt.local_rank != -1:
423 | assert torch.cuda.device_count() > opt.local_rank
424 | torch.cuda.set_device(opt.local_rank)
425 | device = torch.device('cuda', opt.local_rank)
426 | dist.init_process_group(backend='nccl', init_method='env://') # distributed backend
427 | opt.world_size = dist.get_world_size()
428 | opt.global_rank = dist.get_rank()
429 | assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
430 | opt.batch_size = opt.total_batch_size // opt.world_size
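   |         # Note: after this split each DDP process steps with batch_size images, while
   |         # total_batch_size still drives the accumulate/weight_decay scaling inside train().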
431 |
432 | print(opt)
433 | with open(opt.hyp) as f:
434 | hyp = yaml.load(f, Loader=yaml.FullLoader) # load hyps
435 |
436 | # Train
437 | if not opt.evolve:
438 | tb_writer = None
439 | if opt.global_rank in [-1, 0]:
440 | print('Start Tensorboard with "tensorboard --logdir %s", view at http://localhost:6006/' % opt.logdir)
441 | tb_writer = SummaryWriter(log_dir=increment_dir(Path(opt.logdir) / 'exp', opt.name)) # runs/exp
442 |
443 | train(hyp, opt, device, tb_writer)
444 |
445 | # Evolve hyperparameters (optional)
446 | else:
447 | # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
448 | meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
449 | 'momentum': (0.1, 0.6, 0.98), # SGD momentum/Adam beta1
450 | 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
451 | 'giou': (1, 0.02, 0.2), # GIoU loss gain
452 | 'cls': (1, 0.2, 4.0), # cls loss gain
453 | 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
454 | 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
455 | 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
456 | 'iou_t': (0, 0.1, 0.7), # IoU training threshold
457 | 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
458 | 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
459 | 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
460 | 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
461 | 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
462 | 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
463 | 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
464 | 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
465 | 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
466 | 'perspective': (1, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
467 | 'flipud': (0, 0.0, 1.0), # image flip up-down (probability)
468 | 'fliplr': (1, 0.0, 1.0), # image flip left-right (probability)
469 | 'mixup': (1, 0.0, 1.0)} # image mixup (probability)
470 |
471 | assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
472 | opt.notest, opt.nosave = True, True # only test/save final epoch
473 | # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
474 | yaml_file = Path('runs/evolve/hyp_evolved.yaml') # save best result here
475 | if opt.bucket:
476 | os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
477 |
478 | for _ in range(100): # generations to evolve
479 | if os.path.exists('evolve.txt'): # if evolve.txt exists: select best hyps and mutate
480 | # Select parent(s)
481 | parent = 'single' # parent selection method: 'single' or 'weighted'
482 | x = np.loadtxt('evolve.txt', ndmin=2)
483 | n = min(5, len(x)) # number of previous results to consider
484 | x = x[np.argsort(-fitness(x))][:n] # top n mutations
485 | w = fitness(x) - fitness(x).min() # weights
486 | if parent == 'single' or len(x) == 1:
487 | # x = x[random.randint(0, n - 1)] # random selection
488 | x = x[random.choices(range(n), weights=w)[0]] # weighted selection
489 | elif parent == 'weighted':
490 | x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
491 |
492 | # Mutate
493 | mp, s = 0.9, 0.2 # mutation probability, sigma
494 | npr = np.random
495 | npr.seed(int(time.time()))
496 | g = np.array([x[0] for x in meta.values()]) # gains 0-1
497 | ng = len(meta)
498 | v = np.ones(ng)
499 | while all(v == 1): # mutate until a change occurs (prevent duplicates)
500 | v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
501 | for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
502 | hyp[k] = float(x[i + 7] * v[i]) # mutate
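   |             # Note: the i + 7 offset skips the 7 result columns (P, R, mAP@0.5, mAP@0.5:0.95
   |             # and the 3 val losses) that precede the hyp values in each evolve.txt row.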
503 |
504 | # Constrain to limits
505 | for k, v in meta.items():
506 | hyp[k] = max(hyp[k], v[1]) # lower limit
507 | hyp[k] = min(hyp[k], v[2]) # upper limit
508 | hyp[k] = round(hyp[k], 5) # significant digits
509 |
510 | # Train mutation
511 | results = train(hyp.copy(), opt, device)
512 |
513 | # Write mutation results
514 | print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
515 |
516 | # Plot results
517 | plot_evolution(yaml_file)
518 | print('Hyperparameter evolution complete. Best results saved as: %s\nCommand to train a new model with these '
519 | 'hyperparameters: $ python train.py --hyp %s' % (yaml_file, yaml_file))
520 |
--------------------------------------------------------------------------------
/utils/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/utils/activations.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 |
6 | # Swish https://arxiv.org/pdf/1905.02244.pdf ---------------------------------------------------------------------------
7 | class Swish(nn.Module):
8 | @staticmethod
9 | def forward(x):
10 | return x * torch.sigmoid(x)
11 |
12 |
13 | class HardSwish(nn.Module):
14 | @staticmethod
15 | def forward(x):
16 | return x * F.hardtanh(x + 3, 0., 6., True) / 6.
17 |
18 |
19 | class MemoryEfficientSwish(nn.Module):
20 | class F(torch.autograd.Function):
21 | @staticmethod
22 | def forward(ctx, x):
23 | ctx.save_for_backward(x)
24 | return x * torch.sigmoid(x)
25 |
26 | @staticmethod
27 | def backward(ctx, grad_output):
28 | x = ctx.saved_tensors[0]
29 | sx = torch.sigmoid(x)
30 | return grad_output * (sx * (1 + x * (1 - sx)))
31 |
32 | def forward(self, x):
33 | return self.F.apply(x)
34 |
35 |
36 | # Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
37 | class Mish(nn.Module):
38 | @staticmethod
39 | def forward(x):
40 | return x * F.softplus(x).tanh()
41 |
42 |
43 | class MemoryEfficientMish(nn.Module):
44 | class F(torch.autograd.Function):
45 | @staticmethod
46 | def forward(ctx, x):
47 | ctx.save_for_backward(x)
48 | return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x)))
49 |
50 | @staticmethod
51 | def backward(ctx, grad_output):
52 | x = ctx.saved_tensors[0]
53 | sx = torch.sigmoid(x)
54 | fx = F.softplus(x).tanh()
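   |             # Note: d/dx[x * tanh(softplus(x))] = fx + x * sigmoid(x) * (1 - fx^2), since
   |             # softplus'(x) = sigmoid(x) and tanh'(u) = 1 - tanh(u)^2.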
55 | return grad_output * (fx + x * sx * (1 - fx * fx))
56 |
57 | def forward(self, x):
58 | return self.F.apply(x)
59 |
60 |
61 | # FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
62 | class FReLU(nn.Module):
63 | def __init__(self, c1, k=3): # ch_in, kernel
64 | super().__init__()
65 | self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1)
66 | self.bn = nn.BatchNorm2d(c1)
67 |
68 | def forward(self, x):
69 | return torch.max(x, self.bn(self.conv(x)))
70 |
--------------------------------------------------------------------------------
/utils/datasets.py:
--------------------------------------------------------------------------------
1 | import glob
2 | import math
3 | import os
4 | import random
5 | import shutil
6 | import time
7 | from pathlib import Path
8 | from threading import Thread
9 |
10 | import cv2
11 | import numpy as np
12 | import torch
13 | from PIL import Image, ExifTags
14 | from torch.utils.data import Dataset
15 | from tqdm import tqdm
16 |
17 | from utils.general import xyxy2xywh, xywh2xyxy, torch_distributed_zero_first
18 |
19 | help_url = ''
20 | img_formats = ['.bmp', '.jpg', '.jpeg', '.png', '.tif', '.tiff', '.dng']
21 | vid_formats = ['.mov', '.avi', '.mp4', '.mpg', '.mpeg', '.m4v', '.wmv', '.mkv']
22 |
23 | # Get orientation exif tag
24 | for orientation in ExifTags.TAGS.keys():
25 | if ExifTags.TAGS[orientation] == 'Orientation':
26 | break
27 |
28 |
29 | def get_hash(files):
30 | # Returns a single hash value of a list of files
31 | return sum(os.path.getsize(f) for f in files if os.path.isfile(f))
32 |
33 |
34 | def exif_size(img):
35 | # Returns exif-corrected PIL size
36 | s = img.size # (width, height)
37 | try:
38 | rotation = dict(img._getexif().items())[orientation]
39 | if rotation == 6: # rotation 270
40 | s = (s[1], s[0])
41 | elif rotation == 8: # rotation 90
42 | s = (s[1], s[0])
43 |     except Exception: # image EXIF data missing or malformed
44 | pass
45 |
46 | return s
47 |
48 |
49 | def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False,
50 | local_rank=-1, world_size=1):
51 |     # Make sure only the first process in DDP processes the dataset first, so the others can use its cache.
52 | with torch_distributed_zero_first(local_rank):
53 | dataset = LoadImagesAndLabels(path, imgsz, batch_size,
54 | augment=augment, # augment images
55 | hyp=hyp, # augmentation hyperparameters
56 | rect=rect, # rectangular training
57 | cache_images=cache,
58 | single_cls=opt.single_cls,
59 | stride=int(stride),
60 | pad=pad)
61 |
62 | batch_size = min(batch_size, len(dataset))
63 | nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, 8]) # number of workers
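   |     # Note: worker count is capped by the CPUs available per DDP process, by the batch size,
   |     # and by a hard ceiling of 8.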
64 | train_sampler = torch.utils.data.distributed.DistributedSampler(dataset) if local_rank != -1 else None
65 | dataloader = torch.utils.data.DataLoader(dataset,
66 | batch_size=batch_size,
67 | num_workers=nw,
68 | sampler=train_sampler,
69 | pin_memory=True,
70 | collate_fn=LoadImagesAndLabels.collate_fn)
71 | return dataloader, dataset
72 |
73 |
74 | class LoadImages: # for inference
75 | def __init__(self, path, img_size=640):
76 | p = str(Path(path)) # os-agnostic
77 | p = os.path.abspath(p) # absolute path
78 | if '*' in p:
79 | files = sorted(glob.glob(p)) # glob
80 | elif os.path.isdir(p):
81 | files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir
82 | elif os.path.isfile(p):
83 | files = [p] # files
84 | else:
85 | raise Exception('ERROR: %s does not exist' % p)
86 |
87 | images = [x for x in files if os.path.splitext(x)[-1].lower() in img_formats]
88 | videos = [x for x in files if os.path.splitext(x)[-1].lower() in vid_formats]
89 | ni, nv = len(images), len(videos)
90 |
91 | self.img_size = img_size
92 | self.files = images + videos
93 | self.nf = ni + nv # number of files
94 | self.video_flag = [False] * ni + [True] * nv
95 | self.mode = 'images'
96 | if any(videos):
97 | self.new_video(videos[0]) # new video
98 | else:
99 | self.cap = None
100 | assert self.nf > 0, 'No images or videos found in %s. Supported formats are:\nimages: %s\nvideos: %s' % \
101 | (p, img_formats, vid_formats)
102 |
103 | def __iter__(self):
104 | self.count = 0
105 | return self
106 |
107 | def __next__(self):
108 | if self.count == self.nf:
109 | raise StopIteration
110 | path = self.files[self.count]
111 |
112 | if self.video_flag[self.count]:
113 | # Read video
114 | self.mode = 'video'
115 | ret_val, img0 = self.cap.read()
116 | if not ret_val:
117 | self.count += 1
118 | self.cap.release()
119 | if self.count == self.nf: # last video
120 | raise StopIteration
121 | else:
122 | path = self.files[self.count]
123 | self.new_video(path)
124 | ret_val, img0 = self.cap.read()
125 |
126 | self.frame += 1
127 | print('video %g/%g (%g/%g) %s: ' % (self.count + 1, self.nf, self.frame, self.nframes, path), end='')
128 |
129 | else:
130 | # Read image
131 | self.count += 1
132 | img0 = cv2.imread(path) # BGR
133 | assert img0 is not None, 'Image Not Found ' + path
134 | print('image %g/%g %s: ' % (self.count, self.nf, path), end='')
135 |
136 | # Padded resize
137 | img = letterbox(img0, new_shape=self.img_size)[0]
138 |
139 | # Convert
140 | img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
141 | img = np.ascontiguousarray(img)
142 |
143 | # cv2.imwrite(path + '.letterbox.jpg', 255 * img.transpose((1, 2, 0))[:, :, ::-1]) # save letterbox image
144 | return path, img, img0, self.cap
145 |
146 | def new_video(self, path):
147 | self.frame = 0
148 | self.cap = cv2.VideoCapture(path)
149 | self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
150 |
151 | def __len__(self):
152 | return self.nf # number of files
153 |
154 |
155 | class LoadWebcam: # for inference
156 | def __init__(self, pipe=0, img_size=640):
157 | self.img_size = img_size
158 |
159 | if pipe == '0':
160 | pipe = 0 # local camera
161 | # pipe = 'rtsp://192.168.1.64/1' # IP camera
162 | # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login
163 | # pipe = 'rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa' # IP traffic camera
164 | # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera
165 |
166 | # https://answers.opencv.org/question/215996/changing-gstreamer-pipeline-to-opencv-in-pythonsolved/
167 | # pipe = '"rtspsrc location="rtsp://username:password@192.168.1.64/1" latency=10 ! appsink' # GStreamer
168 |
169 | # https://answers.opencv.org/question/200787/video-acceleration-gstremer-pipeline-in-videocapture/
170 | # https://stackoverflow.com/questions/54095699/install-gstreamer-support-for-opencv-python-package # install help
171 | # pipe = "rtspsrc location=rtsp://root:root@192.168.0.91:554/axis-media/media.amp?videocodec=h264&resolution=3840x2160 protocols=GST_RTSP_LOWER_TRANS_TCP ! rtph264depay ! queue ! vaapih264dec ! videoconvert ! appsink" # GStreamer
172 |
173 | self.pipe = pipe
174 | self.cap = cv2.VideoCapture(pipe) # video capture object
175 | self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size
176 |
177 | def __iter__(self):
178 | self.count = -1
179 | return self
180 |
181 | def __next__(self):
182 | self.count += 1
183 | if cv2.waitKey(1) == ord('q'): # q to quit
184 | self.cap.release()
185 | cv2.destroyAllWindows()
186 | raise StopIteration
187 |
188 | # Read frame
189 | if self.pipe == 0: # local camera
190 | ret_val, img0 = self.cap.read()
191 | img0 = cv2.flip(img0, 1) # flip left-right
192 | else: # IP camera
193 | n = 0
194 | while True:
195 | n += 1
196 | self.cap.grab()
197 | if n % 30 == 0: # skip frames
198 | ret_val, img0 = self.cap.retrieve()
199 | if ret_val:
200 | break
201 |
202 | # Print
203 | assert ret_val, 'Camera Error %s' % self.pipe
204 | img_path = 'webcam.jpg'
205 | print('webcam %g: ' % self.count, end='')
206 |
207 | # Padded resize
208 | img = letterbox(img0, new_shape=self.img_size)[0]
209 |
210 | # Convert
211 | img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
212 | img = np.ascontiguousarray(img)
213 |
214 | return img_path, img, img0, None
215 |
216 | def __len__(self):
217 | return 0
218 |
219 |
220 | class LoadStreams: # multiple IP or RTSP cameras
221 | def __init__(self, sources='streams.txt', img_size=640):
222 | self.mode = 'images'
223 | self.img_size = img_size
224 |
225 | if os.path.isfile(sources):
226 | with open(sources, 'r') as f:
227 | sources = [x.strip() for x in f.read().splitlines() if len(x.strip())]
228 | else:
229 | sources = [sources]
230 |
231 | n = len(sources)
232 | self.imgs = [None] * n
233 | self.sources = sources
234 | for i, s in enumerate(sources):
235 | # Start the thread to read frames from the video stream
236 | print('%g/%g: %s... ' % (i + 1, n, s), end='')
237 | cap = cv2.VideoCapture(0 if s == '0' else s)
238 | assert cap.isOpened(), 'Failed to open %s' % s
239 | w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
240 | h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
241 | fps = cap.get(cv2.CAP_PROP_FPS) % 100
242 | _, self.imgs[i] = cap.read() # guarantee first frame
243 | thread = Thread(target=self.update, args=([i, cap]), daemon=True)
244 | print(' success (%gx%g at %.2f FPS).' % (w, h, fps))
245 | thread.start()
246 | print('') # newline
247 |
248 | # check for common shapes
249 | s = np.stack([letterbox(x, new_shape=self.img_size)[0].shape for x in self.imgs], 0) # inference shapes
250 | self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal
251 | if not self.rect:
252 | print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
253 |
254 | def update(self, index, cap):
255 | # Read next stream frame in a daemon thread
256 | n = 0
257 | while cap.isOpened():
258 | n += 1
259 | # _, self.imgs[index] = cap.read()
260 | cap.grab()
261 | if n == 4: # read every 4th frame
262 | _, self.imgs[index] = cap.retrieve()
263 | n = 0
264 | time.sleep(0.01) # wait time
265 |
266 | def __iter__(self):
267 | self.count = -1
268 | return self
269 |
270 | def __next__(self):
271 | self.count += 1
272 | img0 = self.imgs.copy()
273 | if cv2.waitKey(1) == ord('q'): # q to quit
274 | cv2.destroyAllWindows()
275 | raise StopIteration
276 |
277 | # Letterbox
278 | img = [letterbox(x, new_shape=self.img_size, auto=self.rect)[0] for x in img0]
279 |
280 | # Stack
281 | img = np.stack(img, 0)
282 |
283 | # Convert
284 | img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416
285 | img = np.ascontiguousarray(img)
286 |
287 | return self.sources, img, img0, None
288 |
289 | def __len__(self):
290 | return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years
291 |
292 |
293 | class LoadImagesAndLabels(Dataset): # for training/testing
294 | def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,
295 | cache_images=False, single_cls=False, stride=32, pad=0.0):
296 | try:
297 | f = [] # image files
298 | for p in path if isinstance(path, list) else [path]:
299 | p = str(Path(p)) # os-agnostic
300 | parent = str(Path(p).parent) + os.sep
301 | if os.path.isfile(p): # file
302 | with open(p, 'r') as t:
303 | t = t.read().splitlines()
304 | f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path
305 | elif os.path.isdir(p): # folder
306 | f += glob.iglob(p + os.sep + '*.*')
307 | else:
308 | raise Exception('%s does not exist' % p)
309 | self.img_files = sorted(
310 | [x.replace('/', os.sep) for x in f if os.path.splitext(x)[-1].lower() in img_formats])
311 | except Exception as e:
312 | raise Exception('Error loading data from %s: %s\nSee %s' % (path, e, help_url))
313 |
314 | n = len(self.img_files)
315 | assert n > 0, 'No images found in %s. See %s' % (path, help_url)
316 | bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index
317 | nb = bi[-1] + 1 # number of batches
318 |
319 | self.n = n # number of images
320 | self.batch = bi # batch index of image
321 | self.img_size = img_size
322 | self.augment = augment
323 | self.hyp = hyp
324 | self.image_weights = image_weights
325 | self.rect = False if image_weights else rect
326 | self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training)
327 | self.mosaic_border = [-img_size // 2, -img_size // 2]
328 | self.stride = stride
329 |
330 | # Define labels
331 | self.label_files = [x.replace('images', 'labels').replace(os.path.splitext(x)[-1], '.txt') for x in
332 | self.img_files]
333 |
334 | # Check cache
335 | cache_path = str(Path(self.label_files[0]).parent) + '.cache' # cached labels
336 | if os.path.isfile(cache_path):
337 | cache = torch.load(cache_path) # load
338 | if cache['hash'] != get_hash(self.label_files + self.img_files): # dataset changed
339 | cache = self.cache_labels(cache_path) # re-cache
340 | else:
341 | cache = self.cache_labels(cache_path) # cache
342 |
343 | # Get labels
344 | labels, shapes = zip(*[cache[x] for x in self.img_files])
345 | self.shapes = np.array(shapes, dtype=np.float64)
346 | self.labels = list(labels)
347 |
348 | # Rectangular Training https://github.com/ultralytics/yolov3/issues/232
349 | if self.rect:
350 | # Sort by aspect ratio
351 | s = self.shapes # wh
352 | ar = s[:, 1] / s[:, 0] # aspect ratio
353 | irect = ar.argsort()
354 | self.img_files = [self.img_files[i] for i in irect]
355 | self.label_files = [self.label_files[i] for i in irect]
356 | self.labels = [self.labels[i] for i in irect]
357 | self.shapes = s[irect] # wh
358 | ar = ar[irect]
359 |
360 | # Set training image shapes
361 | shapes = [[1, 1]] * nb
362 | for i in range(nb):
363 | ari = ar[bi == i]
364 | mini, maxi = ari.min(), ari.max()
365 | if maxi < 1:
366 | shapes[i] = [maxi, 1]
367 | elif mini > 1:
368 | shapes[i] = [1, 1 / mini]
369 |
370 | self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride
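   |             # Note: images were sorted by aspect ratio above so each batch can share a single
   |             # letterbox shape; batch_shapes rounds that shape up to a stride multiple (plus
   |             # pad), minimizing padding pixels during rectangular training/inference.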
371 |
372 | # Cache labels
373 | create_datasubset, extract_bounding_boxes, labels_loaded = False, False, False
374 | nm, nf, ne, ns, nd = 0, 0, 0, 0, 0 # number missing, found, empty, datasubset, duplicate
375 | pbar = tqdm(self.label_files)
376 | for i, file in enumerate(pbar):
377 | l = self.labels[i] # label
378 | if l.shape[0]:
379 |                 assert l.shape[1] == 5, 'labels require 5 columns each: %s' % file
380 | assert (l >= 0).all(), 'negative labels: %s' % file
381 | assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels: %s' % file
382 | if np.unique(l, axis=0).shape[0] < l.shape[0]: # duplicate rows
383 | nd += 1 # print('WARNING: duplicate rows in %s' % self.label_files[i]) # duplicate rows
384 | if single_cls:
385 | l[:, 0] = 0 # force dataset into single-class mode
386 | self.labels[i] = l
387 | nf += 1 # file found
388 |
389 | # Create subdataset (a smaller dataset)
390 | if create_datasubset and ns < 1E4:
391 | if ns == 0:
392 | create_folder(path='./datasubset')
393 | os.makedirs('./datasubset/images')
394 | exclude_classes = 43
395 | if exclude_classes not in l[:, 0]:
396 | ns += 1
397 | # shutil.copy(src=self.img_files[i], dst='./datasubset/images/') # copy image
398 | with open('./datasubset/images.txt', 'a') as f:
399 | f.write(self.img_files[i] + '\n')
400 |
401 | # Extract object detection boxes for a second stage classifier
402 | if extract_bounding_boxes:
403 | p = Path(self.img_files[i])
404 | img = cv2.imread(str(p))
405 | h, w = img.shape[:2]
406 | for j, x in enumerate(l):
407 | f = '%s%sclassifier%s%g_%g_%s' % (p.parent.parent, os.sep, os.sep, x[0], j, p.name)
408 | if not os.path.exists(Path(f).parent):
409 | os.makedirs(Path(f).parent) # make new output folder
410 |
411 | b = x[1:] * [w, h, w, h] # box
412 | b[2:] = b[2:].max() # rectangle to square
413 | b[2:] = b[2:] * 1.3 + 30 # pad
414 | b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int)
415 |
416 | b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image
417 | b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
418 | assert cv2.imwrite(f, img[b[1]:b[3], b[0]:b[2]]), 'Failure extracting classifier boxes'
419 | else:
420 | ne += 1 # print('empty labels for image %s' % self.img_files[i]) # file empty
421 | # os.system("rm '%s' '%s'" % (self.img_files[i], self.label_files[i])) # remove
422 |
423 | pbar.desc = 'Scanning labels %s (%g found, %g missing, %g empty, %g duplicate, for %g images)' % (
424 | cache_path, nf, nm, ne, nd, n)
425 | if nf == 0:
426 | s = 'WARNING: No labels found in %s. See %s' % (os.path.dirname(file) + os.sep, help_url)
427 | print(s)
428 |             assert not augment, '%s. Cannot train without labels.' % s
429 |
430 | # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
431 | self.imgs = [None] * n
432 | if cache_images:
433 | gb = 0 # Gigabytes of cached images
434 | pbar = tqdm(range(len(self.img_files)), desc='Caching images')
435 | self.img_hw0, self.img_hw = [None] * n, [None] * n
436 |             for i in pbar: # cache images into RAM
437 | self.imgs[i], self.img_hw0[i], self.img_hw[i] = load_image(self, i) # img, hw_original, hw_resized
438 | gb += self.imgs[i].nbytes
439 | pbar.desc = 'Caching images (%.1fGB)' % (gb / 1E9)
440 |
441 | def cache_labels(self, path='labels.cache'):
442 | # Cache dataset labels, check images and read shapes
443 | x = {} # dict
444 | pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files))
445 | for (img, label) in pbar:
446 | try:
447 | l = []
448 | image = Image.open(img)
449 | image.verify() # PIL verify
450 | # _ = io.imread(img) # skimage verify (from skimage import io)
451 | shape = exif_size(image) # image size
452 | assert (shape[0] > 9) & (shape[1] > 9), 'image size <10 pixels'
453 | if os.path.isfile(label):
454 | with open(label, 'r') as f:
455 | l = np.array([x.split() for x in f.read().splitlines()], dtype=np.float32) # labels
456 | if len(l) == 0:
457 | l = np.zeros((0, 5), dtype=np.float32)
458 | x[img] = [l, shape]
459 | except Exception as e:
460 | x[img] = None
461 | print('WARNING: %s: %s' % (img, e))
462 |
463 | x['hash'] = get_hash(self.label_files + self.img_files)
464 | torch.save(x, path) # save for next time
465 | return x
466 |
467 | def __len__(self):
468 | return len(self.img_files)
469 |
470 | # def __iter__(self):
471 | # self.count = -1
472 | # print('ran dataset iter')
473 | # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
474 | # return self
475 |
476 | def __getitem__(self, index):
477 | if self.image_weights:
478 | index = self.indices[index]
479 |
480 | hyp = self.hyp
481 | if self.mosaic:
482 | # Load mosaic
483 | img, labels = load_mosaic(self, index)
484 | shapes = None
485 |
486 | # MixUp https://arxiv.org/pdf/1710.09412.pdf
487 | if random.random() < hyp['mixup']:
488 | img2, labels2 = load_mosaic(self, random.randint(0, len(self.labels) - 1))
489 | r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0
490 | img = (img * r + img2 * (1 - r)).astype(np.uint8)
491 | labels = np.concatenate((labels, labels2), 0)
492 |
493 | else:
494 | # Load image
495 | img, (h0, w0), (h, w) = load_image(self, index)
496 |
497 | # Letterbox
498 | shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape
499 | img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
500 | shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling
501 |
502 | # Load labels
503 | labels = []
504 | x = self.labels[index]
505 | if x.size > 0:
506 | # Normalized xywh to pixel xyxy format
507 | labels = x.copy()
508 | labels[:, 1] = ratio[0] * w * (x[:, 1] - x[:, 3] / 2) + pad[0] # pad width
509 | labels[:, 2] = ratio[1] * h * (x[:, 2] - x[:, 4] / 2) + pad[1] # pad height
510 | labels[:, 3] = ratio[0] * w * (x[:, 1] + x[:, 3] / 2) + pad[0]
511 | labels[:, 4] = ratio[1] * h * (x[:, 2] + x[:, 4] / 2) + pad[1]
512 |
513 | if self.augment:
514 | # Augment imagespace
515 | if not self.mosaic:
516 | img, labels = random_perspective(img, labels,
517 | degrees=hyp['degrees'],
518 | translate=hyp['translate'],
519 | scale=hyp['scale'],
520 | shear=hyp['shear'],
521 | perspective=hyp['perspective'])
522 |
523 | # Augment colorspace
524 | augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])
525 |
526 | # Apply cutouts
527 | # if random.random() < 0.9:
528 | # labels = cutout(img, labels)
529 |
530 | nL = len(labels) # number of labels
531 | if nL:
532 | labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh
533 | labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1
534 | labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1
535 |
536 | if self.augment:
537 | # flip up-down
538 | if random.random() < hyp['flipud']:
539 | img = np.flipud(img)
540 | if nL:
541 | labels[:, 2] = 1 - labels[:, 2]
542 |
543 | # flip left-right
544 | if random.random() < hyp['fliplr']:
545 | img = np.fliplr(img)
546 | if nL:
547 | labels[:, 1] = 1 - labels[:, 1]
548 |
549 | labels_out = torch.zeros((nL, 6))
550 | if nL:
551 | labels_out[:, 1:] = torch.from_numpy(labels)
552 |
553 | # Convert
554 | img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
555 | img = np.ascontiguousarray(img)
556 |
557 | return torch.from_numpy(img), labels_out, self.img_files[index], shapes
558 |
559 | @staticmethod
560 | def collate_fn(batch):
561 | img, label, path, shapes = zip(*batch) # transposed
562 | for i, l in enumerate(label):
563 | l[:, 0] = i # add target image index for build_targets()
564 | return torch.stack(img, 0), torch.cat(label, 0), path, shapes
565 |
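# Usage sketch (illustrative, not in the original file; assumes the enclosing
# class is named LoadImagesAndLabels): collate_fn stacks images into a
# (bs, 3, h, w) uint8 tensor and concatenates labels into an (n, 6) float
# tensor of [image_index, class, x, y, w, h], xywh normalized 0-1. The
# image_index column written above is how build_targets() later routes each
# label back to its image within the batch.
#
# loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True,
#                                      num_workers=8, pin_memory=True,
#                                      collate_fn=LoadImagesAndLabels.collate_fn)
# for imgs, targets, paths, shapes in loader:
#     imgs = imgs.float() / 255.0  # uint8 0-255 -> float 0.0-1.0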
566 |
567 | # Ancillary functions --------------------------------------------------------------------------------------------------
568 | def load_image(self, index):
569 | # loads 1 image from dataset, returns img, original hw, resized hw
570 | img = self.imgs[index]
571 | if img is None: # not cached
572 | path = self.img_files[index]
573 | img = cv2.imread(path) # BGR
574 | assert img is not None, 'Image Not Found ' + path
575 | h0, w0 = img.shape[:2] # orig hw
576 | r = self.img_size / max(h0, w0) # resize image to img_size
577 |         if r != 1:  # resize whenever the scale ratio differs from 1
578 | interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR
579 | img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp)
580 | return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized
581 | else:
582 | return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized
583 |
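# Note (illustrative): self.imgs / self.img_hw0 / self.img_hw are per-index
# caches, so when image caching is enabled at construction time the else
# branch above is a pure in-memory lookup with no disk I/O. cv2.INTER_AREA is
# chosen when shrinking because it antialiases; cv2.INTER_LINEAR is faster and
# is preferred when augmenting, where slight aliasing is acceptable.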
584 |
585 | def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5):
586 | r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
587 | hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
588 | dtype = img.dtype # uint8
589 |
590 | x = np.arange(0, 256, dtype=np.int16)
591 | lut_hue = ((x * r[0]) % 180).astype(dtype)
592 | lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
593 | lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
594 |
595 | img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype)
596 | cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed
597 |
598 | # Histogram equalization
599 | # if random.random() < 0.2:
600 | # for i in range(3):
601 | # img[:, :, i] = cv2.equalizeHist(img[:, :, i])
602 |
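# Worked example (illustrative): with hgain=0.5 and a sampled gain r[0]=1.2, a
# pixel of hue 100 maps to (100 * 1.2) % 180 = 120. OpenCV stores uint8 hue in
# 0-179, which is why the hue LUT wraps modulo 180 while the saturation and
# value LUTs clip to 0-255. Because the gains are applied through cv2.LUT over
# the 256 possible byte values, the cost is independent of image size.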
603 |
604 | def load_mosaic(self, index):
605 |     # loads 4 images into a single 2s x 2s mosaic
606 |
607 | labels4 = []
608 | s = self.img_size
609 |     yc, xc = s, s  # mosaic center y, x
610 | indices = [index] + [random.randint(0, len(self.labels) - 1) for _ in range(3)] # 3 additional image indices
611 | for i, index in enumerate(indices):
612 | # Load image
613 | img, _, (h, w) = load_image(self, index)
614 |
615 | # place img in img4
616 | if i == 0: # top left
617 | img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
618 | x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
619 | x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
620 | elif i == 1: # top right
621 | x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
622 | x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
623 | elif i == 2: # bottom left
624 | x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
625 |             x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)  # source crop capped at image width
626 | elif i == 3: # bottom right
627 | x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
628 | x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
629 |
630 | img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
631 | padw = x1a - x1b
632 | padh = y1a - y1b
633 |
634 | # Labels
635 | x = self.labels[index]
636 | labels = x.copy()
637 | if x.size > 0: # Normalized xywh to pixel xyxy format
638 | labels[:, 1] = w * (x[:, 1] - x[:, 3] / 2) + padw
639 | labels[:, 2] = h * (x[:, 2] - x[:, 4] / 2) + padh
640 | labels[:, 3] = w * (x[:, 1] + x[:, 3] / 2) + padw
641 | labels[:, 4] = h * (x[:, 2] + x[:, 4] / 2) + padh
642 | labels4.append(labels)
643 |
644 | # Concat/clip labels
645 | if len(labels4):
646 | labels4 = np.concatenate(labels4, 0)
647 | # np.clip(labels4[:, 1:] - s / 2, 0, s, out=labels4[:, 1:]) # use with center crop
648 |         np.clip(labels4[:, 1:], 0, 2 * s, out=labels4[:, 1:])  # use with random_perspective()
649 |
650 | # Replicate
651 | # img4, labels4 = replicate(img4, labels4)
652 |
653 | # Augment
654 | # img4 = img4[s // 2: int(s * 1.5), s // 2:int(s * 1.5)] # center crop (WARNING, requires box pruning)
655 | img4, labels4 = random_perspective(img4, labels4,
656 | degrees=self.hyp['degrees'],
657 | translate=self.hyp['translate'],
658 | scale=self.hyp['scale'],
659 | shear=self.hyp['shear'],
660 | perspective=self.hyp['perspective'],
661 | border=self.mosaic_border) # border to remove
662 |
663 | return img4, labels4
664 |
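# Geometry sketch (illustrative): load_mosaic pastes one image per quadrant of
# a 2s x 2s grey (114) canvas centred at (xc, yc) = (s, s):
#
#   +--------+--------+
#   | i=0 TL | i=1 TR |
#   +--------+--------+
#   | i=2 BL | i=3 BR |
#   +--------+--------+
#
# padw = x1a - x1b and padh = y1a - y1b translate each tile's pixel
# coordinates into canvas coordinates, so the label conversion above is the
# usual normalized-xywh -> pixel-xyxy transform plus that per-tile offset.
# random_perspective(..., border=self.mosaic_border) then crops the oversized
# canvas back down to the training resolution.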
665 |
666 | def replicate(img, labels):
667 | # Replicate labels
668 | h, w = img.shape[:2]
669 | boxes = labels[:, 1:].astype(int)
670 | x1, y1, x2, y2 = boxes.T
671 | s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
672 | for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
673 | x1b, y1b, x2b, y2b = boxes[i]
674 | bh, bw = y2b - y1b, x2b - x1b
675 |         yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw))  # offset y, x
676 | x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
677 | img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
678 | labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
679 |
680 | return img, labels
681 |
682 |
683 | def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
684 |     # Resize image to a stride-multiple rectangle (mod 128 below) https://github.com/ultralytics/yolov3/issues/232
685 | shape = img.shape[:2] # current shape [height, width]
686 | if isinstance(new_shape, int):
687 | new_shape = (new_shape, new_shape)
688 |
689 | # Scale ratio (new / old)
690 | r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
691 | if not scaleup: # only scale down, do not scale up (for better test mAP)
692 | r = min(r, 1.0)
693 |
694 | # Compute padding
695 | ratio = r, r # width, height ratios
696 | new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
697 | dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
698 | if auto: # minimum rectangle
699 |         dw, dh = np.mod(dw, 128), np.mod(dh, 128)  # wh padding to the nearest 128-pixel multiple
700 | elif scaleFill: # stretch
701 | dw, dh = 0.0, 0.0
702 | new_unpad = (new_shape[1], new_shape[0])
703 | ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
704 |
705 | dw /= 2 # divide padding into 2 sides
706 | dh /= 2
707 |
708 | if shape[::-1] != new_unpad: # resize
709 | img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
710 | top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
711 | left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
712 | img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
713 | return img, ratio, (dw, dh)
714 |
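# Usage sketch (illustrative; 'image.jpg' is a placeholder path): letterbox
# resizes with the aspect ratio preserved and pads the remainder with grey
# (114), returning everything needed to map detections back to the original:
#
# img0 = cv2.imread('image.jpg')  # any HxW BGR image
# img, (rw, rh), (dw, dh) = letterbox(img0, new_shape=640, auto=False)
# # original -> letterboxed: x' = x * rw + dw, y' = y * rh + dh
# # letterboxed -> original: x = (x' - dw) / rw, y = (y' - dh) / rh
#
# With auto=True the padding is only rounded up to the stride multiple (128
# here) rather than to the full square, which keeps rectangular inference cheap.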
715 |
716 | def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)):
717 | # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
718 | # targets = [cls, xyxy]
719 |
720 | height = img.shape[0] + border[0] * 2 # shape(h,w,c)
721 | width = img.shape[1] + border[1] * 2
722 |
723 | # Center
724 | C = np.eye(3)
725 | C[0, 2] = -img.shape[1] / 2 # x translation (pixels)
726 | C[1, 2] = -img.shape[0] / 2 # y translation (pixels)
727 |
728 | # Perspective
729 | P = np.eye(3)
730 | P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
731 | P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
732 |
733 | # Rotation and Scale
734 | R = np.eye(3)
735 | a = random.uniform(-degrees, degrees)
736 | # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
737 | s = random.uniform(1 - scale, 1 + scale)
738 | # s = 2 ** random.uniform(-scale, scale)
739 | R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
740 |
741 | # Shear
742 | S = np.eye(3)
743 | S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
744 | S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
745 |
746 | # Translation
747 | T = np.eye(3)
748 | T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
749 | T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
750 |
751 | # Combined rotation matrix
752 | M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
753 | if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
754 | if perspective:
755 | img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114))
756 | else: # affine
757 | img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
758 |
759 | # Visualize
760 | # import matplotlib.pyplot as plt
761 | # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
762 | # ax[0].imshow(img[:, :, ::-1]) # base
763 | # ax[1].imshow(img2[:, :, ::-1]) # warped
764 |
765 | # Transform label coordinates
766 | n = len(targets)
767 | if n:
768 | # warp points
769 | xy = np.ones((n * 4, 3))
770 | xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
771 | xy = xy @ M.T # transform
772 | if perspective:
773 | xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8) # rescale
774 | else: # affine
775 | xy = xy[:, :2].reshape(n, 8)
776 |
777 | # create new boxes
778 | x = xy[:, [0, 2, 4, 6]]
779 | y = xy[:, [1, 3, 5, 7]]
780 | xy = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
781 |
782 | # # apply angle-based reduction of bounding boxes
783 | # radians = a * math.pi / 180
784 | # reduction = max(abs(math.sin(radians)), abs(math.cos(radians))) ** 0.5
785 | # x = (xy[:, 2] + xy[:, 0]) / 2
786 | # y = (xy[:, 3] + xy[:, 1]) / 2
787 | # w = (xy[:, 2] - xy[:, 0]) * reduction
788 | # h = (xy[:, 3] - xy[:, 1]) * reduction
789 | # xy = np.concatenate((x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T
790 |
791 | # clip boxes
792 | xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width)
793 | xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height)
794 |
795 | # filter candidates
796 | i = box_candidates(box1=targets[:, 1:5].T * s, box2=xy.T)
797 | targets = targets[i]
798 | targets[:, 1:5] = xy[i]
799 |
800 | return img, targets
801 |
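# Sanity-check sketch (illustrative): M = T @ S @ R @ P @ C applies, right to
# left, (1) move the image centre to the origin, (2) perspective, (3) rotate
# and scale about the origin, (4) shear, (5) translate into the output canvas.
# With all hyperparameters zeroed, T exactly undoes C and the image passes
# through unchanged:
#
# im = np.full((100, 100, 3), 114, dtype=np.uint8)
# out, _ = random_perspective(im, degrees=0, translate=0, scale=0, shear=0)
# assert (out == im).all()  # identity transform, no warp applied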
802 |
803 | def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.2): # box1(4,n), box2(4,n)
804 | # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
805 | w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
806 | w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
807 | ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) # aspect ratio
808 | return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + 1e-16) > area_thr) & (ar < ar_thr) # candidates
809 |
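# Example (illustrative): a 100x100 box that the warp squashes to 30x3 px
# passes the wh test (both sides > 2), passes the aspect test (ar = 10 < 20),
# but fails the area test (90 / 10000 = 0.009 < 0.2), so it is discarded; the
# thresholds drop boxes pushed mostly out of frame rather than keeping the
# clipped slivers as labels.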
810 |
811 | def cutout(image, labels):
812 | # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
813 | h, w = image.shape[:2]
814 |
815 | def bbox_ioa(box1, box2):
816 | # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2
817 | box2 = box2.transpose()
818 |
819 | # Get the coordinates of bounding boxes
820 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
821 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
822 |
823 | # Intersection area
824 | inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
825 | (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
826 |
827 | # box2 area
828 | box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16
829 |
830 | # Intersection over box2 area
831 | return inter_area / box2_area
832 |
833 | # create random masks
834 | scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
835 | for s in scales:
836 | mask_h = random.randint(1, int(h * s))
837 | mask_w = random.randint(1, int(w * s))
838 |
839 | # box
840 | xmin = max(0, random.randint(0, w) - mask_w // 2)
841 | ymin = max(0, random.randint(0, h) - mask_h // 2)
842 | xmax = min(w, xmin + mask_w)
843 | ymax = min(h, ymin + mask_h)
844 |
845 | # apply random color mask
846 | image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
847 |
848 | # return unobscured labels
849 | if len(labels) and s > 0.03:
850 | box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
851 | ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
852 | labels = labels[ioa < 0.60] # remove >60% obscured labels
853 |
854 | return labels
855 |
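# Note (illustrative): bbox_ioa is intersection over the *label* (box2) area,
# not IoU, so a small label fully covered by a large mask scores 1.0 and is
# dropped, while a large label merely nicked by a small mask survives. E.g. a
# mask (0, 0, 50, 50) over a label (40, 40, 60, 60): intersection 10*10 = 100,
# label area 20*20 = 400, ioa = 0.25 < 0.60 -> the label is kept.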
856 |
857 | def reduce_img_size(path='path/images', img_size=1024): # from utils.datasets import *; reduce_img_size()
858 | # creates a new ./images_reduced folder with reduced size images of maximum size img_size
859 | path_new = path + '_reduced' # reduced images path
860 | create_folder(path_new)
861 | for f in tqdm(glob.glob('%s/*.*' % path)):
862 | try:
863 | img = cv2.imread(f)
864 | h, w = img.shape[:2]
865 | r = img_size / max(h, w) # size ratio
866 | if r < 1.0:
867 | img = cv2.resize(img, (int(w * r), int(h * r)), interpolation=cv2.INTER_AREA) # _LINEAR fastest
868 | fnew = f.replace(path, path_new) # .replace(Path(f).suffix, '.jpg')
869 | cv2.imwrite(fnew, img)
870 |         except Exception as e:
871 |             print('WARNING: image failure %s: %s' % (f, e))
872 |
873 |
874 | def recursive_dataset2bmp(dataset='path/dataset_bmp'): # from utils.datasets import *; recursive_dataset2bmp()
875 | # Converts dataset to bmp (for faster training)
876 | formats = [x.lower() for x in img_formats] + [x.upper() for x in img_formats]
877 | for a, b, files in os.walk(dataset):
878 | for file in tqdm(files, desc=a):
879 | p = a + '/' + file
880 | s = Path(file).suffix
881 | if s == '.txt': # replace text
882 | with open(p, 'r') as f:
883 | lines = f.read()
884 | for f in formats:
885 | lines = lines.replace(f, '.bmp')
886 | with open(p, 'w') as f:
887 | f.write(lines)
888 | elif s in formats: # replace image
889 | cv2.imwrite(p.replace(s, '.bmp'), cv2.imread(p))
890 | if s != '.bmp':
891 |                     os.remove(p)  # portable alternative to shelling out to rm
892 |
893 |
894 | def imagelist2folder(path='path/images.txt'): # from utils.datasets import *; imagelist2folder()
895 | # Copies all the images in a text file (list of images) into a folder
896 | create_folder(path[:-4])
897 | with open(path, 'r') as f:
898 | for line in f.read().splitlines():
899 |             shutil.copy(line, path[:-4])  # portable alternative to shelling out to cp
900 | print(line)
901 |
902 |
903 | def create_folder(path='./new'):
904 | # Create folder
905 | if os.path.exists(path):
906 | shutil.rmtree(path) # delete output folder
907 | os.makedirs(path) # make new output folder
908 |
--------------------------------------------------------------------------------
/utils/google_utils.py:
--------------------------------------------------------------------------------
1 | # This file contains google utils: https://cloud.google.com/storage/docs/reference/libraries
2 | # pip install --upgrade google-cloud-storage
3 | # from google.cloud import storage
4 |
5 | import os
6 | import platform
7 | import time
8 | from pathlib import Path
9 |
10 |
11 | def attempt_download(weights):
12 | # Attempt to download pretrained weights if not found locally
13 | weights = weights.strip().replace("'", '')
14 | msg = weights + ' missing'
15 |
16 | r = 1 # return
17 | if len(weights) > 0 and not os.path.isfile(weights):
18 |         d = {}  # filename -> Google Drive file id map (empty here; must be a dict for the d[file] lookup below)
19 |
20 |
21 | file = Path(weights).name
22 | if file in d:
23 | r = gdrive_download(id=d[file], name=weights)
24 |
25 | if not (r == 0 and os.path.exists(weights) and os.path.getsize(weights) > 1E6): # weights exist and > 1MB
26 | os.remove(weights) if os.path.exists(weights) else None # remove partial downloads
27 |             s = ''  # fallback download command (left empty in this copy)
28 | r = os.system(s) # execute, capture return values
29 |
30 | # Error check
31 | if not (r == 0 and os.path.exists(weights) and os.path.getsize(weights) > 1E6): # weights exist and > 1MB
32 | os.remove(weights) if os.path.exists(weights) else None # remove partial downloads
33 | raise Exception(msg)
34 |
35 |
36 | def gdrive_download(id='1n_oKgR81BJtqk75b00eAjdv03qVCQn2f', name='coco128.zip'):
37 | # Downloads a file from Google Drive, accepting presented query
38 | # from utils.google_utils import *; gdrive_download()
39 | t = time.time()
40 |
41 | print('Downloading https://drive.google.com/uc?export=download&id=%s as %s... ' % (id, name), end='')
42 | os.remove(name) if os.path.exists(name) else None # remove existing
43 | os.remove('cookie') if os.path.exists('cookie') else None
44 |
45 | # Attempt file download
46 | out = "NUL" if platform.system() == "Windows" else "/dev/null"
47 | os.system('curl -c ./cookie -s -L "drive.google.com/uc?export=download&id=%s" > %s ' % (id, out))
48 | if os.path.exists('cookie'): # large file
49 | s = 'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm=%s&id=%s" -o %s' % (get_token(), id, name)
50 | else: # small file
51 | s = 'curl -s -L -o %s "drive.google.com/uc?export=download&id=%s"' % (name, id)
52 | r = os.system(s) # execute, capture return values
53 | os.remove('cookie') if os.path.exists('cookie') else None
54 |
55 | # Error check
56 | if r != 0:
57 | os.remove(name) if os.path.exists(name) else None # remove partial
58 | print('Download error ') # raise Exception('Download error')
59 | return r
60 |
61 | # Unzip if archive
62 | if name.endswith('.zip'):
63 | print('unzipping... ', end='')
64 | os.system('unzip -q %s' % name) # unzip
65 | os.remove(name) # remove zip to free space
66 |
67 | print('Done (%.1fs)' % (time.time() - t))
68 | return r
69 |
70 |
71 | def get_token(cookie="./cookie"):
72 | with open(cookie) as f:
73 | for line in f:
74 | if "download" in line:
75 | return line.split()[-1]
76 | return ""
77 |
--------------------------------------------------------------------------------
/utils/torch_utils.py:
--------------------------------------------------------------------------------
1 | import math
2 | import os
3 | import time
4 | from copy import deepcopy
5 |
6 | import torch
7 | import torch.backends.cudnn as cudnn
8 | import torch.nn as nn
9 | import torch.nn.functional as F
10 | import torchvision.models as models
11 |
12 |
13 | def init_seeds(seed=0):
14 | torch.manual_seed(seed)
15 |
16 | # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
17 | if seed == 0: # slower, more reproducible
18 | cudnn.deterministic = True
19 | cudnn.benchmark = False
20 | else: # faster, less reproducible
21 | cudnn.deterministic = False
22 | cudnn.benchmark = True
23 |
24 |
25 | def select_device(device='', batch_size=None):
26 | # device = 'cpu' or '0' or '0,1,2,3'
27 | cpu_request = device.lower() == 'cpu'
28 | if device and not cpu_request: # if device requested other than 'cpu'
29 | os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
30 |         assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device  # check availability
31 |
32 | cuda = False if cpu_request else torch.cuda.is_available()
33 | if cuda:
34 | c = 1024 ** 2 # bytes to MB
35 | ng = torch.cuda.device_count()
36 | if ng > 1 and batch_size: # check that batch_size is compatible with device_count
37 | assert batch_size % ng == 0, 'batch-size %g not multiple of GPU count %g' % (batch_size, ng)
38 | x = [torch.cuda.get_device_properties(i) for i in range(ng)]
39 | s = 'Using CUDA '
40 | for i in range(0, ng):
41 | if i == 1:
42 | s = ' ' * len(s)
43 | print("%sdevice%g _CudaDeviceProperties(name='%s', total_memory=%dMB)" %
44 | (s, i, x[i].name, x[i].total_memory / c))
45 | else:
46 | print('Using CPU')
47 |
48 | print('') # skip a line
49 | return torch.device('cuda:0' if cuda else 'cpu')
50 |
51 |
52 | def time_synchronized():
53 | torch.cuda.synchronize() if torch.cuda.is_available() else None
54 | return time.time()
55 |
56 |
57 | def is_parallel(model):
58 | return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
59 |
60 |
61 | def intersect_dicts(da, db, exclude=()):
62 | # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
63 | return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
64 |
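# Usage sketch (illustrative; 'ckpt' and the 'anchor' exclude key are
# assumptions): typical transfer-learning load, keeping only checkpoint
# tensors whose name and shape match the new model:
#
# state_dict = intersect_dicts(ckpt['model'].float().state_dict(),
#                              model.state_dict(), exclude=['anchor'])
# model.load_state_dict(state_dict, strict=False)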
65 |
66 | def initialize_weights(model):
67 | for m in model.modules():
68 | t = type(m)
69 | if t is nn.Conv2d:
70 | pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
71 | elif t is nn.BatchNorm2d:
72 | m.eps = 1e-3
73 | m.momentum = 0.03
74 | elif t in [nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
75 | m.inplace = True
76 |
77 |
78 | def find_modules(model, mclass=nn.Conv2d):
79 | # Finds layer indices matching module class 'mclass'
80 | return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
81 |
82 |
83 | def sparsity(model):
84 | # Return global model sparsity
85 | a, b = 0., 0.
86 | for p in model.parameters():
87 | a += p.numel()
88 | b += (p == 0).sum()
89 | return b / a
90 |
91 |
92 | def prune(model, amount=0.3):
93 | # Prune model to requested global sparsity
94 | import torch.nn.utils.prune as prune
95 | print('Pruning model... ', end='')
96 | for name, m in model.named_modules():
97 | if isinstance(m, nn.Conv2d):
98 | prune.l1_unstructured(m, name='weight', amount=amount) # prune
99 | prune.remove(m, 'weight') # make permanent
100 | print(' %.3g global sparsity' % sparsity(model))
101 |
102 |
103 | def fuse_conv_and_bn(conv, bn):
104 | # https://tehnokv.com/posts/fusing-batchnorm-and-conv/
105 | with torch.no_grad():
106 | # init
107 | fusedconv = nn.Conv2d(conv.in_channels,
108 | conv.out_channels,
109 | kernel_size=conv.kernel_size,
110 | stride=conv.stride,
111 | padding=conv.padding,
112 | bias=True).to(conv.weight.device)
113 |
114 | # prepare filters
115 | w_conv = conv.weight.clone().view(conv.out_channels, -1)
116 | w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
117 | fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size()))
118 |
119 | # prepare spatial bias
120 | b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
121 | b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
122 | fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
123 |
124 | return fusedconv
125 |
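# Sanity-check sketch (illustrative): the fused layer should reproduce the
# conv -> bn pair to floating-point tolerance once the BN is in eval mode so
# that its running statistics, not batch statistics, are used:
#
# conv = nn.Conv2d(8, 16, 3, padding=1, bias=False)
# bn = nn.BatchNorm2d(16).eval()
# x = torch.randn(1, 8, 32, 32)
# assert torch.allclose(bn(conv(x)), fuse_conv_and_bn(conv, bn)(x), atol=1e-5)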
126 |
127 | def model_info(model, verbose=False):
128 |     # Prints a line-by-line description of a PyTorch model
129 | n_p = sum(x.numel() for x in model.parameters()) # number parameters
130 | n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
131 | if verbose:
132 | print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
133 | for i, (name, p) in enumerate(model.named_parameters()):
134 | name = name.replace('module_list.', '')
135 | print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
136 | (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
137 |
138 | try: # FLOPS
139 | from thop import profile
140 |         flops = profile(deepcopy(model), inputs=(torch.zeros(1, 3, 64, 64),), verbose=False)[0] / 1E9 * 2  # GFLOPS (2x MACs)
141 |         fs = ', %.1f GFLOPS' % (flops * 100)  # scale 64x64 profile to 640x640 (100x the pixels)
142 |     except Exception:  # thop unavailable or profiling failed
143 | fs = ''
144 |
145 | print('Model Summary: %g layers, %g parameters, %g gradients%s' % (len(list(model.parameters())), n_p, n_g, fs))
146 |
147 |
148 | def load_classifier(name='resnet101', n=2):
149 | # Loads a pretrained model reshaped to n-class output
150 | model = models.__dict__[name](pretrained=True)
151 |
152 | # Display model properties
153 | input_size = [3, 224, 224]
154 | input_space = 'RGB'
155 | input_range = [0, 1]
156 | mean = [0.485, 0.456, 0.406]
157 | std = [0.229, 0.224, 0.225]
158 |     for x in ['input_size', 'input_space', 'input_range', 'mean', 'std']:
159 |         print(x + ' =', eval(x))  # print each property by name
160 |
161 | # Reshape output to n classes
162 | filters = model.fc.weight.shape[1]
163 | model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
164 | model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
165 | model.fc.out_features = n
166 | return model
167 |
168 |
169 | def scale_img(img, ratio=1.0, same_shape=False): # img(16,3,256,416), r=ratio
170 | # scales img(bs,3,y,x) by ratio
171 | if ratio == 1.0:
172 | return img
173 | else:
174 | h, w = img.shape[2:]
175 | s = (int(h * ratio), int(w * ratio)) # new size
176 | img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
177 | if not same_shape: # pad/crop img
178 |             gs = 128  # (pixels) grid size (64 or 32 for smaller-stride models)
179 | h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
180 | return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
181 |
182 |
183 | def copy_attr(a, b, include=(), exclude=()):
184 | # Copy attributes from b to a, options to only include [...] and to exclude [...]
185 | for k, v in b.__dict__.items():
186 | if (len(include) and k not in include) or k.startswith('_') or k in exclude:
187 | continue
188 | else:
189 | setattr(a, k, v)
190 |
191 |
192 | class ModelEMA:
193 | """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
194 | Keep a moving average of everything in the model state_dict (parameters and buffers).
195 | This is intended to allow functionality like
196 | https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
197 | A smoothed version of the weights is necessary for some training schemes to perform well.
198 | This class is sensitive where it is initialized in the sequence of model init,
199 | GPU assignment and distributed training wrappers.
200 | """
201 |
202 | def __init__(self, model, decay=0.9999, updates=0):
203 | # Create EMA
204 | self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
205 | # if next(model.parameters()).device.type != 'cpu':
206 | # self.ema.half() # FP16 EMA
207 | self.updates = updates # number of EMA updates
208 | self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
209 | for p in self.ema.parameters():
210 | p.requires_grad_(False)
211 |
212 | def update(self, model):
213 | # Update EMA parameters
214 | with torch.no_grad():
215 | self.updates += 1
216 | d = self.decay(self.updates)
217 |
218 | msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
219 | for k, v in self.ema.state_dict().items():
220 | if v.dtype.is_floating_point:
221 | v *= d
222 | v += (1. - d) * msd[k].detach()
223 |
224 | def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
225 | # Update EMA attributes
226 | copy_attr(self.ema, model, include, exclude)
227 |
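# Usage sketch (illustrative; compute_loss/test are stand-ins for the repo's
# training utilities): update after every optimizer step and validate or
# checkpoint with the smoothed copy in ema.ema:
#
# ema = ModelEMA(model)
# for epoch in range(epochs):
#     for imgs, targets, paths, _ in dataloader:
#         loss = compute_loss(model(imgs), targets)
#         loss.backward(); optimizer.step(); optimizer.zero_grad()
#         ema.update(model)
#     ema.update_attr(model)
#     results = test(model=ema.ema)  # evaluate the EMA weights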
--------------------------------------------------------------------------------