├── .gitignore ├── LICENSE ├── README.md ├── cvrl.py ├── models.py ├── soft_actor_critic.py ├── tools.py └── wrappers.py /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | *.py[cod] 3 | *.egg-info 4 | dist/ 5 | logdir/* 6 | MUJOCO_LOG.TXT 7 | .vscode/ 8 | *.pkl 9 | \.idea/ -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. 
Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 
122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 
180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 
246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. 
A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 
360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 
421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 
476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. 
You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 
583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | <one line to give the program's name and a brief idea of what it does.> 635 | Copyright (C) <year> <name of author> 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see <https://www.gnu.org/licenses/>. 649 | 650 | Also add information on how to contact you by electronic and paper mail. 
651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | <program> Copyright (C) <year> <name of author> 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | <https://www.gnu.org/licenses/>. 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | <https://www.gnu.org/licenses/why-not-lgpl.html>. 675 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # CVRL 2 | This repo contains the TensorFlow 2 implementation of the CoRL 2020 paper 3 | 4 | Xiao Ma, Siwei Chen, David Hsu, Wee Sun Lee: Contrastive Variational Model-Based Reinforcement Learning for Complex Observations. In Proc. 4th Conference on Robot Learning. [[paper]](https://arxiv.org/abs/2008.02430) 5 | 6 | For visualizations, please visit our [project page](https://sites.google.com/view/cvrl/home). Our talk is publicly available [here](https://youtu.be/koXGdHR6Nd4). 7 | 8 | ## Setup 9 | ``` 10 | pip3 install --user tensorflow-gpu==2.2.0 11 | pip3 install --user tensorflow_probability 12 | pip3 install --user git+https://github.com/deepmind/dm_control.git 13 | pip3 install --user pandas 14 | pip3 install --user matplotlib 15 | ``` 16 | 17 | You will need a [MuJoCo license](https://www.roboti.us/license.html) to run the MuJoCo tasks. 18 | 19 | To play with the natural MuJoCo tasks, download the natural MuJoCo background dataset from [here](https://drive.google.com/drive/folders/1r7i1PYY_Yhfhu7T8hlhi2DJtaeD6lIvp?usp=sharing) and place it at the root of this repository. 20 | 21 | 22 | ## Train the agent 23 | 24 | ``` 25 | python3 cvrl.py --logdir ./logdir/dmc_walker_walk/natural_walker_walk/1 --task dmc_walker_walk --natural True --obs_model contrastive --use_dreamer True --use_sac True --trajectory_opt True 26 | ``` 27 | 28 | To view the training logs and execution videos, please use 29 | ``` 30 | tensorboard --logdir ./logdir --bind_all 31 | ``` 32 | 33 | ## Cite CVRL 34 | 35 | If you find this repo useful, please consider citing our paper 36 | 37 | ```bibtex 38 | @inproceedings{ 39 | ma2020contrastive, 40 | title={Contrastive Variational Model-Based Reinforcement Learning for Complex Observations}, 41 | author={Xiao Ma and Siwei Chen and David Hsu and Wee Sun Lee}, 42 | booktitle={Proceedings of the 4th Conference on Robot Learning}, 43 | year={2020} 44 | } 45 | ``` 46 | 47 | ## Reference 48 | The code borrows heavily from Danijar Hafner's Dreamer [implementation](https://github.com/danijar/dreamer). 
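The contrastive observation model is this repo's key departure from Dreamer: instead of reconstructing pixels, the latent state is scored against observation embeddings across the batch. The actual `ContrastiveObsModel` lives in `models.py` (truncated in this snapshot), so the following is only a minimal sketch of the standard InfoNCE formulation such a model typically uses; the class name `ContrastiveSketch`, the layer sizes, and the tensor shapes are illustrative assumptions, not the repository's exact code.

```python
import tensorflow as tf

class ContrastiveSketch(tf.Module):
    # Illustrative InfoNCE-style stand-in for models.ContrastiveObsModel:
    # each latent feature should score highest against its own
    # observation embedding within the batch.

    def __init__(self, out=128):
        super().__init__()
        self._proj_s = tf.keras.layers.Dense(out)  # projects latent features
        self._proj_o = tf.keras.layers.Dense(out)  # projects obs embeddings

    def __call__(self, feat, embed):
        # feat, embed: [batch, dim], with time folded into the batch.
        z_s = self._proj_s(feat)                        # [B, K]
        z_o = self._proj_o(embed)                       # [B, K]
        logits = tf.matmul(z_s, z_o, transpose_b=True)  # [B, B] similarities
        labels = tf.range(tf.shape(logits)[0])          # positives on the diagonal
        nce = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits)
        # The negated cross-entropy acts as the log-likelihood surrogate that
        # replaces image_pred.log_prob(...) in the model loss of cvrl.py.
        return -nce
```

Because this bound only asks the model to tell observations apart rather than to reconstruct them, it remains trainable under the `--natural` video backgrounds, where pixel-perfect reconstruction is difficult.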
49 | -------------------------------------------------------------------------------- /cvrl.py: -------------------------------------------------------------------------------- 1 | import wrappers 2 | import tools 3 | import models 4 | from tensorflow_probability import distributions as tfd 5 | from tensorflow.keras.mixed_precision import experimental as prec 6 | import tensorflow as tf 7 | import numpy as np 8 | import argparse 9 | import collections 10 | import functools 11 | import json 12 | import os 13 | import pathlib 14 | import sys 15 | import time 16 | import soft_actor_critic 17 | 18 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 19 | 20 | # enable headless training on servers for mujoco 21 | os.environ['MUJOCO_GL'] = 'egl' 22 | 23 | tf.executing_eagerly() 24 | 25 | tf.get_logger().setLevel('ERROR') 26 | 27 | 28 | sys.path.append(str(pathlib.Path(__file__).parent)) 29 | 30 | 31 | def define_config(): 32 | config = tools.AttrDict() 33 | # General. 34 | config.logdir = pathlib.Path('.') 35 | config.seed = 0 36 | config.steps = 5e6 37 | config.eval_every = 1e4 38 | config.log_every = 1e3 39 | config.log_scalars = True 40 | config.log_images = True 41 | config.gpu_growth = True 42 | config.precision = 16 43 | # Environment. 44 | config.task = 'dmc_walker_walk' 45 | config.envs = 1 46 | config.parallel = 'none' 47 | config.action_repeat = 2 48 | config.time_limit = 1000 49 | config.prefill = 5000 50 | config.eval_noise = 0.0 51 | config.clip_rewards = 'none' 52 | # Model. 53 | config.deter_size = 200 54 | config.stoch_size = 30 55 | config.num_units = 400 56 | config.dense_act = 'elu' 57 | config.cnn_act = 'relu' 58 | config.cnn_depth = 32 59 | config.pcont = False 60 | config.free_nats = 3.0 61 | config.kl_scale = 1.0 62 | config.pcont_scale = 10.0 63 | config.weight_decay = 0.0 64 | config.weight_decay_pattern = r'.*' 65 | # Training. 66 | config.batch_size = 50 67 | config.batch_length = 50 68 | config.train_every = 1000 69 | config.train_steps = 100 70 | config.pretrain = 100 71 | config.model_lr = 6e-4 72 | config.value_lr = 8e-5 73 | config.actor_lr = 8e-5 74 | config.grad_clip = 100.0 75 | config.dataset_balance = False 76 | # Behavior. 
77 | config.discount = 0.99 78 | config.disclam = 0.95 79 | config.horizon = 15 80 | config.action_dist = 'tanh_normal' 81 | config.action_init_std = 5.0 82 | config.expl = 'additive_gaussian' 83 | config.expl_amount = 0.3 84 | config.expl_decay = 0.0 85 | config.expl_min = 0.0 86 | config.log_imgs = False 87 | 88 | # natural or not 89 | config.natural = False 90 | 91 | # obs model 92 | config.obs_model = 'contrastive' 93 | 94 | # SAC settings 95 | config.num_Qs = 2 96 | 97 | # use dreamer and SAC for hybrid actor-critic training 98 | config.use_sac = True 99 | config.use_dreamer = True 100 | 101 | # use trajectory optimization 102 | config.trajectory_opt = True 103 | config.traj_opt_lr = 0.003 104 | config.num_samples = 20 105 | return config 106 | 107 | 108 | class CVRL(tools.Module): 109 | 110 | def __init__(self, config, datadir, actspace, writer): 111 | self._c = config 112 | self._actspace = actspace 113 | self._actdim = actspace.n if hasattr( 114 | actspace, 'n') else actspace.shape[0] 115 | self._writer = writer 116 | self._random = np.random.RandomState(config.seed) 117 | with tf.device('cpu:0'): 118 | self._step = tf.Variable(count_steps( 119 | datadir, config), dtype=tf.int64) 120 | self._should_pretrain = tools.Once() 121 | self._should_train = tools.Every(config.train_every) 122 | self._should_log = tools.Every(config.log_every) 123 | self._last_log = None 124 | self._last_time = time.time() 125 | self._metrics = collections.defaultdict(tf.metrics.Mean) 126 | self._metrics['expl_amount'] # Create variable for checkpoint. 127 | self._float = prec.global_policy().compute_dtype 128 | self._dataset = iter(load_dataset(datadir, self._c)) 129 | self._build_model() 130 | 131 | def __call__(self, obs, reset, state=None, training=True): 132 | step = self._step.numpy().item() 133 | tf.summary.experimental.set_step(step) 134 | if state is not None and reset.any(): 135 | mask = tf.cast(1 - reset, self._float)[:, None] 136 | state = tf.nest.map_structure(lambda x: x * mask, state) 137 | if self._should_train(step): 138 | log = self._should_log(step) 139 | n = self._c.pretrain if self._should_pretrain() else self._c.train_steps 140 | print(f'Training for {n} steps.') 141 | # with self._strategy.scope(): 142 | for train_step in range(n): 143 | log_images = self._c.log_images and log and train_step == 0 144 | self.train(next(self._dataset), log_images) 145 | if log: 146 | self._write_summaries() 147 | action, state = self.policy(obs, state, training) 148 | if training: 149 | self._step.assign_add(len(reset) * self._c.action_repeat) 150 | return action, state 151 | 152 | @tf.function 153 | def policy(self, obs, state, training): 154 | if state is None: 155 | latent = self._dynamics.initial(len(obs['image'])) 156 | action = tf.zeros((len(obs['image']), self._actdim), self._float) 157 | else: 158 | latent, action = state 159 | embed = self._encode(preprocess(obs, self._c)) 160 | latent, _ = self._dynamics.obs_step(latent, action, embed) 161 | feat = self._dynamics.get_feat(latent) 162 | 163 | if self._c.trajectory_opt: 164 | action = self._trajectory_optimization(latent) 165 | else: 166 | if training: 167 | action = self._actor(feat).sample() 168 | else: 169 | action = self._actor(feat).mode() 170 | 171 | action = self._exploration(action, training) 172 | state = (latent, action) 173 | return action, state 174 | 175 | def load(self, filename): 176 | super().load(filename) 177 | self._should_pretrain() 178 | 179 | @tf.function() 180 | def train(self, data, log_images=False): 181 | 
self._train(data, log_images) 182 | 183 | def _train(self, data, log_images): 184 | with tf.GradientTape() as model_tape: 185 | embed = self._encode(data) 186 | post, prior = self._dynamics.observe(embed, data['action']) 187 | feat = self._dynamics.get_feat(post) 188 | reward_pred = self._reward(feat) 189 | likes = tools.AttrDict() 190 | likes.reward = tf.reduce_mean(reward_pred.log_prob(data['reward'])) 191 | 192 | # reconstruct observations; the reconstruction likelihood enters the loss only for the generative observation model (the decoder is also used for image summaries) 193 | image_pred = self._decode(feat) 194 | # compute the contrastive loss directly in CVRL 195 | cont_loss = self._contrastive(feat, embed) 196 | 197 | # the contrastive / generative implementation of the observation model p(o|s) 198 | if self._c.obs_model == 'generative': 199 | likes.image = tf.reduce_mean(image_pred.log_prob(data['image'])) 200 | elif self._c.obs_model == 'contrastive': 201 | likes.image = tf.reduce_mean(cont_loss) 202 | 203 | if self._c.pcont: 204 | pcont_pred = self._pcont(feat) 205 | pcont_target = self._c.discount * data['discount'] 206 | likes.pcont = tf.reduce_mean(pcont_pred.log_prob(pcont_target)) 207 | likes.pcont *= self._c.pcont_scale 208 | 209 | prior_dist = self._dynamics.get_dist(prior) 210 | post_dist = self._dynamics.get_dist(post) 211 | div = tf.reduce_mean(tfd.kl_divergence(post_dist, prior_dist)) 212 | div = tf.maximum(div, self._c.free_nats) 213 | model_loss = self._c.kl_scale * div - sum(likes.values()) 214 | 215 | assert self._c.use_dreamer or self._c.use_sac 216 | 217 | if self._c.use_dreamer: 218 | with tf.GradientTape() as actor_tape: 219 | imag_feat = self._imagine_ahead(post) 220 | reward = self._reward(imag_feat).mode() 221 | if self._c.pcont: 222 | pcont = self._pcont(imag_feat).mean() 223 | else: 224 | pcont = self._c.discount * tf.ones_like(reward) 225 | value = self._value(imag_feat).mode() 226 | returns = tools.lambda_return( 227 | reward[:-1], value[:-1], pcont[:-1], 228 | bootstrap=value[-1], lambda_=self._c.disclam, axis=0) 229 | discount = tf.stop_gradient(tf.math.cumprod(tf.concat( 230 | [tf.ones_like(pcont[:1]), pcont[:-2]], 0), 0)) 231 | actor_loss = -tf.reduce_mean(discount * returns) 232 | 233 | with tf.GradientTape() as value_tape: 234 | value_pred = self._value(imag_feat)[:-1] 235 | target = tf.stop_gradient(returns) 236 | value_loss = - \ 237 | tf.reduce_mean(discount * value_pred.log_prob(target)) 238 | 239 | actor_norm = self._actor_opt(actor_tape, actor_loss) 240 | value_norm = self._value_opt(value_tape, value_loss) 241 | else: 242 | actor_norm = actor_loss = 0 243 | value_norm = value_loss = 0 244 | 245 | model_norm = self._model_opt(model_tape, model_loss) 246 | states = tf.concat([post['stoch'], post['deter']], axis=-1) 247 | rewards = data['reward'] 248 | dones = tf.zeros_like(rewards) 249 | actions = data['action'] 250 | 251 | # if SAC is enabled, also run the SAC training step 252 | if self._c.use_sac: 253 | self._sac._do_training(self._step, states, actions, rewards, dones) 254 | 255 | if tf.distribute.get_replica_context().replica_id_in_sync_group == 0: 256 | if self._c.log_scalars: 257 | self._scalar_summaries( 258 | data, feat, prior_dist, post_dist, likes, div, 259 | model_loss, value_loss, actor_loss, model_norm, value_norm, 260 | actor_norm) 261 | if tf.equal(log_images, True) and self._c.log_imgs: 262 | self._image_summaries(data, embed, image_pred) 263 | 264 | def _build_model(self): 265 | acts = dict( 266 | elu=tf.nn.elu, relu=tf.nn.relu, swish=tf.nn.swish, 267 | leaky_relu=tf.nn.leaky_relu) 268 | cnn_act = 
acts[self._c.cnn_act] 269 | act = acts[self._c.dense_act] 270 | self._encode = models.ConvEncoder(self._c.cnn_depth, cnn_act) 271 | self._dynamics = models.RSSM( 272 | self._c.stoch_size, self._c.deter_size, self._c.deter_size) 273 | self._decode = models.ConvDecoder(self._c.cnn_depth, cnn_act) 274 | self._contrastive = models.ContrastiveObsModel(self._c.deter_size, 275 | self._c.deter_size * 2) 276 | self._reward = models.DenseDecoder((), 2, self._c.num_units, act=act) 277 | if self._c.pcont: 278 | self._pcont = models.DenseDecoder( 279 | (), 3, self._c.num_units, 'binary', act=act) 280 | self._value = models.DenseDecoder((), 3, self._c.num_units, act=act) 281 | self._Qs = [models.QNetwork(3, self._c.num_units, act=act) for _ in range(self._c.num_Qs)] 282 | self._actor = models.ActionDecoder( 283 | self._actdim, 4, self._c.num_units, self._c.action_dist, 284 | init_std=self._c.action_init_std, act=act) 285 | model_modules = [self._encode, self._dynamics, 286 | self._contrastive, self._reward, self._decode] 287 | if self._c.pcont: 288 | model_modules.append(self._pcont) 289 | Optimizer = functools.partial( 290 | tools.Adam, wd=self._c.weight_decay, clip=self._c.grad_clip, 291 | wdpattern=self._c.weight_decay_pattern) 292 | self._model_opt = Optimizer('model', model_modules, self._c.model_lr) 293 | self._value_opt = Optimizer('value', [self._value], self._c.value_lr) 294 | self._actor_opt = Optimizer('actor', [self._actor], self._c.actor_lr) 295 | self._q_opts = [Optimizer('qs', [qnet], self._c.value_lr) for qnet in self._Qs] 296 | 297 | if self._c.use_sac: 298 | self._sac = soft_actor_critic.SAC(self._actor, self._Qs, self._actor_opt, self._q_opts, self._actspace) 299 | 300 | self.train(next(self._dataset)) 301 | 302 | def _exploration(self, action, training): 303 | if training: 304 | amount = self._c.expl_amount 305 | if self._c.expl_decay: 306 | amount *= 0.5 ** (tf.cast(self._step, 307 | tf.float32) / self._c.expl_decay) 308 | if self._c.expl_min: 309 | amount = tf.maximum(self._c.expl_min, amount) 310 | self._metrics['expl_amount'].update_state(amount) 311 | elif self._c.eval_noise: 312 | amount = self._c.eval_noise 313 | else: 314 | return action 315 | if self._c.expl == 'additive_gaussian': 316 | return tf.clip_by_value(tfd.Normal(action, amount).sample(), -1, 1) 317 | if self._c.expl == 'completely_random': 318 | return tf.random.uniform(action.shape, -1, 1) 319 | if self._c.expl == 'epsilon_greedy': 320 | indices = tfd.Categorical(0 * action).sample() 321 | return tf.where( 322 | tf.random.uniform(action.shape[:1], 0, 1) < amount, 323 | tf.one_hot(indices, action.shape[-1], dtype=self._float), 324 | action) 325 | raise NotImplementedError(self._c.expl) 326 | 327 | def _imagine_ahead(self, post): 328 | if self._c.pcont: # Last step could be terminal. 
329 | post = {k: v[:, :-1] for k, v in post.items()} 330 | 331 | def flatten(x): return tf.reshape(x, [-1] + list(x.shape[2:])) 332 | start = {k: flatten(v) for k, v in post.items()} 333 | 334 | def policy(state): return self._actor( 335 | tf.stop_gradient(self._dynamics.get_feat(state))).sample() 336 | states = tools.static_scan( 337 | lambda prev, _: self._dynamics.img_step(prev, policy(prev)), 338 | tf.range(self._c.horizon), start) 339 | imag_feat = self._dynamics.get_feat(states) 340 | return imag_feat 341 | 342 | def _trajectory_optimization(self, post): 343 | def policy(state): return self._actor( 344 | tf.stop_gradient(self._dynamics.get_feat(state))).sample() 345 | 346 | def repeat(x): 347 | return tf.repeat(x, self._c.num_samples, axis=0) 348 | 349 | states, actions = tools.static_scan_action( 350 | lambda prev, action, _: self._dynamics.img_step(prev, action), 351 | lambda prev: policy(prev), 352 | tf.range(self._c.horizon), post) 353 | 354 | feat = self._dynamics.get_feat(states) 355 | reward = self._reward(feat).mode() 356 | 357 | if self._c.pcont: 358 | pcont = self._pcont(feat).mean() 359 | else: 360 | pcont = self._c.discount * tf.ones_like(reward) 361 | value = self._value(feat).mode() 362 | 363 | # compute the accumulated reward 364 | returns = tools.lambda_return( 365 | reward[:-1], value[:-1], pcont[:-1], 366 | bootstrap=value[-1], lambda_=self._c.disclam, axis=0) 367 | 368 | accumulated_reward = returns[0, 0] 369 | 370 | # since the reward and latent dynamics are fully differentiable, we can backprop the gradients to update the actions 371 | grad = tf.gradients(accumulated_reward, actions)[0] 372 | act = actions + grad * self._c.traj_opt_lr 373 | 374 | return act 375 | 376 | 377 | def _scalar_summaries( 378 | self, data, feat, prior_dist, post_dist, likes, div, 379 | model_loss, value_loss, actor_loss, model_norm, value_norm, 380 | actor_norm): 381 | self._metrics['model_grad_norm'].update_state(model_norm) 382 | self._metrics['value_grad_norm'].update_state(value_norm) 383 | self._metrics['actor_grad_norm'].update_state(actor_norm) 384 | self._metrics['prior_ent'].update_state(prior_dist.entropy()) 385 | self._metrics['post_ent'].update_state(post_dist.entropy()) 386 | for name, logprob in likes.items(): 387 | self._metrics[name + '_loss'].update_state(-logprob) 388 | self._metrics['div'].update_state(div) 389 | self._metrics['model_loss'].update_state(model_loss) 390 | self._metrics['value_loss'].update_state(value_loss) 391 | self._metrics['actor_loss'].update_state(actor_loss) 392 | self._metrics['action_ent'].update_state(self._actor(feat).entropy()) 393 | 394 | def _image_summaries(self, data, embed, image_pred): 395 | truth = data['image'][:6] + 0.5 396 | recon = image_pred.mode()[:6] 397 | init, _ = self._dynamics.observe(embed[:6, :5], data['action'][:6, :5]) 398 | init = {k: v[:, -1] for k, v in init.items()} 399 | prior = self._dynamics.imagine(data['action'][:6, 5:], init) 400 | openl = self._decode(self._dynamics.get_feat(prior)).mode() 401 | model = tf.concat([recon[:, :5] + 0.5, openl + 0.5], 1) 402 | error = (model - truth + 1) / 2 403 | openl = tf.concat([truth, model, error], 2) 404 | tools.graph_summary( 405 | self._writer, tools.video_summary, 'agent/openl', openl) 406 | 407 | def _write_summaries(self): 408 | step = int(self._step.numpy()) 409 | metrics = [(k, float(v.result())) for k, v in self._metrics.items()] 410 | if self._last_log is not None: 411 | duration = time.time() - self._last_time 412 | self._last_time += duration 413 | 
metrics.append(('fps', (step - self._last_log) / duration)) 414 | self._last_log = step 415 | [m.reset_states() for m in self._metrics.values()] 416 | with (self._c.logdir / 'metrics.jsonl').open('a') as f: 417 | f.write(json.dumps({'step': step, **dict(metrics)}) + '\n') 418 | [tf.summary.scalar('agent/' + k, m) for k, m in metrics] 419 | print(f'[{step}]', ' / '.join(f'{k} {v:.1f}' for k, v in metrics)) 420 | self._writer.flush() 421 | 422 | 423 | def preprocess(obs, config): 424 | dtype = prec.global_policy().compute_dtype 425 | obs = obs.copy() 426 | with tf.device('cpu:0'): 427 | obs['image'] = tf.cast(obs['image'], dtype) / 255.0 - 0.5 428 | clip_rewards = dict(none=lambda x: x, tanh=tf.tanh)[ 429 | config.clip_rewards] 430 | obs['reward'] = clip_rewards(obs['reward']) 431 | return obs 432 | 433 | 434 | def count_steps(datadir, config): 435 | return tools.count_episodes(datadir)[1] * config.action_repeat 436 | 437 | 438 | def load_dataset(directory, config): 439 | episode = next(tools.load_episodes(directory, 1)) 440 | types = {k: v.dtype for k, v in episode.items()} 441 | shapes = {k: (None,) + v.shape[1:] for k, v in episode.items()} 442 | 443 | def generator(): return tools.load_episodes( 444 | directory, config.train_steps, config.batch_length, 445 | config.dataset_balance) 446 | dataset = tf.data.Dataset.from_generator(generator, types, shapes) 447 | dataset = dataset.batch(config.batch_size, drop_remainder=True) 448 | dataset = dataset.map(functools.partial(preprocess, config=config)) 449 | dataset = dataset.prefetch(10) 450 | return dataset 451 | 452 | 453 | def summarize_episode(episode, config, datadir, writer, prefix): 454 | episodes, steps = tools.count_episodes(datadir) 455 | length = (len(episode['reward']) - 1) * config.action_repeat 456 | ret = episode['reward'].sum() 457 | print(f'{prefix.title()} episode of length {length} with return {ret:.1f}.') 458 | metrics = [ 459 | (f'{prefix}/return', float(episode['reward'].sum())), 460 | (f'{prefix}/length', len(episode['reward']) - 1), 461 | (f'episodes', episodes)] 462 | step = count_steps(datadir, config) 463 | with (config.logdir / 'metrics.jsonl').open('a') as f: 464 | f.write(json.dumps(dict([('step', step)] + metrics)) + '\n') 465 | with writer.as_default(): # Env might run in a different thread. 
466 | tf.summary.experimental.set_step(step) 467 | [tf.summary.scalar('sim/' + k, v) for k, v in metrics] 468 | if prefix == 'test': 469 | tools.video_summary(f'sim/{prefix}/video', episode['image'][None]) 470 | 471 | 472 | def make_env(config, writer, prefix, datadir, train): 473 | suite, task = config.task.split('_', 1) 474 | if suite == 'dmc': 475 | env = wrappers.DeepMindControl(task) 476 | env = wrappers.ActionRepeat(env, config.action_repeat) 477 | env = wrappers.NormalizeActions(env) 478 | if config.natural: 479 | data = tools.load_imgnet(train) 480 | env = wrappers.NaturalMujoco(env, data) 481 | elif suite == 'atari': 482 | env = wrappers.Atari( 483 | task, config.action_repeat, (64, 64), grayscale=False, 484 | life_done=True, sticky_actions=True) 485 | env = wrappers.OneHotAction(env) 486 | else: 487 | raise NotImplementedError(suite) 488 | env = wrappers.TimeLimit(env, config.time_limit / config.action_repeat) 489 | callbacks = [] 490 | if train: 491 | callbacks.append(lambda ep: tools.save_episodes(datadir, [ep])) 492 | callbacks.append( 493 | lambda ep: summarize_episode(ep, config, datadir, writer, prefix)) 494 | env = wrappers.Collect(env, callbacks, config.precision) 495 | env = wrappers.RewardObs(env) 496 | return env 497 | 498 | 499 | def main(config): 500 | if config.gpu_growth: 501 | for gpu in tf.config.experimental.list_physical_devices('GPU'): 502 | tf.config.experimental.set_memory_growth(gpu, True) 503 | assert config.precision in (16, 32), config.precision 504 | if config.precision == 16: 505 | prec.set_policy(prec.Policy('mixed_float16')) 506 | config.steps = int(config.steps) 507 | config.logdir.mkdir(parents=True, exist_ok=True) 508 | print('Logdir', config.logdir) 509 | 510 | arg_dict = vars(config).copy() 511 | del arg_dict['logdir'] 512 | 513 | with open(os.path.join(config.logdir, 'args.json'), 'w') as fout: 514 | import json 515 | json.dump(arg_dict, fout) 516 | 517 | # Create environments. 518 | datadir = config.logdir / 'episodes' 519 | writer = tf.summary.create_file_writer( 520 | str(config.logdir), max_queue=1000, flush_millis=20000) 521 | writer.set_as_default() 522 | train_envs = [wrappers.Async(lambda: make_env( 523 | config, writer, 'train', datadir, train=True), config.parallel) 524 | for _ in range(config.envs)] 525 | test_envs = [wrappers.Async(lambda: make_env( 526 | config, writer, 'test', datadir, train=False), config.parallel) 527 | for _ in range(config.envs)] 528 | actspace = train_envs[0].action_space 529 | 530 | # Prefill dataset with random episodes. 531 | step = count_steps(datadir, config) 532 | prefill = max(0, config.prefill - step) 533 | print(f'Prefill dataset with {prefill} steps.') 534 | def random_agent(o, d, _): return ([actspace.sample() for _ in d], None) 535 | tools.simulate(random_agent, train_envs, prefill / config.action_repeat) 536 | writer.flush() 537 | 538 | # Train and regularly evaluate the agent. 
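# Added note: each cycle below runs one evaluation episode, then collects
# `eval_every` environment steps of training experience, and finally saves
# a checkpoint before repeating.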
539 | step = count_steps(datadir, config) 540 | print(f'Simulating agent for {config.steps-step} steps.') 541 | agent = CVRL(config, datadir, actspace, writer) 542 | if (config.logdir / 'variables.pkl').exists(): 543 | print('Load checkpoint.') 544 | agent.load(config.logdir / 'variables.pkl') 545 | state = None 546 | while step < config.steps: 547 | print('Start evaluation.') 548 | tools.simulate( 549 | functools.partial(agent, training=False), test_envs, episodes=1) 550 | writer.flush() 551 | print('Start collection.') 552 | steps = config.eval_every // config.action_repeat 553 | state = tools.simulate(agent, train_envs, steps, state=state) 554 | step = count_steps(datadir, config) 555 | agent.save(config.logdir / 'variables.pkl') 556 | for env in train_envs + test_envs: 557 | env.close() 558 | 559 | 560 | if __name__ == '__main__': 561 | try: 562 | import colored_traceback 563 | colored_traceback.add_hook() 564 | except ImportError: 565 | pass 566 | parser = argparse.ArgumentParser() 567 | for key, value in define_config().items(): 568 | parser.add_argument( 569 | f'--{key}', type=tools.args_type(value), default=value) 570 | args = parser.parse_args() 571 | 572 | main(args) 573 | -------------------------------------------------------------------------------- /models.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tensorflow as tf 3 | from tensorflow.keras import layers as tfkl 4 | from tensorflow_probability import distributions as tfd 5 | from tensorflow.keras.mixed_precision import experimental as prec 6 | import tools 7 | 8 | 9 | class RSSM(tools.Module): 10 | 11 | def __init__(self, stoch=30, deter=200, hidden=200, act=tf.nn.elu): 12 | super().__init__() 13 | self._activation = act 14 | self._stoch_size = stoch 15 | self._deter_size = deter 16 | self._hidden_size = hidden 17 | self._cell = tfkl.GRUCell(self._deter_size) 18 | 19 | def initial(self, batch_size): 20 | dtype = prec.global_policy().compute_dtype 21 | return dict( 22 | mean=tf.zeros([batch_size, self._stoch_size], dtype), 23 | std=tf.zeros([batch_size, self._stoch_size], dtype), 24 | stoch=tf.zeros([batch_size, self._stoch_size], dtype), 25 | deter=self._cell.get_initial_state(None, batch_size, dtype)) 26 | 27 | @tf.function 28 | def observe(self, embed, action, state=None): 29 | if state is None: 30 | state = self.initial(tf.shape(action)[0]) 31 | embed = tf.transpose(embed, [1, 0, 2]) 32 | action = tf.transpose(action, [1, 0, 2]) 33 | post, prior = tools.static_scan( 34 | lambda prev, inputs: self.obs_step(prev[0], *inputs), 35 | (action, embed), (state, state)) 36 | post = {k: tf.transpose(v, [1, 0, 2]) for k, v in post.items()} 37 | prior = {k: tf.transpose(v, [1, 0, 2]) for k, v in prior.items()} 38 | return post, prior 39 | 40 | @tf.function 41 | def imagine(self, action, state=None): 42 | if state is None: 43 | state = self.initial(tf.shape(action)[0]) 44 | assert isinstance(state, dict), state 45 | action = tf.transpose(action, [1, 0, 2]) 46 | prior = tools.static_scan(self.img_step, action, state) 47 | prior = {k: tf.transpose(v, [1, 0, 2]) for k, v in prior.items()} 48 | return prior 49 | 50 | def get_feat(self, state): 51 | return tf.concat([state['stoch'], state['deter']], -1) 52 | 53 | def get_dist(self, state): 54 | return tfd.MultivariateNormalDiag(state['mean'], state['std']) 55 | 56 | @tf.function 57 | def obs_step(self, prev_state, prev_action, embed): 58 | prior = self.img_step(prev_state, prev_action) 59 | x = 
tf.concat([prior['deter'], embed], -1) 60 | x = self.get('obs1', tfkl.Dense, self._hidden_size, 61 | self._activation)(x) 62 | x = self.get('obs2', tfkl.Dense, 2 * self._stoch_size, None)(x) 63 | mean, std = tf.split(x, 2, -1) 64 | std = tf.nn.softplus(std) + 0.1 65 | stoch = self.get_dist({'mean': mean, 'std': std}).sample() 66 | post = {'mean': mean, 'std': std, 67 | 'stoch': stoch, 'deter': prior['deter']} 68 | return post, prior 69 | 70 | @tf.function 71 | def img_step(self, prev_state, prev_action): 72 | x = tf.concat([prev_state['stoch'], prev_action], -1) 73 | x = self.get('img1', tfkl.Dense, self._hidden_size, 74 | self._activation)(x) 75 | x, deter = self._cell(x, [prev_state['deter']]) 76 | deter = deter[0] # Keras wraps the state in a list. 77 | x = self.get('img2', tfkl.Dense, self._hidden_size, 78 | self._activation)(x) 79 | x = self.get('img3', tfkl.Dense, 2 * self._stoch_size, None)(x) 80 | mean, std = tf.split(x, 2, -1) 81 | std = tf.nn.softplus(std) + 0.1 82 | stoch = self.get_dist({'mean': mean, 'std': std}).sample() 83 | prior = {'mean': mean, 'std': std, 'stoch': stoch, 'deter': deter} 84 | return prior 85 | 86 | 87 | class ConvEncoder(tools.Module): 88 | 89 | def __init__(self, depth=32, act=tf.nn.relu): 90 | self._act = act 91 | self._depth = depth 92 | 93 | def __call__(self, obs): 94 | kwargs = dict(strides=2, activation=self._act) 95 | x = tf.reshape(obs['image'], (-1,) + tuple(obs['image'].shape[-3:])) 96 | x = self.get('h1', tfkl.Conv2D, 1 * self._depth, 4, **kwargs)(x) 97 | x = self.get('h2', tfkl.Conv2D, 2 * self._depth, 4, **kwargs)(x) 98 | x = self.get('h3', tfkl.Conv2D, 4 * self._depth, 4, **kwargs)(x) 99 | x = self.get('h4', tfkl.Conv2D, 8 * self._depth, 4, **kwargs)(x) 100 | shape = tf.concat([tf.shape(obs['image'])[:-3], [32 * self._depth]], 0) 101 | return tf.reshape(x, shape) 102 | 103 | 104 | class ConvDecoder(tools.Module): 105 | 106 | def __init__(self, depth=32, act=tf.nn.relu, shape=(64, 64, 3)): 107 | self._act = act 108 | self._depth = depth 109 | self._shape = shape 110 | 111 | def __call__(self, features): 112 | kwargs = dict(strides=2, activation=self._act) 113 | x = self.get('h1', tfkl.Dense, 32 * self._depth, None)(features) 114 | x = tf.reshape(x, [-1, 1, 1, 32 * self._depth]) 115 | x = self.get('h2', tfkl.Conv2DTranspose, 116 | 4 * self._depth, 5, **kwargs)(x) 117 | x = self.get('h3', tfkl.Conv2DTranspose, 118 | 2 * self._depth, 5, **kwargs)(x) 119 | x = self.get('h4', tfkl.Conv2DTranspose, 120 | 1 * self._depth, 6, **kwargs)(x) 121 | x = self.get('h5', tfkl.Conv2DTranspose, 122 | self._shape[-1], 6, strides=2)(x) 123 | mean = tf.reshape(x, tf.concat( 124 | [tf.shape(features)[:-1], self._shape], 0)) 125 | return tfd.Independent(tfd.Normal(mean, 1), len(self._shape)) 126 | 127 | 128 | class ContrastiveObsModel(tools.Module): 129 | """The contrastive observation model 130 | """ 131 | def __init__(self, hz, hx, act=tf.nn.elu): 132 | self.act = act 133 | self.hz = hz 134 | self.hx = hx 135 | 136 | def __call__(self, z, x): 137 | """Both inputs have the shape of [batch_sz, length, dim]. 
For each positive sample, we use the remaining batch_sz * length - 1 samples in the flattened batch as negatives 138 | 139 | Args: 140 | z (tensor): latent state 141 | x (tensor): encoded observation 142 | """ 143 | 144 | x = tf.reshape(x, (-1, x.shape[-1])) 145 | z = tf.reshape(z, (-1, z.shape[-1])) 146 | 147 | # compute the final projections in float32 to avoid overflow under mixed precision 148 | x = self.get('obs_enc1', tfkl.Dense, self.hx, self.act)(x) 149 | x = self.get('obs_enc2', tfkl.Dense, self.hz, self.act, dtype='float32')(x) 150 | 151 | z = self.get('state_merge1', tfkl.Dense, self.hz, self.act)(z) 152 | z = self.get('state_merge2', tfkl.Dense, self.hz, self.act, 153 | dtype='float32')(z) 154 | 155 | weight_mat = tf.matmul(z, x, transpose_b=True) 156 | 157 | positive = tf.linalg.tensor_diag_part(weight_mat) 158 | norm = tf.reduce_logsumexp(weight_mat, axis=1) 159 | 160 | # compute the InfoNCE loss and cast the prediction back to float16 161 | info_nce = tf.cast(positive - norm, 'float16') 162 | 163 | return info_nce 164 | 165 | 
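# Added commentary (not in the original file): the InfoNCE estimate above
# scores each flattened (z, x) pair on the diagonal of `weight_mat` as the
# positive and treats every other entry in its row as a negative. A minimal
# NumPy sketch of the same computation, assuming `z` and `x` are [batch, dim]:
#
#   from scipy.special import logsumexp
#   logits = z @ x.T                                  # [batch, batch] scores
#   positive = np.diag(logits)                        # matched (z_i, x_i) pairs
#   info_nce = positive - logsumexp(logits, axis=1)   # per-sample log-softmax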
166 | class DenseDecoder(tools.Module): 167 | 168 | def __init__(self, shape, layers, units, dist='normal', act=tf.nn.elu): 169 | self._shape = shape 170 | self._layers = layers 171 | self._units = units 172 | self._dist = dist 173 | self._act = act 174 | 175 | def __call__(self, features): 176 | x = features 177 | for index in range(self._layers): 178 | x = self.get(f'h{index}', tfkl.Dense, self._units, self._act)(x) 179 | x = self.get(f'hout', tfkl.Dense, np.prod(self._shape))(x) 180 | x = tf.reshape(x, tf.concat([tf.shape(features)[:-1], self._shape], 0)) 181 | if self._dist == 'normal': 182 | return tfd.Independent(tfd.Normal(x, 1), len(self._shape)) 183 | if self._dist == 'binary': 184 | return tfd.Independent(tfd.Bernoulli(x), len(self._shape)) 185 | raise NotImplementedError(self._dist) 186 | 187 | class QNetwork(tools.Module): 188 | 189 | def __init__(self, layers, units, dist='normal', act=tf.nn.elu, shape=()): 190 | self._shape = shape 191 | self._layers = layers 192 | self._units = units 193 | self._dist = dist 194 | self._act = act 195 | 196 | def __call__(self, features): 197 | x = features 198 | for index in range(self._layers): 199 | x = self.get(f'h{index}', tfkl.Dense, self._units, self._act)(x) 200 | x = self.get(f'hout', tfkl.Dense, np.prod(self._shape))(x) 201 | x = tf.reshape(x, tf.concat([tf.shape(features)[:-1], self._shape], 0)) 202 | 203 | return x 204 | 205 | class ActionDecoder(tools.Module): 206 | 207 | def __init__( 208 | self, size, layers, units, dist='tanh_normal', act=tf.nn.elu, 209 | min_std=1e-4, init_std=5, mean_scale=5): 210 | self._size = size 211 | self._layers = layers 212 | self._units = units 213 | self._dist = dist 214 | self._act = act 215 | self._min_std = min_std 216 | self._init_std = init_std 217 | self._mean_scale = mean_scale 218 | 219 | def __call__(self, features): 220 | raw_init_std = np.log(np.exp(self._init_std) - 1) 221 | x = features 222 | for index in range(self._layers): 223 | x = self.get(f'h{index}', tfkl.Dense, self._units, self._act)(x) 224 | if self._dist == 'tanh_normal': 225 | # https://www.desmos.com/calculator/rcmcf5jwe7 226 | x = self.get(f'hout', tfkl.Dense, 2 * self._size)(x) 227 | mean, std = tf.split(x, 2, -1) 228 | mean = self._mean_scale * tf.tanh(mean / self._mean_scale) 229 | std = tf.nn.softplus(std + raw_init_std) + self._min_std 230 | dist = tfd.Normal(mean, std) 231 | dist = tfd.TransformedDistribution(dist, tools.TanhBijector()) 232 | dist = tfd.Independent(dist, 1) 233 | dist = tools.SampleDist(dist) 234 | elif self._dist == 'onehot': 235 | x = self.get(f'hout', tfkl.Dense, self._size)(x) 236 | dist = tools.OneHotDist(x) 237 | else: 238 | raise NotImplementedError(self._dist) 239 | return dist 240 | 241 | def actions_and_log_probs(self, features): 242 | dist = self(features) 243 | action = dist.sample() 244 | log_prob = dist.log_prob(action) 245 | 246 | return action, log_prob 247 | -------------------------------------------------------------------------------- /soft_actor_critic.py: -------------------------------------------------------------------------------- 1 | # The code is modified from the rail-berkeley/softlearning repo https://github.com/rail-berkeley/softlearning 2 | 3 | from copy import deepcopy 4 | from collections import OrderedDict 5 | from numbers import Number 6 | 7 | import numpy as np 8 | import tensorflow as tf 9 | import tensorflow_probability as tfp 10 | 11 | def td_targets(rewards, discounts, next_values): 12 | return rewards + discounts * next_values 13 | 14 | def compute_Q_targets(next_Q_values, 15 | next_log_pis, 16 | rewards, 17 | terminals, 18 | discount, 19 | entropy_scale, 20 | reward_scale): 21 | next_values = next_Q_values - entropy_scale * next_log_pis 22 | terminals = tf.cast(terminals, next_values.dtype) 23 | 24 | Q_targets = td_targets( 25 | rewards=reward_scale * rewards, 26 | discounts=discount, 27 | next_values=(1.0 - terminals) * next_values) 28 | 29 | return Q_targets 30 | 31 | 32 | def heuristic_target_entropy(action_space): 33 | heuristic_target_entropy = -np.prod(action_space.shape) 34 | 35 | return heuristic_target_entropy 36 | 37 | 38 | class SAC: 39 | """Soft Actor-Critic (SAC) 40 | 41 | References 42 | ---------- 43 | [1] Tuomas Haarnoja*, Aurick Zhou*, Kristian Hartikainen*, George Tucker, 44 | Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter 45 | Abbeel, and Sergey Levine. Soft Actor-Critic Algorithms and 46 | Applications. arXiv preprint arXiv:1812.05905. 2018. 47 | """ 48 | 49 | def __init__( 50 | self, 51 | policy, 52 | Qs, 53 | policy_optimizer, 54 | q_optimizers, 55 | action_space, 56 | plotter=None, 57 | policy_lr=3e-4, 58 | Q_lr=3e-4, 59 | alpha_lr=3e-4, 60 | reward_scale=1.0, 61 | target_entropy='auto', 62 | discount=0.99, 63 | tau=5e-3, 64 | target_update_interval=1, 65 | save_full_state=False, 66 | Q_targets=None, 67 | ): 68 | """ 69 | Args: 70 | action_space: Action space of the training environment; used by the target entropy heuristic. 71 | policy: A policy function approximator. 72 | Qs: Q-function approximators. The min of these 73 | approximators will be used. Usage of at least two Q-functions 74 | improves performance by reducing overestimation bias. 75 | plotter (`QFPolicyPlotter`): Plotter instance to be used for 76 | visualizing the Q-function during training. 77 | policy_lr/Q_lr/alpha_lr (`float`): Learning rates used for the function approximators. 78 | discount (`float`): Discount factor for Q-function updates. 79 | tau (`float`): Soft value function target update weight. 80 | target_update_interval ('int'): Frequency at which target network 81 | updates occur, in iterations.
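reward_scale (`float`): Multiplicative scaling applied to the raw
    rewards before computing Q-targets.
target_entropy: Target policy entropy for the automatic temperature
    update; 'auto' uses the -|A| heuristic from
    `heuristic_target_entropy`.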
82 | """ 83 | 84 | self._policy = policy 85 | 86 | self._Qs = Qs 87 | 88 | if Q_targets is not None: 89 | self._Q_targets = Q_targets 90 | else: 91 | self._Q_targets = tuple(deepcopy(Q) for Q in Qs) 92 | self._update_target(tau=tf.constant(1.0)) 93 | 94 | self._plotter = plotter 95 | 96 | self._policy_lr = policy_lr 97 | self._Q_lr = Q_lr 98 | self._alpha_lr = alpha_lr 99 | 100 | self._reward_scale = reward_scale 101 | self._target_entropy = ( 102 | heuristic_target_entropy(action_space) 103 | if target_entropy == 'auto' 104 | else target_entropy) 105 | 106 | self._discount = discount 107 | self._tau = tau 108 | self._target_update_interval = target_update_interval 109 | 110 | self._save_full_state = save_full_state 111 | 112 | self._Q_optimizers = q_optimizers 113 | self._policy_optimizer = policy_optimizer 114 | 115 | self._log_alpha = tf.Variable(0.0, dtype=tf.float16) 116 | self._alpha = tfp.util.DeferredTensor(self._log_alpha, tf.exp) 117 | 118 | self._alpha_optimizer = tf.optimizers.Adam( 119 | self._alpha_lr, name='alpha_optimizer') 120 | 121 | def _compute_Q_targets(self, batch): 122 | next_observations = batch['next_observations'] 123 | rewards = batch['rewards'] 124 | terminals = batch['terminals'] 125 | 126 | entropy_scale = self._alpha 127 | reward_scale = self._reward_scale 128 | discount = self._discount 129 | 130 | next_actions, next_log_pis = self._policy.actions_and_log_probs( 131 | next_observations) 132 | next_Qs_values = tuple( 133 | # Q.values(next_observations, next_actions) for Q in self._Q_targets) 134 | Q(tf.concat((next_observations, next_actions), axis=-1)) for Q in self._Q_targets) 135 | next_Q_values = tf.reduce_min(next_Qs_values, axis=0) 136 | 137 | Q_targets = compute_Q_targets( 138 | next_Q_values, 139 | next_log_pis, 140 | rewards, 141 | terminals, 142 | discount, 143 | entropy_scale, 144 | reward_scale) 145 | 146 | return tf.stop_gradient(Q_targets) 147 | 148 | def _update_critic(self, batch): 149 | """Update the Q-functions. 150 | 151 | Runs one gradient step on each critic Q-function with its 152 | corresponding optimizer (this TF2 port applies the update 153 | directly instead of collecting `self._training_ops`). 154 | 155 | See Equations (5, 6) in [1] for further information on the 156 | Q-function update rule. 157 | """ 158 | Q_targets = self._compute_Q_targets(batch) 159 | Q_targets = tf.expand_dims(Q_targets, axis=-1) 160 | 161 | observations = batch['observations'] 162 | actions = batch['actions'] 163 | rewards = batch['rewards'] 164 | rewards = tf.expand_dims(rewards, axis=-1) 165 | 166 | # tf.debugging.assert_shapes(( 167 | # (Q_targets, ('B', 1)), (rewards, ('B', 1)))) 168 | 169 | Qs_values = [] 170 | Qs_losses = [] 171 | for Q, optimizer in zip(self._Qs, self._Q_optimizers): 172 | with tf.GradientTape() as tape: 173 | Q_values = Q(tf.concat((observations, actions), axis=-1)) 174 | Q_losses = 0.5 * ( 175 | tf.losses.MSE(y_true=Q_targets, y_pred=tf.expand_dims(Q_values, axis=-1))) 176 | Q_loss = tf.nn.compute_average_loss(Q_losses) 177 | 178 | optimizer(tape, Q_loss) 179 | Qs_losses.append(Q_losses) 180 | Qs_values.append(Q_values) 181 | 182 | return Qs_values, Qs_losses 183 | 184 | def _update_actor(self, batch): 185 | """Update the policy. 186 | 187 | Runs one gradient step on the policy with the policy optimizer; 188 | the entropy temperature is updated separately in 189 | `_update_alpha`. 190 | 191 | See Section 4.2 in [1] for further information on the policy update, 192 | and Section 5 in [1] for further information on the entropy update.
193 | """ 194 | observations = batch['observations'] 195 | 196 | with tf.GradientTape() as tape: 197 | actions, log_pis = self._policy.actions_and_log_probs(observations) 198 | 199 | Qs_log_targets = tuple( 200 | # Q.values(observations, actions) for Q in self._Qs) 201 | Q(tf.concat((observations, actions), axis=-1)) for Q in self._Qs) 202 | Q_log_targets = tf.reduce_min(Qs_log_targets, axis=0) 203 | policy_losses = self._alpha * log_pis - Q_log_targets 204 | policy_loss = tf.nn.compute_average_loss(policy_losses) 205 | self._policy_optimizer(tape, policy_loss)  # fix: apply the policy gradient; assumes the same `optimizer(tape, loss)` interface as the Q optimizers above 206 | return policy_losses 207 | 208 | # @tf.function(experimental_relax_shapes=True) 209 | def _update_alpha(self, batch): 210 | if not isinstance(self._target_entropy, Number): 211 | return 0.0 212 | 213 | observations = batch['observations'] 214 | 215 | actions, log_pis = self._policy.actions_and_log_probs(observations) 216 | 217 | with tf.GradientTape() as tape: 218 | alpha_losses = -1.0 * ( 219 | self._alpha * tf.stop_gradient(log_pis + self._target_entropy)) 220 | 221 | alpha_loss = tf.nn.compute_average_loss(alpha_losses) 222 | 223 | alpha_gradients = tape.gradient(alpha_loss, [self._log_alpha]) 224 | self._alpha_optimizer.apply_gradients(zip(alpha_gradients, [self._log_alpha]))  # fix: actually apply the temperature update 225 | return alpha_losses 226 | 227 | def _update_target(self, tau): 228 | for Q, Q_target in zip(self._Qs, self._Q_targets): 229 | for source_weight, target_weight in zip( 230 | Q.trainable_variables, Q_target.trainable_variables): 231 | target_weight.assign( 232 | tau * source_weight + (1.0 - tau) * target_weight) 233 | 234 | def _do_updates(self, states, actions, rewards, dones): 235 | """Runs the update operations for policy, Q, and alpha.""" 236 | batch = OrderedDict(( 237 | ('observations', states[:-1]), 238 | ('next_observations', states[1:]), 239 | ('rewards', rewards[:-1]), 240 | ('terminals', dones[:-1]), 241 | ('actions', actions[:-1]) 242 | )) 243 | Qs_values, Qs_losses = self._update_critic(batch) 244 | policy_losses = self._update_actor(batch) 245 | alpha_losses = self._update_alpha(batch) 246 | 247 | diagnostics = OrderedDict(( 248 | ('Q_value-mean', tf.reduce_mean(Qs_values)), 249 | ('Q_loss-mean', tf.reduce_mean(Qs_losses)), 250 | ('policy_loss-mean', tf.reduce_mean(policy_losses)), 251 | ('alpha', tf.convert_to_tensor(self._alpha)), 252 | ('alpha_loss-mean', tf.reduce_mean(alpha_losses)), 253 | )) 254 | return diagnostics 255 | 256 | def _do_training(self, iteration, states, actions, rewards, dones): 257 | training_diagnostics = self._do_updates(states, actions, rewards, dones) 258 | 259 | if iteration % self._target_update_interval == 0: 260 | # Run target ops here. 261 | self._update_target(tau=tf.constant(self._tau)) 262 | 263 | return training_diagnostics 264 | 265 | def get_diagnostics(self, 266 | iteration, 267 | batch, 268 | training_paths, 269 | evaluation_paths): 270 | """Return diagnostic information as an ordered dictionary. 271 | 272 | Also calls the `draw` method of the plotter, if a plotter is defined.
273 | """ 274 | diagnostics = OrderedDict(( 275 | ('alpha', self._alpha.numpy()), 276 | ('policy', self._policy.get_diagnostics_np(batch['observations'])), 277 | )) 278 | 279 | if self._plotter: 280 | self._plotter.draw() 281 | 282 | return diagnostics 283 | 284 | @property 285 | def tf_saveables(self): 286 | saveables = { 287 | '_policy_optimizer': self._policy_optimizer, 288 | **{ 289 | f'Q_optimizer_{i}': optimizer 290 | for i, optimizer in enumerate(self._Q_optimizers) 291 | }, 292 | '_alpha': self._alpha, 293 | } 294 | 295 | if hasattr(self, '_alpha_optimizer'): 296 | saveables['_alpha_optimizer'] = self._alpha_optimizer 297 | 298 | return saveables 299 | -------------------------------------------------------------------------------- /tools.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import io 3 | import pathlib 4 | import pickle 5 | import re 6 | import uuid 7 | 8 | import gym 9 | import numpy as np 10 | import tensorflow as tf 11 | import tensorflow.compat.v1 as tf1 12 | import tensorflow_probability as tfp 13 | from tensorflow.keras.mixed_precision import experimental as prec 14 | from tensorflow_probability import distributions as tfd 15 | 16 | from PIL import Image 17 | 18 | 19 | class AttrDict(dict): 20 | 21 | __setattr__ = dict.__setitem__ 22 | __getattr__ = dict.__getitem__ 23 | 24 | 25 | class Module(tf.Module): 26 | 27 | def save(self, filename): 28 | values = tf.nest.map_structure(lambda x: x.numpy(), self.variables) 29 | with pathlib.Path(filename).open('wb') as f: 30 | pickle.dump(values, f) 31 | 32 | def load(self, filename): 33 | with pathlib.Path(filename).open('rb') as f: 34 | values = pickle.load(f) 35 | tf.nest.map_structure(lambda x, y: x.assign(y), self.variables, values) 36 | 37 | def get(self, name, ctor, *args, **kwargs): 38 | # Create or get layer by name to avoid mentioning it in the constructor. 39 | if not hasattr(self, '_modules'): 40 | self._modules = {} 41 | if name not in self._modules: 42 | self._modules[name] = ctor(*args, **kwargs) 43 | return self._modules[name] 44 | 45 | 46 | def nest_summary(structure): 47 | if isinstance(structure, dict): 48 | return {k: nest_summary(v) for k, v in structure.items()} 49 | if isinstance(structure, list): 50 | return [nest_summary(v) for v in structure] 51 | if hasattr(structure, 'shape'): 52 | return str(structure.shape).replace(', ', 'x').strip('(), ') 53 | return '?' 
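# Added commentary (not in the original file): Module.get above builds a
# sub-layer the first time a name is requested and returns the cached
# instance on later calls, so networks can be declared inline in __call__.
# A minimal sketch with a hypothetical two-layer head:
#
#   class Head(Module):
#     def __call__(self, x):
#       x = self.get('h1', tfkl.Dense, 128, tf.nn.elu)(x)  # built on first call
#       return self.get('out', tfkl.Dense, 1)(x)           # reused afterwards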
54 | 55 | 56 | def graph_summary(writer, fn, *args): 57 | step = tf.summary.experimental.get_step() 58 | 59 | def inner(*args): 60 | tf.summary.experimental.set_step(step) 61 | with writer.as_default(): 62 | fn(*args) 63 | return tf.numpy_function(inner, args, []) 64 | 65 | 66 | def video_summary(name, video, step=None, fps=20): 67 | name = name if isinstance(name, str) else str(name) 68 | if np.issubdtype(video.dtype, np.floating): 69 | video = np.clip(255 * video, 0, 255).astype(np.uint8) 70 | B, T, H, W, C = video.shape 71 | try: 72 | frames = video.transpose((1, 2, 0, 3, 4)).reshape((T, H, B * W, C)) 73 | summary = tf1.Summary() 74 | image = tf1.Summary.Image(height=B * H, width=T * W, colorspace=C) 75 | image.encoded_image_string = encode_gif(frames, fps) 76 | summary.value.add(tag=name + '/gif', image=image) 77 | tf.summary.experimental.write_raw_pb(summary.SerializeToString(), step) 78 | except (IOError, OSError) as e: 79 | print('GIF summaries require ffmpeg in $PATH.', e) 80 | frames = video.transpose((0, 2, 1, 3, 4)).reshape((1, B * H, T * W, C)) 81 | tf.summary.image(name + '/grid', frames, step) 82 | 83 | 84 | def encode_gif(frames, fps): 85 | from subprocess import Popen, PIPE 86 | h, w, c = frames[0].shape 87 | pxfmt = {1: 'gray', 3: 'rgb24'}[c] 88 | cmd = ' '.join([ 89 | f'ffmpeg -y -f rawvideo -vcodec rawvideo', 90 | f'-r {fps:.02f} -s {w}x{h} -pix_fmt {pxfmt} -i - -filter_complex', 91 | f'[0:v]split[x][z];[z]palettegen[y];[x]fifo[x];[x][y]paletteuse', 92 | f'-r {fps:.02f} -f gif -']) 93 | proc = Popen(cmd.split(' '), stdin=PIPE, stdout=PIPE, stderr=PIPE) 94 | for image in frames: 95 | proc.stdin.write(image.tostring()) 96 | out, err = proc.communicate() 97 | if proc.returncode: 98 | raise IOError('\n'.join([' '.join(cmd), err.decode('utf8')])) 99 | del proc 100 | return out 101 | 102 | 103 | def simulate(agent, envs, steps=0, episodes=0, state=None): 104 | # Initialize or unpack simulation state. 105 | if state is None: 106 | step, episode = 0, 0 107 | done = np.ones(len(envs), np.bool) 108 | length = np.zeros(len(envs), np.int32) 109 | obs = [None] * len(envs) 110 | agent_state = None 111 | else: 112 | step, episode, done, length, obs, agent_state = state 113 | while (steps and step < steps) or (episodes and episode < episodes): 114 | # Reset envs if necessary. 115 | if done.any(): 116 | indices = [index for index, d in enumerate(done) if d] 117 | promises = [envs[i].reset(blocking=False) for i in indices] 118 | for index, promise in zip(indices, promises): 119 | obs[index] = promise() 120 | # Step agents. 121 | obs = {k: np.stack([o[k] for o in obs]) for k in obs[0]} 122 | action, agent_state = agent(obs, done, agent_state) 123 | action = np.array(action) 124 | assert len(action) == len(envs) 125 | # Step envs. 126 | promises = [e.step(a, blocking=False) for e, a in zip(envs, action)] 127 | obs, _, done = zip(*[p()[:3] for p in promises]) 128 | obs = list(obs) 129 | done = np.stack(done) 130 | episode += int(done.sum()) 131 | length += 1 132 | step += (done * length).sum() 133 | length *= (1 - done) 134 | # Return new state to allow resuming the simulation. 
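# Added note: the returned (step - steps, episode - episodes) carry any
# overshoot past the requested budget into the next call, so resumed
# simulations keep their step accounting exact.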
135 | return (step - steps, episode - episodes, done, length, obs, agent_state) 136 | 137 | 138 | def count_episodes(directory): 139 | filenames = directory.glob('*.npz') 140 | lengths = [int(n.stem.rsplit('-', 1)[-1]) - 1 for n in filenames] 141 | episodes, steps = len(lengths), sum(lengths) 142 | return episodes, steps 143 | 144 | 145 | def save_episodes(directory, episodes): 146 | directory = pathlib.Path(directory).expanduser() 147 | directory.mkdir(parents=True, exist_ok=True) 148 | timestamp = datetime.datetime.now().strftime('%Y%m%dT%H%M%S') 149 | for episode in episodes: 150 | identifier = str(uuid.uuid4().hex) 151 | length = len(episode['reward']) 152 | filename = directory / f'{timestamp}-{identifier}-{length}.npz' 153 | with io.BytesIO() as f1: 154 | np.savez_compressed(f1, **episode) 155 | f1.seek(0) 156 | with filename.open('wb') as f2: 157 | f2.write(f1.read()) 158 | 159 | 160 | def load_episodes(directory, rescan, length=None, balance=False, seed=0): 161 | directory = pathlib.Path(directory).expanduser() 162 | random = np.random.RandomState(seed) 163 | cache = {} 164 | while True: 165 | for filename in directory.glob('*.npz'): 166 | if filename not in cache: 167 | try: 168 | with filename.open('rb') as f: 169 | episode = np.load(f) 170 | episode = {k: episode[k] for k in episode.keys()} 171 | except Exception as e: 172 | print(f'Could not load episode: {e}') 173 | continue 174 | cache[filename] = episode 175 | keys = list(cache.keys()) 176 | for index in random.choice(len(keys), rescan): 177 | episode = cache[keys[index]] 178 | if length: 179 | total = len(next(iter(episode.values()))) 180 | available = total - length 181 | if available < 1: 182 | print(f'Skipped short episode of length {available}.') 183 | continue 184 | if balance: 185 | index = min(random.randint(0, total), available) 186 | else: 187 | index = int(random.randint(0, available)) 188 | episode = {k: v[index: index + length] 189 | for k, v in episode.items()} 190 | yield episode 191 | 192 | 193 | class DummyEnv: 194 | 195 | def __init__(self): 196 | self._random = np.random.RandomState(seed=0) 197 | self._step = None 198 | 199 | @property 200 | def observation_space(self): 201 | low = np.zeros([64, 64, 3], dtype=np.uint8) 202 | high = 255 * np.ones([64, 64, 3], dtype=np.uint8) 203 | spaces = {'image': gym.spaces.Box(low, high)} 204 | return gym.spaces.Dict(spaces) 205 | 206 | @property 207 | def action_space(self): 208 | low = -np.ones([5], dtype=np.float32) 209 | high = np.ones([5], dtype=np.float32) 210 | return gym.spaces.Box(low, high) 211 | 212 | def reset(self): 213 | self._step = 0 214 | obs = self.observation_space.sample() 215 | return obs 216 | 217 | def step(self, action): 218 | obs = self.observation_space.sample() 219 | reward = self._random.uniform(0, 1) 220 | self._step += 1 221 | done = self._step >= 1000 222 | info = {} 223 | return obs, reward, done, info 224 | 225 | 226 | class SampleDist: 227 | 228 | def __init__(self, dist, samples=100): 229 | self._dist = dist 230 | self._samples = samples 231 | 232 | @property 233 | def name(self): 234 | return 'SampleDist' 235 | 236 | def __getattr__(self, name): 237 | return getattr(self._dist, name) 238 | 239 | def mean(self): 240 | samples = self._dist.sample(self._samples) 241 | return tf.reduce_mean(samples, 0) 242 | 243 | def mode(self): 244 | sample = self._dist.sample(self._samples) 245 | logprob = self._dist.log_prob(sample) 246 | return tf.gather(sample, tf.argmax(logprob))[0] 247 | 248 | def entropy(self): 249 | sample = 
self._dist.sample(self._samples) 250 | logprob = self.log_prob(sample) 251 | return -tf.reduce_mean(logprob, 0) 252 | 253 | 254 | class OneHotDist: 255 | 256 | def __init__(self, logits=None, probs=None): 257 | self._dist = tfd.Categorical(logits=logits, probs=probs) 258 | self._num_classes = self.mean().shape[-1] 259 | self._dtype = prec.global_policy().compute_dtype 260 | 261 | @property 262 | def name(self): 263 | return 'OneHotDist' 264 | 265 | def __getattr__(self, name): 266 | return getattr(self._dist, name) 267 | 268 | def prob(self, events): 269 | indices = tf.argmax(events, axis=-1) 270 | return self._dist.prob(indices) 271 | 272 | def log_prob(self, events): 273 | indices = tf.argmax(events, axis=-1) 274 | return self._dist.log_prob(indices) 275 | 276 | def mean(self): 277 | return self._dist.probs_parameter() 278 | 279 | def mode(self): 280 | return self._one_hot(self._dist.mode()) 281 | 282 | def sample(self, amount=None): 283 | amount = [amount] if amount else [] 284 | indices = self._dist.sample(*amount) 285 | sample = self._one_hot(indices) 286 | probs = self._dist.probs_parameter() 287 | sample += tf.cast(probs - tf.stop_gradient(probs), self._dtype) 288 | return sample 289 | 290 | def _one_hot(self, indices): 291 | return tf.one_hot(indices, self._num_classes, dtype=self._dtype) 292 | 293 | 294 | class TanhBijector(tfp.bijectors.Bijector): 295 | 296 | def __init__(self, validate_args=False, name='tanh'): 297 | super().__init__( 298 | forward_min_event_ndims=0, 299 | validate_args=validate_args, 300 | name=name) 301 | 302 | def _forward(self, x): 303 | return tf.nn.tanh(x) 304 | 305 | def _inverse(self, y): 306 | dtype = y.dtype 307 | y = tf.cast(y, tf.float32) 308 | y = tf.where( 309 | tf.less_equal(tf.abs(y), 1.), 310 | tf.clip_by_value(y, -0.99999997, 0.99999997), y) 311 | y = tf.atanh(y) 312 | y = tf.cast(y, dtype) 313 | return y 314 | 315 | def _forward_log_det_jacobian(self, x): 316 | log2 = tf.math.log(tf.constant(2.0, dtype=x.dtype)) 317 | return 2.0 * (log2 - x - tf.nn.softplus(-2.0 * x)) 318 | 319 | 320 | def lambda_return( 321 | reward, value, pcont, bootstrap, lambda_, axis): 322 | # Setting lambda=1 gives a discounted Monte Carlo return. 323 | # Setting lambda=0 gives a fixed 1-step return. 
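# Added note: the backward recursion computed below is
#   R_t = r_t + pcont_t * ((1 - lambda) * V(s_{t+1}) + lambda * R_{t+1}),
# initialized with R_T = bootstrap.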
324 | assert reward.shape.ndims == value.shape.ndims, (reward.shape, value.shape) 325 | if isinstance(pcont, (int, float)): 326 | pcont = pcont * tf.ones_like(reward) 327 | dims = list(range(reward.shape.ndims)) 328 | dims = [axis] + dims[1:axis] + [0] + dims[axis + 1:] 329 | if axis != 0: 330 | reward = tf.transpose(reward, dims) 331 | value = tf.transpose(value, dims) 332 | pcont = tf.transpose(pcont, dims) 333 | if bootstrap is None: 334 | bootstrap = tf.zeros_like(value[-1]) 335 | next_values = tf.concat([value[1:], bootstrap[None]], 0) 336 | inputs = reward + pcont * next_values * (1 - lambda_) 337 | returns = static_scan( 338 | lambda agg, cur: cur[0] + cur[1] * lambda_ * agg, 339 | (inputs, pcont), bootstrap, reverse=True) 340 | if axis != 0: 341 | returns = tf.transpose(returns, dims) 342 | return returns 343 | 344 | 345 | class Adam(tf.Module): 346 | 347 | def __init__(self, name, modules, lr, clip=None, wd=None, wdpattern=r'.*'): 348 | self._name = name 349 | self._modules = modules 350 | self._clip = clip 351 | self._wd = wd 352 | self._wdpattern = wdpattern 353 | self._opt = tf.optimizers.Adam(lr) 354 | self._opt = prec.LossScaleOptimizer(self._opt, 'dynamic') 355 | self._variables = None 356 | 357 | @property 358 | def variables(self): 359 | return self._opt.variables() 360 | 361 | def __call__(self, tape, loss): 362 | if self._variables is None: 363 | variables = [module.variables for module in self._modules] 364 | self._variables = tf.nest.flatten(variables) 365 | count = sum(np.prod(x.shape) for x in self._variables) 366 | print(f'Found {count} {self._name} parameters.') 367 | assert len(loss.shape) == 0, loss.shape 368 | with tape: 369 | loss = self._opt.get_scaled_loss(loss) 370 | grads = tape.gradient(loss, self._variables) 371 | grads = self._opt.get_unscaled_gradients(grads) 372 | norm = tf.linalg.global_norm(grads) 373 | if self._clip: 374 | grads, _ = tf.clip_by_global_norm(grads, self._clip, norm) 375 | if self._wd: 376 | context = tf.distribute.get_replica_context() 377 | context.merge_call(self._apply_weight_decay) 378 | self._opt.apply_gradients(zip(grads, self._variables)) 379 | return norm 380 | 381 | def _apply_weight_decay(self, strategy): 382 | print('Applied weight decay to variables:') 383 | for var in self._variables: 384 | if re.search(self._wdpattern, self._name + '/' + var.name): 385 | print('- ' + self._name + '/' + var.name) 386 | strategy.extended.update(var, lambda var: self._wd * var) 387 | 388 | 389 | def args_type(default): 390 | if isinstance(default, bool): 391 | return lambda x: bool(['False', 'True'].index(x)) 392 | if isinstance(default, int): 393 | return lambda x: float(x) if ('e' in x or '.' 
in x) else int(x) 394 | if isinstance(default, pathlib.Path): 395 | return lambda x: pathlib.Path(x).expanduser() 396 | return type(default) 397 | 398 | 399 | def static_scan(fn, inputs, start, reverse=False): 400 | last = start 401 | outputs = [[] for _ in tf.nest.flatten(start)] 402 | indices = range(len(tf.nest.flatten(inputs)[0])) 403 | if reverse: 404 | indices = reversed(indices) 405 | for index in indices: 406 | inp = tf.nest.map_structure(lambda x: x[index], inputs) 407 | last = fn(last, inp) 408 | [o.append(l) for o, l in zip(outputs, tf.nest.flatten(last))] 409 | if reverse: 410 | outputs = [list(reversed(x)) for x in outputs] 411 | outputs = [tf.stack(x, 0) for x in outputs] 412 | return tf.nest.pack_sequence_as(start, outputs) 413 | 414 | def static_scan_action(fn1, fn2, inputs, start, reverse=False): 415 | last = start 416 | outputs = [[] for _ in tf.nest.flatten(start)] 417 | indices = range(len(tf.nest.flatten(inputs)[0])) 418 | actions = [] 419 | if reverse: 420 | indices = reversed(indices) 421 | for index in indices: 422 | inp = tf.nest.map_structure(lambda x: x[index], inputs) 423 | action = fn2(last) 424 | last = fn1(last, action, inp) 425 | [o.append(l) for o, l in zip(outputs, tf.nest.flatten(last))] 426 | actions.append(action) 427 | if reverse: 428 | outputs = [list(reversed(x)) for x in outputs] 429 | outputs = [tf.stack(x, 0) for x in outputs] 430 | return tf.nest.pack_sequence_as(start, outputs), actions[0] 431 | 432 | 433 | 434 | def _mnd_sample(self, sample_shape=(), seed=None, name='sample'): 435 | return tf.random.normal( 436 | tuple(sample_shape) + tuple(self.event_shape), 437 | self.mean(), self.stddev(), self.dtype, seed, name) 438 | 439 | 440 | tfd.MultivariateNormalDiag.sample = _mnd_sample 441 | 442 | 443 | def _cat_sample(self, sample_shape=(), seed=None, name='sample'): 444 | assert len(sample_shape) in (0, 1), sample_shape 445 | assert len(self.logits_parameter().shape) == 2 446 | indices = tf.random.categorical( 447 | self.logits_parameter(), sample_shape[0] if sample_shape else 1, 448 | self.dtype, seed, name) 449 | if not sample_shape: 450 | indices = indices[..., 0] 451 | return indices 452 | 453 | 454 | tfd.Categorical.sample = _cat_sample 455 | 456 | 457 | class Every: 458 | 459 | def __init__(self, every): 460 | self._every = every 461 | self._last = None 462 | 463 | def __call__(self, step): 464 | if self._last is None: 465 | self._last = step 466 | return True 467 | if step >= self._last + self._every: 468 | self._last += self._every 469 | return True 470 | return False 471 | 472 | 473 | class Once: 474 | 475 | def __init__(self): 476 | self._once = True 477 | 478 | def __call__(self): 479 | if self._once: 480 | self._once = False 481 | return True 482 | return False 483 | 484 | 485 | def load_imgnet(train): 486 | import pickle 487 | name = 'train' if train else 'valid' 488 | 489 | with open('./natural_{}.pkl'.format(name), 'rb') as fin: 490 | imgnet = pickle.load(fin) 491 | 492 | imgnet = np.transpose(imgnet, axes=(0, 1, 3, 4, 2)) 493 | 494 | return imgnet 495 | -------------------------------------------------------------------------------- /wrappers.py: -------------------------------------------------------------------------------- 1 | import atexit 2 | import functools 3 | import sys 4 | import threading 5 | import traceback 6 | import gym 7 | import numpy as np 8 | from PIL import Image 9 | import cv2 10 | 11 | class DeepMindControl: 12 | 13 | def __init__(self, name, size=(64, 64), camera=None): 14 | domain, task = name.split('_', 1) 
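# e.g. name='cheetah_run' -> domain='cheetah', task='run'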
15 | if domain == 'cup': # Only domain with multiple words. 16 | domain = 'ball_in_cup' 17 | if isinstance(domain, str): 18 | from dm_control import suite 19 | self._env = suite.load(domain, task) 20 | else: 21 | assert task is None 22 | self._env = domain() 23 | self._size = size 24 | if camera is None: 25 | camera = dict(quadruped=2).get(domain, 0) 26 | self._camera = camera 27 | 28 | @property 29 | def observation_space(self): 30 | spaces = {} 31 | for key, value in self._env.observation_spec().items(): 32 | spaces[key] = gym.spaces.Box( 33 | -np.inf, np.inf, value.shape, dtype=np.float32) 34 | spaces['image'] = gym.spaces.Box( 35 | 0, 255, self._size + (3,), dtype=np.uint8) 36 | return gym.spaces.Dict(spaces) 37 | 38 | @property 39 | def action_space(self): 40 | spec = self._env.action_spec() 41 | return gym.spaces.Box(spec.minimum, spec.maximum, dtype=np.float32) 42 | 43 | def step(self, action): 44 | time_step = self._env.step(action) 45 | obs = dict(time_step.observation) 46 | obs['image'] = self.render() 47 | reward = time_step.reward or 0 48 | done = time_step.last() 49 | info = {'discount': np.array(time_step.discount, np.float32)} 50 | return obs, reward, done, info 51 | 52 | def reset(self): 53 | time_step = self._env.reset() 54 | obs = dict(time_step.observation) 55 | obs['image'] = self.render() 56 | return obs 57 | 58 | def render(self, *args, **kwargs): 59 | if kwargs.get('mode', 'rgb_array') != 'rgb_array': 60 | raise ValueError("Only render mode 'rgb_array' is supported.") 61 | return self._env.physics.render(*self._size, camera_id=self._camera) 62 | 63 | 64 | class Atari: 65 | 66 | LOCK = threading.Lock() 67 | 68 | def __init__( 69 | self, name, action_repeat=4, size=(84, 84), grayscale=True, noops=30, 70 | life_done=False, sticky_actions=True): 71 | import gym 72 | version = 0 if sticky_actions else 4 73 | name = ''.join(word.title() for word in name.split('_')) 74 | with self.LOCK: 75 | self._env = gym.make('{}NoFrameskip-v{}'.format(name, version)) 76 | self._action_repeat = action_repeat 77 | self._size = size 78 | self._grayscale = grayscale 79 | self._noops = noops 80 | self._life_done = life_done 81 | self._lives = None 82 | shape = self._env.observation_space.shape[:2] + \ 83 | (() if grayscale else (3,)) 84 | self._buffers = [np.empty(shape, dtype=np.uint8) for _ in range(2)] 85 | self._random = np.random.RandomState(seed=None) 86 | 87 | @property 88 | def observation_space(self): 89 | shape = self._size + (1 if self._grayscale else 3,) 90 | space = gym.spaces.Box(low=0, high=255, shape=shape, dtype=np.uint8) 91 | return gym.spaces.Dict({'image': space}) 92 | 93 | @property 94 | def action_space(self): 95 | return self._env.action_space 96 | 97 | def close(self): 98 | return self._env.close() 99 | 100 | def reset(self): 101 | with self.LOCK: 102 | self._env.reset() 103 | noops = self._random.randint(1, self._noops + 1) 104 | for _ in range(noops): 105 | done = self._env.step(0)[2] 106 | if done: 107 | with self.LOCK: 108 | self._env.reset() 109 | self._lives = self._env.ale.lives() 110 | if self._grayscale: 111 | self._env.ale.getScreenGrayscale(self._buffers[0]) 112 | else: 113 | self._env.ale.getScreenRGB2(self._buffers[0]) 114 | self._buffers[1].fill(0) 115 | return self._get_obs() 116 | 117 | def step(self, action): 118 | total_reward = 0.0 119 | for step in range(self._action_repeat): 120 | _, reward, done, info = self._env.step(action) 121 | total_reward += reward 122 | if self._life_done: 123 | lives = self._env.ale.lives() 124 | done = done or lives < 
self._lives 125 | self._lives = lives 126 | if done: 127 | break 128 | elif step >= self._action_repeat - 2: 129 | index = step - (self._action_repeat - 2) 130 | if self._grayscale: 131 | self._env.ale.getScreenGrayscale(self._buffers[index]) 132 | else: 133 | self._env.ale.getScreenRGB2(self._buffers[index]) 134 | obs = self._get_obs() 135 | return obs, total_reward, done, info 136 | 137 | def render(self, mode): 138 | return self._env.render(mode) 139 | 140 | def _get_obs(self): 141 | if self._action_repeat > 1: 142 | np.maximum(self._buffers[0], 143 | self._buffers[1], out=self._buffers[0]) 144 | image = np.array(Image.fromarray(self._buffers[0]).resize( 145 | self._size, Image.BILINEAR)) 146 | image = np.clip(image, 0, 255).astype(np.uint8) 147 | image = image[:, :, None] if self._grayscale else image 148 | return {'image': image} 149 | 150 | 151 | class Collect: 152 | 153 | def __init__(self, env, callbacks=None, precision=32): 154 | self._env = env 155 | self._callbacks = callbacks or () 156 | self._precision = precision 157 | self._episode = None 158 | 159 | def __getattr__(self, name): 160 | return getattr(self._env, name) 161 | 162 | def step(self, action): 163 | obs, reward, done, info = self._env.step(action) 164 | obs = {k: self._convert(v) for k, v in obs.items()} 165 | transition = obs.copy() 166 | transition['action'] = action 167 | transition['reward'] = reward 168 | transition['discount'] = info.get( 169 | 'discount', np.array(1 - float(done))) 170 | self._episode.append(transition) 171 | if done: 172 | episode = {k: [t[k] for t in self._episode] 173 | for k in self._episode[0]} 174 | episode = {k: self._convert(v) for k, v in episode.items()} 175 | info['episode'] = episode 176 | for callback in self._callbacks: 177 | callback(episode) 178 | return obs, reward, done, info 179 | 180 | def reset(self): 181 | obs = self._env.reset() 182 | transition = obs.copy() 183 | transition['action'] = np.zeros(self._env.action_space.shape) 184 | transition['reward'] = 0.0 185 | transition['discount'] = 1.0 186 | self._episode = [transition] 187 | return obs 188 | 189 | def _convert(self, value): 190 | value = np.array(value) 191 | if np.issubdtype(value.dtype, np.floating): 192 | dtype = {16: np.float16, 32: np.float32, 193 | 64: np.float64}[self._precision] 194 | elif np.issubdtype(value.dtype, np.signedinteger): 195 | dtype = {16: np.int16, 32: np.int32, 64: np.int64}[self._precision] 196 | elif np.issubdtype(value.dtype, np.uint8): 197 | dtype = np.uint8 198 | else: 199 | raise NotImplementedError(value.dtype) 200 | return value.astype(dtype) 201 | 202 | 203 | class TimeLimit: 204 | 205 | def __init__(self, env, duration): 206 | self._env = env 207 | self._duration = duration 208 | self._step = None 209 | 210 | def __getattr__(self, name): 211 | return getattr(self._env, name) 212 | 213 | def step(self, action): 214 | assert self._step is not None, 'Must reset environment.' 
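# Note: `duration` counts wrapper steps; make_env passes
# config.time_limit / config.action_repeat, so config.time_limit is
# measured in raw environment frames.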
215 | obs, reward, done, info = self._env.step(action) 216 | self._step += 1 217 | if self._step >= self._duration: 218 | done = True 219 | if 'discount' not in info: 220 | info['discount'] = np.array(1.0).astype(np.float32) 221 | self._step = None 222 | return obs, reward, done, info 223 | 224 | def reset(self): 225 | self._step = 0 226 | return self._env.reset() 227 | 228 | class NaturalMujoco: 229 | 230 | def __init__(self, env, dataset): 231 | self.dataset = dataset 232 | self._pointer = (np.random.randint(self.dataset.shape[0]), 0) 233 | self._env = env 234 | 235 | def __getattr__(self, name): 236 | return getattr(self._env, name) 237 | 238 | def step(self, action): 239 | obs, reward, done, info = self._env.step(action) 240 | obs = self._noisify_obs(obs, done) 241 | return obs, reward, done, info 242 | 243 | def _noisify_obs(self, obs, done): 244 | obs = obs.copy() 245 | img = obs['image'] 246 | video_id, img_id = self._pointer 247 | # fgbg = cv2.createBackgroundSubtractorKNN() 248 | # fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True) 249 | # temp = fgbg.apply(img) != 255 250 | # fgmask = temp[:, :, None].repeat(3, axis=2) 251 | # fgmask = ~(fgbg.apply(img) == 255)[:, :, None].repeat(3, axis=2) 252 | 253 | # ugly hack to extract only yellow pixels 254 | fgmask = (img[:, :, 0] > 100)[:, :, None].repeat(3, axis=2) 255 | 256 | if done: 257 | video_id = np.random.randint(self.dataset.shape[0]) 258 | img_id = 0 259 | else: 260 | img_id = (img_id + 1) % self.dataset.shape[1] 261 | 262 | background = self.dataset[video_id, img_id] 263 | img = img * fgmask + background * (~fgmask) 264 | 265 | self._pointer = (video_id, img_id) 266 | 267 | obs['image'] = img 268 | 269 | return obs 270 | 271 | def reset(self): 272 | obs = self._env.reset() 273 | obs = self._noisify_obs(obs, False) 274 | return obs 275 | 276 | 277 | 278 | class ActionRepeat: 279 | 280 | def __init__(self, env, amount): 281 | self._env = env 282 | self._amount = amount 283 | 284 | def __getattr__(self, name): 285 | return getattr(self._env, name) 286 | 287 | def step(self, action): 288 | done = False 289 | total_reward = 0 290 | current_step = 0 291 | while current_step < self._amount and not done: 292 | obs, reward, done, info = self._env.step(action) 293 | total_reward += reward 294 | current_step += 1 295 | return obs, total_reward, done, info 296 | 297 | 298 | class NormalizeActions: 299 | 300 | def __init__(self, env): 301 | self._env = env 302 | self._mask = np.logical_and( 303 | np.isfinite(env.action_space.low), 304 | np.isfinite(env.action_space.high)) 305 | self._low = np.where(self._mask, env.action_space.low, -1) 306 | self._high = np.where(self._mask, env.action_space.high, 1) 307 | 308 | def __getattr__(self, name): 309 | return getattr(self._env, name) 310 | 311 | @property 312 | def action_space(self): 313 | low = np.where(self._mask, -np.ones_like(self._low), self._low) 314 | high = np.where(self._mask, np.ones_like(self._low), self._high) 315 | return gym.spaces.Box(low, high, dtype=np.float32) 316 | 317 | def step(self, action): 318 | original = (action + 1) / 2 * (self._high - self._low) + self._low 319 | original = np.where(self._mask, original, action) 320 | return self._env.step(original) 321 | 322 | 323 | class ObsDict: 324 | 325 | def __init__(self, env, key='obs'): 326 | self._env = env 327 | self._key = key 328 | 329 | def __getattr__(self, name): 330 | return getattr(self._env, name) 331 | 332 | @property 333 | def observation_space(self): 334 | spaces = {self._key: 
self._env.observation_space} 335 | return gym.spaces.Dict(spaces) 336 | 337 | @property 338 | def action_space(self): 339 | return self._env.action_space 340 | 341 | def step(self, action): 342 | obs, reward, done, info = self._env.step(action) 343 | obs = {self._key: np.array(obs)} 344 | return obs, reward, done, info 345 | 346 | def reset(self): 347 | obs = self._env.reset() 348 | obs = {self._key: np.array(obs)} 349 | return obs 350 | 351 | 352 | class OneHotAction: 353 | 354 | def __init__(self, env): 355 | assert isinstance(env.action_space, gym.spaces.Discrete) 356 | self._env = env 357 | 358 | def __getattr__(self, name): 359 | return getattr(self._env, name) 360 | 361 | @property 362 | def action_space(self): 363 | shape = (self._env.action_space.n,) 364 | space = gym.spaces.Box(low=0, high=1, shape=shape, dtype=np.float32) 365 | space.sample = self._sample_action 366 | return space 367 | 368 | def step(self, action): 369 | index = np.argmax(action).astype(int) 370 | reference = np.zeros_like(action) 371 | reference[index] = 1 372 | if not np.allclose(reference, action): 373 | raise ValueError(f'Invalid one-hot action:\n{action}') 374 | return self._env.step(index) 375 | 376 | def reset(self): 377 | return self._env.reset() 378 | 379 | def _sample_action(self): 380 | actions = self._env.action_space.n 381 | index = self._random.randint(0, actions) 382 | reference = np.zeros(actions, dtype=np.float32) 383 | reference[index] = 1.0 384 | return reference 385 | 386 | 387 | class RewardObs: 388 | 389 | def __init__(self, env): 390 | self._env = env 391 | 392 | def __getattr__(self, name): 393 | return getattr(self._env, name) 394 | 395 | @property 396 | def observation_space(self): 397 | spaces = self._env.observation_space.spaces 398 | assert 'reward' not in spaces 399 | spaces['reward'] = gym.spaces.Box(-np.inf, np.inf, dtype=np.float32) 400 | return gym.spaces.Dict(spaces) 401 | 402 | def step(self, action): 403 | obs, reward, done, info = self._env.step(action) 404 | obs['reward'] = reward 405 | return obs, reward, done, info 406 | 407 | def reset(self): 408 | obs = self._env.reset() 409 | obs['reward'] = 0.0 410 | return obs 411 | 412 | 413 | class Async: 414 | 415 | _ACCESS = 1 416 | _CALL = 2 417 | _RESULT = 3 418 | _EXCEPTION = 4 419 | _CLOSE = 5 420 | 421 | def __init__(self, ctor, strategy='process'): 422 | self._strategy = strategy 423 | if strategy == 'none': 424 | self._env = ctor() 425 | elif strategy == 'thread': 426 | import multiprocessing.dummy as mp 427 | elif strategy == 'process': 428 | import multiprocessing as mp 429 | else: 430 | raise NotImplementedError(strategy) 431 | if strategy != 'none': 432 | self._conn, conn = mp.Pipe() 433 | self._process = mp.Process(target=self._worker, args=(ctor, conn)) 434 | atexit.register(self.close) 435 | self._process.start() 436 | self._obs_space = None 437 | self._action_space = None 438 | 439 | @property 440 | def observation_space(self): 441 | if not self._obs_space: 442 | self._obs_space = self.__getattr__('observation_space') 443 | return self._obs_space 444 | 445 | @property 446 | def action_space(self): 447 | if not self._action_space: 448 | self._action_space = self.__getattr__('action_space') 449 | return self._action_space 450 | 451 | def __getattr__(self, name): 452 | if self._strategy == 'none': 453 | return getattr(self._env, name) 454 | self._conn.send((self._ACCESS, name)) 455 | return self._receive() 456 | 457 | def call(self, name, *args, **kwargs): 458 | blocking = kwargs.pop('blocking', True) 459 | if 
self._strategy == 'none': 460 | return functools.partial(getattr(self._env, name), *args, **kwargs) 461 | payload = name, args, kwargs 462 | self._conn.send((self._CALL, payload)) 463 | promise = self._receive 464 | return promise() if blocking else promise 465 | 466 | def close(self): 467 | if self._strategy == 'none': 468 | try: 469 | self._env.close() 470 | except AttributeError: 471 | pass 472 | return 473 | try: 474 | self._conn.send((self._CLOSE, None)) 475 | self._conn.close() 476 | except IOError: 477 | # The connection was already closed. 478 | pass 479 | self._process.join() 480 | 481 | def step(self, action, blocking=True): 482 | return self.call('step', action, blocking=blocking) 483 | 484 | def reset(self, blocking=True): 485 | return self.call('reset', blocking=blocking) 486 | 487 | def _receive(self): 488 | try: 489 | message, payload = self._conn.recv() 490 | except ConnectionResetError: 491 | raise RuntimeError('Environment worker crashed.') 492 | # Re-raise exceptions in the main process. 493 | if message == self._EXCEPTION: 494 | stacktrace = payload 495 | raise Exception(stacktrace) 496 | if message == self._RESULT: 497 | return payload 498 | raise KeyError(f'Received message of unexpected type {message}') 499 | 500 | def _worker(self, ctor, conn): 501 | try: 502 | env = ctor() 503 | while True: 504 | try: 505 | # Only block for short times to have keyboard exceptions be raised. 506 | if not conn.poll(0.1): 507 | continue 508 | message, payload = conn.recv() 509 | except (EOFError, KeyboardInterrupt): 510 | break 511 | if message == self._ACCESS: 512 | name = payload 513 | result = getattr(env, name) 514 | conn.send((self._RESULT, result)) 515 | continue 516 | if message == self._CALL: 517 | name, args, kwargs = payload 518 | result = getattr(env, name)(*args, **kwargs) 519 | conn.send((self._RESULT, result)) 520 | continue 521 | if message == self._CLOSE: 522 | assert payload is None 523 | break 524 | raise KeyError(f'Received message of unknown type {message}') 525 | except Exception: 526 | stacktrace = ''.join(traceback.format_exception(*sys.exc_info())) 527 | print(f'Error in environment process: {stacktrace}') 528 | conn.send((self._EXCEPTION, stacktrace)) 529 | conn.close() 530 | --------------------------------------------------------------------------------
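Usage sketch (added commentary, not part of the repository): how the promise-based API of `wrappers.Async` is driven, mirroring the pattern in `tools.simulate`. The `DummyEnv` from tools.py stands in for a real environment.

# A minimal sketch, assuming this file is importable as `wrappers` and
# tools.DummyEnv is available on the path.
from tools import DummyEnv
from wrappers import Async

env = Async(lambda: DummyEnv(), strategy='process')
obs = env.reset(blocking=True)                        # blocking call returns the result
promise = env.step(env.action_space.sample(), blocking=False)
obs, reward, done, info = promise()                   # resolve the promise later
env.close()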