├── LICENSE
├── README.md
├── graphs.png
├── main.py
├── network.py
├── noises
│   ├── ounoise.py
│   └── param_noise.py
├── normalized_actions.py
├── policies
│   ├── generative.py
│   └── policy.py
├── replay_memory.py
└── utils.py

/LICENSE:
--------------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for
software and other kinds of works.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and
modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
This repo contains the code for an implementation of [Distributional Policy Optimization: An Alternative Approach for Continuous Control](https://arxiv.org/abs/1905.09855) (NeurIPS 2019). The theoretical framework is named DPO (Distributional Policy Optimization), whereas the deep learning approach to attaining it is named GAC (Generative Actor Critic).

# How to run

An example of how to run the code is provided below. The exact hyper-parameters for each domain are listed in the appendix of the paper.

    python main.py --visualize --env-name Hopper-v2 --training_actor_samples 32 --noise normal --batch_size 128 --noise_scale 0.2 --print --num_steps 1000000 --target_policy exponential --train_frequency 2048 --replay_size 200000
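
For a quick smoke test of the setup, the same entry point can be run with much smaller step counts; the values below are illustrative rather than the hyper-parameters used in the paper:

    python main.py --env-name Hopper-v2 --num_steps 20000 --start_timesteps 1000 --eval_freq 5000 --eval_episodes 3 --print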

# Visualizing

You may visualize the run by adding the `--visualize` flag and starting a visdom server as follows:

    python3.6 -m visdom.server

# Requirements

- mujoco - see installation instructions here: https://github.com/openai/mujoco-py
- gym
- numpy
- tqdm - for tracking the time remaining in an experiment
- visdom - for visualization of the learning process

# Performance

The graphs below are taken from the paper and compare the performance of our proposed method against various baselines. The best-performing method is the autoregressive network.

![performance graphs](graphs.png?raw=true)
--------------------------------------------------------------------------------
/graphs.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tesslerc/GAC/841584cce21fad69950f4b2b8f691d9d3254a2d8/graphs.png
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import argparse
import os
import gym
import numpy as np
import pickle
from tqdm import trange
import visdom
import torch

from policies.generative import Generative
from policies.policy import hard_update
from normalized_actions import NormalizedActions
from noises.ounoise import OrnsteinUhlenbeckActionNoise, NormalActionNoise
from utils import save_model, vis_plot

parser = argparse.ArgumentParser()
parser.add_argument('--env-name', default="HalfCheetah-v2",
                    help='name of the environment to run')
parser.add_argument('--gamma', type=float, default=0.99, metavar='G',
                    help='discount factor for reward (default: 0.99)')
parser.add_argument('--tau', type=float, default=0.01, metavar='G',
                    help='soft update coefficient for the target networks (default: 0.01)')
parser.add_argument('--noise', default='normal', choices=['ou', 'normal'])
parser.add_argument('--noise_scale', type=float, default=0.2, metavar='G',
                    help='scale of the exploration noise (default: 0.2)')
parser.add_argument('--batch_size', type=int, default=64, metavar='N', help='batch size (default: 64)')
parser.add_argument('--num_epochs', type=int, default=None, metavar='N', help='number of epochs (default: None)')
parser.add_argument('--num_epochs_cycles', type=int, default=20, metavar='N')
parser.add_argument('--num_steps', type=int, default=1000000, metavar='N',
                    help='number of training steps (default: 1000000)')
parser.add_argument('--start_timesteps', type=int, default=10000, metavar='N')
parser.add_argument('--eval_freq', type=int, default=5000, metavar='N')
parser.add_argument('--eval_episodes', type=int, default=100, metavar='N')
parser.add_argument('--train_frequency', type=int, default=2048, metavar='N')
parser.add_argument('--replay_size', type=int, default=50000, metavar='N',
                    help='size of replay buffer (default: 50000)')
parser.add_argument('--training_actor_samples', type=int, default=16, metavar='N',
                    help='number of times to sample from the actor for calculating the losses (default: 16)')
parser.add_argument('--visualize', default=False, action='store_true')
parser.add_argument('--experiment_name', default=None, type=str,
                    help='For multiple different experiments, provide an informative experiment name')
parser.add_argument('--print', default=False, action='store_true')
parser.add_argument('--not_autoregressive', default=False, action='store_true')
parser.add_argument('--q_normalization', type=float, default=0.01,
                    help='Uniformly smooth the Q function in this range.')
parser.add_argument('--target_policy', type=str, default='exponential', choices=['linear', 'boltzman', 'uniform', 'exponential'],
                    help='Target policy is constructed based on this operator.')
parser.add_argument('--target_policy_q', type=str, default='min', choices=['min', 'max', 'mean', 'none'],
                    help='The Q value for each sample is determined based on this operator over the two Q networks.')
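# Note: --target_policy_q controls how the two critic outputs are reduced per
# sampled action when building the target policy: 'min', 'max' and 'mean'
# combine both Q networks, while 'none' uses only the first one (see
# policies/generative.py).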
parser.add_argument('--temp', type=float, default=1.0,
                    help='Boltzmann temperature used when constructing the target policy')

args = parser.parse_args()

assert args.training_actor_samples > 0

env = NormalizedActions(gym.make(args.env_name))
eval_env = NormalizedActions(gym.make(args.env_name))

agent = Generative(gamma=args.gamma, tau=args.tau, num_inputs=env.observation_space.shape[0],
                   action_space=env.action_space, replay_size=args.replay_size, actor_samples=args.training_actor_samples,
                   q_normalization=args.q_normalization, target_policy=args.target_policy,
                   target_policy_q=args.target_policy_q, autoregressive=not args.not_autoregressive,
                   temp=args.temp)

results_dict = {'eval_rewards': [],
                'value_losses': [],
                'policy_losses': [],
                'train_rewards': []
                }

base_dir = os.getcwd() + '/models/' + args.env_name + '/'

if args.experiment_name is not None:
    base_dir += args.experiment_name + '/'

run_number = 0
while os.path.exists(base_dir + str(run_number)):
    run_number += 1
base_dir = base_dir + str(run_number)
os.makedirs(base_dir)

if args.noise == 'ou':
    noise = OrnsteinUhlenbeckActionNoise(mu=np.zeros(env.action_space.shape[0]),
                                         sigma=float(args.noise_scale) * np.ones(env.action_space.shape[0])
                                         )
elif args.noise == 'normal':
    noise = NormalActionNoise(mu=np.zeros(env.action_space.shape[0]),
                              sigma=float(args.noise_scale) * np.ones(env.action_space.shape[0])
                              )
else:
    noise = None


def reset_noise(a_noise):
    if a_noise is not None:
        a_noise.reset()


print(base_dir)

state = agent.Tensor([env.reset()])
episode_reward = 0
agent.train()

reset_noise(noise)

if args.visualize:
    vis = visdom.Visdom(env=base_dir)
else:
    vis = None

episode_timesteps = 0
for step in trange(args.num_steps):
    with torch.no_grad():
        if step % args.eval_freq == 0:
            eval_reward = 0
            for test_epoch in range(args.eval_episodes):
                done = False
                eval_state = agent.Tensor([eval_env.reset()])
                while not done:
                    action = agent.select_action(eval_state)

                    next_eval_state, reward, done, _ = eval_env.step(action.cpu().numpy()[0])
                    eval_reward += reward

                    next_eval_state = agent.Tensor([next_eval_state])

                    eval_state = next_eval_state
            results_dict['eval_rewards'].append((step, eval_reward * 1.0 / args.eval_episodes))
            if args.print:
                try:
                    print('env: {0}, run number: {1}, step: {2}, reward: {3}, value loss: {4}, policy loss: {5}'.format(
                        args.env_name,
                        run_number,
                        results_dict['eval_rewards'][-1][0],
                        results_dict['eval_rewards'][-1][1],
                        results_dict['value_losses'][-1][1],
                        results_dict['policy_losses'][-1][1]))
                except IndexError:
                    # the loss lists are empty until the first parameter update
                    pass
            save_model(actor=agent.actor, basedir=base_dir)
            with open(base_dir + '/results', 'wb') as f:
                pickle.dump(results_dict, f)

        if step < args.start_timesteps:
            action = torch.Tensor(env.action_space.sample()).to(agent.device).unsqueeze(0)
        else:
            action = agent.select_action(state, noise)
        next_state, reward, done, _ = env.step(action.cpu().numpy()[0])
        done_bool = False if episode_timesteps + 1 == env.env._max_episode_steps else done

        episode_timesteps += 1
        episode_reward += reward

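        # mask stores (1 - done_bool): episodes cut off by the environment's step
        # limit keep mask = 1, so learning can still bootstrap from next_state;
        # only true terminal transitions zero out the bootstrap term.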
        action = agent.Tensor(action)
        mask = agent.Tensor([not done_bool])
        next_state = agent.Tensor([next_state])
        reward = agent.Tensor([reward])

        agent.store_transition(state, action, mask, next_state, reward)

        state = next_state

        if done:
            results_dict['train_rewards'].append((step, np.mean(episode_reward)))
            episode_reward = 0
            episode_timesteps = 0
            state = agent.Tensor([env.reset()])
            reset_noise(noise)

    if len(agent.memory) > args.batch_size and step % args.train_frequency == 0:
        value_loss, policy_loss = agent.update_parameters(batch_size=args.batch_size,
                                                          number_of_iterations=args.train_frequency)

        results_dict['value_losses'].append((step, value_loss))
        results_dict['policy_losses'].append((step, policy_loss))

        vis_plot(vis, results_dict)


with open(base_dir + '/results', 'wb') as f:
    pickle.dump(results_dict, f)
save_model(actor=agent.actor, basedir=base_dir)

env.close()
--------------------------------------------------------------------------------
/network.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np


class Categorical(nn.Module):
    def __init__(self, num_inputs, num_outputs):
        super(Categorical, self).__init__()
        self.linear = nn.Linear(num_inputs, num_outputs)

    def forward(self, x):
        x = self.linear(x)
        return x

    def sample(self, x, deterministic=False):
        x = self(x)

        probs = F.softmax(x, dim=-1)
        if deterministic is False:
            action = probs.multinomial(1)
        else:
            action = probs.max(1)[1]
        return action

    def logprobs_and_entropy(self, x):
        x = self(x)

        log_probs = F.log_softmax(x, dim=-1)
        probs = F.softmax(x, dim=-1)

        dist_entropy = -(log_probs * probs).sum(-1).mean()
        return probs, dist_entropy


class Actor(nn.Module):
    def __init__(self, num_inputs, action_space, num_outputs):
        super(Actor, self).__init__()
        self.action_dim = action_space.shape[0]
        self.num_outputs = num_outputs

        self.common = nn.Sequential(
            nn.Linear(num_inputs, 400),
            nn.ReLU(),
            nn.Linear(400, 300),
            nn.ReLU()
        )

        self.mu = nn.Linear(300, self.action_dim * self.num_outputs)
        self.dist = Categorical(300, self.num_outputs)

    def forward(self, x):
        common = self.common(x)
        mu = torch.tanh(self.mu(common)).view(x.shape[0], self.num_outputs, self.action_dim)
        action = self.dist.sample(common)
        # the distribution head operates on the 300-dimensional shared features,
        # not on the raw network input
        probs, dist_entropy = self.dist.logprobs_and_entropy(common)
        return mu, action, probs, dist_entropy


def cosine_basis_functions(x, n_basis_functions=64):
    x = x.view(-1, 1)
    i_pi = np.tile(np.arange(1, n_basis_functions + 1, dtype=np.float32), (x.shape[0], 1)) * np.pi
    i_pi = torch.Tensor(i_pi)
    if x.is_cuda:
        i_pi = i_pi.cuda()
    embedding = (x * i_pi).cos()
    return embedding

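# cosine_basis_functions computes the IQN-style embedding cos(i * pi * x) for
# i = 1..n_basis_functions; CosineBasisLinear below passes these features through
# a linear layer, turning a scalar (a noise sample tau or an action coordinate)
# into a learned embedding vector.
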
class CosineBasisLinear(nn.Module):
    def __init__(self, n_basis_functions, out_size):
        super(CosineBasisLinear, self).__init__()
        self.linear = nn.Linear(n_basis_functions, out_size)
        self.n_basis_functions = n_basis_functions
        self.out_size = out_size

    def forward(self, x):
        batch_size = x.shape[0]
        h = cosine_basis_functions(x, self.n_basis_functions)
        out = self.linear(h)
        out = out.view(batch_size, -1, self.out_size)
        return out


class AutoRegressiveStochasticActor(nn.Module):
    def __init__(self, num_inputs, action_dim, n_basis_functions):
        super(AutoRegressiveStochasticActor, self).__init__()
        self.action_dim = action_dim
        self.state_embedding = nn.Linear(num_inputs, 400)
        self.noise_embedding = CosineBasisLinear(n_basis_functions, 400)
        self.action_embedding = CosineBasisLinear(n_basis_functions, 400)

        self.rnn = nn.GRU(800, 400, batch_first=True)
        self.l1 = nn.Linear(400, 400)
        self.l2 = nn.Linear(400, 1)

    def forward(self, state, taus, actions=None):
        if actions is not None:
            return self.supervised_forward(state, taus, actions)
        batch_size = state.shape[0]
        # batch x 1 x 400
        state_embedding = F.leaky_relu(self.state_embedding(state)).unsqueeze(1)
        # batch x action dim x 400
        noise_embedding = self.noise_embedding(taus)

        action_list = []

        action = torch.zeros(batch_size, 1)
        if state.is_cuda:
            action = action.cuda()
        hidden_state = None

        for idx in range(self.action_dim):
            # batch x 1 x 400
            action_embedding = F.leaky_relu(self.action_embedding(action.view(batch_size, 1, 1)))
            rnn_input = torch.cat([state_embedding, action_embedding], dim=2)
            gru_out, hidden_state = self.rnn(rnn_input, hidden_state)

            # batch x 400
            hadamard_product = gru_out.squeeze(1) * noise_embedding[:, idx, :]
            action = torch.tanh(self.l2(F.leaky_relu(self.l1(hadamard_product))))
            action_list.append(action)

        actions = torch.stack(action_list, dim=1).squeeze(-1)
        return actions

    def supervised_forward(self, state, taus, actions):
        # batch x action dim x 400
        state_embedding = F.leaky_relu(self.state_embedding(state)).unsqueeze(1).expand(-1, self.action_dim, -1)
        # batch x action dim x 400
        shifted_actions = torch.zeros_like(actions)
        shifted_actions[:, 1:] = actions[:, :-1]
        provided_action_embedding = F.leaky_relu(self.action_embedding(shifted_actions))

        rnn_input = torch.cat([state_embedding, provided_action_embedding], dim=2)
        gru_out, _ = self.rnn(rnn_input)

        # batch x action dim x 400
        noise_embedding = self.noise_embedding(taus)
        # batch x action dim x 400
        hadamard_product = gru_out * noise_embedding
        actions = torch.tanh(self.l2(F.leaky_relu(self.l1(hadamard_product))))
        return actions.squeeze(-1)

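# The autoregressive actor generates an action one coordinate at a time: a GRU
# conditions each coordinate on the state embedding and the previously generated
# coordinate, and the per-coordinate noise embedding (tau ~ U[0, 1]) enters via a
# Hadamard product. supervised_forward teacher-forces the (shifted) target actions
# so that all coordinates can be trained in parallel.
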
class StochasticActor(nn.Module):
    def __init__(self, num_inputs, action_dim, n_basis_functions):
        super(StochasticActor, self).__init__()

        hidden_size = 400

        self.hidden_size = hidden_size
        self.action_dim = action_dim
        self.l1 = nn.Linear(num_inputs, self.hidden_size)
        self.phi = CosineBasisLinear(n_basis_functions, self.hidden_size)
        self.l2 = nn.Linear(self.hidden_size, 200)
        self.l3 = nn.Linear(200, self.action_dim)

    def forward(self, state, tau, actions):
        # actions is unused here; the argument is kept for interface parity with
        # AutoRegressiveStochasticActor
        # batch x 400
        state_embedding = F.leaky_relu(self.l1(state))
        # batch x 400
        noise_embedding = F.leaky_relu(self.phi(tau)).view(-1, self.hidden_size)

        hadamard_product = state_embedding * noise_embedding

        l2 = F.leaky_relu(self.l2(hadamard_product))

        actions = torch.tanh(self.l3(l2))

        return actions


class Critic(nn.Module):
    def __init__(self, num_inputs, num_networks=1):
        super(Critic, self).__init__()
        self.num_networks = num_networks
        self.q1 = nn.Sequential(
            nn.Linear(num_inputs, 400),
            nn.LeakyReLU(),
            nn.Linear(400, 300),
            nn.LeakyReLU(),
            nn.Linear(300, 1)
        )

        if self.num_networks == 2:
            self.q2 = nn.Sequential(
                nn.Linear(num_inputs, 400),
                nn.LeakyReLU(),
                nn.Linear(400, 300),
                nn.LeakyReLU(),
                nn.Linear(300, 1)
            )
        elif self.num_networks > 2 or self.num_networks < 1:
            raise NotImplementedError

    def forward(self, x):
        if self.num_networks == 1:
            return self.q1(x)
        return self.q1(x), self.q2(x)
--------------------------------------------------------------------------------
/noises/ounoise.py:
--------------------------------------------------------------------------------
import numpy as np


# Taken from OpenAI baselines - baselines/ddpg/noise.py

class ActionNoise(object):
    def reset(self):
        pass


class NormalActionNoise(ActionNoise):
    def __init__(self, mu, sigma):
        self.mu = mu
        self.sigma = sigma

    def __call__(self):
        return np.random.normal(self.mu, self.sigma)

    def reset(self):
        pass

    def __repr__(self):
        return 'NormalActionNoise(mu={}, sigma={})'.format(self.mu, self.sigma)


class OrnsteinUhlenbeckActionNoise(ActionNoise):
    def __init__(self, mu, sigma, theta=.15, dt=1e-2, x0=None):
        self.theta = theta
        self.mu = mu
        self.sigma = sigma
        self.dt = dt
        self.x0 = x0
        self.reset()

    def __call__(self):
        x = self.x_prev + self.theta * (self.mu - self.x_prev) * self.dt + self.sigma * np.sqrt(self.dt) * np.random.normal(size=self.mu.shape)
        self.x_prev = x
        return x

    def reset(self):
        self.x_prev = self.x0 if self.x0 is not None else np.zeros_like(self.mu)

    def __repr__(self):
        return 'OrnsteinUhlenbeckActionNoise(mu={}, sigma={})'.format(self.mu, self.sigma)
--------------------------------------------------------------------------------
/noises/param_noise.py:
--------------------------------------------------------------------------------
import numpy as np
from math import sqrt

"""
From OpenAI Baselines:
https://github.com/openai/baselines/blob/master/baselines/ddpg/noise.py
"""


class AdaptiveParamNoiseSpec(object):
    def __init__(self, initial_stddev=0.1, desired_action_stddev=0.2, adaptation_coefficient=1.01):
        """
        Note that initial_stddev and current_stddev refer to the std of the parameter noise,
        while desired_action_stddev refers (as the name suggests) to the desired std in action space
        """
        self.initial_stddev = initial_stddev
        self.desired_action_stddev = desired_action_stddev
        self.adaptation_coefficient = adaptation_coefficient

        self.current_stddev = initial_stddev

    def adapt(self, distance):
        if distance > self.desired_action_stddev:
            # Decrease stddev.
            self.current_stddev /= self.adaptation_coefficient
        else:
            # Increase stddev.
--------------------------------------------------------------------------------
/noises/param_noise.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from math import sqrt
3 | 
4 | """
5 | From OpenAI Baselines:
6 | https://github.com/openai/baselines/blob/master/baselines/ddpg/noise.py
7 | """
8 | 
9 | 
10 | class AdaptiveParamNoiseSpec(object):
11 |     def __init__(self, initial_stddev=0.1, desired_action_stddev=0.2, adaptation_coefficient=1.01):
12 |         """
13 |         Note that initial_stddev and current_stddev refer to the std of the parameter noise,
14 |         while desired_action_stddev refers to (as the name suggests) the desired std in action space.
15 |         """
16 |         self.initial_stddev = initial_stddev
17 |         self.desired_action_stddev = desired_action_stddev
18 |         self.adaptation_coefficient = adaptation_coefficient
19 | 
20 |         self.current_stddev = initial_stddev
21 | 
22 |     def adapt(self, distance):
23 |         if distance > self.desired_action_stddev:
24 |             # Decrease stddev.
25 |             self.current_stddev /= self.adaptation_coefficient
26 |         else:
27 |             # Increase stddev.
28 |             self.current_stddev *= self.adaptation_coefficient
29 | 
30 |     def get_stats(self):
31 |         stats = {
32 |             'param_noise_stddev': self.current_stddev,
33 |         }
34 |         return stats
35 | 
36 |     def __repr__(self):
37 |         fmt = 'AdaptiveParamNoiseSpec(initial_stddev={}, desired_action_stddev={}, adaptation_coefficient={})'
38 |         return fmt.format(self.initial_stddev, self.desired_action_stddev, self.adaptation_coefficient)
39 | 
40 | 
41 | def ddpg_distance_metric(actions1, actions2):
42 |     """
43 |     Compute the "distance" between actions taken by two policies at the same states.
44 |     Expects numpy arrays.
45 |     """
46 |     diff = actions1 - actions2
47 |     mean_diff = np.mean(np.square(diff), axis=0)
48 |     dist = sqrt(np.mean(mean_diff))
49 |     return dist
--------------------------------------------------------------------------------
/normalized_actions.py:
--------------------------------------------------------------------------------
1 | import gym
2 | import torch
3 | 
4 | 
5 | class NormalizedActions(gym.ActionWrapper):
6 |     def action(self, action):
7 |         action = (action + 1) / 2  # [-1, 1] => [0, 1]
8 |         action *= (self.action_space.high - self.action_space.low)
9 |         action += self.action_space.low
10 |         return action
11 | 
12 |     def _action(self, action):  # duplicate of action(); kept for the older gym ActionWrapper API
13 |         action = (action + 1) / 2  # [-1, 1] => [0, 1]
14 |         action *= (self.action_space.high - self.action_space.low)
15 |         action += self.action_space.low
16 |         return action
17 | 
18 |     def _reverse_action(self, action):
19 |         action -= self.action_space.low
20 |         action /= (self.action_space.high - self.action_space.low)
21 |         action = action * 2 - 1
22 |         return action
23 | 
24 | 
25 | def normalize(x, stats):
26 |     if stats is None:
27 |         return x
28 |     return (x - stats.mean) / (stats.var + 1e-8).sqrt()
29 | 
30 | 
31 | class RunningMeanStd(object):
32 |     # https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm
33 |     def __init__(self, epsilon=1e-4, shape=(), device=torch.device('cpu')):
34 |         self.mean = torch.zeros(shape).to(device)
35 |         self.var = torch.ones(shape).to(device)
36 |         self.count = epsilon
37 | 
38 |     def update(self, x):
39 |         batch_mean = torch.mean(x, dim=0)
40 |         batch_var = torch.var(x, dim=0)
41 |         batch_count = x.shape[0]
42 |         self.update_from_moments(batch_mean, batch_var, batch_count)
43 | 
44 |     def update_from_moments(self, batch_mean, batch_var, batch_count):
45 |         self.mean, self.var, self.count = update_mean_var_count_from_moments(
46 |             self.mean, self.var, self.count, batch_mean, batch_var, batch_count)
47 | 
48 | 
49 | def update_mean_var_count_from_moments(mean, var, count, batch_mean, batch_var, batch_count):
50 |     delta = batch_mean - mean
51 |     tot_count = count + batch_count
52 | 
53 |     new_mean = mean + delta * batch_count / tot_count
54 |     m_a = var * count
55 |     m_b = batch_var * batch_count
56 |     M2 = m_a + m_b + delta.pow(2) * count * batch_count / tot_count  # delta squared per Chan et al.; delta.sqrt() was a bug
57 |     new_var = M2 / tot_count
58 |     new_count = tot_count
59 | 
60 |     return new_mean, new_var, new_count
61 | 
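A quick numerical check (editor's sketch, not repository code): with the squared-delta fix, the running statistics agree with moments computed over the concatenated data, up to small bias from the epsilon initial count and torch.var's unbiased batch estimate.

# Sketch: RunningMeanStd should track full-data moments.
import torch
from normalized_actions import RunningMeanStd

rms = RunningMeanStd(shape=(3,))
x1, x2 = torch.randn(128, 3), torch.randn(64, 3) + 1.0
rms.update(x1)
rms.update(x2)
full = torch.cat([x1, x2], dim=0)
assert torch.allclose(rms.mean, full.mean(dim=0), atol=1e-2)
assert torch.allclose(rms.var, full.var(dim=0), rtol=5e-2)
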
--------------------------------------------------------------------------------
/policies/generative.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import math
3 | from torch.optim import Adam, SGD
4 | from torch.autograd import Variable
5 | from torch.distributions import Uniform
6 | import torch.nn.functional as F
7 | from network import Critic, StochasticActor, AutoRegressiveStochasticActor
8 | from policies.policy import Policy, hard_update, soft_update
9 | 
10 | 
11 | def compute_eltwise_huber_quantile_loss(actions, target_actions, taus, weighting):
12 |     """Compute elementwise Huber losses for quantile regression.
13 |     This is based on Algorithm 1 of https://arxiv.org/abs/1806.06923.
14 |     This function assumes that the quantile thresholds taus are iid samples
15 |     from U([0, 1]).
16 |     Args:
17 |         actions (Tensor): Quantile predictions from taus as a
18 |             (batch_size, action_dim)-shaped tensor.
19 |         target_actions (Tensor): Quantile regression targets as a
20 |             (batch_size, action_dim)-shaped tensor.
21 |         taus (Tensor): Quantile thresholds used to compute the predictions,
22 |             as a (batch_size, action_dim)-shaped tensor.
23 |         weighting (Tensor): Per-sample weights applied to the elementwise loss.
24 |     Returns:
25 |         Tensor: Scalar loss
26 |     """
27 |     I_delta = ((actions - target_actions) > 0).float()
28 |     eltwise_huber_loss = F.smooth_l1_loss(actions, target_actions, reduction='none')  # reduce=False is deprecated
29 |     eltwise_loss = torch.abs(taus - I_delta) * eltwise_huber_loss * weighting
30 |     return eltwise_loss.mean()
31 | 
32 | 
33 | class Generative(Policy):
34 |     def __init__(self, gamma, tau, num_inputs, action_space, replay_size, num_basis_functions=64, actor_samples=1,
35 |                  q_normalization=0.01, target_policy='linear', target_policy_q='min', autoregressive=True, temp=1.0):
36 | 
37 |         super(Generative, self).__init__(gamma=gamma, tau=tau, num_inputs=num_inputs, action_space=action_space,
38 |                                          replay_size=replay_size)
39 | 
40 |         self.actor_samples = actor_samples
41 |         self.topk = math.ceil(self.actor_samples * 0.7)
42 |         self.num_basis_functions = num_basis_functions
43 |         self.action_dim = self.action_space.shape[0]
44 |         self.q_normalization = q_normalization
45 |         self.target_policy = target_policy
46 |         self.autoregressive = autoregressive
47 |         self.temp = temp
48 | 
49 |         if target_policy_q == 'min':
50 |             self.target_policy_q = lambda x, y: torch.min(x, y)
51 |         elif target_policy_q == 'max':
52 |             self.target_policy_q = lambda x, y: torch.max(x, y)
53 |         elif target_policy_q == 'mean':
54 |             self.target_policy_q = lambda x, y: (x + y) / 2  # was (x + y / 2), an operator-precedence bug
55 |         else:
56 |             self.target_policy_q = lambda x, y: x
57 | 
58 |         self.tau_sampler = Uniform(self.Tensor([0.0]), self.Tensor([1.0]))
59 | 
60 |         '''
61 |         Define networks and optimizers
62 |         '''
63 | 
64 |         if self.autoregressive:
65 |             self.actor = AutoRegressiveStochasticActor(self.num_inputs, self.action_dim, self.num_basis_functions).to(self.device)
66 |             self.actor_target = AutoRegressiveStochasticActor(self.num_inputs, self.action_dim, self.num_basis_functions).to(self.device)
67 |         else:
68 |             self.actor = StochasticActor(self.num_inputs, self.action_dim, self.num_basis_functions).to(self.device)
69 |             self.actor_target = StochasticActor(self.num_inputs, self.action_dim, self.num_basis_functions).to(self.device)
70 |         self.actor_optim = Adam(self.actor.parameters(), lr=1e-4)
71 | 
72 |         self.critic = Critic(self.num_inputs + self.action_dim, num_networks=2).to(self.device)
73 |         self.critic_target = Critic(self.num_inputs + self.action_dim, num_networks=2).to(self.device)
74 |         self.critic_optim = Adam(self.critic.parameters(), lr=1e-3)
75 | 
76 |         self.value = Critic(self.num_inputs).to(self.device)
77 |         self.value_target = Critic(self.num_inputs).to(self.device)
78 |         self.value_optim = Adam(self.value.parameters(), lr=1e-3)
79 | 
80 |         '''
81 |         For multi-GPU setups we enable data parallelism, due to the large sample sizes
82 |         '''
83 |         if torch.cuda.device_count() > 1:
84 |             self.actor = torch.nn.DataParallel(self.actor)
85 |             self.actor_target = torch.nn.DataParallel(self.actor_target)
86 | 
87 |             self.critic = torch.nn.DataParallel(self.critic)
88 |             self.critic_target = torch.nn.DataParallel(self.critic_target)
89 | 
90 |             self.value = torch.nn.DataParallel(self.value)
91 |             self.value_target = torch.nn.DataParallel(self.value_target)
92 | 
93 |         '''
94 |         Initialize the target networks with the same parameters as the main networks
95 |         '''
96 |         hard_update(self.actor_target, self.actor)
97 |         hard_update(self.critic_target, self.critic)
98 |         hard_update(self.value_target, self.value)
99 | 
100 |     def eval(self):
101 |         self.actor.eval()
102 |         self.critic.eval()
103 |         self.value.eval()
104 | 
105 |     def train(self):
106 |         self.actor.train()
107 |         self.critic.train()
108 |         self.value.train()
109 | 
110 |     def policy(self, actor, state, actions=None):
111 |         batch_size = state.shape[0]
112 |         '''
113 |         We sample a quantile for each dimension of the action.
114 |         The action is modeled as an auto-regressive distribution, e.g.,
115 |         P(X) = P(x_0) * P(x_1 | x_0) * ... * P(x_n | x_{n-1}, ..., x_0)
116 |         '''
117 |         if self.autoregressive:
118 |             taus = self.tau_sampler.rsample((batch_size, self.action_dim)).view(batch_size, self.action_dim, 1)
119 |         else:
120 |             taus = self.tau_sampler.rsample((batch_size, 1)).view(batch_size, 1, 1)
121 |         return actor(state, taus, actions), None, taus
122 | 
123 |     def update_critic(self, state_batch, action_batch, reward_batch, mask_batch, next_state_batch):
124 |         batch_size = state_batch.shape[0]
125 | 
126 |         '''
127 |         Update value network
128 |         '''
129 |         with torch.no_grad():
130 |             # the value is estimated from multiple policy samples evaluated by the Q networks
131 |             tiled_next_state_batch = self._tile(next_state_batch, 0, self.actor_samples)
132 |             tiled_next_action_batch = self.policy(self.actor_target, tiled_next_state_batch)[0].view(batch_size * self.actor_samples, -1)
133 | 
134 |             next_q1, next_q2 = self.critic_target(torch.cat((tiled_next_state_batch, tiled_next_action_batch), 1))
135 | 
136 |             # to curb over-estimation, the top-k means of both Q networks are combined (the minimum, by default; see target_policy_q)
137 |             next_v = self.target_policy_q(
138 |                 (torch.topk(next_q1.view(batch_size, self.actor_samples), self.topk)[0]).mean(-1).unsqueeze(-1),
139 |                 (torch.topk(next_q2.view(batch_size, self.actor_samples), self.topk)[0]).mean(-1).unsqueeze(-1)
140 |             )
141 |         v = self.value(state_batch)
142 |         value_loss = F.mse_loss(v, next_v)
143 | 
144 |         with torch.no_grad():
145 |             next_v = self.value_target(next_state_batch)  # name re-used: this is the bootstrap value for the Q targets below
146 |             target_q = reward_batch + self.gamma * mask_batch * next_v
147 | 
148 |         self.value_optim.zero_grad()
149 |         value_loss.backward()
150 |         torch.nn.utils.clip_grad_norm_(self.value.parameters(), 5.0)
151 |         self.value_optim.step()
152 | 
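The value target above is worth unpacking (editor's sketch, not repository code): for each state, K candidate actions are scored by both target critics, the best ceil(0.7 * K) scores are averaged, and the two averages are combined element-wise.

# Sketch of the top-k value target on dummy data.
import math
import torch

batch, K = 4, 10
topk = math.ceil(K * 0.7)
q1 = torch.randn(batch, K)   # Q1(s, a_i) for K sampled actions per state
q2 = torch.randn(batch, K)   # Q2(s, a_i)
v1 = torch.topk(q1, topk)[0].mean(-1, keepdim=True)
v2 = torch.topk(q2, topk)[0].mean(-1, keepdim=True)
next_v = torch.min(v1, v2)   # (batch, 1) regression target for V(s), default 'min' combination
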
153 |         '''
154 |         Update Q networks
155 |         '''
156 | 
157 |         # Regularize the Q function: similar actions should yield similar Q values.
158 |         # noise = (torch.randn_like(action_batch) * self.q_normalization).clamp(-0.5, 0.5)
159 |         # action_batch = (action_batch + noise).clamp(-1, 1)
160 | 
161 |         noise = (self.tau_sampler.rsample((batch_size, self.action_dim)).view(batch_size, self.action_dim) * 2 - 1) * self.q_normalization
162 |         action_batch = (action_batch + noise).clamp(-1, 1)
163 | 
164 |         q1, q2 = self.critic(torch.cat((state_batch, action_batch), 1))
165 |         q1_loss = F.mse_loss(q1, target_q)
166 |         q2_loss = F.mse_loss(q2, target_q)
167 |         critic_loss = q1_loss + q2_loss
168 | 
169 |         self.critic_optim.zero_grad()
170 |         critic_loss.backward()
171 |         torch.nn.utils.clip_grad_norm_(self.critic.parameters(), 5.0)
172 |         self.critic_optim.step()
173 | 
174 |         return critic_loss.item() + value_loss.item()
175 | 
176 |     def update_actor(self, state_batch, action_batch):
177 |         batch_size = state_batch.shape[0]
178 |         tiled_state_batch = self._tile(state_batch, 0, self.actor_samples)
179 | 
180 |         with torch.no_grad():
181 |             # Calculate the value of each state
182 |             values = self.value_target(state_batch)
183 |             values = torch.cat([values, values], dim=0)  # doubled: one copy for policy samples, one for uniform samples
184 | 
185 |             '''
186 |             Sample multiple actions both from the target policy and from a uniform distribution over the action
187 |             space. These samples are used to compute the target distribution, which is defined as all the actions
188 |             where Q(state, action) > V(state).
189 |             '''
190 |             target_actions = self.policy(self.actor_target, tiled_state_batch)[0]
191 |             target_actions += torch.randn_like(target_actions) * 0.01
192 |             target_actions = target_actions.clamp(-1, 1)
193 | 
194 |             target_q1, target_q2 = self.critic_target(torch.cat((tiled_state_batch, target_actions), 1))
195 |             target_action_values = self.target_policy_q(
196 |                 target_q1.view(batch_size, self.actor_samples, -1),
197 |                 target_q2.view(batch_size, self.actor_samples, -1)
198 |             )
199 | 
200 |             random_actions = torch.rand_like(target_actions) * 2 - 1
201 |             random_q1, random_q2 = self.critic_target(torch.cat((tiled_state_batch, random_actions), 1))
202 |             target_random_values = self.target_policy_q(
203 |                 random_q1.view(batch_size, self.actor_samples, -1),
204 |                 random_q2.view(batch_size, self.actor_samples, -1)
205 |             )
206 | 
207 |             target_actions = target_actions.view(batch_size, self.actor_samples, -1)
208 |             random_actions = random_actions.view(batch_size, self.actor_samples, -1)
209 | 
210 |             target_actions = torch.cat([target_actions, random_actions], dim=0)
211 |             target_action_values = torch.cat([target_action_values, target_random_values], dim=0)
212 | 
213 |             # (2 * batch_size, 1) -> (2 * batch_size, N, 1)
214 |             values = values.unsqueeze(-1).expand(-1, self.actor_samples, -1)
215 |             improvement = (target_action_values > values).view(-1, 1)  # keep every action that improves on the value
216 | 
217 |             weighting_improvement = improvement.view(batch_size * 2, self.actor_samples)
218 |             state_improvement = improvement.expand(-1, tiled_state_batch.shape[1])
219 |             action_improvement = improvement.expand(-1, self.action_dim)
220 | 
221 |             tiled_state_batch = torch.cat([tiled_state_batch, tiled_state_batch], dim=0)
222 |             improving_state_batch = tiled_state_batch[state_improvement].view(-1, tiled_state_batch.shape[1])
223 |             improving_action_batch = target_actions.view(-1, self.action_dim)[action_improvement].view(-1, self.action_dim)
224 | 
225 |             if self.target_policy == 'linear':
226 |                 weighting = (target_action_values[weighting_improvement] - values[weighting_improvement])
227 |                 weighting = weighting / weighting.sum(0, keepdim=True)  # normalize over the selected set; dim=-1 was a no-op on the singleton trailing dim
228 |             elif self.target_policy == 'exponential':
229 |                 weighting = torch.exp(target_action_values[weighting_improvement] - values[weighting_improvement])
230 |                 weighting = torch.clamp(weighting, max=20)
231 |             elif self.target_policy == 'boltzman':  # (sic) Boltzmann weighting
232 |                 weighting = (target_action_values[weighting_improvement] - values[weighting_improvement])
233 |                 weighting = F.softmax((1. / self.temp) * weighting, dim=0)  # softmax over the selected set; dim=1 was a no-op on the singleton dim
234 |             elif self.target_policy == 'uniform':
235 |                 weighting = torch.ones_like(target_action_values[weighting_improvement])
236 |             else:  # argmax
237 |                 raise NotImplementedError
238 | 
239 |         if improving_state_batch.shape[0] > 0:
240 |             # Sample multiple actions for each state as an estimate of the current policy
241 |             actions, _, taus = self.policy(self.actor, improving_state_batch, improving_action_batch)
242 |             policy_loss = compute_eltwise_huber_quantile_loss(actions, improving_action_batch, taus.squeeze(-1), weighting)
243 | 
244 |             self.actor_optim.zero_grad()
245 |             policy_loss.backward()
246 |             torch.nn.utils.clip_grad_norm_(self.actor.parameters(), 1)
247 |             self.actor_optim.step()
248 | 
249 |             return policy_loss.item()
250 |         else:
251 |             return 0
252 | 
253 |     def soft_update(self):
254 |         soft_update(self.actor_target, self.actor, self.tau)
255 |         soft_update(self.critic_target, self.critic, self.tau)
256 |         soft_update(self.value_target, self.value, self.tau)
--------------------------------------------------------------------------------
/policies/policy.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch.autograd import Variable
3 | import os
4 | import numpy as np
5 | from replay_memory import ReplayMemory, Transition
6 | 
7 | 
8 | def soft_update(target, source, tau):  # Polyak averaging: target <- (1 - tau) * target + tau * source
9 |     for target_param, param in zip(target.parameters(), source.parameters()):
10 |         target_param.data.copy_(target_param.data * (1.0 - tau) + param.data * tau)
11 | 
12 | 
13 | def hard_update(target, source):
14 |     for target_param, param in zip(target.parameters(), source.parameters()):
15 |         target_param.data.copy_(param.data)
16 | 
17 | 
18 | def get_free_gpu():
19 |     os.system('nvidia-smi -q -d Memory |grep -A4 GPU|grep Free >tmp')
20 |     memory_available = [int(x.split()[2]) for x in open('tmp', 'r').readlines()]
21 |     return np.argmax(memory_available)
22 | 
23 | 
24 | class Policy:
25 |     def __init__(self, gamma, tau, num_inputs, action_space, replay_size):
26 |         if torch.cuda.is_available():
27 |             self.device = torch.device('cuda')
28 |             torch.backends.cudnn.enabled = False
29 |             self.Tensor = torch.cuda.FloatTensor
30 |         else:
31 |             self.device = torch.device('cpu')
32 |             self.Tensor = torch.FloatTensor
33 | 
34 |         self.num_inputs = num_inputs
35 |         self.action_space = action_space
36 | 
37 |         self.gamma = gamma
38 |         self.tau = tau
39 | 
40 |         self.memory = ReplayMemory(replay_size)
41 |         self.actor = None
42 | 
43 |     def eval(self):
44 |         raise NotImplementedError
45 | 
46 |     def train(self):
47 |         raise NotImplementedError
48 | 
49 |     def select_action(self, state, action_noise=None):
50 |         state = Variable(state).to(self.device)
51 | 
52 |         action = self.policy(self.actor, state)[0]
53 | 
54 |         action = action.data
55 |         if action_noise is not None:
56 |             action += self.Tensor(action_noise()).to(self.device)
57 | 
58 |         action = action.clamp(-1, 1)
59 | 
60 |         return action
61 | 
62 |     def policy(self, actor, state):
63 |         raise NotImplementedError
64 | 
65 |     def store_transition(self, state, action, mask, next_state, reward):
66 |         B = state.shape[0]
67 |         for b in range(B):
68 |             self.memory.push(state[b], action[b], mask[b], next_state[b], reward[b])
69 | 
70 |     def update_critic(self, state_batch, action_batch, reward_batch, mask_batch, next_state_batch):
71 |         raise NotImplementedError
72 | 
73 |     def update_actor(self, state_batch, action_batch):
74 |         raise NotImplementedError
75 | 
76 |     def update_parameters(self, batch_size, number_of_iterations):
77 |         policy_losses = []
78 |         value_losses = []
79 | 
80 |         for _ in range(number_of_iterations):
81 |             transitions = self.memory.sample(batch_size)
82 |             batch = Transition(*zip(*transitions))
83 | 
84 |             state_batch = Variable(torch.stack(batch.state)).to(self.device)
85 |             action_batch = Variable(torch.stack(batch.action)).to(self.device)
86 |             reward_batch = Variable(torch.stack(batch.reward)).to(self.device).unsqueeze(1)
87 |             mask_batch = Variable(torch.stack(batch.mask)).to(self.device).unsqueeze(1)
88 |             next_state_batch = Variable(torch.stack(batch.next_state)).to(self.device)
89 | 
90 |             value_loss = self.update_critic(state_batch, action_batch, reward_batch, mask_batch, next_state_batch)
91 |             value_losses.append(value_loss)
92 | 
93 |             policy_loss = self.update_actor(state_batch, action_batch)
94 |             policy_losses.append(policy_loss)
95 |             self.soft_update()
96 | 
97 |         return np.mean(value_losses), np.mean(policy_losses)
98 | 
99 |     def soft_update(self):
100 |         raise NotImplementedError
101 | 
102 |     def _tile(self, a, dim, n_tile):  # repeat each slice along dim n_tile times, keeping slices grouped by their original index
103 |         init_dim = a.size(dim)
104 |         repeat_idx = [1] * a.dim()
105 |         repeat_idx[dim] = n_tile
106 |         a = a.repeat(*(repeat_idx))
107 |         order_index = torch.LongTensor(np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])).to(
108 |             self.device)
109 |         return torch.index_select(a, dim, order_index)
--------------------------------------------------------------------------------
/replay_memory.py:
--------------------------------------------------------------------------------
1 | import random
2 | from collections import namedtuple
3 | 
4 | # Taken from
5 | # https://github.com/pytorch/tutorials/blob/master/Reinforcement%20(Q-)Learning%20with%20PyTorch.ipynb
6 | 
7 | Transition = namedtuple(
8 |     'Transition', ('state', 'action', 'mask', 'next_state', 'reward'))
9 | 
10 | 
11 | class ReplayMemory(object):
12 | 
13 |     def __init__(self, capacity):
14 |         self.capacity = capacity
15 |         self.memory = []
16 |         self.position = 0
17 | 
18 |     def push(self, *args):
19 |         """Saves a transition, overwriting the oldest one once capacity is reached."""
20 |         if len(self.memory) < self.capacity:
21 |             self.memory.append(None)
22 |         self.memory[self.position] = Transition(*args)
23 |         self.position = (self.position + 1) % self.capacity
24 | 
25 |     def sample(self, batch_size):
26 |         return random.sample(self.memory, batch_size)
27 | 
28 |     def __len__(self):
29 |         return len(self.memory)
30 | 
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import numpy as np
4 | 
5 | 
6 | def save_model(actor, basedir='models'):  # was basedir=None, which always created 'models/' but saved to "None/ddpg_actor"
7 |     if not os.path.exists(basedir):
8 |         os.makedirs(basedir)
9 | 
10 |     actor_path = "{}/ddpg_actor".format(basedir)
11 |     torch.save(actor.state_dict(), actor_path)
12 | 
13 | 
14 | def load_model(agent, basedir='models'):  # default mirrors save_model
15 |     actor_path = "{}/ddpg_actor".format(basedir)
16 | 
17 |     print('Loading model from {}'.format(actor_path))
18 |     agent.actor.load_state_dict(torch.load(actor_path))
19 | 
20 | 
21 | def moving_average(a, n=3):
22 |     plot_data = np.zeros_like(a)
23 |     for idx in range(len(a)):
24 |         length = min(idx, n)
25 |         plot_data[idx] = a[idx - length:idx + 1].mean()  # mean over a trailing window of up to n + 1 points
26 |     return plot_data
27 | 
28 | 
29 | def vis_plot(viz, log_dict):
30 |     ma_length = 0  # smoothing window for moving_average; 0 disables smoothing
31 |     if viz is not None:
32 |         for field in log_dict:
33 |             if len(log_dict[field]) > 0:
34 |                 _, values = zip(*log_dict[field])  # values is unused; kept from the original unpacking
35 | 
36 |                 plot_data = np.array(log_dict[field])
37 |                 viz.line(X=plot_data[:, 0], Y=moving_average(plot_data[:, 1], ma_length), win=field,
38 |                          opts=dict(title=field, legend=[field]))
39 | 
--------------------------------------------------------------------------------
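To tie the pieces together, here is a hedged end-to-end sketch (editor's illustration; main.py, not shown here, is the actual entry point, and the environment name, batch size, and hyperparameters below are illustrative assumptions, as is the old 4-tuple gym step API that the wrapper code above targets):

# Sketch: wiring NormalizedActions, Generative, and the replay memory together.
import gym
import torch
from normalized_actions import NormalizedActions
from policies.generative import Generative

env = NormalizedActions(gym.make('Pendulum-v0'))
agent = Generative(gamma=0.99, tau=5e-3,
                   num_inputs=env.observation_space.shape[0],
                   action_space=env.action_space,
                   replay_size=1000000, actor_samples=32)

state = torch.Tensor([env.reset()])
for step in range(10000):
    action = agent.select_action(state)
    next_state, reward, done, _ = env.step(action.cpu().numpy()[0])
    next_state = torch.Tensor([next_state])
    agent.store_transition(state, action,
                           torch.Tensor([0.0 if done else 1.0]),  # mask: 0 terminates the bootstrap
                           next_state, torch.Tensor([reward]))
    state = torch.Tensor([env.reset()]) if done else next_state
    if len(agent.memory) >= 256:
        value_loss, policy_loss = agent.update_parameters(batch_size=256, number_of_iterations=1)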