├── LICENSE
├── README.md
└── proxies
    ├── comm_only
    │   ├── bert_large.cpp
    │   ├── gpt2_large.cpp
    │   └── resnet50.cpp
    ├── cosmoflow.cpp
    ├── dlrm.cpp
    ├── gpt3.cpp
    ├── gpt3_moe.cpp
    ├── gpt3_moe_one_pipe_step.cpp
    ├── gpt3_one_pipe_step.cpp
    ├── resnet152.cpp
    └── resnet152_scal.cpp

/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. 
If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 
122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 
180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 
246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. 
A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 
360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 
421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 
476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. 
You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 
583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | <one line to give the program's name and a brief idea of what it does.> 635 | Copyright (C) <year> <name of author> 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see <https://www.gnu.org/licenses/>. 649 | 650 | Also add information on how to contact you by electronic and paper mail. 
651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | <program> Copyright (C) <year> <name of author> 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | <https://www.gnu.org/licenses/>. 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | <https://www.gnu.org/licenses/why-not-lgpl.html>. 675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DNN-cpp-proxies
2 | C++/MPI proxies for distributed training of deep neural networks, including `ResNet-50`, `ResNet-152`, `BERT-large`, `CosmoFlow`, `DLRM`, `GPT-2`, `GPT-3`, etc. These proxies cover `data parallelism`, `operator parallelism`, `pipeline parallelism`, and `hybrid parallelism`.
3 |
4 | ## Demo
5 | Compile:
6 |
7 | `mpicxx gpt2_large.cpp -o gpt2`
8 |
9 | Run:
10 |
11 | `mpirun -n 32 ./gpt2`
12 |
13 | Set up the number of Transformer layers and the number of pipeline stages:
14 |
15 | `mpirun -n 32 ./gpt2 64 8`
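Every proxy below follows the same measurement discipline: a few untimed warm-up iterations, a barrier to align all ranks, then `RUNS` timed iterations averaged with `MPI_Wtime` (the pattern is visible, e.g., in dlrm.cpp's `main()`). A minimal, self-contained sketch of that harness — `run_step` is a placeholder for one proxy iteration, not code from any of the files:

```cpp
#include <mpi.h>
#include <stdio.h>

#define WARM_UP 10
#define RUNS 256

// placeholder: one training iteration (communication calls + sleeps modeling compute)
static void run_step(void) {}

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < WARM_UP; i++) run_step();  // untimed warm-up
    MPI_Barrier(MPI_COMM_WORLD);                   // align ranks before timing

    double begin = MPI_Wtime();
    for (int i = 0; i < RUNS; i++) run_step();
    double elapse = (MPI_Wtime() - begin) / RUNS;  // mean seconds per iteration

    if (rank == 0) printf("runtime per iteration = %f s\n", elapse);
    MPI_Finalize();
    return 0;
}
```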
--------------------------------------------------------------------------------
/proxies/comm_only/bert_large.cpp:
--------------------------------------------------------------------------------
1 | /*********************************************************************
2 | *
3 | * Description: C++/MPI proxy for BERT-large distributed training
4 | * with a hybrid pipeline and data parallelism
5 | * Author: Shigang Li
6 | * Email: shigangli.cs@gmail.com
7 | *
8 | *********************************************************************/
9 |
10 | #include <mpi.h>
11 | #include <stdio.h>
12 | #include <stdlib.h>
13 | #include <string.h>
14 | #include <unistd.h>
15 | #include <math.h>
16 | #include <assert.h>
17 |
18 | #define RUNS 256
19 | #define WARM_UP 10
20 |
21 | //p2p msg size for Bert with micro-batch size=8 and seq_length=128
22 | #define P2PSIZE 1049600
23 |
24 | #define BEGINSIZE 44379136
25 | #define INTERSIZE 12596224
26 | #define ENDSIZE 45984572
27 |
28 | #define MSGAGG 1
29 |
30 | #ifdef MSGAGG
31 | //message aggregation
32 | #define BEGINNUM 1
33 | #define INTERNUM 1
34 | #define ENDNUM 1
35 | int first_layer_grad_sizes[BEGINNUM] = {BEGINSIZE};
36 | int intermediate_layer_grad_sizes[INTERNUM] = {INTERSIZE};
37 | int end_layer_grad_sizes[ENDNUM] = {ENDSIZE};
38 |
39 | #else
40 | #define BEGINNUM 21
41 | #define INTERNUM 16
42 | #define ENDNUM 26
43 | //sizes for the gradients per layer of bert
44 | int first_layer_grad_sizes[BEGINNUM] = {31254528, 524288, 2048, 1048576, 1048576, 1048576, 1048576, 4194304, 4194304, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 4096, 1024, 1024, 1024};
45 | int intermediate_layer_grad_sizes[INTERNUM] = {1048576, 1048576, 1048576, 1048576, 4194304, 4194304, 1024, 1024, 1024, 1024, 1024, 1024, 4096, 1024, 1024, 1024};
46 | int end_layer_grad_sizes[ENDNUM] = {1048576, 1048576, 1048576, 1048576, 4194304, 4194304, 1048576, 1048576, 31254528, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 4096, 1024, 1024, 1024, 1024, 30522, 1024, 1024, 1024, 2};
47 |
48 | #endif
49 |
50 |
51 |
52 | int run_pipeline(int grad_acc_step, int stage_id, int num_grad_per_stage,
53 | int num_stage, int allreduce_group_size,
54 | float **begin_stage_grad_ptrs,
55 | float **sum_begin_stage_grad_ptrs,
56 | float **end_stage_grad_ptrs,
57 | float **sum_end_stage_grad_ptrs,
58 | float **intermediate_stage_grad_ptrs,
59 | float **sum_intermediate_stage_grad_ptrs,
60 | int *stage_grad_sizes,
61 | MPI_Comm p2p_comm, MPI_Comm allreduce_comm){
62 |
63 | float *send_buffer = (float *)calloc(P2PSIZE, sizeof(float));
64 | float *recv_buffer = (float *)calloc(P2PSIZE, sizeof(float));
65 |
66 | //p2p forward
67 | for(int i=0; i 1){ 108 | if(stage_id == 0){ 109 | for(int i=0; i
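`run_pipeline` above moves one micro-batch of activations forward and one of gradients backward between neighboring pipeline stages over `p2p_comm`, while `allreduce_comm` averages gradients across the data-parallel replicas of each stage. A minimal sketch of the forward hop only, with one rank per stage (buffer names and tags here are illustrative, not bert_large.cpp's own):

```cpp
#include <mpi.h>
#include <stdlib.h>

#define P2PSIZE 1049600   // activation message size per micro-batch, as defined above

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int stage_id, num_stage;
    MPI_Comm_rank(MPI_COMM_WORLD, &stage_id);   // one rank per pipeline stage
    MPI_Comm_size(MPI_COMM_WORLD, &num_stage);

    float *in  = (float *)calloc(P2PSIZE, sizeof(float));
    float *out = (float *)calloc(P2PSIZE, sizeof(float));

    for (int mb = 0; mb < 4; mb++) {            // 4 micro-batches, tag = micro-batch id
        if (stage_id > 0)                       // receive activations from previous stage
            MPI_Recv(in, P2PSIZE, MPI_FLOAT, stage_id - 1, mb, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* forward compute for this stage would run here */
        if (stage_id < num_stage - 1)           // pass activations to the next stage
            MPI_Send(out, P2PSIZE, MPI_FLOAT, stage_id + 1, mb, MPI_COMM_WORLD);
    }

    free(in); free(out);
    MPI_Finalize();
    return 0;
}
```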
--------------------------------------------------------------------------------
/proxies/comm_only/gpt2_large.cpp:
--------------------------------------------------------------------------------
1 | /*********************************************************************
2 | *
3 | * Description: C++/MPI proxy for GPT-2-large distributed training
4 | * with a hybrid pipeline and data parallelism
5 | * Author: Shigang Li
6 | * Email: shigangli.cs@gmail.com
7 | *
8 | *********************************************************************/
9 |
10 | #include <mpi.h>
11 | #include <stdio.h>
12 | #include <stdlib.h>
13 | #include <string.h>
14 | #include <unistd.h>
15 | #include <math.h>
16 | #include <assert.h>
17 |
18 | #define RUNS 256
19 | #define WARM_UP 10
20 |
21 | //p2p msg size for GPT-2 with micro-batch size=1 and seq_length=632
22 | #define P2PSIZE 808960
23 |
24 | #define BEGINSIZE 85317120
25 | #define INTERSIZE 19677440
26 | #define ENDSIZE 84008960
27 |
28 | #define MSGAGG 1
29 |
30 | #ifdef MSGAGG
31 | //message aggregation
32 | #define BEGINNUM 1
33 | #define INTERNUM 1
34 | #define ENDNUM 1
35 | int first_layer_grad_sizes[BEGINNUM] = {BEGINSIZE};
36 | int intermediate_layer_grad_sizes[INTERNUM] = {INTERSIZE};
37 | int end_layer_grad_sizes[ENDNUM] = {ENDSIZE};
38 |
39 | #else
40 | #define BEGINNUM 14
41 | #define INTERNUM 12
42 | #define ENDNUM 15
43 | //sizes for the gradients per layer of gpt-2
44 | int first_layer_grad_sizes[BEGINNUM] = {64328960, 1310720, 1280, 4915200, 1638400, 1280, 6553600, 6553600, 1280, 3840, 1280, 1280, 5120, 1280};
45 | int intermediate_layer_grad_sizes[INTERNUM] = {1280, 4915200, 1638400, 1280, 6553600, 6553600, 1280, 3840, 1280, 1280, 5120, 1280};
46 | int end_layer_grad_sizes[ENDNUM] = {1280, 4915200, 1638400, 1280, 6553600, 6553600, 1280, 64328960, 1280, 3840, 1280, 1280, 5120, 1280, 1280};
47 |
48 | #endif
49 |
50 |
51 |
52 | int run_pipeline(int grad_acc_step, int stage_id, int num_grad_per_stage,
53 | int num_stage, int allreduce_group_size,
54 | float **begin_stage_grad_ptrs,
55 | float **sum_begin_stage_grad_ptrs,
56 | float **end_stage_grad_ptrs,
57 | float **sum_end_stage_grad_ptrs,
58 | float **intermediate_stage_grad_ptrs,
59 | float **sum_intermediate_stage_grad_ptrs,
60 | int *stage_grad_sizes,
61 | MPI_Comm p2p_comm, MPI_Comm allreduce_comm){
62 |
63 | float *send_buffer = (float *)calloc(P2PSIZE, sizeof(float));
64 | float *recv_buffer = (float *)calloc(P2PSIZE, sizeof(float));
65 |
66 | //p2p forward
67 | for(int i=0; i 1){ 108 | if(stage_id == 0){ 109 | for(int i=0; i
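Both Transformer proxies above (and resnet50.cpp below) gate their gradient-size tables behind `MSGAGG`: with aggregation, each stage reduces one flattened buffer; without it, every parameter tensor is reduced separately and pays its own message latency. A standalone sketch of the two variants (sizes illustrative):

```cpp
#include <mpi.h>
#include <stdlib.h>

#define NUM 3
int sizes[NUM] = {1048576, 4096, 1024};   // per-tensor gradient sizes (illustrative)

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    // per-tensor variant: NUM small allreduces, NUM message latencies
    for (int i = 0; i < NUM; i++) {
        float *g = (float *)calloc(sizes[i], sizeof(float));
        float *s = (float *)calloc(sizes[i], sizeof(float));
        MPI_Allreduce(g, s, sizes[i], MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
        free(g); free(s);
    }

    // aggregated variant: one allreduce over the flattened total
    int total = 0;
    for (int i = 0; i < NUM; i++) total += sizes[i];
    float *g = (float *)calloc(total, sizeof(float));
    float *s = (float *)calloc(total, sizeof(float));
    MPI_Allreduce(g, s, total, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    free(g); free(s);

    MPI_Finalize();
    return 0;
}
```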
--------------------------------------------------------------------------------
/proxies/comm_only/resnet50.cpp:
--------------------------------------------------------------------------------
1 | /*********************************************************************
2 | *
3 | * Description: C++/MPI proxy for ResNet-50 distributed training
4 | * with data parallelism
5 | * Author: Shigang Li
6 | * Email: shigangli.cs@gmail.com
7 | *
8 | *********************************************************************/
9 |
10 | #include <mpi.h>
11 | #include <stdio.h>
12 | #include <stdlib.h>
13 | #include <string.h>
14 | #include <assert.h>
15 |
16 | #define RUNS 512
17 | #define WARM_UP 10
18 | #define TOTALSIZE 25559081
19 | #define MSGAGG 1
20 |
21 | #ifdef MSGAGG
22 | //message aggregation
23 | #define NUM 6
24 |
25 | //pointers of the send/receive buffers
26 | float* grad_ptrs[NUM];
27 | float* sum_grad_ptrs[NUM];
28 |
29 | //sizes for the gradients
30 | int msgSize[NUM] = {
31 | 3104745,
32 | 4461568,
33 | 4462592,
34 | 4986880,
35 | 4468736,
36 | 4074560
37 | };
38 |
39 | #else
40 | //number of trainable parameters in ResNet-50
41 | #define NUM 161
42 |
43 | //pointers of the send/receive buffers
44 | float* grad_ptrs[NUM];
45 | float* sum_grad_ptrs[NUM];
46 |
47 | //sizes for the gradients
48 | int msgSize[NUM] = {
49 | 1001, 50 | 2050048, 51 | 2048, 52 | 2048, 53 | 1048576, 54 | 512, 55 | 512, 56 | 2359296, 57 | 512, 58 | 512, 59 | 1048576, 60 | 2048, 61 | 2048, 62 | 1048576, 63 | 512, 64 | 512, 65 | 2359296, 66 | 512, 67 | 512, 68 | 1048576, 69 | 2048, 70 | 2048, 71 | 1048576, 72 | 512, 73 | 512, 74 | 2359296, 75 | 512, 76 | 512, 77 | 524288, 78 | 2048, 79 | 2048, 80 | 2097152, 81 | 1024, 82 | 1024, 83 | 262144, 84 | 256, 85 | 256, 86 | 589824, 87 | 256, 88 | 256, 89 | 262144, 90 | 1024, 91 | 1024, 92 | 262144, 93 | 256, 94 | 256, 95 | 589824, 96 | 256, 97 | 256, 98 | 262144, 99 | 1024, 100 | 1024, 101 | 262144, 102 | 256, 103 | 256, 104 | 589824, 105 | 256, 106 | 256, 107 | 262144, 108 | 1024, 109 | 1024, 110 | 262144, 111 | 256, 112 | 256, 113 | 589824, 114 | 256, 115 | 256, 116 | 262144, 117 | 1024, 118 | 1024, 119 | 262144, 120 | 256, 121 | 256, 122 | 589824, 123 | 256, 124 | 256, 125 | 262144, 126 | 1024, 127 | 1024, 128 | 262144, 129 | 256, 130 | 256, 131 | 589824, 132 | 256, 133 | 256, 134 | 131072, 135 | 1024, 136 | 1024, 137 | 524288, 138 | 512, 139 | 512, 140 | 65536, 141 | 128, 142 | 128, 143 | 147456, 144 | 128, 145 | 128, 146 | 65536, 147 | 512, 148 | 512, 149 | 65536, 150 | 128, 151 | 128, 152 | 147456, 153 | 128, 154 | 128, 155 | 65536, 156 | 512, 157 | 512, 158 | 65536, 159 | 128, 160 | 128, 161 | 147456, 162 | 128, 163 | 128, 164 | 65536, 165 | 512, 166 | 512, 167 | 65536, 168 | 128, 169 | 128, 170 | 147456, 171 | 128, 172 | 128, 173 | 32768, 174 | 512, 175 | 512, 176 | 131072, 177 | 256, 178 | 256, 179 | 16384, 180 | 64, 181 | 64, 182 | 36864, 183 | 64, 184 | 64, 185 | 16384, 186 | 256, 187 | 256, 188 | 16384, 189 | 64, 190 | 64, 191 | 36864, 192 | 64, 193 | 64, 194 | 16384, 195 | 256, 196 | 256, 197 | 16384, 198 | 64, 199 | 64, 200 | 36864, 201 | 64, 202 | 64, 203 | 4096, 204 | 256, 205 | 256, 206 | 16384, 207 | 64, 208 | 64, 209 | 9408
210 | };
211 | #endif
212 |
213 |
214 | //allreduce
215 | int run_allreduce(){
216 | for(int i=0; i
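A consistency check on the tables above: the six aggregated buckets sum to 3104745 + 4461568 + 4462592 + 4986880 + 4468736 + 4074560 = 25559081, which is exactly `TOTALSIZE`, i.e. the ~25.6 M trainable parameters of ResNet-50.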
--------------------------------------------------------------------------------
/proxies/cosmoflow.cpp:
--------------------------------------------------------------------------------
1 | /*********************************************************************
2 | *
3 | * Description: C++/MPI proxy for CosmoFlow distributed training
4 | * with a hybrid of data and model parallelism
5 | *
6 | *********************************************************************/
7 |
8 | #include <mpi.h>
9 | #include <stdio.h>
10 | #include <stdlib.h>
11 | #include <string.h>
12 | #include <unistd.h>
13 | #include <math.h>
14 | #include <assert.h>
15 |
16 | #define WARM_UP 8
17 | #define RUNS 128
18 |
19 | #define NUM_L 8
20 | // we set model shards = 4
21 | // batchsize = 8
22 | // suggest world_size <= 4096, which corresponds to a global batch_size <= 8192
23 | // A100 GPU
24 | // runtime in us (10E-6) for each model shard
25 | int fwd_rt_per_layer[NUM_L] = {6567, 13135, 6567, 3283, 1641, 5, 3, 1};
26 | int bwd_rt_per_layer[NUM_L] = {2, 6, 10, 3283, 6567, 13135, 26270, 13135};
27 |
28 | #define NUM_Conv_L 5
29 | // 2x2 2D spatial decomposition for 3D tensors
30 | // Note that each worker has two neighbors in 2D decomposition
31 |
32 | // conv layer halo exchange message sizes in forward
33 | int conv_fwd_halo_sizes[NUM_Conv_L-1] = {2097152, 1048576, 524288, 262144};
34 |
35 | // conv layer halo exchange message sizes in backward
36 | int conv_bwd_halo_sizes[NUM_Conv_L-1] = {131072, 262144, 524288, 1048576};
37 |
38 | #define NUM_Dense_L 3
39 | // dense layer allgather msg sizes in forward
40 | int dense_fwd_allgather_sizes[NUM_Dense_L] = {65536, 256, 128};
41 |
42 | // dense layer reduce_scatter msg sizes in backward
43 | //int dense_bwd_reduce_scatter_sizes[NUM_Dense_L] = {512, 1024, 262144};
44 | int dense_bwd_reduce_scatter_sizes[NUM_Dense_L] = {128, 256, 65536};
45 |
46 | // allreduce sizes for gradients with message aggregation
47 | // aggregate all dense layers: Dense2-0 Conv4 Conv3 Conv2 Conv1 Conv0
48 | int allreduce_sizes[NUM_L-2] = {1050737, 3539456, 884992, 221312, 55360, 3488};
49 |
50 | int run_model_data_parallel(float** fwd_halo_send_buff0_ptrs,
51 | float** fwd_halo_send_buff1_ptrs,
52 | float** fwd_halo_recv_buff0_ptrs,
53 | float** fwd_halo_recv_buff1_ptrs,
54 | float** bwd_halo_send_buff0_ptrs,
55 | float** bwd_halo_send_buff1_ptrs,
56 | float** bwd_halo_recv_buff0_ptrs,
57 | float** bwd_halo_recv_buff1_ptrs,
58 | float** dense_fwd_allgather_sbuff_ptrs,
59 | float** dense_fwd_allgather_rbuff_ptrs,
60 | float** dense_bwd_rs_sbuff_ptrs,
61 | float** dense_bwd_rs_rbuff_ptrs,
62 | float** grad_ptrs,
63 | float** sum_grad_ptrs,
64 | MPI_Comm model_parallel_comm,
65 | MPI_Comm dense_allreduce_comm){
66 |
67 |
68 | //forward
69 | int mp_group_rank;
70 | MPI_Comm_rank(model_parallel_comm, &mp_group_rank);
71 | for(int i=0; i<NUM_L; i++){
72 | if(i>=1 && i<NUM_Conv_L){ //halo exchange for conv layers
73 | int msg_idx = i-1;
74 | MPI_Request requests[4];
75 | MPI_Isend(fwd_halo_send_buff0_ptrs[msg_idx], conv_fwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^1, i, model_parallel_comm, &requests[0]);
76 | MPI_Isend(fwd_halo_send_buff1_ptrs[msg_idx], conv_fwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^2, i, model_parallel_comm, &requests[1]);
77 | MPI_Irecv(fwd_halo_recv_buff0_ptrs[msg_idx], conv_fwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^1, i, model_parallel_comm, &requests[2]);
78 | MPI_Irecv(fwd_halo_recv_buff1_ptrs[msg_idx], conv_fwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^2, i, model_parallel_comm, &requests[3]);
79 | MPI_Waitall(4, requests, MPI_STATUSES_IGNORE);
80 | }
81 | else if(i>=NUM_Conv_L){ //all gather for dense layers
82 | int msg_idx = i-NUM_Conv_L;
83 | MPI_Allgather(dense_fwd_allgather_sbuff_ptrs[msg_idx], dense_fwd_allgather_sizes[msg_idx], MPI_FLOAT, dense_fwd_allgather_rbuff_ptrs[msg_idx], dense_fwd_allgather_sizes[msg_idx], MPI_FLOAT, model_parallel_comm);
84 | }
85 |
86 | usleep(fwd_rt_per_layer[i]); //compute
87 | }
88 |
89 | //backward
90 | MPI_Request grad_allreduce_reqs[NUM_Conv_L+1];
91 | for(int i=0; i<NUM_Conv_L+1; i++)
92 | grad_allreduce_reqs[i] = MPI_REQUEST_NULL;
93 |
94 | for(int i=0; i<NUM_L; i++){
95 | int index, flag;
96 | if(i > NUM_Dense_L)
97 | MPI_Testany(NUM_Conv_L+1, grad_allreduce_reqs, &index, &flag, MPI_STATUS_IGNORE); //advancing MPI in the background
98 |
99 | usleep(bwd_rt_per_layer[i]); //compute
100 |
101 | if(i < NUM_Dense_L){ //dense layers
102 | MPI_Reduce_scatter_block(dense_bwd_rs_sbuff_ptrs[i], dense_bwd_rs_rbuff_ptrs[i], dense_bwd_reduce_scatter_sizes[i], MPI_FLOAT, MPI_SUM, model_parallel_comm);
103 | }
104 | else if(i < NUM_L-1){ //conv layers
105 | int msg_idx = i-NUM_Dense_L;
106 | MPI_Request requests[4];
107 | MPI_Isend(bwd_halo_send_buff0_ptrs[msg_idx], conv_bwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^1, i, model_parallel_comm, &requests[0]);
108 | MPI_Isend(bwd_halo_send_buff1_ptrs[msg_idx], conv_bwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^2, i, model_parallel_comm, &requests[1]);
109 | MPI_Irecv(bwd_halo_recv_buff0_ptrs[msg_idx], conv_bwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^1, i, model_parallel_comm, &requests[2]);
110 | MPI_Irecv(bwd_halo_recv_buff1_ptrs[msg_idx], conv_bwd_halo_sizes[msg_idx], MPI_FLOAT, mp_group_rank^2, i, model_parallel_comm, &requests[3]);
111 | MPI_Waitall(4, requests, MPI_STATUSES_IGNORE);
112 | }
113 |
114 | if(i == NUM_Dense_L-1){
115 | MPI_Iallreduce(grad_ptrs[0], sum_grad_ptrs[0], allreduce_sizes[0], MPI_FLOAT, MPI_SUM, dense_allreduce_comm, &grad_allreduce_reqs[0]);
116 | }
117 | else if(i > NUM_Dense_L-1){
118 | MPI_Iallreduce(grad_ptrs[i-NUM_Dense_L+1], sum_grad_ptrs[i-NUM_Dense_L+1], allreduce_sizes[i-NUM_Dense_L+1], MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD, &grad_allreduce_reqs[i-NUM_Dense_L+1]);
119 | }
120 | }
121 |
122 | MPI_Waitall(NUM_Conv_L+1, grad_allreduce_reqs, MPI_STATUSES_IGNORE);
123 | return 0;
124 | }
125 |
126 | int main(int argc, char *argv[]){
127 | int rank, world_size;
128 |
129 | int model_shards = 4; // do not change this
130 |
131 | MPI_Init(&argc,&argv);
132 | MPI_Comm_size(MPI_COMM_WORLD, &world_size);
133 | MPI_Comm_rank(MPI_COMM_WORLD, &rank);
134 |
135 | int dense_allreduce_group_rank, mp_group_rank;
136 | int dense_allreduce_group_size, mp_group_size;
137 |
138 | //the number of processes should be a multiple of model_shards = 4
139 | assert(world_size % model_shards == 0);
140 | int dense_allreduce_group_color = rank % model_shards;
141 |
142 | MPI_Comm dense_allreduce_comm;
143 | MPI_Comm_split(MPI_COMM_WORLD, dense_allreduce_group_color, rank, &dense_allreduce_comm);
144 |
145 | MPI_Comm_rank(dense_allreduce_comm, &dense_allreduce_group_rank);
146 | MPI_Comm_size(dense_allreduce_comm, &dense_allreduce_group_size);
147 |
148 | MPI_Comm model_parallel_comm;
149 | MPI_Comm_split(MPI_COMM_WORLD, dense_allreduce_group_rank, rank, &model_parallel_comm);
150 | MPI_Comm_rank(model_parallel_comm, &mp_group_rank);
151 | MPI_Comm_size(model_parallel_comm, &mp_group_size);
152 |
153 | assert(dense_allreduce_group_color == mp_group_rank);
154 | assert(model_shards == mp_group_size);
155 |
156 | float* fwd_halo_send_buff0_ptrs[NUM_Conv_L-1];
157 | float* fwd_halo_send_buff1_ptrs[NUM_Conv_L-1];
158 | float* fwd_halo_recv_buff0_ptrs[NUM_Conv_L-1];
159 | float* fwd_halo_recv_buff1_ptrs[NUM_Conv_L-1];
160 |
161 | float* bwd_halo_send_buff0_ptrs[NUM_Conv_L-1];
162 | float* bwd_halo_send_buff1_ptrs[NUM_Conv_L-1];
163 | float* bwd_halo_recv_buff0_ptrs[NUM_Conv_L-1];
164 | float* bwd_halo_recv_buff1_ptrs[NUM_Conv_L-1];
165 | for(int i=0; i
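cosmoflow.cpp arranges the four model shards as a 2x2 spatial grid, so each shard has exactly two halo neighbors, reached by flipping one bit of its group rank: `mp_group_rank^1` along one grid axis and `mp_group_rank^2` along the other. A standalone sketch of that exchange for a single halo buffer (must be run with 4 ranks; sizes illustrative):

```cpp
#include <mpi.h>
#include <stdlib.h>
#include <assert.h>

#define HALO 1024   // illustrative halo size

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    assert(world_size == 4);   // one rank per shard of the 2x2 grid

    float *send  = (float *)calloc(HALO, sizeof(float));
    float *recv0 = (float *)calloc(HALO, sizeof(float));
    float *recv1 = (float *)calloc(HALO, sizeof(float));

    // rank^1 flips the x-bit, rank^2 flips the y-bit of the 2-bit grid coordinate
    MPI_Request reqs[4];
    MPI_Isend(send,  HALO, MPI_FLOAT, rank ^ 1, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(send,  HALO, MPI_FLOAT, rank ^ 2, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Irecv(recv0, HALO, MPI_FLOAT, rank ^ 1, 0, MPI_COMM_WORLD, &reqs[2]);
    MPI_Irecv(recv1, HALO, MPI_FLOAT, rank ^ 2, 0, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    free(send); free(recv0); free(recv1);
    MPI_Finalize();
    return 0;
}
```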
--------------------------------------------------------------------------------
/proxies/dlrm.cpp:
--------------------------------------------------------------------------------
1 | /*********************************************************************
2 | *
3 | * Description: C++/MPI proxy for DLRM distributed training
4 | * with a hybrid of model and data parallelism
5 | *
6 | *********************************************************************/
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 | #include <mpi.h>
15 | #include <stdio.h>
16 | #include <stdlib.h>
17 | #include <string.h>
18 | #include <unistd.h>
19 | #include <math.h>
20 | #include <assert.h>
21 |
22 | #define RUNS 1
23 | #define WARM_UP 0
24 |
25 |
26 | #define BOT_MLP_SIZE 49536
27 | #define TOP_MLP_SIZE 728065
28 | #define EMB_ALL2ALL_SIZE 262144 //2048*128
29 |
30 | // runtime in us (10E-6)
31 | #define FWD_BOT_MLP 341
32 | #define FWD_TOP_MLP 455
33 | #define FWD_INTER 209
34 | #define FWD_EMB 95
35 |
36 | int run_dlrm(int num_proc,
37 | float *top_grad_ptr,
38 | float *sum_top_grad_ptr,
39 | float *bot_grad_ptr,
40 | float *sum_bot_grad_ptr,
41 | float *fwd_alltoall_send_ptrs,
42 | float *fwd_alltoall_recv_ptrs,
43 | float *bwd_alltoall_send_ptrs,
44 | float *bwd_alltoall_recv_ptrs){
45 |
46 | MPI_Request grad_allreduce_reqs[2];
47 | usleep(FWD_EMB); //fwd
48 | //alltoall
49 | MPI_Alltoall(fwd_alltoall_send_ptrs, EMB_ALL2ALL_SIZE/num_proc, MPI_FLOAT, fwd_alltoall_recv_ptrs, EMB_ALL2ALL_SIZE/num_proc, MPI_FLOAT, MPI_COMM_WORLD);
50 |
51 | usleep(FWD_BOT_MLP); //fwd
52 | usleep(FWD_INTER); //fwd
53 |
54 | usleep(FWD_TOP_MLP); //fwd
55 |
56 | usleep(FWD_TOP_MLP*2); //bwd
57 | //allreduce
58 | //MPI_Allreduce(top_grad_ptr, sum_top_grad_ptr, TOP_MLP_SIZE, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
59 | MPI_Iallreduce(top_grad_ptr, sum_top_grad_ptr, TOP_MLP_SIZE, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD, &grad_allreduce_reqs[0]);
60 |
61 | usleep(FWD_INTER); //bwd
62 | usleep(FWD_BOT_MLP*2); //bwd
63 | //allreduce
64 | //MPI_Allreduce(bot_grad_ptr, sum_bot_grad_ptr, BOT_MLP_SIZE, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
65 | MPI_Iallreduce(bot_grad_ptr, sum_bot_grad_ptr, BOT_MLP_SIZE, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD, &grad_allreduce_reqs[1]);
66 |
67 | //alltoall
68 | MPI_Alltoall(bwd_alltoall_send_ptrs, EMB_ALL2ALL_SIZE/num_proc, MPI_FLOAT, bwd_alltoall_recv_ptrs, EMB_ALL2ALL_SIZE/num_proc, MPI_FLOAT, MPI_COMM_WORLD);
69 | usleep(FWD_EMB*2); //bwd
70 |
71 | MPI_Waitall(2, grad_allreduce_reqs, MPI_STATUSES_IGNORE);
72 |
73 | return 0;
74 | }
75 |
76 |
77 | int main(int argc, char *argv[]){
78 | int rank, world_size;
79 | double begin, elapse;
80 |
81 | MPI_Init(&argc,&argv);
82 | MPI_Comm_size(MPI_COMM_WORLD, &world_size);
83 | MPI_Comm_rank(MPI_COMM_WORLD, &rank);
84 |
85 | float* top_grad_ptr = (float *)calloc(TOP_MLP_SIZE, sizeof(float));
86 | float* sum_top_grad_ptr = (float *)calloc(TOP_MLP_SIZE, sizeof(float));
87 | float* bot_grad_ptr = (float *)calloc(BOT_MLP_SIZE, sizeof(float));
88 | float* sum_bot_grad_ptr = (float *)calloc(BOT_MLP_SIZE, sizeof(float));
89 |
90 | float* fwd_alltoall_send_ptrs = (float *)calloc(EMB_ALL2ALL_SIZE, sizeof(float));
91 | float* fwd_alltoall_recv_ptrs = (float *)calloc(EMB_ALL2ALL_SIZE, sizeof(float));
92 | float* bwd_alltoall_send_ptrs = (float *)calloc(EMB_ALL2ALL_SIZE, sizeof(float));
93 | float* bwd_alltoall_recv_ptrs = (float *)calloc(EMB_ALL2ALL_SIZE, sizeof(float));
94 |
95 | MPI_Barrier(MPI_COMM_WORLD);
96 |
97 | //warmup
98 | for(int wmp = 0; wmp < WARM_UP; wmp++){
99 | run_dlrm(world_size,
100 | top_grad_ptr,
101 | sum_top_grad_ptr,
102 | bot_grad_ptr,
103 | sum_bot_grad_ptr,
104 | fwd_alltoall_send_ptrs,
105 | fwd_alltoall_recv_ptrs,
106 | bwd_alltoall_send_ptrs,
107 | bwd_alltoall_recv_ptrs);
108 | }
109 |
110 | begin = MPI_Wtime();
111 | for(int iter = 0; iter < RUNS; iter++){
112 | run_dlrm(world_size,
113 | top_grad_ptr,
114 | sum_top_grad_ptr,
115 | bot_grad_ptr,
116 | sum_bot_grad_ptr,
117 | fwd_alltoall_send_ptrs,
118 | fwd_alltoall_recv_ptrs,
119 | bwd_alltoall_send_ptrs,
120 | bwd_alltoall_recv_ptrs);
121 | }
122 | elapse = (MPI_Wtime()-begin)/RUNS;
123 |
124 | if(rank == 0)
125 | printf("DLRM: Rank = %d, world_size = %d, global batch = %d, DLRM runtime per iteration = %f s\n", rank, world_size, 2048, elapse);
126 |
127 | MPI_Finalize();
128 | }
129 |
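`run_dlrm`'s two `MPI_Alltoall` calls model the embedding exchange: the 2048x128 activation block (`EMB_ALL2ALL_SIZE` floats) is split evenly, so each rank sends `EMB_ALL2ALL_SIZE/num_proc` floats to every peer, while the MLP gradients ride on nonblocking allreduces that overlap the remaining backward compute. A standalone sketch of such an even alltoall (buffer names illustrative):

```cpp
#include <mpi.h>
#include <stdlib.h>
#include <assert.h>

#define EMB_ALL2ALL_SIZE 262144   // 2048 * 128, as in dlrm.cpp

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int num_proc;
    MPI_Comm_size(MPI_COMM_WORLD, &num_proc);
    assert(EMB_ALL2ALL_SIZE % num_proc == 0);   // even split across ranks

    float *send = (float *)calloc(EMB_ALL2ALL_SIZE, sizeof(float));
    float *recv = (float *)calloc(EMB_ALL2ALL_SIZE, sizeof(float));

    // every rank contributes an equal block of embedding activations to every peer
    MPI_Alltoall(send, EMB_ALL2ALL_SIZE / num_proc, MPI_FLOAT,
                 recv, EMB_ALL2ALL_SIZE / num_proc, MPI_FLOAT, MPI_COMM_WORLD);

    free(send); free(recv);
    MPI_Finalize();
    return 0;
}
```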
/proxies/gpt3_moe.cpp:
--------------------------------------------------------------------------------
 10 | #include <stdio.h>
 11 | #include <stdlib.h>
 12 | #include <string.h>
 13 | #include <unistd.h>
 14 | #include <math.h>
 15 | #include <assert.h>
 16 | #include <mpi.h>
 17 | 
 18 | #define RUNS 128
 19 | #define WARM_UP 4
 20 | 
 21 | #define NUM_L 96
 22 | #define NUM_MOE 16
 23 | #define ACC_STEP_SCALE 2
 24 | 
 25 | // msg sizes for GPT-3 (M_dim=12288) with micro-batch size=1 and seq_len=2048
 26 | #define PIPE_P2P_SIZE 25165824
 27 | #define MP_ALLREDUCE_SIZE 25165824
 28 | #define MOE_ALL2ALL_SIZE 25165824
 29 | #define MHA_SIZE 603979776 // num params of mha in a layer
 30 | #define MLP_SIZE 1207959552 // num params of mlp in a layer
 31 | 
 32 | // runtime in us (1e-6 s)
 33 | #define FWD_MHA 22367
 34 | #define BWD_MHA 44734
 35 | #define FWD_MLP 41293
 36 | #define BWD_MLP 82586
 37 | 
 38 | int run_data_moe_pipe(int grad_acc_step, int stage_id, int num_stage, int num_moe,
 39 |                       float *grad_ptr,
 40 |                       float *sum_grad_ptr,
 41 |                       float *moe_grad_ptr,
 42 |                       float *sum_moe_grad_ptr,
 43 |                       float *fwd_send_buff,
 44 |                       float *fwd_recv_buff,
 45 |                       float *bwd_send_buff,
 46 |                       float *bwd_recv_buff,
 47 |                       float **moe_fwd_alltoall_send_ptrs,
 48 |                       float **moe_fwd_alltoall_recv_ptrs,
 49 |                       float **moe_bwd_alltoall_send_ptrs,
 50 |                       float **moe_bwd_alltoall_recv_ptrs,
 51 |                       MPI_Comm dp_allreduce_comm,
 52 |                       MPI_Comm pp_p2p_comm,
 53 |                       MPI_Comm moe_alltoall_comm,
 54 |                       MPI_Comm moe_allreduce_comm){
 55 | 
 56 |     MPI_Request fwd_reqs[2];
 57 |     MPI_Request bwd_reqs[2];
 58 |     for(int i=0; i<2; i++){
 59 |         fwd_reqs[i] = MPI_REQUEST_NULL;
 60 |         bwd_reqs[i] = MPI_REQUEST_NULL;
 61 |     }
 62 | 
 63 |     //forward
 64 |     for(int i=0; i<grad_acc_step; i++){
--------------------------------------------------------------------------------
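The constants in gpt3.cpp follow directly from M_dim=12288: one micro-batch of activations is seq_len x M_dim = 2048 x 12288 = 25165824 floats, a transformer layer holds roughly 12 x M_dim^2 = 1811939328 weights (4*M_dim^2 in attention plus 8*M_dim^2 in the 4x-expanded MLP), and one of the 4 model shards therefore owns 452984832 of them. A compile-time check of this arithmetic (a hypothetical helper, not in the sources; needs -std=c++17):

// gpt3_sizes.cpp -- sanity-checks the message-size constants.
#include <cstdint>

constexpr int64_t M_DIM = 12288, SEQ_LEN = 2048, SHARDS = 4;
static_assert(SEQ_LEN * M_DIM == 25165824);               // PIPE_P2P_SIZE / MP_ALLREDUCE_SIZE
static_assert(12 * M_DIM * M_DIM == 1811939328);          // params per transformer layer
static_assert(12 * M_DIM * M_DIM / SHARDS == 452984832);  // DP_ALLREDUCE_SIZE (one shard)

int main(){ return 0; }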
/proxies/gpt3_moe_one_pipe_step.cpp:
--------------------------------------------------------------------------------
  1 | 
  2 | #include <stdio.h>
  3 | #include <stdlib.h>
  4 | #include <string.h>
  5 | #include <unistd.h>
  6 | #include <math.h>
  7 | #include <assert.h>
  8 | #include <mpi.h>
  9 | 
 10 | #define RUNS 1
 11 | #define WARM_UP 0
 12 | 
 13 | #define NUM_L 96
 14 | #define NUM_MOE 16
 15 | #define ACC_STEP_SCALE 2
 16 | 
 17 | // msg sizes for GPT-3 (M_dim=12288) with micro-batch size=1 and seq_len=2048
 18 | #define PIPE_P2P_SIZE 25165824
 19 | #define MP_ALLREDUCE_SIZE 25165824
 20 | #define MOE_ALL2ALL_SIZE 25165824
 21 | #define MHA_SIZE 603979776 // num params of mha in a layer
 22 | #define MLP_SIZE 1207959552 // num params of mlp in a layer
 23 | 
 24 | // runtime in us (1e-6 s)
 25 | #define FWD_MHA 22367
 26 | #define BWD_MHA 44734
 27 | #define FWD_MLP 41293
 28 | #define BWD_MLP 82586
 29 | 
 30 | int run_one_step_pipe_moe(int grad_acc_step, int stage_id, int num_stage, int num_moe,
 31 |                           float *grad_ptr,
 32 |                           float *sum_grad_ptr,
 33 |                           float *moe_grad_ptr,
 34 |                           float *sum_moe_grad_ptr,
 35 |                           float *fwd_send_buff,
 36 |                           float *fwd_recv_buff,
 37 |                           float *bwd_send_buff,
 38 |                           float *bwd_recv_buff,
 39 |                           float **moe_fwd_alltoall_send_ptrs,
 40 |                           float **moe_fwd_alltoall_recv_ptrs,
 41 |                           float **moe_bwd_alltoall_send_ptrs,
 42 |                           float **moe_bwd_alltoall_recv_ptrs,
 43 |                           MPI_Comm dp_allreduce_comm,
 44 |                           MPI_Comm pp_p2p_comm,
 45 |                           MPI_Comm moe_alltoall_comm,
 46 |                           MPI_Comm moe_allreduce_comm){
 47 | 
 48 |     MPI_Request reqs[2];
 49 | 
 50 |     if(stage_id % 2 == 0){
 51 |         MPI_Irecv(bwd_recv_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id+1, 1, pp_p2p_comm, &reqs[0]); //receive input of next mb
 52 |         usleep(FWD_MHA); //compute fwd
 53 |         usleep(FWD_MLP/num_moe);
 54 | 
 55 |         for(int j=0; j<2; j++){ //all-to-all for MoE
 56 |             MPI_Alltoall(moe_fwd_alltoall_send_ptrs[j], MOE_ALL2ALL_SIZE/num_moe, MPI_FLOAT, moe_fwd_alltoall_recv_ptrs[j], MOE_ALL2ALL_SIZE/num_moe, MPI_FLOAT, moe_alltoall_comm);
 57 |         }
 58 | 
 59 |         MPI_Isend(fwd_send_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id+1, 2, pp_p2p_comm, &reqs[1]); //send output of current mb
 60 |         MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
 61 |     }else{
 62 |         MPI_Irecv(fwd_recv_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id-1, 2, pp_p2p_comm, &reqs[1]); //receive input of next mb
 63 |         usleep(BWD_MHA); //compute bwd
 64 |         usleep(BWD_MLP/num_moe);
 65 | 
 66 |         for(int j=0; j<2; j++){ //all-to-all for MoE
 67 |             MPI_Alltoall(moe_bwd_alltoall_send_ptrs[j], MOE_ALL2ALL_SIZE/num_moe, MPI_FLOAT, moe_bwd_alltoall_recv_ptrs[j], MOE_ALL2ALL_SIZE/num_moe, MPI_FLOAT, moe_alltoall_comm);
 68 |         }
 69 | 
 70 |         MPI_Isend(bwd_send_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id-1, 1, pp_p2p_comm, &reqs[0]); //send output of current mb
 71 |         MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
 72 |     }
 73 | 
 74 |     return 0;
 75 | }
 76 | 
 77 | 
 78 | int main(int argc, char *argv[]){
 79 |     int rank, world_size;
 80 |     double begin, elapse;
 81 | 
 82 |     //number of pipeline stages
 83 |     int num_stage = NUM_L;
 84 |     int num_layer = NUM_L;
 85 |     int acc_step_scale = ACC_STEP_SCALE;
 86 |     //number of micro-batches in an iteration
 87 |     int grad_acc_step = num_stage * acc_step_scale;
 88 | 
 89 |     if(argc == 2){
 90 |         num_stage = atoi(argv[1]);
 91 |         num_layer = atoi(argv[1]);
 92 |     }
 93 |     if(argc == 3){
 94 |         num_stage = atoi(argv[1]);
 95 |         num_layer = atoi(argv[1]);
 96 |         acc_step_scale = atoi(argv[2]);
 97 |         grad_acc_step = num_stage * acc_step_scale;
 98 |     }
 99 | 
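For the MoE variant the per-layer weights split differently: the attention block (MHA_SIZE = 4 x 12288^2 = 603979776) stays replicated across the data-parallel group, while the MLP becomes 16 experts, so each rank holds MLP_SIZE/NUM_MOE weights and its MLP compute shrinks to FWD_MLP/NUM_MOE ~ 2580 us. In the all-to-alls, MOE_ALL2ALL_SIZE/num_moe = 1572864 floats (6 MiB) go to each of the 16 peers, i.e. the full activation buffer leaves every rank. A quick check of those numbers (a hypothetical helper, not in the sources):

// moe_payload.cpp -- back-of-envelope check of the MoE all-to-all payload.
#include <cstdio>

int main(){
    const long long moe_all2all = 25165824;  // MOE_ALL2ALL_SIZE in floats
    const int num_moe = 16;                  // experts = alltoall group size
    const long long per_peer = moe_all2all / num_moe;
    printf("per-peer payload: %lld floats = %.1f MiB\n",
           per_peer, per_peer * 4.0 / (1024 * 1024));     // 1572864 floats = 6.0 MiB
    printf("per-expert MLP fwd time: %d us\n", 41293 / num_moe); // ~2580 us
    return 0;
}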
100 |     MPI_Init(&argc,&argv);
101 |     MPI_Comm_size(MPI_COMM_WORLD, &world_size);
102 |     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
103 |     MPI_Comm dp_allreduce_comm;
104 |     MPI_Comm pp_p2p_comm;
105 |     MPI_Comm moe_alltoall_comm;
106 |     MPI_Comm moe_allreduce_comm;
107 | 
108 |     int num_moe = NUM_MOE;
109 |     //the number of processes should be a multiple of num_stage * num_moe
110 |     assert(world_size % (num_stage * num_moe) == 0);
111 | 
112 |     int dp_group_rank, pp_p2p_group_rank;
113 |     int dp_group_size, pp_p2p_group_size;
114 |     int moe_allreduce_group_rank, moe_alltoall_group_rank;
115 |     int moe_allreduce_group_size, moe_alltoall_group_size;
116 | 
117 |     int dp_group_color = rank % num_stage;
118 |     MPI_Comm_split(MPI_COMM_WORLD, dp_group_color, rank, &dp_allreduce_comm);
119 |     MPI_Comm_rank(dp_allreduce_comm, &dp_group_rank);
120 |     MPI_Comm_size(dp_allreduce_comm, &dp_group_size);
121 | 
122 |     MPI_Comm_split(MPI_COMM_WORLD, dp_group_rank, rank, &pp_p2p_comm);
123 |     MPI_Comm_rank(pp_p2p_comm, &pp_p2p_group_rank);
124 |     MPI_Comm_size(pp_p2p_comm, &pp_p2p_group_size);
125 | 
126 |     int moe_allreduce_group_color = dp_group_rank % num_moe;
127 |     MPI_Comm_split(dp_allreduce_comm, moe_allreduce_group_color, dp_group_rank, &moe_allreduce_comm);
128 | 
129 |     MPI_Comm_rank(moe_allreduce_comm, &moe_allreduce_group_rank);
130 |     MPI_Comm_size(moe_allreduce_comm, &moe_allreduce_group_size);
131 | 
132 |     MPI_Comm_split(dp_allreduce_comm, moe_allreduce_group_rank, dp_group_rank, &moe_alltoall_comm);
133 |     MPI_Comm_rank(moe_alltoall_comm, &moe_alltoall_group_rank);
134 |     MPI_Comm_size(moe_alltoall_comm, &moe_alltoall_group_size);
135 | 
136 |     assert(pp_p2p_group_size == num_stage);
137 |     assert(moe_alltoall_group_size == num_moe);
138 |     assert(dp_group_size == num_moe * moe_allreduce_group_size);
139 | 
140 |     int stage_id = pp_p2p_group_rank;
141 | 
142 |     float* grad_ptr = (float *)calloc(MHA_SIZE, sizeof(float));
143 |     float* sum_grad_ptr = (float *)calloc(MHA_SIZE, sizeof(float));
144 |     float* moe_grad_ptr = (float *)calloc(MLP_SIZE/num_moe, sizeof(float));
145 |     float* sum_moe_grad_ptr = (float *)calloc(MLP_SIZE/num_moe, sizeof(float));
146 | 
147 |     float* fwd_send_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
148 |     float* fwd_recv_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
149 |     float* bwd_send_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
150 |     float* bwd_recv_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
151 | 
152 |     float* moe_fwd_alltoall_send_ptrs[2];
153 |     float* moe_fwd_alltoall_recv_ptrs[2];
154 |     float* moe_bwd_alltoall_send_ptrs[2];
155 |     float* moe_bwd_alltoall_recv_ptrs[2];
156 |     for(int i=0; i<2; i++){
157 |         moe_fwd_alltoall_send_ptrs[i] = (float *)calloc(MOE_ALL2ALL_SIZE, sizeof(float));
158 |         moe_fwd_alltoall_recv_ptrs[i] = (float *)calloc(MOE_ALL2ALL_SIZE, sizeof(float));
159 |         moe_bwd_alltoall_send_ptrs[i] = (float *)calloc(MOE_ALL2ALL_SIZE, sizeof(float));
160 |         moe_bwd_alltoall_recv_ptrs[i] = (float *)calloc(MOE_ALL2ALL_SIZE, sizeof(float));
161 |     }
162 | 
163 |     MPI_Barrier(MPI_COMM_WORLD);
164 | 
165 |     //warmup
166 |     for(int wmp = 0; wmp < WARM_UP; wmp++){
167 |         run_one_step_pipe_moe(grad_acc_step, stage_id, num_stage, num_moe,
168 |                               grad_ptr,
169 |                               sum_grad_ptr,
170 |                               moe_grad_ptr,
171 |                               sum_moe_grad_ptr,
172 |                               fwd_send_buff,
173 |                               fwd_recv_buff,
174 |                               bwd_send_buff,
175 |                               bwd_recv_buff,
176 |                               moe_fwd_alltoall_send_ptrs,
177 |                               moe_fwd_alltoall_recv_ptrs,
178 |                               moe_bwd_alltoall_send_ptrs,
179 |                               moe_bwd_alltoall_recv_ptrs,
180 |                               dp_allreduce_comm,
181 |                               pp_p2p_comm,
182 |                               moe_alltoall_comm,
183 |                               moe_allreduce_comm);
184 |     }
185 | 
186 |     begin = MPI_Wtime();
187 |     for(int iter = 0; iter < RUNS; iter++){
188 |         run_one_step_pipe_moe(grad_acc_step, stage_id, num_stage, num_moe,
189 |                               grad_ptr,
190 |                               sum_grad_ptr,
191 |                               moe_grad_ptr,
192 |                               sum_moe_grad_ptr,
193 |                               fwd_send_buff,
194 |                               fwd_recv_buff,
195 |                               bwd_send_buff,
196 |                               bwd_recv_buff,
197 |                               moe_fwd_alltoall_send_ptrs,
198 |                               moe_fwd_alltoall_recv_ptrs,
199 |                               moe_bwd_alltoall_send_ptrs,
200 |                               moe_bwd_alltoall_recv_ptrs,
201 |                               dp_allreduce_comm,
202 |                               pp_p2p_comm,
203 |                               moe_alltoall_comm,
204 |                               moe_allreduce_comm);
205 |     }
206 |     elapse = (MPI_Wtime()-begin)/RUNS;
207 | 
208 |     if(rank == 0)
209 |         printf("MoEs: Rank = %d, world_size = %d, layers = %d, stages = %d, num_moe = %d, acc_step = %d, total_params = %d B, global batch = %d, GPT-3 runtime for one pipeline step with MoE = %f s\n", rank, world_size, num_layer, num_stage, num_moe, grad_acc_step, 1811939328/1024*num_layer/1024/1024, world_size*acc_step_scale, elapse);
210 | 
211 |     MPI_Finalize();
212 | }
213 | 
--------------------------------------------------------------------------------
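In run_one_step_pipe_moe, even and odd stages form pairs: within one steady-state 1F1B step the even stage runs a forward micro-batch while its odd neighbor runs a backward one, and each posts its MPI_Irecv before computing so the incoming activation/gradient overlaps with compute. Tag 1 carries backward traffic (odd to even) and tag 2 forward traffic (even to odd). The pairing in isolation (a minimal sketch, assuming an even number of ranks):

// pair_exchange.cpp -- sketch of the even/odd neighbor exchange with tags 1/2.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]){
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    float out = (float)rank, in = -1.0f;
    MPI_Request reqs[2];
    int peer     = (rank % 2 == 0) ? rank + 1 : rank - 1;
    int recv_tag = (rank % 2 == 0) ? 1 : 2;    // even stages receive "backward" traffic
    int send_tag = (rank % 2 == 0) ? 2 : 1;    // and send "forward" traffic
    MPI_Irecv(&in, 1, MPI_FLOAT, peer, recv_tag, MPI_COMM_WORLD, &reqs[0]);
    // ... compute would overlap here, as in the proxy ...
    MPI_Isend(&out, 1, MPI_FLOAT, peer, send_tag, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %f from %d\n", rank, in, peer);
    MPI_Finalize();
    return 0;
}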
/proxies/gpt3_one_pipe_step.cpp:
--------------------------------------------------------------------------------
  1 | #include <stdio.h>
  2 | #include <stdlib.h>
  3 | #include <string.h>
  4 | #include <unistd.h>
  5 | #include <math.h>
  6 | #include <assert.h>
  7 | #include <mpi.h>
  8 | 
  9 | #define WARM_UP 0
 10 | #define RUNS 1
 11 | 
 12 | #define NUM_L 96
 13 | #define ACC_STEP_SCALE 2
 14 | #define MODEL_SHARDS 4
 15 | 
 16 | // msg sizes for GPT-3 (M_dim=12288) with micro-batch size=1 and seq_len=2048
 17 | // we set model shards = 4
 18 | #define PIPE_P2P_SIZE 25165824
 19 | #define MP_ALLREDUCE_SIZE 25165824
 20 | #define MOE_ALL2ALL_SIZE 25165824
 21 | //#define DP_ALLREDUCE_SIZE 452984832+154389504
 22 | #define DP_ALLREDUCE_SIZE 452984832 // num params of one shard of a layer
 23 | 
 24 | // runtime in us (1e-6 s) for each model shard of each layer
 25 | #define FWD_RT 15915
 26 | #define BWD_RT 31830
 27 | #define BWD_RT_GPIPE 47745
 28 | 
 29 | int run_one_step_pipe_model(int grad_acc_step, int stage_id, int num_stage,
 30 |                             float *grad_ptr,
 31 |                             float *sum_grad_ptr,
 32 |                             float *fwd_send_buff,
 33 |                             float *fwd_recv_buff,
 34 |                             float *bwd_send_buff,
 35 |                             float *bwd_recv_buff,
 36 |                             float **mp_fwd_inter_ptrs,
 37 |                             float **sum_mp_fwd_inter_ptrs,
 38 |                             float **mp_bwd_grad_ptrs,
 39 |                             float **sum_mp_bwd_grad_ptrs,
 40 |                             MPI_Comm dp_allreduce_comm,
 41 |                             MPI_Comm mp_allreduce_comm,
 42 |                             MPI_Comm pp_p2p_comm){
 43 | 
 44 |     MPI_Request reqs[2];
 45 | 
 46 |     if(stage_id % 2 == 0){
 47 |         MPI_Irecv(bwd_recv_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id+1, 1, pp_p2p_comm, &reqs[0]); //receive input for next mb
 48 |         usleep(FWD_RT); //compute fwd
 49 |         for(int j=0; j<2; j++){
 50 |             MPI_Allreduce(mp_fwd_inter_ptrs[j], sum_mp_fwd_inter_ptrs[j], MP_ALLREDUCE_SIZE, MPI_FLOAT, MPI_SUM, mp_allreduce_comm);
 51 |         }
 52 |         MPI_Isend(fwd_send_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id+1, 2, pp_p2p_comm, &reqs[1]); //send output of current mb
 53 |         MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
 54 |     }else{
 55 |         MPI_Irecv(fwd_recv_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id-1, 2, pp_p2p_comm, &reqs[1]); //receive input for next mb
 56 |         usleep(BWD_RT); //compute bwd
 57 |         for(int j=0; j<2; j++){
 58 |             MPI_Allreduce(mp_bwd_grad_ptrs[j], sum_mp_bwd_grad_ptrs[j], MP_ALLREDUCE_SIZE, MPI_FLOAT, MPI_SUM, mp_allreduce_comm);
 59 |         }
 60 |         MPI_Isend(bwd_send_buff, PIPE_P2P_SIZE, MPI_FLOAT, stage_id-1, 1, pp_p2p_comm, &reqs[0]); //send output of current mb
 61 |         MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
 62 |     }
 63 | 
 64 |     return 0;
 65 | }
 66 | 
 67 | 
 68 | int main(int argc, char *argv[]){
 69 |     int rank, world_size;
 70 |     double begin, elapse;
 71 | 
 72 |     //number of pipeline stages
 73 |     int num_stage = NUM_L;
 74 |     int num_layer = NUM_L;
 75 |     int acc_step_scale = ACC_STEP_SCALE;
 76 |     //number of micro-batches in an iteration
 77 |     int grad_acc_step = num_stage * acc_step_scale;
 78 | 
 79 |     if(argc == 2){
 80 |         num_stage = atoi(argv[1]);
 81 |         num_layer = atoi(argv[1]);
 82 |     }
 83 |     if(argc == 3){
 84 |         num_stage = atoi(argv[1]);
 85 |         num_layer = atoi(argv[1]);
 86 |         acc_step_scale = atoi(argv[2]);
 87 |         grad_acc_step = num_stage * acc_step_scale;
 88 |     }
 89 | 
 90 |     MPI_Init(&argc,&argv);
 91 |     MPI_Comm_size(MPI_COMM_WORLD, &world_size);
 92 |     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 93 |     MPI_Comm dp_allreduce_comm;
 94 |     MPI_Comm mp_pp_comm;
 95 |     MPI_Comm mp_allreduce_comm;
 96 |     MPI_Comm pp_p2p_comm;
 97 | 
 98 |     //the number of processes should be a multiple of num_stage*MODEL_SHARDS = 384
 99 |     assert(world_size % (num_stage*MODEL_SHARDS) == 0);
100 | 
101 |     int dp_allreduce_group_rank, mp_pp_group_rank, mp_allreduce_group_rank, pp_p2p_group_rank;
102 |     int dp_allreduce_group_size, mp_pp_group_size, mp_allreduce_group_size, pp_p2p_group_size;
103 | 
104 |     int dp_allreduce_group_color = rank % (num_stage*MODEL_SHARDS);
105 |     MPI_Comm_split(MPI_COMM_WORLD, dp_allreduce_group_color, rank, &dp_allreduce_comm);
106 |     MPI_Comm_rank(dp_allreduce_comm, &dp_allreduce_group_rank);
107 |     MPI_Comm_size(dp_allreduce_comm, &dp_allreduce_group_size);
108 | 
109 |     MPI_Comm_split(MPI_COMM_WORLD, dp_allreduce_group_rank, rank, &mp_pp_comm);
110 |     MPI_Comm_rank(mp_pp_comm, &mp_pp_group_rank);
111 |     MPI_Comm_size(mp_pp_comm, &mp_pp_group_size);
112 | 
113 |     int mp_allreduce_group_color = mp_pp_group_rank % num_stage;
114 |     MPI_Comm_split(mp_pp_comm, mp_allreduce_group_color, mp_pp_group_rank, &mp_allreduce_comm);
115 | 
116 |     MPI_Comm_rank(mp_allreduce_comm, &mp_allreduce_group_rank);
117 |     MPI_Comm_size(mp_allreduce_comm, &mp_allreduce_group_size);
118 | 
119 |     MPI_Comm_split(mp_pp_comm, mp_allreduce_group_rank, mp_pp_group_rank, &pp_p2p_comm);
120 |     MPI_Comm_rank(pp_p2p_comm, &pp_p2p_group_rank);
121 |     MPI_Comm_size(pp_p2p_comm, &pp_p2p_group_size);
122 | 
123 |     assert(pp_p2p_group_size == num_stage);
124 |     assert(mp_allreduce_group_size == MODEL_SHARDS);
125 |     assert(dp_allreduce_group_size == world_size/(num_stage*MODEL_SHARDS));
126 | 
127 |     int stage_id = pp_p2p_group_rank;
128 | 
129 |     float* grad_ptr = (float *)calloc(DP_ALLREDUCE_SIZE, sizeof(float));
130 |     float* sum_grad_ptr = (float *)calloc(DP_ALLREDUCE_SIZE, sizeof(float));
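The total_params value printed by these GPT-3 proxies is the same 12 x M_dim^2 per-layer count reduced to billions in integer arithmetic: 1811939328/1024*num_layer/1024/1024 evaluates left to right, giving 1769472 * 96 / 1024 / 1024 = 162 (billion) for the default 96 layers, close to GPT-3's 175 B once embeddings and the output head are added on top of the transformer stack. Checked explicitly (a hypothetical helper, not in the sources):

// total_params.cpp -- reproduces the printf's integer arithmetic.
#include <cstdio>

int main(){
    const int num_layer = 96;
    // identical left-to-right integer arithmetic to the proxies' printf
    int total_params_b = 1811939328 / 1024 * num_layer / 1024 / 1024;
    printf("total_params = %d B\n", total_params_b);  // prints 162
    return 0;
}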
131 | 
132 |     float* fwd_send_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
133 |     float* fwd_recv_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
134 |     float* bwd_send_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
135 |     float* bwd_recv_buff = (float *)calloc(PIPE_P2P_SIZE, sizeof(float));
136 | 
137 |     float* mp_fwd_inter_ptrs[2];
138 |     float* sum_mp_fwd_inter_ptrs[2];
139 |     float* mp_bwd_grad_ptrs[2];
140 |     float* sum_mp_bwd_grad_ptrs[2];
141 |     for(int i=0; i<2; i++){
142 |         mp_fwd_inter_ptrs[i] = (float *)calloc(MP_ALLREDUCE_SIZE, sizeof(float));
143 |         sum_mp_fwd_inter_ptrs[i] = (float *)calloc(MP_ALLREDUCE_SIZE, sizeof(float));
144 |         mp_bwd_grad_ptrs[i] = (float *)calloc(MP_ALLREDUCE_SIZE, sizeof(float));
145 |         sum_mp_bwd_grad_ptrs[i] = (float *)calloc(MP_ALLREDUCE_SIZE, sizeof(float));
146 |     }
147 | 
148 |     float* moe_fwd_alltoall_ptrs[2];
149 |     float* moe_bwd_alltoall_ptrs[2];
150 |     for(int i=0; i<2; i++){
151 |         moe_fwd_alltoall_ptrs[i] = (float *)calloc(MOE_ALL2ALL_SIZE, sizeof(float));
152 |         moe_bwd_alltoall_ptrs[i] = (float *)calloc(MOE_ALL2ALL_SIZE, sizeof(float));
153 |     }
154 | 
155 |     MPI_Barrier(MPI_COMM_WORLD);
156 | 
157 |     //warmup
158 |     for(int wmp = 0; wmp < WARM_UP; wmp++){
159 |         run_one_step_pipe_model(grad_acc_step, stage_id, num_stage,
160 |                                 grad_ptr,
161 |                                 sum_grad_ptr,
162 |                                 fwd_send_buff,
163 |                                 fwd_recv_buff,
164 |                                 bwd_send_buff,
165 |                                 bwd_recv_buff,
166 |                                 mp_fwd_inter_ptrs,
167 |                                 sum_mp_fwd_inter_ptrs,
168 |                                 mp_bwd_grad_ptrs,
169 |                                 sum_mp_bwd_grad_ptrs,
170 |                                 dp_allreduce_comm,
171 |                                 mp_allreduce_comm,
172 |                                 pp_p2p_comm);
173 |     }
174 | 
175 |     begin = MPI_Wtime();
176 |     for(int iter = 0; iter < RUNS; iter++){
177 |         run_one_step_pipe_model(grad_acc_step, stage_id, num_stage,
178 |                                 grad_ptr,
179 |                                 sum_grad_ptr,
180 |                                 fwd_send_buff,
181 |                                 fwd_recv_buff,
182 |                                 bwd_send_buff,
183 |                                 bwd_recv_buff,
184 |                                 mp_fwd_inter_ptrs,
185 |                                 sum_mp_fwd_inter_ptrs,
186 |                                 mp_bwd_grad_ptrs,
187 |                                 sum_mp_bwd_grad_ptrs,
188 |                                 dp_allreduce_comm,
189 |                                 mp_allreduce_comm,
190 |                                 pp_p2p_comm);
191 |     }
192 |     elapse = (MPI_Wtime()-begin)/RUNS;
193 | 
194 |     if(rank == 0)
195 |         printf("1F1B: Rank = %d, world_size = %d, layers = %d, stages = %d, acc_step = %d, total_params = %d B, global batch = %d, GPT-3 runtime for one pipeline step = %f s\n", rank, world_size, num_layer, num_stage, grad_acc_step, 1811939328/1024*num_layer/1024/1024, world_size*acc_step_scale/MODEL_SHARDS, elapse);
196 | 
197 |     MPI_Finalize();
198 | }
199 | 
--------------------------------------------------------------------------------
/proxies/resnet152.cpp:
--------------------------------------------------------------------------------
  1 | /*********************************************************************
  2 |  *
  3 |  * Description: C++/MPI proxy for ResNet-152 distributed training
  4 |  *              with data parallelism
  5 |  * Author: Shigang Li
  6 |  * Email: shigangli.cs@gmail.com
  7 |  *
  8 |  *********************************************************************/
  9 | 
 10 | #include <stdio.h>
 11 | #include <stdlib.h>
 12 | #include <string.h>
 13 | #include <unistd.h>
 14 | #include <math.h>
 15 | #include <assert.h>
 16 | #include <mpi.h>
 17 | 
 18 | #define WARM_UP 8
 19 | #define RUNS 128
 20 | 
 21 | // allreduce sizes for gradients with message aggregation
 22 | #define NUM_B 10
 23 | int allreduce_sizes[NUM_B] = {6511592, 6567936, 5905920, 6113280, 6176256, 6112768, 6176256, 6112768, 5321216, 5194816};
 24 | 
 25 | // batchsize = 128
 26 | // Suggested world_size <= 256, which corresponds to a global batch_size <= 32 K
 27 | // A100 GPU
 28 | // runtime in us (1e-6 s) for each iteration
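The ten aggregated buckets in allreduce_sizes sum to 60192808 floats, matching ResNet-152's roughly 60.2 M parameters, i.e. about 230 MiB of fp32 gradients reduced per iteration. A quick check (a hypothetical helper, not in the sources):

// resnet_params.cpp -- sums the aggregated gradient buckets.
#include <cstdio>

int main(){
    const int sizes[10] = {6511592, 6567936, 5905920, 6113280, 6176256,
                           6112768, 6176256, 6112768, 5321216, 5194816};
    long long total = 0;
    for(int i = 0; i < 10; i++) total += sizes[i];
    printf("params: %lld (~%.1f MiB fp32)\n",
           total, total * 4.0 / (1024 * 1024));  // 60192808 (~229.6 MiB)
    return 0;
}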
each iteration 29 | int fwd_rt_whole_model = 119000; 30 | int bwd_rt_per_B = 23800; 31 | 32 | int run_data_parallel(float** grad_ptrs, float** sum_grad_ptrs){ 33 | 34 | //forward 35 | usleep(fwd_rt_whole_model); //compute 36 | 37 | //backward 38 | MPI_Request grad_allreduce_reqs[NUM_B]; 39 | //must initialize with MPI_REQUEST_NULL 40 | for(int i=0; i 1) 46 | MPI_Testany(NUM_B, grad_allreduce_reqs, &index, &flag, MPI_STATUSES_IGNORE); //advancing MPI in the background 47 | 48 | usleep(bwd_rt_per_B); //compute 49 | 50 | MPI_Iallreduce(grad_ptrs[i], sum_grad_ptrs[i], allreduce_sizes[i], MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD, &grad_allreduce_reqs[i]); 51 | } 52 | 53 | MPI_Waitall(NUM_B, grad_allreduce_reqs, MPI_STATUSES_IGNORE); 54 | return 0; 55 | } 56 | 57 | int main(int argc, char *argv[]){ 58 | int rank, world_size; 59 | 60 | MPI_Init(&argc,&argv); 61 | MPI_Comm_size(MPI_COMM_WORLD, &world_size); 62 | MPI_Comm_rank(MPI_COMM_WORLD, &rank); 63 | 64 | float* grad_ptrs[NUM_B]; 65 | float* sum_grad_ptrs[NUM_B]; 66 | for(int i=0; i 11 | #include 12 | #include 13 | #include 14 | #include 15 | #include 16 | #include 17 | 18 | #define WARM_UP 8 19 | #define RUNS 128 20 | 21 | // allreduce sizes for gradients with message aggregation 22 | #define NUM_B 10 23 | int allreduce_sizes[NUM_B] = {6511592, 6567936, 5905920, 6113280, 6176256, 6112768, 6176256, 6112768, 5321216, 5194816}; 24 | 25 | // Global batch_size <= 32 K 26 | // A100 GPU 27 | // runtime in us (10E-6) for each iteration 28 | // corresponding to local batch size = {128, 64, 32, 16} 29 | int local_batch_size_arr[4] = {128, 64, 32, 16}; 30 | int fwd_rt_whole_model_arr[4] = {119000, 63000, 36000, 27667}; 31 | int bwd_rt_per_B_arr[4] = {23800, 12600, 7200, 5533}; 32 | 33 | int run_data_parallel(float** grad_ptrs, float** sum_grad_ptrs, int fwd_rt_whole_model, int bwd_rt_per_B){ 34 | 35 | //forward 36 | usleep(fwd_rt_whole_model); //compute 37 | 38 | //backward 39 | MPI_Request grad_allreduce_reqs[NUM_B]; 40 | //must initialize with MPI_REQUEST_NULL 41 | for(int i=0; i 1) 47 | MPI_Testany(NUM_B, grad_allreduce_reqs, &index, &flag, MPI_STATUSES_IGNORE); //advancing MPI in the background 48 | 49 | usleep(bwd_rt_per_B); //compute 50 | 51 | MPI_Iallreduce(grad_ptrs[i], sum_grad_ptrs[i], allreduce_sizes[i], MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD, &grad_allreduce_reqs[i]); 52 | } 53 | 54 | MPI_Waitall(NUM_B, grad_allreduce_reqs, MPI_STATUSES_IGNORE); 55 | return 0; 56 | } 57 | 58 | int main(int argc, char *argv[]){ 59 | int rank, world_size; 60 | 61 | MPI_Init(&argc,&argv); 62 | MPI_Comm_size(MPI_COMM_WORLD, &world_size); 63 | MPI_Comm_rank(MPI_COMM_WORLD, &rank); 64 | 65 | float* grad_ptrs[NUM_B]; 66 | float* sum_grad_ptrs[NUM_B]; 67 | for(int i=0; i