├── .gitattributes ├── LICENSE ├── ddns ├── CloudFlare-ddns.sh ├── CloudXNS-ddns.sh ├── Dnspod-ddns.sh └── Readme.md ├── module ├── readme.md ├── tcp_scalable_re │ ├── Makefile │ ├── make.sh │ └── tcp_scalable_re.c └── tcp_tsunami │ ├── Makefile │ ├── make.sh │ └── tcp_tsunami.c ├── readme.md ├── shellbox ├── 99-shellbox ├── README.md └── shellbox.sh ├── sss ├── README.md ├── ss ├── sss.svg └── update_list └── useful-commands ├── alibaba-selenium.py ├── arknights_preparation.py ├── ass.sh ├── boot_patch_app.sh ├── boot_patch_miui.sh ├── boot_patch_qemu.sh ├── branch.sh ├── cat-finite ├── copy_libs.sh ├── cut.sh ├── debootstrap.sh ├── dir-running.sh ├── domain-to-ip.sh ├── excel_spliter.py ├── extract_apex.sh ├── file-hexo-install-global.sh ├── genshin.py ├── genshin_loot.py ├── get-latest-chromium-tag.sh ├── github-https-sed.sh ├── ipv6.sh ├── md5check.sh ├── move-example.sh ├── msd.sh ├── mtrp.py ├── okteto_saver.sh ├── opsetup.sh ├── repo.sh ├── saveapt.sh ├── ss-local.sh ├── steamfree.py ├── swap.sh ├── threadpool.py ├── tr.sh ├── uparchlivecd.sh └── useful-commands.sh /.gitattributes: -------------------------------------------------------------------------------- 1 | * text=auto eol=lf whitespace=blank-at-eol,blank-at-eof,space-before-tab,-indent-with-non-tab,tab-in-indent,tabwidth=4 -crlf -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 
39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 
102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 
163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. 
This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 
287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 
344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 
408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. 
For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 
520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 
578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | {one line to give the program's name and a brief idea of what it does.} 635 | Copyright (C) {year} {name of author} 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 
641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see . 649 | 650 | Also add information on how to contact you by electronic and paper mail. 651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | {project} Copyright (C) {year} {fullname} 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | . 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | . 675 | -------------------------------------------------------------------------------- /ddns/CloudFlare-ddns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ############### 3 | Email="**@*.**" 4 | Domain="***.**" 5 | SubDomain="***" 6 | SubDomain6="***" 7 | APIKey="******" 8 | USE_IPV4=true 9 | USE_IPV6=true 10 | ############### 11 | [ -x "$(command -v curl)" ] || exit 1 12 | # Sleep Wait Network 13 | if [ "${1}" = "-w" ] 14 | then 15 | sleep 15 16 | fi 17 | # Login Check 18 | ZoneINFO=$(curl -skX GET https://api.cloudflare.com/client/v4/zones/ -H "Content-Type:application/json" -H "X-Auth-Email:${Email}" -H "X-Auth-Key:${APIKey}") 19 | ZoneID=$(echo ${ZoneINFO}| sed -n 's/.*"id":"\([^"]*\)",[^}]*"name":"'${Domain}'".*/\1/p') 20 | Record=$(curl -skX GET https://api.cloudflare.com/client/v4/zones/${ZoneID}/dns_records?per_page=100 -H "Content-Type:application/json" -H "X-Auth-Email:${Email}" -H "X-Auth-Key:${APIKey}") 21 | if [ -z "${Record}" ] 22 | then 23 | USE_IPV4=false 24 | USE_IPV6=false 25 | Result4="ErrZoneRecord." 26 | fi 27 | # IPv4 28 | if ${USE_IPV4} 29 | then 30 | RecodIP4=$(curl -s4 ip.sb) 31 | RecordID4=$(echo $Record | sed -n 's/.*"id":"\([^"]*\)",[^}]*"name":"'${SubDomain}'.'${Domain}'",[^}]*"type":"A".*/\1/p') 32 | OldIP4=$(echo $Record | sed -n 's/.*"name":"'${SubDomain}'.'${Domain}'",[^}]*"type":"A",[^}]*"content":"\([0-9.]*\)".*/\1/p') 33 | if [ -z "${RecodIP4}" ] 34 | then 35 | Result4="ErrRecodIP4." 36 | elif [ -z "${RecordID4}" ] 37 | then 38 | Result4="ErrRecodID4." 39 | elif [ "${OldIP4}" = "${RecodIP4}" ] 40 | then 41 | Result4="Skipped." 
42 | else 43 | Result4=$(curl -sX PUT https://api.cloudflare.com/client/v4/zones/${ZoneID}/dns_records/${RecordID4} -H "Content-Type:application/json" -H "X-Auth-Email:${Email}" -H "X-Auth-Key:${APIKey}" --data '{"type":"A","name":"'${SubDomain}'.'${Domain}'","content":"'${RecodIP4}'","ttl":1,"proxied":false}' |grep -Eo '"success"[^,]*,') 44 | if [ ${?} -ne 0 ] 45 | then 46 | Result4='cURL failed.' 47 | fi 48 | fi 49 | fi 50 | # IPv6 51 | if ${USE_IPV6} 52 | then 53 | RecodIP6=$(curl -s6 ip.sb) 54 | RecordID6=$(echo $Record | sed -n 's/.*"id":"\([^"]*\)",[^}]*"name":"'${SubDomain6}'.'${Domain}'",[^}]*"type":"AAAA".*/\1/p') 55 | OldIP6=$(echo $Record | sed -n 's/.*"name":"'${SubDomain6}'.'${Domain}'",[^}]*"type":"AAAA",[^}]*"content":"\([^\"]*\)".*/\1/p') 56 | if [ -z "${RecodIP6}" ] 57 | then 58 | Result6="ErrRecodIP6." 59 | elif [ -z "${RecordID6}" ] 60 | then 61 | Result6="ErrRecordID6." 62 | elif [ "${OldIP6}" = "${RecodIP6}" ] 63 | then 64 | Result6="Skipped." 65 | else 66 | Result6=$(curl -sX PUT https://api.cloudflare.com/client/v4/zones/${ZoneID}/dns_records/${RecordID6} -H "Content-Type:application/json" -H "X-Auth-Email:${Email}" -H "X-Auth-Key:${APIKey}" --data '{"type":"AAAA","name":"'${SubDomain6}'.'${Domain}'","content":"'${RecodIP6}'","ttl":1,"proxied":false}' |grep -Eo '"success"[^,]*,') 67 | if [ ${?} -ne 0 ] 68 | then 69 | Result6='cURL failed.' 70 | fi 71 | fi 72 | fi 73 | [ -x "$(command -v logger)" ] && logger -s "CloudFlare-ddns.sh: $(date) IPV4 ${Result4} IPV6 ${Result6}." 74 | exit 0 75 | -------------------------------------------------------------------------------- /ddns/CloudXNS-ddns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | #================================================== 3 | # OS Required: Linux with curl 4 | # Description: CloudXNS DDNS on bash 5 | # Author: Kuretru 6 | # Version: 1.1.160913 7 | # Github: https://github.com/kuretru/CloudXNS-DDNS/ 8 | #================================================== 9 | 10 | #API Key 11 | api_key="" 12 | 13 | #Secret Key 14 | secret_key="" 15 | 16 | #Domain name 17 | #e.g. domain="www.cloudxns.net." 18 | domain="" 19 | 20 | value=$(curl members.3322.org/dyndns/getip) 21 | url="https://www.cloudxns.net/api2/ddns" 22 | time=$(date -R) 23 | data="{\"domain\":\"${domain}\",\"ip\":\"${value}\",\"line_id\":\"1\"}" 24 | mac_raw="$api_key$url$data$time$secret_key" 25 | mac=$(echo -n $mac_raw | md5sum | awk '{print $1}') 26 | header1="API-KEY:"$api_key 27 | header2="API-REQUEST-DATE:"$time 28 | header3="API-HMAC:"$mac 29 | header4="API-FORMAT:json" 30 | 31 | result=$(curl -k -X POST -H $header1 -H "$header2" -H $header3 -H $header4 -d "$data" $url) 32 | echo "${result} ${time} ${data}" 33 | -------------------------------------------------------------------------------- /ddns/Dnspod-ddns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ############### 3 | TokenID="*****" 4 | Token="*******" 5 | SubDomain="***" 6 | Domain="******" 7 | RecordTTL=600 8 | ############### 9 | if [ ! -x "$(command -v curl)" ] 10 | then 11 | echo "Cannot find cURL." 12 | exit 1 13 | fi 14 | #if [ ! -x "$(command -v nc)" ] 15 | #then 16 | RecodIP=$(curl -skX GET members.3322.org/dyndns/getip) 17 | #else 18 | #RecodIP=$(nc ns1.dnspod.net 6666) # This command is out of date. 
19 | #fi 20 | List=$(curl -skX POST https://dnsapi.cn/Record.List -d "login_token=${TokenID},${Token}&format=json&domain=${Domain}&sub_domain=${SubDomain}") 21 | RecodID=$(echo $List|sed -n 's/.*"id":"\([0-9]*\)".*"name":"'${SubDomain}'".*/\1/p') 22 | OldIP=$(echo $List|sed -n 's/.*"value":"\([0-9.]*\)".*"name":"'${SubDomain}'".*/\1/p') 23 | OldTTL=$(echo $List|sed -n 's/.*"ttl":"\([0-9]*\)".*"name":"'${SubDomain}'".*/\1/p') 24 | if [ "${OldIP}" = "${RecodIP}" ] && [ "${OldTTL}" = "${RecordTTL}" ] 25 | then 26 | Result="Action skipped successfully" 27 | else 28 | Result=$(curl -skX POST https://dnsapi.cn/Record.Modify -d "login_token=${TokenID},${Token}&format=json&record_id=${RecodID}&domain=${Domain}&sub_domain=${SubDomain}&value=${RecodIP}&ttl=${RecordTTL}&record_type=A&record_line_id=0" | sed -n 's/.*"message":"\(.*\)","created_at".*/\1/p') 29 | fi 30 | if [ -x "$(command -v logger)" ] 31 | then 32 | logger -s "Dnspod-ddns.sh: $(date) ${Result}" 33 | else 34 | echo "Dnspod-ddns.sh: $(date) ${Result}" 35 | fi 36 | exit 0 37 | -------------------------------------------------------------------------------- /ddns/Readme.md: -------------------------------------------------------------------------------- 1 | # CloudFlare-DDNS-sh 2 | 3 | A very simple CloudFlare DDNS script. 4 | Requires a CloudFlare Global API key. 5 | Requires `curl`. 6 | Works on any distribution, including openwrt. 7 | 8 | ## Usage 9 | 10 | 1. `wget https://github.com/SYHGroup/easy_shell/raw/master/ddns/CloudFlare-ddns.sh` 11 | 2. Get the API Key from [Account Settings](https://www.cloudflare.com/a/account/my-account) and fill the API Key and account email into the script. 12 | 3. Add the domain and subdomain to be updated by DDNS in CloudFlare, and fill them into the script. 13 | 4. In System - Scheduled Tasks, add `0 */3 * * * sh /etc/CloudFlare-ddns.sh` to update automatically every 3 hours. 14 | 15 | 16 | # Dnspod-DDNS-sh 17 | 18 | A very simple DNSPod DDNS script. 19 | Requires a DNSPod API token. 20 | Requires `curl`. 21 | Works on any distribution, including openwrt. 22 | 23 | ## Usage 24 | 25 | 1. `wget https://github.com/SYHGroup/easy_shell/raw/master/ddns/Dnspod-ddns.sh` 26 | 2. Following [DNSPod's guide](https://support.dnspod.cn/Kb/showarticle/tsid/227/), obtain the API Token and TokenID and fill them into the script. 27 | 3. Add the domain and subdomain to be updated by DDNS in DNSPod, and fill them into the script. 28 | 4. In System - Scheduled Tasks, add `0 */3 * * * sh /etc/Dnspod-ddns.sh` to update automatically every 3 hours. 29 | 30 | # CloudXNS-DDNS 31 | Most CloudXNS DDNS clients are built on the official Python SDK, so this is a bash reimplementation written against the official API documentation. 32 | 33 | ## Features 34 | 1. Only `curl` is required, so it also runs on routers with little flash storage that cannot install Python. 35 | 2. [Apply for an API Key](https://www.cloudxns.net/AccountManage/apimanage.html) at CloudXNS; only the API Key goes into the script, no account password is needed, which is safer. 36 | 3. Unlike DNSPod, there is no need to query the domain ID and record ID through a client first; only the domain to be updated by DDNS is required. 37 | 38 | ## Usage 39 | 1. `wget https://github.com/SYHGroup/easy_shell/raw/master/ddns/CloudXNS-ddns.sh` 40 | 2. After obtaining the API Key from [CloudXNS](https://www.cloudxns.net/AccountManage/apimanage.html), fill the API Key and Secret Key into the script. 41 | 3. Add the domain to be updated by DDNS in CloudXNS, and fill it into the script. 42 | `domain="www.cloudxns.net."` 43 | 4. Run `sh CloudXNS-ddns.sh`. 44 | 45 | ## Credit 46 | 47 | Copyright (C) 2016 simonsmh 48 | 49 | ## LICENSE 50 | GNU General Public License v3.0 -------------------------------------------------------------------------------- /module/readme.md: -------------------------------------------------------------------------------- 1 | ### Linux Kernel Modules 2 | 3 | Before install, please install headers first. 4 | Clone a module with `svn co https://github.com/SYHGroup/easy_shell/trunk/module/{module_name}`. A minimal build example is sketched below.
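The commands below are a minimal sketch of that workflow on a Debian/Ubuntu-style system; the package names and the choice of `tcp_tsunami` are illustrative assumptions — substitute your distribution's header packages and the module you actually want:

```sh
# Assumption: Debian/Ubuntu package names; use your distro's equivalents.
apt-get install -y build-essential subversion "linux-headers-$(uname -r)"

# Fetch a single module directory via the SVN path, as suggested above.
svn co https://github.com/SYHGroup/easy_shell/trunk/module/tcp_tsunami
cd tcp_tsunami

# Build against the running kernel, then install and load the module
# (the Makefile's install target runs insmod/depmod/modprobe, so run as root).
make && make install
```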
5 | 6 | ### License 7 | 8 | Dual BSD/GPL 9 | -------------------------------------------------------------------------------- /module/tcp_scalable_re/Makefile: -------------------------------------------------------------------------------- 1 | obj-m:=tcp_scalable_re.o 2 | 3 | all: 4 | make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules 5 | 6 | clean: 7 | make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean 8 | 9 | install: 10 | install tcp_scalable_re.ko /lib/modules/$(shell uname -r)/kernel/net/ipv4 11 | insmod /lib/modules/$(shell uname -r)/kernel/net/ipv4/tcp_scalable_re.ko 12 | depmod -a 13 | modprobe tcp_scalable_re -------------------------------------------------------------------------------- /module/tcp_scalable_re/make.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | make && make install 3 | sed -i 's/bbr/scalable-re/g' /etc/sysctl.conf 4 | sysctl -p -------------------------------------------------------------------------------- /module/tcp_scalable_re/tcp_scalable_re.c: -------------------------------------------------------------------------------- 1 | /* Scalable-Reactive (Scalable-RE) congestion control */ 2 | 3 | #include 4 | #include 5 | 6 | #define INIT_CWND_SCALE 8 /* scaling factor to initialize cwnd */ 7 | #define MIN_CWND 22 8 | 9 | /* Scalable-RE congestion control block */ 10 | struct scalable { 11 | u32 prior_cwnd, /* prior cwnd upon entering loss recovery */ 12 | prev_ca_state:3, /* CA state on previous ACK */ 13 | max_acked:1, /* maximum number of ACK received */ 14 | peak_delivered:1, /* peak threshould of available cwnd in cycle before we filled pipe */ 15 | conceal_loss:1; /* perform concealment of packet losses? */ 16 | }; 17 | 18 | static const u32 scalable_init_cwnd = (1 << INIT_CWND_SCALE); 19 | static const u32 scalable_cwnd_min_target = MIN_CWND; 20 | 21 | /* Slow-start up toward maximum available cwnd (if bw estimate is growing, or packet loss 22 | * has drawn us down below target), or snap down to target if we're above it. 23 | */ 24 | static void tcp_scalable_re_set_cwnd( 25 | struct sock *sk, u32 acked) 26 | { 27 | struct tcp_sock *tp = tcp_sk(sk); 28 | struct scalable *ca = inet_csk_ca(sk); 29 | u32 cwnd, max_snd_cwnd_thresh = (0xffffffff >> 7); 30 | 31 | cwnd = max_t(u32, ca->peak_delivered, scalable_init_cwnd); 32 | 33 | if (ca->conceal_loss) 34 | return; 35 | 36 | /* use the maximum number of ACKs we observed: */ 37 | ca->max_acked = max_t(u32, ca->max_acked, acked); 38 | 39 | /* slow-start up toward the bottleneck bandwidth: */ 40 | if (inet_csk(sk)->icsk_ca_state == TCP_CA_Open) 41 | cwnd = min(max_snd_cwnd_thresh, (cwnd * 17 >> 4) + acked); 42 | else 43 | cwnd = ca->peak_delivered; /* not to exceed peak threshould when congestion can occur: */ 44 | cwnd = max(cwnd, scalable_cwnd_min_target); 45 | 46 | /* Reduce delayed ACKs by rounding up cwnd to the next even number. 
*/ 47 | cwnd = (cwnd + 1) & ~1U; 48 | 49 | tp->snd_cwnd = cwnd; 50 | } 51 | 52 | 53 | /* Try to conceal packet losses and keep pipe filled: */ 54 | static void tcp_scalable_re_loss_concealment( 55 | struct sock *sk, const struct rate_sample *rs, u32 acked) 56 | { 57 | struct tcp_sock *tp = tcp_sk(sk); 58 | struct scalable *ca = inet_csk_ca(sk); 59 | u8 prev_state = ca->prev_ca_state, state = inet_csk(sk)->icsk_ca_state; 60 | u32 cwnd = tp->snd_cwnd; 61 | 62 | ca->max_acked = max_t(u32, ca->max_acked, acked); 63 | 64 | /* update peak threshould then do the loss concealment: */ 65 | 66 | ca->peak_delivered = max_t(u32, ca->peak_delivered, rs->delivered); 67 | 68 | if (state == TCP_CA_Recovery && prev_state != TCP_CA_Recovery) { 69 | cwnd = max_t(u32, cwnd - rs->losses + ca->max_acked, ca->peak_delivered); /* deduct the number of lost packets */ 70 | ca->peak_delivered = max_t(u32, ca->peak_delivered - rs->losses, 71 | (ca->peak_delivered * 7 >> 3)); /* lower peak threshould estimate (0.875x) when pipe is probably full */ 72 | ca->max_acked >>= 1; 73 | } 74 | else 75 | cwnd = max(cwnd, ca->prior_cwnd); /* restore cwnd after exiting loss recovery */ 76 | 77 | cwnd = max(cwnd, scalable_cwnd_min_target); 78 | 79 | ca->prev_ca_state = state; 80 | tp->snd_cwnd = cwnd; 81 | ca->conceal_loss = 0; 82 | } 83 | 84 | static void tcp_scalable_re_main(struct sock *sk, const struct rate_sample *rs) 85 | { 86 | struct scalable *ca = inet_csk_ca(sk); 87 | 88 | ca->peak_delivered = max_t(u32, ca->peak_delivered, 89 | rs->delivered); /* save maximum peak threshould observed first */ 90 | tcp_scalable_re_set_cwnd(sk, rs->acked_sacked); 91 | tcp_scalable_re_loss_concealment(sk, rs, rs->acked_sacked); 92 | } 93 | 94 | static void tcp_scalable_re_set_state(struct sock *sk, u8 new_state) 95 | { 96 | struct scalable *ca = inet_csk_ca(sk); 97 | 98 | if (new_state == TCP_CA_Loss) { 99 | ca->prev_ca_state = TCP_CA_Loss; 100 | ca->conceal_loss = 1; 101 | } 102 | } 103 | 104 | static void tcp_scalable_re_init(struct sock *sk) 105 | { 106 | struct scalable *ca = inet_csk_ca(sk); 107 | 108 | ca->max_acked = 0; 109 | ca->peak_delivered = 0; 110 | ca->conceal_loss = 0; 111 | ca->prev_ca_state = TCP_CA_Open; 112 | } 113 | 114 | static u32 tcp_scalable_re_ssthresh(struct sock *sk) 115 | { 116 | const struct tcp_sock *tp = tcp_sk(sk); 117 | struct scalable *ca = inet_csk_ca(sk); 118 | 119 | ca->prior_cwnd = tp->snd_cwnd; 120 | return TCP_INFINITE_SSTHRESH; 121 | } 122 | 123 | static u32 tcp_scalable_re_undo_cwnd(struct sock *sk) 124 | { 125 | return tcp_sk(sk)->snd_cwnd; 126 | } 127 | 128 | static struct tcp_congestion_ops tcp_scalable_re_cong_ops __read_mostly = { 129 | .flags = TCP_CONG_NON_RESTRICTED, 130 | .name = "scalable-re", 131 | .owner = THIS_MODULE, 132 | .init = tcp_scalable_re_init, 133 | .cong_control = tcp_scalable_re_main, 134 | .undo_cwnd = tcp_scalable_re_undo_cwnd, 135 | .ssthresh = tcp_scalable_re_ssthresh, 136 | .set_state = tcp_scalable_re_set_state, 137 | }; 138 | 139 | static int __init tcp_scalable_re_register(void) 140 | { 141 | BUILD_BUG_ON(sizeof(struct scalable) > ICSK_CA_PRIV_SIZE); 142 | return tcp_register_congestion_control(&tcp_scalable_re_cong_ops); 143 | } 144 | 145 | static void __exit tcp_scalable_re_unregister(void) 146 | { 147 | tcp_unregister_congestion_control(&tcp_scalable_re_cong_ops); 148 | } 149 | 150 | module_init(tcp_scalable_re_register); 151 | module_exit(tcp_scalable_re_unregister); 152 | 153 | MODULE_AUTHOR("Neal Cardwell "); 154 | MODULE_AUTHOR("Yuchung Cheng "); 155 | 
MODULE_AUTHOR("John Heffner"); 156 | MODULE_LICENSE("Dual BSD/GPL"); 157 | MODULE_DESCRIPTION("TCP Scalable-RE (Scalable-Reactive)"); -------------------------------------------------------------------------------- /module/tcp_tsunami/Makefile: -------------------------------------------------------------------------------- 1 | obj-m:=tcp_tsunami.o 2 | 3 | all: 4 | make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules 5 | 6 | clean: 7 | make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean 8 | 9 | install: 10 | install tcp_tsunami.ko /lib/modules/$(shell uname -r)/kernel/net/ipv4 11 | insmod /lib/modules/$(shell uname -r)/kernel/net/ipv4/tcp_tsunami.ko 12 | depmod -a 13 | modprobe tcp_tsunami -------------------------------------------------------------------------------- /module/tcp_tsunami/make.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | make && make install 3 | sed -i 's/bbr/tsunami/g' /etc/sysctl.conf 4 | sysctl -p 5 | -------------------------------------------------------------------------------- /module/tcp_tsunami/tcp_tsunami.c: -------------------------------------------------------------------------------- 1 | /* Bottleneck Bandwidth and RTT (BBR) congestion control 2 | * 3 | * BBR congestion control computes the sending rate based on the delivery 4 | * rate (throughput) estimated from ACKs. In a nutshell: 5 | * 6 | * On each ACK, update our model of the network path: 7 | * bottleneck_bandwidth = windowed_max(delivered / elapsed, 10 round trips) 8 | * min_rtt = windowed_min(rtt, 10 seconds) 9 | * pacing_rate = pacing_gain * bottleneck_bandwidth 10 | * cwnd = max(cwnd_gain * bottleneck_bandwidth * min_rtt, 4) 11 | * 12 | * The core algorithm does not react directly to packet losses or delays, 13 | * although BBR may adjust the size of next send per ACK when loss is 14 | * observed, or adjust the sending rate if it estimates there is a 15 | * traffic policer, in order to keep the drop rate reasonable. 16 | * 17 | * BBR is described in detail in: 18 | * "BBR: Congestion-Based Congestion Control", 19 | * Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, 20 | * Van Jacobson. ACM Queue, Vol. 14 No. 5, September-October 2016. 21 | * 22 | * There is a public e-mail list for discussing BBR development and testing: 23 | * https://groups.google.com/forum/#!forum/bbr-dev 24 | * 25 | * NOTE: BBR *must* be used with the fq qdisc ("man tc-fq") with pacing enabled, 26 | * since pacing is integral to the BBR design and implementation. 27 | * BBR without pacing would not function properly, and may incur unnecessary 28 | * high packet loss rates. 29 | */ 30 | #include 31 | #include 32 | #include 33 | #include 34 | #include 35 | #include 36 | 37 | /* Scale factor for rate in pkt/uSec unit to avoid truncation in bandwidth 38 | * estimation. The rate unit ~= (1500 bytes / 1 usec / 2^24) ~= 715 bps. 39 | * This handles bandwidths from 0.06pps (715bps) to 256Mpps (3Tbps) in a u32. 40 | * Since the minimum window is >=4 packets, the lower bound isn't 41 | * an issue. The upper bound isn't an issue with existing technologies. 42 | */ 43 | #define BW_SCALE 24 44 | #define BW_UNIT (1 << BW_SCALE) 45 | 46 | #define BBR_SCALE 8 /* scaling factor for fractions in BBR (e.g. 
gains) */ 47 | #define BBR_UNIT (1 << BBR_SCALE) 48 | 49 | /* BBR has the following modes for deciding how fast to send: */ 50 | enum bbr_mode { 51 | BBR_STARTUP, /* ramp up sending rate rapidly to fill pipe */ 52 | BBR_DRAIN, /* drain any queue created during startup */ 53 | BBR_PROBE_BW, /* discover, share bw: pace around estimated bw */ 54 | BBR_PROBE_RTT, /* cut cwnd to min to probe min_rtt */ 55 | }; 56 | 57 | /* BBR congestion control block */ 58 | struct bbr { 59 | u32 min_rtt_us; /* min RTT in min_rtt_win_sec window */ 60 | //deprecated u32 rtt_us; 61 | u32 min_rtt_stamp; /* timestamp of min_rtt_us */ 62 | u32 probe_rtt_done_stamp; /* end time for BBR_PROBE_RTT mode */ 63 | //deprecated struct minmax max_rtt; 64 | struct minmax bw; /* Max recent delivery rate in pkts/uS << 24 */ 65 | u32 rtt_cnt; /* count of packet-timed rounds elapsed */ 66 | u32 next_rtt_delivered; /* scb->tx.delivered at end of round */ 67 | u64 cycle_mstamp; /* time of this cycle phase start */ 68 | u32 mode:3, /* current bbr_mode in state machine */ 69 | prev_ca_state:3, /* CA state on previous ACK */ 70 | packet_conservation:1, /* use packet conservation? */ 71 | restore_cwnd:1, /* decided to revert cwnd to old value */ 72 | round_start:1, /* start of packet-timed tx->ack round? */ 73 | tso_segs_goal:7, /* segments we want in each skb we send */ 74 | idle_restart:1, /* restarting after idle? */ 75 | probe_rtt_round_done:1, /* a BBR_PROBE_RTT round at 4 pkts? */ 76 | unused:5, 77 | lt_is_sampling:1, /* taking long-term ("LT") samples now? */ 78 | lt_rtt_cnt:7, /* round trips in long-term interval */ 79 | lt_use_bw:1; /* use lt_bw as our bw estimate? */ 80 | u32 lt_bw; /* LT est delivery rate in pkts/uS << 24 */ 81 | u32 lt_last_delivered; /* LT intvl start: tp->delivered */ 82 | u32 lt_last_stamp; /* LT intvl start: tp->delivered_mstamp */ 83 | u32 lt_last_lost; /* LT intvl start: tp->lost */ 84 | u32 pacing_gain:10, /* current gain for setting pacing rate */ 85 | cwnd_gain:10, /* current gain for setting cwnd */ 86 | full_bw_cnt:3, /* number of rounds without large bw gains */ 87 | cycle_idx:3, /* current index in pacing_gain cycle array */ 88 | unused_b:6; 89 | u32 prior_cwnd; /* prior cwnd upon entering loss recovery */ 90 | u32 full_bw; /* recent bw, to estimate if pipe is full */ 91 | }; 92 | 93 | #define CYCLE_LEN 8 /* number of phases in a pacing gain cycle */ 94 | 95 | /* Window length of bw filter (in rounds): */ 96 | static const int bbr_bw_rtts = CYCLE_LEN + 7; 97 | /* Window length of min_rtt filter (in sec): */ 98 | static const u32 bbr_min_rtt_win_sec = 20; 99 | /* Minimum time (in ms) spent at bbr_cwnd_min_target in BBR_PROBE_RTT mode: */ 100 | static const u32 bbr_probe_rtt_mode_ms = 200; 101 | /* Skip TSO below the following bandwidth (bits/sec): */ 102 | static const int bbr_min_tso_rate = 1200000; 103 | 104 | /* We use a high_gain value of 2/ln(2) because it's the smallest pacing gain 105 | * that will allow a smoothly increasing pacing rate that will double each RTT 106 | * and send the same number of packets per RTT that an un-paced, slow-starting 107 | * Reno or CUBIC flow would: 108 | */ 109 | static const int bbr_high_gain = BBR_UNIT * 2885 / 1000 + 1; 110 | /* The pacing gain of 1/high_gain in BBR_DRAIN is calculated to typically drain 111 | * the queue created in BBR_STARTUP in a single round: 112 | */ 113 | static const int bbr_drain_gain = BBR_UNIT * 1200 / 2885; 114 | /* The gain for deriving steady-state cwnd tolerates delayed/stretched ACKs: */ 115 | static const int 
bbr_cwnd_gain = BBR_UNIT * 2; 116 | /* The pacing_gain values for the PROBE_BW gain cycle, to discover/share bw: */ 117 | static const int bbr_pacing_gain[] = { 118 | BBR_UNIT * 3 / 2, /* probe for more available bw */ 119 | BBR_UNIT * 3 / 4, /* drain queue and/or yield bw to other flows */ 120 | BBR_UNIT * 9 / 8, BBR_UNIT * 9 / 8, BBR_UNIT * 9 / 8, /* cruise at 1.0*bw to utilize pipe, */ 121 | BBR_UNIT * 9 / 8, BBR_UNIT * 9 / 8, BBR_UNIT * 9 / 8 /* without creating excess queue... */ 122 | }; 123 | /* Randomize the starting gain cycling phase over N phases: */ 124 | static const u32 bbr_cycle_rand = 7; 125 | 126 | /* Try to keep at least this many packets in flight, if things go smoothly. For 127 | * smooth functioning, a sliding window protocol ACKing every other packet 128 | * needs at least 4 packets in flight: 129 | */ 130 | static const u32 bbr_cwnd_min_target = 4; 131 | 132 | /* To estimate if BBR_STARTUP mode (i.e. high_gain) has filled pipe... */ 133 | /* If bw has increased significantly (1.25x), there may be more bw available: */ 134 | static const u32 bbr_full_bw_thresh = BBR_UNIT * 5 / 4; 135 | /* But after 3 rounds w/o significant bw growth, estimate pipe is full: */ 136 | static const u32 bbr_full_bw_cnt = 3; 137 | 138 | /* "long-term" ("LT") bandwidth estimator parameters... */ 139 | /* The minimum number of rounds in an LT bw sampling interval: */ 140 | static const u32 bbr_lt_intvl_min_rtts = 4; 141 | /* If lost/delivered ratio > 20%, interval is "lossy" and we may be policed: */ 142 | static const u32 bbr_lt_loss_thresh = 50; 143 | /* If 2 intervals have a bw ratio <= 1/8, their bw is "consistent": */ 144 | static const u32 bbr_lt_bw_ratio = BBR_UNIT / 8; 145 | /* If 2 intervals have a bw diff <= 4 Kbit/sec their bw is "consistent": */ 146 | static const u32 bbr_lt_bw_diff = 4000 / 8; 147 | /* If we estimate we're policed, use lt_bw for this many round trips: */ 148 | static const u32 bbr_lt_bw_max_rtts = 48; 149 | 150 | /* Do we estimate that STARTUP filled the pipe? */ 151 | static bool bbr_full_bw_reached(const struct sock *sk) 152 | { 153 | const struct bbr *bbr = inet_csk_ca(sk); 154 | 155 | return bbr->full_bw_cnt >= bbr_full_bw_cnt; 156 | } 157 | 158 | /* Return the windowed max recent bandwidth sample, in pkts/uS << BW_SCALE. */ 159 | static u32 bbr_max_bw(const struct sock *sk) 160 | { 161 | struct bbr *bbr = inet_csk_ca(sk); 162 | 163 | return minmax_get(&bbr->bw); 164 | } 165 | 166 | /* Return the estimated bandwidth of the path, in pkts/uS << BW_SCALE. */ 167 | static u32 bbr_bw(const struct sock *sk) 168 | { 169 | struct bbr *bbr = inet_csk_ca(sk); 170 | 171 | return bbr->lt_use_bw ? bbr->lt_bw : bbr_max_bw(sk); 172 | } 173 | 174 | /* Return rate in bytes per second, optionally with a gain. 175 | * The order here is chosen carefully to avoid overflow of u64. This should 176 | * work for input rates of up to 2.9Tbit/sec and gain of 2.89x. 177 | */ 178 | static u64 bbr_rate_bytes_per_sec(struct sock *sk, u64 rate, int gain) 179 | { 180 | rate *= tcp_mss_to_mtu(sk, tcp_sk(sk)->mss_cache); 181 | rate *= gain; 182 | rate >>= BBR_SCALE; 183 | rate *= USEC_PER_SEC; 184 | return rate >> BW_SCALE; 185 | } 186 | 187 | /* Pace using current bw estimate and a gain factor. In order to help drive the 188 | * network toward lower queues while maintaining high utilization and low 189 | * latency, the average pacing rate aims to be slightly (~1%) lower than the 190 | * estimated bandwidth. This is an important aspect of the design. 
In this 191 | * implementation this slightly lower pacing rate is achieved implicitly by not 192 | * including link-layer headers in the packet size used for the pacing rate. 193 | */ 194 | static void bbr_set_pacing_rate(struct sock *sk, u32 bw, int gain) 195 | { 196 | struct bbr *bbr = inet_csk_ca(sk); 197 | u64 rate = bw; 198 | 199 | rate = bbr_rate_bytes_per_sec(sk, rate, gain); 200 | rate = min_t(u64, rate, sk->sk_max_pacing_rate); 201 | if (bbr->mode != BBR_STARTUP || rate > sk->sk_pacing_rate) 202 | sk->sk_pacing_rate = rate; 203 | } 204 | 205 | /* Return count of segments we want in the skbs we send, or 0 for default. */ 206 | static u32 bbr_tso_segs_goal(struct sock *sk) 207 | { 208 | struct bbr *bbr = inet_csk_ca(sk); 209 | 210 | return bbr->tso_segs_goal; 211 | } 212 | 213 | static void bbr_set_tso_segs_goal(struct sock *sk) 214 | { 215 | struct tcp_sock *tp = tcp_sk(sk); 216 | struct bbr *bbr = inet_csk_ca(sk); 217 | u32 min_segs; 218 | 219 | min_segs = sk->sk_pacing_rate < (bbr_min_tso_rate >> 3) ? 1 : 2; 220 | bbr->tso_segs_goal = min(tcp_tso_autosize(sk, tp->mss_cache, min_segs), 221 | 0x7FU); 222 | } 223 | 224 | /* Save "last known good" cwnd so we can restore it after losses or PROBE_RTT */ 225 | static void bbr_save_cwnd(struct sock *sk) 226 | { 227 | struct tcp_sock *tp = tcp_sk(sk); 228 | struct bbr *bbr = inet_csk_ca(sk); 229 | 230 | if (bbr->prev_ca_state < TCP_CA_Recovery && bbr->mode != BBR_PROBE_RTT) 231 | bbr->prior_cwnd = tp->snd_cwnd; /* this cwnd is good enough */ 232 | else /* loss recovery or BBR_PROBE_RTT have temporarily cut cwnd */ 233 | bbr->prior_cwnd = max(bbr->prior_cwnd, tp->snd_cwnd); 234 | } 235 | 236 | static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event) 237 | { 238 | struct tcp_sock *tp = tcp_sk(sk); 239 | struct bbr *bbr = inet_csk_ca(sk); 240 | 241 | if (event == CA_EVENT_TX_START && tp->app_limited) { 242 | bbr->idle_restart = 1; 243 | /* Avoid pointless buffer overflows: pace at est. bw if we don't 244 | * need more speed (we're restarting from idle and app-limited). 245 | */ 246 | if (bbr->mode == BBR_PROBE_BW) 247 | bbr_set_pacing_rate(sk, bbr_bw(sk), BBR_UNIT); 248 | } 249 | } 250 | 251 | /* Find target cwnd. Right-size the cwnd based on min RTT and the 252 | * estimated bottleneck bandwidth: 253 | * 254 | * cwnd = bw * min_rtt * gain = BDP * gain 255 | * 256 | * The key factor, gain, controls the amount of queue. While a small gain 257 | * builds a smaller queue, it becomes more vulnerable to noise in RTT 258 | * measurements (e.g., delayed ACKs or other ACK compression effects). This 259 | * noise may cause BBR to under-estimate the rate. 260 | * 261 | * To achieve full performance in high-speed paths, we budget enough cwnd to 262 | * fit full-sized skbs in-flight on both end hosts to fully utilize the path: 263 | * - one skb in sending host Qdisc, 264 | * - one skb in sending host TSO/GSO engine 265 | * - one skb being received by receiver host LRO/GRO/delayed-ACK engine 266 | * Don't worry, at low rates (bbr_min_tso_rate) this won't bloat cwnd because 267 | * in such cases tso_segs_goal is 1. The minimum cwnd is 4 packets, 268 | * which allows 2 outstanding 2-packet sequences, to try to keep pipe 269 | * full even with ACK-every-other-packet delayed ACKs. 270 | */ 271 | static u32 bbr_target_cwnd(struct sock *sk, u32 bw, int gain) 272 | { 273 | struct bbr *bbr = inet_csk_ca(sk); 274 | u32 cwnd; 275 | u64 w; 276 | 277 | /* If we've never had a valid RTT sample, cap cwnd at the initial 278 | * default. 
This should only happen when the connection is not using TCP 279 | * timestamps and has retransmitted all of the SYN/SYNACK/data packets 280 | * ACKed so far. In this case, an RTO can cut cwnd to 1, in which 281 | * case we need to slow-start up toward something safe: TCP_INIT_CWND. 282 | */ 283 | if (unlikely(bbr->min_rtt_us == ~0U)) /* no valid RTT samples yet? */ 284 | return TCP_INIT_CWND; /* be safe: cap at default initial cwnd*/ 285 | 286 | w = (u64)bw * bbr->min_rtt_us; 287 | 288 | /* Apply a gain to the given value, then remove the BW_SCALE shift. */ 289 | cwnd = (((w * gain) >> BBR_SCALE) + BW_UNIT - 1) / BW_UNIT; 290 | 291 | /* Allow enough full-sized skbs in flight to utilize end systems. */ 292 | cwnd += 3 * bbr->tso_segs_goal; 293 | 294 | /* Reduce delayed ACKs by rounding up cwnd to the next even number. */ 295 | cwnd = (cwnd + 1) & ~1U; 296 | 297 | return cwnd; 298 | } 299 | 300 | /* An optimization in BBR to reduce losses: On the first round of recovery, we 301 | * follow the packet conservation principle: send P packets per P packets acked. 302 | * After that, we slow-start and send at most 2*P packets per P packets acked. 303 | * After recovery finishes, or upon undo, we restore the cwnd we had when 304 | * recovery started (capped by the target cwnd based on estimated BDP). 305 | * 306 | * TODO(ycheng/ncardwell): implement a rate-based approach. 307 | */ 308 | static bool bbr_set_cwnd_to_recover_or_restore( 309 | struct sock *sk, const struct rate_sample *rs, u32 acked, u32 *new_cwnd) 310 | { 311 | struct tcp_sock *tp = tcp_sk(sk); 312 | struct bbr *bbr = inet_csk_ca(sk); 313 | u8 prev_state = bbr->prev_ca_state, state = inet_csk(sk)->icsk_ca_state; 314 | u32 cwnd = tp->snd_cwnd; 315 | 316 | /* An ACK for P pkts should release at most 2*P packets. We do this 317 | * in two steps. First, here we deduct the number of lost packets. 318 | * Then, in bbr_set_cwnd() we slow start up toward the target cwnd. 319 | */ 320 | if (rs->losses > 0) 321 | cwnd = max_t(s32, cwnd - rs->losses, 1); 322 | 323 | if (state == TCP_CA_Recovery && prev_state != TCP_CA_Recovery) { 324 | /* Starting 1st round of Recovery, so do packet conservation. */ 325 | bbr->packet_conservation = 1; 326 | bbr->next_rtt_delivered = tp->delivered; /* start round now */ 327 | /* Cut unused cwnd from app behavior, TSQ, or TSO deferral: */ 328 | cwnd = tcp_packets_in_flight(tp) + acked; 329 | } else if (prev_state >= TCP_CA_Recovery && state < TCP_CA_Recovery) { 330 | /* Exiting loss recovery; restore cwnd saved before recovery. */ 331 | bbr->restore_cwnd = 1; 332 | bbr->packet_conservation = 0; 333 | } 334 | bbr->prev_ca_state = state; 335 | 336 | if (bbr->restore_cwnd) { 337 | /* Restore cwnd after exiting loss recovery or PROBE_RTT. */ 338 | cwnd = max(cwnd, bbr->prior_cwnd); 339 | bbr->restore_cwnd = 0; 340 | } 341 | 342 | if (bbr->packet_conservation) { 343 | *new_cwnd = max(cwnd, tcp_packets_in_flight(tp) + acked); 344 | return true; /* yes, using packet conservation */ 345 | } 346 | *new_cwnd = cwnd; 347 | return false; 348 | } 349 | 350 | /* Slow-start up toward target cwnd (if bw estimate is growing, or packet loss 351 | * has drawn us down below target), or snap down to target if we're above it. 
352 | */ 353 | static void bbr_set_cwnd(struct sock *sk, const struct rate_sample *rs, 354 | u32 acked, u32 bw, int gain) 355 | { 356 | struct tcp_sock *tp = tcp_sk(sk); 357 | struct bbr *bbr = inet_csk_ca(sk); 358 | u32 cwnd = 0, target_cwnd = 0; 359 | 360 | if (!acked) 361 | return; 362 | 363 | if (bbr_set_cwnd_to_recover_or_restore(sk, rs, acked, &cwnd)) 364 | goto done; 365 | 366 | /* If we're below target cwnd, slow start cwnd toward target cwnd. */ 367 | target_cwnd = bbr_target_cwnd(sk, bw, gain); 368 | if (bbr_full_bw_reached(sk)) /* only cut cwnd if we filled the pipe */ 369 | cwnd = min(cwnd + acked, target_cwnd); 370 | else if (cwnd < target_cwnd || tp->delivered < TCP_INIT_CWND) 371 | cwnd = cwnd + acked; 372 | cwnd = max(cwnd, bbr_cwnd_min_target); 373 | 374 | done: 375 | tp->snd_cwnd = min(cwnd, tp->snd_cwnd_clamp); /* apply global cap */ 376 | if (bbr->mode == BBR_PROBE_RTT) /* drain queue, refresh min_rtt */ 377 | tp->snd_cwnd = max(tp->snd_cwnd >> 1, bbr_cwnd_min_target); 378 | } 379 | 380 | /* End cycle phase if it's time and/or we hit the phase's in-flight target. */ 381 | static bool bbr_is_next_cycle_phase(struct sock *sk, 382 | const struct rate_sample *rs) 383 | { 384 | struct tcp_sock *tp = tcp_sk(sk); 385 | struct bbr *bbr = inet_csk_ca(sk); 386 | bool is_full_length = 387 | tcp_stamp_us_delta(tp->delivered_mstamp, bbr->cycle_mstamp) > 388 | bbr->min_rtt_us; 389 | u32 inflight, bw; 390 | 391 | /* The pacing_gain of 1.0 paces at the estimated bw to try to fully 392 | * use the pipe without increasing the queue. 393 | */ 394 | if (bbr->pacing_gain == BBR_UNIT) 395 | return is_full_length; /* just use wall clock time */ 396 | 397 | inflight = rs->prior_in_flight; /* what was in-flight before ACK? */ 398 | bw = bbr_max_bw(sk); 399 | 400 | /* A pacing_gain > 1.0 probes for bw by trying to raise inflight to at 401 | * least pacing_gain*BDP; this may take more than min_rtt if min_rtt is 402 | * small (e.g. on a LAN). We do not persist if packets are lost, since 403 | * a path with small buffers may not hold that much. 404 | */ 405 | if (bbr->pacing_gain > BBR_UNIT) 406 | return is_full_length && 407 | (rs->losses || /* perhaps pacing_gain*BDP won't fit */ 408 | inflight >= bbr_target_cwnd(sk, bw, bbr->pacing_gain)); 409 | 410 | /* A pacing_gain < 1.0 tries to drain extra queue we added if bw 411 | * probing didn't find more bw. If inflight falls to match BDP then we 412 | * estimate queue is drained; persisting would underutilize the pipe. 413 | */ 414 | return is_full_length || 415 | inflight <= bbr_target_cwnd(sk, bw, BBR_UNIT); 416 | } 417 | 418 | static void bbr_advance_cycle_phase(struct sock *sk) 419 | { 420 | struct tcp_sock *tp = tcp_sk(sk); 421 | struct bbr *bbr = inet_csk_ca(sk); 422 | 423 | bbr->cycle_idx = (bbr->cycle_idx + 1) & (CYCLE_LEN - 1); 424 | bbr->cycle_mstamp = tp->delivered_mstamp; 425 | bbr->pacing_gain = bbr_pacing_gain[bbr->cycle_idx]; 426 | } 427 | 428 | /* Gain cycling: cycle pacing gain to converge to fair share of available bw. 
*/ 429 | static void bbr_update_cycle_phase(struct sock *sk, 430 | const struct rate_sample *rs) 431 | { 432 | struct bbr *bbr = inet_csk_ca(sk); 433 | 434 | if ((bbr->mode == BBR_PROBE_BW) && !bbr->lt_use_bw && 435 | bbr_is_next_cycle_phase(sk, rs)) 436 | bbr_advance_cycle_phase(sk); 437 | } 438 | 439 | static void bbr_reset_startup_mode(struct sock *sk) 440 | { 441 | struct bbr *bbr = inet_csk_ca(sk); 442 | 443 | bbr->mode = BBR_STARTUP; 444 | bbr->pacing_gain = bbr_high_gain; 445 | bbr->cwnd_gain = bbr_high_gain; 446 | } 447 | 448 | static void bbr_reset_probe_bw_mode(struct sock *sk) 449 | { 450 | struct bbr *bbr = inet_csk_ca(sk); 451 | 452 | bbr->mode = BBR_PROBE_BW; 453 | bbr->pacing_gain = BBR_UNIT; 454 | bbr->cwnd_gain = bbr_cwnd_gain; 455 | bbr->cycle_idx = CYCLE_LEN - 1 - prandom_u32_max(bbr_cycle_rand); 456 | bbr_advance_cycle_phase(sk); /* flip to next phase of gain cycle */ 457 | } 458 | 459 | static void bbr_reset_mode(struct sock *sk) 460 | { 461 | if (!bbr_full_bw_reached(sk)) 462 | bbr_reset_startup_mode(sk); 463 | else 464 | bbr_reset_probe_bw_mode(sk); 465 | } 466 | 467 | /* Start a new long-term sampling interval. */ 468 | static void bbr_reset_lt_bw_sampling_interval(struct sock *sk) 469 | { 470 | struct tcp_sock *tp = tcp_sk(sk); 471 | struct bbr *bbr = inet_csk_ca(sk); 472 | 473 | bbr->lt_last_stamp = div_u64(tp->delivered_mstamp, USEC_PER_MSEC); 474 | bbr->lt_last_delivered = tp->delivered; 475 | bbr->lt_last_lost = tp->lost; 476 | bbr->lt_rtt_cnt = 0; 477 | } 478 | 479 | /* Completely reset long-term bandwidth sampling. */ 480 | static void bbr_reset_lt_bw_sampling(struct sock *sk) 481 | { 482 | struct bbr *bbr = inet_csk_ca(sk); 483 | 484 | bbr->lt_bw = 0; 485 | bbr->lt_use_bw = 0; 486 | bbr->lt_is_sampling = false; 487 | bbr_reset_lt_bw_sampling_interval(sk); 488 | } 489 | 490 | /* Long-term bw sampling interval is done. Estimate whether we're policed. */ 491 | static void bbr_lt_bw_interval_done(struct sock *sk, u32 bw) 492 | { 493 | struct bbr *bbr = inet_csk_ca(sk); 494 | u32 diff; 495 | 496 | if (bbr->lt_bw) { /* do we have bw from a previous interval? */ 497 | /* Is new bw close to the lt_bw from the previous interval? */ 498 | diff = abs(bw - bbr->lt_bw); 499 | if ((diff * BBR_UNIT <= bbr_lt_bw_ratio * bbr->lt_bw) || 500 | (bbr_rate_bytes_per_sec(sk, diff, BBR_UNIT) <= 501 | bbr_lt_bw_diff)) { 502 | /* All criteria are met; estimate we're policed. */ 503 | bbr->lt_bw = (bw + bbr->lt_bw) >> 1; /* avg 2 intvls */ 504 | bbr->lt_use_bw = 1; 505 | bbr->pacing_gain = BBR_UNIT; /* try to avoid drops */ 506 | bbr->lt_rtt_cnt = 0; 507 | return; 508 | } 509 | } 510 | bbr->lt_bw = bw; 511 | bbr_reset_lt_bw_sampling_interval(sk); 512 | } 513 | 514 | /* Token-bucket traffic policers are common (see "An Internet-Wide Analysis of 515 | * Traffic Policing", SIGCOMM 2016). BBR detects token-bucket policers and 516 | * explicitly models their policed rate, to reduce unnecessary losses. We 517 | * estimate that we're policed if we see 2 consecutive sampling intervals with 518 | * consistent throughput and high packet loss. If we think we're being policed, 519 | * set lt_bw to the "long-term" average delivery rate from those 2 intervals. 520 | */ 521 | static void bbr_lt_bw_sampling(struct sock *sk, const struct rate_sample *rs) 522 | { 523 | struct tcp_sock *tp = tcp_sk(sk); 524 | struct bbr *bbr = inet_csk_ca(sk); 525 | u32 lost, delivered; 526 | u64 bw; 527 | u32 t; 528 | 529 | if (bbr->lt_use_bw) { /* already using long-term rate, lt_bw? 
*/ 530 | if (bbr->mode == BBR_PROBE_BW && bbr->round_start && 531 | ++bbr->lt_rtt_cnt >= bbr_lt_bw_max_rtts) { 532 | bbr_reset_lt_bw_sampling(sk); /* stop using lt_bw */ 533 | bbr_reset_probe_bw_mode(sk); /* restart gain cycling */ 534 | } 535 | return; 536 | } 537 | 538 | /* Wait for the first loss before sampling, to let the policer exhaust 539 | * its tokens and estimate the steady-state rate allowed by the policer. 540 | * Starting samples earlier includes bursts that over-estimate the bw. 541 | */ 542 | if (!bbr->lt_is_sampling) { 543 | if (!rs->losses) 544 | return; 545 | bbr_reset_lt_bw_sampling_interval(sk); 546 | bbr->lt_is_sampling = true; 547 | } 548 | 549 | /* To avoid underestimates, reset sampling if we run out of data. */ 550 | if (rs->is_app_limited) { 551 | bbr_reset_lt_bw_sampling(sk); 552 | return; 553 | } 554 | 555 | if (bbr->round_start) 556 | bbr->lt_rtt_cnt++; /* count round trips in this interval */ 557 | if (bbr->lt_rtt_cnt < bbr_lt_intvl_min_rtts) 558 | return; /* sampling interval needs to be longer */ 559 | if (bbr->lt_rtt_cnt > 4 * bbr_lt_intvl_min_rtts) { 560 | bbr_reset_lt_bw_sampling(sk); /* interval is too long */ 561 | return; 562 | } 563 | 564 | /* End sampling interval when a packet is lost, so we estimate the 565 | * policer tokens were exhausted. Stopping the sampling before the 566 | * tokens are exhausted under-estimates the policed rate. 567 | */ 568 | if (!rs->losses) 569 | return; 570 | 571 | /* Calculate packets lost and delivered in sampling interval. */ 572 | lost = tp->lost - bbr->lt_last_lost; 573 | delivered = tp->delivered - bbr->lt_last_delivered; 574 | /* Is loss rate (lost/delivered) >= lt_loss_thresh? If not, wait. */ 575 | if (!delivered || (lost << BBR_SCALE) < bbr_lt_loss_thresh * delivered) 576 | return; 577 | 578 | /* Find average delivery rate in this sampling interval. */ 579 | t = div_u64(tp->delivered_mstamp, USEC_PER_MSEC) - bbr->lt_last_stamp; 580 | if ((s32)t < 1) 581 | return; /* interval is less than one ms, so wait */ 582 | /* Check if can multiply without overflow */ 583 | if (t >= ~0U / USEC_PER_MSEC) { 584 | bbr_reset_lt_bw_sampling(sk); /* interval too long; reset */ 585 | return; 586 | } 587 | t *= USEC_PER_MSEC; 588 | bw = (u64)delivered * BW_UNIT; 589 | do_div(bw, t); 590 | bbr_lt_bw_interval_done(sk, bw); 591 | } 592 | 593 | /* Estimate the bandwidth based on how fast packets are delivered */ 594 | static void bbr_update_bw(struct sock *sk, const struct rate_sample *rs) 595 | { 596 | struct tcp_sock *tp = tcp_sk(sk); 597 | struct bbr *bbr = inet_csk_ca(sk); 598 | u64 bw; 599 | 600 | bbr->round_start = 0; 601 | if (rs->delivered < 0 || rs->interval_us <= 0) 602 | return; /* Not a valid observation */ 603 | 604 | /* See if we've reached the next RTT */ 605 | if (!before(rs->prior_delivered, bbr->next_rtt_delivered)) { 606 | bbr->next_rtt_delivered = tp->delivered; 607 | bbr->rtt_cnt++; 608 | bbr->round_start = 1; 609 | bbr->packet_conservation = 0; 610 | } 611 | 612 | bbr_lt_bw_sampling(sk, rs); 613 | 614 | /* Divide delivered by the interval to find a (lower bound) bottleneck 615 | * bandwidth sample. Delivered is in packets and interval_us in uS and 616 | * ratio will be <<1 for most connections. So delivered is first scaled. 617 | */ 618 | bw = (u64)rs->delivered * BW_UNIT; 619 | do_div(bw, rs->interval_us); 620 | 621 | /* If this sample is application-limited, it is likely to have a very 622 | * low delivered count that represents application behavior rather than 623 | * the available network rate. 
Such a sample could drag down estimated 624 | * bw, causing needless slow-down. Thus, to continue to send at the 625 | * last measured network rate, we filter out app-limited samples unless 626 | * they describe the path bw at least as well as our bw model. 627 | * 628 | * So the goal during app-limited phase is to proceed with the best 629 | * network rate no matter how long. We automatically leave this 630 | * phase when app writes faster than the network can deliver :) 631 | */ 632 | if (!rs->is_app_limited || bw >= bbr_max_bw(sk)) { 633 | /* Incorporate new sample into our max bw filter. */ 634 | minmax_running_max(&bbr->bw, bbr_bw_rtts, bbr->rtt_cnt, bw); 635 | } 636 | } 637 | 638 | /* Estimate when the pipe is full, using the change in delivery rate: BBR 639 | * estimates that STARTUP filled the pipe if the estimated bw hasn't changed by 640 | * at least bbr_full_bw_thresh (25%) after bbr_full_bw_cnt (3) non-app-limited 641 | * rounds. Why 3 rounds: 1: rwin autotuning grows the rwin, 2: we fill the 642 | * higher rwin, 3: we get higher delivery rate samples. Or transient 643 | * cross-traffic or radio noise can go away. CUBIC Hystart shares a similar 644 | * design goal, but uses delay and inter-ACK spacing instead of bandwidth. 645 | */ 646 | static void bbr_check_full_bw_reached(struct sock *sk, 647 | const struct rate_sample *rs) 648 | { 649 | struct bbr *bbr = inet_csk_ca(sk); 650 | u32 bw_thresh; 651 | 652 | if (bbr_full_bw_reached(sk) || !bbr->round_start || rs->is_app_limited) 653 | return; 654 | 655 | bw_thresh = (u64)bbr->full_bw * bbr_full_bw_thresh >> BBR_SCALE; 656 | if (bbr_max_bw(sk) >= bw_thresh) { 657 | bbr->full_bw = bbr_max_bw(sk); 658 | bbr->full_bw_cnt = 0; 659 | return; 660 | } 661 | ++bbr->full_bw_cnt; 662 | } 663 | 664 | /* If pipe is probably full, drain the queue and then enter steady-state. */ 665 | static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs) 666 | { 667 | struct bbr *bbr = inet_csk_ca(sk); 668 | 669 | if (bbr->mode == BBR_STARTUP && bbr_full_bw_reached(sk)) { 670 | bbr->mode = BBR_DRAIN; /* drain queue we created */ 671 | bbr->pacing_gain = bbr_drain_gain; /* pace slow to drain */ 672 | bbr->cwnd_gain = bbr_high_gain; /* maintain cwnd */ 673 | } /* fall through to check if in-flight is already small: */ 674 | if (bbr->mode == BBR_DRAIN && 675 | tcp_packets_in_flight(tcp_sk(sk)) <= 676 | bbr_target_cwnd(sk, bbr_max_bw(sk), BBR_UNIT)) 677 | bbr_reset_probe_bw_mode(sk); /* we estimate queue is drained */ 678 | } 679 | 680 | /* The goal of PROBE_RTT mode is to have BBR flows cooperatively and 681 | * periodically drain the bottleneck queue, to converge to measure the true 682 | * min_rtt (unloaded propagation delay). This allows the flows to keep queues 683 | * small (reducing queuing delay and packet loss) and achieve fairness among 684 | * BBR flows. 685 | * 686 | * The min_rtt filter window is 10 seconds. When the min_rtt estimate expires, 687 | * we enter PROBE_RTT mode and cap the cwnd at bbr_cwnd_min_target=4 packets. 688 | * After at least bbr_probe_rtt_mode_ms=200ms and at least one packet-timed 689 | * round trip elapsed with that flight size <= 4, we leave PROBE_RTT mode and 690 | * re-enter the previous mode. BBR uses 200ms to approximately bound the 691 | * performance penalty of PROBE_RTT's cwnd capping to roughly 2% (200ms/10s). 692 | * 693 | * Note that flows need only pay 2% if they are busy sending over the last 10 694 | * seconds. 
Interactive applications (e.g., Web, RPCs, video chunks) often have 695 | * natural silences or low-rate periods within 10 seconds where the rate is low 696 | * enough for long enough to drain its queue in the bottleneck. We pick up 697 | * these min RTT measurements opportunistically with our min_rtt filter. :-) 698 | */ 699 | static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs) 700 | { 701 | struct tcp_sock *tp = tcp_sk(sk); 702 | struct bbr *bbr = inet_csk_ca(sk); 703 | //deprecated u32 rtt_prior = 0; 704 | bool filter_expired; 705 | 706 | /* Track min RTT seen in the min_rtt_win_sec filter window: */ 707 | filter_expired = after(tcp_jiffies32, 708 | bbr->min_rtt_stamp + bbr_min_rtt_win_sec * HZ); 709 | if (rs->rtt_us >= 0 && 710 | (rs->rtt_us <= bbr->min_rtt_us || filter_expired)) { 711 | bbr->min_rtt_us = rs->rtt_us; 712 | bbr->min_rtt_stamp = tcp_jiffies32; 713 | //deprecated bbr->rtt_us = rs->rtt_us; 714 | } 715 | //deprecated bbr->rtt_us = rs->rtt_us; 716 | //deprecated rtt_prior = minmax_get(&bbr->max_rtt); 717 | //deprecated bbr->rtt_us = min(bbr->rtt_us, rtt_prior); 718 | 719 | //deprecated minmax_running_max(&bbr->max_rtt, bbr_bw_rtts, bbr->rtt_cnt, rs->rtt_us); 720 | 721 | if (bbr_probe_rtt_mode_ms > 0 && filter_expired && 722 | !bbr->idle_restart && bbr->mode != BBR_PROBE_RTT) { 723 | bbr->mode = BBR_PROBE_RTT; /* dip, drain queue */ 724 | bbr->pacing_gain = BBR_UNIT; 725 | bbr->cwnd_gain = BBR_UNIT; 726 | bbr_save_cwnd(sk); /* note cwnd so we can restore it */ 727 | bbr->probe_rtt_done_stamp = 0; 728 | } 729 | 730 | if (bbr->mode == BBR_PROBE_RTT) { 731 | /* Ignore low rate samples during this mode. */ 732 | tp->app_limited = 733 | (tp->delivered + tcp_packets_in_flight(tp)) ? : 1; 734 | /* Maintain min packets in flight for max(200 ms, 1 round). 
*/ 735 | if (!bbr->probe_rtt_done_stamp && 736 | tcp_packets_in_flight(tp) <= bbr_cwnd_min_target) { 737 | bbr->probe_rtt_done_stamp = tcp_jiffies32 + 738 | msecs_to_jiffies(bbr_probe_rtt_mode_ms >> 1); 739 | bbr->probe_rtt_round_done = 0; 740 | bbr->next_rtt_delivered = tp->delivered; 741 | } else if (bbr->probe_rtt_done_stamp) { 742 | if (bbr->round_start) 743 | bbr->probe_rtt_round_done = 1; 744 | if (bbr->probe_rtt_round_done && 745 | after(tcp_jiffies32, bbr->probe_rtt_done_stamp)) { 746 | bbr->min_rtt_stamp = tcp_jiffies32; 747 | bbr->restore_cwnd = 1; /* snap to prior_cwnd */ 748 | bbr_reset_mode(sk); 749 | } 750 | } 751 | } 752 | bbr->idle_restart = 0; 753 | } 754 | 755 | static void bbr_update_model(struct sock *sk, const struct rate_sample *rs) 756 | { 757 | bbr_update_bw(sk, rs); 758 | bbr_update_cycle_phase(sk, rs); 759 | bbr_check_full_bw_reached(sk, rs); 760 | bbr_check_drain(sk, rs); 761 | bbr_update_min_rtt(sk, rs); 762 | } 763 | 764 | static void bbr_main(struct sock *sk, const struct rate_sample *rs) 765 | { 766 | struct bbr *bbr = inet_csk_ca(sk); 767 | u32 bw; 768 | 769 | bbr_update_model(sk, rs); 770 | 771 | bw = bbr_bw(sk); 772 | bbr_set_pacing_rate(sk, bw, bbr->pacing_gain); 773 | bbr_set_tso_segs_goal(sk); 774 | bbr_set_cwnd(sk, rs, rs->acked_sacked, bw, bbr->cwnd_gain); 775 | } 776 | 777 | static void bbr_init(struct sock *sk) 778 | { 779 | struct tcp_sock *tp = tcp_sk(sk); 780 | struct bbr *bbr = inet_csk_ca(sk); 781 | u64 bw; 782 | 783 | bbr->prior_cwnd = 0; 784 | bbr->tso_segs_goal = 0; /* default segs per skb until first ACK */ 785 | bbr->rtt_cnt = 0; 786 | bbr->next_rtt_delivered = 0; 787 | bbr->prev_ca_state = TCP_CA_Open; 788 | bbr->packet_conservation = 0; 789 | 790 | bbr->probe_rtt_done_stamp = 0; 791 | bbr->probe_rtt_round_done = 0; 792 | bbr->min_rtt_us = tcp_min_rtt(tp); 793 | bbr->min_rtt_stamp = tcp_jiffies32; 794 | 795 | minmax_reset(&bbr->bw, bbr->rtt_cnt, 0); /* init max bw to 0 */ 796 | 797 | /* Initialize pacing rate to: high_gain * init_cwnd / RTT. */ 798 | bw = (u64)tp->snd_cwnd * BW_UNIT; 799 | do_div(bw, (tp->srtt_us >> 3) ? : USEC_PER_MSEC); 800 | sk->sk_pacing_rate = 0; /* force an update of sk_pacing_rate */ 801 | bbr_set_pacing_rate(sk, bw, bbr_high_gain); 802 | 803 | bbr->restore_cwnd = 0; 804 | bbr->round_start = 0; 805 | bbr->idle_restart = 0; 806 | bbr->full_bw = 0; 807 | bbr->full_bw_cnt = 0; 808 | bbr->cycle_mstamp = 0; 809 | bbr->cycle_idx = 0; 810 | bbr_reset_lt_bw_sampling(sk); 811 | bbr_reset_startup_mode(sk); 812 | 813 | cmpxchg(&sk->sk_pacing_status, SK_PACING_NONE, SK_PACING_NEEDED); 814 | } 815 | 816 | static u32 bbr_sndbuf_expand(struct sock *sk) 817 | { 818 | /* Provision 3 * cwnd since BBR may slow-start even during recovery. */ 819 | return 3; 820 | } 821 | 822 | /* In theory BBR does not need to undo the cwnd since it does not 823 | * always reduce cwnd on losses (see bbr_main()). Keep it for now. 824 | */ 825 | static u32 bbr_undo_cwnd(struct sock *sk) 826 | { 827 | return tcp_sk(sk)->snd_cwnd; 828 | } 829 | 830 | /* Entering loss recovery, so save cwnd for when we exit or undo recovery. 
*/ 831 | static u32 bbr_ssthresh(struct sock *sk) 832 | { 833 | bbr_save_cwnd(sk); 834 | return TCP_INFINITE_SSTHRESH; /* BBR does not use ssthresh */ 835 | } 836 | 837 | static size_t bbr_get_info(struct sock *sk, u32 ext, int *attr, 838 | union tcp_cc_info *info) 839 | { 840 | if (ext & (1 << (INET_DIAG_BBRINFO - 1)) || 841 | ext & (1 << (INET_DIAG_VEGASINFO - 1))) { 842 | struct tcp_sock *tp = tcp_sk(sk); 843 | struct bbr *bbr = inet_csk_ca(sk); 844 | u64 bw = bbr_bw(sk); 845 | 846 | bw = bw * tp->mss_cache * USEC_PER_SEC >> BW_SCALE; 847 | memset(&info->bbr, 0, sizeof(info->bbr)); 848 | info->bbr.bbr_bw_lo = (u32)bw; 849 | info->bbr.bbr_bw_hi = (u32)(bw >> 32); 850 | info->bbr.bbr_min_rtt = bbr->min_rtt_us; 851 | info->bbr.bbr_pacing_gain = bbr->pacing_gain; 852 | info->bbr.bbr_cwnd_gain = bbr->cwnd_gain; 853 | *attr = INET_DIAG_BBRINFO; 854 | return sizeof(info->bbr); 855 | } 856 | return 0; 857 | } 858 | 859 | static void bbr_set_state(struct sock *sk, u8 new_state) 860 | { 861 | struct bbr *bbr = inet_csk_ca(sk); 862 | 863 | if (new_state == TCP_CA_Loss) { 864 | struct rate_sample rs = { .losses = 1 }; 865 | 866 | bbr->prev_ca_state = TCP_CA_Loss; 867 | bbr->full_bw = 0; 868 | bbr->round_start = 1; /* treat RTO like end of a round */ 869 | bbr_lt_bw_sampling(sk, &rs); 870 | } 871 | } 872 | 873 | static struct tcp_congestion_ops tcp_bbr_cong_ops __read_mostly = { 874 | .flags = TCP_CONG_NON_RESTRICTED, 875 | .name = "tsunami", 876 | .owner = THIS_MODULE, 877 | .init = bbr_init, 878 | .cong_control = bbr_main, 879 | .sndbuf_expand = bbr_sndbuf_expand, 880 | .undo_cwnd = bbr_undo_cwnd, 881 | .cwnd_event = bbr_cwnd_event, 882 | .ssthresh = bbr_ssthresh, 883 | .tso_segs_goal = bbr_tso_segs_goal, 884 | .get_info = bbr_get_info, 885 | .set_state = bbr_set_state, 886 | }; 887 | 888 | static int __init bbr_register(void) 889 | { 890 | BUILD_BUG_ON(sizeof(struct bbr) > ICSK_CA_PRIV_SIZE); 891 | return tcp_register_congestion_control(&tcp_bbr_cong_ops); 892 | } 893 | 894 | static void __exit bbr_unregister(void) 895 | { 896 | tcp_unregister_congestion_control(&tcp_bbr_cong_ops); 897 | } 898 | 899 | module_init(bbr_register); 900 | module_exit(bbr_unregister); 901 | 902 | MODULE_AUTHOR("Van Jacobson "); 903 | MODULE_AUTHOR("Neal Cardwell "); 904 | MODULE_AUTHOR("Yuchung Cheng "); 905 | MODULE_AUTHOR("Soheil Hassas Yeganeh "); 906 | MODULE_LICENSE("Dual BSD/GPL"); 907 | MODULE_DESCRIPTION("TCP BBR (Bottleneck Bandwidth and RTT)"); 908 | -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # easy_shell 2 | 3 | [](./LICENSE) 4 | 5 | ## To Do 6 | * [ ] Collect daily commands and make it into a script. 
7 | * [x] Thinking 8 | 9 | ## Contributing 10 | * Copyright © 2019 SYHGroup 11 | -------------------------------------------------------------------------------- /shellbox/99-shellbox: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | echo "\e[37;44;1m欢迎登陆: \e[0m\e[37;42;1m $(whoami) \e[0m" 3 | if df / -h |sed -n '2p' |awk '{print $4}'|grep -Fq 'G' ; then 4 | echo "\e[37;44;1m存储充足: \e[0m\e[37;42;1m $(df / -h|sed -n '2p' |awk '{print $4}') \e[0m" 5 | else 6 | echo "\e[37;44;1m存储爆炸: \e[0m\e[37;41;1m $(df / -h|sed -n '2p' |awk '{print $4}') \e[0m" 7 | fi 8 | echo "\e[37;44;1m可用内存: \e[0m\e[37;42;1m $(free -h --si|sed -n '2p' |awk '{print $7}') \e[0m" 9 | systemctl list-units --all \ 10 | nginx.service \ 11 | mariadb.service \ 12 | mtproxy.service \ 13 | php7.4-fpm.service \ 14 | shadowsocks-libev.service \ 15 | shadowsocks-libev-manager.service \ 16 | shadowsocks-libev-redir@config.service \ 17 | shadowsocks-libev-server@config-v2ray.service \ 18 | shadowsocks-libev-server@config-v2ray-quic.service \ 19 | vlmcsd.service \ 20 | trojan.service | \ 21 | head -n -7 | tail +2 | sort | \ 22 | awk '{ if ($4 == "running") 23 | print "\033[37;44;1m"$1" 状态: \033[0m\033[37;42;1m 正常 \033[0m"; 24 | else 25 | print "\033[37;44;1m"$2" 状态: \033[0m\033[37;41;1m 异常 \033[0m"; 26 | }' 27 | echo "\e[37;40;4m上次执行: \e[0m$(date)" 28 | -------------------------------------------------------------------------------- /shellbox/README.md: -------------------------------------------------------------------------------- 1 | # easyshell/shellbox 2 | ## Download directly 3 | `wget --no-cache https://raw.githubusercontent.com/SYHGroup/easyshell/master/shellbox/shellbox.sh -O shellbox.sh && chmod +x shellbox.sh` 4 | -------------------------------------------------------------------------------- /shellbox/shellbox.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | #encoding=utf8 3 | rootpath="/tmp/build-source" 4 | mkdir -p -m 777 $rootpath 5 | 6 | ######## 7 | #Small Script 8 | ######## 9 | 10 | Checkroot(){ 11 | if [[ $EUID != "0" ]] 12 | then 13 | echo "Not root user." 14 | exit 1 15 | fi 16 | } 17 | 18 | Sshroot(){ 19 | Checkroot 20 | sed -i s/'PermitRootLogin without-password'/'PermitRootLogin yes'/ /etc/ssh/sshd_config 21 | sed -i s/'PermitRootLogin prohibit-password'/'PermitRootLogin yes'/ /etc/ssh/sshd_config 22 | sed -i s/'Port 22'/'Port 20'/ /etc/ssh/sshd_config 23 | } 24 | 25 | Switchipv6(){ 26 | Checkroot 27 | if grep -Fq "#precedence ::ffff:0:0/96 100" /etc/gai.conf 28 | then 29 | sed -i s/'#precedence ::ffff:0:0\/96 100'/'precedence ::ffff:0:0\/96 100'/ /etc/gai.conf 30 | echo "Set to prefer ipv4." 31 | else 32 | sed -i s/'precedence ::ffff:0:0\/96 100'/'#precedence ::ffff:0:0\/96 100'/ /etc/gai.conf 33 | echo "Set to prefer ipv6." 
34 | fi 35 | } 36 | 37 | Saveapt(){ 38 | rm /var/lib/apt/lists/lock 39 | rm /var/cache/apt/archives/lock 40 | rm /var/lib/dpkg/lock 41 | } 42 | 43 | DisableResolvedListener(){ 44 | sed -i "s/#DNSStubListener=yes/DNSStubListener=no/g" /etc/systemd/resolved.conf 45 | systemctl restart systemd-resolved.service 46 | } 47 | 48 | 49 | ######## 50 | #Server Preset 51 | ######## 52 | 53 | Debiancnsource(){ 54 | Checkroot 55 | echo "deb https://repo.debiancn.org/ buster main" > /etc/apt/sources.list.d/debiancn.list 56 | wget https://repo.debiancn.org/pool/main/d/debiancn-keyring/debiancn-keyring_0~20161212_all.deb -O /tmp/debiancn-keyring.deb 57 | apt install /tmp/debiancn-keyring.deb 58 | } 59 | 60 | Aptstablesources(){ 61 | Debiancnsource 62 | echo -e 'deb https://deb.debian.org/debian/ stable main contrib non-free 63 | deb https://deb.debian.org/debian/ stable-updates main contrib non-free 64 | deb https://deb.debian.org/debian/ stable-proposed-updates main contrib non-free 65 | deb https://deb.debian.org/debian/ stable-backports main contrib non-free 66 | deb https://deb.debian.org/debian-security/ stable/updates main\n' > /etc/apt/sources.list 67 | } 68 | 69 | Apttestingsources(){ 70 | Debiancnsource 71 | echo -e 'deb https://deb.debian.org/debian/ testing main contrib non-free 72 | deb https://deb.debian.org/debian/ testing-updates main contrib non-free 73 | deb https://deb.debian.org/debian/ testing-proposed-updates main contrib non-free 74 | deb https://deb.debian.org/debian-security/ testing-security/updates main contrib non-free 75 | deb https://deb.debian.org/debian/ experimental main contrib non-free\n' > /etc/apt/sources.list 76 | } 77 | 78 | Aptunstablesources(){ 79 | Debiancnsource 80 | echo -e 'deb https://deb.debian.org/debian/ unstable main contrib non-free 81 | deb https://deb.debian.org/debian/ experimental main contrib non-free\n' > /etc/apt/sources.list 82 | } 83 | 84 | Setsysctl(){ 85 | Checkroot 86 | wget --no-cache https://gist.github.com/simonsmh/d5531ea7e07ef152bbe8e672da1ddd65/raw/sysctl.conf -O /etc/sysctl.conf 87 | sysctl -p 88 | } 89 | 90 | Setdns(){ 91 | Checkroot 92 | apt install -y resolvconf 93 | echo -e 'nameserver 2001:4860:4860:0:0:0:0:8888 94 | nameserver 2001:4860:4860:0:0:0:0:8844 95 | nameserver 8.8.8.8 96 | nameserver 8.8.4.4\n' > /etc/resolvconf/resolv.conf.d/base 97 | resolvconf -u 98 | } 99 | 100 | Setgolang(){ 101 | Checkroot 102 | apt install -y golang 103 | go env -w GOBIN=/usr/local/bin 104 | go env -w GOPATH=/opt/go 105 | } 106 | 107 | Setsh(){ 108 | read -p "Choose your team: 1.zsh 2.fish 3.bash " 109 | sed -i s/required/sufficient/g /etc/pam.d/chsh 110 | if [ $REPLY = 1 ] 111 | then 112 | apt -y install git zsh zsh-autosuggestions zsh-syntax-highlighting 113 | rm -r ~/.oh-my-zsh 114 | git clone https://github.com/robbyrussell/oh-my-zsh.git ~/.oh-my-zsh 115 | echo "source ~/.bashrc" > ~/.zshrc 116 | cat ~/.oh-my-zsh/templates/zshrc.zsh-template >> ~/.zshrc 117 | sed -i "s/robbyrussell/ys/g" ~/.zshrc 118 | sed -i "s/git/git git-extras svn last-working-dir catimg encode64 urltools wd sudo command-not-found common-aliases debian gitfast gradle npm python systemd dircycle zsh-completions zsh-history-substring-search/g" ~/.zshrc 119 | git clone https://github.com/zsh-users/zsh-completions ${ZSH_CUSTOM:=~/.oh-my-zsh/custom}/plugins/zsh-completions 120 | git clone https://github.com/zsh-users/zsh-history-substring-search ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-history-substring-search 121 | echo "[[ -f 
/usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh ]] && source /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh 122 | [[ -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]] && source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh" >> ~/.zshrc 123 | chsh -s /usr/bin/zsh 124 | elif [ $REPLY = 2 ] 125 | then 126 | apt -y install git fish 127 | rm -r ~/.config/fish/config.fish ~/.config/fish/functions/ 128 | mkdir -p ~/.config/fish/functions/ 129 | wget https://github.com/fisherman/fisherman/raw/master/fisher.fish -O ~/.config/fish/functions/fisher.fish 130 | echo "source ~/.bashrc" >> ~/.config/fish/config.fish 131 | chsh -s /usr/bin/fish 132 | elif [ $REPLY = 3 ] 133 | then 134 | apt -y install git mosh bash powerline 135 | chsh -s /bin/bash 136 | rm -r ~/.bash_it 137 | git clone https://github.com/Bash-it/bash-it.git ~/.bash_it 138 | ~/.bash_it/install.sh --silent 139 | echo "set colored-completion-prefix on 140 | set colored-stats on" >> ~/.inputrc 141 | sed -i "s/bobby/demula/g" ~/.bashrc 142 | sed -i "s/git@git.domain.com/git@github.com/g" ~/.bashrc 143 | sed -i "s/irssi//g" ~/.bashrc 144 | source ~/.bashrc 145 | bash-it disable plugin all && bash-it enable plugin alias-completion base git hub node plugin sshagent ssh subversion tmux 146 | bash-it enable completion all && bash-it disable completion conda ng 147 | else 148 | echo "Bad syntax." 149 | fi 150 | } 151 | 152 | Setautomaintenance(){ 153 | Checkroot 154 | apt install -y unattended-upgrades needrestart 155 | sed -i " 156 | s|Unattended-Upgrade::Origins-Pattern {|Unattended-Upgrade::Origins-Pattern {\n\"o=*\"\;|g; 157 | s|//Unattended-Upgrade::Remove-Unused-Kernel-Packages|Unattended-Upgrade::Remove-Unused-Kernel-Packages|g; 158 | s|//Unattended-Upgrade::Remove-New-Unused-Dependencies|Unattended-Upgrade::Remove-New-Unused-Dependencies|g; 159 | " /etc/apt/apt.conf.d/50unattended-upgrades 160 | echo unattended-upgrades unattended-upgrades/enable_auto_updates boolean true | debconf-set-selections 161 | dpkg-reconfigure -f noninteractive unattended-upgrades 162 | } 163 | 164 | Setmotd(){ 165 | Checkroot 166 | wget https://github.com/SYHGroup/easy_shell/raw/master/shellbox/99-shellbox -O /etc/update-motd.d/99-shellbox 167 | chmod +x /etc/update-motd.d/99-shellbox 168 | run-parts /etc/update-motd.d 169 | } 170 | 171 | SetLimit(){ 172 | if grep "^\*" /etc/security/limits.conf 173 | then 174 | echo "* soft nofile 65535 175 | * hard nofile 65535" > /etc/security/limits.conf 176 | fi 177 | sed -i "s/^#DefaultLimitNOFILE=.*/DefaultLimitNOFILE=1048576:1048576/g" /etc/systemd/system.conf 178 | } 179 | 180 | Desktop(){ 181 | Checkroot 182 | apt install -y tigervnc-scraping-server tigervnc-standalone-server tigervnc-xorg-extension xfce4 xfce4-goodies xorg fonts-noto 183 | wget https://github.com/SYHGroup/easysystemd/raw/master/x0vncserver%40.service -O /etc/systemd/system/x0vncserver@.service 184 | systemctl enable x0vncserver@5901.service 185 | systemctl start x0vncserver@5901.service 186 | } 187 | 188 | LNMP(){ 189 | Checkroot 190 | apt install nginx-extras mariadb-client mariadb-server php7.3-[^dev,apcu] 191 | systemctl enable nginx mariadb php7.3-fpm 192 | sed -i s/'upload_max_filesize = 2M'/'upload_max_filesize = 100M'/ /etc/php/7.3/fpm/php.ini 193 | sed -i s/'post_max_size = 8M'/'post_max_size = 100M'/ /etc/php/7.3/fpm/php.ini 194 | sed -i s/'short_open_tag = Off'/'short_open_tag = On'/ /etc/php/7.3/fpm/php.ini 195 | sed -i s/'default_socket_timeout = 60'/'default_socket_timeout = 300'/ 
/etc/php/7.3/fpm/php.ini 196 | sed -i s/'memory_limit = 128M'/'memory_limit = 64M'/ /etc/php/7.3/fpm/php.ini 197 | sed -i s/';opcache.enable=0'/'opcache.enable=1'/ /etc/php/7.3/fpm/php.ini 198 | sed -i s/';opcache.enable_cli=0'/'opcache.enable_cli=1'/ /etc/php/7.3/fpm/php.ini 199 | sed -i s/';opcache.fast_shutdown=0'/'opcache.fast_shutdown=1'/ /etc/php/7.3/fpm/php.ini 200 | sed -i s/'zlib.output_compression = Off'/'zlib.output_compression = On'/ /etc/php/7.3/fpm/php.ini 201 | sed -i s/';zlib.output_compression_level = -1'/'zlib.output_compression_level = 5'/ /etc/php/7.3/fpm/php.ini 202 | sed -i s/'allow_url_include = Off'/'allow_url_include = On'/ /etc/php/7.3/fpm/php.ini 203 | #mysql -u root 204 | #use mysql; 205 | #update user set plugin='' where User='root'; 206 | #flush privileges; 207 | #exit; 208 | } 209 | 210 | Github(){ 211 | git config --global user.name "Simon Shi" 212 | git config --global user.email simonsmh@gmail.com 213 | git config --global credential.helper store 214 | git config --global commit.gpgsign true 215 | git config --global tag.gpgsign true 216 | echo -e 'export GPG_TTY=$(tty) 217 | export DEBEMAIL="simonsmh@gmail.com" 218 | export DEBFULLNAME="Simon Shi"' >>~/.bashrc 219 | #Import gpg key from keybase.io first 220 | } 221 | 222 | SSPreset(){ 223 | Checkroot 224 | apt install -y build-essential gettext build-essential autoconf libtool libpcre3-dev libc-ares-dev libev-dev automake libcork-dev libcorkipset-dev libmbedtls-dev libsodium-dev python-pip python-m2crypto golang libwebsockets-dev libjson-c-dev libssl-dev 225 | apt install -y --no-install-recommends asciidoc xmlto 226 | wget https://github.com/SYHGroup/easy_systemd/raw/master/ssserver.service -O /etc/systemd/system/ssserver.service 227 | Python & 228 | Libev & 229 | systemctl enable ssserver shadowsocks-libev 230 | } 231 | 232 | ######## 233 | #Production Server Automatic Update 234 | ######## 235 | 236 | Sysupdate(){ 237 | Checkroot 238 | [[ -z $(cat ~/.bashrc | grep "alias u") ]] && echo -e "alias u=\"$(cd "$(dirname "$0")"; pwd)/$0 -u\"" >> ~/.bashrc 239 | DEBIAN_FRONTEND=noninteractive 240 | apt update 241 | apt -y full-upgrade 242 | apt -y autoremove --purge 243 | unset DEBIAN_FRONTEND 244 | # apt -y purge `dpkg -l |grep ^rc |awk '{print $2}'` 245 | } 246 | 247 | Vlmcsd(){ 248 | Checkroot 249 | cd $rootpath 250 | git clone https://github.com/Wind4/vlmcsd 251 | cd vlmcsd 252 | git fetch 253 | git reset --hard origin/HEAD 254 | git submodule update --init --recursive 255 | dpkg-buildpackage -rfakeroot -us -uc 256 | git clean -fdx 257 | dpkg -i ../vlmcsd_*.deb 258 | rm ../vlmcsd*.{buildinfo,changes,deb} 259 | } 260 | 261 | Ttyd(){ 262 | Checkroot 263 | cd $rootpath 264 | git clone https://github.com/tsl0922/ttyd 265 | cd ttyd 266 | git fetch 267 | git reset --hard origin/HEAD 268 | dpkg-buildpackage -rfakeroot -us -uc 269 | git clean -fdx 270 | dpkg -i ../ttyd_*.deb 271 | rm ../ttyd*.{buildinfo,changes,deb} 272 | } 273 | 274 | Rust(){ 275 | Checkroot 276 | cd $rootpath 277 | git clone https://github.com/shadowsocks/shadowsocks-rust 278 | cd shadowsocks-rust 279 | git fetch 280 | git reset --hard origin/HEAD 281 | dpkg-buildpackage -rfakeroot -us -uc 282 | git clean -fdx 283 | dpkg -i ../shadowsocks-rust_*.deb 284 | rm ../shadowsocks-rust*.{buildinfo,changes,deb} 285 | } 286 | 287 | Libev(){ 288 | Checkroot 289 | ## Libev 290 | cd $rootpath 291 | git clone https://github.com/shadowsocks/shadowsocks-libev 292 | cd shadowsocks-libev 293 | git fetch 294 | git reset --hard origin/HEAD 295 | git submodule 
update --init --recursive 296 | ./autogen.sh 297 | dpkg-buildpackage -rfakeroot -us -uc 298 | git clean -fdx 299 | ## Obfs Plugin 300 | cd $rootpath 301 | git clone https://github.com/shadowsocks/simple-obfs 302 | cd simple-obfs 303 | git fetch 304 | git reset --hard origin/HEAD 305 | git submodule update --init --recursive 306 | ./autogen.sh 307 | dpkg-buildpackage -rfakeroot -us -uc 308 | git clean -fdx 309 | ## Install 310 | cd $rootpath 311 | dpkg -i {shadowsocks-libev,simple-obfs}_*.deb 312 | systemctl restart shadowsocks-libev 313 | rm *{shadowsocks-libev,simple-obfs}*.{buildinfo,changes,deb} 314 | } 315 | 316 | Python(){ 317 | Checkroot 318 | cd $rootpath 319 | pip install --upgrade git+https://github.com/shadowsocks/shadowsocks.git@master 320 | systemctl restart ssserver 321 | } 322 | 323 | Go(){ 324 | go get -u github.com/shadowsocks/go-shadowsocks2 325 | install ~/go/bin/go-shadowsocks2 /usr/bin/ 326 | systemctl restart go-shadowsocks2 327 | } 328 | 329 | ######## 330 | #Large Script 331 | ######## 332 | 333 | NX(){ 334 | Checkroot 335 | Aptstablesources 336 | apt update 337 | apt install -y nginx-extras tmux 338 | echo -e 'server { 339 | listen 80 default_server; 340 | listen [::]:80 default_server; 341 | autoindex on; 342 | autoindex_exact_size off; 343 | autoindex_localtime on; 344 | root /root/; 345 | }\n' > /etc/nginx/sites-available/default 346 | sed -i s/'user www-data'/'user root'/ /etc/nginx/nginx.conf 347 | systemctl enable nginx 348 | systemctl restart nginx 349 | } 350 | 351 | TMSU(){ 352 | Checkroot 353 | Aptstablesources 354 | apt update 355 | apt install -y transmission-daemon nginx-extras tmux 356 | systemctl stop transmission-daemon 357 | #username="transmission" 358 | #password="transmission" 359 | #sed -i s/"\"rpc-username\": \"transmission\","/"\"rpc-username\": \"$username\","/ /etc/transmission-daemon/settings.json 360 | #sed -i s/"\"rpc-password\": \".*"/"\"rpc-password\": \"$password\","/ /etc/transmission-daemon/settings.json 361 | sed -i s/'"download-queue-enabled": true'/'"download-queue-enabled": false'/ /etc/transmission-daemon/settings.json 362 | sed -i s/'"rpc-authentication-required": true'/'"rpc-authentication-required": false'/ /etc/transmission-daemon/settings.json 363 | sed -i s/'"rpc-whitelist-enabled": true'/'"rpc-whitelist-enabled": false'/ /etc/transmission-daemon/settings.json 364 | echo -e 'server { 365 | listen 80 default_server; 366 | listen [::]:80 default_server; 367 | autoindex on; 368 | autoindex_exact_size off; 369 | autoindex_localtime on; 370 | root /var/lib/transmission-daemon/downloads/; 371 | location /transmission { 372 | proxy_pass http://127.0.0.1:9091; 373 | proxy_set_header Accept-Encoding ""; 374 | proxy_pass_header X-Transmission-Session-Id; 375 | }}\n' > /etc/nginx/sites-available/default 376 | wget https://github.com/ronggang/transmission-web-control/raw/master/release/tr-control-easy-install.sh 377 | bash tr-control-easy-install.sh 378 | rm tr-control-easy-install.sh 379 | systemctl enable transmission-daemon nginx 380 | systemctl restart transmission-daemon nginx 381 | } 382 | 383 | ######## 384 | #Help 385 | ######## 386 | 387 | Help(){ 388 | echo -e `date`" 389 | Usage: 390 | \tSmall Script: 391 | \t\t-checkroot\tCheck root 392 | \t\t-sshroot\tEnable ssh for root 393 | \t\t-ipv6\t\tSwitch ipv6 394 | \t\t-saveapt\tSave apt/dpkg lock 395 | \t\t-resolved\tDisable Stub Listener 396 | \tServer Preset: 397 | \t\t-stable\t\tApt stable sources 398 | \t\t-testing\tApt testing sources 399 | \t\t-unstable\tApt unstable 
sources 400 | \t\t-setsysctl\tSet sysctl 401 | \t\t-setdns\t\tSet dns 402 | \t\t-setgolang\tSet golang 403 | \t\t-setam\t\tSet auto maintenance 404 | \t\t-setmotd\tSet motd 405 | \t\t-setlimit\tSet nofile limit 406 | \t\t-setsh\t\tSet custom shell 407 | \t\t-setdesktop\tSet Xfce 408 | \t\t-lnmp\t\tNginx+Mariadb+PHP7 409 | \t\t-gitpreset\tGitHub Preset 410 | \t\t-sspreset\tShadowsocks Preset 411 | \tProduction Server Automatic Update: 412 | \t\t-m\t\tUpdate motd 413 | \t\t-u\t\tSystem update 414 | \t\t-v\t\tCompile Vlmcsd 415 | \t\t-t\t\tCompile Ttyd 416 | \t\t-sr\t\tCompile SS-Rust 417 | \t\t-sl\t\tCompile SS-Libev 418 | \t\t-sp\t\tCompile SS-Python 419 | \t\t-sg\t\tCompile SS-Go 420 | \tLarge Script: 421 | \t\tNX\t\tNginx 422 | \t\tTMSU\t\tTransmission+Nginx 423 | \tShellbox: 424 | \t\t-server\t\tRun Production Server Automatic Update 425 | \t\tupdate\t\tUpdate shellbox.sh 426 | \t\tRUN\t\tRun with parameter 427 | \t\tfishroom\tRun Fishroom 428 | \t\tkillfishroom\tKill Fishroom" 429 | } 430 | 431 | ######## 432 | #Running 433 | ######## 434 | for arg in "$@" 435 | do 436 | case $arg in 437 | #Small Script 438 | -checkroot)Checkroot;; 439 | -sshroot)Sshroot;; 440 | -ipv6)Switchipv6;; 441 | -saveapt)Saveapt;; 442 | -resolved)DisableResolvedListener;; 443 | #Server Preset 444 | -stable)Aptstablesources;; 445 | -testing)Apttestingsources;; 446 | -unstable)Aptunstablesources;; 447 | -setsysctl)Setsysctl;; 448 | -setdns)Setdns;; 449 | -setgolang)Setgolang;; 450 | -setsh)Setsh;; 451 | -setam)Setautomaintenance;; 452 | -setlimit)SetLimit;; 453 | -setmotd)Setmotd;; 454 | -setdesktop)Desktop;; 455 | -lnmp|LNMP)LNMP;; 456 | -gitpreset)Github;; 457 | -sspreset)SSPreset;; 458 | #Production Server Automatic Update 459 | -u)Sysupdate;; 460 | -t)Ttyd;; 461 | -v)Vlmcsd;; 462 | -sr)Rust;; 463 | -sl)Libev;; 464 | -sp)Python;; 465 | -sg)Go;; 466 | #Large Script 467 | -nx|NX)NX;; 468 | -tmsu|TMSU)TMSU;; 469 | #Shellbox 470 | -server) 471 | Vlmcsd & 472 | Rust & 473 | wait 474 | Sysupdate & 475 | ;; 476 | u|update|upgrade) 477 | cd $(cd "$(dirname "$0")"; pwd) 478 | wget --no-cache https://raw.githubusercontent.com/SYHGroup/easy_shell/master/shellbox/shellbox.sh -O shellbox.sh 479 | chmod +x shellbox.sh && exit 0 480 | ;; 481 | fishroom) 482 | export PYTHONPATH=/root/fishroom 483 | tmux new -d -s fishroom -n core python3 -m fishroom.fishroom 484 | tmux neww -t fishroom -n telegram python3 -m fishroom.telegram 485 | tmux neww -t fishroom -n web python3 -m fishroom.web 486 | ;; 487 | killfishroom) 488 | tmux kill-session -t fishroom 489 | ;; 490 | fishroom-smu) 491 | export PYTHONPATH=/root/fishroom-smu 492 | tmux new -d -s fishroom-smu -n core python3 -m fishroom.fishroom 493 | tmux neww -t fishroom-smu -n telegram python3 -m fishroom.telegram 494 | tmux neww -t fishroom-smu -n web python3 -m fishroom.web 495 | ;; 496 | killfishroom-smu) 497 | tmux kill-session -t fishroom-smu 498 | ;; 499 | RUN) 500 | `echo -n $* |sed -e 's/^RUN //g' |awk -F ' ' '{ print $0 }'` 501 | exit $?
502 | ;; 503 | -h) 504 | Help 505 | ;; 506 | *) 507 | Help && exit 1 508 | ;; 509 | esac 510 | done 511 | [ -z "$1" ] && Help && exit 1 512 | -------------------------------------------------------------------------------- /sss/README.md: -------------------------------------------------------------------------------- 1 | Shadowsocks Solution for Linux 透明代理解决方案 2 | - 3 | # 程序原理 4 | 使用CNNIC提供的chnroute中国大陆路由进行境内外出入流量分流,并使用gfwlist进行DNS污染规避。 5 | 6 | # 程序框图 7 |  8 | 9 | 10 | # 安装使用 11 | ## 获取脚本依赖 12 | 更新软件源列表,并安装下列程序。 13 | ```shell 14 | sudo apt update 15 | sudo apt install shadowsocks-libev dnsmasq stubby 16 | ``` 17 | ## 下载脚本 18 | 获取脚本并授予执行权限。 19 | ```shell 20 | wget https://github.com/SYHGroup/easy_shell/raw/master/sss/ss 21 | wget https://github.com/SYHGroup/easy_shell/raw/master/sss/update_list 22 | chmod +x ./ss 23 | chmod +x ./update_list 24 | ``` 25 | ## 配置环境 26 | 1. 系统配置 27 | 28 | - (Linux桌面环境) 29 | 30 | 修改/etc/NetworkManager/NetworkManager.conf,添加如下内容以启用`dnsmasq`显式调用。 31 | ``` 32 | [main] 33 | dns=dnsmasq 34 | ``` 35 | 然后修改`update_list`配置中的`dnsmasq`参数。 36 | ``` 37 | DNSMASQ=/etc/NetworkManager/dnsmasq.d/dnsmasq_gfwlist.conf 38 | ``` 39 | - (Openwrt环境)预先安装好`smartdns` 40 | 41 | 42 | 2. 修改**可能会引起冲突**的配置 43 | - 修改`systemd-resolved`配置,避免与Dnsmasq服务冲突。 44 | ```shell 45 | sed -i "s/#DNSStubListener=yes/DNSStubListener=no/g" /etc/systemd/resolved.conf 46 | systemctl restart systemd-resolved 47 | ``` 48 | 49 | - 修改`/etc/stubby/stubby.xml`端口配置,避免与Dnsmasq服务冲突。 50 | ```yaml 51 | ... 52 | # Set the listen addresses for the stubby DAEMON. This specifies localhost IPv4 53 | # and IPv6. It will listen on port 53 by default. Use @ to 54 | # specify a different port 55 | listen_addresses: 56 | - 127.0.0.1@5453 57 | - 0::1@5453 58 | ... 59 | ``` 60 | 61 | - 修改`/etc/stubby/stubby.xml`的TLS DNS服务器配置,加快解析速度。 62 | ```yaml 63 | ... 64 | ## Google 65 | - address_data: 8.8.8.8 66 | tls_auth_name: "dns.google" 67 | - address_data: 8.8.4.4 68 | tls_auth_name: "dns.google" 69 | ... 70 | ``` 71 | 72 | - 修改`update_list`配置 73 | ``` 74 | DNS_PORT=5453 75 | ``` 76 | 这样`stubby`即可作为`dnsmasq`的上游正常工作。当然也可以使用无污染服务器提供的地址。 77 | 78 | - 确保`/etc/resolv.conf`配置为NetworkManager调用Dnsmasq监听的地址。 79 | ``` 80 | nameserver 127.0.0.1 81 | ``` 82 | 83 | 检查服务运行情况 84 | ``` 85 | sudo ss -tlnp 86 | ``` 87 | 需要返回如下信息方说明正常工作,`NetworkManager(user level) -> Dnsmasq(53) -> stubby(5300)`: 88 | ``` 89 | # simonsmh @ XPS15 in ~ [15:51:09] 90 | $ sudo ss -tlnp 91 | State Recv-Q Send-Q Local Address:Port Peer Address:Port Process 92 | LISTEN 0 16 127.0.0.1:5300 0.0.0.0:* users:(("stubby",pid=1903,fd=4)) 93 | LISTEN 0 32 127.0.0.1:53 0.0.0.0:* users:(("dnsmasq",pid=1907,fd=5)) 94 | ``` 95 | 96 | 3. 运行`update_list`以更新`chnroute`和`gfwlist`列表 97 | ``` 98 | sudo ./update_list 99 | ``` 100 | 101 | ## 使用透明代理 102 | 脚本支持**自定义配置**,在`ss`脚本中修改 103 | ```shell 104 | config_location=/etc/shadowsocks 105 | systemd_service=shadowsocks-libev-redir 106 | ``` 107 | 可以自行指定**系统服务名称**以及**配置位置**。 108 | 109 | 例:使用`/etc/shadowsocks/config.json`文件时,使用: 110 | ```shell 111 | sudo ./ss config 112 | ``` 113 | 停止使用 114 | ```shell 115 | sudo ./ss config stop 116 | ``` 117 | 118 | ## 维护 119 | 1. 请每周执行`update_list`以更新列表。 120 | 121 | 2. Linux桌面环境由于可以使用`NetworkManager`显式调用`dnsmasq`,请禁用`dnsmasq`的systemd服务,因此不建议开机自启。 122 | ```shell 123 | systemctl disable dnsmasq.service --now 124 | ``` 125 | 126 | 3. 
如DNS缓慢,请检查`stubby`工作速度。 127 | ```shell 128 | # simonsmh @ XPS15 in ~/Projects/easy_shell on git:master 129 | $ dig google.com -p 5453 130 | 131 | ; <<>> DiG 9.14.9 <<>> google.com -p 5453 132 | ;; global options: +cmd 133 | ;; Got answer: 134 | ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25559 135 | ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 136 | 137 | ;; OPT PSEUDOSECTION: 138 | ; EDNS: version: 0, flags:; udp: 512 139 | ;; QUESTION SECTION: 140 | ;google.com. IN A 141 | 142 | ;; ANSWER SECTION: 143 | google.com. 299 IN A 172.217.26.142 144 | 145 | ;; Query time: 2505 msec 146 | ;; SERVER: 127.0.0.1#5453(127.0.0.1) 147 | ;; WHEN: 二 1月 21 22:53:50 CST 2020 148 | ;; MSG SIZE rcvd: 65 149 | ``` 150 | 151 | 4. 如代理不工作,请检查systemd日志。 152 | ```shell 153 | # simonsmh @ XPS15 in ~ 154 | $ systemctl status shadowsocks-libev-redir@config.service 155 | ● shadowsocks-libev-redir@config.service - Shadowsocks-Libev Client Service Redir Mode 156 | Loaded: loaded (/usr/lib/systemd/system/shadowsocks-libev-redir@.service; disabled; vendor preset: disabled) 157 | Active: active (running) since Tue 2020-01-21 22:56:14 CST; 3s ago 158 | Main PID: 11575 (ss-redir) 159 | Tasks: 1 (limit: 19010) 160 | Memory: 1.6M 161 | CGroup: /system.slice/system-shadowsocks\x2dlibev\x2dredir.slice/shadowsocks-libev-redir@config.service 162 | └─11575 /usr/bin/ss-redir -c /etc/shadowsocks/config.json 163 | 164 | 1月 21 22:56:14 XPS15 systemd[1]: Started Shadowsocks-Libev Client Service Redir Mode. 165 | 1月 21 22:56:15 XPS15 ss-redir[11575]: 2020-01-21 22:56:14 INFO: using tcp fast open 166 | 1月 21 22:56:15 XPS15 ss-redir[11575]: 2020-01-21 22:56:14 INFO: initializing ciphers... xchacha20-ietf-poly1305 167 | 1月 21 22:56:15 XPS15 ss-redir[11575]: 2020-01-21 22:56:14 INFO: listening at 127.0.0.1:1080 168 | 1月 21 22:56:15 XPS15 ss-redir[11575]: 2020-01-21 22:56:14 INFO: tcp port reuse enabled 169 | 1月 21 22:56:15 XPS15 ss-redir[11575]: 2020-01-21 22:56:15 INFO: UDP relay enabled 170 | 1月 21 22:56:15 XPS15 ss-redir[11575]: 2020-01-21 22:56:15 INFO: udp port reuse enabled 171 | ``` 172 | -------------------------------------------------------------------------------- /sss/ss: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | config_location=/etc/shadowsocks 3 | systemd_service=shadowsocks-libev-redir 4 | chnroute=/etc/chinadns_chnroute.txt 5 | ########## 6 | Help(){ 7 | echo "Usage: $0 <[start]|stop>" && exit $@ 8 | } 9 | [[ $EUID != "0" ]] && { echo "Not root user." && exit 1;} 10 | [[ -z "$1" ]] && Help 1 || [[ "$1" == "-h" ]] || [[ "$1" == "--help" ]] && Help 0 || CONFIG=$1 11 | [[ "$2" == "stop" ]] && unset ENABLE || ENABLE=start 12 | 13 | if [ $ENABLE ]; then 14 | [[ $(systemctl is-active $systemd_service@$CONFIG.service) ]] || { echo "WARN: ss-libev is running." ;} 15 | domain=$(sed -n 's/[[:space:]]//;s/.*"server":"\(.*\)".*/\1/p' $config_location/$CONFIG.json 2>/dev/null) 16 | [[ -z "$domain" ]] && { echo "ERROR: Couldn't find your server." && exit 1;} 17 | port=$(sed -n 's/[[:space:]]//;s/.*"local_port":\([0-9]*\).*/\1/p' $config_location/$CONFIG.json 2>/dev/null) 18 | [[ -z "$port" ]] && port=1080 19 | echo $domain $port $ENABLE 20 | systemctl restart $systemd_service@$CONFIG.service 21 | ss-nat -s $domain -l $port -i $chnroute -u -o 22 | else 23 | systemctl stop $systemd_service@$CONFIG.service 24 | ss-nat -f 25 | fi 26 | exit $? 
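# Note: the sed expressions above expect a compact config.json with no space after the
# colon (e.g. "server":"example.com"). A more robust sketch, assuming jq is installed
# (it is not required by this script as written), would be:
#   domain=$(jq -r '.server' "$config_location/$CONFIG.json")
#   port=$(jq -r '.local_port // 1080' "$config_location/$CONFIG.json")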
27 | -------------------------------------------------------------------------------- /sss/sss.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | PoisoningPoisoningdnsmasqgfwlistdnsmasq...ss-nat (iptables)chnroutess-nat (iptables)...DNSDNSTrafficTrafficISPISPss-redirss-redirVPSVPSstubby(TLS DNS request)stubby... -------------------------------------------------------------------------------- /sss/update_list: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | CHNROUTE=/etc/chinadns_chnroute.txt 3 | GFWLIST=/etc/dnsmasq_gfwlist.conf #/etc/NetworkManager/dnsmasq.d/dnsmasq_gfwlist.conf 4 | CHINALIST=/etc/dnsmasq_chinalist.conf #/etc/NetworkManager/dnsmasq.d/dnsmasq_chinalist.conf 5 | DNS_IP=127.0.0.1 6 | DNS_PORT=5453 7 | EXTRADOMAINLIST=' 8 | jerryxiao.cc 9 | ' 10 | ########## 11 | #CHECK 12 | set -e -o pipefail 13 | 14 | #CHNROUTE 15 | FILE_CHNROUTE=$(basename $CHNROUTE) 16 | wget https://ftp.apnic.net/apnic/stats/apnic/delegated-apnic-latest -O delegated-apnic-latest.txt 17 | cat delegated-apnic-latest.txt | grep ipv4 | grep CN | awk -F\| '{printf("%s/%d\n", $4, 32-log($5)/log(2))}' > $FILE_CHNROUTE 18 | # cat delegated-apnic-latest.txt | grep ipv6 | grep CN | awk -F\| '{printf("%s/%d\n", $4, $5)}' >> $FILE_CHNROUTE 19 | rm delegated-apnic-latest.txt 20 | mv -f ./$FILE_CHNROUTE $CHNROUTE 21 | 22 | #CHINALIST 23 | FILE_CHINALIST=$(basename $CHINALIST) 24 | wget https://github.com/felixonmars/dnsmasq-china-list/raw/master/accelerated-domains.china.conf -O- | sed "s|^\(server.*\)/[^/]*$|\1/$DNS_IP#$DNS_PORT|" > $FILE_CHINALIST 25 | mv -f ./$FILE_CHINALIST $CHINALIST 26 | 27 | #GFWLIST 28 | FILE_GFWLIST=$(basename $GFWLIST) 29 | wget http://raw.githubusercontent.com/gfwlist/gfwlist/master/gfwlist.txt -O- | base64 -d | grep -vE '^\!|\[|^@@|(https?://){0,1}[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sed -r 's#^(\|\|?)?(https?://)?##g' | sed -r 's#/.*$|%2F.*$##g' | grep -E '([a-zA-Z0-9][-a-zA-Z0-9]*(\.[a-zA-Z0-9][-a-zA-Z0-9]*)+)' | sed -r 's#^(([a-zA-Z0-9]*\*[-a-zA-Z0-9]*)?(\.))?([a-zA-Z0-9][-a-zA-Z0-9]*(\.[a-zA-Z0-9][-a-zA-Z0-9]*)+)(\*)?#\4#g' > gfwdomain.txt 30 | echo -e 
'google.com\ngoogle.ad\ngoogle.ae\ngoogle.com.af\ngoogle.com.ag\ngoogle.com.ai\ngoogle.al\ngoogle.am\ngoogle.co.ao\ngoogle.com.ar\ngoogle.as\ngoogle.at\ngoogle.com.au\ngoogle.az\ngoogle.ba\ngoogle.com.bd\ngoogle.be\ngoogle.bf\ngoogle.bg\ngoogle.com.bh\ngoogle.bi\ngoogle.bj\ngoogle.com.bn\ngoogle.com.bo\ngoogle.com.br\ngoogle.bs\ngoogle.bt\ngoogle.co.bw\ngoogle.by\ngoogle.com.bz\ngoogle.ca\ngoogle.cd\ngoogle.cf\ngoogle.cg\ngoogle.ch\ngoogle.ci\ngoogle.co.ck\ngoogle.cl\ngoogle.cm\ngoogle.cn\ngoogle.com.co\ngoogle.co.cr\ngoogle.com.cu\ngoogle.cv\ngoogle.com.cy\ngoogle.cz\ngoogle.de\ngoogle.dj\ngoogle.dk\ngoogle.dm\ngoogle.com.do\ngoogle.dz\ngoogle.com.ec\ngoogle.ee\ngoogle.com.eg\ngoogle.es\ngoogle.com.et\ngoogle.fi\ngoogle.com.fj\ngoogle.fm\ngoogle.fr\ngoogle.ga\ngoogle.ge\ngoogle.gg\ngoogle.com.gh\ngoogle.com.gi\ngoogle.gl\ngoogle.gm\ngoogle.gp\ngoogle.gr\ngoogle.com.gt\ngoogle.gy\ngoogle.com.hk\ngoogle.hn\ngoogle.hr\ngoogle.ht\ngoogle.hu\ngoogle.co.id\ngoogle.ie\ngoogle.co.il\ngoogle.im\ngoogle.co.in\ngoogle.iq\ngoogle.is\ngoogle.it\ngoogle.je\ngoogle.com.jm\ngoogle.jo\ngoogle.co.jp\ngoogle.co.ke\ngoogle.com.kh\ngoogle.ki\ngoogle.kg\ngoogle.co.kr\ngoogle.com.kw\ngoogle.kz\ngoogle.la\ngoogle.com.lb\ngoogle.li\ngoogle.lk\ngoogle.co.ls\ngoogle.lt\ngoogle.lu\ngoogle.lv\ngoogle.com.ly\ngoogle.co.ma\ngoogle.md\ngoogle.me\ngoogle.mg\ngoogle.mk\ngoogle.ml\ngoogle.com.mm\ngoogle.mn\ngoogle.ms\ngoogle.com.mt\ngoogle.mu\ngoogle.mv\ngoogle.mw\ngoogle.com.mx\ngoogle.com.my\ngoogle.co.mz\ngoogle.com.na\ngoogle.com.nf\ngoogle.com.ng\ngoogle.com.ni\ngoogle.ne\ngoogle.nl\ngoogle.no\ngoogle.com.np\ngoogle.nr\ngoogle.nu\ngoogle.co.nz\ngoogle.com.om\ngoogle.com.pa\ngoogle.com.pe\ngoogle.com.pg\ngoogle.com.ph\ngoogle.com.pk\ngoogle.pl\ngoogle.pn\ngoogle.com.pr\ngoogle.ps\ngoogle.pt\ngoogle.com.py\ngoogle.com.qa\ngoogle.ro\ngoogle.ru\ngoogle.rw\ngoogle.com.sa\ngoogle.com.sb\ngoogle.sc\ngoogle.se\ngoogle.com.sg\ngoogle.sh\ngoogle.si\ngoogle.sk\ngoogle.com.sl\ngoogle.sn\ngoogle.so\ngoogle.sm\ngoogle.sr\ngoogle.st\ngoogle.com.sv\ngoogle.td\ngoogle.tg\ngoogle.co.th\ngoogle.com.tj\ngoogle.tk\ngoogle.tl\ngoogle.tm\ngoogle.tn\ngoogle.to\ngoogle.com.tr\ngoogle.tt\ngoogle.com.tw\ngoogle.co.tz\ngoogle.com.ua\ngoogle.co.ug\ngoogle.co.uk\ngoogle.com.uy\ngoogle.co.uz\ngoogle.com.vc\ngoogle.co.ve\ngoogle.vg\ngoogle.co.vi\ngoogle.com.vn\ngoogle.vu\ngoogle.ws\ngoogle.rs\ngoogle.co.za\ngoogle.co.zm\ngoogle.co.zw\ngoogle.cat 31 | blogspot.ca\nblogspot.co.uk\nblogspot.com\nblogspot.com.ar\nblogspot.com.au\nblogspot.com.br\nblogspot.com.by\nblogspot.com.co\nblogspot.com.cy\nblogspot.com.ee\nblogspot.com.eg\nblogspot.com.es\nblogspot.com.mt\nblogspot.com.ng\nblogspot.com.tr\nblogspot.com.uy\nblogspot.de\nblogspot.gr\nblogspot.in\nblogspot.mx\nblogspot.ch\nblogspot.fr\nblogspot.ie\nblogspot.it\nblogspot.pt\nblogspot.ro\nblogspot.sg\nblogspot.be\nblogspot.no\nblogspot.se\nblogspot.jp\nblogspot.in\nblogspot.ae\nblogspot.al\nblogspot.am\nblogspot.ba\nblogspot.bg\nblogspot.ch\nblogspot.cl\nblogspot.cz\nblogspot.dk\nblogspot.fi\nblogspot.gr\nblogspot.hk\nblogspot.hr\nblogspot.hu\nblogspot.ie\nblogspot.is\nblogspot.kr\nblogspot.li\nblogspot.lt\nblogspot.lu\nblogspot.md\nblogspot.mk\nblogspot.my\nblogspot.nl\nblogspot.no\nblogspot.pe\nblogspot.qa\nblogspot.ro\nblogspot.ru\nblogspot.se\nblogspot.sg\nblogspot.si\nblogspot.sk\nblogspot.sn\nblogspot.tw\nblogspot.ug\nblogspot.cat 32 | twimg.edgesuite.net\n' > gfwdomain.txt 33 | for f in $EXTRADOMAINLIST; do 34 | echo $f >> gfwdomain.txt 35 | done 36 | sort -u gfwdomain.txt -o gfwdomain.txt > 
/dev/null 37 | less gfwdomain.txt | sed -r 's#(.+)#server=/\1/'$DNS_IP'\#'$DNS_PORT'#g' > $FILE_GFWLIST 38 | mv -f ./$FILE_GFWLIST $GFWLIST 39 | [[ -d /etc/smartdns ]] && { less gfwdomain.txt | sed -r 's#(.+)#server /\1/global#g'> address.conf && mv -f ./address.conf /etc/smartdns/address.conf; } 40 | rm gfwdomain.txt 41 | 42 | if [ -f /etc/openwrt_release ]; then 43 | if pidof ss-redir>/dev/null; then 44 | /etc/init.d/shadowsocks rules 45 | fi 46 | if pidof smartdns>/dev/null; then 47 | /etc/init.d/smartdns reload 48 | fi 49 | if pidof dnsmasq>/dev/null; then 50 | /etc/init.d/dnsmasq reload 51 | fi 52 | fi 53 | exit $? 54 | -------------------------------------------------------------------------------- /useful-commands/alibaba-selenium.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import importlib 3 | import os 4 | import pickle 5 | import re 6 | import sys 7 | import time 8 | import itertools 9 | import aiohttp 10 | from selenium import webdriver 11 | from yarl import URL 12 | import logging 13 | 14 | logging.basicConfig( 15 | format="%(asctime)s - %(filename)s - %(levelname)s - %(message)s", 16 | level=logging.INFO, 17 | ) 18 | 19 | logger = logging.getLogger("Alibaba Image Fetcher Selenium") 20 | 21 | headers = { 22 | "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36", 23 | } 24 | 25 | 26 | async def get_id_list(s, begin_page): 27 | async with s.get( 28 | URL( 29 | f"https://search.1688.com/service/marketOfferResultViewService?keywords=****&beginPage={begin_page}&pageSize=20" 30 | ) 31 | ) as resp: 32 | result = await resp.json(content_type=None) 33 | results = result.get("data").get("data").get("offerList") 34 | result_ids = [results[i].get("id") for i in range(len(results))] 35 | return result_ids 36 | 37 | 38 | # https://stackoverflow.com/questions/53039551/selenium-webdriver-modifying-navigator-webdriver-flag-to-prevent-selenium-detec 39 | # https://stackoverflow.com/questions/33225947/can-a-website-detect-when-you-are-using-selenium-with-chromedriver 40 | def hide_driver(driver): 41 | driver.execute_cdp_cmd( 42 | "Page.addScriptToEvaluateOnNewDocument", 43 | { 44 | "source": """ 45 | Object.defineProperty(navigator, 'webdriver', { 46 | get: () => undefined 47 | }) 48 | """ 49 | }, 50 | ) 51 | driver.execute_cdp_cmd("Network.enable", {}) 52 | driver.execute_cdp_cmd( 53 | "Network.setExtraHTTPHeaders", {"headers": {"User-Agent": "QQBrowser"}} 54 | ) 55 | 56 | 57 | async def main(): 58 | pages = 100 59 | async with aiohttp.ClientSession(headers=headers) as s: 60 | tasks = [get_id_list(s, begin_page=i) for i in range(pages)] 61 | results = await asyncio.gather(*tasks) 62 | result_ids = list(dict.fromkeys(itertools.chain.from_iterable(results))) 63 | logger.info(result_ids) 64 | logger.info(f"Got {len(result_ids)} tiles") 65 | 66 | options = webdriver.ChromeOptions() 67 | driver = webdriver.Chrome(options=options) 68 | 69 | for result_id in result_ids: 70 | logger.info(result_id) 71 | url = f"https://detail.1688.com/offer/{result_id}.html" 72 | driver.get(url=url) 73 | hide_driver(driver) 74 | while ( 75 | "https://detail.1688.com/offer/" not in driver.current_url 76 | or "nocaptcha" in driver.page_source 77 | ): 78 | time.sleep(1) 79 | logger.info(driver.current_url) 80 | result = driver.page_source 81 | imgs = re.findall(r""original":"(\S+.jpg)"", result) 82 | logger.info(imgs) 83 | with open("images.txt", "a", encoding="utf8") as file: 84 | for i in 
imgs: 85 | file.write(f"{i}\n") 86 | 87 | 88 | if __name__ == "__main__": 89 | asyncio.run(main()) 90 | -------------------------------------------------------------------------------- /useful-commands/arknights_preparation.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from itertools import product 3 | 4 | import requests 5 | 6 | logging.basicConfig( 7 | level=logging.INFO, 8 | format='%(asctime)s %(levelname)s %(message)s', 9 | datefmt='%Y-%m-%dT%H:%M:%S') 10 | 11 | logger = logging 12 | 13 | 14 | def sign(COOKIES: str): 15 | s = requests.Session() 16 | 17 | def request(method, url, max_retry: int = 2, *args, **kwargs): 18 | headers = { 19 | "User-Agent": "user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36 Edg/90.0.818.46", 20 | "Referer": "https://ak.hypergryph.com/activity/preparation", 21 | "Cookie": COOKIES 22 | } 23 | for i in range(max_retry + 1): 24 | try: 25 | response = s.request( 26 | method, url, headers=headers, *args, ** kwargs) 27 | except requests.exceptions.HTTPError as e: 28 | logger.error(f'HTTP error:\n{e}') 29 | logger.error(f'The NO.{i + 1} request failed, retrying...') 30 | except KeyError as e: 31 | logger.error(f'Wrong response:\n{e}') 32 | logger.error(f'The NO.{i + 1} request failed, retrying...') 33 | except Exception as e: 34 | logger.error(f'Unknown error:\n{e}') 35 | logger.error(f'The NO.{i + 1} request failed, retrying...') 36 | else: 37 | return response 38 | 39 | raise Exception(f'All {max_retry + 1} HTTP requests failed, die.') 40 | 41 | def info(): 42 | response = request("get", 43 | "https://ak.hypergryph.com/activity/preparation/activity/userInfo") 44 | result = response.json() 45 | data = result["data"] 46 | logger.info( 47 | f"{data['uid']}:当前拥有美味值:{data['remainCoin']},剩余签到次数:{data['rollChance']}") 48 | return data 49 | 50 | def roll(): 51 | response = request("post", 52 | "https://ak.hypergryph.com/activity/preparation/activity/roll") 53 | result = response.json() 54 | return result["data"]["coin"] 55 | 56 | def share(): 57 | response = request("post", 58 | "https://ak.hypergryph.com/activity/preparation/activity/share", data={"method": 1}) 59 | result = response.json() 60 | if result["data"]["todayFirst"]: 61 | logger.info("分享页面") 62 | 63 | def exchange(target): 64 | response = request("post", 65 | "https://ak.hypergryph.com/activity/preparation/activity/exchange", data={"giftPackId": target}) 66 | result = response.json() 67 | if result["statusCode"] == 201: 68 | logger.info(f"{target}: {result['message']}") 69 | elif result["statusCode"] == 403: 70 | if result["message"] == "未完成兑换前置条件": 71 | return True 72 | 73 | data = info() 74 | if data["share"]: 75 | share() 76 | if rollChance := data["rollChance"]: 77 | while rollChance: 78 | rollChance = rollChance - 1 79 | earn = roll() 80 | logger.info(f"美味值+{earn},剩余签到次数: {rollChance}") 81 | data = info() 82 | if data['remainCoin'] > 100: 83 | for a, b in product(range(1, 6), range(1, 7)): 84 | if (a, b) in [(1, 5), (1, 6), (2, 6), (3, 6)]: 85 | continue 86 | else: 87 | if exchange(f"g_{a}_{b}"): 88 | break 89 | 90 | 91 | if __name__ == "__main__": 92 | cookies = [] #ak2nda 93 | for cookie in cookies: 94 | sign(cookie) 95 | -------------------------------------------------------------------------------- /useful-commands/ass.sh: -------------------------------------------------------------------------------- 1 | for i in $(seq -w 1 10) 2 | do 3 | #ffmpeg -i 
Kono_Subarashii_Sekai_ni_Shukufuku_wo__-_${i}__BD_1280x720_AVC_AACx2_.mp4 -vf "ass=_Kono_Subarashii_Sekai_ni_Shukufuku_o___${i}__BDRIP__1080P__H264_FLAC_.sc_KissSubFZSD.ass" ${i}.mp4 & 4 | ffmpeg -i Kono_Subarashii_Sekai_ni_Shukufuku_wo__2_-_${i}__BD_1280x720_AVC_AACx2_.mp4 -vf "ass=_KissSub&FZSD&Xrip__Kono_Subarashii_Sekai_ni_Shukufuku_o__2__BDrip__${i}__1080P__HEVC_Main10_.sc.ass" ${i}.mp4 5 | done 6 | -------------------------------------------------------------------------------- /useful-commands/boot_patch_app.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # Requirements: wget unzip 3 | ###### 4 | BOOTIMAGE=$1 5 | KEEPVERITY=true 6 | KEEPFORCEENCRYPT=true 7 | RECOVERYMODE=false 8 | filename=magisk.apk 9 | ###### 10 | [ -f $filename ] || wget https://github.com/topjohnwu/Magisk/releases/download/v22.0/Magisk-v22.0.apk -O $filename 11 | [ -z $1 ] && { echo "Usage: $0 " && exit 1 ;} 12 | [ -f $filename ] || { echo "Error: Need $filename" && exit 1 ;} 13 | unzip $filename lib/x86/libmagiskboot.so 14 | unzip $filename lib/armeabi-v7a/libmagiskinit.so 15 | unzip $filename lib/armeabi-v7a/libmagisk32.so 16 | unzip $filename lib/armeabi-v7a/libmagisk64.so 17 | mv lib/x86/libmagiskboot.so ./magiskboot 18 | mv lib/armeabi-v7a/libmagiskinit.so ./magiskinit 19 | mv lib/armeabi-v7a/libmagisk32.so ./magisk32 20 | mv lib/armeabi-v7a/libmagisk64.so ./magisk64 21 | rm -r lib 22 | chmod +x magiskboot 23 | export KEEPVERITY 24 | export KEEPFORCEENCRYPT 25 | SHA1=`./magiskboot sha1 "$BOOTIMAGE"` 26 | echo "KEEPVERITY=$KEEPVERITY 27 | KEEPFORCEENCRYPT=$KEEPFORCEENCRYPT 28 | RECOVERYMODE=$RECOVERYMODE 29 | SHA1=$SHA1" > config 30 | ./magiskboot unpack $BOOTIMAGE 31 | cp -af ramdisk.cpio ramdisk.cpio.orig 32 | ./magiskboot compress=xz magisk32 magisk32.xz 33 | ./magiskboot compress=xz magisk64 magisk64.xz 34 | ./magiskboot cpio ramdisk.cpio \ 35 | "add 0750 init magiskinit" \ 36 | "mkdir 0750 overlay.d" \ 37 | "mkdir 0750 overlay.d/sbin" \ 38 | "add 0644 overlay.d/sbin/magisk32.xz magisk32.xz" \ 39 | "add 0644 overlay.d/sbin/magisk64.xz magisk64.xz" \ 40 | "patch" \ 41 | "backup ramdisk.cpio.orig" \ 42 | "mkdir 000 .backup" \ 43 | "add 000 .backup/.magisk config" 44 | for dt in dtb kernel_dtb extra; do 45 | [ -f $dt ] && ./magiskboot dtb $dt patch 46 | done 47 | ./magiskboot hexpatch kernel \ 48 | 736B69705F696E697472616D667300 \ 49 | 77616E745F696E697472616D667300 50 | ./magiskboot repack $BOOTIMAGE 51 | ./magiskboot cleanup 52 | rm -f ramdisk.cpio.orig config magisk32* magisk64* magiskboot magiskinit 53 | -------------------------------------------------------------------------------- /useful-commands/boot_patch_miui.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Requirements: curl unzip 3 | ###### 4 | WAITING=5 5 | ###### 6 | [ -z $1 ] && { echo "Usage: $0 " && exit 1 ;} || zipstr=$1 7 | tmp=${zipstr#*_} 8 | device=${tmp%%_*} 9 | tmp=${tmp#*_} 10 | version=${tmp%%_*} 11 | tmp=${tmp#*_} 12 | hash=${tmp%%_*} 13 | tmp=${tmp#*_} 14 | api=${tmp%%.zip} 15 | zipstr=miui_${device}_${version}_${hash}_${api}.zip 16 | [ -z $device ] || [ -z $version ] || [ -z $hash ] || [ -z $api ] && { echo "Error: Bad str $1" && exit 1 ;} || [ ! -f $zipstr ] && { echo "Downloading $zipstr" 17 | aria2c -j10 https://bigota.d.miui.com/$version/$zipstr 18 | while [ -f $zipstr.aria2 ] 19 | do 20 | echo "Warning: Trying again after ${WAITING}s..." 
&& sleep $WAITING 21 | aria2c -j10 https://bigota.d.miui.com/$version/$zipstr 22 | done 23 | } 24 | yes | unzip $zipstr boot.img 25 | sh boot_patch_app.sh boot.img 26 | zstd new-boot.img -f 27 | rm -f new-boot.img 28 | -------------------------------------------------------------------------------- /useful-commands/boot_patch_qemu.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh -x 2 | # Requirements: 3 | # QEMU arm magiskinit64 4 | # x86 magiskboot curl unzip 5 | ###### 6 | BOOTIMAGE=$1 7 | KEEPVERITY=true 8 | KEEPFORCEENCRYPT=true 9 | RECOVERYMODE=false 10 | ###### 11 | [ -f magisk.zip ] || wget https://github.com/topjohnwu/magisk_files/raw/canary/magisk-debug.zip -O magisk.zip 12 | unzip magisk.zip x86/magiskboot 13 | unzip magisk.zip arm/magiskinit64 14 | mv x86/magiskboot ./ 15 | mv arm/magiskinit64 ./ 16 | rmdir x86 arm 17 | chmod +x magiskboot magiskinit64 18 | export KEEPVERITY 19 | export KEEPFORCEENCRYPT 20 | SHA1=`./magiskboot sha1 "$BOOTIMAGE"` 21 | echo "KEEPVERITY=$KEEPVERITY 22 | KEEPFORCEENCRYPT=$KEEPFORCEENCRYPT 23 | RECOVERYMODE=$RECOVERYMODE 24 | SHA1=$SHA1" > config 25 | [ -e magisk ] || qemu-arm magiskinit64 -x magisk magisk 26 | ./magiskboot unpack $BOOTIMAGE 27 | cp -af ramdisk.cpio ramdisk.cpio.orig 28 | ./magiskboot cpio ramdisk.cpio \ 29 | "add 750 init magiskinit64" \ 30 | "patch" \ 31 | "backup ramdisk.cpio.orig" \ 32 | "mkdir 000 .backup" \ 33 | "add 000 .backup/.magisk config" 34 | ./magiskboot dtb dtb patch 35 | ./magiskboot hexpatch kernel \ 36 | 736B69705F696E697472616D667300 \ 37 | 77616E745F696E697472616D667300 38 | ./magiskboot repack $BOOTIMAGE 39 | ./magiskboot cleanup 40 | rm -f ramdisk.cpio.orig config magisk* 41 | -------------------------------------------------------------------------------- /useful-commands/branch.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | upstreamBranch=upstream 3 | masterBranch=master 4 | if [ "$2" == "delete" ]; then 5 | git checkout $masterBranch 6 | git branch -D $1 7 | git push origin :$1 8 | echo "Done." 9 | elif [ "$2" == "edit" ]; then 10 | git checkout $1 11 | read -p "Finish your work again." 12 | git add $1.json 13 | git commit -m "$(cat .git/COMMIT_EDITMSG)" --amend 14 | git push -u origin $1 -f 15 | git checkout $masterBranch 16 | echo "Done." 17 | else 18 | git reset --hard 19 | git checkout $upstreamBranch 20 | git branch $1 21 | git checkout $1 22 | touch $1.json 23 | read -p "Finish your work now." 24 | git add $1.json 25 | if [ "$2" ]; then 26 | git commit -m "Add $1 & Closes #$2" 27 | else 28 | git commit -m "Add $1" 29 | fi 30 | git push -u origin $1 31 | git checkout $masterBranch 32 | hub pull-request -b RikkaW:master -h $1 -m "Add $1 & Closes #$2" 33 | echo "Done." 
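# Note: the commit message above is guarded for an empty "$2", but the hub pull-request
# title always appends "& Closes #$2". A guarded sketch, reusing the same hub flags,
# would be:
#   [ -n "$2" ] && msg="Add $1 & Closes #$2" || msg="Add $1"
#   hub pull-request -b RikkaW:master -h $1 -m "$msg"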
34 | fi 35 | -------------------------------------------------------------------------------- /useful-commands/cat-finite: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | if [ "$1" == "-h" ] ; then 3 | echo "cat-finite [文件名] [终止行数(默认none)] [显示行数(默认50,留空)]" 4 | exit 0 5 | fi 6 | file="$1" 7 | eof="$2" 8 | lines="$3" 9 | if [ "$eof" == "" ] || [ "$eof" == "none" ] ; then 10 | eof=$(sed -n '$=' ${file}) 11 | fi 12 | if (( ${eof} < 2 )) ; then 13 | echo "终止行数过小" 14 | exit 0 15 | fi 16 | if [ "$lines" == "" ] ; then 17 | lines=50 18 | fi 19 | if (( ${eof} > ${lines} )) ; then 20 | (( sof=${eof} - ${lines} )) 21 | else 22 | sof=1 23 | fi 24 | sed -n "${sof},${eof}p" ${file} 25 | echo "当前文件:${file} 行${sof}到${eof}" 26 | #cut -d$'\n' -f4-5 ${file} 27 | exit 0 28 | -------------------------------------------------------------------------------- /useful-commands/copy_libs.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh -x 2 | OUT="recovery/root/vendor/lib64/" 3 | EXTRACT="/mnt/storage/WorkGround/umi/R_MIUI/miui_UMI_20.6.28_a5e0d69c19_11.0" 4 | FILES=$(find $OUT -maxdepth 1 -type f -exec basename {} \;) 5 | for file in $FILES 6 | do 7 | LIB=$(sudo find $EXTRACT/vendor/bin $EXTRACT/vendor/lib64 $EXTRACT/system/system/lib64 $EXTRACT/apex/apex/lib64 -type f -iname $file | head -n 1) 8 | if [[ -f $LIB ]]; then 9 | echo Success: $LIB "->" $OUT/$file 10 | cp $LIB $OUT 11 | if grep -q /system/bin/linker64 $OUT/$file; then 12 | echo Relink: $OUT/$file 13 | sed -i "s|/system/bin/linker64\x0|/sbin/linker64\x0\x0\x0\x0\x0\x0\x0|g" $OUT/$file 14 | fi 15 | else 16 | echo Not found: $OUT/$file 17 | fi 18 | done 19 | -------------------------------------------------------------------------------- /useful-commands/cut.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | file="$1" 3 | sof="$2" 4 | eof=$(sed -n '$=' ${file}) 5 | sed -n "${sof},${eof}p" ${file} >> output.txt 6 | exit 0 7 | -------------------------------------------------------------------------------- /useful-commands/debootstrap.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -x 3 | primaryDisk=/dev/sda 4 | debianVersion=testing 5 | myChroot=/debian-chroot 6 | Stage1(){ 7 | pacman -Syu debootstrap 8 | parted ${primaryDisk} mklabel gpt 9 | parted ${primaryDisk} mkpart efi fat32 0 128MB 10 | parted ${primaryDisk} mkpart debian ext4 128MB 100% 11 | mkfs.ext4 ${primaryDisk}2 12 | mkfs.vfat -F 32 ${primaryDisk}1 13 | mkdir ${myChroot} 14 | mount ${primaryDisk}2 ${myChroot} 15 | debootstrap --arch amd64 ${debianVersion} ${myChroot} https://mirrors.ustc.edu.cn/debian/ 16 | #cp /proc/mounts ${myChroot}/etc/mtab 17 | # Must find a way to get rid of genfstab 18 | mkdir -p ${myChroot}/boot/efi 19 | mount ${primaryDisk}1 ${myChroot}/boot/efi 20 | genfstab -U ${myChroot} >> ${myChroot}/etc/fstab 21 | mount /proc ${myChroot}/proc -t proc 22 | mount /sys ${myChroot}/sys -t sysfs 23 | mount /sys/firmware/efi/efivars ${myChroot}/sys/firmware/efi/efivars -t efivarfs -o nosuid,noexec,nodev 24 | mount /dev ${myChroot}/dev -t devtmpfs -o mode=0755,nosuid 25 | mount /dev/pts ${myChroot}/dev/pts -t devpts -o mode=0620,gid=5,nosuid,noexec 26 | mount /dev/shm ${myChroot}/dev/shm -t tmpfs -o mode=1777,nosuid,nodev 27 | mount /run ${myChroot}/run -t tmpfs -o nosuid,nodev,mode=0755 28 | mount /tmp ${myChroot}/tmp -t tmpfs -o 
mode=1777,strictatime,nodev,nosuid 29 | cp $0 ${myChroot}/ 30 | chroot ${myChroot} /bin/bash 31 | } 32 | Stage2(){ 33 | export PATH=${PATH}:/sbin:/usr/sbin 34 | apt update 35 | apt list linux-image* |grep -E "linux-image-[0-9].[0-9]+.[0-9]+-[0-9]+-amd64[^-]" |cut -d '/' -f 1 |head -n 1| xargs apt install -y 36 | apt install -y grub-efi 37 | if [ ! -d /sys/firmware/efi ]; then 38 | echo "Not an EFI system" 39 | exit 1; fi 40 | grub-install ${primaryDisk} 41 | update-grub 42 | echo "ok" 43 | echo "Set a password for root, install firmware, then press Ctrl-D to exit, run umount -R ${myChroot}, and reboot" 44 | } 45 | 46 | [ "$*" == "1" ] && Stage1 47 | [ "$*" == "2" ] && Stage2 -------------------------------------------------------------------------------- /useful-commands/dir-running.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | cd /root/files/openwrt/OpenWrt-SDK-* 3 | for dir in $(ls -l package/ |awk '/^d/ {print $NF}') 4 | do cd package/$dir 5 | git fetch 6 | git reset --hard origin/HEAD 7 | cd ../.. 8 | done 9 | -------------------------------------------------------------------------------- /useful-commands/domain-to-ip.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | if [ $# -lt 1 ]; then 3 | echo "$0 needs a parameter" 4 | exit 0 5 | fi 6 | ADDR=$1 7 | Result=`ping ${ADDR} -s 1 -c 1 | grep ${ADDR} | head -n 1` 8 | Result=`echo ${Result} | cut -d'(' -f 2 | cut -d')' -f1` 9 | echo "$Result" 10 | exit 0 11 | -------------------------------------------------------------------------------- /useful-commands/excel_spliter.py: -------------------------------------------------------------------------------- 1 | # coding=UTF-8 2 | import os 3 | import tkinter 4 | import tkinter.filedialog 5 | import tkinter.simpledialog 6 | import tkinter.ttk 7 | from concurrent.futures import ProcessPoolExecutor 8 | 9 | import pandas 10 | from tqdm import tqdm 11 | 12 | 13 | def subprocess(param): 14 | df, path, target, people, debug = param 15 | new_df = df[df[target].str.contains(people[0])] 16 | fp = f'{path if path else "."}/{"_".join(map(str,people))}.xlsx' 17 | if not debug: 18 | new_df.to_excel(fp, index=False) 19 | return True 20 | 21 | 22 | def process(df, path, target, args=None, debug=False): 23 | if not target: 24 | return 25 | params = [df.columns[arg] for arg in args] 26 | peoples = list(set(zip(df.loc[:, target], *[df.loc[:, param] for param in params]))) 27 | total = len(peoples) 28 | with ProcessPoolExecutor( 29 | max_workers=os.cpu_count() if os.cpu_count() else min(2, total) 30 | ) as executor: 31 | results = list( 32 | tqdm( 33 | executor.map( 34 | subprocess, map(lambda x: (df, path, target, x, debug), peoples) 35 | ), 36 | total=total, 37 | ) 38 | ) 39 | return results 40 | 41 | 42 | def main(): 43 | root = tkinter.Tk() 44 | root.title("拆分Excel") 45 | frm = tkinter.ttk.Frame(root, padding=10) 46 | frm.grid() 47 | label = tkinter.ttk.Label(frm, text="选择列") 48 | label.grid(column=0, row=0) 49 | fileobj = tkinter.filedialog.askopenfile( 50 | mode="rb", 51 | title="选择分割文件", 52 | filetypes=["typeName {xls}", "typeName {xlsx}"], 53 | parent=frm, 54 | ) 55 | if not fileobj: 56 | return 57 | df = pandas.read_excel(fileobj) 58 | column = list(df.columns) 59 | combobox = tkinter.ttk.Combobox(frm, state="readonly", values=column) 60 | combobox.grid(column=0, row=1) 61 | listbox = tkinter.Listbox(frm, selectmode="multiple") 62 | listbox.grid(column=0, row=2) 63 | for index, tile in enumerate(column):
64 | listbox.insert(index, tile) 65 | tkinter.ttk.Button( 66 | frm, 67 | text="确认", 68 | command=lambda: process( 69 | df, 70 | os.path.dirname(os.path.realpath(fileobj.name)), 71 | combobox.get(), # str 72 | listbox.curselection(), # tuple of index 73 | False, 74 | ), 75 | ).grid(column=0, row=3) 76 | root.mainloop() 77 | 78 | 79 | if __name__ == "__main__": 80 | main() 81 | -------------------------------------------------------------------------------- /useful-commands/extract_apex.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | MOUNT=$(mktemp -d) 3 | mkdir -p apex/lib64 4 | for file in ./*.apex 5 | do 6 | if [[ -f $file ]]; then 7 | unzip -o $file apex_payload.img 8 | sudo mount -t ext4 -o loop,ro apex_payload.img $MOUNT 9 | if [[ -d $MOUNT/lib64 ]]; then 10 | cp -rf $MOUNT/lib64/* apex/lib64/ 11 | fi 12 | sudo umount $MOUNT 13 | fi 14 | done 15 | rmdir $MOUNT 16 | -------------------------------------------------------------------------------- /useful-commands/file-hexo-install-global.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #https://gist.github.com/danieljsummers/4626790842e48725ecc5c18fc6a71692#file-hexo-install-global-sh 3 | INSTALL_DIR=/usr/lib/node_modules 4 | SUB_DIR=$INSTALL_DIR/hexo-cli/node_modules 5 | echo Getting root privileges... 6 | sudo echo Done. 7 | echo Installing in a temp directory... 8 | mkdir tmp 9 | cd tmp 10 | npm install hexo-cli 11 | echo "Done; moving to global NPM directory..." 12 | cd node_modules 13 | sudo chown -R root:root * 14 | sudo mv hexo-cli $INSTALL_DIR 15 | sudo mkdir $SUB_DIR 16 | sudo mv * $SUB_DIR 17 | echo "Done; cleaning up..." 18 | cd ../.. 19 | rm -r tmp 20 | if [-f /usr/bin/hexo]; then 21 | sudo rm /usr/bin/hexo 22 | fi 23 | sudo ln -s /usr/lib/node_modules/hexo-cli/bin/hexo /usr/bin/hexo 24 | hexo 25 | echo "" 26 | echo If you see the Hexo help above, it has installed successfully 27 | -------------------------------------------------------------------------------- /useful-commands/genshin.py: -------------------------------------------------------------------------------- 1 | import hashlib 2 | import json 3 | import logging 4 | import random 5 | import string 6 | import time 7 | import uuid 8 | import httpx 9 | import asyncio 10 | 11 | logging.basicConfig( 12 | level=logging.INFO, 13 | format="%(asctime)s %(levelname)s %(message)s", 14 | datefmt="%Y-%m-%dT%H:%M:%S", 15 | ) 16 | logger = logging 17 | 18 | 19 | class _Config: 20 | APP_VERSION = "2.34.1" 21 | SALT = "9nQiU3AV0rJSIBWgdynfoGMGKaklfbM7" 22 | ACT_ID = "e202009291139501" 23 | AWARD_URL = ( 24 | f"https://api-takumi.mihoyo.com/event/bbs_sign_reward/home?act_id={ACT_ID}" 25 | ) 26 | ROLE_URL = "https://api-takumi.mihoyo.com/binding/api/getUserGameRolesByCookie?game_biz=hk4e_cn" 27 | INFO_URL = "https://api-takumi.mihoyo.com/event/bbs_sign_reward/info?region={}&act_id={}&uid={}" 28 | SIGN_URL = "https://api-takumi.mihoyo.com/event/bbs_sign_reward/sign" 29 | USER_AGENT = f"Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) miHoYoBBS/{APP_VERSION}" 30 | MESSAGE_TEMPLATE = """ 31 | {today:#^28} 32 | 🔅[{region_name}]{uid} 33 | 今日奖励: {award_name} × {award_cnt} 34 | 本月累签: {total_sign_day} 天 35 | 签到结果: {status} 36 | {end:#^28}""" 37 | MAX_RETRY = 2 38 | MAX_WORKER = 4 39 | 40 | 41 | CONFIG = _Config() 42 | 43 | 44 | class Sign: 45 | def __init__(self, cookies: str): 46 | self.cookies = cookies 47 | self.s = httpx.AsyncClient( 
48 | follow_redirects=True, 49 | verify=False, 50 | headers={ 51 | "User-Agent": CONFIG.USER_AGENT, 52 | "Cookie": self.cookies, 53 | }, 54 | timeout=None, 55 | ) 56 | self._region_list = [] 57 | self._region_name_list = [] 58 | self._uid_list = [] 59 | 60 | async def request(self, method, url, **kwargs): 61 | for i in range(CONFIG.MAX_RETRY + 1): 62 | try: 63 | response = (await self.s.request(method, url, **kwargs)).json() 64 | except Exception as e: 65 | logger.error(f"Unknown error:\n{e}") 66 | logger.error(f"The NO.{i + 1} request failed, retrying...") 67 | else: 68 | return response 69 | raise Exception(f"All {CONFIG.MAX_RETRY + 1} HTTP requests failed, die.") 70 | 71 | @staticmethod 72 | def get_ds(): 73 | r = "".join(random.sample(string.ascii_lowercase + string.digits, 6)) 74 | i = int(time.time()) 75 | target = f"salt={CONFIG.SALT}&t={i}&r={r}" 76 | c = hashlib.md5(target.encode()).hexdigest() 77 | return "{},{},{}".format(i, r, c) 78 | 79 | async def get_roles(self): 80 | logger.info("准备获取账号信息...") 81 | response = {} 82 | try: 83 | response = await self.request("get", CONFIG.ROLE_URL) 84 | message = response["message"] 85 | except Exception as e: 86 | raise Exception(e) 87 | if response.get("retcode", 1) != 0 or response.get("data", None) is None: 88 | raise Exception(message) 89 | logger.info("账号信息获取完毕") 90 | return response 91 | 92 | async def get_info(self): 93 | user_game_roles = await self.get_roles() 94 | role_list = user_game_roles.get("data", {}).get("list", []) 95 | if not role_list: 96 | raise Exception(user_game_roles.get("message", "Role list empty")) 97 | logger.info(f"当前账号绑定了 {len(role_list)} 个角色") 98 | info_list = [] 99 | # cn_gf01: 天空岛 100 | # cn_qd01: 世界树 101 | self._region_list = [(i.get("region", "NA")) for i in role_list] 102 | self._region_name_list = [(i.get("region_name", "NA")) for i in role_list] 103 | self._uid_list = [(i.get("game_uid", "NA")) for i in role_list] 104 | logger.info("准备获取签到信息...") 105 | for i in range(len(self._uid_list)): 106 | info_url = CONFIG.INFO_URL.format( 107 | self._region_list[i], CONFIG.ACT_ID, self._uid_list[i] 108 | ) 109 | try: 110 | content = await self.request("get", info_url) 111 | info_list.append(content) 112 | except Exception as e: 113 | raise Exception(e) 114 | if not info_list: 115 | raise Exception("User sign info list is empty") 116 | logger.info("签到信息获取完毕") 117 | return info_list 118 | 119 | async def run(self): 120 | info_list = await self.get_info() 121 | message_list = [] 122 | for i in range(len(info_list)): 123 | today = info_list[i]["data"]["today"] 124 | total_sign_day = info_list[i]["data"]["total_sign_day"] 125 | awards_rsp = await self.request("get", CONFIG.AWARD_URL) 126 | awards = awards_rsp["data"]["awards"] 127 | uid = str(self._uid_list[i]) 128 | logger.info(f"准备为旅行者 {i + 1} 号签到...") 129 | message = { 130 | "today": today, 131 | "region_name": self._region_name_list[i], 132 | "uid": uid, 133 | "total_sign_day": total_sign_day, 134 | "end": "", 135 | } 136 | if info_list[i]["data"]["is_sign"] is True: 137 | message["award_name"] = awards[total_sign_day - 1]["name"] 138 | message["award_cnt"] = awards[total_sign_day - 1]["cnt"] 139 | message["status"] = f"👀 旅行者 {i + 1} 号, 你已经签到过了哦" 140 | message_list.append(CONFIG.MESSAGE_TEMPLATE.format(**message)) 141 | continue 142 | else: 143 | message["award_name"] = awards[total_sign_day]["name"] 144 | message["award_cnt"] = awards[total_sign_day]["cnt"] 145 | if info_list[i]["data"]["first_bind"] is True: 146 | message["status"] = f"💪 旅行者 {i + 1} 号, 
请先前往米游社App手动签到一次" 147 | message_list.append(CONFIG.MESSAGE_TEMPLATE.format(**message)) 148 | continue 149 | data = { 150 | "act_id": CONFIG.ACT_ID, 151 | "region": self._region_list[i], 152 | "uid": self._uid_list[i], 153 | } 154 | self.s.headers.update( 155 | { 156 | "x-rpc-device_id": uuid.uuid3( 157 | uuid.NAMESPACE_URL, self.cookies 158 | ).hex.upper(), 159 | "x-rpc-client_type": "5", 160 | "x-rpc-app_version": CONFIG.APP_VERSION, 161 | "DS": self.get_ds(), 162 | } 163 | ) 164 | try: 165 | response = await self.request( 166 | "post", 167 | CONFIG.SIGN_URL, 168 | json=data, 169 | ) 170 | logger.info(response) 171 | except Exception as e: 172 | logger.exception(e) 173 | raise Exception(e) 174 | code = response.get("retcode") 175 | # 0: success 176 | # -5003: already signed in 177 | if code != 0: 178 | message_list.append(response.get("message", json.dumps(response))) 179 | continue 180 | message["total_sign_day"] = total_sign_day + 1 181 | message["status"] = response["message"] 182 | message_list.append(CONFIG.MESSAGE_TEMPLATE.format(**message)) 183 | logger.info("签到完毕") 184 | return "".join(message_list) 185 | 186 | 187 | async def main(cookie_list): 188 | sem = asyncio.Semaphore(CONFIG.MAX_WORKER) 189 | task_list = [] 190 | async with sem: 191 | for cookie in cookie_list: 192 | task_list.append(asyncio.tasks.create_task(Sign(cookie).run())) 193 | return await asyncio.gather(*task_list) 194 | 195 | 196 | if __name__ == "__main__": 197 | # login_ticket account_id cookie_token 198 | cookie_list = [] 199 | logger.info(f"🌀原神签到小助手检测到共配置了 {len(cookie_list)} 个帐号") 200 | result = asyncio.run(main(cookie_list)) 201 | for item in result: 202 | logger.info(item) 203 | logger.info(f"任务结束") 204 | -------------------------------------------------------------------------------- /useful-commands/genshin_loot.py: -------------------------------------------------------------------------------- 1 | import ctypes 2 | import random 3 | import sys 4 | import threading 5 | import time 6 | 7 | try: 8 | import win32gui 9 | from pynput import keyboard, mouse 10 | except: 11 | print("pip install pywin32 pynput") 12 | sys.exit(1) 13 | 14 | RAND = 0.01 # Seconds 15 | TARGETKEY = mouse.Button.x1 # Target Key 16 | STOPKEY = keyboard.Key.f8 # Stop Key 17 | 18 | def is_admin(): 19 | # https://stackoverflow.com/questions/130763/request-uac-elevation-from-within-a-python-script 20 | try: 21 | return ctypes.windll.shell32.IsUserAnAdmin() 22 | except: 23 | return False 24 | 25 | if is_admin(): 26 | # Code of your program here 27 | print(f"按住 {TARGETKEY} 开启") 28 | print(f"按下 {STOPKEY} 停止") 29 | print(f"间隔 {RAND}") 30 | 31 | k = keyboard.Controller() 32 | m = mouse.Controller() 33 | continue_flag = threading.Event() 34 | stop_flag = threading.Event() 35 | def script(stop_event, continue_event, rand): 36 | while True: 37 | if continue_event.wait(rand): 38 | if win32gui.GetWindowText(win32gui.GetForegroundWindow()) == "原神": 39 | k.press("f") 40 | time.sleep(random.uniform(0, rand)) 41 | k.release("f") 42 | time.sleep(random.uniform(0, rand)) 43 | m.scroll(0, -1) 44 | time.sleep(random.uniform(0, rand)) 45 | elif stop_event.wait(rand): 46 | break 47 | print("Script thread killed") 48 | 49 | 50 | t = threading.Thread(target=script, args=(stop_flag, continue_flag, RAND)) 51 | t.start() 52 | 53 | def on_click(x, y, button, pressed): 54 | if button == TARGETKEY: 55 | if pressed == 1: 56 | continue_flag.set() 57 | else: 58 | continue_flag.clear() 59 | # else: 60 | # print(f"{button}") 61 | 62 | m_listener = 
mouse.Listener(on_click=on_click) 63 | m_listener.start() 64 | 65 | def on_release(key): 66 | if key == STOPKEY: 67 | stop_flag.set() 68 | m_listener.stop() 69 | print("Listener thread stoped") 70 | return False 71 | # else: 72 | # print(f"{key}") 73 | 74 | with keyboard.Listener(on_release=on_release) as k_listener: 75 | m_listener.join() 76 | k_listener.join() 77 | 78 | else: 79 | # Re-run the program with admin rights 80 | ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, " ".join(sys.argv), None, 1) 81 | -------------------------------------------------------------------------------- /useful-commands/get-latest-chromium-tag.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | chromium_tag=$(curl -s 'https://api.github.com/repos/chromium/chromium/tags' | grep -F '"name":' | sed -n 's/[ \t]*"name":[ ][ ]*"\(.*\)".*/\1/p' | grep -E '([0-9]+.){3}[0-9]+' | head -n 1) 3 | 4 | echo "$chromium_tag" >&2 5 | 6 | ua="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/${chromium_tag} Safari/537.36" 7 | if [ -z "$ua" ]; then 8 | echo "Empty reply" >&2 9 | exit 1 10 | else 11 | echo "$ua" 12 | fi 13 | 14 | flags=("$HOME/.config/chromium-flags.conf" "$HOME/.config/chrome-flags.conf" "$HOME/.config/chrome-dev-flags.conf") 15 | for flag in "${flags[@]}"; do 16 | if [ -e "$flag" ]; then 17 | if grep -Eq '^--user-agent=' "$flag"; then 18 | uae=${ua//\;/\\\;} 19 | uae=${uae//\//\\\/} 20 | tflag=$(mktemp -p /tmp chromium-flags.confXXXXXX) 21 | cp "$flag" "$tflag" 22 | sed -i "s/^--user-agent=.*$/--user-agent='${uae}'/g" "$flag" 23 | diff -u "$tflag" "$flag" 24 | echo "Modified ${flag}" >&2 25 | rm "$tflag" 26 | fi 27 | fi 28 | done 29 | -------------------------------------------------------------------------------- /useful-commands/github-https-sed.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | sed -i 's/https:\/\/github.com\//git@github.com:/g' ./.git/config 3 | exit $? 4 | -------------------------------------------------------------------------------- /useful-commands/ipv6.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | if grep -q "#precedence ::ffff:0:0/96 100" /etc/gai.conf 3 | then 4 | sed -i s/'#precedence ::ffff:0:0\/96 100'/'precedence ::ffff:0:0\/96 100'/ /etc/gai.conf 5 | echo "Set to prefer ipv4." 6 | else 7 | sed -i s/'precedence ::ffff:0:0\/96 100'/'#precedence ::ffff:0:0\/96 100'/ /etc/gai.conf 8 | echo "Set to prefer ipv6." 9 | fi 10 | -------------------------------------------------------------------------------- /useful-commands/md5check.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | code=RIfrSEq 3 | string="Keep Out!\"$code\"Keep Out!" 4 | md5check(){ 5 | if (echo -n $string|md5sum|grep $1) 6 | then 7 | echo $code 8 | fi 9 | } 10 | md5check 1b22289d656182b24547f307c9d368b7 11 | md5check 552bc26417cb2969badc8f1229797571 12 | -------------------------------------------------------------------------------- /useful-commands/move-example.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | move-ss-built(){ 3 | cd /tmp/build-source 4 | #rm -rf *[shadowsocks-libev,simple-obfs]*[buildinfo,changes,deb] 5 | wwwdir="/var/wwwfiles/files/ss-debian-amd64binary" 6 | if [ ! 
-d "$wwwdir" ] ; then 7 | mkdir -p -m 755 "$wwwdir" 8 | chown www-data:www-data "$wwwdir" 9 | [ $? == 0 ] || exit 1 10 | fi 11 | List=$(ls |grep -E "\<*(shadowsocks-libev|simple-obfs)*(buildinfo|changes|deb)\>") 12 | #List=$(ls |grep -E "(shadowsocks-libev|simple-obfs)*(buildinfo|changes|deb)$") 13 | [ $? == 0 ] && [ -n "$List" ] || exit 1 14 | echo "Moving built debian packages." 15 | sudo -u www-data rm -rf "${wwwdir}/*" 16 | for File in $List 17 | do 18 | mv "$File" "${wwwdir}/" 19 | chown www-data:www-data "${wwwdir}/${File}" 20 | done 21 | } 22 | move-ss-built 23 | exit 0 24 | 25 | -------------------------------------------------------------------------------- /useful-commands/msd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | usb="/sys/class/android_usb/android0" 3 | file = $1 4 | # ro = $2 5 | enable = $2 6 | echo 0 > $usb/enable 7 | grep mass_storage $usb/functions > /dev/null || sed -e 's/$/mass_storage/' $usb/functions | cat > $usb/functions 8 | [[ -z $(cat $usb/functions) ]] && echo mass_storage > $usb/functions 9 | [[ 0 == $enable ]] && sed -e 's/mass_storage//' $usb/functions | cat > $usb/functions 10 | echo disk > $usb/f_mass_storage/luns 11 | echo 1 > $usb/enable 12 | echo > $usb/f_mass_storage/lun0/file 13 | echo 0 > $usb/f_mass_storage/lun0/ro 14 | echo $file > $usb/f_mass_storage/lun0/file 15 | echo > $usb/f_mass_storage/lun/file 16 | echo 0 > $usb/f_mass_storage/lun/ro 17 | echo $file > $usb/f_mass_storage/lun/file 18 | echo success -------------------------------------------------------------------------------- /useful-commands/mtrp.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | import subprocess 4 | import argparse 5 | import socket 6 | import time 7 | import datetime 8 | import traceback 9 | from threading import Thread 10 | import curses 11 | 12 | def background(func): 13 | def wrapped(*args, **kwargs): 14 | tr = Thread(target=func, args=args, kwargs=kwargs) 15 | tr.daemon = True 16 | tr.start() 17 | return tr 18 | return wrapped 19 | 20 | def ascii_color_text(text, color, end="\033[0m"): 21 | if color in (0, "bl"): # black 22 | return f"\033[30;49m{text}{end}" 23 | elif color in (1, "r"): # red 24 | return f"\033[31;49m{text}{end}" 25 | elif color in (2, "g"): # green 26 | return f"\033[32;49m{text}{end}" 27 | elif color in (3, 'y'): # yellow 28 | return f"\033[33;49m{text}{end}" 29 | elif color in (4, 'b'): # blue 30 | return f"\033[34;49m{text}{end}" 31 | elif color in (5, 'm'): # magenta 32 | return f"\033[35;49m{text}{end}" 33 | elif color in (6, 'c'): # cyan 34 | return f"\033[36;49m{text}{end}" 35 | elif color in (7, 'w'): # white 36 | return f"\033[37;49m{text}{end}" 37 | else: 38 | return text 39 | 40 | parser = argparse.ArgumentParser(description='Mtr data plotter') 41 | if __name__ != "__main__": 42 | parser.exit(1, message="Please run the script interactively.") 43 | parser.add_argument('address') 44 | parser.add_argument('-6', '--ipv6', action='store_true', help='Use IPv6') 45 | parser.add_argument('-o', '--output', default='mtrp.txt', help='Output file') 46 | parser.add_argument('--tsu', action='store_true', help='Use tsu -c mtr in termux') 47 | parser.add_argument('-T', '--tcp', action='store_true', help='Use TCP') 48 | parser.add_argument('-u', '--udp', action='store_true', help='Use UDP') 49 | parser.add_argument('-P', '--port', default='443', help='TCP or UDP port') 50 | args = parser.parse_args() 
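# Note: the code below resolves the target once with socket.getaddrinfo (IPv4 unless
# -6/--ipv6 is given) and keeps only the first returned address; mtr is then launched
# in raw output mode (-l), and its "x"/"h"/"p" records are parsed by
# MtrRawData.process_input to drive the curses plot and the plain-text output file.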
51 | if args.ipv6: 52 | inetfamily=socket.AF_INET6 53 | else: 54 | inetfamily=socket.AF_INET 55 | addrinfo = socket.getaddrinfo(args.address, None, family=inetfamily, proto=socket.IPPROTO_TCP) 56 | ipaddr = [addr[4][0] for addr in addrinfo] 57 | assert ipaddr 58 | IP = ipaddr[0] 59 | assert IP 60 | print('*** Mtr data plotter ***') 61 | print('Address:', IP) 62 | print('Started at', time.strftime("%Y%m%d %H:%M:%S", time.localtime())) 63 | 64 | MTRARGS = ['mtr', '-n', '-c', '2147483646', '-l', '-i', '10'] 65 | if args.tsu: 66 | MTRARGS = ['tsu', '-c'] + MTRARGS 67 | if args.tcp: 68 | MTRARGS.extend(['-T', '-P', args.port]) 69 | elif args.udp: 70 | MTRARGS.extend(['-u', '-P', args.port]) 71 | 72 | class Hop: 73 | def __init__(self): 74 | self.hostname = None 75 | self.addr = set() 76 | self._sent = 0 77 | self._recv = 0 78 | self.alive = False 79 | self._time = 0 # current ttl, in microseconds (10e-6) 80 | self._atime = 0 # all recv ttl, in microseconds 81 | self.tmdata = list() 82 | @background 83 | def addr_found(self, addr): 84 | self.addr.add(addr) 85 | try: 86 | self.hostname = socket.gethostbyaddr(addr)[0] 87 | except Exception: 88 | pass 89 | def send(self): 90 | self._sent += 1 91 | def recv(self, ms): 92 | self._recv += 1 93 | self._time = ms 94 | self._atime += ms 95 | @property 96 | def avg(self): 97 | return self._atime / self._recv if self._recv else 0 98 | @property 99 | def loss(self): 100 | return 1 - self._recv / self._sent if self._sent else 0 101 | def __repr__(self): 102 | return f"Hop({self.addr} {self.alive=} {self._sent=} {self._recv=} {self._time=} {self._atime=})" 103 | class MtrRawData: 104 | def __init__(self, dest, ofhandle, ipv6=False): 105 | self.dest = dest 106 | self.ofhandle = ofhandle 107 | self.ipv6 = ipv6 108 | self.maxhopidx = 0 109 | self._hops = list() 110 | self._thopidx = 0 111 | self._rhopidx = 0 112 | self._tmtrid = "" 113 | self._rmtrid = "" 114 | self._lastrecv = -1 115 | self._starttime = 0 116 | self._messages = "" 117 | self._msgtime = 0.0 118 | def show_msg(self, text): 119 | self._messages = text 120 | self._msgtime = time.time() 121 | def format_info(self): 122 | def time2str4(mtime): 123 | mstr = f"{float(mtime): <0.2f}" 124 | if mtime > 9999 and len(mstr) > 4: 125 | return "10e4 " 126 | else: 127 | return mstr[:4] + " " 128 | buffer = list() 129 | if self._messages and time.time() - self._msgtime < 10: 130 | buffer += self._messages.split('\n')[:20] 131 | buffer.append(" Last Avg Loss Address") 132 | for (idx, hop) in enumerate(self._hops): 133 | line = "" 134 | line += f"{idx+1: >2d} " 135 | line += time2str4(hop._time/1000) if hop.alive else "inf " 136 | line += time2str4(hop.avg/1000) 137 | line += time2str4(hop.loss*100) 138 | line += f'{" ".join(hop.addr): >15s}' 139 | if hop.hostname: 140 | line += " " 141 | line += hop.hostname 142 | buffer.append(line) 143 | return buffer 144 | def draw(self): 145 | pad.clear() 146 | lidx = 0 147 | ltmidx = 0 148 | banner = f'Mtr data plotter {IP} {time.strftime("%Y%m%d %H:%M:%S", time.localtime())}' 149 | banner = banner[:PADX] 150 | pad.addnstr(0, 0, banner, len(banner)) 151 | timebanner = time.strftime("%Y%m%d %H:%M:%S", self._starttime) 152 | pad.addnstr(1, 0, timebanner, len(timebanner), curses.color_pair(6)) 153 | for (idx, hop) in enumerate(self._hops): 154 | idx += 2 # for title bars 155 | pad.addnstr(idx, 0, f"{idx-1: >2d} ", 3) 156 | for (tmidx, tm) in enumerate(hop.tmdata): 157 | lidx = idx 158 | if tm < 0: 159 | pad.addch(idx, 3+tmidx, '?', curses.color_pair(1)) 160 | elif tm < 10*1000: 
161 | pad.addch(idx, 3+tmidx, '.', curses.color_pair(2)) 162 | elif tm < 50*1000: 163 | pad.addch(idx, 3+tmidx, '+', curses.color_pair(2)) 164 | elif tm < 100*1000: 165 | pad.addch(idx, 3+tmidx, '.', curses.color_pair(4)) 166 | elif tm < 200*1000: 167 | pad.addch(idx, 3+tmidx, '+', curses.color_pair(4)) 168 | elif tm < 300*1000: 169 | pad.addch(idx, 3+tmidx, '.', curses.color_pair(3)) 170 | else: 171 | pad.addch(idx, 3+tmidx, '+', curses.color_pair(3)) 172 | ltmidx = tmidx 173 | for (strid, mstr) in enumerate(self.format_info()): 174 | mstr = mstr[:PADX] 175 | pad.addnstr(lidx+2+strid, 0, mstr, len(mstr)) 176 | if ltmidx >= PADX - 3 - 1: 177 | self.draw_file() 178 | def pop_all(l): 179 | _, l[:] = l[:], [] 180 | return None 181 | for hop in self._hops: 182 | pop_all(hop.tmdata) 183 | self._starttime = 0 184 | pad._flushpad() 185 | def draw_file_banner(self): 186 | banner = f'Mtr data plotter {IP} {time.strftime("%Y%m%d %H:%M:%S", time.localtime())}' 187 | banner += "\n" 188 | self.ofhandle.write(banner) 189 | self.ofhandle.flush() 190 | def draw_file(self, end=False): 191 | buffer = "" 192 | timebanner = time.strftime("%Y%m%d %H:%M:%S", self._starttime) 193 | timebanner += "\n" 194 | timebanner = ascii_color_text(timebanner, 6) 195 | buffer += timebanner 196 | for (idx, hop) in enumerate(self._hops): 197 | line = f"{idx+1: >2d} " 198 | for (tmidx, tm) in enumerate(hop.tmdata): 199 | if tm < 0: 200 | line += ascii_color_text("?", 1) 201 | elif tm < 10*1000: 202 | line += ascii_color_text(".", 2) 203 | elif tm < 50*1000: 204 | line += ascii_color_text("+", 2) 205 | elif tm < 100*1000: 206 | line += ascii_color_text(".", 4) 207 | elif tm < 200*1000: 208 | line += ascii_color_text("+", 4) 209 | elif tm < 300*1000: 210 | line += ascii_color_text(".", 3) 211 | else: 212 | line += ascii_color_text("+", 3) 213 | buffer += f"{line}\n" 214 | if end: 215 | buffer += "\n" 216 | buffer += "\n".join(self.format_info()) 217 | buffer += "\n" 218 | self.ofhandle.write(buffer) 219 | self.ofhandle.flush() 220 | def process_input(self, text): 221 | if not self._starttime: 222 | self._starttime = time.localtime() 223 | if text.startswith('x '): # x 0 33000 => transmit 224 | (_, hopidx, mtrid) = text.split() 225 | hopidx = int(hopidx) 226 | if self._thopidx > hopidx: 227 | # last biggest hop index 228 | self.maxhopidx = self._thopidx 229 | while len(self._hops) - 1 > self.maxhopidx: 230 | self._hops.pop(-1) 231 | while len(self._hops) < hopidx + 1: 232 | self._hops.append(Hop()) 233 | if self._tmtrid and self._tmtrid != self._rmtrid: # last transfer was not received 234 | if self._lastrecv == self._thopidx: 235 | self.show_msg(f"Duplicate addnone {text=}") 236 | else: 237 | self._hops[self._thopidx].alive = False 238 | self._hops[self._thopidx].tmdata.append(-1) 239 | self._lastrecv = self._thopidx 240 | self._hops[hopidx].send() 241 | self._thopidx = hopidx 242 | self._tmtrid = mtrid 243 | elif text.startswith('h '): # h 0 x.x.x.x => new hop with addr x.x.x.x 244 | (_, hopidx, addr) = text.split() 245 | hopidx = int(hopidx) 246 | if self.maxhopidx and hopidx > self.maxhopidx: 247 | return 248 | self._hops[hopidx].addr_found(addr) 249 | elif text.startswith('p '): # p 0 100 33000 250 | (_, hopidx, ms, mtrid) = text.split() 251 | self._rmtrid = mtrid 252 | if self._rmtrid != self._tmtrid: # this receive is garbage 253 | self.show_msg(f"Bad {self._rmtrid=}, {self._tmtrid=}, {text=}") 254 | return 255 | (hopidx, ms) = (int(hopidx), int(ms)) 256 | if self.maxhopidx and hopidx > self.maxhopidx: 257 | return 258 | if 
self._lastrecv == self._thopidx: 259 | self.show_msg(f"Duplicate recv {text=}") 260 | else: 261 | self._hops[hopidx].alive = True 262 | self._hops[hopidx].recv(ms) 263 | self._hops[hopidx].tmdata.append(ms) 264 | self._lastrecv = self._thopidx 265 | lhopidx = len(self._hops) - 1 if hopidx == 0 else hopidx - 1 266 | while lhopidx != hopidx and \ 267 | len(self._hops[lhopidx].tmdata) < len(self._hops[hopidx].tmdata) - (1 if hopidx == 0 else 0): 268 | self.show_msg(f"hop {lhopidx} lost one packet!") 269 | self._hops[lhopidx].recv(-1) 270 | self._hops[lhopidx].tmdata.append(-1) 271 | self._rhopidx = hopidx 272 | self.draw() 273 | # init screen 274 | def initscr(): 275 | stdscr = curses.initscr() 276 | curses.noecho() 277 | curses.cbreak() 278 | curses.start_color() 279 | curses.use_default_colors() 280 | curses.curs_set(False) 281 | stdscr.keypad(True) 282 | for i in range(7): 283 | curses.init_pair(i+1, i+1, -1) # see https://docs.python.org/3/howto/curses.html#attributes-and-color 284 | return stdscr 285 | 286 | def endscr(): 287 | curses.curs_set(True) 288 | curses.nocbreak() 289 | stdscr.keypad(False) 290 | curses.echo() 291 | curses.endwin() 292 | 293 | stdscr = initscr() 294 | class Scr: 295 | scr = stdscr 296 | def __init__(self): 297 | (self.y, self.x) = stdscr.getmaxyx() 298 | def _resize(self): 299 | curses.update_lines_cols() 300 | self.__init__() 301 | def __getattr__(self, attr): 302 | return getattr(self.pad, attr) 303 | scr = Scr() 304 | (PADY, PADX) = (100, 50+3 if scr.x <= 50+3 else scr.x//50*50+3) 305 | virtpad = curses.newpad(PADY + 1, PADX + 1) 306 | class Pad: 307 | pad = virtpad 308 | def __init__(self): 309 | self.resize() 310 | def resize(self): 311 | self.ymin = self.xmin = 0 312 | self.ymax = min(scr.y-1, PADY-1) 313 | self.xmax = min(scr.x-1, PADX-1) 314 | def __getattr__(self, attr): 315 | def wrapped(*args, **kwargs): 316 | try: 317 | return getattr(self.pad, attr)(*args, **kwargs) 318 | except Exception: 319 | try: 320 | mtrraw.show_msg(traceback.format_exc()) 321 | except Exception: 322 | pass 323 | if attr in ("addch", 'addnstr', 'addstr'): 324 | return wrapped 325 | if attr == "refresh": 326 | if curses.is_term_resized(scr.y, scr.x): 327 | scr._resize() 328 | self.resize() 329 | mtrraw.show_msg('Auto Window resize.') 330 | return wrapped 331 | else: 332 | return getattr(self.pad, attr) 333 | def _flushpad(self): 334 | self.refresh(self.ymin, self.xmin, 0, 0, self.ymax, self.xmax) 335 | pad = Pad() 336 | 337 | @background 338 | def interact(p): 339 | while True: 340 | c = stdscr.getch() 341 | if c == ord('q'): 342 | p.terminate() 343 | break 344 | if c == ord('r'): 345 | scr._resize() 346 | pad.resize() 347 | mtrraw.show_msg('Window resize') 348 | elif c == curses.KEY_UP: 349 | if pad.ymin > 0: 350 | pad.ymin -= 1 351 | elif c == curses.KEY_DOWN: 352 | if PADY > scr.y and PADY - scr.y > pad.ymin: 353 | pad.ymin += 1 354 | elif c == curses.KEY_LEFT: 355 | if pad.xmin > 0: 356 | pad.xmin -= 1 357 | elif c == curses.KEY_RIGHT: 358 | if PADX > scr.x and PADX - scr.x > pad.xmin: 359 | pad.xmin += 1 360 | else: 361 | continue 362 | if curses.is_term_resized(scr.y, scr.x): 363 | scr._resize() 364 | pad.resize() 365 | mtrraw.show_msg('Auto Window resize.') 366 | pad.refresh(pad.ymin, pad.xmin, 0, 0, pad.ymax, pad.xmax) 367 | 368 | class Pstderr: 369 | err = "" 370 | def __init__(self, p): 371 | self.p = p 372 | @background 373 | def read(self): 374 | while p.poll() is None: 375 | line = p.stderr.readline() 376 | if line: 377 | self.err += line 378 | self.err += "\n" 379 
| self.err = self.err[:4096] 380 | with open(args.output, 'w') as ofhandle: 381 | p = pstderr = None 382 | try: 383 | p = subprocess.Popen([*MTRARGS, "-6" if args.ipv6 else "-4", IP], env={"LANG": "C"}, 384 | stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, 385 | stderr=subprocess.PIPE, encoding='utf-8') 386 | pstderr = Pstderr(p) 387 | pstderr.read() 388 | mtrraw = MtrRawData(IP, ofhandle) 389 | mtrraw.draw_file_banner() 390 | interact(p) 391 | while p.poll() is None: 392 | line = p.stdout.readline() 393 | if line: 394 | mtrraw.process_input(line) 395 | except (KeyboardInterrupt, SystemExit): 396 | try: 397 | mtrraw.draw_file(end=True) 398 | except Exception: 399 | traceback.print_exc() 400 | endscr() 401 | print('Bye') 402 | except Exception: 403 | try: 404 | mtrraw.draw_file(end=True) 405 | except Exception: 406 | traceback.print_exc() 407 | endscr() 408 | traceback.print_exc() 409 | else: 410 | try: 411 | mtrraw.draw_file(end=True) 412 | except Exception: 413 | traceback.print_exc() 414 | endscr() 415 | if pstderr: 416 | print(pstderr.err) 417 | print('Stopped at', time.strftime("%Y%m%d %H:%M:%S", time.localtime())) 418 | for tries in range(10): 419 | if p is None or p.poll() is not None: # mtr has already exited 420 | break 421 | else: 422 | print('Terminate mtr') 423 | time.sleep(1) 424 | p.terminate() 425 | else: 426 | print('Kill mtr') 427 | p.kill() 428 | -------------------------------------------------------------------------------- /useful-commands/okteto_saver.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | NAMES=$(kubectl config get-contexts | awk '{print $(NF-3)}' | tail -n +2 | sort -u) 3 | for NAME in $NAMES;do 4 | kubectl config use-context $NAME 5 | kubectl scale --replicas=1 deployment --all 6 | done 7 | -------------------------------------------------------------------------------- /useful-commands/opsetup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | sed -i 's/downloads.openwrt.org/mirrors.ustc.edu.cn\/lede/g' /etc/opkg/distfeeds.conf 3 | 4 | # SSL Required Packages 5 | opkg update 6 | opkg install \ 7 | ca-bundle \ 8 | ca-certificates \ 9 | libustream-mbedtls 10 | 11 | sed -i 's/http/https/g' /etc/opkg/distfeeds.conf 12 | echo "src/gz simonsmh_base http://github.com/simonsmh/openwrt-dist/raw/ipq806x/packages/arm_cortex-a15_neon-vfpv4/base 13 | src/gz simonsmh_packages http://github.com/simonsmh/openwrt-dist/raw/ipq806x/targets/ipq806x/generic/packages" >> /etc/opkg/customfeeds.conf 14 | 15 | # Basic Packages 16 | opkg update 17 | opkg install \ 18 | block-mount \ 19 | coreutils \ 20 | coreutils-base64 \ 21 | curl \ 22 | ip-full \ 23 | iptables-mod-tproxy \ 24 | kmod-fs-nfs \ 25 | kmod-fs-xfs \ 26 | kmod-usb-storage-extras \ 27 | kmod-ipt-nat6 \ 28 | libmbedtls \ 29 | luci-app-nfs \ 30 | luci-app-samba \ 31 | luci-app-shadowsocks \ 32 | luci-i18n-base-zh-cn \ 33 | luci-i18n-firewall-zh-cn \ 34 | luci-i18n-samba-zh-cn \ 35 | mount-utils \ 36 | nfs-kernel-server-utils \ 37 | nfs-utils \ 38 | rsync \ 39 | shadowsocks-libev \ 40 | stubby 41 | 42 | wget https://github.com/SYHGroup/easy_shell/raw/master/ddns/CloudFlare-ddns.sh 43 | wget https://github.com/SYHGroup/easy_shell/raw/master/useful-commands/update_list 44 | wget https://github.com/cokebar/gfwlist2dnsmasq/raw/master/gfwlist2dnsmasq.sh 45 | 46 | echo "30 4 * * 0 /root/update_list >/dev/null 2>&1 47 | 0 */3 * * * /root/CloudFlare-ddns.sh >/dev/null 2>&1 48 | #30 2 * * 0 opkg update && opkg upgrade `opkg list-upgradable | awk '{printf $1\"
\"}'`" >> /etc/crontabs/root 49 | 50 | uci set shadowsocks.@access_control[0].wan_bp_list='/etc/chinadns_chnroute.txt' 51 | uci set dhcp.@dnsmasq[0].serversfile='/etc/dnsmasq_gfwlist.conf' 52 | -------------------------------------------------------------------------------- /useful-commands/repo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | repo=( 4 | SYHGroup/easy_shell 5 | SYHGroup/easy_systemd 6 | ) 7 | 8 | for line in "${repo[@]}" 9 | do 10 | git clone git@github.com:/$line 11 | done 12 | 13 | mat1=(1 2) 14 | mat3=(4 5) 15 | while (( i < ${#mat1[@]} )) # walk the paired arrays by index 16 | do 17 | echo ${mat1[$i]} ${mat3[$i]} 18 | ((i++)) 19 | done 20 | -------------------------------------------------------------------------------- /useful-commands/saveapt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | rm /var/lib/apt/lists/lock 3 | rm /var/cache/apt/archives/lock 4 | rm /var/lib/dpkg/lock 5 | -------------------------------------------------------------------------------- /useful-commands/ss-local.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | function Update(){ 3 | echo -e "Downloading acl, please wait...\c" 4 | wget https://raw.githubusercontent.com/shadowsocks/shadowsocks-android/master/src/main/assets/acl/china-list.acl -O bypasschina.acl 5 | echo "Done" 6 | } 7 | function Checkroot(){ 8 | if [[ $EUID != "0" ]] 9 | then 10 | echo "Root privileges required" 11 | exit 1 12 | fi 13 | } 14 | function Start(){ 15 | ss-local -c /etc/shadowsocks-libev/config-client.json --acl ./bypasschina.acl -f ss-local.pid 16 | echo "Started." 17 | } 18 | function Stop(){ 19 | kill `cat ss-local.pid` 20 | rm ss-local.pid 21 | } 22 | # main routine 23 | Checkroot 24 | SCRIPT=$(readlink -f "$0") 25 | SCRIPTPATH=$(dirname "$SCRIPT") 26 | cd "$SCRIPTPATH" 27 | case $* in 28 | update) 29 | Update 30 | ;; 31 | start) 32 | Start 33 | ;; 34 | stop) 35 | Stop 36 | ;; 37 | restart) 38 | Stop 39 | service networking restart 40 | #Update 41 | Start 42 | ;; 43 | *) 44 | echo "Usage: 45 | update    update the acl 46 | start     start the local socks proxy 47 | stop      stop the local socks proxy 48 | restart   restart the local socks proxy" 49 | ;; 50 | esac 51 | exit 0 52 | -------------------------------------------------------------------------------- /useful-commands/steamfree.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | import logging 4 | import os 5 | import re 6 | import sys 7 | import time 8 | 9 | import aiohttp 10 | from bs4 import BeautifulSoup 11 | 12 | ### 13 | ASF = False 14 | ASF_interface = "https://***/api/command" 15 | ASF_password = "***" 16 | ### 17 | 18 | logging.basicConfig( 19 | format="%(asctime)s - %(filename)s - %(levelname)s - %(message)s", 20 | level=logging.INFO, 21 | ) 22 | 23 | logger = logging.getLogger("Steam") 24 | 25 | headers = { 26 | "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36", 27 | } 28 | cookies = {"wants_mature_content": 1, "birthtime": 312825601} 29 | 30 | async def main(): 31 | async def asf(cmd): 32 | async with s.post( 33 | ASF_interface, 34 | json={"Command": cmd}, 35 | params={"password": ASF_password}, 36 | ) as resp: 37 | return await resp.json() 38 | 39 | async def get_id(sku, free=True): 40 | async with s.get(f"https://store.steampowered.com/app/{sku}/") as resp: 41 | sub = await resp.text() 42 | sub_soup = BeautifulSoup(sub, "lxml") 43 | subname = sub_soup.find( 44 | "form", 45 |
action="https://store.steampowered.com/checkout/addfreelicense/" 46 | if free 47 | else "https://store.steampowered.com/cart/", 48 | ) 49 | if not subname: 50 | return 51 | if not sub_soup.select("p.game_purchase_discount_quantity"): 52 | return 53 | logger.info(f"https://store.steampowered.com/app/{sku} is still discounted.") 54 | subid = subname.get("name")[12:] 55 | if ASF: 56 | result = await asf(f"owns asf sub/{subid}") 57 | if not re.search(r"Not owned yet", result.get("Result")): 58 | logger.info(f"app/{sku} is already owned.") 59 | return 60 | logger.info(f"app/{sku} not owned yet: sub/{subid}") 61 | return subid 62 | 63 | async with aiohttp.ClientSession(cookies=cookies, headers=headers) as s: 64 | async with s.get("https://barter.vg/giveaways/json/") as resp: 65 | fetch = await resp.json() 66 | logger.info("Fetching giveaways") 67 | skus = [ 68 | info.get("sku") 69 | for num, info in fetch.items() 70 | if info.get("type_id") == 3 and info.get("platform_id") == 1 71 | ] 72 | tasks = [get_id(i) for i in skus] 73 | subids = [i for i in await asyncio.gather(*tasks) if i] 74 | if subids: 75 | if ASF: 76 | logger.info(f"Adding licenses from asf {subids}") 77 | result = await asf(f"addlicense asf {' '.join(subids)}") 78 | logger.info(f"{result.get('Message')}\n{result.get('Result')}") 79 | else: 80 | logger.info(f"Games might be available: {subids}") 81 | else: 82 | logger.info("No games are available for you.") 83 | 84 | 85 | if __name__ == "__main__": 86 | asyncio.run(main()) 87 | -------------------------------------------------------------------------------- /useful-commands/swap.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | dd if=/dev/zero of=/swap bs=256M count=2 3 | mkswap /swap 4 | chmod 600 /swap 5 | echo '/swap none swap sw 0 0' >> /etc/fstab 6 | swapon -a 7 | -------------------------------------------------------------------------------- /useful-commands/threadpool.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Threading Pool 3 | Usage: 4 | with ThreadingPool(processes=N) as pool: 5 | result = pool.map(lambda x: x**2, range(10)) 6 | ''' 7 | 8 | import threading 9 | 10 | class ThreadingPool: 11 | def __init__(self, processes=4): 12 | self.__processes = processes 13 | self.__running = 0 14 | self.__pending = list() 15 | self.__result = list() # List[ {'index': 0, 'args': args, 'ret': ret, 'exc': err} ] 16 | self.__signallock = threading.Lock() 17 | self.__racelock = threading.Lock() 18 | self.__acknowledgelock = threading.Lock() 19 | self.__func = None 20 | def __enter__(self): 21 | return self 22 | def __exit__(self, *_): 23 | pass 24 | def map(self, func, iterable): 25 | self.__func = func 26 | for index, args in enumerate(iterable): 27 | if not isinstance(args, (list, tuple)): 28 | args = (args,) 29 | self.__pending.append({'index': index, 'args': args, 'ret': None, 'exc': None}) 30 | self.__start_and_wait_for_trs() 31 | self.__result.sort(key=lambda x: x['index']) 32 | return [r['exc'] if r['exc'] else r['ret'] for r in self.__result] 33 | def __start_and_wait_for_trs(self): 34 | self.__signallock.acquire() 35 | while self.__pending or self.__running: 36 | if self.__pending and self.__running < self.__processes: 37 | # do some tasks 38 | taskdict = self.__pending.pop(0) 39 | threading.Thread(target=self.__inside_tr, args=(taskdict,)).start() 40 | self.__running += 1 41 | else: 42 | self.__signallock.acquire() # block 43 | self.__acknowledgelock.release() # acknowledge, the
thread may end now 44 | self.__running -= 1 45 | def __inside_tr(self, taskdict): 46 | args = taskdict.get('args') 47 | try: 48 | taskdict['ret'] = self.__func(*args) 49 | except Exception as err: 50 | taskdict['ret'] = None 51 | taskdict['exc'] = err 52 | finally: 53 | with self.__racelock: 54 | self.__result.append(taskdict) 55 | self.__acknowledgelock.acquire() 56 | self.__signallock.release() # notify the main thread 57 | with self.__acknowledgelock: # wait for the main thread to release it 58 | pass 59 | -------------------------------------------------------------------------------- /useful-commands/tr.sh: -------------------------------------------------------------------------------- 1 | ls | while read -r name 2 | do 3 | mv "$name" "$(echo "$name" | tr ' ' '_' | tr '!' '_' | tr '[' '_' | tr ']' '_')" 4 | done 5 | -------------------------------------------------------------------------------- /useful-commands/uparchlivecd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # !! Make sure your partition has a label !! See line 11 4 | # How to use: 5 | # mkdir -p /livecd 6 | # place this script inside /livecd 7 | # run this script 8 | # grub command: 9 | # cat livecd/archiso-x86_64-linux.conf 10 | # set root=(xxx) 11 | # linux /livecd/vmlinuz-linux archisobasedir=livecd archisolabel=YOUR_LABEL 12 | # initrd /livecd/amd-ucode.img 13 | # initrd /livecd/intel-ucode.img 14 | # initrd /livecd/initramfs-linux.img 15 | # boot 16 | 17 | die() { 18 | echo "$@" 19 | exit 1 20 | } 21 | read -p 'Place this script inside an empty dir. [Enter]' 22 | 23 | MIRROR="https://mirrors.ustc.edu.cn/archlinux" 24 | ISO_DIR="iso/latest" 25 | MD5SUM="${MIRROR}/${ISO_DIR}/md5sums.txt" 26 | 27 | SUDO='sudo' 28 | [[ $EUID == 0 ]] && SUDO='' 29 | 30 | wget -O md5sum ${MD5SUM} 31 | 32 | md5=$(cat md5sum |grep -F '.iso') 33 | ISO=$(awk '{print $2;}' <<< "$md5") 34 | echo "$md5" > md5sum 35 | 36 | if [[ -n $ISO ]]; then 37 | [[ -f $ISO ]] && echo "iso exists" || wget -O ${ISO} "${MIRROR}/${ISO_DIR}/${ISO}" 38 | else 39 | die "iso not found" 40 | fi 41 | md5sum -c md5sum || die "md5sum check failed" 42 | 43 | mkdir -p mnt 44 | $SUDO mount -o loop,ro ${ISO} mnt || die "mount failed" 45 | 46 | echo "copying..." 47 | DST='./' 48 | cp mnt/arch/boot/{amd-ucode.img,intel-ucode.img} ${DST} 49 | cp -R mnt/arch/x86_64 ${DST} 50 | cp mnt/arch/boot/x86_64/{initramfs-linux.img,vmlinuz-linux} ${DST} 51 | cp mnt/loader/entries/archiso-x86_64-linux.conf ${DST} 52 | echo "done copying" 53 | 54 | $SUDO umount mnt && echo "unmount successful" || echo "!! unmount failed" 55 | 56 | read -p "Delete iso file? [Enter]" 57 | rm "$ISO" 58 | -------------------------------------------------------------------------------- /useful-commands/useful-commands.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | function GatewayPorts(){ 3 | #ssh port forwarding (sshpf) 4 | echo "GatewayPorts yes" >> /etc/ssh/sshd_config 5 | } 6 | function GitSetup(){ 7 | #git setup 8 | git config --global user.name "Jerry" 9 | git config --global user.email "Jerry981028@gmail.com" 10 | git config --global push.default simple 11 | sed -i 's/https:\/\/github.com\//git@github.com:/g' ./.git/config 12 | git add .
13 | git commit 14 | git push 15 | } 16 | function GitInit(){ 17 | #git init 18 | echo "# first" >> README.md 19 | git init 20 | git add README.md 21 | git commit -m "first commit" 22 | git remote add origin https://github.com/Jerry981028/first.git 23 | git push -u origin master 24 | } 25 | function SystemControl(){ 26 | tar -zcvf /tmp/etc.tar.gz /etc # -z for gzip(gz) 27 | tar -zxvf /tmp/etc.tar.gz 28 | fc-cache -fv 29 | systemctl daemon-reload 30 | chattr +i /etc/resolv.conf 31 | chattr -i /etc/resolv.conf 32 | #apt install gnome-disk-utility 33 | #service network-manager restart 34 | #wifi led blink off 35 | echo none > /sys/class/leds/phy0-led/trigger 36 | echo 1 > /sys/class/leds/phy0-led/brightness 37 | #cpufreq-set 38 | #apt install cpufrequtils 39 | cpufreq-set -c 0 -u 800000 40 | cpufreq-set -c 1 -u 800000 41 | #uuid 42 | #check the disk UUID 43 | ls -l /dev/disk/by-uuid 44 | blkid /dev/sdb2 45 | uuidgen | xargs tune2fs /dev/sdb2 -U 46 | #original uuid = 7651122e-84c1-4e85-956b-4860651fb019 (/dev/sda3) 47 | tune2fs -U 735a2fd3-9425-4ddd-9c91-a57e3ebbaeff /dev/sdb2 48 | #apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 2D87398A 49 | export http_proxy="127.0.0.1:1081" 50 | unset http_proxy 51 | export ftp_proxy="127.0.0.1:1081" 52 | unset ftp_proxy 53 | } 54 | function FlushIptables(){ 55 | iptables -F 56 | iptables -X 57 | iptables -t nat -F 58 | iptables -t nat -X 59 | iptables -t mangle -F 60 | iptables -t mangle -X 61 | iptables -t raw -F 62 | iptables -t raw -X 63 | iptables -t security -F 64 | iptables -t security -X 65 | iptables -P INPUT ACCEPT 66 | iptables -P FORWARD ACCEPT 67 | iptables -P OUTPUT ACCEPT 68 | } 69 | function Iptables(){ 70 | iptables -I INPUT -p tcp --dport 5901 -s 10.0.0.85 -j REJECT --reject-with icmp-port-unreachable -m comment --comment "VNC" 71 | iptables -I INPUT -s 10.0.0.85 -j DROP -m comment --comment "Block Ip" 72 | #192.168.1.0/24 73 | iptables -nvL --line-numbers 74 | iptables -D INPUT n # n is the rule's line number shown by --line-numbers 75 | #iptables --policy INPUT DROP 76 | iptables -P INPUT DROP 77 | iptables -P INPUT ACCEPT 78 | } 79 | exit 0 80 | --------------------------------------------------------------------------------