├── .gitignore ├── LICENSE ├── README.md ├── packaging ├── control ├── make_deb ├── prefixes.conf ├── sshuttle.conf └── tunnel.conf └── src ├── Makefile ├── all.do ├── assembler.py ├── clean.do ├── client.py ├── compat ├── __init__.py └── ssubprocess.py ├── default.8.do ├── do ├── firewall.py ├── helpers.py ├── hostwatch.py ├── main.py ├── options.py ├── server.py ├── ssh.py ├── sshuttle ├── sshuttle.md ├── ssnet.py ├── ssyslog.py ├── stresstest.py └── ui-macos ├── .gitignore ├── Info.plist ├── MainMenu.xib ├── UserDefaults.plist ├── all.do ├── app.icns ├── askpass.py ├── bits ├── .gitignore ├── PkgInfo ├── runpython.c └── runpython.do ├── chicken-tiny-bw.png ├── chicken-tiny-err.png ├── chicken-tiny.png ├── clean.do ├── debug.app.do ├── default.app.do ├── default.app.tar.gz.do ├── default.app.zip.do ├── default.nib.do ├── dist.do ├── git-export.do ├── main.py ├── models.py ├── my.py ├── run.do ├── sources.list.do └── sshuttle /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *~ 3 | *.8 4 | /.do_built 5 | /.do_built.dir 6 | /.redo 7 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU LIBRARY GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1991 Free Software Foundation, Inc. 5 | 675 Mass Ave, Cambridge, MA 02139, USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | [This is the first released version of the library GPL. It is 10 | numbered 2 because it goes with version 2 of the ordinary GPL.] 11 | 12 | Preamble 13 | 14 | The licenses for most software are designed to take away your 15 | freedom to share and change it. By contrast, the GNU General Public 16 | Licenses are intended to guarantee your freedom to share and change 17 | free software--to make sure the software is free for all its users. 18 | 19 | This license, the Library General Public License, applies to some 20 | specially designated Free Software Foundation software, and to any 21 | other libraries whose authors decide to use it. You can use it for 22 | your libraries, too. 23 | 24 | When we speak of free software, we are referring to freedom, not 25 | price. Our General Public Licenses are designed to make sure that you 26 | have the freedom to distribute copies of free software (and charge for 27 | this service if you wish), that you receive source code or can get it 28 | if you want it, that you can change the software or use pieces of it 29 | in new free programs; and that you know you can do these things. 30 | 31 | To protect your rights, we need to make restrictions that forbid 32 | anyone to deny you these rights or to ask you to surrender the rights. 33 | These restrictions translate to certain responsibilities for you if 34 | you distribute copies of the library, or if you modify it. 35 | 36 | For example, if you distribute copies of the library, whether gratis 37 | or for a fee, you must give the recipients all the rights that we gave 38 | you. You must make sure that they, too, receive or can get the source 39 | code. If you link a program with the library, you must provide 40 | complete object files to the recipients so that they can relink them 41 | with the library, after making changes to the library and recompiling 42 | it. And you must show them these terms so they know their rights. 
43 | 44 | Our method of protecting your rights has two steps: (1) copyright 45 | the library, and (2) offer you this license which gives you legal 46 | permission to copy, distribute and/or modify the library. 47 | 48 | Also, for each distributor's protection, we want to make certain 49 | that everyone understands that there is no warranty for this free 50 | library. If the library is modified by someone else and passed on, we 51 | want its recipients to know that what they have is not the original 52 | version, so that any problems introduced by others will not reflect on 53 | the original authors' reputations. 54 | 55 | Finally, any free program is threatened constantly by software 56 | patents. We wish to avoid the danger that companies distributing free 57 | software will individually obtain patent licenses, thus in effect 58 | transforming the program into proprietary software. To prevent this, 59 | we have made it clear that any patent must be licensed for everyone's 60 | free use or not licensed at all. 61 | 62 | Most GNU software, including some libraries, is covered by the ordinary 63 | GNU General Public License, which was designed for utility programs. This 64 | license, the GNU Library General Public License, applies to certain 65 | designated libraries. This license is quite different from the ordinary 66 | one; be sure to read it in full, and don't assume that anything in it is 67 | the same as in the ordinary license. 68 | 69 | The reason we have a separate public license for some libraries is that 70 | they blur the distinction we usually make between modifying or adding to a 71 | program and simply using it. Linking a program with a library, without 72 | changing the library, is in some sense simply using the library, and is 73 | analogous to running a utility program or application program. However, in 74 | a textual and legal sense, the linked executable is a combined work, a 75 | derivative of the original library, and the ordinary General Public License 76 | treats it as such. 77 | 78 | Because of this blurred distinction, using the ordinary General 79 | Public License for libraries did not effectively promote software 80 | sharing, because most developers did not use the libraries. We 81 | concluded that weaker conditions might promote sharing better. 82 | 83 | However, unrestricted linking of non-free programs would deprive the 84 | users of those programs of all benefit from the free status of the 85 | libraries themselves. This Library General Public License is intended to 86 | permit developers of non-free programs to use free libraries, while 87 | preserving your freedom as a user of such programs to change the free 88 | libraries that are incorporated in them. (We have not seen how to achieve 89 | this as regards changes in header files, but we have achieved it as regards 90 | changes in the actual functions of the Library.) The hope is that this 91 | will lead to faster development of free libraries. 92 | 93 | The precise terms and conditions for copying, distribution and 94 | modification follow. Pay close attention to the difference between a 95 | "work based on the library" and a "work that uses the library". The 96 | former contains code derived from the library, while the latter only 97 | works together with the library. 98 | 99 | Note that it is possible for a library to be covered by the ordinary 100 | General Public License rather than by this special one. 
101 | 102 | GNU LIBRARY GENERAL PUBLIC LICENSE 103 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 104 | 105 | 0. This License Agreement applies to any software library which 106 | contains a notice placed by the copyright holder or other authorized 107 | party saying it may be distributed under the terms of this Library 108 | General Public License (also called "this License"). Each licensee is 109 | addressed as "you". 110 | 111 | A "library" means a collection of software functions and/or data 112 | prepared so as to be conveniently linked with application programs 113 | (which use some of those functions and data) to form executables. 114 | 115 | The "Library", below, refers to any such software library or work 116 | which has been distributed under these terms. A "work based on the 117 | Library" means either the Library or any derivative work under 118 | copyright law: that is to say, a work containing the Library or a 119 | portion of it, either verbatim or with modifications and/or translated 120 | straightforwardly into another language. (Hereinafter, translation is 121 | included without limitation in the term "modification".) 122 | 123 | "Source code" for a work means the preferred form of the work for 124 | making modifications to it. For a library, complete source code means 125 | all the source code for all modules it contains, plus any associated 126 | interface definition files, plus the scripts used to control compilation 127 | and installation of the library. 128 | 129 | Activities other than copying, distribution and modification are not 130 | covered by this License; they are outside its scope. The act of 131 | running a program using the Library is not restricted, and output from 132 | such a program is covered only if its contents constitute a work based 133 | on the Library (independent of the use of the Library in a tool for 134 | writing it). Whether that is true depends on what the Library does 135 | and what the program that uses the Library does. 136 | 137 | 1. You may copy and distribute verbatim copies of the Library's 138 | complete source code as you receive it, in any medium, provided that 139 | you conspicuously and appropriately publish on each copy an 140 | appropriate copyright notice and disclaimer of warranty; keep intact 141 | all the notices that refer to this License and to the absence of any 142 | warranty; and distribute a copy of this License along with the 143 | Library. 144 | 145 | You may charge a fee for the physical act of transferring a copy, 146 | and you may at your option offer warranty protection in exchange for a 147 | fee. 148 | 149 | 2. You may modify your copy or copies of the Library or any portion 150 | of it, thus forming a work based on the Library, and copy and 151 | distribute such modifications or work under the terms of Section 1 152 | above, provided that you also meet all of these conditions: 153 | 154 | a) The modified work must itself be a software library. 155 | 156 | b) You must cause the files modified to carry prominent notices 157 | stating that you changed the files and the date of any change. 158 | 159 | c) You must cause the whole of the work to be licensed at no 160 | charge to all third parties under the terms of this License. 
161 | 162 | d) If a facility in the modified Library refers to a function or a 163 | table of data to be supplied by an application program that uses 164 | the facility, other than as an argument passed when the facility 165 | is invoked, then you must make a good faith effort to ensure that, 166 | in the event an application does not supply such function or 167 | table, the facility still operates, and performs whatever part of 168 | its purpose remains meaningful. 169 | 170 | (For example, a function in a library to compute square roots has 171 | a purpose that is entirely well-defined independent of the 172 | application. Therefore, Subsection 2d requires that any 173 | application-supplied function or table used by this function must 174 | be optional: if the application does not supply it, the square 175 | root function must still compute square roots.) 176 | 177 | These requirements apply to the modified work as a whole. If 178 | identifiable sections of that work are not derived from the Library, 179 | and can be reasonably considered independent and separate works in 180 | themselves, then this License, and its terms, do not apply to those 181 | sections when you distribute them as separate works. But when you 182 | distribute the same sections as part of a whole which is a work based 183 | on the Library, the distribution of the whole must be on the terms of 184 | this License, whose permissions for other licensees extend to the 185 | entire whole, and thus to each and every part regardless of who wrote 186 | it. 187 | 188 | Thus, it is not the intent of this section to claim rights or contest 189 | your rights to work written entirely by you; rather, the intent is to 190 | exercise the right to control the distribution of derivative or 191 | collective works based on the Library. 192 | 193 | In addition, mere aggregation of another work not based on the Library 194 | with the Library (or with a work based on the Library) on a volume of 195 | a storage or distribution medium does not bring the other work under 196 | the scope of this License. 197 | 198 | 3. You may opt to apply the terms of the ordinary GNU General Public 199 | License instead of this License to a given copy of the Library. To do 200 | this, you must alter all the notices that refer to this License, so 201 | that they refer to the ordinary GNU General Public License, version 2, 202 | instead of to this License. (If a newer version than version 2 of the 203 | ordinary GNU General Public License has appeared, then you can specify 204 | that version instead if you wish.) Do not make any other change in 205 | these notices. 206 | 207 | Once this change is made in a given copy, it is irreversible for 208 | that copy, so the ordinary GNU General Public License applies to all 209 | subsequent copies and derivative works made from that copy. 210 | 211 | This option is useful when you wish to copy part of the code of 212 | the Library into a program that is not a library. 213 | 214 | 4. You may copy and distribute the Library (or a portion or 215 | derivative of it, under Section 2) in object code or executable form 216 | under the terms of Sections 1 and 2 above provided that you accompany 217 | it with the complete corresponding machine-readable source code, which 218 | must be distributed under the terms of Sections 1 and 2 above on a 219 | medium customarily used for software interchange. 
220 | 221 | If distribution of object code is made by offering access to copy 222 | from a designated place, then offering equivalent access to copy the 223 | source code from the same place satisfies the requirement to 224 | distribute the source code, even though third parties are not 225 | compelled to copy the source along with the object code. 226 | 227 | 5. A program that contains no derivative of any portion of the 228 | Library, but is designed to work with the Library by being compiled or 229 | linked with it, is called a "work that uses the Library". Such a 230 | work, in isolation, is not a derivative work of the Library, and 231 | therefore falls outside the scope of this License. 232 | 233 | However, linking a "work that uses the Library" with the Library 234 | creates an executable that is a derivative of the Library (because it 235 | contains portions of the Library), rather than a "work that uses the 236 | library". The executable is therefore covered by this License. 237 | Section 6 states terms for distribution of such executables. 238 | 239 | When a "work that uses the Library" uses material from a header file 240 | that is part of the Library, the object code for the work may be a 241 | derivative work of the Library even though the source code is not. 242 | Whether this is true is especially significant if the work can be 243 | linked without the Library, or if the work is itself a library. The 244 | threshold for this to be true is not precisely defined by law. 245 | 246 | If such an object file uses only numerical parameters, data 247 | structure layouts and accessors, and small macros and small inline 248 | functions (ten lines or less in length), then the use of the object 249 | file is unrestricted, regardless of whether it is legally a derivative 250 | work. (Executables containing this object code plus portions of the 251 | Library will still fall under Section 6.) 252 | 253 | Otherwise, if the work is a derivative of the Library, you may 254 | distribute the object code for the work under the terms of Section 6. 255 | Any executables containing that work also fall under Section 6, 256 | whether or not they are linked directly with the Library itself. 257 | 258 | 6. As an exception to the Sections above, you may also compile or 259 | link a "work that uses the Library" with the Library to produce a 260 | work containing portions of the Library, and distribute that work 261 | under terms of your choice, provided that the terms permit 262 | modification of the work for the customer's own use and reverse 263 | engineering for debugging such modifications. 264 | 265 | You must give prominent notice with each copy of the work that the 266 | Library is used in it and that the Library and its use are covered by 267 | this License. You must supply a copy of this License. If the work 268 | during execution displays copyright notices, you must include the 269 | copyright notice for the Library among them, as well as a reference 270 | directing the user to the copy of this License. 
Also, you must do one 271 | of these things: 272 | 273 | a) Accompany the work with the complete corresponding 274 | machine-readable source code for the Library including whatever 275 | changes were used in the work (which must be distributed under 276 | Sections 1 and 2 above); and, if the work is an executable linked 277 | with the Library, with the complete machine-readable "work that 278 | uses the Library", as object code and/or source code, so that the 279 | user can modify the Library and then relink to produce a modified 280 | executable containing the modified Library. (It is understood 281 | that the user who changes the contents of definitions files in the 282 | Library will not necessarily be able to recompile the application 283 | to use the modified definitions.) 284 | 285 | b) Accompany the work with a written offer, valid for at 286 | least three years, to give the same user the materials 287 | specified in Subsection 6a, above, for a charge no more 288 | than the cost of performing this distribution. 289 | 290 | c) If distribution of the work is made by offering access to copy 291 | from a designated place, offer equivalent access to copy the above 292 | specified materials from the same place. 293 | 294 | d) Verify that the user has already received a copy of these 295 | materials or that you have already sent this user a copy. 296 | 297 | For an executable, the required form of the "work that uses the 298 | Library" must include any data and utility programs needed for 299 | reproducing the executable from it. However, as a special exception, 300 | the source code distributed need not include anything that is normally 301 | distributed (in either source or binary form) with the major 302 | components (compiler, kernel, and so on) of the operating system on 303 | which the executable runs, unless that component itself accompanies 304 | the executable. 305 | 306 | It may happen that this requirement contradicts the license 307 | restrictions of other proprietary libraries that do not normally 308 | accompany the operating system. Such a contradiction means you cannot 309 | use both them and the Library together in an executable that you 310 | distribute. 311 | 312 | 7. You may place library facilities that are a work based on the 313 | Library side-by-side in a single library together with other library 314 | facilities not covered by this License, and distribute such a combined 315 | library, provided that the separate distribution of the work based on 316 | the Library and of the other library facilities is otherwise 317 | permitted, and provided that you do these two things: 318 | 319 | a) Accompany the combined library with a copy of the same work 320 | based on the Library, uncombined with any other library 321 | facilities. This must be distributed under the terms of the 322 | Sections above. 323 | 324 | b) Give prominent notice with the combined library of the fact 325 | that part of it is a work based on the Library, and explaining 326 | where to find the accompanying uncombined form of the same work. 327 | 328 | 8. You may not copy, modify, sublicense, link with, or distribute 329 | the Library except as expressly provided under this License. Any 330 | attempt otherwise to copy, modify, sublicense, link with, or 331 | distribute the Library is void, and will automatically terminate your 332 | rights under this License. 
However, parties who have received copies, 333 | or rights, from you under this License will not have their licenses 334 | terminated so long as such parties remain in full compliance. 335 | 336 | 9. You are not required to accept this License, since you have not 337 | signed it. However, nothing else grants you permission to modify or 338 | distribute the Library or its derivative works. These actions are 339 | prohibited by law if you do not accept this License. Therefore, by 340 | modifying or distributing the Library (or any work based on the 341 | Library), you indicate your acceptance of this License to do so, and 342 | all its terms and conditions for copying, distributing or modifying 343 | the Library or works based on it. 344 | 345 | 10. Each time you redistribute the Library (or any work based on the 346 | Library), the recipient automatically receives a license from the 347 | original licensor to copy, distribute, link with or modify the Library 348 | subject to these terms and conditions. You may not impose any further 349 | restrictions on the recipients' exercise of the rights granted herein. 350 | You are not responsible for enforcing compliance by third parties to 351 | this License. 352 | 353 | 11. If, as a consequence of a court judgment or allegation of patent 354 | infringement or for any other reason (not limited to patent issues), 355 | conditions are imposed on you (whether by court order, agreement or 356 | otherwise) that contradict the conditions of this License, they do not 357 | excuse you from the conditions of this License. If you cannot 358 | distribute so as to satisfy simultaneously your obligations under this 359 | License and any other pertinent obligations, then as a consequence you 360 | may not distribute the Library at all. For example, if a patent 361 | license would not permit royalty-free redistribution of the Library by 362 | all those who receive copies directly or indirectly through you, then 363 | the only way you could satisfy both it and this License would be to 364 | refrain entirely from distribution of the Library. 365 | 366 | If any portion of this section is held invalid or unenforceable under any 367 | particular circumstance, the balance of the section is intended to apply, 368 | and the section as a whole is intended to apply in other circumstances. 369 | 370 | It is not the purpose of this section to induce you to infringe any 371 | patents or other property right claims or to contest validity of any 372 | such claims; this section has the sole purpose of protecting the 373 | integrity of the free software distribution system which is 374 | implemented by public license practices. Many people have made 375 | generous contributions to the wide range of software distributed 376 | through that system in reliance on consistent application of that 377 | system; it is up to the author/donor to decide if he or she is willing 378 | to distribute software through any other system and a licensee cannot 379 | impose that choice. 380 | 381 | This section is intended to make thoroughly clear what is believed to 382 | be a consequence of the rest of this License. 383 | 384 | 12. 
If the distribution and/or use of the Library is restricted in 385 | certain countries either by patents or by copyrighted interfaces, the 386 | original copyright holder who places the Library under this License may add 387 | an explicit geographical distribution limitation excluding those countries, 388 | so that distribution is permitted only in or among countries not thus 389 | excluded. In such case, this License incorporates the limitation as if 390 | written in the body of this License. 391 | 392 | 13. The Free Software Foundation may publish revised and/or new 393 | versions of the Library General Public License from time to time. 394 | Such new versions will be similar in spirit to the present version, 395 | but may differ in detail to address new problems or concerns. 396 | 397 | Each version is given a distinguishing version number. If the Library 398 | specifies a version number of this License which applies to it and 399 | "any later version", you have the option of following the terms and 400 | conditions either of that version or of any later version published by 401 | the Free Software Foundation. If the Library does not specify a 402 | license version number, you may choose any version ever published by 403 | the Free Software Foundation. 404 | 405 | 14. If you wish to incorporate parts of the Library into other free 406 | programs whose distribution conditions are incompatible with these, 407 | write to the author to ask for permission. For software which is 408 | copyrighted by the Free Software Foundation, write to the Free 409 | Software Foundation; we sometimes make exceptions for this. Our 410 | decision will be guided by the two goals of preserving the free status 411 | of all derivatives of our free software and of promoting the sharing 412 | and reuse of software generally. 413 | 414 | NO WARRANTY 415 | 416 | 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO 417 | WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 418 | EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR 419 | OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY 420 | KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE 421 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 422 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE 423 | LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME 424 | THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 425 | 426 | 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN 427 | WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY 428 | AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU 429 | FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR 430 | CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE 431 | LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING 432 | RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A 433 | FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF 434 | SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH 435 | DAMAGES. 436 | 437 | END OF TERMS AND CONDITIONS 438 | 439 | Appendix: How to Apply These Terms to Your New Libraries 440 | 441 | If you develop a new library, and you want it to be of the greatest 442 | possible use to the public, we recommend making it free software that 443 | everyone can redistribute and change. 
You can do so by permitting 444 | redistribution under these terms (or, alternatively, under the terms of the 445 | ordinary General Public License). 446 | 447 | To apply these terms, attach the following notices to the library. It is 448 | safest to attach them to the start of each source file to most effectively 449 | convey the exclusion of warranty; and each file should have at least the 450 | "copyright" line and a pointer to where the full notice is found. 451 | 452 | 453 | Copyright (C) 454 | 455 | This library is free software; you can redistribute it and/or 456 | modify it under the terms of the GNU Library General Public 457 | License as published by the Free Software Foundation; either 458 | version 2 of the License, or (at your option) any later version. 459 | 460 | This library is distributed in the hope that it will be useful, 461 | but WITHOUT ANY WARRANTY; without even the implied warranty of 462 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 463 | Library General Public License for more details. 464 | 465 | You should have received a copy of the GNU Library General Public 466 | License along with this library; if not, write to the Free 467 | Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 468 | 469 | Also add information on how to contact you by electronic and paper mail. 470 | 471 | You should also get your employer (if you work as a programmer) or your 472 | school, if any, to sign a "copyright disclaimer" for the library, if 473 | necessary. Here is a sample; alter the names: 474 | 475 | Yoyodyne, Inc., hereby disclaims all copyright interest in the 476 | library `Frob' (a library for tweaking knobs) written by James Random Hacker. 477 | 478 | , 1 April 1990 479 | Ty Coon, President of Vice 480 | 481 | That's all there is to it! 482 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | WARNING: 3 | On MacOS 10.6 (at least up to 10.6.6), your network will 4 | stop responding about 10 minutes after the first time you 5 | start sshuttle, because of a MacOS kernel bug relating to 6 | arp and the net.inet.ip.scopedroute sysctl. To fix it, 7 | just switch your wireless off and on. Sshuttle makes the 8 | kernel setting it changes permanent, so this won't happen 9 | again, even after a reboot. 10 | 11 | Required Software 12 | ================= 13 | 14 | - You need PyXAPI, available here: 15 | http://www.pps.univ-paris-diderot.fr/~ylg/PyXAPI/ 16 | - You also need autossh, available in various package management systems 17 | - Python 2.x, both locally and the remote system 18 | 19 | 20 | sshuttle: where transparent proxy meets VPN meets ssh 21 | ===================================================== 22 | 23 | As far as I know, sshuttle is the only program that solves the following 24 | common case: 25 | 26 | - Your client machine (or router) is Linux, FreeBSD, or MacOS. 27 | 28 | - You have access to a remote network via ssh. 29 | 30 | - You don't necessarily have admin access on the remote network. 31 | 32 | - The remote network has no VPN, or only stupid/complex VPN 33 | protocols (IPsec, PPTP, etc). Or maybe you are the 34 | admin and you just got frustrated with the awful state of 35 | VPN tools. 36 | 37 | - You don't want to create an ssh port forward for every 38 | single host/port on the remote network. 39 | 40 | - You hate openssh's port forwarding because it's randomly 41 | slow and/or stupid. 
42 | 43 | - You can't use openssh's PermitTunnel feature because 44 | it's disabled by default on openssh servers; plus it does 45 | TCP-over-TCP, which has terrible performance (see below). 46 | 47 | 48 | Prerequisites 49 | ------------- 50 | 51 | - sudo, su, or logged in as root on your client machine. 52 | (The server doesn't need admin access.) 53 | 54 | - If you use Linux on your client machine: 55 | iptables installed on the client, including at 56 | least the iptables DNAT, REDIRECT, and ttl modules. 57 | These are installed by default on most Linux distributions. 58 | (The server doesn't need iptables and doesn't need to be 59 | Linux.) 60 | 61 | - If you use MacOS or BSD on your client machine: 62 | Your kernel needs to be compiled with `IPFIREWALL_FORWARD` 63 | (MacOS has this by default) and you need to have ipfw 64 | available. (The server doesn't need to be MacOS or BSD.) 65 | 66 | 67 | Obtaining sshuttle 68 | ------------------ 69 | 70 | - First, go get PyXAPI from the link above 71 | 72 | - Clone github.com/jwyllie83/sshuttle/tree/local 73 | 74 | 75 | Usage on (Ubuntu) Linux 76 | ----------------------- 77 | 78 | - `cd packaging; ./make_deb` 79 | 80 | - `sudo dpkg -i ./sshuttle-VERSION.deb` 81 | 82 | - Check out the files in `/etc/sshuttle`; configure them so your tunnel works 83 | 84 | - `sudo service sshuttle start` 85 | 86 | 87 | Usage on other Linuxes and OSes 88 | ------------------------------- 89 | 90 | ./sshuttle -r username@sshserver 0.0.0.0/0 -vv 91 | 92 | - There is a shortcut for 0.0.0.0/0 for those who value 93 | their wrists: 94 | ./sshuttle -r username@sshserver 0/0 -vv 95 | 96 | - If you would also like your DNS queries to be proxied 97 | through the DNS server of the server you are connected to: 98 | ./sshuttle --dns -vvr username@sshserver 0/0 99 | 100 | The above is probably what you want to use to prevent 101 | local network attacks such as Firesheep and friends. 102 | 103 | (You may be prompted for one or more passwords; first, the 104 | local password to become root using either sudo or su, and 105 | then the remote ssh password. Or you might have sudo and ssh set 106 | up to not require passwords, in which case you won't be 107 | prompted at all.) 108 | 109 | Usage Notes 110 | ----------- 111 | 112 | That's it! Now your local machine can access the remote network as if you 113 | were right there. And if your "client" machine is a router, everyone on 114 | your local network can make connections to your remote network. 115 | 116 | You don't need to install sshuttle on the remote server; 117 | the remote server just needs to have python available. 118 | sshuttle will automatically upload its source code to the 119 | remote python interpreter and run it there. 120 | 121 | This creates a transparent proxy server on your local machine for all IP 122 | addresses that match 0.0.0.0/0. (You can use more specific IP addresses if 123 | you want; use any number of IP addresses or subnets to change which 124 | addresses get proxied. Using 0.0.0.0/0 proxies everything, which is 125 | interesting if you don't trust the people on your local network.) 126 | 127 | Any TCP session you initiate to one of the proxied IP addresses will be 128 | captured by sshuttle and sent over an ssh session to the remote copy of 129 | sshuttle, which will then regenerate the connection on that end, and funnel 130 | the data back and forth through ssh. 131 | 132 | Fun, right? A poor man's instant VPN, and you don't even have to have 133 | admin access on the server.
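For example, to forward only a couple of remote subnets instead of all traffic, list them on the command line (the server name and prefixes below are placeholders; substitute your own):

    ./sshuttle -r username@sshserver 10.0.0.0/8 192.168.10.0/24 -vv

Only connections to the listed prefixes are intercepted and sent through the tunnel; everything else keeps using your normal routes.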
134 | 135 | 136 | Theory of Operation 137 | ------------------- 138 | 139 | sshuttle is not exactly a VPN, and not exactly port forwarding. It's kind 140 | of both, and kind of neither. 141 | 142 | It's like a VPN, since it can forward every port on an entire network, not 143 | just ports you specify. Conveniently, it lets you use the "real" IP 144 | addresses of each host rather than faking port numbers on localhost. 145 | 146 | On the other hand, the way it *works* is more like ssh port forwarding than 147 | a VPN. Normally, a VPN forwards your data one packet at a time, and 148 | doesn't care about individual connections; ie. it's "stateless" with respect 149 | to the traffic. sshuttle is the opposite of stateless; it tracks every 150 | single connection. 151 | 152 | You could compare sshuttle to something like the old Slirp program, which was a 154 | userspace TCP/IP implementation that did something similar. But it 155 | operated on a packet-by-packet basis on the client side, reassembling the 156 | packets on the server side. That worked okay back in the "real live serial 157 | port" days, because serial ports had predictable latency and buffering. 158 | 159 | But you can't safely just forward TCP packets over a TCP session (like ssh), 160 | because TCP's performance depends fundamentally on packet loss; it 161 | must experience packet loss in order to know when to slow down! At 162 | the same time, the outer TCP session (ssh, in this case) is a reliable 163 | transport, which means that what you forward through the tunnel never 164 | experiences packet loss. The ssh session itself experiences packet loss, of 165 | course, but TCP fixes it up and ssh (and thus you) never know the 166 | difference. But neither does your inner TCP session, and extremely screwy 167 | performance ensues. 168 | 169 | sshuttle assembles the TCP stream locally, multiplexes it statefully over 170 | an ssh session, and disassembles it back into packets at the other end. So 171 | it never ends up doing TCP-over-TCP. It's just data-over-TCP, which is 172 | safe. 173 | 174 | 175 | Useless Trivia 176 | -------------- 177 | 178 | Back in 1998 (12 years ago! Yikes!), I released the first version of Tunnel Vision, a 180 | semi-intelligent VPN client for Linux. Unfortunately, I made two big mistakes: 181 | I implemented the key exchange myself (oops), and I ended up doing 182 | TCP-over-TCP (double oops). The resulting program worked okay - and people 183 | used it for years - but the performance was always a bit funny. And nobody 184 | ever found any security flaws in my key exchange, either, but that doesn't 185 | mean anything. :) 186 | 187 | The same year, dcoombs and I also released Fast Forward, a proxy server 188 | supporting transparent proxying. Among other things, we used it for 189 | automatically splitting traffic across more than one Internet connection (a 190 | tool we called "Double Vision"). 191 | 192 | I was still in university at the time. A couple years after that, one of my 193 | professors was working with some graduate students on the technology that 194 | would eventually become Slipstream 195 | Internet Acceleration. He asked me to do a contract for him to build an 196 | initial prototype of a transparent proxy server for mobile networks. The 197 | idea was similar to sshuttle: if you reassemble and then disassemble the TCP 198 | packets, you can reduce latency and improve performance vs. just forwarding 199 | the packets over a plain VPN or mobile network. 
(It's unlikely that any of 200 | my code has persisted in the Slipstream product today, but the concept is 201 | still pretty cool. I'm still horrified that people use plain TCP on 202 | complex mobile networks with crazily variable latency, for which it was 203 | never really intended.) 204 | 205 | That project I did for Slipstream was what first gave me the idea to merge 206 | the concepts of Fast Forward, Double Vision, and Tunnel Vision into a single 207 | program that was the best of all worlds. And here we are, at last, 10 years 208 | later. You're welcome. 209 | 210 | -- 211 | Avery Pennarun 212 | 213 | Mailing list: 214 | Subscribe by sending a message to 215 | List archives are at: http://groups.google.com/group/sshuttle 216 | -------------------------------------------------------------------------------- /packaging/control: -------------------------------------------------------------------------------- 1 | Package: sshuttle 2 | Version: 0.2 3 | Architecture: i386 4 | Maintainer: Jim Wyllie 5 | Depends: autossh, upstart, python (>=2.6) 6 | Section: utils 7 | Priority: optional 8 | Homepage: http://github.com/jwyllie83/sshuttle.udp 9 | Description: "Full-featured" VPN over an SSH tunnel, allowing full remote 10 | access somewhere where all you have is an SSH connection. It works well if 11 | you generally find yourself in the following situation: 12 | . 13 | - Your client machine (or router) is Linux, FreeBSD, or MacOS. 14 | - You have access to a remote network via ssh. 15 | - You don't necessarily have admin access on the remote network. 16 | - You do not wish to, or can't, use other VPN software 17 | - You don't want to create an ssh port forward for every 18 | single host/port on the remote network. 19 | - You hate openssh's port forwarding because it's randomly 20 | slow and/or stupid. 21 | - You can't use openssh's PermitTunnel feature because 22 | it's disabled by default on openssh servers; plus it does 23 | TCP-over-TCP, which has suboptimal performance 24 | . 25 | It also has hooks for more complicated setups (VPN-in-a-SSH-VPN, etc) to allow 26 | you to set it up as you like. 27 | -------------------------------------------------------------------------------- /packaging/make_deb: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script puts together a .deb package suitable for installing on an Ubuntu 4 | # system 5 | 6 | B="/tmp/sshuttle/build" 7 | 8 | if [ ! -x /usr/bin/dpkg ]; then 9 | echo 'Unable to build: dpkg not found on system' 10 | exit 1 11 | fi 12 | 13 | # Create the new directory structure 14 | mkdir -p ${B}/etc/sshuttle/pre-start.d 15 | mkdir -p ${B}/etc/sshuttle/post-stop.d 16 | mkdir -p ${B}/usr/share/sshuttle 17 | mkdir -p ${B}/usr/bin 18 | mkdir -p ${B}/etc/init 19 | mkdir -p ${B}/DEBIAN 20 | 21 | # Copy over all of the files 22 | cp -r ../src/* ${B}/usr/share/sshuttle 23 | cp ../src/sshuttle ${B}/usr/bin 24 | cp -r sshuttle.conf ${B}/etc/init 25 | cp prefixes.conf ${B}/etc/sshuttle 26 | cp tunnel.conf ${B}/etc/sshuttle 27 | 28 | # Copy the control file over, as well 29 | cp control ${B}/DEBIAN 30 | 31 | # Create the md5sum manifest 32 | if [ -x /usr/bin/md5sum ]; then 33 | cd ${B} 34 | find . 
-type f | egrep -v DEBIAN | sed -re 's/^..//' | xargs md5sum > ${B}/DEBIAN/md5sums 35 | cd ${OLDPWD} 36 | fi 37 | 38 | # Build the debian package 39 | VERSION=$(egrep -e '^Version' control | sed -re 's/^[^:]*: //') 40 | dpkg --build ${B} ./sshuttle-${VERSION}.deb 41 | rm -rf ${B} 42 | -------------------------------------------------------------------------------- /packaging/prefixes.conf: -------------------------------------------------------------------------------- 1 | # Output prefixes here, one per line. Prefix is in: 2 | # prefix/netmask format 3 | # Like this: 4 | # 192.168.0.0/16 5 | # 192.0.43.10/32 6 | -------------------------------------------------------------------------------- /packaging/sshuttle.conf: -------------------------------------------------------------------------------- 1 | description "Create a transparent proxy over SSH" 2 | author "Jim Wyllie " 3 | 4 | manual 5 | nice -5 6 | 7 | # Edit this file with network prefixes that should be loaded through the SSH 8 | # tunnel. 9 | env PREFIX_LOCATION=/etc/sshuttle/prefixes.conf 10 | 11 | # Routing table; defaults to 100 12 | env ROUTE_TABLE=100 13 | 14 | # fwmark; defaults to 1 15 | env FWMARK=1 16 | 17 | # SSH tunnel configuration file 18 | env SSHUTTLE_TUNNEL_FILE=/etc/sshuttle/tunnel.conf 19 | 20 | # File containing the tunnel proxy name / host / whatever 21 | env TUNNEL_PROXY="/etc/sshuttle/tunnel.conf" 22 | 23 | # Any other commands needed to run before or after loading the SSH tunnel. 24 | # This is where you can put any of your hacks to set up tunnels-in-tunnels, 25 | # etc. Scripts in this directory are executed in order. 26 | env MISC_START_DIR=/etc/sshuttle/pre-start.d 27 | env MISC_STOP_DIR=/etc/sshuttle/post-stop.d 28 | 29 | start on (local-filesystems and net-device-up IFACE!=lo) 30 | stop on stopping network-services 31 | 32 | #respawn 33 | 34 | pre-start script 35 | # Make sure we have created the routes 36 | sudo ip rule add fwmark ${FWMARK} lookup ${ROUTE_TABLE} 37 | logger "Starting sshuttle..." 
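	# How the pieces fit together with --method=tproxy: packets intercepted by
	# sshuttle's firewall rules carry the ${FWMARK} mark (the value here needs to
	# match the mark those rules use); the policy-routing rule added above sends
	# marked packets to table ${ROUTE_TABLE}, and the "local" routes added from
	# ${PREFIX_LOCATION} below make the kernel deliver them to the loopback,
	# where the transparent (tproxy) listener can accept them.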
38 | 39 | if [ -f "${PREFIX_LOCATION}" ]; then 40 | cat "${PREFIX_LOCATION}" | while read ROUTE; do 41 | 42 | # Skip comments 43 | if [ -n "$(echo ${ROUTE} | egrep "^[ ]*#")" ]; then 44 | continue 45 | fi 46 | 47 | # Skip empty lines 48 | if [ -z "${ROUTE}" ]; then 49 | continue 50 | fi 51 | 52 | logger "Adding route: ${ROUTE}" 53 | ip route add local ${ROUTE} dev lo table ${ROUTE_TABLE} 54 | done 55 | fi 56 | 57 | for RUNFILE in ${MISC_START_DIR}/*; do 58 | logger "Executing ${RUNFILE}" 59 | /bin/sh -c "${RUNFILE}" 60 | done 61 | end script 62 | 63 | post-stop script 64 | if [ -f "${PREFIX_LOCATION}" ]; then 65 | cat "${PREFIX_LOCATION}" | while read ROUTE; do 66 | 67 | # Skip comments 68 | if [ -n "$(echo ${ROUTE} | egrep "^[ ]*#")" ]; then 69 | continue 70 | fi 71 | 72 | # Skip empty lines 73 | if [ -z "${ROUTE}" ]; then 74 | continue 75 | fi 76 | 77 | logger "Deleting route: ${ROUTE}" 78 | ip route del local ${ROUTE} dev lo table ${ROUTE_TABLE} 79 | done 80 | fi 81 | 82 | ip rule del fwmark ${FWMARK} 83 | 84 | for RUNFILE in ${MISC_STOP_DIR}/*; do 85 | logger "Executing ${RUNFILE}" 86 | /bin/sh -c "${RUNFILE}" 87 | done 88 | end script 89 | 90 | exec /usr/bin/sshuttle --dns --method=tproxy --listen 0.0.0.0 --remote sshuttle_tunnel -s /etc/sshuttle/prefixes.conf -e "ssh -F ${TUNNEL_PROXY}" 91 | -------------------------------------------------------------------------------- /packaging/tunnel.conf: -------------------------------------------------------------------------------- 1 | # Here is where you can specify any SSH tunnel options. See ssh_config(5) for 2 | # details. You need to leave the Host line intact, but everything else can 3 | # specify whatever you want. 4 | Host sshuttle_tunnel 5 | 6 | # REQUIRED: Set this to be the host to which you would like to connect your 7 | # tunnel 8 | #Hostname localhost 9 | 10 | # REQUIRED: Set this to be the target SSH user on the remote system 11 | #User foo 12 | 13 | # --------------------------------------------------------------------------- 14 | # The rest are all optional; see ssh_config(5) for the full list of what can 15 | # be specified. Some very commonly needed ones are below. 16 | # --------------------------------------------------------------------------- 17 | 18 | # SSH key used for connecting 19 | #IdentityFile /path/to/key 20 | -------------------------------------------------------------------------------- /src/Makefile: -------------------------------------------------------------------------------- 1 | all: 2 | 3 | Makefile: 4 | @ 5 | 6 | %: FORCE 7 | +./do $@ 8 | 9 | .PHONY: FORCE 10 | 11 | -------------------------------------------------------------------------------- /src/all.do: -------------------------------------------------------------------------------- 1 | exec >&2 2 | UI= 3 | [ "$(uname)" = "Darwin" ] && UI=ui-macos/all 4 | redo-ifchange sshuttle.8 $UI 5 | 6 | echo 7 | echo "What now?"
8 | [ -z "$UI" ] || echo "- Try the MacOS GUI: open ui-macos/Sshuttle*.app" 9 | echo "- Run sshuttle: ./sshuttle --dns -r HOSTNAME 0/0" 10 | echo "- Read the README: less README.md" 11 | echo "- Read the man page: less sshuttle.md" 12 | -------------------------------------------------------------------------------- /src/assembler.py: -------------------------------------------------------------------------------- 1 | import sys, zlib 2 | 3 | z = zlib.decompressobj() 4 | mainmod = sys.modules[__name__] 5 | while 1: 6 | name = sys.stdin.readline().strip() 7 | if name: 8 | nbytes = int(sys.stdin.readline()) 9 | if verbosity >= 2: 10 | sys.stderr.write('server: assembling %r (%d bytes)\n' 11 | % (name, nbytes)) 12 | content = z.decompress(sys.stdin.read(nbytes)) 13 | exec compile(content, name, "exec") 14 | 15 | # FIXME: this crushes everything into a single module namespace, 16 | # then makes each of the module names point at this one. Gross. 17 | assert(name.endswith('.py')) 18 | modname = name[:-3] 19 | mainmod.__dict__[modname] = mainmod 20 | else: 21 | break 22 | 23 | verbose = verbosity 24 | sys.stderr.flush() 25 | sys.stdout.flush() 26 | main() 27 | -------------------------------------------------------------------------------- /src/clean.do: -------------------------------------------------------------------------------- 1 | redo ui-macos/clean 2 | rm -f *~ */*~ .*~ */.*~ *.8 *.tmp */*.tmp *.pyc */*.pyc 3 | -------------------------------------------------------------------------------- /src/client.py: -------------------------------------------------------------------------------- 1 | import struct, select, errno, re, signal, time 2 | import compat.ssubprocess as ssubprocess 3 | import helpers, ssnet, ssh, ssyslog 4 | from ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper 5 | from helpers import * 6 | 7 | recvmsg = None 8 | try: 9 | # try getting recvmsg from python 10 | import socket as pythonsocket 11 | getattr(pythonsocket.socket,"recvmsg") 12 | socket = pythonsocket 13 | recvmsg = "python" 14 | except AttributeError: 15 | # try getting recvmsg from socket_ext library 16 | try: 17 | import socket_ext 18 | getattr(socket_ext.socket,"recvmsg") 19 | socket = socket_ext 20 | recvmsg = "socket_ext" 21 | except ImportError: 22 | import socket 23 | 24 | _extra_fd = os.open('/dev/null', os.O_RDONLY) 25 | 26 | def got_signal(signum, frame): 27 | log('exiting on signal %d\n' % signum) 28 | sys.exit(1) 29 | 30 | 31 | _pidname = None 32 | IP_TRANSPARENT = 19 33 | IP_ORIGDSTADDR = 20 34 | IP_RECVORIGDSTADDR = IP_ORIGDSTADDR 35 | SOL_IPV6 = 41 36 | IPV6_ORIGDSTADDR = 74 37 | IPV6_RECVORIGDSTADDR = IPV6_ORIGDSTADDR 38 | 39 | 40 | if recvmsg == "python": 41 | def recv_udp(listener, bufsize): 42 | debug3('Accept UDP python using recvmsg.\n') 43 | data, ancdata, msg_flags, srcip = listener.recvmsg(4096,socket.CMSG_SPACE(24)) 44 | dstip = None 45 | family = None 46 | for cmsg_level, cmsg_type, cmsg_data in ancdata: 47 | if cmsg_level == socket.SOL_IP and cmsg_type == IP_ORIGDSTADDR: 48 | family,port = struct.unpack('=HH', cmsg_data[0:4]) 49 | port = socket.htons(port) 50 | if family == socket.AF_INET: 51 | start = 4 52 | length = 4 53 | else: 54 | raise Fatal("Unsupported socket type '%s'"%family) 55 | ip = socket.inet_ntop(family, cmsg_data[start:start+length]) 56 | dstip = (ip, port) 57 | break 58 | elif cmsg_level == SOL_IPV6 and cmsg_type == IPV6_ORIGDSTADDR: 59 | family,port = struct.unpack('=HH', cmsg_data[0:4]) 60 | port = socket.htons(port) 61 | if family == socket.AF_INET6: 62 | 
start = 8 63 | length = 16 64 | else: 65 | raise Fatal("Unsupported socket type '%s'"%family) 66 | ip = socket.inet_ntop(family, cmsg_data[start:start+length]) 67 | dstip = (ip, port) 68 | break 69 | return (srcip, dstip, data) 70 | elif recvmsg == "socket_ext": 71 | def recv_udp(listener, bufsize): 72 | debug3('Accept UDP using socket_ext recvmsg.\n') 73 | srcip, data, adata, flags = listener.recvmsg((bufsize,),socket.CMSG_SPACE(24)) 74 | dstip = None 75 | family = None 76 | for a in adata: 77 | if a.cmsg_level == socket.SOL_IP and a.cmsg_type == IP_ORIGDSTADDR: 78 | family,port = struct.unpack('=HH', a.cmsg_data[0:4]) 79 | port = socket.htons(port) 80 | if family == socket.AF_INET: 81 | start = 4 82 | length = 4 83 | else: 84 | raise Fatal("Unsupported socket type '%s'"%family) 85 | ip = socket.inet_ntop(family, a.cmsg_data[start:start+length]) 86 | dstip = (ip, port) 87 | break 88 | elif a.cmsg_level == SOL_IPV6 and a.cmsg_type == IPV6_ORIGDSTADDR: 89 | family,port = struct.unpack('=HH', a.cmsg_data[0:4]) 90 | port = socket.htons(port) 91 | if family == socket.AF_INET6: 92 | start = 8 93 | length = 16 94 | else: 95 | raise Fatal("Unsupported socket type '%s'"%family) 96 | ip = socket.inet_ntop(family, a.cmsg_data[start:start+length]) 97 | dstip = (ip, port) 98 | break 99 | return (srcip, dstip, data[0]) 100 | else: 101 | def recv_udp(listener, bufsize): 102 | debug3('Accept UDP using recvfrom.\n') 103 | data, srcip = listener.recvfrom(bufsize) 104 | return (srcip, None, data) 105 | 106 | 107 | def check_daemon(pidfile): 108 | global _pidname 109 | _pidname = os.path.abspath(pidfile) 110 | try: 111 | oldpid = open(_pidname).read(1024) 112 | except IOError, e: 113 | if e.errno == errno.ENOENT: 114 | return # no pidfile, ok 115 | else: 116 | raise Fatal("can't read %s: %s" % (_pidname, e)) 117 | if not oldpid: 118 | os.unlink(_pidname) 119 | return # invalid pidfile, ok 120 | oldpid = int(oldpid.strip() or 0) 121 | if oldpid <= 0: 122 | os.unlink(_pidname) 123 | return # invalid pidfile, ok 124 | try: 125 | os.kill(oldpid, 0) 126 | except OSError, e: 127 | if e.errno == errno.ESRCH: 128 | os.unlink(_pidname) 129 | return # outdated pidfile, ok 130 | elif e.errno == errno.EPERM: 131 | pass 132 | else: 133 | raise 134 | raise Fatal("%s: sshuttle is already running (pid=%d)" 135 | % (_pidname, oldpid)) 136 | 137 | 138 | def daemonize(): 139 | if os.fork(): 140 | os._exit(0) 141 | os.setsid() 142 | if os.fork(): 143 | os._exit(0) 144 | 145 | outfd = os.open(_pidname, os.O_WRONLY|os.O_CREAT|os.O_EXCL, 0666) 146 | try: 147 | os.write(outfd, '%d\n' % os.getpid()) 148 | finally: 149 | os.close(outfd) 150 | os.chdir("/") 151 | 152 | # Normal exit when killed, or try/finally won't work and the pidfile won't 153 | # be deleted. 
154 | signal.signal(signal.SIGTERM, got_signal) 155 | 156 | si = open('/dev/null', 'r+') 157 | os.dup2(si.fileno(), 0) 158 | os.dup2(si.fileno(), 1) 159 | si.close() 160 | 161 | ssyslog.stderr_to_syslog() 162 | 163 | 164 | def daemon_cleanup(): 165 | try: 166 | os.unlink(_pidname) 167 | except OSError, e: 168 | if e.errno == errno.ENOENT: 169 | pass 170 | else: 171 | raise 172 | 173 | 174 | def original_dst(sock): 175 | try: 176 | SO_ORIGINAL_DST = 80 177 | SOCKADDR_MIN = 16 178 | sockaddr_in = sock.getsockopt(socket.SOL_IP, 179 | SO_ORIGINAL_DST, SOCKADDR_MIN) 180 | (proto, port, a,b,c,d) = struct.unpack('=HHBBBB', sockaddr_in[:8]) 181 | port = socket.htons(port) 182 | assert(proto == socket.AF_INET) 183 | ip = '%d.%d.%d.%d' % (a,b,c,d) 184 | return (ip,port) 185 | except socket.error, e: 186 | if e.args[0] == errno.ENOPROTOOPT: 187 | return sock.getsockname() 188 | raise 189 | 190 | 191 | class MultiListener: 192 | 193 | def __init__(self, type=socket.SOCK_STREAM, proto=0): 194 | self.v6 = socket.socket(socket.AF_INET6, type, proto) 195 | self.v4 = socket.socket(socket.AF_INET, type, proto) 196 | 197 | def setsockopt(self, level, optname, value): 198 | if self.v6: 199 | self.v6.setsockopt(level, optname, value) 200 | if self.v4: 201 | self.v4.setsockopt(level, optname, value) 202 | 203 | def add_handler(self, handlers, callback, method, mux): 204 | if self.v6: 205 | handlers.append(Handler([self.v6], lambda: callback(self.v6, method, mux, handlers))) 206 | if self.v4: 207 | handlers.append(Handler([self.v4], lambda: callback(self.v4, method, mux, handlers))) 208 | 209 | def listen(self, backlog): 210 | if self.v6: 211 | self.v6.listen(backlog) 212 | if self.v4: 213 | try: 214 | self.v4.listen(backlog) 215 | except socket.error, e: 216 | # on some systems v4 bind will fail if the v6 suceeded, 217 | # in this case the v6 socket will receive v4 too. 218 | if e.errno == errno.EADDRINUSE and self.v6: 219 | self.v4 = None 220 | else: 221 | raise e 222 | 223 | def bind(self, address_v6, address_v4): 224 | if address_v6 and self.v6: 225 | self.v6.bind(address_v6) 226 | else: 227 | self.v6 = None 228 | if address_v4 and self.v4: 229 | self.v4.bind(address_v4) 230 | else: 231 | self.v4 = None 232 | 233 | def print_listening(self, what): 234 | if self.v6: 235 | listenip = self.v6.getsockname() 236 | debug1('%s listening on %r.\n' % (what, listenip)) 237 | if self.v4: 238 | listenip = self.v4.getsockname() 239 | debug1('%s listening on %r.\n' % (what, listenip)) 240 | 241 | 242 | class FirewallClient: 243 | def __init__(self, port_v6, port_v4, subnets_include, subnets_exclude, dnsport_v6, dnsport_v4, method, udp): 244 | self.auto_nets = [] 245 | self.subnets_include = subnets_include 246 | self.subnets_exclude = subnets_exclude 247 | argvbase = ([sys.argv[1], sys.argv[0], sys.argv[1]] + 248 | ['-v'] * (helpers.verbose or 0) + 249 | ['--firewall', str(port_v6), str(port_v4), 250 | str(dnsport_v6), str(dnsport_v4), 251 | method, str(int(udp))]) 252 | if ssyslog._p: 253 | argvbase += ['--syslog'] 254 | argv_tries = [ 255 | ['sudo', '-p', '[local sudo] Password: '] + argvbase, 256 | ['su', '-c', ' '.join(argvbase)], 257 | argvbase 258 | ] 259 | 260 | # we can't use stdin/stdout=subprocess.PIPE here, as we normally would, 261 | # because stupid Linux 'su' requires that stdin be attached to a tty. 262 | # Instead, attach a *bidirectional* socket to its stdout, and use 263 | # that for talking in both directions. 
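    # What travels over this socket is a small line-oriented protocol shared
    # with the privileged firewall helper: the helper announces itself with a
    # "READY <method>" line; the client then sends "ROUTES" plus one subnet
    # per line followed by "GO" and waits for "STARTED"; "HOST name,ip" lines
    # pass along hostname/IP pairs learned from the server (see start() and
    # sethostip() below).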
264 | (s1,s2) = socket.socketpair() 265 | def setup(): 266 | # run in the child process 267 | s2.close() 268 | e = None 269 | if os.getuid() == 0: 270 | argv_tries = argv_tries[-1:] # last entry only 271 | for argv in argv_tries: 272 | try: 273 | if argv[0] == 'su': 274 | sys.stderr.write('[local su] ') 275 | self.p = ssubprocess.Popen(argv, stdout=s1, preexec_fn=setup) 276 | e = None 277 | break 278 | except OSError, e: 279 | pass 280 | self.argv = argv 281 | s1.close() 282 | self.pfile = s2.makefile('wb+') 283 | if e: 284 | log('Spawning firewall manager: %r\n' % self.argv) 285 | raise Fatal(e) 286 | line = self.pfile.readline() 287 | self.check() 288 | if line[0:5] != 'READY': 289 | raise Fatal('%r expected READY, got %r' % (self.argv, line)) 290 | self.method = line[6:-1] 291 | 292 | def check(self): 293 | rv = self.p.poll() 294 | if rv: 295 | raise Fatal('%r returned %d' % (self.argv, rv)) 296 | 297 | def start(self): 298 | self.pfile.write('ROUTES\n') 299 | for (family,ip,width) in self.subnets_include+self.auto_nets: 300 | self.pfile.write('%d,%d,0,%s\n' % (family, width, ip)) 301 | for (family,ip,width) in self.subnets_exclude: 302 | self.pfile.write('%d,%d,1,%s\n' % (family, width, ip)) 303 | self.pfile.write('GO\n') 304 | self.pfile.flush() 305 | line = self.pfile.readline() 306 | self.check() 307 | if line != 'STARTED\n': 308 | raise Fatal('%r expected STARTED, got %r' % (self.argv, line)) 309 | 310 | def sethostip(self, hostname, ip): 311 | assert(not re.search(r'[^-\w]', hostname)) 312 | assert(not re.search(r'[^0-9.]', ip)) 313 | self.pfile.write('HOST %s,%s\n' % (hostname, ip)) 314 | self.pfile.flush() 315 | 316 | def done(self): 317 | self.pfile.close() 318 | rv = self.p.wait() 319 | if rv: 320 | raise Fatal('cleanup: %r returned %d' % (self.argv, rv)) 321 | 322 | 323 | dnsreqs = {} 324 | udp_by_src = {} 325 | def expire_connections(now, mux): 326 | for chan,timeout in dnsreqs.items(): 327 | if timeout < now: 328 | debug3('expiring dnsreqs channel=%d\n' % chan) 329 | del mux.channels[chan] 330 | del dnsreqs[chan] 331 | debug3('Remaining DNS requests: %d\n' % len(dnsreqs)) 332 | for peer,(chan,timeout) in udp_by_src.items(): 333 | if timeout < now: 334 | debug3('expiring UDP channel channel=%d peer=%r\n' % (chan, peer)) 335 | mux.send(chan, ssnet.CMD_UDP_CLOSE, '') 336 | del mux.channels[chan] 337 | del udp_by_src[peer] 338 | debug3('Remaining UDP channels: %d\n' % len(udp_by_src)) 339 | 340 | 341 | def onaccept_tcp(listener, method, mux, handlers): 342 | global _extra_fd 343 | try: 344 | sock,srcip = listener.accept() 345 | except socket.error, e: 346 | if e.args[0] in [errno.EMFILE, errno.ENFILE]: 347 | debug1('Rejected incoming connection: too many open files!\n') 348 | # free up an fd so we can eat the connection 349 | os.close(_extra_fd) 350 | try: 351 | sock,srcip = listener.accept() 352 | sock.close() 353 | finally: 354 | _extra_fd = os.open('/dev/null', os.O_RDONLY) 355 | return 356 | else: 357 | raise 358 | if method == "tproxy": 359 | dstip = sock.getsockname(); 360 | else: 361 | dstip = original_dst(sock) 362 | debug1('Accept TCP: %s:%r -> %s:%r.\n' % (srcip[0],srcip[1], 363 | dstip[0],dstip[1])) 364 | if dstip[1] == sock.getsockname()[1] and islocal(dstip[0], sock.family): 365 | debug1("-- ignored: that's my address!\n") 366 | sock.close() 367 | return 368 | chan = mux.next_channel() 369 | if not chan: 370 | log('warning: too many open channels. 
Discarded connection.\n') 371 | sock.close() 372 | return 373 | mux.send(chan, ssnet.CMD_TCP_CONNECT, '%d,%s,%s' % (sock.family, dstip[0], dstip[1])) 374 | outwrap = MuxWrapper(mux, chan) 375 | handlers.append(Proxy(SockWrapper(sock, sock), outwrap)) 376 | expire_connections(time.time(), mux) 377 | 378 | 379 | def udp_done(chan, data, method, family, dstip): 380 | (src,srcport,data) = data.split(",",2) 381 | srcip = (src,int(srcport)) 382 | debug3('doing send from %r to %r\n' % (srcip,dstip,)) 383 | 384 | try: 385 | sender = socket.socket(family, socket.SOCK_DGRAM) 386 | sender.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 387 | sender.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) 388 | sender.bind(srcip) 389 | sender.sendto(data, dstip) 390 | sender.close() 391 | except socket.error, e: 392 | debug1('-- ignored socket error sending UDP data: %r\n'%e) 393 | 394 | 395 | def onaccept_udp(listener, method, mux, handlers): 396 | now = time.time() 397 | srcip, dstip, data = recv_udp(listener, 4096) 398 | if not dstip: 399 | debug1("-- ignored UDP from %r: couldn't determine destination IP address\n" % (srcip,)) 400 | return 401 | debug1('Accept UDP: %r -> %r.\n' % (srcip,dstip,)) 402 | if srcip in udp_by_src: 403 | chan,timeout = udp_by_src[srcip] 404 | else: 405 | chan = mux.next_channel() 406 | mux.channels[chan] = lambda cmd,data: udp_done(chan, data, method, listener.family, dstip=srcip) 407 | mux.send(chan, ssnet.CMD_UDP_OPEN, listener.family) 408 | udp_by_src[srcip] = chan,now+30 409 | 410 | hdr = "%s,%r,"%(dstip[0], dstip[1]) 411 | mux.send(chan, ssnet.CMD_UDP_DATA, hdr+data) 412 | 413 | expire_connections(now, mux) 414 | 415 | 416 | def dns_done(chan, data, method, sock, srcip, dstip, mux): 417 | debug3('dns_done: channel=%d src=%r dst=%r\n' % (chan,srcip,dstip)) 418 | del mux.channels[chan] 419 | del dnsreqs[chan] 420 | if method == "tproxy": 421 | debug3('doing send from %r to %r\n' % (srcip,dstip,)) 422 | sender = socket.socket(sock.family, socket.SOCK_DGRAM) 423 | sender.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 424 | sender.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) 425 | sender.bind(srcip) 426 | sender.sendto(data, dstip) 427 | sender.close() 428 | else: 429 | debug3('doing sendto %r\n' % (dstip,)) 430 | sock.sendto(data, dstip) 431 | 432 | 433 | def ondns(listener, method, mux, handlers): 434 | now = time.time() 435 | srcip, dstip, data = recv_udp(listener, 4096) 436 | if method == "tproxy" and not dstip: 437 | debug1("-- ignored UDP from %r: couldn't determine destination IP address\n" % (srcip,)) 438 | return 439 | debug1('DNS request from %r to %r: %d bytes\n' % (srcip,dstip,len(data))) 440 | chan = mux.next_channel() 441 | dnsreqs[chan] = now+30 442 | mux.send(chan, ssnet.CMD_DNS_REQ, data) 443 | mux.channels[chan] = lambda cmd,data: dns_done(chan, data, method, listener, srcip=dstip, dstip=srcip, mux=mux) 444 | expire_connections(now, mux) 445 | 446 | 447 | def _main(tcp_listener, udp_listener, fw, ssh_cmd, remotename, python, latency_control, 448 | dns_listener, method, seed_hosts, auto_nets, 449 | syslog, daemon): 450 | handlers = [] 451 | if helpers.verbose >= 1: 452 | helpers.logprefix = 'c : ' 453 | else: 454 | helpers.logprefix = 'client: ' 455 | debug1('connecting to server...\n') 456 | 457 | try: 458 | (serverproc, serversock) = ssh.connect(ssh_cmd, remotename, python, 459 | stderr=ssyslog._p and ssyslog._p.stdin, 460 | options=dict(latency_control=latency_control, method=method)) 461 | except socket.error, e: 462 | if e.args[0] == errno.EPIPE: 
463 | raise Fatal("failed to establish ssh session (1)") 464 | else: 465 | raise 466 | mux = Mux(serversock, serversock) 467 | handlers.append(mux) 468 | 469 | expected = 'SSHUTTLE0001' 470 | 471 | try: 472 | v = 'x' 473 | while v and v != '\0': 474 | v = serversock.recv(1) 475 | v = 'x' 476 | while v and v != '\0': 477 | v = serversock.recv(1) 478 | initstring = serversock.recv(len(expected)) 479 | except socket.error, e: 480 | if e.args[0] == errno.ECONNRESET: 481 | raise Fatal("failed to establish ssh session (2)") 482 | else: 483 | raise 484 | 485 | rv = serverproc.poll() 486 | if rv: 487 | raise Fatal('server died with error code %d' % rv) 488 | 489 | if initstring != expected: 490 | raise Fatal('expected server init string %r; got %r' 491 | % (expected, initstring)) 492 | debug1('connected.\n') 493 | print 'Connected.' 494 | sys.stdout.flush() 495 | if daemon: 496 | daemonize() 497 | log('daemonizing (%s).\n' % _pidname) 498 | elif syslog: 499 | debug1('switching to syslog.\n') 500 | ssyslog.stderr_to_syslog() 501 | 502 | def onroutes(routestr): 503 | if auto_nets: 504 | for line in routestr.strip().split('\n'): 505 | (family,ip,width) = line.split(',', 2) 506 | fw.auto_nets.append((family,ip,int(width))) 507 | 508 | # we definitely want to do this *after* starting ssh, or we might end 509 | # up intercepting the ssh connection! 510 | # 511 | # Moreover, now that we have the --auto-nets option, we have to wait 512 | # for the server to send us that message anyway. Even if we haven't 513 | # set --auto-nets, we might as well wait for the message first, then 514 | # ignore its contents. 515 | mux.got_routes = None 516 | fw.start() 517 | mux.got_routes = onroutes 518 | 519 | def onhostlist(hostlist): 520 | debug2('got host list: %r\n' % hostlist) 521 | for line in hostlist.strip().split(): 522 | if line: 523 | name,ip = line.split(',', 1) 524 | fw.sethostip(name, ip) 525 | mux.got_host_list = onhostlist 526 | 527 | tcp_listener.add_handler(handlers, onaccept_tcp, method, mux) 528 | 529 | if udp_listener: 530 | udp_listener.add_handler(handlers, onaccept_udp, method, mux) 531 | 532 | if dns_listener: 533 | dns_listener.add_handler(handlers, ondns, method, mux) 534 | 535 | if seed_hosts != None: 536 | debug1('seed_hosts: %r\n' % seed_hosts) 537 | mux.send(0, ssnet.CMD_HOST_REQ, '\n'.join(seed_hosts)) 538 | 539 | while 1: 540 | rv = serverproc.poll() 541 | if rv: 542 | raise Fatal('server died with error code %d' % rv) 543 | 544 | ssnet.runonce(handlers, mux) 545 | if latency_control: 546 | mux.check_fullness() 547 | mux.callback() 548 | 549 | 550 | def main(listenip_v6, listenip_v4, 551 | ssh_cmd, remotename, python, latency_control, dns, 552 | method, seed_hosts, auto_nets, 553 | subnets_include, subnets_exclude, syslog, daemon, pidfile): 554 | 555 | if syslog: 556 | ssyslog.start_syslog() 557 | if daemon: 558 | try: 559 | check_daemon(pidfile) 560 | except Fatal, e: 561 | log("%s\n" % e) 562 | return 5 563 | debug1('Starting sshuttle proxy.\n') 564 | 565 | if recvmsg is not None: 566 | debug1("recvmsg %s support enabled.\n"%recvmsg) 567 | 568 | if method == "tproxy": 569 | if recvmsg is not None: 570 | debug1("tproxy UDP support enabled.\n") 571 | udp = True 572 | else: 573 | debug1("tproxy UDP support requires recvmsg function.\n") 574 | udp = False 575 | if dns and recvmsg is None: 576 | debug1("tproxy DNS support requires recvmsg function.\n") 577 | dns = False 578 | else: 579 | debug1("UDP support requires tproxy; disabling UDP.\n") 580 | udp = False 581 | 582 | if listenip_v6 and 
listenip_v6[1] and listenip_v4 and listenip_v4[1]: 583 | # if both ports given, no need to search for a spare port 584 | ports = [ 0, ] 585 | else: 586 | # if at least one port missing, we have to search 587 | ports = xrange(12300,9000,-1) 588 | 589 | # search for free ports and try to bind 590 | last_e = None 591 | redirectport_v6 = 0 592 | redirectport_v4 = 0 593 | bound = False 594 | debug2('Binding redirector:') 595 | for port in ports: 596 | debug2(' %d' % port) 597 | tcp_listener = MultiListener() 598 | tcp_listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 599 | 600 | if udp: 601 | udp_listener = MultiListener(socket.SOCK_DGRAM) 602 | udp_listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 603 | else: 604 | udp_listener = None 605 | 606 | if listenip_v6 and listenip_v6[1]: 607 | lv6 = listenip_v6 608 | redirectport_v6 = lv6[1] 609 | elif listenip_v6: 610 | lv6 = (listenip_v6[0],port) 611 | redirectport_v6 = port 612 | else: 613 | lv6 = None 614 | redirectport_v6 = 0 615 | 616 | if listenip_v4 and listenip_v4[1]: 617 | lv4 = listenip_v4 618 | redirectport_v4 = lv4[1] 619 | elif listenip_v4: 620 | lv4 = (listenip_v4[0],port) 621 | redirectport_v4 = port 622 | else: 623 | lv4 = None 624 | redirectport_v4 = 0 625 | 626 | try: 627 | tcp_listener.bind(lv6, lv4) 628 | if udp_listener: 629 | udp_listener.bind(lv6, lv4) 630 | bound = True 631 | break 632 | except socket.error, e: 633 | if e.errno == errno.EADDRINUSE: 634 | last_e = e 635 | else: 636 | raise e 637 | debug2('\n') 638 | if not bound: 639 | assert(last_e) 640 | raise last_e 641 | tcp_listener.listen(10) 642 | tcp_listener.print_listening("TCP redirector") 643 | if udp_listener: 644 | udp_listener.print_listening("UDP redirector") 645 | 646 | bound = False 647 | if dns: 648 | # search for spare port for DNS 649 | debug2('Binding DNS:') 650 | ports = xrange(12300,9000,-1) 651 | for port in ports: 652 | debug2(' %d' % port) 653 | dns_listener = MultiListener(socket.SOCK_DGRAM) 654 | 655 | if listenip_v6: 656 | lv6 = (listenip_v6[0],port) 657 | dnsport_v6 = port 658 | else: 659 | lv6 = None 660 | dnsport_v6 = 0 661 | 662 | if listenip_v4: 663 | lv4 = (listenip_v4[0],port) 664 | dnsport_v4 = port 665 | else: 666 | lv4 = None 667 | dnsport_v4 = 0 668 | 669 | try: 670 | dns_listener.bind(lv6, lv4) 671 | bound = True 672 | break 673 | except socket.error, e: 674 | if e.errno == errno.EADDRINUSE: 675 | last_e = e 676 | else: 677 | raise e 678 | debug2('\n') 679 | dns_listener.print_listening("DNS") 680 | if not bound: 681 | assert(last_e) 682 | raise last_e 683 | else: 684 | dnsport_v6 = 0 685 | dnsport_v4 = 0 686 | dns_listener = None 687 | 688 | fw = FirewallClient(redirectport_v6, redirectport_v4, subnets_include, subnets_exclude, dnsport_v6, dnsport_v4, method, udp) 689 | 690 | if fw.method == "tproxy": 691 | tcp_listener.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) 692 | if udp_listener: 693 | udp_listener.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) 694 | if udp_listener.v4 is not None: 695 | udp_listener.v4.setsockopt(socket.SOL_IP, IP_RECVORIGDSTADDR, 1) 696 | if udp_listener.v6 is not None: 697 | udp_listener.v6.setsockopt(SOL_IPV6, IPV6_RECVORIGDSTADDR, 1) 698 | if dns_listener: 699 | dns_listener.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) 700 | if dns_listener.v4 is not None: 701 | dns_listener.v4.setsockopt(socket.SOL_IP, IP_RECVORIGDSTADDR, 1) 702 | if dns_listener.v6 is not None: 703 | dns_listener.v6.setsockopt(SOL_IPV6, IPV6_RECVORIGDSTADDR, 1) 704 | 705 | try: 706 | return _main(tcp_listener, 
udp_listener, fw, ssh_cmd, remotename, 707 | python, latency_control, dns_listener, 708 | fw.method, seed_hosts, auto_nets, syslog, 709 | daemon) 710 | finally: 711 | try: 712 | if daemon: 713 | # it's not our child anymore; can't waitpid 714 | fw.p.returncode = 0 715 | fw.done() 716 | finally: 717 | if daemon: 718 | daemon_cleanup() 719 | -------------------------------------------------------------------------------- /src/compat/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brianmay/sshuttle/b3009b8f434f35d9e50550892bef1970264d7a5f/src/compat/__init__.py -------------------------------------------------------------------------------- /src/default.8.do: -------------------------------------------------------------------------------- 1 | exec >&2 2 | if pandoc /dev/null; then 3 | pandoc -s -r markdown -w man -o $3 $1.md 4 | else 5 | echo "Warning: pandoc not installed; can't generate manpages." 6 | redo-always 7 | fi 8 | -------------------------------------------------------------------------------- /src/do: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # A minimal alternative to djb redo that doesn't support incremental builds. 4 | # For the full version, visit http://github.com/apenwarr/redo 5 | # 6 | # The author disclaims copyright to this source file and hereby places it in 7 | # the public domain. (2010 12 14) 8 | # 9 | 10 | # By default, no output coloring. 11 | green="" 12 | bold="" 13 | plain="" 14 | 15 | if [ -n "$TERM" -a "$TERM" != "dumb" ] && tty <&2 >/dev/null 2>&1; then 16 | green="$(printf '\033[32m')" 17 | bold="$(printf '\033[1m')" 18 | plain="$(printf '\033[m')" 19 | fi 20 | 21 | _dirsplit() 22 | { 23 | base=${1##*/} 24 | dir=${1%$base} 25 | } 26 | 27 | dirname() 28 | ( 29 | _dirsplit "$1" 30 | dir=${dir%/} 31 | echo "${dir:-.}" 32 | ) 33 | 34 | _dirsplit "$0" 35 | export REDO=$(cd "${dir:-.}" && echo "$PWD/$base") 36 | 37 | DO_TOP= 38 | if [ -z "$DO_BUILT" ]; then 39 | DO_TOP=1 40 | [ -n "$*" ] || set all # only toplevel redo has a default target 41 | export DO_BUILT=$PWD/.do_built 42 | : >>"$DO_BUILT" 43 | echo "Removing previously built files..." >&2 44 | sort -u "$DO_BUILT" | tee "$DO_BUILT.new" | 45 | while read f; do printf "%s\0%s.did\0" "$f" "$f"; done | 46 | xargs -0 rm -f 2>/dev/null 47 | mv "$DO_BUILT.new" "$DO_BUILT" 48 | DO_PATH=$DO_BUILT.dir 49 | export PATH=$DO_PATH:$PATH 50 | rm -rf "$DO_PATH" 51 | mkdir "$DO_PATH" 52 | for d in redo redo-ifchange; do 53 | ln -s "$REDO" "$DO_PATH/$d"; 54 | done 55 | [ -e /bin/true ] && TRUE=/bin/true || TRUE=/usr/bin/true 56 | for d in redo-ifcreate redo-stamp redo-always; do 57 | ln -s $TRUE "$DO_PATH/$d"; 58 | done 59 | fi 60 | 61 | 62 | _find_dofile_pwd() 63 | { 64 | dofile=default.$1.do 65 | while :; do 66 | dofile=default.${dofile#default.*.} 67 | [ -e "$dofile" -o "$dofile" = default.do ] && break 68 | done 69 | ext=${dofile#default} 70 | ext=${ext%.do} 71 | base=${1%$ext} 72 | } 73 | 74 | 75 | _find_dofile() 76 | { 77 | local prefix= 78 | while :; do 79 | _find_dofile_pwd "$1" 80 | [ -e "$dofile" ] && break 81 | [ "$PWD" = "/" ] && break 82 | target=${PWD##*/}/$target 83 | tmp=${PWD##*/}/$tmp 84 | prefix=${PWD##*/}/$prefix 85 | cd .. 
86 | done 87 | base=$prefix$base 88 | } 89 | 90 | 91 | _run_dofile() 92 | { 93 | export DO_DEPTH="$DO_DEPTH " 94 | export REDO_TARGET=$PWD/$target 95 | local line1 96 | set -e 97 | read line1 <"$PWD/$dofile" 98 | cmd=${line1#"#!/"} 99 | if [ "$cmd" != "$line1" ]; then 100 | /$cmd "$PWD/$dofile" "$@" >"$tmp.tmp2" 101 | else 102 | :; . "$PWD/$dofile" >"$tmp.tmp2" 103 | fi 104 | } 105 | 106 | 107 | _do() 108 | { 109 | local dir=$1 target=$2 tmp=$3 110 | if [ ! -e "$target" ] || [ -d "$target" -a ! -e "$target.did" ]; then 111 | printf '%sdo %s%s%s%s\n' \ 112 | "$green" "$DO_DEPTH" "$bold" "$dir$target" "$plain" >&2 113 | echo "$PWD/$target" >>"$DO_BUILT" 114 | dofile=$target.do 115 | base=$target 116 | ext= 117 | [ -e "$target.do" ] || _find_dofile "$target" 118 | if [ ! -e "$dofile" ]; then 119 | echo "do: $target: no .do file" >&2 120 | return 1 121 | fi 122 | [ ! -e "$DO_BUILT" ] || [ ! -d "$(dirname "$target")" ] || 123 | : >>"$target.did" 124 | ( _run_dofile "$base" "$ext" "$tmp.tmp" ) 125 | rv=$? 126 | if [ $rv != 0 ]; then 127 | printf "do: %s%s\n" "$DO_DEPTH" \ 128 | "$dir$target: got exit code $rv" >&2 129 | rm -f "$tmp.tmp" "$tmp.tmp2" 130 | return $rv 131 | fi 132 | mv "$tmp.tmp" "$target" 2>/dev/null || 133 | ! test -s "$tmp.tmp2" || 134 | mv "$tmp.tmp2" "$target" 2>/dev/null 135 | rm -f "$tmp.tmp2" 136 | else 137 | echo "do $DO_DEPTH$target exists." >&2 138 | fi 139 | } 140 | 141 | 142 | # Make corrections for directories that don't actually exist yet. 143 | _dir_shovel() 144 | { 145 | local dir base 146 | xdir=$1 xbase=$2 xbasetmp=$2 147 | while [ ! -d "$xdir" -a -n "$xdir" ]; do 148 | _dirsplit "${xdir%/}" 149 | xbasetmp=${base}__$xbase 150 | xdir=$dir xbase=$base/$xbase 151 | echo "xbasetmp='$xbasetmp'" >&2 152 | done 153 | } 154 | 155 | 156 | redo() 157 | { 158 | for i in "$@"; do 159 | _dirsplit "$i" 160 | _dir_shovel "$dir" "$base" 161 | dir=$xdir base=$xbase basetmp=$xbasetmp 162 | ( cd "$dir" && _do "$dir" "$base" "$basetmp" ) || return 1 163 | done 164 | } 165 | 166 | 167 | set -e 168 | redo "$@" 169 | 170 | if [ -n "$DO_TOP" ]; then 171 | echo "Removing stamp files..." >&2 172 | [ ! 
-e "$DO_BUILT" ] || 173 | while read f; do printf "%s.did\0" "$f"; done <"$DO_BUILT" | 174 | xargs -0 rm -f 2>/dev/null 175 | fi 176 | -------------------------------------------------------------------------------- /src/firewall.py: -------------------------------------------------------------------------------- 1 | import re, errno, socket, select, struct 2 | import compat.ssubprocess as ssubprocess 3 | import helpers, ssyslog 4 | from helpers import * 5 | 6 | # python doesn't have a definition for this 7 | IPPROTO_DIVERT = 254 8 | 9 | 10 | def nonfatal(func, *args): 11 | try: 12 | func(*args) 13 | except Fatal, e: 14 | log('error: %s\n' % e) 15 | 16 | 17 | def ipt_chain_exists(family, table, name): 18 | if family == socket.AF_INET6: 19 | cmd = 'ip6tables' 20 | elif family == socket.AF_INET: 21 | cmd = 'iptables' 22 | else: 23 | raise Exception('Unsupported family "%s"'%family_to_string(family)) 24 | argv = [cmd, '-t', table, '-nL'] 25 | p = ssubprocess.Popen(argv, stdout = ssubprocess.PIPE) 26 | for line in p.stdout: 27 | if line.startswith('Chain %s ' % name): 28 | return True 29 | rv = p.wait() 30 | if rv: 31 | raise Fatal('%r returned %d' % (argv, rv)) 32 | 33 | 34 | def _ipt(family, table, *args): 35 | if family == socket.AF_INET6: 36 | argv = ['ip6tables', '-t', table] + list(args) 37 | elif family == socket.AF_INET: 38 | argv = ['iptables', '-t', table] + list(args) 39 | else: 40 | raise Exception('Unsupported family "%s"'%family_to_string(family)) 41 | debug1('>> %s\n' % ' '.join(argv)) 42 | rv = ssubprocess.call(argv) 43 | if rv: 44 | raise Fatal('%r returned %d' % (argv, rv)) 45 | 46 | 47 | _no_ttl_module = False 48 | def _ipt_ttl(family, *args): 49 | global _no_ttl_module 50 | if not _no_ttl_module: 51 | # we avoid infinite loops by generating server-side connections 52 | # with ttl 42. This makes the client side not recapture those 53 | # connections, in case client == server. 54 | try: 55 | argsplus = list(args) + ['-m', 'ttl', '!', '--ttl', '42'] 56 | _ipt(family, *argsplus) 57 | except Fatal: 58 | _ipt(family, *args) 59 | # we only get here if the non-ttl attempt succeeds 60 | log('sshuttle: warning: your iptables is missing ' 61 | 'the ttl module.\n') 62 | _no_ttl_module = True 63 | else: 64 | _ipt(family, *args) 65 | 66 | 67 | # We name the chain based on the transproxy port number so that it's possible 68 | # to run multiple copies of sshuttle at the same time. Of course, the 69 | # multiple copies shouldn't have overlapping subnets, or only the most- 70 | # recently-started one will win (because we use "-I OUTPUT 1" instead of 71 | # "-A OUTPUT"). 
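# --- illustrative sketch; not part of firewall.py ----------------------------
# Roughly what the nat method below boils down to for a hypothetical transproxy
# port 12300 and a single included subnet 192.168.0.0/16 (the real code drives
# iptables through _ipt/_ipt_ttl above rather than a shell):
#
#   iptables -t nat -N sshuttle-12300
#   iptables -t nat -F sshuttle-12300
#   iptables -t nat -I OUTPUT 1 -j sshuttle-12300
#   iptables -t nat -I PREROUTING 1 -j sshuttle-12300
#   iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 192.168.0.0/16 \
#       -p tcp --to-ports 12300 -m ttl ! --ttl 42
#
# The per-instance chain name is derived straight from the port, which is what
# lets several sshuttle copies coexist:
chain_name = 'sshuttle-%s' % 12300          # -> 'sshuttle-12300'
# ------------------------------------------------------------------------------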
72 | def do_iptables_nat(port, dnsport, family, subnets, udp): 73 | # only ipv4 supported with NAT 74 | if family != socket.AF_INET: 75 | raise Exception('Address family "%s" unsupported by nat method'%family_to_string(family)) 76 | if udp: 77 | raise Exception("UDP not supported by nat method") 78 | 79 | table = "nat" 80 | def ipt(*args): 81 | return _ipt(family, table, *args) 82 | def ipt_ttl(*args): 83 | return _ipt_ttl(family, table, *args) 84 | 85 | chain = 'sshuttle-%s' % port 86 | 87 | # basic cleanup/setup of chains 88 | if ipt_chain_exists(family, table, chain): 89 | nonfatal(ipt, '-D', 'OUTPUT', '-j', chain) 90 | nonfatal(ipt, '-D', 'PREROUTING', '-j', chain) 91 | nonfatal(ipt, '-F', chain) 92 | ipt('-X', chain) 93 | 94 | if subnets or dnsport: 95 | ipt('-N', chain) 96 | ipt('-F', chain) 97 | ipt('-I', 'OUTPUT', '1', '-j', chain) 98 | ipt('-I', 'PREROUTING', '1', '-j', chain) 99 | 100 | if subnets: 101 | # create new subnet entries. Note that we're sorting in a very 102 | # particular order: we need to go from most-specific (largest swidth) 103 | # to least-specific, and at any given level of specificity, we want 104 | # excludes to come first. That's why the columns are in such a non- 105 | # intuitive order. 106 | for f,swidth,sexclude,snet in sorted(subnets, key=lambda s: s[1], reverse=True): 107 | if sexclude: 108 | ipt('-A', chain, '-j', 'RETURN', 109 | '--dest', '%s/%s' % (snet,swidth), 110 | '-p', 'tcp') 111 | else: 112 | ipt_ttl('-A', chain, '-j', 'REDIRECT', 113 | '--dest', '%s/%s' % (snet,swidth), 114 | '-p', 'tcp', 115 | '--to-ports', str(port)) 116 | 117 | if dnsport: 118 | nslist = resolvconf_nameservers() 119 | for f,ip in filter(lambda i: i[0]==family, nslist): 120 | ipt_ttl('-A', chain, '-j', 'REDIRECT', 121 | '--dest', '%s/32' % ip, 122 | '-p', 'udp', 123 | '--dport', '53', 124 | '--to-ports', str(dnsport)) 125 | 126 | 127 | def do_iptables_tproxy(port, dnsport, family, subnets, udp): 128 | if family not in [socket.AF_INET, socket.AF_INET6]: 129 | raise Exception('Address family "%s" unsupported by tproxy method'%family_to_string(family)) 130 | 131 | table = "mangle" 132 | def ipt(*args): 133 | return _ipt(family, table, *args) 134 | def ipt_ttl(*args): 135 | return _ipt_ttl(family, table, *args) 136 | 137 | mark_chain = 'sshuttle-m-%s' % port 138 | tproxy_chain = 'sshuttle-t-%s' % port 139 | divert_chain = 'sshuttle-d-%s' % port 140 | 141 | # basic cleanup/setup of chains 142 | if ipt_chain_exists(family, table, mark_chain): 143 | ipt('-D', 'OUTPUT', '-j', mark_chain) 144 | ipt('-F', mark_chain) 145 | ipt('-X', mark_chain) 146 | 147 | if ipt_chain_exists(family, table, tproxy_chain): 148 | ipt('-D', 'PREROUTING', '-j', tproxy_chain) 149 | ipt('-F', tproxy_chain) 150 | ipt('-X', tproxy_chain) 151 | 152 | if ipt_chain_exists(family, table, divert_chain): 153 | ipt('-F', divert_chain) 154 | ipt('-X', divert_chain) 155 | 156 | if subnets or dnsport: 157 | ipt('-N', mark_chain) 158 | ipt('-F', mark_chain) 159 | ipt('-N', divert_chain) 160 | ipt('-F', divert_chain) 161 | ipt('-N', tproxy_chain) 162 | ipt('-F', tproxy_chain) 163 | ipt('-I', 'OUTPUT', '1', '-j', mark_chain) 164 | ipt('-I', 'PREROUTING', '1', '-j', tproxy_chain) 165 | ipt('-A', divert_chain, '-j', 'MARK', '--set-mark', '1') 166 | ipt('-A', divert_chain, '-j', 'ACCEPT') 167 | ipt('-A', tproxy_chain, '-m', 'socket', '-j', divert_chain, 168 | '-m', 'tcp', '-p', 'tcp') 169 | if subnets and udp: 170 | ipt('-A', tproxy_chain, '-m', 'socket', '-j', divert_chain, 171 | '-m', 'udp', '-p', 'udp') 172 | 173 | if 
dnsport: 174 | nslist = resolvconf_nameservers() 175 | for f,ip in filter(lambda i: i[0]==family, nslist): 176 | ipt('-A', mark_chain, '-j', 'MARK', '--set-mark', '1', 177 | '--dest', '%s/32' % ip, 178 | '-m', 'udp', '-p', 'udp', '--dport', '53') 179 | ipt('-A', tproxy_chain, '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', 180 | '--dest', '%s/32' % ip, 181 | '-m', 'udp', '-p', 'udp', '--dport', '53', 182 | '--on-port', str(dnsport)) 183 | 184 | if subnets: 185 | for f,swidth,sexclude,snet in sorted(subnets, key=lambda s: s[1], reverse=True): 186 | if sexclude: 187 | ipt('-A', mark_chain, '-j', 'RETURN', 188 | '--dest', '%s/%s' % (snet,swidth), 189 | '-m', 'tcp', '-p', 'tcp') 190 | ipt('-A', tproxy_chain, '-j', 'RETURN', 191 | '--dest', '%s/%s' % (snet,swidth), 192 | '-m', 'tcp', '-p', 'tcp') 193 | else: 194 | ipt('-A', mark_chain, '-j', 'MARK', '--set-mark', '1', 195 | '--dest', '%s/%s' % (snet,swidth), 196 | '-m', 'tcp', '-p', 'tcp') 197 | ipt('-A', tproxy_chain, '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', 198 | '--dest', '%s/%s' % (snet,swidth), 199 | '-m', 'tcp', '-p', 'tcp', 200 | '--on-port', str(port)) 201 | 202 | if sexclude and udp: 203 | ipt('-A', mark_chain, '-j', 'RETURN', 204 | '--dest', '%s/%s' % (snet,swidth), 205 | '-m', 'udp', '-p', 'udp') 206 | ipt('-A', tproxy_chain, '-j', 'RETURN', 207 | '--dest', '%s/%s' % (snet,swidth), 208 | '-m', 'udp', '-p', 'udp') 209 | elif udp: 210 | ipt('-A', mark_chain, '-j', 'MARK', '--set-mark', '1', 211 | '--dest', '%s/%s' % (snet,swidth), 212 | '-m', 'udp', '-p', 'udp') 213 | ipt('-A', tproxy_chain, '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', 214 | '--dest', '%s/%s' % (snet,swidth), 215 | '-m', 'udp', '-p', 'udp', 216 | '--on-port', str(port)) 217 | 218 | 219 | def ipfw_rule_exists(n): 220 | argv = ['ipfw', 'list'] 221 | p = ssubprocess.Popen(argv, stdout = ssubprocess.PIPE) 222 | found = False 223 | for line in p.stdout: 224 | if line.startswith('%05d ' % n): 225 | if not ('ipttl 42' in line 226 | or ('skipto %d' % (n+1)) in line 227 | or 'check-state' in line): 228 | log('non-sshuttle ipfw rule: %r\n' % line.strip()) 229 | raise Fatal('non-sshuttle ipfw rule #%d already exists!' 
% n) 230 | found = True 231 | rv = p.wait() 232 | if rv: 233 | raise Fatal('%r returned %d' % (argv, rv)) 234 | return found 235 | 236 | 237 | _oldctls = {} 238 | def _fill_oldctls(prefix): 239 | argv = ['sysctl', prefix] 240 | p = ssubprocess.Popen(argv, stdout = ssubprocess.PIPE) 241 | for line in p.stdout: 242 | assert(line[-1] == '\n') 243 | (k,v) = line[:-1].split(': ', 1) 244 | _oldctls[k] = v 245 | rv = p.wait() 246 | if rv: 247 | raise Fatal('%r returned %d' % (argv, rv)) 248 | if not line: 249 | raise Fatal('%r returned no data' % (argv,)) 250 | 251 | 252 | def _sysctl_set(name, val): 253 | argv = ['sysctl', '-w', '%s=%s' % (name, val)] 254 | debug1('>> %s\n' % ' '.join(argv)) 255 | return ssubprocess.call(argv, stdout = open('/dev/null', 'w')) 256 | 257 | 258 | _changedctls = [] 259 | def sysctl_set(name, val, permanent=False): 260 | PREFIX = 'net.inet.ip' 261 | assert(name.startswith(PREFIX + '.')) 262 | val = str(val) 263 | if not _oldctls: 264 | _fill_oldctls(PREFIX) 265 | if not (name in _oldctls): 266 | debug1('>> No such sysctl: %r\n' % name) 267 | return False 268 | oldval = _oldctls[name] 269 | if val != oldval: 270 | rv = _sysctl_set(name, val) 271 | if rv==0 and permanent: 272 | debug1('>> ...saving permanently in /etc/sysctl.conf\n') 273 | f = open('/etc/sysctl.conf', 'a') 274 | f.write('\n' 275 | '# Added by sshuttle\n' 276 | '%s=%s\n' % (name, val)) 277 | f.close() 278 | else: 279 | _changedctls.append(name) 280 | return True 281 | 282 | 283 | def _udp_unpack(p): 284 | src = (socket.inet_ntoa(p[12:16]), struct.unpack('!H', p[20:22])[0]) 285 | dst = (socket.inet_ntoa(p[16:20]), struct.unpack('!H', p[22:24])[0]) 286 | return src, dst 287 | 288 | 289 | def _udp_repack(p, src, dst): 290 | addrs = socket.inet_aton(src[0]) + socket.inet_aton(dst[0]) 291 | ports = struct.pack('!HH', src[1], dst[1]) 292 | return p[:12] + addrs + ports + p[24:] 293 | 294 | 295 | _real_dns_server = [None] 296 | def _handle_diversion(divertsock, dnsport): 297 | p,tag = divertsock.recvfrom(4096) 298 | src,dst = _udp_unpack(p) 299 | debug3('got diverted packet from %r to %r\n' % (src, dst)) 300 | if dst[1] == 53: 301 | # outgoing DNS 302 | debug3('...packet is a DNS request.\n') 303 | _real_dns_server[0] = dst 304 | dst = ('127.0.0.1', dnsport) 305 | elif src[1] == dnsport: 306 | if islocal(src[0]): 307 | debug3('...packet is a DNS response.\n') 308 | src = _real_dns_server[0] 309 | else: 310 | log('weird?! 
unexpected divert from %r to %r\n' % (src, dst)) 311 | assert(0) 312 | newp = _udp_repack(p, src, dst) 313 | divertsock.sendto(newp, tag) 314 | 315 | 316 | def ipfw(*args): 317 | argv = ['ipfw', '-q'] + list(args) 318 | debug1('>> %s\n' % ' '.join(argv)) 319 | rv = ssubprocess.call(argv) 320 | if rv: 321 | raise Fatal('%r returned %d' % (argv, rv)) 322 | 323 | 324 | def do_ipfw(port, dnsport, family, subnets, udp): 325 | # IPv6 not supported 326 | if family not in [socket.AF_INET, ]: 327 | raise Exception('Address family "%s" unsupported by ipfw method'%family_to_string(family)) 328 | if udp: 329 | raise Exception("UDP not supported by ipfw method") 330 | 331 | sport = str(port) 332 | xsport = str(port+1) 333 | 334 | # cleanup any existing rules 335 | if ipfw_rule_exists(port): 336 | ipfw('delete', sport) 337 | 338 | while _changedctls: 339 | name = _changedctls.pop() 340 | oldval = _oldctls[name] 341 | _sysctl_set(name, oldval) 342 | 343 | if subnets or dnsport: 344 | sysctl_set('net.inet.ip.fw.enable', 1) 345 | changed = sysctl_set('net.inet.ip.scopedroute', 0, permanent=True) 346 | if changed: 347 | log("\n" 348 | " WARNING: ONE-TIME NETWORK DISRUPTION:\n" 349 | " =====================================\n" 350 | "sshuttle has changed a MacOS kernel setting to work around\n" 351 | "a bug in MacOS 10.6. This will cause your network to drop\n" 352 | "within 5-10 minutes unless you restart your network\n" 353 | "interface (change wireless networks or unplug/plug the\n" 354 | "ethernet port) NOW, then restart sshuttle. The fix is\n" 355 | "permanent; you only have to do this once.\n\n") 356 | sys.exit(1) 357 | 358 | ipfw('add', sport, 'check-state', 'ip', 359 | 'from', 'any', 'to', 'any') 360 | 361 | if subnets: 362 | # create new subnet entries 363 | for f,swidth,sexclude,snet in sorted(subnets, key=lambda s: s[1], reverse=True): 364 | if sexclude: 365 | ipfw('add', sport, 'skipto', xsport, 366 | 'log', 'tcp', 367 | 'from', 'any', 'to', '%s/%s' % (snet,swidth)) 368 | else: 369 | ipfw('add', sport, 'fwd', '127.0.0.1,%d' % port, 370 | 'log', 'tcp', 371 | 'from', 'any', 'to', '%s/%s' % (snet,swidth), 372 | 'not', 'ipttl', '42', 'keep-state', 'setup') 373 | 374 | # This part is much crazier than it is on Linux, because MacOS (at least 375 | # 10.6, and probably other versions, and maybe FreeBSD too) doesn't 376 | # correctly fixup the dstip/dstport for UDP packets when it puts them 377 | # through a 'fwd' rule. It also doesn't fixup the srcip/srcport in the 378 | # response packet. In Linux iptables, all that happens magically for us, 379 | # so we just redirect the packets and relax. 380 | # 381 | # On MacOS, we have to fix the ports ourselves. For that, we use a 382 | # 'divert' socket, which receives raw packets and lets us mangle them. 383 | # 384 | # Here's how it works. Let's say the local DNS server is 1.1.1.1:53, 385 | # and the remote DNS server is 2.2.2.2:53, and the local transproxy port 386 | # is 10.0.0.1:12300, and a client machine is making a request from 387 | # 10.0.0.5:9999. We see a packet like this: 388 | # 10.0.0.5:9999 -> 1.1.1.1:53 389 | # Since the destip:port matches one of our local nameservers, it will 390 | # match a 'fwd' rule, thus grabbing it on the local machine. However, 391 | # the local kernel will then see a packet addressed to *:53 and 392 | # not know what to do with it; there's nobody listening on port 53. 
Thus, 393 | # we divert it, rewriting it into this: 394 | # 10.0.0.5:9999 -> 10.0.0.1:12300 395 | # This gets proxied out to the server, which sends it to 2.2.2.2:53, 396 | # and the answer comes back, and the proxy sends it back out like this: 397 | # 10.0.0.1:12300 -> 10.0.0.5:9999 398 | # But that's wrong! The original machine expected an answer from 399 | # 1.1.1.1:53, so we have to divert the *answer* and rewrite it: 400 | # 1.1.1.1:53 -> 10.0.0.5:9999 401 | # 402 | # See? Easy stuff. 403 | if dnsport: 404 | divertsock = socket.socket(socket.AF_INET, socket.SOCK_RAW, 405 | IPPROTO_DIVERT) 406 | divertsock.bind(('0.0.0.0', port)) # IP field is ignored 407 | 408 | nslist = resolvconf_nameservers() 409 | for f,ip in filter(lambda i: i[0]==family, nslist): 410 | # relabel and then catch outgoing DNS requests 411 | ipfw('add', sport, 'divert', sport, 412 | 'log', 'udp', 413 | 'from', 'any', 'to', '%s/32' % ip, '53', 414 | 'not', 'ipttl', '42') 415 | # relabel DNS responses 416 | ipfw('add', sport, 'divert', sport, 417 | 'log', 'udp', 418 | 'from', 'any', str(dnsport), 'to', 'any', 419 | 'not', 'ipttl', '42') 420 | 421 | def do_wait(): 422 | while 1: 423 | r,w,x = select.select([sys.stdin, divertsock], [], []) 424 | if divertsock in r: 425 | _handle_diversion(divertsock, dnsport) 426 | if sys.stdin in r: 427 | return 428 | else: 429 | do_wait = None 430 | 431 | return do_wait 432 | 433 | 434 | def program_exists(name): 435 | paths = (os.getenv('PATH') or os.defpath).split(os.pathsep) 436 | for p in paths: 437 | fn = '%s/%s' % (p, name) 438 | if os.path.exists(fn): 439 | return not os.path.isdir(fn) and os.access(fn, os.X_OK) 440 | 441 | 442 | hostmap = {} 443 | def rewrite_etc_hosts(port): 444 | HOSTSFILE='/etc/hosts' 445 | BAKFILE='%s.sbak' % HOSTSFILE 446 | APPEND='# sshuttle-firewall-%d AUTOCREATED' % port 447 | old_content = '' 448 | st = None 449 | try: 450 | old_content = open(HOSTSFILE).read() 451 | st = os.stat(HOSTSFILE) 452 | except IOError, e: 453 | if e.errno == errno.ENOENT: 454 | pass 455 | else: 456 | raise 457 | if old_content.strip() and not os.path.exists(BAKFILE): 458 | os.link(HOSTSFILE, BAKFILE) 459 | tmpname = "%s.%d.tmp" % (HOSTSFILE, port) 460 | f = open(tmpname, 'w') 461 | for line in old_content.rstrip().split('\n'): 462 | if line.find(APPEND) >= 0: 463 | continue 464 | f.write('%s\n' % line) 465 | for (name,ip) in sorted(hostmap.items()): 466 | f.write('%-30s %s\n' % ('%s %s' % (ip,name), APPEND)) 467 | f.close() 468 | 469 | if st: 470 | os.chown(tmpname, st.st_uid, st.st_gid) 471 | os.chmod(tmpname, st.st_mode) 472 | else: 473 | os.chown(tmpname, 0, 0) 474 | os.chmod(tmpname, 0644) 475 | os.rename(tmpname, HOSTSFILE) 476 | 477 | 478 | def restore_etc_hosts(port): 479 | global hostmap 480 | hostmap = {} 481 | rewrite_etc_hosts(port) 482 | 483 | 484 | # This is some voodoo for setting up the kernel's transparent 485 | # proxying stuff. If subnets is empty, we just delete our sshuttle rules; 486 | # otherwise we delete it, then make them from scratch. 487 | # 488 | # This code is supposed to clean up after itself by deleting its rules on 489 | # exit. In case that fails, it's not the end of the world; future runs will 490 | # supercede it in the transproxy list, at least, so the leftover rules 491 | # are hopefully harmless. 
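# --- illustrative sketch; not part of firewall.py ----------------------------
# A worked example of the header surgery done by _udp_unpack/_udp_repack and
# _handle_diversion above.  _fake_udp_packet is a made-up helper that builds a
# 20-byte IPv4 header (no options) followed by an 8-byte UDP header, i.e. the
# exact layout those functions index into:
import socket, struct

def _fake_udp_packet(src, dst, payload='hellodns'):
    ip_hdr = ('\x45\x00' + struct.pack('!H', 28 + len(payload))  # ver/IHL, TOS, total len
              + '\x00' * 4                       # identification, flags/fragment
              + '\x40\x11\x00\x00'               # TTL 64, protocol UDP, checksum
              + socket.inet_aton(src[0])         # bytes 12-15: source address
              + socket.inet_aton(dst[0]))        # bytes 16-19: destination address
    udp_hdr = struct.pack('!HHHH', src[1], dst[1], 8 + len(payload), 0)
    return ip_hdr + udp_hdr + payload

# p = _fake_udp_packet(('10.0.0.5', 9999), ('1.1.1.1', 53))
# _udp_unpack(p)  ->  (('10.0.0.5', 9999), ('1.1.1.1', 53))
# _udp_repack(p, ('10.0.0.5', 9999), ('127.0.0.1', 12300)) rewrites only the
# destination, which is what _handle_diversion does to outgoing DNS requests
# before re-injecting them with divertsock.sendto().
# ------------------------------------------------------------------------------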
492 | def main(port_v6, port_v4, dnsport_v6, dnsport_v4, method, udp, syslog): 493 | assert(port_v6 >= 0) 494 | assert(port_v6 <= 65535) 495 | assert(port_v4 >= 0) 496 | assert(port_v4 <= 65535) 497 | assert(dnsport_v6 >= 0) 498 | assert(dnsport_v6 <= 65535) 499 | assert(dnsport_v4 >= 0) 500 | assert(dnsport_v4 <= 65535) 501 | 502 | if os.getuid() != 0: 503 | raise Fatal('you must be root (or enable su/sudo) to set the firewall') 504 | 505 | if method == "auto": 506 | if program_exists('ipfw'): 507 | method = "ipfw" 508 | elif program_exists('iptables'): 509 | method = "nat" 510 | else: 511 | raise Fatal("can't find either ipfw or iptables; check your PATH") 512 | 513 | if method == "nat": 514 | do_it = do_iptables_nat 515 | elif method == "tproxy": 516 | do_it = do_iptables_tproxy 517 | elif method == "ipfw": 518 | do_it = do_ipfw 519 | else: 520 | raise Exception('Unknown method "%s"'%method) 521 | 522 | # because of limitations of the 'su' command, the *real* stdin/stdout 523 | # are both attached to stdout initially. Clone stdout into stdin so we 524 | # can read from it. 525 | os.dup2(1, 0) 526 | 527 | if syslog: 528 | ssyslog.start_syslog() 529 | ssyslog.stderr_to_syslog() 530 | 531 | debug1('firewall manager ready method %s.\n'%method) 532 | sys.stdout.write('READY %s\n'%method) 533 | sys.stdout.flush() 534 | 535 | # ctrl-c shouldn't be passed along to me. When the main sshuttle dies, 536 | # I'll die automatically. 537 | os.setsid() 538 | 539 | # we wait until we get some input before creating the rules. That way, 540 | # sshuttle can launch us as early as possible (and get sudo password 541 | # authentication as early in the startup process as possible). 542 | line = sys.stdin.readline(128) 543 | if not line: 544 | return # parent died; nothing to do 545 | 546 | subnets = [] 547 | if line != 'ROUTES\n': 548 | raise Fatal('firewall: expected ROUTES but got %r' % line) 549 | while 1: 550 | line = sys.stdin.readline(128) 551 | if not line: 552 | raise Fatal('firewall: expected route but got %r' % line) 553 | elif line == 'GO\n': 554 | break 555 | try: 556 | (family,width,exclude,ip) = line.strip().split(',', 3) 557 | except: 558 | raise Fatal('firewall: expected route or GO but got %r' % line) 559 | subnets.append((int(family), int(width), bool(int(exclude)), ip)) 560 | 561 | try: 562 | if line: 563 | debug1('firewall manager: starting transproxy.\n') 564 | 565 | subnets_v6 = filter(lambda i: i[0]==socket.AF_INET6, subnets) 566 | if port_v6: 567 | do_wait = do_it(port_v6, dnsport_v6, socket.AF_INET6, subnets_v6, udp) 568 | elif len(subnets_v6) > 0: 569 | debug1("IPv6 subnets defined but IPv6 disabled\n") 570 | 571 | subnets_v4 = filter(lambda i: i[0]==socket.AF_INET, subnets) 572 | if port_v4: 573 | do_wait = do_it(port_v4, dnsport_v4, socket.AF_INET, subnets_v4, udp) 574 | elif len(subnets_v4) > 0: 575 | debug1('IPv4 subnets defined but IPv4 disabled\n') 576 | 577 | sys.stdout.write('STARTED\n') 578 | 579 | try: 580 | sys.stdout.flush() 581 | except IOError: 582 | # the parent process died for some reason; he's surely been loud 583 | # enough, so no reason to report another error 584 | return 585 | 586 | # Now we wait until EOF or any other kind of exception. We need 587 | # to stay running so that we don't need a *second* password 588 | # authentication at shutdown time - that cleanup is important! 
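# --- illustrative note; not part of firewall.py ------------------------------
# For reference, the stdin/stdout conversation with FirewallClient (client.py)
# that the code above and the loop below implement looks roughly like this,
# for a made-up pair of subnets:
#
#   firewall -> client:  READY nat\n
#   client -> firewall:  ROUTES\n
#                        2,24,0,192.168.1.0\n     (family,width,exclude,ip)
#                        2,24,1,192.168.2.0\n     (exclude=1: leave it alone)
#                        GO\n
#   firewall -> client:  STARTED\n
#   client -> firewall:  HOST myhost,192.168.1.5\n   (repeated as hosts appear)
#
# When the client side closes the pipe, readline() below returns '' and the
# loop exits, so the finally: block can tear the rules back down.
# ------------------------------------------------------------------------------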
589 | while 1: 590 | if do_wait: do_wait() 591 | line = sys.stdin.readline(128) 592 | if line.startswith('HOST '): 593 | (name,ip) = line[5:].strip().split(',', 1) 594 | hostmap[name] = ip 595 | rewrite_etc_hosts(port_v6 or port_v4) 596 | elif line: 597 | raise Fatal('expected EOF, got %r' % line) 598 | else: 599 | break 600 | finally: 601 | try: 602 | debug1('firewall manager: undoing changes.\n') 603 | except: 604 | pass 605 | if port_v6: 606 | do_it(port_v6, 0, socket.AF_INET6, [], udp) 607 | if port_v4: 608 | do_it(port_v4, 0, socket.AF_INET, [], udp) 609 | restore_etc_hosts(port_v6 or port_v4) 610 | -------------------------------------------------------------------------------- /src/helpers.py: -------------------------------------------------------------------------------- 1 | import sys, os, socket, errno 2 | 3 | logprefix = '' 4 | verbose = 0 5 | 6 | def log(s): 7 | try: 8 | sys.stdout.flush() 9 | sys.stderr.write(logprefix + s) 10 | sys.stderr.flush() 11 | except IOError: 12 | # this could happen if stderr gets forcibly disconnected, eg. because 13 | # our tty closes. That sucks, but it's no reason to abort the program. 14 | pass 15 | 16 | def debug1(s): 17 | if verbose >= 1: 18 | log(s) 19 | 20 | def debug2(s): 21 | if verbose >= 2: 22 | log(s) 23 | 24 | def debug3(s): 25 | if verbose >= 3: 26 | log(s) 27 | 28 | 29 | class Fatal(Exception): 30 | pass 31 | 32 | 33 | def list_contains_any(l, sub): 34 | for i in sub: 35 | if i in l: 36 | return True 37 | return False 38 | 39 | 40 | def resolvconf_nameservers(): 41 | l = [] 42 | for line in open('/etc/resolv.conf'): 43 | words = line.lower().split() 44 | if len(words) >= 2 and words[0] == 'nameserver': 45 | if ':' in words[1]: 46 | l.append((socket.AF_INET6,words[1])) 47 | else: 48 | l.append((socket.AF_INET,words[1])) 49 | return l 50 | 51 | 52 | def resolvconf_random_nameserver(): 53 | l = resolvconf_nameservers() 54 | if l: 55 | if len(l) > 1: 56 | # don't import this unless we really need it 57 | import random 58 | random.shuffle(l) 59 | return l[0] 60 | else: 61 | return (socket.AF_INET,'127.0.0.1') 62 | 63 | 64 | def islocal(ip,family): 65 | sock = socket.socket(family) 66 | try: 67 | try: 68 | sock.bind((ip, 0)) 69 | except socket.error, e: 70 | if e.args[0] == errno.EADDRNOTAVAIL: 71 | return False # not a local IP 72 | else: 73 | raise 74 | finally: 75 | sock.close() 76 | return True # it's a local IP, or there would have been an error 77 | 78 | 79 | def guess_address_family(ip): 80 | if ':' in ip: 81 | return socket.AF_INET6 82 | else: 83 | return socket.AF_INET 84 | 85 | 86 | def family_to_string(family): 87 | if family == socket.AF_INET6: 88 | return "AF_INET6" 89 | elif family == socket.AF_INET: 90 | return "AF_INET" 91 | else: 92 | return str(family) 93 | 94 | -------------------------------------------------------------------------------- /src/hostwatch.py: -------------------------------------------------------------------------------- 1 | import time, socket, re, select, errno 2 | if not globals().get('skip_imports'): 3 | import compat.ssubprocess as ssubprocess 4 | import helpers 5 | from helpers import * 6 | 7 | POLL_TIME = 60*15 8 | NETSTAT_POLL_TIME = 30 9 | CACHEFILE=os.path.expanduser('~/.sshuttle.hosts') 10 | 11 | 12 | _nmb_ok = True 13 | _smb_ok = True 14 | hostnames = {} 15 | queue = {} 16 | try: 17 | null = open('/dev/null', 'wb') 18 | except IOError, e: 19 | log('warning: %s\n' % e) 20 | null = os.popen("sh -c 'while read x; do :; done'", 'wb', 4096) 21 | 22 | 23 | def _is_ip(s): 24 | return 
re.match(r'\d+\.\d+\.\d+\.\d+$', s) 25 | 26 | 27 | def write_host_cache(): 28 | tmpname = '%s.%d.tmp' % (CACHEFILE, os.getpid()) 29 | try: 30 | f = open(tmpname, 'wb') 31 | for name,ip in sorted(hostnames.items()): 32 | f.write('%s,%s\n' % (name, ip)) 33 | f.close() 34 | os.rename(tmpname, CACHEFILE) 35 | finally: 36 | try: 37 | os.unlink(tmpname) 38 | except: 39 | pass 40 | 41 | 42 | def read_host_cache(): 43 | try: 44 | f = open(CACHEFILE) 45 | except IOError, e: 46 | if e.errno == errno.ENOENT: 47 | return 48 | else: 49 | raise 50 | for line in f: 51 | words = line.strip().split(',') 52 | if len(words) == 2: 53 | (name,ip) = words 54 | name = re.sub(r'[^-\w]', '-', name).strip() 55 | ip = re.sub(r'[^0-9.]', '', ip).strip() 56 | if name and ip: 57 | found_host(name, ip) 58 | 59 | 60 | def found_host(hostname, ip): 61 | hostname = re.sub(r'\..*', '', hostname) 62 | hostname = re.sub(r'[^-\w]', '_', hostname) 63 | if (ip.startswith('127.') or ip.startswith('255.') 64 | or hostname == 'localhost'): 65 | return 66 | oldip = hostnames.get(hostname) 67 | if oldip != ip: 68 | hostnames[hostname] = ip 69 | debug1('Found: %s: %s\n' % (hostname, ip)) 70 | sys.stdout.write('%s,%s\n' % (hostname, ip)) 71 | write_host_cache() 72 | 73 | 74 | def _check_etc_hosts(): 75 | debug2(' > hosts\n') 76 | for line in open('/etc/hosts'): 77 | line = re.sub(r'#.*', '', line) 78 | words = line.strip().split() 79 | if not words: 80 | continue 81 | ip = words[0] 82 | names = words[1:] 83 | if _is_ip(ip): 84 | debug3('< %s %r\n' % (ip, names)) 85 | for n in names: 86 | check_host(n) 87 | found_host(n, ip) 88 | 89 | 90 | def _check_revdns(ip): 91 | debug2(' > rev: %s\n' % ip) 92 | try: 93 | r = socket.gethostbyaddr(ip) 94 | debug3('< %s\n' % r[0]) 95 | check_host(r[0]) 96 | found_host(r[0], ip) 97 | except socket.herror, e: 98 | pass 99 | 100 | 101 | def _check_dns(hostname): 102 | debug2(' > dns: %s\n' % hostname) 103 | try: 104 | ip = socket.gethostbyname(hostname) 105 | debug3('< %s\n' % ip) 106 | check_host(ip) 107 | found_host(hostname, ip) 108 | except socket.gaierror, e: 109 | pass 110 | 111 | 112 | def _check_netstat(): 113 | debug2(' > netstat\n') 114 | argv = ['netstat', '-n'] 115 | try: 116 | p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null) 117 | content = p.stdout.read() 118 | p.wait() 119 | except OSError, e: 120 | log('%r failed: %r\n' % (argv, e)) 121 | return 122 | 123 | for ip in re.findall(r'\d+\.\d+\.\d+\.\d+', content): 124 | debug3('< %s\n' % ip) 125 | check_host(ip) 126 | 127 | 128 | def _check_smb(hostname): 129 | return 130 | global _smb_ok 131 | if not _smb_ok: 132 | return 133 | argv = ['smbclient', '-U', '%', '-L', hostname] 134 | debug2(' > smb: %s\n' % hostname) 135 | try: 136 | p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null) 137 | lines = p.stdout.readlines() 138 | p.wait() 139 | except OSError, e: 140 | log('%r failed: %r\n' % (argv, e)) 141 | _smb_ok = False 142 | return 143 | 144 | lines.reverse() 145 | 146 | # junk at top 147 | while lines: 148 | line = lines.pop().strip() 149 | if re.match(r'Server\s+', line): 150 | break 151 | 152 | # server list section: 153 | # Server Comment 154 | # ------ ------- 155 | while lines: 156 | line = lines.pop().strip() 157 | if not line or re.match(r'-+\s+-+', line): 158 | continue 159 | if re.match(r'Workgroup\s+Master', line): 160 | break 161 | words = line.split() 162 | hostname = words[0].lower() 163 | debug3('< %s\n' % hostname) 164 | check_host(hostname) 165 | 166 | # workgroup list section: 167 | # Workgroup 
Master 168 | # --------- ------ 169 | while lines: 170 | line = lines.pop().strip() 171 | if re.match(r'-+\s+', line): 172 | continue 173 | if not line: 174 | break 175 | words = line.split() 176 | (workgroup, hostname) = (words[0].lower(), words[1].lower()) 177 | debug3('< group(%s) -> %s\n' % (workgroup, hostname)) 178 | check_host(hostname) 179 | check_workgroup(workgroup) 180 | 181 | if lines: 182 | assert(0) 183 | 184 | 185 | def _check_nmb(hostname, is_workgroup, is_master): 186 | return 187 | global _nmb_ok 188 | if not _nmb_ok: 189 | return 190 | argv = ['nmblookup'] + ['-M']*is_master + ['--', hostname] 191 | debug2(' > n%d%d: %s\n' % (is_workgroup, is_master, hostname)) 192 | try: 193 | p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null) 194 | lines = p.stdout.readlines() 195 | rv = p.wait() 196 | except OSError, e: 197 | log('%r failed: %r\n' % (argv, e)) 198 | _nmb_ok = False 199 | return 200 | if rv: 201 | log('%r returned %d\n' % (argv, rv)) 202 | return 203 | for line in lines: 204 | m = re.match(r'(\d+\.\d+\.\d+\.\d+) (\w+)<\w\w>\n', line) 205 | if m: 206 | g = m.groups() 207 | (ip, name) = (g[0], g[1].lower()) 208 | debug3('< %s -> %s\n' % (name, ip)) 209 | if is_workgroup: 210 | _enqueue(_check_smb, ip) 211 | else: 212 | found_host(name, ip) 213 | check_host(name) 214 | 215 | 216 | def check_host(hostname): 217 | if _is_ip(hostname): 218 | _enqueue(_check_revdns, hostname) 219 | else: 220 | _enqueue(_check_dns, hostname) 221 | _enqueue(_check_smb, hostname) 222 | _enqueue(_check_nmb, hostname, False, False) 223 | 224 | 225 | def check_workgroup(hostname): 226 | _enqueue(_check_nmb, hostname, True, False) 227 | _enqueue(_check_nmb, hostname, True, True) 228 | 229 | 230 | def _enqueue(op, *args): 231 | t = (op,args) 232 | if queue.get(t) == None: 233 | queue[t] = 0 234 | 235 | 236 | def _stdin_still_ok(timeout): 237 | r,w,x = select.select([sys.stdin.fileno()], [], [], timeout) 238 | if r: 239 | b = os.read(sys.stdin.fileno(), 4096) 240 | if not b: 241 | return False 242 | return True 243 | 244 | 245 | def hw_main(seed_hosts): 246 | if helpers.verbose >= 2: 247 | helpers.logprefix = 'HH: ' 248 | else: 249 | helpers.logprefix = 'hostwatch: ' 250 | 251 | read_host_cache() 252 | 253 | _enqueue(_check_etc_hosts) 254 | _enqueue(_check_netstat) 255 | check_host('localhost') 256 | check_host(socket.gethostname()) 257 | check_workgroup('workgroup') 258 | check_workgroup('-') 259 | for h in seed_hosts: 260 | check_host(h) 261 | 262 | while 1: 263 | now = time.time() 264 | for t,last_polled in queue.items(): 265 | (op,args) = t 266 | if not _stdin_still_ok(0): 267 | break 268 | maxtime = POLL_TIME 269 | if op == _check_netstat: 270 | maxtime = NETSTAT_POLL_TIME 271 | if now - last_polled > maxtime: 272 | queue[t] = time.time() 273 | op(*args) 274 | try: 275 | sys.stdout.flush() 276 | except IOError: 277 | break 278 | 279 | # FIXME: use a smarter timeout based on oldest last_polled 280 | if not _stdin_still_ok(1): 281 | break 282 | -------------------------------------------------------------------------------- /src/main.py: -------------------------------------------------------------------------------- 1 | import sys, os, re, socket 2 | import helpers, options, client, server, firewall, hostwatch 3 | import compat.ssubprocess as ssubprocess 4 | from helpers import * 5 | 6 | 7 | # 1.2.3.4/5 or just 1.2.3.4 8 | def parse_subnet4(s): 9 | m = re.match(r'(\d+)(?:\.(\d+)\.(\d+)\.(\d+))?(?:/(\d+))?$', s) 10 | if not m: 11 | raise Fatal('%r is not a valid IP subnet format' % 
s) 12 | (a,b,c,d,width) = m.groups() 13 | (a,b,c,d) = (int(a or 0), int(b or 0), int(c or 0), int(d or 0)) 14 | if width == None: 15 | width = 32 16 | else: 17 | width = int(width) 18 | if a > 255 or b > 255 or c > 255 or d > 255: 19 | raise Fatal('%d.%d.%d.%d has numbers > 255' % (a,b,c,d)) 20 | if width > 32: 21 | raise Fatal('*/%d is greater than the maximum of 32' % width) 22 | return(socket.AF_INET, '%d.%d.%d.%d' % (a,b,c,d), width) 23 | 24 | 25 | # 1:2::3/64 or just 1:2::3 26 | def parse_subnet6(s): 27 | m = re.match(r'(?:([a-fA-F\d:]+))?(?:/(\d+))?$', s) 28 | if not m: 29 | raise Fatal('%r is not a valid IP subnet format' % s) 30 | (net,width) = m.groups() 31 | if width == None: 32 | width = 128 33 | else: 34 | width = int(width) 35 | if width > 128: 36 | raise Fatal('*/%d is greater than the maximum of 128' % width) 37 | return(socket.AF_INET6, net, width) 38 | 39 | 40 | # Subnet file, supporting empty lines and hash-started comment lines 41 | def parse_subnet_file(s): 42 | try: 43 | handle = open(s, 'r') 44 | except OSError, e: 45 | raise Fatal('Unable to open subnet file: %s' % s) 46 | 47 | raw_config_lines = handle.readlines() 48 | config_lines = [] 49 | for line_no, line in enumerate(raw_config_lines): 50 | line = line.strip() 51 | if len(line) == 0: 52 | continue 53 | if line[0] == '#': 54 | continue 55 | config_lines.append(line) 56 | 57 | return config_lines 58 | 59 | 60 | # list of: 61 | # 1.2.3.4/5 or just 1.2.3.4 62 | # 1:2::3/64 or just 1:2::3 63 | def parse_subnets(subnets_str): 64 | subnets = [] 65 | for s in subnets_str: 66 | if ':' in s: 67 | subnet = parse_subnet6(s) 68 | else: 69 | subnet = parse_subnet4(s) 70 | subnets.append(subnet) 71 | return subnets 72 | 73 | 74 | # 1.2.3.4:567 or just 1.2.3.4 or just 567 75 | def parse_ipport4(s): 76 | s = str(s) 77 | m = re.match(r'(?:(\d+)\.(\d+)\.(\d+)\.(\d+))?(?::)?(?:(\d+))?$', s) 78 | if not m: 79 | raise Fatal('%r is not a valid IP:port format' % s) 80 | (a,b,c,d,port) = m.groups() 81 | (a,b,c,d,port) = (int(a or 0), int(b or 0), int(c or 0), int(d or 0), 82 | int(port or 0)) 83 | if a > 255 or b > 255 or c > 255 or d > 255: 84 | raise Fatal('%d.%d.%d.%d has numbers > 255' % (a,b,c,d)) 85 | if port > 65535: 86 | raise Fatal('*:%d is greater than the maximum of 65535' % port) 87 | if a == None: 88 | a = b = c = d = 0 89 | return ('%d.%d.%d.%d' % (a,b,c,d), port) 90 | 91 | 92 | # [1:2::3]:456 or [1:2::3] or 456 93 | def parse_ipport6(s): 94 | s = str(s) 95 | m = re.match(r'(?:\[([^]]*)])?(?::)?(?:(\d+))?$', s) 96 | if not m: 97 | raise Fatal('%s is not a valid IP:port format' % s) 98 | (ip,port) = m.groups() 99 | (ip,port) = (ip or '::', int(port or 0)) 100 | return (ip, port) 101 | 102 | 103 | optspec = """ 104 | sshuttle [-l [ip:]port] [-r [username@]sshserver[:port]] 105 | sshuttle --server 106 | sshuttle --firewall 107 | sshuttle --hostwatch 108 | -- 109 | l,listen= transproxy to this ip address and port number 110 | H,auto-hosts scan for remote hostnames and update local /etc/hosts 111 | N,auto-nets automatically determine subnets to route 112 | dns capture local DNS requests and forward to the remote DNS server 113 | method= auto, nat, tproxy, or ipfw 114 | python= path to python interpreter on the remote server 115 | r,remote= ssh hostname (and optional username) of remote sshuttle server 116 | x,exclude= exclude this subnet (can be used more than once) 117 | v,verbose increase debug message verbosity 118 | e,ssh-cmd= the command to use to connect to the remote [ssh] 119 | seed-hosts= with -H, use these hostnames 
for initial scan (comma-separated) 120 | no-latency-control sacrifice latency to improve bandwidth benchmarks 121 | wrap= restart counting channel numbers after this number (for testing) 122 | D,daemon run in the background as a daemon 123 | s,subnets= file where the subnets are stored, instead of on the command line 124 | syslog send log messages to syslog (default if you use --daemon) 125 | pidfile= pidfile name (only if using --daemon) [./sshuttle.pid] 126 | server (internal use only) 127 | firewall (internal use only) 128 | hostwatch (internal use only) 129 | """ 130 | o = options.Options(optspec) 131 | (opt, flags, extra) = o.parse(sys.argv[2:]) 132 | 133 | if opt.daemon: 134 | opt.syslog = 1 135 | if opt.wrap: 136 | import ssnet 137 | ssnet.MAX_CHANNEL = int(opt.wrap) 138 | helpers.verbose = opt.verbose 139 | 140 | try: 141 | if opt.server: 142 | if len(extra) != 0: 143 | o.fatal('no arguments expected') 144 | server.latency_control = opt.latency_control 145 | sys.exit(server.main()) 146 | elif opt.firewall: 147 | if len(extra) != 6: 148 | o.fatal('exactly six arguments expected') 149 | sys.exit(firewall.main(int(extra[0]), int(extra[1]), 150 | int(extra[2]), int(extra[3]), 151 | extra[4], int(extra[5]), opt.syslog)) 152 | elif opt.hostwatch: 153 | sys.exit(hostwatch.hw_main(extra)) 154 | else: 155 | if len(extra) < 1 and not opt.auto_nets and not opt.subnets: 156 | o.fatal('at least one subnet, subnet file, or -N expected') 157 | includes = extra 158 | excludes = ['127.0.0.0/8'] 159 | for k,v in flags: 160 | if k in ('-x','--exclude'): 161 | excludes.append(v) 162 | remotename = opt.remote 163 | if remotename == '' or remotename == '-': 164 | remotename = None 165 | if opt.seed_hosts and not opt.auto_hosts: 166 | o.fatal('--seed-hosts only works if you also use -H') 167 | if opt.seed_hosts: 168 | sh = re.split(r'[\s,]+', (opt.seed_hosts or "").strip()) 169 | elif opt.auto_hosts: 170 | sh = [] 171 | else: 172 | sh = None 173 | if opt.subnets: 174 | includes = parse_subnet_file(opt.subnets) 175 | if not opt.method: 176 | method = "auto" 177 | elif opt.method in [ "auto", "nat", "tproxy", "ipfw" ]: 178 | method = opt.method 179 | else: 180 | o.fatal("method %s not supported"%opt.method) 181 | if not opt.listen: 182 | if opt.method == "tproxy": 183 | ipport_v6 = parse_ipport6('[::1]:0') 184 | else: 185 | ipport_v6 = None 186 | ipport_v4 = parse_ipport4('127.0.0.1:0') 187 | else: 188 | ipport_v6 = None 189 | ipport_v4 = None 190 | list = opt.listen.split(",") 191 | for ip in list: 192 | if '[' in ip and ']' in ip and opt.method == "tproxy": 193 | ipport_v6 = parse_ipport6(ip) 194 | else: 195 | ipport_v4 = parse_ipport4(ip) 196 | return_code = client.main(ipport_v6, ipport_v4, 197 | opt.ssh_cmd, 198 | remotename, 199 | opt.python, 200 | opt.latency_control, 201 | opt.dns, 202 | method, 203 | sh, 204 | opt.auto_nets, 205 | parse_subnets(includes), 206 | parse_subnets(excludes), 207 | opt.syslog, opt.daemon, opt.pidfile) 208 | 209 | if return_code == 0: 210 | log('Normal exit code, exiting...') 211 | else: 212 | log('Abnormal exit code detected, failing...' 
% return_code) 213 | sys.exit(return_code) 214 | 215 | except Fatal, e: 216 | log('fatal: %s\n' % e) 217 | sys.exit(99) 218 | except KeyboardInterrupt: 219 | log('\n') 220 | log('Keyboard interrupt: exiting.\n') 221 | sys.exit(1) 222 | -------------------------------------------------------------------------------- /src/options.py: -------------------------------------------------------------------------------- 1 | """Command-line options parser. 2 | With the help of an options spec string, easily parse command-line options. 3 | """ 4 | import sys, os, textwrap, getopt, re, struct 5 | 6 | class OptDict: 7 | def __init__(self): 8 | self._opts = {} 9 | 10 | def __setitem__(self, k, v): 11 | if k.startswith('no-') or k.startswith('no_'): 12 | k = k[3:] 13 | v = not v 14 | self._opts[k] = v 15 | 16 | def __getitem__(self, k): 17 | if k.startswith('no-') or k.startswith('no_'): 18 | return not self._opts[k[3:]] 19 | return self._opts[k] 20 | 21 | def __getattr__(self, k): 22 | return self[k] 23 | 24 | 25 | def _default_onabort(msg): 26 | sys.exit(97) 27 | 28 | 29 | def _intify(v): 30 | try: 31 | vv = int(v or '') 32 | if str(vv) == v: 33 | return vv 34 | except ValueError: 35 | pass 36 | return v 37 | 38 | 39 | def _atoi(v): 40 | try: 41 | return int(v or 0) 42 | except ValueError: 43 | return 0 44 | 45 | 46 | def _remove_negative_kv(k, v): 47 | if k.startswith('no-') or k.startswith('no_'): 48 | return k[3:], not v 49 | return k,v 50 | 51 | def _remove_negative_k(k): 52 | return _remove_negative_kv(k, None)[0] 53 | 54 | 55 | def _tty_width(): 56 | s = struct.pack("HHHH", 0, 0, 0, 0) 57 | try: 58 | import fcntl, termios 59 | s = fcntl.ioctl(sys.stderr.fileno(), termios.TIOCGWINSZ, s) 60 | except (IOError, ImportError): 61 | return _atoi(os.environ.get('WIDTH')) or 70 62 | (ysize,xsize,ypix,xpix) = struct.unpack('HHHH', s) 63 | return xsize or 70 64 | 65 | 66 | class Options: 67 | """Option parser. 68 | When constructed, two strings are mandatory. The first one is the command 69 | name showed before error messages. The second one is a string called an 70 | optspec that specifies the synopsis and option flags and their description. 71 | For more information about optspecs, consult the bup-options(1) man page. 72 | 73 | Two optional arguments specify an alternative parsing function and an 74 | alternative behaviour on abort (after having output the usage string). 75 | 76 | By default, the parser function is getopt.gnu_getopt, and the abort 77 | behaviour is to exit the program. 78 | """ 79 | def __init__(self, optspec, optfunc=getopt.gnu_getopt, 80 | onabort=_default_onabort): 81 | self.optspec = optspec 82 | self._onabort = onabort 83 | self.optfunc = optfunc 84 | self._aliases = {} 85 | self._shortopts = 'h?' 
86 | self._longopts = ['help'] 87 | self._hasparms = {} 88 | self._defaults = {} 89 | self._usagestr = self._gen_usage() 90 | 91 | def _gen_usage(self): 92 | out = [] 93 | lines = self.optspec.strip().split('\n') 94 | lines.reverse() 95 | first_syn = True 96 | while lines: 97 | l = lines.pop() 98 | if l == '--': break 99 | out.append('%s: %s\n' % (first_syn and 'usage' or ' or', l)) 100 | first_syn = False 101 | out.append('\n') 102 | last_was_option = False 103 | while lines: 104 | l = lines.pop() 105 | if l.startswith(' '): 106 | out.append('%s%s\n' % (last_was_option and '\n' or '', 107 | l.lstrip())) 108 | last_was_option = False 109 | elif l: 110 | (flags, extra) = l.split(' ', 1) 111 | extra = extra.strip() 112 | if flags.endswith('='): 113 | flags = flags[:-1] 114 | has_parm = 1 115 | else: 116 | has_parm = 0 117 | g = re.search(r'\[([^\]]*)\]$', extra) 118 | if g: 119 | defval = g.group(1) 120 | else: 121 | defval = None 122 | flagl = flags.split(',') 123 | flagl_nice = [] 124 | for _f in flagl: 125 | f,dvi = _remove_negative_kv(_f, _intify(defval)) 126 | self._aliases[f] = _remove_negative_k(flagl[0]) 127 | self._hasparms[f] = has_parm 128 | self._defaults[f] = dvi 129 | if len(f) == 1: 130 | self._shortopts += f + (has_parm and ':' or '') 131 | flagl_nice.append('-' + f) 132 | else: 133 | f_nice = re.sub(r'\W', '_', f) 134 | self._aliases[f_nice] = _remove_negative_k(flagl[0]) 135 | self._longopts.append(f + (has_parm and '=' or '')) 136 | self._longopts.append('no-' + f) 137 | flagl_nice.append('--' + _f) 138 | flags_nice = ', '.join(flagl_nice) 139 | if has_parm: 140 | flags_nice += ' ...' 141 | prefix = ' %-20s ' % flags_nice 142 | argtext = '\n'.join(textwrap.wrap(extra, width=_tty_width(), 143 | initial_indent=prefix, 144 | subsequent_indent=' '*28)) 145 | out.append(argtext + '\n') 146 | last_was_option = True 147 | else: 148 | out.append('\n') 149 | last_was_option = False 150 | return ''.join(out).rstrip() + '\n' 151 | 152 | def usage(self, msg=""): 153 | """Print usage string to stderr and abort.""" 154 | sys.stderr.write(self._usagestr) 155 | e = self._onabort and self._onabort(msg) or None 156 | if e: 157 | raise e 158 | 159 | def fatal(self, s): 160 | """Print an error message to stderr and abort with usage string.""" 161 | msg = 'error: %s\n' % s 162 | sys.stderr.write(msg) 163 | return self.usage(msg) 164 | 165 | def parse(self, args): 166 | """Parse a list of arguments and return (options, flags, extra). 167 | 168 | In the returned tuple, "options" is an OptDict with known options, 169 | "flags" is a list of option flags that were used on the command-line, 170 | and "extra" is a list of positional arguments. 
171 | """ 172 | try: 173 | (flags,extra) = self.optfunc(args, self._shortopts, self._longopts) 174 | except getopt.GetoptError, e: 175 | self.fatal(e) 176 | 177 | opt = OptDict() 178 | 179 | for k,v in self._defaults.iteritems(): 180 | k = self._aliases[k] 181 | opt[k] = v 182 | 183 | for (k,v) in flags: 184 | k = k.lstrip('-') 185 | if k in ('h', '?', 'help'): 186 | self.usage() 187 | if k.startswith('no-'): 188 | k = self._aliases[k[3:]] 189 | v = 0 190 | else: 191 | k = self._aliases[k] 192 | if not self._hasparms[k]: 193 | assert(v == '') 194 | v = (opt._opts.get(k) or 0) + 1 195 | else: 196 | v = _intify(v) 197 | opt[k] = v 198 | for (f1,f2) in self._aliases.iteritems(): 199 | opt[f1] = opt._opts.get(f2) 200 | return (opt,flags,extra) 201 | -------------------------------------------------------------------------------- /src/server.py: -------------------------------------------------------------------------------- 1 | import re, struct, socket, select, traceback, time 2 | if not globals().get('skip_imports'): 3 | import ssnet, helpers, hostwatch 4 | import compat.ssubprocess as ssubprocess 5 | from ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper 6 | from helpers import * 7 | 8 | 9 | def _ipmatch(ipstr): 10 | if ipstr == 'default': 11 | ipstr = '0.0.0.0/0' 12 | m = re.match(r'^(\d+(\.\d+(\.\d+(\.\d+)?)?)?)(?:/(\d+))?$', ipstr) 13 | if m: 14 | g = m.groups() 15 | ips = g[0] 16 | width = int(g[4] or 32) 17 | if g[1] == None: 18 | ips += '.0.0.0' 19 | width = min(width, 8) 20 | elif g[2] == None: 21 | ips += '.0.0' 22 | width = min(width, 16) 23 | elif g[3] == None: 24 | ips += '.0' 25 | width = min(width, 24) 26 | return (struct.unpack('!I', socket.inet_aton(ips))[0], width) 27 | 28 | 29 | def _ipstr(ip, width): 30 | if width >= 32: 31 | return ip 32 | else: 33 | return "%s/%d" % (ip, width) 34 | 35 | 36 | def _maskbits(netmask): 37 | if not netmask: 38 | return 32 39 | for i in range(32): 40 | if netmask[0] & _shl(1, i): 41 | return 32-i 42 | return 0 43 | 44 | 45 | def _shl(n, bits): 46 | return n * int(2**bits) 47 | 48 | 49 | def _list_routes(): 50 | argv = ['netstat', '-rn'] 51 | p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE) 52 | routes = [] 53 | for line in p.stdout: 54 | cols = re.split(r'\s+', line) 55 | ipw = _ipmatch(cols[0]) 56 | if not ipw: 57 | continue # some lines won't be parseable; never mind 58 | maskw = _ipmatch(cols[2]) # linux only 59 | mask = _maskbits(maskw) # returns 32 if maskw is null 60 | width = min(ipw[1], mask) 61 | ip = ipw[0] & _shl(_shl(1, width) - 1, 32-width) 62 | routes.append((socket.AF_INET, socket.inet_ntoa(struct.pack('!I', ip)), width)) 63 | rv = p.wait() 64 | if rv != 0: 65 | log('WARNING: %r returned %d\n' % (argv, rv)) 66 | log('WARNING: That prevents --auto-nets from working.\n') 67 | return routes 68 | 69 | 70 | def list_routes(): 71 | for (family, ip,width) in _list_routes(): 72 | if not ip.startswith('0.') and not ip.startswith('127.'): 73 | yield (family, ip,width) 74 | 75 | 76 | def _exc_dump(): 77 | exc_info = sys.exc_info() 78 | return ''.join(traceback.format_exception(*exc_info)) 79 | 80 | 81 | def start_hostwatch(seed_hosts): 82 | s1,s2 = socket.socketpair() 83 | pid = os.fork() 84 | if not pid: 85 | # child 86 | rv = 99 87 | try: 88 | try: 89 | s2.close() 90 | os.dup2(s1.fileno(), 1) 91 | os.dup2(s1.fileno(), 0) 92 | s1.close() 93 | rv = hostwatch.hw_main(seed_hosts) or 0 94 | except Exception, e: 95 | log('%s\n' % _exc_dump()) 96 | rv = 98 97 | finally: 98 | os._exit(rv) 99 | s1.close() 100 | return pid,s2 101 | 
102 | 103 | class Hostwatch: 104 | def __init__(self): 105 | self.pid = 0 106 | self.sock = None 107 | 108 | 109 | class DnsProxy(Handler): 110 | def __init__(self, mux, chan, request): 111 | # FIXME! IPv4 specific 112 | sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) 113 | Handler.__init__(self, [sock]) 114 | self.timeout = time.time()+30 115 | self.mux = mux 116 | self.chan = chan 117 | self.tries = 0 118 | self.peer = None 119 | self.request = request 120 | self.sock = sock 121 | # FIXME! IPv4 specific 122 | self.sock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42) 123 | self.try_send() 124 | 125 | def try_send(self): 126 | if self.tries >= 3: 127 | return 128 | self.tries += 1 129 | # FIXME! Support IPv6 nameservers 130 | self.peer = resolvconf_random_nameserver()[1] 131 | self.sock.connect((self.peer, 53)) 132 | debug2('DNS: sending to %r\n' % self.peer) 133 | try: 134 | self.sock.send(self.request) 135 | except socket.error, e: 136 | if e.args[0] in ssnet.NET_ERRS: 137 | # might have been spurious; try again. 138 | # Note: these errors sometimes are reported by recv(), 139 | # and sometimes by send(). We have to catch both. 140 | debug2('DNS send to %r: %s\n' % (self.peer, e)) 141 | self.try_send() 142 | return 143 | else: 144 | log('DNS send to %r: %s\n' % (self.peer, e)) 145 | return 146 | 147 | def callback(self): 148 | try: 149 | data = self.sock.recv(4096) 150 | except socket.error, e: 151 | if e.args[0] in ssnet.NET_ERRS: 152 | # might have been spurious; try again. 153 | # Note: these errors sometimes are reported by recv(), 154 | # and sometimes by send(). We have to catch both. 155 | debug2('DNS recv from %r: %s\n' % (self.peer, e)) 156 | self.try_send() 157 | return 158 | else: 159 | log('DNS recv from %r: %s\n' % (self.peer, e)) 160 | return 161 | debug2('DNS response: %d bytes\n' % len(data)) 162 | self.mux.send(self.chan, ssnet.CMD_DNS_RESPONSE, data) 163 | self.ok = False 164 | 165 | 166 | class UdpProxy(Handler): 167 | def __init__(self, mux, chan, family): 168 | sock = socket.socket(family, socket.SOCK_DGRAM) 169 | Handler.__init__(self, [sock]) 170 | self.timeout = time.time()+30 171 | self.mux = mux 172 | self.chan = chan 173 | self.sock = sock 174 | if family == socket.AF_INET: 175 | self.sock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42) 176 | 177 | def send(self, dstip, data): 178 | debug2('UDP: sending to %r port %d\n' % dstip) 179 | try: 180 | self.sock.sendto(data,dstip) 181 | except socket.error, e: 182 | log('UDP send to %r port %d: %s\n' % (dstip[0], dstip[1], e)) 183 | return 184 | 185 | def callback(self): 186 | try: 187 | data,peer = self.sock.recvfrom(4096) 188 | except socket.error, e: 189 | log('UDP recv from %r port %d: %s\n' % (peer[0], peer[1], e)) 190 | return 191 | debug2('UDP response: %d bytes\n' % len(data)) 192 | hdr = "%s,%r,"%(peer[0], peer[1]) 193 | self.mux.send(self.chan, ssnet.CMD_UDP_DATA, hdr+data) 194 | 195 | def main(): 196 | if helpers.verbose >= 1: 197 | helpers.logprefix = ' s: ' 198 | else: 199 | helpers.logprefix = 'server: ' 200 | debug1('latency control setting = %r\n' % latency_control) 201 | 202 | routes = list(list_routes()) 203 | debug1('available routes:\n') 204 | for r in routes: 205 | debug1(' %d/%s/%d\n' % r) 206 | 207 | # synchronization header 208 | sys.stdout.write('\0\0SSHUTTLE0001') 209 | sys.stdout.flush() 210 | 211 | handlers = [] 212 | mux = Mux(socket.fromfd(sys.stdin.fileno(), 213 | socket.AF_INET, socket.SOCK_STREAM), 214 | socket.fromfd(sys.stdout.fileno(), 215 | socket.AF_INET, socket.SOCK_STREAM)) 
216 | handlers.append(mux) 217 | routepkt = '' 218 | for r in routes: 219 | routepkt += '%d,%s,%d\n' % r 220 | mux.send(0, ssnet.CMD_ROUTES, routepkt) 221 | 222 | hw = Hostwatch() 223 | hw.leftover = '' 224 | 225 | def hostwatch_ready(): 226 | assert(hw.pid) 227 | content = hw.sock.recv(4096) 228 | if content: 229 | lines = (hw.leftover + content).split('\n') 230 | if lines[-1]: 231 | # no terminating newline: entry isn't complete yet! 232 | hw.leftover = lines.pop() 233 | lines.append('') 234 | else: 235 | hw.leftover = '' 236 | mux.send(0, ssnet.CMD_HOST_LIST, '\n'.join(lines)) 237 | else: 238 | raise Fatal('hostwatch process died') 239 | 240 | def got_host_req(data): 241 | if not hw.pid: 242 | (hw.pid,hw.sock) = start_hostwatch(data.strip().split()) 243 | handlers.append(Handler(socks = [hw.sock], 244 | callback = hostwatch_ready)) 245 | mux.got_host_req = got_host_req 246 | 247 | def new_channel(channel, data): 248 | (family,dstip,dstport) = data.split(',', 2) 249 | family = int(family) 250 | dstport = int(dstport) 251 | outwrap = ssnet.connect_dst(family, dstip, dstport) 252 | handlers.append(Proxy(MuxWrapper(mux, channel), outwrap)) 253 | mux.new_channel = new_channel 254 | 255 | dnshandlers = {} 256 | def dns_req(channel, data): 257 | debug2('Incoming DNS request channel=%d.\n' % channel) 258 | h = DnsProxy(mux, channel, data) 259 | handlers.append(h) 260 | dnshandlers[channel] = h 261 | mux.got_dns_req = dns_req 262 | 263 | udphandlers = {} 264 | def udp_req(channel, cmd, data): 265 | debug2('Incoming UDP request channel=%d, cmd=%d\n' % (channel,cmd)) 266 | if cmd == ssnet.CMD_UDP_DATA: 267 | (dstip,dstport,data) = data.split(",",2) 268 | dstport = int(dstport) 269 | debug2('is incoming UDP data. %r %d.\n' % (dstip,dstport)) 270 | h = udphandlers[channel] 271 | h.send((dstip,dstport),data) 272 | elif cmd == ssnet.CMD_UDP_CLOSE: 273 | debug2('is incoming UDP close\n') 274 | h = udphandlers[channel] 275 | h.ok = False 276 | del mux.channels[channel] 277 | 278 | def udp_open(channel, data): 279 | debug2('Incoming UDP open.\n') 280 | family = int(data) 281 | mux.channels[channel] = lambda cmd, data: udp_req(channel, cmd, data) 282 | if channel in udphandlers: 283 | raise Fatal('UDP connection channel %d already open'%channel) 284 | else: 285 | h = UdpProxy(mux, channel, family) 286 | handlers.append(h) 287 | udphandlers[channel] = h 288 | mux.got_udp_open = udp_open 289 | 290 | 291 | while mux.ok: 292 | if hw.pid: 293 | assert(hw.pid > 0) 294 | (rpid, rv) = os.waitpid(hw.pid, os.WNOHANG) 295 | if rpid: 296 | raise Fatal('hostwatch exited unexpectedly: code 0x%04x\n' % rv) 297 | 298 | ssnet.runonce(handlers, mux) 299 | if latency_control: 300 | mux.check_fullness() 301 | mux.callback() 302 | 303 | if dnshandlers: 304 | now = time.time() 305 | for channel,h in dnshandlers.items(): 306 | if h.timeout < now or not h.ok: 307 | debug3('expiring dnsreqs channel=%d\n' % channel) 308 | del dnshandlers[channel] 309 | h.sock.close() 310 | h.ok = False 311 | for channel,h in udphandlers.items(): 312 | if not h.ok: 313 | debug3('expiring UDP channel=%d\n' % channel) 314 | del udphandlers[channel] 315 | h.sock.close() 316 | h.ok = False 317 | -------------------------------------------------------------------------------- /src/ssh.py: -------------------------------------------------------------------------------- 1 | import sys, os, re, socket, zlib 2 | import compat.ssubprocess as ssubprocess 3 | import helpers 4 | from helpers import * 5 | 6 | 7 | def readfile(name): 8 | basedir = 
os.path.dirname(os.path.abspath(sys.argv[0])) 9 | path = [basedir] + sys.path 10 | for d in path: 11 | fullname = os.path.join(d, name) 12 | if os.path.exists(fullname): 13 | return open(fullname, 'rb').read() 14 | raise Exception("can't find file %r in any of %r" % (name, path)) 15 | 16 | 17 | def empackage(z, filename, data=None): 18 | (path,basename) = os.path.split(filename) 19 | if not data: 20 | data = readfile(filename) 21 | content = z.compress(data) 22 | content += z.flush(zlib.Z_SYNC_FLUSH) 23 | return '%s\n%d\n%s' % (basename, len(content), content) 24 | 25 | 26 | def connect(ssh_cmd, rhostport, python, stderr, options): 27 | main_exe = sys.argv[0] 28 | portl = [] 29 | 30 | if (rhostport or '').count(':') > 1: 31 | if rhostport.count(']') or rhostport.count('['): 32 | result = rhostport.split(']') 33 | rhost = result[0].strip('[') 34 | if len(result) > 1: 35 | result[1] = result[1].strip(':') 36 | if result[1] is not '': 37 | portl = ['-p', str(int(result[1]))] 38 | else: # can't disambiguate IPv6 colons and a port number. pass the hostname through. 39 | rhost = rhostport 40 | else: # IPv4 41 | l = (rhostport or '').split(':', 1) 42 | rhost = l[0] 43 | if len(l) > 1: 44 | portl = ['-p', str(int(l[1]))] 45 | 46 | if rhost == '-': 47 | rhost = None 48 | 49 | z = zlib.compressobj(1) 50 | content = readfile('assembler.py') 51 | optdata = ''.join("%s=%r\n" % (k,v) for (k,v) in options.items()) 52 | content2 = (empackage(z, 'cmdline_options.py', optdata) + 53 | empackage(z, 'helpers.py') + 54 | empackage(z, 'compat/ssubprocess.py') + 55 | empackage(z, 'ssnet.py') + 56 | empackage(z, 'hostwatch.py') + 57 | empackage(z, 'server.py') + 58 | "\n") 59 | 60 | pyscript = r""" 61 | import sys; 62 | skip_imports=1; 63 | verbosity=%d; 64 | exec compile(sys.stdin.read(%d), "assembler.py", "exec") 65 | """ % (helpers.verbose or 0, len(content)) 66 | pyscript = re.sub(r'\s+', ' ', pyscript.strip()) 67 | 68 | 69 | if not rhost: 70 | # ignore the --python argument when running locally; we already know 71 | # which python version works. 
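# (sys.argv[1] is the interpreter name that the 'sshuttle' wrapper script
# passes to main.py, e.g. "python2", so the locally-run server reuses the
# same interpreter that is already running the client.)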
72 | argv = [sys.argv[1], '-c', pyscript] 73 | else: 74 | if ssh_cmd: 75 | sshl = ssh_cmd.split(' ') 76 | else: 77 | sshl = ['ssh'] 78 | if python: 79 | pycmd = "'%s' -c '%s'" % (python, pyscript) 80 | else: 81 | pycmd = ("P=python2; $P -V 2>/dev/null || P=python; " 82 | "exec \"$P\" -c '%s'") % pyscript 83 | argv = (sshl + 84 | portl + 85 | [rhost, '--', pycmd]) 86 | (s1,s2) = socket.socketpair() 87 | def setup(): 88 | # runs in the child process 89 | s2.close() 90 | s1a,s1b = os.dup(s1.fileno()), os.dup(s1.fileno()) 91 | s1.close() 92 | debug2('executing: %r\n' % argv) 93 | p = ssubprocess.Popen(argv, stdin=s1a, stdout=s1b, preexec_fn=setup, 94 | close_fds=True, stderr=stderr) 95 | os.close(s1a) 96 | os.close(s1b) 97 | s2.sendall(content) 98 | s2.sendall(content2) 99 | return p, s2 100 | -------------------------------------------------------------------------------- /src/sshuttle: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | EXE=$0 3 | for i in 1 2 3 4 5 6 7 8 9 10; do 4 | [ -L "$EXE" ] || break 5 | EXE=$(readlink "$EXE") 6 | done 7 | #DIR=$(dirname "$EXE") 8 | DIR=/usr/share/sshuttle 9 | if python2 -V 2>/dev/null; then 10 | exec python2 "$DIR/main.py" python2 "$@" 11 | else 12 | exec python "$DIR/main.py" python "$@" 13 | fi 14 | -------------------------------------------------------------------------------- /src/sshuttle.md: -------------------------------------------------------------------------------- 1 | % sshuttle(8) Sshuttle 0.46 2 | % Avery Pennarun 3 | % 2011-01-25 4 | 5 | # NAME 6 | 7 | sshuttle - a transparent proxy-based VPN using ssh 8 | 9 | # SYNOPSIS 10 | 11 | sshuttle [options...] [-r [username@]sshserver[:port]] \ 12 | 13 | 14 | # DESCRIPTION 15 | 16 | sshuttle allows you to create a VPN connection from your 17 | machine to any remote server that you can connect to via 18 | ssh, as long as that server has python 2.3 or higher. 19 | 20 | To work, you must have root access on the local machine, 21 | but you can have a normal account on the server. 22 | 23 | It's valid to run sshuttle more than once simultaneously on 24 | a single client machine, connecting to a different server 25 | every time, so you can be on more than one VPN at once. 26 | 27 | If run on a router, sshuttle can forward traffic for your 28 | entire subnet to the VPN. 29 | 30 | 31 | # OPTIONS 32 | 33 | \ 34 | : a list of subnets to route over the VPN, in the form 35 | `a.b.c.d[/width]`. Valid examples are 1.2.3.4 (a 36 | single IP address), 1.2.3.4/32 (equivalent to 1.2.3.4), 37 | 1.2.3.0/24 (a 24-bit subnet, ie. with a 255.255.255.0 38 | netmask), and 0/0 ('just route everything through the 39 | VPN'). 40 | 41 | -l, --listen=*[ip:]port* 42 | : use this ip address and port number as the transparent 43 | proxy port. By default sshuttle finds an available 44 | port automatically and listens on IP 127.0.0.1 45 | (localhost), so you don't need to override it, and 46 | connections are only proxied from the local machine, 47 | not from outside machines. If you want to accept 48 | connections from other machines on your network (ie. to 49 | run sshuttle on a router) try enabling IP Forwarding in 50 | your kernel, then using `--listen 0.0.0.0:0`. 51 | 52 | -H, --auto-hosts 53 | : scan for remote hostnames and update the local /etc/hosts 54 | file with matching entries for as long as the VPN is 55 | open. This is nicer than changing your system's DNS 56 | (/etc/resolv.conf) settings, for several reasons. 
First, 57 | hostnames are added without domain names attached, so 58 | you can `ssh thatserver` without worrying if your local 59 | domain matches the remote one. Second, if you sshuttle 60 | into more than one VPN at a time, it's impossible to 61 | use more than one DNS server at once anyway, but 62 | sshuttle correctly merges /etc/hosts entries between 63 | all running copies. Third, if you're only routing a 64 | few subnets over the VPN, you probably would prefer to 65 | keep using your local DNS server for everything else. 66 | 67 | -N, --auto-nets 68 | : in addition to the subnets provided on the command 69 | line, ask the server which subnets it thinks we should 70 | route, and route those automatically. The suggestions 71 | are taken automatically from the server's routing 72 | table. 73 | 74 | --python 75 | : specify the name/path of the remote python interpreter. 76 | The default is just `python`, which means to use the 77 | default python interpreter on the remote system's PATH. 78 | 79 | -r, --remote=*[username@]sshserver[:port]* 80 | : the remote hostname and optional username and ssh 81 | port number to use for connecting to the remote server. 82 | For example, example.com, testuser@example.com, 83 | testuser@example.com:2222, or example.com:2244. 84 | 85 | -x, --exclude=*subnet* 86 | : explicitly exclude this subnet from forwarding. The 87 | format of this option is the same as the `` 88 | option. To exclude more than one subnet, specify the 89 | `-x` option more than once. You can say something like 90 | `0/0 -x 1.2.3.0/24` to forward everything except the 91 | local subnet over the VPN, for example. 92 | 93 | -v, --verbose 94 | : print more information about the session. This option 95 | can be used more than once for increased verbosity. By 96 | default, sshuttle prints only error messages. 97 | 98 | -e, --ssh-cmd 99 | : the command to use to connect to the remote server. The 100 | default is just `ssh`. Use this if your ssh client is 101 | in a non-standard location or you want to provide extra 102 | options to the ssh command, for example, `-e 'ssh -v'`. 103 | 104 | --seed-hosts 105 | : a comma-separated list of hostnames to use to 106 | initialize the `--auto-hosts` scan algorithm. 107 | `--auto-hosts` does things like poll local SMB servers 108 | for lists of local hostnames, but can speed things up 109 | if you use this option to give it a few names to start 110 | from. 111 | 112 | --no-latency-control 113 | : sacrifice latency to improve bandwidth benchmarks. ssh 114 | uses really big socket buffers, which can overload the 115 | connection if you start doing large file transfers, 116 | thus making all your other sessions inside the same 117 | tunnel go slowly. Normally, sshuttle tries to avoid 118 | this problem using a "fullness check" that allows only 119 | a certain amount of outstanding data to be buffered at 120 | a time. But on high-bandwidth links, this can leave a 121 | lot of your bandwidth underutilized. It also makes 122 | sshuttle seem slow in bandwidth benchmarks (benchmarks 123 | rarely test ping latency, which is what sshuttle is 124 | trying to control). This option disables the latency 125 | control feature, maximizing bandwidth usage. Use at 126 | your own risk. 127 | 128 | -D, --daemon 129 | : automatically fork into the background after connecting 130 | to the remote server. Implies `--syslog`. 131 | 132 | --syslog 133 | : after connecting, send all log messages to the 134 | `syslog`(3) service instead of stderr. 
This is 135 | implicit if you use `--daemon`. 136 | 137 | --pidfile=*pidfilename* 138 | : when using `--daemon`, save sshuttle's pid to 139 | *pidfilename*. The default is `sshuttle.pid` in the 140 | current directory. 141 | 142 | --server 143 | : (internal use only) run the sshuttle server on 144 | stdin/stdout. This is what the client runs on 145 | the remote end. 146 | 147 | --firewall 148 | : (internal use only) run the firewall manager. This is 149 | the only part of sshuttle that must run as root. If 150 | you start sshuttle as a non-root user, it will 151 | automatically run `sudo` or `su` to start the firewall 152 | manager, but the core of sshuttle still runs as a 153 | normal user. 154 | 155 | --hostwatch 156 | : (internal use only) run the hostwatch daemon. This 157 | process runs on the server side and collects hostnames for 158 | the `--auto-hosts` option. Using this option by itself 159 | makes it a lot easier to debug and test the `--auto-hosts` 160 | feature. 161 | 162 | 163 | # EXAMPLES 164 | 165 | Test locally by proxying all local connections, without using ssh: 166 | 167 | $ sshuttle -v 0/0 168 | 169 | Starting sshuttle proxy. 170 | Listening on ('0.0.0.0', 12300). 171 | [local sudo] Password: 172 | firewall manager ready. 173 | c : connecting to server... 174 | s: available routes: 175 | s: 192.168.42.0/24 176 | c : connected. 177 | firewall manager: starting transproxy. 178 | c : Accept: 192.168.42.106:50035 -> 192.168.42.121:139. 179 | c : Accept: 192.168.42.121:47523 -> 77.141.99.22:443. 180 | ...etc... 181 | ^C 182 | firewall manager: undoing changes. 183 | KeyboardInterrupt 184 | c : Keyboard interrupt: exiting. 185 | c : SW#8:192.168.42.121:47523: deleting 186 | c : SW#6:192.168.42.106:50035: deleting 187 | 188 | Test connection to a remote server, with automatic hostname 189 | and subnet guessing: 190 | 191 | $ sshuttle -vNHr example.org 192 | 193 | Starting sshuttle proxy. 194 | Listening on ('0.0.0.0', 12300). 195 | firewall manager ready. 196 | c : connecting to server... 197 | s: available routes: 198 | s: 77.141.99.0/24 199 | c : connected. 200 | c : seed_hosts: [] 201 | firewall manager: starting transproxy. 202 | hostwatch: Found: testbox1: 1.2.3.4 203 | hostwatch: Found: mytest2: 5.6.7.8 204 | hostwatch: Found: domaincontroller: 99.1.2.3 205 | c : Accept: 192.168.42.121:60554 -> 77.141.99.22:22. 206 | ^C 207 | firewall manager: undoing changes. 208 | c : Keyboard interrupt: exiting. 209 | c : SW#6:192.168.42.121:60554: deleting 210 | 211 | 212 | # DISCUSSION 213 | 214 | When it starts, sshuttle creates an ssh session to the 215 | server specified by the `-r` option. If `-r` is omitted, 216 | it will start both its client and server locally, which is 217 | sometimes useful for testing. 218 | 219 | After connecting to the remote server, sshuttle uploads its 220 | (python) source code to the remote end and executes it 221 | there. Thus, you don't need to install sshuttle on the 222 | remote server, and there are never sshuttle version 223 | conflicts between client and server. 224 | 225 | Unlike most VPNs, sshuttle forwards sessions, not packets. 226 | That is, it uses kernel transparent proxying (`iptables 227 | REDIRECT` rules on Linux, or `ipfw fwd` rules on BSD) to 228 | capture outgoing TCP sessions, then creates entirely 229 | separate TCP sessions out to the original destination at 230 | the other end of the tunnel. 231 | 232 | Packet-level forwarding (eg. 
using the tun/tap devices on 233 | Linux) seems elegant at first, but it results in 234 | several problems, notably the 'tcp over tcp' problem. The 235 | tcp protocol depends fundamentally on packets being dropped 236 | in order to implement its congestion control algorithm; if 237 | you pass tcp packets through a tcp-based tunnel (such as 238 | ssh), the inner tcp packets will never be dropped, and so 239 | the inner tcp stream's congestion control will be 240 | completely broken, and performance will be terrible. Thus, 241 | packet-based VPNs (such as IPsec and openvpn) cannot use 242 | tcp-based encrypted streams like ssh or ssl, and have to 243 | implement their own encryption from scratch, which is very 244 | complex and error-prone. 245 | 246 | sshuttle's simplicity comes from the fact that it can 247 | safely use the existing ssh encrypted tunnel without 248 | incurring a performance penalty. It does this by letting 249 | the client-side kernel manage the incoming tcp stream, and 250 | the server-side kernel manage the outgoing tcp stream; 251 | there is no need for congestion control to be shared 252 | between the two separate streams, so a tcp-based tunnel is 253 | fine. 254 | 255 | 256 | # BUGS 257 | 258 | On MacOS 10.6 (at least up to 10.6.6), your network will 259 | stop responding about 10 minutes after the first time you 260 | start sshuttle, because of a MacOS kernel bug relating to 261 | arp and the net.inet.ip.scopedroute sysctl. To fix it, 262 | just switch your wireless off and on. Sshuttle makes the 263 | kernel setting it changes permanent, so this won't happen 264 | again, even after a reboot. 265 | 266 | 267 | # SEE ALSO 268 | 269 | `ssh`(1), `python`(1) 270 | 271 | -------------------------------------------------------------------------------- /src/ssnet.py: -------------------------------------------------------------------------------- 1 | import struct, socket, errno, select 2 | if not globals().get('skip_imports'): 3 |     from helpers import * 4 | 5 | MAX_CHANNEL = 65535 6 | 7 | # these don't exist in the socket module in python 2.3!
8 | SHUT_RD = 0 9 | SHUT_WR = 1 10 | SHUT_RDWR = 2 11 | 12 | 13 | HDR_LEN = 8 14 | 15 | 16 | CMD_EXIT = 0x4200 17 | CMD_PING = 0x4201 18 | CMD_PONG = 0x4202 19 | CMD_TCP_CONNECT = 0x4203 20 | CMD_TCP_STOP_SENDING = 0x4204 21 | CMD_TCP_EOF = 0x4205 22 | CMD_TCP_DATA = 0x4206 23 | CMD_ROUTES = 0x4207 24 | CMD_HOST_REQ = 0x4208 25 | CMD_HOST_LIST = 0x4209 26 | CMD_DNS_REQ = 0x420a 27 | CMD_DNS_RESPONSE = 0x420b 28 | CMD_UDP_OPEN = 0x420c 29 | CMD_UDP_DATA = 0x420d 30 | CMD_UDP_CLOSE = 0x420e 31 | 32 | cmd_to_name = { 33 | CMD_EXIT: 'EXIT', 34 | CMD_PING: 'PING', 35 | CMD_PONG: 'PONG', 36 | CMD_TCP_CONNECT: 'TCP_CONNECT', 37 | CMD_TCP_STOP_SENDING: 'TCP_STOP_SENDING', 38 | CMD_TCP_EOF: 'TCP_EOF', 39 | CMD_TCP_DATA: 'TCP_DATA', 40 | CMD_ROUTES: 'ROUTES', 41 | CMD_HOST_REQ: 'HOST_REQ', 42 | CMD_HOST_LIST: 'HOST_LIST', 43 | CMD_DNS_REQ: 'DNS_REQ', 44 | CMD_DNS_RESPONSE: 'DNS_RESPONSE', 45 | CMD_UDP_OPEN: 'UDP_OPEN', 46 | CMD_UDP_DATA: 'UDP_DATA', 47 | CMD_UDP_CLOSE: 'UDP_CLOSE', 48 | } 49 | 50 | 51 | NET_ERRS = [errno.ECONNREFUSED, errno.ETIMEDOUT, 52 | errno.EHOSTUNREACH, errno.ENETUNREACH, 53 | errno.EHOSTDOWN, errno.ENETDOWN] 54 | 55 | 56 | def _add(l, elem): 57 | if not elem in l: 58 | l.append(elem) 59 | 60 | 61 | def _fds(l): 62 | out = [] 63 | for i in l: 64 | try: 65 | out.append(i.fileno()) 66 | except AttributeError: 67 | out.append(i) 68 | out.sort() 69 | return out 70 | 71 | 72 | def _nb_clean(func, *args): 73 | try: 74 | return func(*args) 75 | except OSError, e: 76 | if e.errno not in (errno.EWOULDBLOCK, errno.EAGAIN): 77 | raise 78 | else: 79 | debug3('%s: err was: %s\n' % (func.__name__, e)) 80 | return None 81 | 82 | 83 | def _try_peername(sock): 84 | try: 85 | pn = sock.getpeername() 86 | if pn: 87 | return '%s:%s' % (pn[0], pn[1]) 88 | except socket.error, e: 89 | if e.args[0] not in (errno.ENOTCONN, errno.ENOTSOCK): 90 | raise 91 | return 'unknown' 92 | 93 | 94 | _swcount = 0 95 | class SockWrapper: 96 | def __init__(self, rsock, wsock, connect_to=None, peername=None): 97 | global _swcount 98 | _swcount += 1 99 | debug3('creating new SockWrapper (%d now exist)\n' % _swcount) 100 | self.exc = None 101 | self.rsock = rsock 102 | self.wsock = wsock 103 | self.shut_read = self.shut_write = False 104 | self.buf = [] 105 | self.connect_to = connect_to 106 | self.peername = peername or _try_peername(self.rsock) 107 | self.try_connect() 108 | 109 | def __del__(self): 110 | global _swcount 111 | _swcount -= 1 112 | debug1('%r: deleting (%d remain)\n' % (self, _swcount)) 113 | if self.exc: 114 | debug1('%r: error was: %s\n' % (self, self.exc)) 115 | 116 | def __repr__(self): 117 | if self.rsock == self.wsock: 118 | fds = '#%d' % self.rsock.fileno() 119 | else: 120 | fds = '#%d,%d' % (self.rsock.fileno(), self.wsock.fileno()) 121 | return 'SW%s:%s' % (fds, self.peername) 122 | 123 | def seterr(self, e): 124 | if not self.exc: 125 | self.exc = e 126 | self.nowrite() 127 | self.noread() 128 | 129 | def try_connect(self): 130 | if self.connect_to and self.shut_write: 131 | self.noread() 132 | self.connect_to = None 133 | if not self.connect_to: 134 | return # already connected 135 | self.rsock.setblocking(False) 136 | debug3('%r: trying connect to %r\n' % (self, self.connect_to)) 137 | family = self.rsock.family 138 | if family==socket.AF_INET and socket.inet_pton(family, self.connect_to[0])[0] == '\0': 139 | self.seterr(Exception("Can't connect to %r: " 140 | "IP address starts with zero\n" 141 | % (self.connect_to,))) 142 | self.connect_to = None 143 | return 144 | try: 145 | 
self.rsock.connect(self.connect_to) 146 | # connected successfully (Linux) 147 | self.connect_to = None 148 | except socket.error, e: 149 | debug3('%r: connect result: %s\n' % (self, e)) 150 | if e.args[0] == errno.EINVAL: 151 | # this is what happens when you call connect() on a socket 152 | # that is now connected but returned EINPROGRESS last time, 153 | # on BSD, on python pre-2.5.1. We need to use getsockopt() 154 | # to get the "real" error. Later pythons do this 155 | # automatically, so this code won't run. 156 | realerr = self.rsock.getsockopt(socket.SOL_SOCKET, 157 | socket.SO_ERROR) 158 | e = socket.error(realerr, os.strerror(realerr)) 159 | debug3('%r: fixed connect result: %s\n' % (self, e)) 160 | if e.args[0] in [errno.EINPROGRESS, errno.EALREADY]: 161 | pass # not connected yet 162 | elif e.args[0] == 0: 163 | # connected successfully (weird Linux bug?) 164 | # Sometimes Linux seems to return EINVAL when it isn't 165 | # invalid. This *may* be caused by a race condition 166 | # between connect() and getsockopt(SO_ERROR) (ie. it 167 | # finishes connecting in between the two, so there is no 168 | # longer an error). However, I'm not sure of that. 169 | # 170 | # I did get at least one report that the problem went away 171 | # when we added this, however. 172 | self.connect_to = None 173 | elif e.args[0] == errno.EISCONN: 174 | # connected successfully (BSD) 175 | self.connect_to = None 176 | elif e.args[0] in NET_ERRS + [errno.EACCES, errno.EPERM]: 177 | # a "normal" kind of error 178 | self.connect_to = None 179 | self.seterr(e) 180 | else: 181 | raise # error we've never heard of?! barf completely. 182 | 183 | def noread(self): 184 | if not self.shut_read: 185 | debug2('%r: done reading\n' % self) 186 | self.shut_read = True 187 | #self.rsock.shutdown(SHUT_RD) # doesn't do anything anyway 188 | 189 | def nowrite(self): 190 | if not self.shut_write: 191 | debug2('%r: done writing\n' % self) 192 | self.shut_write = True 193 | try: 194 | self.wsock.shutdown(SHUT_WR) 195 | except socket.error, e: 196 | self.seterr('nowrite: %s' % e) 197 | 198 | def too_full(self): 199 | return False # fullness is determined by the socket's select() state 200 | 201 | def uwrite(self, buf): 202 | if self.connect_to: 203 | return 0 # still connecting 204 | self.wsock.setblocking(False) 205 | try: 206 | return _nb_clean(os.write, self.wsock.fileno(), buf) 207 | except OSError, e: 208 | if e.errno == errno.EPIPE: 209 | debug1('%r: uwrite: got EPIPE\n' % self) 210 | self.nowrite() 211 | return 0 212 | else: 213 | # unexpected error... stream is dead 214 | self.seterr('uwrite: %s' % e) 215 | return 0 216 | 217 | def write(self, buf): 218 | assert(buf) 219 | return self.uwrite(buf) 220 | 221 | def uread(self): 222 | if self.connect_to: 223 | return None # still connecting 224 | if self.shut_read: 225 | return 226 | self.rsock.setblocking(False) 227 | try: 228 | return _nb_clean(os.read, self.rsock.fileno(), 65536) 229 | except OSError, e: 230 | self.seterr('uread: %s' % e) 231 | return '' # unexpected error... 
we'll call it EOF 232 | 233 | def fill(self): 234 | if self.buf: 235 | return 236 | rb = self.uread() 237 | if rb: 238 | self.buf.append(rb) 239 | if rb == '': # empty string means EOF; None means temporarily empty 240 | self.noread() 241 | 242 | def copy_to(self, outwrap): 243 | if self.buf and self.buf[0]: 244 | wrote = outwrap.write(self.buf[0]) 245 | self.buf[0] = self.buf[0][wrote:] 246 | while self.buf and not self.buf[0]: 247 | self.buf.pop(0) 248 | if not self.buf and self.shut_read: 249 | outwrap.nowrite() 250 | 251 | 252 | class Handler: 253 | def __init__(self, socks = None, callback = None): 254 | self.ok = True 255 | self.socks = socks or [] 256 | if callback: 257 | self.callback = callback 258 | 259 | def pre_select(self, r, w, x): 260 | for i in self.socks: 261 | _add(r, i) 262 | 263 | def callback(self): 264 | log('--no callback defined-- %r\n' % self) 265 | (r,w,x) = select.select(self.socks, [], [], 0) 266 | for s in r: 267 | v = s.recv(4096) 268 | if not v: 269 | log('--closed-- %r\n' % self) 270 | self.socks = [] 271 | self.ok = False 272 | 273 | 274 | class Proxy(Handler): 275 | def __init__(self, wrap1, wrap2): 276 | Handler.__init__(self, [wrap1.rsock, wrap1.wsock, 277 | wrap2.rsock, wrap2.wsock]) 278 | self.wrap1 = wrap1 279 | self.wrap2 = wrap2 280 | 281 | def pre_select(self, r, w, x): 282 | if self.wrap1.shut_write: self.wrap2.noread() 283 | if self.wrap2.shut_write: self.wrap1.noread() 284 | 285 | if self.wrap1.connect_to: 286 | _add(w, self.wrap1.rsock) 287 | elif self.wrap1.buf: 288 | if not self.wrap2.too_full(): 289 | _add(w, self.wrap2.wsock) 290 | elif not self.wrap1.shut_read: 291 | _add(r, self.wrap1.rsock) 292 | 293 | if self.wrap2.connect_to: 294 | _add(w, self.wrap2.rsock) 295 | elif self.wrap2.buf: 296 | if not self.wrap1.too_full(): 297 | _add(w, self.wrap1.wsock) 298 | elif not self.wrap2.shut_read: 299 | _add(r, self.wrap2.rsock) 300 | 301 | def callback(self): 302 | self.wrap1.try_connect() 303 | self.wrap2.try_connect() 304 | self.wrap1.fill() 305 | self.wrap2.fill() 306 | self.wrap1.copy_to(self.wrap2) 307 | self.wrap2.copy_to(self.wrap1) 308 | if self.wrap1.buf and self.wrap2.shut_write: 309 | self.wrap1.buf = [] 310 | self.wrap1.noread() 311 | if self.wrap2.buf and self.wrap1.shut_write: 312 | self.wrap2.buf = [] 313 | self.wrap2.noread() 314 | if (self.wrap1.shut_read and self.wrap2.shut_read and 315 | not self.wrap1.buf and not self.wrap2.buf): 316 | self.ok = False 317 | self.wrap1.nowrite() 318 | self.wrap2.nowrite() 319 | 320 | 321 | class Mux(Handler): 322 | def __init__(self, rsock, wsock): 323 | Handler.__init__(self, [rsock, wsock]) 324 | self.rsock = rsock 325 | self.wsock = wsock 326 | self.new_channel = self.got_dns_req = self.got_routes = None 327 | self.got_udp_open = self.got_udp_data = self.got_udp_close = None 328 | self.got_host_req = self.got_host_list = None 329 | self.channels = {} 330 | self.chani = 0 331 | self.want = 0 332 | self.inbuf = '' 333 | self.outbuf = [] 334 | self.fullness = 0 335 | self.too_full = False 336 | self.send(0, CMD_PING, 'chicken') 337 | 338 | def next_channel(self): 339 | # channel 0 is special, so we never allocate it 340 | for timeout in xrange(1024): 341 | self.chani += 1 342 | if self.chani > MAX_CHANNEL: 343 | self.chani = 1 344 | if not self.channels.get(self.chani): 345 | return self.chani 346 | 347 | def amount_queued(self): 348 | total = 0 349 | for b in self.outbuf: 350 | total += len(b) 351 | return total 352 | 353 | def check_fullness(self): 354 | if self.fullness > 32768: 355 | if 
not self.too_full: 356 | self.send(0, CMD_PING, 'rttest') 357 | self.too_full = True 358 | #ob = [] 359 | #for b in self.outbuf: 360 | # (s1,s2,c) = struct.unpack('!ccH', b[:4]) 361 | # ob.append(c) 362 | #log('outbuf: %d %r\n' % (self.amount_queued(), ob)) 363 | 364 | def send(self, channel, cmd, data): 365 | data = str(data) 366 | assert(len(data) <= 65535) 367 | p = struct.pack('!ccHHH', 'S', 'S', channel, cmd, len(data)) + data 368 | self.outbuf.append(p) 369 | debug2(' > channel=%d cmd=%s len=%d (fullness=%d)\n' 370 | % (channel, cmd_to_name.get(cmd,hex(cmd)), 371 | len(data), self.fullness)) 372 | self.fullness += len(data) 373 | 374 | def got_packet(self, channel, cmd, data): 375 | debug2('< channel=%d cmd=%s len=%d\n' 376 | % (channel, cmd_to_name.get(cmd,hex(cmd)), len(data))) 377 | if cmd == CMD_PING: 378 | self.send(0, CMD_PONG, data) 379 | elif cmd == CMD_PONG: 380 | debug2('received PING response\n') 381 | self.too_full = False 382 | self.fullness = 0 383 | elif cmd == CMD_EXIT: 384 | self.ok = False 385 | elif cmd == CMD_TCP_CONNECT: 386 | assert(not self.channels.get(channel)) 387 | if self.new_channel: 388 | self.new_channel(channel, data) 389 | elif cmd == CMD_DNS_REQ: 390 | assert(not self.channels.get(channel)) 391 | if self.got_dns_req: 392 | self.got_dns_req(channel, data) 393 | elif cmd == CMD_UDP_OPEN: 394 | assert(not self.channels.get(channel)) 395 | if self.got_udp_open: 396 | self.got_udp_open(channel, data) 397 | elif cmd == CMD_ROUTES: 398 | if self.got_routes: 399 | self.got_routes(data) 400 | else: 401 | raise Exception('got CMD_ROUTES without got_routes?') 402 | elif cmd == CMD_HOST_REQ: 403 | if self.got_host_req: 404 | self.got_host_req(data) 405 | else: 406 | raise Exception('got CMD_HOST_REQ without got_host_req?') 407 | elif cmd == CMD_HOST_LIST: 408 | if self.got_host_list: 409 | self.got_host_list(data) 410 | else: 411 | raise Exception('got CMD_HOST_LIST without got_host_list?') 412 | else: 413 | callback = self.channels.get(channel) 414 | if not callback: 415 | log('warning: closed channel %d got cmd=%s len=%d\n' 416 | % (channel, cmd_to_name.get(cmd,hex(cmd)), len(data))) 417 | else: 418 | callback(cmd, data) 419 | 420 | def flush(self): 421 | self.wsock.setblocking(False) 422 | if self.outbuf and self.outbuf[0]: 423 | wrote = _nb_clean(os.write, self.wsock.fileno(), self.outbuf[0]) 424 | debug2('mux wrote: %r/%d\n' % (wrote, len(self.outbuf[0]))) 425 | if wrote: 426 | self.outbuf[0] = self.outbuf[0][wrote:] 427 | while self.outbuf and not self.outbuf[0]: 428 | self.outbuf[0:1] = [] 429 | 430 | def fill(self): 431 | self.rsock.setblocking(False) 432 | try: 433 | b = _nb_clean(os.read, self.rsock.fileno(), 32768) 434 | except OSError, e: 435 | raise Fatal('other end: %r' % e) 436 | #log('<<< %r\n' % b) 437 | if b == '': # EOF 438 | self.ok = False 439 | if b: 440 | self.inbuf += b 441 | 442 | def handle(self): 443 | self.fill() 444 | #log('inbuf is: (%d,%d) %r\n' 445 | # % (self.want, len(self.inbuf), self.inbuf)) 446 | while 1: 447 | if len(self.inbuf) >= (self.want or HDR_LEN): 448 | (s1,s2,channel,cmd,datalen) = \ 449 | struct.unpack('!ccHHH', self.inbuf[:HDR_LEN]) 450 | assert(s1 == 'S') 451 | assert(s2 == 'S') 452 | self.want = datalen + HDR_LEN 453 | if self.want and len(self.inbuf) >= self.want: 454 | data = self.inbuf[HDR_LEN:self.want] 455 | self.inbuf = self.inbuf[self.want:] 456 | self.want = 0 457 | self.got_packet(channel, cmd, data) 458 | else: 459 | break 460 | 461 | def pre_select(self, r, w, x): 462 | _add(r, self.rsock) 463 | if 
self.outbuf: 464 | _add(w, self.wsock) 465 | 466 | def callback(self): 467 | (r,w,x) = select.select([self.rsock], [self.wsock], [], 0) 468 | if self.rsock in r: 469 | self.handle() 470 | if self.outbuf and self.wsock in w: 471 | self.flush() 472 | 473 | 474 | class MuxWrapper(SockWrapper): 475 | def __init__(self, mux, channel): 476 | SockWrapper.__init__(self, mux.rsock, mux.wsock) 477 | self.mux = mux 478 | self.channel = channel 479 | self.mux.channels[channel] = self.got_packet 480 | self.socks = [] 481 | debug2('new channel: %d\n' % channel) 482 | 483 | def __del__(self): 484 | self.nowrite() 485 | SockWrapper.__del__(self) 486 | 487 | def __repr__(self): 488 | return 'SW%r:Mux#%d' % (self.peername,self.channel) 489 | 490 | def noread(self): 491 | if not self.shut_read: 492 | self.shut_read = True 493 | self.mux.send(self.channel, CMD_TCP_STOP_SENDING, '') 494 | self.maybe_close() 495 | 496 | def nowrite(self): 497 | if not self.shut_write: 498 | self.shut_write = True 499 | self.mux.send(self.channel, CMD_TCP_EOF, '') 500 | self.maybe_close() 501 | 502 | def maybe_close(self): 503 | if self.shut_read and self.shut_write: 504 | # remove the mux's reference to us. The python garbage collector 505 | # will then be able to reap our object. 506 | self.mux.channels[self.channel] = None 507 | 508 | def too_full(self): 509 | return self.mux.too_full 510 | 511 | def uwrite(self, buf): 512 | if self.mux.too_full: 513 | return 0 # too much already enqueued 514 | if len(buf) > 2048: 515 | buf = buf[:2048] 516 | self.mux.send(self.channel, CMD_TCP_DATA, buf) 517 | return len(buf) 518 | 519 | def uread(self): 520 | if self.shut_read: 521 | return '' # EOF 522 | else: 523 | return None # no data available right now 524 | 525 | def got_packet(self, cmd, data): 526 | if cmd == CMD_TCP_EOF: 527 | self.noread() 528 | elif cmd == CMD_TCP_STOP_SENDING: 529 | self.nowrite() 530 | elif cmd == CMD_TCP_DATA: 531 | self.buf.append(data) 532 | else: 533 | raise Exception('unknown command %d (%d bytes)' 534 | % (cmd, len(data))) 535 | 536 | 537 | def connect_dst(family, ip, port): 538 | debug2('Connecting to %s:%d\n' % (ip, port)) 539 | outsock = socket.socket(family) 540 | outsock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42) 541 | return SockWrapper(outsock, outsock, 542 | connect_to = (ip,port), 543 | peername = '%s:%d' % (ip,port)) 544 | 545 | 546 | def runonce(handlers, mux): 547 | r = [] 548 | w = [] 549 | x = [] 550 | to_remove = filter(lambda s: not s.ok, handlers) 551 | for h in to_remove: 552 | handlers.remove(h) 553 | 554 | for s in handlers: 555 | s.pre_select(r,w,x) 556 | debug2('Waiting: %d r=%r w=%r x=%r (fullness=%d/%d)\n' 557 | % (len(handlers), _fds(r), _fds(w), _fds(x), 558 | mux.fullness, mux.too_full)) 559 | (r,w,x) = select.select(r,w,x) 560 | debug2(' Ready: %d r=%r w=%r x=%r\n' 561 | % (len(handlers), _fds(r), _fds(w), _fds(x))) 562 | ready = r+w+x 563 | did = {} 564 | for h in handlers: 565 | for s in h.socks: 566 | if s in ready: 567 | h.callback() 568 | did[s] = 1 569 | for s in ready: 570 | if not s in did: 571 | raise Fatal('socket %r was not used by any handler' % s) 572 | -------------------------------------------------------------------------------- /src/ssyslog.py: -------------------------------------------------------------------------------- 1 | import sys, os 2 | from compat import ssubprocess 3 | 4 | 5 | _p = None 6 | def start_syslog(): 7 | global _p 8 | _p = ssubprocess.Popen(['logger', 9 | '-p', 'daemon.notice', 10 | '-t', 'sshuttle'], stdin=ssubprocess.PIPE) 11 | 12 | 
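# For reference, the byte stream that server.main() speaks over
# stdin/stdout is framed by ssnet.Mux.send() (above): an 8-byte header
# packed as struct.pack('!ccHHH', 'S', 'S', channel, cmd, datalen),
# followed by datalen bytes of payload.  A minimal sketch of one frame,
# the initial keepalive sent from Mux.__init__():
#
#     import struct
#     hdr = struct.pack('!ccHHH', 'S', 'S', 0, 0x4201, len('chicken'))
#     frame = hdr + 'chicken'    # channel 0, CMD_PING, 7-byte payload
#
# Mux.handle() unpacks the same '!ccHHH' header and dispatches on cmd.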
13 | def stderr_to_syslog(): 14 | sys.stdout.flush() 15 | sys.stderr.flush() 16 | os.dup2(_p.stdin.fileno(), 2) 17 | -------------------------------------------------------------------------------- /src/stresstest.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import sys, os, socket, select, struct, time 3 | 4 | listener = socket.socket() 5 | listener.bind(('127.0.0.1', 0)) 6 | listener.listen(500) 7 | 8 | servers = [] 9 | clients = [] 10 | remain = {} 11 | 12 | NUMCLIENTS = 50 13 | count = 0 14 | 15 | 16 | while 1: 17 | if len(clients) < NUMCLIENTS: 18 | c = socket.socket() 19 | c.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 20 | c.bind(('0.0.0.0', 0)) 21 | c.connect(listener.getsockname()) 22 | count += 1 23 | if count >= 16384: 24 | count = 1 25 | print 'cli CREATING %d' % count 26 | b = struct.pack('I', count) + 'x'*count 27 | remain[c] = count 28 | print 'cli >> %r' % len(b) 29 | c.send(b) 30 | c.shutdown(socket.SHUT_WR) 31 | clients.append(c) 32 | r = [listener] 33 | time.sleep(0.1) 34 | else: 35 | r = [listener]+servers+clients 36 | print 'select(%d)' % len(r) 37 | r,w,x = select.select(r, [], [], 5) 38 | assert(r) 39 | for i in r: 40 | if i == listener: 41 | s,addr = listener.accept() 42 | servers.append(s) 43 | elif i in servers: 44 | b = i.recv(4096) 45 | print 'srv << %r' % len(b) 46 | if not i in remain: 47 | assert(len(b) >= 4) 48 | want = struct.unpack('I', b[:4])[0] 49 | b = b[4:] 50 | #i.send('y'*want) 51 | else: 52 | want = remain[i] 53 | if want < len(b): 54 | print 'weird wanted %d bytes, got %d: %r' % (want, len(b), b) 55 | assert(want >= len(b)) 56 | want -= len(b) 57 | remain[i] = want 58 | if not b: # EOF 59 | if want: 60 | print 'weird: eof but wanted %d more' % want 61 | assert(want == 0) 62 | i.close() 63 | servers.remove(i) 64 | del remain[i] 65 | else: 66 | print 'srv >> %r' % len(b) 67 | i.send('y'*len(b)) 68 | if not want: 69 | i.shutdown(socket.SHUT_WR) 70 | elif i in clients: 71 | b = i.recv(4096) 72 | print 'cli << %r' % len(b) 73 | want = remain[i] 74 | if want < len(b): 75 | print 'weird wanted %d bytes, got %d: %r' % (want, len(b), b) 76 | assert(want >= len(b)) 77 | want -= len(b) 78 | remain[i] = want 79 | if not b: # EOF 80 | if want: 81 | print 'weird: eof but wanted %d more' % want 82 | assert(want == 0) 83 | i.close() 84 | clients.remove(i) 85 | del remain[i] 86 | listener.accept() 87 | -------------------------------------------------------------------------------- /src/ui-macos/.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *~ 3 | /*.nib 4 | /debug.app 5 | /sources.list 6 | /Sshuttle VPN.app 7 | /*.tar.gz 8 | /*.zip 9 | -------------------------------------------------------------------------------- /src/ui-macos/Info.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | CFBundleDevelopmentRegion 6 | English 7 | CFBundleDisplayName 8 | Sshuttle VPN 9 | CFBundleExecutable 10 | Sshuttle 11 | CFBundleIconFile 12 | app.icns 13 | CFBundleIdentifier 14 | ca.apenwarr.Sshuttle 15 | CFBundleInfoDictionaryVersion 16 | 6.0 17 | CFBundleName 18 | Sshuttle VPN 19 | CFBundlePackageType 20 | APPL 21 | CFBundleShortVersionString 22 | 0.0.0 23 | CFBundleSignature 24 | ???? 
25 | CFBundleVersion 26 | 0.0.0 27 | LSUIElement 28 | 1 29 | LSHasLocalizedDisplayName 30 | 31 | NSAppleScriptEnabled 32 | 33 | NSHumanReadableCopyright 34 | GNU LGPL Version 2 35 | NSMainNibFile 36 | MainMenu 37 | NSPrincipalClass 38 | NSApplication 39 | 40 | 41 | -------------------------------------------------------------------------------- /src/ui-macos/UserDefaults.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | startAtLogin 6 | 7 | autoReconnect 8 | 9 | 10 | 11 | -------------------------------------------------------------------------------- /src/ui-macos/all.do: -------------------------------------------------------------------------------- 1 | redo-ifchange debug.app dist 2 | -------------------------------------------------------------------------------- /src/ui-macos/app.icns: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brianmay/sshuttle/b3009b8f434f35d9e50550892bef1970264d7a5f/src/ui-macos/app.icns -------------------------------------------------------------------------------- /src/ui-macos/askpass.py: -------------------------------------------------------------------------------- 1 | import sys, os, re, subprocess 2 | 3 | def askpass(prompt): 4 | prompt = prompt.replace('"', "'") 5 | 6 | if 'yes/no' in prompt: 7 | return "yes" 8 | 9 | script=""" 10 | tell application "Finder" 11 | activate 12 | display dialog "%s" \ 13 | with title "Sshuttle SSH Connection" \ 14 | default answer "" \ 15 | with icon caution \ 16 | with hidden answer 17 | end tell 18 | """ % prompt 19 | 20 | p = subprocess.Popen(['osascript', '-e', script], stdout=subprocess.PIPE) 21 | out = p.stdout.read() 22 | rv = p.wait() 23 | if rv: 24 | return None 25 | g = re.match("text returned:(.*), button returned:.*", out) 26 | if not g: 27 | return None 28 | return g.group(1) 29 | -------------------------------------------------------------------------------- /src/ui-macos/bits/.gitignore: -------------------------------------------------------------------------------- 1 | /runpython 2 | -------------------------------------------------------------------------------- /src/ui-macos/bits/PkgInfo: -------------------------------------------------------------------------------- 1 | APPL???? -------------------------------------------------------------------------------- /src/ui-macos/bits/runpython.c: -------------------------------------------------------------------------------- 1 | /* 2 | * This rather pointless program acts like the python interpreter, except 3 | * it's intended to sit inside a MacOS .app package, so that its argv[0] 4 | * will point inside the package. 5 | * 6 | * NSApplicationMain() looks for Info.plist using the path in argv[0], which 7 | * goes wrong if your interpreter is /usr/bin/python. 
8 | */ 9 | #include 10 | #include 11 | #include 12 | 13 | int main(int argc, char **argv) 14 | { 15 | char *path = strdup(argv[0]), *cptr; 16 | char *args[] = {argv[0], "../Resources/main.py", NULL}; 17 | cptr = strrchr(path, '/'); 18 | if (cptr) 19 | *cptr = 0; 20 | chdir(path); 21 | free(path); 22 | return Py_Main(2, args); 23 | } 24 | -------------------------------------------------------------------------------- /src/ui-macos/bits/runpython.do: -------------------------------------------------------------------------------- 1 | exec >&2 2 | redo-ifchange runpython.c 3 | ARCHES="" 4 | printf "Platforms: " 5 | for d in /usr/libexec/gcc/darwin/*; do 6 | PLAT=$(basename "$d") 7 | [ "$PLAT" != "ppc64" ] || continue # fails for some reason on my Mac 8 | ARCHES="$ARCHES -arch $PLAT" 9 | printf "$PLAT " 10 | done 11 | printf "\n" 12 | gcc $ARCHES \ 13 | -Wall -o $3 runpython.c \ 14 | -I/usr/include/python2.5 \ 15 | -lpython2.5 16 | -------------------------------------------------------------------------------- /src/ui-macos/chicken-tiny-bw.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brianmay/sshuttle/b3009b8f434f35d9e50550892bef1970264d7a5f/src/ui-macos/chicken-tiny-bw.png -------------------------------------------------------------------------------- /src/ui-macos/chicken-tiny-err.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brianmay/sshuttle/b3009b8f434f35d9e50550892bef1970264d7a5f/src/ui-macos/chicken-tiny-err.png -------------------------------------------------------------------------------- /src/ui-macos/chicken-tiny.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brianmay/sshuttle/b3009b8f434f35d9e50550892bef1970264d7a5f/src/ui-macos/chicken-tiny.png -------------------------------------------------------------------------------- /src/ui-macos/clean.do: -------------------------------------------------------------------------------- 1 | exec >&2 2 | find . -name '*~' | xargs rm -f 3 | rm -rf *.app *.zip *.tar.gz 4 | rm -f bits/runpython *.nib sources.list 5 | -------------------------------------------------------------------------------- /src/ui-macos/debug.app.do: -------------------------------------------------------------------------------- 1 | redo-ifchange bits/runpython MainMenu.nib 2 | rm -rf debug.app 3 | mkdir debug.app debug.app/Contents 4 | cd debug.app/Contents 5 | ln -s ../.. Resources 6 | ln -s ../.. English.lproj 7 | ln -s ../../Info.plist . 8 | ln -s ../../app.icns . 9 | 10 | mkdir MacOS 11 | cd MacOS 12 | ln -s ../../../bits/runpython Sshuttle 13 | 14 | cd ../../.. 
15 | redo-ifchange $(find debug.app -type f) 16 | -------------------------------------------------------------------------------- /src/ui-macos/default.app.do: -------------------------------------------------------------------------------- 1 | TOP=$PWD 2 | redo-ifchange sources.list 3 | redo-ifchange Info.plist bits/runpython \ 4 | $(while read name newname; do echo "$name"; done &2 2 | IFS=" 3 | " 4 | redo-ifchange $1.app 5 | tar -czf $3 $1.app/ 6 | -------------------------------------------------------------------------------- /src/ui-macos/default.app.zip.do: -------------------------------------------------------------------------------- 1 | exec >&2 2 | IFS=" 3 | " 4 | redo-ifchange $1.app 5 | zip -q -r $3 $1.app/ 6 | -------------------------------------------------------------------------------- /src/ui-macos/default.nib.do: -------------------------------------------------------------------------------- 1 | redo-ifchange $1.xib 2 | ibtool --compile $3 $1.xib 3 | -------------------------------------------------------------------------------- /src/ui-macos/dist.do: -------------------------------------------------------------------------------- 1 | redo-ifchange "Sshuttle VPN.app.zip" "Sshuttle VPN.app.tar.gz" 2 | -------------------------------------------------------------------------------- /src/ui-macos/git-export.do: -------------------------------------------------------------------------------- 1 | # update a local branch with pregenerated output files, so people can download 2 | # the completed tarballs from github. Since we don't have any real binaries, 3 | # our final distribution package contains mostly blobs from the source code, 4 | # so this doesn't cost us much extra space in the repo. 5 | BRANCH=dist/macos 6 | redo-ifchange 'Sshuttle VPN.app' 7 | git update-ref refs/heads/$BRANCH origin/$BRANCH '' 2>/dev/null || true 8 | 9 | export GIT_INDEX_FILE=$PWD/gitindex.tmp 10 | rm -f "$GIT_INDEX_FILE" 11 | git add -f 'Sshuttle VPN.app' 12 | 13 | MSG="MacOS precompiled app package for $(git describe)" 14 | TREE=$(git write-tree --prefix=ui-macos) 15 | git show-ref refs/heads/$BRANCH >/dev/null && PARENT="-p refs/heads/$BRANCH" 16 | COMMITID=$(echo "$MSG" | git commit-tree $TREE $PARENT) 17 | 18 | git update-ref refs/heads/$BRANCH $COMMITID 19 | rm -f "$GIT_INDEX_FILE" -------------------------------------------------------------------------------- /src/ui-macos/main.py: -------------------------------------------------------------------------------- 1 | import sys, os, pty 2 | from AppKit import * 3 | import my, models, askpass 4 | 5 | def sshuttle_args(host, auto_nets, auto_hosts, dns, nets, debug, 6 | no_latency_control): 7 | argv = [my.bundle_path('sshuttle/sshuttle', ''), '-r', host] 8 | assert(argv[0]) 9 | if debug: 10 | argv.append('-v') 11 | if auto_nets: 12 | argv.append('--auto-nets') 13 | if auto_hosts: 14 | argv.append('--auto-hosts') 15 | if dns: 16 | argv.append('--dns') 17 | if no_latency_control: 18 | argv.append('--no-latency-control') 19 | argv += nets 20 | return argv 21 | 22 | 23 | class _Callback(NSObject): 24 | def initWithFunc_(self, func): 25 | self = super(_Callback, self).init() 26 | self.func = func 27 | return self 28 | def func_(self, obj): 29 | return self.func(obj) 30 | 31 | 32 | class Callback: 33 | def __init__(self, func): 34 | self.obj = _Callback.alloc().initWithFunc_(func) 35 | self.sel = self.obj.func_ 36 | 37 | 38 | class Runner: 39 | def __init__(self, argv, logfunc, promptfunc, serverobj): 40 | print 'in __init__' 41 | self.id = argv 
42 | self.rv = None 43 | self.pid = None 44 | self.fd = None 45 | self.logfunc = logfunc 46 | self.promptfunc = promptfunc 47 | self.serverobj = serverobj 48 | self.buf = '' 49 | self.logfunc('\nConnecting to %s.\n' % self.serverobj.host()) 50 | print 'will run: %r' % argv 51 | self.serverobj.setConnected_(False) 52 | pid,fd = pty.fork() 53 | if pid == 0: 54 | # child 55 | try: 56 | os.execvp(argv[0], argv) 57 | except Exception, e: 58 | sys.stderr.write('failed to start: %r\n' % e) 59 | raise 60 | finally: 61 | os._exit(42) 62 | # parent 63 | self.pid = pid 64 | self.file = NSFileHandle.alloc()\ 65 | .initWithFileDescriptor_closeOnDealloc_(fd, True) 66 | self.cb = Callback(self.gotdata) 67 | NSNotificationCenter.defaultCenter()\ 68 | .addObserver_selector_name_object_(self.cb.obj, self.cb.sel, 69 | NSFileHandleDataAvailableNotification, self.file) 70 | self.file.waitForDataInBackgroundAndNotify() 71 | 72 | def __del__(self): 73 | self.wait() 74 | 75 | def _try_wait(self, options): 76 | if self.rv == None and self.pid > 0: 77 | pid,code = os.waitpid(self.pid, options) 78 | if pid == self.pid: 79 | if os.WIFEXITED(code): 80 | self.rv = os.WEXITSTATUS(code) 81 | else: 82 | self.rv = -os.WSTOPSIG(code) 83 | self.serverobj.setConnected_(False) 84 | self.serverobj.setError_('VPN process died') 85 | self.logfunc('Disconnected.\n') 86 | print 'wait_result: %r' % self.rv 87 | return self.rv 88 | 89 | def wait(self): 90 | return self._try_wait(0) 91 | 92 | def poll(self): 93 | return self._try_wait(os.WNOHANG) 94 | 95 | def kill(self): 96 | assert(self.pid > 0) 97 | print 'killing: pid=%r rv=%r' % (self.pid, self.rv) 98 | if self.rv == None: 99 | self.logfunc('Disconnecting from %s.\n' % self.serverobj.host()) 100 | os.kill(self.pid, 15) 101 | self.wait() 102 | 103 | def gotdata(self, notification): 104 | print 'gotdata!' 105 | d = str(self.file.availableData()) 106 | if d: 107 | self.logfunc(d) 108 | self.buf = self.buf + d 109 | if 'Connected.\r\n' in self.buf: 110 | self.serverobj.setConnected_(True) 111 | self.buf = self.buf[-4096:] 112 | if self.buf.strip().endswith(':'): 113 | lastline = self.buf.rstrip().split('\n')[-1] 114 | resp = self.promptfunc(lastline) 115 | add = ' (response)\n' 116 | self.buf += add 117 | self.logfunc(add) 118 | self.file.writeData_(my.Data(resp + '\n')) 119 | self.file.waitForDataInBackgroundAndNotify() 120 | self.poll() 121 | #print 'gotdata done!' 122 | 123 | 124 | class SshuttleApp(NSObject): 125 | def initialize(self): 126 | d = my.PList('UserDefaults') 127 | my.Defaults().registerDefaults_(d) 128 | 129 | 130 | class SshuttleController(NSObject): 131 | # Interface builder outlets 132 | startAtLoginField = objc.IBOutlet() 133 | autoReconnectField = objc.IBOutlet() 134 | debugField = objc.IBOutlet() 135 | routingField = objc.IBOutlet() 136 | prefsWindow = objc.IBOutlet() 137 | serversController = objc.IBOutlet() 138 | logField = objc.IBOutlet() 139 | latencyControlField = objc.IBOutlet() 140 | 141 | servers = [] 142 | conns = {} 143 | 144 | def _connect(self, server): 145 | host = server.host() 146 | print 'connecting %r' % host 147 | self.fill_menu() 148 | def logfunc(msg): 149 | print 'log! (%d bytes)' % len(msg) 150 | self.logField.textStorage()\ 151 | .appendAttributedString_(NSAttributedString.alloc()\ 152 | .initWithString_(msg)) 153 | self.logField.didChangeText() 154 | def promptfunc(prompt): 155 | print 'prompt! 
%r' % prompt 156 | return askpass.askpass(prompt) 157 | nets_mode = server.autoNets() 158 | if nets_mode == models.NET_MANUAL: 159 | manual_nets = ["%s/%d" % (i.subnet(), i.width()) 160 | for i in server.nets()] 161 | elif nets_mode == models.NET_ALL: 162 | manual_nets = ['0/0'] 163 | else: 164 | manual_nets = [] 165 | noLatencyControl = (server.latencyControl() != models.LAT_INTERACTIVE) 166 | conn = Runner(sshuttle_args(host, 167 | auto_nets = nets_mode == models.NET_AUTO, 168 | auto_hosts = server.autoHosts(), 169 | dns = server.useDns(), 170 | nets = manual_nets, 171 | debug = self.debugField.state(), 172 | no_latency_control = noLatencyControl), 173 | logfunc=logfunc, promptfunc=promptfunc, 174 | serverobj=server) 175 | self.conns[host] = conn 176 | 177 | def _disconnect(self, server): 178 | host = server.host() 179 | print 'disconnecting %r' % host 180 | conn = self.conns.get(host) 181 | if conn: 182 | conn.kill() 183 | self.fill_menu() 184 | self.logField.textStorage().setAttributedString_( 185 | NSAttributedString.alloc().initWithString_('')) 186 | 187 | @objc.IBAction 188 | def cmd_connect(self, sender): 189 | server = sender.representedObject() 190 | server.setWantConnect_(True) 191 | 192 | @objc.IBAction 193 | def cmd_disconnect(self, sender): 194 | server = sender.representedObject() 195 | server.setWantConnect_(False) 196 | 197 | @objc.IBAction 198 | def cmd_show(self, sender): 199 | self.prefsWindow.makeKeyAndOrderFront_(self) 200 | NSApp.activateIgnoringOtherApps_(True) 201 | 202 | @objc.IBAction 203 | def cmd_quit(self, sender): 204 | NSApp.performSelector_withObject_afterDelay_(NSApp.terminate_, 205 | None, 0.0) 206 | 207 | def fill_menu(self): 208 | menu = self.menu 209 | menu.removeAllItems() 210 | 211 | def additem(name, func, obj): 212 | it = menu.addItemWithTitle_action_keyEquivalent_(name, None, "") 213 | it.setRepresentedObject_(obj) 214 | it.setTarget_(self) 215 | it.setAction_(func) 216 | def addnote(name): 217 | additem(name, None, None) 218 | 219 | any_inprogress = None 220 | any_conn = None 221 | any_err = None 222 | if len(self.servers): 223 | for i in self.servers: 224 | host = i.host() 225 | title = i.title() 226 | want = i.wantConnect() 227 | connected = i.connected() 228 | numnets = len(list(i.nets())) 229 | if not host: 230 | additem('Connect Untitled', None, i) 231 | elif i.autoNets() == models.NET_MANUAL and not numnets: 232 | additem('Connect %s (no routes)' % host, None, i) 233 | elif want: 234 | any_conn = i 235 | additem('Disconnect %s' % title, self.cmd_disconnect, i) 236 | else: 237 | additem('Connect %s' % title, self.cmd_connect, i) 238 | if not want: 239 | msg = 'Off' 240 | elif i.error(): 241 | msg = 'ERROR - try reconnecting' 242 | any_err = i 243 | elif connected: 244 | msg = 'Connected' 245 | else: 246 | msg = 'Connecting...' 
247 | any_inprogress = i 248 | addnote(' State: %s' % msg) 249 | else: 250 | addnote('No servers defined yet') 251 | 252 | menu.addItem_(NSMenuItem.separatorItem()) 253 | additem('Preferences...', self.cmd_show, None) 254 | additem('Quit Sshuttle VPN', self.cmd_quit, None) 255 | 256 | if any_err: 257 | self.statusitem.setImage_(self.img_err) 258 | self.statusitem.setTitle_('Error!') 259 | elif any_conn: 260 | self.statusitem.setImage_(self.img_running) 261 | if any_inprogress: 262 | self.statusitem.setTitle_('Connecting...') 263 | else: 264 | self.statusitem.setTitle_('') 265 | else: 266 | self.statusitem.setImage_(self.img_idle) 267 | self.statusitem.setTitle_('') 268 | 269 | def load_servers(self): 270 | l = my.Defaults().arrayForKey_('servers') or [] 271 | sl = [] 272 | for s in l: 273 | host = s.get('host', None) 274 | if not host: continue 275 | 276 | nets = s.get('nets', []) 277 | nl = [] 278 | for n in nets: 279 | subnet = n[0] 280 | width = n[1] 281 | net = models.SshuttleNet.alloc().init() 282 | net.setSubnet_(subnet) 283 | net.setWidth_(width) 284 | nl.append(net) 285 | 286 | autoNets = s.get('autoNets', models.NET_AUTO) 287 | autoHosts = s.get('autoHosts', True) 288 | useDns = s.get('useDns', autoNets == models.NET_ALL) 289 | latencyControl = s.get('latencyControl', models.LAT_INTERACTIVE) 290 | srv = models.SshuttleServer.alloc().init() 291 | srv.setHost_(host) 292 | srv.setAutoNets_(autoNets) 293 | srv.setAutoHosts_(autoHosts) 294 | srv.setNets_(nl) 295 | srv.setUseDns_(useDns) 296 | srv.setLatencyControl_(latencyControl) 297 | sl.append(srv) 298 | self.serversController.addObjects_(sl) 299 | self.serversController.setSelectionIndex_(0) 300 | 301 | def save_servers(self): 302 | l = [] 303 | for s in self.servers: 304 | host = s.host() 305 | if not host: continue 306 | nets = [] 307 | for n in s.nets(): 308 | subnet = n.subnet() 309 | if not subnet: continue 310 | nets.append((subnet, n.width())) 311 | d = dict(host=s.host(), 312 | nets=nets, 313 | autoNets=s.autoNets(), 314 | autoHosts=s.autoHosts(), 315 | useDns=s.useDns(), 316 | latencyControl=s.latencyControl()) 317 | l.append(d) 318 | my.Defaults().setObject_forKey_(l, 'servers') 319 | self.fill_menu() 320 | 321 | def awakeFromNib(self): 322 | self.routingField.removeAllItems() 323 | tf = self.routingField.addItemWithTitle_ 324 | tf('Send all traffic through this server') 325 | tf('Determine automatically') 326 | tf('Custom...') 327 | 328 | self.latencyControlField.removeAllItems() 329 | tf = self.latencyControlField.addItemWithTitle_ 330 | tf('Fast transfer') 331 | tf('Low latency') 332 | 333 | # Hmm, even when I mark this as !enabled in the .nib, it still comes 334 | # through as enabled. So let's just disable it here (since we don't 335 | # support this feature yet). 
336 | self.startAtLoginField.setEnabled_(False) 337 | self.startAtLoginField.setState_(False) 338 | self.autoReconnectField.setEnabled_(False) 339 | self.autoReconnectField.setState_(False) 340 | 341 | self.load_servers() 342 | 343 | # Initialize our menu item 344 | self.menu = NSMenu.alloc().initWithTitle_('Sshuttle') 345 | bar = NSStatusBar.systemStatusBar() 346 | statusitem = bar.statusItemWithLength_(NSVariableStatusItemLength) 347 | self.statusitem = statusitem 348 | self.img_idle = my.Image('chicken-tiny-bw', 'png') 349 | self.img_running = my.Image('chicken-tiny', 'png') 350 | self.img_err = my.Image('chicken-tiny-err', 'png') 351 | statusitem.setImage_(self.img_idle) 352 | statusitem.setHighlightMode_(True) 353 | statusitem.setMenu_(self.menu) 354 | self.fill_menu() 355 | 356 | models.configchange_callback = my.DelayedCallback(self.save_servers) 357 | 358 | def sc(server): 359 | if server.wantConnect(): 360 | self._connect(server) 361 | else: 362 | self._disconnect(server) 363 | models.setconnect_callback = sc 364 | 365 | 366 | # Note: NSApplicationMain calls sys.exit(), so this never returns. 367 | NSApplicationMain(sys.argv) 368 | -------------------------------------------------------------------------------- /src/ui-macos/models.py: -------------------------------------------------------------------------------- 1 | from AppKit import * 2 | import my 3 | 4 | 5 | configchange_callback = setconnect_callback = None 6 | objc_validator = objc.signature('@@:N^@o^@') 7 | 8 | 9 | def config_changed(): 10 | if configchange_callback: 11 | configchange_callback() 12 | 13 | 14 | def _validate_ip(v): 15 | parts = v.split('.')[:4] 16 | if len(parts) < 4: 17 | parts += ['0'] * (4 - len(parts)) 18 | for i in range(4): 19 | n = my.atoi(parts[i]) 20 | if n < 0: 21 | n = 0 22 | elif n > 255: 23 | n = 255 24 | parts[i] = str(n) 25 | return '.'.join(parts) 26 | 27 | 28 | def _validate_width(v): 29 | n = my.atoi(v) 30 | if n < 0: 31 | n = 0 32 | elif n > 32: 33 | n = 32 34 | return n 35 | 36 | 37 | class SshuttleNet(NSObject): 38 | def subnet(self): 39 | return getattr(self, '_k_subnet', None) 40 | def setSubnet_(self, v): 41 | self._k_subnet = v 42 | config_changed() 43 | @objc_validator 44 | def validateSubnet_error_(self, value, error): 45 | #print 'validateSubnet!' 46 | return True, _validate_ip(value), error 47 | 48 | def width(self): 49 | return getattr(self, '_k_width', 24) 50 | def setWidth_(self, v): 51 | self._k_width = v 52 | config_changed() 53 | @objc_validator 54 | def validateWidth_error_(self, value, error): 55 | #print 'validateWidth!' 
56 | return True, _validate_width(value), error 57 | 58 | NET_ALL = 0 59 | NET_AUTO = 1 60 | NET_MANUAL = 2 61 | 62 | LAT_BANDWIDTH = 0 63 | LAT_INTERACTIVE = 1 64 | 65 | class SshuttleServer(NSObject): 66 | def init(self): 67 | self = super(SshuttleServer, self).init() 68 | config_changed() 69 | return self 70 | 71 | def wantConnect(self): 72 | return getattr(self, '_k_wantconnect', False) 73 | def setWantConnect_(self, v): 74 | self._k_wantconnect = v 75 | self.setError_(None) 76 | config_changed() 77 | if setconnect_callback: setconnect_callback(self) 78 | 79 | def connected(self): 80 | return getattr(self, '_k_connected', False) 81 | def setConnected_(self, v): 82 | print 'setConnected of %r to %r' % (self, v) 83 | self._k_connected = v 84 | if v: self.setError_(None) # connected ok, so no error 85 | config_changed() 86 | 87 | def error(self): 88 | return getattr(self, '_k_error', None) 89 | def setError_(self, v): 90 | self._k_error = v 91 | config_changed() 92 | 93 | def isValid(self): 94 | if not self.host(): 95 | return False 96 | if self.autoNets() == NET_MANUAL and not len(list(self.nets())): 97 | return False 98 | return True 99 | 100 | def title(self): 101 | host = self.host() 102 | if not host: 103 | return host 104 | an = self.autoNets() 105 | suffix = "" 106 | if an == NET_ALL: 107 | suffix = " (all traffic)" 108 | elif an == NET_MANUAL: 109 | n = self.nets() 110 | suffix = ' (%d subnet%s)' % (len(n), len(n)!=1 and 's' or '') 111 | return self.host() + suffix 112 | def setTitle_(self, v): 113 | # title is always auto-generated 114 | config_changed() 115 | 116 | def host(self): 117 | return getattr(self, '_k_host', None) 118 | def setHost_(self, v): 119 | self._k_host = v 120 | self.setTitle_(None) 121 | config_changed() 122 | @objc_validator 123 | def validateHost_error_(self, value, error): 124 | #print 'validatehost! 
%r %r %r' % (self, value, error) 125 | while value.startswith('-'): 126 | value = value[1:] 127 | return True, value, error 128 | 129 | def nets(self): 130 | return getattr(self, '_k_nets', []) 131 | def setNets_(self, v): 132 | self._k_nets = v 133 | self.setTitle_(None) 134 | config_changed() 135 | def netsHidden(self): 136 | #print 'checking netsHidden' 137 | return self.autoNets() != NET_MANUAL 138 | def setNetsHidden_(self, v): 139 | config_changed() 140 | #print 'setting netsHidden to %r' % v 141 | 142 | def autoNets(self): 143 | return getattr(self, '_k_autoNets', NET_AUTO) 144 | def setAutoNets_(self, v): 145 | self._k_autoNets = v 146 | self.setNetsHidden_(-1) 147 | self.setUseDns_(v == NET_ALL) 148 | self.setTitle_(None) 149 | config_changed() 150 | 151 | def autoHosts(self): 152 | return getattr(self, '_k_autoHosts', True) 153 | def setAutoHosts_(self, v): 154 | self._k_autoHosts = v 155 | config_changed() 156 | 157 | def useDns(self): 158 | return getattr(self, '_k_useDns', False) 159 | def setUseDns_(self, v): 160 | self._k_useDns = v 161 | config_changed() 162 | 163 | def latencyControl(self): 164 | return getattr(self, '_k_latencyControl', LAT_INTERACTIVE) 165 | def setLatencyControl_(self, v): 166 | self._k_latencyControl = v 167 | config_changed() 168 | -------------------------------------------------------------------------------- /src/ui-macos/my.py: -------------------------------------------------------------------------------- 1 | import sys, os 2 | from AppKit import * 3 | import PyObjCTools.AppHelper 4 | 5 | 6 | def bundle_path(name, typ): 7 | if typ: 8 | return NSBundle.mainBundle().pathForResource_ofType_(name, typ) 9 | else: 10 | return os.path.join(NSBundle.mainBundle().resourcePath(), name) 11 | 12 | 13 | # Load an NSData using a python string 14 | def Data(s): 15 | return NSData.alloc().initWithBytes_length_(s, len(s)) 16 | 17 | 18 | # Load a property list from a file in the application bundle. 19 | def PList(name): 20 | path = bundle_path(name, 'plist') 21 | return NSDictionary.dictionaryWithContentsOfFile_(path) 22 | 23 | 24 | # Load an NSImage from a file in the application bundle. 25 | def Image(name, ext): 26 | bytes = open(bundle_path(name, ext)).read() 27 | img = NSImage.alloc().initWithData_(Data(bytes)) 28 | return img 29 | 30 | 31 | # Return the NSUserDefaults shared object. 32 | def Defaults(): 33 | return NSUserDefaults.standardUserDefaults() 34 | 35 | 36 | # Usage: 37 | # f = DelayedCallback(func, args...) 38 | # later: 39 | # f() 40 | # 41 | # When you call f(), it will schedule a call to func() next time the 42 | # ObjC event loop iterates. Multiple calls to f() in a single iteration 43 | # will only result in one call to func(). 
44 | # 45 | def DelayedCallback(func, *args, **kwargs): 46 | flag = [0] 47 | def _go(): 48 | if flag[0]: 49 | print 'running %r (flag=%r)' % (func, flag) 50 | flag[0] = 0 51 | func(*args, **kwargs) 52 | def call(): 53 | flag[0] += 1 54 | PyObjCTools.AppHelper.callAfter(_go) 55 | return call 56 | 57 | 58 | def atoi(s): 59 | try: 60 | return int(s) 61 | except ValueError: 62 | return 0 63 | -------------------------------------------------------------------------------- /src/ui-macos/run.do: -------------------------------------------------------------------------------- 1 | redo-ifchange debug.app 2 | exec >&2 3 | ./debug.app/Contents/MacOS/Sshuttle 4 | 5 | -------------------------------------------------------------------------------- /src/ui-macos/sources.list.do: -------------------------------------------------------------------------------- 1 | redo-always 2 | exec >$3 3 | cat <<-EOF 4 | app.icns 5 | MainMenu.nib English.lproj/MainMenu.nib 6 | UserDefaults.plist 7 | chicken-tiny.png 8 | chicken-tiny-bw.png 9 | chicken-tiny-err.png 10 | EOF 11 | for d in *.py sshuttle/*.py sshuttle/sshuttle sshuttle/compat/*.py; do 12 | echo $d 13 | done 14 | redo-stamp <$3 15 | -------------------------------------------------------------------------------- /src/ui-macos/sshuttle: -------------------------------------------------------------------------------- 1 | .. --------------------------------------------------------------------------------