├── .gitignore ├── CHANGELOG.md ├── Makefile ├── README.md ├── TROUBLESHOOTING.md ├── blake.c ├── blake.h ├── input.cl ├── main.c ├── param.h ├── sha256.c ├── sha256.h ├── silentarmy ├── testing ├── dummy-solver ├── sols-100 └── sols-100.nr_rows_log_20 └── thirdparty ├── README └── asyncio ├── .gitattributes ├── .gitignore ├── .travis.yml ├── AUTHORS ├── COPYING ├── ChangeLog ├── MANIFEST.in ├── Makefile ├── README.rst ├── appveyor.yml ├── asyncio ├── __init__.py ├── base_events.py ├── base_subprocess.py ├── compat.py ├── constants.py ├── coroutines.py ├── events.py ├── futures.py ├── locks.py ├── log.py ├── proactor_events.py ├── protocols.py ├── queues.py ├── selector_events.py ├── selectors.py ├── sslproto.py ├── streams.py ├── subprocess.py ├── tasks.py ├── test_support.py ├── test_utils.py ├── transports.py ├── unix_events.py ├── windows_events.py └── windows_utils.py ├── check.py ├── examples ├── cacheclt.py ├── cachesvr.py ├── child_process.py ├── crawl.py ├── echo_client_tulip.py ├── echo_server_tulip.py ├── fetch0.py ├── fetch1.py ├── fetch2.py ├── fetch3.py ├── fuzz_as_completed.py ├── hello_callback.py ├── hello_coroutine.py ├── qspeed.py ├── shell.py ├── simple_tcp_server.py ├── sink.py ├── source.py ├── source1.py ├── stacks.py ├── subprocess_attach_read_pipe.py ├── subprocess_attach_write_pipe.py ├── subprocess_shell.py ├── tcp_echo.py ├── timing_tcp_server.py └── udp_echo.py ├── overlapped.c ├── pypi.bat ├── release.py ├── run_aiotest.py ├── runtests.py ├── setup.py ├── tests ├── echo.py ├── echo2.py ├── echo3.py ├── keycert3.pem ├── pycacert.pem ├── sample.crt ├── sample.key ├── ssl_cert.pem ├── ssl_key.pem ├── test_base_events.py ├── test_events.py ├── test_futures.py ├── test_locks.py ├── test_pep492.py ├── test_proactor_events.py ├── test_queues.py ├── test_selector_events.py ├── test_selectors.py ├── test_sslproto.py ├── test_streams.py ├── test_subprocess.py ├── test_tasks.py ├── test_transports.py ├── test_unix_events.py ├── 
test_windows_events.py └── test_windows_utils.py ├── tox.ini ├── update_asyncio.sh └── update_stdlib.sh /.gitignore: -------------------------------------------------------------------------------- 1 | /.hg/ 2 | /.hgignore 3 | /sa-solver 4 | /_kernel.h 5 | *.o 6 | _temp_* 7 | *.swp 8 | *.tmp 9 | *.pyc 10 | tags 11 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Current tip 2 | 3 | * Avoid 100% CPU usage with Nvidia's OpenCL, aka busywait fix (Kubuxu) 4 | * Optimization: +10% speedup, increase collision items tracked per thread 5 | (nerdralph). 'make test' finds 196 sols again 6 | * Implement mining.extranonce.subscribe (kenshirothefist) 7 | * mining.authorize sends an empty string if no password is specified 8 | * Fix memory leaks 9 | * Avoid fatal error when OpenCL platform returns CL_DEVICE_NOT_FOUND 10 | 11 | # Version 5 (11 Nov 2016) 12 | 13 | * Optimization: major 2x speedup (eXtremal) by storing 8 atomic counters in 14 | 1 uint, and by reducing branch divergence when iterating over and XORing Xi's; 15 | note that as a result of these optimizations, sa-solver compiled with 16 | NR_ROWS_LOG=20 now only finds 182 out of 196 existing solutions ("make test" 17 | verification data was adjusted accordingly) 18 | * Defaulting OPTIM_SIMPLIFY_ROUND to 1; GPU memory usage down to 0.8 GB per 19 | instance 20 | * Optimization: significantly reduce CPU usage and PCIe bandwidth (before: 21 | ~100 MB/s/GPU, after: 0.5 MB/s/GPU), accomplished by filtering invalid 22 | solutions on-device 23 | * Optimization: reduce size of collisions[] array; +7% speed increase measured 24 | on RX 480 and R9 Nano using AMDGPU-PRO 16.40 25 | * Implement stratum method client.reconnect 26 | * Avoid segfault when encountering an out-of-range input 27 | * For simplicity `-i
` now only accepts 140-byte headers 28 | * Update README.md with Nvidia performance numbers 29 | * Fix mining on Xeon Phi and CPUs (fix OpenCL warnings) 30 | * Fix compilation warnings and 32-bit platforms 31 | 32 | # Version 4 (08 Nov 2016) 33 | 34 | * Add Nvidia GPU support (fix more unaligned memory accesses) 35 | * Add nerdralph's optimization (OPTIM_SIMPLIFY_ROUND) for potential +30% 36 | speedup, especially useful on Nvidia GPUs 37 | * Drop the Python 3.5 dependency; now requires only Python 3.3 or above (lhl) 38 | * Drop the libsodium dependency; instead use our own SHA256 implementation 39 | * Add nicehash compatibility (stratum servers fixing 17 bytes of the nonce) 40 | * Only apply set_target to *next* mining job 41 | * Do not abandon previous mining jobs if clean_jobs is false 42 | * Fix KeyError's when displaying stats 43 | * Be more robust about different types of network errors during connection 44 | * Remove bytes.hex() which was only supported on Python 3.5+ 45 | 46 | # Version 3 (04 Nov 2016) 47 | 48 | * SILENTARMY is now a full miner, not just a solver; the solver binary was 49 | renamed "sa-solver" and the miner is the script "silentarmy" 50 | * Multi-GPU support 51 | * Stratum support for pool mining 52 | * Reduce GPU memory usage to 671 MB (NR_ROWS_LOG=19) or 1208 MB 53 | (NR_ROWS_LOG=20, default, ~10% faster than 19) per Equihash instance 54 | * Rename --list-gpu to --list and list all OpenCL devices (not just GPUs) 55 | * Add support for multiple OpenCL platforms: --list now scans all available 56 | platforms, numbering devices using globally unique IDs 57 | * Improve correctness: find ~0.09% more solutions 58 | 59 | # Version 2 (30 Oct 2016) 60 | 61 | * Support GCN 1.0 / remove unaligned memory accesses (because of this bug, 62 | previously SILENTARMY always reported 0 solutions on GCN 1.0 hardware) 63 | * Minor performance improvement (~1%) 64 | * Get rid of "kernel.cl" and move the OpenCL code to a C string embedded in the 65 | binary 
during compilation 66 | * Update README with instructions for installing 67 | **Radeon Software Crimson Edition** (fglrx.ko) in addition to 68 | **AMDGPU-PRO** (amdgpu.ko) 69 | 70 | # Version 1 (27 Oct 2016) 71 | 72 | * Initial import into GitHub 73 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Change this path if the SDK was installed in a non-standard location 2 | OPENCL_HEADERS = "/opt/AMDAPPSDK-3.0/include" 3 | # By default libOpenCL.so is searched in default system locations, this path 4 | # lets you adds one more directory to the search path. 5 | LIBOPENCL = "/opt/amdgpu-pro/lib/x86_64-linux-gnu" 6 | 7 | CC = gcc 8 | CPPFLAGS = -I${OPENCL_HEADERS} 9 | CFLAGS = -O2 -std=gnu99 -pedantic -Wextra -Wall \ 10 | -Wno-deprecated-declarations \ 11 | -Wno-overlength-strings 12 | LDFLAGS = -rdynamic -L${LIBOPENCL} 13 | LDLIBS = -lOpenCL -lrt 14 | OBJ = main.o blake.o sha256.o 15 | INCLUDES = blake.h param.h _kernel.h sha256.h 16 | 17 | all : sa-solver 18 | 19 | sa-solver : ${OBJ} 20 | ${CC} -o sa-solver ${OBJ} ${LDFLAGS} ${LDLIBS} 21 | 22 | ${OBJ} : ${INCLUDES} 23 | 24 | _kernel.h : input.cl param.h 25 | echo 'const char *ocl_code = R"_mrb_(' >$@ 26 | cpp $< >>$@ 27 | echo ')_mrb_";' >>$@ 28 | 29 | test : sa-solver 30 | @echo Testing... 
31 | @if res=`./sa-solver --nonces 100 -v -v 2>&1 | grep Soln: | \ 32 | diff -u testing/sols-100 -`; then \ 33 | echo "Test: success"; \ 34 | else \ 35 | echo "$$res\nTest: FAILED" | cut -c 1-75 >&2; \ 36 | fi 37 | # When compiling with NR_ROWS_LOG != 20, the solutions it finds are 38 | # different: testing/sols-100 39 | 40 | clean : 41 | rm -f sa-solver _kernel.h *.o _temp_* 42 | 43 | re : clean all 44 | -------------------------------------------------------------------------------- /TROUBLESHOOTING.md: -------------------------------------------------------------------------------- 1 | # Troubleshooting 2 | 3 | Follow this checklist to verify that your entire hardware and software 4 | stack works (drivers, OpenCL, SILENTARMY). 5 | 6 | ## Driver / OpenCL installation 7 | 8 | Run `clinfo` to list all the OpenCL devices. If it does not find all your 9 | devices, something is wrong with your drivers and/or OpenCL stack. Uninstall 10 | and reinstall your drivers. Here are good instructions: 11 | https://hashcat.net/wiki/doku.php?id=frequently_asked_questions#i_may_have_the_wrong_driver_installed_what_should_i_do 12 | 13 | ## Check silentarmy 14 | 15 | Does `./silentarmy --list` list your devices? If `clinfo` does, silentarmy 16 | should list them as well. 17 | 18 | ## Basic operation 19 | 20 | Run the Equihash solver `sa-solver` to solve the all-zero block. It should 21 | report 2 solutions. Specify the device ID to test with `--use ID` 22 | 23 | ``` 24 | $ ./sa-solver --use 0 25 | Solving default all-zero 140-byte header 26 | Building program 27 | Hash tables will use 805.3 MB 28 | Running... 29 | Nonce 0000000000000000000000000000000000000000000000000000000000000000: 2 sols 30 | Total 2 solutions in 205.3 ms (9.7 Sol/s) 31 | ``` 32 | 33 | Note that `sa-solver` only supports 1 device at a time. It will not recognize 34 | eg. `--use 0,1,2`. 35 | 36 | ## Correct results 37 | 38 | Verify that `make test` succeeds. 
It should take between 5 and 60 seconds 39 | depending on your GPU: 40 | 41 | ``` 42 | $ make test 43 | Testing... 44 | Test: success 45 | ``` 46 | 47 | ## Sustained operation on one device 48 | 49 | Let the Equihash solver `sa-solver` run for multiple hours: 50 | 51 | ``` 52 | $ ./sa-solver --nonces 100000000 53 | Solving default all-zero 140-byte header 54 | Building program 55 | Hash tables will use 1208.0 MB 56 | Running... 57 | Nonce 0000000000000000000000000000000000000000000000000000000000000000: 2 sols 58 | Nonce 0100000000000000000000000000000000000000000000000000000000000000: 0 sols 59 | ... 60 | ``` 61 | 62 | It should not crash or hang. 63 | 64 | ## Mining 65 | 66 | Run the miner without options. By default it will use the first device, 67 | and connect to flypool with my donation address. These known-good parameters 68 | should let you know easily if your machine can mine properly: 69 | 70 | ``` 71 | $ ./silentarmy 72 | Connecting to us1-zcash.flypool.org:3333 73 | Stratum server sent us the first job 74 | Mining on 1 device 75 | Total 0.0 sol/s [dev0 0.0] 0 shares 76 | Total 48.9 sol/s [dev0 48.9] 1 share 77 | Total 44.9 sol/s [dev0 44.9] 1 share 78 | ... 79 | ``` 80 | 81 | Verify that the number of shares increases over time. 82 | 83 | ## Performance 84 | 85 | Not reaching the sol/s performance you expected? 86 | 87 | * Try running a different number of instances using the `silentarmy --instances 88 | N` argument. Try 1, 2, 3, or more. Note that each instance requires 805 MB of 89 | GPU memory. 90 | * If 1 instance still requires more GPU memory than available, edit `param.h` 91 | and set `NR_ROWS_LOG` to `19` (this reduces the per-instance memory usage 92 | to 671 MB) and run with `--instances 1`. 93 | * By default SILENTARMY mines with only one device/GPU; make sure to specify 94 | all the GPUs in the `--use` option, for example `silentarmy --use 0,1,2` 95 | if the host has three devices with IDs 0, 1, and 2. 
96 | * Update your graphics card driver. The OpenCL compiler comes with the driver 97 | and occasionally new driver versions significantly tweak or improve it. 98 | -------------------------------------------------------------------------------- /blake.c: -------------------------------------------------------------------------------- 1 | #include <stdint.h> 2 | #include <string.h> 3 | #include <assert.h> 4 | #include "blake.h" 5 | 6 | static const uint32_t blake2b_block_len = 128; 7 | static const uint32_t blake2b_rounds = 12; 8 | static const uint64_t blake2b_iv[8] = 9 | { 10 | 0x6a09e667f3bcc908ULL, 0xbb67ae8584caa73bULL, 11 | 0x3c6ef372fe94f82bULL, 0xa54ff53a5f1d36f1ULL, 12 | 0x510e527fade682d1ULL, 0x9b05688c2b3e6c1fULL, 13 | 0x1f83d9abfb41bd6bULL, 0x5be0cd19137e2179ULL, 14 | }; 15 | static const uint8_t blake2b_sigma[12][16] = 16 | { 17 | { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 }, 18 | { 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 }, 19 | { 11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4 }, 20 | { 7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8 }, 21 | { 9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13 }, 22 | { 2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9 }, 23 | { 12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11 }, 24 | { 13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10 }, 25 | { 6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5 }, 26 | { 10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0 }, 27 | { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 }, 28 | { 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 }, 29 | }; 30 | 31 | /* 32 | ** Init the state according to Zcash parameters.
33 | */ 34 | void zcash_blake2b_init(blake2b_state_t *st, uint8_t hash_len, 35 | uint32_t n, uint32_t k) 36 | { 37 | assert(n > k); 38 | assert(hash_len <= 64); 39 | st->h[0] = blake2b_iv[0] ^ (0x01010000 | hash_len); 40 | for (uint32_t i = 1; i <= 5; i++) 41 | st->h[i] = blake2b_iv[i]; 42 | st->h[6] = blake2b_iv[6] ^ *(uint64_t *)"ZcashPoW"; 43 | st->h[7] = blake2b_iv[7] ^ (((uint64_t)k << 32) | n); 44 | st->bytes = 0; 45 | } 46 | 47 | static uint64_t rotr64(uint64_t a, uint8_t bits) 48 | { 49 | return (a >> bits) | (a << (64 - bits)); 50 | } 51 | 52 | static void mix(uint64_t *va, uint64_t *vb, uint64_t *vc, uint64_t *vd, 53 | uint64_t x, uint64_t y) 54 | { 55 | *va = (*va + *vb + x); 56 | *vd = rotr64(*vd ^ *va, 32); 57 | *vc = (*vc + *vd); 58 | *vb = rotr64(*vb ^ *vc, 24); 59 | *va = (*va + *vb + y); 60 | *vd = rotr64(*vd ^ *va, 16); 61 | *vc = (*vc + *vd); 62 | *vb = rotr64(*vb ^ *vc, 63); 63 | } 64 | 65 | /* 66 | ** Process either a full message block or the final partial block. 67 | ** Note that v[13] is not XOR'd because st->bytes is assumed to never overflow. 68 | ** 69 | ** _msg pointer to message (must be zero-padded to 128 bytes if final block) 70 | ** msg_len must be 128 (<= 128 allowed only for final partial block) 71 | ** is_final indicate if this is the final block 72 | */ 73 | void zcash_blake2b_update(blake2b_state_t *st, const uint8_t *_msg, 74 | uint32_t msg_len, uint32_t is_final) 75 | { 76 | const uint64_t *m = (const uint64_t *)_msg; 77 | uint64_t v[16]; 78 | assert(msg_len <= 128); 79 | assert(st->bytes <= UINT64_MAX - msg_len); 80 | memcpy(v + 0, st->h, 8 * sizeof (*v)); 81 | memcpy(v + 8, blake2b_iv, 8 * sizeof (*v)); 82 | v[12] ^= (st->bytes += msg_len); 83 | v[14] ^= is_final ? 
-1 : 0; 84 | for (uint32_t round = 0; round < blake2b_rounds; round++) 85 | { 86 | const uint8_t *s = blake2b_sigma[round]; 87 | mix(v + 0, v + 4, v + 8, v + 12, m[s[0]], m[s[1]]); 88 | mix(v + 1, v + 5, v + 9, v + 13, m[s[2]], m[s[3]]); 89 | mix(v + 2, v + 6, v + 10, v + 14, m[s[4]], m[s[5]]); 90 | mix(v + 3, v + 7, v + 11, v + 15, m[s[6]], m[s[7]]); 91 | mix(v + 0, v + 5, v + 10, v + 15, m[s[8]], m[s[9]]); 92 | mix(v + 1, v + 6, v + 11, v + 12, m[s[10]], m[s[11]]); 93 | mix(v + 2, v + 7, v + 8, v + 13, m[s[12]], m[s[13]]); 94 | mix(v + 3, v + 4, v + 9, v + 14, m[s[14]], m[s[15]]); 95 | } 96 | for (uint32_t i = 0; i < 8; i++) 97 | st->h[i] ^= v[i] ^ v[i + 8]; 98 | } 99 | 100 | void zcash_blake2b_final(blake2b_state_t *st, uint8_t *out, uint8_t outlen) 101 | { 102 | assert(outlen <= 64); 103 | memcpy(out, st->h, outlen); 104 | } 105 | -------------------------------------------------------------------------------- /blake.h: -------------------------------------------------------------------------------- 1 | typedef struct blake2b_state_s 2 | { 3 | uint64_t h[8]; 4 | uint64_t bytes; 5 | } blake2b_state_t; 6 | void zcash_blake2b_init(blake2b_state_t *st, uint8_t hash_len, 7 | uint32_t n, uint32_t k); 8 | void zcash_blake2b_update(blake2b_state_t *st, const uint8_t *_msg, 9 | uint32_t msg_len, uint32_t is_final); 10 | void zcash_blake2b_final(blake2b_state_t *st, uint8_t *out, uint8_t outlen); 11 | -------------------------------------------------------------------------------- /param.h: -------------------------------------------------------------------------------- 1 | #define PARAM_N 200 2 | #define PARAM_K 9 3 | #define PREFIX (PARAM_N / (PARAM_K + 1)) 4 | #define NR_INPUTS (1 << PREFIX) 5 | // Approximate log base 2 of number of elements in hash tables 6 | #define APX_NR_ELMS_LOG (PREFIX + 1) 7 | // Number of rows and slots is affected by this; 20 offers the best performance 8 | #define NR_ROWS_LOG 20 9 | 10 | // Setting this to 1 might make SILENTARMY faster, 
see TROUBLESHOOTING.md 11 | #define OPTIM_SIMPLIFY_ROUND 1 12 | 13 | // Number of collision items to track, per thread 14 | #define COLL_DATA_SIZE_PER_TH (NR_SLOTS * 5) 15 | 16 | // Ratio of time of sleeping before rechecking if task is done (0-1) 17 | #define SLEEP_RECHECK_RATIO 0.60 18 | // Ratio of time to busy wait for the solution (0-1) 19 | // The higher value the higher CPU usage with Nvidia 20 | #define SLEEP_SKIP_RATIO 0.005 21 | 22 | // Make hash tables OVERHEAD times larger than necessary to store the average 23 | // number of elements per row. The ideal value is as small as possible to 24 | // reduce memory usage, but not too small or else elements are dropped from the 25 | // hash tables. 26 | // 27 | // The actual number of elements per row is closer to the theoretical average 28 | // (less variance) when NR_ROWS_LOG is small. So accordingly OVERHEAD can be 29 | // smaller. 30 | // 31 | // Even (as opposed to odd) values of OVERHEAD sometimes significantly decrease 32 | // performance as they cause VRAM channel conflicts. 
33 | #if NR_ROWS_LOG == 16 34 | #error "NR_ROWS_LOG = 16 is currently broken - do not use" 35 | #define OVERHEAD 3 36 | #elif NR_ROWS_LOG == 18 37 | #define OVERHEAD 3 38 | #elif NR_ROWS_LOG == 19 39 | #define OVERHEAD 5 40 | #elif NR_ROWS_LOG == 20 && OPTIM_SIMPLIFY_ROUND 41 | #define OVERHEAD 6 42 | #elif NR_ROWS_LOG == 20 43 | #define OVERHEAD 9 44 | #endif 45 | 46 | #define NR_ROWS (1 << NR_ROWS_LOG) 47 | #define NR_SLOTS ((1 << (APX_NR_ELMS_LOG - NR_ROWS_LOG)) * OVERHEAD) 48 | // Length of 1 element (slot) in bytes 49 | #define SLOT_LEN 32 50 | // Total size of hash table 51 | #define HT_SIZE (NR_ROWS * NR_SLOTS * SLOT_LEN) 52 | // Length of Zcash block header, nonce (part of header) 53 | #define ZCASH_BLOCK_HEADER_LEN 140 54 | // Offset of nTime in header 55 | #define ZCASH_BLOCK_OFFSET_NTIME (4 + 3 * 32) 56 | // Length of nonce 57 | #define ZCASH_NONCE_LEN 32 58 | // Length of encoded representation of solution size 59 | #define ZCASH_SOLSIZE_LEN 3 60 | // Solution size (1344 = 0x540) represented as a compact integer, in hex 61 | #define ZCASH_SOLSIZE_HEX "fd4005" 62 | // Length of encoded solution (512 * 21 bits / 8 = 1344 bytes) 63 | #define ZCASH_SOL_LEN ((1 << PARAM_K) * (PREFIX + 1) / 8) 64 | // Last N_ZERO_BYTES of nonce must be zero due to my BLAKE2B optimization 65 | #define N_ZERO_BYTES 12 66 | // Number of bytes Zcash needs out of Blake 67 | #define ZCASH_HASH_LEN 50 68 | // Number of wavefronts per SIMD for the Blake kernel. 69 | // Blake is ALU-bound (beside the atomic counter being incremented) so we need 70 | // at least 2 wavefronts per SIMD to hide the 2-clock latency of integer 71 | // instructions. 10 is the max supported by the hw. 
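The sizing chain above (PREFIX through HT_SIZE) can be sanity-checked with a few lines of arithmetic. The sketch below mirrors the macros in Python (the names are the macros' own; this is not a file in the repo) and reproduces the memory figures quoted elsewhere in this tree, namely 805.3 MB for NR_ROWS_LOG=20 with OPTIM_SIMPLIFY_ROUND, 1208 MB without it, and 671 MB for NR_ROWS_LOG=19, on the assumption (which those figures imply) that the solver allocates two hash tables per Equihash instance:

```python
# Mirror of the param.h sizing macros (Equihash n=200, k=9).
PARAM_N, PARAM_K = 200, 9
PREFIX = PARAM_N // (PARAM_K + 1)      # 20
APX_NR_ELMS_LOG = PREFIX + 1           # 21
SLOT_LEN = 32                          # bytes per slot

def ht_size(nr_rows_log, overhead):
    # NR_ROWS * NR_SLOTS * SLOT_LEN, as in param.h
    nr_rows = 1 << nr_rows_log
    nr_slots = (1 << (APX_NR_ELMS_LOG - nr_rows_log)) * overhead
    return nr_rows * nr_slots * SLOT_LEN

# Two hash tables per Equihash instance (assumed; matches the quoted MB figures).
for log, ovh in ((20, 6), (20, 9), (19, 5)):
    print(f"NR_ROWS_LOG={log} OVERHEAD={ovh}: {2 * ht_size(log, ovh) / 1e6:.1f} MB")

# Encoded solution length: 512 indices of PREFIX+1 = 21 bits each.
sol_len = (1 << PARAM_K) * (PREFIX + 1) // 8
print(sol_len)  # 1344 bytes = 0x540, hence ZCASH_SOLSIZE_HEX "fd4005"
```

The first figure (805.3 MB) is the "0.8 GB per instance" noted in the version 5 changelog and the number reported by `sa-solver` in TROUBLESHOOTING.md.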
72 | #define BLAKE_WPS 10 73 | // Maximum number of solutions reported by kernel to host 74 | #define MAX_SOLS 10 75 | // Length of SHA256 target 76 | #define SHA256_TARGET_LEN (256 / 8) 77 | 78 | #if (NR_SLOTS < 16) 79 | #define BITS_PER_ROW 4 80 | #define ROWS_PER_UINT 8 81 | #define ROW_MASK 0x0F 82 | #else 83 | #define BITS_PER_ROW 8 84 | #define ROWS_PER_UINT 4 85 | #define ROW_MASK 0xFF 86 | #endif 87 | 88 | // Optional features 89 | #undef ENABLE_DEBUG 90 | 91 | /* 92 | ** Return the offset of Xi in bytes from the beginning of the slot. 93 | */ 94 | #define xi_offset_for_round(round) (8 + ((round) / 2) * 4) 95 | 96 | // An (uncompressed) solution stores (1 << PARAM_K) 32-bit values 97 | #define SOL_SIZE ((1 << PARAM_K) * 4) 98 | typedef struct sols_s 99 | { 100 | uint nr; 101 | uint likely_invalids; 102 | uchar valid[MAX_SOLS]; 103 | uint values[MAX_SOLS][(1 << PARAM_K)]; 104 | } sols_t; 105 | -------------------------------------------------------------------------------- /sha256.c: -------------------------------------------------------------------------------- 1 | /* Crypto/Sha256.c -- SHA-256 Hash 2 | 2016-11-04 : Marc Bevand : A few changes to make it more self-contained 3 | 2010-06-11 : Igor Pavlov : Public domain 4 | This code is based on public domain code from Wei Dai's Crypto++ library. 
*/ 5 | 6 | #include 7 | #include 8 | #include "sha256.h" 9 | 10 | /* define it for speed optimization */ 11 | /* #define _SHA256_UNROLL */ 12 | /* #define _SHA256_UNROLL2 */ 13 | 14 | #define rotlFixed(x, n) (((x) << (n)) | ((x) >> (32 - (n)))) 15 | #define rotrFixed(x, n) (((x) >> (n)) | ((x) << (32 - (n)))) 16 | 17 | void Sha256_Init(CSha256 *p) 18 | { 19 | p->state[0] = 0x6a09e667; 20 | p->state[1] = 0xbb67ae85; 21 | p->state[2] = 0x3c6ef372; 22 | p->state[3] = 0xa54ff53a; 23 | p->state[4] = 0x510e527f; 24 | p->state[5] = 0x9b05688c; 25 | p->state[6] = 0x1f83d9ab; 26 | p->state[7] = 0x5be0cd19; 27 | p->count = 0; 28 | } 29 | 30 | #define S0(x) (rotrFixed(x, 2) ^ rotrFixed(x,13) ^ rotrFixed(x, 22)) 31 | #define S1(x) (rotrFixed(x, 6) ^ rotrFixed(x,11) ^ rotrFixed(x, 25)) 32 | #define s0(x) (rotrFixed(x, 7) ^ rotrFixed(x,18) ^ (x >> 3)) 33 | #define s1(x) (rotrFixed(x,17) ^ rotrFixed(x,19) ^ (x >> 10)) 34 | 35 | #define blk0(i) (W[i] = data[i]) 36 | #define blk2(i) (W[i&15] += s1(W[(i-2)&15]) + W[(i-7)&15] + s0(W[(i-15)&15])) 37 | 38 | #define Ch(x,y,z) (z^(x&(y^z))) 39 | #define Maj(x,y,z) ((x&y)|(z&(x|y))) 40 | 41 | #define a(i) T[(0-(i))&7] 42 | #define b(i) T[(1-(i))&7] 43 | #define c(i) T[(2-(i))&7] 44 | #define d(i) T[(3-(i))&7] 45 | #define e(i) T[(4-(i))&7] 46 | #define f(i) T[(5-(i))&7] 47 | #define g(i) T[(6-(i))&7] 48 | #define h(i) T[(7-(i))&7] 49 | 50 | 51 | #ifdef _SHA256_UNROLL2 52 | 53 | #define R(a,b,c,d,e,f,g,h, i) h += S1(e) + Ch(e,f,g) + K[i+j] + (j?blk2(i):blk0(i));\ 54 | d += h; h += S0(a) + Maj(a, b, c) 55 | 56 | #define RX_8(i) \ 57 | R(a,b,c,d,e,f,g,h, i); \ 58 | R(h,a,b,c,d,e,f,g, i+1); \ 59 | R(g,h,a,b,c,d,e,f, i+2); \ 60 | R(f,g,h,a,b,c,d,e, i+3); \ 61 | R(e,f,g,h,a,b,c,d, i+4); \ 62 | R(d,e,f,g,h,a,b,c, i+5); \ 63 | R(c,d,e,f,g,h,a,b, i+6); \ 64 | R(b,c,d,e,f,g,h,a, i+7) 65 | 66 | #else 67 | 68 | #define R(i) h(i) += S1(e(i)) + Ch(e(i),f(i),g(i)) + K[i+j] + (j?blk2(i):blk0(i));\ 69 | d(i) += h(i); h(i) += S0(a(i)) + Maj(a(i), b(i), 
c(i)) 70 | 71 | #ifdef _SHA256_UNROLL 72 | 73 | #define RX_8(i) R(i+0); R(i+1); R(i+2); R(i+3); R(i+4); R(i+5); R(i+6); R(i+7); 74 | 75 | #endif 76 | 77 | #endif 78 | 79 | static const uint32_t K[64] = { 80 | 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 81 | 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, 82 | 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 83 | 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 84 | 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 85 | 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, 86 | 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 87 | 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, 88 | 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 89 | 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 90 | 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 91 | 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, 92 | 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 93 | 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, 94 | 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 95 | 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 96 | }; 97 | 98 | static void Sha256_Transform(uint32_t *state, const uint32_t *data) 99 | { 100 | uint32_t W[16]; 101 | unsigned j; 102 | #ifdef _SHA256_UNROLL2 103 | uint32_t a,b,c,d,e,f,g,h; 104 | a = state[0]; 105 | b = state[1]; 106 | c = state[2]; 107 | d = state[3]; 108 | e = state[4]; 109 | f = state[5]; 110 | g = state[6]; 111 | h = state[7]; 112 | #else 113 | uint32_t T[8]; 114 | for (j = 0; j < 8; j++) 115 | T[j] = state[j]; 116 | #endif 117 | 118 | for (j = 0; j < 64; j += 16) 119 | { 120 | #if defined(_SHA256_UNROLL) || defined(_SHA256_UNROLL2) 121 | RX_8(0); RX_8(8); 122 | #else 123 | unsigned i; 124 | for (i = 0; i < 16; i++) { R(i); } 125 | #endif 126 | } 127 | 128 | #ifdef _SHA256_UNROLL2 129 | state[0] += a; 130 | state[1] += b; 131 | state[2] += c; 132 | state[3] += d; 133 | state[4] += e; 134 | state[5] += f; 135 | state[6] += g; 136 | state[7] += h; 137 | #else 138 | for (j = 0; j < 8; j++) 139 | state[j] += T[j]; 
140 | #endif 141 | 142 | /* Wipe variables */ 143 | /* memset(W, 0, sizeof(W)); */ 144 | /* memset(T, 0, sizeof(T)); */ 145 | } 146 | 147 | #undef S0 148 | #undef S1 149 | #undef s0 150 | #undef s1 151 | 152 | static void Sha256_WriteByteBlock(CSha256 *p) 153 | { 154 | uint32_t data32[16]; 155 | unsigned i; 156 | for (i = 0; i < 16; i++) 157 | data32[i] = 158 | ((uint32_t)(p->buffer[i * 4 ]) << 24) + 159 | ((uint32_t)(p->buffer[i * 4 + 1]) << 16) + 160 | ((uint32_t)(p->buffer[i * 4 + 2]) << 8) + 161 | ((uint32_t)(p->buffer[i * 4 + 3])); 162 | Sha256_Transform(p->state, data32); 163 | } 164 | 165 | void Sha256_Update(CSha256 *p, const uint8_t *data, size_t size) 166 | { 167 | uint32_t curBufferPos = (uint32_t)p->count & 0x3F; 168 | while (size > 0) 169 | { 170 | p->buffer[curBufferPos++] = *data++; 171 | p->count++; 172 | size--; 173 | if (curBufferPos == 64) 174 | { 175 | curBufferPos = 0; 176 | Sha256_WriteByteBlock(p); 177 | } 178 | } 179 | } 180 | 181 | void Sha256_Final(CSha256 *p, uint8_t *digest) 182 | { 183 | uint64_t lenInBits = (p->count << 3); 184 | uint32_t curBufferPos = (uint32_t)p->count & 0x3F; 185 | unsigned i; 186 | p->buffer[curBufferPos++] = 0x80; 187 | while (curBufferPos != (64 - 8)) 188 | { 189 | curBufferPos &= 0x3F; 190 | if (curBufferPos == 0) 191 | Sha256_WriteByteBlock(p); 192 | p->buffer[curBufferPos++] = 0; 193 | } 194 | for (i = 0; i < 8; i++) 195 | { 196 | p->buffer[curBufferPos++] = (uint8_t)(lenInBits >> 56); 197 | lenInBits <<= 8; 198 | } 199 | Sha256_WriteByteBlock(p); 200 | 201 | for (i = 0; i < 8; i++) 202 | { 203 | *digest++ = (uint8_t)(p->state[i] >> 24); 204 | *digest++ = (uint8_t)(p->state[i] >> 16); 205 | *digest++ = (uint8_t)(p->state[i] >> 8); 206 | *digest++ = (uint8_t)(p->state[i]); 207 | } 208 | Sha256_Init(p); 209 | } 210 | 211 | void Sha256_Onestep(const uint8_t *data, size_t size, uint8_t *digest) 212 | { 213 | CSha256 p; 214 | Sha256_Init(&p); 215 | Sha256_Update(&p, data, size); 216 | Sha256_Final(&p, digest); 217 
| } 218 | -------------------------------------------------------------------------------- /sha256.h: -------------------------------------------------------------------------------- 1 | /* Sha256.h -- SHA-256 Hash 2 | 2016-11-04 : Marc Bevand : A few changes to make it more self-contained 3 | 2010-06-11 : Igor Pavlov : Public domain */ 4 | 5 | #ifndef __CRYPTO_SHA256_H 6 | #define __CRYPTO_SHA256_H 7 | 8 | #define SHA256_DIGEST_SIZE 32 9 | 10 | typedef struct 11 | { 12 | uint32_t state[8]; 13 | uint64_t count; 14 | uint8_t buffer[64]; 15 | } CSha256; 16 | 17 | void Sha256_Init(CSha256 *p); 18 | void Sha256_Update(CSha256 *p, const uint8_t *data, size_t size); 19 | void Sha256_Final(CSha256 *p, uint8_t *digest); 20 | void Sha256_Onestep(const uint8_t *data, size_t size, uint8_t *digest); 21 | 22 | #endif 23 | -------------------------------------------------------------------------------- /testing/dummy-solver: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | echo "SILENTARMY mining mode ready" 4 | i=0 5 | while read job; do 6 | # job is: 7 | #
8 | job_id=$(echo $job | cut -d' ' -f2) 9 | ( 10 | j=0 11 | while [ $j -lt 5 ]; do 12 | sleep 1 13 | # output is: 14 | # sol: 15 | echo "sol: $job_id 11223358 1111 2222"`printf %02x $j` 16 | j=$(($j+1)) 17 | done 18 | ) & 19 | sleep .6 20 | # occasionally report statistics: 21 | # status: 22 | echo "status: 100 10" 23 | sleep .6 24 | echo "hello world" 25 | done 26 | echo 'i am done' 27 | exit 42 28 | -------------------------------------------------------------------------------- /thirdparty/README: -------------------------------------------------------------------------------- 1 | "asyncio" is a clone of https://github.com/python/asyncio as of revision 2 | 07ac834068037d8206d6c941e029474bda8e08f2 (03 Nov 2016) with the following 3 | patch: 4 | 5 | --- asyncio/base_events.py 2016-11-07 16:45:34.587238061 -0600 6 | +++ asyncio/base_events.py 2016-11-07 14:50:34.169035083 -0600 7 | @@ -40,6 +40,11 @@ 8 | 9 | __all__ = ['BaseEventLoop'] 10 | 11 | +# lhl: We need to add this back in because 3.3 doesn't have a default max_workers in ThreadPoolExecuter 12 | +# https://bugs.python.org/issue26796 13 | +# https://docs.python.org/3/library/asyncio-eventloop.html#executor 14 | +# https://docs.python.org/3/library/concurrent.futures.html 15 | +_MAX_WORKERS=8 16 | 17 | # Minimum number of _scheduled timer handles before cleanup of 18 | # cancelled handles is performed.
19 | @@ -619,7 +624,10 @@ 20 | if executor is None: 21 | executor = self._default_executor 22 | if executor is None: 23 | - executor = concurrent.futures.ThreadPoolExecutor() 24 | + try: 25 | + executor = concurrent.futures.ThreadPoolExecutor() 26 | + except: 27 | + executor = concurrent.futures.ThreadPoolExecutor(_MAX_WORKERS) 28 | self._default_executor = executor 29 | return futures.wrap_future(executor.submit(func, *args), loop=self) 30 | 31 | -------------------------------------------------------------------------------- /thirdparty/asyncio/.gitattributes: -------------------------------------------------------------------------------- 1 | * text=auto 2 | *.py text diff=python 3 | -------------------------------------------------------------------------------- /thirdparty/asyncio/.gitignore: -------------------------------------------------------------------------------- 1 | *\.py[co] 2 | *~ 3 | *\.orig 4 | *\#.* 5 | *@.* 6 | .coverage 7 | htmlcov 8 | .DS_Store 9 | venv 10 | pyvenv 11 | distribute_setup.py 12 | distribute-*.tar.gz 13 | build 14 | dist 15 | *.egg-info 16 | .tox 17 | .idea/ 18 | *.iml -------------------------------------------------------------------------------- /thirdparty/asyncio/.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | 3 | os: 4 | - linux 5 | 6 | python: 7 | - 3.5 8 | 9 | install: 10 | - pip install asyncio 11 | - python setup.py install 12 | 13 | script: 14 | - python runtests.py 15 | - PYTHONASYNCIODEBUG=1 python -bb runtests.py 16 | -------------------------------------------------------------------------------- /thirdparty/asyncio/AUTHORS: -------------------------------------------------------------------------------- 1 | A. 
Jesse Jiryu Davis 2 | Aaron Griffith 3 | Andrew Svetlov 4 | Anthony Baire 5 | Antoine Pitrou 6 | Arnaud Faure 7 | Aymeric Augustin 8 | Brett Cannon 9 | Charles-François Natali 10 | Christian Heimes 11 | Donald Stufft 12 | Eli Bendersky 13 | Geert Jansen 14 | Giampaolo Rodola' 15 | Guido van Rossum : creator of the asyncio project and author of the PEP 3156 16 | Gustavo Carneiro 17 | Jeff Quast 18 | Jonathan Slenders 19 | Nikolay Kim 20 | Richard Oudkerk 21 | Saúl Ibarra Corretgé 22 | Serhiy Storchaka 23 | Vajrasky Kok 24 | Victor Stinner 25 | Vladimir Kryachko 26 | Yann Sionneau 27 | Yury Selivanov 28 | -------------------------------------------------------------------------------- /thirdparty/asyncio/MANIFEST.in: -------------------------------------------------------------------------------- 1 | include AUTHORS COPYING 2 | include Makefile 3 | include overlapped.c pypi.bat 4 | include check.py runtests.py run_aiotest.py release.py 5 | include update_stdlib.sh 6 | 7 | recursive-include examples *.py 8 | recursive-include tests *.crt 9 | recursive-include tests *.key 10 | recursive-include tests *.pem 11 | recursive-include tests *.py 12 | -------------------------------------------------------------------------------- /thirdparty/asyncio/Makefile: -------------------------------------------------------------------------------- 1 | # Some simple testing tasks (sorry, UNIX only). 2 | 3 | PYTHON=python3 4 | VERBOSE=$(V) 5 | V= 0 6 | FLAGS= 7 | 8 | test: 9 | $(PYTHON) runtests.py -v $(VERBOSE) $(FLAGS) 10 | PYTHONASYNCIODEBUG=1 $(PYTHON) runtests.py -v $(VERBOSE) $(FLAGS) 11 | 12 | vtest: 13 | $(PYTHON) runtests.py -v 1 $(FLAGS) 14 | 15 | testloop: 16 | while sleep 1; do $(PYTHON) runtests.py -v $(VERBOSE) $(FLAGS); done 17 | 18 | # See runtests.py for coverage installation instructions. 19 | cov coverage: 20 | $(PYTHON) runtests.py --coverage -v $(VERBOSE) $(FLAGS) 21 | 22 | check: 23 | $(PYTHON) check.py 24 | 25 | # Requires "pip install pep8". 
26 | pep8: check 27 | pep8 --ignore E125,E127,E226 tests asyncio 28 | 29 | clean: 30 | rm -rf `find . -name __pycache__` 31 | rm -f `find . -type f -name '*.py[co]' ` 32 | rm -f `find . -type f -name '*~' ` 33 | rm -f `find . -type f -name '.*~' ` 34 | rm -f `find . -type f -name '@*' ` 35 | rm -f `find . -type f -name '#*#' ` 36 | rm -f `find . -type f -name '*.orig' ` 37 | rm -f `find . -type f -name '*.rej' ` 38 | rm -rf dist 39 | rm -f .coverage 40 | rm -rf htmlcov 41 | rm -rf build 42 | rm -rf asyncio.egg-info 43 | rm -f MANIFEST 44 | 45 | 46 | # For distribution builders only! 47 | # Push a source distribution for Python 3.3 to PyPI. 48 | # You must update the version in setup.py first. 49 | # A PyPI user configuration in ~/.pypirc is required; 50 | # you can create a suitable configuration using 51 | # python setup.py register 52 | pypi: clean 53 | python3.3 setup.py sdist upload 54 | 55 | # The corresponding action on Windows is pypi.bat. For that to work, 56 | # you need to install wheel and setuptools. The easiest way is to get 57 | # pip using the get-pip.py script found here: 58 | # https://pip.pypa.io/en/latest/installing.html#install-pip 59 | # That will install setuptools and pip; then you can just do 60 | # \Python33\python.exe -m pip install wheel 61 | # after which the pypi.bat script should work. 62 | -------------------------------------------------------------------------------- /thirdparty/asyncio/README.rst: -------------------------------------------------------------------------------- 1 | .. image:: https://travis-ci.org/python/asyncio.svg?branch=master 2 | :target: https://travis-ci.org/python/asyncio 3 | 4 | ..
image:: https://ci.appveyor.com/api/projects/status/u72781t69ljdpm2y?svg=true 5 | :target: https://ci.appveyor.com/project/1st1/asyncio 6 | 7 | 8 | The asyncio module provides infrastructure for writing single-threaded 9 | concurrent code using coroutines, multiplexing I/O access over sockets and 10 | other resources, running network clients and servers, and other related 11 | primitives. Here is a more detailed list of the package contents: 12 | 13 | * a pluggable event loop with various system-specific implementations; 14 | 15 | * transport and protocol abstractions (similar to those in Twisted); 16 | 17 | * concrete support for TCP, UDP, SSL, subprocess pipes, delayed calls, and 18 | others (some may be system-dependent); 19 | 20 | * a Future class that mimics the one in the concurrent.futures module, but 21 | adapted for use with the event loop; 22 | 23 | * coroutines and tasks based on ``yield from`` (PEP 380), to help write 24 | concurrent code in a sequential fashion; 25 | 26 | * cancellation support for Futures and coroutines; 27 | 28 | * synchronization primitives for use between coroutines in a single thread, 29 | mimicking those in the threading module; 30 | 31 | * an interface for passing work off to a threadpool, for times when you 32 | absolutely, positively have to use a library that makes blocking I/O calls. 33 | 34 | Note: The implementation of asyncio was previously called "Tulip". 35 | 36 | 37 | Installation 38 | ============ 39 | 40 | To install asyncio, type:: 41 | 42 | pip install asyncio 43 | 44 | asyncio requires Python 3.3 or later! The asyncio module has been part of the 45 | Python standard library since Python 3.4. 46 | 47 | asyncio is free software distributed under the Apache license version 2.0.
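The package contents listed above come together in just a few lines of code. Below is a minimal sketch (the ``fetch_greeting`` coroutine and its delay values are illustrative, not part of this package); it uses the ``async def``/``await`` syntax available on Python 3.5+, while on Python 3.3/3.4 the same structure would be written with ``@asyncio.coroutine`` and ``yield from``:

```python
import asyncio

async def fetch_greeting(delay, name):
    # Suspends this coroutine without blocking the event loop,
    # letting other coroutines run in the meantime.
    await asyncio.sleep(delay)
    return 'Hello, %s!' % name

async def main():
    # Run two coroutines concurrently on the single-threaded loop;
    # gather() preserves the argument order in its result list.
    return await asyncio.gather(
        fetch_greeting(0.01, 'world'),
        fetch_greeting(0.01, 'asyncio'),
    )

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(main()))  # prints: ['Hello, world!', 'Hello, asyncio!']
finally:
    loop.close()
```

Note how both coroutines make progress on a single thread: while one is suspended in ``asyncio.sleep()``, the event loop runs the other.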
48 | 49 | 50 | Websites 51 | ======== 52 | 53 | * `asyncio project at GitHub <https://github.com/python/asyncio>`_: source 54 | code, bug tracker 55 | * `asyncio documentation <https://docs.python.org/3/library/asyncio.html>`_ 56 | * Mailing list: `python-tulip Google Group 57 | <https://groups.google.com/group/python-tulip>`_ 58 | * IRC: join the ``#asyncio`` channel on the Freenode network 59 | 60 | 61 | Development 62 | =========== 63 | 64 | The actual code lives in the 'asyncio' subdirectory. Tests are in the 'tests' 65 | subdirectory. 66 | 67 | To run tests, run:: 68 | 69 | tox 70 | 71 | Or use the Makefile:: 72 | 73 | make test 74 | 75 | To run coverage (coverage package is required):: 76 | 77 | make coverage 78 | 79 | On Windows, things are a little more complicated. Assume ``P`` is your 80 | Python binary (for example ``C:\Python33\python.exe``). 81 | 82 | You must first build the _overlapped.pyd extension and have it placed 83 | in the asyncio directory, as follows:: 84 | 85 | C:\> P setup.py build_ext --inplace 86 | 87 | If this complains about vcvars.bat, you probably don't have the 88 | required version of Visual Studio installed. Compiling extensions for 89 | Python 3.3 requires Microsoft Visual C++ 2010 (MSVC 10.0) of any 90 | edition; you can download Visual Studio Express 2010 for free from 91 | http://www.visualstudio.com/downloads (scroll down to Visual C++ 2010 92 | Express).
93 | 94 | Once you have built the _overlapped.pyd extension successfully you can 95 | run the tests as follows:: 96 | 97 | C:\> P runtests.py 98 | 99 | And coverage as follows:: 100 | 101 | C:\> P runtests.py --coverage 102 | -------------------------------------------------------------------------------- /thirdparty/asyncio/appveyor.yml: -------------------------------------------------------------------------------- 1 | environment: 2 | matrix: 3 | - PYTHON: "C:\\Python35" 4 | - PYTHON: "C:\\Python35-x64" 5 | 6 | build: false 7 | 8 | test_script: 9 | - "%PYTHON%\\python.exe runtests.py" 10 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/__init__.py: -------------------------------------------------------------------------------- 1 | """The asyncio package, tracking PEP 3156.""" 2 | 3 | import sys 4 | 5 | # The selectors module is in the stdlib in Python 3.4 but not in 3.3. 6 | # Do this first, so the other submodules can use "from . import selectors". 7 | # Prefer asyncio/selectors.py over the stdlib one, as ours may be newer. 8 | try: 9 | from . import selectors 10 | except ImportError: 11 | import selectors # Will also be exported. 12 | 13 | if sys.platform == 'win32': 14 | # Similar thing for _overlapped. 15 | try: 16 | from . import _overlapped 17 | except ImportError: 18 | import _overlapped # Will also be exported. 19 | 20 | # This relies on each of the submodules having an __all__ variable. 
21 | from .base_events import * 22 | from .coroutines import * 23 | from .events import * 24 | from .futures import * 25 | from .locks import * 26 | from .protocols import * 27 | from .queues import * 28 | from .streams import * 29 | from .subprocess import * 30 | from .tasks import * 31 | from .transports import * 32 | 33 | __all__ = (base_events.__all__ + 34 | coroutines.__all__ + 35 | events.__all__ + 36 | futures.__all__ + 37 | locks.__all__ + 38 | protocols.__all__ + 39 | queues.__all__ + 40 | streams.__all__ + 41 | subprocess.__all__ + 42 | tasks.__all__ + 43 | transports.__all__) 44 | 45 | if sys.platform == 'win32': # pragma: no cover 46 | from .windows_events import * 47 | __all__ += windows_events.__all__ 48 | else: 49 | from .unix_events import * # pragma: no cover 50 | __all__ += unix_events.__all__ 51 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/base_subprocess.py: -------------------------------------------------------------------------------- 1 | import collections 2 | import subprocess 3 | import warnings 4 | 5 | from . import compat 6 | from . import protocols 7 | from . 
import transports 8 | from .coroutines import coroutine 9 | from .log import logger 10 | 11 | 12 | class BaseSubprocessTransport(transports.SubprocessTransport): 13 | 14 | def __init__(self, loop, protocol, args, shell, 15 | stdin, stdout, stderr, bufsize, 16 | waiter=None, extra=None, **kwargs): 17 | super().__init__(extra) 18 | self._closed = False 19 | self._protocol = protocol 20 | self._loop = loop 21 | self._proc = None 22 | self._pid = None 23 | self._returncode = None 24 | self._exit_waiters = [] 25 | self._pending_calls = collections.deque() 26 | self._pipes = {} 27 | self._finished = False 28 | 29 | if stdin == subprocess.PIPE: 30 | self._pipes[0] = None 31 | if stdout == subprocess.PIPE: 32 | self._pipes[1] = None 33 | if stderr == subprocess.PIPE: 34 | self._pipes[2] = None 35 | 36 | # Create the child process: set the _proc attribute 37 | try: 38 | self._start(args=args, shell=shell, stdin=stdin, stdout=stdout, 39 | stderr=stderr, bufsize=bufsize, **kwargs) 40 | except: 41 | self.close() 42 | raise 43 | 44 | self._pid = self._proc.pid 45 | self._extra['subprocess'] = self._proc 46 | 47 | if self._loop.get_debug(): 48 | if isinstance(args, (bytes, str)): 49 | program = args 50 | else: 51 | program = args[0] 52 | logger.debug('process %r created: pid %s', 53 | program, self._pid) 54 | 55 | self._loop.create_task(self._connect_pipes(waiter)) 56 | 57 | def __repr__(self): 58 | info = [self.__class__.__name__] 59 | if self._closed: 60 | info.append('closed') 61 | if self._pid is not None: 62 | info.append('pid=%s' % self._pid) 63 | if self._returncode is not None: 64 | info.append('returncode=%s' % self._returncode) 65 | elif self._pid is not None: 66 | info.append('running') 67 | else: 68 | info.append('not started') 69 | 70 | stdin = self._pipes.get(0) 71 | if stdin is not None: 72 | info.append('stdin=%s' % stdin.pipe) 73 | 74 | stdout = self._pipes.get(1) 75 | stderr = self._pipes.get(2) 76 | if stdout is not None and stderr is stdout: 77 | 
info.append('stdout=stderr=%s' % stdout.pipe) 78 | else: 79 | if stdout is not None: 80 | info.append('stdout=%s' % stdout.pipe) 81 | if stderr is not None: 82 | info.append('stderr=%s' % stderr.pipe) 83 | 84 | return '<%s>' % ' '.join(info) 85 | 86 | def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs): 87 | raise NotImplementedError 88 | 89 | def set_protocol(self, protocol): 90 | self._protocol = protocol 91 | 92 | def get_protocol(self): 93 | return self._protocol 94 | 95 | def is_closing(self): 96 | return self._closed 97 | 98 | def close(self): 99 | if self._closed: 100 | return 101 | self._closed = True 102 | 103 | for proto in self._pipes.values(): 104 | if proto is None: 105 | continue 106 | proto.pipe.close() 107 | 108 | if (self._proc is not None 109 | # the child process finished? 110 | and self._returncode is None 111 | # the child process finished but the transport was not notified yet? 112 | and self._proc.poll() is None 113 | ): 114 | if self._loop.get_debug(): 115 | logger.warning('Close running child process: kill %r', self) 116 | 117 | try: 118 | self._proc.kill() 119 | except ProcessLookupError: 120 | pass 121 | 122 | # Don't clear the _proc reference yet: _post_init() may still run 123 | 124 | # On Python 3.3 and older, objects with a destructor that are part of a 125 | # reference cycle are never destroyed. This is no longer the case on 126 | # Python 3.4, thanks to PEP 442.
127 | if compat.PY34: 128 | def __del__(self): 129 | if not self._closed: 130 | warnings.warn("unclosed transport %r" % self, ResourceWarning) 131 | self.close() 132 | 133 | def get_pid(self): 134 | return self._pid 135 | 136 | def get_returncode(self): 137 | return self._returncode 138 | 139 | def get_pipe_transport(self, fd): 140 | if fd in self._pipes: 141 | return self._pipes[fd].pipe 142 | else: 143 | return None 144 | 145 | def _check_proc(self): 146 | if self._proc is None: 147 | raise ProcessLookupError() 148 | 149 | def send_signal(self, signal): 150 | self._check_proc() 151 | self._proc.send_signal(signal) 152 | 153 | def terminate(self): 154 | self._check_proc() 155 | self._proc.terminate() 156 | 157 | def kill(self): 158 | self._check_proc() 159 | self._proc.kill() 160 | 161 | @coroutine 162 | def _connect_pipes(self, waiter): 163 | try: 164 | proc = self._proc 165 | loop = self._loop 166 | 167 | if proc.stdin is not None: 168 | _, pipe = yield from loop.connect_write_pipe( 169 | lambda: WriteSubprocessPipeProto(self, 0), 170 | proc.stdin) 171 | self._pipes[0] = pipe 172 | 173 | if proc.stdout is not None: 174 | _, pipe = yield from loop.connect_read_pipe( 175 | lambda: ReadSubprocessPipeProto(self, 1), 176 | proc.stdout) 177 | self._pipes[1] = pipe 178 | 179 | if proc.stderr is not None: 180 | _, pipe = yield from loop.connect_read_pipe( 181 | lambda: ReadSubprocessPipeProto(self, 2), 182 | proc.stderr) 183 | self._pipes[2] = pipe 184 | 185 | assert self._pending_calls is not None 186 | 187 | loop.call_soon(self._protocol.connection_made, self) 188 | for callback, data in self._pending_calls: 189 | loop.call_soon(callback, *data) 190 | self._pending_calls = None 191 | except Exception as exc: 192 | if waiter is not None and not waiter.cancelled(): 193 | waiter.set_exception(exc) 194 | else: 195 | if waiter is not None and not waiter.cancelled(): 196 | waiter.set_result(None) 197 | 198 | def _call(self, cb, *data): 199 | if self._pending_calls is not 
None: 200 | self._pending_calls.append((cb, data)) 201 | else: 202 | self._loop.call_soon(cb, *data) 203 | 204 | def _pipe_connection_lost(self, fd, exc): 205 | self._call(self._protocol.pipe_connection_lost, fd, exc) 206 | self._try_finish() 207 | 208 | def _pipe_data_received(self, fd, data): 209 | self._call(self._protocol.pipe_data_received, fd, data) 210 | 211 | def _process_exited(self, returncode): 212 | assert returncode is not None, returncode 213 | assert self._returncode is None, self._returncode 214 | if self._loop.get_debug(): 215 | logger.info('%r exited with return code %r', 216 | self, returncode) 217 | self._returncode = returncode 218 | if self._proc.returncode is None: 219 | # asyncio uses a child watcher: copy the status into the Popen 220 | # object. On Python 3.6, it is required to avoid a ResourceWarning. 221 | self._proc.returncode = returncode 222 | self._call(self._protocol.process_exited) 223 | self._try_finish() 224 | 225 | # wake up futures waiting for wait() 226 | for waiter in self._exit_waiters: 227 | if not waiter.cancelled(): 228 | waiter.set_result(returncode) 229 | self._exit_waiters = None 230 | 231 | @coroutine 232 | def _wait(self): 233 | """Wait until the process exits and return the process return code.
234 | 235 | This method is a coroutine.""" 236 | if self._returncode is not None: 237 | return self._returncode 238 | 239 | waiter = self._loop.create_future() 240 | self._exit_waiters.append(waiter) 241 | return (yield from waiter) 242 | 243 | def _try_finish(self): 244 | assert not self._finished 245 | if self._returncode is None: 246 | return 247 | if all(p is not None and p.disconnected 248 | for p in self._pipes.values()): 249 | self._finished = True 250 | self._call(self._call_connection_lost, None) 251 | 252 | def _call_connection_lost(self, exc): 253 | try: 254 | self._protocol.connection_lost(exc) 255 | finally: 256 | self._loop = None 257 | self._proc = None 258 | self._protocol = None 259 | 260 | 261 | class WriteSubprocessPipeProto(protocols.BaseProtocol): 262 | 263 | def __init__(self, proc, fd): 264 | self.proc = proc 265 | self.fd = fd 266 | self.pipe = None 267 | self.disconnected = False 268 | 269 | def connection_made(self, transport): 270 | self.pipe = transport 271 | 272 | def __repr__(self): 273 | return ('<%s fd=%s pipe=%r>' 274 | % (self.__class__.__name__, self.fd, self.pipe)) 275 | 276 | def connection_lost(self, exc): 277 | self.disconnected = True 278 | self.proc._pipe_connection_lost(self.fd, exc) 279 | self.proc = None 280 | 281 | def pause_writing(self): 282 | self.proc._protocol.pause_writing() 283 | 284 | def resume_writing(self): 285 | self.proc._protocol.resume_writing() 286 | 287 | 288 | class ReadSubprocessPipeProto(WriteSubprocessPipeProto, 289 | protocols.Protocol): 290 | 291 | def data_received(self, data): 292 | self.proc._pipe_data_received(self.fd, data) 293 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/compat.py: -------------------------------------------------------------------------------- 1 | """Compatibility helpers for the different Python versions.""" 2 | 3 | import sys 4 | 5 | PY34 = sys.version_info >= (3, 4) 6 | PY35 = sys.version_info >= (3, 5) 7 | 
PY352 = sys.version_info >= (3, 5, 2) 8 | 9 | 10 | def flatten_list_bytes(list_of_data): 11 | """Concatenate a sequence of bytes-like objects.""" 12 | if not PY34: 13 | # On Python 3.3 and older, bytes.join() doesn't handle 14 | # memoryview. 15 | list_of_data = ( 16 | bytes(data) if isinstance(data, memoryview) else data 17 | for data in list_of_data) 18 | return b''.join(list_of_data) 19 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/constants.py: -------------------------------------------------------------------------------- 1 | """Constants.""" 2 | 3 | # After the connection is lost, log warnings after this many write()s. 4 | LOG_THRESHOLD_FOR_CONNLOST_WRITES = 5 5 | 6 | # Seconds to wait before retrying accept(). 7 | ACCEPT_RETRY_DELAY = 1 8 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/log.py: -------------------------------------------------------------------------------- 1 | """Logging configuration.""" 2 | 3 | import logging 4 | 5 | 6 | # Name the logger after the package. 7 | logger = logging.getLogger(__package__) 8 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/protocols.py: -------------------------------------------------------------------------------- 1 | """Abstract Protocol class.""" 2 | 3 | __all__ = ['BaseProtocol', 'Protocol', 'DatagramProtocol', 4 | 'SubprocessProtocol'] 5 | 6 | 7 | class BaseProtocol: 8 | """Common base class for protocol interfaces. 9 | 10 | Usually user code implements protocols derived from BaseProtocol, 11 | such as Protocol or SubprocessProtocol. 12 | 13 | The only case when BaseProtocol should be implemented directly is a 14 | write-only transport, such as a write pipe. 15 | """ 16 | 17 | def connection_made(self, transport): 18 | """Called when a connection is made. 19 | 20 | The argument is the transport representing the pipe connection.
21 | To receive data, wait for data_received() calls. 22 | When the connection is closed, connection_lost() is called. 23 | """ 24 | 25 | def connection_lost(self, exc): 26 | """Called when the connection is lost or closed. 27 | 28 | The argument is an exception object or None (the latter 29 | meaning a regular EOF is received or the connection was 30 | aborted or closed). 31 | """ 32 | 33 | def pause_writing(self): 34 | """Called when the transport's buffer goes over the high-water mark. 35 | 36 | Pause and resume calls are paired -- pause_writing() is called 37 | once when the buffer goes strictly over the high-water mark 38 | (even if subsequent writes increase the buffer size even 39 | more), and eventually resume_writing() is called once when the 40 | buffer size reaches the low-water mark. 41 | 42 | Note that if the buffer size equals the high-water mark, 43 | pause_writing() is not called -- it must go strictly over. 44 | Conversely, resume_writing() is called when the buffer size is 45 | equal to or lower than the low-water mark. These end conditions 46 | are important to ensure that things go as expected when either 47 | mark is zero. 48 | 49 | NOTE: This is the only Protocol callback that is not called 50 | through EventLoop.call_soon() -- if it were, it would have no 51 | effect when it's most needed (when the app keeps writing 52 | without yielding until pause_writing() is called). 53 | """ 54 | 55 | def resume_writing(self): 56 | """Called when the transport's buffer drains below the low-water mark. 57 | 58 | See pause_writing() for details. 59 | """ 60 | 61 | 62 | class Protocol(BaseProtocol): 63 | """Interface for stream protocol. 64 | 65 | The user should implement this interface. They can inherit from 66 | this class but don't need to. The implementations here do 67 | nothing (they don't raise exceptions).
68 | 69 | When the user wants to request a transport, they pass a protocol 70 | factory to a utility function (e.g., EventLoop.create_connection()). 71 | 72 | When the connection is made successfully, connection_made() is 73 | called with a suitable transport object. Then data_received() 74 | will be called 0 or more times with data (bytes) received from the 75 | transport; finally, connection_lost() will be called exactly once 76 | with either an exception object or None as an argument. 77 | 78 | State machine of calls: 79 | 80 | start -> CM [-> DR*] [-> ER?] -> CL -> end 81 | 82 | * CM: connection_made() 83 | * DR: data_received() 84 | * ER: eof_received() 85 | * CL: connection_lost() 86 | """ 87 | 88 | def data_received(self, data): 89 | """Called when some data is received. 90 | 91 | The argument is a bytes object. 92 | """ 93 | 94 | def eof_received(self): 95 | """Called when the other end calls write_eof() or equivalent. 96 | 97 | If this returns a false value (including None), the transport 98 | will close itself. If it returns a true value, closing the 99 | transport is up to the protocol. 100 | """ 101 | 102 | 103 | class DatagramProtocol(BaseProtocol): 104 | """Interface for datagram protocol.""" 105 | 106 | def datagram_received(self, data, addr): 107 | """Called when some datagram is received.""" 108 | 109 | def error_received(self, exc): 110 | """Called when a send or receive operation raises an OSError. 111 | 112 | (Other than BlockingIOError or InterruptedError.) 113 | """ 114 | 115 | 116 | class SubprocessProtocol(BaseProtocol): 117 | """Interface for protocol for subprocess calls.""" 118 | 119 | def pipe_data_received(self, fd, data): 120 | """Called when the subprocess writes data into stdout/stderr pipe. 121 | 122 | fd is int file descriptor. 123 | data is bytes object. 124 | """ 125 | 126 | def pipe_connection_lost(self, fd, exc): 127 | """Called when a file descriptor associated with the child process is 128 | closed.
129 | 130 | fd is the int file descriptor that was closed. 131 | """ 132 | 133 | def process_exited(self): 134 | """Called when subprocess has exited.""" 135 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/queues.py: -------------------------------------------------------------------------------- 1 | """Queues""" 2 | 3 | __all__ = ['Queue', 'PriorityQueue', 'LifoQueue', 'QueueFull', 'QueueEmpty'] 4 | 5 | import collections 6 | import heapq 7 | 8 | from . import compat 9 | from . import events 10 | from . import locks 11 | from .coroutines import coroutine 12 | 13 | 14 | class QueueEmpty(Exception): 15 | """Exception raised when Queue.get_nowait() is called on a Queue object 16 | which is empty. 17 | """ 18 | pass 19 | 20 | 21 | class QueueFull(Exception): 22 | """Exception raised when the Queue.put_nowait() method is called on a Queue 23 | object which is full. 24 | """ 25 | pass 26 | 27 | 28 | class Queue: 29 | """A queue, useful for coordinating producer and consumer coroutines. 30 | 31 | If maxsize is less than or equal to zero, the queue size is infinite. If it 32 | is an integer greater than 0, then "yield from put()" will block when the 33 | queue reaches maxsize, until an item is removed by get(). 34 | 35 | Unlike the standard library Queue, you can reliably know this Queue's size 36 | with qsize(), since your single-threaded asyncio application won't be 37 | interrupted between calling qsize() and doing an operation on the Queue. 38 | """ 39 | 40 | def __init__(self, maxsize=0, *, loop=None): 41 | if loop is None: 42 | self._loop = events.get_event_loop() 43 | else: 44 | self._loop = loop 45 | self._maxsize = maxsize 46 | 47 | # Futures. 48 | self._getters = collections.deque() 49 | # Futures. 
50 | self._putters = collections.deque() 51 | self._unfinished_tasks = 0 52 | self._finished = locks.Event(loop=self._loop) 53 | self._finished.set() 54 | self._init(maxsize) 55 | 56 | # These three are overridable in subclasses. 57 | 58 | def _init(self, maxsize): 59 | self._queue = collections.deque() 60 | 61 | def _get(self): 62 | return self._queue.popleft() 63 | 64 | def _put(self, item): 65 | self._queue.append(item) 66 | 67 | # End of the overridable methods. 68 | 69 | def _wakeup_next(self, waiters): 70 | # Wake up the next waiter (if any) that isn't cancelled. 71 | while waiters: 72 | waiter = waiters.popleft() 73 | if not waiter.done(): 74 | waiter.set_result(None) 75 | break 76 | 77 | def __repr__(self): 78 | return '<{} at {:#x} {}>'.format( 79 | type(self).__name__, id(self), self._format()) 80 | 81 | def __str__(self): 82 | return '<{} {}>'.format(type(self).__name__, self._format()) 83 | 84 | def _format(self): 85 | result = 'maxsize={!r}'.format(self._maxsize) 86 | if getattr(self, '_queue', None): 87 | result += ' _queue={!r}'.format(list(self._queue)) 88 | if self._getters: 89 | result += ' _getters[{}]'.format(len(self._getters)) 90 | if self._putters: 91 | result += ' _putters[{}]'.format(len(self._putters)) 92 | if self._unfinished_tasks: 93 | result += ' tasks={}'.format(self._unfinished_tasks) 94 | return result 95 | 96 | def qsize(self): 97 | """Number of items in the queue.""" 98 | return len(self._queue) 99 | 100 | @property 101 | def maxsize(self): 102 | """Number of items allowed in the queue.""" 103 | return self._maxsize 104 | 105 | def empty(self): 106 | """Return True if the queue is empty, False otherwise.""" 107 | return not self._queue 108 | 109 | def full(self): 110 | """Return True if there are maxsize items in the queue. 111 | 112 | Note: if the Queue was initialized with maxsize=0 (the default), 113 | then full() is never True. 
114 | """ 115 | if self._maxsize <= 0: 116 | return False 117 | else: 118 | return self.qsize() >= self._maxsize 119 | 120 | @coroutine 121 | def put(self, item): 122 | """Put an item into the queue. 123 | 124 | Put an item into the queue. If the queue is full, wait until a free 125 | slot is available before adding item. 126 | 127 | This method is a coroutine. 128 | """ 129 | while self.full(): 130 | putter = self._loop.create_future() 131 | self._putters.append(putter) 132 | try: 133 | yield from putter 134 | except: 135 | putter.cancel() # Just in case putter is not done yet. 136 | if not self.full() and not putter.cancelled(): 137 | # We were woken up by get_nowait(), but can't take 138 | # the call. Wake up the next in line. 139 | self._wakeup_next(self._putters) 140 | raise 141 | return self.put_nowait(item) 142 | 143 | def put_nowait(self, item): 144 | """Put an item into the queue without blocking. 145 | 146 | If no free slot is immediately available, raise QueueFull. 147 | """ 148 | if self.full(): 149 | raise QueueFull 150 | self._put(item) 151 | self._unfinished_tasks += 1 152 | self._finished.clear() 153 | self._wakeup_next(self._getters) 154 | 155 | @coroutine 156 | def get(self): 157 | """Remove and return an item from the queue. 158 | 159 | If queue is empty, wait until an item is available. 160 | 161 | This method is a coroutine. 162 | """ 163 | while self.empty(): 164 | getter = self._loop.create_future() 165 | self._getters.append(getter) 166 | try: 167 | yield from getter 168 | except: 169 | getter.cancel() # Just in case getter is not done yet. 170 | if not self.empty() and not getter.cancelled(): 171 | # We were woken up by put_nowait(), but can't take 172 | # the call. Wake up the next in line. 173 | self._wakeup_next(self._getters) 174 | raise 175 | return self.get_nowait() 176 | 177 | def get_nowait(self): 178 | """Remove and return an item from the queue. 179 | 180 | Return an item if one is immediately available, else raise QueueEmpty. 
181 | """ 182 | if self.empty(): 183 | raise QueueEmpty 184 | item = self._get() 185 | self._wakeup_next(self._putters) 186 | return item 187 | 188 | def task_done(self): 189 | """Indicate that a formerly enqueued task is complete. 190 | 191 | Used by queue consumers. For each get() used to fetch a task, 192 | a subsequent call to task_done() tells the queue that the processing 193 | on the task is complete. 194 | 195 | If a join() is currently blocking, it will resume when all items have 196 | been processed (meaning that a task_done() call was received for every 197 | item that had been put() into the queue). 198 | 199 | Raises ValueError if called more times than there were items placed in 200 | the queue. 201 | """ 202 | if self._unfinished_tasks <= 0: 203 | raise ValueError('task_done() called too many times') 204 | self._unfinished_tasks -= 1 205 | if self._unfinished_tasks == 0: 206 | self._finished.set() 207 | 208 | @coroutine 209 | def join(self): 210 | """Block until all items in the queue have been gotten and processed. 211 | 212 | The count of unfinished tasks goes up whenever an item is added to the 213 | queue. The count goes down whenever a consumer calls task_done() to 214 | indicate that the item was retrieved and all work on it is complete. 215 | When the count of unfinished tasks drops to zero, join() unblocks. 216 | """ 217 | if self._unfinished_tasks > 0: 218 | yield from self._finished.wait() 219 | 220 | 221 | class PriorityQueue(Queue): 222 | """A subclass of Queue; retrieves entries in priority order (lowest first). 223 | 224 | Entries are typically tuples of the form: (priority number, data). 
225 | """ 226 | 227 | def _init(self, maxsize): 228 | self._queue = [] 229 | 230 | def _put(self, item, heappush=heapq.heappush): 231 | heappush(self._queue, item) 232 | 233 | def _get(self, heappop=heapq.heappop): 234 | return heappop(self._queue) 235 | 236 | 237 | class LifoQueue(Queue): 238 | """A subclass of Queue that retrieves most recently added entries first.""" 239 | 240 | def _init(self, maxsize): 241 | self._queue = [] 242 | 243 | def _put(self, item): 244 | self._queue.append(item) 245 | 246 | def _get(self): 247 | return self._queue.pop() 248 | 249 | 250 | if not compat.PY35: 251 | JoinableQueue = Queue 252 | """Deprecated alias for Queue.""" 253 | __all__.append('JoinableQueue') 254 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/subprocess.py: -------------------------------------------------------------------------------- 1 | __all__ = ['create_subprocess_exec', 'create_subprocess_shell'] 2 | 3 | import subprocess 4 | 5 | from . import events 6 | from . import protocols 7 | from . import streams 8 | from . 
import tasks 9 | from .coroutines import coroutine 10 | from .log import logger 11 | 12 | 13 | PIPE = subprocess.PIPE 14 | STDOUT = subprocess.STDOUT 15 | DEVNULL = subprocess.DEVNULL 16 | 17 | 18 | class SubprocessStreamProtocol(streams.FlowControlMixin, 19 | protocols.SubprocessProtocol): 20 | """Like StreamReaderProtocol, but for a subprocess.""" 21 | 22 | def __init__(self, limit, loop): 23 | super().__init__(loop=loop) 24 | self._limit = limit 25 | self.stdin = self.stdout = self.stderr = None 26 | self._transport = None 27 | 28 | def __repr__(self): 29 | info = [self.__class__.__name__] 30 | if self.stdin is not None: 31 | info.append('stdin=%r' % self.stdin) 32 | if self.stdout is not None: 33 | info.append('stdout=%r' % self.stdout) 34 | if self.stderr is not None: 35 | info.append('stderr=%r' % self.stderr) 36 | return '<%s>' % ' '.join(info) 37 | 38 | def connection_made(self, transport): 39 | self._transport = transport 40 | 41 | stdout_transport = transport.get_pipe_transport(1) 42 | if stdout_transport is not None: 43 | self.stdout = streams.StreamReader(limit=self._limit, 44 | loop=self._loop) 45 | self.stdout.set_transport(stdout_transport) 46 | 47 | stderr_transport = transport.get_pipe_transport(2) 48 | if stderr_transport is not None: 49 | self.stderr = streams.StreamReader(limit=self._limit, 50 | loop=self._loop) 51 | self.stderr.set_transport(stderr_transport) 52 | 53 | stdin_transport = transport.get_pipe_transport(0) 54 | if stdin_transport is not None: 55 | self.stdin = streams.StreamWriter(stdin_transport, 56 | protocol=self, 57 | reader=None, 58 | loop=self._loop) 59 | 60 | def pipe_data_received(self, fd, data): 61 | if fd == 1: 62 | reader = self.stdout 63 | elif fd == 2: 64 | reader = self.stderr 65 | else: 66 | reader = None 67 | if reader is not None: 68 | reader.feed_data(data) 69 | 70 | def pipe_connection_lost(self, fd, exc): 71 | if fd == 0: 72 | pipe = self.stdin 73 | if pipe is not None: 74 | pipe.close() 75 | 
self.connection_lost(exc) 76 | return 77 | if fd == 1: 78 | reader = self.stdout 79 | elif fd == 2: 80 | reader = self.stderr 81 | else: 82 | reader = None 83 | if reader is not None: 84 | if exc is None: 85 | reader.feed_eof() 86 | else: 87 | reader.set_exception(exc) 88 | 89 | def process_exited(self): 90 | self._transport.close() 91 | self._transport = None 92 | 93 | 94 | class Process: 95 | def __init__(self, transport, protocol, loop): 96 | self._transport = transport 97 | self._protocol = protocol 98 | self._loop = loop 99 | self.stdin = protocol.stdin 100 | self.stdout = protocol.stdout 101 | self.stderr = protocol.stderr 102 | self.pid = transport.get_pid() 103 | 104 | def __repr__(self): 105 | return '<%s %s>' % (self.__class__.__name__, self.pid) 106 | 107 | @property 108 | def returncode(self): 109 | return self._transport.get_returncode() 110 | 111 | @coroutine 112 | def wait(self): 113 | """Wait until the process exits and return the process return code. 114 | 115 | This method is a coroutine.""" 116 | return (yield from self._transport._wait()) 117 | 118 | def send_signal(self, signal): 119 | self._transport.send_signal(signal) 120 | 121 | def terminate(self): 122 | self._transport.terminate() 123 | 124 | def kill(self): 125 | self._transport.kill() 126 | 127 | @coroutine 128 | def _feed_stdin(self, input): 129 | debug = self._loop.get_debug() 130 | self.stdin.write(input) 131 | if debug: 132 | logger.debug('%r communicate: feed stdin (%s bytes)', 133 | self, len(input)) 134 | try: 135 | yield from self.stdin.drain() 136 | except (BrokenPipeError, ConnectionResetError) as exc: 137 | # communicate() ignores BrokenPipeError and ConnectionResetError 138 | if debug: 139 | logger.debug('%r communicate: stdin got %r', self, exc) 140 | 141 | if debug: 142 | logger.debug('%r communicate: close stdin', self) 143 | self.stdin.close() 144 | 145 | @coroutine 146 | def _noop(self): 147 | return None 148 | 149 | @coroutine 150 | def _read_stream(self, fd): 151 |
transport = self._transport.get_pipe_transport(fd) 152 | if fd == 2: 153 | stream = self.stderr 154 | else: 155 | assert fd == 1 156 | stream = self.stdout 157 | if self._loop.get_debug(): 158 | name = 'stdout' if fd == 1 else 'stderr' 159 | logger.debug('%r communicate: read %s', self, name) 160 | output = yield from stream.read() 161 | if self._loop.get_debug(): 162 | name = 'stdout' if fd == 1 else 'stderr' 163 | logger.debug('%r communicate: close %s', self, name) 164 | transport.close() 165 | return output 166 | 167 | @coroutine 168 | def communicate(self, input=None): 169 | if input is not None: 170 | stdin = self._feed_stdin(input) 171 | else: 172 | stdin = self._noop() 173 | if self.stdout is not None: 174 | stdout = self._read_stream(1) 175 | else: 176 | stdout = self._noop() 177 | if self.stderr is not None: 178 | stderr = self._read_stream(2) 179 | else: 180 | stderr = self._noop() 181 | stdin, stdout, stderr = yield from tasks.gather(stdin, stdout, stderr, 182 | loop=self._loop) 183 | yield from self.wait() 184 | return (stdout, stderr) 185 | 186 | 187 | @coroutine 188 | def create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None, 189 | loop=None, limit=streams._DEFAULT_LIMIT, **kwds): 190 | if loop is None: 191 | loop = events.get_event_loop() 192 | protocol_factory = lambda: SubprocessStreamProtocol(limit=limit, 193 | loop=loop) 194 | transport, protocol = yield from loop.subprocess_shell( 195 | protocol_factory, 196 | cmd, stdin=stdin, stdout=stdout, 197 | stderr=stderr, **kwds) 198 | return Process(transport, protocol, loop) 199 | 200 | @coroutine 201 | def create_subprocess_exec(program, *args, stdin=None, stdout=None, 202 | stderr=None, loop=None, 203 | limit=streams._DEFAULT_LIMIT, **kwds): 204 | if loop is None: 205 | loop = events.get_event_loop() 206 | protocol_factory = lambda: SubprocessStreamProtocol(limit=limit, 207 | loop=loop) 208 | transport, protocol = yield from loop.subprocess_exec( 209 | protocol_factory, 210 | program, 
*args, 211 | stdin=stdin, stdout=stdout, 212 | stderr=stderr, **kwds) 213 | return Process(transport, protocol, loop) 214 | -------------------------------------------------------------------------------- /thirdparty/asyncio/asyncio/windows_utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Various Windows specific bits and pieces 3 | """ 4 | 5 | import sys 6 | 7 | if sys.platform != 'win32': # pragma: no cover 8 | raise ImportError('win32 only') 9 | 10 | import _winapi 11 | import itertools 12 | import msvcrt 13 | import os 14 | import socket 15 | import subprocess 16 | import tempfile 17 | import warnings 18 | 19 | 20 | __all__ = ['socketpair', 'pipe', 'Popen', 'PIPE', 'PipeHandle'] 21 | 22 | 23 | # Constants/globals 24 | 25 | 26 | BUFSIZE = 8192 27 | PIPE = subprocess.PIPE 28 | STDOUT = subprocess.STDOUT 29 | _mmap_counter = itertools.count() 30 | 31 | 32 | if hasattr(socket, 'socketpair'): 33 | # Since Python 3.5, socket.socketpair() is now also available on Windows 34 | socketpair = socket.socketpair 35 | else: 36 | # Replacement for socket.socketpair() 37 | def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): 38 | """A socket pair usable as a self-pipe, for Windows. 39 | 40 | Origin: https://gist.github.com/4325783, by Geert Jansen. 41 | Public domain. 42 | """ 43 | if family == socket.AF_INET: 44 | host = '127.0.0.1' 45 | elif family == socket.AF_INET6: 46 | host = '::1' 47 | else: 48 | raise ValueError("Only AF_INET and AF_INET6 socket address " 49 | "families are supported") 50 | if type != socket.SOCK_STREAM: 51 | raise ValueError("Only SOCK_STREAM socket type is supported") 52 | if proto != 0: 53 | raise ValueError("Only protocol zero is supported") 54 | 55 | # We create a connected TCP socket. Note the trick with setblocking(0) 56 | # that prevents us from having to create a thread. 
57 | lsock = socket.socket(family, type, proto) 58 | try: 59 | lsock.bind((host, 0)) 60 | lsock.listen(1) 61 | # On IPv6, ignore flow_info and scope_id 62 | addr, port = lsock.getsockname()[:2] 63 | csock = socket.socket(family, type, proto) 64 | try: 65 | csock.setblocking(False) 66 | try: 67 | csock.connect((addr, port)) 68 | except (BlockingIOError, InterruptedError): 69 | pass 70 | csock.setblocking(True) 71 | ssock, _ = lsock.accept() 72 | except: 73 | csock.close() 74 | raise 75 | finally: 76 | lsock.close() 77 | return (ssock, csock) 78 | 79 | 80 | # Replacement for os.pipe() using handles instead of fds 81 | 82 | 83 | def pipe(*, duplex=False, overlapped=(True, True), bufsize=BUFSIZE): 84 | """Like os.pipe() but with overlapped support and using handles not fds.""" 85 | address = tempfile.mktemp(prefix=r'\\.\pipe\python-pipe-%d-%d-' % 86 | (os.getpid(), next(_mmap_counter))) 87 | 88 | if duplex: 89 | openmode = _winapi.PIPE_ACCESS_DUPLEX 90 | access = _winapi.GENERIC_READ | _winapi.GENERIC_WRITE 91 | obsize, ibsize = bufsize, bufsize 92 | else: 93 | openmode = _winapi.PIPE_ACCESS_INBOUND 94 | access = _winapi.GENERIC_WRITE 95 | obsize, ibsize = 0, bufsize 96 | 97 | openmode |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE 98 | 99 | if overlapped[0]: 100 | openmode |= _winapi.FILE_FLAG_OVERLAPPED 101 | 102 | if overlapped[1]: 103 | flags_and_attribs = _winapi.FILE_FLAG_OVERLAPPED 104 | else: 105 | flags_and_attribs = 0 106 | 107 | h1 = h2 = None 108 | try: 109 | h1 = _winapi.CreateNamedPipe( 110 | address, openmode, _winapi.PIPE_WAIT, 111 | 1, obsize, ibsize, _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL) 112 | 113 | h2 = _winapi.CreateFile( 114 | address, access, 0, _winapi.NULL, _winapi.OPEN_EXISTING, 115 | flags_and_attribs, _winapi.NULL) 116 | 117 | ov = _winapi.ConnectNamedPipe(h1, overlapped=True) 118 | ov.GetOverlappedResult(True) 119 | return h1, h2 120 | except: 121 | if h1 is not None: 122 | _winapi.CloseHandle(h1) 123 | if h2 is not None: 124 | 
_winapi.CloseHandle(h2) 125 | raise 126 | 127 | 128 | # Wrapper for a pipe handle 129 | 130 | 131 | class PipeHandle: 132 | """Wrapper for an overlapped pipe handle which is vaguely file-object like. 133 | 134 | The IOCP event loop can use these instead of socket objects. 135 | """ 136 | def __init__(self, handle): 137 | self._handle = handle 138 | 139 | def __repr__(self): 140 | if self._handle is not None: 141 | handle = 'handle=%r' % self._handle 142 | else: 143 | handle = 'closed' 144 | return '<%s %s>' % (self.__class__.__name__, handle) 145 | 146 | @property 147 | def handle(self): 148 | return self._handle 149 | 150 | def fileno(self): 151 | if self._handle is None: 152 | raise ValueError("I/O operation on closed pipe") 153 | return self._handle 154 | 155 | def close(self, *, CloseHandle=_winapi.CloseHandle): 156 | if self._handle is not None: 157 | CloseHandle(self._handle) 158 | self._handle = None 159 | 160 | def __del__(self): 161 | if self._handle is not None: 162 | warnings.warn("unclosed %r" % self, ResourceWarning) 163 | self.close() 164 | 165 | def __enter__(self): 166 | return self 167 | 168 | def __exit__(self, t, v, tb): 169 | self.close() 170 | 171 | 172 | # Replacement for subprocess.Popen using overlapped pipe handles 173 | 174 | 175 | class Popen(subprocess.Popen): 176 | """Replacement for subprocess.Popen using overlapped pipe handles. 177 | 178 | The stdin, stdout, stderr are None or instances of PipeHandle.
179 | """ 180 | def __init__(self, args, stdin=None, stdout=None, stderr=None, **kwds): 181 | assert not kwds.get('universal_newlines') 182 | assert kwds.get('bufsize', 0) == 0 183 | stdin_rfd = stdout_wfd = stderr_wfd = None 184 | stdin_wh = stdout_rh = stderr_rh = None 185 | if stdin == PIPE: 186 | stdin_rh, stdin_wh = pipe(overlapped=(False, True), duplex=True) 187 | stdin_rfd = msvcrt.open_osfhandle(stdin_rh, os.O_RDONLY) 188 | else: 189 | stdin_rfd = stdin 190 | if stdout == PIPE: 191 | stdout_rh, stdout_wh = pipe(overlapped=(True, False)) 192 | stdout_wfd = msvcrt.open_osfhandle(stdout_wh, 0) 193 | else: 194 | stdout_wfd = stdout 195 | if stderr == PIPE: 196 | stderr_rh, stderr_wh = pipe(overlapped=(True, False)) 197 | stderr_wfd = msvcrt.open_osfhandle(stderr_wh, 0) 198 | elif stderr == STDOUT: 199 | stderr_wfd = stdout_wfd 200 | else: 201 | stderr_wfd = stderr 202 | try: 203 | super().__init__(args, stdin=stdin_rfd, stdout=stdout_wfd, 204 | stderr=stderr_wfd, **kwds) 205 | except: 206 | for h in (stdin_wh, stdout_rh, stderr_rh): 207 | if h is not None: 208 | _winapi.CloseHandle(h) 209 | raise 210 | else: 211 | if stdin_wh is not None: 212 | self.stdin = PipeHandle(stdin_wh) 213 | if stdout_rh is not None: 214 | self.stdout = PipeHandle(stdout_rh) 215 | if stderr_rh is not None: 216 | self.stderr = PipeHandle(stderr_rh) 217 | finally: 218 | if stdin == PIPE: 219 | os.close(stdin_rfd) 220 | if stdout == PIPE: 221 | os.close(stdout_wfd) 222 | if stderr == PIPE: 223 | os.close(stderr_wfd) 224 | -------------------------------------------------------------------------------- /thirdparty/asyncio/check.py: -------------------------------------------------------------------------------- 1 | """Search for lines >= 80 chars or with trailing whitespace.""" 2 | 3 | import os 4 | import sys 5 | 6 | 7 | def main(): 8 | args = sys.argv[1:] or os.curdir 9 | for arg in args: 10 | if os.path.isdir(arg): 11 | for dn, dirs, files in os.walk(arg): 12 | for fn in sorted(files): 
13 | if fn.endswith('.py'): 14 | process(os.path.join(dn, fn)) 15 | dirs[:] = [d for d in dirs if d[0] != '.'] 16 | dirs.sort() 17 | else: 18 | process(arg) 19 | 20 | 21 | def isascii(x): 22 | try: 23 | x.encode('ascii') 24 | return True 25 | except UnicodeError: 26 | return False 27 | 28 | 29 | def process(fn): 30 | try: 31 | f = open(fn) 32 | except IOError as err: 33 | print(err) 34 | return 35 | try: 36 | for i, line in enumerate(f): 37 | line = line.rstrip('\n') 38 | sline = line.rstrip() 39 | if len(line) >= 80 or line != sline or not isascii(line): 40 | print('{}:{:d}:{}{}'.format( 41 | fn, i+1, sline, '_' * (len(line) - len(sline)))) 42 | finally: 43 | f.close() 44 | 45 | main() 46 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/cacheclt.py: -------------------------------------------------------------------------------- 1 | """Client for cache server. 2 | 3 | See cachesvr.py for protocol description. 4 | """ 5 | 6 | import argparse 7 | import asyncio 8 | from asyncio import test_utils 9 | import json 10 | import logging 11 | 12 | ARGS = argparse.ArgumentParser(description='Cache client example.') 13 | ARGS.add_argument( 14 | '--tls', action='store_true', dest='tls', 15 | default=False, help='Use TLS') 16 | ARGS.add_argument( 17 | '--iocp', action='store_true', dest='iocp', 18 | default=False, help='Use IOCP event loop (Windows only)') 19 | ARGS.add_argument( 20 | '--host', action='store', dest='host', 21 | default='localhost', help='Host name') 22 | ARGS.add_argument( 23 | '--port', action='store', dest='port', 24 | default=54321, type=int, help='Port number') 25 | ARGS.add_argument( 26 | '--timeout', action='store', dest='timeout', 27 | default=5, type=float, help='Timeout') 28 | ARGS.add_argument( 29 | '--max_backoff', action='store', dest='max_backoff', 30 | default=5, type=float, help='Max backoff on reconnect') 31 | ARGS.add_argument( 32 | '--ntasks', action='store', dest='ntasks', 33 | 
default=10, type=int, help='Number of tester tasks') 34 | ARGS.add_argument( 35 | '--ntries', action='store', dest='ntries', 36 | default=5, type=int, help='Number of request tries before giving up') 37 | 38 | 39 | args = ARGS.parse_args() 40 | 41 | 42 | class CacheClient: 43 | """Multiplexing cache client. 44 | 45 | This wraps a single connection to the cache client. The 46 | connection is automatically re-opened when an error occurs. 47 | 48 | Multiple tasks may share this object; the requests will be 49 | serialized. 50 | 51 | The public API is get(), set(), delete() (all are coroutines). 52 | """ 53 | 54 | def __init__(self, host, port, sslctx=None, loop=None): 55 | self.host = host 56 | self.port = port 57 | self.sslctx = sslctx 58 | self.loop = loop 59 | self.todo = set() 60 | self.initialized = False 61 | self.task = asyncio.Task(self.activity(), loop=self.loop) 62 | 63 | @asyncio.coroutine 64 | def get(self, key): 65 | resp = yield from self.request('get', key) 66 | if resp is None: 67 | return None 68 | return resp.get('value') 69 | 70 | @asyncio.coroutine 71 | def set(self, key, value): 72 | resp = yield from self.request('set', key, value) 73 | if resp is None: 74 | return False 75 | return resp.get('status') == 'ok' 76 | 77 | @asyncio.coroutine 78 | def delete(self, key): 79 | resp = yield from self.request('delete', key) 80 | if resp is None: 81 | return False 82 | return resp.get('status') == 'ok' 83 | 84 | @asyncio.coroutine 85 | def request(self, type, key, value=None): 86 | assert not self.task.done() 87 | data = {'type': type, 'key': key} 88 | if value is not None: 89 | data['value'] = value 90 | payload = json.dumps(data).encode('utf8') 91 | waiter = asyncio.Future(loop=self.loop) 92 | if self.initialized: 93 | try: 94 | yield from self.send(payload, waiter) 95 | except IOError: 96 | self.todo.add((payload, waiter)) 97 | else: 98 | self.todo.add((payload, waiter)) 99 | return (yield from waiter) 100 | 101 | @asyncio.coroutine 102 | def 
activity(self): 103 | backoff = 0 104 | while True: 105 | try: 106 | self.reader, self.writer = yield from asyncio.open_connection( 107 | self.host, self.port, ssl=self.sslctx, loop=self.loop) 108 | except Exception as exc: 109 | backoff = min(args.max_backoff, backoff + (backoff//2) + 1) 110 | logging.info('Error connecting: %r; sleep %s', exc, backoff) 111 | yield from asyncio.sleep(backoff, loop=self.loop) 112 | continue 113 | backoff = 0 114 | self.next_id = 0 115 | self.pending = {} 116 | self.initialized = True 117 | try: 118 | while self.todo: 119 | payload, waiter = self.todo.pop() 120 | if not waiter.done(): 121 | yield from self.send(payload, waiter) 122 | while True: 123 | resp_id, resp = yield from self.process() 124 | if resp_id in self.pending: 125 | payload, waiter = self.pending.pop(resp_id) 126 | if not waiter.done(): 127 | waiter.set_result(resp) 128 | except Exception as exc: 129 | self.initialized = False 130 | self.writer.close() 131 | while self.pending: 132 | req_id, pair = self.pending.popitem() 133 | payload, waiter = pair 134 | if not waiter.done(): 135 | self.todo.add(pair) 136 | logging.info('Error processing: %r', exc) 137 | 138 | @asyncio.coroutine 139 | def send(self, payload, waiter): 140 | self.next_id += 1 141 | req_id = self.next_id 142 | frame = 'request %d %d\n' % (req_id, len(payload)) 143 | self.writer.write(frame.encode('ascii')) 144 | self.writer.write(payload) 145 | self.pending[req_id] = payload, waiter 146 | yield from self.writer.drain() 147 | 148 | @asyncio.coroutine 149 | def process(self): 150 | frame = yield from self.reader.readline() 151 | if not frame: 152 | raise EOFError() 153 | head, tail = frame.split(None, 1) 154 | if head == b'error': 155 | raise IOError('OOB error: %r' % tail) 156 | if head != b'response': 157 | raise IOError('Bad frame: %r' % frame) 158 | resp_id, resp_size = map(int, tail.split()) 159 | data = yield from self.reader.readexactly(resp_size) 160 | if len(data) != resp_size: 161 | raise
EOFError() 162 | resp = json.loads(data.decode('utf8')) 163 | return resp_id, resp 164 | 165 | 166 | def main(): 167 | asyncio.set_event_loop(None) 168 | if args.iocp: 169 | from asyncio.windows_events import ProactorEventLoop 170 | loop = ProactorEventLoop() 171 | else: 172 | loop = asyncio.new_event_loop() 173 | sslctx = None 174 | if args.tls: 175 | sslctx = test_utils.dummy_ssl_context() 176 | cache = CacheClient(args.host, args.port, sslctx=sslctx, loop=loop) 177 | try: 178 | loop.run_until_complete( 179 | asyncio.gather( 180 | *[testing(i, cache, loop) for i in range(args.ntasks)], 181 | loop=loop)) 182 | finally: 183 | loop.close() 184 | 185 | 186 | @asyncio.coroutine 187 | def testing(label, cache, loop): 188 | 189 | def w(g): 190 | return asyncio.wait_for(g, args.timeout, loop=loop) 191 | 192 | key = 'foo-%s' % label 193 | while True: 194 | logging.info('%s %s', label, '-'*20) 195 | try: 196 | ret = yield from w(cache.set(key, 'hello-%s-world' % label)) 197 | logging.info('%s set %s', label, ret) 198 | ret = yield from w(cache.get(key)) 199 | logging.info('%s get %s', label, ret) 200 | ret = yield from w(cache.delete(key)) 201 | logging.info('%s del %s', label, ret) 202 | ret = yield from w(cache.get(key)) 203 | logging.info('%s get2 %s', label, ret) 204 | except asyncio.TimeoutError: 205 | logging.warn('%s Timeout', label) 206 | except Exception as exc: 207 | logging.exception('%s Client exception: %r', label, exc) 208 | break 209 | 210 | 211 | if __name__ == '__main__': 212 | logging.basicConfig(level=logging.INFO) 213 | main() 214 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/cachesvr.py: -------------------------------------------------------------------------------- 1 | """A simple memcache-like server. 2 | 3 | The basic data structure maintained is a single in-memory dictionary 4 | mapping string keys to string values, with operations get, set and 5 | delete. 
(Both keys and values may contain Unicode.) 6 | 7 | This is a TCP server listening on port 54321. There is no 8 | authentication. 9 | 10 | Requests provide an operation and return a response. A connection may 11 | be used for multiple requests. The connection is closed when a client 12 | sends a bad request. 13 | 14 | If a client is idle for over 5 seconds (i.e., it does not send another 15 | request, or fails to read the whole response, within this time), it is 16 | disconnected. 17 | 18 | Framing of requests and responses within a connection uses a 19 | line-based protocol. The first line of a request is the frame header 20 | and contains three whitespace-delimited tokens followed by LF or CRLF: 21 | 22 | - the keyword 'request' 23 | - a decimal request ID; the first request is '1', the second '2', etc. 24 | - a decimal byte count giving the size of the rest of the request 25 | 26 | Note that the request IDs *must* be consecutive and start at '1' for 27 | each connection. 28 | 29 | Response frames look the same except the keyword is 'response'. The 30 | response ID matches the request ID. There should be exactly one 31 | response to each request and responses should be seen in the same 32 | order as the requests. 33 | 34 | After the frame, individual requests and responses are JSON encoded. 35 | 36 | If the frame header or the JSON request body cannot be parsed, an 37 | unframed error message (always starting with 'error') is written back 38 | and the connection is closed.
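An editor's sketch of the framing rules just described (the helper names below are illustrative, not part of cachesvr.py itself): a request frame is the header line followed by a JSON body whose length the header announces.

```python
import json

def frame_request(request_id, obj):
    # Header: the keyword 'request', a decimal request ID, and a decimal
    # byte count for the JSON body, whitespace-delimited and ending in LF.
    payload = json.dumps(obj).encode('utf8')
    return b'request %d %d\n' % (request_id, len(payload)) + payload

def parse_frame_header(line):
    # Split the header back into its three whitespace-delimited tokens.
    keyword, request_id_b, byte_count_b = line.split()
    if keyword != b'request':
        raise ValueError('frame does not start with request')
    return int(request_id_b), int(byte_count_b)

frame = frame_request(1, {'type': 'get', 'key': 'foo'})
header, _, body = frame.partition(b'\n')
assert parse_frame_header(header) == (1, len(body))
```

The byte count covers only the JSON body, which is why the server can `readline()` the header and then `readexactly()` the body.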
39 | 40 | JSON-encoded requests can be: 41 | 42 | - {"type": "get", "key": <string>} 43 | - {"type": "set", "key": <string>, "value": <string>} 44 | - {"type": "delete", "key": <string>} 45 | 46 | Responses are also JSON-encoded: 47 | 48 | - {"status": "ok", "value": <string>} # Successful get request 49 | - {"status": "ok"} # Successful set or delete request 50 | - {"status": "notfound"} # Key not found for get or delete request 51 | 52 | If the request is valid JSON but cannot be handled (e.g., the type or 53 | key field is absent or invalid), an error response of the following 54 | form is returned, but the connection is not closed: 55 | 56 | - {"error": <error message>} 57 | """ 58 | 59 | import argparse 60 | import asyncio 61 | import json 62 | import logging 63 | import os 64 | import random 65 | 66 | ARGS = argparse.ArgumentParser(description='Cache server example.') 67 | ARGS.add_argument( 68 | '--tls', action='store_true', dest='tls', 69 | default=False, help='Use TLS') 70 | ARGS.add_argument( 71 | '--iocp', action='store_true', dest='iocp', 72 | default=False, help='Use IOCP event loop (Windows only)') 73 | ARGS.add_argument( 74 | '--host', action='store', dest='host', 75 | default='localhost', help='Host name') 76 | ARGS.add_argument( 77 | '--port', action='store', dest='port', 78 | default=54321, type=int, help='Port number') 79 | ARGS.add_argument( 80 | '--timeout', action='store', dest='timeout', 81 | default=5, type=float, help='Timeout') 82 | ARGS.add_argument( 83 | '--random_failure_percent', action='store', dest='fail_percent', 84 | default=0, type=float, help='Fail randomly N percent of the time') 85 | ARGS.add_argument( 86 | '--random_failure_sleep', action='store', dest='fail_sleep', 87 | default=0, type=float, help='Sleep time when randomly failing') 88 | ARGS.add_argument( 89 | '--random_response_sleep', action='store', dest='resp_sleep', 90 | default=0, type=float, help='Sleep time before responding') 91 | 92 | args = ARGS.parse_args() 93 | 94 | 95 | class Cache: 96 | 97 | def __init__(self,
loop): 98 | self.loop = loop 99 | self.table = {} 100 | 101 | @asyncio.coroutine 102 | def handle_client(self, reader, writer): 103 | # Wrapper to log stuff and close writer (i.e., transport). 104 | peer = writer.get_extra_info('socket').getpeername() 105 | logging.info('got a connection from %s', peer) 106 | try: 107 | yield from self.frame_parser(reader, writer) 108 | except Exception as exc: 109 | logging.error('error %r from %s', exc, peer) 110 | else: 111 | logging.info('end connection from %s', peer) 112 | finally: 113 | writer.close() 114 | 115 | @asyncio.coroutine 116 | def frame_parser(self, reader, writer): 117 | # This takes care of the framing. 118 | last_request_id = 0 119 | while True: 120 | # Read the frame header, parse it, read the data. 121 | # NOTE: The readline() and readexactly() calls will hang 122 | # if the client doesn't send enough data but doesn't 123 | # disconnect either. We add a timeout to each. (But the 124 | # timeout should really be implemented by StreamReader.) 125 | framing_b = yield from asyncio.wait_for( 126 | reader.readline(), 127 | timeout=args.timeout, loop=self.loop) 128 | if random.random()*100 < args.fail_percent: 129 | logging.warn('Inserting random failure') 130 | yield from asyncio.sleep(args.fail_sleep*random.random(), 131 | loop=self.loop) 132 | writer.write(b'error random failure\r\n') 133 | break 134 | logging.debug('framing_b = %r', framing_b) 135 | if not framing_b: 136 | break # Clean close. 
137 | try: 138 | frame_keyword, request_id_b, byte_count_b = framing_b.split() 139 | except ValueError: 140 | writer.write(b'error unparseable frame\r\n') 141 | break 142 | if frame_keyword != b'request': 143 | writer.write(b'error frame does not start with request\r\n') 144 | break 145 | try: 146 | request_id, byte_count = int(request_id_b), int(byte_count_b) 147 | except ValueError: 148 | writer.write(b'error unparsable frame parameters\r\n') 149 | break 150 | if request_id != last_request_id + 1 or byte_count < 2: 151 | writer.write(b'error invalid frame parameters\r\n') 152 | break 153 | last_request_id = request_id 154 | request_b = yield from asyncio.wait_for( 155 | reader.readexactly(byte_count), 156 | timeout=args.timeout, loop=self.loop) 157 | try: 158 | request = json.loads(request_b.decode('utf8')) 159 | except ValueError: 160 | writer.write(b'error unparsable json\r\n') 161 | break 162 | response = self.handle_request(request) # Not a coroutine. 163 | if response is None: 164 | writer.write(b'error unhandlable request\r\n') 165 | break 166 | response_b = json.dumps(response).encode('utf8') + b'\r\n' 167 | byte_count = len(response_b) 168 | framing_s = 'response {} {}\r\n'.format(request_id, byte_count) 169 | writer.write(framing_s.encode('ascii')) 170 | yield from asyncio.sleep(args.resp_sleep*random.random(), 171 | loop=self.loop) 172 | writer.write(response_b) 173 | 174 | def handle_request(self, request): 175 | # This parses one request and farms it out to a specific handler. 176 | # Return None for all errors. 
177 | if not isinstance(request, dict): 178 | return {'error': 'request is not a dict'} 179 | request_type = request.get('type') 180 | if request_type is None: 181 | return {'error': 'no type in request'} 182 | if request_type not in {'get', 'set', 'delete'}: 183 | return {'error': 'unknown request type'} 184 | key = request.get('key') 185 | if not isinstance(key, str): 186 | return {'error': 'key is not a string'} 187 | if request_type == 'get': 188 | return self.handle_get(key) 189 | if request_type == 'set': 190 | value = request.get('value') 191 | if not isinstance(value, str): 192 | return {'error': 'value is not a string'} 193 | return self.handle_set(key, value) 194 | if request_type == 'delete': 195 | return self.handle_delete(key) 196 | assert False, 'bad request type' # Should have been caught above. 197 | 198 | def handle_get(self, key): 199 | value = self.table.get(key) 200 | if value is None: 201 | return {'status': 'notfound'} 202 | else: 203 | return {'status': 'ok', 'value': value} 204 | 205 | def handle_set(self, key, value): 206 | self.table[key] = value 207 | return {'status': 'ok'} 208 | 209 | def handle_delete(self, key): 210 | if key not in self.table: 211 | return {'status': 'notfound'} 212 | else: 213 | del self.table[key] 214 | return {'status': 'ok'} 215 | 216 | 217 | def main(): 218 | asyncio.set_event_loop(None) 219 | if args.iocp: 220 | from asyncio.windows_events import ProactorEventLoop 221 | loop = ProactorEventLoop() 222 | else: 223 | loop = asyncio.new_event_loop() 224 | sslctx = None 225 | if args.tls: 226 | import ssl 227 | # TODO: take cert/key from args as well. 
228 | here = os.path.join(os.path.dirname(__file__), '..', 'tests') 229 | sslctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23) 230 | sslctx.options |= ssl.OP_NO_SSLv2 231 | sslctx.load_cert_chain( 232 | certfile=os.path.join(here, 'ssl_cert.pem'), 233 | keyfile=os.path.join(here, 'ssl_key.pem')) 234 | cache = Cache(loop) 235 | task = asyncio.streams.start_server(cache.handle_client, 236 | args.host, args.port, 237 | ssl=sslctx, loop=loop) 238 | svr = loop.run_until_complete(task) 239 | for sock in svr.sockets: 240 | logging.info('socket %s', sock.getsockname()) 241 | try: 242 | loop.run_forever() 243 | finally: 244 | loop.close() 245 | 246 | 247 | if __name__ == '__main__': 248 | logging.basicConfig(level=logging.INFO) 249 | main() 250 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/child_process.py: -------------------------------------------------------------------------------- 1 | """ 2 | Example of asynchronous interaction with a child python process. 3 | 4 | This example shows how to attach an existing Popen object and use the low level 5 | transport-protocol API. See shell.py and subprocess_shell.py for higher level 6 | examples. 
7 | """ 8 | 9 | import os 10 | import sys 11 | 12 | try: 13 | import asyncio 14 | except ImportError: 15 | # asyncio is not installed 16 | sys.path.append(os.path.join(os.path.dirname(__file__), '..')) 17 | import asyncio 18 | 19 | if sys.platform == 'win32': 20 | from asyncio.windows_utils import Popen, PIPE 21 | from asyncio.windows_events import ProactorEventLoop 22 | else: 23 | from subprocess import Popen, PIPE 24 | 25 | # 26 | # Return a write-only transport wrapping a writable pipe 27 | # 28 | 29 | @asyncio.coroutine 30 | def connect_write_pipe(file): 31 | loop = asyncio.get_event_loop() 32 | transport, _ = yield from loop.connect_write_pipe(asyncio.Protocol, file) 33 | return transport 34 | 35 | # 36 | # Wrap a readable pipe in a stream 37 | # 38 | 39 | @asyncio.coroutine 40 | def connect_read_pipe(file): 41 | loop = asyncio.get_event_loop() 42 | stream_reader = asyncio.StreamReader(loop=loop) 43 | def factory(): 44 | return asyncio.StreamReaderProtocol(stream_reader) 45 | transport, _ = yield from loop.connect_read_pipe(factory, file) 46 | return stream_reader, transport 47 | 48 | 49 | # 50 | # Example 51 | # 52 | 53 | @asyncio.coroutine 54 | def main(loop): 55 | # program which prints evaluation of each expression from stdin 56 | code = r'''if 1: 57 | import os 58 | def writeall(fd, buf): 59 | while buf: 60 | n = os.write(fd, buf) 61 | buf = buf[n:] 62 | while True: 63 | s = os.read(0, 1024) 64 | if not s: 65 | break 66 | s = s.decode('ascii') 67 | s = repr(eval(s)) + '\n' 68 | s = s.encode('ascii') 69 | writeall(1, s) 70 | ''' 71 | 72 | # commands to send to input 73 | commands = iter([b"1+1\n", 74 | b"2**16\n", 75 | b"1/3\n", 76 | b"'x'*50", 77 | b"1/0\n"]) 78 | 79 | # start subprocess and wrap stdin, stdout, stderr 80 | p = Popen([sys.executable, '-c', code], 81 | stdin=PIPE, stdout=PIPE, stderr=PIPE) 82 | 83 | stdin = yield from connect_write_pipe(p.stdin) 84 | stdout, stdout_transport = yield from connect_read_pipe(p.stdout) 85 | stderr, 
stderr_transport = yield from connect_read_pipe(p.stderr) 86 | 87 | # interact with subprocess 88 | name = {stdout:'OUT', stderr:'ERR'} 89 | registered = {asyncio.Task(stderr.readline()): stderr, 90 | asyncio.Task(stdout.readline()): stdout} 91 | while registered: 92 | # write command 93 | cmd = next(commands, None) 94 | if cmd is None: 95 | stdin.close() 96 | else: 97 | print('>>>', cmd.decode('ascii').rstrip()) 98 | stdin.write(cmd) 99 | 100 | # get and print lines from stdout, stderr 101 | timeout = None 102 | while registered: 103 | done, pending = yield from asyncio.wait( 104 | registered, timeout=timeout, 105 | return_when=asyncio.FIRST_COMPLETED) 106 | if not done: 107 | break 108 | for f in done: 109 | stream = registered.pop(f) 110 | res = f.result() 111 | print(name[stream], res.decode('ascii').rstrip()) 112 | if res != b'': 113 | registered[asyncio.Task(stream.readline())] = stream 114 | timeout = 0.0 115 | 116 | stdout_transport.close() 117 | stderr_transport.close() 118 | 119 | if __name__ == '__main__': 120 | if sys.platform == 'win32': 121 | loop = ProactorEventLoop() 122 | asyncio.set_event_loop(loop) 123 | else: 124 | loop = asyncio.get_event_loop() 125 | try: 126 | loop.run_until_complete(main(loop)) 127 | finally: 128 | loop.close() 129 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/echo_client_tulip.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | 3 | END = b'Bye-bye!\n' 4 | 5 | @asyncio.coroutine 6 | def echo_client(): 7 | reader, writer = yield from asyncio.open_connection('localhost', 8000) 8 | writer.write(b'Hello, world\n') 9 | writer.write(b'What a fine day it is.\n') 10 | writer.write(END) 11 | while True: 12 | line = yield from reader.readline() 13 | print('received:', line) 14 | if line == END or not line: 15 | break 16 | writer.close() 17 | 18 | loop = asyncio.get_event_loop() 19 | 
loop.run_until_complete(echo_client()) 20 | loop.close() 21 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/echo_server_tulip.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | 3 | @asyncio.coroutine 4 | def echo_server(): 5 | yield from asyncio.start_server(handle_connection, 'localhost', 8000) 6 | 7 | @asyncio.coroutine 8 | def handle_connection(reader, writer): 9 | while True: 10 | data = yield from reader.read(8192) 11 | if not data: 12 | break 13 | writer.write(data) 14 | 15 | loop = asyncio.get_event_loop() 16 | loop.run_until_complete(echo_server()) 17 | try: 18 | loop.run_forever() 19 | finally: 20 | loop.close() 21 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/fetch0.py: -------------------------------------------------------------------------------- 1 | """Simplest possible HTTP client.""" 2 | 3 | import sys 4 | 5 | from asyncio import * 6 | 7 | 8 | @coroutine 9 | def fetch(): 10 | r, w = yield from open_connection('python.org', 80) 11 | request = 'GET / HTTP/1.0\r\n\r\n' 12 | print('>', request, file=sys.stderr) 13 | w.write(request.encode('latin-1')) 14 | while True: 15 | line = yield from r.readline() 16 | line = line.decode('latin-1').rstrip() 17 | if not line: 18 | break 19 | print('<', line, file=sys.stderr) 20 | print(file=sys.stderr) 21 | body = yield from r.read() 22 | return body 23 | 24 | 25 | def main(): 26 | loop = get_event_loop() 27 | try: 28 | body = loop.run_until_complete(fetch()) 29 | finally: 30 | loop.close() 31 | print(body.decode('latin-1'), end='') 32 | 33 | 34 | if __name__ == '__main__': 35 | main() 36 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/fetch1.py: -------------------------------------------------------------------------------- 1 | """Fetch one URL and write its content 
to stdout. 2 | 3 | This version adds URL parsing (including SSL) and a Response object. 4 | """ 5 | 6 | import sys 7 | import urllib.parse 8 | 9 | from asyncio import * 10 | 11 | 12 | class Response: 13 | 14 | def __init__(self, verbose=True): 15 | self.verbose = verbose 16 | self.http_version = None # 'HTTP/1.1' 17 | self.status = None # 200 18 | self.reason = None # 'Ok' 19 | self.headers = [] # [('Content-Type', 'text/html')] 20 | 21 | @coroutine 22 | def read(self, reader): 23 | @coroutine 24 | def getline(): 25 | return (yield from reader.readline()).decode('latin-1').rstrip() 26 | status_line = yield from getline() 27 | if self.verbose: print('<', status_line, file=sys.stderr) 28 | self.http_version, status, self.reason = status_line.split(None, 2) 29 | self.status = int(status) 30 | while True: 31 | header_line = yield from getline() 32 | if not header_line: 33 | break 34 | if self.verbose: print('<', header_line, file=sys.stderr) 35 | # TODO: Continuation lines. 36 | key, value = header_line.split(':', 1) 37 | self.headers.append((key, value.strip())) 38 | if self.verbose: print(file=sys.stderr) 39 | 40 | 41 | @coroutine 42 | def fetch(url, verbose=True): 43 | parts = urllib.parse.urlparse(url) 44 | if parts.scheme == 'http': 45 | ssl = False 46 | elif parts.scheme == 'https': 47 | ssl = True 48 | else: 49 | print('URL must use http or https.') 50 | sys.exit(1) 51 | port = parts.port 52 | if port is None: 53 | port = 443 if ssl else 80 54 | path = parts.path or '/' 55 | if parts.query: 56 | path += '?' 
+ parts.query 57 | request = 'GET %s HTTP/1.0\r\n\r\n' % path 58 | if verbose: 59 | print('>', request, file=sys.stderr, end='') 60 | r, w = yield from open_connection(parts.hostname, port, ssl=ssl) 61 | w.write(request.encode('latin-1')) 62 | response = Response(verbose) 63 | yield from response.read(r) 64 | body = yield from r.read() 65 | return body 66 | 67 | 68 | def main(): 69 | loop = get_event_loop() 70 | try: 71 | body = loop.run_until_complete(fetch(sys.argv[1], '-v' in sys.argv)) 72 | finally: 73 | loop.close() 74 | print(body.decode('latin-1'), end='') 75 | 76 | 77 | if __name__ == '__main__': 78 | main() 79 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/fetch2.py: -------------------------------------------------------------------------------- 1 | """Fetch one URL and write its content to stdout. 2 | 3 | This version adds a Request object. 4 | """ 5 | 6 | import sys 7 | import urllib.parse 8 | from http.client import BadStatusLine 9 | 10 | from asyncio import * 11 | 12 | 13 | class Request: 14 | 15 | def __init__(self, url, verbose=True): 16 | self.url = url 17 | self.verbose = verbose 18 | self.parts = urllib.parse.urlparse(self.url) 19 | self.scheme = self.parts.scheme 20 | assert self.scheme in ('http', 'https'), repr(url) 21 | self.ssl = self.parts.scheme == 'https' 22 | self.netloc = self.parts.netloc 23 | self.hostname = self.parts.hostname 24 | self.port = self.parts.port or (443 if self.ssl else 80) 25 | self.path = (self.parts.path or '/') 26 | self.query = self.parts.query 27 | if self.query: 28 | self.full_path = '%s?%s' % (self.path, self.query) 29 | else: 30 | self.full_path = self.path 31 | self.http_version = 'HTTP/1.1' 32 | self.method = 'GET' 33 | self.headers = [] 34 | self.reader = None 35 | self.writer = None 36 | 37 | @coroutine 38 | def connect(self): 39 | if self.verbose: 40 | print('* Connecting to %s:%s using %s' % 41 | (self.hostname, self.port, 'ssl' if self.ssl 
else 'tcp'), 42 | file=sys.stderr) 43 | self.reader, self.writer = yield from open_connection(self.hostname, 44 | self.port, 45 | ssl=self.ssl) 46 | if self.verbose: 47 | print('* Connected to %s' % 48 | (self.writer.get_extra_info('peername'),), 49 | file=sys.stderr) 50 | 51 | def putline(self, line): 52 | self.writer.write(line.encode('latin-1') + b'\r\n') 53 | 54 | @coroutine 55 | def send_request(self): 56 | request = '%s %s %s' % (self.method, self.full_path, self.http_version) 57 | if self.verbose: print('>', request, file=sys.stderr) 58 | self.putline(request) 59 | if 'host' not in {key.lower() for key, _ in self.headers}: 60 | self.headers.insert(0, ('Host', self.netloc)) 61 | for key, value in self.headers: 62 | line = '%s: %s' % (key, value) 63 | if self.verbose: print('>', line, file=sys.stderr) 64 | self.putline(line) 65 | self.putline('') 66 | 67 | @coroutine 68 | def get_response(self): 69 | response = Response(self.reader, self.verbose) 70 | yield from response.read_headers() 71 | return response 72 | 73 | 74 | class Response: 75 | 76 | def __init__(self, reader, verbose=True): 77 | self.reader = reader 78 | self.verbose = verbose 79 | self.http_version = None # 'HTTP/1.1' 80 | self.status = None # 200 81 | self.reason = None # 'Ok' 82 | self.headers = [] # [('Content-Type', 'text/html')] 83 | 84 | @coroutine 85 | def getline(self): 86 | return (yield from self.reader.readline()).decode('latin-1').rstrip() 87 | 88 | @coroutine 89 | def read_headers(self): 90 | status_line = yield from self.getline() 91 | if self.verbose: print('<', status_line, file=sys.stderr) 92 | status_parts = status_line.split(None, 2) 93 | if len(status_parts) != 3: 94 | raise BadStatusLine(status_line) 95 | self.http_version, status, self.reason = status_parts 96 | self.status = int(status) 97 | while True: 98 | header_line = yield from self.getline() 99 | if not header_line: 100 | break 101 | if self.verbose: print('<', header_line, file=sys.stderr) 102 | # TODO: Continuation 
lines. 103 | key, value = header_line.split(':', 1) 104 | self.headers.append((key, value.strip())) 105 | if self.verbose: print(file=sys.stderr) 106 | 107 | @coroutine 108 | def read(self): 109 | nbytes = None 110 | for key, value in self.headers: 111 | if key.lower() == 'content-length': 112 | nbytes = int(value) 113 | break 114 | if nbytes is None: 115 | body = yield from self.reader.read() 116 | else: 117 | body = yield from self.reader.readexactly(nbytes) 118 | return body 119 | 120 | 121 | @coroutine 122 | def fetch(url, verbose=True): 123 | request = Request(url, verbose) 124 | yield from request.connect() 125 | yield from request.send_request() 126 | response = yield from request.get_response() 127 | body = yield from response.read() 128 | return body 129 | 130 | 131 | def main(): 132 | loop = get_event_loop() 133 | try: 134 | body = loop.run_until_complete(fetch(sys.argv[1], '-v' in sys.argv)) 135 | finally: 136 | loop.close() 137 | sys.stdout.buffer.write(body) 138 | 139 | 140 | if __name__ == '__main__': 141 | main() 142 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/fetch3.py: -------------------------------------------------------------------------------- 1 | """Fetch one URL and write its content to stdout. 2 | 3 | This version adds a primitive connection pool, redirect following and 4 | chunked transfer-encoding. It also supports a --iocp flag. 5 | """ 6 | 7 | import sys 8 | import urllib.parse 9 | from http.client import BadStatusLine 10 | 11 | from asyncio import * 12 | 13 | 14 | class ConnectionPool: 15 | # TODO: Locking? Close idle connections? 
16 | 17 | def __init__(self, verbose=False): 18 | self.verbose = verbose 19 | self.connections = {} # {(host, port, ssl): (reader, writer)} 20 | 21 | def close(self): 22 | for _, writer in self.connections.values(): 23 | writer.close() 24 | 25 | @coroutine 26 | def open_connection(self, host, port, ssl): 27 | port = port or (443 if ssl else 80) 28 | ipaddrs = yield from get_event_loop().getaddrinfo(host, port) 29 | if self.verbose: 30 | print('* %s resolves to %s' % 31 | (host, ', '.join(ip[4][0] for ip in ipaddrs)), 32 | file=sys.stderr) 33 | for _, _, _, _, (h, p, *_) in ipaddrs: 34 | key = h, p, ssl 35 | conn = self.connections.get(key) 36 | if conn: 37 | reader, writer = conn 38 | if reader._eof: 39 | self.connections.pop(key) 40 | continue 41 | if self.verbose: 42 | print('* Reusing pooled connection', key, file=sys.stderr) 43 | return conn 44 | reader, writer = yield from open_connection(host, port, ssl=ssl) 45 | host, port, *_ = writer.get_extra_info('peername') 46 | key = host, port, ssl 47 | self.connections[key] = reader, writer 48 | if self.verbose: 49 | print('* New connection', key, file=sys.stderr) 50 | return reader, writer 51 | 52 | 53 | class Request: 54 | 55 | def __init__(self, url, verbose=True): 56 | self.url = url 57 | self.verbose = verbose 58 | self.parts = urllib.parse.urlparse(self.url) 59 | self.scheme = self.parts.scheme 60 | assert self.scheme in ('http', 'https'), repr(url) 61 | self.ssl = self.parts.scheme == 'https' 62 | self.netloc = self.parts.netloc 63 | self.hostname = self.parts.hostname 64 | self.port = self.parts.port or (443 if self.ssl else 80) 65 | self.path = (self.parts.path or '/') 66 | self.query = self.parts.query 67 | if self.query: 68 | self.full_path = '%s?%s' % (self.path, self.query) 69 | else: 70 | self.full_path = self.path 71 | self.http_version = 'HTTP/1.1' 72 | self.method = 'GET' 73 | self.headers = [] 74 | self.reader = None 75 | self.writer = None 76 | 77 | def vprint(self, *args): 78 | if self.verbose: 79 
| print(*args, file=sys.stderr) 80 | 81 | @coroutine 82 | def connect(self, pool): 83 | self.vprint('* Connecting to %s:%s using %s' % 84 | (self.hostname, self.port, 'ssl' if self.ssl else 'tcp')) 85 | self.reader, self.writer = \ 86 | yield from pool.open_connection(self.hostname, 87 | self.port, 88 | ssl=self.ssl) 89 | self.vprint('* Connected to %s' % 90 | (self.writer.get_extra_info('peername'),)) 91 | 92 | @coroutine 93 | def putline(self, line): 94 | self.vprint('>', line) 95 | self.writer.write(line.encode('latin-1') + b'\r\n') 96 | ##yield from self.writer.drain() 97 | 98 | @coroutine 99 | def send_request(self): 100 | request = '%s %s %s' % (self.method, self.full_path, self.http_version) 101 | yield from self.putline(request) 102 | if 'host' not in {key.lower() for key, _ in self.headers}: 103 | self.headers.insert(0, ('Host', self.netloc)) 104 | for key, value in self.headers: 105 | line = '%s: %s' % (key, value) 106 | yield from self.putline(line) 107 | yield from self.putline('') 108 | 109 | @coroutine 110 | def get_response(self): 111 | response = Response(self.reader, self.verbose) 112 | yield from response.read_headers() 113 | return response 114 | 115 | 116 | class Response: 117 | 118 | def __init__(self, reader, verbose=True): 119 | self.reader = reader 120 | self.verbose = verbose 121 | self.http_version = None # 'HTTP/1.1' 122 | self.status = None # 200 123 | self.reason = None # 'Ok' 124 | self.headers = [] # [('Content-Type', 'text/html')] 125 | 126 | def vprint(self, *args): 127 | if self.verbose: 128 | print(*args, file=sys.stderr) 129 | 130 | @coroutine 131 | def getline(self): 132 | line = (yield from self.reader.readline()).decode('latin-1').rstrip() 133 | self.vprint('<', line) 134 | return line 135 | 136 | @coroutine 137 | def read_headers(self): 138 | status_line = yield from self.getline() 139 | status_parts = status_line.split(None, 2) 140 | if len(status_parts) != 3: 141 | raise BadStatusLine(status_line) 142 | self.http_version, 
status, self.reason = status_parts 143 | self.status = int(status) 144 | while True: 145 | header_line = yield from self.getline() 146 | if not header_line: 147 | break 148 | # TODO: Continuation lines. 149 | key, value = header_line.split(':', 1) 150 | self.headers.append((key, value.strip())) 151 | 152 | def get_redirect_url(self, default=None): 153 | if self.status not in (300, 301, 302, 303, 307): 154 | return default 155 | return self.get_header('Location', default) 156 | 157 | def get_header(self, key, default=None): 158 | key = key.lower() 159 | for k, v in self.headers: 160 | if k.lower() == key: 161 | return v 162 | return default 163 | 164 | @coroutine 165 | def read(self): 166 | nbytes = None 167 | for key, value in self.headers: 168 | if key.lower() == 'content-length': 169 | nbytes = int(value) 170 | break 171 | if nbytes is None: 172 | if self.get_header('transfer-encoding', '').lower() == 'chunked': 173 | blocks = [] 174 | size = -1 175 | while size: 176 | size_header = yield from self.reader.readline() 177 | if not size_header: 178 | break 179 | parts = size_header.split(b';') 180 | size = int(parts[0], 16) 181 | if size: 182 | block = yield from self.reader.readexactly(size) 183 | assert len(block) == size, (len(block), size) 184 | blocks.append(block) 185 | crlf = yield from self.reader.readline() 186 | assert crlf == b'\r\n', repr(crlf) 187 | body = b''.join(blocks) 188 | else: 189 | body = yield from self.reader.read() 190 | else: 191 | body = yield from self.reader.readexactly(nbytes) 192 | return body 193 | 194 | 195 | @coroutine 196 | def fetch(url, verbose=True, max_redirect=10): 197 | pool = ConnectionPool(verbose) 198 | try: 199 | for _ in range(max_redirect): 200 | request = Request(url, verbose) 201 | yield from request.connect(pool) 202 | yield from request.send_request() 203 | response = yield from request.get_response() 204 | body = yield from response.read() 205 | next_url = response.get_redirect_url() 206 | if not next_url: 207 | 
break 208 | url = urllib.parse.urljoin(url, next_url) 209 | print('redirect to', url, file=sys.stderr) 210 | return body 211 | finally: 212 | pool.close() 213 | 214 | 215 | def main(): 216 | if '--iocp' in sys.argv: 217 | from asyncio.windows_events import ProactorEventLoop 218 | loop = ProactorEventLoop() 219 | set_event_loop(loop) 220 | else: 221 | loop = get_event_loop() 222 | try: 223 | body = loop.run_until_complete(fetch(sys.argv[1], '-v' in sys.argv)) 224 | finally: 225 | loop.close() 226 | sys.stdout.buffer.write(body) 227 | 228 | 229 | if __name__ == '__main__': 230 | main() 231 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/fuzz_as_completed.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | """Fuzz tester for as_completed(), by Glenn Langford.""" 4 | 5 | import asyncio 6 | import itertools 7 | import random 8 | import sys 9 | 10 | @asyncio.coroutine 11 | def sleeper(time): 12 | yield from asyncio.sleep(time) 13 | return time 14 | 15 | @asyncio.coroutine 16 | def watcher(tasks,delay=False): 17 | res = [] 18 | for t in asyncio.as_completed(tasks): 19 | r = yield from t 20 | res.append(r) 21 | if delay: 22 | # simulate processing delay 23 | process_time = random.random() / 10 24 | yield from asyncio.sleep(process_time) 25 | #print(res) 26 | #assert(sorted(res) == res) 27 | if sorted(res) != res: 28 | print('FAIL', res) 29 | print('------------') 30 | else: 31 | print('.', end='') 32 | sys.stdout.flush() 33 | 34 | loop = asyncio.get_event_loop() 35 | 36 | print('Pass 1') 37 | # All permutations of discrete task running times must be returned 38 | # by as_completed in the correct order. 
39 | task_times = [0, 0.1, 0.2, 0.3, 0.4 ] # 120 permutations 40 | for times in itertools.permutations(task_times): 41 | tasks = [ asyncio.Task(sleeper(t)) for t in times ] 42 | loop.run_until_complete(asyncio.Task(watcher(tasks))) 43 | 44 | print() 45 | print('Pass 2') 46 | # Longer task times, with randomized duplicates. 100 tasks each time. 47 | longer_task_times = [x/10 for x in range(30)] 48 | for i in range(20): 49 | task_times = longer_task_times * 10 50 | random.shuffle(task_times) 51 | #print('Times', task_times[:500]) 52 | tasks = [ asyncio.Task(sleeper(t)) for t in task_times[:100] ] 53 | loop.run_until_complete(asyncio.Task(watcher(tasks))) 54 | 55 | print() 56 | print('Pass 3') 57 | # Same as pass 2, but with a random processing delay (0 - 0.1s) after 58 | # retrieving each future from as_completed and 200 tasks. This tests whether 59 | # the order that callbacks are triggered is preserved through to the 60 | # as_completed caller. 61 | for i in range(20): 62 | task_times = longer_task_times * 10 63 | random.shuffle(task_times) 64 | #print('Times', task_times[:200]) 65 | tasks = [ asyncio.Task(sleeper(t)) for t in task_times[:200] ] 66 | loop.run_until_complete(asyncio.Task(watcher(tasks, delay=True))) 67 | 68 | print() 69 | loop.close() 70 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/hello_callback.py: -------------------------------------------------------------------------------- 1 | """Print 'Hello World' every two seconds, using a callback.""" 2 | 3 | import asyncio 4 | 5 | 6 | def print_and_repeat(loop): 7 | print('Hello World') 8 | loop.call_later(2, print_and_repeat, loop) 9 | 10 | 11 | if __name__ == '__main__': 12 | loop = asyncio.get_event_loop() 13 | print_and_repeat(loop) 14 | try: 15 | loop.run_forever() 16 | finally: 17 | loop.close() 18 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/hello_coroutine.py: 
-------------------------------------------------------------------------------- 1 | """Print 'Hello World' every two seconds, using a coroutine.""" 2 | 3 | import asyncio 4 | 5 | 6 | @asyncio.coroutine 7 | def greet_every_two_seconds(): 8 | while True: 9 | print('Hello World') 10 | yield from asyncio.sleep(2) 11 | 12 | 13 | if __name__ == '__main__': 14 | loop = asyncio.get_event_loop() 15 | try: 16 | loop.run_until_complete(greet_every_two_seconds()) 17 | finally: 18 | loop.close() 19 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/qspeed.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """How fast is the queue implementation?""" 3 | 4 | import time 5 | import asyncio 6 | print(asyncio) 7 | 8 | N_CONSUMERS = 10 9 | N_PRODUCERS = 1 10 | N_ITEMS = 100000 # Per producer 11 | Q_SIZE = 1 12 | 13 | @asyncio.coroutine 14 | def producer(q): 15 | for i in range(N_ITEMS): 16 | yield from q.put(i) 17 | for i in range(N_CONSUMERS): 18 | yield from q.put(None) 19 | 20 | @asyncio.coroutine 21 | def consumer(q): 22 | while True: 23 | i = yield from q.get() 24 | if i is None: 25 | break 26 | 27 | def main(): 28 | q = asyncio.Queue(Q_SIZE) 29 | loop = asyncio.get_event_loop() 30 | consumers = [consumer(q) for _ in range(N_CONSUMERS)] 31 | producers = [producer(q) for _ in range(N_PRODUCERS)] 32 | t0 = time.time() 33 | loop.run_until_complete(asyncio.gather(*consumers, *producers)) 34 | t1 = time.time() 35 | dt = t1 - t0 36 | print(N_CONSUMERS, 'consumers;', 37 | N_PRODUCERS, 'producers;', 38 | N_ITEMS, 'items/producer;', 39 | Q_SIZE, 'maxsize;', 40 | '%.3f total seconds;' % dt, 41 | '%.3f usec per item.' 
% (1e6*dt/N_ITEMS/N_PRODUCERS)) 42 | 43 | main() 44 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/shell.py: -------------------------------------------------------------------------------- 1 | """Examples using create_subprocess_exec() and create_subprocess_shell().""" 2 | 3 | import asyncio 4 | import signal 5 | from asyncio.subprocess import PIPE 6 | 7 | @asyncio.coroutine 8 | def cat(loop): 9 | proc = yield from asyncio.create_subprocess_shell("cat", 10 | stdin=PIPE, 11 | stdout=PIPE) 12 | print("pid: %s" % proc.pid) 13 | 14 | message = "Hello World!" 15 | print("cat write: %r" % message) 16 | 17 | stdout, stderr = yield from proc.communicate(message.encode('ascii')) 18 | print("cat read: %r" % stdout.decode('ascii')) 19 | 20 | exitcode = yield from proc.wait() 21 | print("(exit code %s)" % exitcode) 22 | 23 | @asyncio.coroutine 24 | def ls(loop): 25 | proc = yield from asyncio.create_subprocess_exec("ls", 26 | stdout=PIPE) 27 | while True: 28 | line = yield from proc.stdout.readline() 29 | if not line: 30 | break 31 | print("ls>>", line.decode('ascii').rstrip()) 32 | try: 33 | proc.send_signal(signal.SIGINT) 34 | except ProcessLookupError: 35 | pass 36 | 37 | @asyncio.coroutine 38 | def test_call(*args, timeout=None): 39 | proc = yield from asyncio.create_subprocess_exec(*args) 40 | try: 41 | exitcode = yield from asyncio.wait_for(proc.wait(), timeout) 42 | print("%s: exit code %s" % (' '.join(args), exitcode)) 43 | except asyncio.TimeoutError: 44 | print("timeout! 
(%.1f sec)" % timeout) 45 | proc.kill() 46 | yield from proc.wait() 47 | 48 | loop = asyncio.get_event_loop() 49 | loop.run_until_complete(cat(loop)) 50 | loop.run_until_complete(ls(loop)) 51 | loop.run_until_complete(test_call("bash", "-c", "sleep 3", timeout=1.0)) 52 | loop.close() 53 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/simple_tcp_server.py: -------------------------------------------------------------------------------- 1 | """ 2 | Example of a simple TCP server that is written in (mostly) coroutine 3 | style and uses asyncio.streams.start_server() and 4 | asyncio.streams.open_connection(). 5 | 6 | Note that running this example starts both the TCP server and client 7 | in the same process. It listens on port 12345 on 127.0.0.1, so it will 8 | fail if this port is currently in use. 9 | """ 10 | 11 | import sys 12 | import asyncio 13 | import asyncio.streams 14 | 15 | 16 | class MyServer: 17 | """ 18 | This is just an example of how a TCP server might be potentially 19 | structured. This class has basically 3 methods: start the server, 20 | handle a client, and stop the server. 21 | 22 | Note that you don't have to follow this structure, it is really 23 | just an example or possible starting point. 24 | """ 25 | 26 | def __init__(self): 27 | self.server = None # encapsulates the server sockets 28 | 29 | # this keeps track of all the clients that connected to our 30 | # server. It can be useful in some cases, for instance to 31 | # kill client connections or to broadcast some data to all 32 | # clients... 33 | self.clients = {} # task -> (reader, writer) 34 | 35 | def _accept_client(self, client_reader, client_writer): 36 | """ 37 | This method accepts a new client connection and creates a Task 38 | to handle this client. self.clients is updated to keep track 39 | of the new client. 
40 | """ 41 | 42 | # start a new Task to handle this specific client connection 43 | task = asyncio.Task(self._handle_client(client_reader, client_writer)) 44 | self.clients[task] = (client_reader, client_writer) 45 | 46 | def client_done(task): 47 | print("client task done:", task, file=sys.stderr) 48 | del self.clients[task] 49 | 50 | task.add_done_callback(client_done) 51 | 52 | @asyncio.coroutine 53 | def _handle_client(self, client_reader, client_writer): 54 | """ 55 | This method actually does the work to handle the requests for 56 | a specific client. The protocol is line oriented, so there is 57 | a main loop that reads a line with a request and then sends 58 | out one or more lines back to the client with the result. 59 | """ 60 | while True: 61 | data = (yield from client_reader.readline()).decode("utf-8") 62 | if not data: # an empty string means the client disconnected 63 | break 64 | cmd, *args = data.rstrip().split(' ') 65 | if cmd == 'add': 66 | arg1 = float(args[0]) 67 | arg2 = float(args[1]) 68 | retval = arg1 + arg2 69 | client_writer.write("{!r}\n".format(retval).encode("utf-8")) 70 | elif cmd == 'repeat': 71 | times = int(args[0]) 72 | msg = args[1] 73 | client_writer.write("begin\n".encode("utf-8")) 74 | for idx in range(times): 75 | client_writer.write("{}. {}\n".format(idx+1, msg) 76 | .encode("utf-8")) 77 | client_writer.write("end\n".encode("utf-8")) 78 | else: 79 | print("Bad command {!r}".format(data), file=sys.stderr) 80 | 81 | # This enables us to have flow control in our connection. 82 | yield from client_writer.drain() 83 | 84 | def start(self, loop): 85 | """ 86 | Starts the TCP server, so that it listens on port 12345. 87 | 88 | For each client that connects, the accept_client method gets 89 | called. This method runs the loop until the server sockets 90 | are ready to accept connections. 
91 | """ 92 | self.server = loop.run_until_complete( 93 | asyncio.streams.start_server(self._accept_client, 94 | '127.0.0.1', 12345, 95 | loop=loop)) 96 | 97 | def stop(self, loop): 98 | """ 99 | Stops the TCP server, i.e. closes the listening socket(s). 100 | 101 | This method runs the loop until the server sockets are closed. 102 | """ 103 | if self.server is not None: 104 | self.server.close() 105 | loop.run_until_complete(self.server.wait_closed()) 106 | self.server = None 107 | 108 | 109 | def main(): 110 | loop = asyncio.get_event_loop() 111 | 112 | # creates a server and starts listening to TCP connections 113 | server = MyServer() 114 | server.start(loop) 115 | 116 | @asyncio.coroutine 117 | def client(): 118 | reader, writer = yield from asyncio.streams.open_connection( 119 | '127.0.0.1', 12345, loop=loop) 120 | 121 | def send(msg): 122 | print("> " + msg) 123 | writer.write((msg + '\n').encode("utf-8")) 124 | 125 | def recv(): 126 | msgback = (yield from reader.readline()).decode("utf-8").rstrip() 127 | print("< " + msgback) 128 | return msgback 129 | 130 | # send a line 131 | send("add 1 2") 132 | msg = yield from recv() 133 | 134 | send("repeat 5 hello") 135 | msg = yield from recv() 136 | assert msg == 'begin' 137 | while True: 138 | msg = yield from recv() 139 | if msg == 'end': 140 | break 141 | 142 | writer.close() 143 | yield from asyncio.sleep(0.5) 144 | 145 | # creates a client and connects to our server 146 | try: 147 | loop.run_until_complete(client()) 148 | server.stop(loop) 149 | finally: 150 | loop.close() 151 | 152 | 153 | if __name__ == '__main__': 154 | main() 155 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/sink.py: -------------------------------------------------------------------------------- 1 | """Test service that accepts connections and reads all data off them.""" 2 | 3 | import argparse 4 | import os 5 | import sys 6 | 7 | from asyncio import * 8 | 9 | ARGS = 
argparse.ArgumentParser(description="TCP data sink example.") 10 | ARGS.add_argument( 11 | '--tls', action='store_true', dest='tls', 12 | default=False, help='Use TLS with a self-signed cert') 13 | ARGS.add_argument( 14 | '--iocp', action='store_true', dest='iocp', 15 | default=False, help='Use IOCP event loop (Windows only)') 16 | ARGS.add_argument( 17 | '--host', action='store', dest='host', 18 | default='127.0.0.1', help='Host name') 19 | ARGS.add_argument( 20 | '--port', action='store', dest='port', 21 | default=1111, type=int, help='Port number') 22 | ARGS.add_argument( 23 | '--maxsize', action='store', dest='maxsize', 24 | default=16*1024*1024, type=int, help='Max total data size') 25 | 26 | server = None 27 | args = None 28 | 29 | 30 | def dprint(*args): 31 | print('sink:', *args, file=sys.stderr) 32 | 33 | 34 | class Service(Protocol): 35 | 36 | def connection_made(self, tr): 37 | dprint('connection from', tr.get_extra_info('peername')) 38 | dprint('my socket is', tr.get_extra_info('sockname')) 39 | self.tr = tr 40 | self.total = 0 41 | 42 | def data_received(self, data): 43 | if data == b'stop': 44 | dprint('stopping server') 45 | server.close() 46 | self.tr.close() 47 | return 48 | self.total += len(data) 49 | dprint('received', len(data), 'bytes; total', self.total) 50 | if self.total > args.maxsize: 51 | dprint('closing due to too much data') 52 | self.tr.close() 53 | 54 | def connection_lost(self, how): 55 | dprint('closed', repr(how)) 56 | 57 | 58 | @coroutine 59 | def start(loop, host, port): 60 | global server 61 | sslctx = None 62 | if args.tls: 63 | import ssl 64 | # TODO: take cert/key from args as well. 
65 | here = os.path.join(os.path.dirname(__file__), '..', 'tests') 66 | sslctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23) 67 | sslctx.options |= ssl.OP_NO_SSLv2 68 | sslctx.load_cert_chain( 69 | certfile=os.path.join(here, 'ssl_cert.pem'), 70 | keyfile=os.path.join(here, 'ssl_key.pem')) 71 | 72 | server = yield from loop.create_server(Service, host, port, ssl=sslctx) 73 | dprint('serving TLS' if sslctx else 'serving', 74 | [s.getsockname() for s in server.sockets]) 75 | yield from server.wait_closed() 76 | 77 | 78 | def main(): 79 | global args 80 | args = ARGS.parse_args() 81 | if args.iocp: 82 | from asyncio.windows_events import ProactorEventLoop 83 | loop = ProactorEventLoop() 84 | set_event_loop(loop) 85 | else: 86 | loop = get_event_loop() 87 | try: 88 | loop.run_until_complete(start(loop, args.host, args.port)) 89 | finally: 90 | loop.close() 91 | 92 | 93 | if __name__ == '__main__': 94 | main() 95 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/source.py: -------------------------------------------------------------------------------- 1 | """Test client that connects and sends infinite data.""" 2 | 3 | import argparse 4 | import sys 5 | 6 | from asyncio import * 7 | from asyncio import test_utils 8 | 9 | 10 | ARGS = argparse.ArgumentParser(description="TCP data sink example.") 11 | ARGS.add_argument( 12 | '--tls', action='store_true', dest='tls', 13 | default=False, help='Use TLS') 14 | ARGS.add_argument( 15 | '--iocp', action='store_true', dest='iocp', 16 | default=False, help='Use IOCP event loop (Windows only)') 17 | ARGS.add_argument( 18 | '--stop', action='store_true', dest='stop', 19 | default=False, help='Stop the server by sending it b"stop" as data') 20 | ARGS.add_argument( 21 | '--host', action='store', dest='host', 22 | default='127.0.0.1', help='Host name') 23 | ARGS.add_argument( 24 | '--port', action='store', dest='port', 25 | default=1111, type=int, help='Port number') 26 | 
ARGS.add_argument( 27 | '--size', action='store', dest='size', 28 | default=16*1024, type=int, help='Data size') 29 | 30 | args = None 31 | 32 | 33 | def dprint(*args): 34 | print('source:', *args, file=sys.stderr) 35 | 36 | 37 | class Client(Protocol): 38 | 39 | total = 0 40 | 41 | def connection_made(self, tr): 42 | dprint('connecting to', tr.get_extra_info('peername')) 43 | dprint('my socket is', tr.get_extra_info('sockname')) 44 | self.tr = tr 45 | self.lost = False 46 | self.loop = get_event_loop() 47 | self.waiter = Future() 48 | if args.stop: 49 | self.tr.write(b'stop') 50 | self.tr.close() 51 | else: 52 | self.data = b'x'*args.size 53 | self.write_some_data() 54 | 55 | def write_some_data(self): 56 | if self.lost: 57 | dprint('lost already') 58 | return 59 | data = self.data 60 | size = len(data) 61 | self.total += size 62 | dprint('writing', size, 'bytes; total', self.total) 63 | self.tr.write(data) 64 | self.loop.call_soon(self.write_some_data) 65 | 66 | def connection_lost(self, exc): 67 | dprint('lost connection', repr(exc)) 68 | self.lost = True 69 | self.waiter.set_result(None) 70 | 71 | 72 | @coroutine 73 | def start(loop, host, port): 74 | sslctx = None 75 | if args.tls: 76 | sslctx = test_utils.dummy_ssl_context() 77 | tr, pr = yield from loop.create_connection(Client, host, port, 78 | ssl=sslctx) 79 | dprint('tr =', tr) 80 | dprint('pr =', pr) 81 | yield from pr.waiter 82 | 83 | 84 | def main(): 85 | global args 86 | args = ARGS.parse_args() 87 | if args.iocp: 88 | from asyncio.windows_events import ProactorEventLoop 89 | loop = ProactorEventLoop() 90 | set_event_loop(loop) 91 | else: 92 | loop = get_event_loop() 93 | try: 94 | loop.run_until_complete(start(loop, args.host, args.port)) 95 | finally: 96 | loop.close() 97 | 98 | 99 | if __name__ == '__main__': 100 | main() 101 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/source1.py: 
-------------------------------------------------------------------------------- 1 | """Like source.py, but uses streams.""" 2 | 3 | import argparse 4 | import sys 5 | 6 | from asyncio import * 7 | from asyncio import test_utils 8 | 9 | ARGS = argparse.ArgumentParser(description="TCP data sink example.") 10 | ARGS.add_argument( 11 | '--tls', action='store_true', dest='tls', 12 | default=False, help='Use TLS') 13 | ARGS.add_argument( 14 | '--iocp', action='store_true', dest='iocp', 15 | default=False, help='Use IOCP event loop (Windows only)') 16 | ARGS.add_argument( 17 | '--stop', action='store_true', dest='stop', 18 | default=False, help='Stop the server by sending it b"stop" as data') 19 | ARGS.add_argument( 20 | '--host', action='store', dest='host', 21 | default='127.0.0.1', help='Host name') 22 | ARGS.add_argument( 23 | '--port', action='store', dest='port', 24 | default=1111, type=int, help='Port number') 25 | ARGS.add_argument( 26 | '--size', action='store', dest='size', 27 | default=16*1024, type=int, help='Data size') 28 | 29 | 30 | class Debug: 31 | """A clever little class that suppresses repetitive messages.""" 32 | 33 | overwriting = False 34 | label = 'stream1:' 35 | 36 | def print(self, *args): 37 | if self.overwriting: 38 | print(file=sys.stderr) 39 | self.overwriting = 0 40 | print(self.label, *args, file=sys.stderr) 41 | 42 | def oprint(self, *args): 43 | self.overwriting += 1 44 | end = '\n' 45 | if self.overwriting >= 3: 46 | if self.overwriting == 3: 47 | print(self.label, '[...]', file=sys.stderr) 48 | end = '\r' 49 | print(self.label, *args, file=sys.stderr, end=end, flush=True) 50 | 51 | 52 | @coroutine 53 | def start(loop, args): 54 | d = Debug() 55 | total = 0 56 | sslctx = None 57 | if args.tls: 58 | d.print('using dummy SSLContext') 59 | sslctx = test_utils.dummy_ssl_context() 60 | r, w = yield from open_connection(args.host, args.port, ssl=sslctx) 61 | d.print('r =', r) 62 | d.print('w =', w) 63 | if args.stop: 64 | w.write(b'stop') 65 
| w.close() 66 | else: 67 | size = args.size 68 | data = b'x'*size 69 | try: 70 | while True: 71 | total += size 72 | d.oprint('writing', size, 'bytes; total', total) 73 | w.write(data) 74 | f = w.drain() 75 | if f: 76 | d.print('pausing') 77 | yield from f 78 | except (ConnectionResetError, BrokenPipeError) as exc: 79 | d.print('caught', repr(exc)) 80 | 81 | 82 | def main(): 83 | global args 84 | args = ARGS.parse_args() 85 | if args.iocp: 86 | from asyncio.windows_events import ProactorEventLoop 87 | loop = ProactorEventLoop() 88 | set_event_loop(loop) 89 | else: 90 | loop = get_event_loop() 91 | try: 92 | loop.run_until_complete(start(loop, args)) 93 | finally: 94 | loop.close() 95 | 96 | 97 | if __name__ == '__main__': 98 | main() 99 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/stacks.py: -------------------------------------------------------------------------------- 1 | """Crude demo for print_stack().""" 2 | 3 | 4 | from asyncio import * 5 | 6 | 7 | @coroutine 8 | def helper(r): 9 | print('--- helper ---') 10 | for t in Task.all_tasks(): 11 | t.print_stack() 12 | print('--- end helper ---') 13 | line = yield from r.readline() 14 | 1/0 15 | return line 16 | 17 | def doit(): 18 | l = get_event_loop() 19 | lr = l.run_until_complete 20 | r, w = lr(open_connection('python.org', 80)) 21 | t1 = ensure_future(helper(r)) 22 | for t in Task.all_tasks(): t.print_stack() 23 | print('---') 24 | l._run_once() 25 | for t in Task.all_tasks(): t.print_stack() 26 | print('---') 27 | w.write(b'GET /\r\n') 28 | w.write_eof() 29 | try: 30 | lr(t1) 31 | except Exception as e: 32 | print('catching', e) 33 | finally: 34 | for t in Task.all_tasks(): 35 | t.print_stack() 36 | l.close() 37 | 38 | 39 | def main(): 40 | doit() 41 | 42 | 43 | if __name__ == '__main__': 44 | main() 45 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/subprocess_attach_read_pipe.py:
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Example showing how to attach a read pipe to a subprocess.""" 3 | import asyncio 4 | import os, sys 5 | 6 | code = """ 7 | import os, sys 8 | fd = int(sys.argv[1]) 9 | os.write(fd, b'data') 10 | os.close(fd) 11 | """ 12 | 13 | loop = asyncio.get_event_loop() 14 | 15 | @asyncio.coroutine 16 | def task(): 17 | rfd, wfd = os.pipe() 18 | args = [sys.executable, '-c', code, str(wfd)] 19 | 20 | pipe = open(rfd, 'rb', 0) 21 | reader = asyncio.StreamReader(loop=loop) 22 | protocol = asyncio.StreamReaderProtocol(reader, loop=loop) 23 | transport, _ = yield from loop.connect_read_pipe(lambda: protocol, pipe) 24 | 25 | proc = yield from asyncio.create_subprocess_exec(*args, pass_fds={wfd}) 26 | yield from proc.wait() 27 | 28 | os.close(wfd) 29 | data = yield from reader.read() 30 | print("read = %r" % data.decode()) 31 | 32 | loop.run_until_complete(task()) 33 | loop.close() 34 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/subprocess_attach_write_pipe.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Example showing how to attach a write pipe to a subprocess.""" 3 | import asyncio 4 | import os, sys 5 | from asyncio import subprocess 6 | 7 | code = """ 8 | import os, sys 9 | fd = int(sys.argv[1]) 10 | data = os.read(fd, 1024) 11 | sys.stdout.buffer.write(data) 12 | """ 13 | 14 | loop = asyncio.get_event_loop() 15 | 16 | @asyncio.coroutine 17 | def task(): 18 | rfd, wfd = os.pipe() 19 | args = [sys.executable, '-c', code, str(rfd)] 20 | proc = yield from asyncio.create_subprocess_exec( 21 | *args, 22 | pass_fds={rfd}, 23 | stdout=subprocess.PIPE) 24 | 25 | pipe = open(wfd, 'wb', 0) 26 | transport, _ = yield from loop.connect_write_pipe(asyncio.Protocol, 27 | pipe) 28 | transport.write(b'data') 29 | 30 | stdout, stderr = yield from 
proc.communicate() 31 | print("stdout = %r" % stdout.decode()) 32 | transport.close() 33 | 34 | loop.run_until_complete(task()) 35 | loop.close() 36 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/subprocess_shell.py: -------------------------------------------------------------------------------- 1 | """Example writing to and reading from a subprocess at the same time using 2 | tasks.""" 3 | 4 | import asyncio 5 | import os 6 | from asyncio.subprocess import PIPE 7 | 8 | 9 | @asyncio.coroutine 10 | def send_input(writer, input): 11 | try: 12 | for line in input: 13 | print('sending', len(line), 'bytes') 14 | writer.write(line) 15 | d = writer.drain() 16 | if d: 17 | print('pause writing') 18 | yield from d 19 | print('resume writing') 20 | writer.close() 21 | except BrokenPipeError: 22 | print('stdin: broken pipe error') 23 | except ConnectionResetError: 24 | print('stdin: connection reset error') 25 | 26 | @asyncio.coroutine 27 | def log_errors(reader): 28 | while True: 29 | line = yield from reader.readline() 30 | if not line: 31 | break 32 | print('ERROR', repr(line)) 33 | 34 | @asyncio.coroutine 35 | def read_stdout(stdout): 36 | while True: 37 | line = yield from stdout.readline() 38 | print('received', repr(line)) 39 | if not line: 40 | break 41 | 42 | @asyncio.coroutine 43 | def start(cmd, input=None, **kwds): 44 | kwds['stdout'] = PIPE 45 | kwds['stderr'] = PIPE 46 | if input is None and 'stdin' not in kwds: 47 | kwds['stdin'] = None 48 | else: 49 | kwds['stdin'] = PIPE 50 | proc = yield from asyncio.create_subprocess_shell(cmd, **kwds) 51 | 52 | tasks = [] 53 | if input is not None: 54 | tasks.append(send_input(proc.stdin, input)) 55 | else: 56 | print('No stdin') 57 | if proc.stderr is not None: 58 | tasks.append(log_errors(proc.stderr)) 59 | else: 60 | print('No stderr') 61 | if proc.stdout is not None: 62 | tasks.append(read_stdout(proc.stdout)) 63 | else: 64 | print('No stdout') 65 | 66 | 
if tasks: 67 | # feed stdin while consuming stdout to avoid hang 68 | # when stdin pipe is full 69 | yield from asyncio.wait(tasks) 70 | 71 | exitcode = yield from proc.wait() 72 | print("exit code: %s" % exitcode) 73 | 74 | 75 | def main(): 76 | if os.name == 'nt': 77 | loop = asyncio.ProactorEventLoop() 78 | asyncio.set_event_loop(loop) 79 | else: 80 | loop = asyncio.get_event_loop() 81 | loop.run_until_complete(start( 82 | 'sleep 2; wc', input=[b'foo bar baz\n'*300 for i in range(100)])) 83 | loop.close() 84 | 85 | 86 | if __name__ == '__main__': 87 | main() 88 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/tcp_echo.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """TCP echo server example.""" 3 | import argparse 4 | import asyncio 5 | import sys 6 | try: 7 | import signal 8 | except ImportError: 9 | signal = None 10 | 11 | 12 | class EchoServer(asyncio.Protocol): 13 | 14 | TIMEOUT = 5.0 15 | 16 | def timeout(self): 17 | print('connection timeout, closing.') 18 | self.transport.close() 19 | 20 | def connection_made(self, transport): 21 | print('connection made') 22 | self.transport = transport 23 | 24 | # start 5 seconds timeout timer 25 | self.h_timeout = asyncio.get_event_loop().call_later( 26 | self.TIMEOUT, self.timeout) 27 | 28 | def data_received(self, data): 29 | print('data received: ', data.decode()) 30 | self.transport.write(b'Re: ' + data) 31 | 32 | # restart timeout timer 33 | self.h_timeout.cancel() 34 | self.h_timeout = asyncio.get_event_loop().call_later( 35 | self.TIMEOUT, self.timeout) 36 | 37 | def eof_received(self): 38 | pass 39 | 40 | def connection_lost(self, exc): 41 | print('connection lost:', exc) 42 | self.h_timeout.cancel() 43 | 44 | 45 | class EchoClient(asyncio.Protocol): 46 | 47 | message = 'This is the message. It will be echoed.' 
48 | 49 | def connection_made(self, transport): 50 | self.transport = transport 51 | self.transport.write(self.message.encode()) 52 | print('data sent:', self.message) 53 | 54 | def data_received(self, data): 55 | print('data received:', data) 56 | 57 | # disconnect after 10 seconds 58 | asyncio.get_event_loop().call_later(10.0, self.transport.close) 59 | 60 | def eof_received(self): 61 | pass 62 | 63 | def connection_lost(self, exc): 64 | print('connection lost:', exc) 65 | asyncio.get_event_loop().stop() 66 | 67 | 68 | def start_client(loop, host, port): 69 | t = asyncio.Task(loop.create_connection(EchoClient, host, port)) 70 | loop.run_until_complete(t) 71 | 72 | 73 | def start_server(loop, host, port): 74 | f = loop.create_server(EchoServer, host, port) 75 | return loop.run_until_complete(f) 76 | 77 | 78 | ARGS = argparse.ArgumentParser(description="TCP Echo example.") 79 | ARGS.add_argument( 80 | '--server', action="store_true", dest='server', 81 | default=False, help='Run tcp server') 82 | ARGS.add_argument( 83 | '--client', action="store_true", dest='client', 84 | default=False, help='Run tcp client') 85 | ARGS.add_argument( 86 | '--host', action="store", dest='host', 87 | default='127.0.0.1', help='Host name') 88 | ARGS.add_argument( 89 | '--port', action="store", dest='port', 90 | default=9999, type=int, help='Port number') 91 | ARGS.add_argument( 92 | '--iocp', action="store_true", dest='iocp', 93 | default=False, help='Use IOCP event loop') 94 | 95 | 96 | if __name__ == '__main__': 97 | args = ARGS.parse_args() 98 | 99 | if ':' in args.host: 100 | args.host, port = args.host.split(':', 1) 101 | args.port = int(port) 102 | 103 | if (not (args.server or args.client)) or (args.server and args.client): 104 | print('Please specify --server or --client\n') 105 | ARGS.print_help() 106 | else: 107 | if args.iocp: 108 | from asyncio import windows_events 109 | loop = windows_events.ProactorEventLoop() 110 | asyncio.set_event_loop(loop) 111 | else: 112 | loop = 
asyncio.get_event_loop() 113 | print('Using backend: {0}'.format(loop.__class__.__name__)) 114 | 115 | if signal is not None and sys.platform != 'win32': 116 | loop.add_signal_handler(signal.SIGINT, loop.stop) 117 | 118 | if args.server: 119 | server = start_server(loop, args.host, args.port) 120 | else: 121 | start_client(loop, args.host, args.port) 122 | 123 | try: 124 | loop.run_forever() 125 | finally: 126 | if args.server: 127 | server.close() 128 | loop.close() 129 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/timing_tcp_server.py: -------------------------------------------------------------------------------- 1 | """ 2 | A variant of simple_tcp_server.py that measures the time it takes to 3 | send N messages for a range of N. (This was O(N**2) in a previous 4 | version of asyncio.) 5 | 6 | Note that running this example starts both the TCP server and client 7 | in the same process. It listens on port 12345 on 127.0.0.1, so it will 8 | fail if this port is currently in use. 9 | """ 10 | 11 | import sys 12 | import time 13 | import random 14 | 15 | import asyncio 16 | import asyncio.streams 17 | 18 | 19 | class MyServer: 20 | """ 21 | This is just an example of how a TCP server might be potentially 22 | structured. This class has basically 3 methods: start the server, 23 | handle a client, and stop the server. 24 | 25 | Note that you don't have to follow this structure, it is really 26 | just an example or possible starting point. 27 | """ 28 | 29 | def __init__(self): 30 | self.server = None # encapsulates the server sockets 31 | 32 | # this keeps track of all the clients that connected to our 33 | # server. It can be useful in some cases, for instance to 34 | # kill client connections or to broadcast some data to all 35 | # clients...
36 | self.clients = {} # task -> (reader, writer) 37 | 38 | def _accept_client(self, client_reader, client_writer): 39 | """ 40 | This method accepts a new client connection and creates a Task 41 | to handle this client. self.clients is updated to keep track 42 | of the new client. 43 | """ 44 | 45 | # start a new Task to handle this specific client connection 46 | task = asyncio.Task(self._handle_client(client_reader, client_writer)) 47 | self.clients[task] = (client_reader, client_writer) 48 | 49 | def client_done(task): 50 | print("client task done:", task, file=sys.stderr) 51 | del self.clients[task] 52 | 53 | task.add_done_callback(client_done) 54 | 55 | @asyncio.coroutine 56 | def _handle_client(self, client_reader, client_writer): 57 | """ 58 | This method actually does the work to handle the requests for 59 | a specific client. The protocol is line oriented, so there is 60 | a main loop that reads a line with a request and then sends 61 | out one or more lines back to the client with the result. 62 | """ 63 | while True: 64 | data = (yield from client_reader.readline()).decode("utf-8") 65 | if not data: # an empty string means the client disconnected 66 | break 67 | cmd, *args = data.rstrip().split(' ') 68 | if cmd == 'add': 69 | arg1 = float(args[0]) 70 | arg2 = float(args[1]) 71 | retval = arg1 + arg2 72 | client_writer.write("{!r}\n".format(retval).encode("utf-8")) 73 | elif cmd == 'repeat': 74 | times = int(args[0]) 75 | msg = args[1] 76 | client_writer.write("begin\n".encode("utf-8")) 77 | for idx in range(times): 78 | client_writer.write("{}. {}\n".format( 79 | idx+1, msg + 'x'*random.randint(10, 50)) 80 | .encode("utf-8")) 81 | client_writer.write("end\n".encode("utf-8")) 82 | else: 83 | print("Bad command {!r}".format(data), file=sys.stderr) 84 | 85 | # This enables us to have flow control in our connection. 86 | yield from client_writer.drain() 87 | 88 | def start(self, loop): 89 | """ 90 | Starts the TCP server, so that it listens on port 12345.
91 | 92 | For each client that connects, the accept_client method gets 93 | called. This method runs the loop until the server sockets 94 | are ready to accept connections. 95 | """ 96 | self.server = loop.run_until_complete( 97 | asyncio.streams.start_server(self._accept_client, 98 | '127.0.0.1', 12345, 99 | loop=loop)) 100 | 101 | def stop(self, loop): 102 | """ 103 | Stops the TCP server, i.e. closes the listening socket(s). 104 | 105 | This method runs the loop until the server sockets are closed. 106 | """ 107 | if self.server is not None: 108 | self.server.close() 109 | loop.run_until_complete(self.server.wait_closed()) 110 | self.server = None 111 | 112 | 113 | def main(): 114 | loop = asyncio.get_event_loop() 115 | 116 | # creates a server and starts listening to TCP connections 117 | server = MyServer() 118 | server.start(loop) 119 | 120 | @asyncio.coroutine 121 | def client(): 122 | reader, writer = yield from asyncio.streams.open_connection( 123 | '127.0.0.1', 12345, loop=loop) 124 | 125 | def send(msg): 126 | print("> " + msg) 127 | writer.write((msg + '\n').encode("utf-8")) 128 | 129 | def recv(): 130 | msgback = (yield from reader.readline()).decode("utf-8").rstrip() 131 | print("< " + msgback) 132 | return msgback 133 | 134 | # send a line 135 | send("add 1 2") 136 | msg = yield from recv() 137 | 138 | Ns = list(range(100, 100000, 10000)) 139 | times = [] 140 | 141 | for N in Ns: 142 | t0 = time.time() 143 | send("repeat {} hello world ".format(N)) 144 | msg = yield from recv() 145 | assert msg == 'begin' 146 | while True: 147 | msg = (yield from reader.readline()).decode("utf-8").rstrip() 148 | if msg == 'end': 149 | break 150 | t1 = time.time() 151 | dt = t1 - t0 152 | print("Time taken: {:.3f} seconds ({:.6f} per repetition)" 153 | .format(dt, dt/N)) 154 | times.append(dt) 155 | 156 | writer.close() 157 | yield from asyncio.sleep(0.5) 158 | 159 | # creates a client and connects to our server 160 | try: 161 | loop.run_until_complete(client()) 162 | 
server.stop(loop) 163 | finally: 164 | loop.close() 165 | 166 | 167 | if __name__ == '__main__': 168 | main() 169 | -------------------------------------------------------------------------------- /thirdparty/asyncio/examples/udp_echo.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """UDP echo example.""" 3 | import argparse 4 | import sys 5 | import asyncio 6 | try: 7 | import signal 8 | except ImportError: 9 | signal = None 10 | 11 | 12 | class MyServerUdpEchoProtocol: 13 | 14 | def connection_made(self, transport): 15 | print('start', transport) 16 | self.transport = transport 17 | 18 | def datagram_received(self, data, addr): 19 | print('Data received:', data, addr) 20 | self.transport.sendto(data, addr) 21 | 22 | def error_received(self, exc): 23 | print('Error received:', exc) 24 | 25 | def connection_lost(self, exc): 26 | print('stop', exc) 27 | 28 | 29 | class MyClientUdpEchoProtocol: 30 | 31 | message = 'This is the message. It will be echoed.' 
32 | 33 | def connection_made(self, transport): 34 | self.transport = transport 35 | print('sending "{}"'.format(self.message)) 36 | self.transport.sendto(self.message.encode()) 37 | print('waiting to receive') 38 | 39 | def datagram_received(self, data, addr): 40 | print('received "{}"'.format(data.decode())) 41 | self.transport.close() 42 | 43 | def error_received(self, exc): 44 | print('Error received:', exc) 45 | 46 | def connection_lost(self, exc): 47 | print('closing transport', exc) 48 | loop = asyncio.get_event_loop() 49 | loop.stop() 50 | 51 | 52 | def start_server(loop, addr): 53 | t = asyncio.Task(loop.create_datagram_endpoint( 54 | MyServerUdpEchoProtocol, local_addr=addr)) 55 | transport, server = loop.run_until_complete(t) 56 | return transport 57 | 58 | 59 | def start_client(loop, addr): 60 | t = asyncio.Task(loop.create_datagram_endpoint( 61 | MyClientUdpEchoProtocol, remote_addr=addr)) 62 | loop.run_until_complete(t) 63 | 64 | 65 | ARGS = argparse.ArgumentParser(description="UDP Echo example.") 66 | ARGS.add_argument( 67 | '--server', action="store_true", dest='server', 68 | default=False, help='Run udp server') 69 | ARGS.add_argument( 70 | '--client', action="store_true", dest='client', 71 | default=False, help='Run udp client') 72 | ARGS.add_argument( 73 | '--host', action="store", dest='host', 74 | default='127.0.0.1', help='Host name') 75 | ARGS.add_argument( 76 | '--port', action="store", dest='port', 77 | default=9999, type=int, help='Port number') 78 | 79 | 80 | if __name__ == '__main__': 81 | args = ARGS.parse_args() 82 | if ':' in args.host: 83 | args.host, port = args.host.split(':', 1) 84 | args.port = int(port) 85 | 86 | if (not (args.server or args.client)) or (args.server and args.client): 87 | print('Please specify --server or --client\n') 88 | ARGS.print_help() 89 | else: 90 | loop = asyncio.get_event_loop() 91 | if signal is not None: 92 | loop.add_signal_handler(signal.SIGINT, loop.stop) 93 | 94 | if '--server' in sys.argv: 95 | 
server = start_server(loop, (args.host, args.port)) 96 | else: 97 | start_client(loop, (args.host, args.port)) 98 | 99 | try: 100 | loop.run_forever() 101 | finally: 102 | if '--server' in sys.argv: 103 | server.close() 104 | loop.close() 105 | -------------------------------------------------------------------------------- /thirdparty/asyncio/pypi.bat: -------------------------------------------------------------------------------- 1 | c:\Python33\python.exe setup.py bdist_wheel upload 2 | -------------------------------------------------------------------------------- /thirdparty/asyncio/run_aiotest.py: -------------------------------------------------------------------------------- 1 | import aiotest.run 2 | import asyncio 3 | import sys 4 | if sys.platform == 'win32': 5 | from asyncio.windows_utils import socketpair 6 | else: 7 | from socket import socketpair 8 | 9 | config = aiotest.TestConfig() 10 | config.asyncio = asyncio 11 | config.socketpair = socketpair 12 | config.new_event_pool_policy = asyncio.DefaultEventLoopPolicy 13 | config.call_soon_check_closed = True 14 | aiotest.run.main(config) 15 | -------------------------------------------------------------------------------- /thirdparty/asyncio/runtests.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Run asyncio unittests. 3 | 4 | Usage: 5 | python3 runtests.py [flags] [pattern] ... 6 | 7 | Patterns are matched against the fully qualified name of the test, 8 | including package, module, class and method, 9 | e.g. 'tests.test_events.PolicyTests.testPolicy'. 10 | 11 | For full help, try --help. 12 | 13 | runtests.py --coverage is equivalent of: 14 | 15 | $(COVERAGE) run --branch runtests.py -v 16 | $(COVERAGE) html $(list of files) 17 | $(COVERAGE) report -m $(list of files) 18 | 19 | """ 20 | 21 | # Originally written by Beech Horn (for NDB). 
22 | 23 | import argparse 24 | import gc 25 | import logging 26 | import os 27 | import random 28 | import re 29 | import sys 30 | import unittest 31 | import textwrap 32 | import warnings 33 | import importlib.machinery 34 | try: 35 | import coverage 36 | except ImportError: 37 | coverage = None 38 | 39 | from unittest.signals import installHandler 40 | 41 | assert sys.version_info >= (3, 3), 'Please use Python 3.3 or higher.' 42 | 43 | ARGS = argparse.ArgumentParser(description="Run all unittests.") 44 | ARGS.add_argument( 45 | '-v', action="store", dest='verbose', 46 | nargs='?', const=1, type=int, default=0, help='verbose') 47 | ARGS.add_argument( 48 | '-x', action="store_true", dest='exclude', help='exclude tests') 49 | ARGS.add_argument( 50 | '-f', '--failfast', action="store_true", default=False, 51 | dest='failfast', help='Stop on first fail or error') 52 | ARGS.add_argument( 53 | '-c', '--catch', action="store_true", default=False, 54 | dest='catchbreak', help='Catch control-C and display results') 55 | ARGS.add_argument( 56 | '--forever', action="store_true", dest='forever', default=False, 57 | help='run tests forever to catch sporadic errors') 58 | ARGS.add_argument( 59 | '--findleaks', action='store_true', dest='findleaks', 60 | help='detect tests that leak memory') 61 | ARGS.add_argument('-r', '--randomize', action='store_true', 62 | help='randomize test execution order.') 63 | ARGS.add_argument('--seed', type=int, 64 | help='random seed to reproduce a previous random run') 65 | ARGS.add_argument( 66 | '-q', action="store_true", dest='quiet', help='quiet') 67 | ARGS.add_argument( 68 | '--tests', action="store", dest='testsdir', default='tests', 69 | help='tests directory') 70 | ARGS.add_argument( 71 | '--coverage', action="store_true", dest='coverage', 72 | help='enable html coverage report') 73 | ARGS.add_argument( 74 | 'pattern', action="store", nargs="*", 75 | help='optional regex patterns to match test ids (default all tests)') 76 | 77 | COV_ARGS =
argparse.ArgumentParser(description="Run all unittests.") 78 | COV_ARGS.add_argument( 79 | '--coverage', action="store", dest='coverage', nargs='?', const='', 80 | help='enable coverage report and provide python files directory') 81 | 82 | 83 | def load_modules(basedir, suffix='.py'): 84 | def list_dir(prefix, dir): 85 | files = [] 86 | 87 | modpath = os.path.join(dir, '__init__.py') 88 | if os.path.isfile(modpath): 89 | mod = os.path.split(dir)[-1] 90 | files.append(('{}{}'.format(prefix, mod), modpath)) 91 | 92 | prefix = '{}{}.'.format(prefix, mod) 93 | 94 | for name in os.listdir(dir): 95 | path = os.path.join(dir, name) 96 | 97 | if os.path.isdir(path): 98 | files.extend(list_dir('{}{}.'.format(prefix, name), path)) 99 | else: 100 | if (name != '__init__.py' and 101 | name.endswith(suffix) and 102 | not name.startswith(('.', '_'))): 103 | files.append(('{}{}'.format(prefix, name[:-3]), path)) 104 | 105 | return files 106 | 107 | mods = [] 108 | for modname, sourcefile in list_dir('', basedir): 109 | if modname == 'runtests': 110 | continue 111 | if modname == 'test_pep492' and (sys.version_info < (3, 5)): 112 | print("Skipping '{0}': need at least Python 3.5".format(modname), 113 | file=sys.stderr) 114 | continue 115 | try: 116 | loader = importlib.machinery.SourceFileLoader(modname, sourcefile) 117 | mods.append((loader.load_module(), sourcefile)) 118 | except SyntaxError: 119 | raise 120 | except unittest.SkipTest as err: 121 | print("Skipping '{}': {}".format(modname, err), file=sys.stderr) 122 | 123 | return mods 124 | 125 | 126 | def randomize_tests(tests, seed): 127 | if seed is None: 128 | seed = random.randrange(10000000) 129 | random.seed(seed) 130 | print("Randomize test execution order (seed: %s)" % seed) 131 | random.shuffle(tests._tests) 132 | 133 | 134 | class TestsFinder: 135 | 136 | def __init__(self, testsdir, includes=(), excludes=()): 137 | self._testsdir = testsdir 138 | self._includes = includes 139 | self._excludes = excludes 140 | 
self.find_available_tests() 141 | 142 | def find_available_tests(self): 143 | """ 144 | Find available test classes without instantiating them. 145 | """ 146 | self._test_factories = [] 147 | mods = [mod for mod, _ in load_modules(self._testsdir)] 148 | for mod in mods: 149 | for name in set(dir(mod)): 150 | if name.endswith('Tests'): 151 | self._test_factories.append(getattr(mod, name)) 152 | 153 | def load_tests(self): 154 | """ 155 | Load test cases from the available test classes and apply 156 | optional include / exclude filters. 157 | """ 158 | loader = unittest.TestLoader() 159 | suite = unittest.TestSuite() 160 | for test_factory in self._test_factories: 161 | tests = loader.loadTestsFromTestCase(test_factory) 162 | if self._includes: 163 | tests = [test 164 | for test in tests 165 | if any(re.search(pat, test.id()) 166 | for pat in self._includes)] 167 | if self._excludes: 168 | tests = [test 169 | for test in tests 170 | if not any(re.search(pat, test.id()) 171 | for pat in self._excludes)] 172 | suite.addTests(tests) 173 | return suite 174 | 175 | 176 | class TestResult(unittest.TextTestResult): 177 | 178 | def __init__(self, stream, descriptions, verbosity): 179 | super().__init__(stream, descriptions, verbosity) 180 | self.leaks = [] 181 | 182 | def startTest(self, test): 183 | super().startTest(test) 184 | gc.collect() 185 | 186 | def addSuccess(self, test): 187 | super().addSuccess(test) 188 | gc.collect() 189 | if gc.garbage: 190 | if self.showAll: 191 | self.stream.writeln( 192 | " Warning: test created {} uncollectable " 193 | "object(s).".format(len(gc.garbage))) 194 | # move the uncollectable objects somewhere so we don't see 195 | # them again 196 | self.leaks.append((self.getDescription(test), gc.garbage[:])) 197 | del gc.garbage[:] 198 | 199 | 200 | class TestRunner(unittest.TextTestRunner): 201 | resultclass = TestResult 202 | 203 | def run(self, test): 204 | result = super().run(test) 205 | if result.leaks: 206 | self.stream.writeln("{} 
tests leaks:".format(len(result.leaks))) 207 | for name, leaks in result.leaks: 208 | self.stream.writeln(' '*4 + name + ':') 209 | for leak in leaks: 210 | self.stream.writeln(' '*8 + repr(leak)) 211 | return result 212 | 213 | 214 | def _runtests(args, tests): 215 | v = 0 if args.quiet else args.verbose + 1 216 | runner_factory = TestRunner if args.findleaks else unittest.TextTestRunner 217 | if args.randomize: 218 | randomize_tests(tests, args.seed) 219 | runner = runner_factory(verbosity=v, failfast=args.failfast) 220 | sys.stdout.flush() 221 | sys.stderr.flush() 222 | return runner.run(tests) 223 | 224 | 225 | def runtests(): 226 | # Print all warnings to the stdout. 227 | warnings.simplefilter("always") 228 | 229 | args = ARGS.parse_args() 230 | 231 | if args.coverage and coverage is None: 232 | URL = "bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py" 233 | print(textwrap.dedent(""" 234 | coverage package is not installed. 235 | 236 | To install coverage3 for Python 3, you need: 237 | - Setuptools (https://pypi.python.org/pypi/setuptools) 238 | 239 | What worked for me: 240 | - download {0} 241 | * curl -O https://{0} 242 | - python3 ez_setup.py 243 | - python3 -m easy_install coverage 244 | """.format(URL)).strip()) 245 | sys.exit(1) 246 | 247 | testsdir = os.path.abspath(args.testsdir) 248 | if not os.path.isdir(testsdir): 249 | print("Tests directory is not found: {}\n".format(testsdir)) 250 | ARGS.print_help() 251 | return 252 | 253 | excludes = includes = [] 254 | if args.exclude: 255 | excludes = args.pattern 256 | else: 257 | includes = args.pattern 258 | 259 | v = 0 if args.quiet else args.verbose + 1 260 | failfast = args.failfast 261 | 262 | if args.coverage: 263 | cov = coverage.coverage(branch=True, 264 | source=['asyncio'], 265 | ) 266 | cov.start() 267 | 268 | logger = logging.getLogger() 269 | if v == 0: 270 | level = logging.CRITICAL 271 | elif v == 1: 272 | level = logging.ERROR 273 | elif v == 2: 274 | level = logging.WARNING 275 | 
elif v == 3: 276 | level = logging.INFO 277 | elif v >= 4: 278 | level = logging.DEBUG 279 | logging.basicConfig(level=level) 280 | 281 | finder = TestsFinder(args.testsdir, includes, excludes) 282 | if args.catchbreak: 283 | installHandler() 284 | import asyncio.coroutines 285 | if asyncio.coroutines._DEBUG: 286 | print("Run tests in debug mode") 287 | else: 288 | print("Run tests in release mode") 289 | try: 290 | tests = finder.load_tests() 291 | if args.forever: 292 | while True: 293 | result = _runtests(args, tests) 294 | if not result.wasSuccessful(): 295 | sys.exit(1) 296 | else: 297 | result = _runtests(args, tests) 298 | sys.exit(not result.wasSuccessful()) 299 | finally: 300 | if args.coverage: 301 | cov.stop() 302 | cov.save() 303 | cov.html_report(directory='htmlcov') 304 | print("\nCoverage report:") 305 | cov.report(show_missing=False) 306 | here = os.path.dirname(os.path.abspath(__file__)) 307 | print("\nFor html report:") 308 | print("open file://{}/htmlcov/index.html".format(here)) 309 | 310 | 311 | if __name__ == '__main__': 312 | runtests() 313 | -------------------------------------------------------------------------------- /thirdparty/asyncio/setup.py: -------------------------------------------------------------------------------- 1 | # Release procedure: 2 | # - run tox (to run runtests.py and run_aiotest.py) 3 | # - maybe test examples 4 | # - update version in setup.py 5 | # - hg ci 6 | # - hg tag VERSION 7 | # - hg push 8 | # - run on Linux: python setup.py register sdist upload 9 | # - run on Windows: python release.py VERSION 10 | # - increment version in setup.py 11 | # - hg ci && hg push 12 | 13 | import os 14 | import sys 15 | try: 16 | from setuptools import setup, Extension 17 | except ImportError: 18 | # Use distutils.core as a fallback. 19 | # We won't be able to build the Wheel file on Windows. 
20 | from distutils.core import setup, Extension 21 | 22 | if sys.version_info < (3, 3, 0): 23 | raise RuntimeError("asyncio requires Python 3.3.0+") 24 | 25 | extensions = [] 26 | if os.name == 'nt': 27 | ext = Extension( 28 | 'asyncio._overlapped', ['overlapped.c'], libraries=['ws2_32'], 29 | ) 30 | extensions.append(ext) 31 | 32 | with open("README.rst") as fp: 33 | long_description = fp.read() 34 | 35 | setup( 36 | name="asyncio", 37 | version="3.4.4", 38 | 39 | description="reference implementation of PEP 3156", 40 | long_description=long_description, 41 | url="http://www.python.org/dev/peps/pep-3156/", 42 | 43 | classifiers=[ 44 | "Programming Language :: Python", 45 | "Programming Language :: Python :: 3", 46 | "Programming Language :: Python :: 3.3", 47 | ], 48 | 49 | packages=["asyncio"], 50 | test_suite="runtests.runtests", 51 | 52 | ext_modules=extensions, 53 | ) 54 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/echo.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | if __name__ == '__main__': 4 | while True: 5 | buf = os.read(0, 1024) 6 | if not buf: 7 | break 8 | os.write(1, buf) 9 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/echo2.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | if __name__ == '__main__': 4 | buf = os.read(0, 1024) 5 | os.write(1, b'OUT:'+buf) 6 | os.write(2, b'ERR:'+buf) 7 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/echo3.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | if __name__ == '__main__': 4 | while True: 5 | buf = os.read(0, 1024) 6 | if not buf: 7 | break 8 | try: 9 | os.write(1, b'OUT:'+buf) 10 | except OSError as ex: 11 | os.write(2, b'ERR:' + 
ex.__class__.__name__.encode('ascii')) 12 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/keycert3.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN PRIVATE KEY----- 2 | MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBAMLgD0kAKDb5cFyP 3 | jbwNfR5CtewdXC+kMXAWD8DLxiTTvhMW7qVnlwOm36mZlszHKvsRf05lT4pegiFM 4 | 9z2j1OlaN+ci/X7NU22TNN6crYSiN77FjYJP464j876ndSxyD+rzys386T+1r1aZ 5 | aggEdkj1TsSsv1zWIYKlPIjlvhuxAgMBAAECgYA0aH+T2Vf3WOPv8KdkcJg6gCRe 6 | yJKXOWgWRcicx/CUzOEsTxmFIDPLxqAWA3k7v0B+3vjGw5Y9lycV/5XqXNoQI14j 7 | y09iNsumds13u5AKkGdTJnZhQ7UKdoVHfuP44ZdOv/rJ5/VD6F4zWywpe90pcbK+ 8 | AWDVtusgGQBSieEl1QJBAOyVrUG5l2yoUBtd2zr/kiGm/DYyXlIthQO/A3/LngDW 9 | 5/ydGxVsT7lAVOgCsoT+0L4efTh90PjzW8LPQrPBWVMCQQDS3h/FtYYd5lfz+FNL 10 | 9CEe1F1w9l8P749uNUD0g317zv1tatIqVCsQWHfVHNdVvfQ+vSFw38OORO00Xqs9 11 | 1GJrAkBkoXXEkxCZoy4PteheO/8IWWLGGr6L7di6MzFl1lIqwT6D8L9oaV2vynFT 12 | DnKop0pa09Unhjyw57KMNmSE2SUJAkEArloTEzpgRmCq4IK2/NpCeGdHS5uqRlbh 13 | 1VIa/xGps7EWQl5Mn8swQDel/YP3WGHTjfx7pgSegQfkyaRtGpZ9OQJAa9Vumj8m 14 | JAAtI0Bnga8hgQx7BhTQY4CadDxyiRGOGYhwUzYVCqkb2sbVRH9HnwUaJT7cWBY3 15 | RnJdHOMXWem7/w== 16 | -----END PRIVATE KEY----- 17 | Certificate: 18 | Data: 19 | Version: 1 (0x0) 20 | Serial Number: 12723342612721443281 (0xb09264b1f2da21d1) 21 | Signature Algorithm: sha1WithRSAEncryption 22 | Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server 23 | Validity 24 | Not Before: Jan 4 19:47:07 2013 GMT 25 | Not After : Nov 13 19:47:07 2022 GMT 26 | Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost 27 | Subject Public Key Info: 28 | Public Key Algorithm: rsaEncryption 29 | Public-Key: (1024 bit) 30 | Modulus: 31 | 00:c2:e0:0f:49:00:28:36:f9:70:5c:8f:8d:bc:0d: 32 | 7d:1e:42:b5:ec:1d:5c:2f:a4:31:70:16:0f:c0:cb: 33 | c6:24:d3:be:13:16:ee:a5:67:97:03:a6:df:a9:99: 34 | 96:cc:c7:2a:fb:11:7f:4e:65:4f:8a:5e:82:21:4c: 35 | 
f7:3d:a3:d4:e9:5a:37:e7:22:fd:7e:cd:53:6d:93: 36 | 34:de:9c:ad:84:a2:37:be:c5:8d:82:4f:e3:ae:23: 37 | f3:be:a7:75:2c:72:0f:ea:f3:ca:cd:fc:e9:3f:b5: 38 | af:56:99:6a:08:04:76:48:f5:4e:c4:ac:bf:5c:d6: 39 | 21:82:a5:3c:88:e5:be:1b:b1 40 | Exponent: 65537 (0x10001) 41 | Signature Algorithm: sha1WithRSAEncryption 42 | 2f:42:5f:a3:09:2c:fa:51:88:c7:37:7f:ea:0e:63:f0:a2:9a: 43 | e5:5a:e2:c8:20:f0:3f:60:bc:c8:0f:b6:c6:76:ce:db:83:93: 44 | f5:a3:33:67:01:8e:04:cd:00:9a:73:fd:f3:35:86:fa:d7:13: 45 | e2:46:c6:9d:c0:29:53:d4:a9:90:b8:77:4b:e6:83:76:e4:92: 46 | d6:9c:50:cf:43:d0:c6:01:77:61:9a:de:9b:70:f7:72:cd:59: 47 | 00:31:69:d9:b4:ca:06:9c:6d:c3:c7:80:8c:68:e6:b5:a2:f8: 48 | ef:1d:bb:16:9f:77:77:ef:87:62:22:9b:4d:69:a4:3a:1a:f1: 49 | 21:5e:8c:32:ac:92:fd:15:6b:18:c2:7f:15:0d:98:30:ca:75: 50 | 8f:1a:71:df:da:1d:b2:ef:9a:e8:2d:2e:02:fd:4a:3c:aa:96: 51 | 0b:06:5d:35:b3:3d:24:87:4b:e0:b0:58:60:2f:45:ac:2e:48: 52 | 8a:b0:99:10:65:27:ff:cc:b1:d8:fd:bd:26:6b:b9:0c:05:2a: 53 | f4:45:63:35:51:07:ed:83:85:fe:6f:69:cb:bb:40:a8:ae:b6: 54 | 3b:56:4a:2d:a4:ed:6d:11:2c:4d:ed:17:24:fd:47:bc:d3:41: 55 | a2:d3:06:fe:0c:90:d8:d8:94:26:c4:ff:cc:a1:d8:42:77:eb: 56 | fc:a9:94:71 57 | -----BEGIN CERTIFICATE----- 58 | MIICpDCCAYwCCQCwkmSx8toh0TANBgkqhkiG9w0BAQUFADBNMQswCQYDVQQGEwJY 59 | WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV 60 | BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTMwMTA0MTk0NzA3WhcNMjIxMTEzMTk0NzA3 61 | WjBfMQswCQYDVQQGEwJYWTEXMBUGA1UEBxMOQ2FzdGxlIEFudGhyYXgxIzAhBgNV 62 | BAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMRIwEAYDVQQDEwlsb2NhbGhv 63 | c3QwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMLgD0kAKDb5cFyPjbwNfR5C 64 | tewdXC+kMXAWD8DLxiTTvhMW7qVnlwOm36mZlszHKvsRf05lT4pegiFM9z2j1Ola 65 | N+ci/X7NU22TNN6crYSiN77FjYJP464j876ndSxyD+rzys386T+1r1aZaggEdkj1 66 | TsSsv1zWIYKlPIjlvhuxAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAC9CX6MJLPpR 67 | iMc3f+oOY/CimuVa4sgg8D9gvMgPtsZ2ztuDk/WjM2cBjgTNAJpz/fM1hvrXE+JG 68 | xp3AKVPUqZC4d0vmg3bkktacUM9D0MYBd2Ga3ptw93LNWQAxadm0ygacbcPHgIxo 69 | 
5rWi+O8duxafd3fvh2Iim01ppDoa8SFejDKskv0VaxjCfxUNmDDKdY8acd/aHbLv 70 | mugtLgL9SjyqlgsGXTWzPSSHS+CwWGAvRawuSIqwmRBlJ//Msdj9vSZruQwFKvRF 71 | YzVRB+2Dhf5vacu7QKiutjtWSi2k7W0RLE3tFyT9R7zTQaLTBv4MkNjYlCbE/8yh 72 | 2EJ36/yplHE= 73 | -----END CERTIFICATE----- 74 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/pycacert.pem: -------------------------------------------------------------------------------- 1 | Certificate: 2 | Data: 3 | Version: 3 (0x2) 4 | Serial Number: 12723342612721443280 (0xb09264b1f2da21d0) 5 | Signature Algorithm: sha1WithRSAEncryption 6 | Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server 7 | Validity 8 | Not Before: Jan 4 19:47:07 2013 GMT 9 | Not After : Jan 2 19:47:07 2023 GMT 10 | Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server 11 | Subject Public Key Info: 12 | Public Key Algorithm: rsaEncryption 13 | Public-Key: (2048 bit) 14 | Modulus: 15 | 00:e7:de:e9:e3:0c:9f:00:b6:a1:fd:2b:5b:96:d2: 16 | 6f:cc:e0:be:86:b9:20:5e:ec:03:7a:55:ab:ea:a4: 17 | e9:f9:49:85:d2:66:d5:ed:c7:7a:ea:56:8e:2d:8f: 18 | e7:42:e2:62:28:a9:9f:d6:1b:8e:eb:b5:b4:9c:9f: 19 | 14:ab:df:e6:94:8b:76:1d:3e:6d:24:61:ed:0c:bf: 20 | 00:8a:61:0c:df:5c:c8:36:73:16:00:cd:47:ba:6d: 21 | a4:a4:74:88:83:23:0a:19:fc:09:a7:3c:4a:4b:d3: 22 | e7:1d:2d:e4:ea:4c:54:21:f3:26:db:89:37:18:d4: 23 | 02:bb:40:32:5f:a4:ff:2d:1c:f7:d4:bb:ec:8e:cf: 24 | 5c:82:ac:e6:7c:08:6c:48:85:61:07:7f:25:e0:5c: 25 | e0:bc:34:5f:e0:b9:04:47:75:c8:47:0b:8d:bc:d6: 26 | c8:68:5f:33:83:62:d2:20:44:35:b1:ad:81:1a:8a: 27 | cd:bc:35:b0:5c:8b:47:d6:18:e9:9c:18:97:cc:01: 28 | 3c:29:cc:e8:1e:e4:e4:c1:b8:de:e7:c2:11:18:87: 29 | 5a:93:34:d8:a6:25:f7:14:71:eb:e4:21:a2:d2:0f: 30 | 2e:2e:d4:62:00:35:d3:d6:ef:5c:60:4b:4c:a9:14: 31 | e2:dd:15:58:46:37:33:26:b7:e7:2e:5d:ed:42:e4: 32 | c5:4d 33 | Exponent: 65537 (0x10001) 34 | X509v3 extensions: 35 | X509v3 Subject Key Identifier: 36 | BC:DD:62:D9:76:DA:1B:D2:54:6B:CF:E0:66:9B:1E:1E:7B:56:0C:0B 37 | 
X509v3 Authority Key Identifier: 38 | keyid:BC:DD:62:D9:76:DA:1B:D2:54:6B:CF:E0:66:9B:1E:1E:7B:56:0C:0B 39 | 40 | X509v3 Basic Constraints: 41 | CA:TRUE 42 | Signature Algorithm: sha1WithRSAEncryption 43 | 7d:0a:f5:cb:8d:d3:5d:bd:99:8e:f8:2b:0f:ba:eb:c2:d9:a6: 44 | 27:4f:2e:7b:2f:0e:64:d8:1c:35:50:4e:ee:fc:90:b9:8d:6d: 45 | a8:c5:c6:06:b0:af:f3:2d:bf:3b:b8:42:07:dd:18:7d:6d:95: 46 | 54:57:85:18:60:47:2f:eb:78:1b:f9:e8:17:fd:5a:0d:87:17: 47 | 28:ac:4c:6a:e6:bc:29:f4:f4:55:70:29:42:de:85:ea:ab:6c: 48 | 23:06:64:30:75:02:8e:53:bc:5e:01:33:37:cc:1e:cd:b8:a4: 49 | fd:ca:e4:5f:65:3b:83:1c:86:f1:55:02:a0:3a:8f:db:91:b7: 50 | 40:14:b4:e7:8d:d2:ee:73:ba:e3:e5:34:2d:bc:94:6f:4e:24: 51 | 06:f7:5f:8b:0e:a7:8e:6b:de:5e:75:f4:32:9a:50:b1:44:33: 52 | 9a:d0:05:e2:78:82:ff:db:da:8a:63:eb:a9:dd:d1:bf:a0:61: 53 | ad:e3:9e:8a:24:5d:62:0e:e7:4c:91:7f:ef:df:34:36:3b:2f: 54 | 5d:f5:84:b2:2f:c4:6d:93:96:1a:6f:30:28:f1:da:12:9a:64: 55 | b4:40:33:1d:bd:de:2b:53:a8:ea:be:d6:bc:4e:96:f5:44:fb: 56 | 32:18:ae:d5:1f:f6:69:af:b6:4e:7b:1d:58:ec:3b:a9:53:a3: 57 | 5e:58:c8:9e 58 | -----BEGIN CERTIFICATE----- 59 | MIIDbTCCAlWgAwIBAgIJALCSZLHy2iHQMA0GCSqGSIb3DQEBBQUAME0xCzAJBgNV 60 | BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 61 | MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xMzAxMDQxOTQ3MDdaFw0yMzAxMDIx 62 | OTQ3MDdaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg 63 | Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCASIwDQYJKoZI 64 | hvcNAQEBBQADggEPADCCAQoCggEBAOfe6eMMnwC2of0rW5bSb8zgvoa5IF7sA3pV 65 | q+qk6flJhdJm1e3HeupWji2P50LiYiipn9Ybjuu1tJyfFKvf5pSLdh0+bSRh7Qy/ 66 | AIphDN9cyDZzFgDNR7ptpKR0iIMjChn8Cac8SkvT5x0t5OpMVCHzJtuJNxjUArtA 67 | Ml+k/y0c99S77I7PXIKs5nwIbEiFYQd/JeBc4Lw0X+C5BEd1yEcLjbzWyGhfM4Ni 68 | 0iBENbGtgRqKzbw1sFyLR9YY6ZwYl8wBPCnM6B7k5MG43ufCERiHWpM02KYl9xRx 69 | 6+QhotIPLi7UYgA109bvXGBLTKkU4t0VWEY3Mya35y5d7ULkxU0CAwEAAaNQME4w 70 | HQYDVR0OBBYEFLzdYtl22hvSVGvP4GabHh57VgwLMB8GA1UdIwQYMBaAFLzdYtl2 71 | 
2hvSVGvP4GabHh57VgwLMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB 72 | AH0K9cuN0129mY74Kw+668LZpidPLnsvDmTYHDVQTu78kLmNbajFxgawr/Mtvzu4 73 | QgfdGH1tlVRXhRhgRy/reBv56Bf9Wg2HFyisTGrmvCn09FVwKULeheqrbCMGZDB1 74 | Ao5TvF4BMzfMHs24pP3K5F9lO4MchvFVAqA6j9uRt0AUtOeN0u5zuuPlNC28lG9O 75 | JAb3X4sOp45r3l519DKaULFEM5rQBeJ4gv/b2opj66nd0b+gYa3jnookXWIO50yR 76 | f+/fNDY7L131hLIvxG2TlhpvMCjx2hKaZLRAMx293itTqOq+1rxOlvVE+zIYrtUf 77 | 9mmvtk57HVjsO6lTo15YyJ4= 78 | -----END CERTIFICATE----- 79 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/sample.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIICMzCCAZwCCQDFl4ys0fU7iTANBgkqhkiG9w0BAQUFADBeMQswCQYDVQQGEwJV 3 | UzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuLUZyYW5jaXNjbzEi 4 | MCAGA1UECgwZUHl0aG9uIFNvZnR3YXJlIEZvbmRhdGlvbjAeFw0xMzAzMTgyMDA3 5 | MjhaFw0yMzAzMTYyMDA3MjhaMF4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxp 6 | Zm9ybmlhMRYwFAYDVQQHDA1TYW4tRnJhbmNpc2NvMSIwIAYDVQQKDBlQeXRob24g 7 | U29mdHdhcmUgRm9uZGF0aW9uMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCn 8 | t3s+J7L0xP/YdAQOacpPi9phlrzKZhcXL3XMu2LCUg2fNJpx/47Vc5TZSaO11uO7 9 | gdwVz3Z7Q2epAgwo59JLffLt5fia8+a/SlPweI/j4+wcIIIiqusnLfpqR8cIAavg 10 | Z06cLYCDvb9wMlheIvSJY12skc1nnphWS2YJ0Xm6uQIDAQABMA0GCSqGSIb3DQEB 11 | BQUAA4GBAE9PknG6pv72+5z/gsDGYy8sK5UNkbWSNr4i4e5lxVsF03+/M71H+3AB 12 | MxVX4+A+Vlk2fmU+BrdHIIUE0r1dDcO3josQ9hc9OJpp5VLSQFP8VeuJCmzYPp9I 13 | I8WbW93cnXnChTrYQVdgVoFdv7GE9YgU7NYkrGIM0nZl1/f/bHPB 14 | -----END CERTIFICATE----- 15 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/sample.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIICXQIBAAKBgQCnt3s+J7L0xP/YdAQOacpPi9phlrzKZhcXL3XMu2LCUg2fNJpx 3 | /47Vc5TZSaO11uO7gdwVz3Z7Q2epAgwo59JLffLt5fia8+a/SlPweI/j4+wcIIIi 4 | 
qusnLfpqR8cIAavgZ06cLYCDvb9wMlheIvSJY12skc1nnphWS2YJ0Xm6uQIDAQAB 5 | AoGABfm8k19Yue3W68BecKEGS0VBV57GRTPT+MiBGvVGNIQ15gk6w3sGfMZsdD1y 6 | bsUkQgcDb2d/4i5poBTpl/+Cd41V+c20IC/sSl5X1IEreHMKSLhy/uyjyiyfXlP1 7 | iXhToFCgLWwENWc8LzfUV8vuAV5WG6oL9bnudWzZxeqx8V0CQQDR7xwVj6LN70Eb 8 | DUhSKLkusmFw5Gk9NJ/7wZ4eHg4B8c9KNVvSlLCLhcsVTQXuqYeFpOqytI45SneP 9 | lr0vrvsDAkEAzITYiXu6ox5huDCG7imX2W9CAYuX638urLxBqBXMS7GqBzojD6RL 10 | 21Q8oPwJWJquERa3HDScq1deiQbM9uKIkwJBAIa1PLslGN216Xv3UPHPScyKD/aF 11 | ynXIv+OnANPoiyp6RH4ksQ/18zcEGiVH8EeNpvV9tlAHhb+DZibQHgNr74sCQQC0 12 | zhToplu/bVKSlUQUNO0rqrI9z30FErDewKeCw5KSsIRSU1E/uM3fHr9iyq4wiL6u 13 | GNjUtKZ0y46lsT9uW6LFAkB5eqeEQnshAdr3X5GykWHJ8DDGBXPPn6Rce1NX4RSq 14 | V9khG2z1bFyfo+hMqpYnF2k32hVq3E54RS8YYnwBsVof 15 | -----END RSA PRIVATE KEY----- 16 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/ssl_cert.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIICVDCCAb2gAwIBAgIJANfHOBkZr8JOMA0GCSqGSIb3DQEBBQUAMF8xCzAJBgNV 3 | BAYTAlhZMRcwFQYDVQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9u 4 | IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDAeFw0xMDEw 5 | MDgyMzAxNTZaFw0yMDEwMDUyMzAxNTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH 6 | Ew5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9uIFNvZnR3YXJlIEZvdW5k 7 | YXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAw 8 | gYkCgYEA21vT5isq7F68amYuuNpSFlKDPrMUCa4YWYqZRt2OZ+/3NKaZ2xAiSwr7 9 | 6MrQF70t5nLbSPpqE5+5VrS58SY+g/sXLiFd6AplH1wJZwh78DofbFYXUggktFMt 10 | pTyiX8jtP66bkcPkDADA089RI1TQR6Ca+n7HFa7c1fabVV6i3zkCAwEAAaMYMBYw 11 | FAYDVR0RBA0wC4IJbG9jYWxob3N0MA0GCSqGSIb3DQEBBQUAA4GBAHPctQBEQ4wd 12 | BJ6+JcpIraopLn8BGhbjNWj40mmRqWB/NAWF6M5ne7KpGAu7tLeG4hb1zLaldK8G 13 | lxy2GPSRF6LFS48dpEj2HbMv2nvv6xxalDMJ9+DicWgAKTQ6bcX2j3GUkCR0g/T1 14 | CRlNBAAlvhKzO7Clpf9l0YKBEfraJByX 15 | -----END CERTIFICATE----- 16 | 
-------------------------------------------------------------------------------- /thirdparty/asyncio/tests/ssl_key.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN PRIVATE KEY----- 2 | MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBANtb0+YrKuxevGpm 3 | LrjaUhZSgz6zFAmuGFmKmUbdjmfv9zSmmdsQIksK++jK0Be9LeZy20j6ahOfuVa0 4 | ufEmPoP7Fy4hXegKZR9cCWcIe/A6H2xWF1IIJLRTLaU8ol/I7T+um5HD5AwAwNPP 5 | USNU0Eegmvp+xxWu3NX2m1Veot85AgMBAAECgYA3ZdZ673X0oexFlq7AAmrutkHt 6 | CL7LvwrpOiaBjhyTxTeSNWzvtQBkIU8DOI0bIazA4UreAFffwtvEuPmonDb3F+Iq 7 | SMAu42XcGyVZEl+gHlTPU9XRX7nTOXVt+MlRRRxL6t9GkGfUAXI3XxJDXW3c0vBK 8 | UL9xqD8cORXOfE06rQJBAP8mEX1ERkR64Ptsoe4281vjTlNfIbs7NMPkUnrn9N/Y 9 | BLhjNIfQ3HFZG8BTMLfX7kCS9D593DW5tV4Z9BP/c6cCQQDcFzCcVArNh2JSywOQ 10 | ZfTfRbJg/Z5Lt9Fkngv1meeGNPgIMLN8Sg679pAOOWmzdMO3V706rNPzSVMME7E5 11 | oPIfAkEA8pDddarP5tCvTTgUpmTFbakm0KoTZm2+FzHcnA4jRh+XNTjTOv98Y6Ik 12 | eO5d1ZnKXseWvkZncQgxfdnMqqpj5wJAcNq/RVne1DbYlwWchT2Si65MYmmJ8t+F 13 | 0mcsULqjOnEMwf5e+ptq5LzwbyrHZYq5FNk7ocufPv/ZQrcSSC+cFwJBAKvOJByS 14 | x56qyGeZLOQlWS2JS3KJo59XuLFGqcbgN9Om9xFa41Yb4N9NvplFivsvZdw3m1Q/ 15 | SPIXQuT8RMPDVNQ= 16 | -----END PRIVATE KEY----- 17 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/test_pep492.py: -------------------------------------------------------------------------------- 1 | """Tests support for new syntax introduced by PEP 492.""" 2 | 3 | import collections.abc 4 | import types 5 | import unittest 6 | 7 | try: 8 | from test import support 9 | except ImportError: 10 | from asyncio import test_support as support 11 | from unittest import mock 12 | 13 | import asyncio 14 | from asyncio import test_utils 15 | 16 | 17 | class BaseTest(test_utils.TestCase): 18 | 19 | def setUp(self): 20 | super().setUp() 21 | self.loop = asyncio.BaseEventLoop() 22 | self.loop._process_events = mock.Mock() 23 | self.loop._selector = mock.Mock() 24 | 
self.loop._selector.select.return_value = () 25 | self.set_event_loop(self.loop) 26 | 27 | 28 | class LockTests(BaseTest): 29 | 30 | def test_context_manager_async_with(self): 31 | primitives = [ 32 | asyncio.Lock(loop=self.loop), 33 | asyncio.Condition(loop=self.loop), 34 | asyncio.Semaphore(loop=self.loop), 35 | asyncio.BoundedSemaphore(loop=self.loop), 36 | ] 37 | 38 | async def test(lock): 39 | await asyncio.sleep(0.01, loop=self.loop) 40 | self.assertFalse(lock.locked()) 41 | async with lock as _lock: 42 | self.assertIs(_lock, None) 43 | self.assertTrue(lock.locked()) 44 | await asyncio.sleep(0.01, loop=self.loop) 45 | self.assertTrue(lock.locked()) 46 | self.assertFalse(lock.locked()) 47 | 48 | for primitive in primitives: 49 | self.loop.run_until_complete(test(primitive)) 50 | self.assertFalse(primitive.locked()) 51 | 52 | def test_context_manager_with_await(self): 53 | primitives = [ 54 | asyncio.Lock(loop=self.loop), 55 | asyncio.Condition(loop=self.loop), 56 | asyncio.Semaphore(loop=self.loop), 57 | asyncio.BoundedSemaphore(loop=self.loop), 58 | ] 59 | 60 | async def test(lock): 61 | await asyncio.sleep(0.01, loop=self.loop) 62 | self.assertFalse(lock.locked()) 63 | with await lock as _lock: 64 | self.assertIs(_lock, None) 65 | self.assertTrue(lock.locked()) 66 | await asyncio.sleep(0.01, loop=self.loop) 67 | self.assertTrue(lock.locked()) 68 | self.assertFalse(lock.locked()) 69 | 70 | for primitive in primitives: 71 | self.loop.run_until_complete(test(primitive)) 72 | self.assertFalse(primitive.locked()) 73 | 74 | 75 | class StreamReaderTests(BaseTest): 76 | 77 | def test_readline(self): 78 | DATA = b'line1\nline2\nline3' 79 | 80 | stream = asyncio.StreamReader(loop=self.loop) 81 | stream.feed_data(DATA) 82 | stream.feed_eof() 83 | 84 | async def reader(): 85 | data = [] 86 | async for line in stream: 87 | data.append(line) 88 | return data 89 | 90 | data = self.loop.run_until_complete(reader()) 91 | self.assertEqual(data, [b'line1\n', b'line2\n', 
b'line3']) 92 | 93 | 94 | class CoroutineTests(BaseTest): 95 | 96 | def test_iscoroutine(self): 97 | async def foo(): pass 98 | 99 | f = foo() 100 | try: 101 | self.assertTrue(asyncio.iscoroutine(f)) 102 | finally: 103 | f.close() # silence warning 104 | 105 | # Test that asyncio.iscoroutine() uses collections.abc.Coroutine 106 | class FakeCoro: 107 | def send(self, value): pass 108 | def throw(self, typ, val=None, tb=None): pass 109 | def close(self): pass 110 | def __await__(self): yield 111 | 112 | self.assertTrue(asyncio.iscoroutine(FakeCoro())) 113 | 114 | def test_iscoroutinefunction(self): 115 | async def foo(): pass 116 | self.assertTrue(asyncio.iscoroutinefunction(foo)) 117 | 118 | def test_function_returning_awaitable(self): 119 | class Awaitable: 120 | def __await__(self): 121 | return ('spam',) 122 | 123 | @asyncio.coroutine 124 | def func(): 125 | return Awaitable() 126 | 127 | coro = func() 128 | self.assertEqual(coro.send(None), 'spam') 129 | coro.close() 130 | 131 | def test_async_def_coroutines(self): 132 | async def bar(): 133 | return 'spam' 134 | async def foo(): 135 | return await bar() 136 | 137 | # production mode 138 | data = self.loop.run_until_complete(foo()) 139 | self.assertEqual(data, 'spam') 140 | 141 | # debug mode 142 | self.loop.set_debug(True) 143 | data = self.loop.run_until_complete(foo()) 144 | self.assertEqual(data, 'spam') 145 | 146 | @mock.patch('asyncio.coroutines.logger') 147 | def test_async_def_wrapped(self, m_log): 148 | async def foo(): 149 | pass 150 | async def start(): 151 | foo_coro = foo() 152 | self.assertRegex( 153 | repr(foo_coro), 154 | r'<CoroWrapper .*\.foo\(\) running') 155 | 156 | with support.check_warnings((r'.*foo.*was never', 157 | RuntimeWarning)): 158 | foo_coro = None 159 | support.gc_collect() 160 | self.assertTrue(m_log.error.called) 161 | message = m_log.error.call_args[0][0] 162 | self.assertRegex(message, 163 | r'CoroWrapper.*foo.*was never') 164 | 165 | self.loop.set_debug(True) 166 | self.loop.run_until_complete(start()) 167
| 168 | async def start(): 169 | foo_coro = foo() 170 | task = asyncio.ensure_future(foo_coro, loop=self.loop) 171 | self.assertRegex(repr(task), r'Task.*foo.*running') 172 | 173 | self.loop.run_until_complete(start()) 174 | 175 | 176 | def test_types_coroutine(self): 177 | def gen(): 178 | yield from () 179 | return 'spam' 180 | 181 | @types.coroutine 182 | def func(): 183 | return gen() 184 | 185 | async def coro(): 186 | wrapper = func() 187 | self.assertIsInstance(wrapper, types._GeneratorWrapper) 188 | return await wrapper 189 | 190 | data = self.loop.run_until_complete(coro()) 191 | self.assertEqual(data, 'spam') 192 | 193 | def test_task_print_stack(self): 194 | T = None 195 | 196 | async def foo(): 197 | f = T.get_stack(limit=1) 198 | try: 199 | self.assertEqual(f[0].f_code.co_name, 'foo') 200 | finally: 201 | f = None 202 | 203 | async def runner(): 204 | nonlocal T 205 | T = asyncio.ensure_future(foo(), loop=self.loop) 206 | await T 207 | 208 | self.loop.run_until_complete(runner()) 209 | 210 | def test_double_await(self): 211 | async def afunc(): 212 | await asyncio.sleep(0.1, loop=self.loop) 213 | 214 | async def runner(): 215 | coro = afunc() 216 | t = asyncio.Task(coro, loop=self.loop) 217 | try: 218 | await asyncio.sleep(0, loop=self.loop) 219 | await coro 220 | finally: 221 | t.cancel() 222 | 223 | self.loop.set_debug(True) 224 | with self.assertRaisesRegex( 225 | RuntimeError, 226 | r'Cannot await.*test_double_await.*\bafunc\b.*while.*\bsleep\b'): 227 | 228 | self.loop.run_until_complete(runner()) 229 | 230 | 231 | if __name__ == '__main__': 232 | unittest.main() 233 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/test_sslproto.py: -------------------------------------------------------------------------------- 1 | """Tests for asyncio/sslproto.py.""" 2 | 3 | import logging 4 | import unittest 5 | from unittest import mock 6 | try: 7 | import ssl 8 | except ImportError: 9 | ssl = None 10 
| 11 | import asyncio 12 | from asyncio import log 13 | from asyncio import sslproto 14 | from asyncio import test_utils 15 | 16 | 17 | @unittest.skipIf(ssl is None, 'No ssl module') 18 | class SslProtoHandshakeTests(test_utils.TestCase): 19 | 20 | def setUp(self): 21 | super().setUp() 22 | self.loop = asyncio.new_event_loop() 23 | self.set_event_loop(self.loop) 24 | 25 | def ssl_protocol(self, waiter=None): 26 | sslcontext = test_utils.dummy_ssl_context() 27 | app_proto = asyncio.Protocol() 28 | proto = sslproto.SSLProtocol(self.loop, app_proto, sslcontext, waiter) 29 | self.assertIs(proto._app_transport.get_protocol(), app_proto) 30 | self.addCleanup(proto._app_transport.close) 31 | return proto 32 | 33 | def connection_made(self, ssl_proto, do_handshake=None): 34 | transport = mock.Mock() 35 | sslpipe = mock.Mock() 36 | sslpipe.shutdown.return_value = b'' 37 | if do_handshake: 38 | sslpipe.do_handshake.side_effect = do_handshake 39 | else: 40 | def mock_handshake(callback): 41 | return [] 42 | sslpipe.do_handshake.side_effect = mock_handshake 43 | with mock.patch('asyncio.sslproto._SSLPipe', return_value=sslpipe): 44 | ssl_proto.connection_made(transport) 45 | 46 | def test_cancel_handshake(self): 47 | # Python issue #23197: cancelling a handshake must not raise an 48 | # exception or log an error, even if the handshake failed 49 | waiter = asyncio.Future(loop=self.loop) 50 | ssl_proto = self.ssl_protocol(waiter) 51 | handshake_fut = asyncio.Future(loop=self.loop) 52 | 53 | def do_handshake(callback): 54 | exc = Exception() 55 | callback(exc) 56 | handshake_fut.set_result(None) 57 | return [] 58 | 59 | waiter.cancel() 60 | self.connection_made(ssl_proto, do_handshake) 61 | 62 | with test_utils.disable_logger(): 63 | self.loop.run_until_complete(handshake_fut) 64 | 65 | def test_eof_received_waiter(self): 66 | waiter = asyncio.Future(loop=self.loop) 67 | ssl_proto = self.ssl_protocol(waiter) 68 | self.connection_made(ssl_proto) 69 | ssl_proto.eof_received() 70 | 
test_utils.run_briefly(self.loop) 71 | self.assertIsInstance(waiter.exception(), ConnectionResetError) 72 | 73 | def test_fatal_error_no_name_error(self): 74 | # From issue #363. 75 | # _fatal_error() generates a NameError if sslproto.py 76 | # does not import base_events. 77 | waiter = asyncio.Future(loop=self.loop) 78 | ssl_proto = self.ssl_protocol(waiter) 79 | # Temporarily turn off error logging so as not to spoil test output. 80 | log_level = log.logger.getEffectiveLevel() 81 | log.logger.setLevel(logging.FATAL) 82 | try: 83 | ssl_proto._fatal_error(None) 84 | finally: 85 | # Restore error logging. 86 | log.logger.setLevel(log_level) 87 | 88 | if __name__ == '__main__': 89 | unittest.main() 90 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/test_transports.py: -------------------------------------------------------------------------------- 1 | """Tests for transports.py.""" 2 | 3 | import unittest 4 | from unittest import mock 5 | 6 | import asyncio 7 | from asyncio import transports 8 | 9 | 10 | class TransportTests(unittest.TestCase): 11 | 12 | def test_ctor_extra_is_none(self): 13 | transport = asyncio.Transport() 14 | self.assertEqual(transport._extra, {}) 15 | 16 | def test_get_extra_info(self): 17 | transport = asyncio.Transport({'extra': 'info'}) 18 | self.assertEqual('info', transport.get_extra_info('extra')) 19 | self.assertIsNone(transport.get_extra_info('unknown')) 20 | 21 | default = object() 22 | self.assertIs(default, transport.get_extra_info('unknown', default)) 23 | 24 | def test_writelines(self): 25 | transport = asyncio.Transport() 26 | transport.write = mock.Mock() 27 | 28 | transport.writelines([b'line1', 29 | bytearray(b'line2'), 30 | memoryview(b'line3')]) 31 | self.assertEqual(1, transport.write.call_count) 32 | transport.write.assert_called_with(b'line1line2line3') 33 | 34 | def test_not_implemented(self): 35 | transport = asyncio.Transport() 36 | 37 | 
self.assertRaises(NotImplementedError, 38 | transport.set_write_buffer_limits) 39 | self.assertRaises(NotImplementedError, transport.get_write_buffer_size) 40 | self.assertRaises(NotImplementedError, transport.write, 'data') 41 | self.assertRaises(NotImplementedError, transport.write_eof) 42 | self.assertRaises(NotImplementedError, transport.can_write_eof) 43 | self.assertRaises(NotImplementedError, transport.pause_reading) 44 | self.assertRaises(NotImplementedError, transport.resume_reading) 45 | self.assertRaises(NotImplementedError, transport.close) 46 | self.assertRaises(NotImplementedError, transport.abort) 47 | 48 | def test_dgram_not_implemented(self): 49 | transport = asyncio.DatagramTransport() 50 | 51 | self.assertRaises(NotImplementedError, transport.sendto, 'data') 52 | self.assertRaises(NotImplementedError, transport.abort) 53 | 54 | def test_subprocess_transport_not_implemented(self): 55 | transport = asyncio.SubprocessTransport() 56 | 57 | self.assertRaises(NotImplementedError, transport.get_pid) 58 | self.assertRaises(NotImplementedError, transport.get_returncode) 59 | self.assertRaises(NotImplementedError, transport.get_pipe_transport, 1) 60 | self.assertRaises(NotImplementedError, transport.send_signal, 1) 61 | self.assertRaises(NotImplementedError, transport.terminate) 62 | self.assertRaises(NotImplementedError, transport.kill) 63 | 64 | def test_flowcontrol_mixin_set_write_limits(self): 65 | 66 | class MyTransport(transports._FlowControlMixin, 67 | transports.Transport): 68 | 69 | def get_write_buffer_size(self): 70 | return 512 71 | 72 | loop = mock.Mock() 73 | transport = MyTransport(loop=loop) 74 | transport._protocol = mock.Mock() 75 | 76 | self.assertFalse(transport._protocol_paused) 77 | 78 | with self.assertRaisesRegex(ValueError, 'high.*must be >= low'): 79 | transport.set_write_buffer_limits(high=0, low=1) 80 | 81 | transport.set_write_buffer_limits(high=1024, low=128) 82 | self.assertFalse(transport._protocol_paused) 83 | 
self.assertEqual(transport.get_write_buffer_limits(), (128, 1024)) 84 | 85 | transport.set_write_buffer_limits(high=256, low=128) 86 | self.assertTrue(transport._protocol_paused) 87 | self.assertEqual(transport.get_write_buffer_limits(), (128, 256)) 88 | 89 | 90 | if __name__ == '__main__': 91 | unittest.main() 92 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/test_windows_events.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import unittest 4 | from unittest import mock 5 | 6 | if sys.platform != 'win32': 7 | raise unittest.SkipTest('Windows only') 8 | 9 | import _winapi 10 | 11 | import asyncio 12 | from asyncio import _overlapped 13 | from asyncio import test_utils 14 | from asyncio import windows_events 15 | 16 | 17 | class UpperProto(asyncio.Protocol): 18 | def __init__(self): 19 | self.buf = [] 20 | 21 | def connection_made(self, trans): 22 | self.trans = trans 23 | 24 | def data_received(self, data): 25 | self.buf.append(data) 26 | if b'\n' in data: 27 | self.trans.write(b''.join(self.buf).upper()) 28 | self.trans.close() 29 | 30 | 31 | class ProactorTests(test_utils.TestCase): 32 | 33 | def setUp(self): 34 | super().setUp() 35 | self.loop = asyncio.ProactorEventLoop() 36 | self.set_event_loop(self.loop) 37 | 38 | def test_close(self): 39 | a, b = self.loop._socketpair() 40 | trans = self.loop._make_socket_transport(a, asyncio.Protocol()) 41 | f = asyncio.ensure_future(self.loop.sock_recv(b, 100)) 42 | trans.close() 43 | self.loop.run_until_complete(f) 44 | self.assertEqual(f.result(), b'') 45 | b.close() 46 | 47 | def test_double_bind(self): 48 | ADDRESS = r'\\.\pipe\test_double_bind-%s' % os.getpid() 49 | server1 = windows_events.PipeServer(ADDRESS) 50 | with self.assertRaises(PermissionError): 51 | windows_events.PipeServer(ADDRESS) 52 | server1.close() 53 | 54 | def test_pipe(self): 55 | res = 
self.loop.run_until_complete(self._test_pipe()) 56 | self.assertEqual(res, 'done') 57 | 58 | def _test_pipe(self): 59 | ADDRESS = r'\\.\pipe\_test_pipe-%s' % os.getpid() 60 | 61 | with self.assertRaises(FileNotFoundError): 62 | yield from self.loop.create_pipe_connection( 63 | asyncio.Protocol, ADDRESS) 64 | 65 | [server] = yield from self.loop.start_serving_pipe( 66 | UpperProto, ADDRESS) 67 | self.assertIsInstance(server, windows_events.PipeServer) 68 | 69 | clients = [] 70 | for i in range(5): 71 | stream_reader = asyncio.StreamReader(loop=self.loop) 72 | protocol = asyncio.StreamReaderProtocol(stream_reader, 73 | loop=self.loop) 74 | trans, proto = yield from self.loop.create_pipe_connection( 75 | lambda: protocol, ADDRESS) 76 | self.assertIsInstance(trans, asyncio.Transport) 77 | self.assertEqual(protocol, proto) 78 | clients.append((stream_reader, trans)) 79 | 80 | for i, (r, w) in enumerate(clients): 81 | w.write('lower-{}\n'.format(i).encode()) 82 | 83 | for i, (r, w) in enumerate(clients): 84 | response = yield from r.readline() 85 | self.assertEqual(response, 'LOWER-{}\n'.format(i).encode()) 86 | w.close() 87 | 88 | server.close() 89 | 90 | with self.assertRaises(FileNotFoundError): 91 | yield from self.loop.create_pipe_connection( 92 | asyncio.Protocol, ADDRESS) 93 | 94 | return 'done' 95 | 96 | def test_connect_pipe_cancel(self): 97 | exc = OSError() 98 | exc.winerror = _overlapped.ERROR_PIPE_BUSY 99 | with mock.patch.object(_overlapped, 'ConnectPipe', side_effect=exc) as connect: 100 | coro = self.loop._proactor.connect_pipe('pipe_address') 101 | task = self.loop.create_task(coro) 102 | 103 | # check that it's possible to cancel connect_pipe() 104 | task.cancel() 105 | with self.assertRaises(asyncio.CancelledError): 106 | self.loop.run_until_complete(task) 107 | 108 | def test_wait_for_handle(self): 109 | event = _overlapped.CreateEvent(None, True, False, None) 110 | self.addCleanup(_winapi.CloseHandle, event) 111 | 112 | # Wait for unset event with 
0.5s timeout; 113 | # result should be False at timeout 114 | fut = self.loop._proactor.wait_for_handle(event, 0.5) 115 | start = self.loop.time() 116 | done = self.loop.run_until_complete(fut) 117 | elapsed = self.loop.time() - start 118 | 119 | self.assertEqual(done, False) 120 | self.assertFalse(fut.result()) 121 | self.assertTrue(0.48 < elapsed < 0.9, elapsed) 122 | 123 | _overlapped.SetEvent(event) 124 | 125 | # Wait for set event; 126 | # result should be True immediately 127 | fut = self.loop._proactor.wait_for_handle(event, 10) 128 | start = self.loop.time() 129 | done = self.loop.run_until_complete(fut) 130 | elapsed = self.loop.time() - start 131 | 132 | self.assertEqual(done, True) 133 | self.assertTrue(fut.result()) 134 | self.assertTrue(0 <= elapsed < 0.3, elapsed) 135 | 136 | # asyncio issue #195: cancelling a done _WaitHandleFuture 137 | # must not crash 138 | fut.cancel() 139 | 140 | def test_wait_for_handle_cancel(self): 141 | event = _overlapped.CreateEvent(None, True, False, None) 142 | self.addCleanup(_winapi.CloseHandle, event) 143 | 144 | # Wait for unset event with a cancelled future; 145 | # CancelledError should be raised immediately 146 | fut = self.loop._proactor.wait_for_handle(event, 10) 147 | fut.cancel() 148 | start = self.loop.time() 149 | with self.assertRaises(asyncio.CancelledError): 150 | self.loop.run_until_complete(fut) 151 | elapsed = self.loop.time() - start 152 | self.assertTrue(0 <= elapsed < 0.1, elapsed) 153 | 154 | # asyncio issue #195: cancelling a _WaitHandleFuture twice 155 | # must not crash 156 | fut = self.loop._proactor.wait_for_handle(event) 157 | fut.cancel() 158 | fut.cancel() 159 | 160 | 161 | if __name__ == '__main__': 162 | unittest.main() 163 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tests/test_windows_utils.py: -------------------------------------------------------------------------------- 1 | """Tests for window_utils""" 2 | 3 | import socket 4 
| import sys 5 | import unittest 6 | import warnings 7 | from unittest import mock 8 | 9 | if sys.platform != 'win32': 10 | raise unittest.SkipTest('Windows only') 11 | 12 | import _winapi 13 | 14 | from asyncio import _overlapped 15 | from asyncio import windows_utils 16 | try: 17 | from test import support 18 | except ImportError: 19 | from asyncio import test_support as support 20 | 21 | 22 | class WinsocketpairTests(unittest.TestCase): 23 | 24 | def check_winsocketpair(self, ssock, csock): 25 | csock.send(b'xxx') 26 | self.assertEqual(b'xxx', ssock.recv(1024)) 27 | csock.close() 28 | ssock.close() 29 | 30 | def test_winsocketpair(self): 31 | ssock, csock = windows_utils.socketpair() 32 | self.check_winsocketpair(ssock, csock) 33 | 34 | @unittest.skipUnless(support.IPV6_ENABLED, 'IPv6 not supported or enabled') 35 | def test_winsocketpair_ipv6(self): 36 | ssock, csock = windows_utils.socketpair(family=socket.AF_INET6) 37 | self.check_winsocketpair(ssock, csock) 38 | 39 | @unittest.skipIf(hasattr(socket, 'socketpair'), 40 | 'socket.socketpair is available') 41 | @mock.patch('asyncio.windows_utils.socket') 42 | def test_winsocketpair_exc(self, m_socket): 43 | m_socket.AF_INET = socket.AF_INET 44 | m_socket.SOCK_STREAM = socket.SOCK_STREAM 45 | m_socket.socket.return_value.getsockname.return_value = ('', 12345) 46 | m_socket.socket.return_value.accept.return_value = object(), object() 47 | m_socket.socket.return_value.connect.side_effect = OSError() 48 | 49 | self.assertRaises(OSError, windows_utils.socketpair) 50 | 51 | def test_winsocketpair_invalid_args(self): 52 | self.assertRaises(ValueError, 53 | windows_utils.socketpair, family=socket.AF_UNSPEC) 54 | self.assertRaises(ValueError, 55 | windows_utils.socketpair, type=socket.SOCK_DGRAM) 56 | self.assertRaises(ValueError, 57 | windows_utils.socketpair, proto=1) 58 | 59 | @unittest.skipIf(hasattr(socket, 'socketpair'), 60 | 'socket.socketpair is available') 61 | @mock.patch('asyncio.windows_utils.socket') 62 | 
def test_winsocketpair_close(self, m_socket): 63 | m_socket.AF_INET = socket.AF_INET 64 | m_socket.SOCK_STREAM = socket.SOCK_STREAM 65 | sock = mock.Mock() 66 | m_socket.socket.return_value = sock 67 | sock.bind.side_effect = OSError 68 | self.assertRaises(OSError, windows_utils.socketpair) 69 | self.assertTrue(sock.close.called) 70 | 71 | 72 | class PipeTests(unittest.TestCase): 73 | 74 | def test_pipe_overlapped(self): 75 | h1, h2 = windows_utils.pipe(overlapped=(True, True)) 76 | try: 77 | ov1 = _overlapped.Overlapped() 78 | self.assertFalse(ov1.pending) 79 | self.assertEqual(ov1.error, 0) 80 | 81 | ov1.ReadFile(h1, 100) 82 | self.assertTrue(ov1.pending) 83 | self.assertEqual(ov1.error, _winapi.ERROR_IO_PENDING) 84 | ERROR_IO_INCOMPLETE = 996 85 | try: 86 | ov1.getresult() 87 | except OSError as e: 88 | self.assertEqual(e.winerror, ERROR_IO_INCOMPLETE) 89 | else: 90 | raise RuntimeError('expected ERROR_IO_INCOMPLETE') 91 | 92 | ov2 = _overlapped.Overlapped() 93 | self.assertFalse(ov2.pending) 94 | self.assertEqual(ov2.error, 0) 95 | 96 | ov2.WriteFile(h2, b"hello") 97 | self.assertIn(ov2.error, {0, _winapi.ERROR_IO_PENDING}) 98 | 99 | res = _winapi.WaitForMultipleObjects([ov2.event], False, 100) 100 | self.assertEqual(res, _winapi.WAIT_OBJECT_0) 101 | 102 | self.assertFalse(ov1.pending) 103 | self.assertEqual(ov1.error, ERROR_IO_INCOMPLETE) 104 | self.assertFalse(ov2.pending) 105 | self.assertIn(ov2.error, {0, _winapi.ERROR_IO_PENDING}) 106 | self.assertEqual(ov1.getresult(), b"hello") 107 | finally: 108 | _winapi.CloseHandle(h1) 109 | _winapi.CloseHandle(h2) 110 | 111 | def test_pipe_handle(self): 112 | h, _ = windows_utils.pipe(overlapped=(True, True)) 113 | _winapi.CloseHandle(_) 114 | p = windows_utils.PipeHandle(h) 115 | self.assertEqual(p.fileno(), h) 116 | self.assertEqual(p.handle, h) 117 | 118 | # check garbage collection of p closes handle 119 | with warnings.catch_warnings(): 120 | warnings.filterwarnings("ignore", "", ResourceWarning) 121 | del p 122 
| support.gc_collect() 123 | try: 124 | _winapi.CloseHandle(h) 125 | except OSError as e: 126 | self.assertEqual(e.winerror, 6) # ERROR_INVALID_HANDLE 127 | else: 128 | raise RuntimeError('expected ERROR_INVALID_HANDLE') 129 | 130 | 131 | class PopenTests(unittest.TestCase): 132 | 133 | def test_popen(self): 134 | command = r"""if 1: 135 | import sys 136 | s = sys.stdin.readline() 137 | sys.stdout.write(s.upper()) 138 | sys.stderr.write('stderr') 139 | """ 140 | msg = b"blah\n" 141 | 142 | p = windows_utils.Popen([sys.executable, '-c', command], 143 | stdin=windows_utils.PIPE, 144 | stdout=windows_utils.PIPE, 145 | stderr=windows_utils.PIPE) 146 | 147 | for f in [p.stdin, p.stdout, p.stderr]: 148 | self.assertIsInstance(f, windows_utils.PipeHandle) 149 | 150 | ovin = _overlapped.Overlapped() 151 | ovout = _overlapped.Overlapped() 152 | overr = _overlapped.Overlapped() 153 | 154 | ovin.WriteFile(p.stdin.handle, msg) 155 | ovout.ReadFile(p.stdout.handle, 100) 156 | overr.ReadFile(p.stderr.handle, 100) 157 | 158 | events = [ovin.event, ovout.event, overr.event] 159 | # Super-long timeout for slow buildbots. 160 | res = _winapi.WaitForMultipleObjects(events, True, 10000) 161 | self.assertEqual(res, _winapi.WAIT_OBJECT_0) 162 | self.assertFalse(ovout.pending) 163 | self.assertFalse(overr.pending) 164 | self.assertFalse(ovin.pending) 165 | 166 | self.assertEqual(ovin.getresult(), len(msg)) 167 | out = ovout.getresult().rstrip() 168 | err = overr.getresult().rstrip() 169 | 170 | self.assertGreater(len(out), 0) 171 | self.assertGreater(len(err), 0) 172 | # allow for partial reads... 
173 | self.assertTrue(msg.upper().rstrip().startswith(out)) 174 | self.assertTrue(b"stderr".startswith(err)) 175 | 176 | # The context manager calls wait() and closes resources 177 | with p: 178 | pass 179 | 180 | 181 | if __name__ == '__main__': 182 | unittest.main() 183 | -------------------------------------------------------------------------------- /thirdparty/asyncio/tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py33,py34,py3_release 3 | 4 | [testenv] 5 | deps= 6 | aiotest 7 | # Run tests in debug mode 8 | setenv = 9 | PYTHONASYNCIODEBUG = 1 10 | commands= 11 | python -Wd runtests.py -r {posargs} 12 | python -Wd run_aiotest.py -r {posargs} 13 | 14 | [testenv:py3_release] 15 | # Run tests in release mode 16 | setenv = 17 | PYTHONASYNCIODEBUG = 18 | basepython = python3 19 | 20 | [testenv:py35] 21 | basepython = python3.5 22 | -------------------------------------------------------------------------------- /thirdparty/asyncio/update_asyncio.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | PYTHON=${1-$HOME/cpython} 4 | 5 | if [ ! -d $PYTHON ] 6 | then 7 | echo Bad destination $PYTHON 8 | exit 1 9 | fi 10 | 11 | if [ ! -f asyncio/__init__.py ] 12 | then 13 | echo Bad current directory 14 | exit 1 15 | fi 16 | 17 | echo "Sync from $PYTHON to $PWD" 18 | set -e -x 19 | echo 20 | 21 | cp $PYTHON/Lib/asyncio/*.py asyncio/ 22 | cp $PYTHON/Lib/test/test_asyncio/test_*.py tests/ 23 | echo 24 | 25 | git status 26 | -------------------------------------------------------------------------------- /thirdparty/asyncio/update_stdlib.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Script to copy asyncio files to the standard library tree. 4 | # Optional argument is the root of the Python 3.4 tree. 
5 | # Assumes you have already created Lib/asyncio and 6 | # Lib/test/test_asyncio in the destination tree. 7 | 8 | CPYTHON=${1-$HOME/cpython} 9 | 10 | if [ ! -d $CPYTHON ] 11 | then 12 | echo Bad destination $CPYTHON 13 | exit 1 14 | fi 15 | 16 | if [ ! -f asyncio/__init__.py ] 17 | then 18 | echo Bad current directory 19 | exit 1 20 | fi 21 | 22 | maybe_copy() 23 | { 24 | SRC=$1 25 | DST=$CPYTHON/$2 26 | if cmp $DST $SRC 27 | then 28 | return 29 | fi 30 | echo ======== $SRC === $DST ======== 31 | diff -u $DST $SRC 32 | echo -n "Copy $SRC? [y/N/back] " 33 | read X 34 | case $X in 35 | [yY]*) echo Copying $SRC; cp $SRC $DST;; 36 | back) echo Copying TO $SRC; cp $DST $SRC;; 37 | *) echo Not copying $SRC;; 38 | esac 39 | } 40 | 41 | for i in `(cd asyncio && ls *.py)` 42 | do 43 | if [ $i == test_support.py ] 44 | then 45 | continue 46 | fi 47 | 48 | if [ $i == selectors.py ] 49 | then 50 | if [ "`(cd $CPYTHON; hg branch)`" == "3.4" ] 51 | then 52 | echo "Destination is 3.4 branch -- ignoring selectors.py" 53 | else 54 | maybe_copy asyncio/$i Lib/$i 55 | fi 56 | else 57 | maybe_copy asyncio/$i Lib/asyncio/$i 58 | fi 59 | done 60 | 61 | for i in `(cd tests && ls *.py *.pem)` 62 | do 63 | if [ $i == test_selectors.py ] 64 | then 65 | continue 66 | fi 67 | if [ $i == test_pep492.py ] 68 | then 69 | if [ "`(cd $CPYTHON; hg branch)`" == "3.4" ] 70 | then 71 | echo "Destination is 3.4 branch -- ignoring test_pep492.py" 72 | continue 73 | fi 74 | fi 75 | maybe_copy tests/$i Lib/test/test_asyncio/$i 76 | done 77 | 78 | maybe_copy overlapped.c Modules/overlapped.c 79 | --------------------------------------------------------------------------------
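Editor's note: the `WinsocketpairTests` class in `tests/test_windows_utils.py` above exercises `windows_utils.socketpair()`, which emulates `socket.socketpair()` on platforms lacking AF_UNIX pairs by connecting two TCP sockets over loopback. A minimal, hypothetical re-implementation of that bind/listen/connect/accept trick (the function name `socketpair_sketch` and the simplified error handling are my own; the real module also validates `family`/`type`/`proto` arguments) might look like:

```python
import socket

def socketpair_sketch(family=socket.AF_INET,
                      type=socket.SOCK_STREAM, proto=0):
    """Emulate socket.socketpair() with a localhost listener.

    Simplified sketch of the technique windows_utils.socketpair()
    relies on where AF_UNIX socket pairs are unavailable.
    """
    host = '127.0.0.1' if family == socket.AF_INET else '::1'
    # A listener on an ephemeral port (port 0) plays the server side.
    lsock = socket.socket(family, type, proto)
    try:
        lsock.bind((host, 0))
        lsock.listen(1)
        addr = lsock.getsockname()[:2]
        csock = socket.socket(family, type, proto)
        try:
            # connect() succeeds immediately on loopback because the
            # pending connection is queued by listen(); accept() then
            # yields the matching server-side socket.
            csock.connect(addr)
            ssock, _ = lsock.accept()
        except OSError:
            csock.close()
            raise
    finally:
        lsock.close()
    return ssock, csock

# Mirrors check_winsocketpair() in the tests above:
ssock, csock = socketpair_sketch()
csock.send(b'xxx')
assert ssock.recv(1024) == b'xxx'
csock.close()
ssock.close()
```

This is the same shape as the fallback CPython itself uses for `socket.socketpair()` on Windows; the `test_winsocketpair_exc` test above simulates exactly the `connect()` failure path that the inner `except OSError` guards.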