├── AWS
│   ├── 2017-12-29-02-42-53.png
│   ├── 2018-01-08-22-56-45.png
│   ├── CLI.md
│   ├── RDS.md
│   ├── VPC.md
│   └── images
│       └── 2018-01-04-15-09-55.png
├── Ansible
│   ├── Ansible.md
│   └── tips.md
├── Azure
│   ├── AzureTable.md
│   └── images
│       └── 2018-11-27-20-46-13.png
├── Books
│   └── infrastructure-as-code.md
├── CPP.md
├── Cat.md
├── Confluence.md
├── DNS-over-HTTPS.md
├── Docker.md
├── DotNET
│   ├── Load-Context.md
│   └── NuGet.md
├── FFmpeg.md
├── Firefox.md
├── GPG.md
├── Git.md
├── GitLab-Runner.md
├── Go.md
├── HDMI-Cables.md
├── Hack.md
├── Kubernetes
│   ├── CoreDNS.md
│   ├── MicroK8s.md
│   ├── Minikube.md
│   └── Traefik.md
├── LICENSE
├── Linux
│   ├── Bash.md
│   ├── Common.md
│   ├── Conventions.md
│   ├── SSH.md
│   ├── Ubuntu.md
│   ├── WSL.md
│   ├── images
│   │   └── 2018-06-25-09-46-54.png
│   ├── systemd.md
│   └── tools
│       ├── AWK.md
│       ├── cURL.md
│       ├── general.md
│       ├── parallel.md
│       └── text-manipulate.md
├── Mac.md
├── Nginx.md
├── Python
│   ├── DepHell.md
│   ├── Exception.md
│   ├── GIL.md
│   ├── Python.md
│   ├── Python2-3.md
│   └── pyenv.md
├── RESTful-API.md
├── SQL.md
├── Terraform-Cloud.md
├── Terraform.md
├── Testing.md
├── Todo.md
├── Vim.md
├── Vimscript.md
├── Windows.md
├── WireGuard.md
├── images
│   ├── 2018-11-28-09-53-52.png
│   └── 2020-06-27-17-41-33.png
└── misc.md

--------------------------------------------------------------------------------
/AWS/2017-12-29-02-42-53.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/AWS/2017-12-29-02-42-53.png

--------------------------------------------------------------------------------
/AWS/2018-01-08-22-56-45.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/AWS/2018-01-08-22-56-45.png

--------------------------------------------------------------------------------
/AWS/CLI.md:
--------------------------------------------------------------------------------
## Export an AWS CLI profile to environment variables

```bash
export AWS_ACCESS_KEY_ID=$(aws configure get default.aws_access_key_id)
```

--------------------------------------------------------------------------------
/AWS/RDS.md:
--------------------------------------------------------------------------------

## Restoring RDS from a snapshot

To restore RDS from a snapshot, you always have to launch a new instance.

There is a caveat: when the restored instance comes up, the default DB parameter group and security group get assigned to it, so they have to be reassigned.

> When you restore a DB instance, the default DB parameter group is associated with the restored instance. As soon as the restore is complete and your new DB instance is available, you must associate any custom DB parameter group used by the instance you restored from.

> When you restore a DB instance, the default security group is associated with the restored instance. As soon as the restore is complete and your new DB instance is available, you must associate any custom security groups used by the instance you restored from.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html

You can also restore to an instance with a different storage type than the one the snapshot was taken from; in that case a conversion step is involved, so it takes more time.

> You can restore a DB instance and use a different storage type than the source DB snapshot. In this case, the restoration process is slower because of the additional work required to migrate the data to the new storage type.

In Terraform, the `aws_db_instance` resource has an optional variable named `snapshot_identifier`; if you set it to a snapshot name, the instance is restored from that snapshot when it is created.
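A minimal sketch of what that looks like (the identifier, instance class, group names, and security group ID here are hypothetical):

```hcl
resource "aws_db_instance" "restored" {
  identifier          = "restored-db"
  instance_class      = "db.t2.micro"
  snapshot_identifier = "my-snapshot-name"  # restore from this snapshot at creation time

  # Set these explicitly, because a restore otherwise falls back to the defaults
  parameter_group_name   = "my-custom-parameter-group"
  vpc_security_group_ids = ["sg-0123456789abcdef0"]
}
```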

That raises some follow-up questions.

Q. The parameter group and security group are supposed to be reset to the defaults; does Terraform reassign them correctly on its own?

It seems so. Inspecting the cluster and instance created through Terraform, everything is assigned correctly.

Q. What happens if the `snapshot_identifier` value is removed after the instance has been created?

![](images/2018-01-04-15-09-55.png)

It shows up as a change, but when applied, the apply completes very quickly and nothing appears to actually change.

--------------------------------------------------------------------------------
/AWS/VPC.md:
--------------------------------------------------------------------------------
# VPC

Amazon Virtual Private Cloud (Amazon VPC)

http://docs.aws.amazon.com/ko_kr/AmazonVPC/latest/UserGuide/VPC_Introduction.html

> A *Virtual Private Cloud (VPC)* is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings.


## Subnet

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

> When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block; for example, 10.0.0.0/16. This is the primary CIDR block for your VPC. For more information about CIDR notation, see [RFC 4632](https://tools.ietf.org/html/rfc4632).

### Public subnet

> If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet.

> If you want your instance in a public subnet to communicate with the internet over IPv4, it must have a public IPv4 address or an Elastic IP address (IPv4).


### Private subnet

> If a subnet doesn't have a route to the internet gateway, the subnet is known as a private subnet.

Q. Internet access?

> You can connect an instance in a private subnet to the internet through the NAT device, which routes traffic from the instance to the internet gateway, and routes any responses to the instance.

### Subnet Routing

> Each subnet must be associated with a route table, which specifies the allowed routes for outbound traffic leaving the subnet.

![](2017-12-29-02-42-53.png)

## Connection between two VPCs

VPC Peering connection

http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html

> A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. **You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account.** The VPCs can be in different regions (also known as an inter-region VPC peering connection).

![](2018-01-08-22-56-45.png)

> AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware. **There is no single point of failure** for communication or a bandwidth bottleneck.
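For illustration, a peering connection can be requested and accepted with the AWS CLI (a sketch; the VPC and peering-connection IDs are placeholders):

```bash
# Request a peering connection between two VPCs (possibly in different accounts)
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-1a2b3c4d \
    --peer-vpc-id vpc-9f8e7d6c

# The owner of the peer VPC then accepts the request
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-1122334455667788
```

After accepting, each VPC's route tables still need routes that point at the peering connection before traffic can flow.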

--------------------------------------------------------------------------------
/AWS/images/2018-01-04-15-09-55.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/AWS/images/2018-01-04-15-09-55.png

--------------------------------------------------------------------------------
/Ansible/Ansible.md:
--------------------------------------------------------------------------------
# Ansible

document: http://docs.ansible.com/ansible/latest/intro_getting_started.html

## Installation

Latest releases (Debian):

```bash
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
```

pip:

```bash
$ pip install ansible
```

## Remote Connection Information

> By default, Ansible 1.3 and later will try to use native OpenSSH for remote communication when possible. This enables ControlPersist (a performance feature), Kerberos, and options in `~/.ssh/config` such as Jump Host setup. However, when using Enterprise Linux 6 operating systems as the control machine (Red Hat Enterprise Linux and derivatives such as CentOS), the version of OpenSSH may be too old to support ControlPersist. On these operating systems, Ansible will fallback into using a high-quality Python implementation of OpenSSH called ‘paramiko’. If you wish to use features like Kerberized SSH and more, consider using Fedora, OS X, or Ubuntu as your control machine until a newer version of OpenSSH is available for your platform – or engage ‘accelerated mode’ in Ansible. See [Accelerated Mode.](http://docs.ansible.com/ansible/latest/playbooks_acceleration.html)


## Inventory

doc: http://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html

Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible’s inventory, which defaults to being saved in the location `/etc/ansible/hosts`. You can specify a different inventory file using the `-i <path>` option on the command line.

### Hosts and Groups

INI format:

```ini
mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com
```

YAML format:

```yml
all:
  hosts:
    mail.example.com:
  children:
    webservers:
      hosts:
        foo.example.com:
        bar.example.com:
    dbservers:
      hosts:
        one.example.com:
        two.example.com:
        three.example.com:
```

To make things explicit, it is suggested that you set them if things are not running on the default port:

```ini
jumper ansible_port=5555 ansible_host=192.0.2.50
```

```yml
...
  hosts:
    jumper:
      ansible_port: 5555
      ansible_host: 192.0.2.50
```

### Patterns

```ini
[webservers]
www[01:50].example.com
```

```ini
[databases]
db-[a:f].example.com
```

### Group Variables

```ini
[atlanta]
host1
host2

[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com
```

```yml
atlanta:
  hosts:
    host1:
    host2:
  vars:
    ntp_server: ntp.atlanta.example.com
    proxy: proxy.atlanta.example.com
```


### Groups of Groups

```ini
[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest
```

```yml
all:
  children:
    usa:
      children:
        southeast:
          children:
            atlanta:
              hosts:
                host1:
                host2:
            raleigh:
              hosts:
                host2:
                host3:
          vars:
            some_server: foo.southeast.example.com
            halon_system_timeout: 30
            self_destruct_countdown: 60
            escape_pods: 2
        northeast:
        northwest:
        southwest:
```

### Default Groups

- `all` : contains every host.
- `ungrouped` : contains all hosts that don't have another group aside from `all`

## Dynamic Inventory



## Ad-hoc command

doc: http://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html


## Playbook

doc: http://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html

> Playbooks are a completely different way to use ansible than in ad-hoc task execution mode, and are particularly powerful.

example:

```yml
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      name: httpd
      state: latest
  - name: write the apache config file
    template:
      src: /srv/httpd.j2
      dest: /etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service:
      name: httpd
      state: started
      enabled: yes
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted
```


### Handlers: Running Operations on Change

```yml
- name: template configuration file
  template:
    src: template.j2
    dest: /etc/foo.conf
  notify:
    - restart memcached
    - restart apache
```

```yml
handlers:
  - name: restart memcached
    service:
      name: memcached
      state: restarted
  - name: restart apache
    service:
      name: apache
      state: restarted
```

As of Ansible 2.2, handlers can also “listen” to generic topics, and tasks can notify those topics as follows:

```yml
handlers:
  - name: restart memcached
    service:
      name: memcached
      state: restarted
    listen: "restart web services"
  - name: restart apache
    service:
      name: apache
      state: restarted
    listen: "restart web services"

tasks:
  - name: restart everything
    command: echo "this task will restart the web services"
    notify: "restart web services"
```

### Loop

```yml
- name: add several users
  user:
    name: "{{ item }}"
    state: present
    groups: "wheel"
  loop:
    - testuser1
    - testuser2
```


before 2.5:

```yml
- name: add several users
  user:
    name: "{{ item }}"
    state: present
    groups: "wheel"
  with_items:
    - testuser1
    - testuser2
```


### Filter

Filters in Ansible are from Jinja2, and are used for transforming data inside a template expression. Jinja2 ships with many filters. See [builtin filters](http://jinja.pocoo.org/docs/templates/#builtin-filters) in the official Jinja2 template documentation.

```yml
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
```

```yml
tasks:
  - shell: cat /some/path/to/file.json
    register: result

  - set_fact:
      myvar: "{{ result.stdout | from_json }}"
```


### Conditionals

```yml
tasks:
  - name: "shut down Debian flavored systems"
    command: /sbin/shutdown -t now
    when: ansible_os_family == "Debian"
    # note that Ansible facts and vars like ansible_os_family can be used
    # directly in conditionals without double curly braces
```

### Syntax check

To check the syntax of a playbook, use `ansible-playbook` with the `--syntax-check` flag. This will run the playbook file through the parser to ensure its included files, roles, etc. have no syntax problems.
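For example, to check a playbook file named `playbook.yml` (the file name is arbitrary):

```bash
$ ansible-playbook --syntax-check playbook.yml
```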


## Role

doc: http://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html

Example project structure:

```
site.yml
webservers.yml
fooservers.yml
roles/
    common/
        tasks/
        handlers/
        files/
        templates/
        vars/
        defaults/
        meta/
    webservers/
        tasks/
        defaults/
        meta/
```

- `tasks` - contains the main list of tasks to be executed by the role.
- `handlers` - contains handlers, which may be used by this role or even anywhere outside this role.
- `defaults` - default variables for the role (see Variables for more information).
- `vars` - other variables for the role (see Variables for more information).
- `files` - contains files which can be deployed via this role.
- `templates` - contains templates which can be deployed via this role.
- `meta` - defines some meta data for this role. See below for more details.


### Example

```yml
# roles/example/tasks/main.yml
- name: added in 2.4, previously you used 'include'
  import_tasks: redhat.yml
  when: ansible_os_family|lower == 'redhat'
- import_tasks: debian.yml
  when: ansible_os_family|lower == 'debian'

# roles/example/tasks/redhat.yml
- yum:
    name: "httpd"
    state: present

# roles/example/tasks/debian.yml
- apt:
    name: "apache2"
    state: present
```

Using a role:

```yml
---
- hosts: webservers
  roles:
    - common
    - webservers
```


## Ansible-Pull

`ansible-pull` is a small script that will check out a repo of configuration instructions from git, and then run `ansible-playbook` against that content.


## Async

http://docs.ansible.com/ansible/latest/playbooks_async.html

## Serial

## Strategies

reference: http://docs.ansible.com/ansible/latest/playbooks_strategies.html


> `strategy`, by default plays will still run as they used to, with what we call the linear strategy. All hosts will run each task before any host starts the next task, using the number of forks (default 5) to parallelize.
>
> The `serial` directive can ‘batch’ this behaviour to a subset of the hosts, which then run to completion of the play before the next ‘batch’ starts.
>
> A second `strategy` ships with ansible `free`, which allows each host to run until the end of the play as fast as it can.


## Blocks

reference: http://docs.ansible.com/ansible/latest/playbooks_blocks.html

> a block feature to allow for logical grouping of tasks and even in play error handling.

-----


## Modules

http://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html

### apt - Manages apt-packages

reference: http://docs.ansible.com/ansible/latest/apt_module.html

```yml
- name: Install 'foo' package
  apt:
    name: foo
    state: present
```

### copy - Copies files to remote locations

reference: http://docs.ansible.com/ansible/latest/copy_module.html

| parameter | description |
|-----------|-------------|
| src | Local path to a file to copy to the remote server; can be absolute or relative. If path is a directory, it is copied recursively. In this case, if path ends with "/", only inside contents of that directory are copied to destination. Otherwise, if it does not end with "/", the directory itself with all contents is copied. This behavior is similar to Rsync. |

```yml
- copy:
    src: /src/file/path
    dest: /dest/file/path
```


### lineinfile - Ensure a particular line is in a file, or replace an existing line using a back-referenced regular expression

reference: http://docs.ansible.com/ansible/latest/lineinfile_module.html


```yml
- lineinfile:
    path: /etc/blahblahblah
    line: '127.0.0.1'

- lineinfile:
    path: /etc/selinux/config
    regexp: '^SELINUX='
    line: 'SELINUX=enforcing'

- lineinfile:
    path: /etc/httpd/conf/httpd.conf
    regexp: '^Listen '
    insertafter: '^#Listen '
    line: 'Listen 8080'
```

--------------------------------------------------------------------------------
/Ansible/tips.md:
--------------------------------------------------------------------------------
## Disable host key checking

In `/etc/ansible/ansible.cfg`, `~/.ansible.cfg`, or `./ansible.cfg`:

```ini
[defaults]
host_key_checking = False
```

Or

```bash
$ export ANSIBLE_HOST_KEY_CHECKING=False
```

reference: https://stackoverflow.com/a/23094433


## Slow ansible-playbook commands with dynamic inventory

reference: https://github.com/ansible/ansible/issues/22633#issuecomment-293956861

> if your inventory does not return a `_meta` key
> Ansible will call `--list` once, and then call `--host` for every host in the inventory. This can often cause confusion as to why when `--list` is run, why running via ansible takes longer.


It would need to at least have `'_meta': {'hostvars': {}}`


## run_once

> When used together with "serial", tasks marked as "run_once" will be run on one host in each serial batch.

```yml
---
- hosts: webservers[0]
```

```yml
- name: run once task
  shell: '..'
  when: inventory_hostname == groups['webservers'][0]
```

--------------------------------------------------------------------------------
/Azure/AzureTable.md:
--------------------------------------------------------------------------------
# Azure Table

A NoSQL datastore

## Table storage concepts

![storage-table-concepts](images/2018-11-27-20-46-13.png)

- **Table**: A table is a collection of entities. Tables don't enforce a schema on entities, which means a single table can contain entities that have different sets of properties.
- **Entity**: An entity is a set of properties, similar to a database row. An entity in Azure Storage can be up to 1 MB in size. An entity in Azure Cosmos DB can be up to 2 MB in size.
- **Properties**: A property is a name-value pair. Each entity can include up to 252 properties to store data. Each entity also has three system properties that specify a partition key, a row key, and a timestamp. Entities with the same partition key can be queried more quickly, and inserted/updated in atomic operations. An entity's row key is its unique identifier within a partition.
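To make the model concrete, a single entity might serialize like this (a hypothetical example; `PartitionKey`, `RowKey`, and `Timestamp` are the system properties described below, and the remaining fields are custom properties):

```json
{
  "PartitionKey": "customers-seoul",
  "RowKey": "alice",
  "Timestamp": "2018-11-27T20:46:13Z",
  "Name": "Alice",
  "SignupYear": 2018
}
```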

## Property

### Limitations

An entity can have up to 255 properties, including the 3 system properties described in the following section. Therefore, the user may include up to 252 custom properties, in addition to the 3 system properties. The combined size of all data in an entity's properties cannot exceed 1 MB.

### System Properties

An entity always has the following system properties:

- `PartitionKey` property
- `RowKey` property
- `Timestamp` property

These system properties are automatically included for every entity in a table. The names of these properties are reserved and cannot be changed. The developer is responsible for inserting and updating the values of `PartitionKey` and `RowKey`. The server manages the value of `Timestamp`, which cannot be modified.

### PartitionKey Property

Tables are partitioned to support load balancing across storage nodes. A table's entities are organized by partition. A partition is a consecutive range of entities possessing the same partition key value. The partition key is a unique identifier for the partition within a given table, specified by the `PartitionKey` property. The partition key forms the first part of an entity's primary key. The partition key may be a string value up to 1 KB in size.

You must include the `PartitionKey` property in every insert, update, and delete operation.

### RowKey Property

The second part of the primary key is the row key, specified by the `RowKey` property. The row key is a unique identifier for an entity within a given partition. Together the `PartitionKey` and `RowKey` uniquely identify every entity within a table.

The row key is a string value that may be up to 1 KB in size.

You must include the `RowKey` property in every insert, update, and delete operation.

### Timestamp Property

The `Timestamp` property is a `DateTime` value that is maintained on the server side to record the time an entity was last modified. The Table service uses the `Timestamp` property internally to provide optimistic concurrency. The value of `Timestamp` is a monotonically increasing value, meaning that each time the entity is modified, the value of `Timestamp` increases for that entity. This property should not be set on insert or update operations (the value will be ignored).

## Performing Entity Group Transactions

The Table service supports batch transactions on entities that are in the same table and belong to the same partition group. Multiple Insert Entity, Update Entity, Merge Entity, Delete Entity, Insert Or Replace Entity, and Insert Or Merge Entity operations are supported within a single transaction.

### Requirements for Entity Group Transactions

An entity group transaction must meet the following requirements:

- All entities subject to operations as part of the transaction must have the same `PartitionKey` value.
- An entity can appear only once in the transaction, and only one operation may be performed against it.
- The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
- All entities are subject to the limitations described in [Understanding the Table Service Data Model](https://docs.microsoft.com/en-us/rest/api/storageservices/Understanding-the-Table-Service-Data-Model).
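As a sketch of what such a batch looks like in client code, using the `azure-data-tables` Python SDK (the connection string, table name, and entity contents are placeholders):

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string(
    "<connection-string>", table_name="mytable")

# Every operation in one transaction must share the same PartitionKey
operations = [
    ("upsert", {"PartitionKey": "p1", "RowKey": "r1", "Value": 1}),
    ("upsert", {"PartitionKey": "p1", "RowKey": "r2", "Value": 2}),
]
table.submit_transaction(operations)
```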

## References

- https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview
- https://docs.microsoft.com/en-us/rest/api/storageservices/Understanding-the-Table-Service-Data-Model
- https://docs.microsoft.com/en-us/rest/api/storageservices/performing-entity-group-transactions

--------------------------------------------------------------------------------
/Azure/images/2018-11-27-20-46-13.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/Azure/images/2018-11-27-20-46-13.png

--------------------------------------------------------------------------------
/Books/infrastructure-as-code.md:
--------------------------------------------------------------------------------
# Infrastructure as Code

Korean translation:

- [Ridibooks](https://ridibooks.com/v2/Detail?id=443000488)
- [yes24](http://www.yes24.com/24/Goods/36551650?Acode=101)

## Configuration drift

With cloud and virtualization, servers can multiply faster than your ability to manage them.

When patches and updates are then applied to only some of the servers, each server ends up running different versions with different configurations.

The resulting inconsistency between servers is called `configuration drift`.

## Snowflake servers

Having servers with different configurations is not bad in itself; different servers may legitimately need different settings for different situations.
But such variant servers must be tracked and managed so that they can easily be recreated.

Variant servers that go unmanaged eventually produce `snowflake` servers and a fear of automation.

A snowflake server is a server configured differently from the other servers on the network. A snowflake server is special in that it cannot be rebuilt.

> A server that would `melt away like snow` is what we call a snowflake server.

A team should be able to rebuild any server in its infrastructure quickly and easily, with confidence.

Building something easily means that no significant decisions need to be made about how to rebuild it.

## Principles

1. Systems can easily be rebuilt
1. Systems are disposable
   - Remember that infrastructure is dynamic and is constantly being deleted, replaced, and moved
   - Any settings or files put on a server by hand can disappear at any time
1. Systems are consistent
   - Systems that provide the same service should have specs that are as similar as possible
   - If some servers end up with a different spec, either bring them back in line or add a new server class so that the different spec is accounted for
1. Processes are repeatable
   - Any task carried out on the infrastructure should be repeatable
1. Design is always changing
   - With dynamic infrastructure, configuration changes are easy and cheap
   - Design simply for the current requirements instead of anticipating a wide range of future ones
   - Changing the configuration often is what makes it possible to change systems safely and quickly

## Practices

1. Use definition files
1. Self-documenting systems and processes
   - Documentation goes stale easily, so keep it near the code it describes, where it is easy to look up
   - Or make documentation generate automatically
1. Version all the things
1. Continuously test systems and processes
1. Make small changes rather than batches

## Characteristics of effective teams

- Every element of the infrastructure can be rebuilt quickly, with little effort
- All systems are kept patched, consistent, and up to date
- Infrastructure services such as provisioning servers and environments can be handled within minutes, without help from the infrastructure team
- Maintenance windows are rarely, if ever, needed
- The team cares about managing and improving Mean Time To Recover (MTTR).
  Mean Time Between Failures (MTBF) is tracked as well, but the team does not assume that failures can be avoided [^1]
- Team members feel that their work adds significant value to the organization

[^1]: http://www.kitchensoap.com/2010/11/07/mttr-mtbf-for-most-types-of-f/

--------------------------------------------------------------------------------
/CPP.md:
--------------------------------------------------------------------------------
# Modern C++

Since C++11

## Inheriting constructors

```cpp
// Old way
class A {
public:
  A() {}
  A(int var) {}
};

class B : public A {
public:
  B() : A() {}
  B(int var) : A(var) {}
};


// Since C++11
class B : public A {
public:
  using A::A;  // inherit all of the parent's constructors
};
```

See also: https://en.cppreference.com/w/cpp/language/using_declaration

## Read a whole binary file

```cpp
#include <fstream>
#include <iterator>
#include <string>

std::ifstream ifs(path, std::fstream::in | std::fstream::binary);
ifs.seekg(0, ifs.end);
std::streampos length = ifs.tellg();
ifs.seekg(0, ifs.beg);

std::string buffer;
buffer.reserve(length);
buffer.assign(
    (std::istreambuf_iterator<char>(ifs)),
    (std::istreambuf_iterator<char>()));
```

--------------------------------------------------------------------------------
/Cat.md:
--------------------------------------------------------------------------------
# Cat

The cutest animal.

## Pica in Cats

Pica is the term used for the behavior of eating non-food material.

- **Dietary deficiencies**
- **Medical problems**: Certain diseases such as diabetes, dental disease, hyperthyroidism, or brain disorders may be associated with pica behavior
- **Genetic predisposition**
- **Environmental factors**: Stress, boredom, or lack of attention
- **Compulsive disorder**

## References

- https://www.catbehaviorassociates.com/pica/
- https://pets.webmd.com/cats/guide/unusual-cat-cravings#1
- https://pets.stackexchange.com/a/1617

--------------------------------------------------------------------------------
/Confluence.md:
--------------------------------------------------------------------------------
# Confluence

## Insert attached images with Markdown

```md
![](/download/attachments/<page-id>/<attached-image-filename>)
```

reference: https://community.atlassian.com/t5/Confluence-questions/How-to-display-attached-images-with-markdown/qaq-p/621010

--------------------------------------------------------------------------------
/DNS-over-HTTPS.md:
--------------------------------------------------------------------------------
# 👷 WIP

# Table of Contents

- [DNS over HTTPS (DoH)](#dns-over-https-doh)
  - [Cloudflare 1.1.1.1](#cloudflare-1111)
  - [PC](#pc)
    - [Simple DNSCrypt](#simple-dnscrypt)
  - [Router](#router)
    - [Asuswrt-Merlin](#asuswrt-merlin)
  - [Android](#android)
  - [iOS](#ios)
  - [Firefox](#firefox)
  - [Testing the DoH setup](#testing-the-doh-setup)
- [ESNI (Encrypted Server Name Indication)](#esni-encrypted-server-name-indication)

# DNS over HTTPS (DoH)

Even when you use HTTPS, DNS queries are still sent in unencrypted plain text, so an eavesdropper can see which sites you are visiting.

DNS over HTTPS (DoH) is a way of exchanging DNS queries in encrypted form over the HTTPS protocol. It frees you from the privacy invasion of having your DNS requests monitored to learn which servers you connect to. (A determined eavesdropper can still identify the server you connect to by sniffing the SNI, though; only by applying [ESNI](#esni-encrypted-server-name-indication) as well can you be fully free from such monitoring.)

SEE: https://en.wikipedia.org/wiki/DNS_over_HTTPS

## Cloudflare 1.1.1.1

Many providers offer DoH; the best-known one that does not keep request logs permanently is Cloudflare.

Cloudflare provides DNS at the address `1.1.1.1`. But simply changing your DNS server address does not mean DoH is applied.

Cloudflare provides its own mobile apps, and through these apps you can use DNS with DoH applied.

- [iOS](https://itunes.apple.com/us/app/1-1-1-1-faster-internet/id1423538627?mt=8)
- [Android](https://play.google.com/store/apps/details?id=com.cloudflare.onedotonedotonedotone)

## PC

### Simple DNSCrypt

https://simplednscrypt.org/

On a PC, you can use DoH with Simple DNSCrypt. You can choose the DNS provider; personally I recommend one that does not store DNS query logs permanently (Cloudflare, for instance).

About log retention: according to an [article](https://www.theregister.co.uk/2018/04/03/cloudflare_dns_privacy/) on The Register, Cloudflare keeps logs for 24-48 hours, while Google keeps them for a long time. Even if it is Google, the thought of my request history sticking around for a long time instead of disappearing feels uncomfortable.

> // ref: https://www.theregister.co.uk/2018/04/03/cloudflare_dns_privacy/
>
> In this Cloudflare's venture is similar to Google's Public DNS (8.8.8.8), which claims that it keeps some data for just 24 to 48 hours. Google, however, keeps other non-personally identifiable information for longer periods.

## Router

### Asuswrt-Merlin

👷 WIP

## Android

~~There were [signs](https://android-developers.googleblog.com/2018/04/dns-over-tls-support-in-android-p.html) that Android P would support DoH, but it cannot be used on the current latest version, Android Oreo.~~

Since Android Pie, a `Private DNS Mode` has been added and DoH can be used. See [this link](https://blog.cloudflare.com/enable-private-dns-with-1-1-1-1-on-android-9-pie/) for details.

Below Android Pie, DoH can be used through third-party apps, most notably the official Cloudflare app and the Intra app.

If you are going to use Cloudflare DNS, using the official Cloudflare app is recommended.

- [Official Cloudflare app](https://play.google.com/store/apps/details?id=com.cloudflare.onedotonedotonedotone)
- [Intra](https://play.google.com/store/apps/details?id=app.intra&hl=en_US)

Both work by creating a VPN internally and applying DoH to every connection. When the app is active, Android shows that a VPN is connected.

The Intra app lets you configure one of two DoH servers: Cloudflare or Google.

As explained in the [Simple DNSCrypt](#simple-dnscrypt) section, Cloudflare, which does not store DNS logs permanently, is usually the one to recommend.

## iOS

On iOS, you can use DoH with the official Cloudflare app or the DNSCloak app. As on Android, they work by making use of the VPN facility.

- [Official Cloudflare app](https://itunes.apple.com/us/app/1-1-1-1-faster-internet/id1423538627?mt=8)
- [DNSCloak](https://itunes.apple.com/kr/app/dnscloak-dnscrypt-doh-client/id1330471557?mt=8)

## Firefox

Firefox has its own built-in DoH feature, available since Firefox 60. At the time of writing, I confirmed that Firefox for Android and for Windows is at version 60 or above.

Enter `about:config` in the URL bar to open the advanced configuration page.

Type `network.trr` into the `Search` box at the top so that only the preferences starting with `network.trr` are shown.

Then set the values of the following items:

- `network.trr.bootstrapAddress`: 1.1.1.1
- `network.trr.mode`: 2
- `network.trr.uri`: https://mozilla.cloudflare-dns.com/dns-query

Here is what each setting means.
Skip this part if you are not interested.

For reference, TRR stands for Trusted Recursive Resolver.

`network.trr.uri`: sets the URI of the DoH server to use. It must be an HTTPS address.

`network.trr.bootstrapAddress`: sets the IP address of the host configured in `network.trr.uri`. When this value is set, it is used instead of obtaining the host IP from the system. Since Cloudflare DoH is configured here, `1.1.1.1` is used.

`network.trr.mode`: sets how DoH behaves, depending on the value.

- 0 - (default) DoH is disabled
- 1 - send requests to both the system resolver and DoH at the same time, and use whichever answers first
- 2 - use DoH first, and fall back to the system resolver if the response fails
- 3 - use DoH only, never the system resolver
- 4 - run DoH and the system resolver in parallel for timing measurements, but use only the system resolver's responses
- 5 - same as 0; 0 is the default value, while 5 marks the value as explicitly chosen

If you want to use DoH only, set `network.trr.mode` to `3`.

Now Firefox applies DoH by itself, with no extra tools.

## Testing the DoH setup

To check that DoH is applied correctly, you can use a site such as https://dnsleaktest.com/.

Open the URL above and click the `Standard Test` button. If the provider is shown as `Cloudflare`, it worked; if your own ISP (KT, SKT, ...) is shown, DoH is not applied correctly.

# ESNI (Encrypted Server Name Indication)

👷 WIP

- https://blog.cloudflare.com/encrypted-sni/
- https://www.cloudflare.com/ssl/encrypted-sni/

--------------------------------------------------------------------------------
/Docker.md:
--------------------------------------------------------------------------------
## Create a sudoer user in Docker

```Dockerfile
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/*
RUN useradd -m user && \
    echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
    chmod 0400 /etc/sudoers.d/user
USER user
```

## Docker save, load

Documents:

- https://docs.docker.com/engine/reference/commandline/save/
- https://docs.docker.com/engine/reference/commandline/load/

> Save one or more images to a tar archive (streamed to STDOUT by default)

```bash
$ docker save -o etcd_v3.2.9.tar gcr.io/etcd-development/etcd:v3.2.9
```

```bash
$ docker load -i etcd_v3.2.9.tar
```

It is very useful when the `docker pull` command hangs while downloading an image.

## Multi-stage build

```Dockerfile
# base image
FROM microsoft/dotnet:2.1-runtime AS base
WORKDIR /app
EXPOSE 80

# build image
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
# `dotnet publish` command will run `dotnet restore` and `dotnet build` implicitly
RUN dotnet publish -c Release -o /app

# Add the binary files that were generated in the `build image` container above
FROM base AS final
WORKDIR /app
COPY --from=build /app .
RUN ls -al
ENTRYPOINT ["dotnet", "OrleansSilo.dll"]
```

## Set build-time variables (--build-arg)

Dockerfile:

```Dockerfile
FROM ubuntu
ARG MESSAGE
RUN echo $MESSAGE
```

Command:

```bash
$ docker build --build-arg MESSAGE=hello .
```

### Use ARG with ENV

```text
ARG <name>[=<default value>]
```

> You can use an `ARG` or an `ENV` instruction to specify variables that are available to the `RUN` instruction. Environment variables defined using the `ENV` instruction always override an `ARG` instruction of the same name. Consider this Dockerfile with an `ENV` and `ARG` instruction.

Dockerfile:

```Dockerfile
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER v1.0.0
RUN echo $CONT_IMG_VER
```

Command:

```bash
$ docker build --build-arg CONT_IMG_VER=v2.0.1 .
```

> In this case, the `RUN` instruction uses `v1.0.0` instead of the `ARG` setting passed by the user: `v2.0.1`

references:

- https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
- https://docs.docker.com/engine/reference/builder/#arg

--------------------------------------------------------------------------------
/DotNET/Load-Context.md:
--------------------------------------------------------------------------------
# .NET Assembly Load Context (or Binding Context)

## .NET Framework

### TL;DR

- the [Assembly.Load](https://docs.microsoft.com/en-us/dotnet/api/system.reflection.assembly.load) method loads the assembly into the Default Load Context.
- the [Assembly.LoadFrom](https://docs.microsoft.com/en-us/dotnet/api/system.reflection.assembly.loadfrom) method loads the assembly into the Load-From Context. It enables dependencies to be located and loaded from that path. In addition, assemblies in this context can use dependencies that are loaded into the Default Load Context.
- the [Assembly.LoadFile](https://docs.microsoft.com/en-us/dotnet/api/system.reflection.assembly.loadfile) method loads the assembly without any context, so its dependencies are not automatically loaded. (You might need a handler for the `AppDomain.AssemblyResolve` event.)

----

### Default Load Context

When assemblies are loaded into the default load context, their dependencies are loaded automatically. Dependencies that are loaded into the default load context are found automatically for assemblies in the default load context or the load-from context.

Disadvantages:

- Dependencies that are loaded into other contexts are not available.
- You cannot load assemblies from locations outside the probing path into the default load context.

### Load-From Context

The load-from context lets you load an assembly from a path that is not under the application path, and therefore is not included in probing. It enables dependencies to be located and loaded from that path, because the path information is maintained by the context. In addition, assemblies in this context can use dependencies that are loaded into the default load context.

Loading assemblies by using the Assembly.LoadFrom method, or one of the other methods that load by path, has the following disadvantages:

- If an assembly with the same identity is already loaded, LoadFrom returns the loaded assembly even if a different path was specified.
- If an assembly is loaded with LoadFrom, and later an assembly in the default load context tries to load the same assembly by display name, the load attempt fails. This can occur when an assembly is deserialized.
- If an assembly is loaded with LoadFrom, and the probing path includes an assembly with the same identity but in a different location, an InvalidCastException, MissingMethodException, or other unexpected behavior can occur.
- LoadFrom demands FileIOPermissionAccess.Read and FileIOPermissionAccess.PathDiscovery, or WebPermission, on the specified path.
- If a native image exists for the assembly, it is not used.
- The assembly cannot be loaded as domain-neutral.
- In the .NET Framework versions 1.0 and 1.1, policy is not applied.

### No Context

Loading without context is the only option for transient assemblies that are generated with reflection emit.
Loading without context is the only way to load multiple assemblies that have the same identity into one application domain. The cost of probing is avoided.

Disadvantages:

- Other assemblies cannot bind to assemblies that are loaded without context, unless you handle the AppDomain.AssemblyResolve event.
- Dependencies are not loaded automatically. You can preload them without context, preload them into the default load context, or load them by handling the AppDomain.AssemblyResolve event.
- Loading multiple assemblies with the same identity without context can cause type identity problems similar to those caused by loading assemblies with the same identity into multiple contexts. See Avoid Loading an Assembly into Multiple Contexts.
- If a native image exists for the assembly, it is not used.
- The assembly cannot be loaded as domain-neutral.
- In the .NET Framework versions 1.0 and 1.1, policy is not applied.

----

reference: https://docs.microsoft.com/en-us/dotnet/framework/deployment/best-practices-for-assembly-loading

## .NET Core

**LoadContext** can be viewed as a container for assemblies, their code and data (e.g. statics). Whenever an assembly is loaded, it is loaded within a load context - independent of whether the load was triggered explicitly (e.g. via Assembly.Load), implicitly (e.g. resolving static assembly references from the manifest) or dynamically (by emitting code on the fly).

In .NET Core, we have exposed a [managed API surface](https://github.com/dotnet/corefx/blob/master/src/System.Runtime.Loader/ref/System.Runtime.Loader.cs) that developers can use to interact with it - to inspect loaded assemblies or create their own **LoadContext** instance.

Here are some of the scenarios that motivated this work:

- Ability to load multiple versions of the same assembly within a given process (e.g. for plugin frameworks)
- Ability to load assemblies explicitly in a context isolated from that of the application.
- Ability to override assemblies being resolved from application context.
- Ability to have isolation of statics (as they are tied to the **LoadContext**)
- Expose LoadContext as a first class concept for developers to interface with and not be a magic.

### Default LoadContext

Every .NET Core app has a **LoadContext** instance created during .NET Core Runtime startup that we will refer to as the *Default LoadContext*. All application assemblies (including their transitive closure) are loaded within this **LoadContext** instance.

### Custom LoadContext

For scenarios that wish to have isolation between loaded assemblies, applications can create their own **LoadContext** instance by deriving from the **System.Runtime.Loader.AssemblyLoadContext** type and loading the assemblies within that instance.

Multiple assemblies with the same simple name cannot be loaded into a single load context *(Default or Custom)*. Also, .NET Core ignores the strong name token in the assembly binding process.

### API Surface

Most of the **AssemblyLoadContext** API surface is self-explanatory.

#### Default

This property will return a reference to the *Default LoadContext*.
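Before the remaining members, here is a minimal sketch of a custom context (the `PluginLoadContext` name and the plugin path are hypothetical, and `AssemblyDependencyResolver` is only available from .NET Core 3.0 onward):

```csharp
using System.Reflection;
using System.Runtime.Loader;

class PluginLoadContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;

    public PluginLoadContext(string pluginPath)
    {
        _resolver = new AssemblyDependencyResolver(pluginPath);
    }

    // Resolve plugin dependencies from the plugin's own directory;
    // returning null falls back to the Default LoadContext.
    protected override Assembly Load(AssemblyName assemblyName)
    {
        string path = _resolver.ResolveAssemblyToPath(assemblyName);
        return path != null ? LoadFromAssemblyPath(path) : null;
    }
}
```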

#### Load

This method should be overridden in a *Custom LoadContext* if the intent is to override the assembly resolution that would be done during fallback to the *Default LoadContext*.

#### LoadFromAssemblyName

This method can be used to load an assembly into a load context different from the load context of the currently executing assembly. The assembly will be loaded into the load context on which the method is called. If the context can't resolve the assembly in its **Load** method, the assembly loading will defer to the **Default** load context. In such a case it's possible the loaded assembly is from the **Default** context even though the method was called on a non-default context.

Calling this method directly on the AssemblyLoadContext.Default will only load the assembly from the Default context. Depending on the caller, the Default may or may not be different from the load context of the currently executing assembly.

To make sure a specified assembly is loaded into the specified load context, call **AssemblyLoadContext.LoadFromAssemblyPath** and specify the path to the assembly file.

### Assembly Load APIs and LoadContext

- Assembly.Load - loads the assembly into the context of the assembly that triggers the load.
- Assembly.LoadFrom - loads the assembly into the *Default LoadContext*
- Assembly.LoadFile - creates a new (anonymous) load context to load the assembly into.
- Assembly.Load(byte[]) - creates a new (anonymous) load context to load the assembly into.

references:

- https://github.com/dotnet/coreclr/blob/master/Documentation/design-docs/assemblyloadcontext.md
- https://github.com/dotnet/corefx/tree/master/src/System.Runtime.Loader

--------------------------------------------------------------------------------
/DotNET/NuGet.md:
--------------------------------------------------------------------------------
# NuGet

## Install a local NuGet package file

Add a `NuGet.Config` file to the project directory that registers a local folder as a package source:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- local folder that holds the *.nupkg files -->
    <add key="local" value="..\NugetPackages" />
  </packageSources>
</configuration>
```

Copy the `*.nupkg` files to the `..\NugetPackages` directory.

Run command:

```cmd
> dotnet add package <package-name>
```

Alternatively, in .NET Core 2.0 tools / NuGet 4.3.0, you could also add the source directly to the csproj file that is supposed to consume the NuGet package:

```xml
<PropertyGroup>
  <RestoreSources>$(RestoreSources);../foo/bin/Debug;https://api.nuget.org/v3/index.json</RestoreSources>
</PropertyGroup>
```

reference: https://stackoverflow.com/a/44463578

## Install packages from multiple sources

Add multiple sources to the `NuGet.Config` file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="private" value="http://private-nuget-repo-source" />
  </packageSources>
</configuration>
```

Copy `NuGet.Config` to one of the config file locations.

Config file locations:

- `$HOME/.config/NuGet/NuGet.Config` (or `$HOME/.nuget/NuGet/NuGet.Config`).
- Solution root directory
- Project root directory

SEE [NuGet Config file locations](https://docs.microsoft.com/en-us/nuget/consume-packages/configuring-nuget-behavior#config-file-locations-and-uses)

**IMPORTANT**: the paths and the `NuGet.Config` file name are CASE SENSITIVE.

And run `dotnet restore` or `dotnet restore --configfile <path-to-NuGet.Config>`.

If the above instructions don't work, it is a bug. There is [an issue](https://github.com/NuGet/Home/issues/6140).

You can temporarily work around the bug by following the instructions below:

```bash
$ dotnet restore --source "http://private-nuget-repo-source" || true
$ dotnet restore --source "https://api.nuget.org/v3/index.json" || true
$ dotnet restore --source "https://api.nuget.org/v3/index.json;http://private-nuget-repo-source"
```

--------------------------------------------------------------------------------
/FFmpeg.md:
--------------------------------------------------------------------------------
# FFmpeg

FFmpeg snippets

## Concatenate

```txt
# filelist.txt
file '/path/to/file1.ts'
file '/path/to/file2.ts'
file '/path/to/file3.ts'
```

```bash
$ ffmpeg -f concat -i filelist.txt -c copy merged.ts
```

Or

```bash
$ ffmpeg -i "concat:file1.ts|file2.ts|file3.ts" -c copy merged.ts
```

reference: https://trac.ffmpeg.org/wiki/Concatenate

## TS to MP4

```bash
$ ffmpeg -i input.ts -acodec copy -vcodec copy output.mp4
```

--------------------------------------------------------------------------------
/Firefox.md:
--------------------------------------------------------------------------------
# Firefox

## Disable the Enterprise Roots preference

Open `about:config` in the URL bar

- Set `security.certerrors.mitm.auto_enable_enterprise_roots` to `false`
- Set `security.enterprise_roots.auto-enabled` to `false`
- Set `security.enterprise_roots.enabled` to `false`

--------------------------------------------------------------------------------
/GPG.md:
--------------------------------------------------------------------------------
# GPG

## Generate a new GPG key

```bash
gpg --full-generate-key
```

## Export all public keys

```bash
gpg --export --armor > public-keys.asc
```

## Export all private keys

```bash
gpg --export-secret-keys --armor > private-keys.asc
```

## List all keys

```bash
gpg --list-keys
```

## List all keys with subkey fingerprints

```bash
gpg --list-keys --with-subkey-fingerprint
```

## Subkey

A GPG key, often referred to as a private key, can have associated subkeys. Subkeys are beneficial as they help keep the primary key safe. For instance, you can create multiple subkeys, each for a different purpose, and use them to sign, encrypt, or authenticate messages.
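For example, a signing subkey can be added to an existing key non-interactively (a sketch; the fingerprint, algorithm, and expiry below are placeholders):

```bash
gpg --quick-add-key <key-fingerprint> ed25519 sign 2y
```

Listing the keys afterwards, as below, shows each subkey with its own fingerprint.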

```bash
$ gpg -k --with-subkey-fingerprint
/home/user/.gnupg/pubring.kbx
----------------------------------
pub   ed25519 2024-04-28 [SC] [expires: 2026-04-28]
      28D3DC46315EF15EC3FB0DB64FE2CF412F192CDD
uid           [ultimate] Chanwoong Kim
sub   cv25519 2024-04-28 [E] [expires: 2026-04-28]
      64C8EDE87A62FA45E28ABBC9DEDFDD5BE94C46B7
```

- SC: Sign and Certify
- S: Sign
- C: Certify
- E: Encryption
- A: Authentication

--------------------------------------------------------------------------------
/Git.md:
--------------------------------------------------------------------------------
## git flow: finish a hotfix branch when a release branch currently exists

http://nvie.com/posts/a-successful-git-branching-model/

The one exception to the rule here is that, **when a release branch currently exists, the hotfix changes need to be merged into that release branch, instead of** `develop`. Back-merging the bugfix into the release branch will eventually result in the bugfix being merged into `develop` too, when the release branch is finished. (If work in `develop` immediately requires this bugfix and cannot wait for the release branch to be finished, you may safely merge the bugfix into `develop` now already as well.)

https://github.com/nvie/gitflow/issues/177

SEE: https://github.com/petervanderdoes/gitflow-avh/issues/161

This may be optional. We got a release management including RC versions, which are based on the current release branch. However, currently we need to manually merge the master back into the release branch when finishing a hotfix.

--------------------------------------------------------------------------------
/GitLab-Runner.md:
--------------------------------------------------------------------------------
# Run GitLab Runner in a Docker container

Register GitLab Runner to create the `/etc/gitlab-runner/config.toml`:

```bash
$ docker run --rm -t -i -v /etc/gitlab-runner:/etc/gitlab-runner gitlab/gitlab-runner:latest register

Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab.com

Please enter the gitlab-ci token for this runner:
token

Please enter the gitlab-ci description for this runner:
[xxxxxxxxxxxx]: runner desc

Please enter the gitlab-ci tags for this runner (comma separated):

Registering runner... succeeded                     runner=XXXXXXXX

Please enter the executor: parallels, ssh, virtualbox, docker+machine, docker-ssh+machine, docker, docker-ssh, shell, kubernetes, custom:
docker

Please enter the default Docker image (e.g. ruby:2.6):
alpine:latest

Runner registered successfully.
```

Run a GitLab Runner container:

```bash
$ docker run -d --name gitlab-runner --restart always \
    -v /etc/gitlab-runner:/etc/gitlab-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gitlab/gitlab-runner:latest
```

references:

- https://docs.gitlab.com/runner/install/docker.html
- https://docs.gitlab.com/runner/register/

--------------------------------------------------------------------------------
/Go.md:
--------------------------------------------------------------------------------
# Golang

## Memory leak with `time.After()`

Problematic code:

```go
select {
case <-time.After(time.Second * 10):
    // do something after 10 seconds.
case <-ctx.Done():
    // do something when the context is finished.
    // but, the underlying timer created by `time.After()` will not be garbage collected.
    // Memory leaked!
}
```

Correct code:

```go
timer := time.NewTimer(time.Second * 10)
// Call Stop() when the timer is no longer needed. It is important.
defer timer.Stop()

select {
case <-timer.C:
    // do something after 10 seconds.
case <-ctx.Done():
    // do something when the context is finished.
}
```

from godoc: https://godoc.org/time#After

> After waits for the duration to elapse and then sends the current time on the returned channel. It is equivalent to NewTimer(d).C. The underlying Timer is not recovered by the garbage collector until the timer fires. If efficiency is a concern, use NewTimer instead and call Timer.Stop if the timer is no longer needed.

## Use the `GOPRIVATE` environment variable with private modules

```text
$ go help module-private
The go command defaults to downloading modules from the public Go module
mirror at proxy.golang.org. It also defaults to validating downloaded modules,
regardless of source, against the public Go checksum database at sum.golang.org.
These defaults work well for publicly available source code.

The GOPRIVATE environment variable controls which modules the go command
considers to be private (not available publicly) and should therefore not use the
proxy or checksum database. The variable is a comma-separated list of
glob patterns (in the syntax of Go's path.Match) of module path prefixes.
For example,

	GOPRIVATE=*.corp.example.com,rsc.io/private

causes the go command to treat as private any module with a path prefix
matching either pattern, including git.corp.example.com/xyzzy, rsc.io/private,
and rsc.io/private/quux.

The GOPRIVATE environment variable may be used by other tools as well to
identify non-public modules. For example, an editor could use GOPRIVATE
to decide whether to hyperlink a package import to a godoc.org page.

For fine-grained control over module download and validation, the GONOPROXY
and GONOSUMDB environment variables accept the same kind of glob list
and override GOPRIVATE for the specific decision of whether to use the proxy
and checksum database, respectively.
66 | 
67 | For example, if a company ran a module proxy serving private modules,
68 | users would configure go using:
69 | 
70 | 	GOPRIVATE=*.corp.example.com
71 | 	GOPROXY=proxy.example.com
72 | 	GONOPROXY=none
73 | 
74 | This would tell the go command and other tools that modules beginning with
75 | a corp.example.com subdomain are private but that the company proxy should
76 | be used for downloading both public and private modules, because
77 | GONOPROXY has been set to a pattern that won't match any modules,
78 | overriding GOPRIVATE.
79 | 
80 | The 'go env -w' command (see 'go help env') can be used to set these variables
81 | for future go command invocations.
82 | ```
-------------------------------------------------------------------------------- /HDMI-Cables.md: --------------------------------------------------------------------------------
1 | # HDMI cables
2 | 
3 | ## There is no 'HDMI 2.0 Cable'
4 | 
5 | HDMI cables are just dumb pipes. There is no such thing as an HDMI 1.4 or 2.0 cable.
6 | 
7 | There are only two different kinds of HDMI cables specified by the industry:
8 | 
9 | - High Speed
10 | - Standard
11 | 
12 | High Speed (with Ethernet) HDMI cables will support the new, higher bandwidths (up to 18 Gbps); this is what people mean when they talk about "HDMI 2.0" cables.
13 | 
14 | ## HDMI 2.1
15 | 
16 | The new version is called **HDMI 2.1**, and it adds several new features, including a new cable type.
17 | 
18 | ## HDMI Cables
19 | 
20 | ![HDMI-Cables](images/2018-11-28-09-53-52.png)
21 | 
22 | ## References
23 | 
24 | - https://www.hdmi.org/consumer/finding_right_cable.aspx
25 | - https://www.cnet.com/news/do-you-need-new-hdmi-cables-for-hdr/
-------------------------------------------------------------------------------- /Hack.md: --------------------------------------------------------------------------------
1 | # Hack
2 | 
3 | Hackish tips and snippets.
4 | 
5 | ## Single script to run in both Windows batch and Linux Bash
6 | 
7 | ### Single line script
8 | 
9 | Use the CMD label trick:
10 | 
11 | - The label character, a colon (`:`), is equivalent to `true` in most POSIXish shells
12 | - CMD will ignore lines that start with `:` (the label character)
13 | 
14 | ```bash
15 | :; echo "Hi, I’m ${SHELL}."; exit $?
16 | @ECHO OFF
17 | ECHO I'm %COMSPEC%
18 | ```
19 | 
20 | Don’t forget that any use of `$?` must come before your next colon `:`, because `:` resets `$?` to 0.
21 | 
22 | ### Multi line script
23 | 
24 | Use the heredoc trick:
25 | 
26 | ```bash
27 | :; echo "I am ${SHELL}"
28 | :<<"::CMDLITERAL"
29 | ECHO I am %COMSPEC%
30 | ::CMDLITERAL
31 | :; echo "And ${SHELL} is back!"
32 | :; exit
33 | ECHO And back to %COMSPEC%
34 | ```
35 | 
36 | Use the heredoc-with-GOTO trick:
37 | 
38 | ```bash
39 | :<<"::CMDLITERAL"
40 | @ECHO OFF
41 | GOTO :CMDSCRIPT
42 | ::CMDLITERAL
43 | 
44 | echo "I can write free-form ${SHELL} now!"
45 | if :; then
46 |     echo "This makes conditional constructs so much easier because"
47 |     echo "they can now span multiple lines."
48 | fi
49 | exit $?
50 | 
51 | :CMDSCRIPT
52 | ECHO Welcome to %COMSPEC%
53 | ```
54 | 
55 | ### Universal comment
56 | 
57 | Universal comments, of course, can be done with the character sequence `: #` or `:;#`. The space or semicolon is necessary because `sh` considers `#` to be part of a command name if it is not the first character of an identifier.
58 | 
59 | ```bash
60 | : # This is a special script which intermixes both sh
61 | : # and cmd code. It is written this way because it is
62 | : # used in system() shell-outs directly in otherwise
63 | : # portable code. See https://stackoverflow.com/questions/17510688
64 | : # for details.
65 | :; echo "This is ${SHELL}"; exit
66 | @ECHO OFF
67 | ECHO This is %COMSPEC%
68 | ```
69 | 
70 | ### reference
71 | 
72 | - https://stackoverflow.com/a/17623721
-------------------------------------------------------------------------------- /Kubernetes/CoreDNS.md: --------------------------------------------------------------------------------
1 | # Add DNS-over-TLS support (`tls://` forwarding)
2 | 
3 | ```Corefile
4 | forward . tls://1.1.1.1 {
5 |     tls_servername tls.cloudflare-dns.com
6 | }
7 | ```
8 | 
9 | reference: https://github.com/coredns/coredns/issues/1650#issuecomment-377790487
-------------------------------------------------------------------------------- /Kubernetes/MicroK8s.md: --------------------------------------------------------------------------------
1 | # MicroK8s
2 | 
3 | A lightweight, single-node Kubernetes cluster.
4 | 
5 | https://microk8s.io/
6 | 
7 | > If you need an even lighter Kubernetes cluster (including for IoT), see https://k3s.io/
8 | 
9 | ## Installation
10 | 
11 | ```bash
12 | # Install microk8s from Snap
13 | $ sudo snap install microk8s --classic
14 | # Use microk8s command without sudo (re-login required)
15 | $ sudo usermod -a -G microk8s $USER
16 | # Add alias for microk8s.kubectl
17 | $ alias kubectl='microk8s.kubectl'
18 | ```
19 | 
20 | ## Quickstarts
21 | 
22 | - `microk8s.status`: Show status
23 | - `microk8s.enable `: Enable a microk8s addon
24 | - `microk8s.disable `: Disable a microk8s addon
25 | - `microk8s.kubectl`: Run kubectl
26 | 
27 | ## Recommended addons
28 | 
29 | - dns (CoreDNS)
30 | - helm
31 | - metallb (MetalLB)
32 | 
33 | link: [list of all MicroK8s addons](https://microk8s.io/docs/addons)
34 | 
35 | ## Troubleshooting
36 | 
37 | - `microk8s.inspect`: Show system status and collect logs
38 | 
39 | SEE: https://microk8s.io/docs/troubleshooting
40 | 
41 | ----
42 | 
43 | ## Do not expose NodePorts externally
44 | 
45 | `serviceType=NodePort` or `serviceType=LoadBalancer` will open the NodePort externally, even if a firewall (e.g., ufw) is enabled.
46 | 
47 | [This is because `kube-proxy` writes its own `iptables` rules.](https://stackoverflow.com/a/53142983)
48 | 
49 | The port can't be closed outright, but kube-proxy can be configured so that it is not exposed externally.
50 | 
51 | ```
52 | # append below line to /var/snap/microk8s/current/args/kube-proxy
53 | --nodeport-addresses=["::1/128","127.0.0.1/32"]
54 | ```
55 | 
56 | reference: https://github.com/kubernetes/kubernetes/pull/89998#issuecomment-611590526
57 | 
58 | `127.0.0.0/8` for localhost only ( ref: https://serverfault.com/a/1024340 )
59 | 
60 | ```
61 | # append below line to /var/snap/microk8s/current/args/kube-proxy
62 | --nodeport-addresses=127.0.0.0/8
63 | ```
-------------------------------------------------------------------------------- /Kubernetes/Minikube.md: --------------------------------------------------------------------------------
1 | # Minikube
2 | 
3 | ## Run minikube without a VM
4 | 
5 | No. Please do not do this, ever.
6 | 
7 | minikube was designed to run Kubernetes within a dedicated VM, and when used with --vm-driver=none, may overwrite system binaries, configuration files, and system logs. Executing minikube --vm-driver=none outside of a VM could result in data loss, system instability and decreased security.
8 | 
9 | ### references
10 | 
11 | - https://github.com/kubernetes/minikube/issues/2575
12 | - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
-------------------------------------------------------------------------------- /Kubernetes/Traefik.md: --------------------------------------------------------------------------------
1 | # Traefik
2 | 
3 | ## Install Traefik on MicroK8s (using helm)
4 | 
5 | ```bash
6 | # Set externalIP manually
7 | $ helm install stable/traefik --name traefik --set externalIP= --namespace kube-system
8 | ```
9 | 
10 | or
11 | 
12 | ```bash
13 | # Deploy as NodePort
14 | $ helm install stable/traefik --name traefik --set serviceType=NodePort --namespace kube-system
15 | 
16 | # Show NodeIP and NodePort
17 | $ kubectl describe svc traefik --namespace kube-system
18 | ```
19 | 
20 | You can also use `serviceType=LoadBalancer` if a load balancer is available.
21 | 
22 | ### Enable Dashboard and set Dashboard domain
23 | 
24 | 1. Append the options below to the `helm install stable/traefik ...` command
25 | 
26 | ```
27 | --set dashboard.enabled=true,dashboard.domain=traefik-dashboard.local
28 | ```
29 | 
30 | 2. Configure the DNS record. (/etc/hosts)
31 | 
32 | ```bash
33 | $ echo ' traefik-dashboard.local' >> /etc/hosts
34 | ```
35 | 
36 | ## Routing
37 | 
38 | See: https://docs.traefik.io/v1.7/user-guide/kubernetes/
39 | 
40 | ### Name-based Routing
41 | 
42 | ```yml
43 | apiVersion: extensions/v1beta1
44 | kind: Ingress
45 | metadata:
46 |   name: cheese
47 |   annotations:
48 |     kubernetes.io/ingress.class: traefik
49 | spec:
50 |   rules:
51 |   - host: stilton.minikube
52 |     http:
53 |       paths:
54 |       - backend:
55 |           serviceName: stilton
56 |           servicePort: http
57 |   - host: cheddar.minikube
58 |     http:
59 |       paths:
60 |       - backend:
61 |           serviceName: cheddar
62 |           servicePort: http
63 |   - host: wensleydale.minikube
64 |     http:
65 |       paths:
66 |       - path: /
67 |         backend:
68 |           serviceName: wensleydale
69 |           servicePort: http
70 | ```
71 | 
72 | ### Path-based Routing
73 | 
74 | ```yml
75 | apiVersion: extensions/v1beta1
76 | kind: Ingress
77 | metadata:
78 |   name: cheeses
79 |   annotations:
80 |     kubernetes.io/ingress.class: traefik
81 |     traefik.frontend.rule.type: PathPrefixStrip
82 | spec:
83 |   rules:
84 |   - host: cheeses.minikube
85 |     http:
86 |       paths:
87 |       - path: /stilton
88 |         backend:
89 |           serviceName: stilton
90 |           servicePort: http
91 |       - path: /cheddar
92 |         backend:
93 |           serviceName: cheddar
94 |           servicePort: http
95 |       - path: /wensleydale
96 |         backend:
97 |           serviceName: wensleydale
98 |           servicePort: http
99 | ```
-------------------------------------------------------------------------------- /LICENSE: --------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2018 Chanwoong Kim
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
-------------------------------------------------------------------------------- /Linux/Bash.md: --------------------------------------------------------------------------------
1 | # bash script
2 | 
3 | ## Unofficial Strict Mode
4 | 
5 | ```bash
6 | #!/bin/bash
7 | set -euo pipefail
8 | #    ││└─ 'set -o pipefail' will cause a pipeline to return a failure status
9 | #    ││   if any command fails.
10 | #    │└ Treat unset variables and parameters other than the special parameters
11 | #    │  ‘@’ or ‘*’ as an error when performing parameter expansion.
12 | #    └ Exit immediately if a pipeline, which may consist of a single
13 | #      simple command, a list, or a compound command returns a non-zero status.
14 | #
15 | # SEE: https://www.gnu.org/software/bash/manual/bashref.html#The-Set-Builtin
16 | ```
17 | 
18 | ## Variable name
19 | 
20 | Uppercase-only variable names are not recommended.
21 | 
22 | ref: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html paragraph 4
23 | 
24 | > Environment variable names used by the utilities in the Shell and Utilities volume of POSIX.1-2017 consist solely of uppercase letters, digits, and the underscore ( '_' ) from the characters defined in Portable Character Set and do not begin with a digit.
25 | > The name space of environment variable names containing lowercase letters is reserved for applications.
26 | 
27 | ## Parameter expansion
28 | 
29 | ref: http://www.gnu.org/software/bash/manual/bash.html#Shell-Parameter-Expansion
30 | 
31 | > When not performing substring expansion, using the form described below (e.g., ‘:-’), Bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.
32 | 
33 | ```
34 | ${parameter:-word}
35 | 
36 | If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
37 | 
38 | ${parameter:=word}
39 | 
40 | If parameter is unset or null, the expansion of word is assigned to parameter. The value of parameter is then substituted. Positional parameters and special parameters may not be assigned to in this way.
41 | 
42 | ${parameter:?word}
43 | 
44 | If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits. Otherwise, the value of parameter is substituted.
45 | 
46 | ${parameter:+word}
47 | 
48 | If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
49 | 
50 | ...
51 | 
52 | ```
53 | 
54 | ## Check environment variable exists
55 | 
56 | ```bash
57 | if [ "${VARIABLE:-x}" == "x" ]; then
58 | #                └─ if $VARIABLE is unset or null, set 'x' as default value.
59 |     echo 'variable is unset or null'
60 | fi
61 | ```
62 | 
63 | ## check environment variable value
64 | 
65 | ```bash
66 | if [ "$variable" == "value" ]
67 | then
68 |     # blahblah
69 | fi
70 | ```
71 | 
72 | ## check directory exists
73 | 
74 | ```bash
75 | if [ ! -d "/path/to/check" ]
76 | then
77 |     echo 'path does not exist!'
78 | fi
79 | ```
80 | 
81 | ## check file exists
82 | 
83 | ```bash
84 | if [ ! -f /path/to/check ]
85 | then
86 |     echo 'file does not exist'
87 | fi
88 | ```
89 | 
90 | ## check command exists
91 | 
92 | ```bash
93 | if ! which docker > /dev/null
94 | then
95 |     echo 'docker command does not exist'
96 | fi
97 | ```
98 | 
99 | ```bash
100 | function has () {
101 |     # Or type "$1" &> /dev/null
102 |     type "$1" > /dev/null 2>&1
103 | }
104 | 
105 | if has docker; then
106 |     echo 'docker command exists'
107 | fi
108 | ```
109 | 
110 | ## check process is running
111 | 
112 | ```bash
113 | if pgrep ssh > /dev/null; then
114 |     echo 'ssh is running'
115 | fi
116 | ```
117 | 
118 | ## pass environment variable
119 | 
120 | ```bash
121 | export ENV=xxx
122 | command blahblah
123 | unset ENV
124 | ```
125 | 
126 | or
127 | 
128 | ```bash
129 | ENV=xxxx command blahblah
130 | ```
131 | 
132 | ## declare
133 | 
134 | ```
135 | SYNTAX
136 | declare [-afFrxi] [-p] [name[=value]]
137 | 
138 | OPTIONS
139 | 
140 | -a  Each name is an array variable.
141 | 
142 | -f  Use function names only.
143 | 
144 | -F  Inhibit the display of function definitions;
145 |     only the function name and attributes are printed.
146 |     (implies -f)
147 | 
148 | -i  The variable is to be treated as an integer;
149 |     arithmetic evaluation is performed when the
150 |     variable is assigned a value.
151 | 
152 | -p  Display the attributes and values of each name.
153 |     When `-p' is used, additional options are ignored.
154 | 
155 | -r  Make names readonly. These names cannot then
156 |     be assigned values by subsequent assignment statements
157 |     or unset.
158 | 
159 | -x  Mark each name for export to subsequent commands
160 |     via the environment.
161 | ```
162 | 
163 | ## What is the difference between `declare -r` and `readonly` in bash?
164 | 
165 | reference: https://stackoverflow.com/a/30362832/1545387
166 | 
167 | So one difference is that `readonly` makes the variable's scope global, while `declare` makes the variable's scope local (which is expected).
168 | Note: adding the `-g` flag to the `declare` statement (e.g. `declare -rg a="a1"`) makes the variable's scope global.
169 | 
170 | ## Getting the source directory of a Bash script from within
171 | 
172 | ```bash
173 | # from: https://stackoverflow.com/a/246128/1545387
174 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
175 | ```
176 | 
177 | ## Print error line
178 | 
179 | ref: https://unix.stackexchange.com/a/39660
180 | 
181 | ```bash
182 | #! /bin/bash
183 | 
184 | err_report() {
185 |     echo "Error on line $1"
186 | }
187 | 
188 | trap 'err_report $LINENO' ERR
189 | #    └─ single quotes prevent `$LINENO` from being expanded when the trap line is first parsed.
190 | ```
191 | 
192 | ## Tokenize a string
193 | 
194 | ```bash
195 | string="hello;world"
196 | IFS=';' read -r -a tokens <<< "$string"
197 | echo "${tokens[*]}"
198 | ```
199 | 
200 | ### divided by newlines
201 | 
202 | You need to use the `-d` switch to `read`:
203 | 
204 | > `-d` *delim*
205 | >
206 | > The first character of delim is used to terminate the input line, rather than newline.
207 | 
208 | ```bash
209 | $ string=$'hello\nworld'
210 | $ IFS=$'\n' read -r -d '' -a arr < <(printf '%s\0' "$string")
211 | $ declare -p arr
212 | declare -a arr='([0]="hello" [1]="world")'
213 | ```
214 | 
215 | Or
216 | 
217 | ```bash
218 | IFS=$'\n' read -r -d '' -a arr <<< "$var"
219 | ```
220 | 
221 | In this case, the content of `arr` is the same; the only difference is that the return code of `read` is 1 (failure).
222 | 
223 | reference: https://stackoverflow.com/a/28417633/1545387
224 | 
225 | ## What is '<<<' ?
226 | 
227 | ref: https://stackoverflow.com/a/16045687
228 | 
229 | `<<<` is a bash-specific redirection operator.
230 | 
231 | Bash:
232 | 
233 | ```bash
234 | if grep -q "^127.0.0." <<< "$RESULT"
235 | then
236 |     echo IF-THEN
237 | fi
238 | ```
239 | 
240 | Equivalent without Bash-specific syntax:
241 | 
242 | ```bash
243 | if echo "$RESULT" | grep -q "^127.0.0."
244 | then
245 |     echo IF-THEN
246 | fi
247 | ```
248 | 
249 | ## Here Document
250 | 
251 | ref: http://www.gnu.org/software/bash/manual/bashref.html#Here-Documents
252 | 
253 | ## Empty array expansion with 'set -u'
254 | 
255 | ref: https://stackoverflow.com/a/7577209
256 | 
257 | ```bash
258 | $ set -u
259 | $ arr=()
260 | $ echo "foo: '${arr[@]}'"
261 | bash: arr[@]: unbound variable
262 | ```
263 | 
264 | Use `${arr[@]+"${arr[@]}"}` instead of `"${arr[@]}"`.
265 | 
266 | ```bash
267 | $ function args { perl -E'say 0+@ARGV; say "$_: $ARGV[$_]" for 0..$#ARGV' -- "$@" ; }
268 | 
269 | $ set -u
270 | 
271 | $ arr=()
272 | 
273 | $ args "${arr[@]}"
274 | -bash: arr[@]: unbound variable
275 | 
276 | $ args ${arr[@]+"${arr[@]}"}
277 | 0
278 | 
279 | $ arr=("")
280 | 
281 | $ args ${arr[@]+"${arr[@]}"}
282 | 1
283 | 0:
284 | 
285 | $ arr=(a b c)
286 | 
287 | $ args ${arr[@]+"${arr[@]}"}
288 | 3
289 | 0: a
290 | 1: b
291 | 2: c
292 | ```
293 | 
294 | ## Snippets
295 | 
296 | ### Write file with cat
297 | 
298 | ```bash
299 | cat << 'EOF'| tee '/path/to/file'
300 | blah blah
301 | blah blah
302 | EOF
303 | ```
304 | 
305 | ### Confirm message
306 | 
307 | ```bash
308 | read -p "Are you sure? [y/N]: " -r
309 | if [[ ! "$REPLY" =~ ^[Yy]$ ]]; then
310 |     exit 1
311 | fi
312 | ```
313 | 
314 | ### Extract tar.gz with wget
315 | 
316 | ```bash
317 | $ wget -qO- http://tar.gz.link | tar xvz -C /target/directory
318 | ```
319 | 
320 | ### Check whether sudo requires a password
321 | 
322 | ref: https://askubuntu.com/a/357222
323 | 
324 | ```bash
325 | if sudo -n true 2>/dev/null; then
326 |     echo "I got sudo"
327 | else
328 |     echo "I don't have sudo"
329 | fi
330 | ```
331 | 
332 | ```bash
333 | # explain
334 | sudo -n true 2>/dev/null;
335 | #     │       └─ If the password is not required, then this expression is true.
336 | #     └─ non-interactive option. Avoid prompting.
337 | #        If a password is required, sudo will display an error and exit.
338 | ```
339 | 
340 | ### Increase counter variable
341 | 
342 | ```bash
343 | counter=1
344 | while cond; do
345 |     # do something
346 |     counter=$((counter+1))
347 | done
348 | ```
349 | 
350 | ### Read a file line by line
351 | 
352 | ```bash
353 | while IFS= read -r line; do
354 |     echo "$line"
355 | done < 'file.txt'
356 | ```
357 | 
358 | ### Strip string
359 | 
360 | Trim leading and trailing spaces
361 | 
362 | ```bash
363 | awk '{$1=$1};1'
364 | ```
365 | 
366 | reference: https://unix.stackexchange.com/a/205854
-------------------------------------------------------------------------------- /Linux/Common.md: --------------------------------------------------------------------------------
1 | # Linux
2 | 
3 | ## What is a system user (group)?
4 | 
5 | You can create a system user (or group) with `adduser --system` (`addgroup --system` for a group). There is no technical difference between a system user and a normal user.
6 | 
7 | It is a convention for the UID range.
8 | 
9 | - `SYS_UID_MIN <= UID <= SYS_UID_MAX` is the system user
10 | - `SYS_UID_MAX < UID` is the normal user
11 | - `SYS_GID_MIN <= GID <= SYS_GID_MAX` is the system group
12 | - `SYS_GID_MAX < GID` is the normal group
13 | 
14 | You can see the boundary values from `/etc/login.defs`.
15 | 
16 | ```
17 | # /etc/login.defs
18 | 
19 | #
20 | # Min/max values for automatic uid selection in useradd
21 | #
22 | UID_MIN                  1000
23 | UID_MAX                 60000
24 | # System accounts
25 | #SYS_UID_MIN              100
26 | #SYS_UID_MAX              999
27 | 
28 | #
29 | # Min/max values for automatic gid selection in groupadd
30 | #
31 | GID_MIN                  1000
32 | GID_MAX                 60000
33 | # System accounts
34 | #SYS_GID_MIN              100
35 | #SYS_GID_MAX              999
36 | ```
37 | 
38 | references:
39 | - https://askubuntu.com/a/524010
40 | - https://unix.stackexchange.com/a/80279
-------------------------------------------------------------------------------- /Linux/Conventions.md: --------------------------------------------------------------------------------
1 | # .dist file
2 | 
3 | from: https://stackoverflow.com/a/16843246/1545387
4 | 
5 | `.dist` files are often configuration files which do not contain the real-world deploy-specific parameters (e.g. Database Passwords, etc.), and are there to help you get started with the application/framework faster. So, to get started with such frameworks, you should remove the `.dist` extension, and customize your configuration file with your personal parameters.
6 | 
7 | One purpose I have seen in using the `.dist` extension is to avoid publishing personal data on VCSs (say git). So, you, as the developer of a reusable app, would use your own configuration file, but put the de-facto get-started config data in a separate `.dist`-suffixed file.
-------------------------------------------------------------------------------- /Linux/SSH.md: --------------------------------------------------------------------------------
1 | # SSH Proxy settings
2 | 
3 | Create a user for the SSH proxy
4 | 
5 | ```bash
6 | $ sudo useradd -s /bin/false tunnel
7 | ```
8 | 
9 | Edit `/etc/ssh/sshd_config`
10 | 
11 | ```sshd_config
12 | Match User tunnel
13 |    X11Forwarding no
14 |    PermitTTY no
15 |    PermitTunnel no
16 |    AllowAgentForwarding no
17 |    GatewayPorts no
18 |    AllowTcpForwarding yes
19 |    AuthorizedKeysFile /etc/ssh/authorized_keys_%u
20 | ```
21 | 
22 | ```bash
23 | $ ssh-keygen -t rsa -b 4096 -C "tunnel"
24 | $ cat id_rsa.pub | sudo tee -a /etc/ssh/authorized_keys_tunnel
25 | $ sudo systemctl restart ssh
26 | ```
27 | 
28 | ----
29 | 
30 | Connect to the server
31 | 
32 | ```bash
33 | $ ssh -N -D -C tunnel@host
34 | ```
35 | 
36 | - `-D `: Open a SOCKS proxy on local port ``.
37 | - `-C`: Requests compression of all data.
38 | - `-q`: Quiet mode.
39 | - `-N`: Do not execute a remote command. This is useful for just forwarding ports.
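
Once the tunnel is up, point clients at the local SOCKS port. A minimal usage sketch (the port number 1080 is an arbitrary example standing in for whatever was passed to `-D`):

```bash
# Route a single request through the SOCKS proxy.
# --socks5-hostname makes DNS resolution happen on the remote side, too.
$ curl --socks5-hostname localhost:1080 https://example.com/

# Many CLI tools also honor the ALL_PROXY convention for a whole session.
$ export ALL_PROXY=socks5h://localhost:1080
```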
40 | 
-------------------------------------------------------------------------------- /Linux/Ubuntu.md: --------------------------------------------------------------------------------
1 | ## Add nameserver
2 | 
3 | ```
4 | # in /etc/resolvconf/resolv.conf.d/head
5 | nameserver 
6 | ```
7 | 
8 | then, `sudo resolvconf -u`
-------------------------------------------------------------------------------- /Linux/WSL.md: --------------------------------------------------------------------------------
1 | # WSL
2 | 
3 | Windows Subsystem for Linux
4 | 
5 | ## Installation
6 | 
7 | Open PowerShell as Admin:
8 | 
9 | ```powershell
10 | Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
11 | ```
12 | 
13 | ## Fix "Could not load host key: ..." Errors
14 | 
15 | ```bash
16 | $ sudo service ssh start
17 |  * Starting OpenBSD Secure Shell server sshd
18 | Could not load host key: /etc/ssh/ssh_host_rsa_key
19 | Could not load host key: /etc/ssh/ssh_host_ecdsa_key
20 | Could not load host key: /etc/ssh/ssh_host_ed25519_key
21 | ```
22 | 
23 | Fix:
24 | 
25 | ```bash
26 | $ sudo ssh-keygen -A
27 | ```
-------------------------------------------------------------------------------- /Linux/images/2018-06-25-09-46-54.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/Linux/images/2018-06-25-09-46-54.png
-------------------------------------------------------------------------------- /Linux/systemd.md: --------------------------------------------------------------------------------
1 | # systemd
2 | 
3 | ## The '@' symbol in unit name
4 | 
5 | > A template unit must have a single "@" at the end of the name (right before the type suffix). The name of the full unit is formed by inserting the instance name between "@" and the unit type suffix. In the unit file itself, the instance parameter may be referred to using "%i"
6 | 
7 | from: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
8 | 
9 | example:
10 | 
11 | `/etc/systemd/system/echo@.service`
12 | 
13 | ```conf
14 | [Unit]
15 | Description=Echo '%I'
16 | 
17 | [Service]
18 | Type=oneshot
19 | ExecStart=/bin/echo %i
20 | StandardOutput=syslog
21 | ```
22 | 
23 | ```bash
24 | systemctl start echo@foo.service
25 | systemctl start echo@bar.service
26 | ```
27 | 
28 | ## '-' (minus) prefix
29 | 
30 | - `ExecStartPre=`, `ExecStartPost=` : If any of those commands (not prefixed with "-") fail, the rest are not executed and the unit is considered failed.
31 | 
32 | Table 1. Special executable prefixes
33 | 
34 | | prefix | Effect |
35 | |--------|--------|
36 | | "@" | If the executable path is prefixed with "@", the second specified token will be passed as "argv[0]" to the executed process (instead of the actual filename), followed by the further arguments specified. |
37 | | "-" | If the executable path is prefixed with "-", an exit code of the command normally considered a failure (i.e. non-zero exit status or abnormal exit due to signal) is recorded, but has no further effect and is considered equivalent to success. |
38 | | ":" | If the executable path is prefixed with ":", environment variable substitution (as described by the "Command Lines" section below) is not applied. |
39 | | "+" | If the executable path is prefixed with "+" then the process is executed with full privileges. In this mode privilege restrictions configured with User=, Group=, CapabilityBoundingSet= or the various file system namespacing options (such as PrivateDevices=, PrivateTmp=) are not applied to the invoked command line (but still affect any other ExecStart=, ExecStop=, … lines). |
40 | | "!" | Similar to the "+" character discussed above this permits invoking command lines with elevated privileges. However, unlike "+" the "!" character exclusively alters the effect of User=, Group= and SupplementaryGroups=, i.e. only the stanzas that affect user and group credentials. Note that this setting may be combined with DynamicUser=, in which case a dynamic user/group pair is allocated before the command is invoked, but credential changing is left to the executed process itself. |
41 | | "!!" | This prefix is very similar to "!", however it only has an effect on systems lacking support for ambient process capabilities, i.e. without support for AmbientCapabilities=. It's intended to be used for unit files that take benefit of ambient capabilities to run processes with minimal privileges wherever possible while remaining compatible with systems that lack ambient capabilities support. Note that when "!!" is used, and a system lacking ambient capability support is detected any configured SystemCallFilter= and CapabilityBoundingSet= stanzas are implicitly modified, in order to permit spawned processes to drop credentials and capabilities themselves, even if this is configured to not be allowed. Moreover, if this prefix is used and a system lacking ambient capability support is detected AmbientCapabilities= will be skipped and not be applied. On systems supporting ambient capabilities, "!!" has no effect and is redundant. |
42 | 
43 | from: https://www.freedesktop.org/software/systemd/man/systemd.service.html
44 | 
45 | ---
46 | 
47 | ## Snippets
48 | 
49 | ```conf
50 | [Unit]
51 | Description=Redis Container
52 | After=docker.service
53 | Requires=docker.service
54 | 
55 | [Service]
56 | TimeoutStartSec=0
57 | Restart=always
58 | ExecStartPre=-/usr/bin/docker stop %n
59 | ExecStartPre=-/usr/bin/docker rm %n
60 | ExecStartPre=/usr/bin/docker pull redis
61 | ExecStart=/usr/bin/docker run --rm --name %n redis
62 | 
63 | [Install]
64 | WantedBy=multi-user.target
65 | ```
66 | 
67 | ## systemd.device
68 | 
69 | > systemd will dynamically create device units for all kernel devices that are marked with the "systemd" udev tag (by default all block and network devices, and a few others). Note that *if systemd-udevd.service is not running, no device units will be available (for example in a typical container).*
70 | >
71 | > Device units are named after the `/sys/` and `/dev/` paths they control. Example: the device `/dev/sda5` is exposed in systemd as `dev-sda5.device`. For details about the escaping logic used to convert a file system path to a unit name see systemd.unit(5).
72 | 
73 | from: https://www.freedesktop.org/software/systemd/man/latest/systemd.device.html
74 | 
75 | ### example: make the sshd service wait until the WireGuard network interface is ready
76 | 
77 | Add the dependency by adding the following lines to the '[Unit]' section:
78 | 
79 | ```diff
80 | [Unit]
81 | -After=network.target
82 | +After=network.target wg-quick@wg0.service
83 | +Requires=sys-devices-virtual-net-wg0.device
84 | ```
85 | 
-------------------------------------------------------------------------------- /Linux/tools/AWK.md: --------------------------------------------------------------------------------
1 | ## Print the line of last match
2 | 
3 | ```bash
4 | $ cat test.txt
5 | hello1
6 | hello2
7 | hello3
8 | bye1
9 | bye2
10 | 
11 | $ awk '/^hello/ {a=$0} END{print a}' test.txt
12 | hello3
13 | ```
14 | 
15 | ## Print the line number of last match
16 | 
17 | ```bash
18 | $ awk '/^hello/ {a=NR} END{print a}' test.txt
19 | ```
20 | 
21 | ## Print CSV Column
22 | 
23 | ```bash
24 | # print first column of csv
25 | $ awk -F '"*,"*' '{print $1}' data.csv
26 | ```
-------------------------------------------------------------------------------- /Linux/tools/cURL.md: --------------------------------------------------------------------------------
1 | ## Follow redirection
2 | 
3 | ```bash
4 | $ curl -L 
5 | ```
6 | 
7 | ## Show headers
8 | 
9 | ```bash
10 | $ curl -I 
11 | # or
12 | $ curl --head 
13 | ```
-------------------------------------------------------------------------------- /Linux/tools/general.md: --------------------------------------------------------------------------------
1 | # iostat
2 | 
3 | ```txt
4 | iostat -xmdz 1
5 | #       ││││ └─ repeat every 1 second
6 | #       │││└─ omit output for any devices for which there was no activity during the sample period
7 | #       ││└─ display the device utilization report
8 | #       │└─ display statistics in megabytes per second
9 | #       └─ display extended statistics
10 | ```
11 | 
12 | ```txt
13 | Device: rrqm/s wrqm/s    r/s     w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
14 | xvdf      0.00 2934.00  0.00 2000.00  0.00 48.69    49.86     2.59  1.30    0.00    1.30  0.50 99.60
15 | ```
16 | 
17 | `rrqm/s, wrqm/s`
18 | 
19 | read/write requests merged per second
20 | 
21 | `r/s, w/s, rMB/s, wMB/s`
22 | 
23 | reads/writes (throughput) per second
24 | 
25 | `avgrq-sz`
26 | 
27 | Average request size in **sectors** (512 bytes). If this number is below 16 (16 * 512 bytes = 8 KB), the workload is made up of small requests. If this number is low (<50), you are going to be IOPS limited. If it's high (>100), you are likely to be bandwidth limited.
28 | 
29 | `avgqu-sz`
30 | 
31 | Average queue size. Indicates how many requests are queued waiting to be serviced. If `avgqu-sz` gets big (>30), your application is submitting more requests per second than the volume can handle.
32 | 
33 | `await`
34 | 
35 | Average wait in **milliseconds.** The average amount of time the requests that were completed during this period waited from when they entered the queue to when they were serviced. This number is a combination of the queue length and the average service time. **This is one of the most important metrics.**
36 | 
37 | `svctm`
38 | 
39 | Service time in **milliseconds.** While `await` counts the whole wait time of requests, `svctm` counts only the time consumed by the device. As Linux doesn't measure the actual service time, `svctm` is just an approximation. **Treat `await` as the more important metric.**
40 | 
41 | `%util`
42 | 
43 | Percentage of CPU time during which I/O requests were issued to the device. A high `%util` doesn't always mean that there is an overload. If the device serves requests in parallel, this value can be constantly high.
44 | 
45 | # nc, netcat
46 | 
47 | ## Send packets using nc
48 | 
49 | ```bash
50 | printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' | nc example.com 80
51 | ```
52 | 
53 | # htop
54 | 
55 | ## color meanings
56 | 
57 | Hit F1 or h to see the legend:
58 | 
59 | ![](images/2018-06-25-09-46-54.png)
60 | 
61 | 
62 | ## shortcuts
63 | 
64 | - `h`: help
65 | - `t`: tree view
66 | - `p`: toggle program path
67 | - `P M T`: sort by CPU%, MEM% or TIME
68 | - `e`: show process environment
69 | - `l`: list open files with lsof
-------------------------------------------------------------------------------- /Linux/tools/parallel.md: --------------------------------------------------------------------------------
1 | # GNU Parallel
2 | 
3 | GNU parallel is a command-line driven utility for Linux and other Unix-like operating systems which allows the user to execute shell scripts in parallel, on one or more computers.
4 | 
5 | By default, parallel runs as many jobs in parallel as there are CPU cores.
6 | 
7 | ## Examples
8 | 
9 | `parallel ::: `
10 | 
11 | ```bash
12 | $ parallel echo ::: A B C D
13 | A
14 | B
15 | C
16 | D
17 | ```
18 | 
19 | or
20 | 
21 | ```bash
22 | $ parallel echo {} ::: A B C D
23 | A
24 | B
25 | C
26 | D
27 | 
28 | $ (echo "A"; echo "B"; echo "C"; echo "D") | parallel echo
29 | A
30 | B
31 | C
32 | D
33 | ```
34 | 
35 | ----
36 | 
37 | `parallel ::: -a `
38 | 
39 | ```bash
40 | $ cat input.txt
41 | A
42 | B
43 | C
44 | D
45 | 
46 | $ parallel -a input.txt echo
47 | A
48 | B
49 | C
50 | D
51 | ```
52 | 
53 | ----
54 | 
55 | ```bash
56 | #!/usr/bin/env bash
57 | 
58 | set -euo pipefail
59 | 
60 | function job {
61 |     echo "$1"
62 |     sleep "$1"
63 | }
64 | 
65 | export -f job
66 | 
67 | parallel job ::: 1 2 3 4
68 | ```
69 | 
70 | ## Options
71 | 
72 | - `--progress` : Show progress
73 | - `-j N`, `--jobs N` : Specifies the number of jobs to be run. `0` means as many as possible
74 | - `--joblog` : Write log file of executed jobs
75 | - `--resume` : Resumes unfinished jobs by reading joblog (`--joblog`)
76 | - `--resume-failed` : Retry and resume all failed jobs by reading joblog (`--joblog`)
77 | 
78 | ## Graceful shutdown
79 | 
80 | Send the **SIGTERM** signal to the parallel process. **parallel** will wait for currently running jobs to complete and will not start new jobs.
81 | 
82 | ```bash
83 | killall -TERM parallel
84 | ```
85 | 
86 | ## References
87 | 
88 | - https://www.gnu.org/software/parallel/
89 | 
-------------------------------------------------------------------------------- /Linux/tools/text-manipulate.md: --------------------------------------------------------------------------------
1 | ## cut
2 | 
3 | ```bash
4 | $ cat test.csv
5 | 1,2,3,4
6 | a,b,c,d
7 | q,w,e,r
8 | 
9 | $ cut -d , -f 2- test.csv
10 | #     ──┬─ ──┬──
11 | #       │    └─ only select 2 or more fields
12 | #       └─ use ',' as field delimiter (default: Tab)
13 | 2,3,4
14 | b,c,d
15 | w,e,r
16 | 
17 | $ cut -d , -f 3 test.csv
18 | #     ──┬─ ──┬──
19 | #       │    └─ only select 3 field
20 | #       └─ use ',' as field delimiter (default: Tab)
21 | 3
22 | c
23 | e
24 | ```
25 | 
26 | ## paste
27 | 
28 | ```bash
29 | $ cat test1.csv
30 | 1,2
31 | 3,4
32 | 
33 | $ cat test2.csv
34 | 5,6
35 | 7,8
36 | 
37 | $ paste -d , test1.csv test2.csv
38 | #        ──┬─
39 | #          └─ use ',' as field delimiter
40 | 1,2,5,6
41 | 3,4,7,8
42 | ```
43 | 
44 | ### merge two CSV files
45 | 
46 | ```bash
47 | $ cat test1.csv
48 | 1,2
49 | 3,4
50 | 
51 | $ cat test2.csv
52 | 5,6,7
53 | 8,9,0
54 | 
55 | $ cut -d , -f 2 test2.csv | paste -d , test1.csv -
56 | 1,2,6
57 | 3,4,9
58 | ```
59 | 
60 | ## truncate
61 | 
62 | ```bash
63 | # reference: manpage
64 | 
65 | truncate -s 
66 | #         └─ set or adjust the file size by SIZE bytes
67 | #
68 | # SIZE may also be prefixed by one of the following modifying characters:
69 | # '+' extend by, '-' reduce by, '<' at most, '>' at least,
70 | # '/' round down to multiple of, '%' round up to multiple of.
71 | ```
72 | 
73 | ```bash
74 | # remove 1 character from file
75 | $ truncate -s-1 
76 | # remove 2 characters from file
77 | $ truncate -s-2 
78 | ```
-------------------------------------------------------------------------------- /Mac.md: --------------------------------------------------------------------------------
1 | ## Copy to clipboard using command line
2 | 
3 | `pbcopy`, `pbpaste`
4 | 
5 | ```bash
6 | $ echo 'hello!' | pbcopy
7 | $ echo `pbpaste`
8 | ```
-------------------------------------------------------------------------------- /Nginx.md: --------------------------------------------------------------------------------
1 | # Nginx
2 | 
3 | ## Return client's IP
4 | 
5 | Use `$remote_addr`
6 | 
7 | ```conf
8 | location /ip {
9 |     # disable cache
10 |     expires -1;
11 |     add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
12 | 
13 |     default_type text/plain;
14 |     return 200 "$remote_addr\n";
15 | }
16 | ```
17 | 
18 | If you use the Cloudflare CDN, use `$http_cf_connecting_ip`
19 | 
20 | ```conf
21 | location /ip {
22 |     # disable cache
23 |     expires -1;
24 |     add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
25 | 
26 |     default_type text/plain;
27 |     return 200 "$http_cf_connecting_ip\n";
28 | }
29 | ```
-------------------------------------------------------------------------------- /Python/DepHell.md: --------------------------------------------------------------------------------
1 | # DepHell
2 | 
3 | Project management for Python. https://dephell.readthedocs.io/index.html
4 | 
5 | ## Convert poetry pyproject.toml to setup.py
6 | 
7 | ```bash
8 | $ dephell deps convert --from pyproject.toml --from-format poetry --to setup.py --to-format setuppy
9 | ```
10 | 
11 | or
12 | 
13 | Add the config below to `pyproject.toml` or `dephell.toml` and run `dephell deps convert`
14 | 
15 | ```toml
16 | [tool.dephell.main]
17 | from = {format = "poetry", path = "pyproject.toml"}
18 | to = {format = "setuppy", path = "setup.py"}
19 | ```
20 | 
-------------------------------------------------------------------------------- /Python/Exception.md: --------------------------------------------------------------------------------
1 | ## NotImplemented is not an Exception
2 | 
3 | reference: https://help.semmle.com/wiki/display/PYTHON/NotImplemented+is+not+an+Exception
4 | 
5 | `NotImplemented` is not an Exception, but is often mistakenly used in place of `NotImplementedError`. Executing `raise NotImplemented` or `raise NotImplemented()` will raise a `TypeError`.
-------------------------------------------------------------------------------- /Python/GIL.md: --------------------------------------------------------------------------------
1 | # GIL
2 | 
3 | Global Interpreter Lock
4 | 
5 | ### Why GIL?
6 | 
7 | > Python uses reference counting for memory management. It means that objects created in Python have a reference count variable that keeps track of the number of references that point to the object. When this count reaches zero, the memory occupied by the object is released.
8 | > ...
9 | > The problem was that this reference count variable needed protection from race conditions where two threads increase or decrease its value simultaneously.
10 | > ...
11 | > The GIL is a single lock on the interpreter itself which adds a rule that execution of any Python bytecode requires acquiring the interpreter lock. This prevents deadlocks (as there is only one lock) and doesn’t introduce much performance overhead. But it effectively makes any CPU-bound Python program single-threaded.
12 | 
13 | ### The impact on multi-threaded Python programs
14 | 
15 | > In the multi-threaded version the GIL prevented the CPU-bound threads from executing in parallel.
16 | > The GIL does not have much impact on the performance of I/O-bound multi-threaded programs as the lock is shared between threads while they are waiting for I/O.
17 | 
18 | ## reference
19 | 
20 | https://realpython.com/python-gil/
-------------------------------------------------------------------------------- /Python/Python.md: --------------------------------------------------------------------------------
1 | ## Do I have to do StringIO.close() ?
2 | 
3 | reference: https://stackoverflow.com/questions/9718950/do-i-have-to-do-stringio-close
4 | 
5 | ```
6 | `StringIO.close()`: Free the memory buffer. Attempting to do further operations with a closed StringIO object will raise a ValueError.
7 | ```
8 | 
9 | ```python
10 | # Python 2 example
11 | import StringIO, weakref
12 | 
13 | def handler(ref):
14 |     print 'Buffer died!'
15 | 
16 | def f():
17 |     buffer = StringIO.StringIO()
18 |     ref = weakref.ref(buffer, handler)
19 |     buffer.write('something')
20 |     return buffer.getvalue()
21 | 
22 | print 'before f()'
23 | f()
24 | print 'after f()'
25 | ```
26 | 
27 | result:
28 | 
29 | ```bash
30 | $ python test.py
31 | before f()
32 | Buffer died!
33 | after f()
34 | $
35 | ```
36 | 
37 | Or, use it in a `with` statement.
38 | 
39 | ```python
40 | # python 2
41 | with contextlib.closing(StringIO()) as buffer:
42 |     buffer.write('hello')
43 | ```
44 | 
45 | ```python
46 | # python 3
47 | with StringIO() as buffer:
48 |     buffer.write('hello')
49 | ```
50 | 
51 | 
52 | ## Create Cython lib (*.so) in the current directory
53 | 
54 | ```bash
55 | $ python setup.py build_ext --inplace
56 | ```
57 | 
58 | 
59 | from the python [documentation](https://docs.python.org/2/distutils/configfile.html)
60 | 
61 | ```bash
62 | > python setup.py --help build_ext
63 | [...]
64 | Options for 'build_ext' command:
65 |   --build-lib (-b)     directory for compiled extension modules
66 |   --build-temp (-t)    directory for temporary files (build by-products)
67 |   --inplace (-i)       ignore build-lib and put compiled extensions into the
68 |                        source directory alongside your pure Python modules
69 |   --include-dirs (-I)  list of directories to search for header files
70 |   --define (-D)        C preprocessor macros to define
71 |   --undef (-U)         C preprocessor macros to undefine
72 |   --swig-opts          list of SWIG command line options
73 | [...]
74 | ```
-------------------------------------------------------------------------------- /Python/Python2-3.md: --------------------------------------------------------------------------------
1 | ## StringIO
2 | 
3 | ```python
4 | # python3
5 | with StringIO() as buffer:
6 |     buffer.write('hello')
7 | ```
8 | 
9 | ```python
10 | # python2
11 | with contextlib.closing(StringIO()) as buffer:
12 |     buffer.write('hello')
13 | ```
14 | 
-------------------------------------------------------------------------------- /Python/pyenv.md: --------------------------------------------------------------------------------
1 | # pyenv
2 | 
3 | Simple Python version management
4 | 
5 | https://github.com/pyenv/pyenv
6 | 
7 | ## Install python from pyenv (with brew)
8 | 
9 | Install pyenv via brew
10 | 
11 | ```bash
12 | brew install pyenv
13 | ```
14 | 
15 | Install prerequisites
16 | 
17 | ```bash
18 | sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
19 | libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
20 | xz-utils tk-dev libffi-dev liblzma-dev python-openssl git
21 | ```
22 | 
23 | Install python 3.8.0
24 | 
25 | ```bash
26 | $ pyenv install 3.8.0
27 | Downloading Python-3.8.0.tar.xz...
28 | -> https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tar.xz
29 | Installing Python-3.8.0...
30 | ```
31 | 
32 | ## How to fix: 'readline extension was not compiled'
33 | 
34 | The problem:
35 | 
36 | ```bash
37 | $ pyenv install 3.8.0
38 | ...
39 | python-build: use readline from homebrew
40 | WARNING: The Python readline extension was not compiled. Missing the GNU readline lib?
41 | ```
42 | 
43 | 1. Uninstall `readline` from `brew`
44 | 2. Temporarily remove `brew` from `$PATH`
45 | 
46 | ## References
47 | 
48 | - https://github.com/pyenv/pyenv
49 | - https://github.com/pyenv/pyenv/wiki/Common-build-problems
-------------------------------------------------------------------------------- /RESTful-API.md: --------------------------------------------------------------------------------
1 | # RESTful API
2 | 
3 | ## What is REST?
4 | 
5 | **REST**: **Re**presentational **s**tate **t**ransfer.
6 | 
7 | Six guiding constraints define a RESTful system:
8 | 
9 | - Client-server Architecture
10 | - Statelessness
11 | - Cacheability
12 | - Layered System
13 | - Code on demand (optional)
14 | - Uniform interface
15 |   - Resource identification in requests
16 |   - Resource manipulation through representations
17 |   - Self-descriptive messages
18 |   - Hypermedia as the engine of application state (HATEOAS)
19 | 
20 | ### reference
21 | 
22 | - https://en.wikipedia.org/wiki/Representational_state_transfer
23 | 
24 | ## There is no REST API
25 | 
26 | ![](images/2020-06-27-17-41-33.png)
27 | 
28 | image from http://slides.com/eungjun/rest
29 | 
30 | - [Microsoft REST API Guidelines](https://github.com/Microsoft/api-guidelines)
31 | 
32 | > REST doesn’t describe APIs. REST describes the architectural characteristics of an entire system, which includes all of the different components of that system.
33 | >
34 | > from: https://www.howarddierking.com/2016/09/15/there-is-no-rest-api/
35 | 
36 | ----
37 | 
38 | ## PUT vs. PATCH (in HTTP API)
39 | 
40 | ### TL;DR
41 | 
42 | **PUT** is used to replace the entire entity.
43 | **PATCH** is used to patch the entity.
44 | 
45 | ----
46 | 
47 | When using PUT, it is assumed that you are sending the complete entity, and that complete entity *replaces* any existing entity at that URI.
48 | 
49 | ```
50 | { "username": "skwee357", "email": "skwee357@domain.com" }
51 | ```
52 | 
53 | If you POST this document to /users, as you suggest, then you might get back an entity such as
54 | 
55 | ```
56 | ## /users/1
57 | {
58 |     "username": "skwee357",
59 |     "email": "skwee357@domain.com"
60 | }
61 | ```
62 | 
63 | If you want to modify this entity later, you choose between PUT and PATCH. A PUT might look like this:
64 | 
65 | ```
66 | PUT /users/1
67 | {
68 |     "username": "skwee357",
69 |     "email": "skwee357@gmail.com"       // new email address
70 | }
71 | ```
72 | 
73 | You can accomplish the same using PATCH. That might look like this:
74 | 
75 | ```
76 | PATCH /users/1
77 | {
78 |     "email": "skwee357@gmail.com"       // new email address
79 | }
80 | ```
81 | 
82 | ### Using PUT wrong
83 | 
84 | ```
85 | GET /users/1
86 | {
87 |     "username": "skwee357",
88 |     "email": "skwee357@domain.com"
89 | }
90 | PUT /users/1
91 | {
92 |     "email": "skwee357@gmail.com"       // new email address
93 | }
94 | 
95 | GET /users/1
96 | {
97 |     "email": "skwee357@gmail.com"       // new email address... and nothing else!
98 | }
99 | ```
100 | 
101 | reference: https://stackoverflow.com/a/34400076
-------------------------------------------------------------------------------- /SQL.md: --------------------------------------------------------------------------------
1 | ## Using String as Primary Key
2 | 
3 | reference: https://stackoverflow.com/questions/3455297/mysql-using-string-as-primary-key
4 | 
5 | > There's nothing wrong with using a CHAR or VARCHAR as a primary key.
6 | >
7 | > Sure it'll take up a little more space than an INT in many cases, but there are many cases where it is the most logical choice and may even reduce the number of columns you need, improving efficiency, by avoiding the need to have a separate ID field.
8 | 
9 | 
10 | ## What is the difference between a primary key and an index key
11 | 
12 | reference: https://stackoverflow.com/questions/5374908/what-is-the-difference-between-a-primary-key-and-a-index-key
13 | 
14 | > A primary key is a special kind of index in that:
15 | >
16 | > - there can be only one;
17 | > - it cannot be nullable; and
18 | > - it must be unique.
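
To make the distinction concrete, here is a minimal MySQL sketch (the `users` table and its column names are made up for illustration):

```sql
-- Primary key: at most one per table, NOT NULL, and unique.
CREATE TABLE users (
    username VARCHAR(75) NOT NULL,
    email    VARCHAR(255),
    PRIMARY KEY (username)
);

-- Secondary index: a table can have many of these, and unless it is
-- declared UNIQUE, it allows duplicate and NULL values.
CREATE INDEX idx_users_email ON users (email);
```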
19 | 
20 | 
21 | 
22 | ## MySQL string sorting
23 | 
24 | reference: https://stackoverflow.com/a/8557307
25 | 
26 | 
27 | ### Alpha Numeric Sorting in MySQL
28 | 
29 | Given input
30 | 
31 | ```
32 | 1A 1a 10A 9B 21C 1C 1D
33 | ```
34 | 
35 | Expected output
36 | 
37 | ```
38 | 1A 1C 1D 1a 9B 10A 21C
39 | ```
40 | 
41 | Query
42 | 
43 | ```sql
44 | -- Bin Way
45 | -- ===================================
46 | SELECT
47 | tbl_column,
48 | BIN(tbl_column) AS binary_not_needed_column
49 | FROM db_table
50 | ORDER BY binary_not_needed_column ASC , tbl_column ASC
51 | 
52 | -----------------------
53 | 
54 | -- Cast Way
55 | -- ===================================
56 | SELECT
57 | tbl_column,
58 | CAST(tbl_column as SIGNED) AS casted_column
59 | FROM db_table
60 | ORDER BY casted_column ASC , tbl_column ASC
61 | ```
62 | 
63 | ### Natural Sorting in MySQL
64 | 
65 | Given input
66 | 
67 | ```
68 | Table: sorting_test
69 |  -------------------------- -------------
70 | | alphanumeric VARCHAR(75) | integer INT |
71 |  -------------------------- -------------
72 | | test1                    | 1           |
73 | | test12                   | 2           |
74 | | test13                   | 3           |
75 | | test2                    | 4           |
76 | | test3                    | 5           |
77 |  -------------------------- -------------
78 | ```
79 | 
80 | Expected Output
81 | 
82 | ```
83 |  -------------------------- -------------
84 | | alphanumeric VARCHAR(75) | integer INT |
85 |  -------------------------- -------------
86 | | test1                    | 1           |
87 | | test2                    | 4           |
88 | | test3                    | 5           |
89 | | test12                   | 2           |
90 | | test13                   | 3           |
91 |  -------------------------- -------------
92 | ```
93 | 
94 | Query
95 | 
96 | ```sql
97 | SELECT alphanumeric, integer
98 | FROM sorting_test
99 | ORDER BY LENGTH(alphanumeric), alphanumeric
100 | ```
101 | 
102 | ### Sorting of numeric values mixed with alphanumeric values
103 | 
104 | Given input
105 | 
106 | ```
107 | 2a, 12, 5b, 5a, 10, 11, 1, 4b
108 | ```
109 | 
110 | Expected Output
111 | 
112 | ```
113 | 1, 2a, 4b, 5a, 5b, 10, 11, 12
114 | ```
115 | 
116 | Query
117 | 
118 | ```sql
119 | SELECT version
120 | FROM version_sorting
121 | ORDER BY CAST(version AS UNSIGNED), version;
122 | ```
123 | 
124 | ## mysqldump
125 | 
126 | ```bash
127 | # dump whole database
128 | $ mysqldump -h -p -u > dump.sql
129 | ```
130 | 
131 | ```bash
132 | # dump specific table(s)
133 | $ mysqldump -h -p -u [ ...] > dump.sql
134 | ```
135 | 
136 | ```bash
137 | # restore
138 | $ mysql -u -p < dump.sql
139 | ```
-------------------------------------------------------------------------------- /Terraform-Cloud.md: --------------------------------------------------------------------------------
1 | # Error: No valid credential sources found for AWS Provider.
2 | 
3 | Add the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables on the workspace's Variables page.
4 | 
5 | You must check the `Sensitive` checkbox.
6 | 
7 | # Remote backend configuration
8 | 
9 | It is required only if you are using the CLI-driven run workflow.
10 | 
11 | `terraform.tf`:
12 | 
13 | ```terraform
14 | terraform {
15 |   backend "remote" {
16 |     organization = ""
17 | 
18 |     workspaces {
19 |       name = ""
20 |     }
21 |   }
22 | }
23 | ```
24 | 
25 | `$HOME/.terraformrc`:
26 | 
27 | ```rc
28 | credentials "app.terraform.io" {
29 |   token = ""
30 | }
31 | ```
32 | 
33 | references:
34 | 
35 | - https://www.terraform.io/docs/cloud/run/cli.html
36 | - https://www.terraform.io/docs/commands/cli-config.html#credentials
-------------------------------------------------------------------------------- /Terraform.md: --------------------------------------------------------------------------------
1 | ## `aws_iam_policy_attachment` can only be used once per policy resource
2 | 
3 | The `aws_iam_policy_attachment` resource can only be used once **PER** policy resource, as the resource manages all of the role attachments for that IAM Policy. The best workaround for this issue is to create individual policies for each role attachment that you're wanting to attach.
4 | 
5 | reference: https://github.com/hashicorp/terraform/issues/11873#issuecomment-279418587
-------------------------------------------------------------------------------- /Testing.md: --------------------------------------------------------------------------------
1 | # Mock, Dummy, Stub, Spy
2 | 
3 | ref: https://martinfowler.com/articles/mocksArentStubs.html
4 | 
5 | - **Dummy** objects are passed around but never actually used. Usually they are just used to fill parameter lists.
6 | - **Fake** objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
7 | - **Stubs** provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
8 | - **Spies** are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
9 | - **Mocks** are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive. (A short code sketch of the stub/mock distinction follows below.)
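
A minimal Python sketch of the stub/mock distinction using the standard library's `unittest.mock` (the `mailer`/`send` names are made up for illustration):

```python
from unittest import mock

# Stub: provides a canned answer; the test checks the resulting state,
# not how the collaborator was used.
stub_mailer = mock.Mock()
stub_mailer.send.return_value = True
assert stub_mailer.send("hello") is True

# Mock: the test verifies *how* the object was called (behavior verification).
mock_mailer = mock.Mock()
mock_mailer.send("welcome", to="alice@example.com")
mock_mailer.send.assert_called_once_with("welcome", to="alice@example.com")
```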
10 | 11 | See also: 12 | 13 | - https://laurentkempe.com/2010/07/17/Unit-Test-using-test-doubles-aka-Mock-Stub-Fake-Dummy/ 14 | - http://www.hostettler.net/blog/2014/05/18/fakes-stubs-dummy-mocks-doubles-and-all-that/ -------------------------------------------------------------------------------- /Todo.md: -------------------------------------------------------------------------------- 1 | * Ansible (WIP) 2 | * Zplugin 3 | * spec -------------------------------------------------------------------------------- /Vim.md: -------------------------------------------------------------------------------- 1 | ## Create new file in Explorer mode 2 | 3 | Type `%` in the explorer mode. 4 | 5 | 6 | ## Format JSON with jq 7 | 8 | ``` 9 | :%!jq '.' 10 | ``` 11 | 12 | or 13 | 14 | ``` 15 | :.w !jq 16 | ``` 17 | 18 | reference: https://stackoverflow.com/a/7025184 19 | 20 | See `:help :w_c` 21 | 22 | 23 | ## Delete the text between matching XML tags 24 | 25 | reference: https://stackoverflow.com/a/946241 26 | 27 | before: 28 | 29 | ` content ` 30 | 31 | `:dit` (`it` is for "inner tag block") 32 | 33 | after: 34 | 35 | `` 36 | -------------------------------------------------------------------------------- /Vimscript.md: -------------------------------------------------------------------------------- 1 | # Vimscript 2 | 3 | ## augroup 4 | 5 | `:autocmd` adds to the list of autocommands regardless of whether they are 6 | already present. When your .vimrc file is sourced twice, the autocommands 7 | will appear twice. To avoid this, define your autocommands in a group, so 8 | that you can easily clear them: > 9 | 10 | ```vimscript 11 | augroup vimrc 12 | " Remove all vimrc autocommands 13 | autocmd! 14 | au BufNewFile,BufRead *.html so :h/html.vim 15 | augroup END 16 | ``` 17 | 18 | ```vimscript 19 | augroup vimrc 20 | autocmd! 21 | augroup END 22 | 23 | " ... 24 | 25 | " Put autocmd where you want 26 | au vimrc BufNewFile,BufRead *.html so :h/html.vim 27 | ``` 28 | 29 | ## autocmd 30 | 31 | `:au[tocmd] [group] {event} {pat} [nested] {cmd}` 32 | 33 | Add {cmd} to the list of commands that Vim will execute automatically on {event} for a file matching {pat} |autocmd-patterns|. 34 | 35 | Note: A quote character is seen as argument to the :autocmd and won't start a comment. 36 | 37 | Vim always adds the {cmd} after existing autocommands, so that the autocommands execute in the order in which they were given. See |autocmd-nested| for [nested]. 38 | 39 | ```vimscript 40 | :au BufNewFile,BufRead *.html so :h/html.vim 41 | ``` 42 | 43 | ## autocmd! 44 | 45 | `:au[tocmd]! [group]` : Remove ALL autocommands. 46 | 47 | Note: a quote will be seen as argument to the :autocmd and won't start a comment. 48 | Warning: You should normally not do this without a group, it breaks plugins, syntax highlighting, etc. 49 | 50 | When the [group] argument is not given, Vim uses the current group (as defined with ":augroup"); otherwise, Vim uses the group defined with [group]. 51 | 52 | ## function 53 | 54 | `:fu[nction][!] {name}([arguments]) [range] [abort] [dict] [closure]` 55 | 56 | The `name` must be made of alphanumeric characters and '_', and must start with a capital or "s:". Note that using "b:" or "g:" is not allowed. 57 | 58 | When a function by this name already exists and [!] is not used an error message is given. There is one exception: When sourcing a script again, a function that was previously defined in that script will be silently replaced. 59 | 60 | When [!] is used, an existing function is silently replaced. 
Unless it is currently being executed, that is an error.

NOTE: Use ! wisely. If used without care it can cause an existing function to be replaced unexpectedly, which is hard to debug.

```vimscript
function! Bar()
  echo "in Bar"
  return 4710
endfunction
```

## References

`:help autocmd` in Vim
--------------------------------------------------------------------------------
/Windows.md:
--------------------------------------------------------------------------------
# Open Active Directory Search Window

```
"C:\Windows\System32\rundll32.exe" dsquery.dll,OpenQueryWindow
```

# BSOD when running Android Emulator after 1903 update installed

A `SYSTEM_SERVICE_EXCEPTION` BSOD occurs when running the Android Emulator after the Windows 10 1903 update is installed.

How to fix:

1. Turn off Hyper-V
2. Turn off Windows Sandbox
3. Disable `Turn on Virtualization Based Security` in Local Group Policy
   1. Open `gpedit.msc`
   2. Go to `Computer Configuration` -> `Administrative Templates` -> `System` -> `Device Guard`
   3. Open `Turn on Virtualization Based Security`
   4. Set it to `Disabled` (even if `Not Configured` is already selected)

reference: https://www.reddit.com/r/Windows10/comments/bxcawn/windows_v1903_crashing_when_running_nox_player/
--------------------------------------------------------------------------------
/WireGuard.md:
--------------------------------------------------------------------------------
# WireGuard

https://www.wireguard.com/

## Install WireGuard on Ubuntu

Install WireGuard.

```bash
$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt install wireguard
```

Enable IP forwarding. Open `/etc/sysctl.conf`, uncomment the `#net.ipv4.ip_forward=1` line, then apply the change with `sudo sysctl -p` (or reboot).

```conf
net.ipv4.ip_forward=1
```

## Create Private and Public keys

```bash
$ umask 077
$ wg genkey | tee privatekey | wg pubkey > publickey
```

## Configure Server

Replace `eth0` below with `<your network interface name>`.
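If you are not sure which interface name to use, the one carrying the default route is usually the right choice. A quick check (output varies by machine; `eth0` here is just an example result):

```bash
# Print the name of the interface that carries the default route
$ ip -o -4 route show to default | awk '{print $5}'
eth0
```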
```conf
# /etc/wireguard/wg0.conf

[Interface]
Address = <server VPN address, e.g. 10.200.200.1/24>
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client public key>
AllowedIPs = <client VPN address, e.g. 10.200.200.2/32>
```

> Other iptables rules:
>
> Copied from https://www.ckn.io/blog/2017/11/14/wireguard-vpn-typical-setup/
>
> ```
> # Track VPN connection
> iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
> iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
>
> # Allowing incoming VPN traffic on the listening port
> iptables -A INPUT -p udp -m udp --dport 51820 -m conntrack --ctstate NEW -j ACCEPT
>
> # Allow both TCP and UDP recursive DNS traffic
> iptables -A INPUT -s 10.200.200.0/24 -p tcp -m tcp --dport 53 -m conntrack --ctstate NEW -j ACCEPT
> iptables -A INPUT -s 10.200.200.0/24 -p udp -m udp --dport 53 -m conntrack --ctstate NEW -j ACCEPT
>
> # Allow forwarding of packets that stay in the VPN tunnel
> iptables -A FORWARD -i wg0 -o wg0 -m conntrack --ctstate NEW -j ACCEPT
>
> # Set up nat
> iptables -t nat -A POSTROUTING -s 10.200.200.0/24 -o eth0 -j MASQUERADE
> ```

If you want to add a new client, add a new `[Peer]` section to `wg0.conf`.

## Enable/Disable the WireGuard interface on the Server

```bash
# Enable WireGuard interface
$ wg-quick up wg0

# Disable WireGuard interface
$ wg-quick down wg0
```

```bash
# Enable the interface as a service.
$ systemctl enable wg-quick@wg0.service
```

## Add firewall rule (ufw)

```bash
$ sudo ufw allow 51820/udp
$ sudo ufw enable
```

## Show status

```bash
$ sudo wg show
interface: wg0
  public key: <server public key>
  private key: (hidden)
  listening port: <listen port>
```

----

## Configure Client

- [WireGuard for Windows](https://www.wireguard.com/install/) from the homepage.
- [WireGuard for Android](https://play.google.com/store/apps/details?id=com.wireguard.android) from the Play Store.
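Before filling in the client config below, generate a keypair for the client as well, using the same flow as on the server (a quick sketch; run it on the client machine, and the file names are just examples):

```bash
# Generate the client's private key and derive its public key from it
$ umask 077
$ wg genkey | tee client_privatekey | wg pubkey > client_publickey
```

The client's public key goes into the server's `[Peer]` section; the private key goes into the client config below.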
Client config:

```conf
[Interface]
PrivateKey = <client private key>
Address = <client VPN address, e.g. 10.200.200.2/32>

[Peer]
PublicKey = <server public key>
AllowedIPs = 0.0.0.0/0
Endpoint = <server public IP>:51820
```

Share the client config via QR code:

```bash
$ qrencode -t ansiutf8 < wgclient.conf
```

## References

- https://www.ckn.io/blog/2017/11/14/wireguard-vpn-typical-setup/
- https://golb.hplar.ch/2018/10/wireguard-on-amazon-lightsail.html
--------------------------------------------------------------------------------
/images/2018-11-28-09-53-52.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/images/2018-11-28-09-53-52.png
--------------------------------------------------------------------------------
/images/2020-06-27-17-41-33.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kexplo/TIL/bbced71de2aa5a9452678657284b0be7db6a9481/images/2020-06-27-17-41-33.png
--------------------------------------------------------------------------------
/misc.md:
--------------------------------------------------------------------------------
# Miscellaneous

A collection of short notes on various topics.

## Table of Contents

* [The GPKI certificate fiasco](#the-gpki-certificate-fiasco)
* [Dealing with corporate MITM](#dealing-with-corporate-mitm)
* [Terraform](#terraform)
* [npm](#npm)
* [Git](#git)
* [WSL](#wsl)
* [Docker](#docker)
* [pip](#pip)


## The GPKI certificate fiasco

I recently came across the shocking news that GPKI certificates had been issued for wildcard domains.

- Reference 1: [Boannews article](http://www.boannews.com/media/view.asp?idx=68221)
- Reference 2: [Twitter thread](https://twitter.com/_Hoto_Cocoa_/status/981538520064905221)

The Twitter thread is quite a sight: certificates were approved for domains such as `*.or.kr`, `*.co.kr`, and so on. Reportedly there were even registrations for internal IP addresses starting with `192.168`.

The bottom of that thread also covers the follow-up measures.

According to those follow-ups, [the problematic certificates are scheduled to be revoked](https://twitter.com/_Hoto_Cocoa_/status/982479767801741312), and on Windows with the latest updates applied, [the GPKI certificate is no longer trusted](https://twitter.com/_Hoto_Cocoa_/status/984800333095288832).

This was a problem in the first place because Windows used to trust the GPKI certificate.

A browser like Firefox, which does not unconditionally trust the OS certificate store, would be fine, but browsers such as IE, Edge, and Chrome do trust it.

In case you ever need to neutralize the GPKI certificate yourself, here is the method that circulated online for marking it as untrusted:

https://twitter.com/hibiyasleep/status/981559511595999233

Follow these steps:

`Manage computer certificates` -> `Trusted Root Certification Authorities` -> `Certificates` -> Properties of `GPKIRootCA1` -> `Disable all purposes for this certificate`


## Dealing with corporate MITM

Because my company runs a MITM proxy, commands like `curl` all fail.

```bash
$ curl https://google.com
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
```

It fails because `curl` does not trust the MITM certificate.

You could pass the `-k` option to `curl`, but when other scripts invoke `curl` themselves you cannot inject `-k`, so they still fail.

To get past this, set the `CURL_CA_BUNDLE` environment variable to the path of the MITM certificate your company provides, and `curl` will succeed.
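For example (a minimal sketch: the certificate path is hypothetical and depends on where your company's CA file actually lives):

```bash
# Point curl at the corporate MITM CA bundle instead of disabling verification
$ export CURL_CA_BUNDLE=/usr/local/share/ca-certificates/company-mitm.crt
$ curl https://google.com
```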
Alternatively, run `curl -v https://google.com` to see which paths `curl` reads system certificates from, and drop the certificate into one of those paths.

```bash
$ curl -v https://google.com
* Rebuilt URL to: https://google.com/
* Trying 216.58.197.142...
* Connected to google.com (216.58.197.142) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 593 certificates in /etc/ssl/certs
...
```

But I would rather not trust it system-wide, so let's move on.

### Terraform

Now let's look at `Terraform`.

`terraform init` fails miserably.

```bash
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...

Error installing provider "archive": Get https://releases.hashicorp.com/terraform-provider-archive/: x509: certificate signed by unknown authority.

Terraform analyses the configuration and state and automatically downloads
plugins for the providers used. However, when attempting to download this
plugin an unexpected error occured.

This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is blocked
by a firewall.

If automatic installation is not possible or desirable in your environment,
you may alternatively manually install plugins by downloading a suitable
distribution package and placing the plugin's executable file in the
following directory:
  terraform.d/plugins/linux_amd64
```

Judging from the error, it does not seem to be using `curl`.

Let's look at the code.

https://github.com/golang/go/blob/master/src/crypto/x509/root_linux.go

```go
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package x509

// Possible certificate files; stop after finding one.
var certFiles = []string{
	"/etc/ssl/certs/ca-certificates.crt",                // Debian/Ubuntu/Gentoo etc.
	"/etc/pki/tls/certs/ca-bundle.crt",                  // Fedora/RHEL 6
	"/etc/ssl/ca-bundle.pem",                            // OpenSUSE
	"/etc/pki/tls/cacert.pem",                           // OpenELEC
	"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7
}
```

It reads the system certificate files directly.

So in the end there is no choice but to install the company's MITM certificate system-wide.

On Ubuntu, create a symbolic link to the certificate under `/etc/ssl/certs` and run the `update-ca-certificates` command.

### npm

Now let's run `npm`.

```bash
npm ERR! node v6.11.4
npm ERR! npm v3.10.10
npm ERR! code SELF_SIGNED_CERT_IN_CHAIN

npm ERR! self signed certificate in certificate chain
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR!
```

The certificate is installed system-wide, and yet... it fails miserably.

At this point I cannot be bothered to dig any further; it is devouring my productivity. Let's just turn off an SSL option.

```bash
$ npm config set strict-ssl false
```
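If disabling TLS verification feels too blunt, npm can instead be pointed at the corporate CA file (a sketch; the path is hypothetical):

```bash
# Point npm at the corporate CA bundle instead of turning off strict-ssl
$ npm config set cafile /usr/local/share/ca-certificates/company-mitm.crt
```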
### Git

Now, shall we try `Git`?

```bash
fatal: unable to access 'https://xxxxxxxxxxxxxxxxxxxxx.git/': SSL certificate problem: self signed certificate in certificate chain
error: Could not fetch origin
```

......

Let's turn off Git's SSL verification too.

```bash
$ git config --global http.sslVerify false
```

### WSL

The `WSL (Windows Subsystem for Linux)` environment has yet another problem: it does not seem to read the MITM certificate installed on the Windows side.

The Ubuntu setup mentioned above (putting it under `/etc/ssl/certs`) does not appear to work here.

Instead, put the certificate in `/usr/local/share/ca-certificates` and run `sudo dpkg-reconfigure ca-certificates`.

### Docker

Shall we use `docker` as well?

```bash
$ docker pull gcr.io/etcd-development/etcd:v3.2.9
Error response from daemon: Get https://gcr.io/v1/_ping: x509: certificate signed by unknown authority
```

A certificate error again.

In this case, registering the registry as an insecure registry resolves it.

Open `/etc/docker/daemon.json`, fill in the following, and restart the Docker daemon.

```json
{
  "insecure-registries" : [ "gcr.io" ]
}
```

### pip

If you use Python, you will end up using `pip` too.

```bash
$ pip install xxxxx==1.1.1
Downloading/unpacking xxxxx==1.1.1
  Getting page https://pypi.python.org/simple/xxxxx/
  Could not fetch URL https://pypi.python.org/simple/xxxxx/: connection error: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
```

Yet another certificate error.

For `pip`, you can get around it by passing `pypi.python.org` via the `--trusted-host` option or the `PIP_TRUSTED_HOST` environment variable.

But older versions of `pip` do not support that option...

In that case, manually download the `.whl` file of a newer `pip` from `pypi.python.org`, install it, and then proceed.
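For reference, the newer-`pip` form looks like this (a sketch; `<package>` is a placeholder):

```bash
# Trust the PyPI host for this install only
$ pip install --trusted-host pypi.python.org <package>

# Or set it ambiently, for tools that invoke pip themselves
$ export PIP_TRUSTED_HOST=pypi.python.org
```
--------------------------------------------------------------------------------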