├── Jenkins.txt
├── TODO.txt
├── ansible.txt
├── caddy_http_server.txt
├── containerization.txt
├── containerization_advanced.txt
├── containerization_image_build.txt
├── containerization_registry.txt
├── containerization_runtimes.txt
├── coroot.txt
├── devops_CICD.txt
├── devops_TODO.txt
├── devops_bash.txt
├── devops_curl.txt
├── devops_monitoring.txt
├── devops_networking.txt
├── devops_yaml.txt
├── git.txt
├── linux.txt
├── linux_101.txt
├── linux_ALL.payload
├── linux_SElinux.txt
├── linux_TODO.txt
├── linux_administration_advanced_summary.md
├── linux_alternatives.txt
├── linux_application_tracing.txt
├── linux_auditing_map.html
├── linux_cheat_seet.jpg
├── linux_desktop.txt
├── linux_dmesg.txt
├── linux_eBPF.txt
├── linux_encryption.txt
├── linux_kernel.txt
├── linux_kernel_alternatives.txt
├── linux_kernel_monitoring.txt
├── linux_mobile.txt
├── linux_networking_map.txt
├── linux_nftables.txt
├── linux_package_managers.txt
├── linux_perf.txt
├── linux_security.txt
├── linux_storage.txt
├── linux_summary.pdf
├── linux_systemd.txt
├── linux_who_is_who.txt
├── monitorization_OpenTelemetry.txt
├── socat.txt
├── vagrant.txt
├── vim.txt
└── windows.txt
/caddy_http_server.txt:
--------------------------------------------------------------------------------
1 | Caddy https://caddyserver.com/ (Caddy 2 development sponsored by Ardan Labs)
2 | - low-code/low-conf HTTP(s) server with TLS+OCSP Certificate automation.
3 | - Container friendly (Golang based with no dependencies, not even libc)
4 | - Use-cases:
5 | web server, WAF, ingress, reverse proxy,
6 | TLS terminator, logging, caching, TLS Cert. Management.
7 |
8 | - File compression.
9 | - template evaluation
10 | - Markdown rendering!!!
11 | - HTTPS by default!!!
12 |
13 | # PRODUCTION-READY 1-LINER COMMANDS:
14 |
15 | $ caddy file-server <··· local file server
16 |
17 | $ caddy file-server \ <··· Public HTTPS file server
18 | --domain example.com (Requires domain's public
19 | A/AAAA DNS records to host)
20 |
21 | $ caddy reverse-proxy \ <··· HTTPS reverse proxy
22 | --from example.com \
23 | --to localhost:9000
24 |
25 | $ caddy run <···· Run server with Caddyfile
26 | in working directory (if present)
27 |
28 | # "Caddyfile" (Optional) human-readable config file for "advanced" tasks.
29 | (Alternative Config RESTful API also available ( POST /config/ + JSON body))
30 | ┌─ Caddyfile ──────────
31 | │ localhost <·· Serve local-files
32 | │
33 | │ templates <·· give static sites some dynamic features
34 | │ encode gzip zstd <·· Compress responses according to request headers
35 | │ try_files {path}.html <·· Make HTML file ext. optional
36 | │ {path}
37 | │ reverse_proxy /api/* <·· Send API requests to backend
38 | │ localhost:9005
39 | │ file_server <·· Serve everything else from FS
40 | └───────────────────────
41 |
42 | ┌─ Caddyfile ────────── HTTPS reverse proxy with custom
43 | │ example.com load balancing and active health checks
44 | │ between 3 backends with custom
45 | │ reverse_proxy health─checks
46 | │ 10.0.0.1:9000
47 | │ 10.0.0.2:9000
48 | │ 10.0.0.3:9000 {
49 | │ lb_policy random_choose 2
50 | │ health_path /ok
51 | │ health_interval 10s
52 | │ }
53 | └───────────────────────
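The Caddyfile configs above can also be expressed through the config RESTful API mentioned earlier. A minimal JSON sketch for an HTTPS reverse proxy (single backend at localhost:9000 is an illustrative assumption):

```
{ "apps": { "http": { "servers": { "srv0": {
    "listen": [":443"],
    "routes": [
      { "handle": [ { "handler": "reverse_proxy",
                      "upstreams": [ { "dial": "localhost:9000" } ] } ] }
    ]
} } } } }
```

Load it into a running instance with `curl localhost:2019/load -H "Content-Type: application/json" -d @caddy.json` (2019 is Caddy's default admin port).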
54 |
55 | # Docker run:
56 |
57 | $ docker pull caddy:2.6.2-alpine
58 | $ docker inspect caddy:2.6.2-alpine
59 | WorkingDir : "/srv",
60 | ExposedPorts: "2019/tcp", "443/tcp" "443/udp" "80/tcp"
61 | Cmd : caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
62 | $ docker run --rm -v hostDir:/srv -p 8080:80 caddy:2.6.2-alpine
63 |
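The same container, as a compose-file sketch (the ./Caddyfile and ./site host paths are illustrative; /data is where Caddy persists TLS certificates and OCSP state, so it should survive restarts):

```
services:
  caddy:
    image: caddy:2.6.2-alpine
    ports: [ "80:80", "443:443" ]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile   # config (see examples above)
      - ./site:/srv                        # static content to serve
      - caddy_data:/data                   # TLS cert/OCSP state
volumes:
  caddy_data: {}
```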
--------------------------------------------------------------------------------
/containerization_advanced.txt:
--------------------------------------------------------------------------------
1 | [[{containerization]]
2 | # Containerization: Advanced Topics
3 |
4 |
5 | ## namespace.conf
6 |
7 | * man 5 namespace.conf
8 | * Linux Namespaces:
9 | - https://en.wikipedia.org/wiki/Linux_namespaces
10 | - http://man7.org/linux/man-pages/man1/nsenter.1.html
11 | *
12 |
13 |
14 | ## Cgroups
15 |
16 | * TODO
17 |
18 |
19 |
20 | ## Docker API
21 | * @[https://docs.docker.com/engine/api/])
22 | * @[https://godoc.org/github.com/docker/docker/api]
23 | * @[https://godoc.org/github.com/docker/docker/api/types]
24 |
25 |
26 |
27 | [[{PM.TODO,containerization.networking]]
28 | ## Container Network Iface (CNI)
29 | @[https://github.com/containernetworking/cni]
30 | * specification and libraries for writing plugins to configure network
31 |   interfaces in Linux containers, along with a number of supported plugins:
32 |   - libcni, a CNI runtime implementation
33 |   - skel, a reference plugin implementation
34 |     github.com/containernetworking/cni
35 |   - Set of reference and example plugins:
36 |     · Interface plugins: ptp, bridge, macvlan, ...
37 |     · "Chained" plugins: portmap, bandwidth, tuning, ...
38 |     github.com/containernetworking/plugins
39 | * CNI concerns itself only with network connectivity of containers
40 |   and removing allocated resources when the container is deleted.
41 | * @[https://github.com/containernetworking/cni/blob/master/SPEC.md]
48 |
49 | NOTE: Plugins are executable programs with STDIN/STDOUT
50 | ```
51 | | ┌ Network
52 | | ┌·····>(STDIN) │
53 | | Runtime → ADD JSON CNI ······┤
54 | | ^ ^^^ executable│
55 | | · ADD plugin └ Container(or Pod)
56 | | · DEL └─┬──┘ Interface
57 | | · CHECK v
58 | | · VERSION (STDOUT)
59 | | · └────┬──────┘
60 | | · │
61 | | └···· JSON result <····┘
62 | |
63 | | RUNTIMES 3RD PARTY PLUGINS
64 | | K8s, Mesos, podman, Calico ,Weave, Cilium,
65 | | CRI-O, AWS ECS, ... ECS CNI, Bonding CNI,...
66 | ```
67 |
68 | - The idea of CNI is to provide a common interface between
69 |   the runtime and the CNI (executable) plugins through
70 |   standardised JSON messages.
71 |
72 | - Example cli Tool executing CNI config:
73 | *
74 |
75 | ```
76 | | INPUT_JSON
77 | | {
78 | | "cniVersion":"0.4.0", ← Standard attribute
79 | | "name": *"myptp"*,
80 | | "type":"ptp",
81 | | "ipMasq":true,
82 | | "ipam": { ← Plugin specific attribute
83 | | "type":"host-local",
84 | | "subnet":"172.16.29.0/24",
85 | | "routes":[{"dst":"0.0.0.0/0"}]
86 | | }
87 | | }
88 | | $ echo $INPUT_JSON | \ ← Create network config
89 | | sudo tee /etc/cni/net.d/10-myptp.conf it can be stored on file-system
90 | | or runtime artifacts (k8s etcd,...)
91 | |
92 | | $ sudo ip netns add testing ← Create network namespace.
93 | |
94 | | $ sudo CNI_PATH=./bin \ ← Add container to network
95 | | cnitool add *myptp* \
96 | | /var/run/netns/testing
97 | |
98 | | $ sudo CNI_PATH=./bin \ ← Check config
99 | | cnitool check myptp \
100 | | /var/run/netns/testing
101 | |
102 | | $ sudo ip -n testing addr ← Test
103 | | $ sudo ip netns exec testing \
104 | | ping -c 1 4.2.2.2
105 | |
106 | | $ sudo CNI_PATH=./bin \ ← Clean up
107 | | cnitool del myptp \
108 | | /var/run/netns/testing
109 | | $ sudo ip netns del testing
110 | ```
111 |
112 | * CNI Maintainers (2020). [[{HHRR.who-is-who]]
113 | - Bruce Ma (Alibaba)
114 | - Bryan Boreham (Weaveworks)
115 | - Casey Callendrello (IBM Red Hat)
116 | - Dan Williams (IBM Red Hat)
117 | - Gabe Rosenhouse (Pivotal)
118 | - Matt Dupre (Tigera)
119 | - Piotr Skamruk (CodiLime)
120 | - "CONTRIBUTORS"
121 | [[HHRR.who-is-who}]]
122 |
123 | * Chat channels: , topic #cni
124 | [[}]]
125 | [[containerization}]]
126 |
--------------------------------------------------------------------------------
/containerization_registry.txt:
--------------------------------------------------------------------------------
1 | [[{containerization,image.registry,PM.WiP]]
2 |
3 | # Container Registry
4 |
5 | * OCI Image: Sort of ISO image with immutable file system BUT ... formed
6 |   of different layers.
7 |
8 | * Registry daily usage:
9 |
10 | ```
11 | $ docker images # <·· List local (/var/lib/docker/...) images
12 | | REPOSITORY TAG IMAGE ID CREATED SIZE
13 | | gitea/gitea 1.20.1-rootless eb341527606d 4 months ago 261MB
14 | | postgres 14.5 cefd1c9e490c 13 months ago 376MB
15 | | golang_1.17_alpine3.15_gcc latest 6093faef6d66 17 months ago 438MB
16 | | ...
17 | | hello-world latest feb5d9fea6a5 2 years ago 13.3kB
18 | ```
19 |
20 | ```
21 | | $ docker rmi ${IMG_NAME}:${IMG_VER} # <·· remove (local) image
22 | |
23 | | $ docker image prune # <·· remove all non used images
24 | | [[{troubleshooting.storage}]]
25 | ```
26 |
27 | ## Registries vs Repositories ("Image repo")
28 |
29 | * repository: "storage" for OCI binary images with a common http
30 | network protocol to "push" and "pull" new OCI images (actually,
31 | OCI "layers") and cryptographic hash protection against tampered
32 | images.
33 | * registry  : index of 1+ repositories (usually its own repo), and
34 | network protocol to query the index remotely.
35 | *
36 | *
37 |
38 | ## Registry "Daily" Usage
39 |
40 | ```
41 | | (server01) ─────────────────────────────────────────────────────────
42 | | $ docker run -d \ <·· Start registry v2 at server01:5000
43 | | -p 5000:5000 \
44 | | --restart=always \
45 | | --name registry registry:2
46 | ```
47 |
48 | ```
49 | | (shell console@local-dev-laptop) ───────────────────────────────────
50 | | $ docker search ubuntu <·· Search remote images @ Docker Hub:
51 | | NAME DESCRIPTION STARS OFFICIAL AUTOMATED
52 | | ┌> ubuntu Ubuntu is a... 16627 [OK]
53 | | · ubuntu/nginx Nginx, a hi... 102
54 | | · ubuntu/squid Squid is a ... 70
55 | | · ...
56 | | · ^^^^^^^^^^^^
57 | | ·
58 | |      └> To see also available tags: (latest by default)···········┬────┐
59 | | · ·
60 | | $ curl -s \ v v
61 | | 'https://registry.hub.docker.com/v2/repositories/library/ubuntu/tags/' | \
62 | | jq '."results"[]["name"]'
63 | | | "latest"
64 | | | "rolling"
65 | | | ...
66 | | | "23.10"
67 | | | "23.04"
68 | | | ...
69 | |
70 | | $ docker pull ubuntu <·· Pull (example) image from public
71 | | registry to /var/lib/docker/image
72 | |
73 | | $ docker image tag ubuntu \ <·· Add new tag to the image to "point"
74 | | server01:5000/myfirstimage to local registry
75 | |
76 | | $ docker login \ <·· Optional. If user/pass is required
77 | | -u ${LOGIN_USER} \ (on success a local access token will
78 | |   -p ${SESSION_TOKEN} \      be created and reused in next commands)
79 | | server01:5000
80 | |
81 | | $ docker push \ <·· Push to registry@server01
82 | | server01:5000/myfirstimage
83 | |
84 | | $ docker pull \ <·· final Check
85 | | server01:5000/myfirstimage
86 | ```
87 |
88 | ## Add Insecure HTTP registry [[{image.registry,troubleshooting]]
89 |
90 | * Strongly discouraged: use TLS whenever possible.
91 |
92 | * REF:
93 | ```
94 | cat /etc/containers/registries.conf
95 | # ...
96 | [registries.insecure]
97 | registries = [ 'server01:5000' ]
98 |                └─────────────┴─ non-TLS (plain HTTP) server
99 | ```
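The snippet above is podman's /etc/containers/registries.conf; for Docker Engine the analogous setting lives in /etc/docker/daemon.json (restart the daemon afterwards):

```
{
  "insecure-registries": [ "server01:5000" ]
}
```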
100 | [[}]]
101 |
102 |
103 |
104 | ## skopeo [[{image.registry,QA.UX,PM.low_code.skopeo,troubleshooting]]
105 |
106 | *
107 | *
108 |
109 | * command line utility to manage local/remote images in repositories
110 |   (supports registry API v2: Quay, Docker Hub, OpenShift, GCR, ...).
111 | * rootless execution (for most of its operations).
112 | * Current commands include:
113 | ```
114 | | · copy : Copy image (manifest, FS layers, signatures).
115 | | · delete : Mark image-name for **later deletion by registry's garbage collector**.
116 | | · inspect : Return image low-level information from remote registry.
117 | | · list-tags : Return list of tags for image.
118 | | · sync : Synchronize images between registry repositories.
119 | | · manifest-digest : Compute manifest digest for manifest-file.
120 | | · standalone-sign : Debugging tool: Sign image locally without uploading.
121 | | · standalone-verify: Debugging tool: Verify image signature from local files.
122 | | · login/logout
123 | | · generate-sigstore-key : Generate a sigstore public/private key pair.
124 | ```
125 | * compatible with OCI images (standards) and original docker v2 images.
126 | * Install with dnf, apt, docker, ...
127 | ```
128 | | registry
129 | | ├ repo01 <·· fetch list with GET .../v2/_catalog
130 | | │ ├ tag1 <·· fetch list with GET .../v2/repo01/tags/list
131 | | │ ├ tag2
132 | | │ ├ ...
133 | | │ └ tagN
134 | | ├ repo02
135 | | ├ ...
136 | | # STEP 1: list repositories in registry
137 | | # Use optional flag: -u : for authentication
138 | | $ REG="registry.fedoraproject.org"
139 | | $ REPO="https://${REG}"
140 | | $ curl --silent -X GET \
141 | | ${REPO}/v2/_catalog \
142 | | | jq .repositories
143 | | [ ..., "fedora", ... ]
144 | | └────┴─··················─┬────┐
145 | | $ curl --silent -X GET ${REPO}/v2/fedora/tags/list | jq .
146 | | {
147 | | "name": "fedora",
148 | | "tags": [ "latest", "34-aarch64", "34-ppc64le", ... ]
149 | | }
150 | |
151 | | $ skopeo login ${REG}
152 | |
153 | | $ skopeo delete \ <·· delete image. (STEP 1)
154 | | docker://${REG}/fedora:34-aarch64
155 | |
156 | | $ docker exec -it -u root registry \ <·· delete image. (STEP 2)
157 | | bin/registry garbage-collect \
158 | | --delete-untagged /etc/docker/registry/config.yml
159 | |
160 | | $ skopeo inspect \
161 | | docker://registry.fedoraproject.org/fedora:latest # <· Inspect repository in registry
162 | | # fetch repository's manifest, output
163 | | # whole repository or tag
164 | | > {
165 | | > "Name": "registry.fedoraproject.org/fedora",
166 | | > "Digest": "sha256:0f65bee...", # <· WARN: Unverified digest.
167 | | > "RepoTags": [ "34-aarch64", "34", "latest", ... ],
168 | | > ...
169 | | > "DockerVersion": "1.10.1",
170 | | > "Labels": { "license": "MIT", "name": "fedora", "vendor": "Fedora Project",... },
171 | | > "Architecture": "amd64",
172 | | > ...
173 | | > "Layers": [ "sha256:2a0fc..." ],
174 | | > "LayersData": [ { ... } ],
175 | | > "Env": [ "DISTTAG=f37container", "FGC=f37", "container=oci" | ]
176 | | > }
177 | |
178 | | $ skopeo inspect --config \
179 | | docker://registry.fedoraproject.org/fedora:latest | \
180 | | jq . # Show container config. from fedora:latest
181 | | {
182 | | ...
183 | | "config": { "Env": [ ... } ], "Cmd": [ "/bin/bash" ], ... }
184 | | "rootfs": { "type": "layers", "diff_ids": [ "sha256:a4c0fa..." ] },
185 | | "history": [ { ... } ]
186 | | }
187 | |
188 | | $ skopeo copy \ <·· copy image
189 | | docker://quay.io/buildah/stable \
190 | | docker://registry.internal.company.com/buildah
191 | |
192 | | $ skopeo copy \ <·· copy image
193 | | oci:busybox_ocilayout:latest \
194 | | dir:existingemptydirectory
195 | |
196 | | SYNCING REGISTRIES ---------------------------------------
197 | |
198 | | $ skopeo sync \
199 | | --src docker \
200 | | --dest dir registry.example.com/busybox /media/usb
201 | |
202 | | AUTHENTICATING TO A REGISTRY
203 | |
204 | | $ skopeo login \
205 | | --username \
206 | | myregistrydomain.com:5000 # skopeo logout myregistrydomain.com:5000
207 | |
208 | | Alternatively --creds=user:pass can be used with no previous login or
209 | | docker/podman/... login
210 | ```
211 |
212 |
213 |
214 | [[}]]
215 |
216 | [[containerization,image.registry}]]
217 |
--------------------------------------------------------------------------------
/containerization_runtimes.txt:
--------------------------------------------------------------------------------
1 | [[{containerization.runtimes]]
2 | # Runtimes
3 |
4 | ## Runtime Comparative Summary
5 | ```
6 | runC       Golang, OCI reference impl. by Docker+others <·· low-level OCI runtime, Alt 1
7 | ---------------------------------------------------
8 | crun       C-based, faster than runC                    <·· low-level OCI runtime, Alt 2
9 | ---------------------------------------------------
10 | containerd by Docker, donated to CNCF                  <·· high-level runtime, Alt 1
11 | ---------------------------------------------------
12 | CRI-O      lightweight CRI alternative for k8s         <·· high-level runtime, Alt 2
13 | ---------------------------------------------------
14 | rktlet     rkt-based CRI implementation (archived)     <·· high-level runtime, Alt 3
15 | ---------------------------------------------------
16 | ```
17 |
18 |
19 | ## runc [[{ $runc ]]
20 |
21 | * @[https://github.com/opencontainers/runc]
22 | * Reference runtime and cli tool donated by Docker for spawning and
23 | running containers according to the OCI specification.
24 | (@[https://www.opencontainers.org/])
25 | * **It reads a runtime specification and configures the Linux kernel.
26 | Eventually it creates and starts container processes. **
27 | ```
28 | Go might not have been the best programming language for this task.
29 | since it does not have good support for the fork/exec model of computing
30 | - Go's threading model expects programs to fork a second process and then
31 | to exec immediately.
32 | - However, an OCI container runtime is expected to fork off the first
33 | process in the container. It may then do some additional
34 | configuration, including potentially executing hook programs, before
35 | exec-ing the container process. The runc developers have added a lot
36 | of clever hacks to make this work but are still constrained by Go's
37 | limitations.
38 | crun, C based, solved those problems.
39 | ```
40 | [[ $runc }]]
41 |
42 | ## crun [[{$crun]]
43 | @[https://github.com/containers/crun/issues]
44 | @[https://www.redhat.com/sysadmin/introduction-crun]
45 |
46 | * fast, low-memory footprint container runtime by Giuseppe Scrivano (Red Hat).
47 | * C based: Unlike Go, C is not multi-threaded by default, and was built
48 | and designed around the fork/exec model.
49 | It could handle the fork/exec OCI runtime requirements in a much cleaner
50 | fashion than 'runc'. C also interacts very well with the Linux kernel.
51 | It is also lightweight, with much smaller sizes and memory than runc(Go):
52 | compiled with -Os, 'crun' binary is ~300k (vs ~15M 'runc')
53 | "" We have experimented running a container with just *250K limit set*.""
54 |   i.e. ~50 times smaller, and up to twice as fast.
55 | * `cgroups v2` ("==" Upstream kernel, Fedora 31+) compliant from scratch
56 |   while runc -Docker/K8s/...- **remains "stuck" on cgroups v1.**
57 |   (experimental support in 'runc' for v2 as of v1.0.0-rc91, thanks to
58 |   Kolyshkin and Akihiro Suda).
59 | * feature-compatible with "runc" with extra experimental features.
60 | * Given the same Podman CLI/k8s YAML we get the same containers "almost
61 | always" since **the OCI runtime's job is to instrument the kernel to
62 | control how PID 1 of the container runs. It is up to higher-level tools
63 | like `conmon` or the container engine to monitor the container.**
64 | * Sometimes users want to limit number of PIDs in containers to just one.
65 | With 'runc' PIDs limit can not be set too low, because the Go runtime
66 | spawns several threads.
67 | `crun`, written in C, does not have that problem. Ex:
68 | ```
69 | $ RUNC="/usr/bin/runc" ; CRUN="/usr/bin/crun"
70 | $ podman --runtime $RUNC run --rm --pids-limit 5 fedora echo it works
71 | └────────────┘
72 | → Error: container create failed (no logs from conmon): EOF
73 | $ podman --runtime $CRUN run --rm --pids-limit 1 fedora echo it works
74 | └────────────┘
75 | → it works
76 | ```
77 | * OCI hooks supported, allowing the execution of specific programs at
78 | different stages of the container's lifecycle.
79 |
80 | ### runc/crun comparative
81 |
82 | * 'crun' is more portable: Ex: Risc-V.
83 | * Performance:
84 | ```
85 | $ CMD_RUNC="for i in {1..100}; do runc run foo < /dev/null; done"
86 | $ CMD_CRUN="for i in {1..100}; do crun run foo < /dev/null; done"
87 | $ time -v sh -c "$CMD_RUNC"
88 | → User time (seconds): 2.16
89 | → System time (seconds): 4.60
90 | → Elapsed (wall clock) time (h:mm:ss or m:ss): 0:06.89
91 | → Maximum resident set size (kbytes): 15120
92 | → ...
93 | $ time -v sh -c "$CMD_CRUN"
94 | → ...
95 | → User time (seconds): 0.53
96 | → System time (seconds): 1.87
97 | → Elapsed (wall clock) time (h:mm:ss or m:ss): 0:03.86
98 | → Maximum resident set size (kbytes): 3752
99 | → ...
100 | ```
101 |
102 | ### Experimental features
103 | * redirecting hooks STDOUT/STDERR via annotations.
104 | - Controlling stdout and stderr of OCI hooks
105 | Debugging hooks can be quite tricky because, by default,
106 | it's not possible to get the hook's stdout and stderr.
107 | - Getting the error or debug messages may require some yoga.
108 | - common trick: log to syslog to access hook-logs via journalctl.
109 | (Not always possible)
110 | - With 'crun' + 'Podman':
111 | ```
112 | $ podman run --annotation run.oci.hooks.stdout=/tmp/hook.stdout
113 | └───────────────────────────────────┘
114 | executed hooks will write:
115 | STDOUT → /tmp/hook.stdout
116 | STDERR → /tmp/hook.stderr
117 |                  (proposed for OCI runtime spec)
118 | ```
119 |
120 | * crun supports running older versions of systemd on cgroup v2 using
121 | `--annotation run.oci.systemd.force_cgroup_v1`.
122 | This forces a cgroup v1 mount inside the container for the `name=systemd` hierarchy,
123 | which is enough for systemd to work.
124 | Useful to run older container images, such as RHEL7, on a cgroup v2-enabled system.
125 |   E.g.:
126 | ```
127 | $ podman run --annotation \
128 | run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup \
129 | centos:7 /usr/lib/systemd/systemd
130 | ```
131 | * Crun as a library:
132 |   "We are considering integrating it with conmon, the container monitor used by
133 | Podman and CRI-O, rather than executing an OCI runtime."
134 | * 'crun' Extensibility:
135 |   """... easy to use all the kernel features, including syscalls not enabled in Go."""
136 |   - Ex: openat2 syscall protects against link path attacks (already supported by crun).
137 | [[$crun}]]
138 |
139 |
140 |
141 | [[containerization.runtimes}]]
142 |
--------------------------------------------------------------------------------
/coroot.txt:
--------------------------------------------------------------------------------
1 | [[{101.coroot,containarization.troubleshooting,troubleshooting.coroot,monitoring.101]]
2 |
3 | # coroot: monitoring+troubleshooting agent for containers and decoupled architectures.
4 |
5 | *
6 |
7 | Node-agent turns terabytes of logs into just a few dozen metrics by extracting
8 | repeated patterns right on the node, allowing one to quickly and cost-effectively
9 | find the errors relevant to a particular outage.
10 |
11 | * Thanks to eBPF, it reports a comprehensive map of services WITHOUT ANY CODE CHANGES.
12 |
13 | * It also uses "cloud metadata" to show which regions and availability zones each application
14 |   runs in. This is very important to know, because:
15 | - Network latency between availability zones within the same region
16 | can be higher than within one particular zone.
17 | - Data transfer between availability zones in the same region is
18 | paid, while data transfer within a zone is free.
19 |
20 | [[101.coroot}]]
21 |
--------------------------------------------------------------------------------
/devops_CICD.txt:
--------------------------------------------------------------------------------
1 | [[{DevOps.CICD]]
2 |
3 | [[{security.IA.source_d,qa.code_analysis,qa.testing,git,PM.TODO]]
4 | # Source{d}: Large Scale Code Analysis with AI
5 |
6 |
7 | * source{d} offers a suite of applications that uses machine learning on code
8 | to complete source code analysis and assisted code reviews. Chief among them
9 | is the source{d} Engine, now in public beta; it uses a suite of open source
10 | tools (such as Gitbase, Babelfish, and Enry) to enable large-scale source
11 | code analysis. Some key uses of the source{d} Engine include language
12 | identification, parsing code into abstract syntax trees, and performing SQL
13 | Queries on your source code such as:
14 | * What are the top repositories in a codebase based on number of commits?
15 | * What is the most recent commit message in a given repository?
16 |  * Who are the most prolific contributors in a repository?
17 | [[security.IA.source_d}]]
18 |
19 | [[DevOps.CICD}]]
20 |
--------------------------------------------------------------------------------
/devops_TODO.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/earizon/DevOps/2f0a426941c4a9300edf52c94f676b02cbe29f63/devops_TODO.txt
--------------------------------------------------------------------------------
/devops_bash.txt:
--------------------------------------------------------------------------------
1 | [[{dev_stack.shell_script]]
2 | # Shell Scripting
3 |
4 | ## REFERENCE SCRIPT: [[{dev_stack.shell_script.101]]
5 |
6 | ```
7 | | #!/bin/bash
8 | | # NOTE: The Bash "syntax sugar" VAR1=$(some command) executes "some command"
9 | | #       and assigns its (STDOUT) output as the effective value of VAR1
10 | |
11 | | # SETUP STDERR/STDOUT logging to file and console {{{
12 | | readonly LOG_DIR="LOGS.gitignore"
13 | | if [ ! -d ${LOG_DIR} ] ; then
14 | | mkdir ${LOG_DIR}
15 | | fi
16 | | # $(whoami) will avoid collisions among different users even if writing to the
17 | | # same directory and serves as audit trail. # This happens frequently in DevOps when
18 | | # executing in sudo/non-sudo contexts.
19 | | readonly OUTPUT="${LOG_DIR}/$(basename $0).$(whoami).$(date +%Y%m%d_%Hh%Mm%Ss).log"
20 | | ln -sf ${OUTPUT} link_last_log.$(basename $0).gitignore # (opinionated) Improve UX, create link to latest log
21 | | exec 3>&1
22 | | exec 4>&2
23 | | echo "Cloning STDOUT/STDERR to ${PWD}/${OUTPUT}"
24 | | # (Opinionated) Redirect to STDOUT and file. REF:
25 | | exec &> >(tee -a "$OUTPUT") # Comment to disable (Usually not needed in Kubernetes/AWS-EC2/...
26 | | # since console output is directly saved to files/S3 by some external mechanism.
27 | | # https://unix.stackexchange.com/questions/145651↩
28 | | # /using-exec-and-tee-to-redirect-logs-to-stdout-and-a-log-file-in-the-same-time
29 | |
30 | | exec 2>&1 # (Opinionated). Mix errors (STDERR) with STDOUT.
31 | | # Recommended to see errors in the context of normal execution.
32 | | echo "message logged to file & console"
33 | | # }}}
34 | |
35 | | # Bash syntax sugar.
36 | | [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 # <·· Exit unless hostname ends in -NUMBER.
37 | | SERVER_NUMBER=${BASH_REMATCH[1]}         # <·· Otherwise assign match to var.
38 | |
39 | | global_exit_status=0
40 | | readonly WD=$(pwd) # Best Practice: write down current work dir and use it
41 | |                    # to avoid problems when changing dir ("cd")
42 | |                    # randomly throughout the script execution
43 | |
44 | | readonly FILE_RESOURCE_01="${WD}/temp_data.csv" # <- readonly: immutable value [[qa]]
45 | |
46 | |
47 | | readonly LOCK=$(mktemp) # <- Make temporal file. Assign ro constant LOCK.
48 | | # Use TMP_DIR=$(mktemp --directory) to create temporal dir.
49 | |
50 | | function funCleanUpOnExit() {
51 | | rm ${LOCK}
52 | | }
53 | | trap funCleanUpOnExit EXIT # ← Clean any temporal resource (socket, file, ...) on exit
54 | |
55 | | function XXX(){
56 | | set +e # <- Disable "exit on any error" for the body of function.
57 | | # REF:
58 | |   local -r name=${HOME}  # local: scoped to function!!! -r: read-only [[qa]]
59 | | echo "Cleaning resource and exiting"
60 | |   rm -f ${FILE_RESOURCE_01}
61 | | set -e # <- Re-enable fail-fast.
62 | | }
63 | |
64 | | ERR_MSG=""
65 | | function funThrow {
66 | | if [[ $STOP_ON_ERR_MSG != false ]] ; then
67 | | echo "ERR_MSG DETECTED: Aborting now due to "
68 | | echo -e ${ERR_MSG}
69 | | if [[ $1 != "" ]]; then
70 | | global_exit_status=$1 ;
71 | | elif [[ $global_exit_status == 0 ]]; then
72 | | global_exit_status=1 ;
73 | | fi
74 | | exit $global_exit_status
75 | | else
76 | | echo "ERR_MSG DETECTED: "
77 | | echo -e ${ERR_MSG}
78 | | echo "WARN: CONTINUING WITH ERR_MSGS "
79 | |
80 | | global_exit_status=1 ;
81 | | fi
82 | | ERR_MSG=""
83 | | }
84 | |
85 | | exec 100>${LOCK}                    # Simple linux-way to use locks.
86 | | flock 100                           # First script execution will hold the lock.
87 | | if [[ $? != 0 ]] ; then             # Next ones will have to wait. Use -w nSecs
88 | |   ERR_MSG="FAILED TO ACQUIRE LOCK"  # to fail after timeout or -n to fail-fast.
89 | |   funThrow 10 ;                     # lock will automatically be released on
90 | | fi                                  # exit. (no need to unlock manually)
91 | | # REF
91 | | # REF
92 | ```
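The mktemp + trap EXIT pattern above, reduced to a minimal runnable sketch (the "( )" function body makes the trap fire as soon as the subshell returns):

```shell
#!/bin/bash
demo() (                        # note "( )" body: runs in a subshell
  LOCK=$(mktemp)                # temp file playing the disposable resource
  trap 'rm -f "${LOCK}"; echo "cleaned up"' EXIT   # fires when subshell exits
  [ -f "${LOCK}" ] && echo "resource exists"
)
demo                            # → resource exists
                                # → cleaned up
```

The trap runs on every exit path (normal end, `exit`, or a fatal error under `set -e`), which is what makes it suitable for cleanup of sockets, temp files, locks, etc.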
93 |
94 | # SIMPLE WAY TO PARSE/CONSUME ARGUMENTS WITH while-loop.
95 | ```
96 | | while [ $# -gt 0 ]; do # $# number of arguments
97 | | case "$1" in
98 | | -l|--list)
99 | | echo "list arg"
100 | | shift 1 # <- consume arg , $# = $#-1
101 | | ;;
102 | | -p|--port)
103 | | export PORT="${2}"
104 | | shift 2 # <- consume arg+value, $# = $#-2
105 | | ;;
106 | | -h|--host)
107 | | export HOST="${2^^}" # <- ^^ suffix: Convert ${2} to upper case
108 | | shift 2 # <- consume arg+value, $# = $#-2
109 | | ;;
110 | | *)
111 | | echo "non-recognised option '$1'"
112 | | shift 1 # <- consume arg , $# = $#-1
113 | | esac
114 | | done
115 | |
116 | | set -e # At this point all variable must be defined. Exit on any error.
117 | |
118 | | function preChecks() {
119 | | # Check that ENV.VARs and parsed arguments are in place
120 | | if [[ ! ${HOME} ]] ; then ERR_MSG="HOME ENV.VAR NOT DEFINED" ; funThrow 41 ; fi
121 | | if [[ ! ${PORT} ]] ; then ERR_MSG="PORT ENV.VAR NOT DEFINED" ; funThrow 42 ; fi
122 | | if [[ ! ${HOST} ]] ; then ERR_MSG="HOST ENV.VAR NOT DEFINED" ; funThrow 43 ; fi
123 | | set -u # From here on, ANY UNDEFINED VARIABLE IS CONSIDERED AN ERROR.
124 | | }
125 | |
126 | | function funSTEP1 {
127 | | echo "STEP 1: $HOME, PORT:$PORT, HOST: $HOST"
128 | | }
129 | | function funSTEP2 { # throw ERR_MSG
130 | | ERR_MSG="My favourite ERROR@funSTEP2"
131 | | funThrow 2
132 | | }
133 | |
134 | | cd $WD ; preChecks
135 | | cd $WD ; funSTEP1
136 | | cd $WD ; funSTEP2
137 | |
138 | | echo "Exiting with status:$global_exit_status"
139 | | exit $global_exit_status
140 | ```
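The same while/shift/case pattern, reduced to a self-contained snippet (option names and values are illustrative):

```shell
#!/bin/bash
parse() {
  while [ $# -gt 0 ]; do                   # $#: number of args left
    case "$1" in
      -p|--port) PORT="$2"    ; shift 2 ;; # consume flag + value
      -h|--host) HOST="${2^^}"; shift 2 ;; # ^^: upper-case the value
      *)         shift 1 ;;                # consume unrecognised arg
    esac
  done
}
parse --host localhost -p 8080 --verbose
echo "${HOST}:${PORT}"                     # → LOCALHOST:8080
```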
141 |
142 | ## INIT VARS AND CONSTANTS:
143 |
144 | * complete Shell parameter expansion list available at:
145 |
146 | ```
147 | | var1=$1                 # init var with first positional param
148 | | var2=$#                 # init var with number of params
149 | | var3=$!                 # init var with PID of last background command.
150 | | var4=${parameter:-word} # == $parameter if set/non-null, else 'word' (expansion)
151 | | var5=${parameter:=word} # == $parameter if set/non-null, else 'word' (expansion), then parameter=word
152 | | var6=${parameter:?word} # == $parameter if set/non-null, else 'word' (expansion) written to STDERR, then exit.
153 | | var7=${parameter:+word} # == 'word' if parameter set/non-null, else empty string.
154 | | var8=${var1^^}          # init var8 as var1 UPPERCASE.
155 | | var9=${parameter:offset} # <- Substring Expansion. It expands to up to length characters of the value
156 | | varA=${parameter:offset:length} | of parameter starting at the character specified by offset.
157 | | | If parameter is '@', an indexed array subscripted by '@' or '*', or an
158 | | | associative array name, the results differ.
159 | | readonly const1=${varA}
160 | ```
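A runnable sample of the expansions above:

```shell
#!/bin/bash
unset MAYBE
echo "${MAYBE:-fallback}"   # → fallback (MAYBE unset: expansion only)
echo "${MAYBE:=fallback}"   # → fallback (and MAYBE is now assigned)
echo "${MAYBE:+replaced}"   # → replaced (MAYBE is set now)
STR="abcdef"
echo "${STR:2:3}"           # → cde (offset 2, length 3)
echo "${STR^^}"             # → ABCDEF
```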
161 |
162 | ## CONCURRENT PROCESS BARRIER SYNCHRONIZATION
163 |
164 | * Wait for background jobs to complete example:
165 | ```
166 | (
167 | ( sleep 3 ; echo "job 1 ended" ) &
168 | ( sleep 1 ; echo "job 2 ended" ) &
169 | ( sleep 1 ; echo "job 3 ended" ) &
170 | ( sleep 9 ; echo "job 4 ended" ) &
171 |    wait            # alt.1: Wait for ALL background jobs to complete
172 | # wait %1 %2 %3 # alt.2: Wait for jobs 1,2,3. Do not wait for job 4
173 | echo "All subjobs ended"
174 | ) &
175 | ```
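A measurable sketch of the barrier (sleep lengths are arbitrary): a bare `wait` blocks until every background job has finished, so total elapsed time is bounded by the slowest job, not the sum:

```shell
#!/bin/bash
start=${SECONDS}
( sleep 2 ) &                 # slowest job
( sleep 1 ) &
wait                          # barrier: blocks until ALL background jobs end
elapsed=$(( SECONDS - start ))
echo "all jobs done after ~${elapsed}s"   # ~2s, bounded by the slowest job
```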
176 |
177 | ## SIMPLIFIED READ-EVALUATE-PARSE-LOOP (REPL) IN BASH
178 |
179 | * REPL stands for Read-eval-print loop: More info at:
180 |
181 |
182 | ```
183 | | while [[ ${LANGUAGE} != "EXIT" ]] ; do # {
184 | |   select LANGUAGE in BASH CSHARP JAVA PHP PYTHON EXIT
185 | |   do # {
186 | |     echo "Selected language is ${LANGUAGE}" ; break
187 | |   done # }
188 | | done # }
189 | ```
190 |
191 | * More complex input can be done replacing the line:
192 | ```
193 | | select INPUT in OPT1 OPT2 ...
194 | | do
195 | | ...
196 | | done
197 | ```
198 | by
199 | ```
200 | | read -p "PROMPT" INPUT
201 | | funComplexParsingAndEvaluationOver $INPUT
202 | ```
203 |
204 | ## 'test' CONDITIONAL BRANCHING [[{]]
205 | (summary of 'man test' from GNU coreutils; see it for more info)
206 |
207 | ```
208 | | test EXPRESSION # ← EXPRESSION true/false sets the exit status.
209 | | test [ EXPRESSION ]
210 | |
211 | | -n STRING # STRING length >0
212 | | # (or just STRING)
213 | | -z STRING # STRING length == 0
214 | | STRING1 = STRING2 # String equality
215 | | STRING1 != STRING2 # String in-equality
216 | |
217 | |
218 | | INTEGER1 -eq INTEGER2 # ==
219 | | INTEGER1 -ge INTEGER2   # >=
220 | | INTEGER1 -gt INTEGER2   # >
221 | | INTEGER1 -le INTEGER2   # <=
222 | | INTEGER1 -lt INTEGER2   # <
223 | | INTEGER1 -ne INTEGER2   # !=
224 | | ^^^^^^^^
225 | | NOTE: INTEGER can be -l STRING (length of STRING)
226 | |
227 | | FILE TEST/COMPARISON
228 | | WARN: Except -h/-L, all FILE-related tests dereference symbolic links.
229 | | -e FILE # FILE exists
230 | |   -f FILE     # FILE exists and is a regular file
231 | | -h FILE # FILE exists and is symbolic link (same as -L)
232 | | -L FILE # (same as -h)
233 | | -S FILE # FILE exists and is socket
234 | | -p FILE # FILE exists and is a named pipe
235 | | -s FILE # FILE exists and has size greater than zero
236 | |
237 | |
238 | | -r FILE # FILE exists and read permission is granted
239 | | -w FILE # FILE exists and write permission is granted
240 | | -x FILE # FILE exists and exec permission is granted
241 | |
242 | | FILE1 -ef FILE2 # ← same device and inode numbers
243 | | FILE1 -nt FILE2 # FILE1 is newer (modification date) than FILE2
244 | | FILE1 -ot FILE2 # FILE1 is older (modification date) than FILE2
245 | | -b FILE # FILE exists and is block special
246 | | -c FILE # FILE exists and is character special
247 | | -d FILE # FILE exists and is a directory
248 | | -k FILE # FILE exists and has its sticky bit set
249 | |
250 | |
251 | | -g FILE # FILE exists and is set-group-ID
252 | | -G FILE # FILE exists and is owned by the effective group ID
253 | | -O FILE # FILE exists and is owned by the effective user ID
254 | |   -t FD       # file descriptor FD is opened on a terminal
255 | |   -u FILE     # FILE exists and its set-user-ID bit is set
256 | ```
257 | ### BOOLEAN ADDITION:
258 | * WARN: inherently ambiguous. Use the preferred forms instead:
259 | ```
260 |   EXPRESSION1 -a EXPRESSION2 # AND # 'test EXPR1 && test EXPR2' is preferred
261 |   EXPRESSION1 -o EXPRESSION2 # OR  # 'test EXPR1 || test EXPR2' is preferred
262 | ```
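The preferred forms side by side, checking that a file exists and is readable (/etc/hosts is used only as an example path):

```
f=/etc/hosts
# Ambiguous (discouraged):  test -e "$f" -a -r "$f"
# Preferred: let the shell combine two unambiguous invocations:
if test -e "$f" && test -r "$f"; then
    echo "exists and is readable"
fi
# Bash's [[ ]] also provides unambiguous && / || operators:
[[ -e $f && -r $f ]] && echo "exists and is readable"
```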
263 | [[}]]
264 |
265 | WARN,WARN,WARN : your shell may have its own version of test and/or '[',
266 | which usually supersedes the version described here.
267 |        Use /usr/bin/test to force non-shell usage.
268 |
269 | Full documentation at:
270 | [[}]]
271 |
272 | ## (KEY/VALUE) MAPS (Bash 4+)
273 | (also known as associative array or hashtable)
274 |
275 | Bash Maps can be used as "low code" key-value databases.
276 | Very useful for daily config/devops/testing tasks.
277 | Ex:
278 | ```
279 | | #!/bin/bash # ← /bin/sh will fail. Bash 4+ specific
280 | |
281 | | declare -A map01 # ← STEP 1) declare Map
282 | |
283 | | map01["key1"]="value1" # ← STEP 2) Init with some elements.
284 | | map01["key2"]="value2" # Visually map01 will be a table similar to:
285 | | map01["key3"]="value3" # key │ value
286 | | # ─────┼───────
287 | | # key1 │ value1 ← key?, value? can be any string
288 | | # key2 │ value2
289 | | # key3 │ value3
290 | |
291 | | keyN="key2"                      # ← STEP 3) Example Usage
292 | | echo "${map01[${keyN}]}"         # ← fetch value for key "key2" (value2)
293 | | echo "${!map01[@]}"              # ← fetch keys   (key2 key3 key1; order not guaranteed)
294 | | echo "${map01[@]}"               # ← fetch values (value2 value3 value1)
295 | |
296 | | for keyN in "${!map01[@]}"; # ← walk over keys:
297 | | do # (output)
298 | | echo "$keyN : ${map01[$keyN]}" # key1 : value1
299 | | done # key2 : value2
300 | | # key3 : value3
301 | |
302 | ```
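Building on the example above, a common "low code" use is aggregating counts by key; ':-0' defaults missing keys to zero (the names below are illustrative):

```
declare -A hits                                # map: word → occurrence count
for word in GET POST GET GET DELETE; do
    hits[$word]=$(( ${hits[$word]:-0} + 1 ))   # increment, defaulting to 0
done
for k in "${!hits[@]}"; do
    echo "$k: ${hits[$k]}"                     # GET: 3, POST: 1, DELETE: 1
done                                           # (key order is not guaranteed)
```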
303 |
304 | ## Kapow! Shell Script to HTTP API [[{web_hook,PM.low_code,git,monitoring.prometheus,dev_stack.shell_script,dev_stack.kubernetes,]]
305 | [[notifications.jira,InfraAsCode.ansible,git,git.github,notifications.slack]]
306 | (by BBVA-Labs Security team members)
307 | " If you can script it, you can HTTP it !!!!"
308 |
309 | Example.
310 |
311 |
312 | Initial Script:
313 | ```
314 | | $ cat /var/log/apache2/access.log | grep 'File does not exist'
315 | ```
316 |
317 | To expose it as HTTP:
318 |
319 | ```
320 | | $ cat search-apache-errors
321 | | #!/usr/bin/env sh
322 | | kapow route add /apache-errors - <<-'EOF'
323 | | cat /var/log/apache2/access.log | grep 'File does not exist' | kapow set /response/body
324 | | EOF
325 | ```
326 |
327 | Run HTTP Service like:
328 |
329 | ```
330 | | $ kapow server search-apache-errors ← Client can access it like
331 | | curl http://apache-host:8080/apache-errors
332 | | [Fri Feb 01 ...] [core:info] File does not exist: ../favicon.ico
333 | | ...
334 | ```
335 | We can share information without having to grant SSH access to anybody.
336 |
337 |
338 | Recipe: Run script as a given user:
339 | # Note that `kapow` must be available under $PATH relative to /some/path
340 | ```
341 | | kapow route add /chrooted\
342 | | -e 'sudo --preserve-env=KAPOW_HANDLER_ID,KAPOW_DATA_URL \
343 | | chroot --userspec=sandbox /some/path /bin/sh -c' \
344 | | -c 'ls / | kapow set /response/body'
345 | ```
346 | [[}]]
347 |
348 | ## WebHook [[{dev_stack.shell_script,PM.TODO]]
349 | *
350 | * lightweight incoming webhook server to run shell commands
351 | You can also pass data from the HTTP request (such as headers,
352 | payload or query variables) to your commands. webhook also allows you
353 | to specify rules which have to be satisfied in order for the hook to
354 | be triggered.
355 | * For example, if you're using Github or Bitbucket, you can use webhook
356 | to set up a hook that runs a redeploy script for your project on your
357 | staging server, whenever you push changes to the master branch of
358 | your project.
359 | * Guides featuring webhook:
360 | * Webhook and JIRA by @perfecto25 [[jira]]
361 | * Trigger Ansible AWX job runs on SCM (e.g. git) commit by @jpmens [[ansible]]
362 | * Deploy using GitHub webhooks by @awea [git][github]
363 | * Setting up Automatic Deployment and Builds Using Webhooks by Will
364 | Browning
365 | * Auto deploy your Node.js app on push to GitHub in 3 simple steps by [[git.github]]
366 | Karolis Rusenas
367 | * Automate Static Site Deployments with Salt, Git, and Webhooks by [[git]]
368 | Linode
369 | * Using Prometheus to Automatically Scale WebLogic Clusters on [[prometheus,k8s,weblogic]]
370 | Kubernetes by Marina Kogan
371 | * Github Pages and Jekyll - A New Platform for LACNIC Labs by Carlos
372 | Martínez Cagnazzo
373 | * How to Deploy React Apps Using Webhooks and Integrating Slack on [[{notifications.slack}]]
374 | Ubuntu by Arslan Ud Din Shafiq
375 | * Private webhooks by Thomas
376 | * Adventures in webhooks by Drake
377 | * GitHub pro tips by Spencer Lyon [github]
378 | * XiaoMi Vacuum + Amazon Button = Dash Cleaning by c0mmensal
379 | * Set up Automated Deployments From Github With Webhook by Maxim Orlov
380 | VIDEO: Gitlab CI/CD configuration using Docker and adnanh/webhook
381 | to deploy on VPS - Tutorial #1 by Yes! Let's Learn Software
382 | [[}]]
383 |
384 | ## Bash-it: community bash commands [[{]]
385 | *
386 | * bundle of community bash commands and scripts for Bash 3.2+,
387 | which comes with autocompletion, aliases, custom functions, ....
388 | * It offers a useful framework for developing, maintaining and
389 | using shell scripts and custom commands for your daily work.
390 | [[}]]
391 |
392 | ## Shell Script Best Practices [[{PM.TODO]]
393 | *
394 | [[}]]
395 | [[dev_stack.shell_script}]]
396 |
397 | # Networking Summary for DevOps [[{networking.101,PM.low_code,networking.load_balancer,web_balancer]]
398 |
399 |
400 |
401 | [[}]]
402 |
--------------------------------------------------------------------------------
/devops_curl.txt:
--------------------------------------------------------------------------------
1 | # Curl (network client Swiss Army knife) [[{networking.curl,troubleshooting,qa.testing]]
2 | * Support for DICT, FILE, FTP, FTPS, GOPHER, HTTP GET/POST, HTTPS, HTTP2, IMAP,
3 | IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS,
4 | SMTP, SMTPS, TELNET, TFTP, unix socket protocols.
5 | * Proxy support.
6 | * kerberos support.
7 | * HTTP cookies, etags
8 | * file transfer resume.
9 | * Metalink
10 | * SMTP / IMAP Multi-part
11 | * HAProxy PROXY protocol
12 | * ...
13 |
14 | ## Usage Summary:
15 | ```
16 | | $ curl http://site.{a,b}.com
17 | | --silent <·· Disable progress meter
18 | | --verbose <·· Debug
19 | | --anyauth <·· make curl figure out auth. method
20 | | (--basic, --digest, --ntlm, and --negotiate)
21 | | not recommended if uploading from STDIN since
22 | | data can be sent 2+ times
23 | | - Used together with -u, --user.
24 | | --cacert cert_file_path <·· Alt: Use CURL_CA_BUNDLE
25 | | - See also --capath dir, --cert-status, --cert-type PEM|DER|...
26 | | --cert certificate[:pass]<·· Use cert to identify curl client
27 | | --ciphers TLS_ciphers_list
28 | | --compressed <·· Request compressed response. Save uncompressed
29 | | --config curl_args_list
30 | | --connect-timeout secs
31 | | HTTP Post Data:
32 | | --data-binary data <·· alt 1: posts data with no extra processing whatsoever.
33 | | --data-urlencode data <·· alt 2: URLencoded
34 | | --data data <·· alt 3: application/x-www-form-urlencoded (browser like forms)
35 | | └──┴─·········· Or @path_to_file_with_data
36 | | Or @- to use STDIN
37 | | --header ...
38 | | --limit-rate speed
39 | | --location <·· follow redirects
40 | | --include <·· Include HTTP response headers in output
41 | | --oauth2-bearer ... <·· (IMAP POP3 SMTP)
42 | | --fail-early <·· Fail as soon as possible
43 | | --continue-at - <·· Continue a partial download
44 | | --output out_file <·· Write output to file (Defaults to stdout)
45 | |                              Use also --create-dirs to create missing dirs.
46 | ```
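A network-free way to try some of the flags above is curl's file:// protocol (enabled in most builds; the paths below are scratch files created by the sketch):

```
printf 'hello curl\n' > /tmp/curl_demo_in.txt
curl --silent \
     --output /tmp/curl_demo_out.txt \
     file:///tmp/curl_demo_in.txt       # fetch a local file through curl
cat /tmp/curl_demo_out.txt              # → hello curl
```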
47 |
48 | ## Curl: List remote contents
49 | ```
50 | | $ curl --list-only https://..../dir1/ ← List contents of remote dir
51 | ```
52 |
53 | # wget.java "alternative"
54 |
55 | In containerized environments (docker, kubernetes,...) we may lack access to tooling like curl.
56 | Still, if we are fortunate, we may have access to some sort of SDK or VM (Java, Python, ...).
57 | Then we can try to use the SDK to "emulate" basic curl functionality, good enough to check
58 | if the internal container network is working properly (DNS, internal proxies to targets, ...).
59 |
60 | Here follows a rudimentary "curl.java" to test "GET" connections:
61 | ```
62 | | $ cat curl.java
63 | | import java.io.BufferedReader;
64 | | import java.io.IOException;
65 | | import java.io.InputStreamReader;
66 | | import java.net.URL;
67 | | import java.net.URLConnection;
68 | |
69 | | public class URLReader {
70 | | public static void main(String[] args) {
71 | | try {
72 | | URL url = new URL(args[0]);
73 | | URLConnection connection = url.openConnection();
74 | | BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
75 | | String line;
76 | | while ((line = reader.readLine()) != null) {
77 | | System.out.println(line);
78 | | }
79 | | reader.close();
80 | | } catch (IOException e) {
81 | | e.printStackTrace();
82 | | }
83 | | }
84 | | }
85 | ```
86 | Usage (Java 11+ single-file source launch):
87 | ```
88 | $ java curl.java http://some.url.com:8080
89 | ```
90 | This can be good enough to detect hostname/DNS/route problems inside the container.
91 |
92 | [[}]]
93 |
--------------------------------------------------------------------------------
/devops_monitoring.txt:
--------------------------------------------------------------------------------
1 | [[{monitoring.101,]]
2 | # Monitoring for DevOps 101
3 |
4 | ## Infra vs App Monitoring [[{doc_has.comparative]]
5 |
6 | ### *Infrastructure Monitoring*
7 | * Prometheus + Grafana (Opinionated)
8 | Prometheus periodically pulls multidimensional data from different apps/components.
9 |   Grafana allows visualizing Prometheus data in custom dashboards.
10 | (Alternatives include Monit, Datadog, Nagios, Zabbix, ...)
11 |
12 | ### *Application Monitoring*
13 | * OpenTelemetry: replaces OpenTracing and OpenCensus.
14 | Cloud Native Foundation projects.
15 |   It can also serve as a front-end for Jaeger and others.
16 | * Jaeger, New Relic: (Very opinionated)
17 | (Other alternatives include AppDynamics, Instana, ...)
18 |
19 | ### *Log Management* (Opinionated)
20 | * Elastic Stack
21 | (Alternative include Graylog, Splunk, Papertrail, ...)
22 |   Elasticsearch has evolved through the years to become a
23 |   full analytics platform.
24 |
25 | * MUCH MORE DETAILED INFORMATION IS AVAILABLE AT:
26 | <../../txt_world_domination/viewer.html?payload=../SoftwareArchitecture/ddbb_engines.txt>
27 |
28 | [[}]]
29 |
30 |
31 | # OpenTelemetry: modern replacement for OpenTracing
32 |
33 | [[{monitoring.opentelemetry,monitoring.101]]
34 |
35 | Also known as OTel, is a vendor-neutral open source Observability framework.
36 | Summary from :
37 |
38 | OpenTelemetry surged from merging OpenTracing with OpenCensus, taking the best of both and adding new features...
39 | ... It combines aspects of distributed tracing, metrics, and logging into a single API.
40 | ... Like OpenTracing, OpenTelemetry provides an API for developers to add instrumentation to their applications.
41 | .. unlike OpenTracing, **it also provides an implementation, eliminating the need for developers to integrate a
42 | separate tracing system**.
43 | ... in addition to distributed tracing, OpenTelemetry supports metrics and logs. This comprehensive approach to
44 | observability means that developers can get a complete picture of system performance and behavior, without having
45 | to integrate multiple systems.
46 |
47 |
48 | * Vendors/Apps who natively support OpenTelemetry via OTLP include:
49 | AWS, Azure, Google Cloud Platform, New Relic, Sentry Software,
50 | Traeffik 3+, Grafana Loki 3.0, Oracle, Splunk,
51 | Jaeger, Apache SkyWalking, Fluent Bit,
52 | OpenLIT, ClickHouse, Embrace, Grafana Labs, GreptimeDB, Highlight,
53 | HyperDX, observIQ, OneUptime, Ope, qryn, Red Hat, Sig, Tracetest,
54 | Uptrace, VictoriaMetrics, Alibaba Cloud, AppDynamics (Cisco), Aria by
55 | VMware (Wavefront), Aspecto, Axiom, Better Stack, Bonree,
56 | Causely, Chro, Control Plane, Coralogix, Cribl, DaoCloud, Datadog,
57 | Dynatrace, Elastic, F5, Helios, Honeycomb,
58 | Immersive Fusion, Instana, ITRS, KloudFuse, KloudMate, LogicMonitor,
59 | LogScale by Crowdstrike (Humio), Logz.io, Lumigo, Middleware,
60 | , Observe, Inc., ObserveAny, OpenText, Seq, Service, ServicePilot,
61 | SolarWinds, Sumo Logic, TelemetryHub, TingYun, Traceloop, VuNet Systems, ...
62 |
63 |
64 |
65 |
66 | *
67 | A metric is a measurement about a service, captured at
68 | runtime. ... metric event which consists not only of the measurement
69 | itself, but the time that it was captured and associated metadata.
70 | *
71 | Traces give us the big picture of what happens when a request is made
72 | to anapplication ... essential to understanding the "full path" a request
73 | takes in your application.
74 | *
75 | log is a timestamped text record, either structured (recommended)
76 | orunstructured, with metadata. Of all telemetry signals logs, have
77 | the biggestlegacy.
78 | [[monitoring.opentelemetry}]]
79 |
80 | ## OpenTelemetry Adopts Continuous Profiling
81 |
82 | * Elastic Donates Their Agent
83 | *
84 |
85 | [[{dev_stack.java,dev_stack.springboot]]
86 | ## OpenTelemetry + JAVA Micrometer
87 | *
88 |
89 | ...current version of OpenTelemetry’s Java instrumentation agent picks up Spring Boot’s Micrometer metrics automatically.
90 | It is no longer necessary to manually bridge OpenTelemetry and Micrometer.
91 |
92 | BASE: simple Hello World REST service ()
93 |
94 | 1. STEP 1: enable metrics and expose them directly in Prometheus format.
95 | ( We will not yet use the OpenTelemetry Java instrumentation agent.)
96 |
97 | ```
98 | <!-- pom.xml: add actuator + Micrometer Prometheus registry -->
99 | <dependency>
100 |     <groupId>org.springframework.boot</groupId>
101 |     <artifactId>spring-boot-starter-actuator</artifactId>
102 | </dependency>
103 | <dependency>
104 |     <groupId>io.micrometer</groupId>
105 |     <artifactId>micrometer-registry-prometheus</artifactId>
106 |     <scope>runtime</scope>
107 | </dependency>
109 | ```
110 |
111 | 2. enable (http://localhost:8080)/actuator/prometheus endpoint in application.properties
112 |
113 | ```
114 | | management.endpoints.web.exposure.include=prometheus
115 | ```
116 |    Out of the box, it includes JVM metrics like jvm_gc_pause_seconds, metrics from
117 | the logging framework (logback_events_total,...) and metrics from the REST endpoint
118 | like http_server_requests.
119 |
120 | 3. Add custom metrics:
121 |
122 | They can be registered with a MeterRegistry provided by Spring Boot.
123 |
124 | 3.1 inject MeterRegistry to the GreetingController.
125 |
126 | ```java
127 | | // ...
128 | | import io.micrometer.core.instrument.MeterRegistry;
129 | |
130 | | @RestController
131 | | public class GreetingController {
132 | |
133 | | // ...
134 | | private final MeterRegistry registry;
135 | |
136 | | // Use constructor injection to get the MeterRegistry
137 | | public GreetingController(MeterRegistry registry) { // Best pattern: Inject in constructor.
138 | | this.registry = registry;
139 | | }
140 | |
141 | | // ...
142 | | }
143 | ```
144 | 3.2 add custom metric "Counter".
145 | ```
146 | | @GetMapping("/greeting")
147 | | public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name){
148 | |
149 | | // Add a counter tracking the greeting calls by name
150 | | registry.counter("greetings.total", "name", name).increment();
151 | |
152 | | // ...
153 | | }
154 | ```
155 | 3.3 Test it: Recompile/restart the application.
156 | ```
157 | | # make some calls to the API
158 | | $ curl -X GET http://localhost:8080/greeting?name=Grafana
159 | | $ curl -X GET http://localhost:8080/greeting?name=ABD
160 | | $ curl -X GET http://localhost:8080/greeting?name=...
161 | |
162 | | # Check metrics:
163 | | $ curl -X GET http://localhost:8080/actuator/prometheus
164 | |
165 | | # TYPE greetings_total counter
166 | | greetings_total{name="Grafana",} 2.0 <·· Using user input as label is a poor practice, used just as example
167 | | greetings_total{name="Prometheus",} 1.0
168 | ```
169 | 4. Configure OpenTelemetry collector:
170 |    * OpenTelemetry collector is used to receive, process, and export telemetry data.
171 |    * It usually "sits in the middle", between the applications to be monitored and the monitoring backend.
172 |    * OpenTelemetry collector is configured to scrape the metrics from the Prometheus endpoint and expose
173 |      them again in Prometheus format: http://collector:8889/metrics should match http://application:8080/actuator/prometheus
174 | 4.1. Download otelcol_*.tar.gz from
175 | 4.2. Create config.yaml like:
176 | ```yaml
177 | | receivers:
178 | | prometheus:
179 | | config:
180 | | scrape_configs:
181 | | - job_name: "example"
182 | | scrape_interval: 5s
183 | | metrics_path: '/actuator/prometheus'
184 | | static_configs:
185 | | - targets: ["localhost:8080"]
186 | |
187 | | processors:
188 | | batch:
189 | |
190 | | exporters:
191 | | prometheus:
192 | | endpoint: "localhost:8889"
193 | |
194 | | service:
195 | | pipelines:
196 | | metrics:
197 | | receivers: [prometheus]
198 | | processors: [batch]
199 | | exporters: [prometheus]
200 | ```
201 | 4.3. Run it:
202 | ```
203 | | $ ./otelcol --config=config.yaml
204 | ```
205 | 5. Finally get rid of Prometheus endpoint:
206 | 5.1. re-configure the receiver side of the OpenTelemetry collector to **OpenTelemetry Line Protocol (otlp)**
207 | (vs scrapping Prometheus end-point).
208 | ```yaml
211 | | receivers:
212 | | otlp:
213 | | protocols:
214 | | grpc:
215 | | http:
216 | |
217 | | processors:
218 | | batch:
219 | |
220 | | exporters:
221 | | prometheus:
222 | | endpoint: "localhost:8889"
223 | |
224 | | service:
225 | | pipelines:
226 | | metrics:
227 | | receivers: [otlp]
228 | | processors: [batch]
229 | | exporters: [prometheus]
230 | ```
231 | 5.2. download OpenTelemetry Java JVM instrumentation agent:
232 |
233 | * Metrics are disabled in the agent by default. Enable like:
234 | ```
235 | | $ export OTEL_METRICS_EXPORTER=otlp
236 | | # RESTART LIKE:
237 | | $ java -javaagent:./opentelemetry-javaagent.jar \
238 | | -jar ./target/rest-service-complete-0.0.1-SNAPSHOT.jar
239 | ```
240 |    After a minute or so, metrics are available again, now "shipped" directly to the
241 |    collector and re-exposed at its Prometheus exporter endpoint (localhost:8889).
242 |    (The application's own Prometheus endpoint is no longer involved and can be removed
243 |    in application.properties, as well as the micrometer-registry-prometheus dependency.)
244 | 5.3. metrics are **NOT** the original metrics maintained by the Spring Boot application.
245 | .... some Spring metrics are clearly missing (logback_events_total , ...),
246 | custom metric greetings_total is no longer available.
247 | Micrometer library API offers a flexible meter registry for vendors to expose metrics
248 | for their specific monitoring backend. (Prometheus meter registry in first example).
249 | Capturing Micrometer metrics with the OpenTelemetry Java instrumentation agent almost works
250 | out of the box: The agent detects Micrometer and registers an OpenTelemetryMeterRegistry on the fly.
251 | ... **Unfortunately the agent registers with Micrometer’s Metrics.globalRegistry, while Spring uses
252 | its own registry instance via dependency injection**. If the OpenTelemetryMeterRegistry ends up in
253 | the wrong MeterRegistry instance, it is not used by Spring. ...
254 | FIX: make OpenTelemetry’s OpenTelemetryMeterRegistry available as a Spring bean, so that Spring can
255 | register it correctly when it sets up dependency injection.
256 |
257 | ```java
258 | |
259 | | @SpringBootApplication
260 | | public class RestServiceApplication {
261 | |
262 | | // Unregister OpenTelemetryMeterRegistry from Metrics.globalRegistry
263 | | // and make it available as a Spring bean instead.
264 | | @Bean
265 | | @ConditionalOnClass(name = "io.opentelemetry.javaagent.OpenTelemetryAgent")
266 | | public MeterRegistry otelRegistry() {
267 | |     Optional<MeterRegistry> otelRegistry = Metrics.globalRegistry.getRegistries().stream()
268 | | .filter(r -> r.getClass().getName().contains("OpenTelemetryMeterRegistry"))
269 | | .findAny();
270 | | otelRegistry.ifPresent(Metrics.globalRegistry::remove);
271 | | return otelRegistry.orElse(null);
272 | | }
273 | |
274 | | // ...
275 | | }
276 | ```
277 | [[dev_stack.java}]]
278 |
279 | [[monitoring.101}]]
280 |
--------------------------------------------------------------------------------
/devops_networking.txt:
--------------------------------------------------------------------------------
1 | [[{networking.101]]
2 | # DevOps Networking Notes
3 |
4 | [[{networking.TLS,troubleshooting,PM.TODO]]
5 | ## CharlesProxy: Monitor TLS/HTTPS traffic
6 | *
7 | * HTTP proxy / HTTP monitor / Reverse Proxy enabling developers to view HTTP+SSL/HTTPS
8 |   traffic between the local machine and the Internet, including requests, responses and HTTP headers
9 | (which contain the cookies and caching information).
10 | [[}]]
11 |
12 | [[{]]
13 | ## DNS 101
14 | ```
15 | ┌ DNS Records ────────────────────────────────┐
16 | │ A root domain name IP address │
17 | │ Ex: mydomain.com → 1.2.3.4 │
18 | │       Not recommended for changing IPs      │
19 | ├─────────────────────────────────────────────┤
20 | │ CNAME maps name2 → name1 │
21 | │ Ex: int.mydomain.com → mydomain.com │
22 | ├─────────────────────────────────────────────┤
23 | │ Alias Amazon Route 53 virtual record │
24 | │ to map AWS resources like ELBs, │
25 | │ CloudFront, S3 buckets, ... │
26 | ├─────────────────────────────────────────────┤
27 | │ MX mail server name → IP address │
28 | │ Ex: smtp.mydomain.com → 1.2.3.4 │
29 | ├─────────────────────────────────────────────┤
30 | │ AAAA A record for IPv6 addresses │
31 | └─────────────────────────────────────────────┘
32 | ```
33 | [[}]]
34 |
35 | [[{PM.low_code]]
36 | ## SERVICE MESH EVOLUTION
37 |
38 | * Summary extracted from
39 |
40 |
41 | * Service Mesh: Takes care of (network) distributed concerns (visibility, security, balancing,
42 | service discovery, ...)
43 |
44 | ```
45 | | 1st GENERATION. Each app 2nd Generation. A common 3rd Generation. Sidecar
46 | | links against a library. sidecar is used. functionality moved to
47 | |                                                         linux kernel using eBPF
48 | | ┌─ App1 ────┐ ┌─ App2 ────┐ ┌─ App1 ────┐ ┌─ App2 ────┐
49 | | │ ┌───────┐│ │ ┌───────┐│ │ │ │ │
50 | | │ │Service││ │ │Service││ └───────────┘ └───────────┘ ┌─ App1 ────┐ ┌─ App2 ────┐
51 | | │ │Mesh ││ │ │Mesh ││ ┌───────────┐ ┌───────────┐ │ │ │ │
52 | | │ │Library││ │ │Library││ │ServiceMesh│ │ServiceMesh│ └───────────┘ └───────────┘
53 | | │ └───────┘│ │ └───────┘│ │SideCar │ │SideCar │ ┌─ Kernel ────────────────┐
54 | | └───────────┘ └───────────┘ └───────────┘ └───────────┘ │ ┌─ eBPF Service Mesh ┐ │
55 | | ┌─ Kernel ────────────────┐ ┌─ Kernel ────────────────┐ │ └────────────────────┘ │
56 | | │ ┌─ TCP/IP ─┐ │ │ ┌─ TCP/IP ─┐ │ │ ┌─ TCP/IP ─┐ │
57 | | │ └──────────┘ │ │ └──────────┘ │ │ └──────────┘ │
58 | | │ ┌─ Network─┐ │ │ ┌─ Network─┐ │ │ ┌─ Network─┐ │
59 | | │ └──────────┘ │ │ └──────────┘ │ │ └──────────┘ │
60 | | └─────────────────────────┘ └─────────────────────────┘ └─────────────────────────┘
61 | | Envoy, Linkerd, Nginx,... Cilium
62 | | or kube-proxy
63 | |
64 | | App1 ←→ Kernel TCP/IP App1 ←→ SideCar1 App1 ←→ Kernel eBPF
65 | | Kernel TCP/IP ←→ App2 SideCar1 ←→ Kernel TCP/IP Kernel eBPF ←→ App2
66 | | Kernel TCP/IP ←→ Sidecar2
67 | | Sidecar2 ←→ App2
68 | ```
69 | [[}]]
70 |
71 | [[networking.101}]]
72 |
--------------------------------------------------------------------------------
/devops_yaml.txt:
--------------------------------------------------------------------------------
1 | ## Yaml References [[{yaml.101]]
2 |
3 |
4 | ```
5 | YAML JSON
6 | --- {
7 | key1: val1 "key1": "val1",
8 | key2: "key2": [
9 | - "thing1" "thing1",
10 | - "thing2" "thing2"
11 | # I am a comment ]
12 | }
13 | ```
14 |
15 | - Anchors allows to reuse/extends YAML code:
16 | ```
17 | ┌─ YAML ───────────┐···(generates)··>┌─ JSON ────────────────┐
18 | │ --- │ │ │
19 | │ key1: &anchor ← '&' Defines │ { │
20 | │ K1: "One" │ the anchor │ "key1": { │
21 | │ K2: "Two" │ │ "K1": "One", │
22 | │ │ │ "K2": "Two" │
23 | │ │ │ }, │
24 | │ key2: *anchor ← References/ │ "key2": { │
25 | │ │ uses the anch. │ "K1": "One", │
26 | │ │ │ "K2": "Two" │
27 | │ │ │ } │
28 | │ key3: │ │ "key3": { │
29 | │ <<: *anchor ← Extends anch. │ "K1": "One", │
30 | │ K2: "I Changed"│ │ "K2": "I Changed",│
31 | │ │ │ "K3": "Three" │
32 | │ │ │ } │
33 | │ K3: "Three" │ │ │
34 | │ │ │ } │
35 | └──────────────────┘ └───────────────────────┘
36 | ```
37 | WARN!!!: Many NodeJS parsers break the 'extend' functionality.
38 |
39 | * Extend Inline:
40 | - take only SOME sub-keys from key1 to inject into key2
41 | ```
42 | ┌─ YAML ────── ┐···(generates)··>┌─ JSON ───────────┐
43 | │ --- │ │ { │
44 | │ key1: │ │ "key1": { │
45 | │ <<: &anchor ← Inject into │ "K1": "One", │
46 | │ K1: "One" │ key1 and save │ "K2": "Two" │
47 | │ K2: "Two" │ as anchor │ }, │
48 | │ │ │ │
49 | │ bar: │ │ "bar": { │
50 | │ <<: *anchor │ │ "K1": "One", │
51 | │ K3: "Three" │ │ "K3": "Three"│
52 | │ │ │ } │
53 | │ │ │ } │
54 | └──────────────────┘ └──────────────────┘
55 | ```
56 |
57 | * yaml2js python utility:
58 | * Add next lines to ~/.bashrc (or /etc/profile or ...):
59 | ```
60 |   + alias yaml2js="python -c 'import sys, yaml, json; \
61 |   +   json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=4)'"
62 | ```
63 |
64 | Usage:
65 | ```
66 | $ cat in.yaml | yaml2js > out.json
67 | ```
68 |
69 | **WARN:** Unfortunately there is no way to override or
70 | extend lists to append new elements to existing ones;
71 | only maps/dictionaries work with the '<<' operator:
72 | '<<' "inserts" the values of the referenced map into the
73 | current one being defined.
74 | [[}]]
75 |
--------------------------------------------------------------------------------
/linux_ALL.payload:
--------------------------------------------------------------------------------
1 | 101.txt
2 | linux.txt
3 | linux_systemd.txt
4 | linux_application_tracing.txt
5 | linux_package_managers.txt
6 | linux_administration_advanced_summary.md
7 | linux_storage.txt
8 | linux_security.txt
9 | linux_encryption.txt
10 | linux_perf.txt
11 | linux_kernel_monitoring.txt
12 | linux_eBPF.txt
13 | linux_kernel_alternatives.txt
14 | linux_desktop.txt
15 | linux_mobile.txt
16 | linux_SElinux.txt
17 | linux_kernel.txt
18 | linux_TODO.txt
19 | linux_who_is_who.txt
20 |
21 |
--------------------------------------------------------------------------------
/linux_alternatives.txt:
--------------------------------------------------------------------------------
1 | [[{linux.alternatives,PM.draft]]
2 | # Linux Micro-kernel Alternatives
3 | - OX kernel:
4 | - Mirage OS:
5 | Mirage OS is a library operating system that constructs unikernels
6 | for secure, high-performance network applications across a variety
7 | of cloud computing and mobile platforms. Code can be developed on a normal OS
8 | such as Linux or MacOS X, and then compiled into a fully-standalone,
9 | specialised unikernel that runs under the Xen hypervisor.
10 | - Fiasco:
11 | Fiasco is a µ-kernel (microkernel) running on various platforms.
12 |
13 |     The Fiasco.OC kernel can be used to construct flexible systems. Fiasco.OC is the base for the
14 |     TUD:OS system which supports running real-time, time-sharing and virtualization applications
15 | concurrently on one computer. However, Fiasco.OC is both suitable for big and complex systems,
16 | but also for small, embedded applications. We have developed the L4 Runtime Environment which
17 | provides the necessary infrastructure on top of Fiasco.OC for conveniently developing applications.
18 | Please refer to the Features page for more information on the capabilities of Fiasco.OC.
19 |
20 | Fiasco is a preemptible real-time kernel supporting hard priorities. It uses non-blocking
21 | synchronization for its kernel objects. This guarantees priority inheritance and makes sure
22 | that runnable high-priority processes never block waiting for lower-priority processes.
23 |
24 | - Genode:
25 | - Nucleus RTOS:
26 | - Linux,Nucleus or both:
27 |
28 | - L4.verified:
29 | Formal functional correctness verified kernel:
30 | Trustworthy Systems represents the confluence of formal methods and
31 | operating systems, applying the former in the context of the latter,
32 | and advancing the state of the art in both areas.
33 |
34 | Aim: Unprecedented security, safety, reliability and
35 | efficiency for software systems, and especially critical cyber-physical systems.
36 | **Approach**: large, complex, performant systems, built on a
37 | formally verified microkernel, which isolates untrusted
38 | components from formally verified or correct-by-construction
39 | synthesised trusted components. Formal, machine-checked proof for
40 | safety and security properties on the code level for systems with
41 | over a million lines of code.
42 |
43 | # Other Real Time and/or embedded Alternatives
44 |
45 | *
46 | List of OOSS real-time OSs
47 |
48 | ```
49 | | Name Built-in Description
50 | | Components
51 | |
52 | │ ┌························································· FS
53 | │ · ┌······················································· Network
54 | │ · · ┌····················································· TLS/SSL
55 | │ · · · ┌··················································· BLE
56 | │ · · · · ┌················································· LoRaWan
57 | │ · · · · · ┌··············································· 6LoWPAN
58 | │ · · · · · · ┌············································· AT Commands
59 | │ · · · · · · · ┌··········································· Runtime Analysis
60 | │ · · · · · · · · ┌········································· USBHost
61 | │ · · · · · · · · · ┌······································· USBDevice
62 | | FreeRTOS X X X X popular RT OS for embedded devices, being ported to 31
63 | | - MIT / GPL License microcontrollers.
64 | | V10.0.1 Code 2017-12-26
65 | | Platforms: ARM,AVR,ColdFire,MSP430,PIC,x86
66 | | _________________________________________________________________________________________________________
67 | | RT-Thread X X ? X X X X OOSS RT OS for embedded devices from China.
68 | | - GPL v2 License RT─Thread RTOS is a scalable real─time OS: a tiny kernel
69 | | for ARM Cortex─M0, Cortex─M3/4, or a full feature system
70 | | in ARM Cortex─A8, ARM Cortex─A9 DualCor v3.0.4 GitHub 2018─05─31
71 | | Platforms: Andes,ARM,MIPS,PowerPC,RISC─V,x86
72 | | _________________________________________________________________________________________________________
73 | | mbed OS X X X X X X X X X X embedded OS designed specifically for the "things" in
74 | | - Apache License the Internet of Things (IoT). It includes all the features you need
75 | | to develop a connected product based on an ARM Cortex─M microcontroller.
76 | | mbed─os─5.9.0 GitHub 2018─06─11
77 | | Platforms: ARM
78 | | _________________________________________________________________________________________________________
79 | | ChibiOS/RT X X X X ChibiOS is a complete development environment for embedded applications
80 | | - GPL License including RTOS, an HAL, peripheral drivers, support files and a
81 | | development environment. 18.2.1 Code 2018─05─01
82 | | Platforms: MSP430,AVR
83 | | _________________________________________________________________________________________________________
84 | | NuttX X X X X X Real─time OS (RTOS) with an emphasis on standards compliance and small
85 | | - BSD License footprint. Scalable from 8─bit to 32─bit microcontroller environments,
86 | | the primary governing standards in NuttX are Posix and ANSI standards.
87 | | 7.25 Code 2018─06─11
88 | | GUI
89 | | Platforms: 8051,ARM,AVR,Freescale,HCS12,MIPS,PIC,RISC─V,x86,Xtensa,Zilog
90 | | _________________________________________________________________________________________________________
91 | | RIOT X X X X RIOT is a real─time multi─threading OS that supports a range of devices
92 | | - LGPLv2.1 License that are typically found in the Internet of Things (IoT): 8─bit, 16─bit and
93 | | 32─bit microcontrollers. 2018.04 GitHub 2018─05─11
94 | | GUI
95 | | Platforms: ARM,AVR,MIPS,MSP430,RISC─V
96 | | _________________________________________________________________________________________________________
97 | | RTEMS X X X RTEMS is an OOSS RTOS that supports open standard application programming
98 | | - GPL License interfaces such as POSIX. It is used in space flight, medical, networking
99 | | and many more embedded devices. 4.11 Code 2018─02─16
100 | | Platforms: ARM,m68k,MIPS,PowerPC,x86
101 | | _________________________________________________________________________________________________________
102 | | MongooseOS X X X Mongoose OS for Internet of Things. Supported microcontrollers:
103 | | - GPLv2 License ESP32, ESP8266, CC3220, CC3200, STM32F4. Amazon AWS IoT & Google IoT Core
104 | | integrated. Code in C or JavaScript. 2.3 GitHub 2018─06─15
105 | | Platforms: ARM,Xtensa
106 | | _________________________________________________________________________________________________________
107 | | Xenomai X X X Xenomai is a real─time development framework cooperating with the Linux
108 | | - GPL License kernel, in order to provide a pervasive, interface─agnostic, hard real─time
109 | | support to user─space applications, seamlessly integrated into the
110 | | GNU/Linux environment. v3.0.7 Code 2018─07─25
111 | | Platforms: ARM,PowerPC,x86
112 | | _________________________________________________________________________________________________________
113 | | Atom ? ? Atomthreads is a free, lightweight, portable, real─time scheduler for
114 | | threads embedded systems. release1.3 GitHub 2017─08─27
115 | | - BSD License Platforms: AVR
116 | | _________________________________________________________________________________________________________
117 | | StratifyOS ? StratifyOS is a powerful embedded RTOS for the ARM Cortex M microcontrollers.
118 | | - GPL License v3.6.0 GitHub 2018─04─20
119 | | Platforms: ARM
120 | | _________________________________________________________________________________________________________
121 | | distortos ? Distortos is an advanced RTOS written in C++11. v0.5.0 GitHub 2017─09─14
122 | | - Mozilla License Platforms: ARM
123 | |
124 | | _________________________________________________________________________________________________________
125 | | Zephyr ? The Zephyr™ Project is a scalable, real─time OS (RTOS) supporting multiple
126 | | - Apache License hardware architectures, optimized for resource constrained devices, and built with
127 | | security in mind. This Linux Foundation hosted project embraces OOSS development
128 | | values and governance on its mission to unite leaders from across the industry
129 | | to produce a best─in─breed solution. zephyr─v1.12.0 GitHub 2018─06─11
130 | | Platforms: ARM
131 | | _________________________________________________________________________________________________________
132 | | StateOS ? Free, extremely simple and amazingly tiny real─time operating system (RTOS) designed
133 | | - GPLv3 License for deeply embedded applications. Target: ARM Cortex─M family. It was inspired by
134 | | the concept of a state machine. v6.0 GitHub 2018─05─25
135 | | Platforms: ARM
136 | | _________________________________________________________________________________________________________
137 | | F9 ? F9 microkernel is a microkernel─based (L4─style) kernel to support running real─time
138 | | Microkernel and time─sharing applications (for example, wireless communications) for ARM
139 | | - BSD License Cortex─M series microprocessors with efficiency (performance ┼ power consumption)
140 | | and security (memory protection ┼ isolated execution) in mind. GitHub 2017─02─02
141 | | Platforms: ARM
142 | | _________________________________________________________________________________________________________
143 | | BRTOS ? BRTOS is a lightweight preemptive real time operating system designed for low end
144 | | - MIT License microcontrollers. GitHub 2017─09─17
145 | | Platforms: AVR,ColdFire,MSP430,PIC
146 | | _________________________________________________________________________________________________________
147 | | BeRTOS ? BeRTOS is a real time open source operating system supplied with drivers and libraries
148 | | - GPL License designed for the rapid development of embedded software. GitHub 2017─01─23
149 | | Platforms: AVR,PowerPC,x86,x86_64
150 | | _________________________________________________________________________________________________________
151 | | Erika ? Erika Enterprise is the first open─source Free RTOS that has been certified
152 | | Enterprise OSEK/VDX compliant!. GH40 GitHub 2018─06─12
153 | | - GPL License
154 | | Platforms: ARM,AVR,MSP430
155 | | _________________________________________________________________________________________________________
156 | | BitThunder ? A Reliable Real─Time Operating System & Application Framework.
157 | | - GPLv2 License stable─0.9.2 GitHub 2017─01─25
158 | | Platforms: ARM
159 | | _________________________________________________________________________________________________________
160 | | TNeo ? TNeo is a well─formed and carefully tested preemptive real─time kernel for
161 | | - Other License 16─ and 32─bits MCUs. It is compact and fast. v1.08 GitHub 2017─02─25
162 | | Platforms: ARM
163 | | _________________________________________________________________________________________________________
164 | | Libre RTOS ? LibreRTOS is a portable single─stack Real Time Operating System.
165 | | - Apache License All tasks share the same stack, allowing a large number or tasks to be created
166 | | even on architectures with low RAM, such as ATmega328P (2kB).
167 | | dev─master GitHub 2017─11─15
168 | | Apache License
169 | | Platforms: AVR
170 | | _________________________________________________________________________________________________________
171 | | IntrOS ? Free, simple and tiny cooperative operating system (OS) designed
172 | | - GPLv3 License for deeply embedded applications. v4.0 GitHub 2018─05─25
173 | | Platforms: ARM
174 | | _________________________________________________________________________________________________________
175 | | embox ? Embox is a configurable operating system kernel designed for resource constrained
176 | | - Other License and embedded systems.
177 | | v0.3.21 GitHub 2018─03─31
178 | | Platforms: ARM,MIPS,MSP430,PowerPC
179 | | ____________________________________________________________________________________________________
180 | | uKOS ? uKOS is a multi─tasking OS suitable for small embedded µController systems.
181 | | - GPLv3 License It is based on a preemptive multitasking scheduler.
182 | | 4.0.0 2017─04─26
183 | | Platforms: ARM
184 | | _________________________________________________________________________________________________________
185 | | seL4 ? The world's first operating─system kernel with an ºend─to─end proof of implementationº
186 | | - Other License correctness and security enforcement is available as open source.
187 | | 10.0.0 GitHub 2018─05─28
188 | | Platforms: ARM
189 | | _________________________________________________________________________________________________________
190 | | Trampoline ? Trampoline is a static RTOS for small embedded systems. Its API is aligned with
191 | | - GPLv2 License OSEK/VDX OS and AUTOSAR OS 4.2 standards.
192 | | dev─master GitHub 2017─12─07
193 | | Platforms: ARM,AVR,PowerPC
194 | | _________________________________________________________________________________________________________
195 | | eChronos ? The eChronos RTOS is a real─time operating system (RTOS) originally developed by NICTA
196 | | - Others License and Breakaway Consulting Pty. Ltd. It is intended for tightly resource─constrained
197 | | devices without memory protection.
198 | | v3.0.2 GitHub 2018─04─01
199 | | Platforms: PowerPC
200 | | _________________________________________________________________________________________________________
201 | | AliOS ? AliOS Things is designed for low power, resource constrained MCU, connectivity SoC,
202 | | - Apache License greatly suitable for IoT devices. AliOS Things is not just a RTOS, it contains full stack of
203 | | software components and tools for building IoT devices. v1.3.1 GitHub 2018─05─31
204 | | Platforms: ARM
205 | | _________________________________________________________________________________________________________
206 | | Things ?
207 | | LiteOS Huawei LiteOS is a lightweight open─source IoT OS and a smart hardware development
208 | | - BSD License platform. It simplifies IoT device development and device connectivity, makes services
209 | | smarter, delivers superb user experience, and provides better data protection. Huawei
210 | | LiteOS is designed for smart homes, wearables, IoV, and intelligent manufacturing
211 | | applications.
212 | | v1.1.1 GitHub 2017─04─01
213 | | Platforms: ARM
214 | | ________________________________________________________________________________________________________
215 | | TizenRT ? TizenRT is a lightweight RTOS─based platform to support low─end IoT devices
216 | | - Apache Licence 1.1_Public_Release GitHub 2017─10─27
217 | | Platforms: ARM
218 | | ________________________________________________________________________________________________________
219 | | MOE ? MOE is an event─driven scheduler system for 8/16/32─bit MCUs.
220 | | - MIT License MOE means "Minds Of Embedded system".
221 | | V0.1.6 GitHub 2017─04─21
222 | | Platforms: ARM
223 | | ________________________________________________________________________________________________________
224 | | cocoOS ? cocoOS is an OOSS cooperative task scheduler, based on coroutines,
225 | | - BSD License targeted for embedded microcontrollers like AVR, MSP430 and STM32
226 | | Platforms: ARM
227 | | ________________________________________________________________________________________________________
228 | ```
229 | [[linux.alternatives}]]
230 |
--------------------------------------------------------------------------------
/linux_application_tracing.txt:
--------------------------------------------------------------------------------
1 | [[{monitoring.jobs]]
2 |
3 | # Tracing (Application & Kernel fine-grained monitoring)
4 |
5 | [[{profiling,security.encryption]]
6 | [[dev_stack.mysql,dev_stack.postgresql,dev_stack.java]]
7 | ## Dynamic Tracing Tools
8 |
9 | * Extracted from "Best linux monitoring tools" ()
10 |
11 | [[{monitoring.network.TLS]]
12 | ### sslsniff
13 | * Traces the write/send and read/recv functions of OpenSSL, GnuTLS and NSS.
14 |   Data passed to these functions is printed as plain text.
15 |   Useful, for example, to sniff HTTP traffic before it is encrypted with SSL.
17 | See also:
18 |
19 | [[monitoring.network.TLS}]]
20 |
21 | [[{monitoring.network.tcpconnect]]
22 | ### tcpconnect
23 |
24 | *
25 | * traces the kernel functions performing active TCP connections
26 |   (e.g., via a connect() syscall; accept() connections are passive).
27 |   Ex:
28 | ```
29 | | $ sudo tcpconnect
30 | | PID COMM IP SADDR DADDR DPORT
31 | | 1479 telnet 4 127.0.0.1 127.0.0.1 23
32 | | 1469 curl 4 10.201.219.236 54.245.105.25 80
33 | | 1469 curl 4 10.201.219.236 54.67.101.145 80
34 | | 1991 telnet 6 ::1 ::1 23
35 | | 2015 ssh 6 fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22
36 | ```
37 | [[monitoring.network.tcpconnect}]]
38 |
39 | [[{monitoring.network.tcplife]]
40 | ### tcplife
41 |
42 | *
43 | summarizes TCP sessions that open and close while tracing. Ex:
44 | ```
45 | | # ./tcplife
46 | | PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS
47 | | 22597 recordProg 127.0.0.1 46644 127.0.0.1 28527 0 0 0.23
48 | | 3277 redis-serv 127.0.0.1 28527 127.0.0.1 46644 0 0 0.28
49 | | 22598 curl 100.66.3.172 61620 52.205.89.26 80 0 1 91.79
50 | | 22604 curl 100.66.3.172 44400 52.204.43.121 80 0 1 121.38
51 | | 22624 recordProg 127.0.0.1 46648 127.0.0.1 28527 0 0 0.22
52 | | 3277 redis-serv 127.0.0.1 28527 127.0.0.1 46648 0 0 0.27
53 | | 22647 recordProg 127.0.0.1 46650 127.0.0.1 28527 0 0 0.21
54 | | 3277 redis-serv 127.0.0.1 28527 127.0.0.1 46650 0 0 0.26
55 | | [...]
56 | ```
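The fixed-width output above is easy to post-process. A minimal sketch (assuming the whitespace-separated column layout shown, with MS as the last column) that flags sessions slower than a threshold:

```python
# Parse tcplife-style output and flag sessions slower than a threshold.
# Assumes the whitespace-separated column layout shown above (MS last).
def slow_sessions(output: str, threshold_ms: float):
    slow = []
    for line in output.strip().splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) < 9:                           # skip "[...]" and noise
            continue
        pid, comm, ms = int(fields[0]), fields[1], float(fields[-1])
        if ms > threshold_ms:
            slow.append((pid, comm, ms))
    return slow

sample = """\
PID   COMM       LADDR        LPORT RADDR        RPORT TX_KB RX_KB MS
22598 curl       100.66.3.172 61620 52.205.89.26 80    0     1     91.79
3277  redis-serv 127.0.0.1    28527 127.0.0.1    46644 0     0     0.28
"""
print(slow_sessions(sample, 50))    # [(22598, 'curl', 91.79)]
```

In practice you would pipe the live tool into such a filter instead of a captured sample.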
57 | [[monitoring.network.tcplife}]]
58 |
59 | [[{security.audit.user]]
60 | ### ttysnoop
61 |
62 | *
63 | * watch a tty or pts device, and prints the same output that is
64 | appearing on that device. It can be used to mirror the output from a shell
65 | session, or the system console.
66 | [[security.audit.user}]]
67 |
68 | [[{profiling,troubleshooting,PM.low_code]]
69 | ### argdist
70 |
71 | *
72 | - probes the indicated functions, collecting parameter values into a
73 |   **histogram or frequency count**. This allows you to understand the
74 |   distribution of values a certain parameter takes, filter and print
75 |   interesting parameters **without attaching a debugger**, and obtain
76 |   general execution statistics on various functions. Ex:
77 | ```
78 | | $ sudo ./argdist -p ${PID} \
79 | | -c -C \
80 | | 'p:c:malloc(size_t size):size_t:size' ← find out allocation sizes
81 | |
82 | | > p:c:malloc(size_t size):size_t:size
83 | | > COUNT EVENT
84 | | > 3 size = 16 ← 16 looks to be the most commonly
85 | | > [01:42:34] used alloc.size
86 | | >
87 | | > p:c:malloc(size_t size):size_t:size
88 | | > COUNT EVENT
89 | | > 4 size = 16
90 | ```
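What argdist aggregates in-kernel is essentially a frequency count over a probed argument's values; the same aggregation, sketched in user space over a recorded sample:

```python
# Frequency count of malloc() size arguments, mirroring the
# histogram/frequency aggregation that argdist performs in-kernel.
from collections import Counter

sizes = [16, 16, 32, 16, 64]                  # sample recorded argument values
hist = Counter(sizes)
size, count = hist.most_common(1)[0]
print(f"COUNT {count}  EVENT size = {size}")  # COUNT 3  EVENT size = 16
```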
91 |
92 | [[{security.audit.user]]
93 | ### bashreadline
94 | *
95 | * prints bash commands from all running bash shells on the system
96 | [[security.audit.user}]]
97 |
98 | [[{monitoring.storage,profiling.storage,troubleshooting.storage]]
99 | ### biolatency
100 |
101 | *
102 | * traces block device I/O (disk I/O), and records the distribution
103 | of I/O latency (time), printing this as a histogram when Ctrl-C is hit
104 |
105 | ### biosnoop
106 | *
107 | * biosnoop traces block device I/O (disk I/O), and prints a line of output
108 |
109 | ### biotop "top for disks"
110 | *
111 | * block device I/O top: biotop summarizes which processes are
112 |   performing disk I/O.
113 |
114 | ### filetop
115 | *
116 | * show reads and writes by file, with process details.
117 |
118 | ### bitesize
119 | *
120 | * show I/O distribution for requested block sizes, by process name.
121 |
122 | ### cachestat
123 | *
124 | * shows hits and misses to the file system page cache.
125 |
126 | ### cachetop
127 | *
128 | * show Linux page cache hit/miss statistics including read and write hit % per
129 | processes in a UI like top.
130 |
131 | ### dcstat
132 | *
133 | * dcstat shows directory entry cache (dcache) statistics.
134 |
135 |
136 | ### fileslower
137 | *
138 | * shows file-based synchronous reads and writes slower than a threshold.
139 |
140 | ### llcstat
141 | *
142 | * traces cache reference and cache miss events system-wide, and summarizes
143 | them by PID and CPU.
144 |
145 | [[{monitoring.hardware]]
146 | ### mdflush
147 | *
148 | * traces flushes at the md driver (kernel software RAID) level.
149 | [[monitoring.hardware}]]
150 |
151 | [[monitoring.storage}]]
152 |
153 | [[{monitoring.eBPF,kernel.eBPF]]
154 | ### bpflist
155 |
156 | * displays information on running eBPF programs and optionally also
157 | prints open kprobes and uprobes.
158 | [[monitoring.eBPF}]]
159 |
160 | [[{security.audit.capable]]
161 | ### capable
162 | *
163 | * capable traces calls to the kernel cap_capable() function, which does
164 | security capability checks, and prints details for each call.
165 | [[security.audit.capable}]]
166 |
167 | [[{monitoring.cpu,profiling.cpu]]
168 | ### cpudist
169 | *
170 | * summarizes task on-CPU time as a histogram, showing how long tasks
171 | spent on the CPU before being descheduled.
172 | [[monitoring.cpu}]]
173 |
174 | [[{monitoring.cpu,profiling.cpu]]
175 | ### cpuunclaimed
176 | *
177 | * samples the length of the CPU run queues and determines when there are
178 | idle CPUs while queued threads are waiting their turn.
179 | [[monitoring.cpu}]]
180 |
181 | [[{monitoring.kernel.locks,troubleshooting.locks]]
182 | ### criticalstat
183 | * traces and reports occurrences of atomic critical sections in the
184 | kernel, with useful stack traces showing their origin.
185 | *
186 | [[monitoring.kernel.locks}]]
187 |
188 | [[{profiling.ddbb,profiling.ddbb.postgresql,profiling.ddbb.mysql]]
189 | ### dbslower
190 | *
191 | traces queries served by a MySQL or PostgreSQL server, and prints
192 | those that exceed a latency (query time) threshold.
193 |
194 | ### dbstat
195 | *
196 | traces queries performed by a MySQL or PostgreSQL database process, and
197 | displays a histogram of query latencies.
198 | [[profiling.ddbb}]]
199 |
200 | [[{troubleshooting.locks.deadlock]]
201 | ### *deadlock*
202 | *
203 | * This program detects potential deadlocks on a running process. The program
204 | attaches uprobes on `pthread_mutex_lock` and `pthread_mutex_unlock` to build
205 | a mutex wait directed graph, and then looks for a cycle in this graph.
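The cycle search over the mutex wait graph can be sketched on its own (illustrative only; the real tool builds this graph from the uprobe events):

```python
# Deadlock detection as a cycle search in a directed "waits-for" graph.
# Nodes are mutexes; an edge A -> B means a thread holding A waited on B.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / in progress / done
    color = {node: WHITE for node in graph}
    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:
                return True                # back edge => cycle => deadlock risk
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False
    return any(color[node] == WHITE and dfs(node) for node in graph)

# Classic lock-order inversion: m1 -> m2 and m2 -> m1.
print(has_cycle({"m1": ["m2"], "m2": ["m1"]}))   # True
print(has_cycle({"m1": ["m2"], "m2": []}))       # False
```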
206 | [[troubleshooting.locks.deadlock}]]
207 |
208 | [[{troubleshooting.memory.drsnoop,profiling.memory]]
209 | ### drsnoop
210 | *
211 | When processes allocate pages while available memory in the system is
212 | insufficient, direct-reclaim events occur, which increase the waiting
213 | delay of those processes.
214 | * drsnoop traces the direct reclaim system-wide, and prints various details.
215 | [[troubleshooting.memory.drsnoop}]]
216 |
217 | [[{monitoring.jobs,security.audit.process]]
218 | ### execsnoop
219 | *
220 | * Traces new processes (via exec() syscalls).
221 | [[}]]
222 |
223 | [[{monitoring.network.gethostlatency]]
224 | ### gethostlatency
225 | *
226 | * traces host name lookup calls.
227 | [[monitoring.network.gethostlatency}]]
228 |
229 | [[{monitoring.hardware,profiling.hardware]]
230 | ### hardirqs
231 | *
232 | * traces hard interrupts (irqs), and stores timing statistics in-kernel for efficiency.
233 | [[monitoring.hardware}]]
234 |
235 | [[{troubleshooting.memory.memleak,monitoring.memory,profiling.memory,]]
236 | ### memleak
237 | *
238 | * trace and match memory allocation and deallocation requests, and
239 | collects call stacks for each allocation.
240 | * memleak can then print a summary of which call stacks performed allocations
241 | that weren't subsequently freed.
242 | [[troubleshooting.memory.memleak}]]
243 |
244 | [[{profiling.locks,troubleshooting.locks]]
245 | ### offcputime, offcpu..., ...
246 | *
247 | shows stack traces that were blocked, and the total duration they
248 | were blocked.
249 | [[profiling.locks}]]
250 |
251 | [[{troubleshooting.memory,monitoring.memory]]
252 | ### oomkill
253 | *
254 | * simple program that traces the Linux out-of-memory (OOM) killer,
255 | and shows basic details on one line per OOM kill.
256 | [[troubleshooting.memory}]]
257 |
258 | [[{profiling.locks,troubleshooting.locks]]
259 | ### wakeuptime
260 | *
261 | * measures when threads block, and shows the stack traces for the
262 | threads that performed the wakeup, along with the process names of the waker
263 | and target processes, and the total blocked time.
264 | [[profiling.locks}]]
265 |
266 | [[{ profiling.java, profiling.python, profiling.nodejs]]
267 | [[ monitoring.java,monitoring.python,monitoring.nodejs]]
268 | ## Python,Java,NodeJS
269 |
270 | ### ugc
271 | *
272 | * traces garbage collection events in high-level languages, including Java,
273 | Python, Ruby, and Node.
274 |
275 | ### ucalls
276 | *
277 | * ucalls summarizes method calls in various high-level languages, including Java,
278 | Perl, PHP, Python, Ruby, Tcl, and Linux system calls.
279 |
280 | ### uflow
281 | *
282 | * uflow traces method entry and exit events and prints a visual flow graph that
283 | shows how methods are entered and exited, similar to a tracing debugger with
284 | breakpoints. This can be useful for understanding program flow in high-level
285 | languages such as Java, Perl, PHP, Python, Ruby, and Tcl which provide USDT
286 | probes for method invocations.
287 |
288 | ### uobjnew
289 | *
290 | * summarizes new object allocation events and prints out statistics on
291 | which object type has been allocated frequently, and how many bytes of that
292 | type have been allocated. This helps diagnose common allocation paths, which
293 | can in turn cause heavy garbage collection.
294 |
295 | ### ustat
296 | *
297 | * ustat is a "top"-like tool for monitoring events in high-level languages. It
298 | prints statistics about garbage collections, method calls, object allocations,
299 | and various other events for every process that it recognizes with a Java,
300 | Node, Perl, PHP, Python, Ruby, and Tcl runtime.
301 |
302 | ### uthreads
303 | *
304 | * traces thread creation events in Java or raw (C) pthreads, and prints
305 | details about the newly created thread. For Java threads, the thread name is
306 | printed; for pthreads, the thread's start function is printed, if there is
307 | symbol information to resolve it.
308 |
309 | ### TODO:
310 | * filelife
311 | * fileslower
312 | * vfscount
313 | * vfsstat
314 | * dcstat, ...
315 | [[}]]
316 |
317 | [[monitoring.jobs}]]
318 |
--------------------------------------------------------------------------------
/linux_cheat_seet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/earizon/DevOps/2f0a426941c4a9300edf52c94f676b02cbe29f63/linux_cheat_seet.jpg
--------------------------------------------------------------------------------
/linux_desktop.txt:
--------------------------------------------------------------------------------
1 | [[{linux.desktop,PM.backlog]]
2 |
3 | # Linux Desktop
4 |
5 | ## D-BUS
6 |
7 | ## Best Screen Recorders for Wayland in Linux [Compared & Tested]
8 |
9 | *
10 |
11 | ### Desktop recording in Wayland thanks to PipeWire
12 |
13 | *
14 | *
15 | - It provides a low-latency, graph based processing engine on top of
16 | audio and video devices that can be used to support the use cases
17 | currently handled by both pulseaudio and JACK. PipeWire was designed
18 | with a powerful security model that makes interacting with audio and
19 | video devices from containerized applications easy, with supporting
20 | Flatpak applications being the primary goal. Alongside Wayland and
21 | Flatpak we expect PipeWire to provide a core building block for the
22 | future of Linux application development
23 |
24 | ## Kooha – Screen Recorder with Wayland Support
25 |
26 | *
27 |
28 | ## Script to reduce pdf size x5
29 |
30 | *
31 |
32 | Linux shell script to reduce PDF file size (x5 or more when embedding images)
33 |
34 | [[{performance.desktop]]
35 | ## Improve Desktop performance (Spanish)
36 |
37 | *
38 | [[performance.desktop}]]
39 |
40 | [[{monitoring.hardware.battery]]
41 | # Laptop Recipes
42 | *
43 | [[monitoring.hardware.battery}]]
44 |
45 |
46 | ## rygel: easily share multimedia content with home media players
47 |
48 | * man 1 rygel
49 |
50 | - a collection of DLNA/UPnP AV services
51 | - home media solution that allows you to easily share audio, video and
52 |   pictures, and control media players on your home network.
53 | - UPnP AV MediaServer and MediaRenderer implemented through a plug-in
54 | mechanism. Interoperability with other devices in the market is
55 | achieved by conformance to very strict requirements of DLNA and on the fly
56 | conversion of media to format that client devices are capable of handling.
57 | - Play media stored on a PC via TV, PS3, ...
58 | - Easily search and play media using a phone, TV or PC
59 | - Redirect sound to DLNA speakers.
60 |
61 | [[{security.remote_access.vnc,security.remote_access.ssh,security.remote_access.rdp,}]]
62 | ## Guacamole clientless remote desktop gateway.
63 |
64 | * It supports standard protocols like VNC, RDP, and SSH.
65 | *
66 | *
67 |
68 | ## OBS Studio Live Streaming and Screen Recording Gets Wayland Support - 9to5Linux
69 |
70 | *
71 |
72 | [[{linux.desktop.OStree]]
73 | ## Endless OS: "Git like" OS updates.
74 |
75 | *
76 |
77 | With OSTree, keeping a previous version of the OS does not double the
78 | disk space used: only the differences between the two versions need
79 | to be stored. For minor upgrades (such as 3.4.0 to 3.4.1) the
80 | difference is typically only a few megabytes. For major upgrades
81 | (such as 3.3.x to 3.4.x) the difference is significantly larger, but
82 | the ability to roll back to the old version is even more valuable.
83 |
84 | *
85 |
86 | As implied above, libostree is both a shared library and suite of
87 | command line tools that combines a "git-like" model for
88 | committing and downloading bootable filesystem trees, along with a
89 | layer for deploying them and managing the bootloader configuration.
90 | The core OSTree model is like git in that it checksums individual
91 | files and has a content-addressed-object store. It’s unlike git in
92 | that it “checks out” the files via hardlinks, and they thus need
93 | to be immutable to prevent corruption. Therefore, another way to
94 | think of OSTree is that it’s just a more polished version of Linux
95 | VServer hardlinks.
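The content-addressed model described above is easy to sketch (illustrative only: SHA-256 content addressing in the spirit of libostree's checksums, not its actual on-disk layout):

```python
# Minimal content-addressed object store: identical content is stored once,
# and the checksum doubles as the object's name (git/OSTree style).
import hashlib

class ObjectStore:
    def __init__(self):
        self.objects = {}
    def store(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data        # dedup: same data => same key
        return digest
    def load(self, digest: str) -> bytes:
        return self.objects[digest]

store = ObjectStore()
a = store.store(b"/usr/bin/true contents")
b = store.store(b"/usr/bin/true contents")   # unchanged file between OS versions
print(a == b, len(store.objects))            # True 1 -> no duplicate storage
```

This is why keeping two OS versions does not double the disk space: only objects whose checksums differ need new storage.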
96 |
97 | Features:
98 | - Transactional upgrades and rollback for the system
99 | - Replicating content incrementally over HTTP via GPG signatures and
100 | "pinned TLS" support
101 | - Support for parallel installing more than just 2 bootable roots
102 | - Binary history on the server side (and client)
103 | - Introspectable shared library API for build and deployment systems
104 | - Flexible support for multiple branches and repositories, supporting
105 | projects like flatpak which use libostree for applications, rather
106 | than hosts.
107 |
108 | Endless OS uses libostree for their host system as well as flatpak.
109 | See their eos-updater and deb-ostree-builder projects.
110 |
111 | * Fedora derivatives use rpm-ostree (noted below); there are 4 variants
112 | using OSTree:
113 | - Fedora CoreOS
114 | - Fedora Silverblue
115 | - Fedora Kinoite
116 | - Fedora IoT
117 |
118 | * GNOME Continuous is where OSTree was born - as a high performance
119 | continuous delivery/testing system for GNOME.
120 |
121 | * OSTree is in many ways very evolutionary. It builds on concepts and
122 | ideas introduced from many different projects such as Systemd
123 | Stateless, Systemd Bootloader Spec, Chromium Autoupdate, the much
124 | older Fedora/Red Hat Stateless Project, Linux VServer and many more.
125 | [[linux.desktop.OStree}]]
126 |
127 | ## 5 OOSS alternatives to Microsoft Exchange
128 |
129 | *
130 |
131 | [[{]]
132 | ## WirePlumber in Fedora 35
133 |
134 | *
135 |
136 | ...WirePlumber as the default session manager for PipeWire! ... previously adopted in
137 | the automotive space by Automotive Grade Linux... it is now the recommended session
138 | manager to accompany PipeWire
139 |
140 | ... WirePlumber brings some new and interesting things to the desktop. Most notably, it
141 | introduces the ability to easily modify the overall behavior of PipeWire for different
142 | use cases, using Lua scripts ... customize multimedia experience...
143 | [[}]]
144 |
145 | ## Barrier: keyboard/mouse sharing across machines
146 |
147 | *
148 |
149 |
150 | ## Avahi,ZeroConf
151 |
152 | *
153 |
154 | [[{]]
155 | ## 101: postmarketOS: real Linux distribution for phones
156 | https://postmarketos.org/
157 |
158 | Real OS for Smartphones
159 |
160 | Writing packages is easy, by the way: as long as you know how to
161 | write shell scripts, you are good to go. We have continuous
162 | integration in place that makes sure everything builds that gets
163 | submitted to our packages repository, among other sanity checks.
164 | [[}]]
165 |
166 |
167 |
168 | [[linux.desktop}]]
169 |
170 |
171 |
172 |
--------------------------------------------------------------------------------
/linux_dmesg.txt:
--------------------------------------------------------------------------------
1 | ## dmesg:
2 |
3 | https://thenewstack.io/a-new-linux-memory-controller-promises-to-save-lots-of-ram/
4 | The Linux kernel is the core of the operating system that controls
5 | access to the system resources, such as CPU, I/O devices, physical
6 | memory, and file systems. The kernel writes various messages to the
7 | kernel ring buffer during the boot process, and when the system is
8 | running. These messages include various information about the
9 | operation of the system.
10 |
11 | The kernel ring buffer is a portion of the physical memory that holds
12 | the kernel’s log messages. It has a fixed size, which means once
13 | the buffer is full, the older log records are overwritten.
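The overwrite behaviour of a fixed-size ring buffer can be illustrated with a toy model (not the kernel's actual data structure):

```python
# Toy ring buffer: once full, the oldest records are overwritten.
from collections import deque

ring = deque(maxlen=3)                 # the kernel buffer size is fixed too
for msg in ["boot", "usb attach", "net up", "disk error"]:
    ring.append(msg)
print(list(ring))                      # ['usb attach', 'net up', 'disk error']
```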
14 |
15 | The dmesg command-line utility is used to print and control the
16 | kernel ring buffer in Linux and other Unix-like operating systems. It
17 | is useful for examining kernel boot messages and debugging hardware
18 | related issues.
19 |
20 | In this tutorial, we’ll cover the basics of the dmesg command.
21 | ### Using the dmesg Command
22 |
23 | The syntax for the dmesg command is as follows:
24 |
25 | dmesg [OPTIONS]
26 |
29 | When invoked without any options, dmesg writes all messages from the
30 | kernel ring buffer to the standard output:
31 |
32 | dmesg
33 |
36 | By default, all users can run the dmesg command. However, on some
37 | systems, access to dmesg may be restricted for non-root users. In
38 | this situation, invoking dmesg will give you an error message
39 | like the one below:
40 |
41 | dmesg: read kernel buffer failed: Operation not permitted
42 |
45 | The kernel parameter kernel.dmesg_restrict specifies whether
46 | unprivileged users can use dmesg to view messages from the kernel’s
47 | log buffer. To remove the restrictions, set it to zero:
48 |
49 | sudo sysctl -w kernel.dmesg_restrict=0
50 |
53 | Usually, the output contains many lines of information, so only
54 | the last part of the output is viewable. To see one page at a time,
55 | pipe the output to a pager utility such as less or more:
56 |
57 | dmesg --color=always | less
58 |
60 |
61 | The --color=always is used to preserve the colored output.
62 |
63 | If you want to filter the buffer messages, use grep. For example, to
64 | view only the USB related messages, you would type:
65 |
66 | dmesg | grep -i usb
67 |
69 |
70 | dmesg reads the messages generated by the kernel from the /proc/kmsg
71 | virtual file. This file provides an interface to the kernel ring
72 | buffer and can be opened only by one process. If a syslog process is
73 | running on your system and you try to read the file with cat or
74 | less, the command will hang.
75 |
76 | The syslog daemon dumps kernel messages to /var/log/dmesg, so you can
77 | also use that log file:
78 |
79 | cat /var/log/dmesg
80 |
82 | ## Formatting dmesg Output
83 |
84 | The dmesg command provides a number of options that help you format
85 | and filter the output.
86 |
87 | One of the most used options of dmesg is -H (--human), which enables
88 | the human-readable output. This option pipes the command output into a
89 | pager:
90 |
91 | dmesg -H
92 |
94 |
95 | To print human-readable timestamps use the -T (--ctime) option:
96 |
97 | dmesg -T
98 |
100 |
101 | [Mon Oct 14 14:38:04 2019] IPv6: ADDRCONF(NETDEV_CHANGE): wlp1s0: link becomes ready
102 |
104 |
105 | The timestamp format can also be set using the --time-format option, which can be ctime, reltime, delta, notime, or iso. For example, to use the delta format you would type:
106 |
107 | dmesg --time-format=delta
108 |
110 |
111 | You can also combine two or more options:
112 |
113 | dmesg -H -T
114 |
116 |
117 | To watch the output of the dmesg command in real-time use the -w (--follow) option:
118 |
119 | dmesg --follow
120 |
122 | ## Filtering dmesg Output
123 |
124 | You can restrict the dmesg output to given facilities and levels.
125 |
126 | The facility represents the process that created the message. dmesg supports the following log facilities:
127 |
128 | kern - kernel messages
129 | user - user-level messages
130 | mail - mail system
131 | daemon - system daemons
132 | auth - security/authorization messages
133 | syslog - internal syslogd messages
134 | lpr - line printer subsystem
135 | news - network news subsystem
136 |
137 | The -f (--facility) option allows you to limit the output to specific facilities. The option accepts one or more comma-separated facilities.
138 |
139 | For example, to display only the kernel and system daemons messages you would use:
140 |
141 | dmesg -f kern,daemon
142 |
144 |
145 | Each log message is associated with a log level that shows the importance of the message. dmesg supports the following log levels:
146 |
147 | emerg - system is unusable
148 | alert - action must be taken immediately
149 | crit - critical conditions
150 | err - error conditions
151 | warn - warning conditions
152 | notice - normal but significant condition
153 | info - informational
154 | debug - debug-level messages
155 |
156 | The -l (--level) option restricts the output to defined levels. The option accepts one or more comma-separated levels.
157 |
158 | The following command displays only the error and critical messages:
159 |
160 | dmesg -l err,crit
161 |
163 | ## Clearing the Ring Buffer
164 |
165 | The -C (--clear) option allows you to clear the ring buffer:
166 |
167 | sudo dmesg -C
168 |
170 |
171 | Only root or users with sudo privileges can clear the buffer.
172 |
173 | To print the buffer contents before clearing use the -c (--read-clear) option:
174 |
175 | sudo dmesg -c
176 |
178 |
179 | If you want to save the current dmesg logs in a file before clearing it, redirect the output to a file:
180 |
181 | dmesg > dmesg_messages
182 |
184 | ## Conclusion
185 |
186 | The dmesg command allows you to view and control the kernel ring buffer. It can be very useful when troubleshooting kernel or hardware issues.
187 |
188 | Type man dmesg in your terminal for information about all available dmesg options.
189 |
191 |
--------------------------------------------------------------------------------
/linux_eBPF.txt:
--------------------------------------------------------------------------------
1 | # eBPF
2 |
3 |
4 | * List running BPF programs
5 |
6 | ```
7 | | $ sudo bpftool prog
8 | | 2: tracing name hid_tail_call tag 7cc47bbf07148bfe gpl
9 | | 47: lsm name restrict_filesystems tag 713a545fe0530ce7 gpl
10 | | 570: cgroup_skb name sd_fw_egress ...
11 | | 572: cgroup_device name sd_devices ...
12 | | ...
13 | ```
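
Entries from that listing can be inspected individually by ID (a sketch; ID 2 is taken from the sample output above, yours will differ):

```shell
sudo bpftool prog show id 2          # metadata for one loaded program
sudo bpftool prog dump xlated id 2   # its translated BPF instructions
sudo bpftool map                     # list the BPF maps in use
```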
14 |
15 | [[{]]
16 | ## bpftrace
17 |
18 | *
19 |
20 | * Built on top of eBPF, as a higher-level front-end for tracing, "competing"
21 |   with Solaris DTrace.
22 |
23 | * Aims to improve on SystemTap (which was basically an IBM/Red Hat internal
24 |   project).
25 | A typical script looks like:
26 |
27 | ```
28 | | bpftrace
29 | |
30 | | BEGIN { printf("Tracing... Hit Ctrl-C to end.\n"); }
31 | |
32 | | // Process I/O start
33 | | tracepoint:block:block_rq_insert
34 | | /@last[args->dev]/ {
35 | | $last = @last[args->dev]; // calculate seek distance
36 | | $dist = (args->sector - $last) > 0 ?
37 | | args->sector - $last : $last - args->sector;
38 | | @size[pid, comm] = hist($dist); // store details
39 | | }
40 | |
41 | | tracepoint:block:block_rq_insert {
42 | | @last[args->dev] = args->sector // save last position of disk head
43 | | + args->nr_sector;
44 | | }
45 | |
46 | | END {
47 | | printf("\n@[PID, COMM]:\n");
48 | | print(@size); clear(@size); clear(@last);
49 | | }
50 | ```
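
Full scripts like the one above are not always needed; bpftrace one-liners cover many cases (a sketch; requires root and a bpftrace install):

```shell
# Histogram of successful read() sizes, system-wide; Ctrl-C prints it:
sudo bpftrace -e 'tracepoint:syscalls:sys_exit_read /args->ret > 0/ { @bytes = hist(args->ret); }'
```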
51 | [[monitoring.kernel.bpftrace}]]
52 |
53 |
54 |
55 | [[}]]
56 |
57 | [[{monitoring.eBPF,monitoring,storage]]
58 | ## trace FS requests with eBPF
59 |
60 | [[monitoring.eBPF}]]
61 |
62 | [[{monitoring.netflix_vector,kernel.eBPF]]
63 |
64 | ## vector-ebpf-container
65 |
66 | *
67 | * Vector Performance Monitoring Tool Adds eBPF, Unified Host-Container Metrics Support
68 |
69 | Vector, the open source performance monitoring tool from Netflix,
70 | added support for eBPF based tools using a PCP daemon, a unified view
71 | of container and host metrics, and UI improvements.
72 |
73 | Netflix had earlier released a performance monitoring tool called
74 | Vector as open source. Vector can "visualize and analyze system and
75 | application-level metrics in near real-time". These metrics include
76 | CPU, memory, disk and network, and application profiling using
77 | flamegraphs. Vector is built on top of Performance Co-Pilot (PCP), a
78 | performance analysis toolkit. PCP works in a distributed fashion with
79 | a daemon on each monitored host, which controls and routes metric
80 | requests to individual agents which collect the actual metrics. There
81 | are agents for most popular software, and custom application metrics
82 | can be collected by writing one's own agent. Client applications
83 | connect to the daemon.
84 | [[monitoring.netflix_vector}]]
85 |
86 | [[{security.eBPF,kernel.eBPF,PM.low_code]]
87 | ## Linux XDP (eXpress Data Path) eBPF
88 |
89 | * Major Applications:
90 | * bcc: Toolkit and library for efficient BPF-based kernel tracing
91 | * Cilium: eBPF-based Networking, Security, and Observability, designed for Kubernetes.
92 | * bpftrace: High-level tracing language for Linux eBPF. Inspired by awk and C and
93 | predecessor tracers such as DTrace and SystemTap.
94 | * Falco: behavioral activity monitor designed to detect anomalous activity in applications.
95 | Falco audits a system at the Linux kernel layer with the help of eBPF. It enriches gathered
96 | data with other input streams such as container runtime metrics and Kubernetes metrics,
97 | and allows you to continuously monitor and detect container, application, host, and network activity.
98 | * Pixie: observability tool for Kubernetes applications. No need for manual instrumentation.
99 | Developers can use Pixie to view the high-level state of their cluster (service maps,
100 | cluster resources, application traffic) and also drill down into more detailed views
101 | (pod state, flame graphs, individual full body application requests).
102 | * Calico: eBPF dataplane for networking, load-balancing and in-kernel security for containers and Kubernetes.
103 | * Katran: Layer 4 load balancer. Katran leverages the XDP infrastructure from the Linux kernel
104 | to provide an in-kernel facility for fast packet processing. Its performance scales linearly with
105 | the number of NIC's receive queues and it uses RSS friendly encapsulation for forwarding to L7 load balancers.
106 | * Parca: Continuous Profiling Platform. Track memory, CPU, I/O bottlenecks broken down by method name,
107 | class name, and line number over time. Without complex overhead, in any language or framework.
108 | Parca's UI allows data to be globally explored and analyzed using various visualizations
109 | * Tetragon: eBPF-based Security Observability & Runtime Enforcement
110 | Observability combined with real-time runtime enforcement without application changes
111 | [[security.eBPF,kernel.eBPF,PM.low_code}]]
112 |
113 |
114 |
--------------------------------------------------------------------------------
/linux_encryption.txt:
--------------------------------------------------------------------------------
1 | [[{security.encryption,security.101]]
2 | # Linux Encryption
3 |
4 | ## Kernel Keyrings [[{security.encryption,security.secret_management]]
5 |
6 | *
7 |
8 | * In-kernel cache for cryptographic secrets (encrypted FS passwords, kernel services, ...).
9 |
10 | * Used by various kernel components (file systems, hardware modules, ...) to cache
11 |   security data, authentication keys, encryption keys, ... in kernel space.
12 |
13 | ```
14 | | (kernel) KEY OBJECT
15 | | ┌── attributes ────┐
16 | | │ · type │ Must be registered in kernel by a kernel service (FS,...)
17 | | │ │ and determine operations allowed for key.
18 | | │------------------│ e.g: AFS FS might want to define a Kerberos 5 ticket key type.
19 | | │ · serial number │ 32-bit uint ID ** *
20 | | │------------------│
21 | | │ · description │ arbitrary printable string*used in searchs*prefixed with "key-type"
22 | | │ │ prefix allows kernel to invoke proper implementations to type.
23 | | │ │ "description" meaning can be different for a File System, a HSM, ...
24 | | │------------------│
25 | | │ · Access control │ == ( owner ID, group ID, permissions mask) to control what a process
26 | | │ │ may do to a kernel-key from userspace or whether kernel service
27 | | │------------------│ will be able to find the key.
28 | | │ · Expiry time │
29 | | │------------------│
30 | | │ · payload │ ← blob of data (*"secret"*) or key-list for keyrings
31 | | │------------------│ Optional. Payload can be stored in kernel struct itself
32 | | │ · State │ ← Non garbage collected: Uninstantiated | Instantiated | Negative
33 | | │ │ garbage collected: Expired | Revoked | Dead
34 | | ├─── functions ────┤
35 | | │ · Constructor │ ← Used at key instantiation time. Depends on type
36 | | │ · read │ ← Used to convert the key's internal payload back into blob
37 | | │ · "find" │ ← Match a description to a key.
38 | | └──────────────────┘
39 | | ** * SPECIAL KEY IDS :
40 | | · 0 : No key
41 | | · @t or -1 : Thread keyring, 1st on searchs, replaced on (v)fork|exec|clone
42 | | · @p or -2 : Process keyring, 2nd on searchs, replaced on (v)fork|exec
43 | | · @s or -3 : Session keyring, 3rd on searchs, inherited on (v)fork|exec|clone
44 | | It/They can be named and joined with existing ones.
45 | | · @u or -4 : User specific, shared by processes owned by user.
46 | | Not searched directly, but normally linked to/from @s.
47 | | · @us or -5 : User default session keyring upon successful login (by root login processes)
48 | | · @g or -6 : Group specific keyring, *not actually implemented yet in the kernel*.
49 | | · @a or -7 : Assumed request_key authorisation key:
50 | | · %:${name} : named keyring, to be searched for in [process's keyrings , */proc/keys*]
51 | | · %$typ:$name: named type:keyring, to be searched for in [process's keyrings , */proc/keys*]
52 | ```
53 | * RELATED PROCFS FILES:
54 | ```
55 | | · /proc/keys ← list keys granted View perm/task reading the file.
56 | | · /proc/key-users ← lists quota+stats by user
57 | ```
58 |
59 | * RELATED SYSCTL PROCFS FILES:
60 | ```
61 | | /proc/sys/kernel/keys/root_maxkeys
62 | | /proc/sys/kernel/keys/root_maxbytes
63 | | /proc/sys/kernel/keys/maxkeys
64 | | /proc/sys/kernel/keys/maxbytes
65 | | /proc/sys/kernel/keys/gc_delay ← timeout to garbage-collect revoked|expired keys
66 | ```
67 |
68 | * e.g.: When a filesystem/device calls "file-open", the kernel react by
69 | searching for a key, releasing it upon close.
70 | (How to deal with conflicting keys due to concurrent users opening
71 | the same file is left to the filesystem author to solve)
72 |
73 | * `man 1 keyctl summary` (to be used by scripts, ...)
74 |
75 | ```
76 | | $ keyctl show [$ring] # ← Show (recursively by default) keys and keyrings
77 | | Session Keyring a process is subscribed-to or just keyring $ring
78 | | 84022412 --alswrv 0 -x : display IDs in hex (vs decimal).
79 | | 204615789 --alswrv 0 6 0 keyring: _ses
80 | | 529474961 --alsw-v 0 5534 \_ keyring: _uid.0
81 | | 84022412 --alswrv 0 0 \_ logon: f2fs:28e21cc0c4393da1
82 | | └───────┴─ ex key-type used by f2fs file-system.
83 | |
84 | | $ keyctl add \ # <· create+add key to $ring.
85 | | $type "${descrip}" \ $data: instantiates new key with $data
86 | | "${data}" $ring Use padd (vs add) to read $data from STDIN
87 | | 26 # <· New key's ID printed to STDOUT
88 | |
89 | | $ keyctl request $type $desc # <· Request key lookup in process's keyrings
90 | | 26 of the given type and description. Print Key's ID
91 | | to STDOUT in case of match, or ENOKEY otherwise.
92 | | If an optional ${dest_keyring} (last option) is
93 | | provided, key will be added to it.
94 | | $ keyctl request2 $type $desc $info # <· alt: create partial key (type,desc) and calls out
95 | | $ keyctl prequest2 $type $desc # key creation to /sbin/request-key to (attempt to)
96 | | instantiate the key in some manner.
97 | | prequest2 (vs request2) reads (callout) $info
98 | | from STDIN.
99 | |
100 | | $ keyctl update $key $data # < replace key data (pupdate to get data from STDIN)
101 | | $ keyctl newring $name $ring # < Create new keyring $name attached to existing $keyring
102 | | e.g.: $ keyctl newring squelch @us
103 | | $ keyctl revoke $key # < Revoke key
104 | | $ keyctl clear $ring # < Clear $keyring (unlinks all keys attached)
105 | |
106 | | $ keyctl link $key $ring # < Link key to keyring (if there's enough capacity)
107 | | $ keyctl unlink $key [$ring] # < UnLink key. If $ring unset do depth-first search
108 | | (unlinking all links found for $key)
109 | |
110 | | $ keyctl search $ring $type \ # ← Search key non-recursively
111 | | $desc [$dest_ring] # ← If $dest_ring set and key found, attach to it.
112 | | $ keyctl search @us user debug:hello # ex:
113 | | 23
114 | | $ keyctl search @us user debug:bye #
115 | | keyctl_search: Requested key not available
116 | |
117 | | # --------------------------------- RESTRICT A KEYRING
118 | | $ keyctl restrict_keyring $ring \ # limit linkage-of-keys to the given $ring
119 | | [$type [$restriction]] # using a provided restriction-scheme
120 | | (associated with a given key type)
121 | | · If restriction-scheme not provided,
122 | | keyring will reject all links.
123 | | e.g.:
124 | | $ keyctl restrict_keyring $1 \ # Options typically contain a restriction-name
125 | | asymmetric builtin_trusted # possibly followed by key ids or other data
126 | | relevant to the restriction.
127 |
128 | | Read (payload) of key:
129 | | $ keyctl read $key # <· dumps raw-data to STDOUT as a hex dump
130 | | $ keyctl pipe $key # <· dumps raw-data to STDOUT
131 | | $ keyctl print $key # <· dumps raw-data to STDOUT if entirely printable
132 | | or as ":hex:"+hex dump otherwise
133 | | "Operation not supported" returned
134 | | if key-type does not support payload-reading
135 | |
136 | | # -------------------------------- LIST CONTENTS OF A KEYRING
137 | | $ keyctl list $ring # ← pretty-prints $ring list-of-key IDs
138 | | $ keyctl rlist $ring # (rlist) => space separated
139 | | e.g.:
140 | | $ keyctl list @us
141 | | 2 keys in keyring:
142 | | 22: vrwsl---------- 4043 -1 keyring: _uid.4043
143 | | 23: vrwsl---------- 4043 4043 user: debug:hello
144 | | $ keyctl rlist @us
145 | | 22 23
146 | |
147 | | DESCRIBE A KEY
148 | | $ keyctl describe @us # ← pretty prints description
149 | | -5: vrwsl-... _uid_ses.4043
150 | |
151 | | $ keyctl rdescribe @us [$sep] # ← prints raw data returned from kernel.
152 | | keyring;4043;-1;3f1f0000;_uid_ses.4043
153 | | └─────┘ └──┘ └┘ └──────┘ └───────────┘
154 | | type uid gid permis. description
155 | |
156 | | # -------------------------------- CHANGE KEY ACCESS CONTROLS
157 | | $ keyctl chown $key $uid # ← change owner: It also governs which
158 | | NOT currently supported! quota a key is taken out of
159 | | $ keyctl chgrp $key $gid # ← change group (process's GID|GID in
160 | | process's groups list for non-root
161 | | users)
162 | |
163 | | $ keyctl setperm $key $mask # ← Set permissions mask(as "0x..." hex
164 | | or "0..." octal)
165 | |
166 | | The hex numbers are a combination of:
167 | | Possessor UID GID Other Permission Granted
168 | | ┌┐====== ==┌┐==== ====┌┐== ======┌┐
169 | | 010..... ..01.... 0...01.. 0... 01 View : allow view of key type, description, "others"
170 | | 020..... ..02.... 0...02.. 0... 02 Read : allow view of payload|ring list (if supported)
171 | | 040..... ..04.... 0...04.. 0... 04 Write : allow write of payload|ring list (if supported)
172 | | 080..... ..08.... 0...08.. 0... 08 Search: allow key to be found in linked keyring.
173 | | 100..... ..10.... 0...10.. 0... 10 Link : allow key to be linked to keyring.
174 | | 200..... ..20.... 0...20.. 0... 20 Set Attribute: allows change of own|grp|per.msk|timeout
175 | | 3f0..... ..3f.... 0...3f.. 0... 3f All
176 | | └┘====== ==└┘==== ====└┘== ======└┘
177 | |
178 | | $ keyctl setperm 27 0x1f1f1f00 # <· e.g.
179 | ```
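
The mask from the setperm example above can be derived with plain shell arithmetic (a sketch; one byte per principal, high to low: possessor, UID, GID, other):

```shell
# 0x1f = View|Read|Write|Search|Link (everything except Set Attribute).
possessor=0x1f; uid=0x1f; gid=0x1f; other=0x00
printf '0x%08x\n' $(( (possessor<<24) | (uid<<16) | (gid<<8) | other ))
# → 0x1f1f1f00  (the value passed to `keyctl setperm 27 0x1f1f1f00` above)
```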
180 |
181 | * See man 1 keyctl for further info about how to:
182 | * Start a new session with fresh keyrings
183 | * Instantiate a new key or mark as invalid, timeout.
184 | * Retrieve a key's (SELinux) security context
185 | * Give the parent process a new session keyring
186 | * Remove/purge keys
187 | * Get persistent keyring
188 |   `keyctl get_persistent $ring []`
190 | * Compute a Diffie-Hellman shared secret or public key from (private, prime, base)
191 | * Compute a Diffie-Hellman shared secret and derive key material
192 | from (private, prime, base, output_length, hash_type)
193 | * Compute a Diffie-Hellman shared secret and apply KDF with other input
194 | from (private, prime, base, output_length, hash_type)
195 | * Perform public-key operations with an asymmetric key (encrypt/decrypt, sign/verify)
196 | [[security.encryption}]]
197 |
198 |
199 | ## eCryptfs Summary [[{storage,security.encryption]]
200 |
201 | * Linux Kernel v2.6.19 and later
202 | *
203 |
204 | - Original author: Michael Halcrow and IBM Linux Technology Center.
205 | - Actively maintained by Dustin Kirkland and Tyler Hicks from Canonical.
206 |
207 | - eCryptfs is not a kernel-level full-disk encryption subsystem like dm-crypt.
208 |
209 | - eCryptfs is a stacked filesystem mounted on any directory on top of
210 | the local file system (EXT3, EXT4, XFS, ...) and also
211 | **network file systems**(NFS, CIFS, Samba, WebDAV,...) with some
212 | restrictions when compared to local FS.
213 |
214 | - No separate partition or pre-allocated space is actually required!
215 | eCryptfs stores cryptographic metadata in the headers of files,
216 | so the encrypted data can be easily moved between different users and
217 | even systems.
218 |
219 | ### PRE-SETUP install
220 |
221 | ```
222 | | $ sudo pacman -S ecryptfs-utils # ← Arch, Manjaro,..
223 | | $ sudo apt-get install ecryptfs-utils # ← Debian,Ubuntu,...
224 | | $ sudo dnf install ecryptfs-utils # ← RedHat, CentOS, Fedora, ..
225 | | $ sudo zypper --install ecryptfs-utils # ← openSUSE
226 | | ...
227 | ```
228 |
229 | ### eCryptfs Usage
230 |
231 | - At first mount, you will be prompted for the cipher, key bytes,
232 |   plaintext passthrough, filename encryption, etc.
233 |
234 | ```
235 | | $ sudo mount -t ecryptfs ~/SensitiveData ~/SensitiveData/
236 | |
237 | | [sudo] password for sk:
238 | | Passphrase: ← Enter passphrase. Needed to unlock again
239 | | Select cipher: on next mounts.
240 | | 1) aes: blocksize = 16; min keysize = 16; max keysize = 32
241 | | 2) blowfish: blocksize = 8; min keysize = 16; max keysize = 56
242 | | 3) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24
243 | | 4) twofish: blocksize = 16; min keysize = 16; max keysize = 32
244 | | 5) cast6: blocksize = 16; min keysize = 16; max keysize = 32
245 | | 6) cast5: blocksize = 8; min keysize = 5; max keysize = 16
246 | | Selection [aes]: ← Prompt
247 | | Select key bytes:
248 | | 1) 16
249 | | 2) 32
250 | | 3) 24
251 | | Selection [16]: ← [Enter]
252 | | Enable plaintext passthrough (y/n) [n]: ← [Enter]
253 | | Enable filename encryption (y/n) [n]: ← [Enter]
254 | | Attempting to mount with the following options:
255 | | ecryptfs_unlink_sigs
256 | | ecryptfs_key_bytes=16
257 | | ecryptfs_cipher=aes
258 | | ecryptfs_sig=8567ee2ae5880f2d
259 | | WARNING: Based on the contents of [/root/.ecryptfs/sig-cache.txt],
260 | | it looks like you have never mounted with this key
261 | | before. This could mean that you have typed your
262 | | passphrase wrong.
263 | |
264 | | Would you like to proceed with the mount (yes/no)? : yes ←
265 | | Would you like to append sig [8567ee2ae5880f2d] to
266 | | [/root/.ecryptfs/sig-cache.txt]
267 | | in order to avoid this warning in the future (yes/no)? : yes ←
268 | | Successfully appended new sig to user sig cache file
269 | | Mounted eCryptfs
270 | ```
271 |
272 | * A signature file */root/.ecryptfs/sig-cache.txt* will be created
273 | to identify the mount passphrase in the kernel keyring.
274 | * When the directory is un-mounted, files are still visible, but
275 | completely encrypted and un-readable.
276 |
277 | * Changing mount passphrase
278 | ```
279 | | $ sudo rm -f /root/.ecryptfs/sig-cache.txt
280 | | $ sudo mount -t ecryptfs ~/SensitiveData ~/SensitiveData/ # ← Mount again
281 | ```
282 | * Re-mount automatically at reboot
283 | ```
284 | | PRE-SETUP:
285 | | Fetch a USB drive to store the signature and
286 | | path of the password file.
287 | |
288 | | $ sudo mount /dev/sdb1 /mnt/usb # ← sdb1 == "usb". Use $ 'dmesg'
289 | | # to know the exact value
290 | |
291 | | $ sudo cat /root/.ecryptfs/sig-cache.txt
292 | | 934e8e1fa80152e4
293 | |
294 | | $ sudo vim /mnt/usb/password.txt # ← e.g.: change!M@1234
295 | |
296 | | $ sudo vim /root/.ecryptfsrc
297 | | key=passphrase:passphrase_passwd_file=/mnt/usb/password.txt
298 | | ecryptfs_sig=934e8e1fa80152e4
299 | | ecryptfs_cipher=aes
300 | | ecryptfs_key_bytes=16
301 | | ecryptfs_passthrough=n
302 | | ecryptfs_enable_filename_crypto=n
303 | ```
304 | (Note that USB will need to be mounted for the setup to work properly)
305 |
306 | - Finally add next line to /etc/fstab :
307 | ```
308 | | /home/myUser/SensitiveData /home/myUser/SensitiveData ecryptfs defaults 0 0
309 | ```
310 | [[}]]
311 |
312 | ## LVM + LUKS encryption [[{security.encryption.luks,storage.lvm,]]
313 |
314 | - LUKS stands for Linux-Unified-Key-Setup encryption toolchain.
315 | - LVM integrates nicely with disk encryption
316 |
317 | LUKS encrypts full-partitions (vs files in GnuPG, ...)
318 |
319 | NOTICE/WARN: LUKS will prompt for a password during boot.
320 | (unattended server autoboot will stall at the prompt)
321 |
322 | 1. Format the partition with the "cryptsetup" command:
323 | ```
324 | # cryptsetup luksFormat /dev/sdx1
325 | LUKS will warn that it's going to erase your drive:
326 | A prompt will ask for a passphrase: (Enter it to continue)
327 | ```
328 |
329 | The partition is encrypted at this point but contains no filesystem yet:
330 | - In order to format it you must first un-lock it.
331 | ```
332 | | # cryptsetup luksOpen /dev/sdx1 mySafeDrive # ← Unlock before formating it.
333 | | ^^^^^^^^^^^
334 | | human-friendly name
335 | | will create a symlink
336 | | /dev/mapper/mySafeDrive
337 | | to auto-generated designator
338 | (LUKS will ask for the passphrase to un-lock the drive)
339 | ```
340 | - Check the volume is "OK":
341 | ```
342 | # ls -ld /dev/mapper/mySafeDrive
343 | lrwxrwxrwx. 1 root root 7 Oct 24 03:58 /dev/mapper/mySafeDrive → ../dm-4
344 | ```
345 | 2. Format with a standard filesystem (ext4, ...)
346 | ```
347 | | # mkfs.ext4 -o Linux -L mySafeExt4Drive /dev/mapper/mySafeDrive
348 | ```
349 | 3. Mount the unit
350 | ```
351 | | # mount /dev/mapper/mySafeDrive /mnt/hd
352 | ```
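To lock the volume again when finished (a sketch; names match the example above):

```shell
# Unmount the filesystem, then close the LUKS mapping so the
# passphrase is required again at the next luksOpen:
sudo umount /mnt/hd
sudo cryptsetup luksClose mySafeDrive   # removes /dev/mapper/mySafeDrive
```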
353 | [[security.encryption.luks}]]
354 | [[security.encryption}]]
355 |
--------------------------------------------------------------------------------
/linux_kernel_monitoring.txt:
--------------------------------------------------------------------------------
1 | [[{monitoring.kernel,monitoring.I/O,monitoring.network,monitoring.jobs]]
2 |
3 | # Kernel Monitoring
4 |
5 | ## dmesg
6 |
7 | * `dmesg` prints or controls the kernel ring buffer.
8 |
9 |
10 |
11 |
12 | ```
13 | | KERNEL MONITORIZATION [[doc_has.diagram,troubleshooting]]
14 | | ┌ ┌───────────────────────────────────────────────────────┐
15 | | · │ APPLICATIONS │
16 | | · ├───────────────────────────────────────────────────────┤
17 | | · │[ltrace][ldd] System Libraries gethostlatency perf│
18 | | · ├───────────────────────────────────────────────────────┤
19 | | · │ strace,sydgid System Call Interface [*3] perf│
20 | | · │ opensnoop,statsnoop,syncsnoop │ CPU
21 | |perf ┌ ├─────────────────┬───┬──────────────┬┬─────────────────┤ Inter- ┌────┐
22 | |dtrace · │ VFS │ │SOCKETS [ss] ││SCHEDULER [perf│ connect |CPU1│
23 | | · │ opensnoop │ │ ││ ├──────────┤perf│
24 | |stap · ├─────────────────┤ │──────────────┤│ perf <·· top └─·──·
25 | |lttnp · │ FILE SYSTEM │ │TCP/UPD [*2] ││ latencytop │ · ps · ·
26 | |lttnp L │ │ │tcptop, ││ mpstat │ · pidstat · ·
27 | |ktap I │ lsof,fstrace │ │tcplife, │├─────────────────┤ · · ·
28 | | · N │ filelie,pcstat │ │tcpconnect, ││VIRTUAL MEMORY │ · Memory · ·
29 | | · U ├─────────────────┤ │tcpaccept ││[vmstat] <·· BUS · ·
30 | | · X │ VOLUME MANAGERS │ │tcpconnlat, ││[slabtop] <·· ┌─v─┐·
31 | | · │ mdflush │ ├tcpretrans ││[free] ├··········>RAM│·
32 | | · K ├─────────────────┤ ├──────────────┤│[memleak] │ └───┘·
33 | | · E │ BLOCK DEVICE │ │IP ip, ││[comkill] │ numastat ·
34 | | · R │ INTERFACE │ │route, ││[slabratetop] │ lstop ·
35 | | · N │ iostat,iotop │ │iptables ││ │ ·
36 | | · E │ blktrace │ │ ││ │ ·
37 | | · L │ pidstat │ │ ││ │ ·
38 | | · · │ biosnoop │ │──────────────│├─────────────────┤ ·
39 | | · · │ biolatency │ │Ethernet [ss] ││CLOCKSOURCE │ ·
40 | | · · │ biotop,blktrace │ │[tcpdump] ││[/sys/...] │ ·
41 | | · · ├──·──────────────┴───┴──────────────┴┴─────────────────┤ ·
42 | | · · │ ·[hardirqs][criticalstat] Device Drivers │ ·
43 | | └ └ └──·────────────────────────────────────────────────────┘ I/O perf ·
44 | | · Expander-Interconnect ┌ I/O ──┐BUS tiptop·
45 | | · ─┬──────────────────────────────────────┬───┤ BRidge├···········┘
46 | | · │ │ └───────┘
47 | | └┐ │ │
48 | | ┌v────┴───────────┐ ┌─ Network ──┴─┐ nicstat
49 | | │I/O Controller *1│ │Controller │ ss, ip
50 | | └─────────────────┘ └───────┬──────┘
51 | | ┬──────┴───────┬ ┬──────┴────┬
52 | | │ │ │ │
53 | | Disk[*1] Swap [swapon] Port Port
54 | | ping, traceroute]
55 | | ethtool] [snmpget]
56 | | lldptool]
57 | | OTHERS: [sar] [dstat] [/proc]
58 | |
59 | | ┌───┐[sar -m FAN] ┌────────────┐[ipmitool]
60 | | │FAN│ │POWER SUPPLY│[dmidecode]
61 | | └───┘ └────────────┘
62 | ```
63 |
64 |
65 | ## `/proc/meminfo`
66 |
67 | ```
68 | | $ cat /proc/meminfo
69 | | MemTotal: 16116792 kB
70 | | MemFree: 2042420 kB
71 | | MemAvailable: 10656344 kB
72 | | Buffers: 1637424 kB
73 | | Cached: 6513208 kB
74 | | SwapCached: 352 kB
75 | | Active: 8372356 kB
76 | | Inactive: 3940908 kB
77 | | Active(anon): 3755128 kB
78 | | Inactive(anon): 645496 kB
79 | | Active(file): 4617228 kB
80 | | Inactive(file): 3295412 kB
81 | | Unevictable: 0 kB
82 | | Mlocked: 0 kB
83 | | SwapTotal: 8126460 kB
84 | | SwapFree: 8124156 kB
85 | | Dirty: 1304 kB
86 | | Writeback: 0 kB
87 | | AnonPages: 4162388 kB
88 | | Mapped: 732652 kB
89 | | Shmem: 238000 kB
90 | | Slab: 1337700 kB
91 | | SReclaimable: 1029376 kB
92 | | SUnreclaim: 308324 kB
93 | | KernelStack: 15632 kB
94 | | PageTables: 31724 kB
95 | | NFS_Unstable: 0 kB
96 | | Bounce: 0 kB
97 | | WritebackTmp: 0 kB
98 | | CommitLimit: 16184856 kB
99 | | Committed_AS: 11012532 kB
100 | | VmallocTotal: 34359738367 kB
101 | | VmallocUsed: 0 kB
102 | | VmallocChunk: 0 kB
103 | | HardwareCorrupted: 0 kB
104 | | AnonHugePages: 0 kB
105 | | ShmemHugePages: 0 kB
106 | | ShmemPmdMapped: 0 kB
107 | | CmaTotal: 0 kB
108 | | CmaFree: 0 kB
109 | | HugePages_Total: 0
110 | | HugePages_Free: 0
111 | | HugePages_Rsvd: 0
112 | | HugePages_Surp: 0
113 | | Hugepagesize: 2048 kB
114 | | Hugetlb: 0 kB
115 | | DirectMap4k: 1147072 kB
116 | | DirectMap2M: 15319040 kB
117 | ```
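
The two most useful fields above are MemTotal and MemAvailable (the kernel's estimate of memory available to new workloads without swapping; more useful than MemFree). A small sketch turns them into a utilization percentage (field names as in the dump above; the function name is ours):

```shell
# Print "used" memory % from a /proc/meminfo-style file.
meminfo_used_pct() {
  LC_ALL=C awk '/^MemTotal:/     { total = $2 }
                /^MemAvailable:/ { avail = $2 }
                END { printf "%.1f\n", 100 * (total - avail) / total }' "$1"
}

# usage: meminfo_used_pct /proc/meminfo
```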
118 |
119 |
120 |
121 | [[{PM.TODO]]
122 | ## BCC
123 |
124 | * Dynamic tracing toolkit for creating efficient Linux kernel tracing,
125 |   performance monitoring, networking and much more.
126 | * It makes use of extended BPF (Berkeley Packet Filter), known as eBPF.
127 | *
128 | [[PM.TODO}]]
129 |
130 | [[{doc_has.comparative,PM.TODO]]
131 | ### Linux Tracer comparative
132 | *
133 | [[doc_has.comparative}]]
134 |
135 | ## ltrace: Library Call Tracer [[{monitoring.jobs]]
136 |
137 | *
138 |
139 | Summary:
140 | ```
141 | ltrace | ltrace -c # ← Count time and calls for each library call*
142 | | and report a summary on program exit. *
143 | [-e filter|-L] | [-e filter|-L]
144 | [-l|--library=library_pattern] | [-l|--library=library_pattern]
145 | [-x filter] | [-x filter]
146 | [-S] | [-S]
147 | [-b|--no-signals] |
148 | [-i] [-w|--where=nr] |
149 | [-r|-t|-tt|-ttt] |
150 | [-T] |
151 | [-F pathlist] |
152 | [-A maxelts] |
153 | [-s strsize] |
154 | [-C|--demangle] |
155 | [-a|--align column] |
156 | [-n|--indent nr] |
157 | [-o|--output filename] | [-o|--output filename]
158 | [-D|--debug mask] |
159 | [-u username] |
160 | [-f] | [-f]
161 | [-p pid] | [-p pid]
162 | [ [--] command [arg ...] ] | [ [--] command [arg ...] ]
163 | ```
164 | * runs the specified command until it exits, intercepting/recording:
165 |   * dynamic library calls made by the process
166 |     (displays functions and function parameters;
167 |     external prototype libraries are needed
168 |     for human-readable output, see
169 |     ltrace.conf(5), section PROTOTYPE LIBRARY DISCOVERY)
170 |   * signals received by the process
171 |   * system calls made by the process
172 | [[}]]
173 |
174 | [[{monitoring.jobs.strace]]
175 | ## strace: System call tracer
176 |
177 | * man 1 strace
178 |
179 | ```
180 | | strace | strace -c ← -c: Count time, calls, and errors
181 | | | for each system call and report summary on exit.
182 | | | -f aggregate over all forked processes
183 | | [ -dDffhiqrtttTvVxx ] | [ -D ]
184 | | [ -acolumn ] |
185 | | [ -eexpr ] ... | [ -eexpr ] ...
186 | | | [ -Ooverhead ]
187 | | [ -ofile ] |
188 | | [ -ppid ] ... |
189 | | [ -sstrsize ] |
190 | | [ -uusername ] |
191 | | [ -Evar=val ] ... |
192 | | [ -Evar ] ... | [ -Ssortby ]
193 | | | [ -Ssortby ]
194 | | [ command [ arg ... ] ] | [ command [ arg ... ] ]
195 | ```
196 |
197 | * strace runs the specified command until it exits, intercepting:
198 | * system calls called by a process
199 | * system-call-name + arguments + return-value is printed to STDERR (or -o file)
200 | * signals received by a process
201 |
202 | ```
203 | | Ex system-call output:
204 | | open("/dev/null", O_RDONLY) = 3
205 | | open("/foo/bar", O_RDONLY) = -1 ENOENT (No such file or directory)
206 | | ...
207 |
208 | | Ex signal output:
209 | | $ strace sleep 111
210 | | ...
211 | | sigsuspend([]
212 | | --- SIGINT (Interrupt) --- ← Signal received
213 | | ...
214 | | +++ killed by SIGINT +++
215 | ```
216 |
217 | * If a system call is being executed and meanwhile another one is being called
218 | from a different thread/process then strace will try to preserve the order
219 | of those events and mark the ongoing call as being unfinished.
220 | * When the call returns it will be marked as resumed. Ex. output:
221 | ```
222 | → [pid 28772] select(4, [3], NULL, NULL, NULL *unfinished ... *
223 | → [pid 28779] clock_gettime(CLOCK_REALTIME, {1130322148, 939977000}) = 0
224 | → [pid 28772] *<... select resumed>* ) = 1 (in [3])
225 | ```
226 |
227 | Interruption of a (restartable) system call by a signal delivery is
228 | processed differently as kernel terminates the system call and also
229 | arranges its immediate reexecution after the signal handler completes.
230 |
231 | ```
232 | | read(0, 0x7ffff72cf5cf, 1) = ? *ERESTARTSYS (To be restarted)*
233 | | --- SIGALRM (Alarm clock) @ 0 (0) ---
234 | | rt_sigreturn(0xe) = 0
235 | | read(0, ""..., 1) = 0
236 | ```
237 |
238 | * explain: Tool to decode the error returned from strace
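* The symbolic errno names in strace output (ENOENT, EACCES, ...) can also be
  decoded directly; a minimal Python sketch using only the standard library
  (the three names below are just illustrative picks):

```python
import errno, os

# Map the symbolic error names that strace prints, e.g.
#   open("/foo/bar", O_RDONLY) = -1 ENOENT (No such file or directory)
# back to their number and human-readable description.
for name in ("ENOENT", "EACCES", "EINTR"):
    num = getattr(errno, name)           # symbolic name -> error number
    print(f"{name:7} = {num:3}: {os.strerror(num)}")
```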
239 |
240 | [[monitoring.jobs.strace}]]
241 |
242 | [[{monitoring.cpu]]
243 | ## mpstat: CPU stats (ints., hypervisor...)
244 | *
245 | ```
246 | | mpstat
247 | | [ -A ] == -I ALL -u -P ALL
248 | | [ -I { SUM | CPU | ALL } ] == Report interrupts statistics
249 | | [ -u ] Reports cpu utilization (default)
250 | | [ -P { cpu [,...] | ON | ALL } ] Indicates the processor number
251 | | [ -V ]
252 | | [ secs_interval [ count ] ]
253 | |     secs_interval = 0 => Report averages since system startup (boot)
254 | |
255 | | mpstat writes to standard output activities for each available processor.
256 | | Global average activities among all processors are also reported.
257 | |
258 | | CPU output columns:
259 | | %usr : executing at the user level (application).
260 | | %nice : executing at the user level with nice priority.
261 | | %sys : executing at the system level (kernel).
262 | | It does NOT include time spent servicing hardware
263 | | and software interrupts.
264 | | %iowait: idle during which the system had an outstanding disk I/O request.
265 | | %irq : time spent by the CPU or CPUs to service hardware interrupts.
266 | | %soft : time spent by the CPU or CPUs to service software interrupts.
267 | |
268 | | %steal : **time spent in involuntary wait by the virtual CPU or CPUs
269 | | while the hypervisor was servicing another virtual processor** !!!!
270 | | [[monitoring.hypervisor]]
271 | |
272 | | %guest : time spent by the CPU or CPUs to run a virtual processor.
273 | | %idle : time that the CPU or CPUs were idle and the system did not have
274 | | an outstanding disk I/O request.
275 | ```
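* The counters mpstat reports come from /proc/stat; a minimal Python sketch
  (stdlib only, Linux only) reading the aggregate "cpu" line:

```python
# Read the aggregate CPU counters that mpstat reports, from /proc/stat.
# Field order: user nice system idle iowait irq softirq steal guest guest_nice
def cpu_times():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu "):
                names = ("usr", "nice", "sys", "idle", "iowait",
                         "irq", "soft", "steal", "guest", "guest_nice")
                vals = [int(x) for x in line.split()[1:]]
                return dict(zip(names, vals))
    return {}

t = cpu_times()
total = sum(t.values()) or 1
print(f"%idle  = {100.0 * t.get('idle', 0) / total:5.1f}")
print(f"%steal = {100.0 * t.get('steal', 0) / total:5.1f}")
```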
276 | [[}]]
277 |
278 |
279 |
280 |
281 | [[{profiling.latencytop,troubleshooting.performance.101,troubleshooting.locks,troubleshooting.jobs,QA.UX,PM.low_code]]
282 | ## latencytop
283 | *
284 | * aimed at:
285 | * identifying and visualizing where (kernel and userspace) latencies are happening
286 | * What kind of operation/action is causing the latency
287 |
288 | **LATENCYTOP FOCUSES ON THE CASES WHERE THE APPLICATIONS WANT TO RUN
289 | AND EXECUTE USEFUL CODE, BUT THERE'S SOME RESOURCE THAT'S NOT
290 | CURRENTLY AVAILABLE (AND THE KERNEL THEN BLOCKS THE PROCESS).**
291 |
292 | This is done both on a system level and on a per process level,
293 | so that you can see what's happening to the system, and which
294 | process is suffering and/or causing the delays.
295 |
296 | ```
297 | | $ sudo latencytop  <·· Press "s" + "letter" to display active processes
298 | |                        starting with that letter.
299 | |                        Press "s" followed by 0 to remove the filter.
300 | ```
301 |
302 | * NOTES:
303 | * See also notes about disk write-read pending queue.
304 | * There are newer alternatives based on BPF:
305 |
306 | * See also the more advanced TimeChart:
307 | *
308 | *
309 | [[profiling.latencytop}]]
310 |
311 | [[monitoring.kernel}]]
312 |
313 | [[{monitoring.kernel,monitoring.eBPF,kernel.eBPF]]
314 |
315 | ## eBPF Tracing
316 |
317 | * Tutorial and Examples:
318 |
319 |
320 | [[monitoring.kernel.eBPF}]]
321 |
--------------------------------------------------------------------------------
/linux_mobile.txt:
--------------------------------------------------------------------------------
1 | [[{linux.mobile.mobifree]]
2 | ## Mobifree
3 |
4 | *
5 |
6 | [[linux.mobile.mobifree}]]
7 |
8 | [[{]]
9 | ## 101: postmarketOS: real Linux distribution for phones
10 | https://postmarketos.org/
11 |
12 | Real OS for Smartphones
13 |
 14 | Writing packages is easy, by the way: as long as you know how to
 15 | write shell scripts, you are good to go. We have continuous
 16 | integration in place that makes sure everything that gets submitted
 17 | to our packages repository builds, among other sanity checks.
18 | [[}]]
19 |
20 |
21 |
22 |
--------------------------------------------------------------------------------
/linux_networking_map.txt:
--------------------------------------------------------------------------------
1 | # Napalm [[{DevOps.ansible]]
2 | @[https://napalm-automation.net/]
3 | @[https://napalm.readthedocs.io/en/latest/]
4 | NAPALM: (N)etwork (A)utomation and (P)rogrammability (A)bstraction (L)ayer with (M)ultivendor support
5 | - Python library that implements a set of functions to interact with different
6 | network device Operating Systems using aºunified API to network devicesº.
7 | - Supported Network Operating Systems:
8 | - Arista EOS
9 | - Cisco IOS
10 | - Cisco IOS-XR
11 | - Cisco NX-OS
12 | - Juniper JunOS
13 |
14 | Compatibility Matrix: (2020-02-01)
15 | @[https://napalm.readthedocs.io/en/latest/support/index.html]
16 | EOS IOS IOS JUN NX NX
17 | XR OS OS OS_SSH
18 | get_arp_table ✅ ✅ ❌ ❌ ❌ ✅
19 | get_bgp_config ✅ ✅ ✅ ✅ ❌ ❌
20 | get_bgp_neighbors ✅ ✅ ✅ ✅ ✅ ✅
21 | get_bgp_neighbors_detail ✅ ✅ ✅ ✅ ❌ ❌
22 | get_config ✅ ✅ ✅ ✅ ✅ ✅
23 | get_environment ✅ ✅ ✅ ✅ ✅ ✅
24 | get_facts ✅ ✅ ✅ ✅ ✅ ✅
25 | get_firewall_policies ❌ ❌ ❌ ❌ ❌ ❌
26 | get_interfaces ✅ ✅ ✅ ✅ ✅ ✅
27 | get_interfaces_counters ✅ ✅ ✅ ✅ ❌ ❌
28 | get_interfaces_ip ✅ ✅ ✅ ✅ ✅ ✅
29 | get_ipv6_neighbors_table ❌ ✅ ❌ ✅ ❌ ❌
30 | get_lldp_neighbors ✅ ✅ ✅ ✅ ✅ ✅
31 | get_lldp_neighbors_detail ✅ ✅ ✅ ✅ ✅ ✅
32 | get_mac_address_table ✅ ✅ ✅ ✅ ✅ ✅
33 | get_network_instances ✅ ✅ ❌ ✅ ✅ ❌
34 | get_ntp_peers ❌ ✅ ✅ ✅ ✅ ✅
35 | get_ntp_servers ✅ ✅ ✅ ✅ ✅ ✅
36 | get_ntp_stats ✅ ✅ ✅ ✅ ✅ ❌
37 | get_optics ✅ ✅ ❌ ✅ ❌ ❌
38 | get_probes_config ❌ ✅ ✅ ✅ ❌ ❌
39 | get_probes_results ❌ ❌ ✅ ✅ ❌ ❌
40 | get_route_to ✅ ✅ ✅ ✅ ❌ ✅
41 | get_snmp_information ✅ ✅ ✅ ✅ ✅ ✅
42 | get_users ✅ ✅ ✅ ✅ ✅ ✅
43 | is_alive ✅ ✅ ✅ ✅ ✅ ✅
 44 |   ping                       ✅   ✅   ❌   ✅   ✅   ✅
 45 |   traceroute
48 | [[}]]
49 |
50 | ## Mtr : Stop using ping and traceroute anymore
51 |
52 | *
53 |
 54 | * Compared to ping and traceroute, mtr provides many statistics about each hop,
 55 |   such as response time and packet-loss percentage.
56 |
57 | ```
58 | | $ mtr google.com # Opts:
59 | | # -n: Do not resolve hostname (inverse DNS)
60 | | # -c: number of pings
61 | | # -r: printable output
 62 | |                  # -i: interval in seconds between "pings"
63 | | # --tcp: Send tcp syn (vs ICMP ECHO)
64 | | # --udp: Send udp datagram (vs ICMP ECHO)
65 | | My traceroute [v0.95]
66 | | fedora (192.168.10.3) -> google.com (142.250.178.174) 2024-10-03T16:02:36+0200
67 | | Ping Bit Pattern:
68 | | Pattern Range: 0(0x00)-255(0xff), <0 random.
69 | | Host Loss% Snt Last Avg Best Wrst StDev
70 | | 1. _gateway 0.0% 7 3.5 3.6 3.2 4.3 0.4
71 | | 2. 192.168.144.1 57.1% 7 5.2 5.1 4.9 5.3 0.2
72 | | 3. 33.red-5-205-19.dynamicip.rima-tde.net 0.0% 6 5.1 6.3 5.1 7.9 1.2
73 | | 4. (waiting for reply)
74 | | 5. (waiting for reply)
75 | | 6. (waiting for reply)
76 | | 7. 5.53.1.82 0.0% 6 12.3 12.7 11.3 17.1 2.2
77 | | 8. 192.178.110.89 0.0% 6 12.0 12.1 11.4 12.7 0.5
78 | | 9. 142.251.54.155 0.0% 6 12.5 17.6 12.2 41.9 11.9
79 | | 10. mad41s08-in-f14.1e100.net 0.0% 6 11.8 12.0 11.1 13.7 1.0
80 | ```
81 |
82 |
83 | # OpenDaylight Magnesium SDN [[{]]
84 | @[http://www.enterprisenetworkingplanet.com//datacenter/opendaylight-magnesium-advances-open-source-software-defined-networking.html]
85 | - open source Software Defined Networking (SDN) controller platform.
86 |
 87 | - platform comprised of multiple modular component projects that users
 88 |   can choose to mix and match in different configurations as needed.
89 |
90 | ºDeterministic Networking Comes to OpenDaylightº
91 | - DetNet is a Deterministic Networking project which aims to provide a
92 | very precise, deterministic set of networking characteristics
93 | including guaranteed bandwidth and bounded latency. The need for
 94 |   deterministic attributes in networking is critical for real-time
 95 |   applications that need to execute with the exact same attributes
 96 |   every time.
97 |
98 | - The release notes for Magnesium state that DetNet includes a number
99 | of Layer3 deterministic networking and Layer2 Time-Sensitive
100 | Networking (TSN) techniques,
101 |
102 | "Architecturally, DetNet applications communicate with MD-SAL over
103 | RESTCONF API and the southbound DetNet controller enables MD-SAL to
104 | obtain topology information about DetNet bridges and to subsequently
105 | configure them by using the NETCONF protocol," the notes state. "The
106 | Magnesium release includes the first version of DetNet with time sync
107 | support for TSN, topology discovery for DetNet bridges, the
108 | southbound controller plugin, and features to manage the end-to-end
109 | information flow and service configuration, QoS, and optimal path
110 | calculation."
111 |
112 | - Plastic Brings Translation by Intent to SDN
113 | "The model-to-model translation problem is pervasive in writing SDN
114 | controller applications, both internally and in supporting
115 | microservices," the release notes state. "Plastic emphasizes writing
116 | translations intended to be as resilient to model changes as
117 | possible."
118 | [[}]]
119 |
120 | # SNMP → gRPC [[{snmp,TODO]]
121 | https://www.brighttalk.com/webcast/17628/384377?player-preauth=2BiCCR552sHx%2FC02RlHmQIwBkVlhwe7BEsWkRAzKHRM%3D
122 | Watch out SNMP! gRPC is here: Model-Driven Telemetry in the Enterprise
123 |
124 | Jeremy Cohoe, Technical Marketing Engineer, Cisco
125 | Feb 6 2020 | 31 mins
126 |
127 | We know the challenges of SNMP with its UDP transport, limited filtering and encoding options, and the tax on the device CPU and memory resources when multiple tools are polling. Now that gRPC Dial-Out model-driven telemetry is here, there are options for migrating to the newer TCP-based solution that is supported by YANG data models. These data models make finding specific data points or KPIs easy: the days of analyzing MIBs and OIDs are over.
128 | [[}]]
129 |
130 |
131 | # TODO
132 |
133 | ## Scaling networking
134 | https://www.infoq.com/news/2020/06/scaling-networking-digitalocean/
135 |
136 |
137 | ## iptables: 2 variants and their relationship with nftables
138 | https://developers.redhat.com/blog/2020/08/18/iptables-the-two-variants-and-their-relationship-with-nftables/132118/
139 |
140 | In Red Hat Enterprise Linux (RHEL) 8, the userspace utility program
141 | iptables has a close relationship to its successor, nftables. The
142 | association between the two utilities is subtle, which has led to
143 | confusion among Linux users and developers. In this article, I
144 | attempt to clarify the relationship between the two variants of
145 | iptables and its successor program, nftables.
146 |
147 | The kernel API
148 |
149 | In the beginning, there was only iptables. It lived a good, long life in Linux history, but it wasn’t without pain points. Later, nftables appeared. It presented an opportunity to learn from the mistakes made with iptables and improve on them.
150 |
151 | The most important nftables improvement, in the context of this article, is the kernel API. The kernel API is how user space programs the kernel. You can use either the nft command or a variant of the iptables command to access the kernel API. We’ll focus on the iptables variant.
157 | Two variants of the iptables command
158 |
159 | The two variants of the iptables command are:
160 |
161 | legacy: Often referred to as iptables-legacy.
162 | nf_tables: Often referred to as iptables-nft.
163 |
164 | The newer iptables-nft command provides a bridge to the nftables kernel API and infrastructure. You can find out which variant is in use by looking up the iptables version. For iptables-nft, the variant will be shown in parentheses after the version number, denoted as nf_tables:
165 |
166 | root@rhel-8 # iptables -V
167 | iptables v1.8.4 (nf_tables)
168 |
169 | For iptables-legacy, the variant will either be absent, or it will show legacy in parentheses:
170 |
171 | root@rhel-7 # iptables -V
172 | iptables v1.4.21
173 |
174 | You can also identify iptables-nft by checking whether the iptables binary is a symbolic link to xtables-nft-multi:
175 |
176 | root@rhel-8 # ls -al /usr/sbin/iptables
177 | lrwxrwxrwx. 1 root root 17 Mar 17 10:22 /usr/sbin/iptables -> xtables-nft-multi
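The version-string check above is easy to script; a small Python sketch classifying the sample `iptables -V` outputs quoted in the article (the command itself is not run here):

```python
def iptables_variant(version_output: str) -> str:
    """Classify `iptables -V` output as described above:
    '(nf_tables)' means iptables-nft; '(legacy)' or no suffix
    means iptables-legacy."""
    if "nf_tables" in version_output:
        return "iptables-nft"
    return "iptables-legacy"

print(iptables_variant("iptables v1.8.4 (nf_tables)"))  # → iptables-nft
print(iptables_variant("iptables v1.4.21"))             # → iptables-legacy
```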
178 |
179 | Using iptables-nft
180 |
181 | As I noted earlier, the nftables utility improves the kernel API. The iptables-nft command allows iptables users to take advantage of the improvements. The iptables-nft command uses the newer nftables kernel API but reuses the legacy packet-matching code. As a result, you get the following benefits while using the familiar iptables command:
182 |
183 | Atomic rules updates.
184 | Per-network namespace locking.
185 | No file-based locking (for example: /run/xtables.lock).
186 | Fast updates to the incremental ruleset.
187 |
188 | These benefits are mostly transparent to the user.
189 |
190 | Note: The userspace command for nftables is nft. It has its own syntax and grammar.
191 | Packet matching is the same
192 |
193 | It’s important to understand that while there are two variants of iptables, packet matching utilizes the same code. Regardless of the variant that you are using, the same packet-matching features are available and behave identically. Another term for the packet matching code in the kernel is xtables. Both variants, iptables-legacy and iptables-nft, use the same xtables code. This diagram provides a visual aid. I included nft for completeness:
194 |
195 | +--------------+ +--------------+ +--------------+
196 | | iptables | | iptables | | nft | USER
197 | | legacy | | nft | | (nftables) | SPACE
198 | +--------------+ +--------------+ +--------------+
199 | | | |
200 | ====== | ===== KERNEL API ======= | ======= | =====================
201 | | | |
202 | +--------------+ +--------------+
203 | | iptables | | nftables | KERNEL
204 | | API | | API | SPACE
205 | +--------------+ +--------------+
206 | | | |
207 | | | |
208 | +--------------+ | | +--------------+
209 | | xtables |--------+ +-----| nftables |
210 | | match | | match |
211 | +--------------+ +--------------+
212 |
213 | The iptables rules appear in the nftables rule listing
214 |
215 | An interesting consequence of iptables-nft using nftables infrastructure is that the iptables ruleset appears in the nftables rule listing. Let’s consider an example based on a simple rule:
216 |
217 | root@rhel-8 # iptables -A INPUT -s 10.10.10.0/24 -j ACCEPT
218 |
219 | Showing this rule through the iptables command yields what we might expect:
220 |
221 | root@rhel-8 # iptables -nL INPUT
222 | Chain INPUT (policy ACCEPT)
223 | target prot opt source destination
224 | ACCEPT all -- 10.10.10.0/24 0.0.0.0/0
225 |
226 | But it will also be shown in the nft ruleset:
227 |
228 | root@rhel-8 # nft list ruleset
229 | table ip filter {
230 | chain INPUT {
231 | type filter hook input priority filter; policy accept;
232 | ip saddr 10.10.10.0/24 counter packets 0 bytes 0 accept
233 | }
234 | }
235 |
236 | Note how the iptables rule was automatically translated into the nft syntax. Studying the automatic translation is one way to discover the nft equivalents of the iptables rules. In some cases, however, there isn’t a direct equivalent. In those cases, nft will let you know by showing a comment like this one:
237 |
238 | table ip nat {
239 | chain PREROUTING {
240 | meta l4proto tcp counter packets 0 bytes 0 # xt_REDIRECT
241 | }
242 | }
243 |
244 | Summary
245 |
246 | To summarize, the iptables-nft variant utilizes the newer nftables kernel infrastructure. This gives the variant some benefits over iptables-legacy while allowing it to remain a 100% compatible drop-in replacement for the legacy command. Note, however, that iptables-nft and nftables are not equivalent. They merely share infrastructure.
247 |
248 | It is also important to note that while iptables-nft can supplant iptables-legacy, you should never use them simultaneously.
249 |
267 |
268 | ## Scapy [[{TODO]]
269 | https://scapy.readthedocs.io/en/latest/introduction.html#what-makes-scapy-so-special
270 |
271 | Scapy is a Python program that enables the user to send, sniff,
272 | dissect and forge network packets. This capability allows
273 | construction of tools that can probe, scan or attack networks.
274 |
275 | First, with most other networking tools, you won’t build something
276 | the author didn’t imagine. These tools have been built for a
277 | specific goal and can’t deviate much from it. For example, an ARP
278 | cache poisoning program won’t let you use double 802.1q
279 | encapsulation. Or try to find a program that can send, say, an ICMP
280 | packet with padding (I said padding, not payload, see?). In fact,
281 | each time you have a new need, you have to build a new tool.
282 |
283 | Second, they usually confuse decoding and interpreting. Machines are
284 | good at decoding and can help human beings with that. Interpretation
285 | is reserved for human beings. Some programs try to mimic this
286 | behavior. For instance they say “this port is open” instead of
287 | “I received a SYN-ACK”. Sometimes they are right. Sometimes not.
288 | It’s easier for beginners, but when you know what you’re doing,
289 | you keep on trying to deduce what really happened from the
290 | program’s interpretation to make your own, which is hard because
291 | you lost a big amount of information. And you often end up using
292 | tcpdump -xX to decode and interpret what the tool missed.
293 |
294 | Third, even programs which only decode do not give you all the
295 | information they received. The vision of the network they give you is
296 | the one their author thought was sufficient. But it is not complete,
297 | and you have a bias. For instance, do you know a tool that reports
298 | the Ethernet padding
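What a tool like Scapy automates can be sketched by hand; a minimal Python (stdlib-only) example forging an ICMP echo-request packet, including the RFC 1071 ones-complement checksum:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement checksum, as used by ICMP."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:                    # fold carries back into 16 bits
        s = (s >> 16) + (s & 0xFFFF)
    return ~s & 0xFFFF

def icmp_echo(ident=1, seq=1, payload=b"ping!"):
    # type=8 (echo request), code=0, checksum computed over the whole packet
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo()
print(len(pkt), hex(pkt[0]))   # → 13 0x8  (13-byte packet, type 8 = echo request)
```

Sending it would still require a raw socket (root); Scapy wraps all of this (and every other layer) behind its `IP()/ICMP()` composition syntax.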
299 |
303 | [[}]]
304 |
305 |
306 | [[{monitoring.network.wireshark,qa.UX]]
307 | ## WireShark
308 |
309 | * world’s most popular and widely-used open-source and cross-platform
310 | network protocol analyzer.
311 | * Previously known as Ethereal.
312 | * Latest version is 4.0. (2022-10)
313 | * Features include:
314 | - powerful display filter syntax with support and wizard dialogs.
315 | (for example to filter just traffic to a port, dump in ASCII the
316 | HTTP traffic from client to server, ...)
317 | - GUI views for Packet Detail, Packet Bytes
318 | - Hex dump imports
319 | - MaxMind geolocation.
320 | - Support for AT_NUMERIC address type (v4+) that allows simple
321 | numeric addresses for protocols that don’t have a more
322 | common-style address approach,
323 | - JSON mapping for Protobuf messages (v4)
324 | - extcap passwords support in tshark and related cli tools.
325 | - DVB Selection Information Table (DVB SIT), (v4)
326 | - gRPC Web (gRPC-Web) (v4)
327 | - SSH File Transfer Protocol (SFTP) (v4)
328 | * REF:
329 | [[monitoring.network.wireshark}]]
330 |
331 | ## Termshark, Wireshark-like terminal interface for TShark
332 |
333 | * written in Go.
334 | * 2.0 release includes support for piped input, and stream reassembly,
335 | performance optimizations, ...
336 |
337 |
338 | [[{monitoring.network.iperf3]]
339 | ## iPerf3: Test network throughput
340 |
341 | *
342 |
343 | * cross-platform command-line-based program for performing real-time
344 | network throughput measurements.
345 |
346 | * It is one of the most powerful tools for testing the maximum
347 | achievable bandwidth in IP networks (supports IPv4 and IPv6).
348 | [[monitoring.network.iperf3}]]
349 |
350 |
--------------------------------------------------------------------------------
/linux_nftables.txt:
--------------------------------------------------------------------------------
1 | # NFTables
2 |
3 |
4 | For decades, iptables has been the preferred packet filtering system in the Linux kernel.
5 | Used extensively across the Kubernetes networking ecosystem, iptables is now on the way out and is expected to be removed from the next generation of Linux distributions.
6 |
7 | ... The successor to iptables -- nftables -- is ready to carry the torch instead, with a newly released beta kube-proxy implementation
8 | in v1.31 and network policy using Calico’s nftables backend.
9 |
10 |
--------------------------------------------------------------------------------
/linux_summary.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/earizon/DevOps/2f0a426941c4a9300edf52c94f676b02cbe29f63/linux_summary.pdf
--------------------------------------------------------------------------------
/linux_systemd.txt:
--------------------------------------------------------------------------------
1 | [[{linux.configuration.systemd]]
2 | # SystemD
3 |
4 | [[{configuration.systemd.101]]
5 |
6 | ## SystemD Service Configuration
7 |
8 | *
9 | *
10 | *
11 |
12 | ```
13 | | "SERVICE UNIT" "TARGETS"
14 | | -------------------------- -----------------
15 | | * createNew unit_collection
16 | | * run "wants"
17 | | * lifespan:daemon|run-once
18 | |
19 | | $ systemctl --type=service <··· Check unit_collections.
20 | |
21 | | $ systemctl status firewalld.service <··· Check status of service
22 | |
23 | | $ sudo systemctl isolate \ <··· Change to new runlevel (run-level)
24 | | multi-user.target
25 | |
26 | | $ sudo systemctl \
27 | | enable|start|stop|restart|disable \
28 | | firewalld.service
29 | |
30 | | $ cat /etc/systemd/system/MyCustomScript.service
31 | | [Unit]
32 | | Description = making network connection up
33 | | After = network.target
34 | | [Service]
35 | | ExecStart = /root/scripts/conup.sh
36 | | [Install]
37 | | WantedBy = multi-user.target
38 | |
39 | | $ sudo systemctl daemon-reload # <·· Don't forget (config stays on RAM)
40 | |
41 | | ◆ SYSTEMD CORE: manager, systemd
42 | |
43 | | ◆ SYSTEMD UTILITIES [[PM.TODO]]
44 | | · systemctl : main control tool from systemd.
 45 | | · journalctl   : Query systemd Journal
 46 | | · notify       : Notify Serv.Mgr. about start-up,
 47 | |                  completion/... daemon status changes
48 | | · analyze : analyze and debug system-manager.
49 | | · systemd-cgls : recursively shows contents of selected
50 | | control group hierarchy in a tree.
51 | | · systemd-cgtop : shows "top" control groups of local
52 | | control group hierarchy.
53 | | · loginctl : Control SystemD login Manager.
54 | | · systemd-nspawn: Spawn a command or OS in a light-weight
 55 | |                  container. In many ways it is similar
56 | | to chroot(1), but more powerful by fully
57 | | virtualizing FS hierarchy, process tree,
58 | | various IPC subsystems and host+domain name.
59 | | ("light docker alternative")
60 | |
61 | | ◆ SYSTEMD DAEMONS ◆ SYSTEMD TARGETS
62 | | · systemd : · bootmode · reboot · logind
63 | | · journald : · basic · multiuser · graphical
64 | | · networkd : · shutdown · dbus dlog · user-session
65 | | · logind : · display service
 66 | | · user sessions :
67 | ```
68 | [[configuration.systemd.101}]]
69 |
70 | * File name extensions for unit types
71 | ```
72 | | .target : group units. Used to call
73 | | other units that are responsible for
74 | | services, filesystems ...
75 | | (equivalent to the classical SysV runlevels)
76 | | .service : handle services that SysV-init-based distributions will typically
77 | | start or end using init scripts.
78 | | .(auto)mount : mounting and unmounting filesystems
79 | | .path : allow systemd to monitor files and directories specified
80 | | when an access happens in path, systemd will start the appropriate unit
81 | | .socket : create one or more sockets for socket activation.
82 | | service unit associated will start the service when a connection request
83 | | is received.
84 | ```
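A hypothetical .path + .service pair showing path-based activation (all file
names and paths below are invented for illustration):

```
| $ cat /etc/systemd/system/watch-upload.path
| [Unit]
| Description = Watch incoming upload directory
| [Path]
| PathExistsGlob = /var/incoming/*.csv
| Unit = process-upload.service
| [Install]
| WantedBy = multi-user.target
|
| $ cat /etc/systemd/system/process-upload.service
| [Unit]
| Description = Process uploaded CSV files
| [Service]
| Type = oneshot
| ExecStart = /usr/local/bin/process-uploads.sh
```

Enabling watch-upload.path makes systemd start the service whenever a
matching file appears, with no daemon polling the directory.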
85 |
86 | ## CONFIG. FILE LAYOUT
87 |
88 | (NOTE: /etc takes precedence over /usr)
*Maintainer    *: /usr/lib/systemd/system             ( + $ systemctl daemon-reload)
*Administrator *: /etc/systemd/system/[name.type.d]/  ( + $ systemctl daemon-reload)
*runtime       *: /run/systemd/system
92 |
93 | ## chkservice systemd ncurses (UI in terminal)
94 |
95 | *
96 |
97 |
[[{monitoring.journalctl,monitoring.101,security.audit.user]]
99 | ## Journalctl(find logs)
100 |
101 | ```
102 | | Display/filter/search system logs
103 | | # journalctl # <· all logs
104 | | # journalctl -b # <· Boot Messages
105 | | # journalctl -b -1 # <· Last Boot Messages
106 | | # journalctl --list-boots # <· list system boots
107 | | # journalctl --since "3 hours ago"  # <· Time range
108 | | "2 days ago" #
109 | | --until "2015-06-26 23:20:00" #
110 | | # journalctl -u nginx.service # <· by unit (can be specified multiple times)
111 | | # journalctl -f # <· Follow ("tail")
112 | | # journalctl -n 50 # <· most recent (50) entries
113 | | # journalctl -r # <· reverse chronological order
114 | | # journalctl -b -1 -p "crit" # <· By priority:
115 | | # -b -1 : FROM emergency
116 | | # -p "crit" : TO: Critical
117 | | # journalctl _UID=108 # <· By _UID
118 | | # journalctl -o json # -o: output format: [[qa.UX]]
119 | | short : (default), syslog style
120 | | short-monotonic: like short, but time stamp shown with precision
121 | | cat : very short, no date,time or source server names
122 | | json : json one long-line
123 | | json-pretty :
124 | | verbose :
125 | ```
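* `-o json` emits one JSON object per line, which makes the journal easy to
  post-process; a minimal Python sketch (the sample line below is illustrative,
  not real journal output):

```python
import json

# journalctl -o json prints one JSON object per line; each field of the
# journal entry (MESSAGE, _SYSTEMD_UNIT, PRIORITY, ...) becomes a key.
sample = ('{"MESSAGE": "Started nginx", '
          '"_SYSTEMD_UNIT": "nginx.service", "PRIORITY": "6"}')

def parse_journal(lines):
    return [json.loads(l) for l in lines if l.strip()]

entries = parse_journal([sample])
errors = [e for e in entries if int(e["PRIORITY"]) <= 3]  # err and worse
print(len(entries), len(errors))   # → 1 0
```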
126 |
127 | * NOTE: the journal is "synchronous": each time something is written it checks
128 |   whether there is space or old entries must be deleted (vs. rotating every N days, ...).
129 |
130 | Clean/Compact/Delete logs:
131 |
132 | ```
133 | | [[{troubleshooting.storage.logs}]]
134 | | $ sudo journalctl --vacuum-time=2d # <· Retain only last two days
135 | | $ sudo journalctl --vacuum-size=500M # <· Retain only last 500 MB
136 | ```
137 |
138 | [[monitoring.journalctl}]]
139 |
[[{monitoring.logs.rsyslog,security.audit.user,doc_has.comparative]]
141 | ## Rsyslog ("ancient" log system)
142 |
143 | *
144 | *
145 | [[monitoring.logs.rsyslog}]]
146 |
147 | [[{monitoring.logs.syslog]]
148 | ## The Rocket-fast Syslog Server
149 |
150 | * Year: 2004
151 | * (primary) author: Rainer Gerhards
152 | * Implements and extends syslog protocol (RFC-5424) [[standards.RFC_5424]]
153 | Extracted from
154 | """... Logging formats themselves can vary pretty widely, despite
155 | the existence of standards like RFC 5424 and its predecessor RFC
156 | 3164. Windows has its own system based around the Windows Event Log.
157 | Journald has a wide set of output formats, including JSON. Cisco
158 | device logs typically follow their own special format, which might
159 | require special consideration for some systems. And of course there
160 | are competing standards like the Common Event Format. """
161 | * Adopted by RedHat, Debian*, SuSE, Solaris, FreeBSD, ...
162 | * Replaced by journald in Fedora 20+
163 |
164 | Important extensions include:
165 | * ISO 8601 timestamp with millisecond and timezone
166 | * addition of the name of relays in the host fields
167 | to make it possible to track the path a given message has traversed
168 | * reliable transport using TCP
169 | * GSS-API and TLS support
170 | * logging directly into various database engines.
171 | * support for RFC 5424, RFC 5425, RFC 5426
172 | * support for RELP (Reliable_Event_Logging_Protocol)
173 | * support for buffered operation modes:
174 | messages are buffered locally if the receiver is not ready
175 | * complete input/output support for systemd journal
176 | * "Infinite" logs. Can store years of logs from hundreds of machines.
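The RFC 5424 header layout that the extensions above build on can be sketched;
a minimal Python example (the structured-data element is left as "-", and the
host/app values are illustrative):

```python
from datetime import datetime, timezone

def rfc5424(msg, facility=1, severity=6, host="myhost", app="myapp", ts=None):
    """Minimal RFC 5424 line: <PRI>VERSION TIMESTAMP HOST APP PROCID MSGID SD MSG
    where PRI = facility * 8 + severity and '-' marks omitted fields."""
    pri = facility * 8 + severity
    # ISO 8601 timestamp with millisecond precision and timezone (see above)
    ts = ts or datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    return f"<{pri}>1 {ts} {host} {app} - - - {msg}"

line = rfc5424("backup finished", ts="2024-01-02T03:04:05.000+00:00")
print(line)
# → <14>1 2024-01-02T03:04:05.000+00:00 myhost myapp - - - backup finished
```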
177 | [[monitoring.logs.syslog}]]
178 |
179 | ## Journald [[{monitoring.logs.journald]]
180 |
181 |
182 | * system service for collecting and storing log data, introduced with systemd.
183 | * easier for admins to find relevant info.
184 | * replaces simple plain text log files with a special file format
185 | optimized for log messages with index-like queries,
186 | adding Structure to Log Files.
187 | * It does *not* include a well-defined remote logging implementation,
188 | relying on existing syslog-protocol implementations to relay
189 | to a central log host (and **losing most of the benefits**).
190 | * retains full syslog compatibility by providing the same API in C,
191 | supporting the same protocol, and also forwarding plain-text versions
192 | of messages to an existing syslog implementation.
193 | Obviously the format, as well as the journald API allow for structured data.
194 |
195 | Syslog-protocol Problems:
196 | * syslog implementations (usually) write log messages to plain text files
197 |   that lack structure.
198 | * syslog protocol does *NOT* provide a means of separating messages
199 | by application-defined targets (for example log messages per virt.host)
200 | This means that, for example, web servers generally write their own access
201 | logs so that the main system log is not flooded with web server status messages.
202 | * log files write messages terminated by a newline:
203 | (very) hard for programs to emit multi-line information such as backtraces
204 | when an error occurs, and log parsing software must often do a lot of work
205 | to combine log messages spread over multiple lines.
206 |
207 | * journalctl:
208 | * The journald structured file format does not work well with standard
209 |     UNIX tools optimized for plain text; the journalctl tool is used instead.
210 | * very fast access to entries filtered by:
211 | date, emitting program, program PID, UID, service, ... [[doc_has.keypoint]]
212 |     (But this only works on a single machine: the indexes are lost when
213 |     forwarding to a remote centralized log system.)
214 | * Can also access backups in single files or directories of other systems.
215 |
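The filtered queries above can be sketched with a few journalctl invocations; unit names, IDs, dates and paths below are placeholder examples:

```
journalctl -u nginx.service                          # entries for one service unit
journalctl _PID=1234                                 # entries emitted by a given PID
journalctl _UID=1000                                 # entries from processes of a given UID
journalctl --since "2024-01-01" --until "2024-01-02" # date range
journalctl -k -b -1                                  # kernel messages, previous boot
journalctl -f                                        # follow ("tail -f" style)
journalctl -D /mnt/backup/journal                    # read a journal dir. from another system
```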
216 | ## Modern logging and Journald:
217 | * Modern architectures use many systems where it becomes impractical to
218 | read logs on individual machines.
219 | * Centralized logs are usually stored in a (time-series) database that
220 |   addresses many of the same issues as journald without its problems.
221 | * Journald allows applications to send key-value fields that the
222 |   centralized systems could use directly instead of relying on parsing heuristics.
223 | * Sadly, journald does not come with a usable remote logging solution.
224 | * systemd-journal-remote is more of a proof-of-concept than an actually
225 | useful tool, lacking good authentication among other things.
226 | [[monitoring.logs.journald}]]
227 |
228 |
229 | [[{security.101]]
230 | ## Systemd Service hardening
231 |
232 | *
233 |
234 | Systemd service sandboxing and security hardening 101
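As an illustrative sketch (not taken from the referenced article): common sandboxing directives in a unit's [Service] section look like this. The directive names are real systemd options; the binary path and values are examples only:

```
[Service]
ExecStart=/usr/bin/mydaemon          # hypothetical service binary
NoNewPrivileges=yes                  # process and children can never gain privileges
ProtectSystem=strict                 # /usr, /etc, ... mounted read-only for the unit
ProtectHome=yes                      # hide /home, /root, /run/user
PrivateTmp=yes                       # private /tmp and /var/tmp namespace
PrivateDevices=yes                   # minimal /dev, no physical devices
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
CapabilityBoundingSet=               # empty set: drop all capabilities
```

`systemd-analyze security UNIT` scores a unit's exposure and hints at which of these directives are missing.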
235 | [[security.101}]]
236 |
237 | [[{job_control.task_scheduling.systemd-run,PM.TODO]]
238 | ## systemd-run
239 |
240 | ... Turns out, I can run arbitrary programs as background services
241 | with a simple systemd-run ...
242 |   It's officially my new favorite way to daemonize long-running tasks:
243 | - No more forgotten nohup
244 | - Handy resource limits
245 | - Status and logs out of the box
246 |
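The points above can be sketched with a transient unit; unit names and the command are placeholders:

```
# run a long task as a transient service unit with a memory cap
systemd-run --unit=myjob -p MemoryMax=512M -- sleep 3600

# status and logs out of the box, like any other unit
systemctl status myjob
journalctl -u myjob

# same idea without root, in the per-user service manager
systemd-run --user --unit=myjob2 -- sleep 3600
```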
247 |
248 | ## run0
249 |
250 | Symbolic link to systemd-run that imitates "sudo", but is safer since
251 | no execution context is inherited from the untrusted client.
252 | [[job_control.task_scheduling.systemd-run}]]
253 |
254 |
255 |
256 | [[linux.configuration.systemd}]]
257 |
--------------------------------------------------------------------------------
/linux_who_is_who.txt:
--------------------------------------------------------------------------------
1 | # Who-is-Who: [[{PM.who_is_who,PM.WiP]]
2 |
3 |
4 | (Necessarily incomplete but still quite pertinent list of core people and companies)
5 | - Linus Torvalds:
6 |   - He "loves" C++ and microkernels; author of a Unix-like
7 |     hobby project for x86. Nothing serious.
8 | -
9 | -
10 | - Alan Cox:
11 |
12 | - Ingo Molnár:
13 |
14 | - Completely Fair Scheduler
15 | - in-kernel TUX HTTP / FTP server.
16 | - thread handling enhancements
17 | - real-time preemption patch set
18 | (with Thomas Gleixner and others)
19 | - Moshe Bar: Author of the "Linux Internals" book (covering, among
20 |   other topics, Ingo Molnár's scheduler).
21 | - Serial entrepreneur, co-founder of Qumranet (sold to Red Hat),
22 | which created the industry standard KVM hypervisor, powering nearly
23 | all cloud offerings. He also co-founded software company XenSource,
24 | the makers of the Xen hypervisor (sold to Citrix).
25 |   Previously a regular columnist for Linux Journal, Byte.com, Dr. Dobb's.
26 | He started learning UNIX on a PDP-11 with AT&T UNIX Release 6 back
27 | in 1981.
28 | Today he works as CEO of https://www.codenotary.com.
29 | Moshe is at this time a board member of Tinaba SpA, Graylog Inc,
30 | ePrice SpA, Quandoo AG, Cascade Partners LP and Optigrowth Sarl.
31 | - Patrick Volkerding, creator of Slackware, the first Linux
32 | distribution.
33 |
34 | - Marc Ewing, creator of Red Hat
35 |
36 | - Robert Love:
37 |
38 | Ximian, Linux Desktop Group, SuSe Novell, worked on GNOME,
39 |   in 2007 joined Google to work on Android, engineering several
40 | kernel and system-level solutions like its novel shared memory
41 | subsystem, ashmem. From 2014 Love continues to work at Google
42 | as Director of Engineering for Search Infrastructure.
43 | - Andries Brouwer
44 |
45 | - Brendan D. Gregg: Author of many great books and posts on
46 | Linux observability and monitoring
47 |
48 | - Rusty Russell:
49 |
50 |   work on the Linux kernel's successful networking subsystem
51 | (netfilter, iptables) and the Filesystem Hierarchy Standard.
52 | - Sebastien Godard (http://pagesperso-orange.fr/sebastien.godard/),
53 |   author of the sysstat utilities for Linux.
54 | - Many, many others, ...
55 | [[PM.who_is_who}]]
56 |
--------------------------------------------------------------------------------
/vagrant.txt:
--------------------------------------------------------------------------------
1 | # Vagrant (VMs as code):[[{vagrant,01_PM.low_code,101,troubleshooting]]
2 |
3 | - External Links:
4 | -
5 | - CLI Reference
6 | -
7 | - Providers list
8 | - *Boxes Search*
9 | - Networking
10 |
11 | - Vagrant Boxes: Pre-built VM images that avoid a slow and tedious setup process.
12 |   - They can be used as a base image to clone & customize a new image.
13 | (Specifying the box to use for your Vagrant environment is always the first
14 | step after creating a new Vagrantfile).
15 |
16 |
17 | ```
18 | | # SHARING ---------------------------------------------------------------------------
19 | |
20 | |
21 | | $ vagrant share <- share a Vagrant environment with anyone in the World.
22 | |
23 | | 3 primary (not mutually exclusive) sharing modes -------------------------------------
24 | | URL_POINTING_TO_VAGRANT_VM <· URL "consumer" does not need Vagrant.
25 | | Useful to test webhooks, demos with clients, ...
26 | |
27 | | $ vagrant connect --ssh # <· instant SSH access with local/remote client
28 | | (pair programming, debugging ops problems, etc....)
29 | | $ vagrant connect # <· expose tcp-port for general-sharing (local/remote)
30 | ```
31 |
32 | ```
33 | | $ vagrant "COMMAND" -h <- List help on command
34 | | $ vagrant list-commands <- Most frequently used commands
35 | | ┌───────────────────────────┴───────────────────────────┘
36 | | v
37 | | box manages boxes: installation, removal, etc.
38 | | destroy stops and deletes all traces of the vagrant machine
39 | | global-status outputs status Vagrant environments for this user
40 | | halt stops the vagrant machine
41 | | help shows the help for a subcommand
42 | | init initializes new environment (new Vagrantfile)
43 | | login log in to HashiCorp's Vagrant Cloud
44 | | package packages a running vagrant environment into a box
45 | | plugin manages plugins: install, uninstall, update, etc.
46 | | port displays information about guest port mappings
47 | | powershell connects to machine via powershell remoting
48 | | provision provisions the vagrant machine
49 | |   push            deploys environment code → (configured) destination
50 | | rdp connects to machine via RDP
51 | | reload restart Vagrant VM, load new Vagrantfile config
52 | | resume resume a suspended vagrant machine
53 | | snapshot manages snapshots: saving, restoring, etc.
54 | | ssh connects to machine via SSH
55 | | ssh-config outputs OpenSSH connection config.
56 | | status outputs status of the vagrant machine
57 | | suspend suspends the machine
58 | | up starts and provisions the vagrant environment
59 | | validate validates the Vagrantfile
60 | | version prints current and latest Vagrant version
61 | |
62 | | OTHER COMMANDS
63 | | cap checks and executes capability
64 | | docker-exec attach to an already-running docker container
65 | | docker-logs outputs the logs from the Docker container
66 | | docker-run run a one-off command in the context of a container
67 | | list-commands outputs all available Vagrant subcommands, even non-primary ones
68 | | provider show provider for this environment
69 | | rsync syncs rsync synced folders to remote machine
70 | | rsync-auto syncs rsync synced folders automatically when files change
71 | ```
72 |
73 | ```
74 | | # QUICK START --------------------------------------------------------------
75 | | $ mkdir vagrant_getting_started
76 | | $ cd vagrant_getting_started
77 | | $ vagrant init <- create new Vagrantfile
78 | ```
79 |
80 | ```
81 | | "Advanced" Vagrantfile Example: 3-VM Cluster using VirtualBox --------------
82 | |
83 | | # -*- mode: ruby -*-
84 | | # vi: set ft=ruby :
85 | |
86 | | VAGRANTFILE_API_VERSION = "2"
87 | |
88 | | Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
89 | | # Use the same key for each machine
90 | | config.ssh.insert_key = false
91 | |
92 | | config.vm.define "vagrant1" do |vagrant1|
93 | | vagrant1.vm.box = "ubuntu/xenial64"
94 | | vagrant1.vm.provider :virtualbox do |v|
95 | | v.customize ["modifyvm", :id, "--memory", 1024]
96 | | end
97 | | vagrant1.vm.network "forwarded_port", guest: 80, host: 8080
98 | | vagrant1.vm.network "forwarded_port", guest: 443, host: 8443
99 | | vagrant1.vm.network "private_network", ip: "192.168.0.1"
100 | | # Provision through custom bootstrap.sh script
101 | | config.vm.provision :shell, path: "bootstrap.sh"
102 | | end
103 | | config.vm.define "vagrant2" do |vagrant2|
104 | | vagrant2.vm.box = "ubuntu/xenial64"
105 | | vagrant2.vm.provider :virtualbox do |v|
106 | | v.customize ["modifyvm", :id, "--memory", 2048]
107 | | end
108 | | vagrant2.vm.network "forwarded_port", guest: 80, host: 8081
109 | | vagrant2.vm.network "forwarded_port", guest: 443, host: 8444
110 | | vagrant2.vm.network "private_network", ip: "192.168.0.2"
111 | | end
112 | | config.vm.define "vagrant3" do |vagrant3|
113 | | vagrant3.vm.box = "ubuntu/xenial64"
114 | | vagrant3.vm.provider :virtualbox do |v|
115 | | v.customize ["modifyvm", :id, "--memory", 2048]
116 | | end
117 | | vagrant3.vm.network "forwarded_port", guest: 80, host: 8082
118 | | vagrant3.vm.network "forwarded_port", guest: 443, host: 8445
119 | | vagrant3.vm.network "private_network", ip: "192.168.0.3"
120 | | end
121 | | end
122 | ```
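With the Vagrantfile above in place, the cluster can be driven as a whole or per machine (machine names match the `config.vm.define` blocks):

```
vagrant up                 # boot and provision all three VMs
vagrant up vagrant1        # or just one machine
vagrant ssh vagrant2       # SSH into a specific machine
vagrant status             # per-machine state of this environment
vagrant halt               # stop all machines
vagrant destroy -f         # delete all traces, without confirmation
```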
123 | [[vagrant}]]
124 |
--------------------------------------------------------------------------------
/vim.txt:
--------------------------------------------------------------------------------
1 | [[{vim]]
2 |
3 | # Vim Cheat Sheet
4 | vim History:
5 | - Core Author: Bram Moolenaar (Bram@Vim.org)
6 |
7 | ## CONCEPTS:
8 | ===========
9 | vim buffer: in-memory buffer of file on disk.
10 | vim window: graphical viewport on a buffer.
11 | buffer N <--> M window (viewport)
12 | vim tab: collection of windows.
13 |
14 | :edit $file (:e ) open file in current window
15 | :split $file (:sp) open file in a new horizontal split
16 | (:vsplit for new vertical split)
17 |
18 | EXIT VIM:
19 | :w <- (w)rite to current file. (dump current memory buffer to file on disk)
20 | :w {file} <- (w)rite buffer to {file}
21 | :wa <- write all buffers
22 | :q  <- quit (! to force, losing non-written modifications in buffer)
23 |
24 |
25 | gf        : edit file under your cursor in current window. (Ctrl+^ to move back to previous buffer)
26 | Ctrl+w gf : edit file under your cursor in new tab.
27 | Ctrl+w f  : edit file under your cursor in new split.
28 |
29 | Ctrl+^ : navigate to previous (alternate) buffer in current window
30 | :h CTRL-^ and :h alternate-file for more info.
31 |
32 | TIP: Learn to use :b and :sb to switch buffers instead of cycling! (See :h :b)
33 |
34 | # BASICS: │ *VISUAL("EASY") MODE*: │
35 | ───────────────────────────────────────────────┼─────────────────────────────────────┤
36 | [Esc] key ← exit edit/visual/... mode │ v: start V. (mark) mode │
37 | :help "keyword" ← open help for keyword │ (then apply command on it) │
38 | :ls ← list open buffers │Ctrl: Enter Visual Block Mode. Very │
39 | :b 3            ← edit buffer 3     │ + v powerful to draw ASCII diagrams,│
40 | u | Ctrl + r ← Undo | Redo │ edit tabbed data, repetitive │
41 | . ← repeat last command │ "switch-case like" code,... │
42 | ...! ← '!' forces cmd ... │ > ← shift text right │
43 | execution │ < ← shift text left │
44 | :!sh ← pass visual-block/range as STDIN to
45 | sh/python/... and replace block/range
46 | with STDOUT from sh/python/... execution
47 |
48 | ☞ Any action(insert/delete/...) prefixed with number "N" is repeated "N" times☜
49 |
50 | INSERT/EDIT: *(BOOK)MARKING* ☞FAST MOVE☜ among code
51 | (i)nsert before cursor │ (r)eplace single character │ ma ← set local ( line) (m)ark
52 | (I)nsert start of line │ (J)oin line below with space │ mA ← set global (file,line)book(m)ark
53 | (a)ppend after cursor      │ (gJ) "    "   without space │ 'a ← go to 'a' (file+line)
54 | (A)ppend at end-of-line │ (cc)hange/replace entire line │ :marks ← list "book"marks
55 | (o)pen(add) new line below │ (C)hange/ " to end-of-line
56 | (O)pen(add) new line above │ (ciw)hange/replace word │ *MACROS*
57 | (ea) append at end-of-word │ (cw)hange/replace to end-of-word │ qa ← start recording macro 'a'
58 | (q)uit|close current pane │ gwip - reflow paragraph │ q ← stop recording
59 | (buffer still in memory)│ │ @a ← run macro 'a'
60 | (d)elete marked text │ │ @@ - rerun last macro
61 | (R)eplace (vs insert) │ │ TIP: macros are really powerful, they
62 | │ │ can be used even as a basic data
63 | │ │ processing engine.
64 |
65 | ● MOVE RELATIVE TO:
66 | ▶ cursor ▶ word ▶ screen
67 | │ ↑k │ w: forward to w.start │ H: S. top
68 | └ ←h ↓j →l │ b: backward to w.start │ M: S. middle
69 | ─ └ e: forward to w. end │ L: S. low
70 | ▶ window: +... │ Ctrl+b: back full
71 | │ ↑k ▶ Scroll: │ Ctrl+u: back half
72 | │ ←h ↓j →l │ Ctrl + e : down 1 line │ Ctrl+f: for. full
73 | │ _: "maximize" win └ Ctrl + y : up 1 line └ Ctrl+d: for. half
74 | │ =: make all win.
75 | │ same size
76 | │ :h CTRL-W : Help
77 | └
78 | ▶ code-block: ▶ line:
79 | │ %: jump to "peer" │ 0: start-of-l.
80 | │ block-delimiter │ $: end-of-l.
81 | │   ({[←...→)}]        │ 9gg: line 9
82 | │ {: prev. paragraph │ gg: first line of d.
83 | │ or block. │ G: last line of d.
84 | │ }: next paragraph └ 4G: 4th line of d.
85 | └ or block
86 |
87 | *SEARCH*(regex) *REPLACE* (re gex) *YANK("COPY") AND PASTE* *REGISTERS* (C&P for the pros)
88 | /... ← forward search │:%s/XX/YY/... ← (s)ubstitute XX→YY │ yy ← yank line │ :reg ← show registers content
89 | ?... ← backward  "    │  ^       ^  ┌ i == ignore case  │ yw ← yank to next word    │ "xy  ← yank into register x
90 | (n)ext: repeat search │ │ flags├ g == ** * │ y$ ← yank to end-of-line │ (persist @ ~/.viminfo)
91 | (N) backward │ │ └*c == confirm* │ (p)aste after cursor │ "xp ← paste "x"─reg.
92 | │ │ ┌ % : All file │ (P)aste before cursor │ (y)ank("copy") marked text
93 | │ └ range├ 3,$: line 3 to end │ dd ← cut line
94 | │ └ ... │ dw ← cut to next word
95 | │ ** *: all matches in line │ (D)elete to end-of-line
96 | │ vs just first │ x ← cut char. on cursor
97 |
98 |
99 | *MULTIFILE EDIT* *TABS* *SPELL CHECK (7.0+)*
100 | :e file_name (current pane, new buffer) │ :tabn ... open (new/existing)file │ :set spell spelllang=en_us
101 | :bn (b)uffer (n)ext in current pane │ in new tab │ :setlocal spell spelllang=en_us
102 | :bp (b)uffer (p)rev in current pane   │ Ctrl+wT: move current window to new tab │   (^ prefer on mixed prose/code)
103 | :bd (b)uffer (d)elete │ gt: (g)o next (t)ab (gT for prev) │ :set nospell ← turn-off
104 | :ls list buffers                      │ #gt      (g)o to tab #                  │ - Color code:
105 | :sp file_or_buffer hori.(s)plit→edit │ :tabm # (m)ove tab to #th pos. │ · Red : misspelled words
106 | :vsp file_or_buffer vert.(s)plit→edit │ │ · Orange: rare words
107 | Ctrl+ws: split window horizontally │ │ · Blue : words NOT capitalized
108 | Ctrl+wv: split window vertically │ │]s [s ← Move to next/prev.error
109 | Ctrl+w←: move to (←↑→↓) window │ │ z= : Show alternatives
110 | │ zg : add to dict.
111 | │ zw : remove from dict
112 |
113 |
114 | # Package Manager (vim 8.0+/Neovim):
115 | - Package manager: Replaces ad-hoc plugin managers before 8.0
116 | (bundler, vim-plug,... *not needed anymore*)
117 | │Package│ 1 ←···→ 1+ │plugin│ 1 ←····→ 1+ │vimscript│
118 |
119 | - Use like:
120 |   $*$ mkdir -p ~/.vim/pack/$name/start*  ← $name : arbitrary name, usually local or plugin
121 | ~/.local/share/nvim/site/pack/$name/start for neovim
122 | $*$ cd ~/.vim/pack/$name/start*
123 |   $*$ git clone $url_git_repo *           ← use $*$ git pull ...* to update/switch package version
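The layout above can be scripted; the directory name 'local' and the plugin chosen are arbitrary examples:

```
# Vim 8+ native package layout: anything under pack/*/start/ is
# auto-loaded at startup (no plugin manager needed).
PACKDIR="${HOME}/.vim/pack/local/start"
mkdir -p "${PACKDIR}"

# drop any plugin repo inside it, e.g.:
#   git clone --depth 1 https://github.com/dense-analysis/ale.git "${PACKDIR}/ale"
# then, inside vim, regenerate help tags for all packages:
#   :helptags ALL

ls -d "${PACKDIR}"
```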
124 |
125 | # ALE: (A)synchronous (L)int (E)ngine -> """lint while you type""".
126 | - Vim Script Plugin (vs CoC NodeJS)
127 | - Install:
128 | $ mkdir -p ~/.vim/pack/git-plugins/start
129 | $ git clone --depth 1 \
130 | https://github.com/dense-analysis/ale.git \
131 | ~/.vim/pack/git-plugins/start/ale
132 |
133 |
134 |
135 | - :help ale-options <·· help on global options
136 | - :help ale-integration-options <·· help on options for particular linters.
137 | - :help ale-fix <·· help on how to fix files with ALE.
138 |   Ctrl+k:  ale_previous_wrap    (nmap <silent> <C-k> <Plug>(ale_previous_wrap))
139 |   Ctrl+j:  ale_next_wrap        (nmap <silent> <C-j> <Plug>(ale_next_wrap))
140 |
141 | - :ALEFix               "fix" current buffer using the configured fixers (g:ale_fixers)
142 | - :ALEFixSuggest ··> suggest some supported tools for fixing code.
143 | - :ALEGoToDefinition (with LSP enabled)
144 | - :ALEFindReferences (with LSP enabled)
145 | - :ALEHover  (with LSP enabled, show brief info about symbol under the cursor)
146 | - :ALESymbolSearch (with LSP enabled)
147 | - :ALERename (LSP? refactor)
148 | - :ALEFileRename (LSP? rename file + fix import paths : tsserver only).
149 | - :ALECodeAction (execute actions on cursor or visual range, like automatically fixing errors).
150 | - :ALEInfo Show linters configured for current file
151 |
152 | - ALE makes use of job control functions and timers (NeoVim 0.2 and Vim 8+)
153 | to run linters in the contents of text-buffers (in memory "real-time"
154 | buffers vs saved-to-disk).
155 |
156 |
157 |
158 |
159 | # CoC Plugin: Visual Code Intellisense support: [[{]]
160 |
161 | - popular Vim plugin written in TypeScript.
162 | WARN: dependent on the npm ecosystem for providing full IDE features to Vim.
163 | - Both ALE and coc.nvim implement Language Server Protocol (LSP) clients
164 |   supporting diagnostics (linting with a live server).
165 | - easiest way to use ALE and coc.nvim together:
166 | REF:
167 | 1) :CocConfig <··· Will open 'coc.nvim' config file
168 | + "diagnostic.displayByAle": true <··· add this line to inform coc.nvim to send diagnostics to ALE
169 | so ALE controls how all problems are presented to you.
170 | You can further configure how problems appear
171 | by using all of the settings mentioned in ALE's
172 | help file, including how often diagnostics are
173 | requested. See :help ale-lint.
174 |
175 | 2) Edit ~/.vimrc and add:
176 | + let g:ale_disable_lsp = 1 <··· Disable LSP features in ALE (let coc.nvim in charge of it)
177 | ^^^^^^^^^^^^^^^^^^^^^^^^^
178 | WARN: before plugins are loaded!!!
179 |
180 | Optional:
181 | 'b:ale_disable_lsp' can also be set/unset in 'ftplugin' files to en/dis-able LSP features in ALE for different filetypes.
182 | - add support for Language Server Protocol (LSP) allowing to reuse "Visual Studio Plugins"
183 |
184 | *CoC Tunning* *CoC Troubleshooting*
185 | ─ Open definition/references/... in new tab: │- Try ':CocOpenLog' to find anything there that
186 | → :CocConfig │ might indicate the issue.
187 | → Fix entry: Ex: │- Ex: Fix CoC JAVA:
188 | "coc.preferences.jumpCommand": "tab drop"│ → :CocConfig → Fix entry:
189 | │ "java.home": "/usr/lib/jvm/java-14-openjdk"
190 | │ → :CocCommand java.clean.workspace
191 | │ → restart coc
192 | │- :CocDisable
193 | [[}]]
194 |
195 |
196 | # Vim 9.0 (2022-06) [[{01_PM.NEW]]
197 | - many small additions.
198 | - Vim9 script.
199 | - Much better performance by compiling commands (up to x10/x100)
200 | - closer to common languages like JS/TypeScript/Java.
201 | - Splitting up large script is much simpler with 'import/export' support
202 |   - WARN: Vim9 script is not 100% backwards compatible, but legacy
203 |     script support still works (no plans to drop it):
204 | - # vs " for comments.
205 | - dictionary not available as function arg.
206 | - error control.
207 | - def vs function!
208 | - argument+return types must be specified.
209 | - Code coverage dramatically increased.
210 | - Vim 8.2 (2019-12)
211 |
212 | Vim is Charityware. You can use and copy it as much as you like, but
213 | you are encouraged to make a donation for needy children in Uganda.
214 | Please visit the ICCF web site for more information:
215 | https://iccf-holland.org
216 | [[01_PM.NEW}]]
217 |
218 | ## vim tips [[{01_PM.TODO]]
219 | http://rayninfo.co.uk/vimtips.html
220 |
221 | [[qa.UX]]
222 | ## YouCompleteMe
223 |
224 | *
225 | - an identifier-based engine that works with every programming language,
226 | - a powerful clangd-based engine that provides native semantic code
227 | completion for C/C++/Objective-C/Objective-C++/CUDA (from now on
228 | referred to as "the C-family languages"),
229 | - a Jedi-based completion engine for Python 2 and 3,
230 | - an OmniSharp-Roslyn-based completion engine for C#,
231 | - a Gopls-based completion engine for Go,
232 | - a TSServer-based completion engine for JavaScript and TypeScript,
233 | - a rls-based completion engine for Rust,
234 | - a jdt.ls-based completion engine for Java.
235 | - a generic Language Server Protocol implementation for any language
236 |
237 | YCM also provides semantic IDE-like features in a number of languages, including:
238 |
239 | - displaying signature help (argument hints) when entering the
240 | arguments to a function call (Vim only)
241 | - finding declarations, definitions, usages, etc. of identifiers,
242 | - displaying type information for classes, variables, functions etc.,
243 | - displaying documentation for methods, members, etc. in the preview
244 | window, or in a popup next to the cursor (Vim only)
245 | - fixing common coding errors, like missing semi-colons, typos, etc.,
246 | - semantic renaming of variables across files,
247 | - formatting code,
248 | - removing unused imports, sorting imports, etc.
249 | See Autocomplet vs CoC comparative video:
250 | https://www.youtube.com/watch?v=ICU9OEsNiRA
251 |
252 |
253 | ## Clean vim registers effectively
254 | https://stackoverflow.com/questions/19430200/how-to-clear-vim-registers-effectively
255 |
256 | ## Regex in vim
257 | https://stackoverflow.com/questions/17731244/how-to-regex-in-vim
258 |
259 | ## vim bookmarks
260 | *
261 |
262 | [[01_PM.TODO}]]
263 |
264 | [[vim}]]
265 |
--------------------------------------------------------------------------------
/windows.txt:
--------------------------------------------------------------------------------
1 | [[{windows]]
2 | ## Apropos:
3 |
4 | * miserable collection of unordered Windows Administration Notes.
5 |   Content is versioned in git. Commits, issues and pull-requests welcome!
6 |
7 |
8 |
9 | # Install IIS web server
10 |
11 | ```
12 | | $ Install-WindowsFeature -name Web-Server -IncludeManagementTools
13 | ```
14 |
15 | [[{]]
16 | # Scoop Package manager
17 |
18 | *
19 | * like "apt" or Chocolatey: a command-line package manager for Windows.
20 |
21 |
22 | ## Install chocolatey:
23 |
24 | * From powershell admin console:
25 | ```
26 | @powershell -NoProfile -ExecutionPolicy Unrestricted
27 | -Command
28 | "iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" &&
29 | SET PATH=%PATH%;%systemdrive%\chocolatey\bin
30 | ```
31 | [[}]]
32 |
33 | [[{troubleshooting.dtrace]]
34 | # DTrace on Windows 10
35 |
36 | *
37 | [[troubleshooting.dtrace}]]
38 |
39 | [[{]]
40 | # local-kernel-debugging
41 |
42 | *
43 | [[}]]
44 |
45 | [[{]]
46 | # icacls
47 |
48 | *
49 | [[}]]
50 |
51 | [[{101.group_policy]]
52 | # windows-group-policy
53 |
54 | * Windows Group Policy: What Is It and How to Use It:
55 |
56 | [[101.group_policy}]]
57 |
58 | [[{]]
59 | # PowerShell admin script collection
60 |
61 | *
62 | ```
63 | eventidscript.ps1 Getadcomputerbyfilter.ps1
64 | 90daysnologon.ps1 HealthCheck.ps1
65 | ADExpirationDate.ps1 HotfixOneLinerScriptRemote.ps1
66 | AWSA.ps1 IPRangeaddressping.ps1
67 | AddUserstoGroupsADBulk.ps1 IPServerInfo.ps1
68 | AddmemberstoADGroupbyimportingmembersfromCSV.ps1 ImportSysAdminsuite.ps1
69 | AddAdGroupList.ps1 Ipconfig-Release-Renew.ps1
70 | AddUserstoGroup.ps1 Kill-process.ps1
71 | AdministratorsandPowerusersonRemoteComputerstoexcelsheet.ps1 LICENSE
72 | CleanMatter.ps1                                              README.md
73 | ClearMobileFielfromList.ps1 TracingtheSourceofAccountLockouts.ps1
74 | CopyFoldersPermissionstoOtherFolder.ps1 TrustRelationshipFix.ps1
75 | CopyGroupMemberstoAnotherGroup.ps1 Uninstall-Software.ps1
76 | CopymembersfromGroupstoanothergroup.ps1 Usermembership.ps1
77 | Count_Files_Folders.ps1 WhoIsOwner.ps1
78 | DJlastserverreboottime.ps1 WindowsUpdate.ps1
79 | DJ.ps1                                                       _config.yml
80 | Directory_Count.ps1 checklogicalspace.ps1
81 | Disable-Bulk-AD-Users-FromCSV.ps1 clonepermissions.ps1
82 | DriveSpace.ps1 copyfilestoremote.ps1
83 | ExportOUnamesgroupsAD.ps1 copyfilestoserverlist.ps1
84 | ExportOUnamesgroupsAD_EmployeeRedirection.ps1 copyfiletoserver.ps1
85 | ExportOUnamesgroupsAD_employeeredirectionmigration.ps1 currentlyloggedonuser.ps1
86 | Export-PrinterInfo.ps1 defaultDomainPasswordPolicy.ps1
87 | FindMacAddressbyiporsystem.ps1 dhcpreservations.ps1
88 | FinduserbySID.ps1 disablecomputersnew.ps1
89 | Findwhatcomputerauserisloggedinto.ps1 diskhealthsmart.ps1
90 | FindServerIsPendingReboot.ps1 domaincontrollerandgroupmemberquery.ps1
91 | ForceaRemoteUserLogoff.ps1 findalldomaincontrollersindomainldapfilter.ps1
92 | GetDisplayNamefromusername(SAMAccountName).ps1 finddomaincontrollers.ps1
93 | GetReportofMultipleServersHDDSpace+RAMUsage.ps1 findfolderpermissionpergroup.ps1
94 | GetserialnumbersforcomputersinActiveDirectory.ps1 getaduseridentity.ps1
95 | Get-ADDirectReport.ps1 getaduser.ps1
96 | Get-CPUUtilizationByServerAndProcess.ps1 getcurrentusersloggedin.ps1
97 | Get-DHCPLeases.ps1 getsysteminfo.ps1
98 | Get-DNSandWins.ps1 getvmbymacaddress.ps1
99 | Get-FailingDrive.ps1 get-inventory.ps1
100 | Get-LockedOutLocation.ps1 get-servers.ps1
101 | Get-LockedOutUser.ps1 get-systeminfo.ps1
102 | Get-MACAddress.ps1 getRebootTime.ps1
103 | Get-OU-From-IP.ps1 getadcomputer_lastlogon.ps1
104 | Get-PendingReboot.ps1 getadlocked_csv.ps1
105 | Get-ProductKey.ps1 getadreports.ps1
106 | Get-Remote-LocalAdmin.ps1 getaduserpermissions.ps1
107 | Get-ServersUptime.ps1 getlockedoutlocation.ps1
108 | Get-Shares.ps1 getmacbysystem.ps1
109 | Get-Uptime.ps1 groupadpermissions.ps1
110 | Get-UserGroupMembership.ps1 hotfixmalware.ps1
111 | GetLocalAccount.ps1 updatelocalmachine.ps1
112 | GetProductKey.ps1 userloggedon.ps1
113 | GetProductKeys.ps1
114 | GetServers.ps1
115 | GetSoftware.ps1
116 | ```
117 | [[}]]
118 |
119 | # Android as a system monitor for Windows 10
120 |
121 | *
122 |
123 |
124 | [[{storage.sftp]]
125 | # Mount remote File Systems as SFTP:
126 |
127 | *
128 |
129 | An easy-to-use utility that mounts remote file systems as Windows
130 | drives via SFTP. Once connected, you can browse and work with files
131 | as if they were stored on your local machine.
132 | [[storage.sftp}]]
133 |
134 | [[{VDI]]
135 |
136 | # Virtual Desktop Optimization
137 |
138 | *
139 | *
140 | Optimizing Windows 10, version 2004 for a Virtual Desktop Infrastructure (VDI) role
141 |
142 | * VDI optimization principles
143 | * Windows Optional Features cleanup
144 | * Additional information
145 |
146 | This article is intended to provide suggestions for configurations
147 | for Windows 10, build 2004, for optimal performance in Virtualized
148 | Desktop environments, including Virtual Desktop Infrastructure (VDI)
149 | and Windows Virtual Desktop. All settings in this guide are suggested
150 | optimization settings only and are in no way requirements.
151 |
152 | The information in this guide is pertinent to Windows 10, version
153 | 2004, operating system (OS) build 19041.
154 |
155 | The guiding principles to optimize performance of Windows 10 in a
156 | virtual desktop environment are to minimize graphic redraws and
157 | effects, background activities that have no major benefit to the
158 | virtual desktop environment, and generally reduce running processes
159 | to the bare minimum. A secondary goal is to reduce disk space usage
160 | in the base image to the bare minimum. With virtual desktop
161 | implementations, the smallest possible base, or "gold" image size,
162 | can slightly reduce memory utilization on the host system, as well as
163 | a small reduction in overall network operations required to deliver
164 | the desktop environment to the consumer.
165 | [[VDI}]]
166 |
167 | [[{qa.Optimize]]
168 | # Optimize,Harden,Debloat W10 Deployments
169 |
170 | * Fully Optimize, Harden, and Debloat Windows 10 Deployments to
171 | Windows Best Practices and DoD STIG/SRG Requirements. The ultimate
172 | Windows 10 security and privacy script!
173 |
174 | [[qa.Optimize}]]
175 |
176 | [[{101.DISM]]
177 | # DISM: W10 maintenance Swiss Army knife
178 |
179 | *
180 | *
181 |
182 | [[101.DISM}]]
183 |
184 | # Shutdown/Hibernate
185 |
186 | *
187 |
188 | ```
189 | | %windir%\System32\shutdown.exe -s <·· Shutdown
190 | | %windir%\System32\shutdown.exe -r <·· Reboot
191 | | %windir%\System32\shutdown.exe -l <·· Logoff
192 | | %windir%\System32\rundll32.exe powrprof.dll,SetSuspendState Standby <·· Standby
193 | | %windir%\System32\rundll32.exe powrprof.dll,SetSuspendState Hibernate <·· Hibernate
194 | ```
195 |
196 | * WARN: on Windows 8 and above, the Standby (suspend) command above
197 |   actually hibernates. To truly sleep, disable hibernation OR use the
198 |   external (not built-in) Microsoft Sysinternals tool PsShutdown:
199 |   "psshutdown -d -t 0" will correctly sleep, not hibernate, the
200 |   computer. Source:
202 |
203 |
204 | [[{wsl]]
205 | # Windows Subsystem For Linux (WSL) Summary
206 |
207 | * PRESETUP: Windows Features -> Activate "Virtual Machine Platform"
208 |
209 | ```
210 | | wsl.exe --list --verbose             <·· Listing (-v for short)
211 | | wsl.exe --list --running
212 | | wsl.exe --list
213 | | wsl.exe --list --online
214 | | wsl.exe --install -d Ubuntu-20.04 <·· Install online
215 | |
216 | | wsl.exe --terminate docker-desktop <·· Turn off
217 | | wsl.exe --terminate docker-desktop-data
218 | | wsl.exe --terminate Ubuntu-20.04
219 | |
220 | | REF: https://devblogs.microsoft.com/commandline/distro-installation-added-to-wsl-install-in-windows-10-insiders-preview-build-20246/
221 | |
222 | | wsl.exe -l -v <·· show WSL versions
223 | | wsl.exe --set-version DISTRNAME 1 <·· Drop wsl version 2 → 1
224 | |
225 | | wsl.exe --set-version Ubuntu-20.04 2 <·· Convert WSL 1 to 2
226 | ```
227 |
228 | * TODO: Troubleshooting. Freeing disk space occupied by WSL.
229 |   - Even after removing files on Linux, the space is NOT freed by Windows:
230 |   - Solution:
231 |
232 |
233 | ## Compiling Custom Linux Kernel for WSL2:
234 |
235 | * REF:
236 |
237 | * e.g. Compiling kernel with ecrypfs module:
238 |
239 | ```
240 | | $ sudo apt install \
241 | | build-essential flex bison \
242 | | libssl-dev libelf-dev libncurses5-dev git
243 | |
244 | | $ git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
245 | |
246 | | $ cd WSL2-Linux-Kernel
247 | |
248 | | $ zcat /proc/config.gz > .config           # Export current (running) kernel configuration
249 | |
250 | | $ editor .config
251 | |
252 | | - #CONFIG_DM_CRYPT is not set
253 | | + CONFIG_DM_CRYPT=y
254 | |
255 | | $ sudo make
256 | | $ sudo make modules_install
257 | | $ cp ./arch/x86_64/boot/bzImage /mnt/c/Users/$userName
258 | | $ editor /mnt/c/Users/${userName}/.wslconfig
259 | | + [wsl2]
260 | | + kernel=C:\\Users\\\\bzImage
261 | | + swap=0
262 | | + localhostForwarding=true
263 | | +
264 | |
265 | | @ cmd/powershell console:
266 | | $ wsl --shutdown                          <···· Exit and Restart WSL2
267 | |
268 | | # TEST NEW KERNEL -------------------------------
269 | | $ fallocate -l 1024M mysecrets.img # Create an encrypted disk image file
270 | | $ sudo cryptsetup -y luksFormat mysecrets.img #
271 | | $ sudo cryptsetup open mysecrets.img mysecrets # Open the newly created disk image
272 | | $ sudo mkfs.ext4 /dev/mapper/mysecrets # Format using ext4.
273 | |
274 | | $ sudo mount -t ext4 /dev/mapper/mysecrets ~/mysecrets # finally mount
275 | | ...
276 | | $ sudo umount ~/mysecrets
277 | | $ sudo cryptsetup close mysecrets
278 | ```
279 | [[wsl}]]
280 |
281 |
282 | [[{troubleshooting.update]]
283 | # Windows Update from PowerShell
284 |
285 | *
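Windows Update can be driven from PowerShell with the community PSWindowsUpdate module (an assumption: it is a third-party PowerShell Gallery module, not built into Windows; run from an elevated PowerShell):

```
> Install-Module PSWindowsUpdate       <·· one-time install from the PowerShell Gallery
> Get-WindowsUpdate                    <·· list pending updates
> Install-WindowsUpdate -AcceptAll     <·· download and install all pending updates
> Get-WUHistory                        <·· review previously installed updates
```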
286 |
287 | [[troubleshooting.update}]]
288 |
289 | [[{security.SCCM]]
290 | # System Center Configuration Manager
291 |
292 | *
293 |
294 | System Center Configuration Manager (known by its initials as SCCM)
295 | or, since version 1910, Microsoft Endpoint Configuration Manager
296 | (ECM) is the commercial name of Microsoft's line of change and
297 | configuration management software for computers, servers, mobile
298 | devices and software. It currently supports managing computers
299 | running Windows, macOS, Linux or UNIX, as well as mobile device
300 | software such as Windows Mobile, Symbian, iOS and Android.
301 |
302 | Features:
303 | * Application management.
304 | * Company resource access.
305 | * Compliance settings.
306 | * Endpoint Protection.
307 | * Inventory.
308 | * Mobile device management.
309 | * Operating system deployment.
310 | * Power management.
311 | * Remote connection profiles.
312 | * User data and profiles configuration items.
313 | * Remote control.
314 | * Software metering.
315 | * Software updates.
316 | * Reporting.
318 | [[security.SCCM}]]
319 |
320 | [[{troubleshooting.fs]]
321 | # System File Checker
322 |
323 | *
324 |
325 | * Command-line utility (sfc /scannow) to check for system file corruption.
326 |
327 | * Scans the integrity of all protected operating system files and
328 |   replaces incorrect, corrupted, changed, or damaged versions with the
329 |   correct versions where possible.
330 |
331 | * First introduced in Windows 98; later versions (Windows XP, Vista,
332 |   7, 8.1 and 10) also include the feature.
333 |
334 | * With Windows 10, Microsoft integrated System File Checker with Windows
335 |   Resource Protection (WRP), which protects registry keys and folders
336 |   as well as critical system files. If any changes are detected to a
337 |   protected system file, the modified file is restored from a cached
338 |   copy located in the %WinDir%\System32\dllcache folder.
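Typical invocations, run from an elevated console (flags as documented by Microsoft):

```
> sfc /scannow                                    <·· scan and repair all protected files
> sfc /verifyonly                                 <·· scan and report only, no repairs
> sfc /scanfile=C:\Windows\System32\kernel32.dll  <·· verify (and repair) a single file
> findstr /c:"[SR]" %WinDir%\Logs\CBS\CBS.log     <·· inspect scan details in the CBS log
```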
340 | [[troubleshooting.fs}]]
341 |
342 | [[{]]
343 | # 11 utilities to administer and monitor your Windows system
344 |
345 | *
346 | [[}]]
347 |
348 | [[{security.backups]]
349 | # Recovery Directory
350 |
351 | A bunch of folks have asked me what the purpose of the different
352 | directories in the Windows filesystem hierarchy is, so I have decided
353 | to write a short series about that. In this write-up we are going to
354 | talk about the "Recovery" directory. It could be that you have never
355 | seen this directory on your root drive ("C:\Recovery"): it is marked
356 | "hidden", and displaying hidden items in Explorer is not enough to
357 | see it. To show it you also need to unmark "Hide protected operating
358 | system files (recommended)".
359 |
360 | You can see the entire flow in the following link
361 | .
362 | Overall, the directory is a leftover from a previous version of Windows
363 | (the version in place before an upgrade). It is used when there are
364 | issues after an upgrade and the user wants to revert. Thus, after a
365 | successful upgrade you can probably delete it
366 | (https://lnkd.in/d5tgrQSM). See you next time ;-)
368 | [[security.backups}]]
369 |
370 |
371 | [[{security.backups]]
372 | # robocopy : Robust File Copy (remote copies/backups).
373 |
374 | *
375 |
376 | - Remote copies/backups with multi-threading for higher performance
377 | (with the /mt parameter) and the ability to restart the transfer
378 | in case it's interrupted (with the /z parameter)
379 |
380 | ```
381 | $ robocopy <source> <destination> [<file> [...]] [<options>]
382 | ```
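For example (paths are hypothetical), mirroring a local tree to a share in restartable, multi-threaded mode:

```
$ robocopy C:\data \\backupsrv\share\data /MIR /Z /MT:8 /R:3 /W:5
    /MIR       <·· mirror the tree (WARNING: also deletes destination files missing from source)
    /Z         <·· restartable mode (resume interrupted transfers)
    /MT:8      <·· copy with 8 threads
    /R:3 /W:5  <·· retry 3 times, waiting 5 seconds between retries
```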
383 | [[security.backups}]]
384 |
385 | # tiny11builder [[{]]
386 | *
387 | Scripts to build a trimmed-down Windows 11 image - now in PowerShell!
388 | [[}]]
389 |
390 | [[{office.licence.activation]]
391 | ## Microsoft Activation Scripts (MAS)
392 |
393 | *
394 | * A Windows and Office activator using HWID / Ohook / KMS38 / Online
395 | KMS activation methods, with a focus on open-source code and fewer
396 | antivirus detections.
397 | [[office.licence.activation}]]
398 |
399 |
400 | [[windows}]]
401 |
402 |
--------------------------------------------------------------------------------