├── xx
├── static
├── .nojekyll
└── img
│ ├── favicon.ico
│ ├── docusaurus.png
│ └── logo.svg
├── docs
├── 🎪🎪🎪🎪🎪🎪🎪🎪🎪🎪🎪🎪
├── 🎪🎪🎪-5️⃣ 💠 HomeAssist.md
├── 🎪🎪🎪-1️⃣🛢 DB ➜ PostGres.md
├── 🎪🎪🎪-5️⃣ 💠💠💠 NetData.md
├── 🎪🎪🎪-1️⃣🛢 DB ➜ Redis.md
├── 🎪🎪🎪-5️⃣ 💠💠💠 Bitwarden.md
├── 🎪🎪 🐬☸️☸️-5️⃣ NET ➜.md
├── 🎪🎪🎪🎪-🔐 Auth ➜ Basic.md
├── 🎪🎪🎪-5️⃣ 💠💠💠 Dashy.md
├── 🎪🎪🎪-1️⃣🛢 DB ➜ MariaDB.md
├── 🎪🎪 🐬☸️☸️-7️⃣ Proxy ➜ Traefik.md
├── 🎪🎪 🐬☸️ Cluster ➜➜ K3s.md
├── 🎪-9️⃣ 📀📀 S3 ➜ MinIO.md
├── 🎪-9️⃣ 📀 STO NAS ➜➜ Alist.md
├── 🎪🎪 🐬☸️☸️-1️⃣ Helm ➜ Basic.md
├── 🎪🎪🎪🎪-🔐🔐 Auth ➜➜ Radius.md
├── 🎪-🦚.md
├── 🎪-7️⃣ 🌐-3️⃣ DNS ➜ AdGuard.md
├── 🎪🎪🎪-5️⃣ 💠💠💠 Jump Server.md
├── 🎪-7️⃣ 🌐-7️⃣ VPN ➜ Design.md
├── 🎪🎪 🐬☸️☸️-3️⃣ STO ➜➜ Demo.md
├── 🎪🎪 🐬☸️☸️-0️⃣ K8s ➜ Basic.1.md
├── 🎪-3️⃣ 📚 Blog ➜ Docusaurus.md
├── 🎪🎪🎪-5️⃣ 💠💠💠 Gitea.md
├── 🎪🎪 🐬☸️☸️☸️ Demo ➜ xxx.md
├── 🎪🎪 🐬☸️☸️-0️⃣ K8s ➜ Basic.9 Misc.md
├── 🎪🎪🎪🎪-🔐🔐🔐 SSO ➜➜ Authelia Use.md
├── 🎪-7️⃣ 🌐-7️⃣ VPN ➜➜ Wireguard.md
├── 🎪🎪🎪🎪-🔐🔐 Auth ➜➜ LDAP.md
├── 🎪🎪🎪🎪-🔐🔐🔐 SSO ➜ Authelia.md
├── 🎪🎪 🐬☸️ Cluster ➜ MiniKube.md
├── 🎪🎪 🐬☸️☸️-1️⃣ Helm ➜ Demo.md
├── 🎪-9️⃣ 📀📀📀 CEPH ➜ Build.md
├── 🎪🎪 🐬☸️ Cluster ➜➜➜ K8s.md
├── 🎪-9️⃣ 📀 STO NAS ➜ DSM.md
├── 🎪🎪 🐬☸️☸️-3️⃣ STO ➜ Basic.md
├── 🎪-7️⃣ 🌐🌐-0️⃣ Proxy ➜ Teaefik.md
└── 🎪-7️⃣ 🌐-7️⃣ VPN ➜➜➜ Netmaker.md
├── babel.config.js
├── blog
├── 2021-08-26-welcome
│ ├── docusaurus-plushie-banner.jpeg
│ └── index.md
├── 2019-05-28-first-blog.md
├── 2019-05-28-first-blog-post.md
└── authors.yml
├── src
├── pages
│ ├── markdown-page.md
│ └── index.module.css
├── components
│ └── HomepageFeatures
│ │ ├── styles.module.css
│ │ └── index.js
└── css
│ └── custom.css
├── .gitignore
├── sidebars.js
├── package.json
├── docusaurus.config.js
└── README.md
/xx:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/static/.nojekyll:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪🎪🎪🎪🎪🎪🎪🎪🎪🎪:
--------------------------------------------------------------------------------
1 | misc - todo
2 |
3 |
4 | 🔵 1. resume - txt
5 |
6 |
7 |
8 |
9 |
--------------------------------------------------------------------------------
/static/img/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Homeless-Xu/Xu-Jian.github.io/main/static/img/favicon.ico
--------------------------------------------------------------------------------
/babel.config.js:
--------------------------------------------------------------------------------
1 | module.exports = {
2 | presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
3 | };
4 |
--------------------------------------------------------------------------------
/static/img/docusaurus.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Homeless-Xu/Xu-Jian.github.io/main/static/img/docusaurus.png
--------------------------------------------------------------------------------
/blog/2021-08-26-welcome/docusaurus-plushie-banner.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Homeless-Xu/Xu-Jian.github.io/main/blog/2021-08-26-welcome/docusaurus-plushie-banner.jpeg
--------------------------------------------------------------------------------
/src/pages/markdown-page.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Markdown page example
3 | ---
4 |
5 | # Markdown page example
6 |
7 | You don't need React to write simple standalone pages.
8 |
--------------------------------------------------------------------------------
/src/components/HomepageFeatures/styles.module.css:
--------------------------------------------------------------------------------
1 | .features {
2 | display: flex;
3 | align-items: center;
4 | padding: 2rem 0;
5 | width: 100%;
6 | }
7 |
8 | .featureSvg {
9 | height: 200px;
10 | width: 200px;
11 | }
12 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Dependencies
2 | /node_modules
3 |
4 | # Production
5 | /build
6 |
7 | # Generated files
8 | .docusaurus
9 | .cache-loader
10 |
11 | # Misc
12 | .DS_Store
13 | .env.local
14 | .env.development.local
15 | .env.test.local
16 | .env.production.local
17 |
18 | npm-debug.log*
19 | yarn-debug.log*
20 | yarn-error.log*
21 |
--------------------------------------------------------------------------------
/blog/2019-05-28-first-blog.md:
--------------------------------------------------------------------------------
1 | ---
2 | slug: first-blog-post
3 | title: First Blog Post
4 | authors:
5 | name: Gao Wei
6 | title: Docusaurus Core Team
7 | url: https://github.com/wgao19
8 | image_url: https://github.com/wgao19.png
9 | tags: [hola, docusaurus]
10 | ---
11 |
12 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
13 |
--------------------------------------------------------------------------------
/blog/2019-05-28-first-blog-post.md:
--------------------------------------------------------------------------------
1 | ---
2 | slug: first-blog-post
3 | title: First Blog Post
4 | authors:
5 | name: Gao Wei
6 | title: Docusaurus Core Team
7 | url: https://github.com/wgao19
8 | image_url: https://github.com/wgao19.png
9 | tags: [hola, docusaurus]
10 | ---
11 |
12 | Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
13 |
--------------------------------------------------------------------------------
/src/pages/index.module.css:
--------------------------------------------------------------------------------
1 | /**
2 | * CSS files with the .module.css suffix will be treated as CSS modules
3 | * and scoped locally.
4 | */
5 |
6 | .heroBanner {
7 | padding: 4rem 0;
8 | text-align: center;
9 | position: relative;
10 | overflow: hidden;
11 | }
12 |
13 | @media screen and (max-width: 996px) {
14 | .heroBanner {
15 | padding: 2rem;
16 | }
17 | }
18 |
19 | .buttons {
20 | display: flex;
21 | align-items: center;
22 | justify-content: center;
23 | }
24 |
--------------------------------------------------------------------------------
/blog/authors.yml:
--------------------------------------------------------------------------------
1 | endi:
2 | name: Endilie Yacop Sucipto
3 | title: Maintainer of Docusaurus
4 | url: https://github.com/endiliey
5 | image_url: https://github.com/endiliey.png
6 |
7 | yangshun:
8 | name: Yangshun Tay
9 | title: Front End Engineer @ Facebook
10 | url: https://github.com/yangshun
11 | image_url: https://github.com/yangshun.png
12 |
13 | slorber:
14 | name: Sébastien Lorber
15 | title: Docusaurus maintainer
16 | url: https://sebastienlorber.com
17 | image_url: https://github.com/slorber.png
18 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-5️⃣ 💠 HomeAssist.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-5️⃣💠💠 ➜ HomeAssist
4 | ---
5 |
6 | # HomeAssist
7 |
8 |
9 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 LAB. APP. HomeAssist ..
10 |
11 | 🔵 Install OS vs Docker
12 | https://www.home-assistant.io/installation/
13 | the Docker install does not support a few functions...
14 | 
15 | esxi: download the vmdk from the website, then deploy it.
16 | 
17 | 
18 | 🔵 Esxi Download & install
19 | 
20 | https://www.home-assistant.io/installation/linux
21 | ...
22 | esxi: use an IDE disk drive! otherwise the VM is not bootable. -.-
23 |
24 | ip:8123
25 |
26 |
27 |
28 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
29 | 🔵 todo
30 |
31 | https://demo.home-assistant.io/#/lovelace/0
32 |
33 |
34 | 🔵 M
35 |
36 |
37 |
38 |
39 |
40 | 🔵 Power Usage
41 |
42 |
43 |
--------------------------------------------------------------------------------
/sidebars.js:
--------------------------------------------------------------------------------
1 | /**
2 | * Creating a sidebar enables you to:
3 | - create an ordered group of docs
4 | - render a sidebar for each doc of that group
5 | - provide next/previous navigation
6 |
7 | The sidebars can be generated from the filesystem, or explicitly defined here.
8 |
9 | Create as many sidebars as you want.
10 | */
11 |
12 | // @ts-check
13 |
14 | /** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
15 | const sidebars = {
16 | // By default, Docusaurus generates a sidebar from the docs folder structure
17 | tutorialSidebar: [{type: 'autogenerated', dirName: '.'}],
18 |
19 | // But you can create a sidebar manually
20 | /*
21 | tutorialSidebar: [
22 | {
23 | type: 'category',
24 | label: 'Tutorial',
25 | items: ['hello'],
26 | },
27 | ],
28 | */
29 | };
30 |
31 | module.exports = sidebars;
32 |
--------------------------------------------------------------------------------
/blog/2021-08-26-welcome/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | slug: welcome
3 | title: Welcome
4 | authors: [slorber, yangshun]
5 | tags: [facebook, hello, docusaurus]
6 | ---
7 |
8 | [Docusaurus blogging features](https://docusaurus.io/docs/blog) are powered by the [blog plugin](https://docusaurus.io/docs/api/plugins/@docusaurus/plugin-content-blog).
9 |
10 | Simply add Markdown files (or folders) to the `blog` directory.
11 |
12 | Regular blog authors can be added to `authors.yml`.
13 |
14 | The blog post date can be extracted from filenames, such as:
15 |
16 | - `2019-05-30-welcome.md`
17 | - `2019-05-30-welcome/index.md`
18 |
19 | A blog post folder can be convenient to co-locate blog post images:
20 |
21 | 
22 |
23 | The blog supports tags as well!
24 |
25 | **And if you don't want a blog**: just delete this directory, and use `blog: false` in your Docusaurus config.
26 |
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "blog-never-del",
3 | "version": "0.0.0",
4 | "private": true,
5 | "scripts": {
6 | "docusaurus": "docusaurus",
7 | "start": "docusaurus start",
8 | "build": "docusaurus build",
9 | "swizzle": "docusaurus swizzle",
10 | "deploy": "docusaurus deploy",
11 | "clear": "docusaurus clear",
12 | "serve": "docusaurus serve",
13 | "write-translations": "docusaurus write-translations",
14 | "write-heading-ids": "docusaurus write-heading-ids"
15 | },
16 | "dependencies": {
17 | "@docusaurus/core": "2.0.0-rc.1",
18 | "@docusaurus/preset-classic": "2.0.0-rc.1",
19 | "@mdx-js/react": "^1.6.22",
20 | "clsx": "^1.2.1",
21 | "prism-react-renderer": "^1.3.5",
22 | "react": "^17.0.2",
23 | "react-dom": "^17.0.2"
24 | },
25 | "devDependencies": {
26 | "@docusaurus/module-type-aliases": "2.0.0-rc.1"
27 | },
28 | "browserslist": {
29 | "production": [
30 | ">0.5%",
31 | "not dead",
32 | "not op_mini all"
33 | ],
34 | "development": [
35 | "last 1 chrome version",
36 | "last 1 firefox version",
37 | "last 1 safari version"
38 | ]
39 | },
40 | "engines": {
41 | "node": ">=16.14"
42 | }
43 | }
44 |
--------------------------------------------------------------------------------
/src/css/custom.css:
--------------------------------------------------------------------------------
1 | /**
2 | * Any CSS included here will be global. The classic template
3 | * bundles Infima by default. Infima is a CSS framework designed to
4 | * work well for content-centric websites.
5 | */
6 |
7 | /* You can override the default Infima variables here. */
8 | :root {
9 | --ifm-color-primary: #2e8555;
10 | --ifm-color-primary-dark: #29784c;
11 | --ifm-color-primary-darker: #277148;
12 | --ifm-color-primary-darkest: #205d3b;
13 | --ifm-color-primary-light: #33925d;
14 | --ifm-color-primary-lighter: #359962;
15 | --ifm-color-primary-lightest: #3cad6e;
16 | --ifm-code-font-size: 95%;
17 | --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
18 | --doc-sidebar-width: 500px !important;
19 | }
20 |
21 | /* For readability concerns, you should choose a lighter palette in dark mode. */
22 | [data-theme='dark'] {
23 | --ifm-color-primary: #25c2a0;
24 | --ifm-color-primary-dark: #21af90;
25 | --ifm-color-primary-darker: #1fa588;
26 | --ifm-color-primary-darkest: #1a8870;
27 | --ifm-color-primary-light: #29d5b0;
28 | --ifm-color-primary-lighter: #32d8b4;
29 | --ifm-color-primary-lightest: #4fddbf;
30 | --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
31 | }
32 |
33 |
34 | article {
35 | max-width: 1000px !important;
36 | margin-left: auto;
37 | margin-right: auto;
38 | }
39 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-1️⃣🛢 DB ➜ PostGres.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-1️⃣🛢 DB ➜ Postgres
4 | ---
5 |
6 | # Postgres
7 |
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 DB - Postgres
12 |
13 |
14 |
15 | 🔶 test login
16 |
17 | db: postgres
18 | user: postgres
19 | pwd : your password ..
20 |
21 |
22 | 🔵 Postgres GUI / CLI tools
23 |
24 | 🔶 GUI
25 |
26 | https://postgresapp.com/documentation/gui-tools.html
27 |
28 |
29 | 🔶 install psql
30 |
31 | sudo apt-get install postgresql-client -y
32 | brew install libpq
33 |
34 | 🔶 CLI
35 |
36 | psql -U username -h localhost -p 5432 dbname
37 | psql -U postgres -h localhost -p 5432 postgres
38 | psql -U usergitea -h 172.16.1.140 -p 5432 dbgitea
39 |
40 |
41 |
42 |
43 | 🔵 Config User & DB
44 |
45 | 🔶 create user
46 |
47 | CREATE ROLE gitea WITH LOGIN PASSWORD 'gitea';
48 | CREATE ROLE USERgitea WITH LOGIN PASSWORD 'Yshskl0@19';
49 |
50 |
51 | 🔶 create DB
52 |
53 | CREATE DATABASE DBgitea WITH OWNER USERgitea TEMPLATE template0 ENCODING UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
54 |
55 |
56 | 🔶 remote test
57 |
58 | psql -U usergitea -h 172.16.1.140 -p 5432 dbgitea
59 |
60 |
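 | 🔶 allow remote login - sketch
 | 
 | a minimal sketch assuming a native apt install of Postgres (paths/version are assumptions;
 | a containerized Postgres usually accepts remote connections already):
 | 
 | sudo sed -i "s/^#listen_addresses.*/listen_addresses = '*'/" /etc/postgresql/14/main/postgresql.conf
 | echo "host all all 172.16.1.0/24 md5" | sudo tee -a /etc/postgresql/14/main/pg_hba.conf
 | sudo systemctl restart postgresql
 | 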
61 | 🔶 final
62 |
63 | dbname: dbgitea
64 | username: usergitea
65 |
66 |
67 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-5️⃣ 💠💠💠 NetData.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-5️⃣💠💠💠 ➜ NetData
4 | ---
5 |
6 | # Monitor ▪ NetData
7 |
8 |
9 |
10 |
11 |
12 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 HomeLAB ✶ Monitor ▪ NetData
13 |
14 |
15 |
16 |
17 | 🔵 Manual
18 |
19 | 🔶 Official
20 | https://learn.netdata.cloud/docs/agent/packaging/installer/methods/kubernetes
21 |
22 |
23 | ⭐️⭐️⭐️⭐️⭐️⭐️⭐️
24 | https://www.gitiu.com/journal/netdata-install/
25 |
26 |
27 |
28 |
29 | 🔵 WHY
30 | zabbix + grafana are too heavy.
31 |
32 |
33 | 🔵 NetData Desc
34 | https://github.com/netdata/netdata
35 |
36 |
37 |
38 |
39 | 🔵 Docker install
40 |
41 |
42 | docker run -d --name=netdata \
43 | -p 19999:19999 \
44 | -v netdataconfig:/etc/netdata \
45 | -v netdatalib:/var/lib/netdata \
46 | -v netdatacache:/var/cache/netdata \
47 | -v /etc/passwd:/host/etc/passwd:ro \
48 | -v /etc/group:/host/etc/group:ro \
49 | -v /proc:/host/proc:ro \
50 | -v /sys:/host/sys:ro \
51 | -v /etc/os-release:/host/etc/os-release:ro \
52 | --restart unless-stopped \
53 | --cap-add SYS_PTRACE \
54 | --security-opt apparmor=unconfined \
55 | netdata/netdata
56 |
57 |
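 | 🔶 quick check - sketch
 | 
 | once the container is up, the dashboard should answer on port 19999 of the host:
 | 
 | curl -sI http://localhost:19999/ | head -n 1      # expect: HTTP/1.1 200 OK
 | 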
58 |
59 | 🔵 Cloud free ... netdata ..
60 |
61 |
62 |
63 | 🔵 Usage.. server + agent
64 |
65 |
66 |
67 |
68 |
69 | 🔵 Agent - MAC
70 |
71 |
72 |
73 |
74 |
75 |
76 | 🔵 Agent. esxi ??
77 |
78 |
79 |
80 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-1️⃣🛢 DB ➜ Redis.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-1️⃣🛢 DB ➜ Redis
4 | ---
5 |
6 | # Redis
7 |
8 |
9 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
10 | 🔵 Redis Desc
11 |
12 | uses RAM as the default storage.
13 | it can also persist data to disk,
14 | depending on how you configure it.
15 | 
16 | 
17 | 
18 | 🔵 Bitnami redis image
19 | Bitnami improves a lot on the stock docker image, e.g. security.
20 | so we use bitnami`s docker image:
21 | bitnami/redis
22 |
23 |
24 | 🔵 Redis GUI tool
25 | redis-insight is best
26 |
27 |
28 | 🔵 redis cli tool
29 |
30 | redis-cli
31 |
32 | sudo apt-get install redis-tools -y
33 |
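 | quick connectivity test (sketch; replace <docker-host-ip> with the host running the compose demo below):
 | 
 | redis-cli -h <docker-host-ip> -p 6379 ping      # expect: PONG (empty password is allowed)
 | 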
34 |
35 |
36 |
37 |
38 |
39 |
40 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Redis - Docker ✅
41 |
42 |
43 | DBredis:
44 | container_name: DB-Redis
45 | image: 'bitnami/redis:latest'
46 | ports:
47 | - "6379:6379"
48 | environment:
49 | - ALLOW_EMPTY_PASSWORD=yes
50 | volumes:
51 | - /mnt/dpnvme/DMGS-DKP/DB-Redis:/bitnami
52 |
53 |
54 | 🐞
55 |
56 | ‼️ ⭐️ the container does not run as root. it runs as user 1001.
57 | ‼️ ⭐️ the container does not run as root. it runs as user 1001.
58 | ‼️ ⭐️ the container does not run as root. it runs as user 1001.
59 | 
60 | /bitnami/redis permission denied
61 | 
62 | The idea behind all this is that the user with uid 1001 inside the container should be able to write to /bitnami/redis/data.
63 | 
64 | 
65 | sudo chown 1001 <host folder path>
66 | sudo chown 1001 /mnt/dpnvme/DMGS-DKP/DB-Redis
67 |
68 |
69 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-5️⃣ 💠💠💠 Bitwarden.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-5️⃣💠💠💠 ➜ Bitwarden
4 | ---
5 |
6 | # Password Manager
7 |
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
12 |
13 | 🔵 Install in DSM
14 |
15 | install docker
16 | install bitwarden
17 |
18 |
19 |
20 | 🔵 Config DSM Docker Https
21 |
22 | 🔶 desc
23 |
24 | if a docker app in dsm needs https/ssl,
25 | it is easy to set up in dsm:
26 | dsm has a built-in reverse proxy.
27 | 
28 | like here:
29 | the bitwarden container uses plain http (port 88),
30 | but bitwarden clients have to use https.
31 | so we forward dsm`s https://xxx:9443 to the container`s http port.
32 | 
33 | now when we visit https://bit.rv.ark:9443 it forwards to bitwarden.
34 |
35 |
36 | 🔶 make ssl
37 |
38 | use mkcert to generate a local ssl cert for *.rv.ark
39 |
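 | a minimal mkcert sketch (bit.rv.ark / *.rv.ark are this lab's example names):
 | 
 | mkcert -install                    # trust the local CA on this machine
 | mkcert "bit.rv.ark" "*.rv.ark"     # writes ./bit.rv.ark+1.pem and ./bit.rv.ark+1-key.pem
 | # then import the cert + key in DSM's certificate settings and bind them to the reverse proxy rule
 | 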
40 |
41 | 🔶 set static dns
42 |
43 | bit.rv.ark >> 10.1.1.89 (dsm host ip)
44 |
45 |
46 | 🔵 run bitwarden docker in dsm.
47 |
48 |
49 |
50 | 🔵 web visit
51 | https://bit.rv.ark:9443
52 |
53 |
54 |
55 |
56 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 USE
57 |
58 | bitwarden app.
59 | >> top-left corner
60 | >> set local bitwarden url https://bit.rv.ark:9443
61 | 
62 | login to the local bitwarden
63 |
64 |
65 | 🔶 import old data
66 |
67 | use the web vault; there is no import option in the mac app.
68 | web >> tools >> import
69 | 
70 | after that the web UI is rarely needed.
71 | just keep the server running;
72 | everything else can be done in the bitwarden app.
73 |
74 |
75 |
76 |
77 |
78 |
79 | vps.
80 |
81 | frp.0214.ICU ssl.
82 | >> dvm.frpc
83 |
84 |
85 |
86 | https://bit.rv.ark:1443
87 |
88 |
89 |
90 | 🔵 nginx fxdl to homelab.
91 |
92 | dvm.0214.icu >>
93 |
94 | rdvm.0214.icu >> lo...
95 |
96 |
97 |
98 |
99 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-5️⃣ NET ➜.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️5️⃣ NET ➜ Basic
4 | ---
5 |
6 |
7 | # NET ✶ Demo
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 | 🔵 How to configure the network
16 | 
17 | is it like docker, where you need to create a network first ??
18 |
19 |
20 |
21 |
22 |
23 |
24 | 🔵 why
25 |
26 | on one node, container networking is easy.
27 | in a cluster, container networking is much harder:
28 | pods on different physical nodes need to reach each other.
29 | 
30 | 
31 | a vlan is a virtual network segment on top of the real network.
32 | a vxlan is a virtual (overlay) network tunnelled over the existing IP network.
33 | 
34 | a k8s cluster typically uses vxlan, so k8s will not impact your real vlans,
35 | just like a docker network does not impact the host network.
36 | 
37 | so the overlay/vxlan is just something like a vlan. very easy.
38 |
39 |
40 |
41 |
42 |
43 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
44 |
45 | 📗
46 | https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
47 |
48 | https://morningspace.github.io/tech/k8s-net-cni/
49 |
50 | https://cloud.tencent.com/developer/article/1804680
51 |
52 |
53 |
54 | 🔵 K8s default network
55 |
56 | none
57 | host
58 | default bridge
59 | custom ...
60 |
61 |
62 |
63 | k8s uses CNI to build the cluster network.
64 | CNI: Container Network Interface.
65 | CNI has lots of plugins to choose from:
66 | Flannel ➜ overlay
67 | Calico
68 |
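 | a quick way to see which CNI plugin a cluster runs (sketch; pod names vary by install):
 | 
 | kubectl get pods -n kube-system -o wide | grep -Ei 'flannel|calico|cilium'
 | ls /etc/cni/net.d/                 # CNI config files on each node
 | 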
69 |
70 |
71 |
72 |
73 | 🔵 Overlay / Route / Underlay
74 |
75 |
76 |
77 |
78 | 🔵 network service / network permit
79 | 
80 | a pod`s traffic (lan + wan) is not exposed automatically:
81 | you configure a Service / Ingress (or a NetworkPolicy permit) first;
82 | once policies select a pod, anything not permitted is denied.
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪🎪-🔐 Auth ➜ Basic.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 4930
3 | title: 🎪🎪🎪🎪-🔐 Auth
4 | ---
5 |
6 |
7 | # Auth ➜ LDAP SSO oAuth2
8 |
9 |
10 |
11 | 🔵 LDAP Desc
12 |
13 | 🔶 Why LDAP
14 |
15 | almost all software supports LDAP.
16 | LDAP lets you use one account (created in ldap) to log in to all software:
17 | windows login
18 | linux ssh
19 | wifi account
20 | 
21 | 
22 | 🔶 LDAP desc
23 | 
24 | LightWeight Directory Access Protocol:
25 | just a piece of software that stores user/company information.
26 | 
27 | LDAP is client/server mode: server + client
28 | build & configure the ldap server first.
29 | create staff users in the ldap server.
30 | join client devices (win/mac/linux/wifi) to the ldap server.
31 | log in to the client device with the account you created in the ldap server.
32 | 
33 | 
34 | 🔶 LDAP software
35 | 
36 | windows server: AD is most common.
37 | but i don't like win, so i use a linux ldap server: openLDAP.
38 |
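 | a minimal query sketch against the lab's openLDAP (ad.rv.ark and the DNs are example values):
 | 
 | ldapsearch -x -H ldap://ad.rv.ark -D "cn=admin,dc=rv,dc=ark" -W -b "dc=rv,dc=ark" "(uid=*)"
 | 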
39 |
40 | 🔵 SSO
41 |
42 | 🔶 Why SSO
43 |
44 | LDAP lets you use one account for all apps,
45 | but in each app you still have to type in the same username/password !
46 | if you have lots of apps that need login,
47 | there is a better choice: SSO.
48 | 
49 | SSO: you log in to one app, then you are automatically logged in to all other apps.
50 | SSO: single sign-on
51 | 
52 | 
53 | 
54 | 🔶 SSO Software
55 | 
56 | Authelia
57 | 
58 | 
59 | 🔵 OAuth2
60 | 
61 | 🔶 Why OAuth2
62 | 
63 | SSO here is for local use, not for the internet.
64 | OAuth2 (Open Authorization) is for the internet.
65 | 
66 | a lot of internet apps let you log in via google/facebook ...
67 | those apps use OAuth:
68 | the app gets your user info from google (not your username/password),
69 | (because you are already signed in to google.)
70 |
71 |
72 |
73 |
74 |
75 |
--------------------------------------------------------------------------------
/static/img/logo.svg:
--------------------------------------------------------------------------------
1 |
2 |
4 |
32 |
--------------------------------------------------------------------------------
/src/components/HomepageFeatures/index.js:
--------------------------------------------------------------------------------
1 | import React from 'react';
2 | import clsx from 'clsx';
3 | import styles from './styles.module.css';
4 |
5 | const FeatureList = [
6 |   {
7 |     title: 'Easy to Use',
8 |     Svg: require('@site/static/img/undraw_docusaurus_mountain.svg').default,
9 |     description: (
10 |       <>
11 |         Docusaurus was designed from the ground up to be easily installed and
12 |         used to get your website up and running quickly.
13 |       </>
14 |     ),
15 |   },
16 |   {
17 |     title: 'Focus on What Matters',
18 |     Svg: require('@site/static/img/undraw_docusaurus_tree.svg').default,
19 |     description: (
20 |       <>
21 |         Docusaurus lets you focus on your docs, and we'll do the chores. Go
22 |         ahead and move your docs into the docs directory.
23 |       </>
24 |     ),
25 |   },
26 |   {
27 |     title: 'Powered by React',
28 |     Svg: require('@site/static/img/undraw_docusaurus_react.svg').default,
29 |     description: (
30 |       <>
31 |         Extend or customize your website layout by reusing React. Docusaurus can
32 |         be extended while reusing the same header and footer.
33 |       </>
34 |     ),
35 |   },
36 | ];
37 | 
38 | function Feature({Svg, title, description}) {
39 |   return (
40 |     <div className={clsx('col col--4')}>
41 |       <div className="text--center">
42 |         <Svg className={styles.featureSvg} role="img" />
43 |       </div>
44 |       <div className="text--center padding-horiz--md">
45 |         <h3>{title}</h3>
46 |         <p>{description}</p>
47 |       </div>
48 |     </div>
49 |   );
50 | }
51 | 
52 | export default function HomepageFeatures() {
53 |   return (
54 |     <section className={styles.features}>
55 |       <div className="container">
56 |         <div className="row">
57 |           {FeatureList.map((props, idx) => (
58 |             <Feature key={idx} {...props} />
59 |           ))}
60 |         </div>
61 |       </div>
62 |     </section>
63 |   );
64 | }
65 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-5️⃣ 💠💠💠 Dashy.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-5️⃣💠💠💠 ➜ Dashy
4 | ---
5 |
6 | # Dashy
7 |
8 |
9 |
10 |
11 |
12 |
13 | - '8395:80'
14 | volumes:
15 | - '/root/data/docker_data/dashy/public/conf.yml:/app/public/conf.yml'
16 | - '/root/data/docker_data/dashy/icons:/app/public/item-icons/icons'
17 |
18 |
19 |
20 |
21 |
22 |
23 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ICON AIO
24 |
25 | ‼️ Official Manual https://dashy.to/docs/icons/ ‼️
26 | ‼️ Official Manual https://dashy.to/docs/icons/ ‼️
27 | ‼️ Official Manual https://dashy.to/docs/icons/ ‼️
28 |
29 |
30 |
31 | ‼️ Sample config https://gist.github.com/Lissy93/000f712a5ce98f212817d20bc16bab10
32 |
33 |
34 |
35 |
36 |
37 | 🔵 Favicon Demo
38 | ‼️ use another website`s favicon ‼️
39 | 
40 | you need to edit the yml?? no way to add it via the GUI??
41 |
42 | icon: "favicon"
43 | url: "https://mail.protonmail.com/"
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 | 🔵 online icon - fontawesome ➜ fas xxx ;❌
52 |
53 | https://fontawesome.com/icons
54 |
55 |
56 | just change fa-solid to fas.
57 |
58 |
59 |
60 |
61 |
62 |
63 | 🔵 online icon - Simple ICON ➜ si-xxxx
64 |
65 | https://simpleicons.org/?q=froti
66 |
67 | ‼️ all icons are black, no color ‼️
68 | si-fortinet
69 |
70 |
71 |
72 |
73 | 🔵 online icon - HomeLAB ➜ hl-xxxx
74 |
75 | https://github.com/WalkxHub/dashboard-icons
76 |
77 | 🔶 use
78 | find the icon on the website.
79 | take the icon name, e.g. kubernetes-dashboard.png
80 | add the hl- prefix
81 | delete the .png extension
82 | like:
83 |
84 | hl-mikrotik
85 | hl-zabbix
86 | hl-wireguard
87 | hl-zerotier
88 | hl-vmwareesxi
89 | hl-synology-dsm
90 |
91 |
92 |
93 |
94 | 🔵 Online URL
95 |
96 | https://i.ibb.co/710B3Yc/space-invader-x256.png
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Local icon
105 | download to icons folder.
106 |
107 | /public/item-icons/
108 |
109 |
110 | icon: si-adguard
111 |
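 | download sketch (the icon URL is just a placeholder; the host path is the icons volume from the compose snippet above):
 | 
 | wget -P /root/data/docker_data/dashy/icons https://example.com/adguard-home.png
 | # then reference it by filename in conf.yml, e.g.  icon: adguard-home.png
 | 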
112 |
113 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-1️⃣🛢 DB ➜ MariaDB.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-1️⃣🛢 DB ➜ MariaDB
4 | ---
5 |
6 | # MySQL
7 |
8 |
9 |
10 |
11 |
12 | ⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️------⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛⬛️⬛️
13 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 MySQL🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
14 | ⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️------⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️
15 |
16 | 🔸 Database operations
17 | • create database: create database wpj1105;
18 | • drop database: drop database wpj1105;
19 | • use database: use wpj1105;
20 | • show databases: show databases;
21 | 
22 | 
23 | 🔸 Conditional drop
24 | drop database if exists wpj1105; # drop the database only if it exists
25 | drop table if exists student; # drop the table only if it exists
26 | 
27 | 
28 | 🔸 Tables
29 | ⦿ create a table
30 | create table student(
31 | id int auto_increment primary key,
32 | name varchar(50),
33 | sex varchar(20),
34 | date varchar(50),
35 | content varchar(100)
36 | )default charset=utf8;
37 | 
38 | 
39 | ⦿ drop a table: drop table student;
40 | ⦿ show table structure: describe student; # can be shortened to: desc student;
41 | 
42 | 
43 | 🔸 Insert data
44 | this depends on the structure of the user_token table
45 | +-------+-------+---------+-------------+-------------+
46 | | id | token | user_id | create_time | expire_time |
47 | +-------+-------+---------+-------------+-------------+
48 | | 12 | | 0 | 0 | 0 |
49 | 
50 | syntax: INSERT INTO table_name ( field1, field2,...fieldN ) VALUES ( value1, value2,...valueN );
51 | example: insert into user_token (id,token,user_id,create_time,expire_time) VALUES (98,1,2,3,4);
52 | this example is known to work
53 | 
54 | 
55 | 🔸 Update a single row
56 | update student set sex='male' where id=4;
57 | 
58 | 🔸 Update a column
59 | UPDATE table_name SET field1=new-value1,
60 | UPDATE user SET transfer_enable=21474836480
61 | 
62 | 
63 | 
64 | 🔸 Delete one record from a table
65 | delete one record: delete from table_name where id=1;
66 | 
67 | 
68 | 🔸 Delete all records from a table (keep the structure)
69 | use ss-vps1;
70 | show tables;
71 | DELETE FROM `22`;
72 | DELETE FROM `ss_node_online_log`;
73 | ❗️❗️ the table name must be wrapped in backticks ``. these are not single quotes; it is the character below the ESC key. ❗️❗️
74 | 
75 | while at it, also delete the useless data in ss-vps1:
76 | DELETE FROM `ss_checkin_log`;
77 | DELETE FROM `ss_node_info_log`;
78 | DELETE FROM `user_traffic_log`;
79 |
80 |
81 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-7️⃣ Proxy ➜ Traefik.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️7️⃣ Proxy ➜ Traefik ❌
4 | ---
5 |
6 |
7 | # K8s Proxy ✶ Traefik
8 |
9 |
10 | 🔵 WHY
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 misc
23 | https://artifacthub.io/packages/helm/traefik/traefik
24 |
25 | helm repo add traefik https://helm.traefik.io/traefik
26 | helm install traefik traefik/traefik --namespace=test
27 | kubectl port-forward -n test deploy/traefik 9000:9000
28 |
29 |
30 |
31 |
32 |
33 |
34 | cat <<'EOF' > proxy-traefik.yaml
35 | apiVersion: traefik.containo.us/v1alpha1
36 | kind: IngressRoute
37 | metadata:
38 | name: dashboard
39 | spec:
40 | entryPoints:
41 | - web
42 | routes:
43 | - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
44 | kind: Rule
45 | services:
46 | - name: api@internal
47 | kind: TraefikService
48 | EOF
49 |
50 | kubectl apply -f proxy-traefik.yaml
51 |
52 |
53 |
54 |
55 |
56 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K3s. Traefik
57 |
58 | k3s uses traefik as the default ingress controller,
59 | so there is no need to install it, just configure it:
60 | cat /var/lib/rancher/k3s/server/manifests/traefik.yaml
61 |
62 |
63 |
64 | 🔵 k3s traefik dashboard
65 |
66 |
67 | spec:
68 | chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
69 | set:
70 | rbac.enabled: "true"
71 | ssl.enabled: "true"
72 | metrics.prometheus.enabled: "true"
73 | kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
74 | image: "rancher/library-traefik"
75 |
76 | dashboard.enabled: "true" # <-- add this line
77 | dashboard.domain: "traefik.internal" # <-- and this one with a resolvable DNS name
78 |
79 |
80 | Helm will pick up the changes automagically
81 | and the dashboard will be available under http://traefik.internal/dashboard/.
82 | Keep in mind that after a reboot of the master the file will be restored without the added lines.
83 |
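 | a quick check that k3s re-applied the edited manifest (sketch; resource names are the k3s defaults):
 | 
 | kubectl -n kube-system get helmcharts.helm.cattle.io traefik -o yaml | grep -A8 'set:'
 | kubectl -n kube-system get pods | grep traefik
 | 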
84 |
85 | traefik.k3s.rv.net
86 |
--------------------------------------------------------------------------------
/docusaurus.config.js:
--------------------------------------------------------------------------------
1 | // @ts-check
2 | // Note: type annotations allow type checking and IDEs autocompletion
3 |
4 | const lightCodeTheme = require('prism-react-renderer/themes/github');
5 | const darkCodeTheme = require('prism-react-renderer/themes/dracula');
6 |
7 | /** @type {import('@docusaurus/types').Config} */
8 | const config = {
9 | title: '🎪',
10 | tagline: 'Homelab build doc',
11 | url: 'https://mirandaxx.github.io',
12 | baseUrl: '/',
13 | onBrokenLinks: 'throw',
14 | onBrokenMarkdownLinks: 'warn',
15 | favicon: 'img/favicon.ico',
16 | organizationName: 'mirandaxx', // Usually your GitHub org/user name.
17 | projectName: 'mirandaxx.github.io', // Usually your repo name.
18 | deploymentBranch: 'gh-pages', // no change. must add this line.
19 |
20 |
21 | i18n: {
22 | defaultLocale: 'en',
23 | locales: ['en'],
24 | },
25 |
26 | presets: [
27 | [
28 | 'classic',
29 | /** @type {import('@docusaurus/preset-classic').Options} */
30 | ({
31 | docs: {
32 | routeBasePath: '/',
33 |
34 | //editUrl:
35 | // 'https://github.com/facebook/docusaurus/tree/main/packages/create-docusaurus/templates/shared/',
36 | },
37 | blog: false,
38 | theme: {
39 | customCss: require.resolve('./src/css/custom.css'),
40 | },
41 | }),
42 | ],
43 | ],
44 |
45 | themeConfig:
46 | /** @type {import('@docusaurus/preset-classic').ThemeConfig} */
47 | ({
48 | navbar: {
49 | title: 'RV-LAB',
50 | logo: {
51 | alt: 'My Site Logo',
52 | src: 'img/logo.svg',
53 | },
54 | items: [
55 | {
56 | href: 'https://github.com/MirandaXX',
57 | label: 'GitHub',
58 | position: 'right',
59 | },
60 | ],
61 | },
62 | footer: {
63 | style: 'dark',
64 | links: [
65 | ],
66 | copyright: `Copyright © ${new Date().getFullYear()} RV-LAB, Inc. Built with Docusaurus.`,
67 | },
68 | prism: {
69 | theme: lightCodeTheme,
70 | darkTheme: darkCodeTheme,
71 | },
72 | }),
73 | };
74 |
75 | module.exports = config;
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️ Cluster ➜➜ K3s.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️ Cluster ➜➜ K3s
4 | ---
5 |
6 | # K3s
7 |
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 LAB. K3s ➜ vps
12 | 🔵 why
13 |
14 | k8s / minikube need 2 GB+ of ram.
15 | k3s only needs 512 MB+.
16 |
17 |
18 | 🔵 K3s cluster.
19 |
20 | k3s: master
21 | vps: worker
22 |
23 |
24 |
25 | 🔵 Hostname & hosts
26 |
27 | 🔶 hostname
28 |
29 | hostnamectl set-hostname K3s.MGR
30 | hostnamectl set-hostname K3s.DKT
31 | hostnamectl set-hostname K3s.VPS
32 |
33 |
34 | 🔶 hosts file
35 |
36 | vi /etc/hosts
37 |
38 | 172.16.1.33 K3s.MGR
39 | 172.16.1.144 K3s.DKT
40 | 10.214.214.214 K3s.VPS
41 |
42 |
43 | if the nodes use different ip ranges,
44 | just make sure every node can ping the other nodes;
45 | any reachable ip works.
46 |
47 |
48 | 🔵 install docker: all nodes
49 |
50 |
51 |
52 | 🔵 master node
53 |
54 | 🔶 install latest version ➜ curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -
55 |
56 | 🔶 uninstall - option ➜ /usr/local/bin/k3s-uninstall.sh
57 |
58 | 🔶 check Status ➜ systemctl status k3s
59 |
60 | 🔶 check Node ➜ sudo kubectl get nodes -o wide
61 |
62 | 🔶 open firewall ➜ ufw allow 6443,443 proto tcp
63 |
64 | 🔶 get join token ➜ sudo cat /var/lib/rancher/k3s/server/node-token
65 |
66 |
67 |
68 | 🔵 worker node join k3s
69 | 
70 | curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<node-token> INSTALL_K3S_CHANNEL=latest sh -
71 |
72 | curl -sfL http://get.k3s.io | K3S_URL=https://10.214.214.33:6443 K3S_TOKEN=K10cdcb44scaeacc87089a29910422e3d873f2eb4a245fdbc9c14::server:9bb20d4d146766c028555f520af8c243 INSTALL_K3S_CHANNEL=latest sh -
73 | curl -sfL http://get.k3s.io | K3S_URL=https://172.16.1.33:6443 K3S_TOKEN=K10cdcbs6as10422e3d873f2eb4a245fdbc9c14::server:9bb20d4d146766c028555f520af8c243 INSTALL_K3S_CHANNEL=latest sh -
74 |
75 |
76 | 🔶 leave k3s ➜ /usr/local/bin/k3s-killall.sh
77 |
78 |
79 |
80 |
81 |
82 |
83 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
84 |
85 | 🔵 lens add k3s
86 |
87 | /etc/rancher/k3s/k3s.yaml
88 | change the server ip inside:
89 | server: https://127.0.0.1:6443
90 | server: https://172.16.1.144:6443
91 |
92 |
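 | copy sketch - pull the kubeconfig to your workstation and point it at the master (replace <master-ip>):
 | 
 | scp root@<master-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
 | sed -i 's/127.0.0.1/<master-ip>/' ~/.kube/k3s.yaml        # on macOS use: sed -i ''
 | KUBECONFIG=~/.kube/k3s.yaml kubectl get nodes             # lens can import the same file
 | 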
93 |
94 |
--------------------------------------------------------------------------------
/docs/🎪-9️⃣ 📀📀 S3 ➜ MinIO.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1920
3 | title: 🎪-9️⃣📀📀 S3 ➜ MinIO
4 | ---
5 |
6 | # Storage ✶ S3 ➜ MinIO
7 |
8 |
9 |
10 | 🔵 MinIO Desc
11 |
12 | OSS (object storage):
13 | gives each file an address/url;
14 | use the url to access the file, e.g. to download it.
15 | Amazon S3 / MinIO
16 | 
17 | 
18 | a lot of services need s3 storage.
19 | ceph has an s3 gateway, but it is not easy to configure, so use this instead.
20 |
21 |
22 |
23 | 🔵 Demo Desc
24 |
25 |
26 |
27 | 🔵 tool
28 |
29 | s3cmd:
30 | access s3 from linux, almost like a local disk.
31 |
32 |
33 |
34 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Docker Compose Demo
35 |
36 | STOminio:
37 | container_name: STO-S3-MinIO
38 | image: minio/minio:latest
39 | restart: always
40 | privileged: true
41 | ports:
42 | - "9000:9000" # api port
43 | - "9009:9001" # webui port
44 | environment:
45 | MINIO_ROOT_USER: admin # webui: user name
46 | MINIO_ROOT_PASSWORD: admin # webui: user password
47 |
48 |
49 | volumes:
50 | - /mnt/dpnvme/DMGS-DKP/STO-S3-MinIO-Disk-NeverDEL:/data
51 | # disk
52 | - /mnt/dpnvme/DMGS-DKP/STO-S3-MinIO-Config:/root/.minio/
53 | # config
54 |
55 | command: server --console-address ':9001' /data
56 | # let minio use /data as its disk
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Config
65 |
66 |
67 |
68 | 🔵 Basic
69 |
70 | Bucket: like disk
71 | Object: like file / folder
72 |
73 |
74 |
75 | 🔵 Create Bucket Config
76 |
77 | s3-dir-wikijs
78 | versioning ‼️
79 | can only be enabled at creation.
80 |
81 |
82 | 🔵 set Bucket permit
83 |
84 | for a simple setup use a policy;
85 | for a custom setup use rules.
86 | 
87 | xx bucket >> manage >> access policy ➜ private / public / custom (use rules below)
88 | xx bucket >> manage >> access rules ➜
89 |
90 |
91 |
92 | 🔵 visit public bucket file
93 |
94 | s3-dir-wikijs/11.png
95 | http://172.16.1.140:9000/s3-dir-wikijs/11.png
96 |
97 |
98 |
99 | 🔵 visit Private bucket file
100 |
101 | 🔶 Create API user!
102 |
103 | there is no api user by default:
104 | the docker env vars only create the minio webui root user, not an api user.
105 | here we need an api user (access key / secret key).
106 | 
107 | 
108 | 🔶 Visit private file
109 | 
110 | if you have webui access, visit the file via the webui.
111 | if there is no webui, install a tool and access the file from the cmd line.
112 | on linux, s3cmd is a very good tool.
113 |
114 |
115 | 🔵 s3cmd linux use demo
116 |
117 | login.
118 | https://www.bilibili.com/video/BV1w34y1k7BG?vd_source=7a6c9ba7c1460c134545b9e1f189e1cd
119 |
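 | a minimal s3cmd setup sketch (endpoint/bucket are this doc's examples; access/secret key come from the API user created above):
 | 
 | s3cmd --configure        # or edit ~/.s3cfg directly:
 | #   host_base   = 172.16.1.140:9000
 | #   host_bucket = 172.16.1.140:9000
 | #   use_https   = False
 | s3cmd ls s3://s3-dir-wikijs
 | s3cmd put 11.png s3://s3-dir-wikijs/11.png
 | 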
120 |
--------------------------------------------------------------------------------
/docs/🎪-9️⃣ 📀 STO NAS ➜➜ Alist.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1910
3 | title: 🎪-9️⃣📀 STO NAS ➜➜ Cloud Drive
4 | ---
5 |
6 | # NAS ✶ Cloud ➜ Alist
7 |
8 |
9 |
10 |
11 | 🔵 Needs: Sync ✅ cloud & nas
12 |
13 | 1. use synology cloud sync app: sync all dropbox & google drive to NAS
14 | 2. use nas / alist to manage synced folder in NAS
15 |
16 |
17 | 🔵 Needs: Sync ❌ local & local
18 |
19 |
20 | 🔵 Need: Manager ✅
21 |
22 | connect all cloud drives & local drives to alist.
23 | use the alist website to manage all drives.
24 | 
25 | you can also share files with other people via alist;
26 | it has a guest mode.
27 |
28 |
29 |
30 |
31 | 🔵 Misc
32 |
33 | s3cmd?
34 | mount the nas`s s3 on linux.
35 | use rsync; put all configs in s3.
36 |
37 |
38 |
39 |
40 |
41 |
42 | 🔵 Needs: Version Control / git
43 |
44 |
45 |
46 |
47 |
48 | 🔵 Need: Share / web/ internet / webdav
49 |
50 | how to share files with other people.
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Alist Demo
63 |
64 | 🔵 Alist - Docker
65 |
66 | docker run -d --restart=always \
67 | -v /etc/alist:/opt/alist/data \
68 | -p 5244:5244 \
69 | --name="alist" \
70 | xhofe/alist:latest
71 |
72 |
73 |
74 |
75 | 🔵 Login
76 |
77 | http://10.1.1.89:5244/@manage/login
78 |
79 | 🔶 default password
80 |
81 | check log
82 | docker exec -it alist ./alist -password
83 |
84 |
85 |
86 | 🔵 change admin password.
87 |
88 |
89 | 🔵 add local folder
90 |
91 | 🔶 mount
92 |
93 | mount the nas folder into the docker container.
94 | then alist mounts it as a local folder.
95 |
96 |
97 | 🔶 add account/folder
98 |
99 | type: native (local drive)
100 | virtual path: /xxx ➜ the path shown on the alist dashboard.
101 | index (sets the folder order)
102 | /
103 | 
104 | 
105 | extract_folder
106 | front: when sorting, put folders first
107 | back: when sorting, put folders last
108 |
109 |
110 |
111 | 🔵 add folder password
112 |
113 | alisy.0214.icu/Dir1/xxx
114 | alisy.0214.icu/Dir1/yyy
115 |
116 | if you want to add a password to folder Dir1:
117 |
118 | meta
119 | path: /Dir1
120 | password: set a password
121 | save
122 |
123 |
124 |
125 | 🔵 set alist reverse proxy
126 |
127 | use traefik
128 |
129 |
130 |
131 |
132 |
133 |
134 | 🔵 Add remote google Driver ✅✅✅✅✅✅
135 |
136 | ‼️ Official best manual https://alist-doc.nn.ci/docs/driver/googledrive/
137 | ‼️ Official best manual https://alist-doc.nn.ci/docs/driver/googledrive/
138 | ‼️ Official best manual https://alist-doc.nn.ci/docs/driver/googledrive/
139 |
140 |
141 | 🔶 token tool:
142 |
143 | https://tool.nn.ci/google/request
144 |
145 |
146 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-1️⃣ Helm ➜ Basic.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️1️⃣ Helm ➜ Basic
4 | ---
5 |
6 | # Helm Basic
7 |
8 |
9 | https://www.youtube.com/watch?v=6mHgb3cDOjU&list=PLmOn9nNkQxJHYUm2zkuf9_7XJJT8kzAph&index=45&ab_channel=%E5%B0%9A%E7%A1%85%E8%B0%B7IT%E5%9F%B9%E8%AE%AD%E5%AD%A6%E6%A0%A1
10 |
11 |
12 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Helm Install
13 |
14 | 🔵 K8s tool
15 |
16 | 🔶 K8s APP-GUI ➜ lens ➜ manage local + remote clusters. ➜ best
17 | 🔶 K8s CMD-GUI ➜ k9s ➜ manage a local cluster.
18 | 🔶 K8s package ➜ helm ➜ like apt; makes deploying apps to k8s very easy.
19 |
20 |
21 | 🔻 lens enable prometheus ✅
22 |
23 | lens has a built-in prometheus,
24 | but you need to enable it for each cluster.
25 |
26 | cluster >> setting >> lens metric >> enable all >> restart app
27 |
28 |
29 |
30 |
31 |
32 | 🔵 Helm Use
33 |
34 | install helm on the cluster manager node.
35 | 
36 | add a helm app repo, then install the helm app.
37 | 
38 | configure the app & run it. (command sketch below)
39 |
40 |
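 | the steps above as commands (sketch; bitnami/nginx is just the example chart used later in this doc):
 | 
 | helm repo add bitnami https://charts.bitnami.com/bitnami
 | helm repo update
 | helm install my-nginx bitnami/nginx
 | helm list                          # check the release
 | kubectl get svc my-nginx           # see how it is exposed
 | 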
41 |
42 | 🔵 Helm search app
43 |
44 | 🔶 use cmd 👎
45 |
46 | helm search hub xxx ➜ search all available repos (Artifact Hub)
47 | helm search repo xxx ➜ search repos already added locally
48 |
49 |
50 | 🔶 use website 👍
51 |
52 | https://artifacthub.io/packages/search?kind=0&sort=relevance&page=1
53 | the website tells you how to install & configure each chart.
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
63 |
64 |
65 | 🔵 available value - how to check ✅
66 | 
67 | an app only lets you change certain values,
68 | depending on how the helm chart was created.
69 | 
70 | every app has different available values;
71 | you have to check on the helm app`s website.
72 |
73 |
74 | 🔻 use web 👍
75 | https://artifacthub.io/packages/helm/bitnami/nginx
76 | https://artifacthub.io/packages/helm/bitnami/nginx?modal=values
77 |
78 | you can check all the info on the web; no cmd needed.
79 | 
80 | 
81 | 🔻 use cmd 👎
82 | -- with the cmd you must add the repo first, then check:
83 |
84 | helm repo add bitnami https://charts.bitnami.com/bitnami
85 | helm show values bitnami/nginx
86 |
87 | helm repo add bitnami https://charts.bitnami.com/bitnami
88 | helm show values bitnami/mariadb
89 |
90 |
91 |
92 | 🔵 available value - check demo
93 |
94 | https://artifacthub.io/packages/helm/bitnami/nginx
95 |
96 | Traffic Exposure parameters
97 | service.ports.http 80
98 |
99 |
100 |
101 | 🔵 custom value
102 |
103 | two ways to set your custom values:
104 | -f ➜ use yaml file
105 | --set ➜ use cmd
106 |
107 | 🔶 set demo
108 |
109 | helm install my-nginx bitnami/nginx --set xxx=yyy
110 | helm install my-nginx bitnami/nginx --set service.ports.http=880
111 | helm install my-nginx bitnami/nginx --set service.ports.http=880,service.ports.https=8443
112 |
113 |
114 | 🔶 yaml demo
115 |
116 | helm install my-nginx bitnami/nginx -f xxx.yaml
117 | helm install my-nginx bitnami/nginx -f config-nginx.yaml
118 |
119 |
120 | cat <<EOF > config-nginx.yaml
121 | service:
122 |   ports: {http: 880, https: 8443}
123 | EOF
124 |
125 | http://172.16.1.33:880
126 | https://172.16.1.33:8443
127 |
128 |
129 |
130 |
131 |
132 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪🎪-🔐🔐 Auth ➜➜ Radius.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 4930
3 | title: 🎪🎪🎪🎪-🔐🔐 Auth ➜➜ radius
4 | ---
5 |
6 |
7 | # Auth ➜ radius
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 radius. server Demo Synology ✅
12 |
13 | 🔵 nas. create ldap & user
14 |
15 | 🔶 create ldap
16 | fqdn: guest.rv.ark
17 | base dn: dc=guest,dc=rv,dc=ark
18 | bind dn: uid=root,cn=users,dc=guest,dc=rv,dc=ark
19 |
20 | 🔶 enable admin & set password
21 |
22 | 🔶 create test user user01 12345678
23 |
24 | 🔵 join nas to ldap.
25 |
26 | dns address: needs a dns server that can resolve guest.rv.ark
27 |
28 | dn account: admin
29 | base dn: dc=guest,dc=rv,dc=ark
30 |
31 |
32 |
33 | 🔵 nas create radius.
34 |
35 | so radius can use the users in ldap.
36 | 1. auth port 1812.
37 | allow ldap users only
38 |
39 |
40 |
41 | 🔵 add radius client
42 |
43 | the wifi ap / ros device is the client.
44 |
45 |
46 |
47 |
48 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 radius. server Demo ros ✅
49 | https://kknews.cc/code/o4a4ne6.html
50 |
51 | ros has a radius server too:
52 | it is called User Manager.
53 |
54 |
55 | 🔵 enable radius server
56 |
57 | manager >> session >> settings .. enable
58 | 1812 auth
59 | 1813 acct
60 | 
61 | advanced: can use paypal
62 |
63 |
64 | 🔵 add client device(not user)
65 |
66 | all wifi ap hardware need join radius first
67 | so their wireless can support radius.
68 |
69 |
70 | name: any
71 | shared secret: encrypts traffic between the device and ros.
72 | client device ip:
73 | coa port: 3799 by default.
74 |
75 |
76 |
77 | 🔵 SRV: Create group - option
78 |
79 | rg-gst-wifi guest wifi radius group
80 | here you can set a lot, like speed limits.
81 |
82 |
83 |
84 | 🔵 SRV: Create user
85 |
86 | manager >> user >> add
87 |
88 |
89 | 🔵 SRV: User Manager Portal
90 |
91 | http://router-ip/um
92 | really good...
93 |
94 |
95 | 🔵 SRV: Misc
96 |
97 | 🔶 html customization
98 | 
99 | every file can be customized;
100 | the html can change everything..
101 | User Manager data is in user-manager5
102 |
103 |
104 | 🔵 SRV: enable radius incoming ?
105 | 
106 | ros >> radius >> incoming >> accept port 3799
107 |
108 |
109 |
110 | 🟢 CLI - Device Join Radius
111 | mikrotik chr (client) joins mikrotik rb4011 (server)
112 | 
113 | radius >> add >>
114 | service:
115 | server address: 10.111.111.11
116 | secret: test
117 | protocol: (set in srv. manager, incl. cert... )
118 |
119 |
120 | 🔵 user manager session
121 |
122 | user manager >> sessions are for client users, not for client devices.
123 | when you join wifi hardware, you cannot check there whether it joined.
124 | configure the wifi to use radius, then use a radius user to log in to the wifi;
125 | then you can see that user in the sessions.
126 |
127 |
128 |
129 | 🟢 CLI - User Join Radius
130 | enable radius on the ap.
131 | done..
132 | then you can use the radius server`s users to log in
133 |
134 |
135 |
136 |
137 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Ros. User Manager - Detail
138 |
139 | ‼️ User Manager offical manual https://help.mikrotik.com/docs/display/ROS/User+Manager
140 | ‼️ User Manager offical manual https://help.mikrotik.com/docs/display/ROS/User+Manager
141 | ‼️ User Manager offical manual https://help.mikrotik.com/docs/display/ROS/User+Manager
142 |
143 |
144 |
145 | 🔵 Srv. let radius support ldap ?
146 | windows ad has NPS; if you use windows ad, it can support radius.
147 | openldap ... no ?? so forget it.
148 |
149 |
150 |
151 |
152 | 🔵 um basic
153 |
154 | UM >> user >>
155 |
156 | ◎ Shared users: 1 ➜ do not allow many devices to log in with one account at the same time.
157 |
158 |
159 |
160 | 🔵 func - pay
161 |
162 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # 🎪 HomeLAB Build Doc 🎪
2 |
3 |
4 | Blog Link: https://mirandaxx.github.io
5 |
6 |
7 |
8 | 🔵 Hardware
9 |
10 | xxxx.x Starlink
11 | 0219.1 FortiGate 60F
12 |
13 | 0219.11 Mikrotik RB4011
14 | 0219.22 Mikrotik CRS328
15 |
16 | 0219.33 Mikrotik AP-Master
17 | 1928.40 Ruckus AP-Guest-Mesh_01
18 | 1928.41 Ruckus AP-Guest-Mesh_02
19 |
20 | 0219.13 HP-Zbook_G3 Esxi-G3
21 | 0219.15 HP-Zbook_G5 Esxi-G5
22 |
23 | 1001.88 Synology NAS
24 | 0099.xx Camera*6
25 |
26 |
27 |
28 |
29 |
30 |
31 | 🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐 NET
32 |
33 | 🔵 Structure
34 |
35 | ✅ VPN: Wireguard + Netmaker
36 | ✅ DNS: AdGuard
37 | ✅ Proxy: Traefik
38 | 🚫 VXLAN: NSXT (use too much cpu ram)
39 |
40 |
41 | 🔵 VPN
42 |
43 | vps.s 1214.214
44 |
45 | ros.c 1214.011
46 | ros.c 1214.022
47 |
48 | k3s.c 1214.033
49 | dkt.c 1214.144
50 | mac.c 1214.099
51 |
52 |
53 |
54 | 🔵 VLAN
55 |
56 | MGR_1219 10.219.219.0/24 Manager
57 | OWN_1111 10.111.111.0/24 Owner
58 |
59 | Gst_0168 192.168.168.0/24 Guest Wifi
60 |
61 | Srv_1721 172.16.1.0/24 Server
62 | Srv_1728 172.16.8.0/24 Server
63 |
64 | STO_1001 10.1.1.0/24 NAS_01G
65 | STO_1010 10.10.10.0/24 NAS_10G
66 | STO_1012 10.12.12.0/24 CEPH
67 |
68 | SEC_0099 192.168.99.0/24 Camera
69 |
70 |
71 |
72 |
73 |
74 | 🔵 IP Tables
75 |
76 | xxxx.001 ★ Firewall
77 |
78 | xxxx.011 ★ RB4011
79 | xxxx.012 ✩ CHR
80 | xxxx.022 ★ CRS328
81 | 1111.013 ★ AP
82 |
83 | xxxx.088 ★ NAS.HW
84 | xxxx.089 ✩ NAS.VM
85 |
86 | 1720.070 ✩ CEPH.S
87 | 1720.077 ✩ CEPH.C
88 |
89 | 1720.080 ✩ K8s.S
90 | 1720.083 ✩ K8s.C
91 |
92 | 1721.033 ✩ K3s.S.MGR
93 | 1721.144 ✩ K3s.C.DKT
94 | 1214.214 ★ K3s.C.VPS
95 |
96 |
97 | 1111.099 ★ iMAC
98 | 0099.111 ✩ Win7-Camera
99 | 1721.123 ✩ HomeAssist
100 |
101 |
102 |
103 |
104 |
105 |
106 | 📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀 STO
107 |
108 | ✅ NAS: Synology
109 | ✅ S3: MinIO
110 | ✅ RBD: Ceph
111 |
112 |
113 |
114 | 🔵 CEPH-RBD
115 |
116 | Pool_BD-K8s-DB
117 | Pool_BD-K8s-APP
118 |
119 | Pool_BD-K3s-AIO
120 |
121 |
122 |
123 |
124 | 🔵 NAS
125 |
126 | 🔶 NAS.HW
127 |
128 |
129 | 🔶 NAS.VM
130 | - Docker
131 |
132 | - Cloud Sync:
133 | Dropbox * 4
134 | Google Drive * 2
135 |
136 |
137 |
138 |
139 |
140 | 🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐 Auth
141 |
142 |
143 | ✅ LDAP: openLDAP ad.rv.ark
144 | ✅ LDAP: Synology adnas.rv.ark
145 |
146 | ✅ Radius RB4011
147 |
148 | ❌ SSO: Authelia
149 |
150 |
151 |
152 |
153 | 🔵 LDAP Account
154 |
155 | 🔶 nas
156 | adu.nas ➜ user
157 | ada.nas ➜ admin
158 |
159 |
160 |
161 |
162 | 🔵 LDAP client
163 |
164 | Mikrotik. AP ❌
165 |
166 |
167 |
168 | 🔵 Radius
169 |
170 | ✅ AP-Guest
171 |
172 |
173 |
174 | 🔵 SSO
175 |
176 |
177 |
178 |
179 |
180 | 🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢 DB
181 |
182 |
183 |
184 |
185 |
186 |
187 | 💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠 APP
188 |
189 | ✅ dashy
190 |
191 |
192 |
193 |
194 |
195 |
196 | 🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰 Tool
197 |
198 | K8s: GUI: lens
199 |
200 | ✅ code-server: remote config server in web vscode
201 |
202 | ✅ DB Redis-CLI GUI redis-insight
203 |
204 |
205 |
206 |
207 |
208 |
209 |
210 | 🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉 Misc / ToDo
211 |
212 |
213 |
214 |
215 | 💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠 Monitor
216 | 
217 | metric + influxdb + grafana
218 | collect data via Prometheus
219 |
220 |
221 |
--------------------------------------------------------------------------------
/docs/🎪-🦚.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1
3 | title: 🎪-🦚
4 | slug: /
5 | ---
6 |
7 | # LAB Structure
8 |
9 |
10 | 🔵 Hardware
11 |
12 | xxxx.x Starlink
13 | 0219.1 FortiGate 60F
14 |
15 | 0219.11 Mikrotik RB4011
16 | 0219.22 Mikrotik CRS328
17 |
18 | 0219.33 Mikrotik AP-Master
19 | 1928.40 Ruckus AP-Guest-Mesh_01
20 | 1928.41 Ruckus AP-Guest-Mesh_02
21 |
22 | 0219.13 HP-Zbook_G3 Esxi-G3
23 | 0219.15 HP-Zbook_G5 Esxi-G5
24 |
25 | 1001.88 Synology NAS
26 | 0099.xx Camera*6
27 |
28 |
29 |
30 |
31 |
32 |
33 | 🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐🌐 NET
34 |
35 | 🔵 Structure
36 |
37 | ✅ VPN: Wireguard + Netmaker
38 | ✅ DNS: AdGuard
39 | ✅ Proxy: Traefik
40 | 🚫 VXLAN: NSXT (use too much cpu ram)
41 |
42 |
43 | 🔵 VPN
44 |
45 | vps.s 1214.214
46 |
47 | ros.c 1214.011
48 | ros.c 1214.022
49 |
50 | k3s.c 1214.033
51 | dkt.c 1214.144
52 | mac.c 1214.099
53 |
54 |
55 |
56 | 🔵 VLAN
57 |
58 | MGR_1219 10.219.219.0/24 Manager
59 | OWN_1111 10.111.111.0/24 Owner
60 |
61 | Gst_0168 192.168.168.0/24 Guest Wifi
62 |
63 | Srv_1721 172.16.1.0/24 Server
64 | Srv_1728 172.16.8.0/24 Server
65 |
66 | STO_1001 10.1.1.0/24 NAS_01G
67 | STO_1010 10.10.10.0/24 NAS_10G
68 | STO_1012 10.12.12.0/24 CEPH
69 |
70 | SEC_0099 192.168.99.0/24 Camera
71 |
72 |
73 |
74 |
75 |
76 | 🔵 IP Tables
77 |
78 | xxxx.001 ★ Firewall
79 |
80 | xxxx.011 ★ RB4011
81 | xxxx.012 ✩ CHR
82 | xxxx.022 ★ CRS328
83 | 1111.013 ★ AP
84 |
85 | xxxx.088 ★ NAS.HW
86 | xxxx.089 ✩ NAS.VM
87 |
88 | 1720.070 ✩ CEPH.S
89 | 1720.077 ✩ CEPH.C
90 |
91 | 1720.080 ✩ K8s.S
92 | 1720.083 ✩ K8s.C
93 |
94 | 1721.033 ✩ K3s.S.MGR
95 | 1721.144 ✩ K3s.C.DKT
96 | 1214.214 ★ K3s.C.VPS
97 |
98 |
99 | 1111.099 ★ iMAC
100 | 0099.111 ✩ Win7-Camera
101 | 1721.123 ✩ HomeAssist
102 |
103 |
104 |
105 |
106 |
107 |
108 | 📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀📀 STO
109 |
110 | ✅ NAS: Synology
111 | ✅ S3: MinIO
112 | ✅ RBD: Ceph
113 |
114 |
115 |
116 | 🔵 CEPH-RBD
117 |
118 | Pool_BD-K8s-DB
119 | Pool_BD-K8s-APP
120 |
121 | Pool_BD-K3s-AIO
122 |
123 |
124 |
125 |
126 | 🔵 NAS
127 |
128 | 🔶 NAS.HW
129 | NFS. to Esxi (for iso)
130 |
131 |
132 | 🔶 NAS.VM
133 | - Docker
134 |
135 | - Cloud Sync:
136 | Dropbox * 4
137 | Google Drive * 2
138 |
139 |
140 |
141 |
142 |
143 | 🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐🔐 Auth
144 |
145 |
146 | ✅ LDAP: openLDAP ad.rv.ark
147 | ✅ LDAP: Synology adnas.rv.ark
148 |
149 | ✅ Radius RB4011
150 |
151 | ❌ SSO: Authelia
152 |
153 |
154 |
155 |
156 | 🔵 LDAP Account
157 |
158 | 🔶 nas
159 | adu.nas ➜ user
160 | ada.nas ➜ admin
161 |
162 |
163 |
164 |
165 | 🔵 LDAP client
166 |
167 | Mikrotik. AP ❌
168 |
169 |
170 |
171 | 🔵 Radius
172 |
173 | ✅ AP-Guest
174 |
175 |
176 |
177 | 🔵 SSO
178 |
179 |
180 |
181 |
182 |
183 | 🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢🛢 DB
184 |
185 |
186 |
187 |
188 |
189 |
190 | 💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠 APP
191 |
192 | ✅ dashy
193 |
194 |
195 |
196 |
197 |
198 |
199 | 🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰🧰 Tool
200 |
201 | K8s: GUI: lens
202 |
203 | ✅ code-server: remote config server in web vscode
204 |
205 | ✅ DB Redis-CLI GUI redis-insight
206 |
207 |
208 |
209 |
210 |
211 |
212 |
213 | 🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉 Misc / ToDo
214 |
215 | 🔵 NAS - Git
216 |
217 |
218 |
219 |
220 |
221 |
222 |
223 | 💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠💠 Monitor
224 | 
225 | metric + influxdb + grafana
226 | collect data via Prometheus
227 |
228 |
229 |
--------------------------------------------------------------------------------
/docs/🎪-7️⃣ 🌐-3️⃣ DNS ➜ AdGuard.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1703
3 | title: 🎪-7️⃣🌐-3️⃣ DNS ➜ AdGuard
4 | ---
5 |
6 |
7 | # DNS ✶ AdGuard
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 NET DNS: AdGuard Home
12 |
13 |
14 | 🔵 Desc
15 |
16 | adguard = pihole (remove ads) + smartdns (choose the fastest dns)
17 |
18 | https://icloudnative.io/posts/adguard-home/
19 |
20 |
21 | 🔵 AdGuard vs pihole
22 |
23 | https://github.com/AdguardTeam/AdGuardHome
24 |
25 |
26 | 🔵 Docker
27 |
28 | https://hub.docker.com/r/adguard/adguardhome
29 |
30 | 🔶 use host mode if possible.
31 |
32 |
33 | docker run --name adguardhome --network host \
34 | --restart unless-stopped \
35 | -v /my/own/workdir:/opt/adguardhome/work \
36 | -v /my/own/confdir:/opt/adguardhome/conf \
37 | -d adguard/adguardhome
38 |
39 |
40 |
41 | 🔵 add upstream dns server.
42 |
43 | https://p3terx.com/archives/use-adguard-home-to-build-dns-to-prevent-pollution-and-remove-ads-2.html
44 |
45 |
46 |
47 | 🔵 dns filter.
48 |
49 | anti-AD https://anti-ad.net/easylist.txt
50 |
51 | halflife https://cdn.jsdelivr.net/gh/o0HalfLife0o/list@master/ad.txt
52 |
53 |
54 | https://anti-ad.net/easylist.txt
55 |
56 |
57 |
58 |
59 |
60 | 🔶 free dns
61 |
62 | Google 8.8.8.8 8.8.4.4
63 | Cloudflare 1.1.1.1 1.0.0.1
64 | Quad9 9.9.9.9 149.112.112.112
65 | OpenDNS Home 208.67.222.222 208.67.220.220
66 |
67 |
68 |
69 | server 8.8.8.8
70 | server 8.8.4.4
71 | server 1.1.1.1
72 | server 1.0.0.1
73 | server 9.9.9.9
74 | server 149.112.112.112
75 | server 208.67.222.222
76 | server 208.67.220.220
77 |
78 | server-tcp 8.8.8.8
79 | server-tcp 8.8.4.4
80 | server-tcp 1.1.1.1
81 | server-tcp 1.0.0.1
82 | server-tcp 9.9.9.9
83 | server-tcp 149.112.112.112
84 | server-tcp 208.67.222.222
85 | server-tcp 208.67.220.220
86 |
87 | server-tls 8.8.8.8
88 | server-tls 8.8.4.4
89 | server-tls 1.1.1.1
90 | server-tls 1.0.0.1
91 | server-tls 9.9.9.9
92 | server-tls 149.112.112.112
93 | server-tls 208.67.222.222
94 | server-tls 208.67.220.220
95 |
96 |
97 |
98 |
99 |
100 |
101 | 🔵 router dhcp set
102 |
103 | main dns: 172.16.8.2
104 | secondary dns: 172.16.8.4
105 | 3rd dns: 1.1.1.1
106 | 4th dns: 8.8.8.8
107 |
108 |
109 |
110 |
111 | 🔵 result test
112 | 
113 | dig can show how long a dns request takes.
114 | 
115 | compare them yourself:
116 |
117 | dig @1.1.1.1 xclient.info
118 | dig @172.16.8.2 xclient.info
119 |
120 |
121 |
122 |
123 | 🔶 server test
124 |
125 | nslookup -q=a 0214.icu 172.16.8.4 ➜ use 172.16.8.4 as dns server
126 | nslookup -q=a 0214.icu 8.8.8.8 ➜ use 8.8.8.8 as dns server
127 |
128 | U.2204 ~ nslookup -q=a 0214.icu 172.16.8.4
129 | Server: 172.16.8.4
130 | Address: 172.16.8.4#53
131 |
132 | Non-authoritative answer:
133 | Name: 0214.icu
134 | Address: 172.93.42.232
135 |
136 |
137 | 🔵 Ubuntu buildin systemd-resolved Setup ✅
138 |
139 | 🔶 Check 53 status
140 |
141 | sudo lsof -i:53
142 | COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
143 | systemd-r 858 systemd-resolve 13u IPv4 22529 0t0 UDP localhost:domain
144 | systemd-r 858 systemd-resolve 14u IPv4 22530 0t0 TCP localhost:domain (LISTEN)
145 |
146 | port 53 is used by default in ubuntu.
147 | if you want to run smartdns / adguard, they need port 53 too,
148 | so you have to free port 53 first.
149 |
150 |
151 | 🔶 Change resolved.conf
152 |
153 | vi /etc/systemd/resolved.conf
154 | edit line #DNSStubListener=yes
155 | to be DNSStubListener=no
156 |
157 |
158 | this will free port 53.
159 | systemd-resolved is a built-in system function.
160 | ‼️ do not disable the service. just stop it from listening on 53. ‼️
161 | ‼️ do not disable the service. just stop it from listening on 53. ‼️
162 | ‼️ do not disable the service. just stop it from listening on 53. ‼️
163 | otherwise the network will break.
164 |
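🔶 apply & verify, a minimal sketch (the resolv.conf symlink step is optional and depends on your setup):

sudo systemctl restart systemd-resolved
# optional: keep name resolution working by pointing resolv.conf at the real upstream file
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
# port 53 should now be free
sudo lsof -i :53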
165 |
166 |
167 | 🔵 Desc
168 |
169 | smartdns: picks the fastest dns answer for you.
170 | pi-hole : removes ads.
171 |
172 | smartdns IP : 172.16.8.4
173 | pi-hole IP : 172.16.8.2
174 |
175 | pc dns server: 172.16.8.2
176 | pi-hole dns server: 172.16.8.4
177 | smartdns dns server: 1.1.1.1 ...
178 |
179 |
180 |
181 | 🔵 Demo
182 |
183 | pc (pihole ip )
184 | >> pihole (smartdns ip )
185 | >> internet
186 |
187 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-5️⃣ 💠💠💠 Jump Server.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-5️⃣💠💠💠 ➜ Jump Server
4 | ---
5 |
6 | # Jump Server
7 |
8 |
9 |
10 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Jump Server.
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Jump Server.
12 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Jump Server.
13 |
14 | pc (internet) <====> vps (internet)
15 | vps (wireguard.Srv) <======> homelab-JumpServer: wireguard.Cli
16 | homelab-JumpServer <======> Homelab-other machine
17 |
18 |
19 | 🔵 WHY
20 |
21 | visiting the homelab`s internal servers via the internet is a must.
22 | you can install a vpn on any client in the homelab,
23 | but the best way is to use a jump server.
24 |
25 | anyone who wants to connect into your homelab
26 | needs to log in to the jump server first, then type in a username/password.
27 |
28 |
29 |
30 |
31 | 🔵 Jump Server Function
32 | log everything you do!
33 |
34 |
35 |
36 | 🔵 OpenSource Jump Server
37 |
38 | jumpserver/ teleport
39 | https://www.jumpserver.org/
40 |
41 |
42 | 🔶jms_all
43 | https://hub.docker.com/r/jumpserver/jms_all
44 |
45 | need mysql
46 |
47 |
48 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Prepare - mysql
49 |
50 | 🔵 Login
51 |
52 | mycli -h DockerProd.ark -P 3306 -u root
53 | SHOW DATABASES;
54 |
55 | 🔵 Check Database
56 |
57 | ◎ simple info ➜ SHOW DATABASES;
58 | ◎ detail info ➜ select * from information_schema.schemata;
59 |
60 |
61 | 🔵 create DB: DBjumpserver ✔️
62 |
63 | create database DBNAME CHARACTER SET utf8 COLLATE utf8_bin;
64 | create database DBjumpserver CHARACTER SET utf8 COLLATE utf8_bin;
65 | ‼️ Database encoding must be utf8 ‼️
66 | ‼️ Database encoding must be utf8 ‼️
67 |
68 |
69 | 🔵 Create user: ✅
70 |
71 | CREATE USER 'USERjumpserver'@'%' IDENTIFIED BY 'password';
72 |
73 | '%' means the user can connect to mysql remotely (via the server`s lan ip).
74 | without it, by default, remote login is not allowed for the user.
75 |
76 |
77 | 🔵 Give Permit ✅
78 |
79 | GRANT ALL PRIVILEGES ON DBjumpserver.* TO 'USERjumpserver'@'%';
80 | FLUSH PRIVILEGES;
81 |
82 | this only gives USERjumpserver access to DBjumpserver.
83 | no access to other databases on mysql.
84 |
85 |
86 | 🔵 mysql remote test
87 |
88 | mycli -h 172.16.1.140 -P 3306 -u USERjumpserver
89 | show databases;
90 |
91 |
92 |
93 |
94 |
95 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 create key
96 |
97 | 🔶 WHY
98 |
99 | the data in jumpserver is encrypted!
100 | you must create a key when create jumpserver.
101 | if need update docker some day.
102 | use the same key to unlock the old encrypted data ??
103 |
104 | you can create key any linux machine. (linux need have /dev/urandom )
105 | just need keep the key!
106 |
107 |
108 |
109 | 🔶 Create SECRET_KEY
110 |
111 | if [ "$SECRET_KEY" = "" ]; then SECRET_KEY=`cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 50`; echo "SECRET_KEY=$SECRET_KEY" >> ~/.bashrc; echo $SECRET_KEY; else echo $SECRET_KEY; fi
112 |
113 |
114 | 🔶 Create BOOTSTRAP_TOKEN
115 |
116 | if [ "$BOOTSTRAP_TOKEN" = "" ]; then BOOTSTRAP_TOKEN=`cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 16`; echo "BOOTSTRAP_TOKEN=$BOOTSTRAP_TOKEN" >> ~/.bashrc; echo $BOOTSTRAP_TOKEN; else echo $BOOTSTRAP_TOKEN; fi
117 |
118 |
119 |
120 |
121 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Run Jumpserver
122 |
123 | 🔵 First time
124 |
125 | you need to tell the jumpserver container how to connect to the db.
126 | after it connects, jumpserver writes its config and data into /opt/jumpserver/data,
127 | and that folder is mounted out to the host.
128 |
129 | next time you run a new jumpserver container,
130 | there is no need to type in the mysql info again.
131 | just mount the same data folder.
132 |
133 |
134 | 🔶 first run
135 |
136 | docker run --name DPjumpserver -d \
137 | -v /mnt/ceph-img-Docker-prod/DPdata/jumpserver/data:/opt/jumpserver/data \
138 | -p 80:80 \
139 | -p 2222:2222 \
140 | -e SECRET_KEY=vvLHJn9gLneiZsXtPbr019zvFsUE4AmgyNM7psyMbMkWuu7tMf \
141 | -e BOOTSTRAP_TOKEN=f48gD26NeNgl2QfD \
142 | -e DB_HOST=dockerprod.ark \
143 | -e DB_PORT=3306 \
144 | -e DB_USER=USERjumpserver \
145 | -e DB_PASSWORD=xxxxxxxx \
146 | -e DB_NAME=DBjumpserver \
147 | jumpserver/jms_all:latest
148 |
149 |
150 |
151 |
152 |
153 | 🔵 web login
154 |
155 | http://172.16.1.140/ui/#/
156 | http://172.16.1.140/core/auth/login/
157 |
158 | ‼️ chrome shows no web page. safari works. ‼️
159 | ‼️ chrome shows no web page. safari works. ‼️
160 | ‼️ chrome shows no web page. safari works. ‼️
161 |
162 |
163 | the first start takes a long time.
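🔶 to watch the startup progress (assuming the container name DPjumpserver used above):

docker logs -f DPjumpserver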
--------------------------------------------------------------------------------
/docs/🎪-7️⃣ 🌐-7️⃣ VPN ➜ Design.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1717
3 | title: 🎪-7️⃣🌐-7️⃣ VPN ➜ Choose
4 | ---
5 |
6 |
7 | # VPN ✶ Choose
8 |
9 |
10 | 🔵 Final result
11 |
12 | 1 vps:  choose wireguard
13 | 2+ vps: choose netmaker
14 |
15 | wireguard is not smart, but it only needs one vps host.
16 | netmaker is much smarter, but it needs two vps hosts.
17 |
18 | 🔥 the netmaker server itself can not be a vpn node, so you need another vps.
19 | 🔥 the netmaker server itself can not be a vpn node, so you need another vps.
20 | 🔥 the netmaker server itself can not be a vpn node, so you need another vps.
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 | 🔵 Why Wireguard & netmaker
29 |
30 | wireguard is the best and fastest.
31 | netmaker makes deploying wireguard much easier.
32 |
33 |
34 | 🔵 netmaker Desc
35 |
36 | ‼️ Official manual https://netmaker.readthedocs.io/en/master/quick-start.html ‼️
37 | ‼️ Official manual https://netmaker.readthedocs.io/en/master/quick-start.html ‼️
38 | ‼️ Official manual https://netmaker.readthedocs.io/en/master/quick-start.html ‼️
39 |
40 |
41 | client-server mode.
42 | server: a stack of docker containers
43 | client: one docker container / one command
44 |
45 |
46 |
47 | 🔵 Netmaker Srv ( Host vs Docker ) ✅
48 |
49 | ‼️ using Docker is easy, but the host itself can not join the vpn network ‼️
50 | ‼️ using Docker is easy, but the host itself can not join the vpn network ‼️
51 | ‼️ using Docker is easy, but the host itself can not join the vpn network ‼️
52 | https://docs.netmaker.org/troubleshoot.html?highlight=host
53 |
54 |
55 | building the netmaker server node with docker compose is very easy,
56 | but by default the server host itself is not joined to the vpn network,
57 | and you can not install netclient on the server host node:
58 | - you can not install both netmaker & netclient on the same host.
59 | - the host`s docker containers can visit the vpn network.
60 | - the host itself can not visit the vpn network.
61 |
62 | so if you have two vps:
63 | vps_srv installs netmaker.
64 | vps_cli installs netclient. ➜ use this vps to visit your homelab vlan.
65 |
66 |
67 | if you only have one vps,
68 | you need a reverse proxy tool, like traefik / nginx / nginx proxy manager,
69 | to help your vps visit the vlans inside the homelab.
70 | netmaker has a built-in traefik already; we can use that too.
71 |
72 | edit the traefik docker container:
73 | add a static route so traefik forwards homelab vlan traffic to netmaker,
74 | make the traefik container able to visit both the vpn & the homelab lan,
75 | then set up the reverse proxy on traefik.
76 |
77 |
78 |
79 |
80 |
81 |
82 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Netmaker Basic
83 |
84 |
85 | 🔵 Demo & Needs
86 |
87 | we need the iphone to visit all vlans under the homelab, via the internet.
88 |
89 | (ingress Gateway) internet: VPN.Srv.VPS 10.214.214.254
90 | (egress Gateway) homelab: VPN.CLI.LAB 10.214.214.1 ➜ a lot vlan
91 | (client ) iPhone: VPN.CLI.iphone 10.214.214.x
92 |
93 |
94 |
95 | 🔵 Build VPN Network
96 |
97 | 🔶 win / mac / linux
98 |
99 | add srv.vps ➜ configure the netmaker settings.
100 | add cli.lab ➜ connects homelab & vps
101 | now cli.lab can ping srv.vps using 10.214.214.0/24,
102 | but srv.vps is not able to ping the vlans under cli.lab.
103 |
104 |
105 | 🔶 iPhone / Android
106 |
107 | phones are not supported as normal nodes yet,
108 | so one more step is needed: use one node as a relay.
109 |
110 | config srv.vps as the ingress gateway.
111 | add cli.iphone under the ingress gateway. ➜ so the iphone can ping 10.214.214.0/24
112 |
113 |
114 |
115 | 🔵 Config Egress
116 |
117 | the egress gateway allows you to visit the vlans under the homelab.
118 | config cli.lab as the egress gateway.
119 | allow 10.214.214.0/24 to visit the vlans under the homelab.
120 | egress gateway range: 172.16.1.0/24 ➜ one vlan under the homelab.
121 | interface name: ens160 ➜ the nic netmaker forwards 172.16.1.0 traffic to.
122 | log in to the cli.lab server to get your nic name.
123 |
124 | egress will add postup & postdown firewall rules to the cli.lab node, like:
125 | iptables -A FORWARD -i nm-demo -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
126 | iptables -D FORWARD -i nm-demo -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
127 |
128 | egress will also add the 172.16.1.0/24 route to all vpn nodes,
129 | so all vpn nodes can ping 172.16.1.0/24.
130 |
131 |
132 | 🔶 srv node test
133 |
134 | we run netmaker as a docker container on the vps,
135 | so we need to enter the container to test. -.-
136 | it works. (sketch below)
137 |
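🔶 a minimal test sketch (assumes the server container is named netmaker and the image has ping available):

docker exec -it netmaker ping -c 3 172.16.1.1    # any ip inside the egress range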
138 |
139 | 🔶 cli.iphone test
140 |
141 | ‼️ if you change some netmaker config, the iphone must delete the old vpn config and rescan the qr code ‼️
142 | ‼️ if you change some netmaker config, the iphone must delete the old vpn config and rescan the qr code ‼️
143 | ‼️ if you change some netmaker config, the iphone must delete the old vpn config and rescan the qr code ‼️
144 |
145 |
146 |
147 |
148 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-3️⃣ STO ➜➜ Demo.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️3️⃣ STO ➜ Demo
4 | ---
5 |
6 |
7 | # Storage ✶ Demo
8 |
9 |
10 | 🔵 Must Know
11 |
12 | 🔶 rbd vs nfs
13 |
14 | most cluster storage uses rbd / nfs
15 | - rbd is for one machine/pod
16 | - nfs is for many machines/pods
17 |
18 |
19 | 🔶 pvc VS volume-folder
20 |
21 | we use rbd
22 | ➜ means one pod needs one pvc!
23 | ➜ we need to create a lot of pvcs!
24 |
25 | ➜ means for every folder you mount in docker/docker-compose,
26 | you need to create a pvc in k8s.
27 |
28 | a pvc is like a volume folder.
29 | if you delete the pod, the pvc is still there.
30 | but if you delete the pvc, all data is gone.
31 |
32 |
33 | 🔶 PV / StorageClass
34 |
35 | a pv is just like a hardware disk.
36 | you just need to decide which disk to use.
37 | there are lots of kinds of disks -.-
38 |
39 | - prod disk
40 | - test disk
41 |
42 | - fast disk
43 | - slow disk
44 |
45 | - backup-yes disk
46 | - backup-no disk
47 |
48 |
49 | 🔶 PV & PVC
50 |
51 | PV: like a formatted disk.
52 | PVC: like a folder
53 |
54 | pv & pvc look the same to you.
55 | when you delete a pvc, it may auto delete the pv for you.
56 | - depends on whether the storage supports this function
57 |
58 |
59 | 🔶 StorageClass & real storage cluster
60 |
61 |
62 |
63 |
64 | 🔵 How PVC Works
65 |
66 | you create a pvc.
67 | 🔥 k8s auto creates a pv & binds it to your pvc.
68 |
69 |
70 | you create a pod & use the pvc. (pvc sketch below)
71 |
72 | you delete the pod & delete the pvc:
73 | if you do not need to keep the data, delete your pvc.
74 | if you do need to keep the data, never delete your pvc.
75 | 🔥 k8s may auto delete the pv for you !!!!!
76 | - depends on whether the storage supports this function
77 |
78 |
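🔶 a minimal pvc sketch (the storageclass name sc-prod-2 is just a placeholder, use your own):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sc-prod-2     # placeholder, use your own storageclass
  resources:
    requests:
      storage: 5Gi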
79 |
80 | 🔵 StorageClass config
81 |
82 | a StorageClass decides what kind of disk/pv a user can use.
83 |
84 | a StorageClass is bound to a storage cluster pool.
85 | different pools have different configs.
86 | how you config the storageclass depends on your pool config.
87 |
88 | you do not need many kinds of disks/storageclasses.
89 |
90 | SC-Test.0 ➜ Test pool ➜ normal disk with 0 copy backup
91 | SC-Prod.2 ➜ Prod pool ➜ normal disk with 2 copy backup
92 | SC-DB     ➜ DB pool   ➜ high performance disk    (storageclass sketch below)
93 |
94 |
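🔶 a ceph-csi rbd storageclass sketch (clusterID, pool and the secret names are placeholders for your own ceph cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-prod-2                      # placeholder name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <your-ceph-cluster-id>    # placeholder
  pool: prod-pool                      # placeholder
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true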
95 |
96 |
97 |
98 | 🔵 PVC vs Volume_folder
99 |
100 | in docker ➜ a volume is a folder on the host.
101 | in k8s   ➜ a pvc is an rbd disk attached to the pod (not on the host)
102 |
103 |
104 | in docker you can create a folder/file for the volume very easily.
105 | it is local.
106 |
107 | in k8s, you can only visit the pvc folder inside the pod!
108 | it is an rbd disk.
109 | rbd means one disk is only allowed to be used by one pod.
110 | some pods can not start without some config file,
111 | so you have to mount the pvc in another pod first, in order to put files into the pvc.
112 | -.-
113 |
114 |
115 |
116 |
117 |
118 |
119 |
120 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
121 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 misc
122 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
123 |
124 |
125 |
126 | 🔵 PV Access Modes ✅
127 |
128 | ◎ RWO ReadWriteOnce
129 | ◎ ROX ReadOnlyMany
130 | ◎ RWX ReadWriteMany
131 |
132 | the access mode is how the disk can be mounted to nodes.
133 |
134 | most common is RWO:
135 | the disk mounts to one node only. that node can read and write data.
136 |
137 | many storages like ceph support ROX:
138 | the disk can mount to many nodes, but read-only.
139 |
140 | fewer storages support RWX:
141 | the disk can mount to many nodes, and all nodes can read and write!
142 |
143 | not all storage supports the three modes above.
144 | ‼️ ceph-csi (rbd) only supports RWO and ROX. no RWX ‼️
145 | ‼️ ceph-csi (rbd) only supports RWO and ROX. no RWX ‼️
146 | ‼️ ceph-csi (rbd) only supports RWO and ROX. no RWX ‼️
147 |
148 |
149 | ==
150 | 🔵 RBD-Pod Volume set (full sketch below)
151 |
152 | pod
153 | >> spec
154 | >> containers
155 | >> volumeDevices       # ‼️ this means a block (rbd) type volume is used
156 | - name: KeepWeSame      # ‼️ which volume to use; must match the name 3 lines below
157 | - devicePath            # path of the device inside the container
158 |
159 | pod
160 | >> spec
161 | >> volumes:
162 | - name: KeepWeSame
163 |
164 |
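🔶 a minimal pod sketch using a block-mode pvc (pvc-block-demo is a placeholder; that pvc must be created with volumeMode: Block):

apiVersion: v1
kind: Pod
metadata:
  name: rbd-block-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeDevices:               # raw block device, not a filesystem mount
    - name: KeepWeSame
      devicePath: /dev/xvda      # device path inside the container
  volumes:
  - name: KeepWeSame
    persistentVolumeClaim:
      claimName: pvc-block-demo  # placeholder pvc name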
165 |
166 |
167 |
168 |
169 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
170 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
171 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
172 |
173 | 🔵 one pvc . mount many folder. ✅
174 |
175 | apiVersion: v1
176 | kind: Pod
177 | metadata:
178 | name: test
179 | spec:
180 | containers:
181 | - name: test
182 | image: nginx
183 | volumeMounts:
184 |         # mount the website data
185 | - name: config
186 | mountPath: /usr/share/nginx/html
187 | subPath: html
188 |         # mount the Nginx config
189 | - name: config
190 | mountPath: /etc/nginx/nginx.conf
191 | subPath: nginx.conf
192 | volumes:
193 | - name: config
194 | persistentVolumeClaim:
195 | claimName: test-nfs-claim
196 |
197 |
198 |
199 |
200 |
201 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-0️⃣ K8s ➜ Basic.1.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️0️⃣ K8s ➜ Basic
4 | ---
5 |
6 | # K8s ✶ Basic
7 |
8 |
9 |
10 | ---
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 POD ✅
12 |
13 |
14 | 🔵 pod config
15 |
16 | image control
17 | restart control
18 | resource control ➜ set/limit cpu & ram
19 |
20 | health check ➜ the container running does not mean the service is running.
21 |              ➜ the health check does not check the pod status,
22 |              ➜ it checks your real service status inside the pod.
23 |
24 |
25 |
26 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Node Choose ✅
27 |
28 | https://www.youtube.com/watch?v=PQHjwA2qZdQ&list=PLmOn9nNkQxJHYUm2zkuf9_7XJJT8kzAph&index=28&ab_channel=%E5%B0%9A%E7%A1%85%E8%B0%B7IT%E5%9F%B9%E8%AE%AD%E5%AD%A6%E6%A0%A1
29 |
30 |
31 | 🔵 how k8s choose node
32 |
33 | manually choose a node: nodeName
34 | auto choose a node: resources / nodeSelector / nodeAffinity
35 |
36 |
37 |
38 |
39 | 🔵 node name
40 |
41 | apiVersion: v1
42 | kind: Pod
43 | metadata:
44 | name: nginx
45 | spec:
46 | nodeName: foo-node # use this node
47 |
48 |
49 |
50 | 🔵 node selector
51 |
52 | one k8s cluster has lots of nodes.
53 | some nodes are for the prod env.
54 | some nodes are for the test env.
55 | k8s allows you to choose a node for a pod.
56 |
57 | first give a role (label) to your k8s nodes, then let the pod choose a role.
58 |
59 |
60 | 🔶 get node
61 |
62 | kubectl get node
63 | NAME STATUS ROLES AGE VERSION
64 | k3s.vps Ready 4m23s v1.24.2+k3s2
65 | k3s.mgr Ready control-plane,master 2m30s v1.24.2+k3s2
66 | k3s.dkt Ready 10m v1.24.2+k3s2
67 |
68 |
69 | 🔶 set env_role
70 |
71 | kubectl label node k3s.mgr env_role=lan-main
72 | kubectl label node k3s.dkt env_role=lan-test
73 | kubectl label node k3s.vps env_role=wan-proxy
74 |
75 |
76 | 🔶 check env_role
77 | kubectl get nodes --show-labels
78 |
79 |
80 | 🔶 use role
81 |
82 | spec:
83 | nodeSelector:
84 | env_role: lan-main
85 |
86 |
87 |
88 |
89 |
90 |
91 | ---
92 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Controller Kind
93 |
94 |
95 | 🔵 why we need a controller
96 |
97 | a controller can control an app even after the app is deployed:
98 | change the copy number / replicas
99 | update the pod image
100 | roll back to an old version
101 | ....
102 |
103 |
104 |
105 |
106 |
107 | 🔵 controller & pod
108 |
109 | a controller uses labels to control pods,
110 | so put the same label in both the controller and the pod.
111 |
112 |
113 | Deployment is one of the most used controllers.
114 |
115 |
116 | apiVersion: apps/v1
117 | kind: Deployment
118 | metadata:
119 | spec:
120 | replicas: 1
121 | selector:
122 | matchLabels:
123 | app: web ➜ 📍
124 | strategy: {}
125 | template:
126 | metadata:
127 | creationTimestamp: null
128 | labels:
129 | app: web ➜ 📍
130 |
131 |
132 |
133 |
134 | 🔵 Controller.kind ➜ deployment
135 |
136 |
137 | 🔵 Controller.kind ➜ daemonset
138 |
139 | makes every node run a copy of a pod,
140 | like a log agent that needs to be installed on every node.
141 |
142 |
143 |
144 | 🔵 Controller.kind ➜ job & cronjob
145 |
146 | job     ➜ one-time job
147 | cronjob ➜ repeats a job on a schedule. (sketch below)
148 |
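🔶 a minimal cronjob sketch (name, schedule and command are just placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"          # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure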
149 |
150 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Service
151 |
152 |
153 |
154 |
155 |
156 |
157 | 🔵 Stateful VS Stateless ✅
158 |
159 | one app: many copies.
160 | it is about the copies under one app.
161 |
162 | stateless ➜ all copies under one app are the same.
163 | stateful  ➜ the copies under one app are not the same.
164 |
165 |
166 | app-mysql: mysql-pod-master + mysql-pod-slave.
167 | these two pods are not the same!
168 | the master must run first, then the slave.
169 |
170 | app-nginx:
171 | nginx is for load balancing. no master. all pods are equal.
172 | so this is stateless.
173 |
174 |
175 |
176 | stateful:  every copy has a different pod name
177 | stateless: every copy has the same pod name
178 |
179 |
180 |
181 |
182 | stateful  ➜ use a statefulSet to deploy the pods
183 | stateless ➜ use a deployment to deploy the pods ➜ nginx usually uses a deployment.
184 |
185 |
186 |
187 |
188 |
189 |
190 |
191 |
192 |
193 |
194 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Secret
195 |
196 |
197 |
198 | 🔵 create secret
199 |
200 | create secret-xxx.yaml
201 | put the username & password in it.
202 | apply secret-xxx.yaml
203 | then your username & password are saved to k8s.
204 | kubectl get secret to check.
205 |
206 |
207 | 🔵 use secret (sketch below)
208 |
209 | - as environment variables
210 | - as a mounted volume
211 |
212 |
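🔶 a minimal sketch (the names db-secret / DB_PASSWORD are placeholders):

kubectl create secret generic db-secret \
    --from-literal=username=admin \
    --from-literal=password=changeme

# use it as an env variable inside a pod spec:
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password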
213 |
214 |
215 |
216 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ConfigMAP & Secret
217 |
218 | https://www.youtube.com/watch?v=MymwImXFUz4&list=PLmOn9nNkQxJHYUm2zkuf9_7XJJT8kzAph&index=39&ab_channel=%E5%B0%9A%E7%A1%85%E8%B0%B7IT%E5%9F%B9%E8%AE%AD%E5%AD%A6%E6%A0%A1
219 |
220 |
221 |
222 |
223 | 🔵 create configmap (from key=value pairs; cmd below)
224 |
225 | redis.host=redis
226 | redis.port=1111
227 |
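🔶 one way to create it (a sketch; the configmap name redis-config is a placeholder):

kubectl create configmap redis-config \
    --from-literal=redis.host=redis \
    --from-literal=redis.port=1111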
228 |
229 |
230 |
231 |
232 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Ingress (router)
233 |
234 | https://www.youtube.com/watch?v=ntLJxQbN3O8&list=PLmOn9nNkQxJHYUm2zkuf9_7XJJT8kzAph&index=44&ab_channel=%E5%B0%9A%E7%A1%85%E8%B0%B7IT%E5%9F%B9%E8%AE%AD%E5%AD%A6%E6%A0%A1
235 |
236 |
237 | 🔵 Why Ingress
238 |
239 | nodeport exposes a raw port on the node ip.
240 | nodeport uses ip:port; in reality we use urls.
241 |
242 | ingress builds on top of nodeport and makes it better.
243 | ingress is like a router.
244 |
245 |
246 |
247 | 🔵 ingress workflow
248 |
249 | ◎ config nodeport first
250 |
251 | ◎ choose a router (choose an ingress controller) ➜ nginx/traefik
252 | ◎ install the router (deploy the ingress controller) ➜ nginx/traefik
253 |
254 | ◎ config the router (ingress rules) ➜ config nginx/traefik
255 |
256 |
257 |
--------------------------------------------------------------------------------
/docs/🎪-3️⃣ 📚 Blog ➜ Docusaurus.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 130
3 | title: 🎪-3️⃣📚 Blog ➜ Docusaurus
4 | ---
5 |
6 | # Blog Build
7 |
8 |
9 |
10 | 🔵 WHY
11 |
12 | a blog is proof of what you can do.
13 | you will forget what you can do -.-
14 |
15 |
16 | 🔵 Blog Choose
17 |
18 | ◎ gitbook ➜ not bad
19 | ◎ Docusaurus ➜ like gitbook.
20 |
21 | use vscode ➜ write .md
22 | use Docusaurus ➜ generate .html
23 | use github pages ➜ host .html blog
24 |
25 |
26 | 🔵 Docusaurus basic demo
27 |
28 | npx create-docusaurus@latest xxxx classic
29 | npx create-docusaurus@latest Blog-NeverDEL classic
30 | cd Blog-NeverDEL
31 | npx docusaurus start
32 |
33 |
34 |
35 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Github page basic
36 |
37 | 🔵 Github Page Desc
38 |
39 | you upload html files, nginx helps you display them as a website.
40 | you upload html files, github pages helps you display them as a website.
41 | github pages == nginx.
42 |
43 |
44 |
45 | 🔵 Github Page Type
46 |
47 | .github.io ➜ one organization one name ➜ one github page
48 | .github.io ➜ one account one username ➜ one github page
49 | .github.io ➜ one account many project ➜ many github page
50 |
51 | organization ➜ for company
52 | user ➜ for us
53 | projectname ➜ both can use
54 |
55 | we usually use user type
56 |
57 |
58 |
59 |
60 | 🔵 gitpage repo Desc
61 |
62 | the gitpage repo needs two functions:
63 | ◎ save the project files (all files except the build folder)
64 | ◎ show the project html (all files inside the build folder after running npm build)
65 |
66 | you can use one repo for both functions: one branch for files, one branch for html
67 | you can use two repos for the two functions: one repo for files, one repo for html
68 | here we choose one repo with both functions.
69 |
70 | a normal repo only needs one branch ( the project files branch ).
71 | 🔥 a gitpage repo needs two branches. ( project files branch & project html branch )
72 | 🔥 a gitpage repo needs two branches. ( project files branch & project html branch )
73 | 🔥 a gitpage repo needs two branches. ( project files branch & project html branch )
74 |
75 |
76 | the gitpages service does not run npm build for you.
77 |
78 | the gitpages service only shows the html under a branch`s root folder:
79 | main branch:     ➜ uses the main branch`s root folder
80 | gh-pages branch: ➜ uses the gh-pages branch`s root folder
81 |
82 |
83 | you must create a gh-pages branch for the gitpages service
84 | and set the branch in the github repo`s settings.
85 |
86 |
87 | the gitpages service does not show html under a repo`s subfolder.
88 |
89 | we only need to create the gh-pages branch
90 | and let the gitpage repo use the gh-pages branch.
91 | as for what goes inside the gh-pages branch (it should be all files under the build folder),
92 | docusaurus has an easy cmd for us:
93 | yarn deploy
94 |
95 |
96 |
97 |
98 |
99 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 gitpage demo
100 |
101 |
102 | 🔵 github prepare
103 |
104 | 1. create a username-based github page repo
105 | 2. upload an ssh key to the github account.
106 |
107 | 🔶 create repo
108 | xxxxxxxxx.github.io
109 | mirandaxx.github.io
110 |
111 | 🔶 ssh key
112 |
113 |
114 | 🔵 local prepare
115 |
116 | npx create-docusaurus@latest xxxx classic
117 | npx create-docusaurus@latest Blog-NeverDEL classic
118 |
119 | cd Blog-NeverDEL
120 | yarn install
121 |
122 | yarn start
123 |
124 |
125 |
126 | 🔵 push local folder
127 |
128 | cd Blog-NeverDEL
129 | git init
130 |
131 | git remote add origin git@github.com:MirandaXX/mirandaxx.github.io.git
132 | - here choose ssh type. no https type
133 |
134 | git branch -M main
135 | git push -u origin main
136 | - check github if file uploaded
137 |
138 |
139 |
140 | 🔵 github branch setup
141 |
142 | 1. create a branch: gh-pages
143 | 2. let gitpage server use gh-pages branch.
144 |
145 | 🔶 create branch: name must be gh-pages
146 | 🔶 use new branch
147 |
148 | github >> mirandaxx.github.io
149 | >> setting >> pages
150 | >> sources:
151 | choose gh-pages
152 | and use /root
153 | save
154 |
155 |
156 |
157 | 🔵 edit docusaurus.config.js
158 |
159 | before using the yarn deploy cmd, you must set these fields first:
160 |
161 |
162 | const config = {
163 |
164 |   url: 'https://mirandaxx.github.io',     // github page url
165 |   baseUrl: '/',                           // no change
166 |   organizationName: 'mirandaxx',          // your user name
167 |   projectName: 'mirandaxx.github.io',     // your repo name
168 |   deploymentBranch: 'gh-pages',           // no change. must add this line
169 |   // ... keep the rest of the generated config
170 | };
171 |
172 | 🔵 deploy
173 |
174 | USE_SSH=true yarn deploy
175 | - copy this command as-is, no edits needed
176 |
177 | visit to test:
178 | https://mirandaxx.github.io
179 |
180 | you may need to wait a few minutes.
181 |
182 |
183 |
184 |
185 |
186 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Custom docusaurus
187 |
188 | 🔵 sidebar width
189 |
190 | (/src/css/custom.css)
191 |
192 | :root {
193 |   --doc-sidebar-width: 600px !important;
194 | }
195 |
196 | 🔵 doc mode ✅
197 |
198 | i do not need a homepage,
199 | only the docs function.
200 |
201 | https://docusaurus.io/docs/docs-introduction#docs-only-mode
202 |
203 | 1. delete/rename src/pages/index.js
204 |
205 | 2. add one line in the docs preset options:
206 |    routeBasePath: '/',
207 |
208 |
209 | location: docusaurus.config.js ➜ module.exports ➜ presets ➜ docs (sketch below)
210 |
211 |
212 |
213 |
214 |
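🔶 a minimal sketch of the presets block in docusaurus.config.js (docs-only mode; blog: false is optional):

  presets: [
    [
      'classic',
      {
        docs: {
          routeBasePath: '/',   // serve the docs at the site root
        },
        blog: false,            // optional: disable the blog
      },
    ],
  ],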
215 | 🔵 choose a markdown as homepage
216 |
217 | any xx.md
218 |
219 | add the following front matter and that page becomes the homepage:
220 |
221 | ---
222 | slug: /
223 | ---
224 |
225 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪-5️⃣ 💠💠💠 Gitea.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 3930
3 | title: 🎪🎪🎪-5️⃣💠💠💠 ➜ Gitea
4 | ---
5 |
6 | # Gitea
7 |
8 |
9 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 HomeLAB-Gitea
10 | 🔵 Official Manual
11 | https://docs.gitea.io/en-us/install-with-docker/
12 |
13 |
14 | 🔵 Prepare
15 |
16 | 🔶 Install Docker-compose
17 |
18 |
19 | 🔶 Create Custom volume
20 |
21 | docker volume ls
22 | docker volume create volumegitea
23 | docker volume create volumegiteadb
24 |
25 |
26 | 🔶 Create Custom network
27 |
28 | docker network create network-gitea
29 | docker network ls
30 |
31 |
32 | 🔶 Create docker-compose file 💯
33 |
34 |
35 | cat << 'EOF' > docker-compose.gitea.yml
36 | version: "3"
37 |
38 | networks: # ‼️ any network use in container. must add here first
39 | network-gitea: # ‼️ changeme
40 | external: false
41 |
42 | volumes: # ‼️ any volume use in container. must add here first
43 | volumegitea: # ‼️ changeme
44 | external: true
45 | volumegiteadb: # ‼️ changeme
46 | external: true
47 |
48 | services:
49 | server:
50 | image: gitea/gitea:latest
51 | container_name: gitea
52 | environment:
53 | - USER_UID=1000
54 | - USER_GID=1000
55 | - DB_TYPE=postgres
56 | - DB_HOST=db:5432
57 | - DB_NAME=gitea
58 | - DB_USER=gitea
59 | - DB_PASSWD=gitea
60 | restart: always
61 | networks:
62 | - network-gitea # ‼️ changeme
63 | volumes:
64 | - volumegitea:/data # ‼️ changeme
65 | - /etc/timezone:/etc/timezone:ro
66 | - /etc/localtime:/etc/localtime:ro
67 | ports:
68 | - "3000:3000" # changeme - option
69 | - "222:22" # changeme - option
70 | depends_on:
71 | - db
72 |
73 | db:
74 | image: postgres:latest
75 | restart: always
76 | environment:
77 | - POSTGRES_USER=gitea
78 | - POSTGRES_PASSWORD=gitea
79 | - POSTGRES_DB=gitea
80 | networks:
81 | - network-gitea # ‼️ changeme
82 | volumes:
83 |       - volumegiteadb:/var/lib/postgresql/data # ‼️ changeme (postgres data dir, not mysql)
84 | EOF
85 |
86 |
87 |
88 |
89 | 🔶 run
90 |
91 | docker-compose up -d
92 | docker-compose -f filepath up -d
93 | docker-compose -f docker-compose.gitea.yml up -d
94 |
95 |
96 | 🔵 web login
97 | ip:3000
98 |
99 | ◎ change all localhost entries to your server ip.
100 | ◎ set admin password
101 | ◎ install
102 |
103 |
104 |
105 |
106 |
107 |
108 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
109 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 custom DB version
110 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
111 |
112 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 LAB ▪ Base ▪ gitea
113 |
114 | 🔵 Git Desc
115 |
116 | git can auto backup everything you change. good for files/blogs.
117 | if you delete something by mistake, git lets you visit a history version any time.
118 |
119 | create a folder,
120 | enable git in the folder,
121 | set an auto backup schedule,
122 | write your blog in it.
123 |
124 |
125 | 🔵 WHY
126 |
127 | all important file
128 | - docker config file.
129 | - notes/blogs
130 |
131 |
132 | 🔵 Git Tools
133 |
134 | compare: https://docs.gitea.io/zh-cn/comparison/
135 |
136 | public: github.
137 | private:  gitea ➜ recommended
138 | synology: git server
139 |
140 |
141 |
142 | 🔵 Gitea desc
143 |
144 | needs a database, e.g. postgres.
145 |
146 |
147 |
148 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 DB - Postgres
149 |
150 | 🔵 Docker Postgres
151 |
152 | docker run -d \
153 | --restart always \
154 | --name DKP-postgres \
155 | -p 5432:5432 \
156 | -e POSTGRES_PASSWORD=xxxx \
157 | -e PGDATA=/var/lib/postgresql/data/pgdata \
158 | -v /mnt/dpnvme/DMGS-DKP/postgres/data:/var/lib/postgresql/data \
159 | postgres
160 |
161 |
162 |
163 |
164 | 🔶 test login
165 |
166 | db:   postgres
167 | user: postgres
168 | pwd:  the one you set above
169 |
170 |
171 | 🔵 Postgres GUI / CLI tools
172 | 🔶 GUI
173 | https://postgresapp.com/documentation/gui-tools.html
174 |
175 |
176 | 🔶 install psql
177 |
178 | sudo apt-get install postgresql-client -y
179 | brew install libpq
180 |
181 | 🔶 CLI
182 |
183 | psql -U username -h localhost -p 5432 dbname
184 | psql -U postgres -h localhost -p 5432 postgres
185 | psql -U usergitea -h 172.16.1.140 -p 5432 dbgitea
186 |
187 |
188 | 🔵 Config User & DB
189 |
190 | 🔶 create user
191 |
192 | CREATE ROLE gitea WITH LOGIN PASSWORD 'gitea';
193 | CREATE ROLE USERgitea WITH LOGIN PASSWORD 'Yshskl0@19';
194 |
195 |
196 | 🔶 create DB
197 |
198 | CREATE DATABASE DBgitea WITH OWNER USERgitea TEMPLATE template0 ENCODING UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
199 |
200 |
201 | 🔶 remote test
202 |
203 | psql -U usergitea -h 172.16.1.140 -p 5432 dbgitea
204 |
205 |
206 | 🔶 final
207 |
208 | dbname: dbgitea
209 | username: usergitea
210 |
211 |
212 |
213 |
214 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
215 |
216 | 🔵 run gitea
217 |
218 | docker run -d \
219 | -p 2299:22 \
220 | -p 3000:3000 \
221 | --name DKPgitea \
222 | --restart always \
223 | -v /mnt/dpnvme/DMGS-DKP/gitea/data:/data \
224 | gitea/gitea:latest
225 |
226 |
227 | ‼️ no need config db when run docker. config in web ‼️
228 |
229 |
230 |
231 | 🔵 Login
232 |
233 | http://172.16.1.140:3000/
234 |
235 |
236 |
237 | 🔵 first Config
238 |
239 | change the ROOT_URL to the server ip,
240 | or the web url will be
241 | http://localhost:3000/
242 | instead of http://172.16.1.140:3000/user/login
243 |
244 |
245 | config file: /mnt/dpnvme/DMGS-DKP/gitea/data/gitea/conf/app.ini (snippet below)
246 |
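🔶 the relevant lines in app.ini (a sketch; adjust the ip/port to your own setup):

[server]
DOMAIN   = 172.16.1.140
ROOT_URL = http://172.16.1.140:3000/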
247 |
248 | 🔵 create account
249 |
250 | 🔵 login
251 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️☸️ Demo ➜ xxx.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️☸️ Demo ➜ ❌
4 | ---
5 |
6 |
7 | # K8s Real Demo ➜
8 |
9 |
10 |
11 |
12 | 🔵 todo
13 | move all docker-compose to k3s/k8s
14 |
15 |
16 | fa9928b172e1 jumpserver/jms_all:latest "./entrypoint.sh" 2 days ago Up 2 hours APP-Jumpserver
17 | ca4884bacf92 ghcr.io/requarks/wiki:2 "docker-entrypoint.s…" 2 days ago Up 10 seconds NoteWikijs
18 | 3aefa5a3e0be minio/minio:latest "/usr/bin/docker-ent…" 2 days ago Up 2 hours STO-S3-MinIO
19 | 6befd0f41f87 postgres:latest "docker-entrypoint.s…" 2 days ago Up 2 hours 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp DB-Postgres-DKP
20 | bb833dc88257 gravitl/netclient:v0.14.3 "/bin/bash ./netclie…" 2 days ago Restarting (0) 7 seconds ago VPN-NetmakerCLI-DKP
21 | 236ed4f2807a mysql:latest "docker-entrypoint.s…" 2 days ago Up 2 hours 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp DB-MySQL-DKP
22 | 60f41c6b1851 linuxserver/heimdall "/init" 2 days ago Up 2 hours DashBoard-Heimdall
23 | a8cf52dcd68f travelping/nettools:latest "tail -F anything" 2 days ago Up 2 hours Net-Debug-ToolBOX
24 | 409a1fd3a37c gitea/gitea:latest "/usr/bin/entrypoint…" 2 days ago Up 10 seconds APP-Gitea-dkp
25 | 077cdb28bdb2 traefik:latest "/entrypoint.sh --lo…" 2 days ago Up 2 hours
26 |
27 |
28 |
29 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 k3s prepair
30 |
31 | 🔵 namespace (create cmds below)
32 |
33 | ns-app
34 | ns-db
35 | ns-tool
36 | ns-proxy
37 |
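🔶 create them (plain kubectl; names taken from the list above):

kubectl create namespace ns-app
kubectl create namespace ns-db
kubectl create namespace ns-tool
kubectl create namespace ns-proxy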
38 |
39 |
40 | 🔵 network
41 |
42 |
43 |
44 | 🔵 storage
45 |
46 | default: local-path
47 | ceph pvc
48 |
49 |
50 |
51 |
52 | sto-sc-cephcsi-rbd-K3s
53 |
54 |
55 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 mysql - heml ❌
56 |
57 | https://artifacthub.io/packages/helm/bitnami/mysql
58 |
59 |
60 | helm repo add bitnami https://charts.bitnami.com/bitnami
61 |
62 | helm install my-release bitnami/mysql
63 | helm install mysql-test bitnami/mysql --namespace ns-db --set global.storageClass=local-path
64 |
65 |
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 nettools
74 |
75 | 🔵 docker compose demo
76 |
77 | netdebug:
78 | container_name: Net-Debug-ToolBOX
79 | image: travelping/nettools:latest
80 | restart: always
81 | command: tail -F anything
82 |     # keep the container running.
83 |     # some containers have no foreground task and will auto stop.
84 |
85 |
86 | 🔵 K8s - CMD
87 |
88 | kubectl run netdebug --image=travelping/nettools -- sleep infinity ✅
89 |
90 |
91 |
92 | 🔵 k8s - yml
93 |
94 | 🔶 create namespace: ns-tool-net
95 | 🔶 create pod yaml: tool-net-nettools-pod.yaml
96 |
97 | apiVersion: v1                        # must be v1, not v3
98 | kind: Pod
99 | metadata:
100 |   name: tool-net-nettools-pod        # k8s names must be lowercase
101 |   namespace: ns-tool-net
102 | spec:
103 |   containers:
104 |   - name: tool-net-nettools
105 |     image: travelping/nettools:latest
106 |     command:
107 |     - sleep
108 |     - "infinity"
109 |     imagePullPolicy: IfNotPresent
110 |   restartPolicy: Never
111 |
112 |
113 |
114 | 🔶 apply ➜ kubectl apply -f /mnt/tool-net-nettools-pod.yaml
115 |
116 | pod/tool-net-nettools-pod created
117 |
118 |
119 | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "tool-net-nettools-pod": E
120 | rror response from daemon:
121 | failed to update store for object type *libnetwork.endpointCnt: Key not found in store
122 |
123 |
124 |
125 | 🔶 1. import to configmap .
126 |
127 | kubectl run -it --image=jrecord/nettools nettools --restart=Never --namespace=default
128 |
129 |
130 |
131 |
132 | kubectl apply -f nettools.yaml
133 |
134 |
135 |
136 |
137 |
138 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s ✶ Nginx ✅
139 |
140 | 🔶 Deploy kubectl create deployment nginx --image=nginx
141 | 🔶 Export server Port kubectl expose deployment nginx --port=80 --type=NodePort
142 | 🔶 Check local Port kubectl get svc
143 | 🔶 Visit http://172.16.0.80:32461/
144 |
145 |
146 | ==
147 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Docker ✶ NetBox ✅
148 |
149 | 🔵 Docker Netbox ✅
150 |
151 | ⭐️⭐️⭐️⭐️⭐️
152 | https://computingforgeeks.com/how-to-run-netbox-ipam-tool-in-docker-containers/
153 |
154 |
155 | ⭐️⭐️⭐️ - offical manual.
156 | https://github.com/netbox-community/netbox-docker
157 |
158 | docker-compose up -d
159 | web: 8000
160 | admin / admin
161 |
162 |
163 | ==
164 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s ✶ NetBox ❌
165 | ==
166 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s_Local ✶ Net-Tool ✅
167 |
168 | 🔵 Docker Demo
169 |
170 | 🔶 Desc
171 |
172 | a docker image with network tools.
173 | good for testing / debugging the network.
174 |
175 | 🔶 Run
176 |
177 | docker run -dt travelping/nettools
178 |
179 |
180 | 🔵 Minikube Demo
181 |
182 | 🔶 run
183 |
184 | kubectl create deployment netdebug --image=travelping/nettools ❌
185 | kubectl run netdebug --image=travelping/nettools -- sleep infinity ✅
186 |
187 | ‼️ docker must have a task that will never finish. or docker will auto stop it ‼️
188 | ‼️ docker must have a task that will never finish. or docker will auto stop it ‼️
189 | ‼️ docker must have a task that will never finish. or docker will auto stop it ‼️
190 |
191 |
192 | 🔶 enter docker
193 |
194 | kubectl exec <podname> -it -- bash
195 | kubectl exec netdebug -it -- bash
196 |
197 |
198 |
199 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Misc
200 |
201 | kubectl run my-app --image=gcr.io/some-repo/my-app:v1 --port=3000
202 |
203 | $ kubectl expose deployment my-app --type=LoadBalancer --port=8080 --target-port=3000
204 | service "my-app" exposed
205 |
206 |
207 |
208 |
209 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-0️⃣ K8s ➜ Basic.9 Misc.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️0️⃣ K8s ➜ Misc.9
4 | ---
5 |
6 |
7 | # K8s Misc
8 |
9 |
10 |
11 |
12 |
13 | 🔵 POD Config
14 |
15 | 🔶 spec
16 | spec-pod (podname; volumes:disk mount to container; )
17 | spec-containers ( Container name; image url; export port; mount out path)
18 |
19 |
20 |
21 | 🔶 metadata
22 |
23 | name ➜ pod name
24 | namespace ➜ set namespace. ‼️ default namespace is: default ‼️
25 | labels ➜ set label
26 | annotation ➜ set a note
27 |
28 |
29 | 🔶 namespace
30 |
31 | do not use the k8s default namespace if possible.
32 | namespaces are how you categorize all your containers.
33 |
34 | NS-Prod
35 | NS-Test
36 | NS-AIO
37 |
38 |
39 | 🔶 Labels
40 |
41 | label-version
42 | label-prod/test
43 | label-web/sql/app
44 | label-tool
45 |
46 |
47 |
48 |
49 |
50 | 🔵 Ingress
51 |
52 | 🔶 Needs
53 |
54 | Ingress allows outside users to visit services inside the cluster.
55 | NodePort needs a port on the host node, and host ports are limited!
56 | so we need something better: Ingress.
57 |
58 |
59 | 🔶 Desc
60 |
61 | a service uses tcp/udp.
62 | an ingress uses http/https.
63 |
64 | if you need ingress, you have to install an ingress controller first,
65 | like the nginx ingress controller.
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s basic
75 |
76 |
77 | https://www.qikqiak.com/k8s-book/
78 | https://www.qikqiak.com/k8s-book/
79 | https://www.qikqiak.com/k8s-book/
80 |
81 |
82 |
83 | 🔵 deployment / backup
84 |
85 | you can deploy a pod with kubectl create -f pod.yaml,
86 | but if you use k8s, the pod must be important.
87 | you should run at least two pods at the same time,
88 | so if one pod goes down, k8s switches to the other one.
89 | just like a backup.
90 |
91 |
92 | apiVersion: apps/v1
93 | kind: Deployment
94 | spec:
95 | replicas: 2
96 |
97 |
98 | 🔵 RC: Replication Controller
99 |
100 | keeps the pod count at the number you set.
101 | if a pod goes down,
102 | it auto deploys another one for you.
103 |
104 |
105 |
106 |
107 | 🔵 static pod
108 |
109 | by default k8s auto assigns a node for a pod,
110 | but sometimes you want a pod to run on a fixed node.
111 | then use this.
112 |
113 |
114 |
115 |
116 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ConfigMap
117 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ConfigMap
118 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ConfigMap
119 |
120 |
121 |
122 |
123 |
124 |
125 | 🔵 configmap vs config
126 |
127 | you can create a configmap from a config file,
128 | but a configmap is not the config file
129 | but a configmap is not the config file
130 | but a configmap is not the config file
131 |
132 | during import/creation it adds some lines to the head & end:
133 |
134 | kubectl describe cm custom-dashy.yaml
135 | Name: custom-dashy.yaml
136 | Namespace: default
137 | Labels:
138 | Annotations:
139 |
140 |
141 |
142 | 🔵 How to use a configmap in pod.yaml
143 |
144 | spec.env     ➜ can give a different name for each pair.
145 | spec.envFrom ➜ names are kept unchanged (sketch after the env demo below)
146 | spec.volumes ➜ mount the configmap as files
147 |
148 |
149 | configMapKeyRef ➜ must choose the configmap & a key.
150 | configMapRef    ➜ only choose the configmap, no need to choose a key.
151 |
152 |
153 |
154 |
155 |
156 | 🔵 configmap ➜ evn demo ✅
157 |
158 | [root@localhost ~]# vim demo.yaml
159 | apiVersion: v1
160 | kind: ConfigMap # 🔥
161 | metadata:
162 |   name: cm-podxx  # 🔥 ConfigMap name. the pod needs this name to use this configmap
163 | data: # 🔥 all list under data is what we set.
164 | hostname: "k8s" # Key: hostname value: k8s
165 | password: "123" # Key: password value: 123
166 | ---
167 | apiVersion: v1
168 | kind: Pod
169 | metadata:
170 | name: zhangsan
171 | spec:
172 | containers:
173 | - name: zhangsan
174 |     image: busybox
175 |     command: ["sleep", "3600"]   # keep the pod running so the env can be checked
176 | env: # 🔥 use env type
177 | - name: env-podxx-HostName # 🔥 set a env name. any name
178 | valueFrom:
179 | configMapKeyRef: # use configmap.
180 | name: cm-podxx # 🔥 choose ConfigMap name; must same with the name in configmap.
181 | key: hostname # 🔥 choose what key to use; must same with the name in configmap.
182 | # 🔥 no need set value -.-.
183 |
184 | - name: env-podxx-Password # 🔥 set a env name
185 | valueFrom:
186 | configMapKeyRef:
187 | name: cm-podxx
188 | key: password
189 |
190 |
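🔶 envFrom variant, a minimal sketch (imports every key of cm-podxx as an env var, names unchanged):

    envFrom:
    - configMapRef:
        name: cm-podxx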
191 |
192 |
193 |
194 |
195 | 🔵 configmap ➜ Mount demo ✅
196 |
197 |
198 | 🔶 Create configmap (import conf)
199 |
200 | kubectl create configmap xx --from-file=yyyy
201 | kubectl create configmap cm-demo --from-file=/root/redis.conf
202 | kubectl create configmap custom-dashy.yaml --from-file=/root/dashy.conf
203 | kubectl create configmap dashy-config --from-file=/root/dashy.conf
204 |
205 |
206 |
207 | 🔶 check configmap: ➜ kubectl get cm
208 | 🔶 check detail ➜ kubectl describe cm xxxx
209 |
210 |
211 |
212 |
213 | apiVersion: v1
214 | kind: Pod
215 | metadata:
216 | name: nginx
217 | spec:
218 | containers:
219 | - name: nginx
220 | image: nginx
221 | ports:
222 | - containerPort: 80
223 | volumeMounts:
224 | - name: html # 🔥 use volume: must same as volumes.name
225 |       mountPath: /var/www/html # 🔥 set the internal path
226 | volumes:
227 | - name: html # 🔥 create volume name: must same as volumeMounts.name
228 | configMap: #
229 | name: nginx-html # 🔥 must same as the configmap file name in k8s.
230 |
231 |
232 |
233 |
234 |
235 |
236 |
237 | 🔶 deployment
238 |
239 | a deployment can handle stateless containers very well,
240 | but not stateful containers.
241 | stateful containers are the most difficult part of k8s;
242 | there is a lot to consider.
243 |
244 |
245 |
246 |
247 |
248 |
249 |
250 | 🔵 Storage
251 |
252 | ◎ some pods can move because they use nfs/smb
253 | ◎ some pods can not move because they use rbd/iscsi
254 |
255 | rbd driver ➜ can mount to one pod only ➜ can not move to another node
256 | nfs driver ➜ can mount to many pods   ➜ can move to another node
257 |
258 |
259 | no need for high disk performance ➜ can use nfs/smb
260 | need high disk performance        ➜ must use rbd/iscsi
261 |
262 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪🎪-🔐🔐🔐 SSO ➜➜ Authelia Use.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 4930
3 | title: 🎪🎪🎪🎪-🔐🔐🔐 SSO ➜➜ Authelia USE
4 | ---
5 |
6 | # SSO Authelia USE
7 |
8 |
9 |
10 |
11 | 🔵 goal
12 |
13 | authelia + openldap ✅
14 |
15 | authelia + treafik
16 |
17 |
18 |
19 |
20 |
21 | 🔵
22 | 1.
23 |
24 | 0214.icu >> vps traefik (ssl)
25 | vpn: openldap + authelia
26 |
27 | if traefik needs to combine with authelia.. do they need to be in the same docker network????
28 | we could use a public port instead..
29 |
30 |
31 | or add a local traefik.
32 |
33 |
34 |
35 |
36 | 0214.icu >> vps traefik (ssl) <===> VPN. traefik local.
37 | lan: traefik + openldap + authelia
38 |
39 |
40 | use swarm, or k8s, or ...
41 |
42 |
43 |
44 |
45 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
46 |
47 | 🔵 How
48 | use a traefik middleware (forwardauth; see the compose below).
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 | 🔵 traefik + authelia
61 |
62 |
63 | https://github.com/authelia/authelia/blob/master/examples/compose/lite/docker-compose.yml
64 |
65 |
66 |
67 |
68 | https://wbsu2003.4everland.app/2022/03/09/%E5%8D%95%E7%82%B9%E7%99%BB%E5%BD%95%E6%9C%8D%E5%8A%A1Authelia%EF%BC%88%E4%B8%8B%E7%AF%87%EF%BC%89/
69 |
70 |
71 |
72 |
73 |
74 | Outline local authentication with Authelia
75 |
76 |
77 | https://wbsu2003.4everland.app/2022/03/11/Outline%E4%BD%BF%E7%94%A8Authelia%E5%AE%9E%E7%8E%B0%E6%9C%AC%E5%9C%B0%E8%AE%A4%E8%AF%81/
78 |
79 |
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 | https://www.youtube.com/watch?v=yHICIVhm_Lo&t=621s&ab_channel=Sagit
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
105 |
106 |
107 |
108 |
109 | services:
110 | authelia:
111 | image: authelia/authelia
112 | container_name: authelia
113 | volumes:
114 | - ./authelia:/config
115 | networks:
116 | - net
117 | labels:
118 | - 'traefik.enable=true'
119 | - 'traefik.http.routers.authelia.rule=Host(`authelia.example.com`)'
120 | - 'traefik.http.routers.authelia.entrypoints=https'
121 | - 'traefik.http.routers.authelia.tls=true'
122 | - 'traefik.http.routers.authelia.tls.certresolver=letsencrypt'
123 | - 'traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://authelia.example.com' # yamllint disable-line rule:line-length
124 | - 'traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true'
125 | - 'traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email' # yamllint disable-line rule:line-length
126 | expose:
127 | - 9091
128 | restart: unless-stopped
129 | healthcheck:
130 | disable: true
131 | environment:
132 | - TZ=Australia/Melbourne
133 |
134 | redis:
135 | image: redis:alpine
136 | container_name: redis
137 | volumes:
138 | - ./redis:/data
139 | networks:
140 | - net
141 | expose:
142 | - 6379
143 | restart: unless-stopped
144 | environment:
145 | - TZ=Australia/Melbourne
146 |
147 | traefik:
148 | image: traefik:v2.8
149 | container_name: traefik
150 | volumes:
151 | - ./traefik:/etc/traefik
152 | - /var/run/docker.sock:/var/run/docker.sock
153 | networks:
154 | - net
155 | labels:
156 | - 'traefik.enable=true'
157 | - 'traefik.http.routers.api.rule=Host(`traefik.example.com`)'
158 | - 'traefik.http.routers.api.entrypoints=https'
159 | - 'traefik.http.routers.api.service=api@internal'
160 | - 'traefik.http.routers.api.tls=true'
161 | - 'traefik.http.routers.api.tls.certresolver=letsencrypt'
162 | - 'traefik.http.routers.api.middlewares=authelia@docker'
163 | ports:
164 | - 80:80
165 | - 443:443
166 | command:
167 | - '--api'
168 | - '--providers.docker=true'
169 | - '--providers.docker.exposedByDefault=false'
170 | - '--entrypoints.http=true'
171 | - '--entrypoints.http.address=:80'
172 | - '--entrypoints.http.http.redirections.entrypoint.to=https'
173 | - '--entrypoints.http.http.redirections.entrypoint.scheme=https'
174 | - '--entrypoints.https=true'
175 | - '--entrypoints.https.address=:443'
176 | - '--certificatesResolvers.letsencrypt.acme.email=your-email@your-domain.com'
177 | - '--certificatesResolvers.letsencrypt.acme.storage=/etc/traefik/acme.json'
178 | - '--certificatesResolvers.letsencrypt.acme.httpChallenge.entryPoint=http'
179 | - '--log=true'
180 | - '--log.level=DEBUG'
181 |
182 | secure:
183 | image: traefik/whoami
184 | container_name: secure
185 | networks:
186 | - net
187 | labels:
188 | - 'traefik.enable=true'
189 | - 'traefik.http.routers.secure.rule=Host(`secure.example.com`)'
190 | - 'traefik.http.routers.secure.entrypoints=https'
191 | - 'traefik.http.routers.secure.tls=true'
192 | - 'traefik.http.routers.secure.tls.certresolver=letsencrypt'
193 | - 'traefik.http.routers.secure.middlewares=authelia@docker'
194 | expose:
195 | - 80
196 | restart: unless-stopped
197 |
198 | public:
199 | image: traefik/whoami
200 | container_name: public
201 | networks:
202 | - net
203 | labels:
204 | - 'traefik.enable=true'
205 | - 'traefik.http.routers.public.rule=Host(`public.example.com`)'
206 | - 'traefik.http.routers.public.entrypoints=https'
207 | - 'traefik.http.routers.public.tls=true'
208 | - 'traefik.http.routers.public.tls.certresolver=letsencrypt'
209 | - 'traefik.http.routers.public.middlewares=authelia@docker'
210 | expose:
211 | - 80
212 |     restart: unless-stopped
213 |
214 |
215 |
216 |
217 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 config swarm first
218 |
219 | vps + local docker.
220 |
221 |
222 |
223 |
224 | 🔵 traefik + authelia
225 |
226 |
227 | https://github.com/authelia/authelia/blob/master/examples/compose/lite/docker-compose.yml
228 |
229 |
230 |
231 |
232 | https://wbsu2003.4everland.app/2022/03/09/%E5%8D%95%E7%82%B9%E7%99%BB%E5%BD%95%E6%9C%8D%E5%8A%A1Authelia%EF%BC%88%E4%B8%8B%E7%AF%87%EF%BC%89/
233 |
234 |
235 |
236 | With Authelia, plus Fail2ban to block brute-force attempts, public internet access becomes much more secure.
237 |
238 | Although the official docs stress that OpenID Connect is still in preview, Authelia already supports OIDC authentication; due to length, that is left for next time.
239 |
240 | Next post 👉 "Outline local authentication with Authelia": how to let Outline authenticate locally via Authelia's OIDC, instead of relying on third-party authentication over the public internet.
241 |
242 |
243 |
244 | Outline local authentication with Authelia
245 |
246 |
247 | https://wbsu2003.4everland.app/2022/03/11/Outline%E4%BD%BF%E7%94%A8Authelia%E5%AE%9E%E7%8E%B0%E6%9C%AC%E5%9C%B0%E8%AE%A4%E8%AF%81/
248 |
249 |
250 |
251 |
252 |
253 |
254 |
255 |
256 |
257 | https://www.youtube.com/watch?v=yHICIVhm_Lo&t=621s&ab_channel=Sagit
--------------------------------------------------------------------------------
/docs/🎪-7️⃣ 🌐-7️⃣ VPN ➜➜ Wireguard.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1718
3 | title: 🎪-7️⃣🌐-7️⃣ VPN ➜➜ Wireguard
4 | ---
5 |
6 | # VPN ✶ Wireguard
7 |
8 |
9 | 🔵 Goal
10 |
11 | vps_ubuntu(vpn.srv) == (vpn.cli)Ros_router
12 |
13 | under ros there are a lot of vlans.
14 | we want to visit the homelab vlans via the vps,
15 | because the homelab has no public internet ip.
16 |
17 |
18 | 🔵 wireguard must know
19 |
20 | all nodes are equal (each can be server or client)
21 |
22 | client: sends the vpn connect request
23 | server: accepts the vpn connect request
24 |
25 | only one node (the client node) needs to join the other node (the server node), and the vpn is built!
26 | there is no need for both nodes to send requests to each other,
27 | meaning the server node does not need to join the client node.
28 |
29 | usually:
30 | server: has a public ip
31 | client: has no public ip
32 | let the client node join the server node.
33 |
34 |
35 |
36 |
37 |
38 | 🔵 How to build a wireguard vpn
39 |
40 | 🔶 server enables ip forward
41 |
42 | 🔶 vpn roles
43 |
44 | one server node.
45 | many client nodes.
46 | let the client nodes join the same server node.
47 |
48 |
49 | 🔶 Peer
50 |
51 | a peer is a node.
52 | one peer = one node.
53 |
54 | the server needs to add lots of client peers (one client, one peer).
55 | a client needs to add one server peer.
56 |
57 |
58 | 🔶 Peer.key
59 |
60 | every node needs to generate a key pair.
61 | the server node needs to add every client's public key: one client peer gets one client public key.
62 | the client node needs to add the server's public key.
63 |
64 |
65 |
66 |
67 |
68 | 🔵 ip forward
69 |
70 | 🔶 check
71 |
72 | cat /proc/sys/net/ipv4/ip_forward
73 | 1 = enabled
74 | 0 = disabled
75 |
76 | 🔶 enable
77 |
78 | vi /etc/sysctl.conf
79 | add
80 | net.ipv4.ip_forward=1
81 |
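🔶 apply without a reboot:

sudo sysctl -p
# or enable it immediately for the running kernel:
sudo sysctl -w net.ipv4.ip_forward=1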
82 |
83 |
84 |
85 |
86 |
87 | 🔵 VPN Server Basic config
88 |
89 |
90 | 🔶 install apt install wireguard -y
91 |
92 | 🔶 generate private key wg genkey > private
93 | 🔶 use the private key to generate the public key:   wg pubkey < private
94 |
95 | 🔶 create basic config
96 |
97 | vi /etc/wireguard/wg-vps.conf
98 |
99 | [Interface]
100 | Address = 10.214.214.214/32
101 | ListenPort = 4455
102 | PrivateKey = yJore1MNmgAAYFunP8DGRl16/9qVFvm83VJVDGDeCUg=
103 |
104 |
105 |
106 | 🔶 enable firewall
107 |
108 | apt install ufw -y
109 | systemctl enable ufw
110 | systemctl start ufw
111 | ufw allow 4455
112 |
113 |
114 | 🔶 config nic autostart
115 |
116 | sudo systemctl enable wg-quick@wg-vps
117 | sudo systemctl start wg-quick@wg-vps
118 | sudo systemctl restart wg-quick@wg-vps
119 |
120 | sudo systemctl status wg-quick@wg-vps
121 |
122 | ip a
123 |
124 |
125 | 🔶 check status
126 |
127 | VPS ~ wg
128 | interface: wg-vps
129 | public key: fOa79C1HXltCoiSujJK9zIlx2hRD8eqSWL5lr+5TuSI=
130 | private key: (hidden)
131 | listening port: 4455
132 |
133 |
134 |
135 |
136 |
137 |
138 |
139 |
140 | 🔵 VPN Client - Ubuntu 💯
141 |
142 | vi /etc/wireguard/wg-dkp.conf
143 |
144 |
145 |
146 | [Interface]
147 | Address = 10.214.214.140/32
148 | PrivateKey = IAnGIuJI+ly3czJ0E1ZKy08CuqVUIOymGPb5ob5PNWw=
149 | ListenPort = 19629
150 |
151 | [Peer]
152 | PublicKey = fOa79C1HXltCoiSujJK9zIlx2hRD8eqSWL5lr+5TuSI=
153 | AllowedIPs = 10.214.214.0/24
154 | Endpoint = 172.93.42.232:4455
155 | PersistentKeepalive = 25
156 |
157 |
158 |
159 | sudo systemctl enable wg-quick@wg-dkp
160 | sudo systemctl start wg-quick@wg-dkp
161 | sudo systemctl restart wg-quick@wg-dkp
162 |
163 | sudo systemctl status wg-quick@wg-dkp
164 |
165 | ip a
166 |
167 |
168 |
169 |
170 |
171 | 🔵 VPN Client - Ros (RouterOS):
172 |
173 |
174 | 🔶 generate key & nic
175 |
176 | wireguard >> new >> VPN-WG-ROS..
177 | this will create a private and a public key.
178 |
179 |
180 | 🔶 set ip
181 |
182 | 10.214.214.11/32 VPN-WG-ROS
183 |
184 |
185 | 🔶 connect vps
186 |
187 | wireguard >> peer >> add
188 |
189 | interface: VPN-WG-ROS
190 | public key: vps`s public key.
191 | endpoint: vps`s internet ip
192 | port: check it with the wg cmd on the vps.
193 | allowed address: empty
194 |
195 | note: the vpn is connected now, but it can not ping other vpn nodes yet.
196 |
197 |
198 |
199 |
200 | 🔵 keep alive
201 |
202 | add this. or it will disconnect.
203 |
204 | PersistentKeepalive = 25
205 |
206 |
207 |
208 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
209 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
210 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
211 |
212 |
213 | 🔵 Design VPN network ✅
214 |
215 | server wg vpn ip ➜ 10.214.214.214/32
216 |
217 | client_1 wg vpn ip ➜ 10.214.214.11/32
218 | client_2 wg vpn ip ➜ 10.214.214.22/32
219 | client_3 wg vpn ip ➜ 10.214.214.33/32
220 |
221 | one vpn node only needs one ip. do not use /24 here; that will confuse you.
222 |
223 |
224 |
225 | 🔵 VPN Desc
226 |
227 | if you have a working handshake, it means the vpn works.
228 | a working handshake does not need any allowed ips configured.
229 |
230 | a working vpn does not mean you can ping nodes by their vpn ip.
231 | that part needs the allowed ips config.
232 |
233 |
234 |
235 |
236 | 🔵 Allowed IPS ➜ VPN part
237 |
238 | if the vpn client & vpn server need to ping each other by vpn ip,
239 | this needs to be configured on both the client peer and the server peer.
240 |
241 | allowed ips means: what traffic is allowed to use the wireguard vpn.
242 |
243 |
244 | 🔶 Server peer_client_11 Allowed IPS ➜ add client_11 `s vpn ip ➜ 10.214.214.11/32
245 | 🔶 server peer_client_22 Allowed IPS ➜ add client_22 `s vpn ip ➜ 10.214.214.22/32
246 | 🔶 server peer_client_33 Allowed IPS ➜ add client_33 `s vpn ip ➜ 10.214.214.33/32
247 | this lets 10.214.214.xx/32 use the vpn tunnel, so the server can ping 10.214.214.xx
248 |
249 |
250 | 🔶 Client_11 peer Allowed IPS ➜ add server `s vpn ip: ➜ 10.214.214.214/32
251 | 🔶 Client_22 peer Allowed IPS ➜ add server `s vpn ip: ➜ 10.214.214.214/32
252 | 🔶 Client_33 peer Allowed IPS ➜ add server `s vpn ip: ➜ 10.214.214.214/32
253 | this lets 10.214.214.214/32 use the vpn tunnel, so the clients can ping 10.214.214.214
254 |
255 |
256 | now:
257 | the server can ping all clients,
258 | all clients can ping the server node,
259 | but a client can not ping another client.
260 | if you need to allow client-to-client pings, change the allowed ips on the client nodes:
261 | 🔶 Client_11 peer Allowed IPS ➜  ➜ 10.214.214.0/24
262 | 🔶 Client_22 peer Allowed IPS ➜  ➜ 10.214.214.0/24
263 | 🔶 Client_33 peer Allowed IPS ➜  ➜ 10.214.214.0/24
264 | this allows 10.214.214.0/24 to use the vpn tunnel on the client node.
265 | so when you ping 10.214.214.0/24 on a client node,
266 | the client node forwards the 10.214.214.0/24 traffic to the vpn server.
267 | the vpn server can ping all nodes,
268 | so the client can ping the other nodes.
269 |
270 |
271 | 🔥 no need to change the server peers' Allowed IPS;
272 | the peers under the vpn server
273 | only decide which clients the vpn server can ping out to.
274 |
275 |
276 |
277 |
278 |
279 |
280 |
281 | 🔵 homelab vlan
282 |
283 | client_11 has vlan_11
284 | client_22 has vlan_22
285 | the server needs to visit vlan_11 under client_11
286 |
287 | vlan_11 is under client_11,
288 | so client_11 can of course visit vlan_11.
289 |
290 | all we need to do is give the server a route: let vlan_11 use the vpn tunnel,
291 | and let the server send vlan_11 traffic to the client_11 node.
292 |
293 |
294 | just config the client_11 peer on the server:
295 | add an allowed ips entry like
296 | allowed ips = 172.16.1.0/24
297 | now the server can ping 172.16.1.0/24  (sketch below)
298 |
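🔶 a sketch of the server`s [Peer] block for client_11 (the public key is a placeholder):

[Peer]
PublicKey = <client_11 public key>
AllowedIPs = 10.214.214.11/32, 172.16.1.0/24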
299 |
300 |
301 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪🎪-🔐🔐 Auth ➜➜ LDAP.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 4930
3 | title: 🎪🎪🎪🎪-🔐🔐 Auth ➜➜ OpenLDAP
4 | ---
5 |
6 |
7 | # Auth ➜ OpenLDAP
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 openLDAP. basic
12 |
13 | no need database.
14 |
15 |
16 | 🔵 LDAP Desc Detail💯
17 |
18 | 🔶 LDAP Desc
19 |
20 | ldap is just like a domain/url.
21 | a domain must have two parts, so ldap must have 2+ dc parts.
22 | and you can set a sub ldap too, just like a sub domain.
23 |
24 | root domain: *.0214.ark  ➜ dc=0214,dc=ark
25 | sub domain:  xx.0214.ark ➜ dc=xx,dc=0214,dc=ark
26 |
27 | CN / OU are just like file / folder:
28 | CN=test,OU=developer,DC=rv,DC=ark
29 | cn: a file named test
30 | ou: a folder named developer
31 |
32 | if you need to find a file/cn,
33 | use its full path:
34 | CN=test,OU=developer,DC=rv,DC=ark
35 |
36 |
37 |
38 |
39 | 🔶 Create a Domain
40 |
41 | ◎ LDAP domain: xxx.yyy rv.ark
42 | ◎ LDAP_BASE_DN: dc=xxx,dc=yyy dc=rv,dc=ark
43 |
44 |
45 | 🔶 Domain tree demo
46 |
47 | dc=rv,dc=ark
48 | cn=file_0
49 | ou=Folder_1
50 | cn=file_1
51 | ou=Folder_2
52 | cn=file_21
53 | cn=file_22
54 |
55 | 🔶 DC OU CN
56 |
57 | DC ( like root / ):  top of the ldap tree
58 | OU ( like a folder ): ...
59 | CN ( like a file ):  end of the ldap path
60 | you must have a CN.
61 | you can have an OU,
62 | you can have lots of OUs.
63 | a file can sit under the root (DC),
64 | a file can sit under a folder (OU).
65 |
66 |
67 | ‼️ every cn has its own unique path: cn-dc or cn-ou-dc ‼️
68 | ‼️ every cn has its own unique path: cn-dc or cn-ou-dc ‼️
69 | ‼️ every cn has its own unique path: cn-dc or cn-ou-dc ‼️
70 |
71 |
72 |
73 | 🔶 DN / base_dn
74 | Distinguished Name ➜ a file/cn`s full path ➜ e.g. cn=test,ou=developer,dc=rv,dc=ark
75 | Base Distinguished Name ➜ the root`s path ➜ dc=rv,dc=ark
76 |
77 |
78 |
79 |
80 |
81 |
82 |
83 | 🔵 database prepare
84 |
85 | 🔶 needs
86 |
87 | i already have a mariadb.
88 | i use the same mariadb for all my docker apps,
89 | so i do not create a new mariadb, i reuse the one i already have.
90 |
91 | ldap db: dbldap
92 | ldap user: userldap
93 | ldap user password: userldappassword
94 | set permit: let user: userldap can only visit dbldap
95 |
96 |
97 | 🔶 cmd
98 |
99 | MariaDB [(none)]> CREATE DATABASE dbldap;
100 | MariaDB [(none)]> CREATE USER 'userldap'@'%' IDENTIFIED BY 'userldappassword';
101 | MariaDB [(none)]> GRANT ALL PRIVILEGES ON dbldap.* TO 'userldap'@'%';
102 | MariaDB [(none)]> show databases;
103 |
104 |
105 |
106 |
107 |
108 |
109 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 openLDAP. Docker demo - prepair
110 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 openLDAP. Docker demo - prepair
111 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 openLDAP. Docker demo - prepair
112 |
113 | 🔵 Other Demo
114 | https://gist.github.com/thomasdarimont/d22a616a74b45964106461efb948df9c
115 |
116 |
117 |
118 |
119 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 worked demo 1
120 |
121 | ⭐️ only change domain & password. no other ⭐️
122 |
123 |
124 | version: '3'
125 | services:
126 | openldap:
127 | image: osixia/openldap:latest
128 | container_name: openldap
129 | domainname: "rv.ark"
130 | hostname: "openldap"
131 | environment:
132 | LDAP_LOG_LEVEL: "256"
133 | LDAP_ORGANISATION: "RVLAB Inc."
134 | LDAP_DOMAIN: "rv.ark"
135 | LDAP_BASE_DN: "dc=rv,dc=ark"
136 | LDAP_ADMIN_PASSWORD: "changeme"
137 | LDAP_CONFIG_PASSWORD: "changeme"
138 | LDAP_READONLY_USER: "false"
139 | LDAP_READONLY_USER_USERNAME: "readonly"
140 | LDAP_READONLY_USER_PASSWORD: "readonly"
141 | LDAP_RFC2307BIS_SCHEMA: "false"
142 | LDAP_BACKEND: "mdb"
143 | LDAP_TLS: "true"
144 | LDAP_TLS_CRT_FILENAME: "ldap.crt"
145 | LDAP_TLS_KEY_FILENAME: "ldap.key"
146 | LDAP_TLS_CA_CRT_FILENAME: "ca.crt"
147 | LDAP_TLS_ENFORCE: "false"
148 | LDAP_TLS_CIPHER_SUITE: "SECURE256:-VERS-SSL3.0"
149 | LDAP_TLS_PROTOCOL_MIN: "3.1"
150 | LDAP_TLS_VERIFY_CLIENT: "demand"
151 | LDAP_REPLICATION: "false"
152 | #LDAP_REPLICATION_CONFIG_SYNCPROV: "binddn="cn=admin,cn=config" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase="cn=config" type=refreshAndPersist retry="60 +" timeout=1 starttls=critical"
153 | #LDAP_REPLICATION_DB_SYNCPROV: "binddn="cn=admin,$LDAP_BASE_DN" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase="$LDAP_BASE_DN" type=refreshAndPersist interval=00:00:00:10 retry="60 +" timeout=1 starttls=critical"
154 | #docker-compose.ymlLDAP_REPLICATION_HOSTS: "#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap2.example.org']"
155 | KEEP_EXISTING_CONFIG: "false"
156 | LDAP_REMOVE_CONFIG_AFTER_SETUP: "true"
157 | LDAP_SSL_HELPER_PREFIX: "ldap"
158 | tty: true
159 | stdin_open: true
160 | volumes:
161 | - /var/lib/ldap
162 | - /etc/ldap/slapd.d
163 | - /container/service/slapd/assets/certs/
164 | ports:
165 | - "389:389"
166 | - "636:636"
167 | phpldapadmin:
168 | image: osixia/phpldapadmin:latest
169 | container_name: phpldapadmin
170 | environment:
171 | PHPLDAPADMIN_LDAP_HOSTS: "openldap"
172 | PHPLDAPADMIN_HTTPS: "false"
173 | ports:
174 | - "8080:80"
175 | depends_on:
176 | - openldap
177 |
178 |
179 |
180 |
181 |
182 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 simple demo 2
183 |
184 | version: '3'
185 | services:
186 | openldap:
187 | image: osixia/openldap:latest
188 | container_name: openldap
189 | domainname: "rv.ark"
190 | hostname: "openldap"
191 | environment:
192 | LDAP_DOMAIN: "rv.ark"
193 | LDAP_BASE_DN: "dc=rv,dc=ark"
194 | LDAP_ADMIN_PASSWORD: "changeme"
195 | LDAP_CONFIG_PASSWORD: "changeme"
196 | volumes:
197 | - /mnt/dpnvme/DMGS-DKP/Auth-openLDAP/ldap.dir:/var/lib/ldap
198 | - /mnt/dpnvme/DMGS-DKP/Auth-openLDAP/slapd.dir:/etc/ldap/slapd.d
199 | ports:
200 | - "389:389"
201 | - "636:636"
202 |
203 | phpldapadmin:
204 | image: osixia/phpldapadmin:latest
205 | container_name: phpldapadmin
206 | environment:
207 | PHPLDAPADMIN_LDAP_HOSTS: "openldap"
208 | PHPLDAPADMIN_HTTPS: "false"
209 | ports:
210 | - "8080:80"
211 | depends_on:
212 | - openldap
213 |
214 |
215 |
216 |
217 |
218 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Use demo
219 |
220 |
221 | 🔵 visit & login
222 |
223 | http://172.16.1.144:8080
224 | the default admin user`s cn is admin, under your base dn:
225 |
226 | cn=admin,dc=rv,dc=ark
227 | cn=admin,dc=example,dc=org (the image`s default domain)
228 |
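a quick CLI check that the server and the admin bind work: a minimal sketch, assuming the demo compose above
(port 389 published on 172.16.1.144, admin password changeme, base dn dc=rv,dc=ark):

ldapsearch -x -H ldap://172.16.1.144:389 \
  -D "cn=admin,dc=rv,dc=ark" -w changeme \
  -b "dc=rv,dc=ark" "(objectClass=*)"
# -x simple bind / -D bind dn / -w password / -b search base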
229 |
230 | 🔵 design ldap
231 |
232 | domain
233 | cn: xxxxx ➜ admin/manager / root .. manager account (default ??? )
234 | ou: prod
235 | cn: xxxx
236 | ou: test
237 | cn: yyyy
238 | ou: IT
239 |
240 | 🔶 Design
241 |
242 | two kinds of entries: cn & ou.
243 | cn is a user. ou is a group.
244 |
245 | a lot of pcs (by department)
246 |
247 | router. firewall. wifi.
248 | win. linux.
249 | traefik. heimdall.
250 |
251 | local vm. remote vps.
252 |
253 |
254 |
255 |
256 |
257 |
258 |
259 |
260 |
261 | 🔵 create user demo
262 |
263 | 🔶 create group
264 |
265 | ‼️ if you want to create a user (it needs a gidNumber),
266 | you must create the group first (this creates the gidNumber)
267 | >> Generic: Posix Group
268 |
269 |
270 | 🔶 create user
271 |
272 | Generic: User Account
273 |
274 |
275 | 🔶 add email to user
276 |
277 | user >> Add new attribute >> mail (email) ...
278 |
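the same thing from the CLI instead of phpldapadmin: a minimal sketch with ldapadd
(assumes the OU "IT" already exists and the admin password is changeme; the names and id numbers are placeholders):

ldapadd -x -H ldap://172.16.1.144 -D "cn=admin,dc=rv,dc=ark" -w changeme <<'EOF'
# group first, so we have a gidNumber for the user
dn: cn=grp-it,ou=IT,dc=rv,dc=ark
objectClass: posixGroup
cn: grp-it
gidNumber: 10010

# then the user, with a mail attribute for later use (e.g. authelia 2FA mails)
dn: cn=alice,ou=IT,dc=rv,dc=ark
objectClass: inetOrgPerson
objectClass: posixAccount
cn: alice
sn: alice
uid: alice
uidNumber: 10001
gidNumber: 10010
homeDirectory: /home/alice
mail: alice@rv.ark
userPassword: changeme
EOF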
279 |
280 |
281 | 🔵 Create OU
282 |
283 | Generic: Organisational Unit
284 |
285 |
286 |
287 |
288 |
289 |
290 |
291 | 🔵 add device to ldap
292 |
293 |
294 |
295 | 🔵 How Join
296 |
297 | - join ldap from client.
298 | - add client in server
299 |
300 | there are two ways to add a client to the domain.
301 | usually you join the server from the client.
302 | if something goes wrong, try adding the client from the server side instead.
303 |
304 |
305 | 🔶 mikrotik
306 |
307 | routeros radius
308 | does not support openldap,
309 | it only supports windows ad (NPS).
310 |
311 |
312 |
313 |
--------------------------------------------------------------------------------
/docs/🎪🎪🎪🎪-🔐🔐🔐 SSO ➜ Authelia.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 4930
3 | title: 🎪🎪🎪🎪-🔐🔐🔐 SSO ➜ Authelia install
4 | ---
5 |
6 | # SSO Authelia
7 |
8 |
9 | 🔵 file
10 |
11 | 🔶 Offical config manual
12 |
13 | https://www.authelia.com/configuration/
14 |
15 |
16 | 🔶 docker - compose
17 |
18 | https://github.com/authelia/authelia/blob/master/examples/compose/lite/docker-compose.yml
19 |
20 |
21 |
22 |
23 | 🔵 prepare
24 |
25 | authelia sits in front of something like traefik,
26 | so you`d better configure traefik first, and enable ssl.
27 |
28 | you will also want something like openldap.
29 |
30 | domain & SSL.
31 |
32 |
33 |
34 |
35 | 🔵 Authelia Docker needs
36 |
37 | 1. redis - must
38 | 2. database - optional
39 | Authelia has a built-in sqlite3 database.
40 |
41 |
42 |
43 |
44 | 🔵 email / verify / 2FA
45 |
46 | on their first login to authelia, every user must verify with 2FA.
47 | using 2FA means email must be set up first.
48 |
49 |
50 | 🔶 send & receive email
51 |
52 | how authelia sends email: via your email account`s smtp function ➜ set in the authelia config
53 | who receives the email: the email configured for the login user ➜ set in ldap / or in the users file.
54 |
55 |
56 | 🔶 gmail smtp
57 |
58 | very easy: you just need the email address and an app password.
59 | ‼️ you must log in to google and create an app password. the normal email password does not work ‼️
60 | ‼️ you must log in to google and create an app password. the normal email password does not work ‼️
61 | ‼️ you must log in to google and create an app password. the normal email password does not work ‼️
62 |
63 |
64 | 🔶 how the verify works
65 |
66 | 1. authelia sends out an email.
67 | 2. the ldap user receives the email
68 | 3. the user gets a link from the email.
69 | the link has two parts:
70 | one: paste/scan it into a 2fa app like google auth... or bitwarden, to get the one-time code.
71 | one: type the code back into authelia to verify.
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 | 🔵 Config must know
81 |
82 | 🔶 authelia URL ✅
83 |
84 | default_redirection_url: https://auth.0214.icu
85 | authelia needs a url of its own to work.
86 | auth.domain, sso.domain, any name you like.
87 | if you visit authelia on a non-standard port, add it at the end:
88 | https://xxx.0214.icu:yyy
89 |
90 |
91 | 🔶 default rule
92 |
93 | access_control:
94 |   default_policy: deny
95 |
96 | authelia should behave like a firewall:
97 | deny all incoming requests by default,
98 | then explicitly allow the apps that sit behind authelia.
99 |
100 | - bypass ➜ no password needed
101 | - one_factor ➜ password
102 | - two_factor ➜ password + 2FA
103 |
104 |
105 | 🔶 Auth method
106 |
107 | authentication_backend:
108 |   file: ➜ means you use a file to auth users.
109 |   ldap: ➜ means you use ldap to auth users.
110 |
111 | if file:
112 | you need to create a users file
113 | and put the users & passwords in that file.
114 | 🔥 https://github.com/authelia/authelia/tree/master/examples/compose/lite/authelia
115 |
116 | if ldap:
117 | see my demo below.
118 |
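a minimal sketch of such a users file (users_database.yml), placed in the /config volume used by the compose
below; the password hash is a placeholder and must be generated with authelia`s own hash helper for your version:

# /config/users_database.yml (sketch)
users:
  alice:
    displayname: "Alice"
    # replace with a real argon2id hash generated by authelia`s hash helper
    password: "$argon2id$v=19$m=65536,t=3,p=4$REPLACE_ME"
    email: alice@rv.ark
    groups:
      - admins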
119 |
120 |
121 |
122 | 🔵 SMTP config 💯
123 |
124 | https://www.authelia.com/configuration/notifications/smtp/
125 |
126 |
127 | notifier:
128 | smtp:
129 | username: xx2610@gmail.com
130 | password: nfmthdxxxxxxx
131 | sender: xx2610@gmail.com
132 | host: smtp.gmail.com
133 | port: 587
134 |
135 |
136 | ‼️ the password is not the email password. it is the google app password ‼️
137 | ‼️ the password is not the email password. it is the google app password ‼️
138 | ‼️ the password is not the email password. it is the google app password ‼️
139 |
140 |
141 |
142 |
143 |
144 |
145 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 docker compose ✅
146 |
147 |
148 | DBredis:
149 | container_name: DB-Redis
150 | image: 'bitnami/redis:latest'
151 | ports:
152 | - "6379:6379"
153 | environment:
154 | - ALLOW_EMPTY_PASSWORD=yes
155 | volumes:
156 | - /mnt/dpnvme/DMGS-DKP/DB-Redis:/bitnami
157 | # need set host folder permit like 777
158 | # default puid is like 1001.
159 |
160 | AUTHauthelia:
161 | container_name: Auth-SSO-Authelia
162 | image: authelia/authelia:latest
163 | restart: unless-stopped
164 | ports:
165 | - 9091:9091
166 | volumes:
167 | - /mnt/dpnvme/DMGS-DKP/Auth-SSO-Authelia:/config
168 | depends_on:
169 | - DBredis
170 |
171 |
172 |
173 |
174 |
175 |
176 |
177 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 config demo 💯 ✅
178 |
179 |
180 | # yamllint disable rule:comments-indentation
181 | ---
182 | ###############################################################################
183 | # Authelia Configuration #
184 | ###############################################################################
185 |
186 | theme: light #light/dark
187 | jwt_secret: 7tiqSgZY8kb8JthmoVoHWja2
188 | #any text or number you want to add here to create jwt Token
189 | default_redirection_url: https://auth.0214.icu # ⭐️⭐️⭐️⭐️⭐️
190 | server:
191 | host: 0.0.0.0
192 | port: 9091
193 | path: ""
194 | read_buffer_size: 4096
195 | write_buffer_size: 4096
196 | enable_pprof: false
197 | enable_expvars: false
198 | disable_healthcheck: false
199 | tls:
200 | key: ""
201 | certificate: ""
202 |
203 | log:
204 | level: debug
205 |
206 | totp:
207 | issuer: 0214.icu # ⭐️⭐️⭐️⭐️⭐️ no any port. just main domain. not sub domain
208 | period: 30
209 | skew: 1
210 |
211 | authentication_backend:
212 | ldap:
213 | implementation: custom
214 | url: ldap://openldap
215 | # ⭐️ must
216 | # if in same docker network. can use dns name.
217 | # 389 is default port.
218 | timeout: 5s
219 | start_tls: false
220 | tls:
221 | server_name: ldap.example.com
222 | skip_verify: false
223 | minimum_version: TLS1.2
224 | base_dn: OU=OU-SSO-Authelia,DC=rv,DC=ark
225 | # ⭐️ DC=rv,DC=ark ➜ this allow all domain user login to authelia.
226 | # ⭐️ OU=OU-SSO-Authelia,DC=rv,DC=ark ➜ this only allow user under OU-SSO-Authelia can login authelia
227 | users_filter: (&({username_attribute}={input})(objectClass=person))
228 | username_attribute: uid
229 | mail_attribute: mail
230 | display_name_attribute: displayName
231 | groups_filter: (&(member={dn})(objectClass=groupOfNames))
232 | group_name_attribute: cn
233 | permit_referrals: false
234 | permit_unauthenticated_bind: false
235 | user: CN=admin,DC=rv,DC=ark
236 | # ⭐️ cn just use admin. no change;
237 | password: Ys
238 | # ⭐️ domain admin `s password
239 |
240 | # ⭐️⭐️⭐️⭐️⭐️ openldap. username must not include . ➜ xxyy ok xx.yy no ok.
241 | # ⭐️⭐️⭐️⭐️⭐️ openldap. username must not include . ➜ xxyy ok xx.yy no ok.
242 | # ⭐️⭐️⭐️⭐️⭐️ openldap. username must not include . ➜ xxyy ok xx.yy no ok.
243 |
244 |
245 | access_control:
246 | default_policy: deny
247 |
248 | rules:
249 | ## bypass rule
250 |     - domain: "auth.0214.icu" # ⭐️⭐️ the url authelia itself uses. no port.
251 |       policy: bypass
252 |     - domain:
253 |         - "dashy.0214.icu" #⭐️⭐️ example domain to protect. no port.
254 |         - "jumpserver.0214.icu" #⭐️⭐️ example domain to protect. no port.
255 |       policy: one_factor
256 |     - domain:
257 |         - "dsm.0214.icu" #⭐️⭐️ example subdomain to protect. no port.
258 |         - "dvm.0214.icu" #⭐️⭐️ example subdomain to protect. no port.
259 |       policy: two_factor
260 |
261 | session:
262 | name: authelia_session
263 | secret: unsecure_session_secret #any text or number you want to add here to create jwt Token
264 | expiration: 3600 # 1 hour
265 | inactivity: 300 # 5 minutes
266 | domain: 0214.icu # ⭐️⭐️⭐️⭐️⭐️ no port. main domain.
267 |
268 | regulation:
269 | max_retries: 3
270 | find_time: 10m
271 | ban_time: 12h
272 |
273 | storage:
274 | local:
275 | path: /config/db.sqlite3 # can use MySQL too
276 | encryption_key: tujXiHx2ety6HRErqquML35m # encryption database info
277 |
278 |
279 | notifier:
280 | smtp:
281 | username: xx2610@gmail.com
282 | password: nfmthdxnopxx
283 | sender: xx2610@gmail.com
284 | host: smtp.gmail.com
285 | port: 587
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️ Cluster ➜ MiniKube.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️ Cluster ➜ MiniKube
4 | ---
5 |
6 | # Minikube Build
7 |
8 |
9 |
10 |
11 |
12 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
13 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 minikube - basic
14 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
15 |
16 | 🔵 Minikube Desc
17 |
18 | minikube can build a local k8s cluster very quickly.
19 | it is good for testing!
20 | when we learn k8s we need a test environment we can rebuild easily.
21 |
22 | we have two options
23 | minikube + docker ➜ ... ➜ easy
24 | minikube + Podman ➜ better ➜ a little difficult
25 |
26 |
27 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 MacOS: MiniKube + Docker ✅
28 | 🔵 Mac Install minikube ✅
29 |
30 | 🔶 Install Docker-desktop
31 |
32 | brew install docker --cask
33 | (cask install app with GUI )
34 |
35 | 🔶 Start Docker
36 |
37 | open app in gui.
38 |
39 | 🔶 Install Minikube
40 |
41 | brew install minikube
42 |
43 | 🔶 Start Minikube
44 |
45 | minikube start / stop
46 |
47 |
48 |
49 |
50 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 MacOS: MiniKube + Podman ✅
51 |
52 | 🔵 podman / docker cli / docker desktop
53 |
54 | 🔶 Docker cli vs Docker Desktop
55 |
56 | the docker cli (engine) only runs natively on linux,
57 | not on windows or macos.
58 |
59 | if windows or macos needs the docker cli,
60 | it has to install a vm first
61 | and run the docker engine inside that vm.
62 |
63 | so docker-desktop is basically a vm behind the scenes.
64 |
65 | so if possible install the docker cli, not docker desktop:
66 | the cli uses much less ram.
67 |
68 | docker-desktop is still free for personal use, but with some limits;
69 | it has begun to charge.
70 |
71 |
72 |
73 | 🔶 Podman vs Docker
74 |
75 | newer k8s versions stopped supporting docker directly (dockershim was removed)
76 | and use other container runtimes instead; podman fits that world better.
77 | podman is open source;
78 | docker (desktop) has begun to charge money.
79 |
80 | in short, podman is the better choice here.
81 |
82 |
83 |
84 | 🔵 Mac Install Podman & Minikube
85 |
86 | brew install podman
87 | brew install Minikube
88 |
89 |
90 | 🔵 Podman Config
91 |
92 | 🔶 Create VM
93 |
94 | podman machine init --cpus 4 --memory 8192 --disk-size 80
95 | podman machine init --cpus 2 --memory 2048 --disk-size 20
96 |
97 |
98 | 🔶 Start VM
99 |
100 | podman machine start
101 |
102 | after create a vm for podman.
103 | now we can use podman like
104 | iMAC ~ podman image list --all
105 | REPOSITORY TAG IMAGE ID CREATED SIZE
106 |
107 |
108 | 🔶 Give Podman root permission
109 |
110 | by default, podman runs rootless.
111 | if we need rootful mode, we must stop the podman machine first.
112 |
113 | iMAC ~ podman machine stop
114 | Machine "podman-machine-default" stopped successfully
115 | iMAC ~ podman machine set --rootful
116 |
117 |
118 | 🔶 Start Podman again
119 |
120 | podman machine start
121 |
122 | The system helper service is not installed; the default Docker API socket
123 | address can't be used by podman. If you would like to install it run the
124 | following commands:
125 |
126 | sudo /usr/local/Cellar/podman/4.1.0/bin/podman-mac-helper install
127 | podman machine stop; podman machine start
128 |
129 | You can still connect Docker API clients by setting DOCKER_HOST using the
130 | following command in your terminal session:
131 |
132 | export DOCKER_HOST='unix:///Users/techark/.local/share/containers/podman/machine/podman-machine-default/podman.sock'
133 |
134 | Machine "podman-machine-default" started successfully
135 |
136 |
137 | ‼️ there is a warning during start.
138 | ‼️ follow the warning and run the two commands it prints:
139 |
140 | sudo /usr/local/Cellar/podman/4.1.0/bin/podman-mac-helper install
141 | export DOCKER_HOST='unix:///Users/techark/.local/share/containers/podman/machine/podman-machine-default/podman.sock'
142 |
143 |
144 | 🔶 Restart Podman
145 |
146 | podman machine stop
147 | podman machine start
148 | now there is no warning.
149 |
150 |
151 | 🔶 Test Podman
152 |
153 | podman run docker.io/hello-world
156 |
157 |
158 | 🔵 Minikube Start
159 |
160 | 🔶 Debug Mode
161 |
162 | minikube start --alsologtostderr -v=7
163 |
164 | if error. try: minikube delete
165 |
166 |
167 | 🔶 Start
168 |
169 | minikube start
170 | minikube start --driver=podman --container-runtime=cri-o
171 | minikube start --cpus 4 --memory 7915 --disk-size=80G --driver=podman --network-plugin=cni --cni=auto
172 |
173 |
174 | 🌟 Enabled addons: storage-provisioner, default-storageclass
175 | 💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
176 | 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
177 |
178 |
179 |
180 | 🔶 Open dashboard
181 |
182 | minikube dashboard
183 |
184 |
185 | 🔵 Minikube kubectl Config
186 |
187 | ‼️‼️ when you use minikube`s bundled kubectl, every command is slightly different ‼️‼️
188 | ‼️‼️ when you use minikube`s bundled kubectl, every command is slightly different ‼️‼️
189 | ‼️‼️ when you use minikube`s bundled kubectl, every command is slightly different ‼️‼️
190 |
191 | minikube kubectl -- create deployment nginxdepl --image=nginx
192 | you need to add -- after kubectl
193 |
194 |
195 | or set up an alias as follows,
196 | so we can use kubectl as if it were a normal install.
197 |
198 | https://minikube.sigs.k8s.io/docs/handbook/kubectl/
199 |
200 | 🔶 1. Make alias
201 |
202 | alias kubectl="minikube kubectl --"
203 |
204 | 🔶 2. create a symbolic link
205 |
206 | ln -s $(which minikube) /usr/local/bin/kubectl
207 |
208 |
209 | 🔵 Minikube Config
210 |
211 | 🔶 Config CPU & Memory
212 |
213 | minikube stop
214 | minikube config set memory 7000
215 | minikube config set cpus 4
216 | minikube start
217 |
218 |
219 | 🔵 Minikube Reset - option
220 |
221 | minikube delete --all
222 |
223 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Minikube ✶ Demo 💯
224 |
225 | ‼️ Offical Manual ‼️
226 | https://minikube.sigs.k8s.io/docs/start/
227 |
228 |
229 |
230 | 🔵 Start Minikube
231 |
232 | 🔶 Start Docker & Podman
233 |
234 | if use docker-desktop.
235 | have to run docker in gui first.
236 | if use podman.
237 | need start podman first
238 |
239 |
240 | 🔶 Start Minikube
241 |
242 | Minikube start
243 |
244 |
245 | 🔶 Check Minikube Status
246 |
247 | iMAC ~ minikube status
248 | minikube
249 | type: Control Plane
250 | host: Running
251 | kubelet: Running
252 | apiserver: Running
253 | kubeconfig: Configured
254 | docker-env: in-use
255 |
256 |
257 | 🔵 1. Create Deployment & export Port
258 |
259 | 🔶 Deploy Docker
260 |
261 | kubectl create deployment dashy --image=lissy93/dashy
262 |
263 | ◎ Delete Deploy
264 | kubectl delete deploy
265 | kubectl delete deploy nginx
266 |
267 | 🔶 Expose the Service Port
268 |
269 | kubectl expose deployment dashy --port=80 --type=NodePort
270 | this is the port the service uses inside the cluster / not the port you visit
271 |
272 |
273 | 🔶 Check
274 |
275 | ◎ Check Deploy kubectl get deployments
276 | ◎ Check Pods kubectl get pods
277 | ◎ Check Service kubectl get services
278 |
279 |
280 | 🔵 Run Service (enable visits from outside the cluster)
281 |
282 | minikube service dashy --url
283 | http://192.168.64.3:31147
284 |
285 |
286 | ‼️ this may open a web page for you. that page is wrong! close it. one more step is needed. ‼️
287 | keep that terminal open / do not close it; if you close it, the web access closes too.
288 | open a new terminal and run the command below
289 |
290 |
291 |
292 | 🔵 Forward Port To Local
293 |
294 | kubectl port-forward service/NAME LocalPort:ServerPort
295 |
296 | kubectl port-forward service/dashy 7000:80 ✅
297 | kubectl port-forward service/dashy 88:80 ❌
298 | ‼️ the local port can not be too low: ports below 1024 need root permission ‼️
299 |
300 |
301 | 🔶 Visit
302 |
303 | http://localhost:7000
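a quick check from another terminal while the port-forward is running (a minimal sketch; assumes curl is installed):

curl -I http://localhost:7000
# expect something like HTTP/1.1 200 OK from the dashy service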
304 |
305 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-1️⃣ Helm ➜ Demo.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️1️⃣ Helm ➜ Demo
4 | ---
5 |
6 | # Helm Demo
7 |
8 |
9 |
10 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
12 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
13 |
14 |
15 |
16 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ❌❌❌❌❌❌❌❌
17 |
18 | 🔵 Helm Demo ➜ Dashy
19 |
20 | https://artifacthub.io/packages/helm/krzwiatrzyk/dashy
21 |
22 |
23 | helm repo add krzwiatrzyk https://krzwiatrzyk.github.io/charts/
24 | helm install dashy-v1 krzwiatrzyk/dashy --version 0.0.3
25 | helm status dashy-v1
26 |
27 | 🔥 if use helm in k3s export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
28 | 🔥 if use helm in k3s export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
29 | 🔥 if use helm in k3s export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
30 |
31 | here our dashy does not start.
32 | some apps like nginx need no config and start successfully
33 | ➜ they use the default config
34 |
35 | some apps like dashy must have a config, or they fail to start.
36 | so how do we configure an app like dashy?
37 | every app differs -.- check the app`s config manual
38 |
39 |
40 | 🔶 check manual
41 |
42 | https://artifacthub.io/packages/helm/krzwiatrzyk/dashy
43 |
44 | helm upgrade --install --set-file configMap.config.data."config\.yml"=config.yml
45 | helm upgrade --install --set-file configMap.config.data."config\.yml"=cm-dashy-config.yml
46 |
47 | --set-file configMap.config.data."config\.yml"=xxxx.yml
48 | ➜ the left part means this app needs a config file named: config.yml.
49 | ➜ xxxx.yml means you can use your own filename to overwrite the default one.
50 |
51 | so we create a configmap file named: cm-dashy-config.yml
52 | then use the command below to import cm-dashy-config.yml as the app`s config file
53 | helm upgrade --install --set-file configMap.config.data."config\.yml"=cm-dashy-config.yml
54 |
55 | ❌ ❌❌❌❌❌❌
56 |
57 | kubectl create configmap cm-dashy-config.yaml --from-file=/root/dashy.conf
58 | kubectl create configmap config.yaml --from-file=/root/dashy.conf
59 |
60 |
61 | helm install dashy-v1 krzwiatrzyk/dashy --version 0.0.3
62 |
63 | helm upgrade --install --set-file configMap.config.data."config\.yml"=cm-dashy-config.yml
64 | helm upgrade --install --set-file configMap.config.data."config\.yml"=cm-dashy-config.yml
65 | helm upgrade dashy-v1 krzwiatrzyk/dashy --install --set-file configMap.config.data."config\.yml"=cm-dashy-config.yaml
66 |
67 |
68 | sh.helm.release.v1.dashy-v1.v1
69 |
70 |
71 |
72 |
73 | 🔶 create configmap
74 |
75 |
76 | 🔥 it is a volume, not a file.
77 |
78 | MountVolume.SetUp failed for volume "config" : configmap references non-existent config key: conf.yml
79 |
80 | helm mounts it as a volume..
81 |
82 |
83 |
84 |
85 |
86 | configmap:
87 | config:
88 | enabled: true
89 | data:
90 | conf.yml: |
91 | {{- .Files.Get "conf.yml" | nindent 4 }}
92 |
93 | # -- Probe configuration
94 | # -- [[ref]](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
95 | # @default -- See below
96 |
97 |
98 | persistence:
99 | # -- Default persistence for configuration files.
100 | # @default -- See below
101 | config:
102 | # -- Enables or disables the persistence item
103 | enabled: true
104 |
105 | # -- Sets the persistence type
106 | # Valid options are pvc, emptyDir, hostPath, secret, configMap or custom
107 | type: configMap
108 |
109 | # -- Where to mount the volume in the main container.
110 | # Defaults to `/`,
111 | # setting to '-' creates the volume but disables the volumeMount.
112 | mountPath: /app/public/conf.yml
113 |
114 | # -- Used in conjunction with `existingClaim`. Specifies a sub-path inside the referenced volume instead of its root
115 | subPath: conf.yml
116 | name: dashy-config
117 | items:
118 | - key: conf.yml
119 | path: conf.yml
120 |
121 |
122 |
123 |
124 |
125 | 🔵 volume
126 |
127 | Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config]: timed out waiting for the condition
128 |
129 |
130 |
131 |
132 | persistence:
133 | config:
134 | accessMode: ReadWriteOnce
135 | enabled: false
136 | readOnly: false
137 | retain: false
138 | size: 1Gi
139 | type: pvc
140 | shared:
141 | enabled: false
142 | mountPath: /shared
143 | type: emptyDir
144 |
145 |
146 | -.-
147 |
148 |
149 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
150 |
151 | 🔵 helm available values
152 |
153 | helm show values chart-name ➜ check which values can be set.
154 | helm show values krzwiatrzyk/dashy
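a common workflow is to dump those values to a file, edit it, and install with it. a minimal sketch
(my-values.yaml is just a placeholder filename):

helm show values krzwiatrzyk/dashy > my-values.yaml
# edit my-values.yaml, then:
helm upgrade --install dashy-v1 krzwiatrzyk/dashy -f my-values.yaml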
155 |
156 |
157 |
158 |
159 | 🔵 Helm APP Custom
160 |
161 | almost all apps need some config:
162 | - set values
163 | - mount a config file
164 | - expose a port
165 |
166 | in helm this takes two parts:
167 | -f xxx.yaml ➜ use a custom values file ➜ change the app`s default settings
168 | --set-file ➜ mount a custom config file ➜ like nginx.conf
169 |
170 | helm install --dry-run --debug \
171 | stable/rabbitmq \
172 | --name testrmq \
173 | --namespace rmq \
174 | -f rabbit-values.yaml \
175 | --set-file rabbitmq.advancedConfig=rabbitmq.config
176 |
177 | we maybe do not need to change the app`s default values,
178 | but most of the time we do need to mount a custom config.
179 |
180 |
181 |
182 | 🔵 Helm Custom ➜ config
183 |
184 | 🔶 check the default values
185 |
186 | we want to change a helm app`s config.
187 | we need to know the app`s default settings first.
188 | this can be checked on the website
189 | https://artifacthub.io/packages/helm/krzwiatrzyk/dashy
190 | https://artifacthub.io/packages/helm/krzwiatrzyk/dashy?modal=values
191 |
192 |
193 | 🔶 analyse the default values
194 |
195 | configmap:
196 |   config:
197 |     enabled: true
198 |     data:
199 |       conf.yml: |
200 |         {{- .Files.Get "conf.yml" | nindent 4 }}
201 |
202 | helm upgrade --install --set-file configMap.config.data."config\.yml"=config.yml
203 |
204 | it needs a custom file named config.yml prepared first.
205 | but i want to change the filename to cm-dashy.yaml,
206 | so i need to change the default value first.
207 | how do we overwrite helm`s default values?
208 |
209 |
210 | 🔵 Helm overwrite default value
211 |
212 | the app`s default custom filename is config.yml
213 | we want to change it to cm-appname-config.yml
214 |
215 | in helm we can use the --set flag to overwrite a value.
216 |
217 | helm upgrade --install -green -f -values.yaml -100.0.112+xxxx.tgz --set .deployment.strategy=blue-green
218 | helm upgrade --install -f -values.yaml -100.0.112+xxxx.tgz --set .image.tag=9.0.1xx.xx.xx
219 |
220 |
221 | helm upgrade --install --set-file configMap.config.data."config\.yml"=config.yml
222 | helm upgrade --install --set-file configMap.config.data."config\.yml"=cm-dashy-config.yml
223 |
224 |
225 |
226 |
227 |
228 | we need use
229 |
230 |
231 | helm install --dry-run --debug \
232 | stable/rabbitmq \
233 | --name testrmq \
234 | --namespace rmq \
235 | -f rabbit-values.yaml \
236 | --set-file rabbitmq.advancedConfig=rabbitmq.config
237 |
238 |
239 |
240 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 helm demo nginx
241 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 helm demo nginx
242 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 helm demo nginx
243 |
244 |
245 | 🔵 Mount Config
246 |
247 | in docker we use volumes when starting the container
248 | volumes:
249 |   - ./license.dat:/etc/sys0/license.dat
250 |   - ./config.json:/etc/sys0/config.json
251 |
252 |
253 | in k8s/helm it is the same idea
254 |
255 | helm install my-nginx bitnami/nginx
256 | helm install my-nginx -f values.yaml bitnami/nginx
257 | helm install my-nginx --set ingress.enabled=true bitnami/nginx
258 |
259 |
260 | or, better, change the config via a values.yaml
261 | and run the app with your own values.yaml.
262 | every value you could set on the command line,
263 | you can set in the values.yaml
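a minimal sketch of that values.yaml approach for bitnami/nginx (replicaCount and service.type are common
values in this chart, but confirm with helm show values bitnami/nginx for your chart version):

cat > my-nginx-values.yaml <<'EOF'
replicaCount: 2
service:
  type: NodePort
EOF
helm install my-nginx -f my-nginx-values.yaml bitnami/nginx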
264 |
265 |
--------------------------------------------------------------------------------
/docs/🎪-9️⃣ 📀📀📀 CEPH ➜ Build.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1930
3 | title: 🎪-9️⃣📀📀📀 CEPH ➜ Build
4 | ---
5 |
6 | # Storage ✶ CEPH ➜ Build
7 |
8 | ⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️⭐️
9 | https://www.ityww.cn/1588.html
10 |
11 |
12 |
13 | 🔵 VM Prepair ✅
14 |
15 | 🔶 Ceph Role Desc
16 |
17 | there are only two kinds of roles: manager and worker (osd).
18 | no need to manually set/choose the monitor node:
19 | during install, ceph automatically chooses/sets all the roles.
20 |
21 |
22 | 🔶 VM System Version
23 |
24 | https://docs.ceph.com/en/quincy/start/os-recommendations/
25 |
26 | ‼️ Ceph 17.2 is test on Ubuntu_20 only. No Ubuntu_22 yet ‼️
27 | ‼️ Ceph 17.2 is test on Ubuntu_20 only. No Ubuntu_22 yet ‼️
28 | ‼️ Ceph 17.2 is test on Ubuntu_20 only. No Ubuntu_22 yet ‼️
29 |
30 |
31 | 🔶 VM Number: at least two are needed
32 |
33 | CEPH-Mgr x 1
34 |
35 | CEPH-Node x 1
36 | 3 x 345G Physical iscsi disk from nas
37 | 1 x 1T Physical iscsi disk from nas
38 |
39 |
40 | 🔶 VM NIC
41 |
42 | ◎ Manager
43 | VLAN-STO-1012-CEPH_internel 10.12.12.0/24 Must ➜ ceph cluster internal commute.
44 |
45 | ◎ Worker
46 | VLAN-STO-1010-NAS_ISCSI 10.10.10.0/24 Option ➜ nas. provide disk to worker
47 | VLAN-STO-1012-CEPH_internel 10.12.12.0/24 Must ➜ ceph cluster internal commute.
48 |
49 |
50 |
51 | 🔶 VM Basic Setup
52 |
53 | ◎ Static IP
54 | ◎ update apt Source
55 | ◎ install docker
56 |
57 |
58 |
59 |
60 | 🔵 VM Mount iscsi disk ✅
61 |
62 | https://manjaro.site/how-to-connect-to-iscsi-volume-from-ubuntu-20-04/
63 |
64 | 🔶 iSCSI Initiator Install
65 |
66 | sudo apt install open-iscsi
67 |
68 |
69 | 🔶 Config network card for iscsi
70 |
71 | ◎ why
72 | we have three nics.
73 | give each nic a nickname,
74 | it is easier to manage.
75 |
76 |
77 |
78 | network:
79 | ethernets:
80 | ens160:
81 | match:
82 | macaddress: 00:50:56:85:b8:cc
83 | set-name: Nic-CEPH_1012
84 | dhcp4: false
85 | addresses: [10.12.12.77/24]
86 | gateway4: 10.12.12.11
87 | nameservers:
88 | addresses: [10.12.12.11,10.12.12.1]
89 | optional: true
90 | ens224:
91 | match:
92 | macaddress: 00:50:56:85:ec:43
93 | set-name: Nic-NAS_1010
94 | dhcp4: false
95 | addresses: [10.10.10.77/24]
96 | optional: true
97 | ens192:
98 | match:
99 | macaddress: 00:50:56:85:55:a5
100 | set-name: Nic-CEPH_1011
101 | dhcp4: false
102 | addresses: [10.11.11.77/24]
103 | optional: true
104 | version: 2
105 |
106 |
107 |
108 |
109 |
110 | 🔶 config iscsi auth
111 |
112 | ◎ if your nas needs chap, do this. otherwise skip.
113 | sudo vi /etc/iscsi/iscsid.conf
114 | node.session.auth.authmethod = CHAP
115 | node.session.auth.username = username
116 | node.session.auth.password = password
117 |
118 |
119 | 🔶 enable server on boot
120 |
121 | sudo systemctl enable open-iscsi
122 | sudo systemctl enable iscsid
123 |
124 |
125 | 🔶 Discover remote nas iscsi target
126 |
127 | CEPH-99 ~ sudo iscsiadm -m discovery -t sendtargets -p 10.10.10.88
128 | 10.10.10.88:3260,1 iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH11.b6f8cfe4bdb
129 | 10.10.10.88:3260,1 iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH22.b6f8cfe4bdb
130 | 10.10.10.88:3260,1 iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH33.b6f8cfe4bdb
131 |
132 |
133 | 🔶 manual connect target
134 |
135 | sudo iscsiadm --mode node --targetname iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH11.b6f8cfe4bdb --portal 10.10.10.88 --login
136 | sudo iscsiadm --mode node --targetname iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH22.b6f8cfe4bdb --portal 10.10.10.88 --login
137 | sudo iscsiadm --mode node --targetname iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH33.b6f8cfe4bdb --portal 10.10.10.88 --login
138 |
139 |
140 | 🔶 Comnand
141 |
142 | ◎ restart services sudo systemctl restart iscsid open-iscsi
143 | ◎ List connected iscsi node sudo iscsiadm -m node -l
144 | ◎ Disconnect all iscsi node sudo iscsiadm -m node -u
145 |
146 | ◎ List Disk sudo fdisk -l
147 | now you should see your iscsi disk at here.
148 | next is auto connect iscsi tatget at boot.
149 |
150 |
151 |
152 |
153 |
154 | 🔶 auto mount: Config iscsi node
155 |
156 | use the root user.
157 | cd /etc/iscsi/nodes
158 | configure every iscsi target you need to auto connect:
159 | enter the iscsi target`s default config file
160 | and change
161 | node.startup = manual to node.startup = automatic
162 | sed -i 's/node.startup = manual/node.startup = automatic/g' ./default
163 |
164 | cat ./default | grep node.startup
165 | node.startup = automatic
166 |
167 | reboot the server and check the result.
168 |
169 |
170 | CEPH-99 ~ sudo su -
171 | CEPH-99.Root ~ cd /etc/iscsi/nodes
172 | CEPH-99.Root nodes ls
173 | iqn.2000-01.com.synology:NAS-DS2015XS.Target-0000.b6f8cfe4bdb
174 | iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH11.b6f8cfe4bdb
175 | iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH22.b6f8cfe4bdb
176 | iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH33.b6f8cfe4bdb
177 | CEPH-99.Root nodes cd iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH11.b6f8cfe4bdb
178 | CEPH-99.Root iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH11.b6f8cfe4bdb ls
179 | 10.10.10.88,3260,1
180 | CEPH-99.Root iqn.2000-01.com.synology:NAS-DS2015XS.Target-CEPH11.b6f8cfe4bdb cd 10.10.10.88,3260,1
181 | CEPH-99.Root 10.10.10.88,3260,1 ls
182 | default
183 |
184 |
185 |
186 |
187 | 🔵 All Node: NTP & Timezone ✅
188 |
189 | 🔶 NTP
190 |
191 | ◎ status sudo systemctl status ntp
192 | ◎ add local ntp server sudo vi /etc/ntp.conf
193 | ◎ restart sudo service ntp restart
194 |
195 |
196 | 🔶 Timezone
197 |
198 | timedatectl list-timezones
199 | sudo timedatectl set-timezone America/Los_Angeles
200 |
201 |
202 |
203 | 🔵 All Node: Hostname & Hosts
204 |
205 | 🔶 Change Hostname
206 |
207 | hostnamectl set-hostname CEPH-Mgr
208 | hostnamectl set-hostname CEPH-Node
209 |
210 | ‼️ ceph host names must be short names like CEPH-Node03, not CEPH-Node03.ark (fqdn) ‼️
211 |
212 |
213 | 🔶 Config Hosts file
214 |
215 | > /etc/hosts means replace
216 | >> /etc/hosts means append
217 |
218 |
219 | sudo bash -c 'echo "
220 | 127.0.0.1 localhost
221 | 10.12.12.70 CEPH-Mgr
222 | 10.12.12.77 CEPH-Node" > /etc/hosts'
223 |
224 |
225 |
226 |
227 |
228 |
229 |
230 | 🔵 Mgr Node: install Tool: Cephadm
231 |
232 | 🔶 Ubuntu 20
233 |
234 | sudo apt install -y cephadm
235 |
236 |
237 | 🔵 Mgr Node: deploy Monitor
238 |
239 | ‼️ the monitor must be installed on the local machine that has cephadm installed. here: the Mgr node ‼️
240 | for the ip, choose the ceph internal vlan.
241 |
242 | sudo cephadm bootstrap --mon-ip 10.12.12.70
243 |
244 | during install, remember the password it prints:
245 |
246 | URL: https://CEPH-Mgr:8443/
247 | User: admin
248 | Password: r3uq4z2agg
249 |
250 |
251 | 🔶 what this does
252 |
253 | it creates an ssh key;
254 | later you need to upload this public key to the other nodes.
255 |
256 | it also creates a lot of containers on the mgr node.
257 |
258 |
259 | 🔵 Monitor web login
260 |
261 | https://10.12.12.70:8443/
262 | https://10.12.12.70:3000/
263 |
264 |
265 |
266 | 🔵 Mgr Node: Enable the ceph command
267 |
268 | sudo cephadm install ceph-common
269 |
270 | without this,
271 | you can not use the ceph command directly on ceph-mgr;
272 | you would have to enter the cephadm shell first
273 | and use the ceph command inside that shell:
274 | sudo cephadm shell
275 | ceph -s
276 |
277 |
278 |
279 |
280 | 🔵 upload ssh key to the other nodes
281 |
282 | cephadm has already created a key for you,
283 | no need to create an ssh key yourself.
284 |
285 | as for the user, we use root.
286 | by default root is disabled in ubuntu.
287 |
288 |
289 | 🔶 ALL Node: Config root User
290 |
291 | ◎ Enable root
292 | sudo passwd root
293 |
294 | ◎ Allow Root ssh
295 |
296 | sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
297 | sudo service sshd restart
298 |
299 |
300 | 🔶 CEPH-Mgr Upload public key to other node
301 |
302 | ssh-copy-id -f -i /etc/ceph/ceph.pub root@CEPH-Node
303 |
304 |
305 |
306 |
307 | 🔵 Add node to ceph Cluster.
308 |
309 | 🔶 add
310 |
311 | CEPH-Mgr ~ ceph orch host add CEPH-Node
312 |
313 |
314 | 🔶 check host
315 |
316 | CEPH-Mgr ~ sudo ceph orch host ls
317 | HOST ADDR LABELS STATUS
318 | CEPH-Mgr CEPH-Mgr
319 | CEPH-Node CEPH-Node
320 |
321 |
322 | 🔵 Auto Deploy Monitor
323 |
324 | cephadm will automatically deploy manager & monitor daemons to some hosts in the cluster.
325 | ceph wants several managers & monitors for high availability.
326 | just wait, you need to do nothing here:
327 | no need to manually choose which host runs which service (manager / monitor).
328 |
329 |
330 | CEPH-Mgr ~ sudo ceph -s
331 | cluster:
332 | id: b57b2062-e75d-11ec-8f3e-45be4942c0cb
333 | health: HEALTH_WARN
334 | OSD count 0 < osd_pool_default_size 3
335 |
336 | services:
337 | mon: 1 daemons, quorum CEPH-Mgr (age 17m)
338 | mgr: CEPH-Mgr.ljhrsr(active, since 16m), standbys: CEPH-Node.lemojs
339 | osd: 0 osds: 0 up, 0 in
340 |
341 |
342 |
343 |
344 | 🔵 OSD setup
345 |
346 | 🔶 All nodes: check available disks
347 |
348 | i only mount the physical disks on ceph-node.
349 |
350 | CEPH-Node ~ lsblk
351 | sdb 8:16 0 345G 0 disk
352 | sdc 8:32 0 345G 0 disk
353 | sdd 8:48 0 345G 0 disk
354 |
355 |
356 |
357 | 🔶 add disk to ceph cluster
358 |
359 | in ceph-mgr
360 |
361 | sudo ceph orch daemon add osd CEPH-Node:/dev/sdb
362 | sudo ceph orch daemon add osd CEPH-Node:/dev/sdc
363 | sudo ceph orch daemon add osd CEPH-Node:/dev/sdd
364 |
365 |
366 | if a disk is not empty, you need to run the command below on it first:
367 | sgdisk --zap-all /dev/sdxxxx
368 |
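after the OSDs are added, a quick sanity check on the mgr node (a minimal sketch):

sudo ceph osd tree   # every osd should show as "up"
sudo ceph -s         # health should no longer warn about OSD count 0
sudo ceph df         # shows the raw capacity of the new osds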
369 |
370 |
371 |
372 |
373 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️ Cluster ➜➜➜ K8s.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️ Cluster ➜➜➜ K8s
4 | ---
5 |
6 | # K8s Build
7 |
8 |
9 |
10 |
11 |
12 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Basic info
13 |
14 | 🔵 Why k8s
15 |
16 | docker
17 | >> docker compose -- for simple use
18 | >> swarm -- not recommended
19 | >> k8s -- the final answer
20 |
21 |
22 | docker is everywhere.
23 | when you have lots of containers, you need a manager;
24 | k8s / k3s can help you.
25 |
26 |
27 | 🔵 K8s / K3s
28 |
29 | K3s is lightweight ➜ less ram (0.5G ram +)
30 | K8s is heavyweight ➜ much more ram
31 |
32 | k8s has a cluster build tool: kubeadm.
33 |
34 | ◎ K8s Version
35 | https://kubernetes.io/releases/
36 | 1.24+
37 |
38 |
39 | 🔵 minikube
40 |
41 | for test only, but we need one:
42 | learn on minikube, then try on real k8s.
43 |
44 |
45 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s Demo
46 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s Demo
47 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s Demo
48 |
49 |
50 | 🔵 require
51 |
52 | 🔶 ubuntu 20
53 |
54 | ‼️ 🔥 NO Docker installed! Docker has its built-in xxx, but we need CRI-O ‼️
55 | ‼️ 🔥 NO Docker installed! Docker has its built-in xxx, but we need CRI-O ‼️
56 | ‼️ 🔥 NO Docker installed! Docker has its built-in xxx, but we need CRI-O ‼️
57 |
58 |
59 | 🔶 uninstall old docker
60 |
61 | $ sudo apt-get purge -y docker-engine docker docker.io docker-ce
62 | $ sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce
63 |
64 |
65 | ‼️ 🔥 at least two nodes, or the k8s dashboard does not work. ‼️
66 |
67 |
68 | ram must be 2G+ per node; most cheap vps do not have that...
69 |
70 |
71 |
72 | 🔵 VM Prepair
73 |
74 |
75 | 🔶 set Hostname & static IP ???
76 |
77 | K8s-Manager 172.16.0.80
78 | K8s-WorkerG3 172.16.0.83
79 |
80 |
81 | hostnamectl set-hostname your-new-hostname
82 | hostnamectl set-hostname K8s-Worker-VPS
83 |
84 | does a client only need to add the manager`s hostname?
85 | what if some nodes are not in the same vlan?
86 |
87 |
88 |
89 |
90 |
91 | 🔵 Disable swap - All Node
92 |
93 | https://graspingtech.com/disable-swap-ubuntu/
94 |
95 |
96 | 🔶 Check
97 |
98 | sudo swapon --show
99 | if nothing it is disabled.
100 |
101 |
102 | 🔶 forever turn off swap
103 |
104 | sudo vi /etc/fstab
105 | remove /swap line
106 | /swap.img none swap sw 0 0
107 |
108 |
109 | 🔵 Config firewall
110 |
111 | 🔶 output like this means br_netfilter is loaded, which is what we need:
112 |
113 | [root@localhost ~]# lsmod |grep br_netfilter
114 | br_netfilter 22209 0
115 | bridge 136173 1 br_netfilter
116 |
117 |
118 |
119 |
120 |
121 | 🔵 Install CRI-O - ALL Nodes ✅
122 |
123 | ‼️ if you already have docker/containerd installed, something will go wrong. ‼️
124 | use a fresh system. do NOT install docker/containerd!
125 |
126 |
127 | 🔶 Set OS & CRI-O version values
128 |
129 | K8s-Mgr.Root ~ OS=xUbuntu_20.04
130 | K8s-Mgr.Root ~ VERSION=1.24
131 | 🔥 you must run this to set the system version
132 | ◎ find your system
133 | https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/
134 |
135 | ◎ check the latest cri-o version
136 | https://github.com/cri-o/cri-o
137 | it should be the same version as k8s.
138 |
139 |
140 | 🔶 Copy & Paste
141 |
142 | # Add Kubic Repo
143 | echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | \
144 | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
145 |
146 | # Import Public Key
147 | curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | \
148 | sudo apt-key add -
149 |
150 | # Add CRI Repo
151 | echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" | \
152 | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
153 |
154 | # Import Public Key
155 | curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | \
156 | sudo apt-key add -
157 |
158 |
159 | 🔶 Install
160 |
161 | sudo apt update
162 | sudo apt install cri-o cri-o-runc cri-tools -y
163 |
164 |
165 | 🔶 enable
166 |
167 | sudo systemctl enable crio.service
168 |
169 |
170 | 🔶 start & check
171 |
172 | sudo systemctl start crio.service
173 | sudo crictl info
174 | ‼️ make sure runtime is ready. network ignore it. ‼️
175 |
176 |
177 |
178 | 🔵 Kubernetes tools install - All Nodes ✅
179 |
180 | 🔶 Tools Desc
181 |
182 | ◎ kubeadm: deploys the k8s cluster
183 | ◎ kubelet: the node agent that runs pods on each node
184 | ◎ kubectl: the k8s cli tool
185 |
186 |
187 | 🔶 add key
188 |
189 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
190 |
191 |
192 | 🔶 add apt repository
193 |
194 | echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
195 | sudo mv ~/kubernetes.list /etc/apt/sources.list.d
196 |
197 |
198 | 🔶 Install tools
199 |
200 | sudo apt-get update
201 |
202 | sudo apt-get install -y kubelet kubeadm kubectl
203 |
204 | sudo apt-mark hold kubelet kubeadm kubectl
205 | apt-mark hold means these packages are not upgraded when you update the system;
206 | upgrading them by accident often causes problems.
207 |
208 |
209 |
210 |
211 |
212 |
213 | 🔵 Create Cluster (Mgr node) ✅
214 |
215 | 🔶 Create
216 |
217 | sudo kubeadm init --pod-network-cidr=10.244.0.0/16
218 |
219 | kubeadm join 172.16.0.80:6443 --token qrtig7v35 \
220 | --discovery-token-ca-cert-hash sha256:d6
221 |
222 | 🔵 check join cmd
223 |
224 | if forget join cmd.
225 | check
226 |
227 | kubeadm token create --print-join-command
228 |
229 | kubeadm join 172.16.0.80:6443 --token oi0nrh.ns09no --discovery-token-ca-cert-hash sha256:7
230 |
231 |
232 |
233 |
234 |
235 |
236 | 🔵 Worker join cluster.
237 |
238 | kubeadm join 172.16.0.80:6443 --token qrti3b.3tg7v35 \
239 | --discovery-token-ca-cert-hash sha256:f83007a
240 |
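a quick check on the manager node that the worker really joined (a minimal sketch):

kubectl get nodes -o wide
# the new worker should be listed; it may stay NotReady until the flannel plugin below is installed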
241 |
242 | 🔶 ?
243 |
244 | 🔵 Reset K8s Cluster - option
245 |
246 | https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
247 | https://www.techrunnr.com/how-to-reset-kubernetes-cluster/
248 |
249 | kubeadm reset -f
250 | ‼️ All node need reset. ‼️
251 |
252 |
253 |
254 |
255 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Manager config dashboard
256 | 🔶
257 |
258 | mkdir -p $HOME/.kube
259 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
260 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
261 |
262 |
263 | 🔶 firewall
264 |
265 | sudo ufw allow 6443
266 | sudo ufw allow 6443/tcp
267 |
268 |
269 | 🔵 flannel plugin
270 |
271 | 🔶 function
272 |
273 | manages the pod network,
274 | e.g. assigns a subnet to each node.
275 |
276 |
277 | 🔶 Install
278 |
279 | kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
280 | kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
281 |
282 |
283 | 🔶 check status
284 |
285 | kubectl get pods --all-namespaces
286 | kubectl get componentstatus
287 | kubectl get cs
288 |
289 |
290 |
291 |
292 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 K8s Dashboard
293 |
294 | 🔵 K8s dashboard
295 |
296 |
297 | https://github.com/kubernetes/dashboard
298 |
299 |
300 | https://github.com/kubernetes/dashboard/releases
301 |
302 |
303 | 🔶 install
304 |
305 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
306 |
307 |
308 | ◎ delete/remove - option
309 | kubectl delete -f xxxx
310 |
311 |
312 | 🔶 Check dashboard status
313 |
314 | kubectl get pods --all-namespaces
315 | make sure kubernetes-dashboard is running, not pending.
316 |
317 | if it is not running, check the namespace events/log:
318 | kubectl get events -n
319 | kubectl get events -n kubernetes-dashboard
320 |
321 |
322 |
323 |
324 | 🔶 Dashboard Login Desc
325 |
326 | by default,
327 | only the local machine (the k8s manager node) is allowed to log in.
328 |
329 | we need to log in to the dashboard from a remote machine.
330 |
331 |
332 |
333 | 🔶 Dashboard ▪ Change type
334 |
335 | kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
336 |
337 | k8s-app: kubernetes-dashboard
338 | sessionAffinity: None
339 | type: NodePort ➜ change from clusterip to nodeport
340 |
341 |
342 | 🔶 Dashboard ▪ get web port (443:30484)
343 |
344 | K8s-Manager ~ kubectl get svc -n kubernetes-dashboard
345 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
346 | dashboard-metrics-scraper ClusterIP 10.102.194.58 8000/TCP 4m41s
347 | kubernetes-dashboard NodePort 10.97.178.197 443:xxxxx/TCP 4m41s
348 |
349 |
350 |
351 | 🔶 Dashboard ▪ Log in
352 |
353 | https://172.16.0.80:xxxxx/
354 | https://172.16.0.80:30775
355 |
356 | it needs a token.
357 | we need to create a user first,
358 | then create a token for that user,
359 | then come back here.
360 |
361 |
362 |
363 | 🔵 Create Dashboard user
364 |
365 | create the two yaml files below (xxx.yaml),
366 | then use kubectl apply -f xxx.yaml to apply each one.
367 |
368 |
369 | 🔶 File_01: miranda.yaml
370 |
371 |
372 | sudo bash -c 'echo "
373 | apiVersion: v1
374 | kind: ServiceAccount
375 | metadata:
376 | name: miranda
377 | namespace: kubernetes-dashboard
378 | " > /root/miranda.yaml'
379 |
380 |
381 |
382 | 🔶 File_02: ClusterRoleBinding.yaml
383 |
384 | sudo bash -c 'echo "
385 | apiVersion: rbac.authorization.k8s.io/v1
386 | kind: ClusterRoleBinding
387 | metadata:
388 | name: miranda
389 | roleRef:
390 | apiGroup: rbac.authorization.k8s.io
391 | kind: ClusterRole
392 | name: cluster-admin
393 | subjects:
394 | - kind: ServiceAccount
395 | name: miranda
396 | namespace: kubernetes-dashboard
397 | " > /root/ClusterRoleBinding.yaml'
398 |
399 |
400 |
401 |
402 |
403 |
404 |
405 | 🔶 apply yaml
406 |
407 | kubectl apply -f miranda.yaml
408 | kubectl apply -f ClusterRoleBinding.yaml
409 |
410 | ◎ delete yaml (option )
411 | kubectl delete -f miranda.yaml
412 |
413 |
414 | 🔶 get user token & login
415 |
416 | kubectl -n kubernetes-dashboard create token miranda
417 |
418 | copy the token into the website
419 | and now we can log in.
420 |
421 |
422 |
423 |
424 |
425 |
426 |
--------------------------------------------------------------------------------
/docs/🎪-9️⃣ 📀 STO NAS ➜ DSM.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1910
3 | title: 🎪-9️⃣📀 STO NAS ➜ DSM7
4 | ---
5 |
6 |
7 | # NAS ✶ Local ➜ ESXI . Synology
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 HomeLAB ✶ NAS ▪ DSM7.1 ESXI
12 |
13 | 🔵 Why
14 |
15 | truenas is difficult for setting file permissions.
16 |
17 |
18 |
19 | 🔵 How to install DSM
20 |
21 | 1. download someone else`s customized image (much easier) ➜ i succeeded
22 | 2. build your own image (difficult) ➜ i failed
23 |
24 | so here is the demo:
25 | install dsm 7.1 into a vm under esxi 7.
26 | i use normal esxi virtual disks as the data storage disks;
27 | if your vm uses a whole physical disk it maybe works too.
28 |
29 | ‼️ if you want to install dsm on a physical machine, this custom image does not work ‼️
30 | ‼️ if you want to install dsm on a physical machine, this custom image does not work ‼️
31 | ‼️ if you want to install dsm on a physical machine, this custom image does not work ‼️
32 |
33 |
34 |
35 |
36 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 DSM 7.1 ESXi Demo ✅
37 |
38 | 🔵 Must Know
39 |
40 | ‼️ this is for esxi only, not for physical machines. do not waste your time. ‼️
41 | ‼️ this is for esxi only, not for physical machines. do not waste your time. ‼️
42 |
43 | ‼️ when you begin installing dsm, you must cut the wan internet but keep the lan network ‼️
44 | ‼️ when you begin installing dsm, you must cut the wan internet but keep the lan network ‼️
45 | you need the lan because you log in to the dsm web page to install the dsm system.
46 | you must cut the internet during install; after install you can connect it back.
47 |
48 |
49 | 🔵 Workflow
50 |
51 | download the custom img ➜ you get two vmdk files.
52 | upload both vmdk to esxi ➜ they become one esxi disk (the dsm boot disk)
53 | create a new vm ➜ choose redhat ent.. 7+
54 | change the boot mode to bios (not efi)
55 | delete the default scsi controller
56 | add a sata controller
57 | add the boot disk
58 | add the data disk
59 | set the disk boot order (sata 0:0 / sata 0:1)
60 |
61 | boot dsm
62 |
63 | find the ip with the synology assistant tool / or check your router`s dhcp pool
64 | log in to the dsm install web page.
65 | download xxx.pat from the official website;
66 | the pat must be the same version as the dsm image.
67 | here we download DSM_DS3617xs_42661.pat
68 |
69 | ‼️ there must be no internet, otherwise you will enter recovery mode; if so, re-create the vm ‼️
70 | ‼️ there must be no internet, otherwise you will enter recovery mode; if so, re-create the vm ‼️
71 |
72 |
73 |
74 |
75 |
76 | 🔶 Download the customized vmdk
77 |
78 | 1. download the zip
79 | 2. get two vmdk files.
80 | 3. upload both files to esxi
81 | they automatically become one esxi disk.
82 |
83 | 🔵 Config VM
84 |
85 | 🔶 create vm
86 |
87 | linux / redhat ent 7+
88 | we need sata control.
89 | some linux version in esxi no sata control.
90 |
91 |
92 | 🔶 Delete the default vm config
93 |
94 | delete the default scsi controller - must
95 | dsm has no driver for that scsi controller.
96 |
97 | delete the cd-rom:
98 | we do not need it.
99 |
100 | delete the default disk:
101 | we delete it and add it back later;
102 | it is about the boot order.
103 | this is not a must if you know how to set the boot order.
104 |
105 |
106 | 🔶 Add sata controller
107 |
108 | the boot disk has to use a sata/ide controller;
109 | the data disk can use sata mode.
110 |
111 |
112 | 🔶 Add disks
113 |
114 | we need two disks:
115 | boot disk: the disk we downloaded & uploaded to esxi.
116 | data disk: dsm installs its system onto this disk!
117 | the data disk must be 15G+
118 |
119 |
120 |
121 | 🔶 config disk controllers
122 |
123 | the boot disk must use a sata/ide controller.
124 | you can try sata first; if sata is not bootable, use ide.
125 |
126 | for the data disk, sata usually works.
127 | i just added a normal esxi virtual disk like for any vm.
128 | if you rdm the esxi host`s sata controller to the vm, it is maybe even easier.
129 |
130 | ‼️ if you can enter the dsm web but it says no disk found,
131 | either the custom image does not have the driver you need, or you did something wrong.
132 | if you use esxi7 & add a normal disk like a normal vm,
133 | you should be fine at this step.
134 |
135 |
136 | 🔶 set the boot order
137 |
138 | boot disk: sata 0:0 means first boot disk
139 | data disk: sata 0:1 means second boot disk.
140 |
141 |
142 | 🔶 nic
143 |
144 | usually vmxnet3 works.
145 | if you boot into dsm but no ip is found,
146 | try changing to another nic type.
147 |
148 |
149 |
150 |
151 |
152 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 DSM 7.1 DIY Image ❌❌❌
153 |
154 | ❌❌❌ end up no disk found. ❌❌❌
155 |
156 |
157 | 🔵 Tool Desc
158 |
159 | the default dsm img does not support esxi;
160 | you need to add the esxi nic driver.
161 | if you add/mount a physical disk to the vm, you need to add the disk driver too,
162 | like the nvme driver.
163 |
164 | this tool installs drivers into the dsm image
165 | and creates a custom dsm img for you;
166 | then you can use that img to create the dsm vm.
167 |
168 |
169 | 🔵 Tool Workflow
170 |
171 | download tool from github
172 |
173 | add some driver (esxi nic / nvme driver)
174 | make a image
175 | make bootfile
176 | boot dsm
177 |
178 |
179 |
180 | 🔵 Prepare
181 |
182 | we need a vm to build the image on,
183 | with remote access to its console/screen.
184 |
185 |
186 | 🔶 JQ tool - must
187 |
188 | apt install jq -y
189 |
190 |
191 |
192 |
193 |
194 |
195 | 🔵 Download tool
196 |
197 | wget https://github.com/tossp/redpill-tool-chain/archive/refs/heads/master.zip
198 |
199 | unzip master.zip
200 |
201 | mv redpill-tool-chain-master dsm7 && cd dsm7
202 |
203 |
204 |
205 | 🔵 config the default user_config ‼️
206 |
207 | 🔶 1. rename - must
208 |
209 | mv sample_user_config.json ds918p_user_config.json
210 | mv sample_user_config.json ds3622xsp_user_config.json
211 | rename it to the dsm version you want to install.
212 |
213 | 🔶 2. edit
214 |
215 | if you use usb boot mode, you need to set the vid/pid.
216 | google: vid pid usb drive
217 |
218 | ‼️ if not usb mode (sata mode), no edit is needed. ‼️
219 |
220 |
221 |
222 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Driver
223 |
224 | 🔵 Github DSM Driver Center
225 |
226 | https://github.com/pocopico/rp-ext
227 |
228 |
229 | 🔵 esxi nic driver - Must ✅
230 |
231 | esxi vm have three kinds nic
232 | e1000、e1000e、vmxnet3
233 | we add all.
234 |
235 |
236 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/main/e1000/rpext-index.json
237 |
238 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/main/e1000e/rpext-index.json
239 |
240 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/main/vmxnet3/rpext-index.json
241 |
242 |
243 | 🔵 usb nic driver - option
244 |
245 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/master/ax88179_178a/rpext-index.json
246 |
247 |
248 |
249 | 🔵 Add other Drivers
250 |
251 | do not add too many drivers; if the image gets too big the build will fail.
252 |
253 |
254 |
255 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/master/vmw_pvscsi/rpext-index.json
256 |
257 |
258 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/master/mvsas/rpext-index.json
259 |
260 |
261 |
262 | sata sas
263 |
264 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/master/aic94xx/rpext-index.json
265 |
266 |
267 |
268 | 🔵 nvme sm981/pm981/pm983
269 |
270 |
271 |
272 |
273 | 🔵 common drivers ?
274 |
275 | install thethorgroup.virtio:
276 | ./redpill_tool_chain.sh add https://github.com/jumkey/redpill-load/raw/develop/redpill-virtio/rpext-index.json
277 |
278 | install thethorgroup.boot-wait:
279 | ./redpill_tool_chain.sh add https://github.com/jumkey/redpill-load/raw/develop/redpill-boot-wait/rpext-index.json
280 |
281 | install pocopico.mpt3sas:
282 | ./redpill_tool_chain.sh add https://raw.githubusercontent.com/pocopico/rp-ext/master/mpt3sas/rpext-index.json
283 |
284 | install jumkey.dtb:
285 | ./redpill_tool_chain.sh add https://github.com/jumkey/redpill-load/raw/develop/redpill-dtb/rpext-index.json
286 |
287 |
288 |
289 |
290 |
291 | 🔵 Build image & bootfile
292 |
293 | 🔶 Check usable version
294 |
295 | ./redpill_tool_chain.sh
296 | ds3622xsp-7.1.0-42661
297 | ds918p-7.1.0-42661
298 |
299 |
300 | 🔶 Build image (xxx.img)
301 |
302 | ./redpill_tool_chain.sh build ds3622xsp-7.1.0-42661
303 | ./redpill_tool_chain.sh build ds918p-7.1.0-42661
304 |
305 | here the tool puts the nic drivers into the image for us.
306 |
307 | 🐞 if xxx libnetwork.endpointCnt: Key not found in store
308 | ‼️ sudo service docker restart & retry it ‼️
309 | ‼️ sudo service docker restart & retry it ‼️
310 |
311 |
312 | 🔶 Build bootfile (xxx.pat)
313 |
314 | ./redpill_tool_chain.sh auto ds3622xsp-7.1.0-42661
315 | ./redpill_tool_chain.sh auto ds918p-7.1.0-42661
316 |
317 | [#] Generating GRUB config... [OK]
318 | [#] Creating loader image at /opt/redpill-load/images/redpill-DS3622xs+_7.1.0-42661_b1655606748.img... [OK]
319 |
320 |
321 | 🔶 Find the built loader image (under ./images)
322 |
323 | Docker-Test.Root images pwd
324 | /root/dsm7/images
325 | Docker-Test.Root images ls
326 | redpill-DS3622xs+_7.1.0-42661_b1655606748.img
327 |
328 |
329 |
330 | 🔵 Generate SN/MAC ?
331 |
332 | ./redpill_tool_chain.sh sn ds918p
333 | ./redpill_tool_chain.sh sn dva3221
334 |
335 |
336 |
337 |
338 | 🔵 Change IMG to VMDK(esxi)
339 |
340 | 🔶 Desc
341 |
342 | The .img can be used directly on a physical machine.
343 | To use it in ESXi we need to convert the img to a vmdk.
344 | There are many ways to convert the img; we choose qemu-img.
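A minimal qemu-img sketch, assuming the image name from the build step above
(StarWind below does the same job with a GUI; streamOptimized is the vmdk
subformat ESXi usually accepts for upload):

    # convert the raw redpill .img into an ESXi-friendly vmdk
    qemu-img convert -f raw -O vmdk -o subformat=streamOptimized \
      redpill-DS3622xs+_7.1.0-42661_b1655606748.img redpill-ds3622xs.vmdk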
345 |
346 |
347 | ‼️ vmdk comes in two flavors: workstation and esxi ‼️
348 | we use StarWind V2V Image Converter to convert to the esxi type.
349 |
350 | It outputs two files.
351 | Upload both files to ESXi; there they appear as a single xxx.vmdk.
352 |
353 |
354 |
355 |
356 |
357 | 🔵 esxi vm
358 |
359 | Create a new VM.
360 |
361 | Guest OS: must pick Linux >> Red Hat Enterprise Linux 7 (64-bit),
362 | otherwise the SATA controller may not be available.
363 |
364 |
365 | Boot mode: BIOS, not EFI.
366 |
367 | Delete the default disk & SCSI controller.
368 | Add a SATA controller.
369 |
370 | Attach the boot disk (the uploaded vmdk) to the SATA controller.
371 |
372 | Add a new disk (VM virtual disk) for data
373 | and attach it to the SATA controller too.
374 |
375 |
376 | Boot.
377 | Then use the Synology Assistant search tool,
378 | or check on your router which IP the new VM got.
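If you do not want the Synology search tool, a quick port scan also finds the
new VM (subnet is an example):

    # DSM answers on 5000 (http) / 5001 (https) once it is up
    nmap -p 5000 --open 10.1.1.0/24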
379 |
380 | 🔶 visit dsm web
381 |
382 | 10.1.1.50:5000
383 |
384 |
385 | 🐞 "No Disk found"
386 | usually means no matching storage driver was found;
387 | you may need to add a SCSI/SAS driver extension and rebuild ??
388 |
--------------------------------------------------------------------------------
/docs/🎪🎪 🐬☸️☸️-3️⃣ STO ➜ Basic.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 2930
3 | title: 🎪🎪🐬☸️☸️3️⃣ STO ➜ Basic
4 | ---
5 |
6 |
7 | # Storage ✶ PV PVC CSI
8 |
9 |
10 |
11 |
12 |
13 | 🔵 StorageClass / Driver / Plugin
14 |
15 | 🔶 WHY
16 |
17 | PV & PVC make k8s storage config much easier,
18 | but configuring PV & PVC is still not that easy:
19 | it is a two-person job!
20 | If you want to create a PVC, an admin must configure the PV first.
21 |
22 | We have something better: a Provisioner/StorageClass.
23 | A Provisioner/StorageClass automatically creates the PV when a user creates a PVC.
24 |
25 | A Provisioner/StorageClass is just like a driver for the storage cluster;
26 | different storage clusters need different drivers.
27 |
28 |
29 |
30 | 🔶 StorageClass / Driver _ CEPH-CSI
31 |
32 | A StorageClass is not built in;
33 | we need to configure the StorageClass first.
34 | We use ceph-rbd,
35 | so we need to install the ceph-csi plugin first.
36 |
37 | With this plugin, k8s can control the ceph cluster,
38 | so k8s can create/delete ceph-rbd disks on ceph.
39 |
40 |
41 |
42 | 🔵 CEPH-CSI Desc
43 |
44 | CSI: Container Storage Interface,
45 | a driver/plugin for containers that allows k8s to control the ceph cluster.
46 | Like a NIC lets one PC talk to another PC,
47 | the CSI driver lets k8s storage talk to ceph storage.
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 | 🔵 How
57 |
58 | I have a detailed demo under sto.rbd.ceph-csi install;
59 | this is the simple demo.
60 |
61 |
62 | Install cephadm on all k8s nodes
63 | so k8s can talk to ceph storage.
64 |
65 |
66 | Install the ceph-csi driver into the cluster
67 | so k8s can use ceph in a smarter way,
68 | e.g. auto-create the PV for you.
69 |
70 |
71 | Prepare a pool & user on the ceph cluster.
72 |
73 | Test a PVC in k8s.
74 |
75 |
76 |
77 |
78 |
79 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 CEPH-CSI prepare
80 |
81 |
82 | 🔵 install the cephadm tool on all nodes
83 |
84 | sudo apt install -y cephadm
85 |
86 | test that the ceph monitor port is reachable:
87 | nmap -p 6789 10.12.12.70
88 |
89 |
90 |
91 | 🔵 configure the ceph-csi plugin/driver (from a node with kubectl)
92 |
93 |
94 | git clone --depth 1 --branch devel https://github.com/ceph/ceph-csi.git
95 |
96 | cd /root/ceph-csi/deploy/rbd/kubernetes
97 |
98 |
99 | 🔶 1. edit csi-config-map.yaml
100 |
101 | ‼️ only change id & ip ‼️
102 | ‼️ only change id & ip ‼️
103 | ‼️ only change id & ip, and remove the "# ChangeMe" comments before applying (JSON does not allow comments) ‼️
104 |
105 |
106 | cat << EOF > csi-config-map.yaml
107 | ---
108 | apiVersion: v1
109 | kind: ConfigMap
110 | data:
111 | config.json: |-
112 | [
113 | {
114 |         "clusterID": "b57b2062-e75d-11ec-8f3e-45be4942c0cb",       # ‼️ ChangeMe-01
115 | "monitors": [
116 | "10.12.12.70:6789" # ‼️ ChangeMe-02
117 | ]
118 | }
119 | ]
120 | metadata:
121 | name: ceph-csi-config
122 | EOF
123 |
124 |
125 |
126 | kubectl apply -f csi-config-map.yaml
127 | kubectl get configmap
128 |
129 |
130 |
131 | 🔶 2. csi-rbd-secret.yaml ➜ set Ceph Username & password ✅
132 |
133 | ‼️ use the ceph admin user if possible; other users can hit unknown permission problems
134 |
135 | cat << EOF > csi-rbd-secret.yaml
136 | ---
137 | apiVersion: v1
138 | kind: Secret
139 | metadata:
140 | name: csi-rbd-secret
141 | namespace: default
142 | stringData:
143 | userID: admin # ‼️ ChangeMe-01
144 | userKey: AQC08qBirxxxxx= # ‼️ ChangeMe-02
145 | EOF
146 |
147 |
148 |
149 | kubectl apply -f csi-rbd-secret.yaml
150 | kubectl get secret
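The userKey value can be read straight from the ceph cluster; a small sketch
(run on a ceph admin node, user names from this page):

    # prints only the key, ready to paste into userKey
    ceph auth get-key client.admin
    ceph auth get-key client.k3s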
151 |
152 |
153 |
154 | 🔶 3. csi-kms-config-map.yaml ➜ set kms / just copy&paste ✅
155 |
156 | cat << EOF > csi-kms-config-map.yaml
157 | ---
158 | apiVersion: v1
159 | kind: ConfigMap
160 | data:
161 | config.json: |-
162 | {}
163 | metadata:
164 | name: ceph-csi-encryption-kms-config
165 | EOF
166 |
167 |
168 |
169 | kubectl apply -f csi-kms-config-map.yaml
170 |
171 |
172 |
173 |
174 |
175 |
176 |
177 |
178 | 🔶 4. ceph-config-map.yaml ➜ ceph.conf defaults / just copy & paste
179 |
180 | cat << EOF > ceph-config-map.yaml
181 | ---
182 | apiVersion: v1
183 | kind: ConfigMap
184 | data:
185 | ceph.conf: |
186 | [global]
187 | auth_cluster_required = cephx
188 | auth_service_required = cephx
189 | auth_client_required = cephx
190 | # keyring is a required key and its value should be empty
191 | keyring: |
192 | metadata:
193 | name: ceph-config
194 | EOF
195 |
196 |
197 | kubectl apply -f ceph-config-map.yaml
198 |
199 |
200 |
201 |
202 |
203 |
204 |
205 | 🔶 5. Edit csi-rbdplugin-provisioner.yaml - Optional
206 |
207 | By default it creates 3 identical pods for high availability.
208 | Our k3s is only for testing, so 1 is enough.
209 |
210 |
211 | 🔻 Check replicas
212 | cat csi-rbdplugin-provisioner.yaml | grep replicas
213 | replicas: 3
214 |
215 | ◎ use sed to change it to 1
216 | sed -i 's/replicas: 3/replicas: 1/g' csi-rbdplugin-provisioner.yaml
217 |
218 | ◎ check result
219 | cat csi-rbdplugin-provisioner.yaml | grep replicas
220 | replicas: 1
221 |
222 |
223 |
224 | 🔶 apply the rest.
225 |
226 | kubectl create -f csi-provisioner-rbac.yaml
227 | kubectl create -f csi-nodeplugin-rbac.yaml
228 |
229 | kubectl create -f csi-rbdplugin-provisioner.yaml
230 | kubectl create -f csi-rbdplugin.yaml
231 |
232 |
233 |
234 |
235 | 🔶 check status
236 | Now ceph-csi is deployed and you can test it.
237 | The pods need some time to come up.
238 |
239 | kubectl get pods
240 | NAME READY STATUS RESTARTS AGE
241 | csi-rbdplugin-xp84x 3/3 Running 0 2m47s
242 | csi-rbdplugin-7bj26 3/3 Running 0 2m47s
243 | csi-rbdplugin-provisioner-5d969665c5-2gvbq 7/7 Running 0 2m51s
244 |
245 |
246 |
247 |
248 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 ceph prepare
249 |
250 | 🔵 ceph prepare
251 |
252 | 🔶 Create Ceph K3s Pool: CEPH_BD-K3s
253 |
254 | ceph osd lspools
255 | ceph osd pool create CEPH_BD-K3s 128 128
256 | ceph osd pool set CEPH_BD-K3s size 1
257 | ceph osd pool set CEPH_BD-K3s target_size_bytes 1000G
258 |
259 | sudo rbd pool init CEPH_BD-K3s
260 | rbd ls CEPH_BD-K3s
261 |
262 |
263 |
264 | 🔶 Create Ceph K3s User : k3s
265 |
266 | ceph auth get-or-create client.k3s mon 'allow r' osd 'allow rw pool=CEPH_BD-K3s'
267 |
268 | [client.k3s]
269 | key = AQAJcctixrxxxx
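If the dedicated user misbehaves, missing caps are the usual reason; the
ceph-csi docs generally recommend the rbd profiles instead of plain rw, roughly
like this (pool name from above; adjust as needed):

    # update the k3s user with rbd-profile caps
    ceph auth caps client.k3s \
      mon 'profile rbd' \
      osd 'profile rbd pool=CEPH_BD-K3s' \
      mgr 'profile rbd pool=CEPH_BD-K3s'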
270 |
271 |
272 | 🔶 get admin key ➜ ceph auth get client.admin
273 |
274 | key = AQC08qBixxxx09AX7TtstKNAA==
275 |
276 |
277 | 🔶 get cluster id / fsid ➜ ceph mon dump
278 | fsid b57b2062-e75d-11ec-8f3e-45be4942c0cb
279 |
280 |
281 |
282 |
283 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 k8s pvc demo
284 |
285 | 🔵 Config StorageClass ✅
286 |
287 |
288 |
289 |
290 | cat << EOF > csi-config-map.yaml
291 | ---
292 | apiVersion: v1
293 | kind: ConfigMap
294 | data:
295 | config.json: |-
296 | [
297 | {
298 |         "clusterID": "b57b2062-e75d-11ec-8f3e-45be4942c0cb",       # ‼️ ChangeMe-01
299 | "monitors": [
300 | "10.12.12.70:6789" # ‼️ ChangeMe-02
301 | ]
302 | }
303 | ]
304 | metadata:
305 | name: ceph-csi-config
306 | EOF
307 |
308 |
309 | cat << EOF > sto-sc-cephcsi-rbd-k3s.yaml
310 | ---
311 | apiVersion: storage.k8s.io/v1
312 | kind: StorageClass
313 | metadata:
314 | name: sto-sc-cephcsi-rbd-k3s # ‼️ Change Me 03
315 | provisioner: rbd.csi.ceph.com
316 | parameters:
317 | clusterID: b57b2062-e75d-11ec-8f3e-45be4942c0cb # ‼️ Change Me 01
318 | pool: CEPH_BD-K3s # ‼️ Change Me 02
319 | imageFeatures: layering
320 | csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
321 | csi.storage.k8s.io/provisioner-secret-namespace: default
322 | csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
323 | csi.storage.k8s.io/controller-expand-secret-namespace: default
324 | csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
325 | csi.storage.k8s.io/node-stage-secret-namespace: default
326 | reclaimPolicy: Delete
327 | allowVolumeExpansion: true
328 | mountOptions:
329 | - discard
330 | EOF
331 |
332 |
333 |
334 | kubectl apply -f sto-sc-cephcsi-rbd-k3s.yaml
335 |
336 |
337 |
338 |
339 |
340 | 🔵 pvc create demo
341 |
342 | cat << EOF > pvc-k3s-db-mysql.yaml
343 | ---
344 | apiVersion: v1
345 | kind: PersistentVolumeClaim
346 | metadata:
347 | name: pvc-k3s-db-mysql # ‼️ Must set pvc name
348 | spec:
349 | accessModes:
350 | - ReadWriteOnce # ‼️ ceph available Option: ReadWriteOnce/ReadOnlyMany
351 | volumeMode: Block # ‼️ NoChange: this is block type.
352 | resources:
353 | requests:
354 | storage: 500Gi # ‼️ Option Change Size
355 |   storageClassName: sto-sc-cephcsi-rbd-k3s   # ‼️ Must change to your sc name
356 | EOF
357 |
358 |
359 |
360 |
361 | kubectl apply -f pvc-k3s-db-mysql.yaml
362 |
363 |
364 |
365 |
366 | 🔵 PVC debug
367 |
368 | 🔶 check sc pv pvc status
369 |
370 | kubectl get sc
371 | kubectl get pv
372 | kubectl get pvc ➜ if ok, its STATUS should be Bound
373 |
374 |
375 | K3s ~ kubectl get pvc
376 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
377 | sto-pvc-k3s Pending ❌ sto-sc-cephcsi-rbd-k3s 37m
378 |
379 |
380 | 🔶 kubectl describe pvc
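A few commands that usually surface the reason for a Pending pvc (pvc and
container names assumed from the manifests above; container names may differ
per ceph-csi version):

    # the Events at the bottom name the failing step
    kubectl describe pvc sto-pvc-k3s

    # logs of the provisioner pod that talks to ceph
    kubectl logs deploy/csi-rbdplugin-provisioner -c csi-provisioner
    kubectl logs deploy/csi-rbdplugin-provisioner -c csi-rbdplugin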
381 |
382 |
383 |
384 |
385 |
386 | 🔵 pvc use demo
387 |
388 |
389 | cat << EOF > pod-netdebug.yaml
390 | ---
391 | apiVersion: v1
392 | kind: Pod
393 | metadata:
394 | name: pod-netdebug # ‼️ set pod name
395 | spec:
396 | containers:
397 |     - name: nettools                    # ‼️ any name
398 |       image: travelping/nettools        # ‼️ image name must be correct
399 |       command:                          # ‼️ must run something, or the container exits
400 |         - sleep
401 |         - "infinity"
402 |       volumeDevices:
403 |         - name: ceph-k3s-pod-nettools   # ‼️ any name... must match volumes.name below
404 |           devicePath: /tmp              # ‼️ the block device appears at this path inside the container
405 | volumes:
406 |     - name: ceph-k3s-pod-nettools       # ‼️ any name... must match volumeDevices.name above
407 | persistentVolumeClaim:
408 | claimName: sto-pvc-k3s # ‼️ use your pvc name
409 | EOF
410 |
411 |
412 | kubectl apply -f pod-netdebug.yaml
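Because the claim uses volumeMode: Block, the pvc shows up inside the pod as a
raw block device at devicePath, not as a mounted folder; a quick check (names
from the pod above):

    # /tmp itself should be listed as a block device (leading 'b' in the mode)
    kubectl exec -it pod-netdebug -- ls -l /tmp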
413 |
414 |
415 |
416 |
417 |
418 | 🔵 check ceph.
419 |
420 | CEPH-MGR.Root ~ rbd -p Pool_BD-K8s_Prod ls
421 | IMG-K8s-Prod
422 | IMG-K8s-Prod-NoCSI
423 | csi-vol-59b173b3-ee1b-11ec-8899-0e9db053906f
424 |
425 |
426 | ‼️ deleting the pvc deletes the disk in ceph; deleting the pod keeps the disk ‼️
427 | ‼️ deleting the pvc deletes the disk in ceph; deleting the pod keeps the disk ‼️
428 | ‼️ deleting the pvc deletes the disk in ceph; deleting the pod keeps the disk ‼️
429 |
430 | A pvc is like a disk: you can mount pod folders onto one pvc,
431 | but never delete the pvc.
432 |
433 | But how do you create folders inside a pvc,
434 | and how can a k8s node see the pvc's folders?
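A hedged sketch of the usual answer: request the same StorageClass with
volumeMode Filesystem and mount it with volumeMounts, then the pod sees an
ordinary directory it can create folders in (the node only sees it under
kubelet's mount path while the pod runs). File and pvc names are examples:

    cat << EOF > pvc-k3s-files.yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-k3s-files                        # example name
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem                     # rbd image is formatted and mounted as a folder
      resources:
        requests:
          storage: 10Gi
      storageClassName: sto-sc-cephcsi-rbd-k3s   # the sc from above
    EOF
    kubectl apply -f pvc-k3s-files.yaml

In the pod, reference it with volumeMounts + mountPath instead of
volumeDevices + devicePath.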
435 |
436 |
437 |
438 |
439 |
440 |
--------------------------------------------------------------------------------
/docs/🎪-7️⃣ 🌐🌐-0️⃣ Proxy ➜ Teaefik.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1720
3 | title: 🎪-7️⃣🌐🌐 Proxy ➜ Traefik
4 | ---
5 |
6 |
7 |
8 | # Net ✶ Proxy
9 |
10 |
11 | 🔵 Demo
12 |
13 | WAN-VPS: 172.93.42.232 ➜ VPN.Srv: 10.214.214.254 ➜ traefik_vpn
14 | LAN-DKP: 172.16.1.140 ➜ VPN.CLI: 10.214.214.140 ➜ traefik_local (port 8888)
15 | LAN-DVM: 10.1.1.89 ➜ LAN ➜ dashy (port 8099)
16 |
17 | traefik.0214.icu >>> 172.93.42.232 (traefik forward to lan) >> 172.16.1.140:8888
18 | dashy.0214.icu >>> 172.93.42.232 (traefik forward to lan) >> 10.1.1.89:8099
19 |
20 |
21 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 custom traefik
22 |
23 | 🔵 How to add routes via the docker-compose file
24 |
25 | 🔶 use entrypoint.sh
26 | Most images have a default script: entrypoint.sh.
27 | This shell script runs when the container starts,
28 | so we just need to add the route commands to entrypoint.sh:
29 | 1. copy entrypoint.sh out of the container (see the sketch below).
30 | 2. add the route commands to entrypoint.sh.
31 | 3. mount the changed entrypoint.sh back into the container.
32 | Change the file permissions so the container can run the mounted script:
33 | chmod 777 entrypoint.sh
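Step 1 can be done with docker cp from the running container (container name
traefik assumed, matching the compose file below; /entrypoint.sh is the path
the compose file mounts it back to):

    # copy the stock entrypoint out of the running traefik container
    docker cp traefik:/entrypoint.sh /root/entrypoint.sh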
34 |
35 |
36 | #!/bin/sh
37 | set -e
38 | route add -net 10.214.214.0 netmask 255.255.255.0 gw netmaker
39 | route add -net 172.16.1.0 netmask 255.255.255.0 gw netmaker
40 | route add -net 10.1.1.0 netmask 255.255.255.0 gw netmaker
41 |
42 | # first arg is `-f` or `--some-option`  ... (keep the rest of the original entrypoint.sh below this line)
43 |
44 | ‼️ by default the container has no permission to add routes; you must grant it ‼️
45 | ‼️ by default the container has no permission to add routes; you must grant it ‼️
46 | ‼️ by default the container has no permission to add routes; you must grant it ‼️
47 |
48 | In the docker-compose file,
49 | add this under the traefik service:
50 |
51 | cap_add:
52 | - NET_ADMIN
53 |
54 |
55 |
56 |
57 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 traefik http forward demo ✅
58 |
59 | 🔵 How
60 |
61 | - create all.yaml
62 | - mount all.yaml into the traefik container
63 |     - /root/all.yaml:/all.yaml
64 | - let traefik load it via a command flag
65 |     - "--providers.file.filename=/all.yaml"
66 |
67 |
68 | 🔶 add entrypoint: web
69 |
70 | - "--entrypoints.web.address=:80"
71 | I have not configured SSL yet, so HTTP is needed.
72 | With SSL in place this entrypoint is not required.
73 |
74 |
75 | 🔵 vi Traefik-Forward-Config.yaml
76 |
77 |
78 | http:
79 | routers:
80 | RH-Dashy-DVM:
81 | service: SH-Dashy-DVM
82 | entrypoints: web
83 | rule: "Host(`dashy.0214.icu`)"
84 | RH-Traefik-DPK:
85 | service: SH-Traefik-DKP
86 | entrypoints: web
87 | rule: "Host(`traefik.0214.icu`)"
88 |
89 | services:
90 | SH-Dashy-DVM:
91 | loadBalancer:
92 | servers:
93 | - url: http://10.1.1.89:8099
94 | SH-Traefik-DKP:
95 | loadBalancer:
96 | servers:
97 | - url: http://172.16.1.140:8888
98 |
99 |
100 |
101 | 🔶 visit
102 |
103 | use safari or a chrome incognito window (to avoid a stale cache!)
104 | use safari or a chrome incognito window (to avoid a stale cache!)
105 | use safari or a chrome incognito window (to avoid a stale cache!)
106 |
107 |
108 | http://traefik.0214.icu
109 | http://dashy.0214.icu
110 |
111 |
112 |
113 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Manual SSL Config Demo 💯
114 |
115 |
116 | 🔵 insecureskipverify ???
117 |
118 | - "--serverstransport.insecureskipverify=true"
119 |
120 | When traefik proxies to a website inside the homelab,
121 | this lets it ignore TLS errors on the backend,
122 | so the site stays reachable even if the backend cert is invalid.
123 |
124 |
125 |
126 | 🔵 How (manual apply ssl)
127 |
128 | - use certbot to manually request the certificate
129 | - copy the cert out & mount it into traefik
130 | - write the traefik yaml & mount it
131 | - let traefik load the yaml
132 |
133 |
134 |
135 | 🔵 apply SSL
136 |
137 | certbot certonly -d "*.0214.icu" --rsa-key-size 4096 --manual --preferred-challenges dns --server https://acme-v02.api.letsencrypt.org/directory
138 | ...
139 | follow the prompts: go to your DNS provider, add the TXT record it asks for, then come back.
140 | ...
141 |
142 | # make sure the private key is 4096 bits long. 2048 bits will NOT work ‼️
143 | # openssl rsa -text -in privkey.pem | grep bit
144 | # writing RSA key
145 | # RSA Private-Key: (2048 bit, 2 primes) ➜ this one will NOT work; reissue with 4096 bit
146 |
147 |
148 | 🔵 copy ssl out
149 |
150 | certFile: /root/ssl-nodel/fullchain.pem
151 | keyFile: /root/ssl-nodel/privkey.pem
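certbot drops the files under /etc/letsencrypt/live/<cert name>/; copying them
into the folder mounted into traefik looks roughly like this (the "0214.icu"
directory name is an assumption, check certbot's output):

    mkdir -p /root/ssl-nodel
    # -L follows the symlinks certbot creates under live/
    cp -L /etc/letsencrypt/live/0214.icu/fullchain.pem /root/ssl-nodel/
    cp -L /etc/letsencrypt/live/0214.icu/privkey.pem   /root/ssl-nodel/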
152 |
153 |
154 |
155 | 🔵 Config docker compose file
156 |
157 | 🔶 Mount
158 |
159 | - /root/ssl-nodel:/certs
160 | - /root/all.yaml:/all.yaml
161 |
162 |
163 | 🔶 let traefik load all.yaml
164 |
165 | command:
166 | - "--providers.file.filename=/all.yaml"
167 |
168 |
169 | 🔶 delete netmaker default ssl config line
170 |
171 | - "--entrypoints.websecure.http.tls.certResolver=http" # delete this
172 |
173 |
174 |
175 | 🔵 config demo all.yaml ✅
176 |
177 |
178 | http:
179 | routers:
180 | RH-Dashy-DVM:
181 | service: SH-Dashy-DVM
182 | entrypoints: websecure
183 | rule: "Host(`dashy.0214.icu`)"
184 | RH-Traefik-DPK:
185 | service: SH-Traefik-DKP
186 | entrypoints: websecure
187 | rule: "Host(`traefik.0214.icu`)"
188 |
189 | services:
190 | SH-Dashy-DVM:
191 | loadBalancer:
192 | servers:
193 | - url: http://10.1.1.89:8099
194 | SH-Traefik-DKP:
195 | loadBalancer:
196 | servers:
197 | - url: http://172.16.1.140:8888
198 |
199 |
200 |
201 | tls:
202 | stores:
203 | default:
204 | defaultCertificate:
205 | certFile: /certs/fullchain.pem
206 | keyFile: /certs/privkey.pem
207 |
208 |
209 |
210 |
211 |
212 | 🔵 visit
213 |
214 | 🔶 visit
215 |
216 | use safari or a chrome incognito window (to avoid a stale cache!)
217 | use safari or a chrome incognito window (to avoid a stale cache!)
218 | use safari or a chrome incognito window (to avoid a stale cache!)
219 |
220 |
221 |
222 | https://traefik.0214.icu
223 | note: the dashboard needs cookies enabled to work... -.-
224 |
225 | https://dashy.0214.icu
226 |
227 |
228 |
229 |
230 |
231 |
232 |
233 |
234 |
235 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 netmaker traefik demo
236 |
237 |
238 | traefik:
239 | image: traefik:latest
240 | container_name: traefik
241 | cap_add:
242 | - NET_ADMIN
243 | command:
244 | - "--api.insecure=true"
245 | - "--providers.file.filename=/custom.yaml"
246 |
247 | - "--entrypoints.web.address=:80"
248 | - "--entrypoints.websecure.address=:443"
249 | - "--entrypoints.websecure.http.tls=true"
250 |
251 | - "--log.level=INFO"
252 | - "--providers.docker=true"
253 | - "--providers.docker.exposedByDefault=false"
254 | - "--serverstransport.insecureskipverify=true"
255 | restart: always
256 | volumes:
257 | - /var/run/docker.sock:/var/run/docker.sock:ro
258 |
259 | - /root/entrypoint.sh:/entrypoint.sh
260 | - /root/Traefik-Forward-Config.yaml:/custom.yaml
261 | - /root/ssl-nodel:/certs
262 |
263 |
264 | ports:
265 | - "443:443"
266 | - "80:80"
267 | - "8089:8080"
268 |
269 |
270 |
271 |
272 |
273 |
274 |
275 |
276 |
277 |
278 |
279 |
280 |
281 |
282 |
283 |
284 |
285 |
286 |
287 |
288 |
289 |
290 |
291 |
292 |
293 |
294 |
295 |
296 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Traefik
297 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Traefik
298 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Traefik
299 |
300 | 🔵 WHY Reverse Proxy ✅
301 |
302 | One host only has one port 80 and one port 443.
303 | Docker lets us run a lot of websites on one host,
304 | and you can give every website a different port,
305 | but a reverse proxy lets all websites share 80/443 at the same time,
306 | each just using a different URL.
307 |
308 | All 80/443 requests go to the reverse proxy container first;
309 | the reverse proxy then forwards to the right website depending on the URL.
310 |
311 |
312 | A reverse proxy can do much more than that.
313 |
314 |
315 |
316 |
317 | 🔵 traefik Desc ✅
318 |
319 | traefik is like nginx's reverse proxy feature, but it does reverse proxying better
320 | traefik is like nginx's reverse proxy feature, but it does reverse proxying better
321 | traefik is like nginx's reverse proxy feature, but it does reverse proxying better
322 |
323 |
324 | A router connects data between WAN & LAN;
325 | traefik connects URLs between domain & IP.
326 |
327 | A router deals with packets; traefik deals with URLs.
328 |
329 | Point all your domains' DNS to the traefik server
330 | and let traefik forward the requests to your real internal servers.
331 |
332 |
333 | 🔶 nginx ingress vs traefik ingress
334 |
335 | https://www.modb.pro/db/197174
336 |
337 |
338 |
339 | 🔵 How Use traefik ✅
340 |
341 | traefik needs access to docker.sock, which means it has the same permissions as docker,
342 | so traefik can auto-configure almost everything for you.
343 | traefik is enabled for all containers by default,
344 | which means if you log in to the traefik dashboard
345 | almost everything is already configured for you;
346 | you only need to change a little.
347 |
348 | Auto-config is good for beginners,
349 | but in the end we usually disable it (exposedByDefault=false).
350 |
351 |
352 |
353 | 🔵 traefik config
354 |
355 | There are two kinds of config:
356 |
357 | static config  ➜ config that rarely changes
358 | dynamic config ➜ config that changes often
359 |
360 | Both kinds can be set via
361 | a config file: a mounted traefik.yml, or
362 | the cli: flags passed when running docker/docker-compose.
363 |
364 |
365 | To keep things easy to understand
366 | we set everything in the docker-compose file
367 | and do not use a traefik.yml file at all.
368 |
369 |
370 |
371 | 🔶 providers. ✅
372 |
373 | where traefik runs: docker / k8s ...
374 | or even installed directly on the host.
375 |
376 | Different platforms have different APIs,
377 | so if traefik should auto-configure things for you,
378 | you must set the right provider.
379 |
380 |
381 |
382 |
383 |
384 | 🔵 Network ✅
385 |
386 | By default every container is isolated:
387 | one container cannot reach another container.
388 |
389 | If an app should use traefik's auto-discover mode,
390 | we must make sure the app can reach traefik first.
391 |
392 | So create a docker network: dk-net-traefik
393 | and put every app that needs traefik on that same network: dk-net-traefik
394 |
395 |
396 | 🔶 note
397 |
398 | one machine, one docker-compose file:
399 | all containers in that compose file share one network by default.
400 |
401 | one machine, two+ docker-compose files:
402 | you need to create a network yourself
403 | and join them all to the same network (see the sketch below).
404 |
405 | many machines, many docker-compose files:
406 | create a cluster first (k8s/swarm),
407 | create a docker network in the cluster,
408 | and join the apps & traefik to that same network.
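For the "one machine, two+ compose files" case, a minimal sketch (network name
dk-net-traefik from above; service name is an example):

    # create the shared network once on the host
    docker network create dk-net-traefik

    # then in every docker-compose.yml that traefik should reach:
    services:
      myapp:
        networks:
          - dk-net-traefik
    networks:
      dk-net-traefik:
        external: true        # reuse the existing network, do not create a new one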
409 |
410 |
411 |
412 |
413 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 config
414 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 config
415 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 config
416 |
417 |
418 |
419 | 1. provider ✅
420 |
421 | Different physical routers have different commands;
422 | traefik (software) can run on any platform.
423 | You just need to tell it which platform/provider it runs on:
424 | file / docker / k8s / swarm
425 |
426 |
427 | 2. config:
428 |
429 | static / dynamic config
430 |
431 |
432 | 3. entrypoint & services
433 |
434 | entrypoint: the frontend ➜ traffic entering traefik ➜ binds a host ip & port.
435 | services: the backend ➜ traffic leaving traefik ➜ which server the data is sent to.
436 |
437 |
438 | 🔶 router & rule
439 |
440 | hardware router: which IP goes out which interface.
441 | traefik router: which URL goes to which server.
442 |
443 | traefik router      ➜ like the IP
444 | traefik router.rule ➜ like the URL ➜ gives the IP a name that is easy to remember.
445 |
446 | ‼️ traefik router(ip), traefik rule(url) ‼️
447 | ‼️ traefik router(ip), traefik rule(url) ‼️
448 | ‼️ traefik router(ip), traefik rule(url) ‼️
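In docker-provider mode the router / rule / entrypoint / service pieces are
usually declared as labels on the target container; a hedged sketch (service
name, domain and port are examples):

    whoami:
      image: traefik/whoami
      labels:
        - "traefik.enable=true"                                        # opt in (exposedByDefault=false)
        - "traefik.http.routers.whoami.rule=Host(`whoami.0214.icu`)"   # router + rule
        - "traefik.http.routers.whoami.entrypoints=websecure"          # which entrypoint
        - "traefik.http.services.whoami.loadbalancer.server.port=80"   # backend port inside the container
      networks:
        - dk-net-traefik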
449 |
450 |
451 |
452 |
453 |
454 |
455 |
456 |
457 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
458 |
459 | A good demo compose file:
460 |
461 | https://github.com/htpcBeginner/docker-traefik/blob/master/docker-compose-t2.yml
462 |
463 |
464 |
--------------------------------------------------------------------------------
/docs/🎪-7️⃣ 🌐-7️⃣ VPN ➜➜➜ Netmaker.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 1719
3 | title: 🎪-7️⃣🌐-7️⃣ VPN ➜➜➜ Netmaker
4 | ---
5 |
6 |
7 | # VPN ✶ Netmaker
8 |
9 |
10 |
11 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 Netmaker demo
12 |
13 | 🔵 netmaker server requirements
14 |
15 | Ubuntu 20 works best.
16 | a public ip.
17 | a domain.
18 |
19 |
20 | 🔵 Domain prepare
21 |
22 | 🔶 Set A record for: dashboard & api & broker
23 |
24 | dashboard.0214.icu
25 | api.0214.icu
26 | broker.0214.icu
27 |
28 | Do not change the names: you must use dashboard/api/broker + your domain.
29 |
30 |
31 | 🔶 Check DNS
32 |
33 | VPS ~ nslookup dashboard.0214.icu 8.8.8.8
34 | Server: 8.8.8.8
35 | Address: 8.8.8.8#53
36 |
37 | Non-authoritative answer:
38 | Name: dashboard.0214.icu
39 | Address: 172.93.42.232
40 |
41 |
42 |
43 |
44 | 🔵 SRV Install Dependencies
45 |
46 | sudo apt-get install -y docker.io docker-compose wireguard
47 |
48 | 🔶 optional
49 | sudo apt install containerd
50 | systemctl start docker
51 | systemctl enable docker
52 |
53 |
54 |
55 | 🔵 check port & firewall
56 |
57 | make sure port 443 is not already in use
58 |
59 | sudo ufw allow proto tcp from any to any port 443 && sudo ufw allow 51821:51830/udp
60 | iptables --policy FORWARD ACCEPT
61 |
62 |
63 |
64 | 🔵 config docker-compose file
65 |
66 | wget -O docker-compose.yml https://raw.githubusercontent.com/gravitl/netmaker/master/compose/docker-compose.traefik.yml
67 | sed -i 's/NETMAKER_BASE_DOMAIN/<your base domain>/g' docker-compose.yml
68 | sed -i 's/SERVER_PUBLIC_IP/<your server ip>/g' docker-compose.yml
69 | sed -i 's/YOUR_EMAIL/<your email>/g' docker-compose.yml
70 |
71 |
72 | sed -i 's/NETMAKER_BASE_DOMAIN/0214.icu/g' docker-compose.yml
73 | sed -i 's/SERVER_PUBLIC_IP/172.93.42.232/g' docker-compose.yml
74 | sed -i 's/YOUR_EMAIL/xx2610@gmail.com/g' docker-compose.yml
75 |
76 |
77 | 🔶 get NETMAKER_BASE_DOMAIN
78 |
79 | in compose file
80 | BACKEND_URL: "https://api.NETMAKER_BASE_DOMAIN"
81 | so NETMAKER_BASE_DOMAIN should be your bare domain, with no prefix:
82 | 0214.icu
83 |
84 | 🔶 get public ip
85 |
86 | ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'
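Note that `ip route get 1` prints the source address of the default route; on a
NATed VPS that is not the public IP, so an external check is a safer
cross-reference:

    curl -4 ifconfig.me ; echo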
87 |
88 |
89 |
90 |
91 | 🔶 generate key
92 |
93 | VPS ~ tr -dc A-Za-z0-9 </dev/urandom | head -c 30 ; echo ''
262 |
263 |
264 |
265 | The configuration file for this is /etc/network/routes
266 |
267 |
268 |
269 | ..
270 |
271 | create a network for docker.
272 | egress this network ????
273 |
274 |
275 |
276 | ..
277 | Create an overlay network yourself.
278 | It can be used even without swarm. Besides NSX, what other overlay options are there ???
279 |
280 |
281 | Do overlay hosts need an extra NIC ?
282 |
283 |
284 | use k8s -.-
285 |
286 | how to add the vps to k8s?
287 |
288 | vps: run a cmd, use a vlan... done ????
289 |
290 |
291 |
292 |
293 |
294 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵
295 |
296 | 🔵
297 |
298 |
299 |
300 | 🔵 test on another vpn node.
301 |
302 |
303 |
304 |
305 | 1 packets transmitted, 1 packets received, 0% packet loss
306 | round-trip min/avg/max = 1.191/1.191/1.191 ms
307 | bash-5.1# ip a
308 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
309 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
310 | inet 127.0.0.1/8 scope host lo
311 | valid_lft forever preferred_lft forever
312 | inet6 ::1/128 scope host
313 | valid_lft forever preferred_lft forever
314 | 3: nm-0214: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1280 qdisc noqueue state UNKNOWN group default qlen 1000
315 | link/none
316 | inet 10.214.214.254/24 scope global nm-0214
317 | valid_lft forever preferred_lft forever
318 | inet 10.214.214.254/32 scope global nm-0214
319 | valid_lft forever preferred_lft forever
320 | 26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
321 | link/ether 02:42:ac:13:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
322 | inet 172.19.0.5/16 brd 172.19.255.255 scope global eth0
323 | valid_lft forever preferred_lft forever
324 | inet6 fe80::42:acff:fe13:5/64 scope link
325 | valid_lft forever preferred_lft forever
326 |
327 |
328 |
329 |
330 | bash-5.1# route
331 | Kernel IP routing table
332 | Destination Gateway Genmask Flags Metric Ref Use Iface
333 | default vps.local 0.0.0.0 UG 0 0 0 eth0
334 | 10.1.1.0 * 255.255.255.0 U 0 0 0 nm-0214
335 | 10.214.214.0 * 255.255.255.0 U 0 0 0 nm-0214
336 | 172.16.1.0 * 255.255.255.0 U 0 0 0 nm-0214
337 | 172.16.254.254 * 255.255.255.255 UH 0 0 0 nm-0214
338 | 172.19.0.0 * 255.255.0.0 U 0 0 0 eth0
339 |
340 |
341 |
342 |
343 | 🔵 give eth0 another ip!!!
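Adding a second address to eth0 is a one-liner (the address is an example, and
it does not survive a restart unless added to the network config):

    ip addr add 10.1.1.200/24 dev eth0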
344 |
345 |
346 |
347 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 install netmaker in host .❌❌❌❌
348 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 install netmaker in host .❌❌❌❌
349 | 🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵🔵 install netmaker in host .❌❌❌❌
350 |
351 | https://netmaker.readthedocs.io/en/master/server-installation.html#nodocker
352 |
353 |
354 |
355 | 🔵 why no docker
356 |
357 | docker has some downsides:
358 | with docker, the host itself cannot join the vpn,
359 | so that host can do nothing but run the netmaker server.
360 |
361 | no-docker would be the best choice,
362 | but it is very difficult...
363 |
364 |
365 |
366 | 🔵 dns prepare
367 |
368 | same as the docker install.
369 |
370 |
371 | 🔵 install netmaker
372 |
373 | 🔶 get the download link
374 |
375 | go here
376 | https://github.com/gravitl/netmaker/releases
377 | and copy the latest release link:
378 | https://github.com/gravitl/netmaker/releases/download/v0.14.4/netmaker
379 |
380 |
381 | 🔶 download install sh
382 |
383 | wget https://raw.githubusercontent.com/gravitl/netmaker/master/scripts/netmaker-server.sh
384 |
385 | vi netmaker-server.sh
386 |
387 | change the download link inside to the new one:
388 | wget -O /etc/netmaker/netmaker https://github.com/gravitl/netmaker/releases/download/latest/netmaker
389 | wget -O /etc/netmaker/netmaker https://github.com/gravitl/netmaker/releases/download/v0.14.4/netmaker
390 |
391 |
392 | 🔶 install
393 |
394 |
395 |
396 | 🔵 change masterkey
397 |
398 | 🔶 generate key
399 |
400 | tr -dc A-Za-z0-9 </dev/urandom | head -c 30 ; echo ''
476 |
477 | ‼️ change me: use your own BACKEND_URL, e.g. "https://api.0214.icu"
478 | sudo sh -c 'BACKEND_URL=https://api.0214.icu /usr/share/nginx/html/generate_config_js.sh >/usr/share/nginx/html/config.js'
479 |
480 |
481 | 🔶 check config.
482 |
483 | cat config.js
484 | window.REACT_APP_BACKEND='https://api.0214.icu';
485 |
486 | 🔶 restart nginx
487 |
488 | sudo systemctl start nginx
489 |
490 |
491 |
492 | 🔵 what this do
493 |
494 | this does nothing?
495 | no web UI is available now...
496 |
497 |
498 | -.- why do it this way...
499 | a tool should make life easier.
500 |
501 |
502 |
503 |
504 |
505 |
506 |
507 |
--------------------------------------------------------------------------------