├── resources
│   └── process_life_cycle.mmd
├── LICENSE
├── notes
│   ├── utilities.md
│   ├── finding_files.md
│   ├── inodes_and_symlinks.md
│   ├── dwm.md
│   ├── firewall.md
│   ├── tar_and_gzip.md
│   ├── disk_usage.md
│   ├── package_managers.md
│   ├── shells_and_bash_configuration.md
│   ├── introduction.md
│   ├── task_state_analysis.md
│   └── files_and_dirs.md
├── scripts
│   └── fs_benchmark.py
├── quizzes
│   ├── file_systems.md
│   ├── networking.md
│   ├── files.md
│   └── users.md
├── technical_writing_humanization_prompt.md
├── README.md
└── notes_template.md
/resources/process_life_cycle.mmd:
--------------------------------------------------------------------------------
1 | stateDiagram-v2
2 | [*] --> New : Process created
3 | New --> Ready : Admit (long-term scheduler)
4 | Ready --> Running : Dispatch (scheduler)
5 | Running --> Waiting : I/O or event wait
6 | Running --> Ready : Interrupt (preemption)
7 | Running --> Terminated : Exit (process completion)
8 |
9 | Waiting --> Ready : I/O completed
10 | Waiting --> Suspended_Wait : Swap out (medium-term scheduler)
11 | Suspended_Wait --> Suspended_Ready : I/O completed
12 | Suspended_Ready --> Ready : Swap in (medium-term scheduler)
13 |
14 | Terminated --> [*]
15 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 Adam Djellouli
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/notes/utilities.md:
--------------------------------------------------------------------------------
1 | ## Utilities
2 |
3 | We will discuss various tools that can be used on Linux systems for tasks such as taking screenshots, recording the screen, preparing bootable USB sticks, and detecting malware. Each tool gets a brief explanation along with installation and usage instructions.
4 |
5 | ## Taking Screenshots with Scrot
6 |
7 | Scrot is a command-line utility for taking screenshots in Linux. It allows you to capture the entire screen, a specific window, or a selected area of the screen. Scrot is available in most Linux distributions, and you can install it using the package manager of your distribution.
8 |
9 | To take a screenshot of the entire screen, use the following command:
10 |
11 | ```
12 | scrot screenshot.png
13 | ```
14 |
15 | This will save the screenshot as a PNG image in the current working directory. To use a different directory or file name, simply pass the full path as the argument:
16 |
17 | ```
18 | scrot /path/to/screenshot.png
19 | ```
20 |
21 | To take a screenshot of the currently focused window, use the -u option:
22 |
23 | ```
24 | scrot -u screenshot.png
25 | ```
26 |
27 | To take a screenshot of a selected area of the screen, use the -s option:
28 |
29 | ```
30 | scrot -s screenshot.png
31 | ```
32 |
33 | This will allow you to draw a rectangle around the area you want to capture using the mouse.
34 |
35 | Scrot also supports a number of additional options, such as -b to include the window border in the screenshot, -d to specify a delay in seconds before capturing, and -e to execute a command on the saved file afterwards. For example, the following command waits 3 seconds, captures the focused window with its border, and then moves the screenshot to the Desktop:
36 |
37 | ```
38 | scrot -u -b -d 3 screenshot.png -e 'mv $f ~/Desktop/'
39 | ```
40 |
41 | ## Recording the Screen with Vokoscreen
42 |
43 | Vokoscreen is a graphical tool for recording screencasts in Linux. It allows you to capture the entire screen, a specific window, or a selected area of the screen, and also supports the capture of audio and webcam video.
44 |
45 | To install Vokoscreen on a Debian-based system, use the following command:
46 |
47 | ```
48 | sudo apt install vokoscreen-ng
49 | ```
50 |
51 | To start the program, simply run the `vokoscreenNG` command. This will open the Vokoscreen window, which provides a number of options for configuring the screen recording.
52 |
53 | 
54 |
55 | To start the recording, click on the "Start" button and select the area of the screen you want to capture. You can also choose to record audio and webcam video, as well as specify a file name and location for the recording. When you are finished, click on the "Stop" button to end the recording.
56 |
57 |
58 | ## Preparing a Bootable USB Stick with USBImager
59 |
60 | USBImager is a graphical utility for creating bootable USB sticks in Linux. It allows you to write an image file to a USB drive, making it possible to boot a computer from the USB drive.
61 |
62 | 
63 |
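USBImager itself is point-and-click: choose the image file, pick the target USB device, and press Write. If you prefer the terminal, the standard `dd` tool achieves the same result. The sketch below is a hedged example: the image path and the `/dev/sdX` device name are placeholders you must replace yourself, and writing to the wrong device will irrecoverably destroy its data, so the snippet refuses to run until the placeholder is edited.

```shell
# List block devices so you can identify your USB stick (size is the best clue)
lsblk

IMAGE=/path/to/image.iso   # placeholder: path to the ISO/IMG you want to write
DEVICE=/dev/sdX            # placeholder: the whole device (e.g. /dev/sdb), NOT a partition

# Safety guard: do nothing until the placeholder has been replaced
if [ "$DEVICE" = "/dev/sdX" ]; then
  echo "Edit DEVICE to point at your actual USB stick before running." >&2
else
  # conv=fsync makes dd flush everything to the device before exiting
  sudo dd if="$IMAGE" of="$DEVICE" bs=4M status=progress conv=fsync
fi
```

Unmount any auto-mounted partitions of the stick first, and double-check the device name with `lsblk` — `dd` will overwrite whatever you point it at.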
64 | ## Malware Detection with Maldet
65 |
66 | One tool that can be used for malware detection is Maldet (short for Linux Malware Detect). Maldet is an open-source antivirus and malware scanning tool that uses a combination of signature-based and heuristic-based detection methods to identify and remove malware. It is specifically designed to detect and remove malware that targets Linux systems, and is regularly updated with the latest malware definitions.
67 |
68 | ### Installation
69 |
70 | To install Maldet on a Debian-based system, use the following commands:
71 |
72 | ```
73 | wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
74 | tar -xzf maldetect-current.tar.gz
75 | cd maldetect-* && sudo ./install.sh
76 | ```
77 |
78 | This will download and extract the latest version of Maldet, and then run the installation script to install the software on the system.
79 |
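Before the first scan, it is worth refreshing Maldet's signature database. The commands below are a hedged sketch based on Maldet's documented flags (`-u` updates the malware signatures, `-d` checks for a newer Maldet release); the availability check just keeps the snippet safe to run on systems where Maldet isn't installed yet.

```shell
# Refresh signatures (-u) and check for a newer Maldet version (-d),
# but only if the maldet binary is actually on the PATH
if command -v maldet >/dev/null 2>&1; then
  sudo maldet -u
  sudo maldet -d
  MALDET_STATUS=updated
else
  echo "maldet not found - install it first" >&2
  MALDET_STATUS=missing
fi
```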
80 | ### Scanning for malware
81 |
82 | To scan for malware on a Linux system using Maldet, use the following command:
83 |
84 | ```
85 | maldet --scan-all /path/to/directory
86 | ```
87 |
88 | Replace /path/to/directory with the path of the directory you want to scan. To scan the entire system, pass / as the path.
89 |
90 | For example, to scan the home directory of the current user, use the following command:
91 |
92 | ```
93 | maldet --scan-all ~
94 | ```
95 |
96 | The scan may take some time, depending on the size of the directory and the number of files it contains. When the scan is complete, Maldet will output a report ID, which can be used to view the scan report and take further action on any infected files that were detected.
97 |
98 | ### Viewing the scan report
99 |
100 | To view the scan report for a particular scan, use the following command:
101 |
102 | ```
103 | maldet --report ID
104 | ```
105 |
106 | Replace ID with the report ID of the scan you want to view. The report will show a list of all the files that were scanned, along with any infected files that were detected.
107 |
108 | ### Quarantining infected files
109 |
110 | If Maldet detects any infected files during a scan, you can use the following command to quarantine them:
111 |
112 | ```
113 | maldet -q ID
114 | ```
115 |
116 | Replace ID with the report ID of the scan whose infected files you want to quarantine. This will move the infected files to a quarantine directory, where they can be safely removed or examined further.
117 |
--------------------------------------------------------------------------------
/scripts/fs_benchmark.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | fs_benchmark.py – Measure and plot file-system read / write throughput
4 | versus the number of parallel workers.
5 | Usage: python fs_benchmark.py --directory /tmp --file-size 256 --max-workers 8
6 | """
7 |
8 | from __future__ import annotations
9 |
10 | import argparse
11 | import concurrent.futures as cf
12 | import os
13 | import pathlib
14 | import random
15 | import string
16 | import sys
17 | import time
18 | from typing import Tuple, Dict
19 |
20 | import matplotlib.pyplot as plt
21 |
22 |
23 | # ───────────────────────────── helpers ──────────────────────────────
24 |
25 |
26 | def _random_name(n: int = 8) -> str:
27 | return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))
28 |
29 |
30 | def _write_file(path: pathlib.Path, size_mb: int, chunk_mb: int = 4) -> float:
31 | """Write *size_mb* MiB to *path* in *chunk_mb* MiB chunks.
32 | Returns elapsed seconds."""
33 | chunk = b"\0" * (chunk_mb * 1024 * 1024)
34 | chunks = size_mb // chunk_mb
35 | leftover = size_mb % chunk_mb
36 |
37 | start = time.perf_counter()
38 | with path.open("wb", buffering=0) as fh:
39 | for _ in range(chunks):
40 | fh.write(chunk)
41 | if leftover:
42 | fh.write(b"\0" * (leftover * 1024 * 1024))
43 | fh.flush()
44 | os.fsync(fh.fileno()) # force flush to the device
45 | return time.perf_counter() - start
46 |
47 |
48 | def _read_file(path: pathlib.Path, chunk_mb: int = 4) -> float:
49 | """Read entire file using *chunk_mb* MiB blocks. Returns elapsed seconds."""
50 | bufsize = chunk_mb * 1024 * 1024
51 | start = time.perf_counter()
52 | with path.open("rb", buffering=bufsize) as fh:
53 | while fh.read(bufsize):
54 | pass
55 | return time.perf_counter() - start
56 |
57 |
58 | def _worker(task: str, path: str, size_mb: int) -> Tuple[str, float]:
59 | p = pathlib.Path(path)
60 | if task == "write":
61 | t = _write_file(p, size_mb)
62 | elif task == "read":
63 | t = _read_file(p)
64 | else: # pragma: no cover
65 | raise ValueError(task)
66 | return task, t
67 |
68 |
69 | # ───────────────────────────── benchmark ──────────────────────────────
70 |
71 |
72 | def run_benchmark(
73 | directory: pathlib.Path,
74 | file_size_mb: int,
75 | max_workers: int,
76 | ) -> Tuple[Dict[int, float], Dict[int, float]]:
77 | """
78 | Returns two dicts: {workers → aggregate MB/s} for writes and reads.
79 | """
80 | write_bw: Dict[int, float] = {}
81 | read_bw: Dict[int, float] = {}
82 |
83 | # Pre-generate file names once – each worker gets its own file
84 | filenames = [directory / f"{_random_name()}_{i}.dat" for i in range(max_workers)]
85 |
86 | # ── loop over 1 … max_workers ──────────────────────
87 | for n in range(1, max_workers + 1):
88 | # pick n files for this round
89 | files_n = filenames[:n]
90 |
91 | # WRITE test
92 | with cf.ProcessPoolExecutor(max_workers=n) as pool:
93 | futures = [
94 | pool.submit(_worker, "write", str(p), file_size_mb) for p in files_n
95 | ]
96 | elapsed = [f.result()[1] for f in cf.as_completed(futures)]
97 |         # MiB/s = (n · size_MiB) / mean(elapsed)
98 |         write_bw[n] = (n * file_size_mb) / (sum(elapsed) / n)
99 |
100 | # READ test
101 | with cf.ProcessPoolExecutor(max_workers=n) as pool:
102 | futures = [pool.submit(_worker, "read", str(p), 0) for p in files_n]
103 | elapsed = [f.result()[1] for f in cf.as_completed(futures)]
104 | read_bw[n] = (n * file_size_mb) / (sum(elapsed) / n)
105 |
106 | # cleanup
107 | for p in filenames:
108 | try:
109 | p.unlink()
110 | except FileNotFoundError:
111 | pass
112 |
113 | return write_bw, read_bw
114 |
115 |
116 | # ───────────────────────────── plotting ──────────────────────────────
117 |
118 |
119 | def plot_results(write_bw: dict[int, float], read_bw: dict[int, float], out: pathlib.Path):
120 | workers = sorted(write_bw.keys())
121 | plt.figure()
122 | plt.plot(workers, [write_bw[w] for w in workers], marker="o", label="Write")
123 | plt.plot(workers, [read_bw[w] for w in workers], marker="s", label="Read")
124 | plt.xlabel("Parallel workers (processes)")
125 |     plt.ylabel("Throughput [MiB/s]")
126 | plt.title("File-system throughput scaling")
127 | plt.grid(True, which="both", ls="--", alpha=0.4)
128 | plt.legend()
129 | plt.tight_layout()
130 | plt.savefig(out)
131 | print(f"[+] Plot saved to {out}")
132 |
133 |
134 | # ───────────────────────────── main ───────────────────────────────────
135 |
136 |
137 | def parse_args() -> argparse.Namespace:
138 | ap = argparse.ArgumentParser(
139 | description="Measure and plot FS read/write throughput scaling."
140 | )
141 | ap.add_argument(
142 | "-d",
143 | "--directory",
144 | type=pathlib.Path,
145 | required=True,
146 | help="Directory on the target file-system (must be writable)",
147 | )
148 | ap.add_argument(
149 | "-s",
150 | "--file-size",
151 | type=int,
152 | default=128,
153 | metavar="MiB",
154 | help="Size of the test file each worker writes (default: 128)",
155 | )
156 | ap.add_argument(
157 | "-n",
158 | "--max-workers",
159 | type=int,
160 | default=os.cpu_count() or 4,
161 | help="Maximum number of parallel processes (default: CPU count)",
162 | )
163 | ap.add_argument(
164 | "-o",
165 | "--output",
166 | type=pathlib.Path,
167 | default=pathlib.Path("fs_benchmark.png"),
168 | help="Output PNG for the plot",
169 | )
170 | return ap.parse_args()
171 |
172 |
173 | def main() -> None:
174 | if os.geteuid() == 0:
175 | print(
176 | "Warning: running as root may bypass user-space caches and skew numbers.",
177 | file=sys.stderr,
178 | )
179 | args = parse_args()
180 |
181 | args.directory.mkdir(parents=True, exist_ok=True)
182 |
183 | print(
184 | f"[*] Benchmarking in {args.directory} – "
185 | f"{args.file_size} MiB per worker – up to {args.max_workers} workers"
186 | )
187 | write_bw, read_bw = run_benchmark(
188 | args.directory, args.file_size, args.max_workers
189 | )
190 |
191 | # show simple table
192 |     print("\nWorkers | Write MiB/s | Read MiB/s")
193 | print("-----------------------------------------")
194 | for w in sorted(write_bw):
195 | print(
196 | f"{w:7} | {write_bw[w]:11.0f} | {read_bw[w]:11.0f}"
197 | )
198 |
199 | plot_results(write_bw, read_bw, args.output)
200 |
201 |
202 | if __name__ == "__main__":
203 | try:
204 | main()
205 | except KeyboardInterrupt:
206 | print("\nAborted.", file=sys.stderr)
207 |
--------------------------------------------------------------------------------
/quizzes/file_systems.md:
--------------------------------------------------------------------------------
1 | #### Q. Which command creates an ext4 filesystem on `/dev/sdb1`?
2 |
3 | * [ ] `mkfs -t ext3 /dev/sdb1`
4 | * [ ] `mkfs.ext4 -o /dev/sdb1`
5 | * [x] `mkfs.ext4 /dev/sdb1`
6 | * [ ] `mkfs.ext4 -f /mount/sdb1`
7 | * [ ] `format ext4 /dev/sdb1`
8 |
9 | #### Q. In `/etc/fstab`, what does the third field specify?
10 |
11 | * [ ] The filesystem UUID
12 | * [ ] The mount point
13 | * [ ] Mount options
14 | * [ ] Dump/pass order
15 | * [x] Filesystem type
16 |
17 | #### Q. What is an inode in a Linux filesystem?
18 |
19 | * [ ] A special file that holds swap space
20 | * [ ] The superblock backup area
21 | * [x] A data structure storing metadata about a file
22 | * [ ] The journal log of filesystem changes
23 | * [ ] A symbolic link to the file’s data blocks
24 |
25 | #### Q. Which mount option disables updating the file access time?
26 |
27 | * [ ] `nosuid`
28 | * [ ] `nodev`
29 | * [x] `noatime`
30 | * [ ] `ro`
31 | * [ ] `sync`
32 |
33 | #### Q. How do you repair a corrupted ext4 filesystem on `/dev/sda2`?
34 |
35 | * [ ] `fsck.ext4 /mount/sda2`
36 | * [x] `fsck.ext4 -y /dev/sda2`
37 | * [ ] `e2fsck /mount/sda2`
38 | * [ ] `tune2fs -r /dev/sda2`
39 | * [ ] `mkfs.ext4 -r /dev/sda2`
40 |
41 | #### Q. Which of these is a journaling filesystem?
42 |
43 | * [ ] FAT32
44 | * [ ] ext2
45 | * [x] XFS
46 | * [ ] ISO9660
47 | * [ ] UDF
48 |
49 | #### Q. To mount a filesystem by its UUID, which command format is correct?
50 |
51 | * [ ] `mount /dev/disk/by-label/UUID-1234 /mnt`
52 | * [x] `mount UUID=1234-abcd /mnt`
53 | * [ ] `mount -t ext4 1234-abcd /mnt`
54 | * [ ] `mount /mnt UUID=1234-abcd`
55 | * [ ] `mount /dev/sdb1 /mnt --uuid`
56 |
57 | #### Q. Which command mounts the partition `/dev/sdb2` (ext4) on the directory `/data`?
58 |
59 | * [ ] `mount /dev/sdb2 /data -t ext4`
60 | * [x] `mount -t ext4 /dev/sdb2 /data`
61 | * [ ] `mount /data /dev/sdb2 -t ext4`
62 | * [ ] `mount -o ext4 /data /dev/sdb2`
63 | * [ ] `mount /dev/sdb2 ext4 /data`
64 |
65 | #### Q. Where do you define persistent (automatic) mounts so they survive reboot?
66 |
67 | * [ ] `/etc/mtab`
68 | * [ ] `/proc/mounts`
69 | * [x] `/etc/fstab`
70 | * [ ] `/etc/exports`
71 | * [ ] `/etc/auto.master`
72 |
73 | #### Q. In `/etc/fstab`, which field (by position) specifies mount options?
74 |
75 | * [ ] 1st field
76 | * [ ] 2nd field
77 | * [ ] 3rd field
78 | * [x] 4th field
79 | * [ ] 5th field
80 |
81 | #### Q. Which mount option makes a filesystem read-only?
82 |
83 | * [ ] `noexec`
84 | * [ ] `nosuid`
85 | * [x] `ro`
86 | * [ ] `rw`
87 | * [ ] `nodev`
88 |
89 | #### Q. What command causes the system to (re)mount all filesystems listed in `/etc/fstab`?
90 |
91 | * [ ] `mount --all`
92 | * [x] `mount -a`
93 | * [ ] `mount --reload`
94 | * [ ] `mount --fstab`
95 | * [ ] `mount --enable`
96 |
97 | #### Q. How do you create a bind-mount of `/var/log` onto `/mnt/logs`?
98 |
99 | * [ ] `mount --loop /var/log /mnt/logs`
100 | * [x] `mount --bind /var/log /mnt/logs`
101 | * [ ] `mount -t bind /var/log /mnt/logs`
102 | * [ ] `mount -o loop /var/log /mnt/logs`
103 | * [ ] `mount -o dirbind /var/log /mnt/logs`
104 |
105 | #### Q. Which command unmounts `/mnt/data` but only when it’s no longer busy (lazy unmount)?
106 |
107 | * [ ] `umount -f /mnt/data`
108 | * [x] `umount -l /mnt/data`
109 | * [ ] `umount /mnt/data --lazy`
110 | * [ ] `umount -a /mnt/data`
111 | * [ ] `umount --detach /mnt/data`
112 |
113 | #### Q. Which utility displays currently mounted filesystems in a tree view?
114 |
115 | * [ ] `mount --tree`
116 | * [ ] `df --tree`
117 | * [x] `findmnt`
118 | * [ ] `lsblk -t`
119 | * [ ] `blkid --tree`
120 |
121 | #### Q. To mount a `tmpfs` of size 512 MB at `/mnt/tmp`, which command is correct?
122 |
123 | * [ ] `mount -t tmpfs tmpfs /mnt/tmp size=512M`
124 | * [x] `mount -t tmpfs -o size=512M tmpfs /mnt/tmp`
125 | * [ ] `mount tmpfs /mnt/tmp -o 512M`
126 | * [ ] `mount -o tmpfs,size=512M /mnt/tmp`
127 | * [ ] `mount -t tmpfs /mnt/tmp -L 512M`
128 |
129 | #### Q. Which service handles dynamic on-demand automounting via `/etc/auto.*` maps?
130 |
131 | * [ ] `amd`
132 | * [ ] `systemd-automount`
133 | * [ ] `autohome`
134 | * [x] `autofs`
135 | * [ ] `automountd`
136 |
137 | #### Q. Which file lists directories to be shared via NFS on the server?
138 |
139 | * [ ] `/etc/hosts.allow`
140 | * [ ] `/etc/exports.conf`
141 | * [x] `/etc/exports`
142 | * [ ] `/etc/nfs.conf`
143 | * [ ] `/etc/exports.d/nfs.exports`
144 |
145 | #### Q. What command applies changes made in `/etc/exports` without restarting the NFS service?
146 |
147 | * [ ] `systemctl restart nfs-server`
148 | * [ ] `exportfs --reload-all`
149 | * [ ] `exportfs -i`
150 | * [x] `exportfs -ra`
151 | * [ ] `exportfs --update`
152 |
153 | #### Q. By default, which port does the NFS server listen on for NFSv3?
154 |
155 | * [ ] TCP/2049 only
156 | * [ ] UDP/111 only
157 | * [ ] TCP/20048
158 | * [x] TCP/2049 and uses portmapper on 111
159 | * [ ] UDP/2049
160 |
161 | #### Q. Which mount option on the client makes file writes synchronous (i.e., safe but slower)?
162 |
163 | * [ ] `soft`
164 | * [ ] `intr`
165 | * [x] `sync`
166 | * [ ] `bg`
167 | * [ ] `noexec`
168 |
169 | #### Q. How do you mount an NFS export `server:/export/home` on `/mnt/home`?
170 |
171 | * [ ] `mount nfs server:/export/home /mnt/home`
172 | * [ ] `mount -t nfs4 server:/export/home /mnt/home`
173 | * [x] `mount -t nfs server:/export/home /mnt/home`
174 | * [ ] `mount.nfs /export/home /mnt/home`
175 | * [ ] `mount.nfs4 server:/export/home /mnt/home`
176 |
177 | #### Q. Which utility shows currently mounted clients on an NFS server?
178 |
179 | * [ ] `showmount -e`
180 | * [x] `showmount -a`
181 | * [ ] `rpcinfo -p`
182 | * [ ] `nfsstat -s`
183 | * [ ] `exportfs -v`
184 |
185 | #### Q. What does the `no_root_squash` option in `/etc/exports` do?
186 |
187 | * [ ] Allows root on the server to map to root on the client
188 | * [x] Allows root on the client to act as root on exported share
189 | * [ ] Disables UID mapping entirely
190 | * [ ] Prevents any root access to the share
191 | * [ ] Enables root to change squash options
192 |
193 | #### Q. Which protocol does NFSv4 use by default for locking and state management?
194 |
195 | * [ ] NLM (Network Lock Manager)
196 | * [ ] statd over RPCBIND
197 | * [x] Built-in stateful protocol over TCP/2049
198 | * [ ] LDAP
199 | * [ ] HTTP
200 |
201 | #### Q. In `/etc/fstab`, which option ensures an NFS mount retries indefinitely until the server is available?
202 |
203 | * [ ] `soft`
204 | * [ ] `timeo=0`
205 | * [x] `hard`
206 | * [ ] `nolock`
207 | * [ ] `noauto`
208 |
209 | #### Q. What is a common symptom of a “stale file handle” error on NFS clients?
210 |
211 | * [ ] Authentication failures when mounting
212 | * [ ] Files always appearing with zero size
213 | * [x] “Stale file handle” messages when accessing files after server reboot or export change
214 | * [ ] Inability to resolve hostnames
215 | * [ ] Kernel panic on file operations
216 |
217 | #### Q. What does the `tune2fs -l /dev/sdb1` command display?
218 |
219 | * [ ] Live I/O statistics for the filesystem
220 | * [ ] The on-disk block allocation map
221 | * [x] Filesystem superblock parameters and labels
222 | * [ ] A list of files in the root directory
223 | * [ ] Current mount options in use
224 |
225 | #### Q. Which of these filesystems is the standard for optical media such as DVDs and Blu-ray discs?
226 |
227 | * [ ] NTFS
228 | * [x] UDF
229 | * [ ] ext4
230 | * [ ] XFS
231 | * [ ] VFAT
232 |
233 | #### Q. What does enabling quotas on a filesystem allow you to do?
234 |
235 | * [ ] Encrypt user data at rest
236 | * [ ] Automatically back up changed files
237 | * [ ] Mount the filesystem read-only
238 | * [x] Limit disk usage per user or group
239 | * [ ] Convert the filesystem to read-write compression
240 |
--------------------------------------------------------------------------------
/technical_writing_humanization_prompt.md:
--------------------------------------------------------------------------------
1 | # Technical Writing Humanization Prompt
2 |
3 | ## Your Task
4 | Transform robotic, mechanical technical documentation into natural, conversational, and engaging content that feels like it's written by a helpful human mentor rather than a technical manual.
5 |
6 | ## Core Transformation Principles
7 |
8 | ### 1. Tone and Voice Changes
9 | **FROM (Robotic):**
10 | - "Execute the following command to..."
11 | - "The system will perform..."
12 | - "This operation results in..."
13 | - "Users should implement..."
14 |
15 | **TO (Human):**
16 | - "Let's try this command..."
17 | - "Here's what happens when you..."
18 | - "You'll see something like..."
19 | - "Here's how you can..."
20 |
21 | ### 2. Structure and Flow
22 | **FROM (Mechanical):**
23 | - Numbered lists without context
24 | - Isolated code blocks
25 | - Technical jargon without explanation
26 | - Passive voice descriptions
27 |
28 | **TO (Conversational):**
29 | - Scenarios that explain WHY first
30 | - Code blocks with "What just happened?" explanations
31 | - Plain English translations of technical terms
32 | - Active voice with "you" as the subject
33 |
34 | ### 3. Content Organization
35 | **FROM (Documentation Style):**
36 | ```
37 | Command: ls -la
38 | Purpose: Lists directory contents
39 | Syntax: ls [options] [directory]
40 | Options: -l (long format), -a (all files)
41 | ```
42 |
43 | **TO (Teaching Style):**
44 | ````
45 | **Want to see what's in a folder?**
46 |
47 | Try this:
48 | ```bash
49 | ls -la
50 | ```
51 |
52 | **What you'll see:**
53 | A detailed list showing all files (even hidden ones) with their permissions, sizes, and dates.
54 |
55 | **Breaking it down:**
56 | - `ls` = "list stuff"
57 | - `-l` = "give me details"
58 | - `-a` = "show hidden files too"
59 | ````
60 |
61 | ## Specific Transformation Rules
62 |
63 | ### Rule 1: Start with the Problem
64 | **Before:** "The grep command searches for patterns..."
65 | **After:** "Need to find specific text in a file? That's where grep comes in handy..."
66 |
67 | ### Rule 2: Use Relatable Examples
68 | **Before:** "Example: grep 'pattern' file.txt"
69 | **After:** "Say you're looking for your friend's email in a huge contact list..."
70 |
71 | ### Rule 3: Add Emotional Context
72 | **Before:** "If an error occurs..."
73 | **After:** "Don't panic if you see an error - it happens to everyone..."
74 |
75 | ### Rule 4: Explain the "Why"
76 | **Before:** "Use sudo to execute as root"
77 | **After:** "Sometimes you need admin privileges (that's what sudo gives you) because..."
78 |
79 | ### Rule 5: Make Mistakes Normal
80 | **Before:** "Incorrect usage will result in errors"
81 | **After:** "Made a typo? No worries - here's how to fix it quickly..."
82 |
83 | ### Rule 6: Use Analogies
84 | **Before:** "Pipes redirect output between commands"
85 | **After:** "Think of pipes like connecting garden hoses - the output of one flows into the next..."
86 |
87 | ## Content Section Templates
88 |
89 | ### For Command Explanations:
90 | ````markdown
91 | #### [Task Name] (What You Want to Accomplish)
92 |
93 | **The situation:** [Real-world scenario when you'd need this]
94 |
95 | **The solution:**
96 | ```bash
97 | command --option value
98 | ```
99 |
100 | **What just happened?**
101 | [Plain English explanation of what the command did]
102 |
103 | **Breaking it down:**
104 | - `command` = [simple explanation]
105 | - `--option` = [why you'd use this]
106 | - `value` = [what goes here]
107 |
108 | **Pro tip:** [Helpful advice or common gotcha]
109 | ````
110 |
111 | ### For Troubleshooting:
112 | ````markdown
113 | #### "Error Message Here"
114 |
115 | **What this usually means:** [Common cause in plain English]
116 |
117 | **Don't worry - this is fixable!**
118 |
119 | **Quick diagnosis:**
120 | ```bash
121 | diagnostic_command
122 | ```
123 |
124 | **The fix:**
125 | ```bash
126 | solution_command
127 | ```
128 |
129 | **Why this works:** [Explanation that builds understanding]
130 | ````
131 |
132 | ### For Configuration:
133 | ````markdown
134 | #### Setting Up [Feature]
135 |
136 | **Why you'd want this:** [Benefit explanation]
137 |
138 | **The easy way:**
139 | ```bash
140 | simple_setup_command
141 | ```
142 |
143 | **What this did:** [Explanation of changes made]
144 |
145 | **Want to customize it?** Here's how to tweak the settings...
146 | ````
147 |
148 | ## Language Patterns to Transform
149 |
150 | ### Replace These Phrases:
151 | | Robotic | Human |
152 | |---------|-------|
153 | | "Execute the command" | "Try this command" / "Run this" |
154 | | "The system will" | "You'll see" / "This will" |
155 | | "Utilize the following" | "Use this" / "Try this approach" |
156 | | "In order to" | "To" |
157 | | "It is recommended" | "I suggest" / "You should" |
158 | | "Subsequently" | "Next" / "Then" |
159 | | "Prior to" | "Before" |
160 | | "Implement the solution" | "Here's how to fix it" |
161 |
162 | ### Add These Human Elements:
163 | - **Encouragement:** "Great job!", "You're getting the hang of it!"
164 | - **Reassurance:** "Don't worry", "This is normal", "Everyone does this"
165 | - **Relatability:** "We've all been there", "I know it's confusing at first"
166 | - **Context:** "Here's why this matters", "This will save you time"
167 | - **Anticipation:** "You might be wondering", "The next logical step"
168 |
169 | ## Quality Check Questions
170 |
171 | After transformation, ask yourself:
172 |
173 | 1. **Would a friend understand this?** (Not just a technical colleague)
174 | 2. **Does it explain WHY before HOW?**
175 | 3. **Are there real-world examples?**
176 | 4. **Does it acknowledge that mistakes happen?**
177 | 5. **Is the tone encouraging rather than intimidating?**
178 | 6. **Would someone new to the topic feel welcomed?**
179 | 7. **Are technical terms explained in plain English?**
180 | 8. **Does it build confidence rather than just transfer information?**
181 |
182 | ## Example Transformation
183 |
184 | ### BEFORE (Robotic):
185 | ```
186 | IV. Modifying Previous Commands:
187 |
188 | Repeat Last Command with Substitution:
189 |
190 | ^old^new
191 |
192 | Repeats the last command, replacing old with new.
193 |
194 | Example:
195 |
196 | ^foo^bar
197 |
198 | If the last command was echo foo, this would execute echo bar.
199 | ```
200 |
201 | ### AFTER (Human):
202 | ````
203 | #### Quick Command Fixes
204 |
205 | Made a typo in your last command? No need to retype everything! You can quickly fix and re-run commands using substitution.
206 |
207 | **Fix and repeat with `^old^new`:**
208 |
209 | Say you just ran this command with a typo:
210 | ```bash
211 | echo "Hello wrold"
212 | ```
213 |
214 | Instead of retyping the whole thing, just fix the typo:
215 | ```bash
216 | ^wrold^world
217 | ```
218 |
219 | This automatically changes your previous command to `echo "Hello world"` and runs it. It's like a quick "find and replace" for your last command.
220 |
221 | **Real-world example:**
222 | ```bash
223 | # Oops, wrong destination directory
224 | cp important_file.txt /tmp/backup/
225 |
226 | # Quick fix - just change the destination
227 | ^backup^backups
228 | # This runs: cp important_file.txt /tmp/backups/
229 | ```
230 | ````
231 |
232 | ## Your Mission
233 |
234 | Transform the given technical content by:
235 |
236 | 1. **Reading through once** to understand the technical concepts
237 | 2. **Identifying robotic language patterns** using the rules above
238 | 3. **Rewriting with human context** - always start with WHY someone would need this
239 | 4. **Adding encouraging, reassuring language** throughout
240 | 5. **Including real-world examples** that people can relate to
241 | 6. **Explaining technical terms** in plain English
242 | 7. **Making mistakes and troubleshooting normal** parts of the learning process
243 | 8. **Ending with confidence-building elements** and next steps
244 |
245 | Remember: You're not just translating technical information - you're being a patient, encouraging teacher who wants the reader to succeed and feel confident.
246 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # 🐧 Linux Notes & Guides
2 |
3 | [](https://choosealicense.com/licenses/mit/)
4 | [](https://github.com/djeada/Linux-Notes/stargazers)
5 | [](https://github.com/djeada/Linux-Notes/network)
6 |
7 | > Comprehensive guides and practical notes covering Linux fundamentals to advanced system administration. Designed for beginners through DevOps professionals.
8 |
9 | 
10 |
11 | ## 📚 What You'll Learn
12 |
13 | This repository provides structured learning materials covering:
14 |
15 | - File systems, commands, permissions, and shell configuration
16 | - Process management, services, user management, and startup sequences
17 | - SSH/SCP, firewalls, ports, and network file systems
18 | - Kernel management, virtualization, encryption, and security (SELinux)
19 | - Text processing (grep, sed, awk), monitoring, and package management
20 |
21 | Each section includes **hands-on challenges** to reinforce your understanding and build practical skills.
22 |
23 | ## 🗂️ Learning Path
24 |
25 | The notes are organized from beginner to advanced concepts. Follow this recommended progression:
26 |
27 | ### 🔰 Beginner Level
28 | - [Introduction to Linux](https://github.com/djeada/Linux-Notes/blob/main/notes/introduction.md)
29 | - [Basic Commands](https://github.com/djeada/Linux-Notes/blob/main/notes/commands.md)
30 | - [Files and Directories](https://github.com/djeada/Linux-Notes/blob/main/notes/files_and_dirs.md)
31 | - [File System Structure](https://github.com/djeada/Linux-Notes/blob/main/notes/file_system.md)
32 | - [Permissions](https://github.com/djeada/Linux-Notes/blob/main/notes/permissions.md)
33 |
34 | ### 🔧 Intermediate Level
35 | - [Shells and Configuration](https://github.com/djeada/Linux-Notes/blob/main/notes/shells_and_bash_configuration.md)
36 | - [Pipe and Redirect](https://github.com/djeada/Linux-Notes/blob/main/notes/pipe_and_redirect.md)
37 | - [Finding Files](https://github.com/djeada/Linux-Notes/blob/main/notes/finding_files.md)
38 | - [Environment Variables](https://github.com/djeada/Linux-Notes/blob/main/notes/enviroment_variable.md)
39 | - [Archive Management (Tar & Gzip)](https://github.com/djeada/Linux-Notes/blob/main/notes/tar_and_gzip.md)
40 | - [Inodes and Symlinks](https://github.com/djeada/Linux-Notes/blob/main/notes/inodes_and_symlinks.md)
41 |
42 | ### 👥 System Administration
43 | - [User Management](https://github.com/djeada/Linux-Notes/blob/main/notes/managing_users.md)
44 | - [Process Management](https://github.com/djeada/Linux-Notes/blob/main/notes/processes.md)
45 | - [Disk Usage & Monitoring](https://github.com/djeada/Linux-Notes/blob/main/notes/disk_usage.md)
46 | - [Mounting File Systems](https://github.com/djeada/Linux-Notes/blob/main/notes/mounting.md)
47 | - [System Startup & Boot Process](https://github.com/djeada/Linux-Notes/blob/main/notes/system_startup.md)
48 | - [Scheduled Tasks (Cron)](https://github.com/djeada/Linux-Notes/blob/main/notes/cron_jobs.md)
49 | - [System Services](https://github.com/djeada/Linux-Notes/blob/main/notes/services.md)
50 |
51 | ### 🌐 Networking & Security
52 | - [SSH & SCP](https://github.com/djeada/Linux-Notes/blob/main/notes/ssh_and_scp.md)
53 | - [Networking Fundamentals](https://github.com/djeada/Linux-Notes/blob/main/notes/networking.md)
54 | - [Port Management](https://github.com/djeada/Linux-Notes/blob/main/notes/ports.md)
55 | - [Firewall Configuration](https://github.com/djeada/Linux-Notes/blob/main/notes/firewall.md)
56 | - [Package Managers](https://github.com/djeada/Linux-Notes/blob/main/notes/package_managers.md)
57 | - [Performance Monitoring](https://github.com/djeada/Linux-Notes/blob/main/notes/performance_monitoring.md)
58 | - [Log Analysis](https://github.com/djeada/Linux-Notes/blob/main/notes/log_files_and_journals.md)
59 |
60 | ### 🔍 Text Processing & Utilities
61 | - [Grep - Pattern Searching](https://github.com/djeada/Linux-Notes/blob/main/notes/grep.md)
62 | - [Sed & Awk - Text Processing](https://github.com/djeada/Linux-Notes/blob/main/notes/sed_and_awk.md)
63 | - [System Utilities](https://github.com/djeada/Linux-Notes/blob/main/notes/utilities.md)
64 |
65 | ### 🚀 Advanced Topics
66 | - [Encryption & Security](https://github.com/djeada/Linux-Notes/blob/main/notes/encryption.md)
67 | - [Kernel Management](https://github.com/djeada/Linux-Notes/blob/main/notes/kernel.md)
68 | - [Environment Modules](https://github.com/djeada/Linux-Notes/blob/main/notes/enviroment_modules.md)
69 | - [Virtual Machines](https://github.com/djeada/Linux-Notes/blob/main/notes/virtual_machines.md)
70 | - [Disk Partitioning](https://github.com/djeada/Linux-Notes/blob/main/notes/partitions.md)
71 | - [Logical Volume Management](https://github.com/djeada/Linux-Notes/blob/main/notes/logical_volume_management.md)
72 | - [Network File System (NFS)](https://github.com/djeada/Linux-Notes/blob/main/notes/nfs.md)
73 | - [LDAP Integration](https://github.com/djeada/Linux-Notes/blob/main/notes/ldap.md)
74 | - [SELinux Security](https://github.com/djeada/Linux-Notes/blob/main/notes/selinux.md)
75 | - [Dynamic Window Manager](https://github.com/djeada/Linux-Notes/blob/main/notes/dwm.md)
76 |
77 | ## 🎯 Quick Start
78 |
79 | 1. **New to Linux?** Start with [Introduction to Linux](https://github.com/djeada/Linux-Notes/blob/main/notes/introduction.md)
80 | 2. **Need specific commands?** Jump to [Commands](https://github.com/djeada/Linux-Notes/blob/main/notes/commands.md)
81 | 3. **System admin tasks?** Check the System Administration section
82 | 4. **Looking for challenges?** Each guide includes practical exercises
83 |
84 | ## 📖 References & Further Reading
85 |
86 | ### 📚 Essential Books
87 | - **Nemeth, Evi; Snyder, Garth; Hein, Trent R.; Whaley, Ben**
88 | *UNIX and Linux System Administration Handbook* - [Amazon](https://amzn.to/3DZQSbU)
89 |
90 | - **Frisch, Æleen**
91 | *Essential System Administration* - [Amazon](https://amzn.to/3Gbkqnl)
92 |
93 | - **Turnbull, James; Lieverdink, Peter; Matotek, Dennis**
94 | *Pro Linux System Administration* - [Amazon](https://amzn.to/4chs7V2)
95 |
96 | ### 🌐 Online Resources
97 | - [Columbia University - UNIX/Linux Lectures](https://www.cs.columbia.edu/~smb/classes/s06-4118/lectures.html)
98 | - [Imperial College London - Unix Introduction](http://www.doc.ic.ac.uk/~wjk/UnixIntro)
99 | - [Linux From Scratch](https://www.linuxfromscratch.org/)
100 | - [GoLinuxCloud - Commands Cheat Sheet](https://www.golinuxcloud.com/linux-commands-cheat-sheet/)
101 | - [How To Secure A Linux Server](https://github.com/imthenachoman/How-To-Secure-A-Linux-Server)
102 |
103 | ### 🔧 Specialized Topics
104 | - [Environment Modules Tutorial](https://manjusri.ucsc.edu/2017/09/08/environment-modules/)
105 | - [SELinux for Mortals](https://craigmbooth.com/blog/selinux-for-mortals/)
106 | - [Managing ACLs on Linux](https://linuxconfig.org/how-to-manage-acls-on-linux)
107 | - [Beginner's Guide to Nmap](https://www.linux.com/training-tutorials/beginners-guide-nmap/)
108 |
109 | ## 🤝 Contributing
110 |
111 | We welcome contributions! Whether you want to:
112 |
113 | - 🐛 **Report bugs** - [Open an issue](https://github.com/djeada/Linux-Notes/issues)
114 | - ✨ **Suggest improvements** - [Start a discussion](https://github.com/djeada/Linux-Notes/discussions)
115 | - 📝 **Add content** - [Submit a pull request](https://github.com/djeada/Linux-Notes/pulls)
116 | - 🔧 **Fix typos or errors** - Pull requests welcome!
117 |
118 | Please read our contribution guidelines before submitting changes.
119 |
120 | ## 📄 License
121 |
122 | This project is licensed under the [MIT License](https://choosealicense.com/licenses/mit/) - feel free to use, modify, and distribute.
123 |
124 | ## ⭐ Star History
125 |
126 | [](https://star-history.com/#djeada/Linux-Notes&Date)
127 |
128 | ---
129 |
130 |
131 | Happy Learning! 🚀
132 | Built with ❤️ for the Linux community
133 |
134 |
--------------------------------------------------------------------------------
/quizzes/networking.md:
--------------------------------------------------------------------------------
1 | #### Q. Which command shows the current IP addresses assigned to all network interfaces in modern Linux distributions?
2 |
3 | * [ ] `ifconfig -a`
4 | * [x] `ip addr show`
5 | * [ ] `netstat -i`
6 | * [ ] `route -n`
7 | * [ ] `iptables -L`
8 |
9 | #### Q. Where is the default gateway specified in the routing table?
10 |
11 | * [ ] Destination `0.0.0.0` / Genmask `255.255.255.255`
12 | * [x] Destination `0.0.0.0` / Genmask `0.0.0.0`
13 | * [ ] Destination `127.0.0.0` / Genmask `255.0.0.0`
14 | * [ ] Destination `255.255.255.255` / Genmask `255.255.255.255`
15 | * [ ] Destination `192.168.1.0` / Genmask `255.255.255.0`
16 |
17 | #### Q. Which file is used to define static DNS servers for name resolution system-wide?
18 |
19 | * [ ] `/etc/hostname`
20 | * [ ] `/etc/resolvconf.conf`
21 | * [x] `/etc/resolv.conf`
22 | * [ ] `/etc/hosts.dns`
23 | * [ ] `/etc/dns.conf`
24 |
25 | #### Q. How do you display all listening TCP and UDP ports along with their associated processes?
26 |
27 | * [ ] `ss -uln`
28 | * [ ] `netstat -ulpn`
29 | * [x] `ss -tulpn`
30 | * [ ] `lsof -i udp`
31 | * [ ] `iptables -L -n`
32 |
33 | #### Q. To permanently assign a static IP to `eth0` on a Debian-based system, which file would you edit?
34 |
35 | * [x] `/etc/network/interfaces`
36 | * [ ] `/etc/sysconfig/network-scripts/ifcfg-eth0`
37 | * [ ] `/etc/netplan/config.yaml`
38 | * [ ] `/etc/dhcp/dhclient.conf`
39 | * [ ] `/etc/NetworkManager/system-connections/eth0.nmconnection`
40 |
41 | #### Q. What command tests connectivity and reports round-trip time to a host?
42 |
43 | * [ ] `traceroute`
44 | * [x] `ping`
45 | * [ ] `dig`
46 | * [ ] `nslookup`
47 | * [ ] `arping`
48 |
49 | #### Q. Which utility shows real-time bandwidth usage per connection on a network interface?
50 |
51 | * [x] `iftop`
52 | * [ ] `nethogs`
53 | * [ ] `nload`
54 | * [ ] `iperf`
55 | * [ ] `netcat`
56 |
57 | #### Q. How do you flush all IPv4 routes from the routing table?
58 |
59 | * [ ] `ip route flush all`
60 | * [x] `ip route flush table main`
61 | * [ ] `route del default`
62 | * [ ] `ifdown --flush`
63 | * [ ] `netstat -r --flush`
64 |
65 | #### Q. In `/etc/hosts`, what does the line `127.0.0.1 localhost` achieve?
66 |
67 | * [ ] Maps `localhost` to the public IP of the host
68 | * [x] Resolves `localhost` to the loopback interface
69 | * [ ] Routes all traffic to `localhost` through the router
70 | * [ ] Disables DNS lookup for `localhost`
71 | * [ ] Assigns a virtual IP for containers
72 |
73 | #### Q. Which command captures packets on `eth0` and writes them to `capture.pcap`?
74 |
75 | * [ ] `tcpdump -w eth0 capture.pcap`
76 | * [x] `tcpdump -i eth0 -w capture.pcap`
77 | * [ ] `wireshark -i eth0 -o capture.pcap`
78 | * [ ] `snort -c eth0 capture.pcap`
79 | * [ ] `nmap -sP eth0 -o capture.pcap`
80 |
81 | #### Q. What is the purpose of the `mtu` parameter in network interface configuration?
82 |
83 | * [ ] Maximum Transfer Unit - defines maximum bandwidth
84 | * [x] Maximum Transmission Unit - defines maximum packet size
85 | * [ ] Memory Transfer Unit - defines buffer size
86 | * [ ] Multiple Terminal Unit - defines connection count
87 | * [ ] Message Transfer Unit - defines protocol type
88 |
89 | #### Q. Which command adds a static route to network `192.168.10.0/24` via gateway `192.168.1.1`?
90 |
91 | * [ ] `route add -net 192.168.10.0/24 gw 192.168.1.1`
92 | * [x] `ip route add 192.168.10.0/24 via 192.168.1.1`
93 | * [ ] `netstat -r 192.168.10.0/24 192.168.1.1`
94 | * [ ] `ifroute add 192.168.10.0/24 192.168.1.1`
95 | * [ ] `gateway add 192.168.10.0/24 192.168.1.1`
96 |
97 | #### Q. Which file contains the system's hostname on most Linux distributions?
98 |
99 | * [ ] `/etc/hosts`
100 | * [x] `/etc/hostname`
101 | * [ ] `/etc/sysconfig/network`
102 | * [ ] `/proc/sys/kernel/hostname`
103 | * [ ] `/etc/machine-info`
104 |
105 | #### Q. What does the command `ip link set eth0 down` accomplish?
106 |
107 | * [ ] Removes IP address from eth0
108 | * [x] Disables the eth0 network interface
109 | * [ ] Deletes the eth0 interface permanently
110 | * [ ] Sets eth0 to promiscuous mode
111 | * [ ] Flushes the ARP cache for eth0
112 |
113 | #### Q. Which protocol does the `dig` command primarily use for DNS queries?
114 |
115 | * [ ] TCP only
116 | * [x] UDP (with TCP fallback)
117 | * [ ] ICMP
118 | * [ ] HTTP
119 | * [ ] DHCP
120 |
121 | #### Q. In iptables, what does the `-j DROP` action do to packets?
122 |
123 | * [ ] Forwards packets to another chain
124 | * [ ] Logs packet information and continues
125 | * [x] Silently discards the packet
126 | * [ ] Sends ICMP unreachable message
127 | * [ ] Redirects packet to localhost
128 |
129 | #### Q. Which command shows network interface statistics including packet counts and errors?
130 |
131 | * [ ] `ip addr`
132 | * [x] `ip -s link`
133 | * [ ] `netstat -r`
134 | * [ ] `ss -i`
135 | * [ ] `ifconfig -v`
136 |
137 | #### Q. What is the default DHCP client configuration file on Ubuntu systems?
138 |
139 | * [ ] `/etc/dhclient.conf`
140 | * [ ] `/etc/dhcpcd.conf`
141 | * [x] `/etc/dhcp/dhclient.conf`
142 | * [ ] `/etc/network/dhcp.conf`
143 | * [ ] `/etc/systemd/network/dhcp.conf`
144 |
145 | #### Q. Which command displays the current ARP table entries?
146 |
147 | * [ ] `route -n`
148 | * [x] `ip neigh show`
149 | * [ ] `netstat -arp`
150 | * [ ] `ss -arp`
151 | * [ ] `ifconfig -arp`
152 |
153 | #### Q. How do you enable IP forwarding temporarily on a Linux system?
154 |
155 | * [ ] `echo 0 > /proc/sys/net/ipv4/ip_forward`
156 | * [x] `sysctl -w net.ipv4.ip_forward=1`
157 | * [ ] Edit `/etc/sysctl.conf` and reboot
158 | * [ ] `iptables -A FORWARD -j ACCEPT`
159 | * [ ] `ip forward enable`
160 |
161 | #### Q. Which port does SSH typically use for connections?
162 |
163 | * [ ] 21
164 | * [x] 22
165 | * [ ] 23
166 | * [ ] 25
167 | * [ ] 80
168 |
169 | #### Q. What does the `netstat -tulpn` command display?
170 |
171 | * [ ] Only UDP connections
172 | * [x] TCP and UDP listening ports with process information
173 | * [ ] Only TCP connections
174 | * [ ] Network interface statistics
175 | * [ ] Routing table information
176 |
177 | #### Q. Which configuration directive in `/etc/resolv.conf` specifies the DNS search domain?
178 |
179 | * [ ] `nameserver`
180 | * [x] `search`
181 | * [ ] `domain`
182 | * [ ] `options`
183 | * [ ] `sortlist`
184 |
185 | #### Q. How do you test if a specific TCP port is open on a remote host using netcat?
186 |
187 | * [ ] `nc -u hostname port`
188 | * [x] `nc -zv hostname port`
189 | * [ ] `nc -l hostname port`
190 | * [ ] `nc -s hostname port`
191 | * [ ] `nc -p hostname port`
192 |
193 | #### Q. Which systemd service manages network connections on modern Linux distributions?
194 |
195 | * [ ] `network.service`
196 | * [x] `NetworkManager.service`
197 | * [ ] `networking.service`
198 | * [ ] `systemd-networkd.service`
199 | * [ ] `ifupdown.service`
200 |
201 | #### Q. What does the `traceroute` command accomplish?
202 |
203 | * [ ] Tests bandwidth between hosts
204 | * [x] Shows the path packets take to reach a destination
205 | * [ ] Captures network packets
206 | * [ ] Displays network interface configuration
207 | * [ ] Monitors real-time traffic
208 |
209 | #### Q. In a CIDR notation `192.168.1.0/24`, what does the `/24` represent?
210 |
211 | * [ ] The host portion of the address
212 | * [x] The number of network bits in the subnet mask
213 | * [ ] The maximum number of hosts
214 | * [ ] The VLAN identifier
215 | * [ ] The default gateway address
216 |
217 | #### Q. Which command creates a network namespace named `test`?
218 |
219 | * [ ] `ip netns create test`
220 | * [x] `ip netns add test`
221 | * [ ] `netns create test`
222 | * [ ] `namespace add test`
223 | * [ ] `ip namespace create test`
224 |
225 | #### Q. What is the purpose of the loopback interface (`lo`)?
226 |
227 | * [ ] External network communication
228 | * [x] Internal system communication and testing
229 | * [ ] Wireless network management
230 | * [ ] VPN connections
231 | * [ ] Bridge configuration
232 |
233 | #### Q. Which file would you modify to make network interface changes persistent on CentOS/RHEL systems?
234 |
235 | * [ ] `/etc/network/interfaces`
236 | * [x] `/etc/sysconfig/network-scripts/ifcfg-ethX`
237 | * [ ] `/etc/netplan/config.yaml`
238 | * [ ] `/etc/systemd/network/ethX.network`
239 | * [ ] `/etc/NetworkManager/conf.d/ethX.conf`
240 |
241 | #### Q. How do you display only IPv6 addresses using the `ip` command?
242 |
243 | * [ ] `ifconfig -6`
244 | * [x] `ip -6 addr show`
245 | * [ ] `ip addr show ipv6`
246 | * [ ] `ip6 addr show`
247 | * [ ] `ip addr -6`
248 |
--------------------------------------------------------------------------------
/notes/finding_files.md:
--------------------------------------------------------------------------------
1 | ## Finding Files
2 |
3 | The `find`, `locate`, and `which` commands are commonly used for file search operations. The `find` command performs a comprehensive search using attributes such as name, size, and type. `locate` provides a faster, albeit periodically updated, search by filename. `which` locates the path of a program's executable within the system's `PATH`.
4 |
5 | ### Find
6 |
7 | The `find` command locates files and directories based on criteria such as file name, size, and modification time. It is a powerful command, capable of searching for files as well as copying, removing, and modifying their attributes.
8 |
9 | The general syntax of the `find` command is as follows:
10 |
11 | ```bash
12 | find [path...] [expression]
13 | ```
14 |
15 | - `[path...]` is the directory or directories in which to search.
16 | - `[expression]` is the search criteria, such as name, size, or file type.
17 |
18 | #### Commonly Used Options
19 |
20 | The `find` command includes various options, or "flags," that modify its behavior. Below are some commonly used flags:
21 |
22 | | Option | Description |
23 | | --- | --- |
24 | | `-name pattern` | Search for files based on their name. |
25 | | `-type [f\|d\|l]` | Search for files (`f`), directories (`d`), or symbolic links (`l`). |
26 | | `-user user_name` | Search for files owned by a specific user. |
27 | | `-size +N` | Search for files larger than N blocks (1 block = 512 bytes). |
28 | | `-exec command {} \;` | Execute a command on each file that matches the criteria. The `{}` is replaced by the current file name. |
29 | | `-delete` | Deletes the files that match the given criteria. |
30 | | `-ok command {} \;` | Similar to `-exec`, but asks for affirmation before executing the command. |
31 |
32 | #### Finding Files by Name
33 |
34 | To find a specific file named `error.log` in the `/var/log/` directory:
35 |
36 | ```bash
37 | find /var/log -name error.log
38 | ```
39 |
40 | Suppose there is a file named `error.log` in `/var/log/app/`:
41 |
42 | ```
43 | /var/log/app/error.log
44 | ```
45 |
46 | #### Finding Files by User
47 |
48 | To find all files owned by the user `admin` in the `/home` directory:
49 |
50 | ```bash
51 | find /home -user admin
52 | ```
53 |
54 | Suppose the `admin` user owns several files in `/home/admin/`:
55 |
56 | ```
57 | /home/admin/file1.txt
58 | /home/admin/file2.log
59 | ```
60 |
61 | #### Excluding Files by User
62 |
63 | To find all files in the `/home` directory not owned by the user `guest`:
64 |
65 | ```bash
66 | find /home ! -user guest
67 | ```
68 |
69 | If `guest` owns files in `/home/guest/`, this command excludes those files.
70 |
71 | #### Finding Files Modified More Recently Than Another File
72 |
73 | To find files modified more recently than `file2`:
74 |
75 | ```bash
76 | find . -newer file2
77 | ```
78 |
79 | This finds files updated after `file2`, such as:
80 |
81 | ```
82 | file3
83 | file4
84 | ```
85 |
86 | #### Finding and Deleting Files Modified More Recently Than Another File
87 |
88 | To find files newer than `file2` and delete them:
89 |
90 | ```bash
91 | find . -newer file2 -exec rm -v {} \;
92 | ```
93 |
94 | This will delete files, such as `file3` and `file4`, and print each deleted file's name due to the `-v` (verbose) option.
95 |
96 | #### Other Examples of `find` Usage
97 |
98 | To find all files larger than 10MB and display them using the `ls` command:
99 |
100 | ```bash
101 | find / -type f -size +10M -exec ls -lh {} \;
102 | ```
103 |
104 | To find and remove all files with the `.bak` extension in the current directory and its subdirectories:
105 |
106 | ```bash
107 | find . -name "*.bak" -type f -delete
108 | ```
109 |
110 | To find all files larger than 2000 blocks (approximately 1MB) and ask the user for permission to remove them:
111 |
112 | ```bash
113 | find $HOME -type f -size +2000 -exec ls -s {} \; -ok rm -f {} \;
114 | ```
115 |
116 | 🔴 **Caution**: The `find` command can be very powerful, but it also poses a risk of unintentional file deletion or modification, especially when combined with `-exec` or `-delete`. Always double-check your commands and use `-ok` instead of `-exec` when performing critical operations.
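One way to reduce that risk is to run the same `find` expression with `-print` first, review the list, and only then repeat it with `-delete`. A minimal sketch in a throwaway directory (the file names here are made up for illustration):

```shell
#!/bin/sh
set -e

# Work in a temporary directory so nothing important is at risk.
tmpdir=$(mktemp -d)
touch "$tmpdir/a.bak" "$tmpdir/b.bak" "$tmpdir/keep.txt"

# Step 1: dry run - print what WOULD be deleted, and review it.
find "$tmpdir" -name "*.bak" -type f -print

# Step 2: only after reviewing, run the same expression with -delete.
find "$tmpdir" -name "*.bak" -type f -delete

ls "$tmpdir"        # only keep.txt remains

rm -r "$tmpdir"     # clean up
```

The key point is that both invocations use the identical criteria; only the final action changes, so the dry run is an accurate preview.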
117 |
118 | ### Locate
119 |
120 | The `locate` command is a quicker alternative to `find` for searching filenames in the filesystem. It uses a database (`updatedb`) that stores references to all files in the filesystem. While faster, it may not always have the most up-to-date information as the database is updated periodically (usually through a nightly cron job).
121 |
122 | The general syntax of the `locate` command is as follows:
123 |
124 | ```bash
125 | locate [option] pattern
126 | ```
127 |
128 | - `[option]` refers to additional parameters that can be passed to `locate`.
129 | - `pattern` refers to the file or directory name you are searching for.
130 |
131 | #### Commonly Used Options
132 |
133 | Here are some commonly used options with the `locate` command:
134 |
135 | | Option | Description |
136 | | --- | --- |
137 | | `-i` | Ignore case distinctions in both the pattern and the file names. |
138 | | `-l, --limit, -n` | Limit the number of match results. |
139 | | `-S, --statistics` | Display statistics about each read database. |
140 | | `-b, --basename` | Match only the base name against the specified patterns. |
141 | | `-r, --regexp REGEXP` | Search for a basic regexp REGEXP. |
142 |
143 | #### Examples
144 |
145 | To find a file called `example.txt`:
146 |
147 | ```bash
148 | locate example.txt
149 | ```
150 |
151 | Suppose `example.txt` exists in multiple locations:
152 |
153 | ```
154 | /home/user/Documents/example.txt
155 | /usr/share/docs/example.txt
156 | ```
157 |
158 | #### Case-Insensitive Search
159 |
160 | To find a file called `example.txt` and ignore case:
161 |
162 | ```bash
163 | locate -i example.txt
164 | ```
165 |
166 | This command will return results like:
167 |
168 | ```
169 | /home/user/Documents/Example.txt
170 | /usr/share/docs/example.TXT
171 | ```
172 |
173 | #### Limiting the Number of Results
174 |
175 | To limit the number of returned results to 5:
176 |
177 | ```bash
178 | locate -l 5 example.txt
179 | ```
180 |
181 | The output will show only the first 5 matches found:
182 |
183 | ```
184 | /home/user/Documents/example.txt
185 | /usr/share/docs/example.txt
186 | /var/log/example.txt
187 | /tmp/example.txt
188 | /etc/example.txt
189 | ```
190 |
191 | #### Matching Only the Base Name
192 |
193 | To match only the base name against the pattern:
194 |
195 | ```bash
196 | locate -b '\example.txt'
197 | ```
198 |
199 | This command focuses on the base name, ignoring the directory path:
200 |
201 | ```
202 | /home/user/example.txt
203 | /usr/share/example.txt
204 | ```
205 |
206 | #### Searching with Regular Expressions
207 |
208 | To search for a regular expression pattern:
209 |
210 | ```bash
211 | locate -r 'ex.*\.txt$'
212 | ```
213 |
214 | This command will find files matching the regular expression `ex.*\.txt$`, such as:
215 |
216 | ```
217 | /home/user/Documents/exam.txt
218 | /usr/share/docs/example.txt
219 | ```
220 |
221 | #### Important Note
222 |
223 | The `locate` command is faster than `find` but might not always show the most up-to-date information. If the file or directory was recently created or deleted, the database might not reflect the change. The database of filenames is typically stored at `/var/lib/mlocate/mlocate.db`. To update the database manually, use the `updatedb` command (requires root privileges).
224 |
225 | ### Which
226 |
227 | The `which` command in Unix/Linux is used to locate the executable file associated with a given command. It searches for the executable in directories specified by the `PATH` environment variable.
228 |
229 | The general syntax of the `which` command is as follows:
230 |
231 | ```bash
232 | which [option] program_name
233 | ```
234 |
235 | - `[option]` refers to additional parameters that can be passed to which.
236 | - `program_name` is the name of the executable you want to locate.
237 |
238 | #### Commonly Used Options
239 |
242 | Here are some commonly used options with the `which` command:
243 |
244 | | Option | Description |
245 | | --- | --- |
246 | | `-a` | Print all matching pathnames of each argument. |
247 |
248 | #### Examples
249 |
250 | To find the location of the `ls` command:
251 |
252 | ```bash
253 | which ls
254 | ```
255 |
256 | Output might look like:
257 |
258 | ```
259 | /bin/ls
260 | ```
261 |
262 | This indicates that the `ls` executable is located at `/bin/ls`.
263 |
264 | #### Finding All Instances of an Executable
265 |
266 | To find all the locations of the `python` command:
267 |
268 | ```bash
269 | which -a python
270 | ```
271 |
272 | Output might include:
273 |
274 | ```
275 | /usr/bin/python
276 | /usr/local/bin/python
277 | ```
278 |
279 | This indicates that there are multiple `python` executables located at `/usr/bin/python` and `/usr/local/bin/python`.
280 |
281 | #### Note
282 |
283 | The `which` command only searches the directories listed in the `PATH` environment variable. If a binary exists anywhere else on the system, `which` will not find it, even though the file is present.
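This limitation is easiest to see with shell builtins, which have no executable file on disk at all. A small sketch (exact wording of the output varies between shells):

```shell
#!/bin/sh
set -e

# `which` searches only PATH, so a builtin like `cd` is usually not found
# (some systems do ship a /usr/bin/cd wrapper, hence the fallback echo):
which cd || echo "cd not found by which"

# The shell's own `type` builtin also knows about builtins and aliases:
type cd             # reports that cd is a shell builtin

# `command -v` is the POSIX-portable way to resolve a command name:
command -v ls       # prints the path to ls (or an alias/builtin note)
```

When writing portable scripts, `command -v` is generally the safer choice than `which`.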
284 |
285 | ### Challenges
286 |
287 | 1. Use the `which` command to find the location of executable files for tools like `cat`, `ls`, `reboot`, and `chmod`.
288 | 2. Utilize the `find` command to locate all files in your home directory that are larger than 1GB.
289 | 3. Employ `find` or `locate` to search for all `.mp3` files within your home directory. Which method do you find faster?
290 | 4. Find all `.txt` files in your home directory that contain the string "linux". You might need to use a combination of commands to achieve this.
291 | 5. Use the `find` command to search for all symbolic links within the `/usr/bin` directory.
292 | 6. Display all subdirectories in the `/usr/local` directory that are owned by the root user.
293 | 7. List all files in the `/var/log` directory that have been modified within the past 24 hours.
294 | 8. Use `locate` to find all files with the `.conf` extension. Remember that the database may need to be updated.
295 | 9. Find all files and directories in your home directory that you have full permission to modify (read, write, and execute).
296 | 10. Use `which` to determine the paths of `python3` and `pip3`. Are they in the same directory? What does this tell you about your Python installation?
297 |
--------------------------------------------------------------------------------
/notes/inodes_and_symlinks.md:
--------------------------------------------------------------------------------
1 | ## File System Metadata and Links
2 |
3 | Inodes are critical as they store essential metadata about files, such as permissions and locations, allowing efficient file system management. Hard links are important because they let multiple file names point to the same inode, saving disk space by avoiding data duplication. Symlinks provide flexibility by creating references to files or directories, allowing for easier access and organization without duplicating the actual content. Together, these structures optimize storage, file access, and navigation in file systems.
4 |
5 | ### Inodes
6 |
7 | An inode (short for "index node") is a fundamental concept in many filesystems, serving as a data structure that describes a file or a directory. Each inode contains crucial metadata about a file, but not the file's actual data.
8 |
9 | ```
10 | +---------------------+ +-----------------------+
11 | | Directory | | Inode |
12 | | (Directory Entry) | | (Metadata & Pointers) |
13 | +---------------------+ +-----------------------+
14 | | Filename: "file.txt"| ----> | Inode Number: 1234 |
15 | | Inode Number: 1234 | | Permissions: 0644 |
16 | +---------------------+ | Owner UID: 1000 |
17 | | Size: 2048 bytes |
18 | | Timestamps: ... |
19 | | Pointers: |
20 | | +---------+ |
21 | | | Block 1 |--+ |
22 | | +---------+ | |
23 | | | |
24 | | +---------+ | |
25 | | | Block 2 |<-+ |
26 | | +---------+ |
27 | +-----------------------+
28 | ```
29 |
30 | Main idea:
31 |
32 | - An inode stores essential metadata such as the file's owner, permissions, size, timestamps (creation, modification, and last accessed), and pointers to the file's data blocks.
33 | - Every file or directory has a unique inode number within a given filesystem. This number helps the system efficiently manage and locate the file's data.
34 | - Multiple filenames can point to the same inode (hard links).
35 | - The number of inodes is fixed when the file system is created, limiting the number of files.
36 | - When a file is deleted, the inode and data blocks are freed if no other links point to it.
37 | - The **directory entry** contains the filename and the inode number.
38 | - The **inode** stores metadata and pointers to the data blocks.
39 | - The actual **data blocks** (Block 1, Block 2, etc.) contain the file's content.
40 |
41 | To view the inode number and other details of files in a directory, use the `ls -li` command. The first column in the output displays the inode number.
42 |
43 | ```
44 | $ ls -li
45 | total 8
46 | 684867 -rw-r--r-- 1 user user 41 Mar 1 12:34 file1
47 | 684868 -rw-r--r-- 1 user user 41 Mar 1 12:34 file2
48 | 684869 -rw-r--r-- 1 user user 41 Mar 1 12:34 file3
49 | ```
50 |
51 | Here, the inode numbers for `file1`, `file2`, and `file3` are `684867`, `684868`, and `684869`.
52 |
53 | For more detailed inode information about a particular file, use the `stat` command:
54 |
55 | ```
56 | $ stat file1
57 | File: file1
58 | Size: 41 Blocks: 8 IO Block: 4096 regular file
59 | Device: 806h/2054d Inode: 684867 Links: 1
60 | Access: (0644/-rw-r--r--)  Uid: ( 1000/    adam)   Gid: ( 1000/    adam)
61 | ```
62 |
63 | An inode stores various types of metadata, but does not store the filename or the file's content. The breakdown of the inode metadata is as follows:
64 |
65 | ```
66 | Inode Number: 1234
67 | +--------------------------------+
68 | | File Type and Permissions |
69 | | User ID (Owner) |
70 | | Group ID |
71 | | File Size |
72 | | Access Time |
73 | | Modification Time |
74 | | Change Time |
75 | | Block Pointers: |
76 | | - Direct Blocks |
77 | | - Single Indirect Block |
78 | | - Double Indirect Block |
79 | | - Triple Indirect Block |
80 | +--------------------------------+
81 | ```
82 |
83 | - **Permissions** define the access rights for the file, such as read, write, and execute permissions for the owner, group, and others.
84 | - **Owner UID** identifies the user who owns the file.
85 | - **File size** is the total size of the file in bytes.
86 | - **Timestamps** include the access time, modification time, and change time for the file.
87 | - The inode does not store the actual content of the file but contains **pointers** that indicate the location of the file's data blocks on the disk. These pointers direct the system to the specific blocks where the file's data is stored.
88 |
89 | ### Hardlinks
90 |
91 | A hardlink creates an additional reference to the existing inode of a file. It's essentially another name for an existing file on the same filesystem.
92 |
93 | I. Use the `ln` command to create a hardlink:
94 |
95 | ```
96 | ln existing_file hardlink_name
97 | ```
98 |
99 | II. Deleting one hardlink leaves the others untouched. Even if you delete the original name, the remaining hardlinks still reach the content, because they all reference the same inode; the data blocks are freed only when the last link is removed.
100 |
101 | ```
102 | +----------------------+ +-----------------------+
103 | | Directory Entry 1 | | Directory Entry 2 |
104 | | Filename: "file1.txt"| | Filename: "file2.txt" |
105 | | Inode Number: 1234 | | Inode Number: 1234 |
106 | +----------------------+ +-----------------------+
107 | \ /
108 | \ /
109 | \ /
110 | \ /
111 | +-------------------+
112 | | Inode 1234 |
113 | | (File Metadata) |
114 | +-------------------+
115 | ```
116 |
117 | - Both "file1.txt" and "file2.txt" point to the same inode (1234).
118 | - They are indistinguishable at the file content level.
119 | - Deleting one link does not delete the inode until all links are removed.
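
You can watch this behavior with a couple of commands (the file names here are arbitrary):

```bash
# Create a file and a hardlink to it
echo "shared content" > file1.txt
ln file1.txt file2.txt

# Both names show the same inode number, and the link count is now 2
ls -li file1.txt file2.txt

# Remove the "original" name - the data survives through the other link
rm file1.txt
cat file2.txt
```

`ls -li` prints the inode number in the first column and the link count just after the permission bits, so you can confirm both names point at the same inode before deleting one.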
120 |
121 | ### Symlinks (Symbolic Links)
122 |
123 | Symlinks are special pointers that reference the path to another file or directory.
124 |
125 | I. Unlike hardlinks, symlinks can point to objects across different filesystems or even non-existent targets.
126 |
127 | II. Use the `ln -s` command to create a symlink:
128 |
129 | ```
130 | ln -s existing_file symlink_name
131 | ```
132 |
133 | III. To determine the target of a symlink, use the `readlink -f` command:
134 |
135 | ```
136 | readlink -f symlink_name
137 | ```
138 |
139 | IV. Deleting the symlink doesn't affect the target, but if the target file or directory is removed, the symlink becomes a "dangling link", pointing to a non-existent location.
140 |
141 | ```
142 | +-----------------------+ +-----------------------+
143 | | Symlink File | ----> | Target File |
144 | | Filename: "link.txt" | | Filename: "file.txt" |
145 | | Inode Number: 5678 | | Inode Number: 1234 |
146 | +-----------------------+ +-----------------------+
147 | | Inode 5678 contains: | | Inode 1234 (Metadata) |
148 | | Path to "file.txt" | +-----------------------+
149 | +-----------------------+
150 | ```
151 |
152 | - The symlink "link.txt" has its own inode (5678) and contains the path to "file.txt".
153 | - Accessing "link.txt" redirects to "file.txt".
154 | - If "file.txt" is deleted, "link.txt" becomes a broken link.
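
The dangling-link behavior is easy to reproduce (file names are illustrative):

```bash
# Create a target and a symlink pointing at it
echo "target data" > file.txt
ln -s file.txt link.txt

# The symlink resolves to the target's path and content
readlink -f link.txt
cat link.txt

# Delete the target: the symlink remains, but it's now dangling
rm file.txt
if [ -L link.txt ] && [ ! -e link.txt ]; then
    echo "link.txt is a dangling symlink"
fi
```

The `-L` test is true because the symlink itself still exists, while `-e` (which follows the link) is false because the target is gone.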
155 |
156 | ### Key Differences Between Hardlinks and Symlinks
157 |
158 | | Feature | Hardlink | Symlink |
159 | | ---------------------------------------------- | ----------------------- | ------------------------------------ |
160 | | Points across different filesystems | No | Yes |
161 | | Affected by changes to its target's attributes | Yes (Shares same inode) | No (Points to a path, not an inode) |
162 | | Points to non-existent files | No | Yes (Can create "dangling links") |
163 | | Reference | Inode of the target | Path to the target |
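
One practical consequence of the inode-vs-path distinction in the table: renaming the target breaks a symlink but not a hardlink. A small sketch (all file names are illustrative):

```bash
echo "data" > target.txt
ln target.txt hard.txt        # hardlink: references the inode
ln -s target.txt soft.txt     # symlink: stores the path "target.txt"

mv target.txt renamed.txt     # the inode is untouched; only the name changes

cat hard.txt                               # still reachable via the inode
cat soft.txt 2>/dev/null || echo "broken"  # the stored path no longer exists
```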
164 |
165 | ### Challenges
166 |
167 | 1. Create a text file named `myfile.txt` in a directory. In another directory, create a hard link to `myfile.txt` called `myhardlink`. Delete `myhardlink` and observe what happens to the original `myfile.txt`. Reflect on whether `myfile.txt` is still accessible and why hard links work this way.
168 | 2. Create a text file named `inodefile.txt`. Then, in the same directory, create a symlink to `inodefile.txt` named `symlink_to_inodefile`. Use `ls -li` to display the inode numbers for both files and compare them. Discuss why the inode numbers are different and how symlinks are managed differently from hard links.
169 | 3. Navigate to the `/lib` folder and use the `ls -l` command to list all files, identifying which ones are symlinks. Distinguish between hard links and symlinks, using link count and symbolic link indicators. Explain how you identified each type and what they reveal about the library files.
170 | 4. Create a text file named `original.txt` and a symlink to it named `dangling_symlink`. Delete `original.txt` and try to access `dangling_symlink`. Discuss what happens and why the symlink is now considered "dangling."
171 | 5. Research whether it’s possible for a filesystem to run out of inodes even if there is still disk space available. Explain the circumstances in which this could happen and why inode availability is essential for file storage.
172 | 6. Try creating a hard link to a directory. Document what happens and explain why most filesystems do not allow hard links to directories, considering potential risks or technical limitations.
173 | 7. Create a text file named `multi.txt` and make three hard links to it in different locations. Modify the contents of `multi.txt` and check the content of all three hard links. Describe your observations and explain how hard links reflect changes to the original file.
174 | 8. Use the `ls` command with a flag that shows the file type for each item in the `/etc` directory. Identify the flag to use and describe the indicators for different types of items (regular files, directories, symlinks, etc.).
175 | 9. Create two text files, `fileA.txt` and `fileB.txt`. Then create a symlink named `mylink` that points to `fileA.txt`. Without deleting `mylink`, change its target to `fileB.txt` and explain the process you used. Discuss how this method avoids recreating the symlink.
176 | 10. Research the typical space consumption of an inode on a filesystem. Explain how inode size can vary based on the filesystem and why inode space consumption is an important factor in filesystem design.
177 |
--------------------------------------------------------------------------------
/notes_template.md:
--------------------------------------------------------------------------------
1 | TODO:
2 |
3 | - real life story
4 | - concrete full steps to reproduce
5 |
6 |
7 | # [Topic Name]
8 |
9 | > A friendly introduction explaining what this topic is and why you'd want to learn it. Think of this as answering "What problem does this solve for me?"
10 |
11 | ![Optional descriptive image or diagram]
12 |
13 | ## What You Need to Know
14 |
15 | Let's start with the basics. [Topic] is essentially [simple explanation in everyday terms].
16 |
17 | ### Key Terms (The Essentials)
18 |
19 | Before we dive in, here are the terms you'll encounter:
20 |
21 | | Term | What It Actually Means |
22 | |------|------------------------|
23 | | **Term 1** | Plain English explanation with context |
24 | | **Term 2** | What this means in practice |
25 | | **Term 3** | Why this matters to you |
26 |
27 | ### The Big Picture
28 |
29 | Think of [topic] like [relatable analogy]. Just as [analogy continues], [topic] works by [simple explanation].
30 |
31 | **Here's what's happening behind the scenes:**
32 |
33 | When you use [topic], your system basically:
34 | 1. Takes your input
35 | 2. Processes it in a specific way
36 | 3. Gives you the result you want
37 |
38 | ## Getting Your Hands Dirty
39 |
40 | Now let's actually use this stuff. Don't worry - we'll start simple and work our way up.
41 |
42 | ### Your First Command
43 |
44 | The most basic thing you can do is:
45 |
46 | ```bash
47 | simple_command filename
48 | ```
49 |
50 | **What just happened?**
51 |
52 | This command told your system to [explain in simple terms]. You'll see something like:
53 |
54 | ```
55 | Typical output you'd see
56 | ```
57 |
58 | The output means [explanation of what the user is seeing].
59 |
60 | ### Common Things You'll Want to Do
61 |
62 | #### Task 1: [Everyday Task Name]
63 |
64 | **The situation:** You need to [describe real scenario].
65 |
66 | **The solution:**
67 |
68 | ```bash
69 | command --helpful-option filename
70 | ```
71 |
72 | **Why this works:**
73 | - `--helpful-option` tells the command to [explain benefit]
74 | - `filename` is obviously the file you want to work with
75 |
76 | **Pro tip:** If you see an error like "permission denied," try adding `sudo` at the beginning.
77 |
78 | #### Task 2: [Another Common Need]
79 |
80 | **When you'd use this:** [Real-world scenario]
81 |
82 | ```bash
83 | another_command input_file output_file
84 | ```
85 |
86 | Made a mistake? No worries - you can quickly fix it:
87 |
88 | ```bash
89 | # Oops, wrong output name
90 | ^output_file^correct_name
91 | ```
92 |
93 | This reruns your command with the correction. Much faster than retyping everything!
94 |
95 | ### Level Up: More Powerful Techniques
96 |
97 | Once you're comfortable with the basics, here are some tricks that'll save you time:
98 |
99 | #### Combining Commands Like a Pro
100 |
101 | Instead of running commands one by one:
102 |
103 | ```bash
104 | # The tedious way
105 | first_command input.txt
106 | second_command processed.txt
107 | third_command final.txt
108 | ```
109 |
110 | You can chain them together:
111 |
112 | ```bash
113 | # The smart way
114 | first_command input.txt | second_command | third_command > final.txt
115 | ```
116 |
117 | **What's happening here:**
118 | - The `|` (pipe) passes output from one command to the next
119 | - `>` saves the final result to a file
120 | - Your system does all three steps automatically
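
Here's that same pattern with real commands, so you can see a pipeline doing several steps at once (the sample file and its contents are made up for the demo):

```bash
# Make a small sample file
printf 'apple banana apple\nbanana apple cherry\n' > words.txt

# Split into one word per line, count duplicates, show the top 3
# most frequent word first: apple (3), banana (2), cherry (1)
tr -s ' ' '\n' < words.txt | sort | uniq -c | sort -rn | head -3
```

Each `|` hands one stage's output to the next; add `> top_words.txt` at the end if you want to save the result instead of printing it.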
121 |
122 | ## Setting Things Up
123 |
124 | ### Configuration Files (Don't Panic!)
125 |
126 | Most of the time, the default settings work fine. But if you want to customize things, here's where to look:
127 |
128 | **System-wide settings:** `/etc/[topic]/config`
129 | **Your personal settings:** `~/.config/[topic]/config`
130 |
131 | **Quick example - changing a basic setting:**
132 |
133 | ```bash
134 | # Open your config file
135 | nano ~/.config/[topic]/config
136 |
137 | # Add this line to change [setting]
138 | preferred_option=your_value
139 | ```
140 |
141 | Save the file (Ctrl+X, then Y, then Enter if using nano), and you're done!
142 |
143 | ### Making Life Easier with Aliases
144 |
145 | Tired of typing long commands? Create shortcuts:
146 |
147 | ```bash
148 | # Instead of typing this every time:
149 | alias myshortcut='long_complicated_command --with --many --options'
150 |
151 | # Now you can just type:
152 | myshortcut
153 | ```
154 |
155 | **Making it permanent:**
156 |
157 | Add your aliases to `~/.bashrc` so they survive reboots:
158 |
159 | ```bash
160 | echo "alias myshortcut='long_complicated_command --with --many --options'" >> ~/.bashrc
161 | source ~/.bashrc
162 | ```
163 |
164 | ## When Things Go Wrong (And They Will)
165 |
166 | Don't worry - everyone runs into problems. Here's how to fix the most common issues:
167 |
168 | ### "Command not found"
169 |
170 | **What you see:**
171 | ```
172 | bash: mysterious_command: command not found
173 | ```
174 |
175 | **What this means:** The command isn't installed or isn't in your PATH.
176 |
177 | **Quick fixes to try:**
178 | 1. **Check if it's installed:** `which mysterious_command`
179 | 2. **Install it:** `sudo apt install package-name` (Ubuntu/Debian) or `sudo yum install package-name` (RedHat/CentOS)
180 | 3. **Check your PATH:** `echo $PATH`
181 |
182 | ### "Permission denied"
183 |
184 | **The situation:** You're trying to access or modify something you don't own.
185 |
**Quick fix:** Add `sudo` to the beginning:

```bash
# This fails:
cat /etc/shadow

# This works:
sudo cat /etc/shadow
```

**But wait!** `sudo` won't help with output redirects, because your shell performs the redirection before `sudo` ever runs:

```bash
# Still fails - the redirect runs as your unprivileged user:
sudo echo "new content" > /etc/important-file

# This works - tee writes the file as root:
echo "new content" | sudo tee /etc/important-file
```
199 |
200 | ### "File or directory not found"
201 |
202 | **Usually means:** You're in the wrong directory or the file doesn't exist.
203 |
204 | **Debug it:**
205 | ```bash
206 | # Where am I?
207 | pwd
208 |
209 | # What's here?
210 | ls -la
211 |
212 | # Is the file really where I think it is?
213 | find . -name "filename*"
214 | ```
215 |
216 | ## Real-World Examples
217 |
218 | Let's look at some actual scenarios where you'd use this.
219 |
220 | ### Scenario: Daily Backup Task
221 |
222 | **The problem:** You want to backup your important files every day without thinking about it.
223 |
224 | **The solution:**
225 |
```bash
#!/bin/bash
# backup.sh - archives Documents and Pictures with today's date in the name
tar -czf backup-$(date +%Y%m%d).tar.gz ~/Documents ~/Pictures
```

```bash
# Make the script executable
chmod +x backup.sh

# Run it automatically every day at 2 AM
# (careful: `crontab -` replaces your entire crontab; use `crontab -e` to add a line safely)
echo "0 2 * * * /path/to/backup.sh" | crontab -
```
237 |
238 | **What's happening:**
239 | - `tar` creates a compressed archive
240 | - `$(date +%Y%m%d)` adds today's date to the filename
241 | - `crontab` schedules it to run automatically
242 |
243 | ### Scenario: Finding That File You Lost
244 |
245 | **The problem:** You know you have a file with "budget" in the name, but where is it?
246 |
247 | **The solution:**
248 |
249 | ```bash
250 | # Search everywhere for files with "budget" in the name
251 | find / -name "*budget*" 2>/dev/null
252 |
253 | # Too many results? Be more specific:
254 | find ~/Documents -name "*budget*.xlsx" -mtime -30
255 | ```
256 |
257 | **Translation:**
258 | - `find /` searches everywhere (starting from root)
259 | - `2>/dev/null` hides permission error messages
260 | - `-mtime -30` finds files modified in the last 30 days
261 |
262 | ## Quick Reference
263 |
264 | ### Commands You'll Use Daily
265 |
266 | | What You Want to Do | Command | Example |
267 | |---------------------|---------|---------|
268 | | List files | `ls` | `ls -la` (detailed list) |
269 | | Copy files | `cp` | `cp source.txt backup.txt` |
270 | | Move/rename | `mv` | `mv oldname.txt newname.txt` |
271 | | Delete files | `rm` | `rm unwanted.txt` |
272 | | Create directory | `mkdir` | `mkdir new_folder` |
273 |
274 | ### Useful Shortcuts
275 |
276 | | Shortcut | What It Does |
277 | |----------|--------------|
278 | | `Ctrl+C` | Stop whatever's running |
279 | | `Ctrl+L` | Clear the screen |
280 | | `Tab` | Auto-complete filenames |
281 | | `↑` | Previous command |
282 | | `!!` | Repeat last command |
283 |
284 | ## Practice Makes Perfect
285 |
286 | ### Start Here (Beginner)
287 |
288 | 1. **Get comfortable with navigation:**
289 | - Use `ls` to see what's in your current directory
290 | - Use `cd` to move around
291 | - Try `pwd` to see where you are
292 |
293 | 2. **Practice with files:**
294 | - Create a test file: `touch test.txt`
295 | - Copy it: `cp test.txt test_copy.txt`
296 | - Delete the copy: `rm test_copy.txt`
297 |
298 | ### Next Level (Intermediate)
299 |
300 | 3. **Combine commands:**
301 | - List all `.txt` files: `ls *.txt`
302 | - Count them: `ls *.txt | wc -l`
303 | - Find the biggest one: `ls -la *.txt | sort -k5 -n | tail -1`
304 |
305 | 4. **Create a useful script:**
306 | - Make a script that shows disk usage and current time
307 | - Make it executable and run it
308 |
309 | ### Advanced Challenges
310 |
311 | 5. **Automate something annoying:**
312 | - Set up automatic file organization
313 | - Create a custom backup solution
314 | - Build a monitoring script for system resources
315 |
316 | 6. **Troubleshooting practice:**
317 | - Intentionally break something (in a safe environment)
318 | - Practice diagnosing and fixing the issue
319 | - Document what you learned
320 |
321 | ### Solutions and Hints
322 |
323 | **Don't peek until you've tried!**
324 |
325 |
<details>
<summary>Click for hints and solutions</summary>

**Challenge 1 hints:**
- Remember: `ls` shows files, `cd dirname` enters a directory
- If you get lost, `cd ~` takes you home

**Challenge 3 solution:**
```bash
# Count .txt files
ls *.txt | wc -l

# Find biggest file (size is in column 5)
ls -la *.txt | sort -k5 -n | tail -1
```

**Challenge 4 example script:**
```bash
#!/bin/bash
echo "=== System Status ==="
echo "Current time: $(date)"
echo "Disk usage:"
df -h
echo "Memory usage:"
free -h
```

</details>
353 |
354 | ## What's Next?
355 |
356 | Once you're comfortable with [current topic], you might want to explore:
357 |
358 | - **[Related Topic 1](./related_topic1.md)** - builds on what you learned here
359 | - **[Related Topic 2](./related_topic2.md)** - useful for similar tasks
360 | - **[Advanced Topic](./advanced_topic.md)** - when you're ready for more complexity
361 |
362 | ## Helpful Resources
363 |
364 | ### When You Need Help
365 |
366 | - **Quick help:** `man command_name` (built-in manual)
367 | - **Friendly explanations:** `command_name --help`
368 | - **Online communities:** Stack Overflow, Reddit's r/linux4noobs
369 |
370 | ### Keep Learning
371 |
372 | - [Official Documentation](https://example.com/docs) - comprehensive but technical
373 | - [Beginner Tutorials](https://example.com/beginners) - step-by-step guides
374 | - [Video Series](https://example.com/videos) - visual learners
375 |
376 | ---
377 |
378 | **What's next?** Try [Next Topic](./next_topic.md) to build on what you've learned here.
379 |
380 | **Need to review?** Go back to [Previous Topic](./previous_topic.md) if something wasn't clear.
381 |
--------------------------------------------------------------------------------
/notes/dwm.md:
--------------------------------------------------------------------------------
1 | ## Dynamic Window Manager (DWM)
2 |
3 | The Dynamic Window Manager (DWM) is a minimal, lightweight, and highly efficient tiling window manager designed to help you manage application windows in a clean and distraction-free manner. Instead of overlapping windows as seen in traditional window managers, DWM organizes windows in a tiled layout, making it easy to see and navigate multiple applications simultaneously. DWM is based on the X Window System, making it suitable for use on Unix-like operating systems.
4 |
5 | DWM stands out for its extreme simplicity and high customization capability. It is part of the suckless software philosophy, which emphasizes clarity, simplicity, and resource efficiency.
6 |
7 | 
8 |
9 | Here's how DWM looks in action. Each window takes a portion of the screen, allowing for easy multitasking.
10 |
11 | ### Installation
12 |
13 | Installation of DWM is straightforward. If you're on a Debian-based system such as Ubuntu or Mint, you can install DWM and its related suckless-tools packages using the following command in your terminal:
14 |
15 | ```bash
16 | sudo apt install dwm suckless-tools
17 | ```
18 |
19 | The suckless-tools package contains additional utilities developed by the suckless community, including dmenu, a fast and lightweight dynamic menu for X, and slock, a simple screen locker utility.
20 |
21 | After installation, you can choose DWM as your window manager from the login screen.
22 |
23 | Remember that DWM is highly customizable, but changes typically require modifying the source code and recompiling. So if you're up for some tinkering, you can clone the DWM source code from the suckless website, make changes to suit your needs, and then build and install your custom version of DWM.
24 |
25 | ### Usage
26 |
- After installing DWM, interaction is mainly through keyboard shortcuts, which is quicker and more efficient than mouse-driven window management.
28 | - Opening a new terminal can be done by pressing `Shift + Alt + Enter`, which typically launches the `st` terminal or `xterm` if `st` isn't installed.
29 | - To move focus between different windows, you use `Alt + j` and `Alt + k`, which cycle through the windows in the currently visible tag.
30 | - Promoting a window from the stack to the master area or vice versa is done by pressing `Alt + Enter`.
- Closing the currently focused window is done by pressing `Shift + Alt + c` (DWM windows have no close buttons); in a terminal window you can also just type `exit` and press `Enter`.
32 | - DWM uses a concept called tags, similar to virtual desktops or workspaces, and switching views to another tag is done by pressing `Alt + [tag number]`, with tag numbers ranging from 1 to 9.
33 | - To quit DWM, you press `Shift + Alt + q`, which will close the window manager and return you to your display manager or console.
34 | - Moving a window to another tag involves focusing on the window and then pressing `Shift + Alt + [tag number]`.
- Toggling a window between tiled and floating modes is done by focusing on it and pressing `Shift + Alt + Space`; pressing the same combination again returns the window to the tiled layout.
36 | - Toggling a window to full-screen mode within the current layout is done by pressing `Alt + m`, which switches to the monocle layout, making the focused window occupy the entire screen. This is particularly useful when you want to focus on a single window.
37 | - Resizing a floating window can be accomplished by holding `Alt`, then right-clicking and dragging the window.
38 | - Moving a floating window similarly requires holding `Alt` and then left-clicking and dragging the window.
39 | - Launching an application launcher like `dmenu` can be done by pressing `Alt + p`, allowing you to start applications by typing their name, making for a quick and efficient workflow.
40 | - Cycling through different layouts (e.g., tiling, floating, monocle) is done by pressing `Alt + Space`. This is useful when you want to experiment with different ways of arranging your windows.
41 | - Viewing all windows across all tags simultaneously can be achieved by pressing `Alt + 0`. This allows you to quickly access any open window without switching tags.
- Moving a window to the scratchpad (a hidden, floating workspace available through the scratchpad patch) is done by pressing `Shift + Alt + s`. This is useful for keeping a window handy without cluttering your main workspace.
43 | - Switching between the last used tags is quickly done by pressing `Alt + Tab`, allowing for fast toggling between workspaces.
- Adjusting the master area size can be done dynamically by pressing `Alt + h` or `Alt + l`. This allows you to give more screen space to the master window or the stack area as needed.
- Restarting DWM without logging out is possible by pressing `Shift + Alt + r` if you've added a restart patch or custom keybinding, which reloads DWM with any new configurations applied, making it easier to test changes without disrupting your session.
47 | - Taking a screenshot can be done by integrating tools like `scrot` or `maim` and binding them to a key combination in your `config.h`. For example, `Alt + Shift + s` could be used to take a screenshot of your current screen.
48 | - Adjusting gaps between windows (if you've patched DWM with gaps support) can be done with custom keybindings, allowing you to increase or decrease spacing dynamically, tailoring your workspace to your visual preferences.
49 | - Killing a non-responsive window can be done by focusing on it and pressing `Shift + Alt + c`, force-closing the application without needing to open a task manager.
50 | - Checking the system status (like CPU usage, memory, or battery) can be enhanced by adding a custom status bar script to DWM, providing real-time system information directly on your screen.
51 | - Locking your screen for security can be done by setting up a keybinding like `Alt + Shift + l` to launch a screen locker like `slock`, ensuring your system is protected when you step away.
- Navigating between monitors in a multi-monitor setup is done with `Alt + ,` and `Alt + .` to move focus between screens, and `Shift + Alt + ,` or `Shift + Alt + .` to send the focused window to another monitor.
53 |
54 | ### Configuration
55 |
56 | Unlike other window managers that use configuration files, DWM is customized by directly modifying its source code and then recompiling it. This approach provides a lot of flexibility and control over DWM's behavior and appearance. The main configuration is located in the `config.h` file, which can be found in the DWM source code directory.
57 |
58 | Follow these steps to customize DWM to your preferences:
59 |
60 | #### Download the DWM Source Code
61 |
62 | You can clone the source code from the official suckless git repository using the following command:
63 |
64 | ```bash
65 | git clone https://git.suckless.org/dwm
66 | ```
67 |
68 | #### Navigate to the `dwm` Directory and Create a `config.h` File
69 |
70 | The `config.def.h` file contains the default settings. To customize DWM, you should first copy `config.def.h` to `config.h`. Then, you can edit the `config.h` file with your preferred text editor (e.g., nano, vim, emacs). Here's how to do that:
71 |
72 | ```bash
73 | cd dwm
74 | cp config.def.h config.h
75 | nano config.h
76 | ```
77 |
78 | #### Customize the `config.h` File
79 |
80 | In the `config.h` file, you can change various settings according to your preferences. For example, you can modify key bindings, set custom colors, define the status bar's appearance, and select the default font.
81 |
82 | **Changing the Border Configuration:**
83 |
84 | I. Within `config.h`, locate the section where the border settings are defined. It typically looks like this:
85 |
86 | ```c
87 | static const unsigned int borderpx = 1; /* border pixel of windows */
88 | ```
89 |
90 | `borderpx` controls the width of the border around each window. The default value is usually `1` pixel.
91 |
92 | II. To increase or decrease the border width, change the value of `borderpx`. For example, to set the border width to 2 pixels, change the line to:
93 |
94 | ```c
95 | static const unsigned int borderpx = 2;
96 | ```
97 |
98 | If you want to remove the border entirely, you can set `borderpx` to `0`:
99 |
100 | ```c
101 | static const unsigned int borderpx = 0;
102 | ```
103 |
104 | III. The border color for both focused and unfocused windows is defined in the color scheme section of `config.h`. Look for the following lines:
105 |
106 | ```c
107 | static const char col_gray1[] = "#222222";
108 | static const char col_gray2[] = "#444444";
109 | static const char col_gray3[] = "#bbbbbb";
110 | static const char col_gray4[] = "#eeeeee";
111 | static const char col_cyan[] = "#005577";
112 | static const char *colors[][3] = {
113 | /* fg bg border */
114 | [SchemeNorm] = { col_gray3, col_gray1, col_gray2 },
115 | [SchemeSel] = { col_gray4, col_cyan, col_cyan },
116 | };
117 | ```
118 |
119 | Here, `col_cyan` is used for the border of the focused window, and `col_gray2` is used for the unfocused windows. To change the border color, replace the hex color code with your preferred color. For example, to change the focused window border to red:
120 |
```c
static const char col_red[] = "#ff0000";
static const char *colors[][3] = {
	/*               fg         bg         border   */
	[SchemeNorm] = { col_gray3, col_gray1, col_gray2 },
	[SchemeSel]  = { col_gray4, col_red,   col_red   },
};
```
133 |
134 | IV. After adjusting the border settings and any other customizations, save your changes and exit the text editor.
135 |
136 | #### Compile and Install the Modified DWM
137 |
138 | After modifying the `config.h` file, you need to compile the DWM source code and install the new binary:
139 |
140 | ```bash
141 | sudo make clean install
142 | ```
143 |
144 | This command will clean up any previous builds and compile your customized version of DWM.
145 |
146 | #### Apply the Changes
147 |
148 | To apply your changes, you need to restart DWM. You can do this by logging out and logging back in, or by restarting your X session. Once you log back in, the updated DWM with your new border settings and other customizations should be in effect.
149 |
150 | ### Further Resources
151 |
152 | - If you are seeking additional information about configuring and using DWM, the official DWM Tutorial provided by the suckless community is an excellent starting point. It offers a comprehensive walkthrough of basic DWM usage and configuration, available at [https://dwm.suckless.org/tutorial/](https://dwm.suckless.org/tutorial/).
153 | - For a more in-depth understanding of DWM and its functionalities, the DWM man page is an invaluable resource. You can access it in your terminal by running the command `man dwm`.
154 | - The DWM Config Archive is a collection of user-submitted `config.h` files, serving as a treasure trove of interesting and varied configurations. Exploring these files can provide new ideas for your own DWM setup or even a ready-to-use configuration that suits your needs. Visit the archive at [https://dwm.suckless.org/customisation/](https://dwm.suckless.org/customisation/).
155 |
--------------------------------------------------------------------------------
/notes/firewall.md:
--------------------------------------------------------------------------------
1 | ## Firewalls
2 |
A firewall controls which network traffic is allowed into and out of your computer. It inspects packets as they arrive and leave and accepts or rejects them according to a set of rules. In Linux, there are several utilities to manage your firewall, including `iptables`, `ufw`, and `firewalld`.
4 |
5 | ```
6 | INTERNET TRAFFIC ---> |--------------------------| ---> INTERNAL NETWORK
7 | [IP:123.45.67.89] | | [Accepted IP: 123.45.67.89]
8 | Port 80 (HTTP) | +----------------+ | Port 80 -> Allowed
9 | | | FIREWALL | |
10 | | | Rules Applied: | |
11 | [IP: 98.76.54.32] | | - Allow HTTP | | [Rejected IP: 98.76.54.32]
12 | Port 22 (SSH) | | - Block SSH | | Port 22 -> Blocked
13 | | +----------------+ |
14 | | |
15 | |--------------------------|
16 | ```
17 |
18 | ## Iptables
19 |
`iptables` is a command-line utility for managing the Linux kernel's built-in firewall. It is pre-installed on most Linux systems and lets you configure rules to control incoming and outgoing network traffic.
21 |
22 | To view the current rules, use the `-L` flag:
23 |
24 | ```bash
25 | iptables -L
26 | ```
27 |
28 | An example output might look something like this:
29 |
30 | ```
31 | Chain INPUT (policy ACCEPT)
32 | target prot opt source destination
33 | ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
34 | DROP icmp -- anywhere anywhere icmp echo-request
35 | ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
36 | LOG all -- anywhere anywhere limit: avg 10/min burst 5 LOG level debug prefix "iptables denied: "
37 |
38 | Chain FORWARD (policy DROP)
39 | target prot opt source destination
40 | ACCEPT all -- 192.168.1.0/24 192.168.1.0/24
41 | DROP all -- anywhere anywhere
42 |
43 | Chain OUTPUT (policy ACCEPT)
44 | target prot opt source destination
45 | ```
46 |
47 | Explanation:
48 |
49 | I. **Chain Names**: `INPUT`, `FORWARD`, `OUTPUT` are the default chains in iptables.
50 |
51 | II. **Policies**: Set to `ACCEPT`, `DROP`, or `REJECT`. For example, the default policy for `FORWARD` is `DROP`.
52 |
53 | III. **Rules**: Listed under each chain.
54 |
55 | - **Target**: The action to take, e.g., `ACCEPT`, `DROP`, `LOG`.
56 | - **Prot**: The protocol, e.g., `tcp`, `udp`, `icmp`, or `all`.
57 | - **Opt**: Options, often includes flags like `--`.
58 | - **Source and Destination**: IP addresses or ranges for source and/or destination.
59 | - **Additional Conditions**: For example, `tcp dpt:ssh` means TCP packets destined for SSH port.
60 | - **Logging**: The `LOG` rule can specify logging of packets, including a prefix for log messages.
61 |
62 | To add a new rule, use the `-A` flag followed by the rule itself. For example, to allow incoming traffic on port 80 (used for HTTP), use:
63 |
64 | ```bash
65 | iptables -A INPUT -p tcp --dport 80 -j ACCEPT
66 | ```
67 |
68 | To delete a rule, use the `-D` flag followed by the rule number (as displayed by the `-L` flag). For example, to delete the second rule in the INPUT chain, use:
69 |
70 | ```bash
71 | iptables -D INPUT 2
72 | ```
73 |
🔴 Caution: Keep in mind that rules configured with `iptables` are not persistent; they are lost when you restart your computer. To keep the changes, save them to a file and restore them when the system boots.
75 |
76 | I. On Debian-based systems, you can save the current iptables configuration with:
77 |
78 | ```bash
79 | iptables-save > /etc/iptables/rules.v4
80 | ```
81 |
And ensure they are reloaded on boot by installing the `iptables-persistent` package; you can also restore a saved ruleset manually with `iptables-restore < /etc/iptables/rules.v4`.
83 |
84 | II. On Red Hat-based systems, you can save the configuration with:
85 |
86 | ```bash
87 | service iptables save
88 | ```
89 |
90 | ## UFW
91 |
92 | UFW (Uncomplicated Firewall) is a user-friendly alternative to iptables for managing the Linux firewall. It is pre-installed on many Linux distributions, including Ubuntu.
93 |
94 | To view the configured rules together with their index numbers, use the `status numbered` subcommand:
95 |
96 | ```bash
97 | ufw status numbered
98 | ```
99 |
100 | An example output of this command might look something like this:
101 |
102 | ```
103 | Status: active
104 |
105 | To Action From
106 | -- ------ ----
107 | [ 1] 22/tcp ALLOW IN Anywhere
108 | [ 2] 80/tcp ALLOW IN Anywhere
109 | [ 3] 443/tcp ALLOW IN Anywhere
110 | [ 4] 1000:2000/tcp ALLOW IN 192.168.1.0/24
111 | [ 5] 22/tcp ALLOW IN Anywhere (v6)
112 | [ 6] 80/tcp ALLOW IN Anywhere (v6)
113 | [ 7] 443/tcp ALLOW IN Anywhere (v6)
114 | ```
115 |
116 | Explanation:
117 |
118 | I. **Status**: Indicates whether the firewall is active or inactive. In this case, it's `active`.
119 |
120 | II. **Columns in the Output**:
121 |
122 | - **To**: This column shows the port or port range and protocol (like `22/tcp`) for which the rule is applied.
123 | - **Action**: Specifies the action (`ALLOW IN`, `DENY`, etc.) taken by the firewall for matching traffic.
124 | - **From**: This column indicates the source of the traffic for which the rule is applicable. It can be an IP address, a subnet, or `Anywhere`.
125 |
126 | III. **Numbered Rules**: Each rule is prefixed with a number in square brackets (e.g., `[ 1]`). This numbering is crucial for modifying or deleting specific rules, as it allows you to reference them easily.
127 |
128 | IV. **IPv4 and IPv6 Rules**: Entries suffixed with `(v6)` apply to IPv6 traffic; the corresponding unsuffixed entries apply to IPv4.
129 |
130 | To allow incoming traffic on a specific port, use the `allow` command followed by a port number, a port/protocol pair, or a service name. For example, to allow incoming `SSH` connections (which use port 22 by default), use:
131 |
132 | ```bash
133 | ufw allow ssh
134 | ```
135 |
136 | To block incoming traffic on a specific port, use the `deny` command in the same way. For example, to block incoming `HTTP` connections (which use port 80 by default), use:
137 |
138 | ```bash
139 | ufw deny http
140 | ```
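More specific rules, like entry [4] in the sample output above, which limits a TCP port range to one subnet, use `ufw`'s long form. A sketch mirroring that sample entry (the subnet and range are taken from the example output):

```bash
# Allow TCP ports 1000-2000 only from the 192.168.1.0/24 subnet
ufw allow proto tcp from 192.168.1.0/24 to any port 1000:2000
```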
141 |
142 | To activate the firewall and apply the rules, use the enable command:
143 |
144 | ```bash
145 | ufw enable
146 | ```
147 |
148 | You can also set default policies for incoming and outgoing traffic using the default command. For example, to deny all incoming traffic and allow all outgoing traffic, use:
149 |
150 | ```bash
151 | ufw default deny incoming
152 | ufw default allow outgoing
153 | ufw enable
154 | ```
155 |
156 | ## Firewalld
157 |
158 | Firewalld is a dynamic firewall management tool used by Fedora, RHEL, and other Linux distributions. It lets you configure the firewall using zones, which are collections of rules that apply to specific network interfaces or traffic sources.
159 |
160 | To view the currently configured rules, use the `--list-all` flag:
161 |
162 | ```bash
163 | firewall-cmd --list-all
164 | ```
165 |
166 | An example output might look something like this:
167 |
168 | ```
169 | public (active)
170 | target: default
171 | icmp-block-inversion: no
172 | interfaces: eth0
173 | sources:
174 | services: ssh dhcpv6-client http https
175 | ports: 8080/tcp 9090/tcp
176 | protocols:
177 | masquerade: no
178 | forward-ports:
179 | source-ports:
180 | icmp-blocks:
181 | rich rules:
182 | rule family="ipv4" source address="192.168.0.0/24" accept
183 | rule family="ipv4" source address="10.0.0.0/8" port port="443" protocol="tcp" accept
184 | ```
185 |
186 | Explanation:
187 |
188 | - **Zone**: The name of the zone (e.g., `public`) and its status (`active`).
189 | - **Target**: The default action for incoming traffic not matching any other rule.
190 | - **Interfaces**: Network interfaces (e.g., `eth0`) associated with the zone.
191 | - **Services**: Predefined services allowed in this zone (e.g., `ssh`, `http`, `https`).
192 | - **Ports**: Custom ports that are open (e.g., `8080/tcp`, `9090/tcp`).
193 | - **Protocols, Masquerade, Forward-ports, Source-ports, Icmp-blocks**: Other network settings and rules.
194 | - **Rich Rules**: More complex rules defined, like allowing specific IP ranges on certain ports. For example, the rule allowing all traffic from the `192.168.0.0/24` subnet, and allowing TCP traffic on port `443` from the `10.0.0.0/8` subnet.
195 |
196 | I. Adding Rules
197 |
198 | To add a new rule, use the `--add-service` flag followed by the service name. For example, to allow incoming `SSH` connections, use:
199 |
200 | ```bash
201 | firewall-cmd --permanent --add-service=ssh
202 | ```
203 |
204 | II. Removing Rules
205 |
206 | To remove a rule, use the `--remove-service` flag followed by the service name. For example, to block incoming `HTTP` connections, use:
207 |
208 | ```bash
209 | firewall-cmd --permanent --remove-service=http
210 | ```
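Rich rules like the ones shown in the sample output are managed with `--add-rich-rule`. For example, the subnet-accept rule from that output could have been created with (run as root):

```bash
# Accept all IPv4 traffic from the 192.168.0.0/24 subnet
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" accept'
```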
211 |
212 | III. Applying Changes
213 |
214 | To apply the changes and reload the firewall, use the `--reload` flag:
215 |
216 | ```bash
217 | firewall-cmd --reload
218 | ```
219 |
220 | Rules added with `--permanent` are already saved to disk and survive reboots; `--reload` simply applies them to the running firewall. If you need to restart the daemon itself, use systemctl:
221 |
222 | ```bash
223 | systemctl restart firewalld.service
224 | ```
225 |
226 | ### Challenges
227 |
228 | 1. Block all incoming traffic on port 80 (HTTP) while allowing all incoming traffic on port 22 (SSH). Test the configuration by attempting to access the system on both ports and verify that HTTP is blocked while SSH remains accessible. Discuss the importance of selectively blocking and allowing specific ports for securing a system.
229 | 2. Configure the firewall to deny all incoming traffic by default and allow all outgoing traffic. Explain the purpose of setting default policies, and discuss how a restrictive incoming policy can improve system security by blocking unsolicited connections.
230 | 3. Create a firewall rule to block incoming ICMP echo requests, effectively preventing ping requests. Test the configuration by pinging the system from another device, and discuss why blocking ICMP can be a useful security measure for preventing network reconnaissance.
231 | 4. Set up a rule to allow incoming traffic on port 80 (HTTP) only from a specific IP address. Test this rule by trying to access the system from the allowed IP address and from a different IP. Discuss the use cases for restricting access to specific IP addresses and how it helps secure services exposed to the internet.
232 | 5. What happens if you manually modify iptables rules on a remote server? Is recovery possible, and could you still connect via SSH afterward?
233 | 6. Modify firewall rules to allow SSH access (port 22) only from a set of predefined IP addresses. Test access from an allowed IP and a blocked IP, and discuss the importance of limiting SSH access to trusted sources as a security best practice for remote management.
234 | 7. Implement a rule to limit the rate of incoming connections on a specific port (e.g., 100 connections per minute) to mitigate potential DoS attacks. Simulate an excessive number of connections to this port, and monitor the firewall’s response. Explain how rate limiting can protect services from abuse and help maintain system availability.
235 | 8. Set up the firewall to log details of all dropped packets for analysis and monitoring purposes. Review the log entries to ensure that dropped packets are being recorded, and explain how logging provides insights into unauthorized access attempts and potential threats.
236 | 9. Create a rule to forward traffic incoming on a specific port (e.g., 8080) to another port (e.g., 80). Test this configuration by sending requests to port 8080 and verifying that they reach the service listening on port 80. Discuss port forwarding as a method for managing and redirecting traffic to internal services.
237 | 10. Configure the firewall to block all outgoing traffic to specific domains or IP addresses. Test this by attempting to connect to the blocked addresses, and discuss scenarios where limiting outgoing connections is beneficial, such as preventing communication with known malicious domains.
238 |
--------------------------------------------------------------------------------
/notes/tar_and_gzip.md:
--------------------------------------------------------------------------------
1 | ## File Compression and Archiving Commands
2 |
3 | Working with files on Unix-based systems often involves managing multiple files and directories, especially when it comes to storage or transferring data. Tools like `tar` and `gzip` are invaluable for packaging and compressing files efficiently. Understanding how to use these commands can simplify tasks like backing up data, sharing files, or deploying applications.
4 |
5 | Imagine you have a collection of files and folders that you want to bundle together into a single package. Think of it as packing items into a suitcase for a trip—`tar` acts as the suitcase that holds everything together.
6 |
7 | ```
8 | Files and Directories:
9 |
10 | +-----------+ +-----------+ +-----------+
11 | | Folder1 | | Folder2 | | File1 |
12 | +-----------+ +-----------+ +-----------+
13 | \ | /
14 | \ | /
15 | \ | /
16 | \ | /
17 | \ | /
18 | \ | /
19 | \ | /
20 | \ | /
21 | \ | /
22 | \ | /
23 | \ | /
24 | \ | /
25 | \ | /
26 | \ | /
27 | \ | /
28 | \| /
29 | +-----------------+
30 | | Tar Archive |
31 | +-----------------+
32 | ```
33 |
34 | In this diagram, multiple folders and files are combined into a single tar archive. Now, to make this package even more manageable, especially for transferring over networks or saving space, we can compress it using `gzip`. This is akin to vacuum-sealing your suitcase to make it as compact as possible.
35 |
36 | ```
37 | Tar Archive:
38 |
39 | +-----------------+
40 | | Tar Archive |
41 | +-----------------+
42 | |
43 | v
44 | +----------------------+
45 | | Gzipped Tar Archive |
46 | +----------------------+
47 | ```
48 |
49 | By compressing the tar archive, we reduce its size, making it faster to transfer and requiring less storage space.
50 |
51 | ### Tar
52 |
53 | The `tar` command stands for "tape archive," a name that harks back to when data was stored on magnetic tapes. Despite its historical name, `tar` remains a powerful utility for creating and manipulating archive files on modern systems. It consolidates multiple files and directories into a single archive file while preserving important metadata like file permissions, ownership, and timestamps.
54 |
55 | Some common options used with the `tar` command include:
56 |
57 | | Option | Description |
58 | |--------|------------------------------------------------|
59 | | `-c` | Create a new archive |
60 | | `-v` | Verbosely list files processed |
61 | | `-f` | Specify the filename of the archive |
62 | | `-x` | Extract files from an archive |
63 | | `-t` | List the contents of an archive |
64 | | `-z` | Compress or decompress the archive using gzip |
65 | | `-j` | Compress or decompress the archive using bzip2 |
66 | | `-C` | Change to a directory before performing actions|
67 |
68 | For example, to create a tar archive named `archive.tar` containing the directories `dir1`, `dir2`, and the file `file1.txt`, you would use:
69 |
70 | ```bash
71 | tar -cvf archive.tar dir1 dir2 file1.txt
72 | ```
73 |
74 | Breaking down this command:
75 |
76 | - `-c` tells `tar` to create a new archive.
77 | - `-v` enables verbose mode, so it lists the files being processed.
78 | - `-f archive.tar` specifies the name of the archive file to create.
79 |
80 | Upon running this command, you might see output like:
81 |
82 | ```
83 | dir1/
84 | dir1/file2.txt
85 | dir2/
86 | dir2/file3.txt
87 | file1.txt
88 | ```
89 |
90 | This output shows that `tar` is including each specified file and directory into the archive.
91 |
92 | ### Compressing the Archive with `gzip`
93 |
94 | While `tar` itself does not compress files, it can be combined with compression utilities like `gzip` to reduce the size of the archive. This is often done by adding the `-z` option to the `tar` command.
95 |
96 | To create a compressed tar archive (often called a "tarball") using gzip, you would run:
97 |
98 | ```bash
99 | tar -czvf archive.tar.gz dir1 dir2 file1.txt
100 | ```
101 |
102 | Here, the `-z` option tells `tar` to compress the archive using gzip. The resulting file `archive.tar.gz` is both an archive and compressed.
103 |
104 | ### Extracting Files from an Archive
105 |
106 | To extract files from a tar archive, you use the `-x` option. For example:
107 |
108 | ```bash
109 | tar -xvf archive.tar
110 | ```
111 |
112 | This command extracts all files from `archive.tar` into the current directory. If the archive was compressed with gzip, you can still extract it in one step:
113 |
114 | ```bash
115 | tar -xzvf archive.tar.gz
116 | ```
117 |
118 | Again, the `-z` option is used to indicate that the archive is compressed with gzip.
119 |
120 | ### Listing the Contents of an Archive
121 |
122 | Before extracting files, you might want to see what's inside an archive. You can do this with the `-t` option:
123 |
124 | ```bash
125 | tar -tvf archive.tar
126 | ```
127 |
128 | Or for a compressed archive:
129 |
130 | ```bash
131 | tar -tzvf archive.tar.gz
132 | ```
133 |
134 | This command lists all the files contained in the archive without extracting them. The output might look like:
135 |
136 | ```
137 | -rw-r--r-- user/group 1024 2024-10-10 12:00 dir1/file2.txt
138 | -rw-r--r-- user/group 2048 2024-10-10 12:01 dir2/file3.txt
139 | -rw-r--r-- user/group 512 2024-10-10 12:02 file1.txt
140 | ```
141 |
142 | ### Using `gzip` Independently
143 |
144 | The `gzip` command can also be used on its own to compress individual files. For example, to compress a file named `largefile.txt`, you can use:
145 |
146 | ```bash
147 | gzip largefile.txt
148 | ```
149 |
150 | This command replaces `largefile.txt` with a compressed file named `largefile.txt.gz`.
151 |
152 | To decompress the file, you can use:
153 |
154 | ```bash
155 | gzip -d largefile.txt.gz
156 | ```
157 |
158 | Or equivalently:
159 |
160 | ```bash
161 | gunzip largefile.txt.gz
162 | ```
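A quick way to convince yourself that compression is lossless is a round trip: compress, decompress, and compare checksums. The file and directory names below are illustrative, created in a throwaway directory:

```bash
set -e
workdir=$(mktemp -d)
echo "hello compression" > "$workdir/sample.txt"
sum_before=$(cksum < "$workdir/sample.txt")
gzip "$workdir/sample.txt"        # creates sample.txt.gz, removes the original
gunzip "$workdir/sample.txt.gz"   # restores sample.txt
sum_after=$(cksum < "$workdir/sample.txt")
echo "before: $sum_before"
echo "after:  $sum_after"
rm -rf "$workdir"
```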
163 |
164 | ### Practical Examples
165 |
166 | #### Backing Up a Directory
167 |
168 | Suppose you have a directory called `project` that you want to back up. You can create a compressed archive of the directory with:
169 |
170 | ```bash
171 | tar -czvf project_backup.tar.gz project
172 | ```
173 |
174 | This command creates a compressed tarball named `project_backup.tar.gz` containing the entire `project` directory.
175 |
176 | #### Extracting to a Specific Directory
177 |
178 | If you want to extract the contents of an archive to a specific directory, you can use the `-C` option. For example:
179 |
180 | ```bash
181 | tar -xzvf project_backup.tar.gz -C /path/to/destination
182 | ```
183 |
184 | This command extracts the contents of `project_backup.tar.gz` into `/path/to/destination`.
185 |
186 | ### Understanding File Permissions and Ownership
187 |
188 | One of the strengths of using `tar` is that it preserves file permissions and ownership by default. This is important when you're archiving files that need to maintain their original access rights.
189 |
190 | For instance, if a file is owned by `user1` and has specific permissions, when you extract the archive as a different user, `tar` will attempt to preserve the original ownership and permissions. If you have the necessary permissions (e.g., running as root), the files will retain their original ownership.
191 |
192 | ### Excluding Files from an Archive
193 |
194 | Sometimes, you might want to exclude certain files or directories when creating an archive. You can use the `--exclude` option to do this.
195 |
196 | For example:
197 |
198 | ```bash
199 | tar -czvf archive.tar.gz dir1 --exclude='dir1/tmp/*'
200 | ```
201 |
202 | This command archives `dir1` but excludes all files in the `dir1/tmp` directory.
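You can verify the exclusion without extracting anything by listing the archive with `-t`. A small self-contained sketch (all paths are made up and created in a throwaway directory):

```bash
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/dir1/tmp"
echo keep > "$workdir/dir1/keep.txt"
echo skip > "$workdir/dir1/tmp/skip.txt"
# Archive dir1 but leave out everything under dir1/tmp
tar -C "$workdir" --exclude='dir1/tmp/*' -czf "$workdir/archive.tar.gz" dir1
listing=$(tar -tzf "$workdir/archive.tar.gz")
echo "$listing"
rm -rf "$workdir"
```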
203 |
204 | ### Archiving Over SSH
205 |
206 | You can create an archive and transfer it over SSH in one step. This is useful for backing up data from a remote server.
207 |
208 | ```bash
209 | ssh user@remotehost "tar -czvf - /path/to/dir" > archive.tar.gz
210 | ```
211 |
212 | In this command:
213 |
214 | - `ssh user@remotehost` connects to the remote host.
215 | - `"tar -czvf - /path/to/dir"` runs the `tar` command on the remote host, with `-` as the filename, which means the output is sent to stdout.
216 | - `> archive.tar.gz` redirects the output to a file on the local machine.
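The same "archive to stdout" idea works entirely locally: piping one `tar` into another copies a directory tree without writing an intermediate archive file. The paths below are illustrative:

```bash
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
echo "data" > "$src/file.txt"
# First tar writes the tree to stdout; second tar reads it from stdin
tar -C "$src" -cf - . | tar -C "$dst" -xf -
copied=$(cat "$dst/file.txt")
echo "copied contents: $copied"
rm -rf "$src" "$dst"
```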
217 |
218 | ### Splitting Large Archives
219 |
220 | For very large archives, you might need to split the archive into smaller pieces. You can do this using the `split` command.
221 |
222 | First, create the archive without compression:
223 |
224 | ```bash
225 | tar -cvf large_archive.tar dir_to_archive
226 | ```
227 |
228 | Then split the archive into pieces of 100MB:
229 |
230 | ```bash
231 | split -b 100M large_archive.tar "archive_part_"
232 | ```
233 |
234 | This command creates files named `archive_part_aa`, `archive_part_ab`, etc.
235 |
236 | To reconstruct the original archive, you can concatenate the parts:
237 |
238 | ```bash
239 | cat archive_part_* > large_archive.tar
240 | ```
241 |
242 | Then extract the archive as usual.
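It is worth sanity-checking that the pieces reassemble into a byte-identical archive before deleting the original. A scaled-down sketch (100K pieces instead of 100M, with made-up file names, so it runs quickly):

```bash
set -e
workdir=$(mktemp -d)
# Create a ~300 KB payload, archive it, and split the archive
dd if=/dev/urandom of="$workdir/payload.bin" bs=1024 count=300 2>/dev/null
tar -C "$workdir" -cf "$workdir/large_archive.tar" payload.bin
split -b 100K "$workdir/large_archive.tar" "$workdir/archive_part_"
# Concatenate the parts and compare byte-for-byte with the original
cat "$workdir"/archive_part_* > "$workdir/rebuilt.tar"
identical=$(cmp -s "$workdir/large_archive.tar" "$workdir/rebuilt.tar" && echo yes || echo no)
echo "identical: $identical"
rm -rf "$workdir"
```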
243 |
244 | ### Using Other Compression Tools with `tar`
245 |
246 | While `gzip` is commonly used, `tar` can work with other compression tools like `bzip2` and `xz` for better compression ratios.
247 |
248 | #### Using `bzip2`
249 |
250 | To create a tar archive compressed with `bzip2`, use the `-j` option:
251 |
252 | ```bash
253 | tar -cjvf archive.tar.bz2 dir1 dir2 file1.txt
254 | ```
255 |
256 | To extract:
257 |
258 | ```bash
259 | tar -xjvf archive.tar.bz2
260 | ```
261 |
262 | #### Using `xz`
263 |
264 | For `xz` compression, use the `-J` option:
265 |
266 | ```bash
267 | tar -cJvf archive.tar.xz dir1 dir2 file1.txt
268 | ```
269 |
270 | To extract:
271 |
272 | ```bash
273 | tar -xJvf archive.tar.xz
274 | ```
275 |
276 | ### Common Pitfalls and Tips
277 |
278 | - The order of options matters. For example, `-czvf` is not the same as `-cfvz`. Typically, you should specify the action (`-c`, `-x`, `-t`) first, followed by other options.
279 | - While it's common to use `.tar.gz` for gzip-compressed archives, the extension does not affect how the file is processed. However, using standard extensions helps others understand the file format.
280 | - When extracting archives as a different user, you might encounter permission issues. Running `tar` with `sudo` can help preserve ownership.
281 | - By default, `tar` will overwrite existing files when extracting. Use the `--keep-old-files` option to prevent this.
282 |
283 | ### Challenges
284 |
285 | 1. Make an archive of your home folder using `tar`. To check that everything was included, copy your archives to the `/tmp` folder and extract the files there. Delete the copies from `/tmp` when done.
286 | 2. Use `tar` with and without the `-z` option to create an archive of any folder. Compare the sizes of your original folder, the archive, and the compressed archive.
287 | 3. Create a script that compresses all `.txt` files in a given folder using `gzip`. The script should skip already compressed files (with `.gz` extension).
288 | 4. Use a combination of `tar` and `gzip` commands to create a compressed archive of a folder. Then, extract the archive to a new location and compare the contents of the original folder and the extracted folder to ensure they are identical.
289 | 5. Create a compressed archive using tar and gzip. Then, use the `gzip -l` command to view the compression ratio and other details about the compressed archive.
290 | 6. Write a script that takes a folder as input and creates an archive for each subfolder within that folder. The script should name the archives based on the subfolder names.
291 | 7. Compress a folder using tar and `gzip`, then decompress it using the same tools. Time how long each operation takes and compare the results.
292 | 8. Create a compressed archive of a folder containing various file types (e.g., text files, images, audio files). Use the `file` command to determine the types of files in the compressed archive without extracting it.
293 | 9. Write a script that finds the largest file in a directory and compresses it using `gzip`. The script should display the file name and its size before and after compression.
294 |
--------------------------------------------------------------------------------
/notes/disk_usage.md:
--------------------------------------------------------------------------------
1 | ## Disk Usage Management
2 |
3 | Managing and monitoring disk usage is necessary for server maintenance, allowing administrators to identify disk space shortages caused by large log files, such as Apache or system logs, and malfunctioning applications that generate excessive data. Tools like `df` provide quick overviews of available disk space, while `du` helps analyze directory sizes to locate space hogs. For planning future storage needs, tracking data growth with monitoring software like Nagios or Grafana enables accurate forecasting and timely upgrades of storage hardware or cloud solutions. Regularly cleaning up unused files involves deleting obsolete backups, removing temporary files from `/tmp`, archiving old user data, and eliminating redundant application caches using automated scripts or cleanup utilities like BleachBit.
4 |
9 | ### Understanding the `df` Command
10 |
11 | The `df` (disk filesystem) command provides information about the filesystems on your machine. It shows details such as total size, used space, available space, and the percentage of space used. To display these statistics in a human-readable format, using units like KB, MB, or GB, you can use the `-h` (human-readable) option.
12 |
13 | For example, executing `df -h` might produce an output like the following:
14 |
15 | | Filesystem | Size | Used | Available | Use% | Mounted on |
16 | | --- | --- | --- | --- | --- | --- |
17 | | /dev/sda1 | 2.0T | 1.0T | 1.0T | 50% | / |
18 | | /dev/sda2 | 500G | 200G | 300G | 40% | /boot |
19 |
20 | This output provides the following information:
21 |
22 | * `Filesystem`: The name of each filesystem.
23 | * `Size`: The total size of each filesystem.
24 | * `Used`: The amount of space that has been used within each filesystem.
25 | * `Available`: The remaining free space within each filesystem.
26 | * `Use%`: The percentage of total space that has been used in each filesystem.
27 | * `Mounted on`: The mount point of each filesystem, indicating where it is accessible within the system's directory structure.
28 |
29 | ### Exploring the `du` Command
30 |
31 | The `du` (disk usage) command is used to estimate the space occupied by files or directories. To display the output in a human-readable format, you can use the `-h` option. The `-s` option provides a summarized result for directories. For example, running `du -sh .` will show the total size of the current directory in a human-readable format.
32 |
33 | To find the top 10 largest directories starting from the root directory (`/`), you can use the following command:
34 |
35 | ```bash
36 | du -x / | sort -nr | head -10
37 | ```
38 |
39 | An example output might look like this:
40 |
41 | ```
42 | 10485760 /usr
43 | 5120000 /var
44 | 2097152 /lib
45 | 1024000 /opt
46 | 524288 /boot
47 | 256000 /home
48 | 128000 /bin
49 | 64000 /sbin
50 | 32000 /etc
51 | 16000 /tmp
52 | ```
53 |
54 | In this command:
55 |
56 | - `du -x /` calculates the size of each directory within the root filesystem.
57 | - `sort -nr` sorts these sizes in descending numerical order.
58 | - `head -10` limits the output to the top 10 largest directories.
59 |
60 | This command sequence helps you quickly identify the directories consuming the most space on your system.
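The same pipeline can be tried safely on a throwaway directory tree. In this sketch (directory names are made up), `a` is deliberately larger than `b`, so it should come out on top:

```bash
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/a" "$workdir/b"
dd if=/dev/zero of="$workdir/a/big" bs=1024 count=200 2>/dev/null
dd if=/dev/zero of="$workdir/b/small" bs=1024 count=10 2>/dev/null
# Largest directory first, exactly as in the pipeline above
top=$(du "$workdir"/a "$workdir"/b | sort -nr | head -1)
echo "largest: $top"
rm -rf "$workdir"
```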
61 |
62 | To further improve the speed of the `du` command, especially when dealing with many subdirectories, you can use `xargs -P` to parallelize the processing. This approach takes advantage of multiple CPU cores, allowing `du` to run on multiple directories simultaneously. Additionally, combining it with `awk` can help format the output more cleanly.
63 |
64 | Here’s an enhanced example that finds the top 10 largest directories and uses `xargs` to speed up the process:
65 |
66 | ```bash
67 | find / -maxdepth 1 -type d | xargs -I{} -P 4 du -sh {} 2>/dev/null | sort -hr | head -10 | awk '{printf "%-10s %s\n", $1, $2}'
68 | ```
69 |
70 | Explanation:
71 |
72 | I. `find / -maxdepth 1 -type d`: This command finds all directories at the root level (`/`), limiting the search to the top-level directories only (`-maxdepth 1`).
73 |
74 | II. `xargs -I{} -P 4 du -sh {} 2>/dev/null`:
75 |
76 | - `xargs` takes the output of `find` and passes each directory to the `du` command.
77 | - `-I{}` is used to specify the replacement string `{}` for the directory name.
78 | - `-P 4` specifies that up to 4 `du` processes can run in parallel, leveraging multiple cores for faster execution.
79 | - `du -sh {}` calculates the size of each directory in a human-readable format.
80 | - `2>/dev/null` suppresses any error messages, such as permission denied errors.
81 |
82 | III. `sort -hr`: Sorts the output in human-readable format and in reverse order, so the largest directories come first.
83 |
84 | IV. `head -10`: Limits the output to the top 10 largest directories.
85 |
86 | V. `awk '{printf "%-10s %s\n", $1, $2}'`: Formats the output, ensuring the size and directory name align neatly. The `%-10s` ensures the size column has a fixed width, making the output more readable.
87 |
88 | By using `xargs -P`, you can significantly reduce the time it takes to compute the disk usage of directories, especially on systems with many directories and multiple CPU cores. This method effectively utilizes system resources to perform the operation more efficiently.
89 |
90 | ### The `ncdu` Command
91 |
92 | For a more visual and interactive representation of disk usage, you can use `ncdu` (NCurses Disk Usage). `ncdu` is an ncurses-based tool that provides a user-friendly interface to quickly assess which directories are consuming the most disk space. If it is not already installed, you can install it via your package manager, such as `apt` for Debian-based systems or `yum` for Red Hat-based systems.
93 |
94 | Running the command `ncdu -x /` will start the program at the root directory (`/`) and present an interactive interface. Here, you can navigate through directories using arrow keys and view their sizes, making it easier to identify space hogs.
95 |
96 | Here’s an example of what the output might look like in a non-interactive, textual representation:
97 |
98 | ```
99 | ncdu 1.15 ~ Use the arrow keys to navigate, press ? for help
100 | --- / -----------------------------------------------------------------------
101 | 4.6 GiB [##########] /usr
102 | 2.1 GiB [#### ] /var
103 | 600.0 MiB [# ] /lib
104 | 500.0 MiB [# ] /opt
105 | 400.0 MiB [ ] /boot
106 | 300.0 MiB [ ] /sbin
107 | 200.0 MiB [ ] /bin
108 | 100.0 MiB [ ] /etc
109 | 50.0 MiB [ ] /tmp
110 | 20.0 MiB [ ] /home
111 | 10.0 MiB [ ] /root
112 | 5.0 MiB [ ] /run
113 | 1.0 MiB [ ] /srv
114 | 0.5 MiB [ ] /dev
115 | 0.1 MiB [ ] /mnt
116 | 0.0 MiB [ ] /proc
117 | 0.0 MiB [ ] /sys
118 | Total disk usage: 8.8 GiB Apparent size: 8.8 GiB Items: 123456
119 | ```
120 |
121 | In this output:
122 |
123 | - The bar `[##########]` visually represents the proportion of disk space used by each directory.
124 | - The size of each directory is displayed, making it easy to compare.
125 | - The total disk usage and apparent size are summarized at the bottom, along with the total number of items analyzed.
126 |
127 | `ncdu` is especially useful for quickly finding large directories and files, thanks to its intuitive interface. The ability to easily navigate through directories makes it a powerful tool for managing disk space on your system.
128 |
129 | ### Cleaning Up Disk Space
130 |
131 | Once you've identified what's using your disk space, the next step is often to free up space. Here are a few strategies:
132 |
133 | - Removing unnecessary packages and dependencies is an effective way to free up disk space. Over time, systems can accumulate outdated or unused packages, which can be safely removed. For instance, on a Debian-based system like Ubuntu, the `apt-get autoremove` command can help clean out these unused packages.
134 | - Clearing the package manager cache can also reclaim significant disk space. Package managers often store downloaded packages in a cache, which can grow large over time. On systems using `apt`, you can use the `apt clean` command to clear the cache.
135 | - Finding and removing large files is another strategy. The `find` command can be utilized to search for files exceeding a certain size, enabling users to review and decide if those files should be deleted. For example, `find / -type f -size +100M` will list files larger than 100 MB.
136 | - Using a disk cleanup utility can automate the process of deleting various unnecessary files. Tools like `bleachbit` can efficiently remove temporary files, cache, cookies, internet history, and log files, helping to free up space.
137 | - Archiving and compressing less frequently used data can also save space. Files and directories that are rarely accessed can be compressed using tools like `tar`, `gzip`, or `bzip2`, reducing their size and freeing up more disk space.
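The large-file search mentioned above can be rehearsed on a scaled-down scale: anything above 1 MB in a throwaway directory, rather than `+100M` across `/`. All names below are illustrative:

```bash
set -e
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/large.bin" bs=1M count=2 2>/dev/null
echo "tiny" > "$workdir/small.txt"
# Only files strictly larger than 1 MB should match
found=$(find "$workdir" -type f -size +1M)
echo "$found"
rm -rf "$workdir"
```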
138 |
139 | ### Automating Disk Usage Checks
140 |
141 | For ongoing disk usage monitoring, consider setting up automated tasks. For instance, you can schedule a cron job that runs `df` and `du` at regular intervals and sends reports via email or logs them for later review.
142 |
143 | Monitoring disk usage proactively can prevent potential issues related to low disk space, such as application errors, slow performance, or system crashes.
144 |
145 | #### Bash Script Example for Disk Usage Monitoring
146 |
147 | ```bash
148 | #!/bin/bash
149 |
150 | # Script to monitor disk usage and report
151 |
152 | # Set the path for the log file
153 | LOG_FILE="/var/log/disk_usage_report.log"
154 |
155 | # Get disk usage with df
156 | echo "Disk Usage Report - $(date)" >> "$LOG_FILE"
157 | echo "---------------------------------" >> "$LOG_FILE"
158 | df -h >> "$LOG_FILE"
159 |
160 | # Get top 10 directories consuming space
161 | echo "" >> "$LOG_FILE"
162 | echo "Top 10 Directories by Size:" >> "$LOG_FILE"
163 | du -x / | sort -nr | head -10 >> "$LOG_FILE"
164 |
165 | # Optionally, you can send this log via email instead of writing to a file
166 | # For email, you can use: mail -s "Disk Usage Report" recipient@example.com < "$LOG_FILE"
167 |
168 | # End of script
169 | ```
170 |
171 | - Save it as `disk_usage_monitor.sh`.
172 | - If you prefer to move the script to a standard location for cron jobs and set it up with a single command, you can use a system directory like `/etc/cron.daily`. This directory is used for scripts that should be run daily by the system's cron daemon. Here's how you can do it:
173 |
174 | ```bash
175 | sudo chmod +x /path/to/disk_usage_monitor.sh && sudo mv /path/to/disk_usage_monitor.sh /etc/cron.daily/
176 | ```
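A cron script becomes more useful when it only raises an alarm past a threshold. Here is a minimal sketch; `check_usage` is a hypothetical helper name, and it parses the portable `df -P` output format:

```bash
# check_usage: read `df -P` output on stdin and print any filesystem whose
# Use% column is at or above the threshold given as the first argument
check_usage() {
  awk -v t="$1" 'NR > 1 { use = $5; sub("%", "", use); if (use + 0 >= t) print $6, use "%" }'
}

# Report mount points that are 90% full or worse (may print nothing)
df -P | check_usage 90
```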
177 |
178 | ### Challenges
179 |
180 | 1. Explain the concept of filesystems and mount points, and then display the available free space on the root filesystem (`/`). Discuss why monitoring free space on the root is crucial for system stability.
181 | 2. List all currently mounted filesystems and calculate the percentage of space used on each. Explain the importance of monitoring multiple filesystems, especially in systems with separate partitions for critical directories like `/var`, `/home`, or `/boot`.
182 | 3. Identify all filesystems configured on the system, whether mounted or not, and display relevant information such as filesystem type, size, and last mount point. Discuss the purpose of different filesystem types and reasons they might not be mounted.
183 | 4. Calculate the total size of the directory you’re in, including all files and subdirectories. Discuss recursive disk usage and the impact of nested directories on storage.
184 | 5. Provide a breakdown of disk space usage within the `/home` directory for each user. Discuss the significance of managing space within `/home` and how it affects individual user accounts.
185 | 6. List the top 10 directories consuming the most disk space across the entire system. Explain how these large directories can affect disk performance and the importance of periodically checking them.
186 | 7. Track data being written to the disk in real-time for a set period, displaying a summary of write activity. Discuss the reasons behind tracking disk write activity, including potential implications for system performance and health.
187 | 8. Identify individual files that occupy the most space on the disk. Discuss strategies for managing large files and how deleting or relocating these files can reclaim disk space.
188 | 9. Take snapshots of disk usage at two different times and compare them to identify any significant changes or trends. Discuss the importance of historical data in predicting future disk space needs and planning for expansion or cleanup.
189 | 10. Analyze disk usage by categorizing files based on their extensions (e.g., `.txt`, `.jpg`, `.log`). Explain how file type classification can help in identifying disk space hogs and in organizing cleanup strategies.
190 |
--------------------------------------------------------------------------------
/quizzes/files.md:
--------------------------------------------------------------------------------
1 | #### Q. How do you quit out of a `more` session before reaching the end?
2 |
3 | * [ ] `:q`
4 | * [ ] `Ctrl-C`
5 | * [x] `q`
6 | * [ ] `Esc`
7 | * [ ] `ZZ`
8 |
9 | #### Q. Which command displays the amount of free and used disk space on all mounted filesystems?
10 |
11 | * [ ] `du -h`
12 | * [x] `df -h`
13 | * [ ] `lsblk`
14 | * [ ] `fdisk -l`
15 | * [ ] `free -h`
16 |
17 | #### Q. What does the `-h` option do when used with `df` or `du`?
18 |
19 | * [ ] Hides zero-size files
20 | * [x] Shows sizes in human-readable format (e.g., K, M, G)
21 | * [ ] Halts the command after first output
22 | * [ ] Outputs headers only
23 | * [ ] Highlights large files
24 |
25 | #### Q. Which command summarizes disk usage of each subdirectory in the current directory, in human-readable form?
26 |
27 | * [ ] `ls -lh .`
28 | * [ ] `df -h .`
29 | * [x] `du -sh *`
30 | * [ ] `du --all .`
31 | * [ ] `find . -type d -exec du -h {} \;`
32 |
33 | #### Q. In the output of `df -h`, what does the “Available” column represent?
34 |
35 | * [ ] Total capacity of the filesystem
36 | * [ ] Percentage of used space
37 | * [x] Amount of space free for non-root users
38 | * [ ] Size of filesystem metadata
39 | * [ ] Space reserved for snapshots
40 |
41 | #### Q. How do you display memory usage (RAM and swap) in human-readable format?
42 |
43 | * [ ] `df -m`
44 | * [ ] `du -h /proc/meminfo`
45 | * [x] `free -h`
46 | * [ ] `top -m`
47 | * [ ] `vmstat -h`
48 |
49 | #### Q. Which `du` option excludes files and directories that are mounted on other filesystems?
50 |
51 | * [ ] `--no-dereference`
52 | * [x] `-x`
53 | * [ ] `--exclude-type=other`
54 | * [ ] `--separate-dirs`
55 | * [ ] `--max-depth=1`
56 |
57 | #### Q. What does the “Used” column in `free -h` indicate?
58 |
59 | * [ ] Total swap space used only
60 | * [ ] Memory used by cache and buffers only
61 | * [x] Memory in use by processes, excluding buffers and cache
62 | * [ ] Amount of free memory minus cache
63 | * [ ] Memory currently locked by processes
64 |
65 | #### Q. To include filesystem type in the output of `df`, which option is used?
66 |
67 | * [ ] `df --verbose`
68 | * [ ] `df -i`
69 | * [x] `df -T`
70 | * [ ] `df --type`
71 | * [ ] `df -F`
72 |
73 | #### Q. How can you display the top 5 largest directories under `/var` sorted by size?
74 |
75 | * [ ] `du -h /var | sort -h | head -5`
76 | * [ ] `du -sh /var/* | sort -h | head -5`
77 | * [x] `du -sh /var/* | sort -hr | head -5`
78 | * [ ] `df -h /var | sort -hr | head -5`
79 | * [ ] `ls -Sh /var`
80 |
81 | #### Q. Which command shows the inode usage (number of files) for each mounted filesystem?
82 |
83 | * [ ] `df -h`
84 | * [x] `df -i`
85 | * [ ] `du -i`
86 | * [ ] `ls -l /`
87 | * [ ] `stat -f /`
88 |
89 |
90 | #### Q. Which option tells `more` to display user-friendly prompts (e.g., “[Press space to continue, 'q' to quit.]”) instead of ringing the bell?
91 |
92 | * [ ] `more -n`
93 | * [ ] `more -l`
94 | * [x] `more -d`
95 | * [ ] `more -p`
96 | * [ ] `more -c`
97 |
98 | #### Q. What is the effect of the `-c` option when running `more -c file.txt`?
99 |
100 | * [ ] Counts pages before display
101 | * [ ] Continues from last position
102 | * [x] Clears the screen before displaying each page
103 | * [ ] Compresses output to one column
104 | * [ ] Colors matching text
105 |
106 | #### Q. Which command lists all files, including hidden ones, in long format?
107 |
108 | * [ ] `ls -h`
109 | * [ ] `ls -l`
110 | * [x] `ls -la`
111 | * [ ] `ls -a`
112 | * [ ] `ls -lh`
113 |
114 | #### Q. How do you copy a directory named `project` and all its contents to `/backup`?
115 |
116 | * [ ] `cp project /backup`
117 | * [ ] `cp -s project /backup/`
118 | * [x] `cp -a project /backup/`
119 | * [ ] `cp -f project /backup/`
120 | * [ ] `cp -d project /backup/`
121 |
122 | #### Q. Which command moves `file1.txt` into `archive/`, overwriting a write-protected destination without prompting?
123 |
124 | * [ ] `mv -i file1.txt archive/`
125 | * [x] `mv -f file1.txt archive/`
126 | * [ ] `mv -u file1.txt archive/`
127 | * [ ] `mv -n file1.txt archive/`
128 | * [ ] `mv file1.txt archive/`
129 |
130 | #### Q. To remove the directory `old_logs` and its contents recursively, which command is correct?
131 |
132 | * [ ] `rm old_logs`
133 | * [ ] `rmdir old_logs/*`
134 | * [x] `rm -rf old_logs/`
135 | * [ ] `rm -r old_logs/*`
136 | * [ ] `rmdir -r old_logs`
137 |
138 | #### Q. What does `touch newfile.txt` do if `newfile.txt` already exists?
139 |
140 | * [ ] Deletes the file and recreates it
141 | * [ ] Does nothing
142 | * [x] Updates the file’s access and modification timestamps
143 | * [ ] Opens the file in the default editor
144 | * [ ] Converts it to an executable
145 |
146 | #### Q. Which command creates a symbolic link named `link` pointing to `/usr/bin/python3`?
147 |
148 | * [ ] `ln /usr/bin/python3 link`
149 | * [ ] `ln -f /usr/bin/python3 link`
150 | * [ ] `ln --hard /usr/bin/python3 link`
151 | * [x] `ln -s /usr/bin/python3 link`
152 | * [ ] `ln -r /usr/bin/python3 link`
153 |
154 | #### Q. How do you change the owner of `script.sh` to user `alice` and group `devs`?
155 |
156 | * [ ] `chmod alice:devs script.sh`
157 | * [x] `chown alice:devs script.sh`
158 | * [ ] `chgrp alice:devs script.sh`
159 | * [ ] `chown -g devs alice script.sh`
160 | * [ ] `chmod 750 script.sh`
161 |
162 | #### Q. How do you invoke `more` to view the contents of `file.txt` one screen at a time?
163 |
164 | * [ ] `more | file.txt`
165 | * [x] `more file.txt`
166 | * [ ] `cat more file.txt`
167 | * [ ] `view file.txt`
168 | * [ ] `less file.txt`
169 |
170 | #### Q. Which command outputs the entire contents of `file.txt` to standard output?
171 |
172 | * [ ] `head file.txt`
173 | * [ ] `tail file.txt`
174 | * [x] `cat file.txt`
175 | * [ ] `more file.txt`
176 | * [ ] `dd if=file.txt of=/dev/stdout`
177 |
178 | #### Q. By default, how many lines does `head` display from the start of a file?
179 |
180 | * [ ] 5
181 | * [x] 10
182 | * [ ] 15
183 | * [ ] 20
184 | * [ ] 25
185 |
186 | #### Q. Which option shows the last 20 lines of `log.txt`?
187 |
188 | * [ ] `tail log.txt 20`
189 | * [x] `tail -n 20 log.txt`
190 | * [ ] `tail -c 20 log.txt`
191 | * [ ] `tail --head=20 log.txt`
192 | * [ ] `tail -f log.txt 20`
193 |
194 | #### Q. To follow new lines appended to `access.log` in real time, which command is used?
195 |
196 | * [ ] `tail access.log`
197 | * [ ] `cat access.log -f`
198 | * [x] `tail -f access.log`
199 | * [ ] `more +F access.log`
200 | * [ ] `head -f access.log`
201 |
202 | #### Q. Which command reverses the line order when displaying `notes.txt`?
203 |
204 | * [ ] `rev notes.txt`
205 | * [x] `tac notes.txt`
206 | * [ ] `nl notes.txt`
207 | * [ ] `tail -r notes.txt`
208 | * [ ] `awk '1' notes.txt`
209 |
210 | #### Q. How do you display lines 50 through 60 of `data.csv` using a single pipeline?
211 |
212 | * [ ] `head -n 60 data.csv | tail -n 50`
213 | * [ ] `tail -n 50 data.csv | head -n 60`
214 | * [x] `tail -n +50 data.csv | head -n 11`
215 | * [ ] `sed -n '50,60' data.csv`
216 | * [ ] `awk 'NR>50 && NR<60' data.csv`
217 |
218 | #### Q. Which tool numbers each output line when reading `script.sh`?
219 |
220 | * [ ] `cat -E script.sh`
221 | * [x] `nl script.sh`
222 | * [ ] `sed = script.sh`
223 | * [ ] `awk '{print $0}' script.sh`
224 | * [ ] `less -X script.sh`
225 |
226 | #### Q. What does `dd if=/dev/zero of=out.bin bs=1M count=1` do?
227 |
228 | * [ ] Reads one block of zeros from `out.bin`
229 | * [x] Creates a 1 MiB file `out.bin` filled with zeros
230 | * [ ] Appends 1 MiB of zeros to `/dev/zero`
231 | * [ ] Copies `out.bin` to `/dev/zero`
232 | * [ ] Displays the first megabyte of `/dev/zero`
233 |
234 | #### Q. To split `large.txt` into 1000-line files named `xaa`, `xab`, etc., which command is correct?
235 |
236 | * [ ] `split -b 1000 large.txt x`
237 | * [x] `split -l 1000 large.txt x`
238 | * [ ] `csplit -f x -l 1000 large.txt`
239 | * [ ] `split --lines=1000 large.txt xaa`
240 | * [ ] `split -n 1000 large.txt x`
241 |
242 | #### Q. In a Bash script, which built-in reads a line of input into the variable `$line`?
243 |
244 | * [ ] `cat line`
245 | * [ ] `readfile line`
246 | * [x] `read line`
247 | * [ ] `getline line`
248 | * [ ] `scanf "%s" line`
249 |
250 | #### Q. Which key do you press to advance exactly one more line when viewing with `more`?
251 |
252 | * [ ] Spacebar
253 | * [x] Enter
254 | * [ ] `n`
255 | * [ ] `l`
256 | * [ ] `→`
257 |
258 | #### Q. What happens when you press the spacebar while in `more`?
259 |
260 | * [x] It advances one full screen (page)
261 | * [ ] It exits `more`
262 | * [ ] It scrolls backwards one screen
263 | * [ ] It searches for the next pattern
264 | * [ ] It refreshes the display
265 |
266 | #### Q. Which combination of individual `rsync` flags is equivalent to archive mode (`-a`), preserving symbolic links, devices, permissions, ownerships, and timestamps?
267 |
268 | * [ ] `-r`
269 | * [ ] `-rlpt`
270 | * [x] `-rlptgoD`
271 | * [ ] `-z`
272 | * [ ] `-v`
273 |
274 | #### Q. How do you copy a file `file.txt` to `/backup/` using `cp`, prompting before overwrite?
275 |
276 | * [ ] `cp file.txt /backup/`
277 | * [x] `cp -i file.txt /backup/`
278 | * [ ] `cp -f file.txt /backup/`
279 | * [ ] `cp -r file.txt /backup/`
280 | * [ ] `cp -p file.txt /backup/`
281 |
282 | #### Q. What does the `-n` (or `--dry-run`) flag do when used with `rsync`?
283 |
284 | * [ ] Enables network compression
285 | * [ ] Forces overwrite without prompt
286 | * [x] Shows what would be transferred without making changes
287 | * [ ] Limits bandwidth usage
288 | * [ ] Deletes extraneous files from destination
289 |
290 | #### Q. Which `mv` command renames a file `old.txt` to `new.txt`, overwriting without prompting?
291 |
292 | * [ ] `mv -i old.txt new.txt`
293 | * [x] `mv -f old.txt new.txt`
294 | * [ ] `mv old.txt new.txt --no-clobber`
295 | * [ ] `mv --backup=existing old.txt new.txt`
296 | * [ ] `mv -n old.txt new.txt`
297 |
298 | #### Q. To remove a directory named `data` and all its contents without being prompted about write-protected files, which `rm` command is correct?
299 |
300 | * [ ] `rm data`
301 | * [ ] `rm -r data`
302 | * [x] `rm -rf data`
303 | * [ ] `rm --recursive data`
304 | * [ ] `rm -i data`
305 |
306 | #### Q. How can you use `rsync` to delete files in the destination that no longer exist in the source?
307 |
308 | * [ ] `rsync -a /src/ /dest/`
309 | * [ ] `rsync --delete-excluded /src/ /dest/`
310 | * [x] `rsync -a --delete /src/ /dest/`
311 | * [ ] `rsync -d --remove-source-files /src/ /dest/`
312 | * [ ] `rsync -z --prune-empty-dirs /src/ /dest/`
313 |
314 | #### Q. Which `cp` option copies directories recursively while preserving symlinks, permissions, timestamps, and ownership?
315 |
316 | * [ ] `cp -lR`
317 | * [ ] `cp -R`
318 | * [x] `cp -a`
319 | * [ ] `cp -d`
320 | * [ ] `cp -H`
321 |
322 | #### Q. What happens if you run `rm *` in a directory on which you lack write permission?
323 |
324 | * [ ] Files are removed regardless
325 | * [x] You cannot remove files, permission denied
326 | * [ ] Files are moved to trash
327 | * [ ] Parent directory is deleted
328 | * [ ] Only files with execute bit are removed
329 |
330 | #### Q. To move all `.jpg` files from `/tmp` to `/images` and show progress, which command is appropriate?
331 |
332 | * [ ] `mv /tmp/*.jpg /images`
333 | * [ ] `mv -i /tmp/*.jpg /images/`
334 | * [x] `mv -v /tmp/*.jpg /images/`
335 | * [ ] `rsync -a --progress /tmp/*.jpg /images/`
336 | * [ ] `cp -v /tmp/*.jpg /images/ && rm /tmp/*.jpg`
337 |
338 | #### Q. Which `rsync` option enables compression during transfer?
339 |
340 | * [ ] `-r`
341 | * [ ] `-a`
342 | * [ ] `--delete`
343 | * [x] `-z`
344 | * [ ] `--checksum`
345 |
346 |
347 | #### Q. Which key lets you move backward one screen in `more` (if supported)?
348 |
349 | * [ ] Spacebar
350 | * [ ] `Enter`
351 | * [x] `b`
352 | * [ ] `q`
353 | * [ ] `u`
354 |
355 | #### Q. How can you search forward for the next occurrence of “ERROR” while inside `more`?
356 |
357 | * [x] `/ERROR` then Enter
358 | * [ ] `?ERROR` then Enter
359 | * [ ] `ERROR` then Space
360 | * [ ] `nERROR` then Enter
361 | * [ ] `fERROR` then Enter
362 |
363 | #### Q. To pipe the output of `ls -lR /var` through `more`, which command is correct?
364 |
365 | * [ ] `ls -lR /var > more`
366 | * [ ] `more ls -lR /var`
367 | * [x] `ls -lR /var | more`
368 | * [ ] `cat ls -lR /var | more`
369 | * [ ] `pipe ls -lR /var more`
370 |
371 | #### Q. Which pager is generally considered more feature-rich compared to `more`?
372 |
373 | * [ ] `view`
374 | * [x] `less`
375 | * [ ] `pg`
376 | * [ ] `morex`
377 | * [ ] `lined`
378 |
379 |
380 | #### Q. To find all `.log` files under `/var` modified in the last 7 days, which command would you use?
381 |
382 | * [ ] `find /var -name "*.log" -mtime +7`
383 | * [ ] `grep "*.log" /var -mtime -7`
384 | * [x] `find /var -name "*.log" -mtime -7`
385 | * [ ] `locate /var/*.log --time -7`
386 | * [ ] `find /var -type f -newermt 7days`
387 |
388 | #### Q. Which command displays disk usage of each file and directory in the current path, human-readable?
389 |
390 | * [ ] `df -h .`
391 | * [ ] `ls -lh`
392 | * [x] `du -sh *`
393 | * [ ] `du -h /`
394 | * [ ] `stat --human *`
395 |
396 | #### Q. What does the `file` command do when run on `example.bin`?
397 |
398 | * [ ] Opens the file in a pager
399 | * [ ] Calculates a checksum of the file
400 | * [x] Determines and prints the file type
401 | * [ ] Converts it to a text file
402 | * [ ] Edits the file in vi mode
403 |
--------------------------------------------------------------------------------
/quizzes/users.md:
--------------------------------------------------------------------------------
1 | #### Q. Which file contains the list of local user accounts and their default shells?
2 |
3 | * [ ] `/etc/shadow`
4 | * [ ] `/etc/group`
5 | * [x] `/etc/passwd`
6 | * [ ] `/etc/userlist`
7 | * [ ] `/etc/login.defs`
8 |
9 | #### Q. What command creates a new user named `alice` without creating a home directory?
10 |
11 | * [ ] `useradd -m alice`
12 | * [x] `useradd -M alice`
13 | * [ ] `adduser --no-home alice`
14 | * [ ] `usermod -N alice`
15 | * [ ] `adduser -d alice`
16 |
17 | #### Q. Which file stores the encrypted password hashes on a typical Linux system?
18 |
19 | * [ ] `/etc/passwd`
20 | * [ ] `/etc/login.defs`
21 | * [x] `/etc/shadow`
22 | * [ ] `/etc/security/pwhash`
23 | * [ ] `/etc/securetty`
24 |
25 | #### Q. How do you add an existing user `bob` to the supplementary group `docker`?
26 |
27 | * [ ] `groupadd docker bob`
28 | * [ ] `usermod -a docker bob`
29 | * [x] `usermod -aG docker bob`
30 | * [ ] `addgroup bob docker`
31 | * [ ] `gpasswd -add bob docker`
32 |
33 | #### Q. Which command changes the user’s login shell to `/bin/zsh` for user `carol`?
34 |
35 | * [ ] `usermod --shell zsh carol`
36 | * [ ] `chsh carol -s zsh`
37 | * [x] `chsh -s /bin/zsh carol`
38 | * [ ] `useradd -s /bin/zsh carol`
39 | * [ ] `passwd -s /bin/zsh carol`
40 |
41 | #### Q. What does the `id` command display by default when run as a normal user?
42 |
43 | * [ ] Current password expiry information
44 | * [x] UID, GID, and group memberships
45 | * [ ] Last login time and source IP
46 | * [ ] List of all users on the system
47 | * [ ] Home directory and shell path
48 |
49 | #### Q. How do you lock the account of user `dave` to prevent logins?
50 |
51 | * [ ] `passwd -e dave`
52 | * [ ] `usermod -D dave`
53 | * [x] `passwd -l dave`
54 | * [ ] `userdel -r dave`
55 | * [ ] `chage -l dave`
56 |
57 | #### Q. Which directory is the default parent for newly created user home directories on many Linux distributions?
58 |
59 | * [ ] `/home/users/`
60 | * [ ] `/usr/home/`
61 | * [x] `/home/`
62 | * [ ] `/etc/home/`
63 | * [ ] `/var/home/`
64 | #### Q. Which protocol does LDAP use by default for directory access?
65 |
66 | * [ ] HTTP
67 | * [x] TCP/IP on port 389
68 | * [ ] UDP on port 53
69 | * [ ] SMTP on port 25
70 | * [ ] TCP/IP on port 636
71 |
72 | #### Q. What is the default port for LDAP over SSL (LDAPS)?
73 |
74 | * [ ] 389
75 | * [ ] 465
76 | * [x] 636
77 | * [ ] 3268
78 | * [ ] 8443
79 |
80 | #### Q. In an LDAP directory entry, what does “dn” stand for?
81 |
82 | * [x] Distinguished Name
83 | * [ ] Directory Number
84 | * [ ] Domain Namespace
85 | * [ ] Data Node
86 | * [ ] Default Name
87 |
88 | #### Q. Which file defines sudo privileges for users and groups?
89 |
90 | * [ ] `/etc/passwd`
91 | * [ ] `/etc/shadow`
92 | * [x] `/etc/sudoers`
93 | * [ ] `/etc/sudo.conf`
94 | * [ ] `/etc/security/sudoers.d`
95 |
96 | #### Q. What is the safest way to edit the sudoers file?
97 |
98 | * [ ] `nano /etc/sudoers`
99 | * [x] `visudo`
100 | * [ ] `vim /etc/sudoers`
101 | * [ ] `sudoedit /etc/sudoers.d`
102 | * [ ] `edit /etc/sudoers`
103 |
104 | #### Q. Which sudoers directive allows a user to run all commands without being prompted for a password?
105 |
106 | * [ ] `ALL =(ALL) ALL`
107 | * [x] `NOPASSWD: ALL`
108 | * [ ] `PASSWD: ALL`
109 | * [ ] `!authenticate`
110 | * [ ] `NOPROMPT: ALL`
111 |
112 | #### Q. In sudoers syntax, what does `%admin ALL=(ALL) ALL` do?
113 |
114 | * [ ] Grants user “admin” full sudo rights
115 | * [ ] Grants all users full sudo rights on hosts named “admin”
116 | * [x] Grants all members of the “admin” group full sudo rights
117 | * [ ] Denies group “admin” any sudo rights
118 | * [ ] Logs all commands run by group “admin”
119 |
120 | #### Q. Which command runs `apt update` as root but preserves your current environment variables?
121 |
122 | * [ ] `sudo apt update`
123 | * [ ] `sudo -i apt update`
124 | * [x] `sudo -E apt update`
125 | * [ ] `sudo -s apt update`
126 | * [ ] `sudoenv apt update`
127 |
128 | #### Q. By default, where does sudo log its authentication events on many Linux systems?
129 |
130 | * [ ] `/var/log/syslog`
131 | * [ ] `/var/log/messages`
132 | * [x] `/var/log/auth.log`
133 | * [ ] `/var/log/sudo.log`
134 | * [ ] `/var/log/secure`
135 |
136 | #### Q. Which sudo option runs a login shell as the target user (typically root)?
137 |
138 | * [ ] `-E`
139 | * [ ] `-s`
140 | * [x] `-i`
141 | * [ ] `-l`
142 | * [ ] `-u`
143 |
144 | #### Q. How do you allow user `alice` to run only `/usr/bin/systemctl` via sudo, without a password prompt?
145 |
146 | * [ ] `alice ALL=(ALL) ALL: /usr/bin/systemctl`
147 | * [ ] `alice ALL=(ALL) /usr/bin/systemctl`
148 | * [x] `alice ALL=(ALL) NOPASSWD: /usr/bin/systemctl`
149 | * [ ] `alice ALL=(root) /usr/bin/systemctl`
150 | * [ ] `alice ALL=(ALL) !/usr/bin/systemctl`
151 |
152 | #### Q. Which command displays both standard permissions and ACLs for the file `example.txt`?
153 |
154 | * [ ] `ls -l example.txt`
155 | * [ ] `stat -c "%A %n" example.txt`
156 | * [x] `getfacl example.txt`
157 | * [ ] `lsattr example.txt`
158 | * [ ] `aclshow example.txt`
159 |
160 | #### Q. How do you give user `bob` read (`r`) and execute (`x`) permissions on `script.sh` without affecting existing ACL entries?
161 |
162 | * [ ] `chmod u+rx script.sh`
163 | * [ ] `setfacl u:bob:rx script.sh`
164 | * [x] `setfacl -m u:bob:rx script.sh`
165 | * [ ] `setfacl --add u:bob:rx script.sh`
166 | * [ ] `chmod +a "bob:rx" script.sh`
167 |
168 | #### Q. Which option removes all ACL entries (but leaves standard permissions intact) on `data/`?
169 |
170 | * [ ] `setfacl -x a:data`
171 | * [x] `setfacl -b data/`
172 | * [ ] `setfacl --clear-mask data/`
173 | * [ ] `chmod a-rwx data/`
174 | * [ ] `getfacl --remove-all data/`
175 |
176 | #### Q. What does the “mask” entry in an ACL represent?
177 |
178 | * [ ] The maximum file size allowed
179 | * [x] The maximum effective permissions for named users, named groups, and the owning group
180 | * [ ] The default ACL applied to new files
181 | * [ ] The owner’s effective permissions
182 | * [ ] A special ACL for the `root` user
183 |
184 | #### Q. How do you set a default ACL so that any new file in `project/` grants group `devs` write access?
185 |
186 | * [ ] `setfacl -n -m g:devs:rw project/`
187 | * [x] `setfacl -d -m g:devs:rw project/`
188 | * [ ] `setfacl --default g:devs:rw project/`
189 | * [ ] `setfacl -m g:devs:rw project/`
190 | * [ ] `chmod g+w project/`
191 |
192 | #### Q. Which command recursively applies the ACL change to all files and subdirectories under `shared/`?
193 |
194 | * [ ] `setfacl -m u:alice:r shared/`
195 | * [ ] `setfacl --recursive u:alice:r shared/`
196 | * [x] `setfacl -R -m u:alice:r shared/`
197 | * [ ] `getfacl -R shared/ | setfacl --apply`
198 | * [ ] `chmod -R +a "alice:rx" shared/`
199 |
200 | #### Q. After setting ACLs, which command shows the effective permissions for user `carol` on `report.pdf`?
201 |
202 | * [ ] `getfacl --effective carol report.pdf`
203 | * [x] `getfacl -e report.pdf`
204 | * [ ] `setfacl --check u:carol report.pdf`
205 | * [ ] `aclcheck report.pdf carol`
206 | * [ ] `stat -c "%A %n" report.pdf`
207 |
208 | #### Q. To remove only the ACL entry for group `sales` on `budget.xls`, which command is correct?
209 |
210 | * [ ] `setfacl -m g:sales: budget.xls`
211 | * [ ] `setfacl --delete g:sales budget.xls`
212 | * [x] `setfacl -x g:sales budget.xls`
213 | * [ ] `chmod g-sales- budget.xls`
214 | * [ ] `getfacl -x g:sales budget.xls`
215 |
216 | #### Q. What happens if you copy a file with ACLs using `cp --preserve=all`?
217 |
218 | * [ ] The ACLs are stripped on the copy.
219 | * [ ] Only the owner and group ACL entries are kept.
220 | * [x] All ACL entries and attributes are preserved on the copy.
221 | * [ ] The copy is placed in a default ACL-enabled directory.
222 | * [ ] The mask is reset but other ACLs are kept.
223 |
224 | #### Q. Which umask setting will ensure that new files allow group write permission so ACL default entries can grant rwx to a group?
225 |
226 | * [ ] `umask 022`
227 | * [ ] `umask 077`
228 | * [x] `umask 002`
229 | * [ ] `umask 027`
230 | * [ ] `umask 072`
231 |
232 | #### Q. What does the `sudo -l` command do for the invoking user?
233 |
234 | * [ ] Lists all processes running as root
235 | * [x] Lists which commands the user is allowed (and not allowed) to run via sudo
236 | * [ ] Locks the sudo account for one hour
237 | * [ ] Logs you out of the root shell
238 | * [ ] Lists all sudo logs
239 |
240 | #### Q. Which default in `/etc/sudoers` prevents users from keeping their environment variables unless explicitly allowed?
241 |
242 | * [ ] `env_keep`
243 | * [ ] `env_passwd`
244 | * [x] `env_reset`
245 | * [ ] `requiretty`
246 | * [ ] `preserve_env`
247 |
248 | #### Q. Which permission bit allows a user to read a file?
249 |
250 | * [x] `r`
251 | * [ ] `w`
252 | * [ ] `x`
253 | * [ ] `s`
254 | * [ ] `t`
255 |
256 | #### Q. What does the octal permission `754` represent for user/group/others?
257 |
258 | * [ ] `rwx rwx rwx`
259 | * [ ] `rwx rw- r--`
260 | * [x] `rwx r-x r--`
261 | * [ ] `rwx r-- r-x`
262 | * [ ] `rwx rw- r-x`
263 |
264 | #### Q. How do you add execute permission for the owner of `script.sh` without affecting other bits?
265 |
266 | * [ ] `chmod 700 script.sh`
267 | * [ ] `chmod u=+x script.sh`
268 | * [x] `chmod u+x script.sh`
269 | * [ ] `chmod +x script.sh`
270 | * [ ] `chmod a+x script.sh`
271 |
272 | #### Q. Which special permission on a directory causes new files to inherit the directory’s group?
273 |
274 | * [ ] Sticky bit (`chmod +t`)
275 | * [ ] Set-user-ID (`chmod u+s`)
276 | * [x] Set-group-ID (`chmod g+s`)
277 | * [ ] No-execute (`chmod -x`)
278 | * [ ] Immutable bit (`chattr +i`)
279 |
280 | #### Q. What does the sticky bit (`t`) do when set on `/tmp`?
281 |
282 | * [ ] Makes files in `/tmp` executable
283 | * [ ] Prevents deletion by anyone
284 | * [x] Restricts file deletion so only owner/root can remove their files
285 | * [ ] Inherits owner’s permissions on new files
286 | * [ ] Encrypts files in `/tmp`
287 |
288 | #### Q. How do you recursively set permissions `755` on all directories under `project/`?
289 |
290 | * [ ] `chmod -R 755 project/`
291 | * [ ] `find project/ -type f -exec chmod 755 {} +`
292 | * [x] `find project/ -type d -exec chmod 755 {} +`
293 | * [ ] `chmod 755 project/*`
294 | * [ ] `chmod 755 project/**`
295 |
296 | #### Q. Which command shows the numeric (octal) permission representation for files in the current directory?
297 |
298 | * [ ] `ls -l`
299 | * [x] `stat -c "%a %n" *`
300 | * [ ] `getfacl *`
301 | * [ ] `ls -n`
302 | * [ ] `stat --octal *`
303 |
304 | #### Q. What is the effect of `chmod o-rwx file.txt`?
305 |
306 | * [ ] Grants all permissions to others
307 | * [ ] Removes read/write, grants execute to others
308 | * [x] Revokes all permissions (read, write, execute) for others
309 | * [ ] Sets owner permissions to none
310 | * [ ] Makes the file immutable
311 |
312 | #### Q. Which command displays both standard and ACL permissions for `data/`?
313 |
314 | * [ ] `ls -la data/`
315 | * [x] `getfacl data/`
316 | * [ ] `stat data/`
317 | * [ ] `aclshow data/`
318 | * [ ] `lsattr data/`
319 |
320 | #### Q. How do you set an ACL to give user `bob` read and write access to `report.txt`?
321 |
322 | * [ ] `chmod user: bob:rw report.txt`
323 | * [ ] `setfacl -R u:bob:rw report.txt`
324 | * [x] `setfacl -m u:bob:rw report.txt`
325 | * [ ] `setfacl --add bob:rw report.txt`
326 | * [ ] `aclmod u:bob:rw report.txt`
327 |
328 | #### Q. Which command-line tool can you use to search an LDAP directory?
329 |
330 | * [ ] ldapadd
331 | * [ ] ldapmodify
332 | * [x] ldapsearch
333 | * [ ] ldappasswd
334 | * [ ] slapcat
335 |
336 | #### Q. In the LDAP schema, which attribute uniquely identifies an entry within its parent?
337 |
338 | * [ ] cn (Common Name)
339 | * [x] rdn (Relative Distinguished Name)
340 | * [ ] uid (User ID)
341 | * [ ] objectClass
342 | * [ ] dn (Distinguished Name)
343 |
344 | #### Q. Which suffix is commonly used to specify the base DN for a company “example.com”?
345 |
346 | * [ ] dc=company,dc=com
347 | * [ ] dn=example,cn=com
348 | * [x] dc=example,dc=com
349 | * [ ] ou=example,ou=com
350 | * [ ] cn=example,cn=com
351 |
352 | #### Q. How do you add a new entry to the directory using LDAP tools?
353 |
354 | * [ ] slapcat -i entry.ldif
355 | * [x] ldapadd -f entry.ldif
356 | * [ ] ldapmodify -a entry.ldif
357 | * [ ] ldapsearch -a entry.ldif
358 | * [ ] ldapdelete -f entry.ldif
359 |
360 | #### Q. Which operation modifies an existing LDAP entry?
361 |
362 | * [ ] ldapadd
363 | * [ ] ldapdelete
364 | * [x] ldapmodify
365 | * [ ] ldapsearch
366 | * [ ] slapindex
367 |
368 | #### Q. What file format is used to batch import or export LDAP entries?
369 |
370 | * [ ] JSON
371 | * [ ] XML
372 | * [x] LDIF
373 | * [ ] CSV
374 | * [ ] YAML
375 |
376 | #### Q. Which objectClass would you include to create a user entry in OpenLDAP?
377 |
378 | * [ ] objectClass: organization
379 | * [ ] objectClass: domain
380 | * [x] objectClass: inetOrgPerson
381 | * [ ] objectClass: posixGroup
382 | * [ ] objectClass: ldapSubentry
383 |
384 | #### Q. To delete user `eve` and remove her home directory and mail spool, which command is correct?
385 |
386 | * [ ] `userdel eve --remove`
387 | * [ ] `deluser eve --home`
388 | * [x] `userdel -r eve`
389 | * [ ] `usermod --delete eve -h`
390 | * [ ] `rmuser eve -a`
391 |
392 | #### Q. Which file associates group names with GIDs and lists group membership?
393 |
394 | * [ ] `/etc/passwd`
395 | * [x] `/etc/group`
396 | * [ ] `/etc/shadow`
397 | * [ ] `/etc/gshadow`
398 | * [ ] `/etc/sudoers`
399 |
--------------------------------------------------------------------------------
/notes/package_managers.md:
--------------------------------------------------------------------------------
1 | ## Package Managers
2 |
3 | Debian and Ubuntu are popular Linux distributions for home users. These distributions and their derivatives use the Advanced Package Tool (`APT`). Other distributions use alternative package managers, such as `DNF`, `YUM`, and `Pacman`, each with its own functionality and syntax.
4 |
5 | Be cautious with package managers as they install software and dependencies and may affect your system's configuration.
6 |
7 | ```
8 | User
9 | |
10 | | uses
11 | V
12 | Package Manager (e.g., APT, DNF, YUM, Pacman)
13 | |
14 | | fetches metadata and package lists from
15 | V
16 | Package Repository
17 | |
18 | | downloads
19 | V
20 | Package files (.deb, .rpm, .tar.xz, etc.)
21 | |
22 | | unpacks/installs to
23 | V
24 | System directories (/usr/bin, /usr/lib, etc.)
25 | ```
26 |
27 | ## Installing and Updating Software Packages
28 |
29 | ### APT
30 |
31 | APT (Advanced Package Tool) is the command-line tool used in Debian-based Linux distributions for handling packages. The `apt` command provides a friendlier interface than the older `apt-get` and `apt-cache` tools, combining their most common operations.
32 |
33 | I. Updating Repository Information
34 |
35 | Before installing or upgrading packages, update the list of available packages and their versions:
36 |
37 | ```bash
38 | apt update
39 | ```
40 |
41 | II. Upgrading Installed Packages
42 |
43 | To upgrade all installed packages to their latest available versions:
44 |
45 | ```bash
46 | apt upgrade
47 | ```
48 |
49 | III. Installing New Packages
50 |
51 | To install a new package from the repositories, for example the Apache web server (packaged as `apache2` on Debian-based systems):
52 |
53 | ```bash
54 | apt install apache2
55 | ```
56 |
57 | IV. Installing Local .deb Files
58 |
59 | If you have a `.deb` package file downloaded locally, install it using:
60 |
61 | ```bash
62 | apt install /path/to/package/name.deb
63 | ```
64 |
65 | V. Verifying Installation
66 |
67 | To check if a package is successfully installed and to view its details:
68 |
69 | ```bash
70 | apt show apache2
71 | ```
72 |
73 | ### YUM
74 |
75 | YUM (Yellowdog Updater, Modified) is a package manager used in Red Hat-based Linux distributions. Unlike `apt`, it refreshes repository metadata automatically when needed, so you don't have to update package lists manually before installing or upgrading software.
76 |
77 | I. Checking for Updates
78 |
79 | To check available updates for installed packages:
80 |
81 | ```bash
82 | yum check-update
83 | ```
84 |
85 | II. Updating All Packages
86 |
87 | To update all packages to their latest versions:
88 |
89 | ```bash
90 | yum update
91 | ```
92 |
93 | III. Updating Specific Packages
94 |
95 | To update a particular package, such as `httpd`:
96 |
97 | ```bash
98 | yum update httpd
99 | ```
100 |
101 | IV. Searching for Packages
102 |
103 | To search for a package by name, for example `apache`:
104 |
105 | ```bash
106 | yum search apache
107 | ```
108 |
109 | V. Installing Packages
110 |
111 | To install a specific package, like `httpd`:
112 |
113 | ```bash
114 | yum install httpd
115 | ```
116 |
117 | VI. Displaying Package Information
118 |
119 | To display detailed information about a package:
120 |
121 | ```bash
122 | yum info httpd
123 | ```
124 |
125 | VII. Listing Installed Packages
126 |
127 | To display a list of all installed packages:
128 |
129 | ```bash
130 | yum list installed
131 | ```
132 |
133 | VIII. Removing Packages
134 |
135 | To remove an installed package, such as `httpd`:
136 |
137 | ```bash
138 | yum remove httpd
139 | ```
140 |
141 | IX. Cleaning Cache
142 |
143 | To clean the YUM cache, which includes removing downloaded packages and metadata:
144 |
145 | ```bash
146 | yum clean all
147 | ```
148 |
149 | ### Tarballs
150 |
151 | Installing software from tarballs is an alternative to using package managers on Linux. This manual method is broken down into three primary steps:
152 |
153 | I. Extract
154 |
155 | First, navigate to the directory containing the tarball. Use the following command to extract its contents:
156 |
157 | ```bash
158 | tar -zxvf path_to_tar.tar.gz
159 | cd path_to_tar
160 | ```
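
These flags can be exercised safely on a throwaway archive before pointing them at real software (the `demo-1.0` project name below is made up for illustration):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Work in a scratch directory so nothing real is touched
workdir=$(mktemp -d)
cd "$workdir"

# Fabricate a tiny "release" tarball so there is something to extract
mkdir demo-1.0
echo "hello" > demo-1.0/README
tar -czf demo-1.0.tar.gz demo-1.0
rm -rf demo-1.0

# The extract step: -z filter through gzip, -x extract, -v list files, -f archive name
tar -zxvf demo-1.0.tar.gz
cat demo-1.0/README    # prints: hello
```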
161 |
162 | II. Compile
163 |
164 | The process might vary depending on the software, but generally, you would run:
165 |
166 | ```bash
167 | make
168 | ```
169 |
170 | If the source includes a `configure` script, run `./configure` before `make`; it checks for required dependencies and generates the `Makefile` that the build uses.
171 |
172 | III. Install
173 |
174 | Installation is usually done with `make install`, which copies the compiled program and supporting files into system directories (this step typically requires root privileges):
175 | 
176 | ```bash
177 | make install
178 | ```
179 |
180 | Alternatively, for some software, you may need to manually copy the compiled executable to a directory like `/usr/local/bin`.
181 |
182 | 🔴 **Caution**: Remember that software installed via tarballs does not benefit from automatic updates typically provided by package managers. This means manually tracking and updating software for security patches and new features.
183 |
184 | ### RPM
185 |
186 | RPM (Red Hat Package Manager) is a low-level package manager used in Red Hat-based Linux distributions. It allows direct management of software packages but requires a bit more manual intervention compared to higher-level tools like YUM.
187 |
188 | I. Downloading RPM Packages
189 |
190 | To download an RPM package from a website:
191 |
192 | ```bash
193 | wget http://some_website/sample_file.rpm
194 | ```
195 |
196 | II. Installing Packages with RPM
197 |
198 | To install a downloaded RPM package:
199 |
200 | ```bash
201 | rpm -ivh sample_file.rpm
202 | ```
203 |
204 | `i` stands for install, `v` for verbose (showing detailed output), and `h` for hash (displaying progress as hash marks).
205 |
206 | III. Listing All Installed Packages
207 |
208 | To list all installed packages:
209 |
210 | ```bash
211 | rpm -qa
212 | ```
213 |
214 | IV. Listing a Specific Package
215 |
216 | To check if a specific package, like `nano`, is installed:
217 |
218 | ```bash
219 | rpm -qa nano
220 | ```
221 |
222 | V. Displaying Package Documentation
223 |
224 | To display documentation files of a specific package:
225 |
226 | ```bash
227 | rpm -qd nano
228 | ```
229 |
230 | VI. Removing Packages with RPM
231 |
232 | To remove an installed package:
233 |
234 | ```bash
235 | rpm -e nano
236 | ```
237 |
238 | `e` stands for erase, which removes the package.
239 |
240 | 🔴 **Note**: While RPM provides granular control over package management, it doesn't resolve dependencies automatically. Make sure dependencies are handled manually or through a higher-level tool like YUM or DNF.
241 |
242 | ## Software Package Repositories
243 |
244 | A software package repository in the context of Linux and other Unix-like operating systems is a centralized storage location containing various software packages. These repositories are essential components in the package management system, utilized by package managers to download and install software and updates.
245 |
246 | ### Key Components of a Repository
247 |
248 | - The **label** might be something like `base`, `updates`, or `extras`, which uniquely identifies each repository in configuration files. For example, in a YUM repository configuration, the label `[base]` identifies the base repository.
249 | - The **name** could be descriptive, such as "CentOS Base Repository" or "Fedora Updates", giving a clear indication of the content or purpose. For instance, "CentOS Base" suggests this repository contains the core CentOS packages.
250 | - A **mirrorlist** example could look like `http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os`. This URL directs the system to a list of available mirrors that host the repository, ensuring a fast and reliable download experience.
251 | - The **base URL** might be specified as `http://mirror.centos.org/centos/7/os/x86_64/`, indicating the primary location where the RPM packages for CentOS 7 are stored. Users download packages directly from this URL if the mirrorlist is not used.
252 | - The **GPG Check** setting in a repository configuration might look like `gpgcheck=1`, where the value `1` indicates that GPG signature verification is enabled. This ensures that the packages are authentic and have not been tampered with.
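
Put together, these components form a repository definition. A minimal illustrative `.repo` file for YUM (reusing the CentOS values from the list above; real files vary by distribution and release) might look like:

```ini
[base]
name=CentOS Base Repository
mirrorlist=http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os
#baseurl=http://mirror.centos.org/centos/7/os/x86_64/
gpgcheck=1
enabled=1
```

Files like this live in `/etc/yum.repos.d/`, one per repository; the commented-out `baseurl` is used only if the mirrorlist is disabled.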
253 |
254 | ### Common Repository Labels
255 |
256 | - The **Base** repository, for instance, could include packages like `glibc`, `bash`, and `kernel`, which are essential components of the operating system. These packages are crucial for the system's basic operation and are well-tested.
257 | - The **Updates** repository may provide updated versions of core packages, such as `httpd` (Apache HTTP Server) or `openssl`, containing security fixes and performance improvements. For example, a critical security patch for `openssl` would appear in the Updates repository.
258 | - An **Optional** repository might include packages like `nginx`, `mysql-community-server`, or other open-source software that, while useful, is not officially supported by the distribution's core team. These packages provide additional capabilities but may have less thorough testing.
259 | - The **Supplemental** repository could offer proprietary software such as `Oracle Java` or commercial fonts. These are not open-source and are often provided with restrictions on usage, but they expand the range of software available on the system.
260 | - The **Extras** repository might contain newer or experimental software, such as a beta version of `gcc` or `Python`, which is not yet included in the Base repository. This repository allows users to access the latest features and test them before they become part of the standard offering.
261 |
262 | ### Managing APT Repositories
263 |
264 | APT repositories are defined in `/etc/apt/sources.list` and in the `/etc/apt/sources.list.d/` directory.
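
Each line in these files names a package type, a base URL, a suite, and one or more components. An illustrative entry (the suite `bookworm` and the components shown are examples and vary by release):

```
deb http://deb.debian.org/debian bookworm main contrib
```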
265 |
266 | - The `add-apt-repository` command is used to add or remove APT repositories.
267 | - It modifies the `/etc/apt/sources.list` file or creates new files in `/etc/apt/sources.list.d/`.
268 | - Install this utility with the following commands:
269 |
270 | ```bash
271 | apt update
272 | apt install software-properties-common
273 | ```
274 |
275 | #### Example: Installing Wine
276 |
277 | To demonstrate managing APT repositories, here's how you can install Wine on Ubuntu 20.04 (codename `focal`); on other releases, substitute the matching codename in the repository line:
278 |
279 | I. Get and Install the Repository Key
280 |
281 | Download and install the GPG key for the Wine repository:
282 |
283 | ```bash
284 | wget -nc https://dl.winehq.org/wine-builds/winehq.key
285 | gpg -o /etc/apt/trusted.gpg.d/winehq.key.gpg --dearmor winehq.key
286 | ```
287 |
288 | II. Add the Wine Repository
289 |
290 | Add the Wine repository to your system's sources:
291 |
292 | ```bash
293 | add-apt-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ focal main'
294 | ```
295 |
296 | III. Update the Package Database
297 |
298 | Update APT's package database to recognize the new repository:
299 |
300 | ```bash
301 | apt update
302 | ```
303 |
304 | IV. Install Wine
305 |
306 | Install the stable version of Wine with:
307 |
308 | ```bash
309 | sudo apt install --install-recommends winehq-stable
310 | ```
311 |
312 | V. Verify the Installation
313 |
314 | Confirm that Wine is correctly installed:
315 |
316 | ```bash
317 | wine --version
318 | ```
319 |
320 | 🔴 **Note**: It's important to ensure that repositories and their keys are obtained from trusted sources to avoid security risks. Incorrect or malicious repositories can compromise the system's integrity and security.
321 |
322 | ### Managing YUM Repositories
323 |
324 | The configuration files for YUM repositories are located in the `/etc/yum.repos.d` directory.
325 |
326 | I. Display Repositories
327 |
328 | To display a list of all enabled and available repositories, use:
329 |
330 | ```bash
331 | yum repolist all
332 | ```
333 |
334 | II. Add a New Repository
335 |
336 | To add a new repository by specifying its URL, use the `yum-config-manager` tool:
337 |
338 | ```bash
339 | yum-config-manager --add-repo=[URL]
340 | ```
341 |
342 | III. Enable a Repository
343 |
344 | If a repository is disabled and you want to enable it, use the following command. Replace `[repo_id]` with the actual repository ID:
345 |
346 | ```bash
347 | yum-config-manager --enable [repo_id]
348 | ```
349 |
350 | IV. Disable a Repository
351 |
352 | To disable a repository temporarily (for example, to prevent updates from that repository), use:
353 |
354 | ```bash
355 | yum-config-manager --disable [repo_id]
356 | ```
357 |
358 | ## Challenges
359 |
360 | 1. Configure a Linux system to use both official and third-party repositories while preventing package conflicts.
361 | 2. Safely upgrade a major software package (like Python or MySQL) ensuring all system dependencies are maintained.
362 | 3. Script a solution to automatically switch to a backup repository when the primary YUM or APT repository fails.
363 | 4. Create a script or use existing tools to automate security updates on a Linux system without breaking package dependencies.
364 | 5. Download and compile a piece of software from a tarball, resolving all dependencies manually.
365 | 6. Use the `alien` tool or similar to convert an RPM package to a DEB package and ensure it installs correctly on a Debian-based system.
366 | 7. Is it possible for apt to remove an already installed package when you install a new one? If so, under what circumstances can this occur?
367 | 8. Set up and configure a custom YUM repository on a CentOS system.
368 | 9. Install a Linux software package on a system without direct internet access using offline methods.
369 | 10. Write a script to automate the cleanup of old or unused packages and maintenance tasks like cache clearing in a Linux environment.
370 |
--------------------------------------------------------------------------------
/notes/shells_and_bash_configuration.md:
--------------------------------------------------------------------------------
1 | ## Shells
2 |
3 | A Unix shell is a command-line interpreter that provides a user interface for accessing an operating system's services. It allows users to execute commands, run programs, and manage system resources. The shell acts as an intermediary between the user and the operating system kernel, translating user commands into actions performed by the system.
4 |
5 | ### The Interaction Model
6 |
7 | The interaction between the user, shell, and operating system can be visualized as follows:
8 |
9 | ```
10 | +-------------------+ +----------------+ +--------------------+
11 | | | | | | |
12 | | User Input |<----->| Shell |<----->| Operating System |
13 | | (Keyboard/Screen) | | (e.g., Bash) | | (Kernel/HW) |
14 | | | | | | |
15 | +-------------------+ +----------------+ +--------------------+
16 | ```
17 |
18 | - **User input** consists of the commands and data entered by the user through devices like keyboards or other input peripherals, initiating interactions with the system.
19 | - The **shell** acts as an interpreter, translating user commands into instructions and communicating them to the operating system for execution.
20 | - The **operating system** is responsible for executing the commands provided by the shell and managing the system's hardware resources to fulfill user requests.
21 | - By handling user input, the shell serves as a crucial interface between the user and the operating system, ensuring smooth communication and task execution.
22 |
23 | ### Common Shells
24 |
25 | There are several types of shells available, each with unique features:
26 |
27 | | Shell | Description | Benefits | Considerations/Drawbacks |
28 | |---------------------------|----------------------------------------------------------------|---------------------------------------------------------------|----------------------------------------------------------------|
29 | | **`bash` (Bourne-Again SHell)** | The default shell on most Linux distributions; backward-compatible with the original Bourne shell. | Widely used, with extensive scripting support and community resources. | Lacks some advanced features present in newer shells like `zsh`. |
30 | | **`zsh` (Z Shell)** | Known for its rich feature set, including improved auto-completion, spell correction, and theming capabilities. | Highly customizable, with better autocompletion and plugins. | Slight learning curve for users unfamiliar with its configuration. |
31 | | **`ksh` (Korn SHell)** | Combines features of the Bourne shell and the C shell (`csh`). | Useful for scripting, combining the best of both worlds (Bourne and C shell). | Not as widely adopted as `bash` or `zsh`. |
32 | | **`tcsh` (TENEX C Shell)**| An enhanced version of the C shell, featuring command-line editing and programmable word completion. | Better user experience with command-line editing features. | Less common compared to `bash` or `zsh`. |
33 | | **`sh` (Bourne SHell)** | The original Unix shell, simple and portable. | Lightweight and portable for basic scripting tasks. | Lacks many modern features available in newer shells. |
34 |
35 | ### Examining Available Shells
36 |
37 | To see which shells are installed on your system, inspect the `/etc/shells` file. This file lists all the valid login shells available.
38 |
39 | ```bash
40 | cat /etc/shells
41 | ```
42 |
43 | **Example Output:**
44 |
45 | ```
46 | /bin/sh
47 | /bin/bash
48 | /bin/dash
49 | /bin/zsh
50 | /usr/bin/zsh
51 | ```
52 |
53 | ### Identifying Your Current Shell
54 |
55 | To determine your current active shell, you can use several methods:
56 |
57 | #### Method 1: Using the `$SHELL` Variable
58 |
59 | ```bash
60 | echo "$SHELL"
61 | ```
62 |
63 | **Note:** The `$SHELL` variable shows your default login shell, not necessarily the shell you're currently using.
64 |
65 | #### Method 2: Inspecting the Shell Process
66 |
67 | ```bash
68 | ps -p "$$" -o comm=
69 | ```
70 |
71 | - `$$` represents the current shell's process ID.
72 | - `ps -p` selects the process with that ID.
73 | - `-o comm=` outputs the command name (the shell).
74 |
75 | #### Method 3: Using `echo "$0"`
76 |
77 | ```bash
78 | echo "$0"
79 | ```
80 |
81 | - `$0` contains the name of the shell or script being executed.
82 |
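
The three methods can be combined in one short script for comparison (output varies by system and by how the shell was started):

```bash
#!/usr/bin/env bash
# Compare the default login shell with the shell actually running this script
echo "Default login shell: $SHELL"
echo "Current process:     $(ps -p $$ -o comm=)"
echo "Invocation name:     $0"
```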
83 | ### Switching Shells
84 |
85 | #### Temporarily Switching Shells
86 |
87 | You can start a different shell session by typing its name:
88 |
89 | ```bash
90 | zsh
91 | ```
92 |
93 | To return to your previous shell, type `exit` or press `Ctrl+D`.
94 |
95 | #### Permanently Changing Your Default Shell
96 |
97 | To change your default login shell, use the `chsh` (change shell) command:
98 |
99 | ```bash
100 | chsh -s /bin/zsh
101 | ```
102 |
103 | - You'll be prompted for your password.
104 | - Changes will take effect the next time you log in.
105 |
106 | **Important:** The shell must be listed in `/etc/shells`; otherwise, `chsh` will not accept it.
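
A quick sanity check before calling `chsh` — the helper below is a sketch that succeeds only if the given path appears verbatim in `/etc/shells` (assumed to exist, as it does on virtually all Linux systems):

```bash
#!/usr/bin/env bash
# Succeeds (exit 0) if the given path is listed as a valid login shell
is_valid_login_shell() {
    grep -qx "$1" /etc/shells
}

if is_valid_login_shell /bin/zsh; then
    echo "OK to run: chsh -s /bin/zsh"
else
    echo "/bin/zsh is not listed in /etc/shells" >&2
fi
```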
107 |
108 | ### Bash Configuration Files
109 |
110 | When Bash starts, it reads and executes commands from various startup files. These files allow you to customize your shell environment.
111 |
112 | #### Types of Shells
113 |
114 | Understanding which configuration files are read depends on how the shell is invoked:
115 |
116 | - A **login shell** is a shell session that requires the user to authenticate, such as when logging in from a console or via SSH, before accessing the system.
117 | - An **interactive non-login shell** is opened after the user has already logged in, for instance, when opening a new terminal window, and does not require further authentication.
118 |
119 | #### Configuration Files Overview
120 |
121 | I. **Global Configuration Files** (affect all users):
122 |
123 | - `/etc/profile`: Executed for login shells.
124 | - `/etc/bash.bashrc` or `/etc/bashrc`: Executed for interactive non-login shells.
125 |
126 | II. **User-Specific Configuration Files** (affect only the current user):
127 |
128 | - `~/.bash_profile` or `~/.bash_login` or `~/.profile`: Read by login shells. Bash reads the first one it finds.
129 | - `~/.bashrc`: Read by interactive non-login shells.
130 | - `~/.bash_logout`: Executed when a login shell exits.
131 |
132 | ### Bash Startup Sequence
133 |
134 | #### For Login Shells:
135 |
136 | 1. Bash reads `/etc/profile`.
137 | 2. Then it looks for `~/.bash_profile`, `~/.bash_login`, and `~/.profile` (in that order) and reads the first one it finds.
138 |
139 | #### For Interactive Non-Login Shells:
140 |
141 | 1. Bash reads `/etc/bash.bashrc` or `/etc/bashrc` (system-wide configuration).
142 | 2. Then it reads `~/.bashrc` (user-specific configuration).
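
A quick way to see which of these startup files actually exist on a given machine (output varies by distribution; for example, Debian-based systems ship `/etc/bash.bashrc` while Red Hat-based ones use `/etc/bashrc`):

```bash
#!/usr/bin/env bash
# Report which Bash startup files are present on this system
for f in /etc/profile /etc/bash.bashrc /etc/bashrc \
         "$HOME/.bash_profile" "$HOME/.bash_login" "$HOME/.profile" "$HOME/.bashrc"; do
    if [ -f "$f" ]; then
        echo "present: $f"
    else
        echo "absent:  $f"
    fi
done
```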
143 |
144 | ### Best Practice: Source `~/.bashrc` from `~/.bash_profile`
145 |
146 | To ensure that your settings are consistent across all shell types, it's common to source `~/.bashrc` from `~/.bash_profile`.
147 |
148 | **Example `~/.bash_profile`:**
149 |
150 | ```bash
151 | # ~/.bash_profile
152 |
153 | # Source the user's bashrc if it exists
154 | if [ -f ~/.bashrc ]; then
155 | . ~/.bashrc
156 | fi
157 | ```
158 |
159 | ### Sample `~/.bashrc` File
160 |
161 | ```bash
162 | # ~/.bashrc
163 |
164 | # Source global definitions if any
165 | if [ -f /etc/bashrc ]; then
166 | . /etc/bashrc
167 | fi
168 |
169 | # Alias definitions
170 | alias ll='ls -alF'
171 | alias la='ls -A'
172 | alias l='ls -CF'
173 |
174 | # Environment variables
175 | export EDITOR='nano'
176 | export HISTSIZE=1000
177 | export HISTFILESIZE=2000
178 |
179 | # Prompt customization
180 | PS1='\u@\h:\w\$ '
181 |
182 | # Functions
183 | extract() {
184 | if [ -f "$1" ]; then
185 | case "$1" in
186 | *.tar.bz2) tar xjf "$1" ;;
187 | *.tar.gz) tar xzf "$1" ;;
188 | *.bz2) bunzip2 "$1" ;;
189 | *.rar) unrar x "$1" ;;
190 | *.gz) gunzip "$1" ;;
191 | *.tar) tar xf "$1" ;;
192 | *.tbz2) tar xjf "$1" ;;
193 | *.tgz) tar xzf "$1" ;;
194 | *.zip) unzip "$1" ;;
195 | *.Z) uncompress "$1";;
196 | *.7z) 7z x "$1" ;;
197 | *) echo "Don't know how to extract '$1'..." ;;
198 | esac
199 | else
200 | echo "'$1' is not a valid file!"
201 | fi
202 | }
203 | ```
204 |
205 | - **Aliases** provide a way to create shortcuts for frequently used commands, reducing the need for repetitive typing.
206 | - Using `alias ll='ls -alF'` is an example that lists all files in long format, including indicators for file types.
207 | - **Environment variables** are key-value pairs that configure the shell or external programs.
208 | - Setting `export EDITOR='nano'` ensures that nano becomes the default text editor when editing files through the terminal.
209 | - **Prompt customization** helps users personalize their command prompt, displaying important information like username and directory.
210 | - The command `PS1='\u@\h:\w\$ '` modifies the prompt to show the username, hostname, and the current working directory.
211 | - **Functions** are used to create reusable commands that can handle multiple steps or repetitive tasks.
212 | - A function like `extract()` is useful for extracting different archive types such as `.zip`, `.tar.gz`, and `.rar` files, making file management more efficient.
213 |
214 | ### Terminals
215 |
216 | A terminal emulator is a program that emulates a physical terminal within a graphical interface, allowing users to interact with the shell.
217 |
218 | #### Terminal Emulator Features
219 |
220 | - **Multiple Tabs** allow users to run multiple shell sessions within a single window, improving multitasking efficiency.
221 | - **Split Panes** let users divide the terminal window into multiple panes, each running its own session simultaneously.
222 | - **Customizable Appearance** gives users control over how their terminal looks, enabling adjustments to match personal preferences.
223 | - **Color Schemes** allow changing the text and background colors, enhancing readability or aesthetics.
224 | - **Fonts** can be modified in type and size to suit individual reading comfort.
225 | - **Transparency** is supported by some terminals, allowing the background to appear transparent for a seamless visual experience.
226 | - **Keyboard Shortcuts** make navigation and actions faster within the terminal.
227 | - **Copy/Paste** shortcuts enable quick copying and pasting of text without using the mouse.
228 | - **Navigation** shortcuts allow users to easily switch between tabs or panes using the keyboard.
229 | - **Scrollback Buffer** enables users to view previous output by scrolling up, ensuring that past terminal output is accessible for review.
230 |
231 | #### Common Terminal Emulators
232 |
233 | | Terminal Emulator | Description | Benefits | Considerations/Drawbacks |
234 | |----------------------|---------------------------------------------------------|------------------------------------------------------|-----------------------------------------------------|
235 | | **GNOME Terminal** | Default terminal emulator on GNOME desktop environments. | Integrated with GNOME, easy to use. | Lacks some advanced customization features. |
236 | | **Konsole** | Default terminal emulator on KDE Plasma desktop environments. | Highly customizable and integrates well with KDE. | Primarily designed for KDE, may not be ideal for other environments. |
237 | | **xterm** | Basic terminal emulator for the X Window System. | Lightweight and highly portable. | Lacks modern features like tabs or split views. |
238 | | **Terminator** | Allows arranging multiple terminals in grids. | Ideal for multitasking with a grid layout. | May be overkill for basic terminal usage. |
239 | | **iTerm2** | Popular terminal emulator for macOS with advanced features. | Offers split panes, hotkeys, and extensive customization. | Only available on macOS. |
240 |
241 | #### Opening a Terminal
242 |
243 | - `Ctrl + Alt + T` (commonly opens the default terminal).
244 | - Navigate to the system's application menu and select the terminal emulator.
245 |
246 | 
247 |
248 | ### Challenges
249 |
250 | 1. Check whether any aliases exist for a command such as `cat` by running `alias cat`.
251 | 2. Display all aliases currently defined in your shell. Simply execute `alias` without any arguments.
252 | 3. Open `~/.bashrc` in a text editor, add a new alias like `alias ll='ls -la'`. Save the file, reopen your terminal, and verify the new alias. To remove it, delete or comment out the line in `~/.bashrc`, then save and restart your terminal.
253 | 4. Use the `find` command to search your system for files containing 'profile' in their name. Try `find / -name '*profile*'`.
254 | 5. Create a new user whose default shell is a non-standard program. For example, `useradd -s /bin/tar username` creates a user with `/bin/tar` as their shell. Be aware of the implications this may have on user interaction with the system.
255 | 6. Change your default shell using `chsh -s /path/to/shell`, then open a new terminal session and explore the new environment. Experiment with commands like `alias`, `set`, and `declare -f` to inspect custom variables, aliases, and functions.
256 |
--------------------------------------------------------------------------------
/notes/introduction.md:
--------------------------------------------------------------------------------
1 | ## Introduction to Linux
2 |
3 | Linux is a versatile and powerful open-source operating system that forms the backbone of countless technological infrastructures, from servers and desktops to mobile devices and embedded systems. Known for its stability, security, and flexibility, Linux provides a robust platform that can be customized to suit a wide range of applications. It is supported by a vibrant global community of developers and users, which contributes to its continuous evolution and ensures a rich ecosystem of software and tools. Whether for personal use, enterprise environments, or innovative tech projects, Linux offers a reliable and adaptable solution for modern computing needs.
4 |
5 | ### What is an Operating System?
6 |
7 | ```
8 | # OS tree:
9 | +-------+
10 | | User |
11 | +-------+
12 | |
13 | -----------------------------------
14 | | | |
15 | +-------------+ +-------------+ +-------------+
16 | | Application | | Application | | Application |
17 | +-------------+ +-------------+ +-------------+
18 | \ | /
19 | \ | /
20 | \ | /
21 | +-----------------------+
22 | | Operating System |
23 | +-----------------------+
24 | | | |
25 | +----------+-------+--------+----------+
26 | | | | | |
27 | +-----+ +-----+ +---------+ +------+
28 | | RAM | | CPU | | Input/ | | ... |
29 | | | | | | Output | | |
30 | +-----+ +-----+ +---------+ +------+
31 | ```
32 |
33 | The operating system manages:
34 |
35 | - Memory (MMU)
36 | - Processes
37 | - Devices (Drivers)
38 | - Storage
39 | - CPU (Scheduling)
40 | - Networking
41 |
42 |
43 | Operating systems are the fundamental layer that enables communication between computer hardware and user applications, evolving over time through a rich interplay of design philosophies and technological innovations. The Unix family, known for its modularity and robust design principles, has given rise to a diverse range of systems that embody both traditional and modern approaches to computing. Alongside this evolution, systems like Linux have emerged, driven by community collaboration and adaptability, offering a dynamic platform that continuously reshapes the computing landscape. In parallel, alternative paradigms, such as those seen in the Windows ecosystem, highlight different methodologies and priorities, collectively creating a broad and intricate tapestry of technologies that support everything from personal devices to complex enterprise infrastructures.
44 |
45 | ```
46 | Operating Systems
47 | ├── Unix & Unix-like Systems
48 | │ ├── Original Unix
49 | │ │ ├── AT&T Unix (System V)
50 | │ │ │ ├── Solaris (SunOS)
51 | │ │ │ ├── AIX (IBM)
52 | │ │ │ └── HP-UX (HP)
53 | │ │ └── BSD Unix
54 | │ │ ├── FreeBSD
55 | │ │ ├── NetBSD
56 | │ │ ├── OpenBSD
57 | │ │ └── Darwin (forms the core of macOS)
58 | │ └── Linux (Unix-like)
59 | │ ├── Debian Family
60 | │ │ ├── Ubuntu
61 | │ │ └── Others (e.g., Linux Mint)
62 | │ ├── Red Hat Family
63 | │ │ ├── Fedora
64 | │ │ ├── CentOS
65 | │ │ └── RHEL (Red Hat Enterprise Linux)
66 | │ └── Other Distributions (e.g., Arch, SUSE)
67 | └── Non-Unix Systems
68 | ├── Windows Family (NT-based and earlier)
69 | └── Others (e.g., DOS, AmigaOS)
70 | ```
71 |
72 | ### Why Learn Linux?
73 |
74 | - Linux has demonstrated **consistent growth** over the past three decades, affirming its enduring relevance in the technology industry and maintaining its popularity among professionals and enthusiasts.
75 | - The **versatility** of Linux is showcased by its use across a wide range of systems, including web servers, supercomputers, IoT devices, and even Tesla's electric cars. Android is built on the Linux kernel, while other Unix-like systems such as the PlayStation system software (based on FreeBSD) and macOS (based on BSD-derived Darwin) share the same Unix heritage, underlining the broad influence of this family of operating systems.
76 | - The **Linux kernel** is engineered to support a vast array of hardware types, from personal computers and servers to mobile devices and embedded systems, making it a highly adaptable operating system for various applications.
77 | - Linux offers a rich selection of **native software**, and many popular applications from Windows and Mac platforms have been ported to run on it, ensuring a broad spectrum of software availability for users.
78 | - Due to its **open-source** and modular nature, Linux can be tailored to meet a wide range of requirements, facilitating diverse applications across different sectors.
79 | - The **Linux community** is robust and continuously contributes to its development and improvement. This community, along with an extensive ecosystem that includes forums, educational resources, tools, and conferences, provides ample support for users seeking to learn and solve problems.
80 | - For businesses, especially startups, Linux is a **cost-effective** solution. It enables the efficient running of websites, databases, and applications without the hefty licensing fees associated with other operating systems. Its ease of installation, use, upgrade, deployment, and maintenance makes it an attractive choice for optimizing operational efficiency.
81 |
82 | ### Before Linux
83 |
84 | ```
85 | Multics Unix GNU Linux
86 | (1960s) (1970s) (1983) (1991)
87 | | | | |
88 | | | | |
89 | -----------------------------------------------------------------
90 | ```
91 |
92 | I. **Multics (Multiplexed Information and Computer Services):**
93 |
94 | An early time-sharing operating system.
95 |
96 | II. **Unix (Uniplexed Information and Computer Services, or Unics):**
97 |
98 | - Developed to overcome many of Multics’ problems.
99 | - Provides a hierarchical file system
100 | - Manages processes
101 | - Offers a command-line interface
102 | - Includes a wide range of utilities
103 |
104 | III. **POSIX (Portable Operating System Interface):**
105 |
106 | - An IEEE 1003.1 standard from the 1980s
107 | - Defines the language interface between application programs and the UNIX operating system, ensuring portability
108 | - Specifies the C library, system interfaces and headers, as well as various commands and utilities
109 |
110 | IV. **GNU (GNU’s Not Unix):**
111 |
112 | - Introduced in 1983 to promote the Free Software concept
113 | - Embodies the freedoms to run, study, modify, and redistribute software
114 | - Uses the GNU General Public License (GPL) to protect these freedoms
115 | - Aims to create a complete free-software operating system, including projects such as the shell, core utilities (e.g., ls), compilers, and libraries (e.g., the C library)
116 |
117 | V. **Linux Kernel:**
118 |
119 | - Introduced by Linus Torvalds
120 | - Licensed under GPL version 2 (GPLv2)
121 | - Compiled using GNU GCC
122 | - Provides a Unix-like operating system with advantages such as low cost, full control, and strong community support
123 | - Serves as the kernel that the GNU project required
124 |
125 | ### The History of Linux
126 |
127 | - In **1971**, UNIX was released by Ken Thompson and Dennis Ritchie, serving as a pioneering operating system that laid the foundation for many future systems, including Linux.
128 | - The GNU Project was established in **1983** by Richard Stallman with the goal of creating a completely free and open-source operating system, setting the stage for the development of Linux.
129 | - In **1987**, Andrew S. Tanenbaum introduced MINIX, a simplified UNIX-like system designed for academic purposes, which later inspired Linus Torvalds in the creation of Linux.
130 | - **1991** marked the release of the first version of the Linux kernel by Linus Torvalds, a student at the University of Helsinki, as a small, experimental project initially compatible only with his own computer.
131 | - In **1992**, Linus Torvalds agreed to license the Linux kernel under the GNU General Public License (GPL), ensuring that it would remain free and open-source as part of the Free Software ecosystem.
132 | - The release of Red Hat Linux in **1994** became a pivotal moment, as it emerged as one of the most popular and influential Linux distributions, contributing significantly to the growth of Linux in the enterprise market.
133 | - The Linux Foundation was formed in **2007**, bringing together various organizations supporting Linux and sponsoring the work of Linus Torvalds, while also leading collaborative development on the Linux kernel and other open-source projects.
134 | - In **2008**, the Android operating system, based on the Linux kernel, was officially released by Google. Android quickly became the dominant operating system for smartphones and tablets, significantly expanding the use and visibility of Linux in the consumer market.
135 | - **2011** saw the release of the Linux 3.0 kernel, a major milestone in Linux development that introduced significant advancements in process and network management, file systems, and driver support.
136 | - The Linux 4.0 kernel was released in **2015**, featuring live kernel patching and numerous enhancements that made Linux more suitable for cloud-based applications.
137 | - In **2017**, the Linux 4.14 kernel introduced improved security features, broader hardware support, and enhanced file system handling, further advancing the operating system's capabilities.
138 | - The release of **Ubuntu 18.04 LTS** in **2018** marked a significant moment for one of the most popular Linux distributions. This version shipped the GNOME desktop environment by default, replacing Unity, and emphasized improvements in security and stability.
139 | - The Linux 5.10 kernel, released in **2020** as a long-term support (LTS) version, brought several major improvements, including enhanced system security, hardware support, and overall performance enhancements.
140 | - In **2022**, the Linux kernel reached version 6.0, representing a new phase in the evolution of the kernel with major updates in hardware support, security features, and optimizations for modern computing environments, including cloud and edge computing.
141 |
142 | ### Understanding a Linux Distribution
143 |
144 | A Linux distribution, often simply referred to as a "distro," is a particular variant of Linux that packages together the Linux kernel and a variety of additional software to create a fully functional operating system.
145 |
146 | Each distribution includes:
147 |
148 | - The **Linux Kernel** is the core component responsible for managing hardware, processes, memory, and peripherals.
149 | - **Libraries** are included in each distribution, providing standard functions like input/output processing, mathematical computations, and other functionalities that various programs can use.
150 | - **System Daemons** are background services that start up during boot time to offer essential system functionalities, such as logging, task scheduling, and network management.
151 | - The inclusion of **Development and Packaging Tools** is essential for compiling and managing software packages, facilitating software installation and updates.
152 | - Each distribution also comes with **Life-cycle Management Utilities** that help manage system updates, configure system settings, and monitor the overall health of the system.
153 |
154 | Before a distribution is released, all of these components are thoroughly tested together for compatibility and interoperability. This ensures a seamless user experience and functionality.
155 |
156 | Linux distributions can be installed and run on a wide range of hardware, including servers, desktops, laptops, and more. They come in numerous variants, each tailored to specific user groups or usage scenarios.
157 |
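To see what your own system is running, you can query the kernel release and, on most modern distributions, the standard `/etc/os-release` metadata file (older or minimal systems may not ship it):

```bash
# Print the release of the running Linux kernel
uname -r

# Print distribution metadata (NAME, VERSION, ID, ...);
# /etc/os-release is provided by most modern distributions
cat /etc/os-release
```

Comparing the two outputs makes the kernel/distribution split concrete: the kernel version is independent of the distribution that packages it.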
158 | Examples of popular Linux distributions include:
159 |
160 | - **Ubuntu** is known for its user-friendly nature, making it a popular recommendation for Linux beginners.
161 | - The **Debian** distribution is renowned for its stability, often used in server environments and serving as the base for other distributions like Ubuntu.
162 | - As a cutting-edge distribution, **Fedora** includes the latest software technologies and is sponsored by Red Hat.
163 | - **openSUSE** is recognized for its robustness and versatility, making it suitable for both server and desktop environments.
164 | - **Cumulus Linux** is a specialized Linux distribution designed specifically for networking hardware.
165 |
166 | ### Challenges
167 |
168 | 1. Understand the distinction between a Linux distribution and a Linux kernel. What role does each one play and how do they interact within the overall Linux operating system?
169 | 2. Where can you find various Linux distributions for download? Explore the different platforms that offer reliable and safe Linux distro downloads.
170 | 3. Is Linux the same as UNIX? Investigate the relationship and differences between these two operating systems. Consider their histories, similarities, differences, and the reasons behind the development of Linux.
171 | 4. Are all Linux distributions free? Examine the various Linux distributions and their pricing models. Consider factors like the availability of professional support, additional services, and enterprise features.
172 | 5. Explore how well Linux operates with various hardware configurations. What are the key considerations when installing Linux on different devices?
173 | 6. The Linux community and ecosystem are vast and vibrant. Where can you find resources, forums, tutorials, and other forms of support that can help you navigate the Linux world?
174 | 7. Linux operates under the GNU General Public License (GPL). What is the significance of this license? How does it affect how Linux can be used, modified, and redistributed?
175 | 8. Linux is often lauded for its security features. What are these features and how do they work to maintain system security?
176 | 9. Linux often involves using a command line interface. What are some of the basic commands that every Linux user should know?
177 |
--------------------------------------------------------------------------------
/notes/task_state_analysis.md:
--------------------------------------------------------------------------------
1 | ## Task-State Analysis for Monitoring Application Processes
2 |
3 | Monitoring the performance of applications often involves keeping an eye on resource usage like CPU load, memory consumption, and disk I/O. However, to truly understand what's happening inside an application, especially one that's multi-threaded, it's helpful to look at the states of its threads over time. Task-State Analysis offers a way to do this by observing how threads transition between different states, such as running, sleeping, or waiting for I/O. This approach provides deeper insights into the application's behavior without the need for intrusive monitoring tools.
4 |
5 | ### Visualizing Threads Within a Process
6 |
7 | To grasp how threads operate within a process, imagine a process as a container that holds multiple threads, each performing its own tasks but sharing the same resources.
8 |
9 | ```
10 | +-------------------------------------+
11 | | Process A |
12 | | (Runs in its own memory space) |
13 | | |
14 | | +-----------+ +-----------+ |
15 | | | Thread 1 | | Thread 2 | |
16 | | +-----------+ +-----------+ |
17 | | | | |
18 | | | Shared Memory | |
19 | | +----------------+ |
20 | | |
21 | +-------------------------------------+
22 | ```
23 |
24 | In this diagram, **Process A** contains **Thread 1** and **Thread 2**, both of which can access shared memory within the process. This setup allows threads to communicate efficiently but also requires careful synchronization to prevent conflicts.
25 |
26 | ### Understanding Thread States
27 |
28 | Every thread (also known as a task) has a state that indicates what it's currently doing. These states help the operating system manage resources and schedule tasks effectively. The common thread states include:
29 |
30 | | State | Meaning | Description |
31 | |-------|-------------------------|---------------------------------------------------------------------------------------|
32 | | `R` | Running | The thread is either currently running on the CPU or is ready to run. |
33 | | `S` | Sleeping | The thread is waiting for an event, such as I/O completion or a signal. |
34 | | `D` | Uninterruptible Sleep | The thread is in a sleep state that cannot be interrupted, usually waiting for I/O operations. |
35 | | `T` | Stopped | The thread has been stopped, often by a signal or debugger. |
36 | | `Z` | Zombie | The thread has finished execution but still has an entry in the process table. |
37 |
38 | ### Sampling Thread States Using `/proc`
39 |
40 | One non-intrusive way to monitor thread states is by sampling data from the `/proc` file system. This virtual file system provides detailed information about running processes and threads.
41 |
42 | For example, to check the state of a specific process, you can look at `/proc/[PID]/stat`, where `[PID]` is the process ID. This file contains various statistics about the process, including its current state.
43 |
44 | ```bash
45 | cat /proc/1234/stat
46 | ```
47 |
48 | The output might look like this (fields are space-separated):
49 |
50 | ```
51 | 1234 (myprocess) S 1000 1234 1234 0 -1 4194560 500 0 0 0 0 0 0 0 20 0 1 0 100 0 0 18446744073709551615 4194304 4198400 140736897651776 0 0 0 0 0 0 0 0 0 17 0 0 0 0 0 0
52 | ```
53 |
54 | Here, the third field (`S`) represents the state of the process, which in this case is `S` for sleeping. By periodically reading this file, you can track how the state changes over time.
55 |
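Extracting the state programmatically has one subtlety: the process name (field 2) is wrapped in parentheses and may itself contain spaces, so it is safer to split on the *last* closing parenthesis and take the first field after it. A minimal Python sketch:

```python
import sys

def read_state(pid):
    """Return the one-letter state of a process from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # comm (field 2) is parenthesized and may contain spaces or ')',
    # so split on the last ')' before reading the remaining fields
    after_comm = data.rsplit(")", 1)[1]
    return after_comm.split()[0]  # the first field after comm is the state

if __name__ == "__main__":
    # PID defaults to "self", i.e. the current process
    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    print(read_state(pid))
```

Reading `/proc/self/stat` this way always reports `R`, since the observing process is itself running at that moment.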
56 | ### Monitoring Thread States with Commands
57 |
58 | To get a snapshot of all running processes and their threads, along with their states, the `ps` command is quite handy. The `-L` option includes individual threads:
59 | 
60 | ```bash
61 | ps -eLo pid,tid,stat,comm
62 | ```
63 |
64 | This command lists the process ID (`pid`), thread ID (`tid`), state (`stat`), and command name (`comm`) for all processes and their threads. An example output might be:
65 |
66 | ```
67 | PID TID STAT COMMAND
68 | 1 1 Ss systemd
69 | 2 2 S kthreadd
70 | 3 3 S rcu_gp
71 | 1234 1234 S myprocess
72 | 1234 1235 R myprocess
73 | ```
74 |
75 | In this output:
76 |
77 | - Process `1234` has two threads: one in a sleeping state (`S`) and one running (`R`).
78 | - The `PID` and `TID` are the same for the main thread of the process.
79 |
80 | By examining which threads are in which states, you can identify if threads are spending too much time waiting or if they're actively running.
81 |
82 | ### Interpreting the Output
83 |
84 | Suppose you notice that many threads are in the `D` state (uninterruptible sleep). This could indicate that they are waiting for I/O operations to complete, which might be a sign of disk bottlenecks.
85 |
86 | To dig deeper, you could use:
87 |
88 | ```bash
89 | ps -eo state,pid,cmd | grep "^D"
90 | ```
91 |
92 | This command filters the list to show only processes in the uninterruptible sleep state. The output could be:
93 |
94 | ```
95 | D 5678 [kjournald]
96 | D 1234 myprocess
97 | ```
98 |
99 | Here, `myprocess` with PID `1234` is in an uninterruptible sleep state, suggesting it's waiting for an I/O operation.
100 |
101 | ### Using `/proc` to Sample Threads Over Time
102 |
103 | By scripting the sampling of thread states, you can collect data over an extended period. For example, a simple Bash script could sample the states every second:
104 |
105 | ```bash
106 | while true; do
107 |   ps -eLo state --no-headers | sort | uniq -c
108 |   sleep 1
109 | done
110 | ```
111 |
112 | This script counts the number of threads in each state every second. Sample output might be:
113 |
114 | ```
115 | 50 R
116 | 200 S
117 | 5 D
118 | ```
119 |
120 | Interpreting this, you might see that most threads are sleeping (`S`), some are running (`R`), and a few are in uninterruptible sleep (`D`).
121 |
122 | ### Tools for Task-State Analysis
123 |
124 | While command-line tools provide valuable insights, specialized tools can offer more detailed analysis.
125 |
126 | #### `htop`
127 |
128 | An interactive process viewer that shows a real-time overview of system processes.
129 |
130 | ```bash
131 | htop
132 | ```
133 |
134 | In `htop`, you can see CPU usage per core, memory usage, and a list of processes with their CPU and memory consumption. You can also sort processes by various criteria.
135 |
136 | #### `perf`
137 |
138 | A powerful profiling tool that can collect performance data.
139 |
140 | ```bash
141 | perf top
142 | ```
143 |
144 | This command shows a live view of the functions consuming the most CPU time, helping identify hotspots in your application.
145 |
146 | ### Application in Database Systems
147 |
148 | Database systems are often multi-threaded and I/O-intensive, making them prime candidates for Task-State Analysis. For example, if a database server experiences slow query performance, monitoring thread states can reveal whether threads are waiting on I/O, locks, or CPU resources.
149 |
150 | Suppose you notice many threads in the `S` state waiting for locks. This could indicate contention and might prompt you to optimize your queries or adjust your database configuration.
151 |
152 | ### Shifting Focus from Resource Utilization
153 |
154 | Traditional monitoring focuses on metrics like CPU and memory usage. While important, these metrics don't always tell the whole story. Task-State Analysis shifts the focus to what threads are actually doing.
155 |
156 | By understanding thread states, you can:
157 |
158 | - Identify if threads are mostly waiting rather than doing work.
159 | - Detect if I/O waits are causing performance issues.
160 | - Determine if there are synchronization problems causing threads to sleep.
161 |
162 | ### Practical Steps to Implement Task-State Analysis
163 |
164 | 1. Use scripts or monitoring tools to collect thread state data at regular intervals.
165 | 2. Look for trends, such as an increasing number of threads in uninterruptible sleep.
166 | 3. Relate the thread states to what the application is doing at the time.
167 | 4. If unusual patterns emerge, delve deeper using more specialized tools or logs.
168 | 5. Based on your findings, optimize code, adjust configurations, or allocate resources as needed.
169 |
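As a sketch of steps 1 and 2, the following Bash function samples the per-thread states of a single process directly from `/proc/<pid>/task` (the PID in the trailing comment is hypothetical):

```bash
# sample_states PID: print a "count state" line for each thread state
# of the given process, read from /proc/PID/task/*/stat.
sample_states() {
    local pid=$1 statfile
    for statfile in /proc/"$pid"/task/*/stat; do
        # The comm field may contain spaces, so strip everything up to
        # the closing ')' before taking the state (the next field).
        sed 's/^.*) //' "$statfile" 2>/dev/null | awk '{print $1}'
    done | sort | uniq -c
}

# Example: sample a (hypothetical) PID 1234 once per second while it runs
# while kill -0 1234 2>/dev/null; do sample_states 1234; sleep 1; done
```

Redirecting each sample to a log file with a timestamp (`date +%s`) turns this into the longitudinal record the analysis needs.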
170 | ### Example Scenario: Diagnosing a Performance Issue
171 |
172 | Imagine an application that has become sluggish. Users report slow response times, and initial monitoring shows that CPU usage is low. Using Task-State Analysis, you sample the thread states and find that a significant number of threads are in the `D` state.
173 |
174 | By examining these threads, you discover they are waiting for disk I/O. Checking the disk performance with `iostat`, you notice high I/O wait times.
175 |
176 | ```bash
177 | iostat -x 1 3
178 | ```
179 |
180 | Sample output:
181 |
182 | ```
183 | avg-cpu: %user %nice %system %iowait %steal %idle
184 | 5.00 0.00 2.00 90.00 0.00 3.00
185 |
186 | Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
187 | sda 0.00 0.00 100.00 50.00 5.00 2.50 70.00 5.00 50.00 30.00 70.00 5.00 75.00
188 | ```
189 |
190 | The high `%iowait` and `await` times indicate disk latency. In this case, upgrading the storage system or optimizing disk usage could alleviate the performance issues.
191 |
192 | ### Understanding the Caveats of Task-State Analysis
193 |
194 | While Task-State Analysis provides valuable insights, it's important to consider:
195 |
196 | - Frequent sampling can introduce overhead. Balance the frequency with the need for timely data.
197 | - Threads can change states rapidly. Sampling might miss brief but significant events.
198 | - Understanding what the thread states mean in the context of your application is crucial.
199 |
200 | ### Combining Task-State Analysis with Other Metrics
201 |
202 | For a comprehensive view, combine Task-State Analysis with other monitoring methods:
203 |
204 | - Monitoring **CPU and Memory Usage** helps identify resource utilization levels, which can be correlated with specific thread states to better understand how each thread impacts overall system performance.
205 | - Regularly reviewing **Application Logs** is essential, as logs often contain error messages or warnings that can shed light on abnormal thread behavior or unexpected application issues.
206 | - Integrating **Network Monitoring** can be particularly useful if threads are frequently waiting on network I/O, as network performance metrics may reveal underlying issues impacting response times.
207 | - **Disk I/O metrics** should also be reviewed, as they help in identifying delays due to storage performance, especially for threads engaged in heavy read and write operations.
208 | - **System-level tracing** tools provide insights into thread transitions and can be valuable for identifying patterns or repeated states that might indicate inefficiencies.
209 | - Combining **user activity monitoring** can add context to Task-State Analysis, as user interactions can directly influence thread states, especially in interactive applications.
210 |
211 | ### Challenges
212 |
213 | 1. Use the `ps` command to view the current states of all threads in a specific process. Record the states and explain the significance of each, such as `R` for running, `S` for sleeping, and `D` for uninterruptible sleep. Then, check the `/proc/[PID]/stat` file for the same process and compare the results with `ps`. Discuss how these commands help monitor thread behavior over time.
214 | 2. Write a Bash script that samples thread states every second for a specific process and logs the count of each state (`R`, `S`, `D`, etc.). Run the script for a few minutes while the process is under load, then analyze the log to determine the predominant thread state. Discuss what the observed states reveal about the application’s behavior and possible bottlenecks.
215 | 3. Identify a process with threads in the `D` (uninterruptible sleep) state, suggesting that it is waiting for I/O. Use `iostat` to measure disk performance during this time and analyze the output to identify potential disk bottlenecks. Discuss how `iowait` can impact application performance and propose ways to address high I/O wait times.
216 | 4. Launch `htop` and configure it to display thread information for a specific process. Observe the states of the threads over time. Discuss how interactive tools like `htop` complement command-line sampling for real-time monitoring of thread behavior.
217 | 5. Use a tool like `dd` or `stress-ng` to simulate high disk I/O on your system. While the tool is running, monitor thread states for various processes using `ps` and `htop`. Record the proportion of threads in the `D` state and explain how simulated disk stress impacts thread states across the system.
218 | 6. Run a multi-threaded application on your system and monitor its threads over time. Pay special attention to any threads in the `S` (sleeping) state and determine if they are waiting for locks or synchronization events. Discuss how sleeping threads might indicate contention issues and propose potential optimizations to reduce waiting times.
219 | 7. If possible, install a database server (like MySQL or PostgreSQL) and run several queries to put it under load. Use `ps` or `top` to observe the states of database threads, particularly looking for `D` or `S` states. Explain how Task-State Analysis can help diagnose database performance issues related to I/O waits or lock contention.
220 | 8. Use both `scp` and `sftp` to transfer a large file and monitor the task states of each tool’s threads during the transfer. Record the observed states and transfer times, then compare the results. Discuss which protocol is more efficient in terms of thread activity and overall performance.
221 | 9. Use the `perf top` command while running a multi-threaded application to identify functions that are consuming significant CPU time. Discuss how `perf` can supplement Task-State Analysis by providing insights into CPU-bound threads and hotspots in the code, offering a more complete view of application performance.
222 | 10. Imagine a scenario where a web application is experiencing slow response times. Use Task-State Analysis to monitor the application’s threads over time, identifying threads that are predominantly in the `S` or `D` state. Based on your observations, suggest possible reasons for the performance issue and recommend adjustments, such as increasing resources or optimizing specific parts of the application.
223 |
--------------------------------------------------------------------------------
/notes/files_and_dirs.md:
--------------------------------------------------------------------------------
1 | ## Understanding Files and Directories
2 |
3 | One of the fundamental skills in Linux is navigating and managing files and directories effectively. Here, we focus on the core concepts that will facilitate your work within the file system.
4 |
5 | TODO:
6 | - rsync
7 |
8 | ### Types of File Paths
9 |
10 | The two main types of paths are **Absolute Path** and **Relative Path**.
11 |
12 | #### Absolute Path
13 |
14 | An **absolute path** specifies the complete location of a file or directory from the root directory, indicated by a forward slash (`/`). It provides the full pathway from the root to the target file or directory, making it independent of the current working directory. As a result, absolute paths remain consistent regardless of the user's current position within the file system.
15 |
16 | Example of Absolute Path:
17 |
18 | ```bash
19 | /home/user/notes/file_name.txt
20 | ```
21 |
22 | This path directs to the file `file_name.txt` located in the `notes` directory, which is under the `user` directory within the `home` directory, starting from the root of the file system (`/`).
23 |
24 | #### Relative Path
25 |
26 | A **relative path** describes the location of a file or directory based on the current working directory. Unlike absolute paths, relative paths do not start from the root. Instead, they specify the path from the user's current location in the file system.
27 |
28 | Example of Relative Path:
29 |
30 | ```bash
31 | notes/file_name.txt
32 | ```
33 |
34 | This path points to the file `file_name.txt` located in the `notes` directory under the current working directory.
35 |
36 | Key Differences:
37 |
38 | - Absolute paths provide a consistent and unchanging route to a file or directory, useful for scripts or commands that need to run regardless of the current directory. In contrast, relative paths offer flexibility and brevity, which can be advantageous when navigating within a specific directory hierarchy.
39 | - While absolute paths reduce the risk of errors due to their clarity and specificity, relative paths require careful handling as they depend on the user's current location within the file system. Therefore, absolute paths are often preferred for their reliability, whereas relative paths are convenient for quick access within a known directory structure.
40 |
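The difference is easy to demonstrate in a shell session. Below, the same relative path resolves differently depending on the working directory (the paths under `/tmp` are illustrative):

```bash
# Set up an example tree
mkdir -p /tmp/demo/notes
echo "hello" > /tmp/demo/notes/file_name.txt

cd /tmp/demo
cat notes/file_name.txt            # relative: resolved against /tmp/demo
cat /tmp/demo/notes/file_name.txt  # absolute: works from any directory

cd /
# cat notes/file_name.txt          # would now fail: resolved against /
```

The absolute path keeps working after the final `cd` because it is anchored at the root rather than at the current directory.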
41 | ### Navigation and File Manipulation
42 |
43 | Linux provides several commands to navigate through the file system and manipulate files and directories. Here are a few fundamental commands:
44 |
45 | #### Print Working Directory (`pwd`)
46 |
47 | The `pwd` command displays the absolute path of the current directory.
48 |
49 | ```bash
50 | pwd
51 | ```
52 |
53 | #### Change Directory (`cd`)
54 | 
55 | The `cd` command allows you to change your current working directory.
56 |
57 | To move to a different directory, specify its path:
58 |
59 | ```bash
60 | cd /path/to/directory
61 | ```
62 |
63 | To navigate to your home directory, use:
64 |
65 | ```bash
66 | cd ~
67 | ```
68 |
69 | #### List Directory Contents (`ls`)
70 | 
71 | The `ls` command lists the contents of a directory.
72 |
73 | To view the directory's contents, use:
74 |
75 | ```bash
76 | ls
77 | ```
78 |
79 | For a more detailed listing, including hidden files and additional information (file permissions, the number of links, the owner, the size, and the last modification date), use:
80 |
81 | ```bash
82 | ls -al
83 | ```
84 |
85 | To display specific file groups using wildcards, for example, to list all `.txt` files in the `/home/mydirectory` directory, use:
86 |
87 | ```bash
88 | ls /home/mydirectory/*.txt
89 | ```
90 |
91 | ### Managing Files and Directories
92 |
93 | Working with files and directories is a key aspect of Linux. Various commands facilitate the creation, manipulation, and inspection of these resources.
94 |
95 | #### Creating Files and Directories
96 |
97 | In Linux, you can use the `touch` command to create an empty file. For example, to create a new file named `sample.txt`, you would type:
98 |
99 | ```bash
100 | touch sample.txt
101 | ```
102 |
103 | Creating a new directory involves the `mkdir` command. To create a directory called `example_dir`, you would run:
104 |
105 | ```bash
106 | mkdir example_dir
107 | ```
108 |
109 | #### Copying Files and Directories
110 |
111 | The `cp` command copies files and directories from one location to another. For instance, to copy a file named `file.txt` to a directory named `directory`, you would use:
112 |
113 | ```bash
114 | cp file.txt directory/
115 | ```
116 |
117 | If you need to copy a directory and its contents, the `-r` (recursive) option is required:
118 |
119 | ```bash
120 | cp -r source_dir destination_dir
121 | ```
122 |
123 | There are several options that can modify the behavior of `cp`:
124 |
125 | - The `-a` option, also known as the archive option, preserves file attributes and symbolic links within the copied directories.
126 | - The `-v` option, known as the verbose option, provides detailed output of the operation.
127 |
128 | #### Moving and Renaming Files and Directories
129 |
130 | The `mv` command moves or renames files and directories. To rename a file from `oldname.txt` to `newname.txt`, you would execute:
131 |
132 | ```bash
133 | mv oldname.txt newname.txt
134 | ```
135 |
136 | The same `mv` command also moves a file from one directory to another. For instance, to move `file.txt` from the current directory to another directory called `dir1`, you would use:
137 |
138 | ```bash
139 | mv file.txt dir1/
140 | ```
141 |
142 | #### Removing Files and Directories
143 |
144 | Files and directories can be removed with the `rm` command. To remove a file named `file.txt`, you would type:
145 |
146 | ```bash
147 | rm file.txt
148 | ```
149 |
150 | To remove an entire directory and its contents, you need to include the `-r` (recursive) option:
151 |
152 | ```bash
153 | rm -r directory_name
154 | ```
155 |
156 | Warning: The `rm` command is powerful and potentially destructive, especially when used with the `-r` (recursive) and `-f` (force) options. Use it with caution.
157 |
158 | ### Viewing and Inspecting File Contents
159 |
160 | There are various ways to view and inspect the contents of files in Unix-like operating systems. We can use the `cat`, `more`, `less`, `head`, and `tail` commands to achieve this.
161 |
162 | #### Displaying File Content with `cat`
163 |
164 | The `cat` (concatenate) command is a standard tool used to display the entire contents of a file. It writes the contents of a file to standard output (the terminal). For instance, to display the content of a file named `file.txt`, use the following command:
165 |
166 | ```bash
167 | cat file.txt
168 | ```
169 |
170 | However, keep in mind that `cat` is not ideal for large files because it dumps all content to the terminal at once.
171 |
172 | #### Paginating File Content with `more` and `less`
173 | 
174 | For more manageable file viewing, particularly with larger files, the `more` and `less` commands are useful. They display content page by page, making it easier to digest.
175 | 
176 | The `more` command shows the content of a file, pausing after each screenful:
177 |
178 | ```bash
179 | more file.txt
180 | ```
181 |
182 | On the other hand, the `less` command, a more advanced and flexible version of `more`, allows both forward and backward navigation through the file:
183 |
184 | ```bash
185 | less file.txt
186 | ```
187 |
188 | While viewing files with `more` or `less`, you can use these keys:
189 | 
190 | - `Enter` moves forward by one line.
191 | - `Space` moves forward by one page.
192 | - `b` moves one page backward (available in the `less` pager only).
193 | - `q` quits the pager (works in both `more` and `less`).
194 | - `/pattern` searches for the next occurrence of the specified text "pattern" (available in the `less` pager only).
195 |
196 | #### Viewing File Parts with `head` and `tail`
197 | 
198 | The `head` and `tail` commands output the beginning and the end of files, respectively.
199 | 
200 | The `head` command outputs the first part of files. By default, it writes the first ten lines of each file to standard output. If more than one file is specified, it precedes each set of output with a header identifying the file. For instance, to display the first ten lines of `file.txt`, use:
201 |
202 | ```bash
203 | head file.txt
204 | ```
205 |
206 | Conversely, the `tail` command outputs the last part of files. By default, it writes the last ten lines of each file to standard output. If more than one file is specified, it precedes each set of output with a header identifying the file. For instance, to display the last ten lines of `file.txt`, use:
207 |
208 | ```bash
209 | tail file.txt
210 | ```
211 |
212 | The number of lines can be adjusted using the `-n` option followed by the desired number of lines. For example, to view the last 20 lines of `file.txt`, you would use:
213 |
214 | ```bash
215 | tail -n 20 file.txt
216 | ```
217 |
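`head` and `tail` can also be combined to print an arbitrary range of lines. A quick sketch, using a generated 30-line sample file:

```bash
# Create a 30-line sample file
seq 1 30 > sample.txt

# head keeps the first 20 lines; tail keeps the last 5 of those,
# so the pipeline prints lines 16 through 20
head -n 20 sample.txt | tail -n 5
```

The same pattern works for any range: `head -n END file | tail -n $((END - START + 1))`.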
218 | ### File Name Expansion Techniques
219 |
220 | Brace expansion and globs are powerful tools in Linux for dealing with filenames that conform to certain patterns or templates. They are conceptually different and serve different purposes, but both can be used to save time and effort when working with files.
221 |
222 | #### Brace Expansion
223 |
224 | Brace expansion is a powerful feature in Unix-like shells that allows you to generate a series of strings from a pattern. This feature can be particularly useful for creating sequences of commands or filenames efficiently. Brace expansion uses a list of comma-separated values enclosed in curly braces `{}`, which can be prefixed or suffixed with additional text.
225 |
226 | ##### Basic Example
227 |
228 | The following command demonstrates a simple use of brace expansion:
229 |
230 | ```bash
231 | echo a{b,c}d
232 | ```
233 |
234 | This command will produce the output:
235 |
236 | ```
237 | abd acd
238 | ```
239 |
240 | Here's how it works:
241 |
242 | - The expression `{b,c}` is expanded into `b` and `c`.
243 | - The resulting strings are then combined with the prefix `a` and the suffix `d`, producing `abd` and `acd`.
244 |
245 | ##### Generating Multiple Strings
246 |
247 | Brace expansion can be used to generate multiple strings from a single pattern, which can be particularly useful for batch operations. For example, to create a series of files with a common base name but varying in both number and a secondary identifier, you can use:
248 |
249 | ```bash
250 | touch file{1..4}{a..f}
251 | ```
252 |
253 | This command will create 24 files, named from `file1a` through `file4f`. The `{1..4}` range generates numbers 1 through 4, while `{a..f}` generates letters from `a` to `f`. The `touch` command then creates a file for each combination.
254 |
255 | ##### Advanced Usage
256 |
257 | Brace expansion can also handle nested patterns, allowing for complex combinations. For instance:
258 |
259 | ```bash
260 | echo {A,B{1..3},C}
261 | ```
262 |
263 | This command will expand to:
264 |
265 | ```
266 | A B1 B2 B3 C
267 | ```
268 |
269 | Here, the inner brace `{1..3}` is expanded first, resulting in `B1`, `B2`, and `B3`. The final list includes the strings `A`, `B1`, `B2`, `B3`, and `C`.
270 |
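Brace expansion pairs naturally with `mkdir -p` for laying out directory trees in one command. A sketch, assuming a brace-aware shell such as Bash (the `project` layout is made up):

```bash
# Create a small project skeleton in a single command; nested braces
# expand to src, docs, tests/unit, and tests/integration
mkdir -p project/{src,docs,tests/{unit,integration}}

# Verify the resulting tree
ls -R project
```

Because the expansion happens before `mkdir` runs, the command receives four separate directory arguments.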
271 | #### Globs
272 |
273 | Globs are pattern-matching tools used in Unix-like systems to match filenames or directories based on wildcard characters. Unlike brace expansion, which generates new strings, globs match existing files and directories against a pattern. This is particularly useful for performing operations on multiple files with similar names or extensions.
274 |
275 | ##### Common Wildcards
276 |
- **`*` (Asterisk)**: Matches any number of characters, including none. For example, `*.txt` matches all files with a `.txt` extension in the current directory (hidden files, whose names begin with a dot, are not matched by default).
278 | - **`?` (Question Mark)**: Matches exactly one character. For example, `file?.txt` matches `file1.txt` but not `file12.txt`.
279 | - **`[...]` (Square Brackets)**: Matches any one character within the brackets. For example, `file[12].txt` matches `file1.txt` and `file2.txt` but not `file3.txt`.
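
A quick way to see these wildcards in action is to create a few throwaway files (the names below are purely illustrative) and compare what each pattern matches:

```bash
# Work in a scratch directory with some sample files
cd "$(mktemp -d)"
touch file1.txt file2.txt file3.txt file12.txt notes.md

ls *.txt        # all four .txt files, but not notes.md
ls file?.txt    # file1.txt file2.txt file3.txt (not file12.txt)
ls file[12].txt # file1.txt file2.txt
```

Note that `file12.txt` is matched by `*.txt` but not by `file?.txt`, because `?` stands for exactly one character.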
280 |
281 | ##### Usage Examples
282 |
283 | I. Listing Files:
284 |
285 | ```bash
286 | ls *.txt
287 | ```
288 |
289 | This command lists all files with a `.txt` extension in the current directory.
290 |
291 | II. Copying Files:
292 |
293 | ```bash
294 | cp image?.png /backup/
295 | ```
296 |
297 | This command copies files like `image1.png`, `image2.png`, etc., to the `/backup/` directory.
298 |
299 | ##### Comparison to Regular Expressions
300 |
301 | While globs use wildcard characters to match patterns, they differ from regular expressions (regex) in syntax and behavior. Understanding these differences is crucial for using each tool appropriately.
302 |
303 | | Wildcard | Description in Globs | Description in Regex |
304 | | -------- | ------------------------------- | ----------------------------------------- |
305 | | `*` | Matches any number of characters | Matches zero or more of the preceding element |
306 | | `?` | Matches exactly one character | Makes the preceding element optional |
307 | | `.` | Matches the dot character literally | Matches any single character except newline |
308 |
309 | - In **globs**, `*` can match any number of characters, while `?` matches exactly one character. For example, `file*` matches `file`, `filename`, `file123`, etc.
310 | - In **regex**, `*` matches zero or more occurrences of the preceding element, and `?` makes the preceding element optional. For example, `file.*` matches `file` followed by any character sequence, and `colou?r` matches both `color` and `colour`.
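
The difference is easy to demonstrate side by side: the same characters behave differently when `ls` treats them as a glob and when `grep -E` treats them as a regex. The filenames here are only for illustration:

```bash
# Scratch directory with sample files
cd "$(mktemp -d)"
touch color colour colon

# Glob: `?` must match exactly one character,
# so only "colour" fits the pattern colo?r
ls colo?r
# colour

# Regex: `u?` makes the preceding "u" optional,
# so both spellings match
ls | grep -E '^colou?r$'
# color
# colour
```

This is why a glob pattern pasted into `grep` (or a regex pasted into `ls`) rarely does what you expect.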
311 |
312 | ### Challenges
313 |
314 | 1. In Linux, how can you recognize files that are hidden?
315 | 2. What symbol is used to denote the top-most directory in the Linux file system hierarchy?
316 | 3. List several commands that can be used to display the contents of a file in the Linux command line. Discuss the differences in their functionalities.
317 | 4. Practice navigating through various directories such as your home directory, the root directory (`/`), `/var/log`, and your desktop using the `cd` command. Try employing both relative and absolute paths. Explain the differences between these two types of paths.
318 | 5. Use the `ls` command to enumerate the files in your current directory. Try using different options to reveal hidden files, sort files by their modification dates, and list files in a long detailed format. Discuss the output in each case.
319 | 6. Display the contents of hidden files from your home directory. Utilize commands like `more`, `less`, and `cat`. Discuss the differences between the outputs of these commands.
320 | 7. Construct a temporary directory in your home directory. Create three empty files in the temporary directory using the `touch` command. Use the `ls` command and redirect its output into each of these files. Then, display the contents of the files on the terminal.
321 | 8. Replicate the entire contents of the `temp` directory to your home directory. Explain the options you used with the `cp` command to accomplish this.
322 | 9. Create another temporary directory in your home directory. Generate three text files in this temporary directory and redirect some text into each file using the `>` operator. Now, move this temporary directory to a new location. Discuss the steps you took to complete this task.
323 | 10. Practice employing globs to match against the names of existing files. For example, construct a command that will display all files in your current directory with filenames consisting of exactly five characters. Discuss how you crafted your glob pattern to achieve this.
324 |
--------------------------------------------------------------------------------