1 | # Red Hat
2 |
- [RHCSA 8](#rhcsa-8)
- [RHCE 9](#rhce-9)
5 |
6 | ## RHCSA 8
7 |
- [Understand and use essential tools](#understand-and-use-essential-tools)
- [Create simple shell scripts](#create-simple-shell-scripts)
- [Operate running systems](#operate-running-systems)
- [Configure local storage](#configure-local-storage)
- [Create and configure file systems](#create-and-configure-file-systems)
- [Deploy, configure, and maintain systems](#deploy-configure-and-maintain-systems)
- [Manage basic networking](#manage-basic-networking)
- [Manage users and groups](#manage-users-and-groups)
- [Manage security](#manage-security)
- [Manage containers](#manage-containers)
- [RHCSA Exercises](#rhcsa-exercises)
19 |
20 | ### Understand and use essential tools
21 |
1. Programmable completion for bash is provided by the bash-completion package. To install it:
23 | ```shell
24 | sudo dnf install bash-completion
25 | ```
26 |
27 | 1. Access a shell prompt and issue commands with correct syntax
28 |
29 | * Common commands and their options, as well as vim usage, are shown below:
| Command        | Options | Description |
|----------------|---------|-------------|
| ls             | -h (human readable)<br>-a (show hidden)<br>-l (detailed)<br>-lt (newest file first)<br>-ltr (oldest file first) | List files and directories |
| pwd            | | Print working directory |
| cd             | ~ (home)<br>/ (root)<br>- (switch)<br>.. (parent) | Change directories |
| who            | whoami (show user) | Show logged in users |
| w              | | Show logged in users with more detail |
| uptime         | | Show system uptime |
| logname        | | Show real username (if using su) |
| id             | | Show a user's UID, username, GID etc. |
| groups         | | List groups for users |
| last           | | List all user logins and system reboots |
| lastb          | | List all failed login attempts |
| lastlog        | | Show the most recent login of each user |
| uname          | -a (details) | System information |
| hostnamectl    | set-hostname | View or set the hostname |
| clear          | | Clear the screen |
| timedatectl    | set-time<br>list-timezones<br>set-timezone | View or set the system time and timezone |
| date           | --set | View or set the system date |
| which          | | Show path to a command |
| wc             | | Word count |
| lspci          | -m (legible) | PCI buses details |
| lsusb          | | USB buses details |
| lscpu          | | Processor details |
| gzip/bzip2     | -d (uncompress) | Compress files |
| gunzip/bunzip2 | | Uncompress files |
| tar            | -c (create)<br>-f (specify name)<br>-v (verbose)<br>-r (append to existing)<br>-x (extract)<br>-z (compress with gzip)<br>-j (compress with bzip2) | Archive files |
| star           | | Enhanced tar |
| man            | -k (keyword)<br>-f (short description) | Manual |
| mandb          | | Update the man database |
| ssh            | -l (as different user) | SSH to another Linux system |
| tty            | | Display terminal name |
| whatis         | | Search the man database for a short description |
| info           | | More detailed than man |
| apropos        | | Search commands in the man database |
| grep           | -n (show line numbers)<br>-v (pattern exclusion)<br>-i (case insensitive)<br>-E (use alternation)<br>-w (word match) | Find text |
66 |
67 | | Key | Description |
68 | |--------------------------|---------------------------------|
69 | | i | Change to insert mode |
70 | | h, j, k, l | Move left, down, up, right |
71 | | w, b, e, ge | Move word at a time |
72 | | n[action] | Do n times |
73 | | x | Remove a character |
74 | | a | Append |
75 | | f[char] | Move to next given char in line |
76 | | F[char] | Move to previous char in line |
77 | | ; and , | Repeat last f or F |
78 | | /yourtext and then: n, N | Search text |
79 | | d[movement] | Delete by giving movement |
80 | | r[char] | Replaces character below cursor |
81 | | 0, $ | Move to start/end of line |
82 | | o, O | Add new line |
| %                        | Jump to matching parenthesis    |
84 | | ci[movement] | Change inside of given movement |
85 | | D | Delete to end of line |
86 | | S | Clear current line |
87 | | gg / G | Move to start / end of buffer |
88 | | yy | Copy current line |
89 | | p | Paste copied text after cursor |
90 |
91 | 1. Use input-output redirection (>, >>, |, 2>, etc.)
92 | * The default locations for input, output, and error are referred to as standard input (stdin), standard output (stdout), and standard error (stderr).
93 |
94 | * Standard input redirection can be done to have a command read the required information from an alternative source, such as a file, instead of the keyboard. For example:
95 | ```shell
96 | cat < /etc/cron.allow
97 | ```
98 |
99 | * Standard output redirection sends the output generated by a command to an alternative destination, such as a file. For example:
100 | ```shell
101 | ll > ll.out
102 | ```
103 |
* Standard error redirection sends the error messages generated by a command to an alternative destination, such as a file. For example:
```shell
ls /nonexistent 2> outerr.out
```
108 |
109 | * Instead of > to create or overwrite, >> can be used to append to a file.
110 |
* To redirect both stdout and stderr to a file:
112 | ```shell
113 | echo test >> result.txt 2>&1
114 | ```
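
* The redirection operators above can be combined. A short sketch, using arbitrary throwaway file names (sample.txt, out.txt, err.txt, all.txt):
```shell
# Create a sample file, then exercise each redirection operator
echo "first line" > sample.txt                   # > creates or overwrites
echo "second line" >> sample.txt                 # >> appends
wc -l < sample.txt                               # < redirects stdin
ls sample.txt missing.txt > out.txt 2> err.txt   # split stdout and stderr
cat sample.txt out.txt > all.txt 2>&1            # send both streams to one file
```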
115 |
116 | 1. Use grep and regular expressions to analyse text
117 | * The grep command is used to find text. For example:
118 | ```shell
119 | grep user100 /etc/passwd
120 | ```
121 | * Common regular expression parameters are shown below:
| Symbol | Description |
|--------|-------------|
| ^      | Beginning of a line |
| $      | End of a line |
| \|     | Alternation (or), with -E |
| .      | Any single character |
| *      | Zero or more of the preceding character |
| ?      | Zero or one of the preceding character, with -E |
| []     | Range or set of characters |
| \      | Escape character |
| ''     | Shell quoting: masks the meaning of enclosed special characters |
| ""     | Shell quoting: masks the meaning of all enclosed special characters except \, $ and backquotes |
134 |
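* A sketch of the common grep options against a throwaway sample file (the file name and contents below are made up for illustration):
```shell
# Build a small sample file to search (contents are arbitrary):
printf 'root:x:0\nuser100:x:1000\nalice:x:1001\n' > passwd.sample

grep -n 'user100' passwd.sample          # show matches with line numbers
grep -E '^(root|alice):' passwd.sample   # alternation requires -E
grep -v 'root' passwd.sample             # exclude matching lines
grep -i 'ALICE' passwd.sample            # case-insensitive match
```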
135 | 1. Access remote systems using SSH
136 |
137 | * Secure Shell (SSH) provides a secure mechanism for data transmission between source and destination systems over IP networks.
138 |
139 | * SSH uses encryption and performs data integrity checks on transmitted data.
140 |
* The SSH protocol version and other server settings are defined in `/etc/ssh/sshd_config`.
142 |
143 | * The most common authentication methods are Password-Based Authentication and Public/Private Key-Based Authentication.
144 |
145 | * The command *ssh-keygen* is used to generate keys and place them in the .ssh directory, and the command *ssh-copy-id* is used to copy the public key file to your account on the remote server.
146 |
* TCP Wrappers is a host-based mechanism that is used to limit access to wrappers-aware TCP services on the system by inbound clients. The two files `/etc/hosts.allow` and `/etc/hosts.deny` are used to control access. The .allow file is referenced before the .deny file. The format of the files is `<service>:<client>`. Note that TCP Wrappers is deprecated and no longer shipped in RHEL 8.
148 |
149 | * All messages related to TCP Wrappers are logged to the `/var/log/secure` file.
150 |
151 | * To login using SSH:
152 | ```shell
153 | ssh user@host
154 | ```
155 |
156 | 1. Log in and switch users in multiuser targets
157 |
* A user can switch to another user using the *su* command (e.g. *su - targetUser*, where the dash ensures the target user's login scripts are run). Alternatively, with appropriate sudo rights, the *-i* option to sudo runs a login shell as the target user:
```shell
sudo -i -u targetUser
```
162 |
* To run a single command as root without switching users:
```shell
sudo <command>
```
167 |
168 | * The configuration for which users can run which commands using sudo is defined in the `/etc/sudoers` file. The visudo command is used to edit the sudoers file. The sudo command logs successful authentication and command data to `/var/log/secure`.
169 |
170 | 1. Archive, compress, unpack, and decompress files using tar, star, gzip, and bzip2
171 |
172 | * To archive using tar:
173 | ```shell
174 | tar cvf myTar.tar /home
175 | ```
176 |
177 | * To unpack using tar:
178 | ```shell
179 | tar xvf myTar.tar
180 | ```
181 |
182 | * To compress using tar and gzip:
183 | ```shell
184 | tar cvfz myTar.tar /home
185 | ```
186 |
187 | * To compress using tar and bzip2:
188 | ```shell
189 | tar cvfj myTar.tar /home
190 | ```
191 |
* To decompress and unpack using tar and gzip:
```shell
tar xvfz myTar.tar
```

* To decompress and unpack using tar and bzip2:
```shell
tar xvfj myTar.tar
```
201 |
* The star command is an enhanced version of tar. It also supports SELinux security contexts and extended file attributes. Its options are similar to those of tar.
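
* The archive and extraction commands above can be verified end to end. A sketch using arbitrary names (demo, demo.tar.gz, restore):
```shell
mkdir -p demo && echo "hello" > demo/file1   # sample data (names are arbitrary)
tar cvfz demo.tar.gz demo                    # archive and compress with gzip
tar tf demo.tar.gz                           # list the archive contents
mkdir -p restore
tar xvfz demo.tar.gz -C restore              # extract into another directory
cat restore/demo/file1                       # the original data is back
```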
203 |
204 |
205 | 1. Create and edit text files
206 |
* To create an empty file, or to create a file from keyboard input (press *ctrl+d* to finish):
```shell
touch file
cat > newfile
```
212 |
213 | * To create a file using vim:
214 | ```shell
215 | vi file
216 | ```
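
* A file can also be created non-interactively using a here-document (the file name greeting.txt is an arbitrary example):
```shell
# 'EOF' in quotes prevents variable expansion inside the document
cat > greeting.txt << 'EOF'
Hello World
EOF
cat greeting.txt    # prints: Hello World
```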
217 |
218 | 1. Create, delete, copy, and move files and directories
219 |
220 | * To create a directory:
221 | ```shell
222 | mkdir directory
223 | ```
224 |
225 | * To move a file or directory:
226 | ```shell
227 | mv item1 item2
228 | ```
229 |
230 | * To copy a file or directory:
231 | ```shell
232 | cp item1 item2
233 | ```
234 |
235 | * To remove a file:
236 | ```shell
237 | rm file1
238 | ```
239 |
240 | * To remove an empty directory:
241 | ```shell
242 | rmdir directory
243 | ```
244 |
245 | * To remove a non-empty directory:
246 | ```shell
247 | rm -r directory
248 | ```
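
* The commands above can be chained into a small round trip; all names below are arbitrary:
```shell
mkdir -p projects/archive                  # create nested directories
touch projects/notes.txt                   # create an empty file
cp projects/notes.txt projects/archive/    # copy the file
mv projects/notes.txt projects/todo.txt    # rename (move) the file
ls projects projects/archive               # inspect the results
rm -r projects                             # remove everything again
```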
249 |
250 | 1. Create hard and soft links
251 |
252 | * A soft link associates one file with another. If the original file is removed the soft link will point to nothing. To create a soft link to file1:
253 | ```shell
254 | ln -s file1 softlink
255 | ```
256 |
257 | * A hard link associates multiple files to the same inode making them indistinguishable. If the original file is removed, you will still have access to the data through the linked file. To create a hard link to file1:
258 | ```shell
259 | ln file1 hardlink
260 | ```
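
* A minimal sketch of the difference between the two link types (file names are arbitrary):
```shell
echo "data" > file1
ln -s file1 softlink          # soft link stores the path to file1
ln file1 hardlink             # hard link shares file1's inode
stat -c '%h' file1            # link count is now 2
rm file1                      # remove the original
cat hardlink                  # data is still reachable via the hard link
cat softlink 2>/dev/null || echo "dangling soft link"
```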
261 |
262 | 1. List, set, and change standard ugo/rwx permissions
263 |
264 | * Permissions are set for the user, group, and others. User is the owner of the file or the directory, group is a set of users with identical access defined in `/etc/group`, and others are all other users. The types of permission are read, write, and execute.
265 |
266 | * Permission combinations are shown below:
267 | | Octal Value | Binary Notation | Symbolic Notation | Explanation |
268 | |-------------|-----------------|-------------------|---------------------------------------|
269 | | 0 | 000 | --- | No permissions. |
270 | | 1 | 001 | --x | Execute permission only. |
271 | | 2 | 010 | -w- | Write permission only. |
272 | | 3 | 011 | -wx | Write and execute permissions. |
273 | | 4 | 100 | r-- | Read permission only. |
274 | | 5 | 101 | r-x | Read and execute permissions. |
275 | | 6 | 110 | rw- | Read and write permissions. |
276 | | 7 | 111 | rwx | Read, write, and execute permissions. |
277 |
278 | * To grant the owner, group, and others all permissions using the *chmod* command:
279 | ```shell
280 | chmod 777 file1
281 | ```
282 |
283 | * The default permissions are calculated based on the umask. The default umask for root is 0022 and 0002 for regular users (the leading 0 has no significance). The pre-defined initial permissions are 666 for files and 777 for directories. The umask is subtracted from these initial permissions to obtain the default permissions. To change the default umask:
284 | ```shell
285 | umask 027
286 | ```
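
* The effect can be checked directly: 027 masks write for the group and all permissions for others. A sketch, assuming a scratch directory (f1 and d1 are arbitrary names):
```shell
umask 027                 # applies to this shell session only
touch f1 && mkdir -p d1   # create a new file and directory
stat -c '%a' f1           # files: 666 masked by 027 -> 640
stat -c '%a' d1           # directories: 777 masked by 027 -> 750
```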
287 |
* Every file and directory has an owner and an owning group. By default, the creator assumes ownership and the creator's primary group becomes the owning group. To change the owner or group of a file or directory:
289 | ```shell
290 | useradd user100
291 | chown user100 item1
292 | chgrp user100 item1
293 | ```
294 |
* To change the owner and group at the same time:
```shell
chown user100:user100 item1
```
298 | * Note that the -R option must be used to recursively change all files in a directory.
299 |
300 | 1. Locate, read, and use system documentation including man, info, and files in `/usr/share/doc`
301 |
* The *man* command can be used to view help for a command. To search for a command based on a keyword, the *apropos* command or *man* with the -k option can be used. The *mandb* command is used to build the man database.
303 |
* To search for a command based on a keyword occurring anywhere in its man page:
```shell
man -K <keyword>
```
308 |
309 | * The *whatis* command can be used to search for a command in the man database for a short description.
310 |
311 | * The *info* command provides more detailed information than the *man* command.
312 |
313 | * The `/usr/share/doc` directory contains documentation for all installed packages under sub-directories that match package names followed by their version.
314 |
315 | ### Create simple shell scripts
316 |
317 | 1. Conditionally execute code (use of: if, test, [], etc.)
318 |
319 | * An example using if and test statements is shown below:
320 | ```shell
321 | #!/bin/bash
322 | ping -c 1 $1
323 | if test "$?" -eq "0"; then
324 | echo "$1 IP is reachable"
325 | else
326 | echo "$1 IP is not reachable"
327 | fi
328 | exit
329 | ```
330 |
* Input arguments can be passed in after the script name, with *$1* being the first input argument. The *$?* term expands to the exit status of the most recently executed command. When using *echo* the *-e* argument can be used to print characters such as new lines.
332 |
333 | * An example using a case statement is shown below:
334 | ```shell
335 | #!/bin/bash
now=$(date +%a)
337 | case $now in
338 | Mon)
339 | echo "Full Backup";
340 | ;;
341 | Tue|Wed|Thu|Fri)
342 | echo "Partial Backup";
343 | ;;
344 | Sat|Sun)
345 | echo "No Backup";
346 | ;;
347 | *) ;;
348 | esac
349 | exit
350 | ```
351 |
352 | * An example using [] is shown below:
353 | ```shell
354 | #!/bin/bash
355 | ping -c 1 $1
if [ "$?" -eq 0 ]; then
357 | echo "$1 IP is reachable"
358 | else
359 | echo "$1 IP is not reachable"
360 | fi
361 | exit
362 | ```
363 |
364 | 1. Use Looping constructs (for, etc.) to process file, command line input
365 |
366 | * An example of a for loop is shown below:
367 | ```shell
368 | #!/bin/bash
369 | for file in ./*.log
370 | do
371 | mv "${file}" "${file}".txt
372 | done
373 | exit
374 | ```
375 |
376 | * An example of a while loop is shown below:
377 | ```shell
378 | #!/bin/bash
input="/home/kafka.log"
while IFS= read -r line
381 | do
382 | echo "$line"
383 | done < "$input"
384 | exit
385 | ```
386 |
387 | 1. Process script inputs ($1, $2, etc.)
388 |
389 | * The first variable passed into a script can be accessed with *$1*.
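
* A sketch of the positional parameters, using *set --* to simulate script arguments so the lines can be run directly in a shell (the argument values are arbitrary):
```shell
set -- alpha beta gamma     # simulate three script arguments
echo "first:  $1"           # alpha
echo "second: $2"           # beta
echo "count:  $#"           # 3
echo "all:    $@"           # alpha beta gamma
```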
390 |
391 | 1. Processing output of shell commands within a script
392 |
393 | * An example is shown below:
394 | ```shell
395 | #!/bin/bash
396 | echo "Hello World!" >> example-`date +%Y%m%d-%H%M`.log
397 | exit
398 | ```
399 |
400 | 1. Processing shell command exit codes
401 |
402 | * The *$?* term expands the exit status of the most recently executed command.
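
* For instance, the pattern below works after any command; *true* and *false* are used here as stand-ins for succeeding and failing commands:
```shell
true                          # a command that always succeeds
echo "exit status: $?"        # prints: exit status: 0
false                         # a command that always fails
if [ "$?" -ne 0 ]; then
    echo "previous command failed"
fi
```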
403 |
404 | ### Operate running systems
405 |
406 | 1. Boot, reboot, and shut down a system normally
407 |
* The RHEL boot process occurs when the system is powered up or reset and lasts until all enabled services are started and a login prompt appears on the screen. The boot process consists of 4 steps:
409 |
410 | * The firmware is the Basic Input Output System (BIOS) or Unified Extensible Firmware Interface (UEFI) code that is stored in flash memory on the motherboard. The first thing it does is run the power-on-self-test (POST) to initialise the system hardware components. It also installs appropriate drivers for the video hardware and displays system messages to the screen. It scans the available storage devices to locate a boot device (GRUB2 on RHEL), and then loads it into memory and passes control to it.
411 |
* The boot loader presents a menu with a list of bootable kernels available on the system. After a pre-defined amount of time it boots the default kernel. GRUB2 searches for the kernel in the `/boot` file system. It then extracts the kernel code into memory and loads it based on the configuration in `/boot/grub2/grub.cfg`. Note that for UEFI systems, GRUB2 looks in `/boot/efi` instead and loads based on the configuration in `/boot/efi/EFI/redhat/grub.cfg`. Once the kernel is loaded, GRUB2 passes control to it.
413 |
414 | * The kernel loads the initial RAM disk (initrd) image from the `/boot` file system. This acts as a temporary file system. The kernel then loads necessary modules from initrd to allow access to the physical disks and the partitions and file systems within. It also loads any drivers required to support the boot process. Later, the kernel unmounts initrd and mounts the actual root file system.
415 |
416 | * The kernel continues the boot process. *systemd* is the default system initialisation scheme. It starts all enabled user space system and network services.
417 |
* The *shutdown* command is used to halt, power off, or reboot the system gracefully. This command broadcasts a warning message to all logged-in users, disables further user logins, waits for the specified time, stops the running services, and then shuts the system down to the specified target state.
419 |
420 | * To shut down the system now:
421 | ```shell
422 | shutdown -P now
423 | ```
424 |
425 | * To halt the system now:
426 | ```shell
427 | shutdown -H now
428 | ```
429 |
430 | * To reboot the system now:
431 | ```shell
432 | shutdown -r now
433 | ```
434 |
435 | * To shut down the system after 5 minutes:
436 | ```shell
437 | shutdown -P 5
438 | ```
439 |
440 | 1. Boot systems into different targets manually
441 |
442 | * *systemd* is the default system initialisation mechanism in RHEL 8. It is the first process that starts at boot and it is the last process that terminates at shutdown.
443 |
444 | * *Units* are systemd objects that are used for organising boot and maintenance tasks, such as hardware initialisation, socket creation, file system mounts, and service start-ups. Unit configuration is stored in their respective configuration files, which are auto generated from other configurations, created dynamically from the system state, produced at runtime, or user developed. Units are in one of several operational states, including active, inactive, in the process of being activated or deactivated, and failed. Units can be enabled or disabled.
445 |
446 | * Units have a name and a type, which are encoded in files of the form unitname.type. Units can be viewed using the *systemctl* command. A target is a logical collection of units. They are a special systemd unit type with the .target file extension.
447 |
448 | * *systemctl* is the primary command for interaction with systemd.
449 |
450 | * To boot into a custom target the *e* key can be pressed at the GRUB2 menu, and the desired target specified using systemd.unit. After editing press *ctrl+x* to boot into the target state. To boot into the emergency target:
451 | ```shell
452 | systemd.unit=emergency.target
453 | ```
454 |
455 | * To boot into the rescue target:
456 | ```shell
457 | systemd.unit=rescue.target
458 | ```
459 |
460 | * Run *systemctl reboot* after you are done to reboot the system.
461 |
462 | 1. Interrupt the boot process in order to gain access to a system
463 |
* Press *e* at the GRUB2 menu and append "rd.break" to the line that loads the kernel (the line starting with "linux"). This boot option stops the boot sequence while the system is still using initramfs so that we can access the emergency shell.

* Press *ctrl+x* to boot with the modified entry.
467 |
468 | * Run the following command to remount the `/sysroot` directory with rw privileges:
469 | ```shell
470 | mount -o remount,rw /sysroot
471 | ```
472 | * Run the following command to change the root directory to `/sysroot`:
473 | ```shell
474 | chroot /sysroot
475 | ```
* Run the *passwd* command to change the root password.
477 |
478 | * Run the following commands to create an empty, hidden file to instruct the system to perform SELinux relabelling after the next boot:
479 | ```shell
480 | touch /.autorelabel
481 | exit
482 | exit
483 | ```
484 |
485 | 1. Identify CPU/memory intensive processes and kill processes
486 |
487 | * A process is a unit for provisioning system resources. A process is created in memory in its own address space when a program, application, or command is initiated. Processes are organised in a hierarchical fashion. Each process has a parent process that spawns it and may have one or many child processes. Each process is assigned a unique identification number, known as the Process Identifier (PID). When a process completes its lifecycle or is terminated, this event is reported back to its parent process, and all the resources provisioned to it are then freed and the PID is removed. Processes spawned at system boot are called daemons. Many of these sit in memory and wait for an event to trigger a request to use their services.
488 |
489 | * There are 5 basic process states:
490 | * Running: The process is being executed by the CPU.
491 |
492 | * Sleeping: The process is waiting for input from a user or another process.
493 |
494 | * Waiting: The process has received the input it was waiting for and is now ready to run when its turn arrives.
495 |
496 | * Stopped: The process is halted and will not run even when its turn arrives, unless a signal is sent to change its behaviour.
497 |
498 | * Zombie: The process is dead. Its entry is retained until the parent process permits it to die.
499 |
500 | * The *ps* and *top* commands can be used to view running processes.
501 |
502 | * The *pidof* or *pgrep* commands can be used to view the PID associated with a process name.
503 |
504 | * The *ps* command can be used to view the processes associated with a particular user. An example is shown below:
505 | ```shell
506 | ps -U root
507 | ```
508 |
509 | * To kill a process the *kill* or *pkill* commands can be used. Ordinary users can kill processes they own, while the *root* user can kill any process. The *kill* command requires a PID and the *pkill* command requires a process name. An example is shown below:
510 | ```shell
511 | pkill crond
512 | kill `pidof crond`
513 | ```
514 |
515 | * The list of signals accessible by *kill* can be seen by passing the *-l* option. The default signal is SIGTERM which signals for a process to terminate in an orderly fashion.
516 |
517 | * To use the SIGKILL signal:
518 | ```shell
519 | pkill -9 crond
520 | kill -9 `pgrep crond`
521 | ```
522 |
523 | * The *killall* command can be used to terminate all processes that match a specified criterion.
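
* A safe way to practise these commands is against a throwaway background process (the *sleep* command is used here purely as a disposable target):
```shell
sleep 300 &                           # start a disposable background process
pid=$!                                # PID of the most recent background job
ps -p "$pid" -o pid,comm              # confirm it is running
kill "$pid"                           # send SIGTERM (the default signal)
wait "$pid" 2>/dev/null || true       # reap it; nonzero status reflects the signal
```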
524 |
525 | 1. Adjust process scheduling
526 |
* The niceness of a process ranges from -20 (highest priority) to +19 (lowest priority). A higher niceness lowers the execution priority of a process and a lower niceness increases it. A child process inherits the niceness of its parent process.
528 |
529 | * To run a command with a lower (+2) priority:
530 | ```shell
531 | nice -2 top
532 | ```
533 |
534 | * To run a command with a higher (-2) priority:
535 | ```shell
536 | nice --2 top
537 | ```
538 |
539 | * To renice a running process:
540 | ```shell
541 | renice 5 1919
542 | ```
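
* The change can be observed on a throwaway background process (a regular user may only raise the niceness; *sleep* is used here as an arbitrary target):
```shell
sleep 60 &                    # disposable target process
pid=$!
renice 5 "$pid"               # raise its niceness to 5
ps -o pid,ni -p "$pid"        # the NI column now shows 5
kill "$pid"
```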
543 |
544 | 1. Manage tuning profiles
545 |
546 | * Tuned is a service which monitors the system and optimises the performance of the system for different use cases. There are pre-defined tuned profiles available in the `/usr/lib/tuned` directory. New profiles are created in the `/etc/tuned` directory. The *tuned-adm* command allows you to interact with the Tuned service.
547 |
548 | * To install and start the tuned service:
549 | ```shell
550 | yum install tuned
551 | systemctl enable --now tuned
552 | ```
553 |
554 | * To check the currently active profile:
555 | ```shell
556 | tuned-adm active
557 | ```
558 |
559 | * To check the recommended profile:
560 | ```shell
561 | tuned-adm recommend
562 | ```
563 |
* To change the active profile:
```shell
tuned-adm profile <profile>
```
568 |
* To create a customised profile and set it as active:
```shell
mkdir /etc/tuned/<profile>
vi /etc/tuned/<profile>/tuned.conf
# customise as required
tuned-adm profile <profile>
systemctl restart tuned.service
```
577 |
578 | 1. Locate and interpret system log files and journals
579 |
580 | * In RHEL logs capture messages generated by the kernel, daemons, commands, user activities, applications, and other events. The daemon that is responsible for system logging is called *rsyslogd*. The configuration file for *rsyslogd* is in the `/etc/rsyslog.conf` file. As defined in this configuration file, the default repository for most logs is the `/var/log` directory.
581 |
582 | * The below commands can be used to start and stop the daemon:
583 | ```shell
584 | systemctl stop rsyslog
585 | systemctl start rsyslog
586 | ```
587 | * A script called *logrotate* in `/etc/cron.daily` invokes the *logrotate* command to rotate log files as per the configuration file.
588 |
589 | * The boot log file is available at `/var/log/boot.log` and contains logs generated during system start-up. The system log file is available in `/var/log/messages` and is the default location for storing most system activities.
590 |
591 | 1. Preserve system journals
592 |
* In addition to system logging, the *journald* daemon (which is an element of *systemd*) also collects and manages log messages from the kernel and daemon processes. It also captures system logs and RAM disk messages, and any alerts generated during the early boot stage. It stores these messages in binary format in files called *journals* in the `/run/log/journal` directory. These files are structured and indexed for faster and easier searches and can be viewed and managed using the *journalctl* command.
594 |
595 | * By default, journals are stored temporarily in the `/run/log/journal` directory. This is a memory-based virtual file system and does not persist across reboots. To have journal files stored persistently in `/var/log/journal` the following commands can be run:
596 | ```shell
597 | mkdir -p /var/log/journal
598 | systemctl restart systemd-journald
599 | ```
600 |
601 | 1. Start, stop, and check the status of network services
602 |
603 | * The *sshd* daemon manages ssh connections to the server. To check the status of this service:
604 | ```shell
605 | systemctl is-active sshd
606 | systemctl status sshd
607 | ```
608 |
609 | * To start and stop this service:
610 | ```shell
611 | systemctl start sshd
612 | systemctl stop sshd
613 | ```
614 |
615 | * To enable or disable this service:
616 | ```shell
617 | systemctl enable sshd
618 | systemctl disable sshd
619 | ```
620 |
621 | * To completely disable the service (i.e. to avoid loading the service at all):
622 | ```shell
623 | systemctl mask sshd
624 | systemctl unmask sshd
625 | ```
626 |
627 | 1. Securely transfer files between systems
628 |
* To transfer a file using the Secure Copy Protocol (SCP):
```shell
scp <file> <user>@<host>:<destination>
```

* To transfer the contents of a directory:
```shell
scp /etc/ssh/* <user>@<host>:/tmp
```

* The direction of transfer can also be reversed:
```shell
scp <user>@<host>:/tmp/sshd_config sshd_config_external
```
643 |
644 | ### Configure local storage
645 |
646 | 1. List, create, delete partitions on MBR and GPT disks
647 |
648 | * Data is stored on disk drives that are logically divided into partitions. A partition can exist on a portion of a disk, an entire disk, or across multiple disks. Each partition can contain a file system, raw data space, swap space, or dump space.
649 |
650 | * A disk in RHEL can be divided into several partitions. This partition information is stored on the disk in a small region, which is read by the operating system at boot time. This is known as the Master Boot Record (MBR) on BIOS-based systems, and GUID Partition Table (GPT) on UEFI-based systems. At system boot, the BIOS/UEFI scans all storage devices, detects the presence of MBR/GPT, identifies the boot disks, loads the boot loader program in memory from the default boot disk, executes the boot code to read the partition table and identify the `/boot` partition, and continues with the boot process by loading the kernel in the memory and passing control over to it.
651 |
* MBR allows the creation of only up to 4 primary partitions on a single disk, with the option of using one of the 4 partitions as an extended partition to hold an arbitrary number of logical partitions. MBR also lacks addressing space beyond 2TB due to its 32-bit nature and the 512-byte disk sector size that it uses. MBR is also non-redundant, so the system becomes unbootable if the record is somehow corrupted.
653 |
654 | * GPT is a newer 64-bit partitioning standard developed and integrated to UEFI firmware. GPT allows for 128 partitions, use of disks much larger than 2TB, and redundant locations for the storage of partition information. GPT also allows a BIOS-based system to boot from a GPT disk, using the boot loader program stored in a protective MBR at the first disk sector.
655 |
656 | * To list the mount points, size, and available space:
657 | ```shell
658 | df -h
659 | ```
660 |
* In RHEL block devices are an abstraction for certain hardware, such as hard disks. The *blkid* command lists block devices together with attributes such as their UUID and file system type. The *lsblk* command lists block devices with details such as size and mount point.
662 |
663 | * To list all disks and partitions:
664 | ```shell
665 | fdisk -l # MBR
666 | gdisk -l # GPT
667 | ```
668 |
669 | * For MBR based partitions the *fdisk* utility can be used to create and delete partitions. To make a change to a disk:
670 | ```shell
fdisk <disk>
672 | ```
673 |
674 | * For GPT based partitions the *gdisk* utility can be used to create and delete partitions. To make a change to a disk:
675 | ```shell
gdisk <disk>
677 | ```
678 |
679 | * To inform the OS of partition table changes:
680 | ```shell
681 | partprobe
682 | ```
683 |
684 | 1. Create and remove physical volumes
685 |
686 | * Logical Volume Manager (LVM) is used to provide an abstraction layer between the physical storage and the file system. This enables the file system to be resized, to span across multiple physical disks, use random disk space, etc. One or more partitions or disks (physical volumes) can form a logical container (volume group), which is then divided into logical partitions (called logical volumes). These are further divided into physical extents (PEs) and logical extents (LEs).
687 |
688 | * A physical volume (PV) is created when a block storage device is brought under LVM control after going through the initialisation process. This process constructs LVM data structures on the device, including a label on the second sector and metadata information. The label includes a UUID, device size, and pointers to the locations of data and metadata areas.
689 |
690 | * A volume group (VG) is created when at least one physical volume is added to it. The space from all physical volumes in a volume group is aggregated to form one large pool of storage, which is then used to build one or more logical volumes. LVM writes metadata information for the volume group on each physical volume that is added to it.
691 |
692 | * To view physical volumes:
693 | ```shell
694 | pvs
695 | ```
696 |
697 | * To view physical volumes with additional details:
698 | ```shell
699 | pvdisplay
700 | ```
701 |
702 | * To initialise a disk or partition for use by LVM:
703 | ```shell
pvcreate <disk>
705 | ```
706 |
707 | * To remove a physical volume from a disk:
708 | ```shell
pvremove <disk>
710 | ```
711 |
712 | 1. Assign physical volumes to volume groups
713 |
714 | * To view volume groups:
715 | ```shell
716 | vgs
717 | ```
718 |
719 | * To view volume groups with additional details:
720 | ```shell
721 | vgdisplay
722 | ```
723 |
724 | * To create a volume group:
725 | ```shell
726 | vgcreate
727 | ```
728 |
729 | * To extend an existing volume group:
730 | ```shell
731 | vgextend
732 | ```
733 |
734 | * To remove a disk from a volume group:
735 | ```shell
736 | vgreduce
737 | ```
738 |
739 | * To remove the last disk from a volume group:
740 | ```shell
741 | vgremove
742 | ```
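* A hypothetical sketch, assuming physical volumes already exist on `/dev/sdb` and `/dev/sdc`:
```shell
vgcreate vg1 /dev/sdb        # create volume group vg1 from one PV
vgextend vg1 /dev/sdc        # add a second PV to the pool
vgs                          # confirm the size and PV count
```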
743 |
744 | 1. Create and delete logical volumes
745 |
746 | * To view logical volumes:
747 | ```shell
748 | lvs
749 | ```
750 |
751 | * To view logical volumes with additional details:
752 | ```shell
753 | lvdisplay
754 | ```
755 |
756 | * To create a logical volume in vg1 named lv1 and with 4GB of space:
757 | ```shell
758 | lvcreate -L 4G -n lv1 vg1
759 | ```
760 |
761 | * To extend the logical volume by 1GB:
762 | ```shell
763 | lvextend -L+1G
764 | ```
765 |
771 | * To reduce the size for a logical volume by 1GB:
772 | ```shell
773 | lvreduce -L-1G
774 | ```
775 |
776 | * To remove a logical volume:
777 | ```shell
778 | umount
779 | lvremove
780 | ```
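* A hypothetical end-to-end sketch, assuming volume group vg1 exists and lv1 is mounted at `/mnt`:
```shell
lvcreate -L 4G -n lv1 vg1      # carve a 4GB logical volume out of vg1
lvextend -L +1G /dev/vg1/lv1   # grow it by 1GB
umount /mnt                    # unmount before removal
lvremove /dev/vg1/lv1          # destroy the logical volume
```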
781 |
782 | 1. Configure systems to mount file systems at boot by Universally Unique ID (UUID) or label
783 |
784 | * The `/etc/fstab` file is a system configuration file that lists all available disks, disk partitions and their options. Each file system is described on a separate line. The `/etc/fstab` file is used by the *mount* command, which reads the file to determine which options should be used when mounting the specific device. A file system can be added to this file so that it is mounted on boot automatically.
785 |
786 | * The *e2label* command can be used to change the label on ext file systems. This can then be used instead of the UUID.
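* A hypothetical example of mounting by UUID — the device name and UUID below are made up:
```shell
blkid /dev/vg1/lv1   # read the UUID of the file system
# resulting /etc/fstab entry (UUID value is hypothetical):
# UUID=0d15a9df-1b2c-4d5e-8f90-abcdef123456  /data  xfs  defaults  0 0
```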
787 |
788 | 1. Add new partitions and logical volumes, and swap to a system non-destructively
789 |
790 | * Virtual memory is equal to RAM plus swap space. A swap partition is a standard disk partition that is designated as swap space by the *mkswap* command. A file can also be used as swap space but that is not recommended.
791 |
792 | * To create a swap:
793 | ```shell
794 | mkswap
795 | ```
796 |
797 | * To enable a swap:
798 | ```shell
799 | swapon
800 | ```
801 |
802 | * To check the status of swaps:
803 | ```shell
804 | swapon -s
805 | ```
806 |
807 | * To disable a swap:
808 | ```shell
809 | swapoff
810 | ```
811 |
812 | * The `/etc/fstab` file will need a new entry for the swap so that it is created persistently.
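* Tying the steps together — a hypothetical sketch that creates swap on a spare logical volume and makes it persistent:
```shell
# Assumes a spare logical volume /dev/vg1/swaplv
mkswap /dev/vg1/swaplv    # write the swap signature
swapon /dev/vg1/swaplv    # activate the swap area
swapon -s                 # verify it is in use
# persistent /etc/fstab entry:
# /dev/vg1/swaplv  none  swap  defaults  0 0
```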
813 |
814 | ### Create and configure file systems
815 |
816 | 1. Create, mount, unmount, and use vfat, ext4, and xfs file systems
817 |
818 | * A file system is a logical container that is used to store files and directories. Each file system must be connected to the root of the directory hierarchy to be accessible. This is typically done automatically on system boot but can be done manually as well. Each file system can be mounted or unmounted using the UUID associated with it or by using a label that can be assigned to it. Mounting is the process of attaching an additional filesystem, which resides on a CDROM, Hard Disk Drive (HDD) or other storage device, to the currently accessible filesystem of a computer.
819 |
820 | * Each file system is created in a separate partition or logical volume. A typical RHEL system has numerous file systems. During OS installation, the `/` and `/boot` file systems are created by default. Typical additional file systems created during installation include `/home`, `/opt`, `/tmp`, `/usr` and `/var`.
821 |
822 | * File systems supported in RHEL are categorised as disk-based, network-based, and memory-based. Disk-based and network-based file systems are stored persistently, while data in memory-based systems is lost at system reboot. The different file systems are shown below:
823 |
824 | | File System | Type | Description |
825 | |----------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
826 | | ext2 | Disk | The second generation of the extended file system. The first generation is no longer supported. ext2 is deprecated in RHEL and will be removed in a future RHEL release. |
827 | | ext3 | Disk | The third generation of the extended file system. It supports metadata journaling for faster recovery, superior reliability, file systems up to 16TB, files up to 2TB, and up to 32,000 sub-directories. ext3 writes each metadata update in its entirety to the journal after it has been completed. The system looks in the file system journal following a reboot after a system crash has occurred, and recovers the file system rapidly using the updated structural information stored in its journal. |
828 | | ext4 | Disk | The fourth generation of the extended file system. It supports file systems up to 1EB, files up to 16TB, an unlimited number of sub-directories, metadata and quota journaling, and extended user attributes. |
829 | | xfs | Disk | XFS is a highly scalable and high-performance 64-bit file system. It supports metadata journaling for faster crash recovery, online defragmentation and expansion, quota journaling, and extended user attributes. It supports file systems and files of sizes up to 8EB. It is the default file system in RHEL 8. |
830 | | btrfs | Disk | B-tree file system that supports a system size of 50TB. It supports more files, larger files, and larger volumes than ext4 and supports snapshotting and compression capabilities. |
831 | | vfat | Disk | The post-Windows 95 FAT file system format, used on hard disks, USB drives, and floppy disks. |
832 | | iso9660 | Disk | This is used for CD/DVD-based optical file systems. |
833 | | BIOS Boot | Disk | A very small partition required for booting a device with a GUID partition table (GPT) on a BIOS system. |
834 | | EFI System Partition | Disk | A small partition required for booting a device with a GUID partition table (GPT) on a UEFI system. |
835 | | NFS | Network | A directory or file system shared over the network for access by other Linux systems. |
836 | | AutoFS | Network | An NFS file system set to mount and unmount automatically on a remote system. |
837 | | CIFS | Network | Common Internet File System (aka Samba). A directory or file system shared over the network for access by Windows and other Linux systems. |
838 |
839 | * The *mount* command is used to attach a file system to a desired point in the directory hierarchy to make it accessible to users and applications. This point is referred to as the *mount point*, which is essentially an empty directory created solely for this purpose. The *mount* command requires the absolute pathname (or the UUID or label) of the block device containing the file system, and a mount point name to attach it to the directory tree. After a file system has been successfully mounted, the *mount* command adds an entry to the `/etc/mtab` file and the kernel adds a corresponding entry to the `/proc/mounts` file.
840 |
841 | * The opposite of the *mount* command is *umount*, which is used to detach a file system from the directory hierarchy and make it inaccessible to users and applications.
842 |
843 | * To create a vfat file system:
844 | ```shell
845 | mkfs.vfat
846 | ```
847 |
848 | * To mount a vfat file system:
849 | ```shell
850 | mount /mnt
851 | ```
852 |
853 | * To unmount a vfat file system:
854 | ```shell
855 | umount /mnt
856 | ```
857 |
858 | * To check a vfat file system:
859 | ```shell
860 | fsck.vfat
861 | ```
862 |
863 | * To create an ext4 file system:
864 | ```shell
865 | mkfs.ext4
866 | ```
867 |
868 | * To mount an ext4 file system:
869 | ```shell
870 | mount /mnt
871 | ```
872 |
873 | * To unmount an ext4 file system:
874 | ```shell
875 | umount /mnt
876 | ```
877 |
878 | * To check an ext4 file system:
879 | ```shell
880 | fsck
881 | ```
882 |
883 | * To get additional details about an ext4 file system:
884 | ```shell
885 | dumpe2fs
886 | ```
887 |
888 | * To create a xfs file system:
889 | ```shell
890 | mkfs.xfs
891 | ```
892 |
893 | * To mount a xfs file system:
894 | ```shell
895 | mount /mnt
896 | ```
897 |
898 | * To unmount a xfs file system:
899 | ```shell
900 | umount /mnt
901 | ```
902 |
903 | * To check a xfs file system:
904 | ```shell
905 | xfs_repair
906 | ```
907 |
908 | * To get additional details about an xfs file system:
909 | ```shell
910 | xfs_info
911 | ```
912 |
913 | * In each of the commands above, the path argument is the name of the device (or a regular file) that contains the file system.
914 |
915 | 1. Mount and unmount network file systems using NFS
916 |
917 | * To install nfs-utils if it is not already present:
918 | ```shell
919 | dnf install nfs-utils
920 | ```
921 |
922 | * To mount the network file system:
923 | ```shell
924 | mount -t nfs 10.0.2.5:/home/nfs-share /mnt
925 | ```
926 |
927 | * Alternatively the following can be run after adding the entry to `/etc/fstab`:
928 | ```shell
929 | mount -a
930 | ```
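* The corresponding `/etc/fstab` entry for the share used above would look something like:
```shell
# 10.0.2.5:/home/nfs-share  /mnt  nfs  defaults  0 0
```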
931 |
932 | * Using AutoFS with NFS:
933 | ```shell
934 | # on the server
935 | systemctl status nfs-server
936 | mkdir /common
937 | echo "/common *(rw)" >> /etc/exports
938 | systemctl restart nfs-server.service
939 |
940 | # on the client
941 | dnf install autofs -y
942 | mkdir /autodir
943 | vi /etc/auto.master
944 | # add line
945 | #/- /etc/auto.master.d/auto.dir
946 | vi /etc/auto.master.d/auto.dir
947 | # add line
948 | #/autodir 172.25.1.4:/common
949 | systemctl restart autofs && systemctl enable autofs
950 |
951 | # on the server
952 | touch /common/test
953 |
954 | # on the client
955 | ls /autodir # confirm test file is created
956 | ```
957 |
958 | 1. Extend existing logical volumes
959 |
960 | * To extend the logical volume size by 2GB:
961 | ```shell
962 | lvextend -L+2G /dev/vg1/lv1
963 | lvdisplay # confirm changes
964 | ```
965 |
966 | * To extend the file system:
967 | ```shell
968 | df -Th # confirm file system type
969 | resize2fs /dev/vg1/lv1 # for ext3 or ext4
970 | xfs_growfs /mnt # for xfs
971 | ```
972 |
973 | 1. Create and configure set-GID directories for collaboration
974 |
975 | * SUID (set user ID) specifies that a user can run an executable file with the effective permissions of the file owner. This is primarily used to elevate the privileges of the current user. When a user executes the file, the operating system runs it as the file owner. In the permission string, an *s* appears in place of the owner's *x* (execute) bit. To set the SUID bit:
976 | ```shell
977 | chmod u+s
978 | ```
979 |
980 | * SGID (set group ID) specifies that a user can run an executable file with the effective permissions of the owning group. When a user executes the file, the operating system runs it as the owning group. In the permission string, an *s* appears in place of the group's *x* (execute) bit. To set the SGID bit:
981 | ```shell
982 | chmod g+s
983 | ```
984 |
985 | * To create a group and shared directory:
986 | ```shell
987 | groupadd accounts
988 | mkdir -p /home/shared/accounts
989 | chown nobody:accounts /home/shared/accounts
990 | chmod g+s /home/shared/accounts
991 | chmod 070 /home/shared/accounts # no owner/other access; full access for the accounts group
992 | ```
993 |
994 | * When using SGID on a directory all files that are created in the directory will be owned by the group of the directory as opposed to the group of the owner.
995 |
996 | * If the sticky bit is set on a directory, the files in that directory can only be removed by their owner (or root). A typical use case is the `/tmp` directory: it can be written to by any user, but users cannot delete each other's files. To set the sticky bit:
997 | ```shell
998 | chmod +t
999 | ```
1000 |
1001 | * The SUID, SGID and sticky bit can also be set with number notation. The standard number (rwx) is prepended with 4 for SUID, 2 for SGID, and 1 for the sticky bit.
1002 |
1003 | * To remove special permissions the *-* flag is used instead of the *+* flag.
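* A small self-contained example of the numeric notation — safe to run as an ordinary user, since it only touches a temporary directory:
```shell
# A leading 2 sets SGID; 4 would set SUID; 1 the sticky bit
dir=$(mktemp -d)
chmod 2770 "$dir"        # rwxrws--- : SGID plus rwx for owner and group
stat -c '%a' "$dir"      # prints 2770
rmdir "$dir"
```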
1004 |
1005 | 1. Configure disk compression
1006 |
1007 | * The Virtual Data Optimiser (VDO) provides data reduction in the form of deduplication, compression, and thin provisioning.
1008 |
1009 | * To install vdo:
1010 | ```shell
1011 | dnf install vdo kmod-kvdo
1012 | ```
1013 |
1014 | * To create the vdo:
1015 | ```shell
1016 | vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=30G --writePolicy=async
1017 | ```
1018 |
1019 | * To create and mount the file system:
1020 | ```shell
1021 | mkfs.xfs /dev/mapper/vdo1
1022 | mount /dev/mapper/vdo1 /mnt
1023 | ```
1024 |
1025 | 1. Manage layered storage
1026 |
1027 | * Stratis is a storage management solution introduced in RHEL 8 that allows the configuration of advanced storage features such as pool-based management, thin provisioning, file system snapshots and monitoring.
1028 |
1029 | * To install stratis:
1030 | ```shell
1031 | dnf install stratisd stratis-cli
1032 | systemctl start stratisd
1033 | ```
1034 |
1035 | * To confirm there is no file system on the disk to be used:
1036 | ```shell
1037 | lsblk
1038 | blkid -p /dev/sdb
1039 | ```
1040 |
1041 | * If there is a file system remove it using:
1042 | ```shell
1043 | wipefs -a /dev/sdb
1044 | ```
1045 |
1046 | * To create a stratis pool and confirm:
1047 | ```shell
1048 | stratis pool create strat1 /dev/sdb
1049 | stratis pool list
1050 | ```
1051 |
1052 | * To create a file system and confirm:
1053 | ```shell
1054 | stratis fs create strat1 fs1
1055 | stratis fs list
1056 | ```
1057 |
1058 | * To mount the file system and confirm:
1059 | ```shell
1060 | mount /dev/stratis/strat1/fs1 /mnt
1061 | df -h
1062 | # add to /etc/fstab to make it persistent
1063 | ```
1064 |
1065 | * To add a disk to the stratis pool and confirm:
1066 | ```shell
1067 | stratis pool add-data strat1 /dev/sdc
1068 | stratis pool list
1069 | ```
1070 |
1071 | * To create a snapshot and confirm:
1072 | ```shell
1073 | stratis fs snapshot strat1 fs1 snapshot1
1074 | stratis filesystem list strat1
1075 | ```
1076 |
1077 | * To mount a snapshot:
1078 | ```shell
1079 | umount /dev/stratis/strat1/fs1
1080 | mount /dev/stratis/strat1/snapshot1 /mnt
1081 | ```
1082 |
1083 | * To destroy a snapshot and confirm:
1084 | ```shell
1085 | umount /dev/stratis/strat1/snapshot1
1086 | stratis filesystem destroy strat1 snapshot1
1087 | stratis filesystem list
1088 | ```
1089 |
1090 | * To remove a stratis filesystem and pool and confirm:
1091 | ```shell
1092 | stratis filesystem destroy strat1 fs1
1093 | stratis filesystem list
1094 | stratis pool destroy strat1
1095 | stratis pool list
1096 | ```
1097 |
1098 | 1. Diagnose and correct file permission problems
1099 |
1100 | * File permissions can be modified using *chmod* and *setfacl*.
1101 |
1102 | ### Deploy, configure, and maintain systems
1103 |
1104 | 1. Schedule tasks using at and cron
1105 |
1106 | * Job scheduling and execution is handled by the *atd* and *crond* daemons. While *atd* manages jobs scheduled to run once in the future, *crond* is responsible for running jobs repetitively at pre-specified times. At start-up, *crond* reads schedules in files located in the `/var/spool/cron` and `/etc/cron.d` directories, and loads them in memory for later execution.
1107 |
1108 | * There are 4 files that control permissions for setting scheduled jobs: *at.allow*, *at.deny*, *cron.allow* and *cron.deny*, all in the `/etc` directory. The syntax of the files is identical, with each file taking one username per line. If no files exist, then no users (other than root) are permitted. By default, the *deny* files exist and are empty, and the *allow* files do not exist. This opens up full access to both tools for all users.
1109 |
1110 | * All activities involving *atd* and *crond* are logged to the `/var/log/cron` file.
1111 |
1112 | * The *at* command is used to schedule one-time execution of a program by the *atd* daemon. All submitted jobs are stored in the `/var/spool/at` directory.
1113 |
1114 | * To schedule a job using *at* the below syntax is used:
1115 | ```shell
1116 | at 11:30pm 6/30/15
1117 | ```
1118 |
1119 | * The commands to execute are defined in the terminal, pressing *ctrl+d* when finished. Queued jobs can be listed with *atq* (or *at -l*) and removed with *atrm* (or *at -d*).
1120 |
1121 | * A shell script can also be provided:
1122 | ```shell
1123 | at -f ~/script1.sh 11:30pm 6/30/15
1124 | ```
1125 |
1126 | * The `/etc/crontab` file has the following columns:
1127 | * 1: Minutes of hour (0-59), with multiple comma separated values, or * to represent every minute.
1128 | * 2: Hours of day (0-23), with multiple comma separated values, or * to represent every hour.
1129 | * 3: Days of month (1-31), with multiple comma separated values, or * to represent every day.
1130 | * 4: Month of year (1-12, jan-dec), with multiple comma separated values, or * to represent every month.
1131 | * 5: Day of week (0-6, sun-sat), with multiple comma separated values, or * to represent every day.
1132 | * 6: Full path name of the command or script to be executed, along with any arguments.
1133 |
1134 | * Step values can be used with */2 meaning every 2nd minute.
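* Combining the fields, a hypothetical entry that runs a (made-up) backup script every 2nd minute during working hours on weekdays:
```shell
# min   hour  dom  month  dow  command   (script path is hypothetical)
*/2     9-17  *    *      1-5  /usr/local/bin/backup.sh
```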
1135 |
1136 | * The *crontab* command can be used to edit the file. Common options are *e* (edit), *l* (view), *r* (remove):
1137 | ```shell
1138 | crontab -e
1139 | ```
1140 |
1141 | 1. Start and stop services and configure services to start automatically at boot
1142 |
1143 | * To check the status of a service:
1144 | ```shell
1145 | systemctl status
1146 | ```
1147 |
1148 | * To start a service:
1149 | ```shell
1150 | systemctl start
1151 | ```
1152 |
1153 | * To stop a service:
1154 | ```shell
1155 | systemctl stop
1156 | ```
1157 |
1158 | * To make a service reload its configuration:
1159 | ```shell
1160 | systemctl reload
1161 | ```
1162 |
1163 | * To make a service reload its configuration or restart if it can't reload:
1164 | ```shell
1165 | systemctl reload-or-restart
1166 | ```
1167 |
1168 | * To make a service start on boot:
1169 | ```shell
1170 | systemctl enable
1171 | ```
1172 |
1173 | * To stop a service starting on boot:
1174 | ```shell
1175 | systemctl disable
1176 | ```
1177 |
1178 | * To check if a service is enabled:
1179 | ```shell
1180 | systemctl is-enabled
1181 | ```
1182 |
1183 | * To check if a service has failed:
1184 | ```shell
1185 | systemctl is-failed
1186 | ```
1187 |
1188 | * To view the unit configuration file for a service (unit files are stored under `/usr/lib/systemd/system/`):
1189 | ```shell
1190 | systemctl cat
1191 | ```
1192 |
1193 | * To view the dependencies for a service:
1194 | ```shell
1195 | systemctl list-dependencies
1196 | ```
1197 |
1198 | * To stop a service from being run by anyone but the system and from being started on boot:
1199 | ```shell
1200 | systemctl mask
1201 | ```
1202 |
1203 | * To remove a mask:
1204 | ```shell
1205 | systemctl unmask
1206 | ```
1207 |
1208 | 1. Configure systems to boot into a specific target automatically
1209 |
1210 | * To get the default target:
1211 | ```shell
1212 | systemctl get-default
1213 | ```
1214 |
1215 | * To list available targets:
1216 | ```shell
1217 | systemctl list-units --type target --all
1218 | ```
1219 |
1220 | * To change the default target:
1221 | ```shell
1222 | systemctl set-default
1223 | ```
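* For example, to make the system boot to the text console or to the graphical environment (both targets exist on a standard install):
```shell
systemctl set-default multi-user.target   # text console on boot
systemctl set-default graphical.target    # GUI on boot
```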
1224 |
1225 | * The change will take effect after a reboot.
1226 |
1227 | 1. Configure time service clients
1228 |
1229 | * To print the date:
1230 | ```shell
1231 | date +%d%m%y-%H%M%S
1232 | ```
1233 |
1234 | * To set the system clock as per the hardware clock:
1235 | ```shell
1236 | hwclock -s
1237 | ```
1238 |
1239 | * To set the hardware clock as per the system clock:
1240 | ```shell
1241 | hwclock -w
1242 | ```
1243 |
1244 | * The *timedatectl* command can also be used to view the date and time.
1245 |
1246 | * To change the date or time:
1247 | ```shell
1248 | timedatectl set-time 2020-03-18
1249 | timedatectl set-time 22:43:00
1250 | ```
1251 |
1252 | * To view a list of time zones:
1253 | ```shell
1254 | timedatectl list-timezones
1255 | ```
1256 |
1257 | * To change the time zone:
1258 | ```shell
1259 | timedatectl set-timezone
1260 | ```
1261 |
1262 | * To enable NTP:
1263 | ```shell
1264 | timedatectl set-ntp yes
1265 | ```
1266 |
1267 | * To start the *chronyd* service:
1268 | ```shell
1269 | systemctl start chronyd
1270 | ```
1271 |
1272 | 1. Install and update software packages from Red Hat Network, a remote repository, or from the local file system
1273 |
1274 | * The .rpm extension identifies files built for the Red Hat Package Manager (RPM) package management system. RHEL 8 provides tools for the installation and administration of RPM packages. A package is a group of files organised in a directory structure, along with metadata, that makes up a software application.
1275 |
1276 | * An RPM package name follows the below format:
1277 | ```shell
1278 | openssl-1.0.1e-34.el7.x86_64.rpm
1279 | # package name = openssl
1280 | # package version = 1.0.1e
1281 | # package release = 34
1282 | # RHEL version = el7
1283 | # processor architecture = x86_64
1284 | # extension = .rpm
1285 | ```
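* The fields can be pulled apart with plain shell string handling (no rpm tooling required) — a quick way to check your reading of the format:
```shell
# Split the example package name using shell parameter expansion
pkg="openssl-1.0.1e-34.el7.x86_64.rpm"
base=${pkg%.rpm}       # openssl-1.0.1e-34.el7.x86_64
arch=${base##*.}       # x86_64
echo "$arch"           # prints x86_64
```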
1286 |
1287 | * The *subscription-manager* command can be used to link a Red Hat subscription to a system.
1288 |
1289 | * The *dnf* command is the front-end to *rpm* and is the preferred tool for package management, superseding *yum* in RHEL 8. It requires that the system has access to a software repository. The primary benefit of *dnf* is that it automatically resolves dependencies by downloading and installing any additional required packages.
1290 |
1291 | * To list enabled and disabled repositories:
1292 | ```shell
1293 | dnf repolist all
1294 | dnf repoinfo
1295 | ```
1296 |
1297 | * To search for a package:
1298 | ```shell
1299 | dnf search
1300 | dnf list
1301 | ```
1302 |
1303 | * To view more information about a particular package:
1304 | ```shell
1305 | dnf info
1306 | ```
1307 |
1308 | * To install a package:
1309 | ```shell
1310 | dnf install
1311 | ```
1312 |
1313 | * To remove a package:
1314 | ```shell
1315 | dnf remove
1316 | ```
1317 |
1318 | * To find a package from a file:
1319 | ```shell
1320 | dnf provides
1321 | ```
1322 |
1323 | * To install a package locally:
1324 | ```shell
1325 | dnf localinstall
1326 | ```
1327 |
1328 | * To view available groups:
1329 | ```shell
1330 | dnf groups list
1331 | ```
1332 |
1333 | * To install a group (e.g. System Tools):
1334 | ```shell
1335 | dnf group install "System Tools"
1336 | ```
1337 |
1338 | * To remove a group (e.g. System Tools):
1339 | ```shell
1340 | dnf group remove "System Tools"
1341 | ```
1342 |
1343 | * To see the history of installations using *dnf*:
1344 | ```shell
1345 | dnf history list
1346 | ```
1347 |
1348 | * To undo a particular installation (e.g. ID=22):
1349 | ```shell
1350 | dnf history undo 22
1351 | ```
1352 |
1353 | * To redo a particular installation (e.g. ID=22):
1354 | ```shell
1355 | dnf history redo 22
1356 | ```
1357 |
1358 | * To add a repository using the dnf config manager:
1359 | ```shell
1360 | dnf config-manager --add-repo
1361 | ```
1362 |
1363 | * To enable a repository using the dnf config manager:
1364 | ```shell
1365 | dnf config-manager --enablerepo
1366 | ```
1367 |
1368 | * To disable a repository using the dnf config manager:
1369 | ```shell
1370 | dnf config-manager --disablerepo
1371 | ```
1372 |
1373 | * To create a repository:
1374 | ```shell
1375 | dnf install createrepo
1376 | mkdir
1377 | createrepo --
1378 | yum-config-manager --add-repo file://
1379 | ```
1380 |
1381 | 1. Work with package module streams
1382 |
1383 | * RHEL 8 introduced the concept of Application Streams. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Module streams represent versions of the Application Stream components. Only one module stream can be active at a particular time, but it allows multiple different versions to be available in the same dnf repository.
1384 |
1385 | * To view modules:
1386 | ```shell
1387 | dnf module list
1388 | ```
1389 |
1390 | * To get information about a module:
1391 | ```shell
1392 | dnf module info --profile
1393 | ```
1394 |
1395 | * To install a module:
1396 | ```shell
1397 | dnf module install
1398 | ```
1399 |
1400 | * To remove a module:
1401 | ```shell
1402 | dnf module remove
1403 | ```
1404 |
1405 | * To reset a module after removing it:
1406 | ```shell
1407 | dnf module reset
1408 | ```
1409 |
1410 | * To be specific about the module installation:
1411 | ```shell
1412 | dnf module install <module>:<stream>/<profile>
1413 | ```
1414 |
1415 | * To check the version provided by an installed module, run its component with *-v* (or *--version*):
1416 | ```shell
1417 | <command> -v
1418 | ```
1419 |
1420 | 1. Modify the system bootloader
1421 |
1422 | * The GRUB2 configuration can be edited directly on the boot screen. The configuration can also be edited using the command line.
1423 |
1424 | * To view the grub2 commands:
1425 | ```shell
1426 | grub2-<tab><tab>   # tab completion lists the available grub2-* commands
1427 | ```
1428 |
1429 | * To make a change to the configuration:
1430 | ```shell
1431 | vi /etc/default/grub
1432 | # Change a value
1433 | grub2-mkconfig -o /boot/grub2/grub.cfg
1434 | # View changes
1435 | vi /boot/grub2/grub.cfg
1436 | ```
1437 |
1438 | ### Manage basic networking
1439 |
1440 | 1. Configure IPv4 and IPv6 addresses
1441 |
1442 | * The format of an IPv4 address is a set of 4 8-bit integers that gives a 32-bit IP address. The format of an IPv6 address is a set of 8 16-bit hexadecimal numbers that gives a 128-bit IP address.
1443 |
1444 | * The *nmcli* command is used to configure networking using the NetworkManager service. This command is used to create, display, edit, delete, activate, and deactivate network connections. Each network device corresponds to a Network Manager device.
1445 |
1446 | * Using nmcli with the connection option lists the available connection profiles in NetworkManager.
1447 |
1448 | * The *ip* command can also be used for network configuration. The main difference between ip and nmcli is that changes made with the ip command are not persistent.
1449 |
1450 | * To view system IP addresses:
1451 | ```shell
1452 | ip addr
1453 | ```
1454 |
1455 | * To show the current connections:
1456 | ```shell
1457 | nmcli connection show
1458 | ```
1459 |
1460 | * Using nmcli with the device option lists the available network devices in the system.
1461 |
1462 | * To view the current network device status and details:
1463 | ```shell
1464 | nmcli device status
1465 | nmcli device show
1466 | ```
1467 |
1468 | * To add an ethernet IPv4 connection:
1469 | ```shell
1470 | nmcli connection add con-name <name> ifname <interface> type ethernet ip4 <address/prefix> gw4 <gateway>
1471 | ```
1472 |
1473 | * To manually modify a connection:
1474 | ```shell
1475 | nmcli connection modify ipv4.addresses
1476 | nmcli connection modify ipv4.method manual
1477 | ```
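* A complete hypothetical example — the connection name, interface, and addresses below are made up:
```shell
nmcli connection add con-name home ifname enp0s8 type ethernet \
    ip4 192.168.1.10/24 gw4 192.168.1.1
nmcli connection up home
```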
1478 |
1479 | * To delete a connection:
1480 | ```shell
1481 | nmcli connection delete
1482 | ```
1483 |
1484 | * To activate a connection:
1485 | ```shell
1486 | nmcli connection up
1487 | ```
1488 |
1489 | * To deactivate a connection:
1490 | ```shell
1491 | nmcli connection down
1492 | ```
1493 |
1494 | * To check the DNS servers that are being used:
1495 | ```shell
1496 | cat /etc/resolv.conf
1497 | ```
1498 |
1499 | * To change the DNS server being used:
1500 | ```shell
1501 | nmcli con mod ipv4.dns
1502 | systemctl restart NetworkManager.service
1503 | ```
1504 |
1505 | 1. Configure hostname resolution
1506 |
1507 | * To lookup the IP address based on a host name the *host* or *nslookup* commands can be used.
1508 |
1509 | * The `/etc/hosts` file is like a local DNS. The `/etc/nsswitch.conf` file controls the order that resources are checked for resolution.
1510 |
1511 | * To lookup the hostname:
1512 | ```shell
1513 | hostname -s # short
1514 | hostname -f # fully qualified domain name
1515 | ```
1516 |
1517 | * The hostname file is in `/etc/hostname`. To refresh any changes run the *hostnamectl* command.
1518 |
1519 | 1. Configure network services to start automatically at boot
1520 |
1521 | * To install a service and make it start automatically at boot:
1522 | ```shell
1523 | dnf install httpd
1524 | systemctl enable httpd
1525 | ```
1526 |
1527 | * To set a connection to be enabled on boot:
1528 | ```shell
1529 | nmcli connection modify eth0 connection.autoconnect yes
1530 | ```
1531 |
1532 | 1. Restrict network access using firewall-cmd/firewall
1533 |
1534 | * Netfilter is a framework provided by the Linux kernel that provides functions for packet filtering. In RHEL 7 and earlier, iptables was the default way of configuring Netfilter. Disadvantages of iptables were that a separate version (ip6tables) was required for IPv6, and that the user interface is not very user-friendly.
1535 |
1536 | * The default firewall system in RHEL 8 is *firewalld*. Firewalld is a zone-based firewall. Each zone can be associated with one or more network interfaces, and each zone can be configured to accept or deny services and ports. The *firewall-cmd* command is the command line client for firewalld.
1537 |
1538 | * To check firewall zones:
1539 | ```shell
1540 | firewall-cmd --get-zones
1541 | ```
1542 |
1543 | * To list configuration for a zone:
1544 | ```shell
1545 | firewall-cmd --zone work --list-all
1546 | ```
1547 |
1548 | * To create a new zone:
1549 | ```shell
1550 | firewall-cmd --new-zone servers --permanent
1551 | ```
1552 |
1553 | * To reload firewall-cmd configuration:
1554 | ```shell
1555 | firewall-cmd --reload
1556 | ```
1557 |
1558 | * To add a service to a zone:
1559 | ```shell
1560 | firewall-cmd --zone servers --add-service=ssh --permanent
1561 | ```
1562 |
1563 | * To add an interface to a zone:
1564 | ```shell
1565 | firewall-cmd --change-interface=enp0s8 --zone=servers --permanent
1566 | ```
1567 |
1568 | * To get active zones:
1569 | ```shell
1570 | firewall-cmd --get-active-zones
1571 | ```
1572 |
1573 | * To set a default zone:
1574 | ```shell
1575 | firewall-cmd --set-default-zone=servers
1576 | ```
1577 |
1578 | * To check the services allowed for a zone:
1579 | ```shell
1580 | firewall-cmd --get-services
1581 | ```
1582 |
1583 | * To add a port to a zone:
1584 | ```shell
1585 | firewall-cmd --add-port 8080/tcp --permanent --zone servers
1586 | ```
1587 |
1588 | * To remove a service from a zone:
1589 | ```shell
1590 | firewall-cmd --remove-service https --permanent --zone servers
1591 | ```
1592 |
1593 | * To remove a port from a zone:
1594 | ```shell
1595 | firewall-cmd --remove-port 8080/tcp --permanent --zone servers
1596 | ```
1597 |
1598 | ### Manage users and groups
1599 |
1600 | 1. Create, delete, and modify local user accounts
1601 |
1602 | * RHEL 8 supports three user account types: root, normal and service. The root user has full access to all services and administrative functions. A normal user can run applications and programs that they are authorised to execute. Service accounts are responsible for taking care of the installed services.
1603 |
1604 | * The `/etc/passwd` file contains vital user login data.
1605 |
1606 | * The `/etc/shadow` file is readable only by the root user and contains user authentication information. Each row in the file corresponds to one entry in the passwd file. The password expiry settings are defined in the `/etc/login.defs` file. The `/etc/default/useradd` file contains defaults for the *useradd* command.
1607 |
1608 | * The `/etc/group` file contains the group information. Each row in the file stores one group entry.
1609 |
1610 | * The `/etc/gshadow` file stores encrypted group passwords. Each row in the file corresponds to one entry in the group file.
1611 |
1612 | * Due to manual modification, inconsistencies may arise between the above four authentication files. The *pwck* command is used to check for inconsistencies.
1613 |
1614 | * The *vipw* and *vigr* commands are used to modify the *passwd* and *group* files, respectively. These commands disable write access to these files while the privileged user is making the modifications.
1615 |
1616 | * To create a user:
1617 | ```shell
1618 | useradd user1
1619 | ```
1620 |
1621 | * To check that the user has been created:
1622 | ```shell
1623 | grep user1 /etc/passwd
1624 | ```
1625 |
1626 | * To specify the UID and GID at user creation:
1627 | ```shell
1628 | useradd -u 1010 -g 1005 user1
1629 | ```
1630 |
1631 | * To create a user and add them to a group:
1632 | ```shell
1633 | useradd -G IT user2
1634 | ```
1635 |
1636 | * Note that *-G* is a secondary group, and *-g* is the primary group. The primary group is the group that the operating system assigns to files to which a user belongs. A secondary group is one or more other groups to which a user also belongs.
1637 |
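* The distinction can be seen for the current user with `id`, which reports the primary group with `-gn` and all memberships with `-Gn`:
```shell
# Show the current user's primary group and full group membership list
primary=$(id -gn)   # primary group name
all=$(id -Gn)       # primary plus any secondary groups
echo "primary=$primary all=$all"
```
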
1638 | * To delete a user:
1639 | ```shell
1640 | userdel user1
1641 | ```
1642 |
1643 | * To modify a user:
1644 | ```shell
1645 | usermod -l user5 user1 # note that home directory will remain as user1
1646 | ```
1647 |
1648 | * To add a user but not give access to the shell:
1649 | ```shell
1650 | useradd -s /sbin/nologin user
1651 | ```
1652 |
1653 | 1. Change passwords and adjust password aging for local user accounts
1654 |
1655 | * To change the password for a user:
1656 | ```shell
1657 | passwd user1
1658 | ```
1659 |
1660 | * To step through the password aging settings interactively, run the *chage* command without any options.
1661 |
1662 | * To view user password expiry information:
1663 | ```shell
1664 | chage -l user1
1665 | ```
1666 |
1667 | * To set the maximum password age to 30 days (the password expires 30 days after the last change):
1668 | ```shell
1669 | chage -M 30 user1
1670 | ```
1671 |
1672 | * To set the account expiration date:
1673 | ```shell
1674 | chage -E 2021-01-01 user1
1675 | ```
1676 |
1677 | * To set the account to never expire:
1678 | ```shell
1679 | chage -E -1 user1
1680 | ```
1681 |
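* The date fields that *chage* manages are stored in `/etc/shadow` as a count of days since 1970-01-01. A sketch of converting such a day count to a calendar date (assumes GNU date; the day count is illustrative):
```shell
# /etc/shadow stores dates as days since the epoch (1970-01-01)
days=18628
# Convert the day count to a calendar date; 18628 days lands on 2021-01-01
expiry=$(TZ=UTC date -d "1970-01-01 +${days} days" +%F)
echo "$expiry"
```
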
1682 | 1. Create, delete, and modify local groups and group memberships
1683 |
1684 | * To create a group:
1685 | ```shell
1686 | groupadd IT
1687 | ```
1688 |
1689 | * To create a group with a specific GID:
1690 | ```shell
1691 | groupadd -g 3032 IT
1692 | ```
1693 |
1694 | * To delete a group:
1695 | ```shell
1696 | groupdel IT
1697 | ```
1698 |
1699 | * To modify the name of a group:
1700 | ```shell
1701 | groupmod -n IT-Support IT
1702 | ```
1703 |
1704 | * To modify the GID of a group:
1705 | ```shell
1706 | groupmod -g 3033 IT-Support
1707 | ```
1708 |
1709 | * To add a user to a group:
1710 | ```shell
1711 | usermod -aG IT-Support user1
1712 | ```
1713 |
1714 | * To view the members of a group:
1715 | ```shell
1716 | groupmems -l -g IT-Support
1717 | ```
1718 |
1719 | * To remove a user from a group:
1720 | ```shell
1721 | gpasswd -d user1 IT-Support
1722 | ```
1723 |
1724 | 1. Configure superuser access
1725 |
1726 | * To edit the sudoers file safely (*visudo* locks the file and validates the syntax before saving):
1727 | ```shell
1728 | visudo
1729 | ```
1730 |
1731 | * Members of the wheel group can use sudo on all commands. To add a user to the wheel group:
1732 | ```shell
1733 | sudo usermod -aG wheel user1
1734 | ```
1735 |
1736 | * To allow an individual user sudo access to specific commands:
1737 | ```shell
1738 | visudo
1739 | user2 ALL=(root) /bin/ls, /bin/df -h, /bin/date
1740 | ```
1741 |
1742 | ### Manage security
1743 |
1744 | 1. Configure firewall settings using firewall-cmd/firewalld
1745 |
1746 | * Network settings such as masquerading and IP forwarding can also be configured in the firewall-config GUI application. To install this application:
1747 | ```shell
1748 | dnf install firewall-config
1749 | ```
1750 |
1751 | * To enable IP forwarding in the kernel persistently:
1752 | ```shell
1753 | vi /etc/sysctl.conf
1754 | # add line
1755 | net.ipv4.ip_forward=1
1756 | # save file
1757 | sysctl -p
1758 | ```
1759 |
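* The current runtime value can also be read back directly from the `/proc` interface:
```shell
# Read the live IP forwarding flag from /proc (0 = off, 1 = on)
state=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward=$state"
```
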
1760 | 1. Create and use file access control lists
1761 |
1762 | * To give a user read and write access to a file using an access control list:
1763 | ```shell
1764 | setfacl -m u:user1:rw- testFile
1765 | getfacl testFile
1766 | ```
1767 |
1768 | * To restrict a user from accessing a file using an access control list:
1769 | ```shell
1770 | setfacl -m u:user1:--- testFile
1771 | getfacl testFile
1772 | ```
1773 |
1774 | * To remove an access control list for a user:
1775 | ```shell
1776 | setfacl -x u:user1 testFile
1777 | getfacl testFile
1778 | ```
1779 |
1780 | * To give a group read and execute access to a directory recursively using an access control list:
1781 | ```shell
1782 | setfacl -R -m g:IT-Support:r-x testDirectory
1783 | getfacl testDirectory
1784 | ```
1785 |
1786 | * To remove an access control list for a group:
1787 | ```shell
1788 | setfacl -x g:IT-Support testDirectory
1789 | getfacl testDirectory
1790 | ```
1791 |
1792 | 1. Configure key-based authentication for SSH
1793 |
1794 | * To generate the id_rsa and id_rsa.pub key files:
1795 | ```shell
1796 | ssh-keygen
1797 | ```
1798 | * To enable ssh for a user:
1799 | ```shell
1800 | mkdir -p /home/new_user/.ssh && chmod 700 /home/new_user/.ssh
1801 | echo "publicKey" >> /home/new_user/.ssh/authorized_keys && chmod 600 /home/new_user/.ssh/authorized_keys
1802 | ```
1803 |
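* sshd is strict about permissions on these files: `~/.ssh` should be mode 700 and `authorized_keys` mode 600, or key login silently fails. A sketch of preparing the layout (uses a scratch directory in place of a real home):
```shell
# Create a .ssh directory with the permissions sshd expects
home=$(mktemp -d)                         # stand-in for the user's home
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
stat -c '%n %a' "$home/.ssh" "$home/.ssh/authorized_keys"
```
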
1804 | * To copy the public key from server1 to server2:
1805 | ```shell
1806 | ssh-copy-id -i ~/.ssh/id_rsa.pub server2
1807 | cat ~/.ssh/known_hosts # validate from server1
1808 | ```
1809 |
1810 | 1. Set enforcing and permissive modes for SELinux
1811 |
1812 | * Security Enhanced Linux (SELinux) is an implementation of the Mandatory Access Control (MAC) architecture developed by the U.S. National Security Agency (NSA). MAC provides an added layer of protection beyond the standard Linux Discretionary Access Control (DAC), which includes the traditional file and directory permissions, ACL settings, setuid/setgid bit settings, su/sudo privileges etc.
1813 |
1814 | * MAC controls are fine-grained; they protect other services in the event that one of the services is compromised. MAC uses a set of defined authorisation rules called a policy to examine security attributes associated with subjects and objects when a subject tries to access an object, and decides whether to permit the access attempt. SELinux decisions are stored in a special cache referred to as the Access Vector Cache (AVC).
1815 |
1816 | * When an application or process makes a request to access an object, SELinux checks with the AVC, where permissions are cached for subjects and objects. If a decision is unable to be made, it sends the request to the security server. The security server checks for the security context of the app or process and the object. Security context is applied from the SELinux policy database.
1817 |
1818 | * To check the SELinux status:
1819 | ```shell
1820 | getenforce
1821 | sestatus
1822 | ```
1823 |
1824 | * To put SELinux into permissive mode, modify the `/etc/selinux/config` file as per the below and reboot:
1825 | ```shell
1826 | SELINUX=permissive
1827 | ```
1828 |
1829 | * Messages logged from SELinux are available in `/var/log/messages`.
1830 |
1831 | 1. List and identify SELinux file and process context
1832 |
1833 | * To view the SELinux contexts for files:
1834 | ```shell
1835 | ls -Z
1836 | ```
1837 |
1838 | * To view the contexts for a user:
1839 | ```shell
1840 | id -Z
1841 | ```
1842 | * The contexts shown follow the user:role:type:level syntax. The SELinux user is mapped to a Linux user using the SELinux policy. The role is an intermediary between domains and SELinux users. The type defines a domain for processes, and a type for files. The level is used for Multi-Category Security (MCS) and Multi-Level Security (MLS).
1843 |
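* A label can be split into those four fields with plain shell parameter expansion (the label below is illustrative; note the level itself may contain colons for MCS categories):
```shell
# Split an SELinux label of the form user:role:type:level
ctx='unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023'
seuser=${ctx%%:*};  rest=${ctx#*:}   # strip one field at a time
serole=${rest%%:*}; rest=${rest#*:}
setype=${rest%%:*}
selevel=${rest#*:}                   # everything left is the level
echo "user=$seuser role=$serole type=$setype level=$selevel"
```
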
1844 | * To view the SELinux contexts of running processes:
1845 | ```shell
1846 | ps -Z # ps -Zaux to see additional information
1847 | ```
1848 |
1849 | 1. Restore default file contexts
1850 |
1851 | * To change the SELinux context of a file:
1852 | ```shell
1853 | chcon unconfined_u:object_r:tmp_t:s0 file.txt
1854 | ```
1855 |
1856 | * To restore the SELinux contexts for a file:
1857 | ```shell
1858 | restorecon file.txt
1859 | ```
1860 |
1861 | * To restore the SELinux contexts recursively for a directory:
1862 | ```shell
1863 | restorecon -R directory
1864 | ```
1865 |
1866 | 1. Use Boolean settings to modify system SELinux settings
1867 |
1868 | * SELinux has many contexts and policies already defined. Booleans within SELinux allow common rules to be turned on and off.
1869 |
1870 | * To check a SELinux Boolean setting:
1871 | ```shell
1872 | getsebool -a | grep virtualbox
1873 | ```
1874 |
1875 | * To set a SELinux Boolean setting permanently:
1876 | ```shell
1877 | setsebool -P use_virtualbox on
1878 | ```
1879 |
1880 | 1. Diagnose and address routine SELinux policy violations
1881 |
1882 | * The SELinux Administration tool is a graphical tool that enables many configuration and management operations to be performed. To install and run the tool:
1883 | ```shell
1884 | yum install setools-gui
1885 | yum install policycoreutils-gui
1886 | system-config-selinux
1887 | ```
1888 |
1889 | * SELinux alerts are written to `/var/log/audit/audit.log` if the *auditd* daemon is running, or to the `/var/log/messages` file via the *rsyslog* daemon in the absence of *auditd*.
1890 |
1891 | * A GUI called the SELinux Troubleshooter can be accessed using the *sealert* command. This allows SELinux denial messages to be analysed and provides recommendations on how to fix issues.
1892 |
1893 | ### Manage containers
1894 |
1895 | 1. Find and retrieve container images from a remote registry
1896 |
1897 | * A container is used for running multiple isolated applications on the same hardware. Unlike a virtual machine, containers share the host system's operating system. This is more lightweight but a little less flexible.
1898 |
1899 | * Podman is a container engine developed by Red Hat. Podman is an alternative to the well-known container engine Docker. It is used to directly manage pods and container images. The Podman Command Line Interface (CLI) uses the same commands as the Docker CLI. Docker is not officially supported in RHEL 8.
1900 |
1901 | * To search for an image in a remote repository and download it:
1902 | ```shell
1903 | dnf install podman -y
1904 | podman search httpd # note that docker.io/library/httpd has 3000+ stars
1905 | podman pull docker.io/library/httpd
1906 | ```
1907 |
1908 | 1. Inspect container images
1909 |
1910 | * To view images after they have been downloaded:
1911 | ```shell
1912 | podman images
1913 | ```
1914 |
1915 | * To inspect an image using Podman:
1916 | ```shell
1917 | podman inspect -l # -l gets the latest container
1918 | ```
1919 |
1920 | * To inspect an image in a remote registry using Skopeo:
1921 | ```shell
1922 | dnf install skopeo -y
1923 | skopeo inspect docker://registry.fedoraproject.org/fedora:latest
1924 | ```
1925 |
1926 | 1. Perform container management using commands such as podman and skopeo
1927 |
1928 | * The man page for Podman and bash-completion can be used to provide more details on the usage of Podman.
1929 |
1930 | * To view the logs for a container:
1931 | ```shell
1932 | podman logs -l
1933 | ```
1934 |
1935 | * To view the pids for a container:
1936 | ```shell
1937 | podman top -l
1938 | ```
1939 |
1940 | 1. Perform basic container management such as running, starting, stopping, and listing running containers
1941 |
1942 | * To start, stop and remove a container:
1943 | ```shell
1944 | podman run -dt -p 8080:80/tcp docker.io/library/httpd # redirect requests on 8080 host port to 80 container port
1945 | podman ps -a # view container details, use -a to see all
1946 | # check using 127.0.0.1:8080 on a browser
1947 | podman stop af1fc4ca0253 # container ID from podman ps -a
1948 | podman rm af1fc4ca0253
1949 | ```
1950 |
1951 | 1. Run a service inside a container
1952 |
1953 | * A Dockerfile can be used to create a custom container:
1954 | ```shell
1955 | sudo setsebool -P container_manage_cgroup on
1956 | vi Dockerfile
1957 | # contents of Dockerfile
1958 | #####
1959 | #FROM registry.access.redhat.com/ubi8/ubi-init
1960 | #RUN yum -y install httpd; yum clean all; systemctl enable httpd;
1961 | #RUN echo "Successful Web Server Test" > /var/www/html/index.html
1962 | #RUN mkdir /etc/systemd/system/httpd.service.d/; echo -e '[Service]\nRestart=always' > /etc/systemd/system/httpd.service.d/httpd.conf
1963 | #EXPOSE 80
1964 | #####
1965 | podman build -t mysysd .
1966 | podman run -d --name=mysysd_run -p 80:80 mysysd
1967 | podman ps # confirm that container is running
1968 | ```
1969 |
1970 | * Note that the SELinux Boolean referred to above can be found using:
1971 | ```shell
1972 | getsebool -a | grep "container"
1973 | ```
1974 |
1975 | * Note that the registry above is the Red Hat Universal Base Image (UBI) for RHEL 8.
1976 |
1977 | 1. Configure a container to start automatically as a systemd service
1978 |
1979 | * Podman was not originally designed to bring up an entire Linux system or manage services for such things as start-up order, dependency checking, and failed service recovery. That is the job of an initialisation system like systemd.
1980 |
1981 | * By setting up a systemd unit file on your host computer, you can have the host automatically start, stop, check the status, and otherwise manage a container as a systemd service. Many Linux services are already packaged for RHEL to run as systemd services.
1982 |
1983 | * To configure a container to run as a systemd service:
1984 | ```shell
1985 | sudo setsebool -P container_manage_cgroup on
1986 | podman run -d --name httpd-server -p 8080:80 docker.io/library/httpd # -d for detached, -p for port forwarding
1987 | podman ps # confirm the container is running
1988 | vi /etc/systemd/system/httpd-container.service
1989 | # contents of httpd-container.service
1990 | #####
1991 | #[Unit]
1992 | #Description=httpd Container Service
1993 | #Wants=syslog.service
1994 | #
1995 | #[Service]
1996 | #Restart=always
1997 | #ExecStart=/usr/bin/podman start -a httpd-server
1998 | #ExecStop=/usr/bin/podman stop -t 2 httpd-server
1999 | #
2000 | #[Install]
2001 | #WantedBy=multi-user.target
2002 | #####
2003 | systemctl start httpd-container.service
2004 | systemctl status httpd-container.service # confirm running
2005 | systemctl enable httpd-container.service # will now start as part of multi-user.target
2006 | ```
2007 |
2008 | * Note that other systemd services can be viewed in `/etc/systemd/system` and used as examples.
2009 |
2010 | 1. Attach persistent storage to a container
2011 |
2012 | * To attach persistent storage to a container:
2013 | ```shell
2014 | ls /dev/sda1 # using this disk
2015 | mkdir -p /home/containers/disk1
2016 | podman run --privileged -it -v /home/containers/disk1:/mnt docker.io/library/httpd /bin/bash # --privileged to allow with SELinux, -it for interactive terminal, -v to mount, and /bin/bash to provide a terminal
2017 | ```
2018 |
2019 | ### RHCSA Exercises
2020 |
2021 | 1. Recovery and Practice Tasks
2022 |
2023 | * Recover the system and fix repositories:
2024 | ```shell
2025 | # press e at grub menu
2026 | rd.break # add to line starting with "linux16"
2027 | # Replace line containing "BAD" with "x86_64"
2028 | mount -o remount,rw /sysroot
2029 | chroot /sysroot
2030 | passwd
2031 | touch /.autorelabel
2032 | # reboot
2033 | # reboot - will occur automatically after relabel (you can now login)
2034 | grub2-mkconfig -o /boot/grub2/grub.cfg # fix grub config
2035 | yum repolist all
2036 | # change files in /etc/yum.repos.d to enable repository
2037 | yum update -y
2038 | # reboot
2039 | ```
2040 |
2041 | * Add 3 new users alice, bob and charles. Create a marketing group and add these users to the group. Create a directory `/marketing` and change the owner to alice and group to marketing. Set permissions so that members of the marketing group can share documents in the directory but nobody else can see them. Give charles read-only permission. Create an empty file in the directory:
2042 | ```shell
2043 | useradd alice
2044 | useradd bob
2045 | useradd charles
2046 | groupadd marketing
2047 | mkdir /marketing
2048 | usermod -aG marketing alice
2049 | usermod -aG marketing bob
2050 | usermod -aG marketing charles
2051 | chown alice:marketing /marketing # group membership changes take effect at next login
2052 | chmod 770 /marketing
2053 | setfacl -m u:charles:r-x /marketing
2054 | touch /marketing/file
2056 | ```
2057 |
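* A task like this can be verified with *stat*, which prints the octal mode and ownership directly. A minimal sketch against a scratch directory (standing in for `/marketing`):
```shell
# Verify the mode and group of a directory using stat
dir=$(mktemp -d)        # stand-in for /marketing
chmod 770 "$dir"
mode=$(stat -c '%a' "$dir")
grp=$(stat -c '%G' "$dir")
echo "mode=$mode group=$grp"
```
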
2058 | * Set the system time zone and configure the system to use NTP:
2059 | ```shell
2060 | yum install chrony
2061 | systemctl enable chronyd.service
2062 | systemctl start chronyd.service
2063 | timedatectl set-timezone Australia/Sydney
2064 | timedatectl set-ntp true
2065 | ```
2066 |
2067 | * Install and enable the GNOME desktop:
2068 | ```shell
2069 | yum grouplist
2070 | yum groupinstall "GNOME Desktop" -y
2071 | systemctl set-default graphical.target
2072 | reboot
2073 | ```
2074 |
2075 | * Configure the system to be an NFS client:
2076 | ```shell
2077 | yum install nfs-utils
2078 | ```
2079 |
2080 | * Configure password aging for charles so his password expires in 60 days:
2081 | ```shell
2082 | chage -M 60 charles
2083 | chage -l charles # to confirm result
2084 | ```
2085 |
2086 | * Lock bob's account:
2087 | ```shell
2088 | passwd -l bob
2089 | passwd --status bob # to confirm result
2090 | ```
2091 |
2092 | * Find all setuid files on the system and save the list to `/testresults/setuid.list`:
2093 | ```shell
2094 | mkdir -p /testresults && find / -perm /4000 > /testresults/setuid.list
2095 | ```
2096 |
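* The `-perm /4000` test matches any file with the setuid bit set. A self-contained sketch of the match using a scratch directory:
```shell
# Create two files and set the setuid bit on one of them
dir=$(mktemp -d)
touch "$dir/bin1" "$dir/bin2"
chmod 4755 "$dir/bin1"                    # 4xxx = setuid bit
suid=$(find "$dir" -type f -perm /4000)   # only bin1 should match
echo "$suid"
```
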
2097 | * Set the system FQDN to *centos.local* and alias to *centos*:
2098 | ```shell
2099 | hostnamectl set-hostname centos --pretty
2100 | hostnamectl set-hostname centos.local
2101 | hostname -s # to confirm result
2102 | hostname # to confirm result
2103 | ```
2104 |
2105 | * As charles, create a once-off job that creates a file called `/testresults/bob` containing the text "Hello World. This is Charles." 2 days from now:
2106 | ```shell
2107 | vi hello.sh
2108 | # contents of hello.sh
2109 | #####
2110 | ##!/bin/bash
2111 | #echo "Hello World. This is Charles." > /testresults/bob
2112 | #####
2113 | chmod 755 hello.sh
2114 | usermod -U -e "" charles # the account was locked; unlock it and clear the expiry date
2115 | at now + 2 days -f hello.sh
2116 | cd /var/spool/at # can check directory as root to confirm
2117 | atq # check queued job as charles
2118 | # atrm 1 # can remove the job using this command
2119 | ```
2120 |
2121 | * As alice, create a periodic job that appends the current date to the file `/testresults/alice` every 5 minutes every Sunday and Wednesday between the hours of 3am and 4am. Remove the ability of bob to create cron jobs:
2122 | ```shell
2123 | echo "bob" >> /etc/cron.deny
2124 | sudo -i -u alice
2125 | vi addDate.sh
2126 | # contents of addDate.sh
2127 | #####
2128 | ##!/bin/bash
2129 | #date >> /testresults/alice
2130 | #####
2131 | chmod +x /testresults/alice/addDate.sh # cron needs the script to be executable
2132 | crontab -e
2133 | # */5 03,04 * * sun,wed /testresults/alice/addDate.sh
2134 | crontab -l # view crontab
2135 | # crontab -r can remove the job using this command
2136 | ```
2137 |
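* The five crontab time fields are minute, hour, day-of-month, month, and day-of-week, in that order. A sketch of splitting the schedule used above into its fields:
```shell
# Split a crontab schedule into its five time fields
spec='*/5 03,04 * * sun,wed'
set -f            # disable globbing so the '*' fields stay literal
set -- $spec
minute=$1; hour=$2; dom=$3; month=$4; dow=$5
set +f
echo "minute=$minute hour=$hour dow=$dow"
```
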
2138 | * Set the system SELinux mode to permissive:
2139 | ```shell
2140 | sestatus # confirm current mode is not permissive
2141 | vi /etc/selinux/config # Update to permissive
2142 | reboot
2143 | sestatus # confirm current mode is permissive
2144 | ```
2145 |
2146 | * Create a firewall rule to drop all traffic from 10.10.10.*:
2147 | ```shell
2148 | firewall-cmd --zone=drop --add-source 10.10.10.0/24
2149 | firewall-cmd --list-all --zone=drop # confirm rule is added
2150 | firewall-cmd --permanent --zone=drop --add-source=10.10.10.0/24
2151 | reboot
2152 | firewall-cmd --list-all --zone=drop # confirm rule remains
2153 | ```
2154 |
2155 | 1. Linux Academy - Using SSH, Redirection, and Permissions in Linux
2156 |
2157 | * Enable SSH to connect without a password from the dev user on server1 to the dev user on server2:
2158 | ```shell
2159 | ssh dev@3.85.167.210
2160 | ssh-keygen # created in /home/dev/.ssh
2161 | ssh-copy-id 34.204.14.34
2162 | ```
2163 |
2164 | * Copy all tar files from `/home/dev/` on server1 to `/home/dev/` on server2, and extract them making sure the output is redirected to `/home/dev/tar-output.log`:
2165 | ```shell
2166 | scp *.tar* dev@34.204.14.34:/home/dev
2167 | tar xvfz deploy_script.tar.gz > tar-output.log
2168 | tar xvfz deploy_content.tar.gz >> tar-output.log
2169 | ```
2170 |
2171 | * Set the umask so that new files are only readable and writeable by the owner:
2172 | ```shell
2173 | umask 0066 # new files default to 0666; masking the 0066 bits leaves 0600
2174 | ```
2175 |
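* The mask clears permission bits rather than arithmetically subtracting; the effect can be checked by creating a file under the new mask:
```shell
# Create a file under umask 0066 and read back its mode
dir=$(mktemp -d)
mode=$(cd "$dir" && umask 0066 && touch f && stat -c '%a' f)
echo "mode=$mode"    # 0666 with the 0066 bits cleared leaves 0600
```
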
2176 | * Verify the `/home/dev/deploy.sh` script is executable and run it:
2177 | ```shell
2178 | chmod 711 deploy.sh
2179 | ./deploy.sh
2180 | ```
2181 |
2182 | 1. Linux Academy - Storage Management
2183 |
2184 | * Create a 2GB GPT Partition:
2185 | ```shell
2186 | lsblk # observe nvme1n1 disk
2187 | sudo gdisk /dev/nvme1n1
2188 | # enter n for new partition
2189 | # accept default partition number
2190 | # accept default starting sector
2191 | # for the ending sector, enter +2G to create a 2GB partition
2192 | # accept default partition type
2193 | # enter w to write the partition information
2194 | # enter y to proceed
2195 | lsblk # observe nvme1n1 now has partition
2196 | partprobe # inform OS of partition change
2197 | ```
2198 |
2199 | * Create a 2GB MBR Partition:
2200 | ```shell
2201 | lsblk # observe nvme2n1 disk
2202 | sudo fdisk /dev/nvme2n1
2203 | # enter n for new partition
2204 | # accept default partition type
2205 | # accept default partition number
2206 | # accept default first sector
2207 | # for the ending sector, enter +2G to create a 2GB partition
2208 | # enter w to write the partition information
2209 | ```
2210 |
2211 | * Format the GPT Partition with XFS and mount the device persistently:
2212 | ```shell
2213 | sudo mkfs.xfs /dev/nvme1n1p1
2214 | sudo blkid # observe nvme1n1p1 UUID
2215 | vi /etc/fstab
2216 | # add a line with the new UUID and specify /mnt/gptxfs
2217 | mkdir /mnt/gptxfs
2218 | sudo mount -a
2219 | mount # confirm that it's mounted
2220 | ```
2221 |
2222 | * Format the MBR Partition with ext4 and mount the device persistently:
2223 | ```shell
2224 | sudo mkfs.ext4 /dev/nvme2n1p1
2225 | mkdir /mnt/mbrext4
2226 | mount /dev/nvme2n1p1 /mnt/mbrext4
2227 | mount # confirm that it's mounted
2228 | ```
2229 |
2230 | 1. Linux Academy - Working with LVM Storage
2231 |
2232 | * Create Physical Devices:
2233 | ```shell
2234 | lsblk # observe disks xvdf and xvdg
2235 | pvcreate /dev/xvdf /dev/xvdg
2236 | ```
2237 |
2238 | * Create Volume Group:
2239 | ```shell
2240 | vgcreate RHCSA /dev/xvdf /dev/xvdg
2241 | vgdisplay # view details
2242 | ```
2243 |
2244 | * Create a Logical Volume:
2245 | ```shell
2246 | lvcreate -n pinehead -L 3G RHCSA
2247 | lvdisplay # or lvs, to view details
2248 | ```
2249 |
2250 | * Format the LV as XFS and mount it persistently at `/mnt/lvol`:
2251 | ```shell
2252 | fdisk -l # get path for lv
2253 | mkfs.xfs /dev/mapper/RHCSA-pinehead
2254 | mkdir /mnt/lvol
2255 | blkid # copy UUID for /dev/mapper/RHCSA-pinehead
2256 | echo "UUID=76747796-dc33-4a99-8f33-58a4db9a2b59" >> /etc/fstab
2257 | # add the path /mnt/lvol and copy the other columns
2258 | mount -a
2259 | mount # confirm that it's mounted
2260 | ```
2261 |
2262 | * Grow the mount point by 200MB:
2263 | ```shell
2264 | lvextend -L +200M /dev/RHCSA/pinehead
2265 | ```
2266 |
2267 | 1. Linux Academy - Network File Systems
2268 |
2269 | * Set up a SAMBA share:
2270 | ```shell
2271 | # on the server
2272 | yum install samba -y
2273 | vi /etc/samba/smb.conf
2274 | # add the below block
2275 | #####
2276 | #[share]
2277 | # browsable = yes
2278 | # path = /smb
2279 | # writeable = yes
2280 | #####
2281 | useradd shareuser
2282 | smbpasswd -a shareuser # enter password
2283 | mkdir /smb
2284 | systemctl start smb
2285 | chmod 777 /smb
2286 |
2287 | # on the client
2288 | mkdir /mnt/smb
2289 | yum install cifs-utils -y
2290 | # on the server hostname -I shows private IP
2291 | mount -t cifs //10.0.1.100/share /mnt/smb -o username=shareuser,password= # private ip used
2292 | ```
2293 |
2294 | * Set up the NFS share:
2295 | ```shell
2296 | # on the server
2297 | yum install nfs-utils -y
2298 | mkdir /nfs
2299 | echo "/nfs *(rw)" >> /etc/exports
2300 | chmod 777 /nfs
2301 | exportfs -a
2302 | systemctl start {rpcbind,nfs-server,rpc-statd,nfs-idmapd}
2303 |
2304 | # on the client
2305 | yum install nfs-utils -y
2306 | mkdir /mnt/nfs
2307 | showmount -e 10.0.1.101 # private ip used
2308 | systemctl start rpcbind
2309 | mount -t nfs 10.0.1.101:/nfs /mnt/nfs
2310 | ```
2311 |
2312 | 1. Linux Academy - Maintaining Linux Systems
2313 |
2314 | * Schedule a job to update the server midnight tonight:
2315 | ```shell
2316 | echo "dnf update -y" > update.sh
2317 | chmod +x update.sh
2318 | at midnight -f update.sh
2319 | atq # to verify that job is scheduled
2320 | ```
2321 |
2322 | * Modify the NTP pools:
2323 | ```shell
2324 | vi /etc/chrony.conf
2325 | # modify the pool directive at the top of the file
2326 | ```
2327 |
2328 | * Modify GRUB to boot a different kernel:
2329 | ```shell
2330 | grubby --info=ALL # list installed kernels
2331 | grubby --set-default-index=1
2332 | grubby --default-index # verify it worked
2333 | ```
2334 |
2335 | 1. Linux Academy - Managing Users in Linux
2336 |
2337 | * Create the superhero group:
2338 | ```shell
2339 | groupadd superhero
2340 | ```
2341 |
2342 | * Add user accounts for Tony Stark, Diana Prince, and Carol Danvers and add them to the superhero group:
2343 | ```shell
2344 | useradd tstark -G superhero
2345 | useradd cdanvers -G superhero
2346 | useradd dprince -G superhero
2347 | ```
2348 |
2349 | * Replace the primary group of Tony Stark with the wheel group:
2350 | ```shell
2351 | usermod -g wheel tstark
2352 | grep wheel /etc/group # to verify
2353 | ```
2354 |
2355 | * Lock the account of Diana Prince:
2356 | ```shell
2357 | usermod -L dprince
2358 | chage dprince -E 0
2359 | ```
2360 |
2361 | 1. Linux Academy - SELinux Learning Activity
2362 |
2363 | * Fix the SELinux permission on `/opt/website`:
2364 | ```shell
2365 | cd /var/www # the default root directory for a web server
2366 | ls -Z # observe permission on html folder
2367 | semanage fcontext -a -t httpd_sys_content_t '/opt/website(/.*)?'
2368 | restorecon -R /opt/website
2369 | ```
2370 |
2371 | * Deploy the website and test:
2372 | ```shell
2373 | mv /root/index.html /opt/website
2374 | curl localhost/index.html # receive connection refused response
2375 | systemctl start httpd # need to start the service
2376 | setenforce 0 # set to permissive to allow for now
2377 | ```
2378 |
2379 | * Resolve the error when attempting to access `/opt/website`:
2380 | ```shell
2381 | ll -Z # notice website has admin_home_t
2382 | restorecon /opt/website/index.html
2383 | ```
2384 |
2385 | 1. Linux Academy - Setting up VDO
2386 |
2387 | * Install VDO and ensure the service is running:
2388 | ```shell
2389 | dnf install vdo -y
2390 | systemctl start vdo && systemctl enable vdo
2391 | ```
2392 |
2393 | * Setup a 100G VM storage volume:
2394 | ```shell
2395 | vdo create --name=ContainerStorage --device=/dev/nvme1n1 --vdoLogicalSize=100G --sparseIndex=disabled
2396 | # sparseIndex disabled to meet the requirement of dense index deduplication
2397 | mkfs.xfs -K /dev/mapper/ContainerStorage
2398 | mkdir /mnt/containers
2399 | mount /dev/mapper/ContainerStorage /mnt/containers
2400 | vi /etc/fstab # add line /dev/mapper/ContainerStorage /mnt/containers xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
2401 | ```
2402 |
2403 | * Setup a 60G website storage volume:
2404 | ```shell
2405 | vdo create --name=WebsiteStorage --device=/dev/nvme2n1 --vdoLogicalSize=60G --deduplication=disabled
2406 | # deduplication set to meet requirement of no deduplication
2407 | mkfs.xfs -K /dev/mapper/WebsiteStorage
2408 | mkdir /mnt/website
2409 | mount /dev/mapper/WebsiteStorage /mnt/website
2410 | vi /etc/fstab # add line for /dev/mapper/WebsiteStorage /mnt/website xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
2411 | ```
2412 |
2413 | 1. Linux Academy - Final Practice Exam
2414 |
2415 | * Start the guest VM:
2416 | ```shell
2417 | # use a VNC viewer connect to IP:5901
2418 | virsh list --all
2419 | virsh start centos7.0
2420 | # we already have the VM installed, we just needed to start it (so we don't need virt-install)
2421 | dnf install virt-viewer -y
2422 | virt-viewer centos7.0 # virt-manager can also be used
2423 | # now we are connected to the virtual machine
2424 | # send key Ctrl+Alt+Del when prompted for password, as we don't know it
2425 | # press e on GRUB screen
2426 | # add rd.break on the linux16 line
2427 | # now at the emergency console
2428 | mount -o remount,rw /sysroot
2429 | chroot /sysroot
2430 | passwd
2431 | touch /.autorelabel
2432 | reboot -f # needs -f to work for some reason
2433 | # it will restart when it completes relabelling
2434 | ```
2435 |
2436 | * Create three users (Derek, Tom, and Kenny) that belong to the instructors group. Prevent Tom's user from accessing a shell, and make his account expire 10 days from now:
2437 | ```shell
2438 | groupadd instructors
2439 | useradd derek -G instructors
2440 | useradd tom -s /sbin/nologin -G instructors
2441 | useradd kenny -G instructors
2442 | chage tom -E 2020-10-14
2443 | chage -l tom # to check
2444 | cat /etc/group | grep instructors # to check
2445 | ```
2446 |
2447 | * Download and configure apache to serve index.html from `/var/web` and access it from the host machine:
2448 | ```shell
2449 | # there is some setup first to establish connectivity/repo
2450 | nmcli device # eth0 shown as disconnected
2451 | nmcli connection up eth0
2452 | vi /etc/yum.repos.d/centos7.repo
2453 | # contents of centos.repo
2454 | #####
2455 | #[centos7]
2456 | #name = centos
2457 | #baseurl = http://mirror.centos.org/centos/7/os/x86_64/
2458 | #enabled = 1
2459 | #gpgcheck = 1
2460 | #gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
2461 | #####
2462 | yum repolist # confirm
2463 | yum install httpd -y
2464 | systemctl start httpd.service
2465 | mkdir /var/web
2466 | vi /etc/httpd/conf/httpd.conf
2467 | # change DocumentRoot to "/var/web"
2468 | # change Directory tag to "/var/web"
2469 | # change Directory tag to "/var/web/html"
2470 | echo "Hello world" > /var/web/index.html
2471 | systemctl start httpd.service
2472 | ip a s # note the first inet address for eth0 # from the guest VM
2473 | curl http://192.168.122.213/ # from the host
2474 | # note that no route to host returned
2475 | firewall-cmd --list-services # notice no http service
2476 | firewall-cmd --add-service=http --permanent
2477 | firewall-cmd --reload
2478 | firewall-cmd --list-services # confirm http service
2479 | curl http://192.168.122.213/ # from the host
2480 | # note that 403 error is returned
2481 | # ll -Z comparison between /var/web and /var/www shows that the SELinux type of index.html should be httpd_sys_content_t and not var_t
2482 | yum provides \*/semanage # suggests policycoreutils-python
2483 | yum install policycoreutils-python -y
2484 | semanage fcontext -a -t httpd_sys_content_t "/var/web(/.*)?"
2485 | restorecon -R -v /var/web
2486 | curl http://192.168.122.213/ # from the host - success!
2487 | ```
2488 |
2489 | * Configure umask to ensure all files created by any user cannot be accessed by the "other" users:
2490 | ```shell
2491 | umask 0027 # also reflect change in /etc/profile and /etc/bashrc
2492 | # files default to 0666, giving 0640; directories default to 0777, giving 0750 (no access for "other")
2493 | ```
2494 |
2495 | * Find all files in `/etc` (not including subdirectories) that are older than 720 days, and output a list to `/root/oldfiles`:
2496 | ```shell
2497 | find /etc -maxdepth 1 -mtime +720 > /root/oldfiles
2498 | ```
2499 |
2500 | * Find all log messages in `/var/log/messages` that contain "ACPI", and export them to a file called `/root/logs`. Then archive all of `/var/log` and save it to `/tmp/log_archive.tgz`:
2501 | ```shell
2502 | grep "ACPI" /var/log/messages > /root/logs
2503 | tar -czf /tmp/log_archive.tgz /var/log/ # note f flag must be last!
2504 | ```
2505 |
2506 | * Modify the GRUB timeout and make it 1 second instead of 5 seconds:
2507 | ```shell
2508 | find / -iname grub.cfg
2509 | # /etc/grub.d, /etc/default/grub and grub2-mkconfig referred to in /boot/grub2/grub.cfg
2510 | vi /etc/default/grub # change GRUB_TIMEOUT to 1
2511 | grub2-mkconfig -o /boot/grub2/grub.cfg
2512 | reboot # confirm timeout now 1 second
2513 | ```
2514 |
2515 | * Create a daily cron job at 4:27PM for the Derek user that runs `cat /etc/redhat-release` and redirects the output to `/home/derek/release`:
2516 | ```shell
2517 | cd /home/derek
2518 | vi script.sh
2519 | # contents of script.sh
2520 | #####
2521 | ##!/bin/sh
2522 | #cat /etc/redhat-release > /home/derek/release
2523 | #####
2524 | chmod +x script.sh
2525 | crontab -u derek -e
2526 | # contents of crontab
2527 | #####
2528 | #27 16 * * * /home/derek/script.sh
2529 | #####
2530 | crontab -u derek -l # confirm
2531 | ```
2532 |
2533 | * Configure `time.nist.gov` as the only NTP Server:
2534 | ```shell
2535 | vi /etc/chrony.conf
2536 | # replace lines at the top with server time.nist.gov
2537 | ```
2538 |
2539 | * Create an 800M swap partition on the `vdb` disk and use the UUID to ensure that it is persistent:
2540 | ```shell
2541 | fdisk -l # note that we have one MBR partition
2542 | fdisk /dev/vdb
2543 | # select n
2544 | # select p
2545 | # select default
2546 | # select default
2547 | # enter +800M
2548 | # select w
2549 | partprobe
2550 | lsblk # confirm creation
2551 | mkswap /dev/vdb1
2552 | vi /etc/fstab
2553 | # add line containing UUID and swap for the next 2 columns
2554 | swapon -a
2555 | swapon -s # confirm swap is available
2556 | ```
2557 |
2558 | * Create a new logical volume (LV-A) with a size of 30 extents that belongs to the volume group VG-A (with a PE size of 32M). After creating the volume, configure the server to mount it persistently on `/mnt`:
2559 | ```shell
2560 | # observe through fdisk -l and df -h that /dev/vdc is available with no file system
2561 | yum provides pvcreate # lvm2 identified
2562 | yum install lvm2 -y
2563 | pvcreate /dev/vdc
2564 | vgcreate VG-A /dev/vdc -s 32M
2565 | lvcreate -n LV-A -l 30 VG-A
2566 | mkfs.xfs /dev/VG-A/LV-A
2567 | # note in directory /dev/mapper the name is VG--A-LV--A
2568 | # add an entry to /etc/fstab at /dev/mapper/VG--A-LV--A and /mnt (note that you can mount without the UUID here)
2569 | mount -a
2570 | df -h # verify that LV-A is mounted
2571 | ```
2572 |
2573 | * On the host, not the guest VM, utilise ldap.linuxacademy.com for SSO, and configure AutoFS to mount user's home directories on login. Make sure to use Kerberos:
2574 | ```shell
2575 | # this objective is no longer required in RHCSA 8
2576 | ```
2577 |
2578 | * Change the hostname of the guest to "RHCSA":
2579 | ```shell
2580 | hostnamectl set-hostname rhcsa
2581 | ```
2582 |
2583 | 1. Asghar Ghori - Exercise 3-1: Create Compressed Archives
2584 |
2585 | * Create tar files compressed with gzip and bzip2 and extract them:
2586 | ```shell
2587 | # gzip
2588 | tar -czf home.tar.gz /home
2589 | tar -tf home.tar.gz # list files
2590 | tar -xf home.tar.gz
2591 |
2592 | # bzip
2593 | tar -cjf home.tar.bz2 /home
2594 | tar -xf home.tar.bz2 -C /tmp
2595 | ```
2596 |
2597 | 1. Asghar Ghori - Exercise 3-2: Create and Manage Hard Links
2598 |
2599 | * Create an empty file *hard1* under */tmp* and display its attributes. Create hard links *hard2* and *hard3*. Edit *hard2* and observe the attributes. Remove *hard1* and *hard3* and list the attributes again:
2600 | ```shell
2601 | touch hard1
2602 | ln hard1 hard2
2603 | ln hard1 hard3
2604 | ll -i
2605 | # observe link count is 3 and same inode number
2606 | echo "hi" > hard2
2607 | # observe file size increased to the same value for all files
2608 | rm hard1
2609 | rm hard3
2610 | # observe link count is 1
2611 | ```
2612 |
2613 | 1. Asghar Ghori - Exercise 3-3: Create and Manage Soft Links
2614 |
2615 | * Create a soft link *soft1* under `/root` pointing to `/tmp/hard2`. Edit *soft1* and list the attributes after editing. Remove *hard2* and then list *soft1*:
2616 | ```shell
2617 | ln -s /tmp/hard2 soft1
2618 | ll -i
2619 | # observe soft1 has its own inode number, different from hard2
2620 | echo "hi" >> soft1
2621 | # observe the file size of /tmp/hard2 increased
2622 | rm /tmp/hard2
2623 | ll -i
2624 | # observe the soft link is now broken
2625 | ```
2626 |
2627 | 1. Asghar Ghori - Exercise 4-1: Modify Permission Bits Using Symbolic Form
2628 |
2629 | * Create a file *permfile1* with read permissions for owner, group and other. Add an execute bit for the owner and a write bit for group and public. Revoke the write bit from public and assign read, write, and execute bits to the three user categories at the same time. Revoke write from the owning group and write, and execute bits from public:
2630 | ```shell
2631 | touch permfile1
2632 | chmod 444 permfile1
2633 | chmod -v u+x,g+w,o+w permfile1
2634 | chmod -v o-w,a=rwx permfile1
2635 | chmod -v g-w,o-wx permfile1
2636 | ```
2637 |
2638 | 1. Asghar Ghori - Exercise 4-2: Modify Permission Bits Using Octal Form
2639 |
2640 | * Create a file *permfile2* with read permissions for owner, group and other. Add an execute bit for the owner and a write bit for group and public. Revoke the write bit from public and assign read, write, and execute permissions to the three user categories at the same time:
2641 | ```shell
2642 | touch permfile2
2643 | chmod 444 permfile2
2644 | chmod -v 566 permfile2
2645 | chmod -v 564 permfile2
2646 | chmod -v 777 permfile2
2647 | ```
2648 |
2649 | 1. Asghar Ghori - Exercise 4-3: Test the Effect of setuid Bit on Executable Files
2650 |
2651 | * As root, remove the setuid bit from `/usr/bin/su`. Observe the behaviour for another user attempting to switch into root, and then add the setuid bit back:
2652 | ```shell
2653 | chmod -v u-s /usr/bin/su
2654 | # users now receive authentication failure when attempting to switch
2655 | chmod -v u+s /usr/bin/su
2656 | ```
2657 |
2658 | 1. Asghar Ghori - Exercise 4-4: Test the Effect of setgid Bit on Executable Files
2659 |
2660 | * As root, remove the setgid bit from `/usr/bin/write`. Observe the behaviour when another user attempts to run this command, and then add the setgid bit back:
2661 | ```shell
2662 | chmod -v g-s /usr/bin/write
2663 | # Other users can no longer write to root
2664 | chmod -v g+s /usr/bin/write
2665 | ```
2666 |
2667 | 1. Asghar Ghori - Exercise 4-5: Set up Shared Directory for Group Collaboration
2668 |
2669 | * Create users *user100* and *user200*. Create a group *sgrp* with GID 9999 and add *user100* and *user200* to this group. Create a directory `/sdir` with the owner and owning group set to *root* and *sgrp*, set the setgid bit on */sdir*, and test:
2670 | ```shell
2671 | groupadd sgrp -g 9999
2672 | useradd user100 -G sgrp
2673 | useradd user200 -G sgrp
2674 | mkdir /sdir
2675 | chown root:sgrp /sdir
2676 | chmod g+s,g+w /sdir
2677 | # as user100
2678 | cd /sdir
2679 | touch file
2680 | # owning group is sgrp and not user100 due to setgid bit
2681 | # as user200
2682 | vi file
2683 | # user200 can also read and write
2684 | ```
2685 |
2686 | 1. Asghar Ghori - Exercise 4-6: Test the Effect of Sticky Bit
2687 |
2688 | * Create a file under `/tmp` as *user100* and try to delete it as *user200*. Unset the sticky bit on `/tmp` and try to erase the file again. Restore the sticky bit on `/tmp`:
2689 | ```shell
2690 | # as user100
2691 | touch /tmp/myfile
2692 | # as user200
2693 | rm /tmp/myfile
2694 | # cannot remove file: Operation not permitted
2695 | # as root
2696 | chmod -v o-t /tmp
2697 | # as user200
2698 | rm /tmp/myfile
2699 | # file can now be removed
2700 | # as root
2701 | chmod -v o+t /tmp
2702 | ```
2703 |
2704 | 1. Asghar Ghori - Exercise 4-7: Identify, Apply, and Erase Access ACLs
2705 |
2706 | * Create a file *acluser* as *user100* in `/tmp` and check if there are any ACL settings on the file. Apply access ACLs on the file for *user100* for read and write access. Add *user200* to the file for full permissions. Remove all access ACLs from the file:
2707 | ```shell
2708 | # as user100
2709 | touch /tmp/acluser
2710 | cd /tmp
2711 | getfacl acluser
2712 | # no ACLs on the file
2713 | setfacl -m u:user100:rw,u:user200:rwx acluser
2714 | getfacl acluser
2715 | # ACLs have been added
2716 | setfacl -x u:user100,u:user200 acluser
2717 | getfacl acluser
2718 | # ACLs have been removed
2719 | ```
2720 |
2721 | 1. Asghar Ghori - Exercise 4-8: Apply, Identify, and Erase Default ACLs
2722 |
2723 | * Create a directory *projects* as *user100* under `/tmp`. Set the default ACLs on the directory for *user100* and *user200* to give them full permissions. Create a subdirectory *prjdir1* and a file *prjfile1* under *projects* and observe the effects of default ACLs on them. Delete the default entries:
2724 | ```shell
2725 | # as user100
2726 | cd /tmp
2727 | mkdir projects
2728 | getfacl projects
2729 | # No default ACLs for user100 and user200
2730 | setfacl -dm u:user100:rwx,u:user200:rwx projects
2731 | getfacl projects
2732 | # Default ACLs added for user100 and user200
2733 | mkdir projects/prjdir1
2734 | getfacl projects/prjdir1
2735 | # Default ACLs inherited
2736 | touch projects/prjfile1
2737 | getfacl projects/prjfile1
2738 | # Default ACLs inherited
2739 | setfacl -k projects
2740 | ```
2741 |
2742 | 1. Asghar Ghori - Exercise 5-1: Create a User Account with Default Attributes
2743 |
2744 | * Create *user300* with the default attributes in the *useradd* and *login.defs* files. Assign this user a password and show the line entries from all 4 authentication files:
2745 | ```shell
2746 | useradd user300
2747 | passwd user300
2748 | grep user300 /etc/passwd /etc/shadow /etc/group /etc/gshadow
2749 | ```
2750 |
2751 |
2752 | 1. Asghar Ghori - Exercise 5-2: Create a User Account with Custom Values
2753 |
2754 | * Create *user200* with a custom UID of 1010, home directory `/home/user200`, and shell `/bin/bash`. Assign this user a password and show the line entries from all 4 authentication files:
2755 | ```shell
2756 | useradd -u 1010 -d /home/user200 -s /bin/bash -m user200
2757 | passwd user200
2758 | grep user200 /etc/passwd /etc/shadow /etc/group /etc/gshadow
2759 | ```
2760 |
2761 | 1. Asghar Ghori - Exercise 5-3: Modify and Delete a User Account
2762 |
2763 | * For *user200* change the login name to *user200new*, UID to 2000, home directory to `/home/user200new`, and login shell to `/sbin/nologin`. Display the line entry for *user200new* from the *passwd* file for validation. Remove this user and confirm the deletion:
2764 | ```shell
2765 | usermod -l user200new -m -d /home/user200new -s /sbin/nologin -u 2000 user200
2766 | grep user200new /etc/passwd # confirm updated values
2767 | userdel -r user200new
2768 | grep user200new /etc/passwd # confirm user200new deleted
2769 | ```
2770 |
2771 | 1. Asghar Ghori - Exercise 5-4: Create a User Account with No-Login Access
2772 |
2773 | * Create an account *user400* with default attributes but with a non-interactive shell. Assign this user the nologin shell to prevent them from signing in. Display the new line entry from the *passwd* file and test the account:
2774 | ```shell
2775 | useradd user400 -s /sbin/nologin
2776 | passwd user400 # change password
2777 | grep user400 /etc/passwd
2778 | sudo -i -u user400 # This account is currently not available
2779 | ```
2780 |
2781 | 1. Asghar Ghori - Exercise 6-1: Set and Confirm Password Aging with chage
2782 |
2783 | * Configure password ageing for user100 using the *chage* command. Set the mindays to 7, maxdays to 28, and warndays to 5. Verify the new settings. Rerun the command and set account expiry to January 31, 2021:
2784 | ```shell
2785 | chage -m 7 -M 28 -W 5 user100
2786 | chage -l user100
2787 | chage -E 2021-01-31 user100
2788 | chage -l user100
2789 | ```
2790 |
2791 | 1. Asghar Ghori - Exercise 6-2: Set and Confirm Password Aging with passwd
2792 |
2793 | * Configure password aging for *user100* using the *passwd* command. Set the mindays to 10, maxdays to 90, and warndays to 14, and verify the new settings. Set the number of inactivity days to 5 and ensure that the user is forced to change their password upon next login:
2794 | ```shell
2795 | passwd -n 10 -x 90 -w 14 user100
2796 | passwd -S user100 # view status
2797 | passwd -i 5 user100
2798 | passwd -e user100
2799 | passwd -S user100
2800 | ```
2801 |
2802 | 1. Asghar Ghori - Exercise 6-3: Lock and Unlock a User Account with usermod and passwd
2803 |
2804 | * Disable the ability of user100 to log in using the *usermod* and *passwd* commands. Verify the change and then reverse it:
2805 | ```shell
2806 | grep user100 /etc/shadow # confirm account not locked by absence of "!" in password
2807 | passwd -l user100 # usermod -L also works
2808 | grep user100 /etc/shadow
2809 | passwd -u user100 # usermod -U also works
2810 | ```
2811 |
2812 | 1. Asghar Ghori - Exercise 6-4: Create a Group and Add Members
2813 |
2814 | * Create a group called *linuxadm* with GID 5000 and another group called *dba* sharing the GID 5000. Add *user100* as a secondary member to group *linuxadm*:
2815 | ```shell
2816 | groupadd -g 5000 linuxadm
2817 | groupadd -o -g 5000 dba # note need -o to share GID
2818 | usermod -aG linuxadm user100 # -a appends rather than replacing existing secondary groups
2819 | grep user100 /etc/group # confirm user added to group
2820 | ```
2821 |
2822 | 1. Asghar Ghori - Exercise 6-5: Modify and Delete a Group Account
2823 |
2824 | * Change the *linuxadm* group name to *sysadm* and the GID to 6000. Modify the primary group for user100 to *sysadm*. Remove the *sysadm* group and confirm:
2825 | ```shell
2826 | groupmod -n sysadm -g 6000 linuxadm
2827 | usermod -g sysadm user100
2828 | groupdel sysadm # can't remove while it is user100's primary group
2829 | ```
2830 |
2831 | 1. Asghar Ghori - Exercise 6-6: Modify File Owner and Owning Group
2832 |
2833 | * Create a file *file10* and a directory *dir10* as *user200* under `/tmp`, and then change the ownership for *file10* to *user100* and the owning group to *dba* in 2 separate transactions. Apply ownership on *file10* to *user200* and owning group to *user100* at the same time. Change the 2 attributes on the directory to *user200:dba* recursively:
2834 | ```shell
2835 | # as user200
2836 | mkdir /tmp/dir10
2837 | touch /tmp/file10
2838 | sudo chown user100 /tmp/file10
2839 | sudo chgrp dba /tmp/file10
2840 | sudo chown user200:user100 /tmp/file10
2841 | sudo chown -R user200:dba /tmp/dir10
2842 | ```
2843 |
2844 | 1. Asghar Ghori - Exercise 7-1: Modify Primary Command Prompt
2845 |
2846 | * Customise the primary shell prompt to display the username, hostname, and current working directory using variable and command substitution. Edit the `~/.profile` file for *user100* and define the new value in there for permanence:
2847 | ```shell
2848 | export PS1="< $LOGNAME on $(hostname) in \$PWD>"
2849 | # add to ~/.profile for user100
2850 | ```
2851 |
2852 | 1. Asghar Ghori - Exercise 8-1: Submit, View, List, and Remove an at Job
2853 |
2854 | * Submit a job as *user100* to run the *date* command at 11:30pm on March 31, 2021, and have the output and any error messages generated redirected to `/tmp/date.out`. List the submitted job and then remove it:
2855 | ```shell
2856 | # as user100
2857 | at 11:30pm 03/31/2021
2858 | # enter "date &> /tmp/date.out"
2859 | atq # view job in queue
2860 | at -c 1 # view job details
2861 | atrm 1 # remove job
2862 | ```
2863 |
2864 | 1. Asghar Ghori - Exercise 8-2: Add, List, and Remove a Cron Job
2865 |
2866 | * Assume all users are currently denied access to cron. Submit a cron job as *user100* to echo "Hello, this is a cron test.". Schedule this command to execute at every fifth minute past the hour between 10:00 am and 11:00 am on the fifth and twentieth of every month. Have the output redirected to `/tmp/hello.out`. List the cron entry and then remove it:
2867 | ```shell
2868 | # as root
2869 | echo "user100" > /etc/cron.allow
2870 | # ensure cron.deny is empty
2871 | # as user100
2872 | crontab -e
2873 | # */5 10,11 5,20 * * echo "Hello, this is a cron test." >> /tmp/hello.out
2874 | crontab -l # list
2875 | crontab -r # remove
2876 | ```
2877 |
2878 | 1. Asghar Ghori - Exercise 9-1: Perform Package Management Tasks Using rpm
2879 |
2880 | * Verify the integrity and authenticity of a package called *dcraw* located in the `/mnt/AppStream/Packages` directory on the installation image and then install it. Display basic information about the package, show files it contains, list documentation files, verify the package attributes and remove the package:
2881 | ```shell
2882 | ls -l /mnt/AppStream/Packages/dcraw*
2883 | rpmkeys -K /mnt/AppStream/Packages/dcraw-9.27.0-9.el8.x86_64.rpm # check integrity
2884 | sudo rpm -ivh /mnt/AppStream/Packages/dcraw-9.27.0-9.el8.x86_64.rpm # -i is install, -v is verbose and -h is hash
2885 | rpm -qi dcraw # -q is query and -i is info
2886 | rpm -qd dcraw # -q is query and -d is docfiles
2887 | rpm -Vv dcraw # -V is verify and -v is verbose
2888 | sudo rpm -ve dcraw # -v is verbose and -e is erase
2889 | ```
2890 |
2891 | 1. Asghar Ghori - Exercise 10-1: Configure Access to Pre-Built ISO Repositories
2892 |
2893 | * Access the repositories that are available on the RHEL 8 image. Create a definition file for the repositories and confirm:
2894 | ```shell
2895 | df -h # after mounting optical drive in VirtualBox
2896 | vi /etc/yum.repos.d/centos.repo # dnf only reads files ending in .repo
2897 | # contents of centos.repo
2898 | #####
2899 | #[BaseOS]
2900 | #name=BaseOS
2901 | #baseurl=file:///run/media/$name/BaseOS
2902 | #gpgcheck=0
2903 | #
2904 | #[AppStream]
2905 | #name=AppStream
2906 | #baseurl=file:///run/media/$name/AppStream
2907 | #gpgcheck=0
2908 | #####
2909 | dnf repolist # confirm new repos are added
2910 | ```
2911 |
2912 | 1. Asghar Ghori - Exercise 10-2: Manipulate Individual Packages
2913 |
2914 | * Determine if the *cifs-utils* package is installed and if it is available for installation. Display its information before installing it. Install the package and display its information again. Remove the package along with its dependencies and confirm the removal:
2915 | ```shell
2916 | dnf config-manager --disable AppStream
2917 | dnf config-manager --disable BaseOS
2918 | dnf list installed | grep cifs-utils # confirm not installed
2919 | dnf info cifs-utils # display information
2920 | dnf install cifs-utils -y
2921 | dnf info cifs-utils # Repository now says @System
2922 | dnf remove cifs-utils -y
2923 | ```
2924 |
2925 | 1. Asghar Ghori - Exercise 10-3: Manipulate Package Groups
2926 |
2927 | * Perform management operations on a package group called *system tools*. Determine if this group is already installed and if it is available for installation. List the packages it contains and install it. Remove the group along with its dependencies and confirm the removal:
2928 | ```shell
2929 | dnf group list # shows System Tools as an available group
2930 | dnf group info "System Tools"
2931 | dnf group install "System Tools" -y
2932 | dnf group list "System Tools" # shows installed
2933 | dnf group remove "System Tools" -y
2934 | ```
2935 |
2936 | 1. Asghar Ghori - Exercise 10-4: Manipulate Modules
2937 |
2938 | * Perform management operations on a module called *postgresql*. Determine if this module is already installed and if it is available for installation. Show its information and install the default profile for stream 10. Remove the module profile along with any dependencies and confirm its removal:
2939 | ```shell
2940 | dnf module list "postgresql" # no [i] tag shown so not installed
2941 | dnf module info postgresql:10 # note there are multiple streams
2942 | sudo dnf module install postgresql:10/default -y
2943 | dnf module list "postgresql" # [i] tag shown so it's installed
2944 | sudo dnf module remove postgresql:10 -y
2945 | ```
2946 |
2947 | 1. Asghar Ghori - Exercise 10-5: Install a Module from an Alternative Stream
2948 |
2949 | * Downgrade a module to a lower version. Remove the stream *perl* 5.26 and confirm its removal. Manually enable the stream *perl* 5.24 and confirm its new status. Install the new version of the module and display its information:
2950 | ```shell
2951 | dnf module list perl # 5.26 shown as installed
2952 | dnf module remove perl -y
2953 | dnf module reset perl # make no version enabled
2954 | dnf module install perl:5.24/minimal --allowerasing
2955 | dnf module list perl # confirm module installed
2956 | ```
2957 |
2958 | 1. Asghar Ghori - Exercise 11-1: Reset the root User Password
2959 |
2960 | * Terminate the boot process at an early stage to access a debug shell to reset the root password:
2961 | ```shell
2962 | # add rd.break after "rhgb quiet" to reboot into debug shell
2963 | mount -o remount,rw /sysroot
2964 | chroot /sysroot
2965 | passwd # change password
2966 | touch /.autorelabel
2967 | ```
2968 |
2969 | 1. Asghar Ghori - Exercise 11-2: Download and Install a New Kernel
2970 |
2971 | * Download the latest available kernel packages from the Red Hat Customer Portal and install them:
2972 | ```shell
2973 | uname -r # view kernel version
2974 | rpm -qa | grep "kernel"
2975 | # find versions on access.redhat website, download and move to /tmp
2976 | sudo dnf install /tmp/kernel* -y
2977 | ```
2978 |
2979 | 1. Asghar Ghori - Exercise 12-1: Manage Tuning Profiles
2980 |
2981 | * Install the *tuned* service, start it and enable it for auto-restart upon reboot. Display all available profiles and the current active profile. Switch to one of the available profiles and confirm. Determine the recommended profile for the system and switch to it. Deactivate tuning and reactivate it:
2982 | ```shell
2983 | sudo systemctl status tuned # already installed and enabled
2984 | sudo tuned-adm active # active profile is virtual-guest
2985 | sudo tuned-adm profile desktop # switch to desktop profile
2986 | sudo tuned-adm profile recommend # virtual-guest is recommended
2987 | sudo tuned-adm off # turn off profile
2988 | ```
2989 |
2990 | 1. Asghar Ghori - Exercise 13-1: Add Required Storage to server2
2991 |
2992 | * Add 4x250MB, 1x4GB, and 2x1GB disks:
2993 | ```shell
2994 | # in virtual box add a VDI disk to the SATA controller
2995 | lsblk # added disks shown as sdb, sdc, sdd
2996 | ```
2997 |
2998 | 1. Asghar Ghori - Exercise 13-2: Create an MBR Partition
2999 |
3000 | * Assign partition type "msdos" to `/dev/sdb` for using it as an MBR disk. Create and confirm a 100MB primary partition on the disk:
3001 | ```shell
3002 | parted /dev/sdb print # first line shows unrecognised disk label
3003 | parted /dev/sdb mklabel msdos
3004 | parted /dev/sdb mkpart primary 1m 101m
3005 | parted /dev/sdb print # confirm added partition
3006 | ```
3007 |
3008 | 1. Asghar Ghori - Exercise 13-3: Delete an MBR Partition
3009 |
3010 | * Delete the *sdb1* partition that was created in Exercise 13-2 above:
3011 | ```shell
3012 | parted /dev/sdb rm 1
3013 | parted /dev/sdb print # confirm deletion
3014 | ```
3015 |
3016 | 1. Asghar Ghori - Exercise 13-4: Create a GPT Partition
3017 |
3018 | * Assign partition type "gpt" to `/dev/sdc` for using it as a GPT disk. Create and confirm a 200MB partition on the disk:
3019 | ```shell
3020 | gdisk /dev/sdc
3021 | # enter n for new
3022 | # enter default partition number
3023 | # enter default first sector
3024 | # enter +200M for last sector
3025 | # enter default file system type
3026 | # enter default hex code
3027 | # enter w to write
3028 | lsblk # can see sdc1 partition with 200M
3029 | ```
3030 |
3031 | 1. Asghar Ghori - Exercise 13-5: Delete a GPT Partition
3032 |
3033 | * Delete the *sdc1* partition that was created in Exercise 13-4 above:
3034 | ```shell
3035 | gdisk /dev/sdc
3036 | # enter d for delete
3037 | # enter w to write
3038 | lsblk # can see no partitions under sdc
3039 | ```
3040 |
3041 | 1. Asghar Ghori - Exercise 13-6: Install Software and Activate VDO
3042 |
3043 | * Install the VDO software packages, start the VDO services, and mark it for autostart on subsequent reboots:
3044 | ```shell
3045 | dnf install vdo kmod-kvdo -y
3046 | systemctl start vdo.service && systemctl enable vdo.service
3047 | ```
3048 |
3049 | 1. Asghar Ghori - Exercise 13-7: Create a VDO Volume
3050 |
3051 | * Create a volume called *vdo-vol1* of logical size 16GB on the `/dev/sdc` disk (the actual size of `/dev/sdc` is 4GB). List the volume and display its status information. Show the activation status of the compression and de-duplication features:
3052 | ```shell
3053 | wipefs -a /dev/sdc # couldn't create without doing this first
3054 | vdo create --name vdo-vol1 --device /dev/sdc --vdoLogicalSize 16G --vdoSlabSize 128
3055 | # VDO instance 0 volume is ready at /dev/mapper/vdo-vol1
3056 | lsblk # confirm vdo-vol1 added below sdc
3057 | vdo list # returns vdo-vol1
3058 | vdo status --name vdo-vol1 # shows status
3059 | vdo status --name vdo-vol1 | grep -i "compression" # enabled
3060 | vdo status --name vdo-vol1 | grep -i "deduplication" # enabled
3061 | ```
3062 |
3063 | 1. Asghar Ghori - Exercise 13-8: Delete a VDO Volume
3064 |
3065 | * Delete the *vdo-vol1* volume that was created in Exercise 13-7 above and confirm the removal:
3066 | ```shell
3067 | vdo remove --name vdo-vol1
3068 | vdo list # confirm removal
3069 | ```
3070 |
3071 | 1. Asghar Ghori - Exercise 14-1: Create a Physical Volume and Volume Group
3072 |
3073 | * Initialise one partition *sdd1* (90MB) and one disk *sdb* (250MB) for use in LVM. Create a volume group called *vgbook* and add both physical volumes to it. Use the PE size of 16MB and list and display the volume group and the physical volumes:
3074 | ```shell
3075 | parted /dev/sdd mklabel msdos
3076 | parted /dev/sdd mkpart primary 1m 91m
3077 | parted /dev/sdd set 1 lvm on
3078 | pvcreate /dev/sdd1 /dev/sdb
3079 | vgcreate -vs 16 vgbook /dev/sdd1 /dev/sdb
3080 | vgs vgbook # list information about vgbook
3081 | vgdisplay -v vgbook # list detailed information about vgbook
3082 | pvs # list information about pvs
3083 | ```
3084 |
3085 | 1. Asghar Ghori - Exercise 14-2: Create Logical Volumes
3086 |
3087 | * Create two logical volumes, *lvol0* and *lvbook1*, in the *vgbook* volume group. Use 120MB for *lvol0* and 192MB for *lvbook1*. Display the details of the volume group and the logical volumes:
3088 | ```shell
3089 | lvcreate -vL 120M vgbook
3090 | lvcreate -vL 192M -n lvbook1 vgbook
3091 | lvs # display information
3092 | vgdisplay -v vgbook # display detailed information about volume group
3093 | ```
3094 |
3095 | 1. Asghar Ghori - Exercise 14-3: Extend a Volume Group and a Logical Volume
3096 |
3097 | * Add another partition *sdd2* of size 158MB to *vgbook* to increase the pool of allocatable space. Initialise the new partition prior to adding it to the volume group. Increase the size of *lvbook1* to 336MB. Display the basic information for the physical volumes, volume group, and logical volume:
3098 | ```shell
3099 | parted /dev/sdd mkpart primary 91m 249m
3100 | parted /dev/sdd set 2 lvm on
3101 | parted /dev/sdd print # confirm new partition added
3102 | vgextend vgbook /dev/sdd2
3103 | pvs # display information
3104 | vgs # display information
3105 | lvextend vgbook/lvbook1 -L +144M
3106 | lvs # display information
3107 | ```
3108 |
3109 | 1. Asghar Ghori - Exercise 14-4: Rename, Reduce, Extend, and Remove Logical Volumes
3110 |
3111 | * Rename *lvol0* to *lvbook2*. Decrease the size of *lvbook2* to 50MB using the *lvreduce* command and then add 32MB with the *lvresize* command. Remove both logical volumes. Display the summary for the volume groups, logical volumes, and physical volumes:
3112 | ```shell
3113 | lvrename vgbook/lvol0 vgbook/lvbook2
3114 | lvreduce vgbook/lvbook2 -L 50M
3115 | lvresize vgbook/lvbook2 -L +32M
3116 | lvremove vgbook/lvbook1
3117 | lvremove vgbook/lvbook2
3118 | pvs # display information
3119 | vgs # display information
3120 | lvs # display information
3121 | ```
3122 |
3123 | 1. Asghar Ghori - Exercise 14-5: Reduce and Remove a Volume Group
3124 |
3125 | * Reduce *vgbook* by removing the *sdd1* and *sdd2* physical volumes from it, then remove the volume group. Confirm the deletion of the volume group and the logical volumes at the end:
3126 | ```shell
3127 | vgreduce vgbook /dev/sdd1 /dev/sdd2
3128 | vgremove vgbook
3129 | vgs # confirm removals
3130 | pvs # can be used to show output of vgreduce
3131 | ```
3132 |
3143 | 1. Asghar Ghori - Exercise 14-6: Uninitialise Physical Volumes
3144 |
3145 | * Uninitialise all three physical volumes - *sdd1*, *sdd2*, and *sdb* - by deleting the LVM structural information from them. Use the *pvs* command for confirmation. Remove the partitions from the *sdd* disk and verify that all disks are now in their original raw state:
3146 | ```shell
3147 | pvremove /dev/sdd1 /dev/sdd2 /dev/sdb
3148 | pvs
3149 | parted /dev/sdd
3150 | # enter print to view partitions
3151 | # enter rm 1
3152 | # enter rm 2
3153 | ```
3154 |
3155 | 1. Asghar Ghori - Exercise 14-7: Install Software and Activate Stratis
3156 |
3157 | * Install the Stratis software packages, start the Stratis service, and mark it for autostart on subsequent system reboots:
3158 | ```shell
3159 | dnf install stratis-cli -y
3160 | systemctl start stratisd.service && systemctl enable stratisd.service
3161 | ```
3162 |
3163 | 1. Asghar Ghori - Exercise 14-8: Create and Confirm a Pool and File System
3164 |
3165 | * Create a Stratis pool and a file system in it. Display information about the pool, file system, and device used:
3166 | ```shell
3167 | stratis pool create mypool /dev/sdd
3168 | stratis pool list # confirm stratis pool created
3169 | stratis filesystem create mypool myfs
3170 | stratis filesystem list # confirm filesystem created, get device path
3171 | mkdir /myfs1
3172 | mount /dev/stratis/mypool/myfs /myfs1
3173 | ```
3174 |
3175 | 1. Asghar Ghori - Exercise 14-9: Expand and Rename a Pool and File System
3176 |
3177 | * Expand the Stratis pool *mypool* by adding a second, unused disk (*sde* assumed here). Rename the pool:
3178 | ```shell
3179 | stratis pool add-data mypool /dev/sde
3180 | stratis pool rename mypool mynewpool
3181 | stratis pool list # confirm changes
3182 | ```
3183 |
3184 | 1. Asghar Ghori - Exercise 14-10: Destroy a File System and Pool
3185 |
3186 | * Destroy the Stratis file system and the pool that was created, expanded, and renamed in the above exercises. Verify the deletion with appropriate commands:
3187 | ```shell
3188 | umount /myfs1
3189 | stratis filesystem destroy mynewpool myfs
3190 | stratis filesystem list # confirm deletion
3191 | stratis pool destroy mynewpool
3192 | stratis pool list # confirm deletion
3193 | ```
3194 |
3195 | 1. Asghar Ghori - Exercise 15-1: Create and Mount Ext4, VFAT, and XFS File Systems in Partitions
3196 |
3197 | * Create 2x100MB partitions on the `/dev/sdb` disk, initialise them separately with the Ext4 and VFAT file system types, define them for persistence using their UUIDs, create mount points called `/ext4fs` and `/vfatfs1`, attach them to the directory structure, and verify their availability and usage. Use the disk `/dev/sdc` and repeat the above procedure to establish an XFS file system in it and mount it on `/xfsfs1`:
3198 | ```shell
3199 | parted /dev/sdb
3200 | # enter mklabel
3201 | # enter msdos
3202 | # enter mkpart
3203 | # enter primary
3204 | # enter ext4
3205 | # enter start as 0
3206 | # enter end as 100MB
3207 | # enter print to verify
3208 | parted /dev/sdb mkpart primary 101MB 201MB
3209 | # note: the file system type given to mkpart is only a hint; the actual file system is created later with mkfs
3210 | lsblk # verify partitions
3211 | mkfs.ext4 /dev/sdb1
3212 | mkfs.vfat /dev/sdb2
3213 | parted /dev/sdc
3214 | # enter mklabel
3215 | # enter msdos
3216 | # enter mkpart
3217 | # enter primary
3218 | # enter xfs
3219 | # enter start as 0
3220 | # enter end as 100MB
3221 | mkfs.xfs /dev/sdc1
3222 | mkdir /ext4fs /vfatfs1 /xfsfs1
3223 | lsblk -f # get UUID for each file system
3224 | vi /etc/fstab
3225 | # add entries using UUIDs with defaults and file system name
3226 | df -hT # view file systems and mount points
3227 | ```
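
* The exercise above defines the mounts by UUID, so it helps to see the six-field `/etc/fstab` format spelled out. A minimal sketch that assembles one entry (the UUID below is made up for illustration; use the value reported by `lsblk -f`):

```shell
# Build a persistent mount entry: device, mount point, type, options, dump, fsck order
uuid="f1e2d3c4-1111-2222-3333-444455556666"  # hypothetical UUID
entry="UUID=${uuid} /ext4fs ext4 defaults 0 0"
echo "$entry"
```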
3228 |
3229 | 1. Asghar Ghori - Exercise 15-2: Create and Mount XFS File System in VDO Volume
3230 |
3231 | * Create a VDO volume called *vdo1* of logical size 16GB on the *sdc* disk (actual size 4GB). Initialise the volume with the XFS file system type, define it for persistence using its device files, create a mount point called `/xfsvdo1`, attach it to the directory structure, and verify its availability and usage:
3232 | ```shell
3233 | wipefs -a /dev/sdc
3234 | vdo create --device /dev/sdc --vdoLogicalSize 16G --name vdo1 --vdoSlabSize 128M
3235 | vdo list # list the vdo
3236 | lsblk /dev/sdc # show information about disk
3237 | mkdir /xfsvdo1
3238 | vdo status # get vdo path
3239 | mkfs.xfs /dev/mapper/vdo1
3240 | vi /etc/fstab
3241 | # copy example from man vdo create
3242 | mount -a
3243 | df -hT # view file systems and mount points
3244 | ```
3245 |
3246 | 1. Asghar Ghori - Exercise 15-3: Create and Mount Ext4 and XFS File Systems in LVM Logical Volumes
3247 |
3248 | * Create a volume group called *vgfs* comprised of a 160MB physical volume created in a partition on the `/dev/sdd` disk. The PE size for the volume group should be set at 16MB. Create 2 logical volumes called *ext4vol* and *xfsvol* of sizes 80MB each and initialise them with the Ext4 and XFS file system types. Ensure that both file systems are persistently defined using their logical volume device filenames. Create mount points */ext4fs2* and */xfsfs2*, mount the file systems, and verify their availability and usage:
3249 | ```shell
3250 | vgcreate vgfs /dev/sdd --physicalextentsize 16M
3251 | lvcreate vgfs --name ext4vol -L 80M
3252 | lvcreate vgfs --name xfsvol -L 80M
3253 | mkfs.ext4 /dev/vgfs/ext4vol
3254 | mkfs.xfs /dev/vgfs/xfsvol
3255 | blkid # copy UUID for /dev/mapper/vgfs-ext4vol and /dev/mapper/vgfs-xfsvol
3256 | vi /etc/fstab
3257 | # add lines with copied UUID
3258 | mount -a
3259 | df -hT # confirm added
3260 | ```
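
* LVM allocates space in whole physical extents, so every size request is rounded up to an extent boundary. A quick arithmetic sketch of that rounding with the 16MB PE size used above:

```shell
# Ceiling division: how many 16MB extents an 80MB request needs
pe=16
request=80
extents=$(( (request + pe - 1) / pe ))
allocated=$(( extents * pe ))
echo "$extents extents, ${allocated}MB allocated"  # 80MB fits exactly in 5 extents
```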
3261 |
3262 | 1. Asghar Ghori - Exercise 15-4: Resize Ext4 and XFS File Systems in LVM Logical Volumes
3263 |
3264 | * Grow the size of the *vgfs* volume group that was created above by adding the whole *sdc* disk to it. Extend the *ext4vol* logical volume along with the file system it contains by 40MB using 2 separate commands. Extend the *xfsvol* logical volume along with the file system it contains by 40MB using a single command:
3265 | ```shell
3266 | vdo remove --name vdo1 # free up the sdc disk for reuse
3267 | vgextend vgfs /dev/sdc
3268 | lvextend -L +40 /dev/vgfs/ext4vol # rounded up to 48MB (3 x 16MB extents)
3269 | fsadm resize /dev/vgfs/ext4vol # second command grows the Ext4 file system
3270 | lvresize -r -L +40 /dev/vgfs/xfsvol # -r resizes the file system in the same command
3271 | lvs # confirm resizing
3274 | ```
3275 |
3276 | 1. Asghar Ghori - Exercise 15-5: Create, Mount, and Expand XFS File System in Stratis Volume
3277 |
* Create a Stratis pool called *strpool* and a file system *strfs2* by reusing the 1GB *sdc* disk. Display information about the pool, file system, and device used. Expand the pool to include another 1GB disk *sdd* and confirm:
3279 | ```shell
3280 | stratis pool create strpool /dev/sdc
3281 | stratis filesystem create strpool strfs2
3282 | stratis pool list # view created stratis pool
3283 | stratis filesystem list # view created filesystem
3284 | stratis pool add-data strpool /dev/sdd
3285 | stratis blockdev list strpool # list block devices in pool
3286 | mkdir /strfs2
3287 | lsblk /dev/stratis/strpool/strfs2 -o UUID
3288 | vi /etc/fstab
3289 | # add line
3290 | # UUID=2913810d-baed-4544-aced-a6a2c21191fe /strfs2 xfs defaults,x-systemd.requires=stratisd.service 0 0
3291 | mount -a # mount the new file system
3292 | ```
3292 |
3293 |
3294 | 1. Asghar Ghori - Exercise 15-6: Create and Activate Swap in Partition and Logical Volume
3295 |
3296 | * Create 1 swap area in a new 40MB partition called *sdc3* using the *mkswap* command. Create another swap area in a 140MB logical volume called *swapvol* in *vgfs*. Add their entries to the `/etc/fstab` file for persistence. Use the UUID and priority 1 for the partition swap and the device file and priority 2 for the logical volume swap. Activate them and use appropriate tools to validate the activation:
3297 | ```shell
3298 | parted /dev/sdc
3299 | # enter mklabel msdos
3300 | # enter mkpart primary 0 40
3301 | parted /dev/sdd
3302 | # enter mklabel msdos
3303 | # enter mkpart primary 0 140
3304 | mkswap -L sdc3 /dev/sdc1
3305 | vgcreate vgfs /dev/sdd1
3306 | lvcreate vgfs --name swapvol -L 140
3307 | mkswap /dev/vgfs/swapvol
3308 | lsblk -f # get UUID of the partition swap
3309 | vi /etc/fstab
3310 | # add 2 lines, e.g. for the partition swap
3311 | # UUID=WzDb5Y-QMtj-fYeo-iW0f-sj8I-ShRu-EWRIcp swap swap pri=1 0 0
3312 | swapon -a
3313 | swapon --show # validate activation
3314 | ```
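
* The two fstab lines the exercise asks for follow the same six-field layout; a sketch of what they might look like (the UUID shown is the one from the block above, reported on that system by `lsblk -f`):

```text
UUID=WzDb5Y-QMtj-fYeo-iW0f-sj8I-ShRu-EWRIcp  swap  swap  pri=1  0 0
/dev/vgfs/swapvol                            swap  swap  pri=2  0 0
```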
3315 |
3316 | 1. Asghar Ghori - Exercise 16-1: Export Share on NFS Server
3317 |
3318 | * Create a directory called `/common` and export it to *server1* in read/write mode. Ensure that NFS traffic is allowed through the firewall. Confirm the export:
3319 | ```shell
3320 | dnf install nfs-utils -y
3321 | mkdir /common
3322 | firewall-cmd --permanent --add-service=nfs
3323 | firewall-cmd --reload
3324 | systemctl start nfs-server.service && systemctl enable nfs-server.service
3325 | echo "/common *(rw)" >> /etc/exports
3326 | exportfs -av
3327 | ```
3328 |
3329 | 1. Asghar Ghori - Exercise 16-2: Mount Share on NFS Client
3330 |
3331 | * Mount the `/common` share exported above. Create a mount point called `/local`, mount the remote share manually, and confirm the mount. Add the remote share to the file system table for persistence. Remount the share and confirm the mount. Create a test file in the mount point and confirm the file creation on the NFS server:
3332 | ```shell
3333 | dnf install nfs-utils -y
3334 | mkdir /local
3335 | chmod 755 /local
3336 | mount 10.0.2.15:/common /local
3337 | vi /etc/fstab
3338 | # add line
3339 | # 10.0.2.15:/common /local nfs _netdev 0 0
3340 | mount -a
3341 | touch /local/test # confirm that it appears on server in common
3342 | ```
3343 |
3344 | 1. Asghar Ghori - Exercise 16-3: Access NFS Share Using Direct Map
3345 |
3346 | * Configure a direct map to automount the NFS share `/common` that is available from *server2*. Install the relevant software, create a local mount point `/autodir`, and set up AutoFS maps to support the automatic mounting. Note that `/common` is already mounted on the `/local` mount point on *server1* via *fstab*. Ensure there is no conflict in configuration or functionality between the 2:
3347 | ```shell
3348 | dnf install autofs -y
3349 | mkdir /autodir
3350 | vi /etc/auto.master
3351 | # add line
3352 | #/- /etc/auto.master.d/auto.dir
3353 | vi /etc/auto.master.d/auto.dir
3354 | # add line
3355 | #/autodir 172.25.1.4:/common
3356 | systemctl restart autofs
3357 | ```
3358 |
3359 | 1. Asghar Ghori - Exercise 16-4: Access NFS Share Using Indirect Map
3360 |
3361 | * Configure an indirect map to automount the NFS share `/common` that is available from *server2*. Install the relevant software and set up AutoFS maps to support the automatic mounting. Observe that the specified mount point "autoindir" is created automatically under `/misc`. Note that `/common` is already mounted on the `/local` mount point on *server1* via *fstab*. Ensure there is no conflict in configuration or functionality between the 2:
3362 | ```shell
3363 | dnf install autofs -y
3364 | grep /misc /etc/auto.master # confirm entry is there
3365 | vi /etc/auto.misc
3366 | # add line
3367 | #autoindir 172.25.1.4:/common
3368 | systemctl restart autofs
3369 | ```
3370 |
3371 | 1. Asghar Ghori - Exercise 16-5: Automount User Home Directories Using Indirect Map
3372 |
3373 | * On *server1* (NFS server), create a user account called *user30* with UID 3000. Add the `/home` directory to the list of NFS shares so that it becomes available for remote mount. On *server2* (NFS client), create a user account called *user30* with UID 3000, base directory `/nfshome`, and no user home directory. Create an umbrella mount point called `/nfshome` for mounting the user home directory from the NFS server. Install the relevent software and establish an indirect map to automount the remote home directory of *user30* under `/nfshome`. Observe that the home directory of *user30* is automounted under `/nfshome` when you sign in as *user30*:
3374 | ```shell
3375 | # on server 1 (NFS server)
3376 | useradd -u 3000 user30
3377 | echo password1 | passwd --stdin user30
3378 | vi /etc/exports
3379 | # add line
3380 | #/home *(rw)
3381 | exportfs -avr
3382 |
3383 | # on server 2 (NFS client)
3384 | dnf install autofs -y
3385 | useradd user30 -u 3000 -Mb /nfshome
3386 | echo password1 | passwd --stdin user30
3387 | mkdir /nfshome
3388 | vi /etc/auto.master
3389 | # add line
3390 | #/nfshome /etc/auto.master.d/auto.home
3391 | vi /etc/auto.master.d/auto.home
3392 | # add line
3393 | #* -rw server1:/home/&
3394 | systemctl enable autofs.service && systemctl start autofs.service
3395 | sudo su - user30
3396 | # confirm home directory is mounted
3397 | ```
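
* In a wildcard map line, AutoFS replaces the `*` with the directory key being accessed and each `&` with that same key. A small sketch that simulates the substitution for the key `user30` (string handling only; the `server1` hostname is illustrative):

```shell
# Simulate AutoFS wildcard substitution for a lookup key
key="user30"
entry='* -rw server1:/home/&'
resolved=$(echo "${entry//&/$key}" | sed "s/^\*/$key/")
echo "$resolved"  # user30 -rw server1:/home/user30
```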
3398 |
3399 | 1. Asghar Ghori - Exercise 17.1: Change System Hostname
3400 |
3401 | * Change the hostnames of *server1* to *server10.example.com* and *server2* to *server20.example.com* by editing a file and restarting the corresponding service daemon and using a command respectively:
3402 | ```shell
3403 | # on server 1
3404 | vi /etc/hostname
3405 | # change line to server10.example.com
3406 | systemctl restart systemd-hostnamed
3407 |
3408 | # on server 2
3409 | hostnamectl set-hostname server20.example.com
3410 | ```
3411 |
3412 | 1. Asghar Ghori - Exercise 17.2: Add Network Devices to server10 and server20
3413 |
3414 | * Add one network interface to *server10* and one to *server20* using VirtualBox:
3415 | ```shell
3416 | # A NAT Network has already been created and attached to both servers in VirtualBox to allow them to have separate IP addresses (note that the MAC addresses had to be changed)
3417 | # Add a second Internal Network adapter named intnet to each server
3418 | nmcli conn show # observe enp0s8 added as a connection
3419 | ```
3420 |
3421 | 1. Asghar Ghori - Exercise 17.3: Configure New Network Connection Manually
3422 |
* Create a connection profile for the new network interface on *server10* using a text editing tool. Assign the IP 172.10.10.110/24 with gateway 172.10.10.1 and set it to autoactivate at system reboot. Deactivate and reactivate this interface at the command prompt:
3424 | ```shell
3425 | vi /etc/sysconfig/network-scripts/ifcfg-enp0s8
3426 | # add contents of file
3427 | #TYPE=Ethernet
3428 | #BOOTPROTO=static
3429 | #IPV4_FAILURE_FATAL=no
3430 | #IPV6INIT=no
3431 | #NAME=enp0s8
3432 | #DEVICE=enp0s8
3433 | #ONBOOT=yes
3434 | #IPADDR=172.10.10.110
3435 | #PREFIX=24
3436 | #GATEWAY=172.10.10.1
3437 | ifdown enp0s8
3438 | ifup enp0s8
3439 | ip a # verify activation
3440 | ```
3441 |
3442 | 1. Asghar Ghori - Exercise 17.4: Configure New Network Connection Using nmcli
3443 |
3444 | * Create a connection profile using the *nmcli* command for the new network interface enp0s8 that was added to *server20*. Assign the IP 172.10.10.120/24 with gateway 172.10.10.1, and set it to autoactivate at system reboot. Deactivate and reactivate this interface at the command prompt:
3445 | ```shell
3446 | nmcli dev status # show devices with enp0s8 disconnected
3447 | nmcli con add type Ethernet ifname enp0s8 con-name enp0s8 ip4 172.10.10.120/24 gw4 172.10.10.1
3448 | nmcli conn show # verify connection added
3449 | nmcli con down enp0s8
3450 | nmcli con up enp0s8
3451 | ip a # confirm ip address is as specified
3452 | ```
3453 |
3454 | 1. Asghar Ghori - Exercise 17.5: Update Hosts Table and Test Connectivity
3455 |
3456 | * Update the `/etc/hosts` file on both *server10* and *server20*. Add the IP addresses assigned to both connections and map them to hostnames *server10*, *server10s8*, *server20*, and *server20s8* appropriately. Test connectivity from *server10* to *server20* to and from *server10s8* to *server20s8* using their IP addresses and then their hostnames:
3457 | ```shell
3458 | ## on server20
3459 | vi /etc/hosts
3460 | # add lines
3461 | #192.168.0.120 server20.example.com server20
3462 | #172.10.10.120 server20s8.example.com server20s8
3463 | #192.168.0.110 server10.example.com server10
3464 | #172.10.10.110 server10s8.example.com server10s8
3465 |
3466 | ## on server10
3467 | vi /etc/hosts
3468 | # add lines
3469 | #192.168.0.120 server20.example.com server20
3470 | #172.10.10.120 server20s8.example.com server20s8
3471 | #192.168.0.110 server10.example.com server10
3472 | #172.10.10.110 server10s8.example.com server10s8
3473 | ping server10 # confirm host name resolves
3474 | ```
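
* Hosts-file resolution is a first-match scan over whitespace-separated fields. The sketch below mimics it against sample lines (IPs taken from the exercises above) rather than the live `/etc/hosts`, matching any of the name fields on a line:

```shell
# Look up a hostname in /etc/hosts-style data (sample lines, not the live file)
lookup() {
  printf '%s\n' \
    '172.10.10.110 server10s8.example.com server10s8' \
    '172.10.10.120 server20s8.example.com server20s8' |
  awk -v host="$1" '{ for (i = 2; i <= NF; i++) if ($i == host) { print $1; exit } }'
}
lookup server20s8  # prints 172.10.10.120
```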
3475 |
3476 | 1. Asghar Ghori - Exercise 18.1: Configure NTP Client
3477 |
3478 | * Install the Chrony software package and activate the service without making any changes to the default configuration. Validate the binding and operation:
3479 | ```shell
3480 | dnf install chrony -y
3481 | vi /etc/chrony.conf # view default configuration
3482 | systemctl start chronyd.service && systemctl enable chronyd.service
3483 | chronyc sources # view time sources
3484 | chronyc tracking # view clock performance
3485 | ```
3486 |
3487 | 1. Asghar Ghori - Exercise 19.1: Access RHEL System from Another RHEL System
3488 |
3489 | * Issue the *ssh* command as *user1* on *server10* to log in to *server20*. Run appropriate commands on *server20* for validation. Log off and return to the originating system:
3490 | ```shell
3491 | # on server 10
3492 | ssh user1@server20
3493 | whoami
3494 | pwd
3495 | hostname # check some basic information
3496 | # ctrl + D to logout
3497 | ```
3498 |
3499 | 1. Asghar Ghori - Exercise 19.2: Access RHEL System from Windows
3500 |
3501 | * Use a program called PuTTY to access *server20* using its IP address and as *user1*. Run appropriate commands on *server20* for validation. Log off to terminate the session:
3502 | ```shell
3503 | # as above but using the server20 IP address in PuTTY
3504 | ```
3505 |
3506 | 1. Asghar Ghori - Exercise 19.3: Generate, Distribute, and Use SSH Keys
3507 |
3508 | * Generate a password-less ssh key pair using RSA for *user1* on *server10*. Display the private and public file contents. Distribute the public key to *server20* and attempt to log on to *server20* from *server10*. Show the log file message for the login attempt:
3509 | ```shell
3510 | # on server10
3511 | ssh-keygen
3512 | # press enter to select default file names and no password
3513 | ssh-copy-id server20
3514 | ssh server20 # confirm you can login
3515 |
3516 | # on server20
3517 | vi /var/log/secure # view login event
3518 | ```
3519 |
3520 | 1. Asghar Ghori - Exercise 20.1: Add Services and Ports, and Manage Zones
3521 |
3522 | * Determine the current active zone. Add and activate a permanent rule to allow HTTP traffic on port 80, and then add a runtime rule for traffic intended for TCP port 443. Add a permanent rule to the *internal* zone for TCP port range 5901 to 5910. Confirm the changes and display the contents of the affected zone files. Switch the default zone to the *internal* zone and activate it:
3523 | ```shell
3524 | # on server10
3525 | firewall-cmd --get-active-zones # returns public with enp0s8 interface
3526 | firewall-cmd --add-service=http --permanent
3527 | firewall-cmd --add-service=https
3528 | firewall-cmd --add-port=80/tcp --permanent
3529 | firewall-cmd --add-port=443/tcp
3530 | firewall-cmd --zone=internal --add-port=5901-5910/tcp --permanent
3531 | firewall-cmd --reload # note: reload drops the runtime-only https/443 rules; re-add them if still needed
3532 | firewall-cmd --list-services # confirm result
3533 | firewall-cmd --list-ports # confirm result
3534 | vi /etc/firewalld/zones/public.xml # view configuration
3535 | vi /etc/firewalld/zones/internal.xml # view configuration
3536 | firewall-cmd --set-default-zone=internal
3537 | firewall-cmd --reload
3538 | firewall-cmd --get-active-zones # returns internal with enp0s8 interface
3539 | ```
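
* The `5901-5910/tcp` argument above is just `range/protocol`; pulling it apart with parameter expansion shows what firewalld is being asked to open:

```shell
# Split a firewalld-style port range into its parts
range="5901-5910/tcp"
ports="${range%/*}"    # 5901-5910
proto="${range#*/}"    # tcp
start="${ports%-*}"    # 5901
end="${ports#*-}"      # 5910
echo "$proto ports $start-$end ($(( end - start + 1 )) ports)"
```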
3540 |
3541 | 1. Asghar Ghori - Exercise 20.2: Remove Services and Ports, and Manage Zones
3542 |
3543 | * Remove the 2 permanent rules added above. Switch back to the *public* zone as the default zone, and confirm the changes:
3544 | ```shell
3545 | firewall-cmd --set-default-zone=public
3546 | firewall-cmd --remove-service=http --permanent
3547 | firewall-cmd --remove-port=80/tcp --permanent
3548 | firewall-cmd --reload
3549 | firewall-cmd --list-services # confirm result
3550 | firewall-cmd --list-ports # confirm result
3551 | ```
3552 |
3553 | 1. Asghar Ghori - Exercise 20.3: Test the Effect of Firewall Rule
3554 |
3555 | * Remove the *sshd* service rule from the runtime configuration on *server10*, and try to access the server from *server20* using the *ssh* command:
3556 | ```shell
3557 | # on server10
3558 | firewall-cmd --remove-service=ssh # runtime removal, as the exercise requires
3559 | 
3560 | # on server20
3561 | ssh user1@server10
3562 | # no route to host message displayed
3563 | 
3564 | # on server10
3565 | firewall-cmd --reload # reloading restores the permanent configuration, which still allows ssh
3568 |
3569 | # on server20
3570 | ssh user1@server10
3571 | # success
3572 | ```
3573 |
3574 | 1. Asghar Ghori - Exercise 21.1: Modify SELinux File Context
3575 |
3576 | * Create a directory *sedir1* under `/tmp` and a file *sefile1* under *sedir1*. Check the context on the directory and file. Change the SELinux user and type to user_u and public_content_t on both and verify:
3577 | ```shell
3578 | mkdir /tmp/sedir1
3579 | touch /tmp/sedir1/sefile1
3580 | cd /tmp
3581 | ll -Z sedir1 # unconfined_u:object_r:user_tmp_t:s0 shown
3582 | chcon -u user_u -R sedir1
3583 | chcon -t public_content_t -R sedir1 # then verify with ll -dZ sedir1
3584 | ```
3585 |
3586 | 1. Asghar Ghori - Exercise 21.2: Add and Apply File Context
3587 |
3588 | * Add the current context on *sedir1* to the SELinux policy database to ensure a relabeling will not reset it to its previous value. Next, you will change the context on the directory to some random values. Restore the default context from the policy database back to the directory recursively:
3589 | ```shell
3590 | semanage fcontext -a -t public_content_t -s user_u '/tmp/sedir1(/.*)?'
3591 | cat /etc/selinux/targeted/contexts/files/file_contexts.local # view recently added policies
3592 | restorecon -Rv sedir1 # any chcon changes are reverted with this
3593 | ```
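
* The pattern `'/tmp/sedir1(/.*)?'` in the semanage command is a regular expression: it matches the directory itself and, optionally, anything beneath it. Testing the same pattern with `grep -E` makes the coverage visible:

```shell
# The trailing (/.*)? makes the sub-path optional, so the directory itself matches too
pattern='^/tmp/sedir1(/.*)?$'
for p in /tmp/sedir1 /tmp/sedir1/sefile1 /tmp/sedir10; do
  if echo "$p" | grep -Eq "$pattern"; then
    echo "$p matches"
  else
    echo "$p does not match"
  fi
done
```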
3594 |
3595 | 1. Asghar Ghori - Exercise 21.3: Add and Delete Network Ports
3596 |
3597 | * Add a non-standard port 8010 to the SELinux policy database for the *httpd* service and confirm the addition. Remove the port from the policy and verify the deletion:
3598 | ```shell
3599 | semanage port -a -t http_port_t -p tcp 8010
3600 | semanage port -l | grep http # list all port settings
3601 | semanage port -d -t http_port_t -p tcp 8010
3602 | semanage port -l | grep http
3603 | ```
3604 |
3605 | 1. Asghar Ghori - Exercise 21.4: Copy Files with and without Context
3606 |
3607 | * Create a file called *sefile2* under `/tmp` and display its context. Copy this file to the `/etc/default` directory, and observe the change in the context. Remove *sefile2* from `/etc/default`, and copy it again to the same destination, ensuring that the target file receives the source file's context:
3608 | ```shell
3609 | cd /tmp
3610 | touch sefile2
3611 | ll -Zrt # sefile2 context is unconfined_u:object_r:user_tmp_t:s0
3612 | cp sefile2 /etc/default
3613 | cd /etc/default
3614 | ll -Zrt # sefile2 context is unconfined_u:object_r:etc_t:s0
3615 | rm /etc/default/sefile2
3616 | cp /tmp/sefile2 /etc/default/sefile2 --preserve=context
3617 | ll -Zrt # sefile2 context is unconfined_u:object_r:user_tmp_t:s0
3618 | ```
3619 |
3620 | 1. Asghar Ghori - Exercise 21.5: View and Toggle SELinux Boolean Values
3621 |
3622 | * Display the current state of the Boolean nfs_export_all_rw. Toggle its value temporarily, and reboot the system. Flip its value persistently after the system has been back up:
3623 | ```shell
3624 | getsebool nfs_export_all_rw # nfs_export_all_rw --> on
3625 | sestatus -b | grep nfs_export_all_rw # also works
3626 | setsebool nfs_export_all_rw off
3627 | reboot
3628 | setsebool -P nfs_export_all_rw off
3629 | ```
3630 |
3631 | 1. Prince Bajaj - Managing Containers
3632 |
3633 | * Download the Apache web server container image (httpd 2.4) and inspect the container image. Check the exposed ports in the container image configuration:
3634 | ```shell
3635 | # as root
3636 | usermod user1 -aG wheel
3637 | cat /etc/group | grep wheel # confirm
3638 |
3639 | # as user1
3640 | podman search httpd # get connection refused
3641 | # this was because the VM was set up with an Internal Network rather than a NAT Network, so it couldn't access the internet
3642 | # see result registry.access.redhat.com/rhscl/httpd-24-rhel7
3643 | skopeo inspect --creds name:password docker://registry.access.redhat.com/rhscl/httpd-24-rhel7
3644 | podman pull registry.access.redhat.com/rhscl/httpd-24-rhel7
3645 | podman inspect registry.access.redhat.com/rhscl/httpd-24-rhel7
3646 | # exposed ports shown as 8080 and 8443
3647 | ```
3648 |
3649 | * Run the httpd container in the background. Assign the name *myweb* to the container, verify that the container is running, stop the container and verify that it has stopped, and delete the container and the container image:
3650 | ```shell
3651 | podman run --name myweb -d registry.access.redhat.com/rhscl/httpd-24-rhel7
3652 | podman ps # view running containers
3653 | podman stop myweb
3654 | podman ps # view running containers
3655 | podman rm myweb
3656 | podman rmi registry.access.redhat.com/rhscl/httpd-24-rhel7
3657 | ```
3658 |
3659 | * Pull the Apache web server container image (httpd 2.4) and run the container with the name *webserver*. Configure *webserver* to display content "Welcome to container-based web server". Use port 3333 on the host machine to receive http requests. Start a bash shell in the container to verify the configuration:
3660 | ```shell
3661 | # as root
3662 | dnf install httpd -y
3663 | vi /var/www/html/index.html
3664 | # add row "Welcome to container-based web server"
3665 |
3666 | # as user1
3667 | podman search httpd
3668 | podman pull registry.access.redhat.com/rhscl/httpd-24-rhel7
3669 | podman inspect registry.access.redhat.com/rhscl/httpd-24-rhel7 # shows 8080 in exposedPorts, and /opt/rh/httpd24/root/var/www is shown as HTTPD_DATA_ORIG_PATH
3670 | podman run -d=true -p 3333:8080 --name=webserver -v /var/www/html:/opt/rh/httpd24/root/var/www/html registry.access.redhat.com/rhscl/httpd-24-rhel7
3671 | curl http://localhost:3333 # success!
3672 |
3673 | # to go into the container and (for e.g.) check the SELinux context
3674 | podman exec -it webserver /bin/bash
3675 | cd /opt/rh/httpd24/root/var/www/html
3676 | ls -ldZ
3677 |
3678 | # you can also just go to /var/www/html/index.html in the container and change it there
3679 | ```
3680 |
* Configure the system to start the *webserver* container at boot as a systemd service. Start/enable the systemd service to make sure the container will start at boot, and reboot the system to verify that the container is running as expected:
3682 | ```shell
3683 | # as root
3684 | podman pull registry.access.redhat.com/rhscl/httpd-24-rhel7
3685 | vi /var/www/html/index
3686 | # add row "Welcome to container-based web server"
3687 | podman run -d=true -p 3333:8080/tcp --name=webserver -v /var/www/html:/opt/rh/httpd24/root/var/www/html registry.access.redhat.com/rhscl/httpd-24-rhel7
3688 | cd /etc/systemd/system
3689 | podman generate systemd webserver >> httpd-container.service
3690 | systemctl daemon-reload
3691 | systemctl enable httpd-container.service --now
3692 | reboot
3693 | systemctl status httpd-container.service
3694 | curl http://localhost:3333 # success
3695 |
3696 | # this can also be done as a non-root user
3697 | podman pull registry.access.redhat.com/rhscl/httpd-24-rhel7
3698 | sudo vi /var/www/html/index.html
3699 | # add row "Welcome to container-based web server"
3700 | sudo setsebool -P container_manage_cgroup true
3701 | podman run -d=true -p 3333:8080/tcp --name=webserver -v /var/www/html:/opt/rh/httpd24/root/var/www/html registry.access.redhat.com/rhscl/httpd-24-rhel7
3702 | podman generate systemd webserver > /home/jr/.config/systemd/user/httpd-container.service
3703 | cd /home/jr/.config/systemd/user
3704 | sudo semanage fcontext -a -t systemd_unit_file_t httpd-container.service
3705 | sudo restorecon httpd-container.service
3706 | systemctl enable --user httpd-container.service --now
3707 | ```
3708 |
3709 | * Pull the *mariadb* image to your system and run it publishing the exposed port. Set the root password for the mariadb service as *mysql*. Verify if you can login as root from local host:
3710 | ```shell
3711 | # as user1
3712 | sudo dnf install mysql -y
3713 | podman search mariadb
3714 | podman pull docker.io/library/mariadb
3715 | podman inspect docker.io/library/mariadb # ExposedPorts 3306
3716 | podman run --name mariadb -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysql docker.io/library/mariadb
3717 | podman inspect mariadb # IPAddress is 10.88.0.22
3718 | mysql -h 10.88.0.22 -u root -p
3719 | ```
3720 |
3721 | 1. Linux Hint - Bash Script Examples
3722 |
3723 | * Create a hello world script:
3724 | ```shell
3725 | #!/bin/bash
3726 | echo "Hello World!"
3727 | exit
3728 | ```
3729 |
3730 | * Create a script that uses a while loop to count to 5:
3731 | ```shell
3732 | #!/bin/bash
3733 | count=1
3734 | while [ $count -le 5 ]
3735 | do
3736 | echo "$count"
3737 | count=$(($count + 1))
3738 | done
3739 | exit
3740 | ```
3741 |
* Note the formatting requirements. For example, there can be no spaces around the equals sign in an assignment, there must be spaces separating the condition from the "[" and "]" brackets, and there must be 2 sets of round brackets when incrementing the variable.
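
* The rules above can be seen together in one short script:

```shell
#!/bin/bash
count=1                    # assignment: no spaces around "="
if [ "$count" -eq 1 ]      # test: spaces required inside the brackets
then
  count=$((count + 1))     # arithmetic: two sets of round brackets
fi
echo "$count"              # prints 2
```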
3743 |
3744 | * Create a script that uses a for loop to count to 5:
3745 | ```shell
3746 | #!/bin/bash
3747 | count=5
3748 | for ((i=1; i<=$count; i++))
3749 | do
3750 | echo "$i"
3751 | done
3752 | exit
3753 | ```
3754 |
3755 | * Create a script that uses a for loop to count to 5 printing whether the number is even or odd:
3756 | ```shell
3757 | #!/bin/bash
3758 | count=5
3759 | for ((i=1; i<=$count; i++))
3760 | do
3761 | if [ $(($i%2)) -eq 0 ]
3762 | then
3763 | echo "$i is even"
3764 | else
3765 | echo "$i is odd"
3766 | fi
3767 | done
3768 | exit
3769 | ```
3770 |
3771 | * Create a script that uses a for loop to count to a user defined number printing whether the number is even or odd:
3772 | ```shell
3773 | #!/bin/bash
3774 | echo "Enter a number: "
3775 | read count
3776 | for ((i=1; i<=$count; i++))
3777 | do
3778 | if [ $(($i%2)) -eq 0 ]
3779 | then
3780 | echo "$i is even"
3781 | else
3782 | echo "$i is odd"
3783 | fi
3784 | done
3785 | exit
3786 | ```
3787 |
3788 | * Create a script that uses a function to multiply 2 numbers together:
3789 | ```shell
3790 | #!/bin/bash
3791 | Rectangle_Area() {
3792 | area=$(($1 * $2))
3793 | echo "Area is: $area"
3794 | }
3795 |
3796 | Rectangle_Area 10 20
3797 | exit
3798 | ```
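
* Shell functions return data by writing to stdout, which the caller captures with command substitution. A variation on the function above using that pattern:

```shell
#!/bin/bash
# Return a value by echoing it and capturing with $(...)
Rectangle_Area() {
  echo $(( $1 * $2 ))
}
area=$(Rectangle_Area 10 20)
echo "Area is: $area"   # Area is: 200
```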
3799 |
3800 | * Create a script that uses the output of another command to make a decision:
3801 | ```shell
3802 | #!/bin/bash
3803 | ping -c 1 $1 > /dev/null 2>&1
3804 | if [ $? -eq 0 ]
3805 | then
3806 | echo "Connectivity to $1 established"
3807 | else
3808 | echo "Connectivity to $1 unavailable"
3809 | fi
3810 | exit
3811 | ```
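
* `$?` works for any command, not just ping; `grep -q` is a convenient way to see it, since it produces no output and only sets the exit status:

```shell
#!/bin/bash
# grep -q sets $? without printing anything
echo "hello world" | grep -q hello
if [ $? -eq 0 ]
then
    echo "found"
else
    echo "missing"
fi
```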
3812 |
3813 | 1. Asghar Ghori - Sample RHCSA Exam 1
3814 |
3815 | * Setup a virtual machine RHEL 8 Server for GUI. Add a 10GB disk for the OS and use the default storage partitioning. Add 2 300MB disks. Add a network interface, but do not configure the hostname and network connection.
3816 |
3817 | * Assuming the root user password is lost, reboot the system and reset the root user password to root1234:
3818 | ```shell
3819 | # press e at the GRUB menu after reboot
3820 | # append rd.break to the line starting with linux
3821 | # ctrl + x to boot
3822 | mount -o remount,rw /sysroot
3823 | chroot /sysroot
3824 | passwd
3825 | # change password to root1234
3826 | touch /.autorelabel
3827 | exit
3828 | reboot
3829 | ```
3830 |
3831 | * Using a manual method (i.e. create/modify files by hand), configure a network connection on the primary network device with IP address 192.168.0.241/24, gateway 192.168.0.1, and nameserver 192.168.0.1:
3832 | ```shell
3833 | vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
3834 | # add line IPADDR=192.168.0.241
3835 | # add line GATEWAY=192.168.0.1
3836 | # add line DNS1=192.168.0.1
3837 | # add line PREFIX=24
3838 | # change BOOTPROTO from dhcp to none
3839 | systemctl restart NetworkManager.service
3840 | ifup enp0s3
3841 | nmcli con show # validate
3842 | ```
3843 |
3844 | * Using a manual method (modify file by hand), set the system hostname to rhcsa1.example.com and alias rhcsa1. Make sure the new hostname is reflected in the command prompt:
3845 | ```shell
3846 | vi /etc/hostname
3847 | # replace line with rhcsa1.example.com
3848 | vi /etc/hosts
3849 | # add rhcsa1.example.com and rhcsa1 to first line
3850 | systemctl restart NetworkManager.service
3851 | vi ~/.bashrc
3852 | # add line export PS1='<$(hostname)> '
3853 | ```
3854 |
3855 | * Set the default boot target to multi-user:
3856 | ```shell
3857 | systemctl set-default multi-user.target
3858 | ```
3859 |
3860 | * Set SELinux to permissive mode:
3861 | ```shell
3862 | setenforce permissive
3863 | sestatus # confirm
3864 | vi /etc/selinux/config
3865 | # change line SELINUX=permissive for permanence
3866 | ```
3867 |
3868 | * Perform a case-insensitive search for all lines in the `/usr/share/dict/linux.words` file that begin with the pattern "essential". Redirect the output to `/tmp/pattern.txt`. Make sure that empty lines are omitted:
3869 | ```shell
3870 | grep -i '^essential' /usr/share/dict/linux.words > /tmp/pattern.txt # -i for case-insensitivity; matching lines are never empty
3871 | ```
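
* The effect of `-i`, and of filtering empty lines with `grep -v '^$'`, can be checked on a few sample words piped in rather than read from linux.words:

```shell
# -i matches regardless of case; grep -v '^$' drops any empty lines
out=$(printf 'Essential\nessentially\nnonessential\n\n' | grep -i '^essential' | grep -v '^$')
echo "$out"
```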
3872 |
3873 | * Change the primary command prompt for the root user to display the hostname, username, and current working directory information in that order. Update the per-user initialisation file for permanence:
3874 | ```shell
3875 | vi /root/.bashrc
3876 | # add line export PS1='<$(hostname) $(whoami) $(pwd)> '
3877 | ```
3878 |
3879 | * Create user accounts called user10, user20, and user30. Set their passwords to Temp1234. Make accounts for user10 and user30 to expire on December 31, 2021:
3880 | ```shell
3881 | useradd user10
3882 | useradd user20
3883 | useradd user30
3884 | passwd user10 # enter password
3885 | passwd user20 # enter password
3886 | passwd user30 # enter password
3887 | chage -E 2021-12-31 user10
3888 | chage -E 2021-12-31 user30
3889 | chage -l user10 # confirm
3890 | ```
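
* `chage -E` accepts a YYYY-MM-DD date but stores it in `/etc/shadow` as a day count since the Unix epoch; the conversion is plain arithmetic (GNU date assumed):

```shell
# Convert an expiry date to the day number stored in /etc/shadow
expiry="2021-12-31"
days=$(( $(date -ud "$expiry" +%s) / 86400 ))
echo "$days"   # 18992
```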
3891 |
3892 | * Create a group called group10 and add users user20 and user30 as secondary members:
3893 | ```shell
3894 | groupadd group10
3895 | usermod -aG group10 user20
3896 | usermod -aG group10 user30
3897 | cat /etc/group | grep "group10" # confirm
3898 | ```
3899 |
3900 | * Create a user account called user40 with UID 2929. Set the password to user1234:
3901 | ```shell
3902 | useradd -u 2929 user40
3903 | passwd user40 # enter password
3904 | ```
3905 |
3906 | * Create a directory called dir1 under `/tmp` with ownership and owning groups set to root. Configure default ACLs on the directory and give user user10 read, write, and execute permissions:
3907 | ```shell
3908 | mkdir /tmp/dir1
3909 | cd /tmp
3910 | # tmp already has ownership with root
setfacl -d -m u:user10:rwx dir1 # -d sets the default ACL inherited by new files
setfacl -m u:user10:rwx dir1 # also grant access on the directory itself
3912 | ```
3913 |
3914 | * Attach the RHEL 8 ISO image to the VM and mount it persistently to `/mnt/cdrom`. Define access to both repositories and confirm:
3915 | ```shell
3916 | # add ISO to the virtualbox optical drive
3917 | mkdir /mnt/cdrom
3918 | mount /dev/sr0 /mnt/cdrom
3919 | vi /etc/yum.repos.d/image.repo
3920 | blkid /dev/sr0 >> /etc/fstab
3921 | vi /etc/fstab
3922 | # format line with UUID /mnt/cdrom iso9660 defaults 0 0
3923 | # contents of image.repo
3924 | #####
3925 | #[BaseOS]
3926 | #name=BaseOS
3927 | #baseurl=file:///mnt/cdrom/BaseOS
3928 | #enabled=1
#gpgcheck=1
3930 | #gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
3931 | #
3932 | #[AppStream]
3933 | #name=AppStream
3934 | #baseurl=file:///mnt/cdrom/AppStream
3935 | #enabled=1
#gpgcheck=1
3937 | #gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
3938 | #####
3939 | yum repolist # confirm
3940 | ```
3941 |
3942 | * Create a logical volume called lvol1 of size 300MB in vgtest volume group. Mount the Ext4 file system persistently to `/mnt/mnt1`:
3943 | ```shell
3944 | mkdir /mnt/mnt1
3945 | # /dev/sdb is already 300MB so don't need to worry about partitioning
3946 | vgcreate vgtest /dev/sdb
3947 | lvcreate --name lvol1 -L 296MB vgtest
3948 | lsblk # confirm
3949 | mkfs.ext4 /dev/mapper/vgtest-lvol1
3950 | vi /etc/fstab
3951 | # add line
3952 | # /dev/mapper/vgtest-lvol1 /mnt/mnt1 ext4 defaults 0 0
3953 | mount -a
3954 | lsblk # confirm
3955 | ```
3956 |
3957 | * Change group membership on `/mnt/mnt1` to group10. Set read/write/execute permissions on `/mnt/mnt1` for group members, and revoke all permissions for public:
3958 | ```shell
3959 | chgrp group10 /mnt/mnt1
3960 | chmod 770 /mnt/mnt1
3961 | ```
3962 |
3963 | * Create a logical volume called lvswap of size 300MB in the vgtest volume group. Initialise the logical volume for swap use. Use the UUID and place an entry for persistence:
3964 | ```shell
3965 | # /dev/sdc is already 300MB so don't need to worry about partitioning
3966 | vgcreate vgswap /dev/sdc
lvcreate --name lvswap -L 296MB vgswap
mkswap /dev/mapper/vgswap-lvswap # UUID returned
blkid /dev/mapper/vgswap-lvswap >> /etc/fstab
# organise new line so that it has UUID= swap swap defaults 0 0
swapon -a
swapon --show # confirm
3973 | ```
3974 |
3975 | * Use tar and bzip2 to create a compressed archive of the `/etc/sysconfig` directory. Store the archive under `/tmp` as etc.tar.bz2:
3976 | ```shell
tar -cvjf /tmp/etc.tar.bz2 /etc/sysconfig # -j selects bzip2 compression
3978 | ```
3979 |
3980 | * Create a directory hierarchy `/dir1/dir2/dir3/dir4`, and apply SELinux contexts for `/etc` on it recursively:
3981 | ```shell
3982 | mkdir -p /dir1/dir2/dir3/dir4
3983 | ll -Z
3984 | # etc shown as system_u:object_r:etc_t:s0
3985 | # dir1 shown as unconfined_u:object_r:default_t:s0
3986 | semanage fcontext -a -t etc_t "/dir1(/.*)?"
3987 | restorecon -R -v /dir1
3988 | ll -Z # confirm
3989 | ```
3990 |
3991 | * Enable access to the atd service for user20 and deny for user30:
3992 | ```shell
3993 | echo "user30" >> /etc/at.deny
3994 | # just don't create at.allow
3995 | ```
3996 |
3997 | * Add a custom message "This is the RHCSA sample exam on $(date) by $LOGNAME" to the `/var/log/messages` file as the root user. Use regular expression to confirm the message entry to the log file:
3998 | ```shell
3999 | logger "This is the RHCSA sample exam on $(date) by $LOGNAME"
4000 | grep "This is the" /var/log/messages
4001 | ```
4002 |
4003 | * Allow user20 to use sudo without being prompted for their password:
4004 | ```shell
4005 | usermod -aG wheel user20
4006 | # still prompts for password, could change the wheel group behaviour or add new line to sudoers
4007 | visudo
4008 | # add line at end user20 ALL=(ALL) NOPASSWD: ALL
4009 | ```
4010 |
4011 | 1. Asghar Ghori - Sample RHCSA Exam 2
4012 |
4013 | * Setup a virtual machine RHEL 8 Server for GUI. Add a 10GB disk for the OS and use the default storage partitioning. Add 1 400MB disk. Add a network interface, but do not configure the hostname and network connection.
4014 |
4015 | * Using the nmcli command, configure a network connection on the primary network device with IP address 192.168.0.242/24, gateway 192.168.0.1, and nameserver 192.168.0.1:
4016 | ```shell
4017 | nmcli con add ifname enp0s3 con-name mycon type ethernet ip4 192.168.0.242/24 gw4 192.168.0.1 ipv4.dns "192.168.0.1"
4018 | # man nmcli-examples can be referred to if you forget format
4019 | nmcli con show mycon | grep ipv4 # confirm
4020 | ```
4021 |
4022 | * Using the hostnamectl command, set the system hostname to rhcsa2.example.com and alias rhcsa2. Make sure that the new hostname is reflected in the command prompt:
4023 | ```shell
4024 | hostnamectl set-hostname rhcsa2.example.com
4025 | hostnamectl set-hostname --static rhcsa2 # not necessary due to format of FQDN
4026 | # the hostname already appears in the command prompt
4027 | ```
4028 |
4029 | * Create a user account called user70 with UID 7000 and comments "I am user70". Set the maximum allowable inactivity for this user to 30 days:
4030 | ```shell
4031 | useradd -u 7000 -c "I am user70" user70
4032 | chage -I 30 user70
4033 | ```
4034 |
4035 | * Create a user account called user50 with a non-interactive shell:
4036 | ```shell
4037 | useradd user50 -s /sbin/nologin
4038 | ```
4039 |
4040 | * Create a file called testfile1 under `/tmp` with ownership and owning group set to root. Configure access ACLs on the file and give user10 read and write access. Test access by logging in as user10 and editing the file:
4041 | ```shell
4042 | useradd user10
4043 | passwd user10 # set password
4044 | touch /tmp/testfile1
4045 | cd /tmp
4046 | setfacl -m u:user10:rw testfile1
4047 | sudo su user10
4048 | vi /tmp/testfile1 # can edit the file
4049 | ```
4050 |
4051 | * Attach the RHEL 8 ISO image to the VM and mount it persistently to `/mnt/dvdrom`. Define access to both repositories and confirm:
4052 | ```shell
4053 | mkdir /mnt/dvdrom
4054 | lsblk # rom is at /dev/sr0
4055 | mount /dev/sr0 /mnt/dvdrom
4056 | blkid /dev/sr0 >> /etc/fstab
4057 | vi /etc/fstab
4058 | # format line with UUID /mnt/dvdrom iso9660 defaults 0 0
4059 | vi /etc/yum.repos.d/image.repo
4060 | # contents of image.repo
4061 | #####
4062 | #[BaseOS]
4063 | #name=BaseOS
4064 | #baseurl=file:///mnt/dvdrom/BaseOS
4065 | #enabled=1
#gpgcheck=1
4067 | #gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
4068 | #
4069 | #[AppStream]
4070 | #name=AppStream
4071 | #baseurl=file:///mnt/dvdrom/AppStream
4072 | #enabled=1
#gpgcheck=1
4074 | #gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
4075 | #####
4076 | yum repolist # confirm
4077 | ```
4078 |
4079 | * Create a logical volume called lv1 of size equal to 10 LEs in vg1 volume group (create vg1 with PE size 8MB in a partition on the 400MB disk). Initialise the logical volume with XFS file system type and mount it on `/mnt/lvfs1`. Create a file called lv1file1 in the mount point. Set the file system to automatically mount at each system reboot:
4080 | ```shell
4081 | parted /dev/sdb
4082 | mklabel msdos
4083 | mkpart
4084 | # enter primary
4085 | # enter xfs
4086 | # enter 0
4087 | # enter 100MB
4088 | vgcreate vg1 -s 8MB /dev/sdb1
4089 | lvcreate --name lv1 -l 10 vg1 /dev/sdb1
4090 | mkfs.xfs /dev/mapper/vg1-lv1
4091 | mkdir /mnt/lvfs1
4092 | vi /etc/fstab
4093 | # add line for /dev/mapper/vg1-lv1 /mnt/lvfs1 xfs defaults 0 0
4094 | mount -a
4095 | df -h # confirm
4096 | touch /mnt/lvfs1/hi
4097 | ```
4098 |
4099 | * Add a group called group20 and change group membership on `/mnt/lvfs1` to group20. Set read/write/execute permissions on `/mnt/lvfs1` for the owner and group members, and no permissions for others:
4100 | ```shell
4101 | groupadd group20
4102 | chgrp group20 -R /mnt/lvfs1
4103 | chmod 770 -R /mnt/lvfs1
4104 | ```
4105 |
4106 | * Extend the file system in the logical volume lv1 by 64MB without unmounting it and without losing any data:
4107 | ```shell
lvextend -L +64MB vg1/lv1
# realised that the partition of 100MB isn't enough
parted /dev/sdb
resizepart
# expand partition 1 to 200MB
pvresize /dev/sdb1
lvextend -r -L +64MB vg1/lv1 # -r also grows the XFS file system online
4115 | ```
4116 |
4117 | * Create a swap partition of size 85MB on the 400MB disk. Use its UUID and ensure it is activated after every system reboot:
4118 | ```shell
4119 | parted /dev/sdb
4120 | mkpart
4121 | # enter primary
4122 | # enter linux-swap
4123 | # enter 200MB
4124 | # enter 285MB
4125 | mkswap /dev/sdb2
4126 | vi /etc/fstab
4127 | # add line for UUID swap swap defaults 0 0
4128 | swapon -a
4129 | ```
4130 |
4131 | * Create a disk partition of size 100MB on the 400MB disk and format it with Ext4 file system structures. Assign label stdlabel to the file system. Mount the file system on `/mnt/stdfs1` persistently using the label. Create file stdfile1 in the mount point:
4132 | ```shell
4133 | parted /dev/sdb
4134 | mkpart
4135 | # enter primary
4136 | # enter ext4
4137 | # enter 290MB
4138 | # enter 390MB
4139 | mkfs.ext4 -L stdlabel /dev/sdb3
4140 | mkdir /mnt/stdfs1
vi /etc/fstab
# add line LABEL=stdlabel /mnt/stdfs1 ext4 defaults 0 0
mount -a
touch /mnt/stdfs1/hi
4144 | ```
4145 |
4146 | * Use tar and gzip to create a compressed archive of the `/usr/local` directory. Store the archive under `/tmp` using a filename of your choice:
4147 | ```shell
4148 | tar -czvf /tmp/local.tar.gz /usr/local
4149 | ```
4150 |
4151 | * Create a directory `/direct01` and apply SELinux contexts for `/root`:
4152 | ```shell
4153 | mkdir /direct01
4154 | ll -Z
4155 | # direct01 has unconfined_u:object_r:default_t:s0
4156 | # root has system_u:object_r:admin_home_t:s0
4157 | semanage fcontext -a -t admin_home_t -s system_u "/direct01(/.*)?"
4158 | restorecon -R -v /direct01
4159 | ll -Zrt # confirm
4160 | ```
4161 |
4162 | * Set up a cron job for user70 to search for core files in the `/var` directory and copy them to the directory `/tmp/coredir1`. This job should run every Monday at 1:20 a.m:
4163 | ```shell
4164 | mkdir /tmp/coredir1
4165 | crontab -u user70 -e
20 1 * * Mon find /var -name core -type f -exec cp '{}' /tmp/coredir1 \;
4167 | crontab -u user70 -l # confirm
4168 | ```
4169 |
4170 | * Search for all files in the entire directory structure that have been modified in the past 30 days and save the file listing in the `/var/tmp/modfiles.txt` file:
4171 | ```shell
4172 | find / -mtime -30 >> /var/tmp/modfiles.txt
4173 | ```
4174 |
4175 | * Modify the bootloader program and set the default autoboot timer value to 2 seconds:
4176 | ```shell
4177 | vi /etc/default/grub
4178 | # set GRUB_TIMEOUT=2
4179 | grub2-mkconfig -o /boot/grub2/grub.cfg
4180 | ```
4181 |
4182 | * Determine the recommended tuning profile for the system and apply it:
4183 | ```shell
4184 | tuned-adm recommend
4185 | # virtual-guest is returned
4186 | tuned-adm active
4187 | # virtual-guest is returned
4188 | # no change required
4189 | ```
4190 |
4191 | * Configure Chrony to synchronise system time with the hardware clock:
4192 | ```shell
4193 | systemctl status chronyd.service
4194 | vi /etc/chrony.conf
4195 | # everything looks alright
4196 | ```
4197 |
4198 | * Install package group called "Development Tools", and capture its information in `/tmp/systemtools.out` file:
4199 | ```shell
4200 | yum grouplist # view available groups
4201 | yum groupinstall "Development Tools" -y >> /tmp/systemtools.out
4202 | ```
4203 |
4204 | * Lock user account user70. Use regular expressions to capture the line that shows the lock and store the output in file `/tmp/user70.lock`:
4205 | ```shell
4206 | usermod -L user70
4207 | grep user70 /etc/shadow >> /tmp/user70.lock # observe !
4208 | ```
4209 |
4210 | 1. Asghar Ghori - Sample RHCSA Exam 3
4211 |
4212 | * Build 2 virtual machines with RHEL 8 Server for GUI. Add a 10GB disk for the OS and use the default storage partitioning. Add 1 4GB disk to VM1 and 2 1GB disks to VM2. Assign a network interface, but do not configure the hostname and network connection.
4213 |
4214 | * The VirtualBox Network CIDR for the NAT network is 192.168.0.0/24.
4215 |
4216 | * On VM1, set the system hostname to rhcsa3.example.com and alias rhcsa3 using the hostnamectl command. Make sure that the new hostname is reflected in the command prompt:
4217 | ```shell
4218 | hostnamectl set-hostname rhcsa3.example.com
4219 | ```
4220 |
4221 | * On rhcsa3, configure a network connection on the primary network device with IP address 192.168.0.243/24, gateway 192.168.0.1, and nameserver 192.168.0.1 using the nmcli command:
4222 | ```shell
4223 | nmcli con add type ethernet ifname enp0s3 con-name mycon ip4 192.168.0.243/24 gw4 192.168.0.1 ipv4.dns 192.168.0.1
4224 | ```
4225 |
4226 | * On VM2, set the system hostname to rhcsa4.example.com and alias rhcsa4 using a manual method (modify file by hand). Make sure that the new hostname is reflected in the command prompt:
4227 | ```shell
4228 | vi /etc/hostname
4229 | # change to rhcsa4.example.com
4230 | ```
4231 |
4232 | * On rhcsa4, configure a network connection on the primary network device with IP address 192.168.0.244/24, gateway 192.168.0.1, and nameserver 192.168.0.1 using a manual method (create/modify files by hand):
4233 | ```shell
4234 | vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
4235 | #TYPE=Ethernet
4236 | #BOOTPROTO=static
4237 | #DEFROUTE=yes
4238 | #IPV4_FAILURE_FATAL=no
#IPV6INIT=no
4240 | #NAME=mycon
4241 | #DEVICE=enp0s3
4242 | #ONBOOT=yes
#IPADDR=192.168.0.244
4244 | #PREFIX=24
4245 | #GATEWAY=192.168.0.1
4246 | #DNS1=192.168.0.1
4247 | ifup enp0s3
4248 | nmcli con edit enp0s3 # play around with print ipv4 etc. to confirm settings
4249 | ```
4250 |
4251 | * Run "ping -c2 rhcsa4" on rhcsa3. Run "ping -c2 rhcsa3" on rhcsa4. You should see 0% loss in both outputs:
4252 | ```shell
4253 | # on rhcsa3
4254 | vi /etc/hosts
4255 | # add line 192.168.0.244 rhcsa4
ping -c2 rhcsa4 # confirm
4257 |
4258 | # on rhcsa4
4259 | vi /etc/hosts
4260 | # add line 192.168.0.243 rhcsa3
ping -c2 rhcsa3 # confirm
4262 | ```
4263 |
4264 | * On rhcsa3 and rhcsa4, attach the RHEL 8 ISO image to the VM and mount it persistently to `/mnt/cdrom`. Define access to both repositories and confirm:
4265 | ```shell
4266 | # attach disks in VirtualBox
4267 | # on rhcsa3 and rhcsa4
4268 | mkdir /mnt/cdrom
4269 | mount /dev/sr0 /mnt/cdrom
4270 | blkid # get UUID
4271 | vi /etc/fstab
4272 | # add line with UUID /mnt/cdrom iso9660 defaults 0 0
4273 | mount -a # confirm
4274 | vi /etc/yum.repos.d/image.repo
4275 | #####
4276 | #[BaseOS]
4277 | #name=BaseOS
4278 | #baseurl=file:///mnt/cdrom/BaseOS
4279 | #enabled=1
#gpgcheck=1
4281 | #gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
4282 | #
4283 | #[AppStream]
4284 | #name=AppStream
4285 | #baseurl=file:///mnt/cdrom/AppStream
4286 | #enabled=1
#gpgcheck=1
4288 | #gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
4289 | #####
4290 | yum repolist # confirm
4291 | ```
4292 |
4293 | * On rhcsa3, add HTTP port 8300/tcp to the SELinux policy database:
4294 | ```shell
semanage port -l | grep http # 8300 not in list for http_port_t
4296 | semanage port -a -t http_port_t -p tcp 8300
4297 | ```
4298 |
4299 | * On rhcsa3, create VDO volume vdo1 on the 4GB disk with logical size 16GB and mounted with Ext4 structures on `/mnt/vdo1`:
4300 | ```shell
# a sketch following the vdo create usage shown in the practice exam below
dnf install vdo kmod-vdo -y
systemctl enable vdo.service --now
vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=16G
mkfs.ext4 -E nodiscard /dev/mapper/vdo1
mkdir /mnt/vdo1
vi /etc/fstab
# add line /dev/mapper/vdo1 /mnt/vdo1 ext4 defaults,x-systemd.requires=vdo.service 0 0
mount -a
df -h /mnt/vdo1 # confirm
4302 | ```
4303 |
4304 | * Configure NFS service on rhcsa3 and share `/rh_share3` with rhcsa4. Configure AutoFS direct map on rhcsa4 to mount `/rh_share3` on `/mnt/rh_share4`. User user80 (create on both systems) should be able to create files under the share on the NFS server and under the mount point on the NFS client:
4305 | ```shell
4306 | # on rhcsa3
4307 | mkdir /rh_share3
chmod 777 /rh_share3
4309 | useradd user80
4310 | passwd user80
4311 | # enter Temp1234
dnf install nfs-utils -y
4313 | systemctl enable nfs-server.service --now
4314 | firewall-cmd --add-service=nfs --permanent
4315 | firewall-cmd --reload
4316 | vi /etc/exports
# add line /rh_share3 rhcsa4(rw)
4318 | exportfs -av
4319 |
4320 | # on rhcsa4
4321 | useradd user80
4322 | passwd user80
4323 | # enter Temp1234
mkdir /mnt/rh_share4
chmod 777 /mnt/rh_share4
# a manual mount and fstab entry (rhcsa3:/rh_share3 /mnt/rh_share4 nfs4 _netdev 0 0)
# are not required with AutoFS
dnf install autofs -y
vi /etc/auto.master
# add line /- /etc/auto.master.d/auto.direct for a direct map
vi /etc/auto.master.d/auto.direct
# add line /mnt/rh_share4 -rw rhcsa3:/rh_share3
systemctl enable autofs.service --now
4336 | ```
4337 |
4338 | * Configure NFS service on rhcsa4 and share the home directory for user user60 (create on both systems) with rhcsa3. Configure AutoFS indirect map on rhcsa3 to automatically mount the home directory under `/nfsdir` when user60 logs on to rhcsa3:
4339 | ```shell
4340 | # on rhcsa3
4341 | useradd user60
4342 | passwd user60
4343 | # enter Temp1234
4344 | dnf install autofs -y
4345 | mkdir /nfsdir
4346 | vi /etc/auto.master
4347 | # add line for /nfsdir /etc/auto.master.d/auto.home
4348 | vi /etc/auto.master.d/auto.home
4349 | # add line for * -rw rhcsa4:/home/user60
4350 | systemctl enable autofs.service --now
4351 |
4352 | # on rhcsa4
4353 | useradd user60
4354 | passwd user60
4355 | # enter Temp1234
4356 | vi /etc/exports
4357 | # add line for /home rhcsa3(rw)
4358 | exportfs -va
4359 | ```
4360 |
4361 | * On rhcsa4, create Stratis pool pool1 and volume str1 on a 1GB disk, and mount it to `/mnt/str1`:
4362 | ```shell
4363 | dnf provides stratis
4364 | dnf install stratis-cli -y
4365 | systemctl enable stratisd.service --now
stratis pool create pool1 /dev/sdc
stratis filesystem create pool1 str1
mkdir /mnt/str1
mount /dev/stratis/pool1/str1 /mnt/str1
blkid # get information for /etc/fstab
vi /etc/fstab
# add line for UUID /mnt/str1 xfs defaults,x-systemd.requires=stratisd.service 0 0
4373 | ```
4374 |
4375 | * On rhcsa4, expand Stratis pool pool1 using the other 1GB disk. Confirm that `/mnt/str1` sees the storage expansion:
4376 | ```shell
4377 | stratis pool add-data pool1 /dev/sdb
stratis blockdev list pool1 # extra disk visible
4379 | ```
4380 |
4381 | * On rhcsa3, create a group called group30 with GID 3000, and add user60 and user80 to this group. Create a directory called `/sdata`, enable setgid bit on it, and add write permission bit for the group. Set ownership and owning group to root and group30. Create a file called file1 under `/sdata` as user user60 and modify the file as user80 successfully:
4382 | ```shell
groupadd -g 3000 group30
usermod -aG group30 user60
usermod -aG group30 user80
mkdir /sdata
chown root:group30 /sdata
chmod g+ws /sdata # setgid so new files inherit group30
su - user60 -c "touch /sdata/file1"
# group members also need write access on the file itself, e.g. set umask 002 for the users
su - user80 -c "vi /sdata/file1"
4384 | ```
4385 |
4386 | * On rhcsa3, create directory `/dir1` with full permissions for everyone. Disallow non-owners to remove files. Test by creating file `/tmp/dir1/stkfile1` as user60 and removing it as user80:
4387 | ```shell
mkdir /dir1
chmod 1777 /dir1 # full permissions plus the sticky bit
su - user60 -c "touch /dir1/stkfile1"
su - user80 -c "rm /dir1/stkfile1" # fails: operation not permitted
4389 | ```
4390 |
4391 | * On rhcsa3, search for all manual pages for the description containing the keyword "password" and redirect the output to file `/tmp/man.out`:
4392 | ```shell
man -k password > /tmp/man.out
4394 | # or potentially man -wK "password" if relying on the description is not enough
4395 | ```
4396 |
4397 | * On rhcsa3, create file lnfile1 under `/tmp` and create one hard link `/tmp/lnfile2` and one soft link `/boot/file1`. Edit lnfile1 using the links and confirm:
4398 | ```shell
4399 | cd /tmp
4400 | touch lnfile1
4401 | ln lnfile1 lnfile2
ln -s /tmp/lnfile1 /boot/file1
4403 | ```
4404 |
4405 | * On rhcsa3, install module postgresql version 9.6:
4406 | ```shell
4407 | dnf module list postgresql # stream 10 shown as default
4408 | dnf module install postgresql:9.6
4409 | dnf module list # stream 9.6 shown as installed
4410 | ```
4411 |
4412 | * On rhcsa3, add the http service to the "external" firewalld zone persistently:
4413 | ```shell
4414 | firewall-cmd --zone=external --add-service=http --permanent
4415 | ```
4416 |
4417 | * On rhcsa3, set SELinux type shadow_t on a new file testfile1 in `/usr` and ensure that the context is not affected by a SELinux relabelling:
4418 | ```shell
4419 | cd /usr
4420 | touch /usr/testfile1
4421 | ll -Zrt # type shown as unconfined_u:object_r:usr_t:s0
semanage fcontext -a -t shadow_t /usr/testfile1
4423 | restorecon -R -v /usr/testfile1
4424 | ```
4425 |
4426 | * Configure password-less ssh access for user60 from rhcsa3 to rhcsa4:
4427 | ```shell
4428 | sudo su - user60
4429 | ssh-keygen # do not provide a password
4430 | ssh-copy-id rhcsa4 # enter user60 pasword on rhcsa4
4431 | ```
4432 |
1. RHCSA 8 Practice Exam
4434 |
4435 | * Interrupt the boot process and reset the root password:
4436 | ```shell
4437 | # interrupt boot process and add rd.break at end of linux line
mount -o remount,rw /sysroot
4439 | chroot /sysroot
4440 | passwd
4441 | # enter new passwd
4442 | touch /.autorelabel
4443 | # you could also add enforcing=0 to the end of the Linux line to avoid having to do this
4444 | # ctrl + D
4445 | reboot
4446 | ```
4447 |
4448 | * Repos are available from the repo server at http://repo.eight.example.com/BaseOS and http://repo.eight.example.com/AppStream for you to use during the exam. Setup these repos:
4449 | ```shell
4450 | vi /etc/yum.repos.d/localrepo.repo
4451 | #[BaseOS]
4452 | #name=BaseOS
4453 | #baseurl=http://repo.eight.example.com/BaseOS
4454 | #enabled=1
4455 | #
4456 | #[AppStream]
4457 | #name=AppStream
4458 | #baseurl=http://repo.eight.example.com/AppStream
4459 | #enabled=1
4460 | dnf repolist # confirm
4461 | # you could also use dnf config-manager --add-repo
4462 | ```
4463 |
4464 | * The system time should be set to your (or nearest to you) timezone and ensure NTP sync is configured:
4465 | ```shell
4466 | timedatectl set-timezone Australia/Sydney
4467 | timedatectl set-ntp true
4468 | timedatectl status # confirm status
4469 | ```
4470 |
4471 | * Add the following secondary IP addresses statically to your current running interface. Do this in a way that doesn’t compromise your existing settings:
4472 | ```shell
4473 | # IPV4 - 10.0.0.5/24
4474 | # IPV6 - fd01::100/64
4475 | nmcli con edit System\ eth0
4476 | goto ipv4.addresses
4477 | add 10.0.0.5/24
4478 | goto ipv6.addresses
4479 | add fd01::100/64
4480 | back
4481 | save
4482 | nmcli con edit System\ eth1
4483 | goto ipv4.addresses
4484 | add 10.0.0.5/24
4485 | goto ipv6.addresses
4486 | add fd01::100/64
4487 | back
4488 | save
4489 | nmcli con reload
4490 | # enter yes when asked if you want to set to manual
4491 | ```
4492 |
4493 | * Enable packet forwarding on system1. This should persist after reboot:
4494 | ```shell
vi /etc/sysctl.conf
# add line net.ipv4.ip_forward=1
sysctl -p # apply without a reboot
4498 |
4499 | * System1 should boot into the multiuser target by default and boot messages should be present (not silenced):
4500 | ```shell
4501 | systemctl set-default multi-user.target
4502 | vi /etc/default/grub
4503 | # remove rhgb quiet from GRUB_CMDLINE_LINUX
4504 | grub2-mkconfig -o /boot/grub2/grub.cfg
4505 | reboot
4506 | ```
4507 |
4508 | * Create a new 2GB volume group named “vgprac”:
4509 | ```shell
4510 | lsblk
4511 | # /dev/sdb is available with 8GB
4512 | # the file system already has ~36MB in use and is mounted to /extradisk1
4513 | umount /dev/sdb
4514 | parted /dev/sdb
4515 | mklabel
4516 | # enter msdos
4517 | mkpart
4518 | # enter primary
4519 | # enter xfs
4520 | # enter 0
4521 | # enter 2.1GB
4522 | set
4523 | # enter 1
4524 | # enter lvm
4525 | # enter on
4526 | vgcreate vgprac /dev/sdb1
4527 | # enter y to wipe
4528 | ```
4529 |
4530 | * Create a 500MB logical volume named “lvprac” inside the “vgprac” volume group:
4531 | ```shell
4532 | lvcreate --name lvprac -L 500MB vgprac
4533 | ```
4534 |
4535 | * The “lvprac” logical volume should be formatted with the xfs filesystem and mount persistently on the `/mnt/lvprac` directory:
4536 | ```shell
4537 | mkdir /mnt/lvprac
4538 | mkfs.xfs /dev/mapper/vgprac-lvprac
4539 | vi /etc/fstab
4540 | # comment out line for old /dev/sdb
4541 | # add line for /dev/mapper/vgprac-lvprac
4542 | mount -a
4543 | df -h # confirm mounted
4544 | ```
4545 |
4546 | * Extend the xfs filesystem on “lvprac” by 500MB:
4547 | ```shell
4548 | lvextend -r -L +500MB /dev/vgprac/lvprac
4549 | ```
4550 |
4551 | * Use the appropriate utility to create a 5TiB thin provisioned volume:
4552 | ```shell
4553 | lsblk
4554 | # /dev/sdc is available with 8GB
4555 | dnf install vdo kmod-vdo -y
4556 | umount /extradisk2
4557 | vdo create --name=myvolume --device=/dev/sdc --vdoLogicalSize=5T --force
4558 | vi /etc/fstab
4559 | # comment out line for old /dev/sdc
4560 | ```
4561 |
4562 | * Configure a basic web server that displays “Welcome to the web server” once connected to it. Ensure the firewall allows the http/https services:
4563 | ```shell
4564 | vi /var/www/html/index.html
4565 | # add line "Welcome to the web server"
4566 | systemctl restart httpd.service
4567 | curl http://localhost
4568 | # success
4569 | # from server1
4570 | curl http://server2.eight.example.com
4571 | # no route to host shown
4572 | # on server2
4573 | firewall-cmd --add-port=80/tcp --permanent
4574 | firewall-cmd --reload
4575 | # from server1
4576 | curl http://server2.eight.example.com
4577 | # success
4578 | ```
4579 |
4580 | * Find all files that are larger than 5MB in the /etc directory and copy them to /find/largefiles:
4581 | ```shell
4582 | mkdir -p /find/largefiles
4583 | find /etc/ -size +5M -exec cp {} /find/largefiles \;
4584 | # the {} is substituted by the output of find, and the ; is mandatory for an exec but must be escaped
4585 | ```
4586 |
4587 | * Write a script named awesome.sh in the root directory on system1. If “me” is given as an argument, then the script should output “Yes, I’m awesome.” If “them” is given as an argument, then the script should output “Okay, they are awesome.” If the argument is empty or anything else is given, the script should output “Usage ./awesome.sh me|them”:
4588 | ```shell
4589 | vi /awesome.sh
4590 | chmod +x /awesome.sh
4591 | # contents of awesome.sh
4592 | ##!/bin/bash
#if [ "$1" = "me" ]; then
# echo "Yes, I'm awesome."
#elif [ "$1" = "them" ]; then
# echo "Okay, they are awesome."
#else
# echo "Usage ./awesome.sh me|them"
#fi
# note that string comparison uses =, not the integer operator -eq, and quoting $1 avoids an error when no argument is given
4601 | ```
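
* A quick way to see why `=` (string comparison) is needed here rather than `-eq` (integer comparison) is to try both operators at the prompt:
```shell
# = compares strings and works for any operands
[ "abc" = "abc" ] && echo "strings match"
# -eq compares integers only
[ 5 -eq 5 ] && echo "integers match"
# -eq on non-numeric operands fails with "integer expression expected"
[ "abc" -eq "abc" ] 2>/dev/null || echo "-eq rejected the strings"
```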
4602 |
4603 | * Create users phil, laura, stewart, and kevin. All new users should have a file named “Welcome” in their home folder after account creation. All user passwords should expire after 60 days and be at least 8 characters in length. Phil and laura should be part of the “accounting” group. If the group doesn’t already exist, create it. Stewart and kevin should be part of the “marketing” group. If the group doesn’t already exist, create it:
4604 | ```shell
4605 | groupadd accounting
4606 | groupadd marketing
4607 | vi /etc/security/pwquality.conf
# uncomment the line that already has minlen = 8
touch /etc/skel/Welcome # files in /etc/skel are copied into each new user's home directory
4610 | useradd phil -G accounting
4611 | useradd laura -G accounting
4612 | useradd stewart -G marketing
4613 | useradd kevin -G marketing
4614 | chage -M 60 phil
4615 | chage -M 60 laura
4616 | chage -M 60 stewart
4617 | chage -M 60 kevin
4618 | chage -l phil # confirm
4619 | # can also change in /etc/login.defs
4620 | ```
4621 |
4622 | * Only members of the accounting group should have access to the `/accounting` directory. Make laura the owner of this directory. Make the accounting group the group owner of the `/accounting` directory:
4623 | ```shell
4624 | mkdir /accounting
4625 | chmod 770 /accounting
4626 | chown laura:accounting /accounting
4627 | ```
4628 |
4629 | * Only members of the marketing group should have access to the `/marketing` directory. Make stewart the owner of this directory. Make the marketing group the group owner of the `/marketing` directory:
4630 | ```shell
4631 | mkdir /marketing
4632 | chmod 770 /marketing
4633 | chown stewart:marketing /marketing
4634 | ```
4635 |
4636 | * New files should be owned by the group owner and only the file creator should have the permissions to delete their own files:
4637 | ```shell
chmod g+s,+t /marketing # setgid so new files take the group owner; sticky bit so only file owners can delete
chmod g+s,+t /accounting
4640 | ```
4641 |
4642 | * Create a cron job that writes “This practice exam was easy and I’m ready to ace my RHCSA” to `/var/log/messages` at 12pm only on weekdays:
4643 | ```shell
4644 | crontab -e
#0 12 * * 1-5 echo "This practice exam was easy and I'm ready to ace my RHCSA" >> /var/log/messages
4646 | # you can look at info crontab if you forget the syntax
4647 | ```
4648 |
4649 | ## RHCE 9
4650 |
4651 | - [Be able to perform all tasks expected of a Red Hat Certified System Administrator](#Be-able-to-perform-all-tasks-expected-of-a-Red-Hat-Certified-System-Administrator)
4652 | - [Understand core components of Ansible](#Understand-core-components-of-Ansible)
4653 | - [Use roles and Ansible Content Collections](#Use-roles-and-Ansible-Content-Collections)
4654 | - [Install and configure an Ansible control node](#Install-and-configure-an-Ansible-control-node)
4655 | - [Configure Ansible managed nodes](#Configure-Ansible-managed-nodes)
4656 | - [Run playbooks with Automation content navigator](#Run-playbooks-with-Automation-content-navigator)
4657 | - [Create Ansible plays and playbooks](#Create-Ansible-plays-and-playbooks)
4658 | - [Automate standard RHCSA tasks using Ansible modules that work with](#Automate-standard-RHCSA-tasks-using-Ansible-modules-that-work-with)
4659 | - [Manage-content](#Manage-content)
4660 | - [RHCE Exercises](#RHCE-Exercises)
4661 |
4662 | ### Be able to perform all tasks expected of a Red Hat Certified System Administrator
4663 |
4664 | 1. Understand and use essential tools
4665 |
4666 | 1. Operate running systems
4667 |
4668 | 1. Configure local storage
4669 |
4670 | 1. Create and configure file systems
4671 |
4672 | 1. Deploy, configure, and maintain systems
4673 |
4674 | 1. Manage users and groups
4675 |
4676 | 1. Manage security
4677 |
4678 | ### Understand core components of Ansible
4679 |
4680 | 1. Inventories
4681 |
* Inventories are what Ansible uses to locate and run against multiple hosts. The default inventory file is `/etc/ansible/hosts`. A different inventory location can be configured in `/etc/ansible/ansible.cfg` or supplied at runtime with the `-i` option.
4683 |
4684 | * The file can contain individual hosts, groups of hosts, groups of groups, and host and group level variables. It can also contain variables that determine how you connect to a host.
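
* Connection details can be given per host with built-in variables such as `ansible_host`, `ansible_user`, and `ansible_port`; an illustrative INI example (the values are placeholders):
```shell
[webservers]
web01.example.com ansible_host=192.168.0.50 ansible_user=admin ansible_port=2222
```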
4685 |
4686 | * An example of an INI-based host inventory file is shown below:
4687 | ```shell
4688 | mail.example.com
4689 |
4690 | [webservers]
4691 | web01.example.com
4692 | web02.example.com
4693 |
4694 | [dbservers]
4695 | db[01:04].example.com
4696 | ```
4697 |
4698 | * Note that square brackets can be used instead of writing a separate line for each host.
4699 |
4700 | * An example of a YAML-based host inventory file is shown below:
```yaml
all:
  hosts:
    mail.example.com:
  children:
    webservers:
      hosts:
        web01.example.com:
        web02.example.com:
    dbservers:
      hosts:
        db[01:04].example.com:
```
4714 |
4715 | 1. Modules
4716 |
4717 | * Modules are essentially tools for tasks. They usually take parameters and return JSON. Modules can be run from the command line or within a playbook. Ansible ships with a significant number of modules by default, and custom modules can also be written.
4718 |
* Each task in Ansible invokes a module, and tasks run in sequential order. A group of related tasks forms a play, and all plays together make a playbook.
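
* A minimal playbook showing this hierarchy (one play containing two sequential tasks; the `webservers` group and package name are illustrative):
```yaml
---
- name: Configure web servers # a play
  hosts: webservers
  become: true
  tasks:
    - name: Install Apache # a task using the yum module
      yum:
        name: httpd
        state: present

    - name: Start and enable Apache # a task using the service module
      service:
        name: httpd
        state: started
        enabled: true
```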
4721 | * To run modules through a YAML file:
4722 | ```shell
4723 | ansible-playbook example.yml
4724 | ```
4725 |
4726 | * The `--syntax-check` parameter can be used to check the syntax before running a playbook. The `-C` parameter can be used to dry-run a playbook.
4727 |
4728 | * To run a module independently:
4729 | ```shell
4730 | ansible myservers -m ping
4731 | ```
4732 |
4733 | 1. Variables
4734 |
4735 | * Variable names should only contain letters, numbers, and underscores. A variable name should also start with a letter. There are three main scopes for variables: Global, Host and Play.
4736 |
4737 | * Variables are typically used for configuration values and various parameters. They can store the return value of executed commands and may also be dictionaries. Ansible provides a number of predefined variables.
4738 |
4739 | * An example of INI-based variables:
4740 | ```ini
4741 | [webservers]
4742 | host1 http_port=80 maxRequestsPerChild=500
4743 | host2 http_port=305 maxRequestsPerChild=600
4744 | ```
4745 |
4746 | * An example of YAML-based variables:
4747 | ```yaml
4748 | webservers:
4749 | host1:
4750 | http_port: 80
4751 | maxRequestsPerChild: 500
4752 | host2:
4753 | http_port: 305
4754 | maxRequestsPerChild: 600
4755 | ```
4756 |
4757 | * An example of variables used within a playbook is shown below:
4758 | ```yaml
4759 | ---
4760 | - name: Create a file with vars
4761 | hosts: localhost
4762 | vars:
4763 | fileName: createdusingvars
4764 | tasks:
4765 | - name: Create a file
4766 | file:
4767 | state: touch
4768 | path: /tmp/{{ fileName }}.txt
4769 | ```
4770 |
4771 | * When a value begins with `{{`, the whole expression must be enclosed in quotes; otherwise YAML interprets the leading `{` as the start of a dictionary.
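
* A short sketch, using a hypothetical variable `user_name`, of when quoting is required:

```yaml
vars:
  greeting: "{{ user_name }}"              # quotes required: the value starts with {{
  log_path: /var/log/{{ user_name }}.log   # no quotes needed: {{ is not the first character
```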
4772 |
4773 | 1. Facts
4774 |
4775 | * Facts provide certain information about a given target host. They are automatically discovered by Ansible when it reaches out to a host. Facts can be disabled and can be cached for use in playbook executions.
4776 |
4777 | * To gather a list of facts for a host:
4778 | ```shell
4779 | ansible localhost -m setup
4780 | ```
4781 |
4782 | * You can also include this task in a playbook to list the facts:
4783 | ```yaml
4784 | ---
4785 | - name: Print all available facts
4786 | ansible.builtin.debug:
4787 | var: ansible_facts
4788 | ```
4789 |
4790 | * Nested facts can be referred to using the below syntax:
4791 | ```shell
4792 | {{ ansible_facts['devices']['xvda']['model'] }}
4793 | ```
4794 |
4795 | * In addition to Facts, Ansible contains many special (or magic) variables that are automatically available.
4796 |
4797 | * The `hostvars` variable contains host details for any host in the play, at any point in the playbook. To access a fact from another node, you can use the syntax `{{ hostvars['test.example.com']['ansible_facts']['distribution'] }}`.
4798 |
4799 | * The `groups` variable lists all of the groups and hosts in the inventory. You can use this to enumerate the hosts within a group:
4800 | ```jinja2
4801 | {% for host in groups['app_servers'] %}
4802 | # something that applies to all app servers.
4803 | {% endfor %}
4804 | ```
4805 |
4806 | * The below example shows using hostvars and groups together to find all of the IP addresses in a group:
4807 | ```jinja2
4808 | {% for host in groups['app_servers'] %}
4809 | {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}
4810 | {% endfor %}
4811 | ```
4812 |
4813 | * The `group_names` variable contains a list of all the groups the current host is in. The below example shows using this variable to create a templated file that varies based on the group membership of the host:
4814 | ```jinja2
4815 | {% if 'webserver' in group_names %}
4816 | # some part of a configuration file that only applies to webservers
4817 | {% endif %}
4818 | ```
4819 |
4820 | * The `inventory_hostname` variable contains the host name as configured in your inventory, and is an alternative to `ansible_hostname` when fact-gathering is disabled (`gather_facts: no`). You can also use `inventory_hostname_short`, which contains the part up to the first period.
4821 |
4822 | * The `ansible_play_hosts` is the list of all hosts still active in the current play.
4823 |
4824 | * The `ansible_play_batch` is a list of hostnames that are in scope for the current ‘batch’ of the play. The batch size is defined by `serial`, when not set it is equivalent to the whole play.
4825 |
4826 | * The `ansible_playbook_python` is the path to the Python executable used to invoke the Ansible command line tool.
4827 |
4828 | * The `inventory_dir` is the pathname of the directory holding Ansible’s inventory host file.
4829 |
4830 | * The `playbook_dir` contains the playbook base directory.
4831 |
4832 | * The `role_path` contains the current role’s pathname and only works inside a role.
4833 |
4834 | * The `ansible_check_mode` variable is a boolean, set to `True` if you run Ansible with `--check`.
4835 |
4836 | * Special variables can be viewed in a playbook such as the below:
4837 | ```yaml
4838 | ---
4839 | - name: Show some special variables
4840 | hosts: localhost
4841 | tasks:
4842 | - name: Show inventory host name
4843 | debug:
4844 | msg: "{{ inventory_hostname }}"
4845 |
4846 | - name: Show group
4847 | debug:
4848 | msg: "{{ group_names }}"
4849 |
4850 | - name: Show host variables
4851 | debug:
4852 | msg: "{{ hostvars }}"
4853 | ```
4854 |
4855 | * Custom facts can be defined on a managed node by placing `.fact` files (INI or JSON format, or executables that return JSON) in `/etc/ansible/facts.d/`. They are gathered alongside the standard facts and appear under the `ansible_local` variable.
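
* A minimal sketch of a custom fact file, assuming a hypothetical managed-node file `/etc/ansible/facts.d/example.fact`:

```ini
# /etc/ansible/facts.d/example.fact
[general]
environment=production
```

These values then become available under `ansible_local`, for example `{{ ansible_local['example']['general']['environment'] }}`.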
4856 |
4857 | 1. Loops
4858 |
4859 | * Loops work hand in hand with conditions as we can loop certain tasks until a condition is met. Ansible provides the `loop` and `with_*` directives for creating loops.
4860 |
4861 | * An example is shown below:
4862 | ```yaml
4863 | ---
4864 | - name: Create users with loop
4865 | hosts: localhost
4866 |
4867 | tasks:
4868 | - name: Create users
4869 | user:
4870 | name: "{{ item }}"
4871 | loop:
4872 | - jerry
4873 | - kramer
4874 | - elaine
4875 | ```
4876 |
4877 | * An alternate example is shown below:
4878 | ```yaml
4879 | ---
4880 | - name: Create users through loop
4881 | hosts: localhost
4882 |
4883 | vars:
4884 | users: [jerry,kramer,elaine]
4885 |
4886 | tasks:
4887 | - name: Create users
4888 | user:
4889 | name: "{{ item }}"
4890 | with_items: "{{ users }}"
4891 | ```
4892 |
4893 | * An example for installing packages using a loop is shown below:
4894 | ```yaml
4895 | ---
4896 | - name: Install packages using a loop
4897 | hosts: localhost
4898 |
4899 | vars:
4900 | packages: [ftp, telnet, htop]
4901 |
4902 | tasks:
4903 | - name: Install package
4904 | yum:
4905 | name: "{{ item }}"
4906 | state: present
4907 | with_items: "{{ packages }}"
4908 | ```
4909 |
4910 | * An alternate example for installing packages without an explicit loop, which is possible because the `yum` module accepts a list of package names natively:
4911 | ```yaml
4912 | ---
4913 | - name: Install packages through loop
4914 | hosts: localhost
4915 |
4916 | vars:
4917 | packages: [ftp, telnet, htop]
4918 |
4919 | tasks:
4920 | - name: Install packages
4921 | yum:
4922 | name: "{{ packages }}"
4923 | state: present
4924 | ```
4925 |
4926 | 1. Conditional tasks
4927 |
4928 | * The `when` keyword is used to make tasks conditional.
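
* A brief sketch of a conditional task keyed off a gathered fact:

```yaml
- name: Install httpd only on Red Hat family hosts
  yum:
    name: httpd
    state: present
  when: ansible_facts['os_family'] == "RedHat"
```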
4929 |
4930 | 1. Plays
4931 |
4932 | * The goal of a play is to map a group of hosts to some well-defined roles. A play can consist of one or more tasks which make calls to Ansible modules.
4933 |
4934 | 1. Handling task failure
4935 |
4936 | * By default, Ansible stops executing tasks on a host when a task fails on that host. You can use the `ignore_errors` option to continue despite the failure. This directive only works when the task can run and returns a value of 'failed'. For example, the condition `when: response.failed` is used to trigger if the registered variable `response` has returned a failure.
4937 |
4938 | * Ansible lets you define what "failure" means in each task using the `failed_when` conditional. This is commonly used on registered variables:
4939 | ```yaml
4940 | - name: Fail task when both files are identical
4941 | ansible.builtin.raw: diff foo/file1 bar/file2
4942 | register: diff_cmd
4943 | failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2
4944 | ```
4945 |
4946 | * Ansible lets you define when a particular task has "changed" a remote node using the `changed_when` conditional. This lets you determine, based on return codes or output, whether a change should be reported in Ansible statistics and whether a handler should be triggered. Typically a handler is used to execute tasks only when a change occurs.
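
* A sketch of `changed_when` on a registered variable, assuming a hypothetical script that prints `no change` when nothing was modified:

```yaml
- name: Run a deployment script
  shell: /usr/local/bin/deploy.sh   # hypothetical script path
  register: deploy_result
  changed_when: "'no change' not in deploy_result.stdout"
```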
4947 |
4948 | 1. Playbooks
4949 |
4950 | * A playbook is a series of plays. An example of a playbook:
4951 | ```yaml
4952 | ---
4953 | - hosts: webservers
4954 | become: yes
4955 | tasks:
4956 | - name: ensure apache is at the latest version
4957 | yum:
4958 | name: httpd
4959 | state: latest
4960 | - name: write our custom apache config file
4961 | template:
4962 | src: /srv/httpd.j2
4963 | dest: /etc/httpd/conf/httpd.conf
4964 | - name: ensure that apache is started
4965 | service:
4966 | name: httpd
4967 | state: started
4968 | - hosts: dbservers
4969 | become: yes
4970 | tasks:
4971 | - name: ensure postgresql is at the latest version
4972 | yum:
4973 | name: postgresql
4974 | state: latest
4975 | - name: ensure that postgres is started
4976 | service:
4977 | name: postgresql
4978 | state: started
4979 | ```
4980 |
4981 | 1. Configuration Files
4982 |
4983 | * The Ansible configuration files are taken from the below locations in order:
4984 | * `ANSIBLE_CONFIG` (environment variable)
4985 | * `ansible.cfg` (in the current directory)
4986 | * `~/.ansible.cfg` (in the home directory)
4987 | * `/etc/ansible/ansible.cfg`
4988 |
4989 | * A configuration file will not automatically load if it is in a world-writable directory.
4990 |
4991 | * The ansible-config command can be used to view configurations:
4992 | * list - Prints all configuration options
4993 | * dump - Dumps configuration
4994 | * view - View the configuration file
4995 |
4996 | * Commonly used settings:
4997 | * inventory - Specifies the default inventory file
4998 | * roles_path - Sets paths to search in for roles
4999 | * forks - Specifies the number of hosts Ansible acts on in parallel
5000 | * ansible_managed - Text inserted into templates to indicate that the file is managed by Ansible and changes will be overwritten
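
* A hypothetical `ansible.cfg` illustrating these settings (the paths and values are illustrative):

```ini
[defaults]
inventory = ./inventory/hosts.ini
roles_path = ./roles:/etc/ansible/roles
forks = 10
ansible_managed = This file is managed by Ansible; manual changes will be overwritten
```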
5001 |
5002 | 1. Roles
5003 |
5004 | * Roles help to split playbooks into smaller groups. Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content in roles, you can easily reuse them and share them with other users.
5005 |
5006 | * Consider the playbook below:
5007 | ```yaml
5008 | ---
5009 | - name: Setup httpd webserver
5010 | hosts: east-webservers
5011 |
5012 | tasks:
5013 | - name: Install httpd packages
5014 | yum:
5015 | name: httpd
5016 | state: present
5017 |
5018 | - name: Start httpd
5019 | service:
5020 | name: httpd
5021 | state: started
5022 |
5023 | - name: Open port http on firewall
5024 | firewalld:
5025 | service: http
5026 | permanent: true
5027 | state: enabled
5028 |
5029 | - name: Restart firewalld
5030 | service:
5031 | name: firewalld
5032 | state: restarted
5033 |
5034 | - hosts: west-webservers
5035 | tasks:
5036 | - name: Install httpd packages
5037 | yum:
5038 | name: httpd
5039 | state: present
5040 |
5041 | - name: Start httpd
5042 | service:
5043 | name: httpd
5044 | state: started
5045 | ```
5046 |
5047 | * This playbook can be simplified with the addition of two roles. The first role is created in `/etc/ansible/roles/fullinstall/tasks/main.yml`:
5048 | ```yaml
5049 | ---
5050 | - name: Install httpd packages
5051 | yum:
5052 | name: httpd
5053 | state: present
5054 |
5055 | - name: Start httpd
5056 | service:
5057 | name: httpd
5058 | state: started
5059 |
5060 | - name: Open port http on firewall
5061 | firewalld:
5062 | service: http
5063 | permanent: true
5064 | state: enabled
5065 |
5066 | - name: Restart firewalld
5067 | service:
5068 | name: firewalld
5069 | state: restarted
5070 | ```
5071 |
5072 | * The second role is created in `/etc/ansible/roles/basicinstall/tasks/main.yml`:
5073 | ```yaml
5074 | ---
5075 | - name: Install httpd packages
5076 | yum:
5077 | name: httpd
5078 | state: present
5079 |
5080 | - name: Start httpd
5081 | service:
5082 | name: httpd
5083 | state: started
5084 | ```
5085 |
5086 | * The simplified playbook is shown below:
5087 | ```yaml
5088 | ---
5089 | - name: Full install
5090 | hosts: all
5091 | roles:
5092 | - fullinstall
5093 |
5094 | - name: Basic install
5095 | hosts: all
5096 | roles:
5097 | - basicinstall
5098 | ```
5099 |
5100 | * By default, Ansible downloads roles to the first writable directory in the default list of paths `~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles`. This installs roles in the home directory of the user running ansible-galaxy. You can override this behaviour by setting `ANSIBLE_ROLES_PATH`, using the `--roles-path` option for the `ansible-galaxy` command, or defining `roles_path` in an `ansible.cfg` file.
5101 |
5102 | 1. Use provided documentation to look up specific information about Ansible modules and commands
5103 |
5104 | * To check the documentation for the `yum` module:
5105 | ```shell
5106 | ansible-doc -t module yum
5107 | ```
5108 |
5109 | ### Use roles and Ansible Content Collections
5110 |
5111 | 1. Create and work with roles
5112 |
5113 | * Roles simplify long playbooks by grouping tasks into smaller playbooks.
5114 |
5115 | 1. Install roles and use them in playbooks
5116 |
5117 | * The default location for roles is in `/etc/ansible/roles` which is configurable as `roles_path` in `ansible.cfg`. Ansible also looks for roles in collections, in the `roles` directory relative to the playbook file, and in the directory where the playbook file is located.
5118 |
5119 | * To install a role run:
5120 | ```shell
5121 | ansible-galaxy install author.role
5122 | ```
5123 |
5124 | * To install a role from an FTP location `$URL`, first run `wget $URL`. Then create a `requirements.yml` file with each role specified, for example:
5125 | ```yml
5126 | - name: http-role-gz
5127 | src: https://some.webserver.example.com/files/main.tar.gz
5128 | ```
5129 |
5130 | * Run the following:
5131 | ```shell
5132 | ansible-galaxy install -r requirements.yml -p .
5133 | ansible-galaxy install -r requirements.yml # allow documentation commands to work
5134 | ```
5135 |
5136 | 1. Install Content Collections and use them in playbooks
5137 |
5138 | * To install a collection run:
5139 | ```shell
5140 | ansible-galaxy collection install authorname.collectionname
5141 | ```
5142 |
5143 | * Collections can be used in playbooks by using:
5144 | ```yaml
5145 | tasks:
5146 |   - import_role:
5147 |       name: awx.awx
5148 | ```
5149 |
5150 | * Or by using:
5151 | ```yaml
5152 | collections:
5153 |   - awx.awx
5154 | tasks:
5155 | ```
5156 |
5157 | * The Fully Qualified Collection Name (FQCN) is used when referring to modules provided by an Ansible Collection.
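
* A short sketch of a task referring to a module from the `ansible.posix` collection by its FQCN:

```yaml
- name: Allow httpd to make network connections
  ansible.posix.seboolean:
    name: httpd_can_network_connect
    state: yes
    persistent: yes
```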
5158 |
5159 | 1. Obtain a set of related roles, supplementary modules, and other content from content collections, and use them in a playbook.
5160 |
5161 | * To install the `ansible.posix` collection:
5162 | ```shell
5163 | ansible-galaxy collection install ansible.posix
5164 | ```
5165 |
5166 | * To install a collection from an FTP location `$URL`, first run `wget $URL`. Then create a `requirements.yml` file with each collection specified using one of these methods:
5167 | ```yml
5168 | collections:
5169 | # directory containing the collection
5170 | - source: ./my_namespace/my_collection/
5171 | type: dir
5172 |
5173 | # directory containing a namespace, with collections as subdirectories
5174 | - source: ./my_namespace/
5175 | type: subdirs
5176 | ```
5177 |
5178 | * Run the following:
5179 | ```shell
5180 | ansible-galaxy collection install -r requirements.yml -p .
5181 | ansible-galaxy collection install -r requirements.yml # allow documentation commands to work
5182 | ```
5183 |
5184 | * Note that the collection is installed into `~/.ansible/collections/ansible_collections`. This can be overridden with `-p /path/to/collection`, but you must update `ansible.cfg` accordingly. The documentation of the installed collection can be consulted, but the `ansible-doc` command only searches the system directories for documentation.
5185 |
5186 | * Create and run a playbook enforce-selinux.yml:
5187 | ```yaml
5188 | ---
5189 | - name: set SELinux to enforcing
5190 | hosts: localhost
5191 | become: yes
5192 | tasks:
5193 | - name: set SElinux to enforcing
5194 | ansible.posix.selinux:
5195 | policy: targeted
5196 | state: enforcing
5197 | ```
5198 |
5199 | ### Install and configure an Ansible control node
5200 |
5201 | 1. Install required packages
5202 |
5203 | * To install Ansible using dnf:
5204 | ```shell
5205 | sudo subscription-manager repos --enable codeready-builder-for-rhel-9-$(arch)-rpms
5206 | sudo dnf install epel-release epel-next-release
5207 | sudo dnf install -y ansible-core
5208 | ```
5209 |
5210 | 1. Create a static host inventory file
5211 |
5212 | * An inventory is a list of hosts that Ansible manages. Inventory files may contain hosts, patterns, groups, and variables. Multiple inventory files may be specified using a directory. Inventory files may be specified in INI or YAML format.
5213 |
5214 | * The default location is `/etc/ansible/hosts`. The location can be set in `ansible.cfg` or specified in the CLI using:
5215 | ```shell
5216 | ansible -i <inventory_file> all -m ping
5217 | ```
5218 |
5219 | * Best practices for inventory variables:
5220 | * Variables should be stored in YAML files located relative to the inventory file.
5221 | * Host and group variables should be stored in the `host_vars` and `group_vars` directories respectively (the directories need to be created).
5222 | * Variable files should be named after the host or group for which they contain variables (files may end in .yml or .yaml).
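
* For instance, variables for a `webservers` group could live in a hypothetical `inventory/group_vars/webservers.yml` (the variable names are illustrative):

```yaml
# inventory/group_vars/webservers.yml
http_port: 80
max_clients: 200
```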
5223 |
5224 | * A host file can be static or dynamic. A dynamic host file can reflect updated IP addresses automatically but requires additional plugins.
5225 |
5226 | * To view the hosts in the file:
5227 | ```shell
5228 | ansible-inventory --list
5229 | ```
5230 |
5231 | 1. Create a configuration file
5232 |
5233 | * An example of creating a custom configuration file, and updating the default configuration file:
5234 | ```shell
5235 | cd ansible
5236 | vi ansible.cfg
5237 | ### contents of file
5238 | [defaults]
5239 | interpreter_python = auto
5240 | inventory = /home/cloud_user/ansible/inventory/inv.ini
5241 | roles_path = /etc/ansible/roles:/home/cloud_user/ansible/roles
5242 | ```
5243 |
5244 | 1. Create and use static inventories to define groups of hosts
5245 |
5246 | * A sample inventory file is shown below:
5247 | ```ini
5248 | [webservers]
5249 | foo.example.com
5250 | bar.example.com
5251 |
5252 | [dbservers]
5253 | one.example.com
5254 | two.example.com
5255 | three.example.com
5256 | ```
5257 |
5258 | ### Configure Ansible managed nodes
5259 |
5260 | 1. Create and distribute SSH keys to managed nodes
5261 |
5262 | * A control node is any machine with Ansible installed. You can run Ansible commands and playbooks from any control node. A managed node (also sometimes called a "host") is a network device or server you manage with Ansible. Ansible is not installed on managed nodes.
5263 |
5264 | * The following is an example of generating SSH keys on the control node and distributing them to managed nodes mypearson2c and mypearson3c:
5265 | ```shell
5266 | ssh-keygen
5267 | # enter password
5268 | # now we have id_rsa and id_rsa.pub in /home/cloud_user/.ssh/
5269 | ssh-copy-id cloud_user@mspearson2c.mylabserver.com
5270 | # enter password
5271 | ssh-copy-id cloud_user@mspearson3c.mylabserver.com
5272 | # enter password
5273 | ```
5274 |
5275 | * A playbook can be used to distribute keys to managed nodes. To install the `openssh_keypair` module:
5276 | ```shell
5277 | ansible-galaxy collection install community.crypto
5278 | ```
5279 |
5280 | * Create a playbook `/home/ansible/ansible/bootstrap.yml`:
5281 | ```yaml
5282 | ---
5283 | - name: Key preparation
5284 | hosts: localhost
5285 | tasks:
5286 | - name: SSH key directory exists
5287 | file:
5288 | path: /home/ansible/.ssh
5289 | state: directory
5290 | mode: 0700
5291 |
5292 | - name: SSH keypair exists
5293 | community.crypto.openssh_keypair:
5294 | path: /home/ansible/.ssh/id_rsa
5295 | state: present
5296 |
5297 | - name: Bootstrap automation user and install keys
5298 | hosts: all
5299 | tasks:
5300 | - name: Confirm user exists
5301 | user:
5302 | name: ansible
5303 | state: present
5304 |
5305 | - name: sudo passwordless access for ansible user
5306 | copy:
5307 | content: "ansible ALL=(ALL) NOPASSWD:ALL"
5308 | dest: /etc/sudoers.d/ansible
5309 | validate: /usr/sbin/visudo -csf %s
5310 |
5311 | - name: Public key is in remote authorized keys
5312 | ansible.posix.authorized_key:
5313 | user: ansible
5314 | state: present
5315 | key: "{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"
5316 | ```
5317 |
5318 | * Note that until you have setup the keys, in `ansible.cfg` you will need to set `ask_pass=true` and `host_key_checking=false` under `[defaults]`, and `become_ask_pass=true` under `[privilege_escalation]`. This will allow you to provide the authentication and sudo passwords as part of the playbook execution.
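
* Those temporary settings can be sketched in `ansible.cfg` as:

```ini
[defaults]
ask_pass = true
host_key_checking = false

[privilege_escalation]
become_ask_pass = true
```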
5319 |
5320 | 1. Configure privilege escalation on managed nodes
5321 |
5322 | * The following is an example of configuring privilege escalation on managed nodes mypearson2c and mypearson3c:
5323 | ```shell
5324 | # perform these steps on both mypearson2c and mypearson3c
5325 | sudo visudo
5326 | # add line
5327 | cloud_user ALL=(ALL) NOPASSWD: ALL
5328 | ```
5329 |
5330 | * The example above provides a playbook for setting up privilege escalation on managed nodes.
5331 |
5332 | 1. Deploy files to managed nodes
5333 |
5334 | * The copy module can be used to copy files into managed nodes.
5335 |
5336 | 1. Be able to analyze simple shell scripts and convert them to playbooks
5337 |
5338 | * Common Ansible modules can be used to replicate the functionality of scripts. The `command` and `shell` modules allow you to execute commands on the managed nodes.
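
* As a sketch, a script line such as `systemctl enable --now httpd` maps onto the `service` module (hypothetical conversion):

```yaml
- name: Enable and start httpd
  service:
    name: httpd
    state: started
    enabled: yes
```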
5339 |
5340 | ### Run playbooks with Automation content navigator
5341 |
5342 | 1. Know how to run playbooks with Automation content navigator
5343 |
5344 | * Ansible content navigator is a command line, content-creator-focused tool with a text-based user interface. You can use it to launch and watch jobs and playbooks, share playbook and job run artifacts in JSON, browse and introspect automation execution environments, and more.
5345 |
5346 | * Install Ansible Navigator:
5347 | ```shell
5348 | # install podman
5349 | sudo dnf install container-tools
5350 | # install ansible-navigator
5351 | sudo dnf install python3-pip
5352 | python3 -m pip install ansible-navigator --user
5353 | ansible-navigator
5354 | ```
5355 |
5356 | * You can also install Ansible Navigator using:
5357 | ```shell
5358 | subscription-manager register
5359 | subscription-manager list --available # find the Pool ID for the subscription containing Ansible Automation Platform
5360 | subscription-manager attach --pool=$Pool_ID
5361 | subscription-manager repos --list | grep ansible
5362 | subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-9-aarch64-rpms
5363 | dnf install ansible-navigator -y
5364 | ```
5365 |
5366 | * After running `ansible-navigator` you are presented with an interactive interface. You can type `:run` followed by a playbook path (and optionally `-i <inventory>`) to run a playbook. You can also run a playbook using `ansible-navigator run -m stdout $playbook`. This provides backwards compatibility with the standard `ansible-playbook` command.
5367 |
5368 | * Note that there is no `ansible-doc` for `ansible-navigator`, but you can use `ansible-navigator --help`.
5369 |
5370 | 1. Use Automation content navigator to find new modules in available Ansible Content Collections and use them
5371 |
5372 | * Run the following:
5373 | ```shell
5374 | ansible-galaxy collection install community.general -p collections
5375 | ansible-galaxy collection list # verify the collection is installed
5376 | ```
5377 |
5378 | 1. Use Automation content navigator to create inventories and configure the Ansible environment
5379 |
5380 | * Run the following:
5381 | ```shell
5382 | ansible-navigator inventory -i assets/inventory.cfg # inspect an inventory
5383 | ansible-navigator config
5384 | ```
5385 |
5386 | ### Create Ansible plays and playbooks
5387 |
5388 | 1. Know how to work with commonly used Ansible modules
5389 |
5390 | * Commonly used modules include:
5391 | * Ping
5392 | * Validates a server is running and reachable.
5393 | * No required parameters.
5394 | * Setup
5395 | * Gather Ansible facts.
5396 | * No required parameters.
5397 | * Yum
5398 | * Manage packages with the YUM package manager.
5399 | * Common parameters are name and state.
5400 | * Service
5401 | * Control services on remote hosts.
5402 | * Common parameters are name (required), state and enabled.
5403 | * User
5404 | * Manage user accounts and attributes.
5405 | * Common parameters are name (required), state, group and groups.
5406 | * Copy
5407 | * Copy files to a remote host.
5408 | * Common parameters are src, dest (required), owner, group and mode.
5409 | * File
5410 | * Manage files and directories.
5411 | * Common parameters are path (required), state, owner, group and mode.
5412 | * Git
5413 | * Interact with git repositories.
5414 | * Common parameters are repo (required), dest (required) and clone.
5415 |
5416 | * A sample playbook to copy a file to a remote client:
5417 | ```yaml
5418 | ---
5419 | - name: Copy a file from local to remote
5420 | hosts: all
5421 | tasks:
5422 | - name: Copying file
5423 | copy:
5424 | src: /apps/ansible/temp
5425 | dest: /tmp
5426 | owner: myname
5427 | group: myname
5428 | mode: 0644
5429 | ```
5430 |
5431 | * A sample playbook to change file permissions:
5432 | ```yaml
5433 | ---
5434 | - name: Change file permissions
5435 | hosts: all
5436 | tasks:
5437 | - name: File Permissions
5438 | file:
5439 | path: /etc/ansible/temp
5440 | mode: '0777'
5441 | ```
5442 |
5443 | * A sample playbook to check a file or directory status:
5444 | ```yaml
5445 | ---
5446 | - name: File status module
5447 | hosts: localhost
5448 | tasks:
5449 | - name: Check file status and attributes
5450 | stat:
5451 | path: /etc/hosts
5452 | register: fs
5453 |
5454 | - name: Show results
5455 | debug:
5456 | msg: File attributes {{ fs }}
5457 | ```
5458 |
5459 | * A sample playbook to create a directory and a file, and then remove the file:
5460 | ```yaml
5461 | ---
5462 | - name: Create and Remove file
5463 | hosts: all
5464 | tasks:
5465 | - name: Create a directory
5466 | file:
5467 | path: /tmp/seinfeld
5468 | owner: myuser
5469 |         mode: '0770'
5470 | state: directory
5471 |
5472 | - name: Create a file in that directory
5473 | file:
5474 | path: /tmp/seinfeld/jerry
5475 | state: touch
5476 |
5477 | - name: Stat the new file jerry
5478 | stat:
5479 | path: /tmp/seinfeld/jerry
5480 | register: jf
5481 |
5482 | - name: Show file status
5483 | debug:
5484 | msg: File status and attributes {{ jf }}
5485 |
5486 | - name: Remove file
5487 | file:
5488 | path: /tmp/seinfeld/jerry
5489 | state: absent
5490 | ```
5491 |
5492 | * A sample playbook to create a file and add text:
5493 | ```yaml
5494 | ---
5495 | - name: Create a file and add text
5496 | hosts: localhost
5497 | tasks:
5498 | - name: Create a new file
5499 | file:
5500 | path: /tmp/george
5501 | state: touch
5502 |
5503 | - name: Add text to the file
5504 | blockinfile:
5505 | path: /tmp/george
5506 | block: Sample text
5507 | ```
5508 |
5509 | * A sample playbook to setup httpd and open a firewall port:
5510 | ```yaml
5511 | ---
5512 | - name: Setup httpd and open firewall port
5513 | hosts: all
5514 | tasks:
5515 | - name: Install apache packages
5516 | yum:
5517 | name: httpd
5518 | state: present
5519 |
5520 | - name: Start httpd
5521 | service:
5522 | name: httpd
5523 | state: started
5524 |
5525 | - name: Open port 80 for http access
5526 | firewalld:
5527 | service: http
5528 | permanent: true
5529 | state: enabled
5530 |
5531 | - name: Restart firewalld service to load firewall changes
5532 | service:
5533 | name: firewalld
5534 | state: reloaded
5535 | ```
5536 |
5537 | * Note that managing firewalld requires the following collection:
5538 | ```shell
5539 | ansible-galaxy collection install --ignore-certs ansible.posix
5540 | ```
5541 |
5542 | * A sample playbook to run a shell script:
5543 | ```yaml
5544 | ---
5545 | - name: Playbook for shell script
5546 | hosts: all
5547 | tasks:
5548 | - name: Run shell script
5549 | shell: "/home/username/cfile.sh"
5550 | ```
5551 |
5552 | * A sample playbook to setup a cronjob:
5553 | ```yaml
5554 | ---
5555 | - name: Create a cron job
5556 | hosts: all
5557 | tasks:
5558 | - name: Schedule cron
5559 | cron:
5560 | name: This job is scheduled by Ansible
5561 | minute: "0"
5562 | hour: "10"
5563 | day: "*"
5564 | month: "*"
5565 | weekday: "4"
5566 | user: root
5567 | job: "/home/username/cfile.sh"
5568 | ```
5569 |
5570 | * A sample playbook to download a file:
5571 | ```yaml
5572 | ---
5573 | - name: Download Tomcat from tomcat.apache.org
5574 | hosts: all
5575 | tasks:
5576 | - name: Create a directory /opt/tomcat
5577 | file:
5578 | path: /opt/tomcat
5579 | state: directory
5580 | mode: 0755
5581 | owner: root
5582 | group: root
5583 | - name: Download Tomcat using get_url
5584 | get_url:
5585 | url: https://dlcdn.apache.org/tomcat/tomcat-8/v8.5.95/bin/apache-tomcat-8.5.95.tar.gz
5586 | dest: /opt/tomcat
5587 | mode: 0755
5588 | ```
5589 |
5590 | * A sample playbook to mount a filesystem:
5591 | ```yaml
5592 | ---
5593 | - name: Create and mount new storage
5594 | hosts: all
5595 | tasks:
5596 | - name: Create new partition
5597 | parted:
5598 | name: files
5599 | label: gpt
5600 | device: /dev/sdb
5601 | number: 1
5602 | state: present
5603 | part_start: 1MiB
5604 | part_end: 1GiB
5605 |
5606 | - name: Create xfs filesystem
5607 | filesystem:
5608 | dev: /dev/sdb1
5609 | fstype: xfs
5610 |
5611 | - name: Create mount directory
5612 | file:
5613 | path: /data
5614 | state: directory
5615 |
5616 | - name: Mount the filesystem
5617 | mount:
5618 | src: /dev/sdb1
5619 | path: /data
5620 | fstype: xfs
5621 | state: mounted
5622 | ```
5623 |
5624 | * A sample playbook to create a user:
5625 | ```yaml
5626 | ---
5627 | - name: Playbook for creating users
5628 | hosts: all
5629 | tasks:
5630 | - name: Create users
5631 | user:
5632 | name: george
5633 | home: /home/george
5634 | shell: /bin/bash
5635 | ```
5636 |
5637 | * A sample playbook to update the password for a user:
5638 | ```yaml
5639 | ---
5640 | - name: Add or update user password
5641 | hosts: all
5642 | tasks:
5643 | - name: Change "george" password
5644 | user:
5645 | name: george
5646 | update_password: always
5647 | password: "{{ newpassword|password_hash('sha512') }}"
5648 | ```
5649 |
5650 | * To run the playbook and provide a password:
5651 | ```shell
5652 | ansible-playbook changepass.yml --extra-vars newpassword=abc123
5653 | ```
5654 |
5655 | * A sample playbook to kill a process:
5656 | ```yaml
5657 | ---
5658 | - name: Find a process and kill it
5659 | hosts: 192.168.1.105
5660 | tasks:
5661 | - name: Get running process from remote host
5662 | ignore_errors: yes
5663 | shell: "ps -few | grep top | awk '{print $2}'"
5664 | register: running_process
5665 |
5666 | - name: Show process
5667 | debug:
5668 | msg: Processes are {{ running_process }}
5669 |
5670 | - name: Kill running processes
5671 | ignore_errors: yes
5672 | shell: "kill {{ item }}"
5673 | with_items: "{{ running_process.stdout_lines }}"
5674 | ```
5675 |
5697 | * To run a playbook from a particular task:
5698 | ```shell
5699 | ansible-playbook yamlfile.yml --start-at-task 'Task name'
5700 | ```
5701 |
5702 | * The `-b` flag will run a module in become mode (privilege escalation).
5703 |
5704 | 1. Use variables to retrieve the results of running a command
5705 |
5706 | * The register keyword is used to store the results of running a command as a variable. Variables can then be referenced by other tasks in the playbook. Registered variables are only valid on the host for the current playbook run. The return values differ from module to module.
5707 |
5708 | * A sample playbook register.yml is shown below:
5709 | ```yaml
5710 | ---
5711 | - hosts: mspearson2
5712 | tasks:
5713 | - name: create a file
5714 | file:
5715 | path: /tmp/testFile
5716 | state: touch
5717 | register: var
5718 | - name: display debug msg
5719 | debug: msg="Register output is {{ var }}"
5720 | - name: edit file
5721 | lineinfile:
5722 | path: /tmp/testFile
5723 | line: "The uid is {{ var.uid }} and gid is {{ var.gid }}"
5724 | ```
5725 |
5726 | * This playbook is run using:
5727 | ```shell
5728 | ansible-playbook playbooks/register.yml
5729 | ```
5730 |
5731 | * The result stored in `/tmp/testFile` shows the variables for uid and gid.
5732 |
5733 | 1. Use conditionals to control play execution
5734 |
5735 | * Handlers are executed at the end of the play once all tasks are finished. They are typically used to start, reload, restart, or stop services. Sometimes you only want to run a task when a change is made on a machine. For example, restarting a service only if a task updates the configuration of that service.
5736 |
5737 | * A sample handler is shown below:
5738 | ```yaml
5739 | ---
5740 | - name: Verify Apache installation
5741 | hosts: localhost
5742 |
5743 | tasks:
5744 | - name: Ensure Apache is at the latest version
5745 | yum:
5746 | name: httpd
5747 | state: latest
5748 |
5749 | - name: Copy updated Apache config file
5750 | copy:
5751 | src: /tmp/httpd.conf
5752 | dest: /etc/httpd.conf
5753 | notify:
5754 | - Restart Apache
5755 |
5756 | - name: Ensure Apache is running
5757 | service:
5758 | name: httpd
5759 | state: started
5760 |
5761 | handlers:
5762 | - name: Restart Apache
5763 | service:
5764 | name: httpd
5765 | state: restarted
5766 | ```
5767 |
5768 | * The when statement can be used to conditionally execute tasks. An example is shown below:
5769 | ```yaml
5770 | ---
5771 | - name: Install Apache WebServer
5772 | hosts: localhost
5773 | tasks:
5774 |     - name: Install Apache on an Ubuntu Server
5775 |       apt:
5776 |         name: apache2
5777 |         state: present
5778 |       when: ansible_distribution == "Ubuntu"
5779 | 
5780 |     - name: Install Apache on CentOS Server
5781 |       yum:
5782 |         name: httpd
5783 |         state: present
5784 |       when: ansible_os_family == "RedHat"
5785 | ```
5786 |
5787 | * Tags are references or aliases to a task. Instead of running an entire playbook, use tags to target a specific task you need to run. Consider the playbook `httpbytags.yml` shown below:
5788 | ```yaml
5789 | ---
5790 | - name: Setup Apache server
5791 | hosts: localhost
5792 | tasks:
5793 | - name: Install httpd
5794 | yum:
5795 | name: httpd
5796 | state: present
5797 | tags: i-httpd
5798 | - name: Start httpd
5799 | service:
5800 | name: httpd
5801 | state: started
5802 | tags: s-httpd
5803 | ```
5804 |
5805 | * To list tags in a playbook run `ansible-playbook httpbytags.yml --list-tags`.
5806 |
5807 | * To target tasks using tags run `ansible-playbook httpbytags.yml -t i-httpd`. To skip tasks using tags run `ansible-playbook httpbytags.yml --skip-tags i-httpd`.
5808 |
5809 | 1. Configure error handling
5810 |
5811 | * By default, Ansible stops executing tasks on a host when a task fails on that host. You can use `ignore_errors` to continue despite the failure.
5812 |
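5813 | * A minimal sketch of two error-handling options (the package and service names are illustrative): `ignore_errors` lets a play continue past a single failed task, while a `block`/`rescue` pair runs recovery tasks when anything in the block fails.
5814 |   ```yaml
5815 |   ---
5816 |   - name: Error handling examples
5817 |     hosts: all
5818 |     tasks:
5819 |       - name: Continue even if this task fails
5820 |         yum:
5821 |           name: not-a-real-package
5822 |           state: present
5823 |         ignore_errors: yes
5824 | 
5825 |       - name: Recover if the block fails
5826 |         block:
5827 |           - name: Try to start a service
5828 |             service:
5829 |               name: httpd
5830 |               state: started
5831 |         rescue:
5832 |           - name: Report the failure
5833 |             debug:
5834 |               msg: "httpd could not be started"
5835 |   ```
5836 | 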
5813 | 1. Create playbooks to configure systems to a specified state
5814 |
5815 | * Various example playbooks are provided in the surrounding sections.
5816 |
5817 | ### Automate standard RHCSA tasks using Ansible modules that work with:
5818 |
5819 | 1. Software packages and repositories
5820 |
5821 | * The `yum` module in `ansible-core` can be used to manage packages.
5822 |
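5823 | * For example, a task to install a package might look like the below (the package and repository names are illustrative):
5824 |   ```yaml
5825 |   - name: Install httpd from the BaseOS repository
5826 |     yum:
5827 |       name: httpd
5828 |       state: present
5829 |       enablerepo: BaseOS
5830 |   ```
5831 | 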
5823 | 1. Services
5824 |
5825 | * The `service` module in `ansible-core` can be used to manage services.
5826 |
5827 | 1. Firewall rules
5828 |
5829 | * The `ansible.posix.firewalld` module in the `ansible.posix` collection can be used to manage firewall rules.
5830 |
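5831 | * For example, a task to open the https service might look like the below (assuming the collection is installed):
5832 |   ```yaml
5833 |   - name: Allow https traffic permanently and immediately
5834 |     ansible.posix.firewalld:
5835 |       service: https
5836 |       permanent: true
5837 |       immediate: true
5838 |       state: enabled
5839 |   ```
5840 | 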
5831 | 1. File systems
5832 |
5833 | * The `community.general.filesystem` module in the `community.general` collection can be used to manage file systems.
5834 |
5835 | 1. Storage devices
5836 |
5837 | * Various modules in the `community.general` collection can be used to manage block devices.
5838 |
5839 | 1. File content
5840 |
5841 | * The `copy` module in `ansible-core` can be used to copy file contents.
5842 |
5843 | 1. Archiving
5844 |
5845 | * The `community.general.archive` module in the `community.general` collection can be used to create or extend an archive.
5846 |
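5847 | * For example, a task to archive a directory might look like the below (the paths are illustrative):
5848 |   ```yaml
5849 |   - name: Compress /var/log into a gzipped tar archive
5850 |     community.general.archive:
5851 |       path: /var/log
5852 |       dest: /tmp/logs.tar.gz
5853 |       format: gz
5854 |   ```
5855 | 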
5847 | 1. Task scheduling
5848 |
5849 | * The `cron` module in `ansible-core` can be used to manage task scheduling.
5850 |
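5851 | * For example, a task to schedule a nightly job might look like the below (the script path is illustrative):
5852 |   ```yaml
5853 |   - name: Run a backup script every day at 02:30
5854 |     cron:
5855 |       name: nightly-backup
5856 |       minute: "30"
5857 |       hour: "2"
5858 |       job: /usr/local/bin/backup.sh
5859 |   ```
5860 | 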
5851 | 1. Security
5852 |
5853 | * The `community.general.sefcontext` module in the `community.general` collection can be used to manage SELinux file contexts.
5854 |
5855 | 1. Users and groups
5856 |
5857 | * The `user` and `group` modules in `ansible-core` can be used to manage users and groups.
5858 |
5859 | ### Manage content
5860 |
5861 | 1. Create and use templates to create customized configuration files
5862 |
5863 | * The `template` module in `ansible-core` can be used with templates to populate files on managed hosts.
5864 |
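5865 | * For example, assuming a hypothetical template `motd.j2` containing `Welcome to {{ ansible_hostname }}`, a task to render it might look like:
5866 |   ```yaml
5867 |   - name: Render a template onto the managed host
5868 |     template:
5869 |       src: motd.j2
5870 |       dest: /etc/motd
5871 |   ```
5872 | 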
5865 | 1. Use Ansible Vault in playbooks to protect sensitive data
5866 |
5867 | * An encrypted playbook can be created using `ansible-vault create $playbook_name`. The YAML file can then be executed using `ansible-playbook $playbook_name --ask-vault-pass`.
5868 |
5869 | * A non-encrypted playbook can be encrypted using `ansible-vault encrypt $playbook_name`.
5870 |
5871 | * An encrypted playbook can be viewed using `ansible-vault view $playbook_name` and edited with `ansible-vault edit httpvault.yml`. You will be prompted for passwords where they are not provided in the command.
5872 |
5873 | * Instead of encrypting the entire playbook, strings within the playbook can be encrypted. This is done using `ansible-vault encrypt_string $string`. The encrypted value can be substituted for the variable value inside of the playbook. The `--ask-vault-pass` option must be provided when running the playbook regardless of whether individual strings are encrypted or the entire playbook.
5874 |
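5875 | * The output of `ansible-vault encrypt_string` can be pasted into a playbook as the value of a variable. A sketch of the resulting structure (the ciphertext shown is a truncated placeholder, not real vault output):
5876 |   ```yaml
5877 |   vars:
5878 |     db_password: !vault |
5879 |       $ANSIBLE_VAULT;1.1;AES256
5880 |       62313365...
5881 |   ```
5882 | 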
5875 | ### RHCE Exercises
5876 |
5877 | #### Practice Exam 1
5878 |
5879 | 1. Setup
5880 |
5881 | * Create a VirtualBox machine for the control node. In VirtualBox create a new machine with the VirtualBox additions and the RHEL 9.2 installer ISOs attached as IDE devices. Set networking mode to Bridged Adapter. Set the Pointing Device to USB Tablet and enable the USB 3.0 Controller. Set shared Clipboard and Drag'n'Drop to Bidirectional. Uncheck Audio output. Start the machine and install RHEL.
5882 |
5883 | * On the control node run the following setup as root:
5884 | ```shell
5885 | blkid # note the UUID of the RHEL ISO
5886 | mkdir /mnt/rheliso
5887 | vi /etc/fstab
5888 | # add the below line
5889 | # UUID="2023-04-13-16-58-02-00" /mnt/rheliso iso9660 loop 0 0
5890 | mount -a # confirm no error is returned
5891 | ```
5892 |
5893 | * On the control node setup the dnf repositories as root:
5894 | ```shell
5895 | vi /etc/yum.repos.d/redhat.repo # add the below content
5896 | # [BaseOS]
5897 | # name=BaseOS
5898 | # baseurl=file:///mnt/rheliso/BaseOS
5899 | # enabled=1
5900 | # gpgcheck=0
5901 | #
5902 | # [AppStream]
5903 | # name=AppStream
5904 | # enabled=1
5905 | # baseurl=file:///mnt/rheliso/AppStream
5906 | # gpgcheck=0
5907 | dnf repolist # confirm repos are returned
5908 | ```
5909 |
5910 | * On the control node run the following setup as root to install guest additions:
5911 | ```shell
5912 | dnf install kernel-headers kernel-devel
5913 | # run the guest additions installer
5914 | ```
5915 |
5916 | * On the control node run the following setup:
5917 | ```shell
5918 | useradd ansible
5919 | echo password | passwd --stdin ansible
5920 | echo "ansible ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
5921 | dnf install ansible-core -y
5922 | ssh-keygen # select all defaults
5923 | ```
5924 |
5925 | * Clone the control node machine five times for managed nodes 1-5. Attach a 2GB SATA drive to nodes 1-3, and a 1GB SATA drive to node 4. Note that cloning means the above steps are also done on each managed node, but if the servers existed already, we would have had to do this manually.
5926 |
5927 | * On each managed node get the IP address using `ifconfig`. On the control node, switch to the root user and add hostnames for the managed nodes:
5928 | ```shell
5929 | vi /etc/hosts
5930 | # add the following lines
5931 | # 192.168.1.116 node1.example.com
5932 | # 192.168.1.117 node2.example.com
5933 | # 192.168.1.109 node3.example.com
5934 | # 192.168.1.118 node4.example.com
5935 | # 192.168.1.111 node5.example.com
5936 | ```
5937 |
5938 | * On the control node run the following to make vim friendlier for playbook creation:
5939 | ```shell
5940 | echo "autocmd FileType yaml setlocal ai ts=2 sw=2 et cuc nu" >> ~/.vimrc
5941 | ```
5942 |
5943 | 1. Task 1
5944 |
5945 | * Run the following commands on the control node:
5946 | ```shell
5947 | vi /home/ansible/ansible/hosts # add the below
5948 | # [dev]
5949 | # node1.example.com
5950 | #
5951 | # [test]
5952 | # node2.example.com
5953 | #
5954 | # [proxy]
5955 | # node3.example.com
5956 | #
5957 | # [prod]
5958 | # node4.example.com
5959 | # node5.example.com
5960 | #
5961 | # [webservers:children]
5962 | # prod
5963 |
5964 | vi /home/ansible/ansible/ansible.cfg
5965 | # [defaults]
5966 | # roles_path=/home/ansible/ansible/roles
5967 | # inventory=/home/ansible/ansible/hosts
5968 | # remote_user=ansible
5969 | # host_key_checking=false
5970 |
5971 | # [privilege_escalation]
5972 | # become=true
5973 | # become_user=root
5974 | # become_method=sudo
5975 | # become_ask_pass=false
5976 | ```
5977 |
5978 | 1. Task 2
5979 |
5980 | * To list the available modules run `ansible-doc -l`. To get help on a module run `ansible-doc -t module $module`.
5981 |
5982 | * Create and run the following script on the control node:
5983 | ```shell
5984 | #!/bin/bash
5985 | ansible all -m yum_repository -a "name=EPEL description=RHEL9 baseurl=https://dl.fedoraproject.org/pub/epel/9/Everything/x86_64/ gpgcheck=no enabled=no"
5986 | ```
5987 |
5988 | 1. Task 3
5989 |
5990 | * Create and run the following playbook on the control node. The playbook can be tested using `ansible-playbook packages.yml -C`.
5991 | ```yaml
5992 | ---
5993 | - name: Install packages
5994 | hosts: dev,prod,webservers
5995 | become: true
5996 | tasks:
5997 | - name: Install common packages
5998 | yum:
5999 | name:
6000 | - httpd
6001 | - mod_ssl
6002 | - mariadb
6003 | state: present
6004 |
6005 | - name: Install dev specific packages
6006 | yum:
6007 | name:
6008 | - '@Development tools'
6009 | state: latest
6010 | when: "'dev' in group_names"
6011 |
6012 | - name: Update all packages on dev to the latest version
6013 | yum:
6014 | name:
6015 | - '*'
6016 | state: latest
6017 | when: "'dev' in group_names"
6018 | ```
6019 |
6020 | 1. Task 4
6021 |
6022 | * Create the following file `main.yml` at `/home/ansible/ansible/roles/sample-apache/tasks`:
6023 | ```yaml
6024 | - name: Start and enable httpd
6025 | service:
6026 | name: httpd
6027 | state: started
6028 | enabled: yes
6029 |
6030 | - name: Start and enable firewalld
6031 | service:
6032 | name: firewalld
6033 | state: started
6034 | enabled: yes
6035 |
6036 | - name: Allow http service
6037 | firewalld:
6038 | service: http
6039 | state: enabled
6040 | permanent: true
6041 | immediate: true
6042 |
6043 | - name: Create and serve message
6044 | template:
6045 | src: /home/ansible/ansible/roles/sample-apache/templates/index.html.j2
6046 | dest: /var/www/html/index.html
6047 | notify:
6048 | - restart
6049 | ```
6050 |
6051 | * Create the following file at `/home/ansible/ansible/roles/sample-apache/templates/index.html.j2`:
6052 | ```shell
6053 | Welcome to {{ ansible_fqdn }} on {{ ansible_default_ipv4.address }}
6054 | ```
6055 |
6056 | * If you forget the Ansible facts variable names you can run `ansible localhost -m setup`. Note that the `firewalld` module requires you to run `ansible-galaxy collection install ansible.posix`.
6057 |
6058 | * Create the following file `main.yml` at `/home/ansible/ansible/roles/sample-apache/handlers`:
6059 | ```yaml
6060 | - name: restart
6061 | service:
6062 | name: httpd
6063 | state: restarted
6064 | ```
6065 |
6066 | 1. Task 5
6067 |
6068 | * Create a file `requirements.yml` at `/home/ansible/ansible/roles/requirements.yml`:
6069 | ```yaml
6070 | - name: haproxy-role
6071 | src: geerlingguy.haproxy
6072 |
6073 | - name: php_role
6074 | src: geerlingguy.php
6075 | ```
6076 |
6077 | * Install the roles using `ansible-galaxy install -r requirements.yml -p /home/ansible/ansible/roles`. Observe the new roles available in the directory.
6078 |
6079 | 1. Task 6
6080 |
6081 | * Update the `/etc/hosts` file on managed node 3 to define the FQDNs used in the below playbook.
6082 |
6083 | * Create the playbook `/home/ansible/ansible/role.yml`:
6084 | ```yaml
6085 | ---
6086 | - name: Install haproxy-role
6087 | hosts: proxy
6088 | become: true
6089 | vars:
6090 | haproxy_frontend_port: 81
6091 | haproxy_backend_servers:
6092 | - name: node4.example.com
6093 | address: 192.168.1.118:80
6094 | - name: node5.example.com
6095 | address: 192.168.1.111:80
6096 |
6097 | tasks:
6098 | - name: Install haproxy-role prereqs
6099 | yum:
6100 | name:
6101 | - haproxy
6102 | state: present
6103 |
6104 | - name: Open port 81
6105 | firewalld:
6106 | port: 81/tcp
6107 | permanent: true
6108 | state: enabled
6109 | immediate: true
6110 | notify: Restart httpd service
6111 |
6112 | - name: Install haproxy-role
6113 | include_role:
6114 | name: haproxy-role
6115 |
6116 | handlers:
6117 | - name: Restart httpd service
6118 | service:
6119 | name: httpd
6120 | state: restarted
6121 |
6122 | - name: Install php_role
6123 | hosts: prod
6124 | become: true
6125 | tasks:
6126 | - name: Install php_role prereqs
6127 | yum:
6128 | name:
6129 | - php
6130 | - php-xml.x86_64
6131 | state: present
6132 |
6133 | - name: Install php_role
6134 | include_role:
6135 | name: php_role
6136 |
6137 | handlers:
6138 |       - name: Restart httpd service
6139 |         service:
6140 |           name: "{{ item }}"
6141 |           state: restarted
6142 |         loop:
6143 |           - httpd
6144 |           - firewalld
6144 | ```
6145 |
6146 | * Installation currently fails due to a missing php-xmlrpc package. This appears to be an incompatibility with RHEL 9. Attempting to install this package from the EPEL repository gives an error about failing to download metadata.
6147 |
6148 | 1. Task 7
6149 |
6150 | * Create a file `/home/ansible/ansible/secret.txt`:
6151 | ```shell
6152 | echo reallysafepw > secret.txt
6153 | ```
6154 |
6155 | * Create a file `lock.yml` at `/home/ansible/ansible/lock.yml` and encrypt it using `ansible-vault encrypt lock.yml --vault-password-file secret.txt`. The contents of the original `lock.yml`:
6156 | ```yaml
6157 | pw_dev: dev
6158 | pw_mgr: mgr
6159 | ```
6160 |
6161 | * You can also create the encrypted file directly using `ansible-vault create lock.yml` and entering the password when prompted.
6162 |
6163 | 1. Task 8
6164 |
6165 | * Create a file `/home/ansible/ansible/users_list.yml`:
6166 | ```yaml
6167 | users:
6168 | - username: bill
6169 | job: developer
6170 | - username: chris
6171 | job: manager
6172 | - username: dave
6173 | job: test
6174 | - username: ethan
6175 | job: developer
6176 | ```
6177 |
6178 | * Create a file `/home/ansible/ansible/users.yml` and run it with `ansible-playbook users.yml --vault-password-file secret.txt`:
6179 | ```yaml
6180 | ---
6181 | - name: Create users
6182 | hosts: all
6183 | become: true
6184 | vars_files:
6185 | - lock.yml
6186 | - users_list.yml
6187 |
6188 | tasks:
6189 | - name: Create devops group
6190 | group:
6191 | name: devops
6192 | when: "'dev' in group_names"
6193 |
6194 | - name: Create managers group
6195 | group:
6196 | name: managers
6197 | when: "('proxy' in group_names)"
6198 |
6199 | - name: Create developer users
6200 | user:
6201 | name: "{{ item.username }}"
6202 | group: devops
6203 | password: "{{ pw_dev | password_hash('sha512') }}"
6204 | when: "('dev' in group_names) and ('developer' in item.job)"
6205 | loop: "{{ users }}"
6206 |
6207 | - name: Create manager users
6208 | user:
6209 | name: "{{ item.username }}"
6210 | group: managers
6211 | password: "{{ pw_mgr | password_hash('sha512') }}"
6212 | when: "('proxy' in group_names) and ('manager' in item.job)"
6213 | loop: "{{ users }}"
6214 | ```
6215 |
6216 | 1. Task 9
6217 |
6218 | * Create a file `/home/ansible/ansible/report.txt`:
6219 |   ```shell
6220 | HOST=inventory hostname
6221 | MEMORY=total memory in mb
6222 | BIOS=bios version
6223 | SDA_DISK_SIZE=disk size
6224 | SDB_DISK_SIZE=disk size
6225 | ```
6226 |
6227 | * Create a file `/home/ansible/ansible/report.yml`:
6228 | ```yaml
6229 | ---
6230 | - name: Create a report
6231 | hosts: all
6232 | tasks:
6233 | - name: Copy the report template
6234 | copy:
6235 | src: /home/ansible/ansible/report.txt
6236 | dest: /root/report.txt
6237 |
6238 |     - name: Populate the HOST variable
6239 | lineinfile:
6240 | path: /root/report.txt
6241 | state: present
6242 | regex: "^HOST="
6243 | line: "HOST='{{ ansible_facts.hostname }}'"
6244 |
6245 |     - name: Populate the MEMORY variable
6246 | lineinfile:
6247 | path: /root/report.txt
6248 | state: present
6249 | regex: "^MEMORY="
6250 | line: "MEMORY='{{ ansible_facts.memtotal_mb }}'"
6251 |
6252 |     - name: Populate the BIOS variable
6253 | lineinfile:
6254 | path: /root/report.txt
6255 | state: present
6256 | regex: "^BIOS="
6257 | line: "BIOS='{{ ansible_bios_version }}'"
6258 |
6259 |     - name: Populate the SDA_DISK_SIZE variable
6260 | lineinfile:
6261 | path: /root/report.txt
6262 | state: present
6263 | regex: "^SDA_DISK_SIZE="
6264 | line: "SDA_DISK_SIZE='{{ ansible_devices.sda.size }}'"
6265 | when: 'ansible_devices.sda.size is defined'
6266 |
6267 |     - name: Populate the SDA_DISK_SIZE variable
6268 |       lineinfile:
6269 |         path: /root/report.txt
6270 |         state: present
6271 |         regex: "^SDA_DISK_SIZE="
6272 |         line: "SDA_DISK_SIZE=NONE"
6273 |       when: 'ansible_devices.sda.size is not defined'
6274 |
6275 |     - name: Populate the SDB_DISK_SIZE variable
6276 | lineinfile:
6277 | path: /root/report.txt
6278 | state: present
6279 | regex: "^SDB_DISK_SIZE="
6280 | line: "SDB_DISK_SIZE='{{ ansible_devices.sdb.size }}'"
6281 | when: 'ansible_devices.sdb.size is defined'
6282 |
6283 |     - name: Populate the SDB_DISK_SIZE variable
6284 | lineinfile:
6285 | path: /root/report.txt
6286 | state: present
6287 | regex: "^SDB_DISK_SIZE="
6288 | line: "SDB_DISK_SIZE=NONE"
6289 | when: 'ansible_devices.sdb.size is not defined'
6290 | ```
6291 |
6292 | 1. Task 10
6293 |
6294 | * The `hostvars` variable contains information about hosts in the inventory. You can print it with the `debug` module to get an idea of the variables available when creating a j2 template.
6295 |
6296 | * Create a file `/home/ansible/ansible/hosts.j2`:
6297 | ```jinja2
6298 | {%for host in groups['all']%}
6299 | {{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
6300 | {%endfor%}
6301 | ```
6302 |
6303 | * Create a playbook `/home/ansible/ansible/hosts.yml` and run it with `ansible-playbook hosts.yml`:
6304 | ```yml
6305 | ---
6306 | - name: Populate j2 template
6307 | hosts: all
6308 | tasks:
6309 | - name: Populate j2 template
6310 | template:
6311 | src: /home/ansible/ansible/hosts.j2
6312 | dest: /root/myhosts
6313 | when: "'dev' in group_names"
6314 | ```
6315 |
6316 | 1. Task 11
6317 |
6318 | * The modules used below are available in the community.general collection. Run `ansible-galaxy collection install community.general -p /home/ansible/ansible/collections` to install the collection.
6319 |
6320 | * Create a playbook `/home/ansible/ansible/logvol.yml` and run it with `ansible-playbook logvol.yml`:
6321 | ```yml
6322 | ---
6323 | - name: Create logical volumes
6324 | hosts: all
6325 | become: true
6326 | tasks:
6327 | - name: Create partition
6328 | community.general.parted:
6329 | device: /dev/sdb
6330 | state: present
6331 | flags: [ lvm ]
6332 | number: 1
6333 | when: "ansible_devices.sdb is defined"
6334 |
6335 | - name: Create LVG
6336 | community.general.lvg:
6337 | vg: vg0
6338 | pvs: /dev/sdb1
6339 | state: present
6340 | when: "ansible_devices.sdb.partitions.sdb1 is defined"
6341 |
6342 | - name: Create 1500MB LVOL
6343 | community.general.lvol:
6344 | vg: vg0
6345 | lv: lv0
6346 | size: 1500m
6347 | state: present
6348 | when: "ansible_lvm.vgs.vg0 is defined and ((ansible_lvm.vgs.vg0.size_g | float) > 1.5) and ansible_lvm.lvs.lv0 is not defined"
6349 |
6350 | - name: Print error message
6351 | debug:
6352 | msg: "Not enough space for logical volume."
6353 | when: "ansible_lvm.vgs.vg0 is defined and ((ansible_lvm.vgs.vg0.size_g | float) < 1.5)"
6354 |
6355 | - name: Create 800MB LVOL
6356 | community.general.lvol:
6357 | vg: vg0
6358 | lv: lv0
6359 | size: 800m
6360 | state: present
6361 | when: "ansible_lvm.vgs.vg0 is defined and ((ansible_lvm.vgs.vg0.size_g | float) > 0.8) and ansible_lvm.lvs.lv0 is not defined"
6362 |
6363 | - name: Create the filesystem
6364 | community.general.filesystem:
6365 | fstype: xfs
6366 | dev: /dev/vg0/lv0
6367 | state: present
6368 | when: "ansible_lvm.vgs.vg0 is defined"
6369 | ```
6370 |
6371 | 1. Task 12
6372 |
6373 | * Create a file at `/home/ansible/ansible/index.html`:
6374 | ```html
6375 | Development
6376 | ```
6377 |
6378 | * Create a playbook (for example `/home/ansible/ansible/webdev.yml`) and run it with `ansible-playbook webdev.yml`:
6379 | ```yaml
6380 | ---
6381 | - name: Serve a file
6382 | hosts: dev
6383 | become: true
6384 | tasks:
6385 | - name: Create the webdev user
6386 | user:
6387 | name: webdev
6388 | state: present
6389 |
6390 | - name: Create webdev directory
6391 | file:
6392 | path: /webdev
6393 | state: directory
6394 | mode: '2755'
6395 | owner: webdev
6396 |
6397 | - name: Create the html webdev directory
6398 | file:
6399 | path: /var/www/html/webdev
6400 | state: directory
6401 | mode: '2755'
6402 | owner: root
6403 |
6404 | - name: Set correct SELinux file context
6405 | sefcontext:
6406 | target: '/var/www/html/webdev(/.*)?'
6407 | setype: httpd_sys_content_t
6408 | state: present
6409 |
6410 | - name: Set correct SELinux file context
6411 | sefcontext:
6412 | target: '/webdev(/.*)?'
6413 | setype: httpd_sys_content_t
6414 | state: present
6415 |
6416 | - name: Check firewall rules
6417 | firewalld:
6418 | service: http
6419 | permanent: true
6420 | state: enabled
6421 | immediate: true
6422 |
6423 | - name: Copy html file into webdev directory
6424 | copy:
6425 | src: /home/ansible/ansible/index.html
6426 | dest: /webdev/index.html
6427 | owner: webdev
6428 | notify: Restart httpd
6429 |
6430 | - name: Create the symlink
6431 | file:
6432 | src: /webdev
6433 | dest: /var/www/html/webdev
6434 | state: link
6435 | force: yes
6436 |
6437 | handlers:
6438 | - name: Restart httpd
6439 | service:
6440 | name: httpd
6441 | state: restarted
6442 | enabled: yes
6443 | ```
6444 |
6445 | * Note that for the above to work on the `dev` host I had to set `DocumentRoot "/var/www/html/webdev"` in `/etc/httpd/conf/httpd.conf`. There is likely a better solution than that.
6446 |
6447 | 1. Task 13
6448 |
6449 | * The system roles can be installed using:
6450 | ```shell
6451 | dnf install rhel-system-roles -y
6452 | ```
6453 |
6454 | * Once installed, documentation and sample playbooks are available at `/usr/share/doc/rhel-system-roles/` under each role.
6455 |
6456 | * Create a playbook `/home/ansible/ansible/timesync.yml` and run it with `ansible-playbook timesync.yml`:
6457 | ```yml
6458 | ---
6459 | - name: Set the time using timesync
6460 | hosts: all
6461 | vars:
6462 | timesync_ntp_servers:
6463 | - hostname: 0.uk.pool.ntp.org
6464 |       iburst: yes
6465 | roles:
6466 | - /usr/share/ansible/roles/rhel-system-roles.timesync
6467 | ```
6468 |
6469 | 1. Task 14
6470 |
6471 | * Create a playbook `/home/ansible/ansible/regulartasks.yml` and run it with `ansible-playbook regulartasks.yml`:
6472 | ```yml
6473 | ---
6474 | - name: Append the date
6475 | hosts: all
6476 | become: true
6477 | tasks:
6478 | - name: Append the date using cron
6479 | cron:
6480 | name: datejob
6481 | hour: "12"
6482 | user: root
6483 | job: "date >> /root/datefile"
6484 | ```
6485 |
6486 | 1. Task 15
6487 |
6488 | * Create a playbook `/home/ansible/ansible/issue.yml` and run it with `ansible-playbook issue.yml`:
6489 | ```yml
6490 | ---
6491 | - name: Update a file conditionally
6492 | hosts: all
6493 | tasks:
6494 | - name: Update file for dev
6495 | copy:
6496 | content: "Development"
6497 | dest: /etc/issue
6498 | when: "'dev' in group_names"
6499 |
6500 |     - name: Update file for test
6501 | copy:
6502 | content: "Test"
6503 | dest: /etc/issue
6504 | when: "'test' in group_names"
6505 |
6506 | - name: Update file for prod
6507 | copy:
6508 | content: "Production"
6509 | dest: /etc/issue
6510 | when: "'prod' in group_names"
6511 | ```
6512 |
6513 | 1. Task 16
6514 |
6515 | * Create an encrypted vault file and then change its password (rekey):
6516 | ```shell
6517 | ansible-vault create myvault.yml # enter pw as notsafepw
6518 | ansible-vault rekey myvault.yml # enter old and new pw as requested
6519 | ```
6520 |
6521 | 1. Task 17
6522 |
6523 | * Create a playbook `target.yml` and run it with `ansible-playbook target.yml`:
6524 | ```yaml
6525 | ---
6526 | - name: Change default target
6527 | hosts: all
6528 | tasks:
6529 | - name: Change default target
6530 | file:
6531 | src: /usr/lib/systemd/system/multi-user.target
6532 | dest: /etc/systemd/system/default.target
6533 | state: link
6534 | ```
6535 |
6536 | * This is not a good solution as it requires implementation level knowledge.
6537 |
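6538 | * An alternative sketch that avoids hard-coding the symlink path is to shell out to `systemctl`, using `changed_when` so the read-only check does not report a change:
6539 |   ```yaml
6540 |   ---
6541 |   - name: Change default target
6542 |     hosts: all
6543 |     become: true
6544 |     tasks:
6545 |       - name: Read the current default target
6546 |         command: systemctl get-default
6547 |         register: current_target
6548 |         changed_when: false
6549 | 
6550 |       - name: Set multi-user.target as the default
6551 |         command: systemctl set-default multi-user.target
6552 |         when: current_target.stdout != "multi-user.target"
6553 |   ```
6554 | 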
6538 | #### Practice Exam 2
6539 |
6540 | 1. Setup
6541 |
6542 | * Create a VirtualBox machine for the control node. In VirtualBox create a new machine with the VirtualBox additions and the RHEL 9.2 installer ISOs attached as IDE devices. Set networking mode to Bridged Adapter. Set the Pointing Device to USB Tablet and enable the USB 3.0 Controller. Set shared Clipboard and Drag'n'Drop to Bidirectional. Uncheck Audio output. Start the machine and install RHEL.
6543 |
6544 | * On the control node run the following setup as root:
6545 | ```shell
6546 | blkid # note the UUID of the RHEL ISO
6547 | mkdir /mnt/rheliso
6549 | echo 'UUID="<uuid-from-blkid>" /mnt/rheliso iso9660 loop 0 0' >> /etc/fstab # substitute the UUID from blkid
6550 | mount -a # confirm no error is returned
6551 | ```
6552 |
6553 | * On the control node setup the dnf repositories as root:
6554 | ```shell
6555 | vi /etc/yum.repos.d/redhat.repo # add the below content
6556 | # [BaseOS]
6557 | # name=BaseOS
6558 | # baseurl=file:///mnt/rheliso/BaseOS
6559 | # enabled=1
6560 | # gpgcheck=0
6561 | #
6562 | # [AppStream]
6563 | # name=AppStream
6564 | # enabled=1
6565 | # baseurl=file:///mnt/rheliso/AppStream
6566 | # gpgcheck=0
6567 | dnf repolist # confirm repos are returned
6568 | ```
6569 |
6570 | * On the control node run the following setup as root to install guest additions:
6571 | ```shell
6572 | dnf install kernel-headers kernel-devel
6573 | # run the guest additions installer
6574 | ```
6575 |
6576 | * On the control node run the following setup:
6577 | ```shell
6578 | useradd ansible
6579 | echo password | passwd --stdin ansible
6580 | echo "ansible ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
6581 | dnf install ansible-core rhel-system-roles -y
6582 | ```
6583 |
6584 | * On the control node run the following to make vim friendlier for playbook creation:
6585 | ```shell
6586 | echo "autocmd FileType yaml setlocal ai ts=2 sw=2 et cuc nu" >> ~/.vimrc
6587 | ```
6588 |
6589 | * Clone the control node machine five times for managed nodes 1-5. Attach a 2GB SATA drive to nodes 1-3, and a 1GB SATA drive to node 4. Note that cloning means the above steps are also done on each managed node, but if the servers existed already, we would have had to do this manually.
6590 |
6591 | * On each managed node get the IP address using `ifconfig`. On the control node, switch to the root user and add hostnames for the managed nodes:
6592 | ```shell
6593 | vi /etc/hosts
6594 | # add the following lines
6595 | # 192.168.1.116 node1.example.com
6596 | # 192.168.1.117 node2.example.com
6597 | # 192.168.1.109 node3.example.com
6598 | # 192.168.1.118 node4.example.com
6599 | # 192.168.1.111 node5.example.com
6600 | ```
6601 |
6602 | * On the control node run the following:
6603 | ```shell
6604 | ssh-keygen # select defaults
6605 | ssh-copy-id ansible@node1.example.com # repeat for the remaining managed nodes
6606 | ansible all -m ping # confirm ssh connectivity
6607 | ```
6608 |
6609 | 1. Task 1
6610 |
6611 | * Run the following commands on the control node:
6612 | ```shell
6613 | vi /home/ansible/ansible/hosts # add the below
6614 | # [dev]
6615 | # node1.example.com
6616 | #
6617 | # [test]
6618 | # node2.example.com
6619 | #
6620 | # [proxy]
6621 | # node3.example.com
6622 | #
6623 | # [prod]
6624 | # node4.example.com
6625 | # node5.example.com
6626 | #
6627 | # [webservers:children]
6628 | # prod
6629 |
6630 | vi /home/ansible/ansible/ansible.cfg
6631 | # [defaults]
6632 | # roles_path=/home/ansible/ansible/roles
6633 | # inventory=/home/ansible/ansible/hosts
6634 | # remote_user=ansible
6635 | # host_key_checking=false
6636 |
6637 | # [privilege_escalation]
6638 | # become=true
6639 | # become_user=root
6640 | # become_method=sudo
6641 | # become_ask_pass=false
6642 | ```
6643 |
6644 | 1. Task 2
6645 |
6646 | * Create and run the following `adhoc.sh` script on the control node:
6647 | ```shell
6648 | #!/bin/bash
6649 | # create the 'devops' user with necessary permissions
6650 | ansible all -u ansible -m user -a "name=devops" --ask-pass
6651 | ansible all -u ansible -m shell -a "echo password | passwd --stdin devops"
6652 | ansible all -u ansible -m shell -a "echo 'devops ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/devops"
6653 |
6654 | # allow SSH without password
6655 | ssh-copy-id -i /home/devops/.ssh/id_rsa.pub node1.example.com
6656 | ssh-copy-id -i /home/devops/.ssh/id_rsa.pub node2.example.com
6657 | ssh-copy-id -i /home/devops/.ssh/id_rsa.pub node3.example.com
6658 | ssh-copy-id -i /home/devops/.ssh/id_rsa.pub node4.example.com
6659 | ssh-copy-id -i /home/devops/.ssh/id_rsa.pub node5.example.com
6660 | ```
6661 |
6662 | * Note that this requires an existing user `ansible` to authenticate to the managed nodes. The `--ask-pass` option is needed initially because the devops user does not yet exist on the remote systems and no SSH keys have been set up for the ansible user.
6663 |
6664 | 1. Task 3
6665 |
6666 | * Create and run the following playbook `/home/ansible/ansible/motd.yml` script on the control node:
6667 | ```yaml
6668 | ---
6669 | - name: Write to /etc/motd
6670 | hosts: all
6671 | tasks:
6672 | - name: Write to /etc/motd for dev
6673 | copy:
6674 | content: 'Welcome to Dev Server {{ ansible_fqdn }}'
6675 | dest: /etc/motd
6676 | when: "'dev' in group_names"
6677 |
6678 | - name: Write to /etc/motd for webservers
6679 | copy:
6680 | content: 'Welcome to Apache Server {{ ansible_fqdn }}'
6681 | dest: /etc/motd
6682 | when: "'webservers' in group_names"
6683 |
6684 | - name: Write to /etc/motd for test
6685 | copy:
6686 | content: 'Welcome to MySQL Server {{ ansible_fqdn }}'
6687 | dest: /etc/motd
6688 | when: "'test' in group_names"
6689 | ```
6690 |
6691 | 1. Task 4
6692 |
6693 | * Create and run the following playbook `/home/ansible/ansible/sshd.yml` script on the control node:
6694 | ```yaml
6695 | ---
6696 | - name: Configure SSHD daemon
6697 | hosts: all
6698 | become: true
6699 | tasks:
6700 | - name: Set Banner
6701 | lineinfile:
6702 | path: /etc/ssh/sshd_config.d/50-redhat.conf
6703 | regexp: '^Banner '
6704 | line: Banner /etc/issue
6705 | state: present
6706 |
6707 | - name: Set PermitRootLogin
6708 | lineinfile:
6709 | path: /etc/ssh/sshd_config.d/50-redhat.conf
6710 | regexp: '^PermitRootLogin '
6711 | line: PermitRootLogin no
6712 | state: present
6713 |
6714 | - name: Set MaxAuthTries
6715 | lineinfile:
6716 | path: /etc/ssh/sshd_config.d/50-redhat.conf
6717 | regexp: '^MaxAuthTries '
6718 | line: MaxAuthTries 6
6719 | state: present
6720 |
6721 | - name: Restart SSHD
6722 | service:
6723 | name: sshd
6724 | state: restarted
6725 | ```
6726 |
6727 | 1. Task 5
6728 |
6729 | * Run the following commands on the control node:
6730 | ```shell
6731 | ansible-vault create secret.yml # set password as admin123
6732 | # set file contents as below
6733 | # user_pass: user
6734 | # database_pass: database
6735 | echo admin123 > secret.txt
6736 | ansible-vault view secret.yml --vault-password-file secret.txt
6737 | ```
6738 |
6739 | 1. Task 6
6740 |
6741 | * Create `/home/ansible/ansible/userlist.yml`:
6742 | ```yaml
6743 | ---
6744 | users:
6745 | - username: alice
6746 | job: developer
6747 | - username: vincent
6748 | job: manager
6749 | - username: sandy
6750 | job: tester
6751 | - username: patrick
6752 | job: developer
6753 | ```
6754 |
6755 | * Create the playbook `/home/ansible/ansible/users.yml` and run it with `ansible-playbook users.yml --vault-password-file secret.txt`:
6756 | ```yaml
6757 | ---
6758 | - name: Create users
6759 | hosts: all
6760 | vars_files:
6761 | - userlist.yml
6762 | - secret.yml
6763 | tasks:
6764 | - name: Create developer users from the userlist
6765 | user:
6766 | name: "{{ item.username }}"
6767 | group: wheel
6768 | password: "{{ user_pass | password_hash('sha512') }}"
6769 | with_items: "{{ users }}"
6770 | when: "('dev' in group_names) and (item.job == 'developer')"
6771 |
6772 | - name: Create manager users from the userlist
6773 | user:
6774 | name: "{{ item.username }}"
6775 | group: wheel
6776 | password: "{{ database_pass | password_hash('sha512') }}"
6777 | with_items: "{{ users }}"
6778 | when: "('test' in group_names) and (item.job == 'manager')"
6779 | ```
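
* The `password_hash('sha512')` filter produces a crypt(3)-style `$6$` hash. An equivalent hash can be generated by hand, e.g. with `openssl` (the salt here is just an example):
```shell
# Produce a SHA-512 crypt hash like password_hash('sha512') does
openssl passwd -6 -salt examplesalt password
```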
6780 |
6781 | 1. Task 7
6782 |
6783 | * Create and run the script `/home/ansible/ansible/repository.sh`:
6784 | ```shell
6785 | #!/bin/bash
6786 | ansible test -m yum_repository -a "name=mysql56-community description='MySQL 5.6 YUM Repo' baseurl=http://repo.example.com/rpms enabled=0"
6787 | ```
6788 |
6789 | 1. Task 8
6790 |
6791 | * Update networking to NAT on the control node so that it can reach the Internet, then install the additional collections:
6792 | ```shell
6793 | ansible-galaxy collection install ansible.posix
6794 | ansible-galaxy collection install community.general
6795 | ```
6796 |
6797 | * Revert the networking to Bridged Adapter. Add an additional 1GB and 2GB drive to nodes 4 and 5 respectively. Create and run the playbook `/home/ansible/ansible/logvol.yml`:
6798 | ```yaml
6799 | ---
6800 | - name: Setup volumes
6801 | hosts: all
6802 | become: true
6803 | tasks:
6804 | - name: Create partition
6805 | parted:
6806 | device: /dev/sdb
6807 | number: 1
6808 | flags: [ lvm ]
6809 | state: present
6810 | when: "ansible_facts['devices']['sdb'] is defined"
6811 |
6812 | - name: Create volume group
6813 | lvg:
6814 | vg: vg0
6815 | pvs: /dev/sdb1
6816 | when: "ansible_facts['devices']['sdb'] is defined"
6817 |
6818 | - name: Create logical volume
6819 | lvol:
6820 | vg: vg0
6821 | lv: lv0
6822 | size: 1500m
6823 | state: present
6824 | when: "ansible_facts['devices']['sdb'] is defined and ansible_lvm['vgs']['vg0'] is defined and ansible_lvm['lvs']['lv0'] is not defined and (ansible_lvm['vgs']['vg0']['free_g'] | float) > 1.5"
6825 |
6826 | - name: Print error message
6827 | debug:
6828 | msg: "Not enough space in volume group"
6829 | when: "ansible_facts['devices']['sdb'] is defined and ansible_lvm['vgs']['vg0'] is defined and (ansible_lvm['vgs']['vg0']['free_g']) | float <= 1.5"
6830 |
6831 |       - name: Create smaller logical volume when space is limited
6832 |         lvol:
6833 |           vg: vg0
6834 |           lv: lv0
6835 |           size: 800m
6836 |           state: present
6837 |         when: "ansible_facts['devices']['sdb'] is defined and ansible_lvm['vgs']['vg0'] is defined and ansible_lvm['lvs']['lv0'] is not defined and (ansible_lvm['vgs']['vg0']['free_g'] | float) <= 1.5 and (ansible_lvm['vgs']['vg0']['free_g'] | float) > 0.8"
6838 |
6839 | - name: Create the file system
6840 | filesystem:
6841 | dev: /dev/vg0/lv0
6842 | state: present
6843 | fstype: xfs
6844 | when: "ansible_lvm['lvs']['lv0'] is defined"
6845 |
6846 | - name: Mount file system
6847 | mount:
6848 | src: /dev/vg0/lv0
6849 | path: /mnt/data
6850 | state: mounted
6851 | fstype: xfs
6852 | when: "ansible_lvm['lvs']['lv0'] is defined"
6853 | ```
6854 |
6855 | * Note that within `ansible_facts` the `ansible_` prefix is dropped, so the LVM facts are available as `ansible_facts['lvm']` or via the top-level `ansible_lvm` variable, not `ansible_facts['ansible_lvm']`.
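
* A quick way to inspect how these facts are structured is a debug task, e.g.:
```yaml
- name: Show LVM facts
  debug:
    var: ansible_lvm
```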
6856 |
6857 | 1. Task 9
6858 |
6859 | * Create and run the playbook `/home/ansible/ansible/regular_tasks.yml`:
6860 | ```yaml
6861 | ---
6862 | - name: Create cron job on dev
6863 | hosts: dev
6864 | become: true
6865 | tasks:
6866 | - name: Create cron job on dev
6867 | cron:
6868 | name: hourly job
6869 | minute: "0"
6870 | job: "date >> /root/time"
6871 | ```
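
* On the managed node the module renders this as a named entry in root's crontab, roughly:
```
#Ansible: hourly job
0 * * * * date >> /root/time
```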
6872 |
6873 | 1. Task 10
6874 |
6875 | * Create the role `/home/ansible/ansible/roles/sample-apache/tasks/main.yml`:
6876 | ```yaml
6877 | - name: Install httpd
6878 | yum:
6879 | name: httpd
6880 | state: present
6881 |
6882 | - name: Enable http in firewalld
6883 | firewalld:
6884 | service: http
6885 | state: enabled
6886 | permanent: true
6887 | immediate: true
6888 |
6889 | - name: Start httpd
6890 | service:
6891 | name: httpd
6892 | state: started
6893 | enabled: yes
6894 |
6895 | - name: Copy template
6896 | template:
6897 | src: /home/ansible/ansible/index.html.j2
6898 | dest: /var/www/html/index.html
6899 | notify:
6900 | - Restart httpd
6901 | ```
6902 |
6903 | * Create the template `/home/ansible/ansible/index.html.j2`:
6904 | ```jinja2
6905 | Welcome to the server {{ ansible_hostname }}
6906 | ```
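
* The `notify: Restart httpd` in the tasks file needs a matching handler. A minimal `/home/ansible/ansible/roles/sample-apache/handlers/main.yml` would be:
```yaml
---
- name: Restart httpd
  service:
    name: httpd
    state: restarted
```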
6907 |
6908 | * Create and run the playbook:
6909 | ```yaml
6910 | ---
6911 | - name: Setup Apache
6912 | hosts: all
6913 | become: true
6914 | roles:
6915 | - sample-apache
6916 | ```
6917 |
6918 | 1. Task 11
6919 |
6920 | * Create the file `/home/ansible/ansible/requirements.yml`:
6921 | ```yaml
6922 | - name: haproxy-role
6923 | src: geerlingguy.haproxy
6924 | ```
6925 |
6926 | * Install the role using `ansible-galaxy install -r requirements.yml -p roles`.
6927 |
6928 | * Refer to the documentation in `/home/ansible/ansible/roles/haproxy-role`. Create and run the playbook `/home/ansible/ansible/haproxy.yml`:
6929 | ```yaml
6930 | ---
6931 | - name: Configure load balancing
6932 | hosts: proxy
6933 | become: true
6934 | vars:
6935 | haproxy_frontend_port: 80
6936 | haproxy_backend_servers:
6937 | - name: node4.example.com
6938 | address: 192.168.1.111:80
6939 | - name: node5.example.com
6940 | address: 192.168.1.112:80
6941 | pre_tasks:
6942 | - name: Install pre-requisites
6943 | yum:
6944 | name:
6945 | - haproxy
6946 | state: present
6947 | roles:
6948 | - haproxy-role
6949 | ```
6950 |
6951 | * In this instance the hostnames for node4 and node5 were added to the `/etc/hosts` file on node3. The httpd service was already running on node3, which initially prevented the playbook from completing. This was identified using `sudo ss -6 -tlnp | grep 80` and resolved using `sudo systemctl stop httpd`.
6952 |
6953 | 1. Task 12
6954 |
6955 | * Create and run the playbook `/home/ansible/ansible/timesync.yml`:
6956 | ```yaml
6957 | ---
6958 | - name: Install timesync
6959 | hosts: all
6960 | vars:
6961 | timesync_ntp_servers:
6962 | - hostname: 0.uk.pool.ntp.org
6963 | tasks:
6964 | - name: Set timezone to UTC
6965 | timezone:
6966 | name: UTC
6967 | roles:
6968 | - /usr/share/ansible/roles/rhel-system-roles.timesync
6969 | ```
6970 |
6971 | 1. Task 13
6972 |
6973 | * Create a file `/home/ansible/ansible/hosts.j2`:
6974 | ```jinja2
6975 | {%for host in groups['all']%}
6976 | {{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
6977 | {%endfor%}
6978 | ```
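
* For each inventory host the template emits one `/etc/hosts`-style line; with the addresses used earlier for node4 and node5 the output would include lines such as:
```
192.168.1.111 node4.example.com node4
192.168.1.112 node5.example.com node5
```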
6979 |
6980 | * Create and run the playbook `/home/ansible/ansible/hosts.yml`:
6981 | ```yaml
6982 | ---
6983 | - name: Create file based on template
6984 | hosts: all
6985 | become: true
6986 | tasks:
6987 | - name: Create file based on template
6988 | template:
6989 | src: /home/ansible/ansible/hosts.j2
6990 | dest: /root/myhosts
6991 | when: "'dev' in group_names"
6992 | ```
6993 |
6994 | 1. Task 14
6995 |
6996 | * Create a file `/home/ansible/ansible/requirements.yml`:
6997 | ```yaml
6998 | ---
6999 | - name: sample-php_roles
7000 | src: geerlingguy.php
7001 | ```
7002 |
7003 | * Run `ansible-galaxy install -r requirements.yml -p roles`.
7004 |
7005 | 1. Task 15
7006 |
7007 | * Create a file `/home/ansible/ansible/specs.empty`:
7008 | ```
7009 | HOST=
7010 | MEMORY=
7011 | BIOS=
7012 | SDA_DISK_SIZE=
7013 | SDB_DISK_SIZE=
7014 | ```
7015 |
7016 | * Create and run the playbook `/home/ansible/ansible/specs.yml`:
7017 | ```yaml
7018 | ---
7019 | - name: Populate specs file
7020 | hosts: all
7021 | become: true
7022 | tasks:
7023 | - name: Copy file to hosts
7024 | copy:
7025 | src: /home/ansible/ansible/specs.empty
7026 | dest: /root/specs.txt
7027 | - name: Update hosts
7028 | lineinfile:
7029 | path: /root/specs.txt
7030 | regexp: '^HOST='
7031 | line: 'HOST={{ ansible_hostname }}'
7032 | - name: Update memory
7033 | lineinfile:
7034 | path: /root/specs.txt
7035 | regexp: '^MEMORY='
7036 | line: 'MEMORY={{ ansible_memtotal_mb }}'
7037 | - name: Update BIOS
7038 | lineinfile:
7039 | path: /root/specs.txt
7040 | regexp: '^BIOS='
7041 | line: 'BIOS={{ ansible_bios_version }}'
7042 | - name: Update SDA disk size
7043 | lineinfile:
7044 | path: /root/specs.txt
7045 | regexp: '^SDA_DISK_SIZE='
7046 | line: "SDA_DISK_SIZE={{ ansible_devices['sda']['size'] }}"
7047 | when: "ansible_devices['sda'] is defined"
7048 | - name: Update SDB disk size
7049 | lineinfile:
7050 | path: /root/specs.txt
7051 | regexp: '^SDB_DISK_SIZE='
7052 | line: "SDB_DISK_SIZE={{ ansible_devices['sdb']['size'] }}"
7053 | when: "ansible_devices['sdb'] is defined"
7054 | ```
7055 |
7056 | 1. Task 16
7057 |
7058 | * Create and run the playbook `/home/ansible/ansible/packages.yml`:
7059 | ```yaml
7060 | ---
7061 | - name: Install packages
7062 | hosts: all
7063 | become: true
7064 | tasks:
7065 | - name: Install packages using yum for proxy
7066 | yum:
7067 | name:
7068 | - httpd
7069 | - mod_ssl
7070 | state: present
7071 | when: "'proxy' in group_names"
7072 | - name: Install packages using yum for dev
7073 | yum:
7074 | name:
7075 | - '@Development tools'
7076 | state: present
7077 | when: "'dev' in group_names"
7078 | ```
7079 |
7080 | 1. Task 17
7081 |
7082 | * Create and run the playbook `/home/ansible/ansible/mysecret.yml`:
7083 | ```shell
7084 | ansible-vault create mysecret.yml # enter notasafepass
7085 | # add a line dev_pass: devops
7086 | ansible-vault rekey mysecret.yml # enter new password devops123
7087 | ansible-vault edit mysecret.yml
7088 | # add a line dev_pass: devops123
7089 | ```
7090 |
7091 | #### Practise Exam 3
7092 |
7093 | 1. Install Essential tools and ensure access to remote hosts
7094 |
7095 | * Run the following:
7096 | ```shell
7097 | # on the control node
7098 | sudo -su root
7099 | ansible localhost -m user -a "name=ansible"
7100 | echo password | passwd --stdin ansible
7101 | echo "ansible ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
7102 | mkdir ansible
7103 | cd ansible
7104 | ansible-config init --disabled > ansible_reference.cfg
7105 | sudo cp /etc/ansible/hosts hosts.ini
7106 | sudo chown ansible:ansible hosts.ini
7107 | ansible-galaxy collection install community.general community.crypto ansible.posix
7108 | python3 -m pip install ansible-navigator --user
7109 | ```
7110 |
7111 | 1. Install Essential tools and ensure access to remote hosts
7112 |
7113 | * Run the following:
7114 | ```shell
7115 | # on the control node
7116 | sudo -su root
7117 | mkdir /mnt/rheliso
7118 | blkid # note UUID
7119 | echo "UUID='' /mnt/rheliso iso9660 loop 0 0" >> /etc/fstab
7120 | mount -a
7121 | vi /etc/yum.repos.d/redhat.repo
7122 | # add the BaseOS and AppStream repos
7123 | dnf install kernel-headers kernel-devel -y
7124 | # install guest additions
7125 | useradd ansible
7126 | echo password | passwd --stdin ansible
7127 | echo "ansible ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
7128 | sudo -su ansible
7129 | sudo dnf install ansible-core rhel-system-roles
7130 | sudo dnf install pip -y
7131 | python3 -m pip install ansible-navigator --user
7132 | echo "autocmd FileType yaml setlocal ai sw=2 ts=2 et cu" >> ~/.vimrc
7133 | mkdir ~/ansible
7134 | cd ~/ansible
7135 | ansible-config init --disabled > ansible_reference.cfg
7136 | sudo cp /etc/ansible/hosts hosts.ini
7137 | sudo chown ansible:ansible hosts.ini
7138 | ansible-galaxy collection install community.general community.crypto ansible.posix
7139 | ```
7140 |
7141 | * Add the managed nodes to `/etc/hosts`.
7142 |
7143 | * Update `ansible.cfg` in the working directory:
7144 | ```ini
7145 | [defaults]
7146 | inventory=/home/ansible/ansible/hosts.ini
7147 | host_key_checking=false
7148 | ask_pass=true
7149 | remote_user=ansible
7150 |
7151 | [privilege_escalation]
7152 | become=false
7153 | become_method=sudo
7154 | become_ask_pass=true
7155 | become_user=root
7156 | ```
7157 |
7158 | * Log in to each managed node and create the `ansible` user. Create and run the `bootstrap.yml` playbook on the control node:
7159 | ```yml
7160 | ---
7161 | - name: Bootstrap automation user
7162 | hosts: all
7163 | become: true
7164 | tasks:
7165 | - name: Check automation user
7166 | user:
7167 | name: ansible
7168 | password: "{{ 'password' | password_hash('sha512') }}"
7169 | home: /home/ansible
7170 | state: present
7171 |
7172 |     - name: Check key folder
7173 |       file:
7174 |         state: directory
7175 |         mode: '0700'
7176 |         owner: ansible
7177 |         group: ansible
7178 |         name: /home/ansible/.ssh
7177 |
7178 | - name: Check sudo access
7179 | copy:
7180 | content: "ansible ALL=(ALL) NOPASSWD:ALL"
7181 | dest: /etc/sudoers.d/ansible
7182 | validate: /usr/sbin/visudo -csf %s
7183 |
7184 | - name: Copy public key into authorized_keys
7185 | ansible.posix.authorized_key:
7186 | user: ansible
7187 | state: present
7188 | key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"
7189 | ```
7190 |
7191 | 1. Create an Ansible Inventory
7192 |
7193 | * Add the following to `/home/ansible/rhce1/inventory`:
7194 | ```ini
7195 | [web]
7196 | web01
7197 | web02
7198 |
7199 | [development]
7200 | dev01
7201 |
7202 | [dc1:children]
7203 | web
7204 | development
7205 | ```
7206 |
7207 | 1. Configure Ansible Settings
7208 |
7209 | * Update `ansible.cfg` in the working directory:
7210 | ```ini
7211 | [defaults]
7212 | inventory=/home/ansible/rhce1/inventory
7213 | host_key_checking=False
7214 | ask_pass=False
7215 | remote_user=ansible
7216 | forks=3
7217 | timeout=120
7218 |
7219 | [privilege_escalation]
7220 | become=True
7221 | become_method=sudo
7222 | become_ask_pass=False
7223 | become_user=root
7224 | ```
7225 |
7226 | 1. User Management and Group Assignments
7227 |
7228 | * Run the following:
7229 | ```shell
7230 | mkdir vars
7231 | cd vars
7232 | ansible-vault create vault.yml # enter 'rocky'
7233 | # enter user_password: password!
7234 | cd ..
7235 | echo rocky > users_vault.txt
7236 | ```
7237 |
7238 | * Create the playbook `users.yml` and run using `ansible-playbook users.yml --vault-password-file users_vault.txt`:
7239 | ```yml
7240 | ---
7241 | - name: Create users in managed nodes
7242 | hosts: dc1
7243 | vars_files:
7244 | - vars/vault.yml
7245 | tasks:
7246 | - name: Create the admins groups
7247 | group:
7248 | name: admins
7249 | state: present
7250 |
7251 | - name: Create the users group
7252 | group:
7253 | name: users
7254 | state: present
7255 |
7256 | - name: Create tony
7257 | user:
7258 | name: tony
7259 | password: "{{ user_password | password_hash('sha512') }}"
7260 | groups: admins
7261 |
7262 | - name: Create carmela
7263 | user:
7264 | name: carmela
7265 | password: "{{ user_password | password_hash('sha512') }}"
7266 | groups: admins
7267 |
7268 | - name: Create paulie
7269 | user:
7270 | name: paulie
7271 | password: "{{ user_password | password_hash('sha512') }}"
7272 | groups: users
7273 |
7274 | - name: Create chris
7275 | user:
7276 | name: chris
7277 | password: "{{ user_password | password_hash('sha512') }}"
7278 | groups: users
7279 |
7280 | - name: Give tony sudo access
7281 | copy:
7282 | content: "tony ALL=(ALL) NOPASSWD:ALL"
7283 | dest: /etc/sudoers.d/tony
7284 | ```
7285 |
7286 | 1. Setup a Cron Job for Logging Date
7287 |
7288 | * Create and run the playbook `cron.yml`
7289 | ```yml
7290 | ---
7291 | - name: Setup cron jobs on managed nodes
7292 | hosts: dev01
7293 | tasks:
7294 | - name: Setup cron job
7295 | cron:
7296 | name: "Write date to file"
7297 | minute: "*/2"
7298 | job: "date '+\\%Y-\\%m-\\%d \\%H:\\%M:\\%S' >> /tmp/logme.txt"
7299 | state: present
7300 | ```
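
* The doubled backslashes are needed because `%` is special in a crontab entry (everything after an unescaped `%` becomes the command's stdin); YAML double quotes reduce `\\%` to `\%`, which is the escape cron requires. The rendered entry is roughly:
```
#Ansible: Write date to file
*/2 * * * * date '+\%Y-\%m-\%d \%H:\%M:\%S' >> /tmp/logme.txt
```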
7301 |
7302 | 1. Extract Host Information from Inventory using Ansible Navigator
7303 |
7304 | * Create and run the script `fetch_host_info.sh`
7305 | ```shell
7306 | #!/bin/bash
7307 | ansible-navigator inventory -i inventory -m stdout --host web01
7308 | ```
7309 |
7310 | 1. Configure Time Synchronization Using RHEL System Roles
7311 |
7312 | * Run the following:
7313 | ```shell
7314 | sudo dnf install rhel-system-roles -y
7315 | ```
7316 |
7317 | * Create and run the playbook `timesync.yml`
7318 | ```yml
7319 | ---
7320 | - name: Configure timesync
7321 | hosts: all
7322 | vars:
7323 | timesync_ntp_servers:
7324 | - hostname: 2.rhel.pool.ntp.org
7325 | iburst: True
7326 | pool: True
7327 | roles:
7328 | - /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/timesync
7329 | ```
7330 |
7331 | 1. Software Installation Using Variables
7332 |
7333 | * Create the file `software_vars.yml`
7334 | ```yml
7335 | ---
7336 | software_packages:
7337 | - vim
7338 | - nmap
7339 | ```
7340 |
7341 | * Create and run the playbook `software_install.yml`
7342 | ```yml
7343 | ---
7344 | - name: Install software packages
7345 | hosts: all
7346 | vars_files:
7347 | - software_vars.yml
7348 | vars:
7349 | software_group: "@Virtualization Host"
7350 | tasks:
7351 | - name: Install packages for web
7352 | yum:
7353 | name: "{{ item }}"
7354 | state: present
7355 | with_items:
7356 | - "{{ software_packages }}"
7357 | when: "'web' in group_names"
7358 |
7359 | - name: Install packages for dev01
7360 | yum:
7361 | name: "{{ software_group }}"
7362 | state: present
7363 | when: "'development' in group_names"
7364 | ```
7365 |
7366 | 1. Debugging an API Key from an Ansible Vault
7367 |
7368 | * Run the following:
7369 | ```shell
7370 | ansible-vault create api_key.yml # trustme!123
7371 | ```
7372 |
7373 | * Add the following content to `api_key.yml`:
7374 | ```yml
7375 | my_api_key: "f3eb0782983d3a417de12b96eb551a90"
7376 | ```
7377 |
7378 | * Run the following:
7379 | ```shell
7381 | echo 'trustme!123' > vault-key.txt
7382 | ```
7383 |
7384 | * Create `playbook-secret.yml` and run it using `ansible-playbook playbook-secret.yml --vault-password-file vault-key.txt`:
7385 | ```yml
7386 | ---
7387 | - name: Fetch API keys
7388 | hosts: localhost
7389 | vars_files:
7390 | - api_key.yml
7391 | tasks:
7392 | - name: Print API key
7393 | debug:
7394 | var: my_api_key
7395 | ```
7396 |
7397 | 1. Configure SELinux Settings for dev01
7398 |
7399 | * Create and run `selinux.yml`:
7400 | ```yml
7401 | ---
7402 | - name: Configure SELinux
7403 | hosts: dev01
7404 | vars:
7405 | selinux_policy: targeted
7406 | selinux_state: enforcing
7407 | selinux_ports:
7408 | - {ports: '82', proto: 'tcp', setype: 'http_port_t', state: 'present', local: true}
7409 | roles:
7410 | - /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/selinux
7411 | tasks:
7412 | - name: Install http package
7413 | yum:
7414 | name: httpd
7415 | state: present
7416 |
7417 | - name: Enable http service
7418 | service:
7419 | name: httpd
7420 | state: started
7421 | enabled: yes
7422 |
7423 | - name: Enable firewall port
7424 | ansible.posix.firewalld:
7425 | port: 82/tcp
7426 | state: enabled
7427 | permanent: yes
7428 | immediate: yes
7429 |
7430 | - name: Check Apache port
7431 | lineinfile:
7432 | path: /etc/httpd/conf/httpd.conf
7433 | regexp: '^Listen '
7434 | insertafter: '^#Listen '
7435 | line: Listen 82
7436 | ```
7437 |
7438 | 1. Troubleshoot and Fix the Playbook
7439 |
7440 | * Create and run `fixme.yml`:
7441 | ```yml
7442 | ---
7443 | - name: Troublesome Playbook
7444 | hosts: all
7445 | become: yes
7446 | vars:
7447 | install_package: "vsftpd"
7448 | service_name: "vsftpd"
7449 |
7450 | tasks:
7451 | - name: Install a package
7452 | yum:
7453 | name: "{{ install_package }}"
7454 | state: "installed"
7455 |
7456 | - name: Start and enable a service
7457 | service:
7458 | name: "{{ service_name }}"
7459 | enabled: yes
7460 | state: started
7461 | ```
7462 |
7463 | 1. Ad-Hoc Command Execution via Shell Script
7464 |
7465 | * Create and run `adhocfile.sh`:
7466 | ```shell
7467 | #!/bin/bash
7468 | ansible all -m file -a "path=/tmp/sample.txt state=touch owner=carmela mode=0644"
7469 | ansible all -m lineinfile -a "path=/tmp/sample.txt line='Hello ansible world'"
7470 | ```
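
* After the script runs, each node has `/tmp/sample.txt` owned by `carmela` with mode `0644`, containing:
```
Hello ansible world
```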
7471 |
7472 | 1. Create Swap Partition Based on Memory
7473 |
7474 | * Create and run `swap.yml`:
7475 | ```yml
7476 | ---
7477 | - name: Create Swap Partition Based on Memory
7478 | hosts: all
7479 | tasks:
7480 |     - name: Create partition
7481 |       community.general.parted:
7482 |         device: /dev/sdb
7483 |         number: 1
7484 |         state: present
7485 |       when: "ansible_devices['sdb'] is defined and ansible_memtotal_mb < 8000"
7486 |
7487 |     - name: Refresh facts so later conditions see the new partition
7488 |       setup:
7486 |
7487 | - name: Display message
7488 | debug:
7489 | msg: "Available memory is {{ ansible_memfree_mb }}"
7490 | when: "ansible_memtotal_mb < 8000"
7491 |
7492 | - name: Create file system
7493 | community.general.filesystem:
7494 | fstype: swap
7495 | dev: /dev/sdb1
7496 | state: present
7497 | when: "ansible_devices['sdb']['partitions']['sdb1'] is defined"
7498 |
7499 | - name: Activate the swap space
7500 | command: "swapon /dev/sdb1"
7501 | when: "ansible_devices['sdb']['partitions']['sdb1'] is defined"
7502 |
7503 |     - name: Add the swap space to /etc/fstab
7504 |       ansible.posix.mount:
7505 |         path: none
7506 |         src: /dev/sdb1
7507 |         state: present
7508 |         fstype: swap
7509 |         opts: sw
7510 |       when: "ansible_devices['sdb']['partitions']['sdb1'] is defined"
7510 | ```
7511 |
7512 | 1. Check Webpage Status and Debug on Failure
7513 |
7514 | * Create and run `check_webpage.yml`:
7515 | ```yml
7516 | ---
7517 | - name: Check Webpage Status and Debug on Failure
7518 | hosts: localhost
7519 | become: false
7520 | gather_facts: no
7521 | tasks:
7522 | - name: Block to attempt fetching the webpage status
7523 | block:
7524 | - name: Attempt to fetch the status of webserver
7525 | ansible.builtin.uri:
7526 | url: http://169.254.3.5
7527 | method: GET
7528 | status_code: 200
7529 | register: webpage_result
7530 |
7531 | rescue:
7532 | - name: Display debug message if webpage check fails
7533 | ansible.builtin.debug:
7534 | msg: "{{ webpage_result }}"
7535 | ```
7536 |
7537 | 1. Configure SSH Security Settings with Ansible Role
7538 |
7539 | * Create `/home/ansible/rhce1/roles/secure_ssh/tasks/main.yml`:
7540 | ```yml
7541 | ---
7542 | - name: Configure X11Forwarding
7543 |   lineinfile:
7544 |     path: /etc/ssh/sshd_config
7545 |     regexp: '^X11Forwarding '
7546 |     line: X11Forwarding no
7547 |   notify: Restart SSH
7548 |
7549 | - name: Configure PermitRootLogin
7550 |   lineinfile:
7551 |     path: /etc/ssh/sshd_config
7552 |     regexp: '^PermitRootLogin '
7553 |     line: PermitRootLogin no
7554 |   notify: Restart SSH
7555 |
7556 | - name: Configure MaxAuthTries
7557 |   lineinfile:
7558 |     path: /etc/ssh/sshd_config
7559 |     regexp: '^MaxAuthTries '
7560 |     line: MaxAuthTries 3
7561 |   notify: Restart SSH
7562 |
7563 | - name: Configure AllowTcpForwarding
7564 |   lineinfile:
7565 |     path: /etc/ssh/sshd_config
7566 |     regexp: '^AllowTcpForwarding '
7567 |     line: AllowTcpForwarding no
7568 |   notify: Restart SSH
7565 | ```
7566 |
7567 | * Create `/home/ansible/rhce1/roles/secure_ssh/handlers/main.yml`:
7568 | ```yml
7569 | ---
7570 | - name: Restart SSH
7571 | service:
7572 | name: sshd
7573 | state: restarted
7574 | ```
7575 |
7576 | * Create and run `secure_ssh_playbook.yml`:
7577 | ```yml
7578 | ---
7579 | - name: Secure SSH
7580 | hosts: all
7581 | roles:
7582 | - secure_ssh
7583 | ```
7584 |
7585 | 1. Task: Configure Web Server with System Information
7586 |
7587 | * Create `webserver.j2`:
7588 | ```jinja2
7589 | Servername: {{ ansible_hostname }}
7590 | IP Address: {{ ansible_default_ipv4['address'] }}
7591 | Free Memory: {{ ansible_memfree_mb }}MB
7592 | OS: {{ ansible_os_family }}
7593 | Kernel Version: {{ ansible_kernel }}
7594 | ```
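
* Rendered on a host, the template produces plain text along these lines (values are illustrative):
```
Servername: web01
IP Address: 192.168.1.110
Free Memory: 1024MB
OS: RedHat
Kernel Version: 5.14.0-70.13.1.el9_0.x86_64
```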
7595 |
7596 | * Create and run `webserver.yml`:
7597 | ```yml
7598 | ---
7599 | - name: Configure Web Server with System Information
7600 | hosts: web01
7601 | tasks:
7602 | - name: Install httpd
7603 | yum:
7604 | name: httpd
7605 | state: present
7606 |
7607 | - name: Enable httpd
7608 | service:
7609 | name: httpd
7610 | state: started
7611 | enabled: yes
7612 |
7613 | - name: Enable firewall
7614 | ansible.posix.firewalld:
7615 | service: http
7616 | state: enabled
7617 | immediate: true
7618 | permanent: true
7619 | tags: setfirewall
7620 |
7621 | - name: Populate template
7622 | template:
7623 | src: webserver.j2
7624 | dest: /var/www/html/index.html
7625 | ```
--------------------------------------------------------------------------------