├── 02navigation_file ├── 01linux-commands-tutorial.md ├── 02In-ClassLabfile-structure-manipulation-lab.md ├── 03complete-linux-customization-tutorial.md ├── 04linux-bash-update-lab.md ├── 05HomeLablinux-project-structure-lab.md ├── assignments.txt ├── bashrcUpdatePractice.txt ├── solutions.txt └── theory.txt ├── 03textpiperedir ├── 01linux-text-commands.md ├── 02linux-pipes-redirection.md ├── 03pipe-explanation.md ├── 04redirection-explanation.md ├── 05python-script-usage-examples.md ├── 06pipes-redirection-explanation-indepth.md ├── explanationfiledescriptor.txt ├── file_descriptor.py ├── practice │ ├── application.log │ ├── assignment.txt │ └── log_generator.py ├── setup_generator.py └── theory.txt ├── 04UserGroupPermission ├── 01linux-user-management-guide.md ├── 02linux-file-permissions-guide.md ├── Lab01linux-user-group.md ├── Lab02linux-file-permissions.md ├── Lab03linux-umask.md ├── Lab04linux-add-sudoer-lab.md ├── UmaskConcept+lab3.txt ├── txt01User_GroupManagment_theory.txt ├── txt02permission_access.txt ├── txtlab1UserGroupCreation.txt └── txtlab2filepermissions.txt ├── 05EverythingIsAFile ├── 00linux-fundamental_conceptsFile+Inode+Dentry.md ├── 01linux-everything-is-file-concept.md ├── 02linux-commands-vs-python-code.md ├── 03linux-character-devices-pipes-examples.md ├── 04linux-terminal-character-device.md ├── 05file-descriptors-vs-inodes.md ├── file_inode_dentry.py ├── linux_file_types.py ├── sockets │ ├── 01simple_socket_server.txt │ ├── client.py │ ├── server.py │ └── sockets-programming-guide.md ├── symbolichardlinks │ ├── hard │ │ ├── hard-links-safe-update-strategy.md │ │ └── theory_instructions.txt │ ├── symbolic │ │ ├── config_dev.json │ │ ├── config_prod.json │ │ ├── read_config.py │ │ ├── symlink-soft-tutorial.md │ │ └── theory_instructions.txt │ └── theory.txt ├── text00FundamentalsFile+Inode+Dentry.txt ├── text01TypesOfFiles.txt ├── textfiledescriptorVSinode.txt └── textoutdatedlab1FindCompareFiles.txt ├── 06CompilingLinking ├── 01BasicIdea.txt ├── 01LabIntrotoClang.txt ├── 02LabMultipleSourceFiles.txt ├── 03LabCmake.txt └── todolist │ ├── CMakeLists.txt │ ├── Task.cpp │ ├── Task.h │ ├── Task.o │ ├── TodoList.cpp │ ├── TodoList.h │ ├── TodoList.o │ ├── main.cpp │ └── main.o ├── 07SoftwarePackageManagment ├── 01 TarVsZIPLab ├── 01LabIntroToTools.txt ├── 01WgetVsCurl ├── 02_theory.txt ├── 03LabInstallingJSONParser.txt ├── 03NewVersionGDB.txt ├── 04LabBuildingOwnPackageDebian.txt ├── my_debian_package │ ├── CMakeLists.txt │ └── myprogram.cpp ├── simple-examples-zip-request.py └── simple-gdb-cpp-tutorial.md ├── 08AdvancedTextProcessing ├── 1_cut_paste │ ├── LabCutPaste.txt │ ├── data.txt │ ├── file1.txt │ ├── file2.txt │ ├── file3.txt │ ├── order_generator.py │ ├── theory_cut.txt │ └── theory_paste.txt ├── 2_greplab │ ├── app.log │ ├── instructions.txt │ ├── lab_setup.py │ ├── module_0.py │ ├── module_1.py │ ├── module_2.py │ ├── module_3.py │ ├── module_4.py │ ├── solutions.txt │ └── theory_grep.txt ├── 3_sedlab │ ├── app.py │ ├── instructions.txt │ ├── settings.ini │ ├── settings_new.ini │ ├── solutions.txt │ └── theory_sed.txt ├── 4_awklab │ ├── application.log │ ├── instructions.txt │ ├── log_generator.py │ ├── solutions.txt │ └── theory_awk.txt └── books.csv ├── 09LinuxArchitecture+FHS ├── 01LinuxArchitecture.txt ├── 02LinuxFilesystemHierarchy.txt ├── 03ConnectDots.txt ├── 04LabObeserveConnection.txt ├── 05Lab1Continue.txt ├── cpu_load_simulator.py └── file_wait.py ├── 10FileSystemsBasics ├── 00VFS.txt ├── 01CopyOnWrite+JournalingConcepts.txt ├── Copy_on_Write.py 
├── Journaling.py ├── VFSIDEA.cpp ├── VFSIDEA.java └── VFSIDEA.rs ├── 11ProcessService ├── Processes.txt ├── ServicesMySQLPrimer.txt ├── complete-process-guide.md ├── fibonacci.py ├── infinite_loop.py ├── linux-services-guide.md ├── signal_handling.py └── wait_script.py ├── 12EncryptionDecryption ├── 00LabPrepCommunication.txt ├── 01basicEncryptionDecription.txt ├── 02LabSymmetricOpenSSL_AES.txt ├── 03LabAssymetricOpenSSL_RSA.txt ├── 04LabAssymetric+SymmetricLevelUp.txt ├── OpenSSl + AES256-CBC-SALT-RSA └── theory.txt ├── 13Hashing ├── LabHashFunctions.txt └── WebApplicationHashing.txt ├── README.md ├── SecureCopyExercise.txt ├── Setup-Instructor-VM Labs on Azure.md ├── bashrc └── correct_bash.txt /02navigation_file/01linux-commands-tutorial.md: -------------------------------------------------------------------------------- 1 | # Linux Navigation Commands Tutorial 2 | 3 | This tutorial focuses on essential commands for navigating the Linux filesystem. 4 | 5 | ## 1. pwd: Print Working Directory 6 | 7 | The `pwd` command shows the full path of the current working directory. 8 | 9 | ```bash 10 | pwd 11 | ``` 12 | 13 | Example output: 14 | ``` 15 | /home/username/documents 16 | ``` 17 | 18 | ## 2. ls: List Directory Contents 19 | 20 | The `ls` command lists files and directories in the current directory. 21 | 22 | Basic usage: 23 | ```bash 24 | ls 25 | ``` 26 | 27 | Useful options: 28 | - `ls -l`: Long format, showing permissions, owner, size, and modification date 29 | - `ls -a`: Shows hidden files (those starting with a dot) 30 | - `ls -h`: Human-readable file sizes 31 | - `ls -R`: Recursively list subdirectories 32 | 33 | Examples: 34 | ```bash 35 | ls -l 36 | ls -la 37 | ls -lh /etc 38 | ``` 39 | 40 | ## 3. cd: Change Directory 41 | 42 | The `cd` command is used to change the current working directory. 43 | 44 | Basic usage: 45 | ```bash 46 | cd directory_name 47 | ``` 48 | 49 | Special uses: 50 | - `cd` or `cd ~`: Change to home directory 51 | - `cd ..`: Move up one directory 52 | - `cd -`: Switch to the previous directory 53 | - `cd /`: Change to root directory 54 | 55 | Examples: 56 | ```bash 57 | cd /etc 58 | cd ~/documents 59 | cd ../../ 60 | ``` 61 | 62 | ## 4. tree: Display Directory Structure 63 | 64 | While not always pre-installed, `tree` is excellent for visualizing directory structures. 65 | 66 | Basic usage: 67 | ```bash 68 | tree 69 | ``` 70 | 71 | Useful options: 72 | - `tree -L n`: Limit display to n levels deep 73 | - `tree -d`: Show only directories 74 | 75 | Example: 76 | ```bash 77 | tree -L 2 /etc 78 | ``` 79 | 80 | ## 5. find: Search for Files and Directories 81 | 82 | The `find` command can be used for navigation by locating files and directories. 83 | 84 | Basic usage: 85 | ```bash 86 | find /path/to/search -name "filename" 87 | ``` 88 | 89 | Example: 90 | ```bash 91 | find /home/username -name "*.txt" 92 | ``` 93 | 94 | ## 6. which: Locate a Command 95 | 96 | The `which` command shows the full path of shell commands. 97 | 98 | Basic usage: 99 | ```bash 100 | which command_name 101 | ``` 102 | 103 | Example: 104 | ```bash 105 | which python 106 | ``` 107 | 108 | ## Navigation Tips 109 | 110 | 1. Use Tab for auto-completion of file and directory names. 111 | 2. Use wildcards (`*`, `?`) with `ls` and `find` for flexible searching. 112 | 3. The `/` at the beginning of a path indicates the root directory. 113 | 4. `~` is a shortcut for your home directory. 114 | 5. Use relative paths (without leading `/`) for locations relative to your current directory. 
115 | 116 | ## Practice Exercises 117 | 118 | 1. Starting from your home directory, navigate to `/etc`, list its contents, then return to your home directory. 119 | 2. Create a directory structure in your home folder: `~/test/subdir1/subdir2`. Navigate into `subdir2` using a single command. 120 | 3. Use `find` to locate all `.conf` files in the `/etc` directory. 121 | 4. Use `tree` to display the structure of your home directory, limiting the output to 2 levels deep. 122 | 123 | Remember, practice is key to becoming comfortable with these navigation commands. Experiment in a safe directory to gain confidence in moving around the Linux filesystem. 124 | -------------------------------------------------------------------------------- /02navigation_file/04linux-bash-update-lab.md: -------------------------------------------------------------------------------- 1 | # Customizing Linux: The Dotfile Movement 2 | 3 | ## Introduction to Dotfiles 4 | 5 | The default console in Linux isn't always user-friendly. Since there are no graphical interfaces for many settings, people have started experimenting with the appearance and functionality of the console through text-based configurations. 6 | 7 | If you've seen cool-looking terminals in videos or movies, chances are they've been customized using dotfiles. Dotfiles are configuration files that start with a dot (.) and are used to personalize your system. 8 | 9 | ## The Dotfile Community 10 | 11 | Luckily for us, many enthusiasts share their dotfiles publicly. Here are some popular dotfile repositories: 12 | 13 | - [Mathias Bynens](https://github.com/mathiasbynens/dotfiles) 14 | - [Zach Holman](https://github.com/holman/dotfiles) 15 | - [Nick Nisi](https://github.com/nicknisi/dotfiles) 16 | - [Webpro's Dotfiles for macOS](https://github.com/webpro/dotfiles) 17 | - [Dotfiles GitHub Community](https://github.com/webpro/awesome-dotfiles) 18 | - [Jess Fraz](https://github.com/jessfraz/dotfiles) 19 | 20 | ## Customizing Your Bash Prompt 21 | 22 | We'll use Jess Fraz's `.bash_prompt` as an example to customize our bash prompt. This script includes several advanced features: 23 | 24 | 1. Setting $TERM based on terminal capabilities 25 | 2. Git prompt customization 26 | 3. Cloud environment detection 27 | 4. Color customization 28 | 5. Prompt customization (PS1) 29 | 6. Secondary prompt customization (PS2) 30 | 31 | ### Steps to Customize Your Prompt 32 | 33 | 1. Download the bash prompt script: 34 | ``` 35 | curl https://raw.githubusercontent.com/jessfraz/dotfiles/master/.bash_prompt > bash_experiment.txt 36 | ``` 37 | 38 | 2. Backup your existing .bashrc: 39 | ``` 40 | cp ~/.bashrc ~/.bashrc.backup 41 | ``` 42 | 43 | 3. Open your .bashrc file in a text editor: 44 | ``` 45 | nano ~/.bashrc 46 | ``` 47 | 48 | 4. Append the contents of bash_experiment.txt to your .bashrc. You can do this by copying and pasting, or by using this command: 49 | ``` 50 | cat bash_experiment.txt >> ~/.bashrc 51 | ``` 52 | 53 | 5. Test the changes by opening a new terminal or sourcing your .bashrc: 54 | ``` 55 | source ~/.bashrc 56 | ``` 57 | 58 | 6. Personalize: Feel free to modify any part of the script to suit your preferences. 59 | 60 | ### Assignment: Add Time to Your Prompt 61 | 62 | To add the current time to your prompt, add these lines to the end of your .bashrc: 63 | 64 | ```bash 65 | get_current_time() { 66 | echo $(date +"%d-%m-%Y %H:%M:%S") 67 | } 68 | PS1+="\\[$white\\][\$(get_current_time)] "; 69 | ``` 70 | 71 | This will add the current date and time to your prompt. 
72 | 73 | ## Tips for Customization 74 | 75 | 1. Always backup your configuration files before making changes. 76 | 2. Test changes in a new terminal window before making them permanent. 77 | 3. Read through the scripts you're adding to understand what they do. 78 | 4. Don't be afraid to experiment and make changes to suit your needs. 79 | 5. Remember that customization is about making your environment work best for you. 80 | 81 | By following these steps, you'll have a highly customized and informative bash prompt. As you become more comfortable with dotfiles, you can explore other customizations to further enhance your Linux experience. 82 | 83 | -------------------------------------------------------------------------------- /02navigation_file/05HomeLablinux-project-structure-lab.md: -------------------------------------------------------------------------------- 1 | # Linux Project Structure Lab: Building a Simple Personal Website 2 | 3 | ## Introduction 4 | 5 | In this lab, you'll create a basic directory structure for a personal website project. This hands-on approach will help you practice Linux navigation and file management skills while building a simple project structure. 6 | 7 | ## Objectives 8 | 9 | - Practice creating directories and files using the command line 10 | - Navigate efficiently between directories 11 | - Use relative and absolute paths 12 | - Organize a simple project structure 13 | 14 | ## Project Overview 15 | 16 | You'll be creating a structure for a personal website with the following sections: 17 | - Home 18 | - About 19 | - Portfolio 20 | 21 | ## Lab Tasks 22 | 23 | ### Task 1: Setting Up the Project Root 24 | 25 | 1. Create a directory called `my_website` in your home folder. 26 | 2. Navigate into this new directory. 27 | 28 | ### Task 2: Creating Main Sections 29 | 30 | From the `my_website` directory: 31 | 32 | 1. Create directories for each main section: `home`, `about`, and `portfolio`. 33 | 2. In the `home` directory, create a file called `index.html`. 34 | 3. In the `about` directory, create a file called `about.html`. 35 | 4. In the `portfolio` directory, create a file called `projects.html`. 36 | 37 | ### Task 3: Adding CSS and JavaScript 38 | 39 | 1. Create a `css` directory in the project root. 40 | 2. Inside `css`, create a file called `styles.css`. 41 | 3. Create a `js` directory in the project root. 42 | 4. Inside `js`, create a file called `script.js`. 43 | 44 | ### Task 4: Setting Up the Portfolio 45 | 46 | 1. Navigate to the `portfolio` directory. 47 | 2. Create two subdirectories: `project1` and `project2`. 48 | 3. In each project subdirectory, create an `index.html` file. 49 | 4. Use a single command to create a file called `description.txt` in both project directories. 50 | 51 | ### Task 5: Adding Images 52 | 53 | 1. Create an `images` directory in the project root. 54 | 2. Inside `images`, create three subdirectories: `home`, `about`, and `portfolio`. 55 | 3. In the `images/home` directory, create an empty file called `hero.jpg`. 56 | 4. In the `images/about` directory, create an empty file called `profile.jpg`. 57 | 5. In the `images/portfolio` directory, create two empty files: `project1.jpg` and `project2.jpg`. 58 | 59 | ### Task 6: Final Touch and Verification 60 | 61 | 1. Return to the project root directory. 62 | 2. Create a `README.md` file in the project root. 63 | 3. Use the `find` command to list all `.html` files in your project. 64 | 4. Use the `tree` command (if available) to display your entire project structure. 
65 | 66 | ## Final Project Structure 67 | 68 | After completing all the tasks, your project structure should look like this: 69 | 70 | ``` 71 | my_website/ 72 | │ 73 | ├── README.md 74 | ├── home/ 75 | │ └── index.html 76 | ├── about/ 77 | │ └── about.html 78 | ├── portfolio/ 79 | │ ├── projects.html 80 | │ ├── project1/ 81 | │ │ ├── index.html 82 | │ │ └── description.txt 83 | │ └── project2/ 84 | │ ├── index.html 85 | │ └── description.txt 86 | ├── css/ 87 | │ └── styles.css 88 | ├── js/ 89 | │ └── script.js 90 | └── images/ 91 | ├── home/ 92 | │ └── hero.jpg 93 | ├── about/ 94 | │ └── profile.jpg 95 | └── portfolio/ 96 | ├── project1.jpg 97 | └── project2.jpg 98 | ``` 99 | 100 | ## Conclusion 101 | 102 | By completing this lab, you've practiced essential Linux navigation and file management skills while creating a basic structure for a personal website. This structure provides a foundation for a simple yet well-organized web project, demonstrating the importance of logical file organization in web development. 103 | 104 | -------------------------------------------------------------------------------- /02navigation_file/assignments.txt: -------------------------------------------------------------------------------- 1 | # Warmp UP 2 | 3 | 4 | # Assignment: Create a Simple File and Folder Structure and Verify Using tree 5 | 6 | Objectives 7 | Create a directory structure that includes a couple of sub-directories. 8 | Create some files within these directories. 9 | Use the tree command to verify your work. 10 | Steps 11 | 12 | Open a terminal. 13 | 14 | Create a root directory for your assignment and navigate into it: 15 | mkdir AssignmentRoot 16 | cd AssignmentRoot 17 | 18 | Create two sub-directories within AssignmentRoot: 19 | mkdir SubDir1 SubDir2 20 | 21 | Navigate into SubDir1 and create a file: 22 | cd SubDir1 23 | touch file1.txt 24 | cd .. 25 | 26 | Navigate into SubDir2 and create another file: 27 | 28 | cd SubDir2 29 | touch file2.txt 30 | cd .. 31 | 32 | 33 | Display the tree structure listing of AssignmentRoot and its contents: 34 | 35 | tree AssignmentRoot 36 | 37 | Note: If the tree command is not installed, you can install it using: 38 | 39 | sudo apt install tree 40 | 41 | The tree command will display a hierarchical structure of directories and files, giving you a clear view of the structure you've just created. 42 | 43 | After completing these steps, you should see an output that confirms you've created AssignmentRoot, with two sub-directories SubDir1 and SubDir2, each containing a text file (file1.txt and file2.txt, respectively). This verifies that you've correctly followed the assignment steps. 44 | 45 | 46 | If you don't want to install tree, you can use the built-in find command to display the directory structure, although it won't be as neatly formatted as with tree. 47 | 48 | From within the AssignmentRoot directory, you can use: 49 | 50 | find . 51 | 52 | This will list all files and directories under the current directory, which in this case is AssignmentRoot. 53 | 54 | Assignment 1: Set Up a Basic Programming Project 55 | Objective: 56 | Create a basic project directory structure. 57 | Folder and File Structure: 58 | 59 | MyProject1/ 60 | │ 61 | ├── Images/ 62 | │ ├── logo.png 63 | │ └── banner.png 64 | │ 65 | ├── Code/ 66 | │ ├── main.py 67 | │ └── utils.py 68 | │ 69 | └── Configs/ 70 | └── settings.conf 71 | 72 | 73 | Assignment 2: Hidden Files in Project 74 | Objective: 75 | Introduce hidden configuration files to your project structure. 
76 | Folder and File Structure: 77 | 78 | 79 | MyProject2/ 80 | │ 81 | ├── Images/ 82 | │ 83 | ├── Code/ 84 | │ 85 | ├── Configs/ 86 | │ └── .secret_key 87 | │ 88 | ├── .config 89 | │ 90 | └── .database_connection 91 | 92 | 93 | Assignment 3: Simulate a Git-Initialized Project 94 | Objective: 95 | Mimic a programming project directory that uses Git for version control. 96 | Folder and File Structure: 97 | 98 | MyProject3/ 99 | │ 100 | ├── Images/ 101 | │ 102 | ├── Code/ 103 | │ 104 | ├── Configs/ 105 | │ 106 | └── .git/ 107 | ├── config 108 | ├── HEAD 109 | ├── description 110 | ├── hooks/ 111 | ├── info/ 112 | ├── objects/ 113 | └── refs/ 114 | 115 | 116 | The tree or find commands 117 | can be used to verify the structure after executing the solution commands for each assignment. 118 | 119 | Command to show hidden files using tree: 120 | 121 | tree -a 122 | 123 | So, with the -a flag, tree will list all files, including the hidden ones. 124 | -------------------------------------------------------------------------------- /02navigation_file/bashrcUpdatePractice.txt: -------------------------------------------------------------------------------- 1 | 2 | # Dotfile movement :) 3 | 4 | So as you notice, default console sometimes not so friendly per say, 5 | so since there are no really graphical interfaces, people start to experiment with appearance 6 | of console. 7 | 8 | If you watched some videos or movies, you may notice that it looks really cool sometimes. 9 | So the since you don't have a GUI, there is only one way to change settings of something, only through text 10 | and, that's exactly we are going to do. 11 | 12 | Luckily for us there are many enthusiast who kindly shares their so called dotfiles 13 | Or files for changing settings of programs, through direct text manipulation 14 | 15 | Examples: 16 | Mathias Bynens 17 | https://github.com/mathiasbynens/dotfiles 18 | 19 | Zach Holman 20 | https://github.com/holman/dotfiles 21 | 22 | Nick Nisi 23 | https://github.com/nicknisi/dotfiles 24 | 25 | Webpro's Dotfiles for macOS 26 | https://github.com/webpro/dotfiles 27 | 28 | Dotfiles GitHub Community 29 | https://github.com/webpro/awesome-dotfiles 30 | 31 | Jess Fraz 32 | https://github.com/jessfraz/dotfiles 33 | 34 | As a practice we do next 35 | https://github.com/jessfraz/dotfiles/blob/master/.bash_prompt 36 | 37 | Setting $TERM based on terminal capabilities: 38 | This can help ensure that you have the best color experience and terminal compatibility. 39 | 40 | Git Prompt Customization: 41 | The prompt_git function is used to modify your prompt based on the state 42 | of a Git repository (if you are in one). It shows branch names, and 43 | indicates uncommitted changes, unstaged changes, 44 | untracked files, and stashed files with special symbols. 45 | 46 | Cloud Detection: Checks if you're in a virtualized environment (e.g., cloud server) 47 | and adds a cloud icon if so. 48 | 49 | Color Customization: 50 | Uses tput if available (preferred for portability and capability detection) 51 | or ANSI escape codes to set various colors. 52 | It also sets specific colors based on conditions 53 | (e.g., highlighting the username if logged in as root). 54 | 55 | Customizing Prompt ($PS1): 56 | This is the main part where all the previous elements are combined 57 | to create a visually pleasing and informative prompt. 58 | It shows user, host, working directory, and Git status in various colors and formats. 
59 | 60 | Setting the secondary prompt ($PS2): 61 | This is what you see when a command is continued on the next line, 62 | and it's set to a simple arrow. 63 | 64 | 65 | # curl is a versatile command-line tool used for transferring data with URLs. 66 | One common use case is downloading files from the internet. 67 | 68 | curl https://raw.githubusercontent.com/jessfraz/dotfiles/master/.bash_prompt > bash_experiment.txt 69 | 70 | Before adding this to your .bashrc, consider the following steps: 71 | 72 | Backup: Always backup your existing .bashrc file. 73 | 74 | cp ~/.bashrc ~/.bashrc.backup 75 | 76 | 77 | Modify using nano: 78 | You can append this script to the end of your .bashrc 79 | nano ~/.bashrc 80 | 81 | Test: Open a new terminal or source your .bashrc: 82 | 83 | source ~/.bashrc 84 | This will apply the changes. Make sure everything looks and works as expected. 85 | 86 | Personalize: Feel free to tweak any part of this to fit your personal preferences. Remember that customization is all about making your environment work best for you. 87 | 88 | 89 | # Assignment 90 | 91 | # Add time to your prompt 92 | get_current_time() { 93 | echo $(date +"%d-%m-%Y %H:%M:%S") 94 | } 95 | 96 | PS1+="\\[$white\\][\$(get_current_time)] "; 97 | -------------------------------------------------------------------------------- /02navigation_file/solutions.txt: -------------------------------------------------------------------------------- 1 | Solutions Navigation and File Manipulation: 2 | 3 | Assignment 1 Solution: 4 | 5 | mkdir MyProject 6 | cd MyProject 7 | mkdir Images Code Configs 8 | cd Images 9 | touch logo.png banner.png 10 | cd .. 11 | cd Code 12 | touch main.py utils.py 13 | cd .. 14 | cd Configs 15 | touch settings.conf 16 | 17 | 18 | Assignment 2 Solution: 19 | 20 | cd MyProject 21 | touch .config .database_connection 22 | cd Configs 23 | touch .secret_key 24 | 25 | 26 | Assignment 3 Solution: 27 | cd MyProject 28 | mkdir .git 29 | cd .git 30 | touch config HEAD description 31 | mkdir hooks info objects refs -------------------------------------------------------------------------------- /02navigation_file/theory.txt: -------------------------------------------------------------------------------- 1 | # Basic Navigation Commands 2 | 3 | => pwd: Print Working Directory 4 | Shows the full pathname of the current working directory. 5 | pwd 6 | 7 | => ls: List 8 | Lists all the files and directories in the current directory. 9 | ls 10 | 11 | Can be used with various options, 12 | like ls -l for a long format listing, 13 | ls -a to see hidden files. 14 | 15 | => cd: Change Directory 16 | 17 | Changes the current directory to the one specified in the arguments. 18 | cd /path/to/directory 19 | 20 | Without arguments, cd typically takes you to the home directory. 21 | cd .. moves up one directory. 22 | 23 | => mkdir: Make Directory 24 | 25 | Creates a new directory. 26 | mkdir new_directory 27 | 28 | => rmdir: Remove Directory 29 | Removes an empty directory. 30 | 31 | rmdir directory_name 32 | 33 | # File Manipulation Commands 34 | => touch: Create File 35 | 36 | => touch: Creates a new empty file. 37 | touch new_file.txt 38 | 39 | => rm: Remove File 40 | Deletes a file. Use cautiously. 41 | rm file_name 42 | 43 | => cp: Copy 44 | Copies files or directories. 45 | 46 | cp source destination 47 | 48 | => mv: Move 49 | 50 | Moves files or directories, can also be used to rename files. 
51 | mv old_name new_name 52 | 53 | 54 | Most famous command, because joke says: go to read manual page 55 | => man: Manual 56 | Displays the user manual for the specified command. 57 | 58 | man command_name 59 | 60 | These commands offer the basic functionalities you'd need to navigate and manipulate a Linux filesystem. 61 | They are the building blocks for many more complex operations and scripts. 62 | 63 | -------------------------------------------------------------------------------- /03textpiperedir/01linux-text-commands.md: -------------------------------------------------------------------------------- 1 | # Linux Basic Text Manipulation Commands Tutorial 2 | 3 | This tutorial covers essential Linux commands for manipulating and analyzing text files, focusing solely on individual command usage. 4 | 5 | ## 1. cat - Concatenate and display file contents 6 | 7 | The `cat` command is used to display the contents of files. 8 | 9 | ### Basic usage: 10 | ```bash 11 | cat filename 12 | ``` 13 | 14 | ### Examples: 15 | ```bash 16 | # Display contents of a file 17 | cat hello.txt 18 | 19 | # Display file contents with line numbers 20 | cat -n filename.txt 21 | ``` 22 | 23 | ## 2. echo - Display messages or variables 24 | 25 | The `echo` command prints text to the terminal. 26 | 27 | ### Basic usage: 28 | ```bash 29 | echo "Your message here" 30 | ``` 31 | 32 | ### Examples: 33 | ```bash 34 | # Print a simple message 35 | echo "Hello, World!" 36 | 37 | # Print the value of a variable 38 | NAME="Alice" 39 | echo "My name is $NAME" 40 | ``` 41 | 42 | ## 3. wc - Word, line, character, and byte count 43 | 44 | The `wc` (word count) command is used to count lines, words, characters, or bytes in a file. 45 | 46 | ### Basic usage: 47 | ```bash 48 | wc [options] filename 49 | ``` 50 | 51 | ### Common options: 52 | - `-l`: Count lines 53 | - `-w`: Count words 54 | - `-m`: Count characters 55 | - `-c`: Count bytes 56 | 57 | ### Examples: 58 | ```bash 59 | # Count lines in a file 60 | wc -l filename.txt 61 | 62 | # Count words in a file 63 | wc -w filename.txt 64 | 65 | # Display all counts (lines, words, characters) 66 | wc filename.txt 67 | ``` 68 | 69 | ## 4. sort - Sort lines of text 70 | 71 | The `sort` command is used to sort lines of text alphabetically or numerically. 72 | 73 | ### Basic usage: 74 | ```bash 75 | sort filename 76 | ``` 77 | 78 | ### Examples: 79 | ```bash 80 | # Sort lines alphabetically 81 | sort names.txt 82 | 83 | # Sort lines numerically 84 | sort -n numbers.txt 85 | 86 | # Sort in reverse order 87 | sort -r filename.txt 88 | ``` 89 | 90 | ## 5. grep - Search for patterns in files 91 | 92 | The `grep` command searches for specific patterns in files. 93 | 94 | ### Basic usage: 95 | ```bash 96 | grep "pattern" filename 97 | ``` 98 | 99 | ### Examples: 100 | ```bash 101 | # Search for a word in a file 102 | grep "error" logfile.txt 103 | 104 | # Case-insensitive search 105 | grep -i "warning" logfile.txt 106 | 107 | # Display line numbers with matches 108 | grep -n "TODO" filename.txt 109 | ``` 110 | 111 | ## 6. head - Display the beginning of a file 112 | 113 | The `head` command shows the first part of a file, by default the first 10 lines. 114 | 115 | ### Basic usage: 116 | ```bash 117 | head filename 118 | ``` 119 | 120 | ### Examples: 121 | ```bash 122 | # Display first 10 lines (default) 123 | head filename.txt 124 | 125 | # Display first 5 lines 126 | head -n 5 filename.txt 127 | ``` 128 | 129 | ## 7. 
tail - Display the end of a file 130 | 131 | The `tail` command shows the last part of a file, by default the last 10 lines. 132 | 133 | ### Basic usage: 134 | ```bash 135 | tail filename 136 | ``` 137 | 138 | ### Examples: 139 | ```bash 140 | # Display last 10 lines (default) 141 | tail filename.txt 142 | 143 | # Display last 20 lines 144 | tail -n 20 filename.txt 145 | ``` 146 | 147 | These basic text manipulation commands provide powerful tools for working with text files in Linux. Each command can be used independently to perform specific tasks on text data. 148 | -------------------------------------------------------------------------------- /03textpiperedir/02linux-pipes-redirection.md: -------------------------------------------------------------------------------- 1 | # Idea of Pipes and Redirection 2 | 3 | Time to awaken Sharingan :) 4 | 5 | ## Introduction 6 | 7 | Ok I believe at this time you can start to feel boring, It's time to introduce mechanism which brings power. Namely Pipes and Redirection. 8 | 9 | Most of the useful things you can accomplish in Linux, one way or another most likely involve one of these patterns. Pipes and redirection are fundamental concepts in Unix and Unix-like operating systems, and they're representative of the OS's philosophy of "everything is a file." 10 | 11 | ## a) Pipes (|) 12 | 13 | A pipe, symbolized as |, is a mechanism for inter-process communication. It allows the output of one process to be used as the input of another. 14 | 15 | ### Operating System Perspective 16 | 17 | - Pipes are an example of a mechanism for Inter-Process Communication (IPC). 18 | - A pipe creates a linear communication channel between processes, often termed as a "half-duplex" channel, because data flows only in one direction. 19 | - Data written by the sending process can be read by the receiving process. 20 | - This is usually done in a buffered manner, which means data is stored in a temporary area (buffer) before it's sent to the receiving process. 21 | 22 | ### Practical Example 23 | 24 | If you want to search for a specific word in a file and then count its occurrences, you might use grep to find the word and wc to count its occurrences, combined with a pipe: 25 | 26 | ```bash 27 | cat filename.txt | grep "specific-word" | wc -l 28 | ``` 29 | 30 | Here: 31 | 1. `cat` outputs the content of the file. 32 | 2. `grep` takes this output to find lines containing "specific-word" 33 | 3. `wc` counts these lines. 34 | 35 | ## b) Redirection (>, >>, <) 36 | 37 | Redirection allows you to direct the input and output streams of a process to a file or from a file. 38 | 39 | ### Operating System Perspective 40 | 41 | The OS assigns every process a table of file descriptors. The first three are standard: 42 | - 0: standard input (stdin) 43 | - 1: standard output (stdout) 44 | - 2: standard error (stderr) 45 | 46 | Redirection works by changing the file descriptor table of a process. Instead of pointing to the default locations (e.g., the terminal), they're changed to point to a file. 47 | 48 | ### Examples 49 | 50 | 1. Output Redirection (> and >>): 51 | - `>`: Redirects the output of a command to a file. If the file exists, it's overwritten. If it doesn't, it's created. 52 | ```bash 53 | echo "Hello, World!" > output.txt 54 | ``` 55 | - `>>`: Appends the output of a command to a file. If the file doesn't exist, it's created. 56 | ```bash 57 | echo "Hello again!" >> output.txt 58 | ``` 59 | 60 | 2. 
Input Redirection (<): 61 | - Directs the input for a process from a file instead of from the standard input (typically the keyboard). 62 | ```bash 63 | sort < input.txt 64 | ``` 65 | Here, `sort` reads its input from input.txt rather than the terminal. 66 | 67 | ## Conclusion 68 | 69 | This model allows small utilities to be combined, enhancing the power and flexibility of the Linux command line. 70 | -------------------------------------------------------------------------------- /03textpiperedir/03pipe-explanation.md: -------------------------------------------------------------------------------- 1 | # Understanding Pipes in Command Line 2 | 3 | ## Before Pipe 4 | 5 | ```mermaid 6 | graph TD 7 | subgraph "Before Pipe" 8 | subgraph "Command 1" 9 | B1[Command 1] 10 | B1FD0[FD 0: stdin] 11 | B1FD1[FD 1: stdout] 12 | B1FD2[FD 2: stderr] 13 | B1 --- B1FD0 14 | B1 --- B1FD1 15 | B1 --- B1FD2 16 | end 17 | subgraph "Command 2" 18 | B2[Command 2] 19 | B2FD0[FD 0: stdin] 20 | B2FD1[FD 1: stdout] 21 | B2FD2[FD 2: stderr] 22 | B2 --- B2FD0 23 | B2 --- B2FD1 24 | B2 --- B2FD2 25 | end 26 | end 27 | ``` 28 | 29 | In this initial state, each command has its own standard input (stdin), standard output (stdout), and standard error (stderr) streams. These are typically connected to the terminal or other default sources/destinations. 30 | 31 | ## After Pipe: Command 1 | Command 2 32 | 33 | ```mermaid 34 | graph TD 35 | subgraph "After Pipe: Command 1 | Command 2" 36 | A1[Command 1] 37 | A2[Command 2] 38 | A1FD0[FD 0: stdin] 39 | A1FD1[FD 1: pipe write] 40 | A1FD2[FD 2: stderr] 41 | A2FD0[FD 0: pipe read] 42 | A2FD1[FD 1: stdout] 43 | A2FD2[FD 2: stderr] 44 | PIPE((Pipe)) 45 | A1 --- A1FD0 46 | A1 --- A1FD1 47 | A1 --- A1FD2 48 | A2 --- A2FD0 49 | A2 --- A2FD1 50 | A2 --- A2FD2 51 | A1FD1 --> PIPE 52 | PIPE --> A2FD0 53 | end 54 | ``` 55 | 56 | When a pipe is used: 57 | 58 | 1. Command 1's stdout (FD 1) is connected to the write end of the pipe. 59 | 2. Command 2's stdin (FD 0) is connected to the read end of the pipe. 60 | 3. This allows the output of Command 1 to be directly fed as input to Command 2. 61 | 4. stderr (FD 2) for both commands typically remains connected to the terminal. 62 | 5. Command 1's stdin and Command 2's stdout also typically remain connected to the terminal. 63 | 64 | The pipe acts as a buffer, allowing data to flow from Command 1 to Command 2 without needing to be stored in a temporary file or displayed on the screen. 65 | -------------------------------------------------------------------------------- /03textpiperedir/04redirection-explanation.md: -------------------------------------------------------------------------------- 1 | # Understanding Redirections in Command Line 2 | 3 | ## Before Redirection 4 | 5 | ```mermaid 6 | graph TD 7 | subgraph "Before Redirection" 8 | C[Command] 9 | C_IN[stdin] 10 | C_OUT[stdout] 11 | C_ERR[stderr] 12 | C --- C_IN 13 | C --- C_OUT 14 | C --- C_ERR 15 | TERM[Terminal] 16 | C_IN -.- TERM 17 | C_OUT -.- TERM 18 | C_ERR -.- TERM 19 | end 20 | ``` 21 | 22 | In the initial state, a command's standard input (stdin), standard output (stdout), and standard error (stderr) are typically connected to the terminal. 
This means: 23 | - Input is read from the keyboard 24 | - Output is displayed on the screen 25 | - Error messages are also displayed on the screen 26 | 27 | ## After Input Redirection: command < input_file 28 | 29 | ```mermaid 30 | graph TD 31 | subgraph "Input Redirection" 32 | C[Command] 33 | C_IN[stdin] 34 | C_OUT[stdout] 35 | C_ERR[stderr] 36 | C --- C_IN 37 | C --- C_OUT 38 | C --- C_ERR 39 | TERM[Terminal] 40 | FILE[Input File] 41 | FILE --> C_IN 42 | C_OUT -.- TERM 43 | C_ERR -.- TERM 44 | end 45 | ``` 46 | 47 | When input is redirected: 48 | 1. The command's stdin is connected to the specified input file instead of the keyboard. 49 | 2. The command reads its input from the file rather than waiting for user input. 50 | 3. stdout and stderr typically remain connected to the terminal. 51 | 52 | ## After Output Redirection: command > output_file 53 | 54 | ```mermaid 55 | graph TD 56 | subgraph "Output Redirection" 57 | C[Command] 58 | C_IN[stdin] 59 | C_OUT[stdout] 60 | C_ERR[stderr] 61 | C --- C_IN 62 | C --- C_OUT 63 | C --- C_ERR 64 | TERM[Terminal] 65 | FILE[Output File] 66 | C_IN -.- TERM 67 | C_OUT --> FILE 68 | C_ERR -.- TERM 69 | end 70 | ``` 71 | 72 | When output is redirected: 73 | 1. The command's stdout is connected to the specified output file instead of the screen. 74 | 2. The command's output is written to the file rather than displayed on the terminal. 75 | 3. stdin typically remains connected to the keyboard, and stderr to the terminal. 76 | 77 | ## After Error Redirection: command 2> error_file 78 | 79 | ```mermaid 80 | graph TD 81 | subgraph "Error Redirection" 82 | C[Command] 83 | C_IN[stdin] 84 | C_OUT[stdout] 85 | C_ERR[stderr] 86 | C --- C_IN 87 | C --- C_OUT 88 | C --- C_ERR 89 | TERM[Terminal] 90 | FILE[Error File] 91 | C_IN -.- TERM 92 | C_OUT -.- TERM 93 | C_ERR --> FILE 94 | end 95 | ``` 96 | 97 | When error output is redirected: 98 | 1. The command's stderr is connected to the specified error file instead of the screen. 99 | 2. Error messages are written to the file rather than displayed on the terminal. 100 | 3. stdin and stdout typically remain connected to the terminal. 101 | 102 | ## Notes on Redirections: 103 | - Input redirection (`<`) changes where the command reads its input from. 104 | - Output redirection (`>`) changes where the command sends its output to. 105 | - Error redirection (`2>`) changes where the command sends its error messages to. 106 | - You can combine redirections, e.g., `command < input_file > output_file 2> error_file` 107 | - Use `>>` for appending output to a file instead of overwriting it. 108 | - Use `2>&1` to redirect stderr to the same place as stdout. 109 | 110 | Redirections allow you to control the input and output streams of commands, enabling more flexible and powerful command-line operations. 111 | -------------------------------------------------------------------------------- /03textpiperedir/05python-script-usage-examples.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Examples of Commands Using the Python Script 4 | 5 | ## Introduction 6 | 7 | To deeply understand Pipes and Redirection, we need to go through a code demonstration first. 
We'll use the following Python script (`file_descriptor.py`) as our example: 8 | 9 | ```python 10 | import os 11 | import sys 12 | 13 | # Open a file and get the file descriptor 14 | fd = os.open("sample.txt", os.O_RDWR | os.O_CREAT) 15 | 16 | # Write to the file using the file descriptor 17 | os.write(fd, b"Hello, File Descriptors!") 18 | 19 | # Move the file pointer to the beginning 20 | os.lseek(fd, 0, os.SEEK_SET) 21 | 22 | # Read from the file using the file descriptor 23 | content = os.read(fd, 100) # Read 100 bytes 24 | 25 | # Print the content to stdout 26 | print(f"Read from file (stdout): {content.decode()}") 27 | 28 | # Simulate an error message to stderr 29 | print("This is an error message!", file=sys.stderr) 30 | 31 | # Prompt the user for input from stdin 32 | user_input = input("Enter something for stdin demonstration: ") 33 | 34 | # Print the user input to stdout 35 | print(f"You entered (stdout): {user_input}") 36 | 37 | # Close the file descriptor 38 | os.close(fd) 39 | ``` 40 | 41 | Assuming our Python script is saved as `file_descriptor.py`, here are some examples of commands that demonstrate pipes and redirection using this script: 42 | 43 | ## 1. Basic Execution 44 | 45 | ```bash 46 | python file_descriptor.py 47 | ``` 48 | This will run the script normally, writing to sample.txt and interacting with stdout, stderr, and stdin as coded. 49 | 50 | ## 2. Redirecting stdout to a File 51 | 52 | ```bash 53 | python file_descriptor.py > output.txt 54 | ``` 55 | This will redirect the stdout of the script to output.txt. The file will contain the "Read from file" message and the echoed user input, but not the error message (which goes to stderr). 56 | 57 | ## 3. Redirecting stderr to a File 58 | 59 | ```bash 60 | python file_descriptor.py 2> error.txt 61 | ``` 62 | This will redirect the stderr of the script to error.txt. The file will contain only the error message. 63 | 64 | ## 4. Redirecting Both stdout and stderr to Different Files 65 | 66 | ```bash 67 | python file_descriptor.py > output.txt 2> error.txt 68 | ``` 69 | This separates the standard output and error streams into two different files. 70 | 71 | ## 5. Redirecting Both stdout and stderr to the Same File 72 | 73 | ```bash 74 | python file_descriptor.py > all_output.txt 2>&1 75 | ``` 76 | This redirects both stdout and stderr to all_output.txt. The `2>&1` syntax means "redirect stderr to wherever stdout is currently pointing". 77 | 78 | ## 6. Using Input Redirection 79 | 80 | ```bash 81 | echo "Predefined input" | python file_descriptor.py 82 | ``` 83 | This pipes the output of the echo command as input to our Python script, automatically answering the input prompt. 84 | 85 | ## 7. Combining Input and Output Redirection 86 | 87 | ```bash 88 | echo "Predefined input" | python file_descriptor.py > output.txt 2> error.txt 89 | ``` 90 | This command provides input to the script via a pipe, redirects stdout to output.txt, and stderr to error.txt. 91 | 92 | ## 8. Using the Script in a Pipeline 93 | 94 | ```bash 95 | python file_descriptor.py | grep "stdout" 96 | ``` 97 | This runs the script and pipes its output to grep, which will only display lines containing "stdout". 98 | 99 | ## 9. Redirecting to /dev/null 100 | 101 | ```bash 102 | python file_descriptor.py > /dev/null 2>&1 103 | ``` 104 | This discards all output (both stdout and stderr) by redirecting it to /dev/null. 105 | 106 | ## 10. 
Using tee for Output and Logging 107 | 108 | ```bash 109 | python file_descriptor.py | tee output.txt 110 | ``` 111 | This runs the script, displaying its output on the screen and also saving it to output.txt. 112 | 113 | These examples demonstrate various ways to use pipes and redirection with the given Python script, illustrating the concepts discussed in the previous explanation. 114 | -------------------------------------------------------------------------------- /03textpiperedir/explanationfiledescriptor.txt: -------------------------------------------------------------------------------- 1 | Let's use Linux pipes and redirection to demonstrate the interplay 2 | between stdin, stdout, and stderr with the Python script provided. 3 | 4 | First, here's a quick refresher on the script. The script does the following: 5 | 6 | Writes "Hello, File Descriptors!" to a file. 7 | Reads the file content and prints it to stdout. 8 | Prints an error message to stderr. 9 | Prompts the user for input and prints it to stdout. 10 | 11 | Here's how we can use Linux pipes and redirection with this script: 12 | 13 | 1) Redirect stdout to a File and Display stderr: 14 | 15 | python3 file_descriptor.py > output.txt 16 | This will write the results of stdout (the file content and user input) to output.txt. 17 | The error message will still be displayed in the terminal because we haven't redirected stderr. 18 | 19 | 2) Redirect Both stdout and stderr to Separate Files: 20 | 21 | python3 file_descriptor.py > output.txt 2> error.txt 22 | Here, stdout is redirected to output.txt and stderr is redirected to error.txt. 23 | 24 | 3) Provide Input Using a Pipe: 25 | Let's assume you have a file called input.txt with some text. 26 | You can pipe this content to the script as follows: 27 | 28 | cat input.txt | python3 file_descriptor.py > output.txt 2> error.txt 29 | This will use the content of input.txt as the input for the script when it prompts the user. 30 | 31 | 32 | 4) Use the Content from stdout as Input for Another Command: 33 | 34 | For instance, let's take the content read from the file (displayed in stdout) and 35 | pipe it to grep to search for a specific word: 36 | 37 | python3 file_descriptor.py 2> error.txt | grep "File" 38 | 39 | This will display lines containing the word "File" from the stdout of the script. 
-------------------------------------------------------------------------------- /03textpiperedir/file_descriptor.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | 4 | # Open a file and get the file descriptor 5 | fd = os.open("sample.txt", os.O_RDWR | os.O_CREAT) 6 | 7 | # Write to the file using the file descriptor 8 | os.write(fd, b"Hello, File Descriptors!") 9 | 10 | # Move the file pointer to the beginning 11 | os.lseek(fd, 0, os.SEEK_SET) 12 | 13 | # Read from the file using the file descriptor 14 | content = os.read(fd, 100) # Read 100 bytes 15 | 16 | # Print the content to stdout 17 | print(f"Read from file (stdout): {content.decode()}") 18 | 19 | # Simulate an error message to stderr 20 | print("This is an error message!", file=sys.stderr) 21 | 22 | # Prompt the user for input from stdin 23 | user_input = input("Enter something for stdin demonstration: ") 24 | 25 | # Print the user input to stdout 26 | print(f"You entered (stdout): {user_input}") 27 | 28 | # Close the file descriptor 29 | os.close(fd) 30 | -------------------------------------------------------------------------------- /03textpiperedir/practice/assignment.txt: -------------------------------------------------------------------------------- 1 | First execute python script which will generate a psedo log with 1000 entries 2 | 3 | python3 log_generator.py 4 | 5 | Output: you should have application.log 6 | 7 | Try to examine it with cat head or tail commmands 8 | 9 | head -n 10 application.log 10 | 11 | Assignment #1: Log Analysis 12 | 13 | Scenario: 14 | After launching your new web application, users start reporting various issues. 15 | Some complain about performance issues, while others are encountering errors. 16 | To diagnose and prioritize the problems, you decide to analyze the application 17 | logs using different log levels. 18 | 19 | Tasks: 20 | 21 | INFO Analysis: 22 | 23 | Extract all lines with the "INFO" log level from application.log. 24 | Count the total number of "INFO" messages. 25 | Save these messages to a file named info_logs.txt. 26 | 27 | grep "INFO" application.log > info_logs.txt 28 | cat info_logs.txt | wc -l 29 | 30 | WARNING Analysis: 31 | 32 | Extract all lines with the "WARNING" log level from application.log. 33 | Count the total number of "WARNING" messages. 34 | Save these messages to a file named warning_logs.txt. 35 | 36 | grep "WARNING" application.log > warning_logs.txt 37 | cat warning_logs.txt | wc -l 38 | 39 | ERROR Analysis: 40 | 41 | Extract all lines with the "ERROR" log level from application.log. 42 | Count the total number of "ERROR" messages. 43 | Sort the unique error messages. 44 | Save these sorted unique messages to a file named sorted_errors.txt. 45 | Display the first 10 unique error messages. 46 | 47 | 48 | grep "ERROR" application.log | sort | uniq > sorted_errors.txt 49 | head -n 10 sorted_errors.txt 50 | 51 | DEBUG Analysis: 52 | 53 | Extract all lines with the "DEBUG" log level from application.log. 54 | Count the total number of "DEBUG" messages. 55 | Save these messages to a file named debug_logs.txt. 56 | 57 | 58 | grep "DEBUG" application.log > debug_logs.txt 59 | cat debug_logs.txt | wc -l 60 | 61 | Advanced Scenario: 62 | 63 | Top 5 Recurring Errors: 64 | 65 | Identify the five most frequently occurring error messages. 66 | Save these frequent errors with their occurrence counts to a file named top_errors.txt. 
67 | Commands to use: 68 | 69 | grep "ERROR" application.log | sort | uniq -c | sort -nr | head -n 5 > top_errors.txt 70 | 71 | 72 | INFO vs. ERROR: 73 | Compare the number of "INFO" messages with "ERROR" messages. 74 | Which type of log message is more frequent? 75 | Commands to use: 76 | 77 | grep -c "INFO" application.log 78 | grep -c "ERROR" application.log 79 | 80 | 81 | -------------------------------------------------------------------------------- /03textpiperedir/practice/log_generator.py: -------------------------------------------------------------------------------- 1 | import random 2 | import datetime 3 | 4 | # Define constants 5 | LOG_LEVELS = ["INFO", "WARNING", "ERROR", "DEBUG"] 6 | MODULES = ["AUTH", "DATABASE", "NETWORK", "UI", "API"] 7 | LOG_MESSAGES = { 8 | "INFO": ["User logged in.", "New entry added.", "Connection established.", "Session started."], 9 | "WARNING": ["Login retries almost exhausted.", "DB nearing max capacity.", "Network latency detected.", "UI unresponsive."], 10 | "ERROR": ["Failed login attempt.", "DB connection lost.", "Network error.", "UI crashed."], 11 | "DEBUG": ["Auth method called.", "DB query executed.", "Network packet sent.", "UI button clicked."] 12 | } 13 | LOG_FILE = "application.log" 14 | NUM_OF_LOGS = 1000 # Number of log entries to generate 15 | 16 | def generate_log(): 17 | """ 18 | Generate a single pseudo-log entry. 19 | """ 20 | timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") 21 | log_level = random.choice(LOG_LEVELS) 22 | module = random.choice(MODULES) 23 | message = random.choice(LOG_MESSAGES[log_level]) 24 | 25 | return f"[{timestamp}] [{log_level}] [{module}]: {message}" 26 | 27 | def main(): 28 | """ 29 | Generate the pseudo-log file. 30 | """ 31 | with open(LOG_FILE, "w") as log_file: 32 | for _ in range(NUM_OF_LOGS): 33 | log_file.write(generate_log() + "\n") 34 | print(f"{NUM_OF_LOGS} pseudo-log entries generated in {LOG_FILE}.") 35 | 36 | if __name__ == "__main__": 37 | main() 38 | -------------------------------------------------------------------------------- /03textpiperedir/setup_generator.py: -------------------------------------------------------------------------------- 1 | import os 2 | import random 3 | 4 | def create_file(filename, content): 5 | with open(filename, 'w') as file: 6 | file.write(content) 7 | print(f"Created: {filename}") 8 | 9 | def create_content_files(): 10 | # Create hello.txt 11 | create_file("hello.txt", "Hello, World!\nWelcome to Linux text manipulation.") 12 | 13 | # Create names.txt 14 | names = ["Alice", "Bob", "Charlie", "David", "Eva", "Frank", "Grace", "Henry", "Ivy", "Jack"] 15 | create_file("names.txt", "\n".join(random.sample(names, len(names)))) 16 | 17 | # Create numbers.txt 18 | numbers = [str(random.randint(1, 100)) for _ in range(20)] 19 | create_file("numbers.txt", "\n".join(numbers)) 20 | 21 | # Create logfile.txt 22 | log_entries = [ 23 | "2023-09-10 10:15:32 INFO: Application started", 24 | "2023-09-10 10:16:45 WARNING: Low memory warning", 25 | "2023-09-10 10:17:30 ERROR: Failed to connect to database", 26 | "2023-09-10 10:18:22 INFO: User logged in", 27 | "2023-09-10 10:19:15 DEBUG: Processing item 1", 28 | "2023-09-10 10:20:05 ERROR: Invalid input received", 29 | "2023-09-10 10:21:18 WARNING: CPU usage high", 30 | "2023-09-10 10:22:30 INFO: Task completed successfully", 31 | "2023-09-10 10:23:45 DEBUG: Closing connection", 32 | "2023-09-10 10:24:50 INFO: Application shutdown" 33 | ] 34 | create_file("logfile.txt", "\n".join(log_entries)) 35 | 36 | # 
Create todo.txt 37 | todos = [ 38 | "TODO: Implement new feature", 39 | "Buy groceries", 40 | "TODO: Fix bug in login system", 41 | "Call dentist for appointment", 42 | "TODO: Update documentation", 43 | "Prepare presentation for meeting", 44 | "TODO: Optimize database queries", 45 | "Plan weekend trip", 46 | "TODO: Write unit tests", 47 | "Send birthday card to mom" 48 | ] 49 | create_file("todo.txt", "\n".join(todos)) 50 | 51 | def main(): 52 | # Create content files 53 | create_content_files() 54 | 55 | if __name__ == "__main__": 56 | main() -------------------------------------------------------------------------------- /04UserGroupPermission/Lab04linux-add-sudoer-lab.md: -------------------------------------------------------------------------------- 1 | # Lab: Adding a User to Sudoers List 2 | 3 | ## Objective 4 | Learn how to create a new user, add them to the sudoers list, verify their sudo privileges, and properly clean up afterwards in a standard Ubuntu environment. 5 | 6 | 7 | ## Lab Steps 8 | 9 | ### 1. Create a New User 10 | ```bash 11 | sudo useradd -m newuser 12 | ``` 13 | This creates a new user 'newuser' with a home directory. 14 | 15 | ### 2. Set Password for New User 16 | ```bash 17 | sudo passwd newuser 18 | ``` 19 | Set a password for the new user when prompted. 20 | 21 | ### 3. Verify User Creation 22 | ```bash 23 | id newuser 24 | ls -l /home/newuser 25 | ``` 26 | This should display the user's UID, GID, groups, and confirm the existence of their home directory. 27 | 28 | ### 4. Add User to sudo Group 29 | ```bash 30 | sudo usermod -aG sudo newuser 31 | ``` 32 | 33 | ### 5. Verify Group Membership 34 | ```bash 35 | groups newuser 36 | ``` 37 | You should see 'sudo' listed among the groups. 38 | 39 | ### 6. Check sudoers File 40 | ```bash 41 | sudo visudo 42 | ``` 43 | Look for this line, which grants sudo privileges to members of the sudo group: 44 | ``` 45 | %sudo ALL=(ALL:ALL) ALL 46 | ``` 47 | 48 | ### 7. Test sudo Access 49 | Switch to the new user: 50 | ```bash 51 | su - newuser 52 | ``` 53 | Try running a command with sudo: 54 | ```bash 55 | sudo ls /root 56 | ``` 57 | You should be prompted for newuser's password, then see the contents of /root. 58 | 59 | ### 8. Verify sudo Privileges 60 | ```bash 61 | sudo -l 62 | ``` 63 | This will show the sudo privileges for the current user (newuser). 64 | 65 | ### 9. Exit newuser Session 66 | ```bash 67 | exit 68 | ``` 69 | This will return you to your original user session. 70 | 71 | ### 10. Cleanup 72 | Remove the test user and their home directory: 73 | ```bash 74 | sudo userdel -r newuser 75 | ``` 76 | 77 | ### 11. Verify Cleanup 78 | Confirm that the user has been removed: 79 | ```bash 80 | id newuser 81 | ls -l /home/newuser 82 | ``` 83 | Both commands should return errors, indicating that the user and their home directory no longer exist. 84 | 85 | ## Conclusion 86 | You've successfully: 87 | 1. Created a new user 88 | 2. Added them to the sudo group 89 | 3. Verified their sudo privileges 90 | 4. Properly cleaned up by removing the test user 91 | 92 | This process demonstrates the full lifecycle of adding and removing a user with sudo privileges. It's crucial to always clean up after such operations, especially in production or shared environments, to maintain system security and hygiene. 
93 | -------------------------------------------------------------------------------- /04UserGroupPermission/txt01User_GroupManagment_theory.txt: -------------------------------------------------------------------------------- 1 | Users and Groups 2 | 3 | Users: Individual user accounts that have specific privileges and permissions. 4 | Groups: Collections of users that share a common set of permissions. 5 | Superuser (root): The user with the highest level of permissions, 6 | capable of performing any operation on the system. 7 | 8 | 9 | User Attributes 10 | UID (User ID): A unique identifier assigned to each user. 11 | GID (Group ID): A unique identifier assigned to each group. 12 | Home Directory: The personal workspace for each user to store their files and personal settings. 13 | 14 | 15 | User Management Commands 16 | 17 | useradd, userdel, usermod: Commands to add, delete, and modify user accounts, respectively. 18 | groupadd, groupdel, groupmod: Commands to add, delete, and modify groups, respectively. 19 | passwd: Command to change the password of a user account. 20 | 21 | 22 | Directory and File Ownership 23 | Ownership: Assigning ownership of files and directories to specific users or groups. 24 | chown and chgrp: Commands to change the ownership and group of files and directories, respectively. 25 | 26 | Extra Important command 27 | sudo: To temporarily grant administrative privileges to a regular user. 28 | su: To switch to another user account. 29 | last: To show the last logins in the system. 30 | sudo cat /home/user_name/.bash_history: You can see whatever user was doing. 31 | 32 | -------------------------------------------------------------------------------- /04UserGroupPermission/txt02permission_access.txt: -------------------------------------------------------------------------------- 1 | In Linux and other Unix-like operating systems, the permission and access rights 2 | system is a crucial security feature that controls the level of access/interaction 3 | users and processes can have with files and directories. 4 | 5 | Types of Permissions 6 | Linux systems have three basic types of permissions: 7 | 8 | => Read (r): 9 | 10 | Files: Allows a user to read the contents of a file. 11 | Directories: Allows a user to list the files in the directory. 12 | 13 | => Write (w): 14 | 15 | Files: Allows a user to modify a file or delete it. 16 | Directories: Allows a user to create new files or delete files in the directory. 17 | 18 | => Execute (x): 19 | 20 | Files: Allows a user to execute a file. 21 | Directories: Allows a user to enter the directory and access files and directories inside. 22 | 23 | => Permission Groups 24 | Permissions are defined for three groups: 25 | User (u): The owner of the file/directory. 26 | Group (g): Users who are members of the file's group. 27 | Others (o): Users who are neither the file owner nor members of the file's group. 28 | 29 | 30 | => Viewing Permissions 31 | 32 | To view the permissions of a file or directory, use the ls -l command. 33 | The permission field will look something like this: 34 | 35 | -rwxr-xr-- 36 | 37 | or for full picture 38 | stat: To display file or directory permissions. 39 | 40 | The first character indicates the type of file (- for a regular file, d for a directory). 41 | The next three characters represent the user permissions (rwx). 42 | The next three represent the group permissions (r-x). 43 | The last three represent the permissions for others (r--). 
44 | 45 | => Modifying Permissions 46 | To modify the permissions of a file or directory, use the chmod command. 47 | 48 | Here are some examples: 49 | Adding permissions: 50 | chmod u+x filename # Add execute permission for the user 51 | chmod g+w filename # Add write permission for the group 52 | 53 | => Removing permissions: 54 | chmod o-r filename # Remove read permission for others 55 | 56 | => Setting permissions: 57 | 58 | chmod u=rwx,g=rx,o=r filename # Set permissions explicitly for user, group, and others 59 | 60 | 61 | 62 | => Using numerical notation: 63 | 64 | chmod 755 filename # Equivalent to u=rwx,g=rx,o=rx 65 | 66 | Extra theory: 67 | In Linux, numerical notations for file permissions are derived from 68 | the binary representation of the permissions for the user, group, and others. 69 | Each of these entities can have a permission value between 0 and 7, 70 | represented as a three-bit binary number. 71 | Here is the full breakdown of the numerical notations, 72 | which are the sum of read (r; 4), write (w; 2), and execute (x; 1) permissions: 73 | 74 | Octal Value | Binary Representation | Permission (rwx notation) 75 | 0 000 --- 76 | 1 001 --x 77 | 2 010 -w- 78 | 3 011 -wx 79 | 4 100 r-- 80 | 5 101 r-x 81 | 6 110 rw- 82 | 7 111 rwx 83 | 84 | Using this chart, we can understand how the numerical notation for chmod works. 85 | For instance: 86 | 87 | chmod 755 filename 88 | 7 (user): rwx (read, write, and execute permissions) 89 | 5 (group): r-x (read and execute permissions) 90 | 5 (others): r-x (read and execute permissions) 91 | 92 | More examples: 93 | 94 | chmod 644 filename 95 | 6 (user): rw- (read and write permissions) 96 | 4 (group): r-- (read permission) 97 | 4 (others): r-- (read permission) 98 | 99 | chmod 777 filename 100 | 7 (user): rwx (read, write, and execute permissions) 101 | 7 (group): rwx (read, write, and execute permissions) 102 | 7 (others): rwx (read, write, and execute permissions) 103 | 104 | chmod 000 filename 105 | 0 (user): --- (no permissions) 106 | 0 (group): --- (no permissions) 107 | 0 (others): --- (no permissions) 108 | 109 | 110 | 111 | => Ownership 112 | To change the ownership of a file or directory, use the chown command. 113 | Here are some examples: 114 | 115 | chown newowner filename 116 | 117 | Changing the group: 118 | chown :newgroup filename 119 | 120 | Changing both owner and group: 121 | chown newowner:newgroup filename 122 | 123 | First lab: Creating a new user and groups 124 | Second lab: Manipulating file permissions -------------------------------------------------------------------------------- /04UserGroupPermission/txtlab1UserGroupCreation.txt: -------------------------------------------------------------------------------- 1 | How to Create Users and Groups in Linux 2 | 3 | Step 1: Open the Terminal 4 | Open your terminal. You'll perform all the actions in this guide from the terminal. 5 | 6 | Step 2: Creating a Group 7 | Before creating a user, you may want to create a group. 8 | To create a group, use the groupadd command followed by the name of the group. 9 | Here, we create a group called dev_team: 10 | 11 | $ sudo groupadd dev_team 12 | You can verify that the group has been created by checking the file /etc/group: 13 | 14 | $ grep dev_team /etc/group 15 | 16 | 17 | Step 3: Creating a User 18 | 19 | To create a user, use the useradd command followed by the username. 20 | Here, we create a user called dev_alex: 21 | 22 | $ sudo useradd -m dev_alex 23 | The -m option creates a home directory for the user. 
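(Optional check, mirroring Step 2: you can confirm the user entry was created by looking in /etc/passwd.)

$ grep dev_alex /etc/passwd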
24 | 25 | 26 | Step 4: Setting a Password for the New User 27 | Set a password for the new user using the passwd command: 28 | 29 | $ sudo passwd dev_alex 30 | You will be prompted to enter and confirm a password for the user. Don't forget it. 31 | 32 | Step 5: Adding the User to a Group 33 | 34 | To add the user to the group you created earlier, use the usermod command: 35 | $ sudo usermod -aG dev_team dev_alex 36 | 37 | The -aG option appends the user to the supplementary groups mentioned. 38 | 39 | Step 6: Verify User and Group Assignment 40 | Verify the user's details with the id command: 41 | $ id dev_alex 42 | 43 | The output will display the user's UID, GID, and the groups the user belongs to, 44 | confirming that the user has been added to the dev_team group. 45 | 46 | Step 7: Switching to the New User 47 | To switch to the new user's environment, use the su command: 48 | 49 | $ su - dev_alex 50 | 51 | You can then verify that you are logged 52 | in as dev_alex using: 53 | 54 | $ whoami 55 | 56 | #Before deleting make sure to finish lab2. 57 | 58 | Step 8: Deleting Users and Groups 59 | If necessary, users can be deleted with the userdel command, 60 | and groups can be deleted with the groupdel command: 61 | 62 | $ sudo userdel dev_alex 63 | $ sudo groupdel dev_team 64 | 65 | Note: Be cautious when deleting users and groups, 66 | as this action can result in loss of data and system instability. 67 | 68 | Conclusion 69 | Now you know how to create, modify, and delete users and groups in Linux. 70 | It's always recommended to perform such actions with care to maintain system security 71 | and integrity. Practice these operations to get comfortable with 72 | user and group management in Linux. 73 | 74 | Once you feel ready, you can proceed with more advanced tasks like 75 | configuring file permissions and ownership. 76 | 77 | P.S 78 | To list all users and groups on a Linux system. 79 | 80 | 81 | Listing All Users 82 | 83 | Using getent command: 84 | $ getent passwd 85 | 86 | Listing All Groups 87 | Using getent command: 88 | $ getent group 89 | 90 | 91 | 92 | Notes: 93 | 94 | /etc/passwd file: 95 | This file contains information about the users on the system. 96 | Each line in this file represents login information for one user. 97 | 98 | /etc/group file: 99 | This file contains information about groups on the system. 100 | Each line in this file represents one group. 101 | 102 | getent command: This command is used to get entries from Name Service Switch libraries, 103 | and it can be used to fetch details from passwd and group databases 104 | which read from /etc/passwd and /etc/group respectively, 105 | as well as other databases configured in the /etc/nsswitch.conf file. -------------------------------------------------------------------------------- /04UserGroupPermission/txtlab2filepermissions.txt: -------------------------------------------------------------------------------- 1 | Lab2: Setting File Permissions 2 | 3 | Step 1: Create Another User 4 | Open the terminal and create a new user, 5 | for instance, dev_bob, but do not assign him to any group: 6 | 7 | $ sudo useradd -m dev_bob 8 | 9 | 10 | Set a password for dev_bob: 11 | $ sudo passwd dev_bob 12 | 13 | Step 2: Log In as dev_alex and Create a Python Script 14 | 15 | Switch to the dev_alex user: 16 | $ su - dev_alex 17 | 18 | Create a Python script called hello.py using a text editor. 
Here, we'll use nano: 19 | 20 | $ nano hello.py 21 | 22 | print("Hello, dev_team!") 23 | 24 | Save and exit the editor (for nano: Ctrl+O to write changes, Ctrl+X to exit). 25 | 26 | Step 3: Change File Permissions 27 | Set the file permissions so that only users in the dev_team group can execute the script. 28 | First, change the group ownership of the file to dev_team: 29 | 30 | $ sudo chgrp dev_team hello.py 31 | 32 | Next, set the file permissions to allow execution by the group: 33 | 34 | $ chmod 770 hello.py 35 | 36 | Step 4: Verify the File Permissions 37 | Check the file permissions using the ls command: 38 | 39 | $ ls -l hello.py 40 | The output should indicate that the file owner and group members have read, write, and execute permissions, 41 | while others have no permissions. 42 | 43 | Step 5: Test the File Execution 44 | First, try executing the script as dev_alex: 45 | 46 | $ python3 hello.py 47 | You should see the output: "Hello, dev_team!" 48 | 49 | Next, switch to the dev_bob user and try executing the script: 50 | 51 | $ su - dev_bob 52 | $ python3 /home/dev_alex/hello.py 53 | 54 | dev_bob should not have the permissions to execute the script, and you should see a permission denied error. 55 | 56 | Ok after a while Bob was promoted to become developer :) 57 | in the final step we will add dev_bob to the dev_team group and demonstrate that he can now execute the script. 58 | 59 | Step 6: Adding dev_bob to the dev_team Group 60 | Switch back to a user with sudo privileges and add dev_bob to the dev_team group: 61 | 62 | $ sudo usermod -aG dev_team dev_bob 63 | 64 | Step 7: Verify the Group Assignment for dev_bob 65 | Verify that dev_bob has been added to the dev_team group: 66 | 67 | $ groups dev_bob 68 | 69 | You should see dev_team in the list of groups for dev_bob. 70 | 71 | Step 8: Log In as dev_bob and Execute the Script 72 | Switch to the dev_bob user and execute the script: 73 | 74 | $ su - dev_bob 75 | Now, navigate to dev_alex's home directory and try running the script: 76 | 77 | $ python3 /home/dev_alex/hello.py 78 | 79 | This time, since dev_bob is a member of the dev_team group, 80 | he should be able to execute the script, and you should see the "Hello, dev_team!" message output to the terminal. -------------------------------------------------------------------------------- /05EverythingIsAFile/01linux-everything-is-file-concept.md: -------------------------------------------------------------------------------- 1 | # Introduction to the "Everything is a File" Concept in Linux 2 | 3 | ## Overview 4 | 5 | In Linux and other Unix-like operating systems, there's a fundamental design philosophy often summarized as "everything is a file." This concept is more than a metaphor; it's a guiding principle that influences how the operating system is structured and how users and applications interact with it. 6 | 7 | ## What Does "Everything is a File" Mean? 8 | 9 | At its core, the phrase means that Linux represents various system components—including hardware devices, processes, and inter-process communication channels—as files within the filesystem hierarchy. This abstraction allows users and programs to interact with these components using standard file operations like read, write, open, and close. 10 | 11 | ## Types of Files in Linux 12 | 13 | Linux categorizes files into several types, each serving different purposes but adhering to the file abstraction: 14 | 15 | 1. 
**Regular Files**: These are the most common files, containing data like text, images, or executable programs. 16 | 2. **Directories**: Special files that list other files and directories, forming the filesystem's hierarchical structure. 17 | 3. **Character and Block Device Files**: Found in the `/dev` directory, these files represent hardware devices. 18 | - Character device files handle data character by character (e.g., keyboards) 19 | - Block device files handle data in blocks (e.g., hard drives) 20 | 4. **Pipes and FIFOs**: Used for inter-process communication, allowing data to flow in one direction between processes. 21 | 5. **Sockets**: Facilitate network communication between processes, either on the same machine or over a network. 22 | 6. **Symbolic Links**: Files that point to other files or directories, similar to shortcuts. 23 | 7. **Special Files**: Includes files in `/proc` and `/sys`, which provide interfaces to kernel data structures. 24 | 25 | ## Benefits of the File Abstraction 26 | 27 | ### Simplifies Interaction 28 | By representing diverse system components as files, Linux allows users and applications to interact with them using familiar tools and system calls. 29 | 30 | ### Enhances Flexibility 31 | This uniform interface makes it easier to write programs that manipulate various system resources without needing specialized APIs for each type. 32 | 33 | ### Facilitates Scripting and Automation 34 | The file abstraction enables powerful scripting capabilities. Since devices and processes are accessible as files, shell scripts can easily manipulate them using standard command-line utilities. 35 | 36 | 37 | ## Conclusion 38 | 39 | The "everything is a file" philosophy is a cornerstone of Linux's design, offering a unified and consistent way to interact with a wide range of system components. This abstraction simplifies the operating system's complexity, making it more accessible to users and developers alike. By treating devices, processes, and network connections as files, Linux provides a powerful and flexible environment conducive to innovation and efficient system management. 40 | -------------------------------------------------------------------------------- /05EverythingIsAFile/02linux-commands-vs-python-code.md: -------------------------------------------------------------------------------- 1 | # Linux Commands vs. Python Code: Explicit File and Directory Operations 2 | 3 | This document compares common Linux commands for file and directory operations with their Python code equivalents, using explicit file operations. This comparison illustrates how the "Everything is a File" concept in Linux translates to programmatic operations in Python. 4 | 5 | ## 1. Creating a File 6 | 7 | ### Linux Command: 8 | ```bash 9 | touch example.txt 10 | ``` 11 | 12 | ### Python Code: 13 | ```python 14 | # Create an empty file 15 | f = open('example.txt', 'w') 16 | f.close() 17 | 18 | # Or create a file with content 19 | f = open('example.txt', 'w') 20 | f.write('Hello, Linux!') 21 | f.close() 22 | ``` 23 | 24 | ## 2. Writing to a File 25 | 26 | ### Linux Command: 27 | ```bash 28 | echo "Hello, Linux!" > example.txt 29 | ``` 30 | 31 | ### Python Code: 32 | ```python 33 | f = open('example.txt', 'w') 34 | f.write("Hello, Linux!") 35 | f.close() 36 | ``` 37 | 38 | ## 3. 
Reading from a File 39 | 40 | ### Linux Command: 41 | ```bash 42 | cat example.txt 43 | ``` 44 | 45 | ### Python Code: 46 | ```python 47 | f = open('example.txt', 'r') 48 | content = f.read() 49 | print(content) 50 | f.close() 51 | ``` 52 | 53 | ## 4. Appending to a File 54 | 55 | ### Linux Command: 56 | ```bash 57 | echo "This is a new line" >> example.txt 58 | ``` 59 | 60 | ### Python Code: 61 | ```python 62 | f = open('example.txt', 'a') 63 | f.write("\nThis is a new line") 64 | f.close() 65 | ``` 66 | 67 | ## 5. Renaming a File 68 | 69 | ### Linux Command: 70 | ```bash 71 | mv old_name.txt new_name.txt 72 | ``` 73 | 74 | ### Python Code: 75 | ```python 76 | import os 77 | os.rename('old_name.txt', 'new_name.txt') 78 | ``` 79 | 80 | ## 6. Removing a File 81 | 82 | ### Linux Command: 83 | ```bash 84 | rm example.txt 85 | ``` 86 | 87 | ### Python Code: 88 | ```python 89 | import os 90 | os.remove('example.txt') 91 | ``` 92 | 93 | ## 7. Creating a Directory 94 | 95 | ### Linux Command: 96 | ```bash 97 | mkdir new_directory 98 | ``` 99 | 100 | ### Python Code: 101 | ```python 102 | import os 103 | os.mkdir('new_directory') 104 | ``` 105 | 106 | ## 8. Listing Directory Contents 107 | 108 | ### Linux Command: 109 | ```bash 110 | ls -l 111 | ``` 112 | 113 | ### Python Code: 114 | ```python 115 | import os 116 | for item in os.listdir('.'): 117 | print(item) 118 | 119 | # For more details, similar to ls -l: 120 | import os 121 | from datetime import datetime 122 | for item in os.listdir('.'): 123 | stats = os.stat(item) 124 | print(f"{item:20} Size: {stats.st_size:10} Last modified: {datetime.fromtimestamp(stats.st_mtime)}") 125 | ``` 126 | 127 | ## 9. Removing a Directory 128 | 129 | ### Linux Command: 130 | ```bash 131 | rmdir empty_directory # For empty directories 132 | rm -r non_empty_directory # For non-empty directories 133 | ``` 134 | 135 | ### Python Code: 136 | ```python 137 | import os 138 | os.rmdir('empty_directory') # For empty directories 139 | 140 | import shutil 141 | shutil.rmtree('non_empty_directory') # For non-empty directories 142 | ``` 143 | 144 | ## 10. Changing File Permissions 145 | 146 | ### Linux Command: 147 | ```bash 148 | chmod 644 example.txt 149 | ``` 150 | 151 | ### Python Code: 152 | ```python 153 | import os 154 | os.chmod('example.txt', 0o644) 155 | ``` 156 | 157 | These examples demonstrate how file and directory operations in Linux can be performed both through command-line instructions and Python code, illustrating the consistent interface provided by the "Everything is a File" philosophy. 158 | -------------------------------------------------------------------------------- /05EverythingIsAFile/03linux-character-devices-pipes-examples.md: -------------------------------------------------------------------------------- 1 | # Linux Character Devices, Pipes, and Named Pipes: Commands and Python Code 2 | 3 | This document demonstrates the use of character devices, pipes, and named pipes (FIFOs) in Linux, showing both shell commands and equivalent Python code. 4 | 5 | ## 1. Character Devices 6 | 7 | Character devices in Linux are accessed as files, typically in the `/dev` directory. They allow reading or writing one character at a time. 
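One quick, optional way to confirm that something is a character device is to look at the first character of `ls -l` output in `/dev`; a leading `c` marks a character device (the owners and device numbers shown below are illustrative and will differ between systems):

```bash
ls -l /dev/null /dev/tty
# crw-rw-rw- 1 root root 1, 3 ... /dev/null
# crw-rw-rw- 1 root tty  5, 0 ... /dev/tty
```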
8 | 
9 | ### Example: Reading from the keyboard (stdin)
10 | 
11 | #### Linux Command:
12 | ```bash
13 | cat /dev/stdin
14 | # Type some text and press Ctrl+D to end input
15 | ```
16 | 
17 | #### Python Code:
18 | ```python
19 | import sys
20 | 
21 | print("Type some text (press Ctrl+D to end):")
22 | for line in sys.stdin:
23 |     print("You typed:", line.strip())
24 | ```
25 | 
26 | ### Example: Writing to the terminal (stdout)
27 | 
28 | #### Linux Command:
29 | ```bash
30 | echo "Hello, terminal!" > /dev/tty
31 | ```
32 | 
33 | #### Python Code:
34 | ```python
35 | import sys
36 | 
37 | sys.stdout.write("Hello, terminal!\n")
38 | sys.stdout.flush()
39 | ```
40 | 
41 | ## 2. Pipes
42 | 
43 | Pipes allow the output of one process to be used as input to another process.
44 | 
45 | ### Example: Using a pipe to filter output
46 | 
47 | #### Linux Command:
48 | ```bash
49 | ls -l | grep ".txt"
50 | ```
51 | 
52 | 
53 | ## 3. Named Pipes (FIFOs)
54 | 
55 | Named pipes, or FIFOs, are special files that act as a pipe between two processes.
56 | 
57 | ### Creating and Using a Named Pipe
58 | 
59 | #### Linux Commands:
60 | ```bash
61 | # Terminal 1
62 | mkfifo my_pipe
63 | cat > my_pipe
64 | 
65 | # Terminal 2
66 | cat < my_pipe
67 | ```
68 | 
69 | #### Python Code:
70 | ```python
71 | import os
72 | 
73 | # Create the named pipe
74 | os.mkfifo("my_pipe")
75 | 
76 | # In one Python script (writer.py)
77 | f = open("my_pipe", "w")
78 | f.write("Hello from the named pipe!")
79 | f.close()
80 | 
81 | 
82 | # In another Python script (reader.py)
83 | import os
84 | f = open("my_pipe", "r")
85 | message = f.read()
86 | print("Received:", message)
87 | f.close()
88 | 
89 | # Clean up
90 | os.remove("my_pipe")
91 | ```
92 | 
93 | To use this Python example:
94 | 1. In one terminal, run the writer script: `python writer.py`
95 | 2. In another terminal, run the reader script: `python reader.py`
96 | 
97 | ## Multiple Messages: Canonical Producer-Consumer Pattern
98 | #### Python Code:
99 | 
100 | ```python
101 | # writer.py
102 | import os
103 | 
104 | # Open the named pipe in write mode
105 | f = open("my_pipe", "w")
106 | 
107 | try:
108 |     while True:
109 |         # Get the message from the user
110 |         message = input("Enter message to send (type 'exit' to quit): ")
111 |         if message == 'exit':
112 |             # Send the "exit" message to the reader before quitting
113 |             f.write("exit\n")
114 |             f.flush()  # Ensure the "exit" message is sent immediately
115 |             break
116 |         # Write the message to the pipe
117 |         f.write(message + "\n")
118 |         f.flush()  # Ensure the message is sent immediately
119 | finally:
120 |     f.close()
121 | 
122 | # reader.py
123 | import os
124 | import time
125 | 
126 | # Open the named pipe in read mode
127 | f = open("my_pipe", "r")
128 | 
129 | try:
130 |     while True:
131 |         # Read a message from the pipe
132 |         message = f.readline().strip()  # Read one line at a time
133 |         if message:
134 |             print("Received:", message)
135 |             if message == 'exit':
136 |                 print("Exit message received. Stopping the reader.")
137 |                 break
138 |         else:
139 |             # If no message is received, wait for a bit before checking again
140 |             time.sleep(1)  # This prevents the loop from running too fast
141 | finally:
142 |     f.close()
143 | ```
144 | 
145 | These examples demonstrate how character devices, pipes, and named pipes in Linux can be interacted with as if they were files, both through shell commands and Python code. This illustrates the "Everything is a File" philosophy in Linux, where even these specialized system features are accessed through a consistent file-like interface.
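As a small supplement to the pipe example in section 2 (which showed only the shell command), here is one possible Python sketch of the same `ls -l | grep ".txt"` pipeline, built with the standard `subprocess` module:

```python
import subprocess

# Start `ls -l` with its stdout connected to a pipe
ls = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)

# Connect the read end of that pipe to grep's stdin
grep = subprocess.Popen(["grep", ".txt"], stdin=ls.stdout, stdout=subprocess.PIPE)

# Close our copy of ls's stdout so ls receives SIGPIPE if grep exits early
ls.stdout.close()

output, _ = grep.communicate()
print(output.decode())
```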
146 | -------------------------------------------------------------------------------- /05EverythingIsAFile/04linux-terminal-character-device.md: -------------------------------------------------------------------------------- 1 | # Understanding Terminal Input and Echoing in Linux 2 | 3 | This document explores the concepts of terminal input modes and echoing in Linux, demonstrating how these features relate to the "Everything is a File" philosophy and how they can be manipulated using both shell commands and Python code. 4 | 5 | ## Terminal Input Modes 6 | 7 | Linux terminals operate in different input modes: 8 | 9 | 1. **Canonical Mode (Cooked Mode)**: 10 | - Default mode 11 | - Input is line-buffered (sent after pressing Enter) 12 | - Echoing is enabled (typed characters are displayed immediately) 13 | 14 | 2. **Non-Canonical Mode (Raw Mode)**: 15 | - Input is not line-buffered (each character sent immediately) 16 | - Echoing can be enabled or disabled 17 | - Used by programs needing to process each keystroke (e.g., text editors) 18 | 19 | ## Reading from the Keyboard (stdin) 20 | 21 | ### Linux Command 22 | 23 | ```bash 24 | cat /dev/stdin 25 | # Type some text and press Ctrl+D to end input 26 | ``` 27 | 28 | #### What Happens: 29 | - Terminal is in canonical mode by default 30 | - Echoing is enabled, so typed characters appear immediately 31 | - `cat` reads from `/dev/stdin`, which is connected to keyboard input 32 | - Ctrl+D sends an EOF signal, ending input 33 | 34 | ### Python Code 35 | 36 | ```python 37 | import sys 38 | 39 | print("Type some text (press Ctrl+D to end):") 40 | for line in sys.stdin: 41 | print("You typed:", line.strip()) 42 | ``` 43 | 44 | ## Modifying Terminal Behavior 45 | 46 | ### Disabling Echoing 47 | 48 | #### Linux Commands 49 | 50 | ```bash 51 | # Disable echoing 52 | stty -echo 53 | cat /dev/stdin 54 | # Type some text (it won't be displayed) and press Ctrl+D 55 | 56 | # Re-enable echoing 57 | stty echo 58 | ``` 59 | 60 | #### Python Code (Using getpass) 61 | 62 | ```python 63 | import getpass 64 | 65 | print("Type some text (it won't be displayed):") 66 | input_text = getpass.getpass(prompt='') 67 | print("You typed:", input_text) 68 | ``` 69 | 70 | ## Checking Terminal Settings 71 | 72 | ### Linux Command 73 | 74 | ```bash 75 | stty -a 76 | ``` 77 | 78 | This displays all current terminal settings. Look for `echo` in the output to see if echoing is enabled. 79 | 80 | ## Summary 81 | 82 | - **Echoing**: By default, terminals display each character typed due to the ECHO flag. 83 | - **Canonical Mode**: Input is line-buffered and sent when Enter is pressed. 84 | - **Non-Canonical Mode**: Input is sent immediately, character by character. 85 | - **`cat /dev/stdin` Behavior**: With default settings, input is visible as it's typed. 86 | 87 | These examples demonstrate how terminal input and output in Linux adhere to the "Everything is a File" philosophy. Even keyboard input and terminal settings are accessed and manipulated through file-like interfaces (`/dev/stdin`, `stty`), showcasing the consistency and flexibility of the Linux system design. 88 | -------------------------------------------------------------------------------- /05EverythingIsAFile/05file-descriptors-vs-inodes.md: -------------------------------------------------------------------------------- 1 | # File Descriptors vs. Inode Numbers 2 | 3 | File descriptors and inode numbers are different concepts that pertain to different layers in a filesystem's architecture. 
Let's delve into each term individually to differentiate them properly: 4 | 5 | ## Inode Number 6 | 7 | ### Definition 8 | An inode (index node) is a data structure on a filesystem on Unix and Linux systems which stores information about a file or directory, including attributes (like permissions, ownership) and disk block locations which essentially define the file or directory. 9 | 10 | ### Characteristics 11 | - **Uniqueness**: Every file or directory has a unique inode number within the filesystem. 12 | - **Persistence**: Inode information is persistent across reboots; it resides on the disk until the file is deleted. 13 | - **Usage**: It is used by the filesystem to manage files and directories. 14 | 15 | ## File Descriptor 16 | 17 | ### Definition 18 | A file descriptor is an abstract indicator used by the kernel to access a file or other input/output resource, such as a pipe or network socket. It is typically an integer that is used to identify an open file within a process. 19 | 20 | ### Characteristics 21 | - **Uniqueness**: File descriptors are unique per process. Different processes can have file descriptors with the same number, but within a process, each open file has a unique descriptor. 22 | - **Persistence**: File descriptors are not persistent across reboots, and they cease to exist when a process terminates. 23 | - **Usage**: File descriptors are used by processes to read from or write to open files through system calls like `read()` and `write()`. 24 | 25 | ## Example 26 | 27 | A file named "example.txt" will have an inode number that contains metadata information about the file. This inode number is unique on the filesystem and can be used to identify the file at the filesystem level. 28 | 29 | When a process opens "example.txt" to read or write, the kernel assigns a file descriptor to this open file in the context of the process. This file descriptor is used by the process to perform operations on the open file. 30 | 31 | ## Summary 32 | 33 | - **File descriptor**: Process-specific and used to refer to an open file during the runtime of a process. 34 | - **Inode number**: Filesystem-specific and used to refer to a file or directory persistently within the filesystem, irrespective of whether any process has the file open. 35 | -------------------------------------------------------------------------------- /05EverythingIsAFile/file_inode_dentry.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | # File abstraction, what is relevant only inode object 4 | 5 | class File: 6 | def __init__(self, name, inode): 7 | self.name = name 8 | self.inode = inode 9 | 10 | def read(self): 11 | data_blocks = self.inode.data_block_pointers 12 | content = "" 13 | for block_number in data_blocks: 14 | content += f"Data Block {block_number}: {disk_space[block_number]}\n" 15 | return content 16 | 17 | def write(self, data): 18 | pass # In this abstraction, writing to the file is not implemented. 
19 | 20 | 21 | 22 | 23 | # Defining the inode class which will store metadata about each file 24 | # An inode object with pointers to data blocks where the file's content is stored 25 | class Inode: 26 | def __init__(self, inode_number, file_type, permissions, size, data_block_pointers): 27 | self.inode_number = inode_number 28 | self.file_type = file_type 29 | self.permissions = permissions 30 | self.size = size 31 | self.data_block_pointers = data_block_pointers 32 | self.timestamps = {'creation': None, 'modification': None} 33 | 34 | 35 | 36 | # Defining the dentry class to map file names to inodes and store in a cache 37 | class Dentry: 38 | def __init__(self): 39 | self.dentry_cache = {} 40 | 41 | def add_dentry(self, file_name, file): 42 | self.dentry_cache[file_name] = file 43 | 44 | def lookup_dentry(self, file_name): 45 | return self.dentry_cache.get(file_name) 46 | 47 | 48 | 49 | # Data blocks on the "disk" (for demonstration purposes) 50 | disk_space = { 51 | 1: "Data block 1: The actual content of the file (part 1)...", 52 | 2: "Data block 2: The actual content of the file (part 2)...", 53 | 3: "Data block 3: The actual content of the file (part 3)...", 54 | 4: "", # Empty data block for directories or unused blocks 55 | 5: "", 56 | 6: "", 57 | } 58 | 59 | # Creating a few Inode objects to represent different files and directories in our filesystem 60 | inode1 = Inode(inode_number=1, file_type='regular file', permissions='rw-r--r--', size=1024, data_block_pointers=[1, 2, 3]) 61 | inode2 = Inode(inode_number=2, file_type='directory', permissions='rwxr-xr-x', size=4096, data_block_pointers=[4, 5, 6]) 62 | 63 | # Creating File and Directory objects that associate with specific Inodes 64 | file1 = File(name='file1.txt', inode=inode1) 65 | directory1 = File(name='my_directory', inode=inode2) 66 | 67 | # Adding entries to the dentry cache, associating file and directory names with File objects 68 | dentry_cache = Dentry() 69 | dentry_cache.add_dentry(file1.name, file1) 70 | dentry_cache.add_dentry(directory1.name, directory1) 71 | 72 | # Demonstrating a lookup in the dentry cache to find a file or directory based on its name 73 | name_to_lookup = 'file1.txt' 74 | found_entry = dentry_cache.lookup_dentry(name_to_lookup) 75 | 76 | if found_entry: 77 | if found_entry.inode.file_type == 'regular file': 78 | print(f"File '{name_to_lookup}' found with the following details:") 79 | print(f" - Name: {found_entry.name}") 80 | print(f" - Inode number: {found_entry.inode.inode_number}") 81 | print(f" - File type: {found_entry.inode.file_type}") 82 | print(f" - Permissions: {found_entry.inode.permissions}") 83 | print(f" - Size: {found_entry.inode.size} bytes") 84 | print(f" - Content:\n{found_entry.read()}") 85 | elif found_entry.inode.file_type == 'directory': 86 | print(f"Directory '{name_to_lookup}' found with the following details:") 87 | print(f" - Name: {found_entry.name}") 88 | print(f" - Inode number: {found_entry.inode.inode_number}") 89 | print(f" - File type: {found_entry.inode.file_type}") 90 | print(f" - Permissions: {found_entry.inode.permissions}") 91 | else: 92 | print(f"Entry '{name_to_lookup}' not found in the file system.") 93 | -------------------------------------------------------------------------------- /05EverythingIsAFile/linux_file_types.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import socket 4 | import time 5 | 6 | # Step 1: Working with Regular Files 7 | # Create regular files and write data to them 8 
| with open('file1.txt', 'w') as f: 9 | f.write("Hello, this is a regular file.\n") 10 | 11 | # Step 2: Working with Directories 12 | # Create a directory and organize files within it 13 | os.mkdir('example_dir') 14 | os.rename('file1.txt', 'example_dir/file1.txt') 15 | 16 | # Step 3: Working with Symbolic Links (Symlinks) 17 | # Create a symbolic link to a file 18 | os.symlink('file1.txt', 'example_dir/symlink_to_file1.txt') 19 | 20 | # Step 4: Working with FIFO (Named Pipes) 21 | # Create a named pipe and write data to it in a separate process 22 | os.mkfifo('example_fifo') 23 | def fifo_writer(): 24 | with open('example_fifo', 'w') as f: 25 | f.write("Hello from the FIFO writer!") 26 | 27 | from threading import Thread 28 | writer_thread = Thread(target=fifo_writer) 29 | writer_thread.start() 30 | 31 | # Open the named pipe and read data from it 32 | with open('example_fifo', 'r') as f: 33 | print(f.read()) # Output: Hello from the FIFO writer! 34 | 35 | # Clean up FIFO file 36 | os.remove('example_fifo') 37 | 38 | # Step 5: Working with Sockets 39 | # Open a socket for inter-process communication 40 | def socket_server(): 41 | server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 42 | server_socket.bind(('localhost', 65432)) 43 | server_socket.listen() 44 | conn, addr = server_socket.accept() 45 | with conn: 46 | conn.sendall(b'Hello from the socket server!') 47 | 48 | server_thread = Thread(target=socket_server) 49 | server_thread.start() 50 | 51 | # Allow the server to start listening before the client attempts to connect 52 | time.sleep(0.1) 53 | 54 | client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 55 | client_socket.connect(('localhost', 65432)) 56 | data = client_socket.recv(1024) 57 | print(data.decode('utf-8')) # Output: Hello from the socket server! 
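# (Added illustration, not part of the original script: a socket is file-like too;
#  it exposes an integer file descriptor via fileno(), just as open files do.)
print("Client socket file descriptor:", client_socket.fileno())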
58 | 59 | # Cleanup: close socket and remove created files and directories 60 | client_socket.close() 61 | # os.remove('example_dir/symlink_to_file1.txt') 62 | # os.remove('example_dir/file1.txt') 63 | # os.rmdir('example_dir') -------------------------------------------------------------------------------- /05EverythingIsAFile/sockets/01simple_socket_server.txt: -------------------------------------------------------------------------------- 1 | We need to create a 2 files server.py and client.py 2 | 3 | server.py 4 | import socket 5 | from threading import Thread 6 | 7 | def handle_client(conn, addr): 8 | while True: 9 | # Receiving a message from the client 10 | data = conn.recv(1024) 11 | if not data: 12 | break 13 | 14 | print(f"Received message from {addr}: {data.decode('utf-8')}") 15 | 16 | # Sending a response to the client 17 | conn.sendall(b'Received your message!') 18 | 19 | conn.close() 20 | print(f"Connection with {addr} closed.") 21 | 22 | def socket_server(): 23 | # Creating a new socket object using IPv4 and TCP protocols 24 | server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 25 | 26 | # Setting the SO_REUSEADDR option to allow the socket to be reused immediately 27 | server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 28 | 29 | # Binding the socket to localhost on port 65432 30 | server_socket.bind(('localhost', 65432)) 31 | 32 | # Starting to listen for connections 33 | server_socket.listen() 34 | print("Server is listening...") 35 | 36 | while True: 37 | # Accepting a new connection 38 | conn, addr = server_socket.accept() 39 | print(f"Connected by {addr}") 40 | 41 | # Sending a welcome message to the client 42 | conn.sendall(b'Hello from the socket server!') 43 | 44 | # Creating a new thread to handle the client 45 | client_thread = Thread(target=handle_client, args=(conn, addr)) 46 | client_thread.start() 47 | 48 | # Start the socket server 49 | socket_server() 50 | 51 | 52 | client.py 53 | import socket 54 | 55 | def client_chat(): 56 | # Creating a new socket object using IPv4 and TCP protocols 57 | client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 58 | 59 | # Connecting to the server at localhost on port 65432 60 | client_socket.connect(('localhost', 65432)) 61 | 62 | # Receiving a welcome message from the server 63 | data = client_socket.recv(1024) 64 | print(data.decode('utf-8')) 65 | 66 | while True: 67 | # Taking user input and sending it to the server 68 | msg = input("You: ") 69 | 70 | # If the user types "exit", close the connection and exit the loop 71 | if msg.lower() == "exit": 72 | break 73 | 74 | # Send the user's message to the server 75 | client_socket.sendall(msg.encode('utf-8')) 76 | 77 | # Receiving a response from the server and printing it 78 | data = client_socket.recv(1024) 79 | print(f"Server: {data.decode('utf-8')}") 80 | 81 | # Close the socket connection 82 | client_socket.close() 83 | print("Connection closed.") 84 | 85 | # Start the client chat 86 | client_chat() 87 | 88 | 89 | How to run the scripts: 90 | Save the server code in a file named server.py. 91 | 92 | Save the client code in a file named client.py. 93 | 94 | Open two terminal windows: one for the server and one for the client. 
95 | 96 | In the server terminal, navigate to the directory where server.py is located and run the script with the command: 97 | 98 | 99 | python3 server.py 100 | In the client terminal, navigate to the directory where client.py is located and run the script with the command: 101 | 102 | python3 client.py 103 | You should see the welcome message from the server in the client terminal, and now you can start typing messages in the client terminal. 104 | 105 | Messages typed in the client terminal will be sent to the server and a response will be received and displayed. 106 | 107 | To exit the chat, type "exit" in the client terminal. 108 | 109 | That's it! Now you have a simple server-client chat system running on your local machine. 110 | Just with Linux built-in sockets. 111 | 112 | -------------------------------------------------------------------------------- /05EverythingIsAFile/sockets/client.py: -------------------------------------------------------------------------------- 1 | import socket 2 | 3 | def client_chat(): 4 | # Creating a new socket object using IPv4 and TCP protocols 5 | client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 6 | 7 | # Connecting to the server at localhost on port 65432 8 | client_socket.connect(('localhost', 65432)) 9 | 10 | # Receiving a welcome message from the server 11 | data = client_socket.recv(1024) 12 | print(data.decode('utf-8')) 13 | 14 | while True: 15 | # Taking user input and sending it to the server 16 | msg = input("You: ") 17 | 18 | # If the user types "exit", close the connection and exit the loop 19 | if msg.lower() == "exit": 20 | break 21 | 22 | # Send the user's message to the server 23 | client_socket.sendall(msg.encode('utf-8')) 24 | 25 | # Receiving a response from the server and printing it 26 | data = client_socket.recv(1024) 27 | print(f"Server: {data.decode('utf-8')}") 28 | 29 | # Close the socket connection 30 | client_socket.close() 31 | print("Connection closed.") 32 | 33 | # Start the client chat 34 | client_chat() 35 | -------------------------------------------------------------------------------- /05EverythingIsAFile/sockets/server.py: -------------------------------------------------------------------------------- 1 | import socket 2 | from threading import Thread 3 | 4 | def handle_client(conn, addr): 5 | while True: 6 | # Receiving a message from the client 7 | data = conn.recv(1024) 8 | if not data: 9 | break 10 | 11 | print(f"Received message from {addr}: {data.decode('utf-8')}") 12 | 13 | # Sending a response to the client 14 | conn.sendall(b'Received your message!') 15 | 16 | conn.close() 17 | print(f"Connection with {addr} closed.") 18 | 19 | def socket_server(): 20 | # Creating a new socket object using IPv4 and TCP protocols 21 | server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 22 | 23 | # Setting the SO_REUSEADDR option to allow the socket to be reused immediately 24 | server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 25 | 26 | # Binding the socket to localhost on port 65432 27 | server_socket.bind(('localhost', 65432)) 28 | 29 | # Starting to listen for connections 30 | server_socket.listen() 31 | print("Server is listening...") 32 | 33 | while True: 34 | # Accepting a new connection 35 | conn, addr = server_socket.accept() 36 | print(f"Connected by {addr}") 37 | 38 | # Sending a welcome message to the client 39 | conn.sendall(b'Hello from the socket server!') 40 | 41 | # Creating a new thread to handle the client 42 | client_thread = 
Thread(target=handle_client, args=(conn, addr))
43 |         client_thread.start()
44 | 
45 | # Start the socket server
46 | socket_server()
47 | 
-------------------------------------------------------------------------------- /05EverythingIsAFile/symbolichardlinks/symbolic/config_dev.json: --------------------------------------------------------------------------------
1 | {
2 |     "environment": "development",
3 |     "database": {
4 |         "host": "localhost",
5 |         "port": 5432
6 |     }
7 | }
8 | 
9 | 
10 | 
-------------------------------------------------------------------------------- /05EverythingIsAFile/symbolichardlinks/symbolic/config_prod.json: --------------------------------------------------------------------------------
1 | {
2 |     "environment": "production",
3 |     "database": {
4 |         "host": "prod.mydatabase.com",
5 |         "port": 5432
6 |     }
7 | }
8 | 
-------------------------------------------------------------------------------- /05EverythingIsAFile/symbolichardlinks/symbolic/read_config.py: --------------------------------------------------------------------------------
1 | import json
2 | 
3 | def read_config(filename):
4 |     with open(filename, 'r') as file:
5 |         config = json.load(file)
6 |     return config
7 | 
8 | config = read_config('config.json')
9 | print(f"Environment: {config['environment']}")
10 | print(f"Database host: {config['database']['host']}")
11 | print(f"Database port: {config['database']['port']}")
12 | 
-------------------------------------------------------------------------------- /05EverythingIsAFile/symbolichardlinks/symbolic/theory_instructions.txt: --------------------------------------------------------------------------------
1 | Symbolic (Soft) Links
2 | A symbolic link, also known as a soft link, is a special kind of file that points to another file or directory.
3 | It's essentially a shortcut to another file or directory.
4 | If you delete the symbolic link, the original file remains unaffected.
5 | If you delete the original file, the symbolic link will "break," as it points to a non-existent file.
6 | 
7 | To create a symbolic link, you use the ln command with the -s option (short for symbolic), as follows:
8 | 
9 | ln -s target_file link_name
10 | 
11 | Symbolic links can help when you want to easily switch between different versions of a file or keep files synchronized across different directories.
12 | 
13 | Assignment: Dynamic Configuration Switching with Symbolic Links
14 | In a development environment, you often have various configurations - for development, testing, production, etc.
15 | Symbolic links can facilitate smooth transitions between these environments without changing the code or manually copying and replacing files.
16 | 17 | Let's create a scenario where we have a Python script that reads from a configuration file, 18 | and we will switch between different configurations using symbolic links dynamically: 19 | 20 | First, create two configuration files: 21 | 22 | config_dev.json 23 | { 24 | "environment": "development", 25 | "database": { 26 | "host": "localhost", 27 | "port": 5432 28 | } 29 | } 30 | 31 | config_prod.json 32 | { 33 | "environment": "production", 34 | "database": { 35 | "host": "prod.mydatabase.com", 36 | "port": 5432 37 | } 38 | } 39 | 40 | Now, in your Python script, you'll always read from a file called config.json: 41 | 42 | read_config.py: 43 | 44 | import json 45 | 46 | def read_config(filename): 47 | with open(filename, 'r') as file: 48 | config = json.load(file) 49 | return config 50 | 51 | config = read_config('config.json') 52 | print(f"Environment: {config['environment']}") 53 | print(f"Database host: {config['database']['host']}") 54 | print(f"Database port: {config['database']['port']}") 55 | 56 | To start, create a symbolic link to your development configuration: 57 | 58 | ln -s config_dev.json config.json 59 | 60 | Run your script: 61 | 62 | python read_config.py 63 | 64 | You should see the development configuration output. 65 | 66 | Now, to switch to the production configuration, update the symbolic link: 67 | -sf says create soft link forcefully, basically overwrites old version 68 | 69 | ln -sf config_prod.json config.json 70 | Run your script again: 71 | 72 | python read_config.py 73 | 74 | 75 | You'll see it now uses the production configuration. 76 | 77 | Conclusion: 78 | This setup allows you to switch between configurations quickly without modifying your Python script or manually copying files, 79 | providing a dynamic way to switch configurations. Soft Links are very useful, because they can point not only files but directories as well. 80 | 81 | 82 | 83 | 84 | 85 | -------------------------------------------------------------------------------- /05EverythingIsAFile/symbolichardlinks/theory.txt: -------------------------------------------------------------------------------- 1 | => Hard Links 2 | 3 | Explanation: 4 | Hard link is a reference or pointer to the physical data on the disk. 5 | When you create a hard link, it will point to the same inode as the original file, 6 | essentially referring to the same physical location on the disk. 7 | Deleting the original file will not affect the hard link as they share the same inode. 8 | Hard links cannot cross different filesystems and cannot link to directories. 9 | 10 | Usage: 11 | To create a hard link, you can use the ln command as follows: 12 | 13 | ln source_file hard_link_name 14 | 15 | To check the inode number of files, you can use the ls command with -i option: 16 | ls -i source_file hard_link_name 17 | 18 | => Soft Links (Symbolic Links) 19 | 20 | Explanation: 21 | Soft link, also known as a symbolic link, is a special kind of file that points to another file or directory. 22 | Soft links can cross file systems and can link to directories. 23 | 24 | If you delete the target file to which a soft link points, the soft link becomes broken, as it points to a non-existing file. 
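One handy way to spot broken symbolic links (optional; uses GNU find) is:

find . -xtype l

This lists symlinks under the current directory whose targets no longer exist.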
25 | 26 | Usage: 27 | To create a soft link, you use the ln command with the -s option: 28 | 29 | ln -s source_file soft_link_name 30 | 31 | To check if a file is a symbolic link and identify its target, you can use ls command with -l option: 32 | 33 | ls -l soft_link_name 34 | Examples: 35 | Let's assume we have a file named file1.txt and we wish to create hard and soft links to this file. 36 | 37 | Creating a hard link: 38 | ln file1.txt hard_link_to_file1.txt 39 | 40 | Creating a soft link: 41 | ln -s file1.txt soft_link_to_file1.txt 42 | 43 | Listing files and observing the links: 44 | 45 | ls -li 46 | You would see both hard_link_to_file1.txt and soft_link_to_file1.txt in the output, 47 | alongside their inode numbers (for hard links) and target files (for soft links). -------------------------------------------------------------------------------- /05EverythingIsAFile/textfiledescriptorVSinode.txt: -------------------------------------------------------------------------------- 1 | File descriptors VS inode numbers 2 | are different concepts and they pertain to different layers in a filesystem's architecture. 3 | Let's delve into each term individually to differentiate them properly: 4 | 5 | Inode Number 6 | Definition: An inode (index node) is a data structure on a filesystem on Unix and Linux systems 7 | which stores information about a file or directory, including attributes (like permissions, ownership) 8 | and disk block locations which essentially define the file or directory. 9 | 10 | Uniqueness: Every file or directory has a unique inode number within the filesystem. 11 | Persistence: Inode information is persistent across reboots; it resides on the disk until the file is deleted. 12 | Usage: It is used by the filesystem to manage files and directories. 13 | 14 | 15 | 16 | File Descriptor 17 | Definition: A file descriptor is an abstract indicator used by the kernel to access a file or other input/output resource, 18 | such as a pipe or network socket. It is typically an integer that is used to identify an open file within a process. 19 | 20 | Uniqueness: File descriptors are unique per process. 21 | Different processes can have file descriptors with the same number, but within a process, each open file has a unique descriptor. 22 | 23 | Persistence: File descriptors are not persistent across reboots, and they cease to exist when a process terminates. 24 | 25 | 26 | Usage: File descriptors are used by processes to read from or write to open files 27 | through system calls like read() and write(). 28 | 29 | Example: 30 | 31 | A file named "example.txt" will have an inode number that contains metadata information about the file. 32 | This inode number is unique on the filesystem and can be used to identify the file at the filesystem level. 33 | 34 | When a process opens "example.txt" to read or write, 35 | the kernel assigns a file descriptor to this open file in the context of the process. 36 | This file descriptor is used by the process to perform operations on the open file. 37 | 38 | So, to sum up: 39 | 40 | File descriptor is process-specific and is used to refer to an open file during the runtime of a process. 41 | 42 | Inode number is filesystem-specific and is used to refer to a file or directory persistently within the filesystem, 43 | irrespective of whether any process has the file open. 
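A short illustrative Python sketch of the difference (it assumes a file named example.txt exists in the current directory):

import os

fd = os.open("example.txt", os.O_RDONLY)   # the kernel hands this process a file descriptor
info = os.fstat(fd)                        # the metadata comes from the file's inode
print("File descriptor (per-process, temporary):", fd)
print("Inode number (per-filesystem, persistent):", info.st_ino)
os.close(fd)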
-------------------------------------------------------------------------------- /06CompilingLinking/01BasicIdea.txt: --------------------------------------------------------------------------------
1 | The compiler we will use in our class is Clang.
2 | 
3 | Clang is a compiler front end for the programming languages C, C++, Objective-C,
4 | Objective-C++, OpenMP, OpenCL, and CUDA. It uses LLVM as its back end
5 | and has been part of the LLVM release cycle since LLVM 2.6.
6 | 
7 | Features and Goals:
8 | 
9 | Performance: Clang is designed to perform both compilation and linking
10 | extremely quickly. It aims to reduce both the memory usage and runtime
11 | of these processes compared to its predecessors.
12 | 
13 | Diagnostics:
14 | Clang provides rich, detailed diagnostics (error and warning messages) for developers,
15 | which are extremely helpful for debugging.
16 | 
17 | Modularity and Library-Based Design:
18 | Clang is designed to be able to replace parts of the compiler chain with
19 | other parts or extend it in various ways, all while being API compatible with GCC
20 | (GNU Compiler Collection) where command-line options are concerned.
21 | 
22 | Static Code Analysis:
23 | Clang provides a static code analyzer that offers a variety of checks to identify bugs,
24 | memory leaks, and other potential issues in the code, which enhances code quality
25 | and reliability.
26 | 
27 | The Clang compiler has been adopted by a variety of tech companies, organizations,
28 | and open-source projects due to its performance, detailed diagnostics,
29 | and extensibility. Some notable adopters of Clang include:
30 | 
31 | Apple:
32 | Apple uses Clang as the default compiler for its Xcode IDE.
33 | Clang is used to compile applications for macOS, iOS, watchOS, and tvOS.
34 | 
35 | Google:
36 | Google utilizes Clang for several projects, including Android (since Android Nougat)
37 | and Chrome. The Chromium project also recommends Clang for
38 | building Chromium on various platforms.
39 | 
40 | Sony:
41 | Sony has adopted Clang for developing software on PlayStation platforms.
42 | 
43 | General Responsibilities of Compilers
44 | 
45 | In general, the responsibility of a compiler is to translate
46 | high-level source code written in one programming language
47 | into a lower-level language, often machine code or an intermediate code,
48 | so that it can be executed by a computer. The compilation process can be broken
49 | down into several stages.
50 | 
51 | Preprocessing:
52 | The preprocessor handles directives like #include and #define.
53 | It expands included files, macros, and evaluates conditional compilation statements,
54 | producing expanded source code ready for the next stage.
55 | 
56 | Compiling to Assembly Code:
57 | The compiler translates the preprocessed high-level source code into assembly code.
58 | This representation is closer to machine code but still maintains a level of human readability,
59 | representing the operations and data movements that the source code describes.
60 | 
61 | Assembling to Object Code:
62 | The assembler converts the assembly code into object code.
63 | Object code is a binary representation of the program, consisting of machine instructions, data, and information needed for linking.
64 | 
65 | Linking:
66 | The linker takes one or more object files and combines them into a single executable file.
67 | It resolves symbols, addresses, and handles the arrangement of data and code in memory,
68 | producing a file that can be executed by the system.
69 | 
70 | Debugging:
71 | During compilation, the compiler can embed additional debugging information into the executable.
72 | This information is crucial for debugging tools, as it maps machine instructions back to the original source code,
73 | aiding developers in identifying and fixing issues.
74 | 
75 | These responsibilities demonstrate how a compiler transforms human-readable code
76 | into an executable program, with each step progressively moving closer to machine-level
77 | representation while facilitating the development process through debugging.
78 | 
79 | In practice, we will concentrate on Clang.
80 | 
81 | Open and follow
82 | 01LabIntrotoClang.txt
-------------------------------------------------------------------------------- /06CompilingLinking/01LabIntrotoClang.txt: --------------------------------------------------------------------------------
1 | 
2 | Lab 1: Basic Ideas of Compiling
3 | 
4 | Make sure to install the clang compiler:
5 | sudo apt install clang
6 | 
7 | Step 1: Compile the Source Code
8 | First, write a simple C++ program in a file named hello_world.cpp.
9 | 
10 | #include <iostream>
11 | 
12 | int main() {
13 |     std::cout << "Hello, World!" << std::endl;
14 |     return 0;
15 | }
16 | Then, compile this code using clang++.
17 | 
18 | clang++ hello_world.cpp -o hello
19 | 
20 | 
21 | Step 2: Delete the Compiled File
22 | 
23 | rm hello
24 | 
25 | 
26 | Step 3: Compile and Save Intermediate Files
27 | Compile the code again, but this time save all the intermediate files.
28 | 
29 | clang++ -save-temps hello_world.cpp -o hello
30 | 
31 | You will see the following files generated:
32 | 
33 | hello_world.ii (preprocessed file)
34 | hello_world.s (assembly code)
35 | hello_world.o (object file)
36 | hello (final executable)
37 | 
38 | Step 4: Perform Compilation Manually
39 | Now, let's manually perform each compilation
40 | step and confirm that the intermediate files match.
41 | 
42 | a) Preprocess
43 | 
44 | clang++ -E hello_world.cpp -o manual_preprocessed.ii
45 | diff manual_preprocessed.ii hello_world.ii
46 | 
47 | 
48 | b) Compile to Assembly Code
49 | 
50 | clang++ -S manual_preprocessed.ii -o manual_compiled.s
51 | diff manual_compiled.s hello_world.s
52 | 
53 | c) Assemble to Object Code
54 | 
55 | clang++ -c manual_compiled.s -o manual_assembled.o
56 | diff manual_assembled.o hello_world.o
57 | 
58 | 
59 | d) Linking
60 | 
61 | clang++ manual_assembled.o -o manual_hello
62 | diff manual_hello hello
63 | 
64 | Confirmation
65 | At each stage, using the diff command will confirm that the files generated manually
66 | are identical to the ones generated by the -save-temps option. If there is no output
67 | from the diff command, it indicates that the files are the same.
68 | 
69 | Summary
70 | This tutorial guides you through compiling a C++ program, saving intermediate files,
71 | manually performing each compilation step, and confirming the intermediate files' integrity
72 | at each stage. It provides practical insight into the compilation and linking process.
-------------------------------------------------------------------------------- /06CompilingLinking/03LabCmake.txt: --------------------------------------------------------------------------------
1 | Obviously, if a project consists of more than three files, nobody is going to compile them manually.
2 | 
3 | So here comes another tool called CMake.
4 | 
5 | This is a simplified CMake tutorial, tailored to help you understand how to use CMake to build a project,
6 | using the Task and TodoList application we discussed earlier.
7 | CMake is a cross-platform build-system generator that can generate files to build your project using various build tools. 8 | 9 | 1. Installation 10 | First, ensure that CMake is installed on your system: 11 | 12 | sudo apt update 13 | sudo apt install cmake 14 | 15 | 16 | 2. Project Structure 17 | 18 | Your project should have the following structure: 19 | 20 | YourProjectFolder 21 | │ 22 | ├── CMakeLists.txt 23 | ├── Task.cpp 24 | ├── Task.h 25 | ├── TodoList.cpp 26 | ├── TodoList.h 27 | └── main.cpp 28 | 29 | 30 | 3. CMakeLists.txt 31 | Create a CMakeLists.txt file in the root of your project directory with the following content: 32 | 33 | 34 | cmake_minimum_required(VERSION 3.10) 35 | 36 | project(TodoListApp) 37 | 38 | set(CMAKE_CXX_STANDARD 14) 39 | 40 | add_executable(TodoListApp main.cpp Task.cpp TodoList.cpp) 41 | 42 | 43 | This file tells CMake the minimum version required, the project name, the C++ standard to use, 44 | and which files to compile and link to create the TodoListApp executable. 45 | 46 | 4. Generate Build Files and Compile 47 | 48 | Navigate to your project directory in the terminal and create a build directory, then navigate into it: 49 | 50 | 51 | mkdir build 52 | cd build 53 | 54 | 55 | Run CMake to generate build files and then compile the project: 56 | cmake .. 57 | make 58 | 59 | 60 | The .. in cmake .. points to the parent directory where CMakeLists.txt is located. 61 | After running these commands, the TodoListApp executable should be generated inside the build directory. 62 | 63 | 5. Run the Application 64 | Run the compiled application: 65 | 66 | ./TodoListApp 67 | 68 | 69 | Benefits of Using CMake: 70 | 71 | Cross-Platform: 72 | CMake is a cross-platform tool, making it easier to manage builds on different operating systems and IDEs. 73 | 74 | Out-of-Source Builds: 75 | 76 | CMake encourages out-of-source builds, keeping the build files separate from your source code, 77 | which helps maintain a clean project structure. 78 | 79 | Scalability: 80 | CMake is well-suited for both small and large projects, and it can handle complex build systems 81 | with multiple dependencies and configurations. 82 | 83 | Manageable: 84 | CMakeLists.txt files are organized hierarchically, 85 | making it easier to manage multi-directory projects and dependencies. 86 | 87 | Find Packages & Libraries: 88 | CMake can automatically find libraries and packages, manage dependencies, and link against the correct libraries, 89 | making the build process smoother. 90 | 91 | Customizable Build Configurations: 92 | You can easily manage different build configurations, compiler flags, and build types (Debug, Release) with CMake. 93 | 94 | Conclusion 95 | This simple tutorial introduces the basics of using CMake to manage your build process. 96 | CMake offers several advanced features and functions that you can explore as your project grows and evolves, 97 | such as managing external dependencies, setting custom build flags, and configuring installation targets. 
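For example (optional, one common pattern; not required by this lab), you can ask CMake for a specific build type when generating the build files:

cmake -DCMAKE_BUILD_TYPE=Debug ..     # debug symbols, no optimization
cmake -DCMAKE_BUILD_TYPE=Release ..   # optimizations enabled

Re-run make afterwards to rebuild with the chosen configuration.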
-------------------------------------------------------------------------------- /06CompilingLinking/todolist/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.10) 2 | 3 | project(TodoListApp) 4 | 5 | set(CMAKE_CXX_STANDARD 14) 6 | 7 | add_executable(TodoListApp main.cpp Task.cpp TodoList.cpp) 8 | -------------------------------------------------------------------------------- /06CompilingLinking/todolist/Task.cpp: -------------------------------------------------------------------------------- 1 | 2 | #include "Task.h" 3 | #include 4 | 5 | Task::Task(const std::string& description) 6 | : description(description), completed(false) {} 7 | 8 | 9 | void Task::printTaskDetails() const { 10 | std::cout << "Task: " << description << " - " << (completed ? "Completed" : "Not Completed") << std::endl; 11 | } 12 | 13 | void Task::complete(){ 14 | completed = true; 15 | // For demonstration purposes, print details when a task is completed. 16 | printTaskDetails(); 17 | } 18 | 19 | std::string Task::getDescription() const { 20 | return description; 21 | } 22 | 23 | 24 | bool Task::isCompleted() const { 25 | return completed; 26 | } 27 | 28 | -------------------------------------------------------------------------------- /06CompilingLinking/todolist/Task.h: -------------------------------------------------------------------------------- 1 | #pragma once 2 | 3 | #include 4 | 5 | class Task { 6 | public: 7 | Task(const std::string& description); 8 | void complete(); 9 | std::string getDescription() const; 10 | bool isCompleted() const; 11 | void printTaskDetails() const; 12 | private: 13 | std::string description; 14 | bool completed; 15 | }; 16 | 17 | -------------------------------------------------------------------------------- /06CompilingLinking/todolist/Task.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alfazick/linuxprogramming/47e37ff18239f182c29f4da30ce616c19ba6662f/06CompilingLinking/todolist/Task.o -------------------------------------------------------------------------------- /06CompilingLinking/todolist/TodoList.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include "TodoList.h" 3 | 4 | void TodoList::addTask(const Task& task) { 5 | tasks.push_back(task); 6 | } 7 | 8 | void TodoList::completeTask(int index) { 9 | if (index >= 0 && index < tasks.size()) { 10 | tasks[index].complete(); 11 | } 12 | } 13 | 14 | void TodoList::displayTasks() const { 15 | for (const auto& task : tasks) { 16 | std::cout << (task.isCompleted() ? 
"[x] " : "[ ] ") << task.getDescription() << std::endl; 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /06CompilingLinking/todolist/TodoList.h: -------------------------------------------------------------------------------- 1 | #pragma once 2 | 3 | #include 4 | #include "Task.h" 5 | 6 | class TodoList { 7 | public: 8 | void addTask(const Task& task); 9 | void completeTask(int index); 10 | void displayTasks() const; 11 | private: 12 | std::vector tasks; 13 | }; 14 | -------------------------------------------------------------------------------- /06CompilingLinking/todolist/TodoList.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alfazick/linuxprogramming/47e37ff18239f182c29f4da30ce616c19ba6662f/06CompilingLinking/todolist/TodoList.o -------------------------------------------------------------------------------- /06CompilingLinking/todolist/main.cpp: -------------------------------------------------------------------------------- 1 | #include "TodoList.h" 2 | 3 | int main() { 4 | TodoList todoList; 5 | todoList.addTask(Task("Learn about compiling and linking")); 6 | todoList.addTask(Task("Write a TodoList application")); 7 | 8 | // Display tasks before completing any 9 | todoList.displayTasks(); 10 | 11 | // Complete the first task 12 | todoList.completeTask(0); 13 | 14 | // Display tasks after completing the first task 15 | todoList.displayTasks(); 16 | 17 | 18 | return 0; 19 | } 20 | -------------------------------------------------------------------------------- /06CompilingLinking/todolist/main.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alfazick/linuxprogramming/47e37ff18239f182c29f4da30ce616c19ba6662f/06CompilingLinking/todolist/main.o -------------------------------------------------------------------------------- /07SoftwarePackageManagment/01 TarVsZIPLab: -------------------------------------------------------------------------------- 1 | 2 | The tar command in Linux is designed to store and preserve file permissions (like read-only statuses), 3 | ownerships, and dates, among other metadata. When you extract the files from a tar archive, they will retain the permissions 4 | they had when they were archived, including read-only status if you set it using something like "chmod." 5 | 6 | On the other hand, zip doesn't preserve all file metadata by default. While zip can store some basic permissions, 7 | it's not as comprehensive as tar, especially in the context of a Unix-like system such as Linux. If file permissions are crucial, 8 | like in system backups or software distribution, tar is usually the preferred format on Linux systems. If you're transferring 9 | files between different systems or file permission preservation is not a priority, zip's compatibility and ease of use might be more beneficial. 10 | 11 | side-by-side comparison of how you'd use the tar and zip commands to archive files in Linux: 12 | 13 | To Archive Files: 14 | 15 | tar: 16 | 17 | tar -cvf archive_name.tar /path/to/directory_or_file 18 | Explanation: c creates an archive, v provides verbose output, f allows you to specify the archive file's name. 19 | 20 | zip: 21 | 22 | zip -r archive_name.zip /path/to/directory_or_file 23 | Explanation: -r (or --recurse-paths) tells zip to travel down directories recursively and compress everything in them. 
24 | 25 | To Archive and Compress Files: 26 | 27 | tar with gzip compression (creates a .tar.gz): 28 | 29 | tar -czvf archive_name.tar.gz /path/to/directory_or_file 30 | Explanation: z applies gzip compression. 31 | 32 | tar with bzip2 compression (creates a .tar.bz2): 33 | 34 | tar -cjvf archive_name.tar.bz2 /path/to/directory_or_file 35 | Explanation: j applies bzip2 compression. 36 | 37 | zip (compression is inherent): 38 | 39 | zip -r archive_name.zip /path/to/directory_or_file 40 | To Extract Files: 41 | 42 | tar: 43 | 44 | tar -xvf archive_name.tar -C /path/to/destination_directory 45 | Explanation: x extracts files, C specifies the directory to extract to. 46 | 47 | zip: 48 | 49 | unzip archive_name.zip -d /path/to/destination_directory 50 | Explanation: -d specifies the directory to extract files to. 51 | 52 | These are basic examples. Both tar and zip offer many more options that you can explore using their respective man pages 53 | (man tar or man zip in the terminal). The choice between 54 | using tar and zip can depend on your specific needs in terms of compression, 55 | preservation of file attributes, and cross-platform compatibility. 56 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/01LabIntroToTools.txt: -------------------------------------------------------------------------------- 1 | 2 | Introduction to Tools 3 | 4 | 1. wget 5 | 6 | Theory 7 | Description: wget is a non-interactive downloader in Linux that retrieves files using HTTP, HTTPS, and FTP. 8 | 9 | Usage: 10 | Basic Syntax: wget [option]... [URL]... 11 | 12 | Common Options: 13 | -c: Continue getting partially downloaded files. 14 | -O [filename]: Directs downloads to the filename provided. 15 | 16 | Practical 17 | Objective: Download the nano text editor source code. 18 | 19 | Commands: 20 | 21 | wget https://www.nano-editor.org/dist/v5/nano-5.0.tar.xz 22 | 23 | 2. curl 24 | 25 | Theory 26 | Description: curl is utilized to transfer data from or to a server using various protocols. 27 | Usage: 28 | Basic Syntax: curl [options] [URL]... 29 | Common Options: 30 | -o [filename]: Write output to [filename] instead of stdout. 31 | -u [user:password]: Authenticate with specified credentials. 32 | Practical 33 | Objective: Download a file and save it as example.txt. 34 | 35 | Commands: 36 | To download the HTML content of google.com using curl and save it to a file called google.html, you can use the following command: 37 | 38 | curl -o google.html http://www.google.com 39 | 40 | This will fetch the HTML content from Google's homepage and save it to a file named google.html in the 41 | current working directory. If you open this file with a text editor, you'll see the HTML that your web browser 42 | uses to render the Google homepage. 43 | 44 | Please note that web scraping (i.e., programmatically downloading and processing web pages) 45 | should be done in compliance with the legalities and ethical guidelines, and it is important to 46 | respect the robots.txt file of a website, which provides guidelines on what you are allowed to access and download. 47 | 48 | 49 | 3. tar 50 | 51 | Theory 52 | Description: tar is used to create, maintain, modify, or extract files that are archived in the tar format. 53 | Usage: 54 | Basic Syntax: tar [options] [archive-file] [file or directory to be archived] 55 | Common Options: 56 | -c: Create archive. 57 | -x: Extract archive. 58 | -v: Verbosely show the .tar file progress in the terminal. 
59 | -f: Filename to archive. 60 | 61 | Practical 62 | Objective: Create a tarball of a directory and extract it elsewhere. 63 | 64 | Commands: 65 | 66 | mkdir example_dir 67 | echo "This is a test file" > example_dir/test.txt 68 | tar -cvf example_tarball.tar example_dir 69 | mkdir extract_here 70 | tar -xvf example_tarball.tar -C extract_here 71 | 72 | 73 | 4. gcc 74 | Theory 75 | Description: gcc is a compiler for languages including C and C++, and it's used to compile and link programs. 76 | 77 | Usage: 78 | 79 | Basic Syntax: gcc [options] [source files] [object files] [-Ldir] -llibname [-o outfile] 80 | Common Options: 81 | -o [filename]: Specifies the filename of the compiled output. 82 | -Wall: Enable all warning messages. 83 | Practical 84 | Objective: Compile and run a simple "Hello, World!" C program. 85 | 86 | Commands: 87 | 88 | echo '#include\n int main(){ printf("Hello, World!\\n"); return 0; }' > hello.c 89 | gcc -Wall -o hello hello.c 90 | ./hello 91 | 92 | 93 | This lab provides a fundamental understanding and hands-on approach for 94 | utilizing wget, curl, tar, and gcc in a Linux environment. 95 | This foundational knowledge assists in managing packages, transferring data, archiving, and compiling source code. 96 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/01WgetVsCurl: -------------------------------------------------------------------------------- 1 | both wget and curl are command-line tools used for downloading files from the internet but they have different capabilities and syntax. 2 | Here's a basic comparison: 3 | 4 | Downloading a Single File: 5 | 6 | wget: 7 | 8 | wget http://example.com/file.iso 9 | Explanation: This will download the file named "file.iso" from the specified URL. 10 | 11 | curl: 12 | 13 | curl -O http://example.com/file.iso 14 | Explanation: The -O (capital o) flag tells curl to write the downloaded data to a file named as the remote file's name. 15 | 16 | Downloading and Saving with a Different File Name: 17 | 18 | wget: 19 | 20 | wget -O custom_name.iso http://example.com/file.iso 21 | Explanation: The -O (capital o) option allows you to specify a different file name or location. 22 | 23 | curl: 24 | 25 | curl -o custom_name.iso http://example.com/file.iso 26 | Explanation: The -o (lowercase o) option allows you to specify a different file name or location. 27 | 28 | Downloading Multiple Files: 29 | 30 | wget: 31 | 32 | wget -i urls.txt 33 | Explanation: The -i option tells wget to read URLs from a file, one per line. 34 | 35 | curl: 36 | 37 | xargs -n 1 curl -O < urls.txt 38 | Explanation: curl doesn't natively support downloading from a list of URLs, so you can use xargs to read URLs from a file. 39 | xargs reads items from the standard input, delimited by blanks or newlines, 40 | and executes the command one or more times with any initial arguments. 41 | 42 | Resume a Broken Download: 43 | 44 | wget: 45 | 46 | wget -c http://example.com/file.iso 47 | Explanation: The -c or --continue flag makes wget attempt to resume the download of a file 48 | if it detects that the file has partially downloaded. 49 | 50 | curl: 51 | 52 | curl -C - -O http://example.com/file.iso 53 | Explanation: The -C - option tells curl to automatically find out 54 | where/how to resume the transfer. 55 | It then uses the -O option to write out the downloaded data. 
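The same "resume" idea can be sketched in Python with the requests library (used elsewhere in this repo). This is only an illustration; it assumes the server honours HTTP Range requests, and the URL and filename are placeholders:

import os
import requests

url = "http://example.com/file.iso"      # placeholder URL
filename = "file.iso"

# If a partial file already exists, ask the server for the remaining bytes only.
already = os.path.getsize(filename) if os.path.exists(filename) else 0
headers = {"Range": f"bytes={already}-"} if already else {}

with requests.get(url, headers=headers, stream=True) as resp:
    resp.raise_for_status()
    # 206 Partial Content means the server accepted the Range header.
    mode = "ab" if already and resp.status_code == 206 else "wb"
    with open(filename, mode) as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)

Under the hood, wget -c and curl -C - rely on the same kind of Range request.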
56 | 57 | Sending Headers with Request: 58 | 59 | wget: 60 | 61 | wget --header="Header-Name: value" http://example.com 62 | Explanation: The --header option allows you to send a custom header in the request. 63 | 64 | curl: 65 | 66 | curl -H "Header-Name: value" http://example.com 67 | Explanation: The -H option allows you to send a custom header in the request. 68 | 69 | Both wget and curl have many more options and capabilities. You can explore more details in their man pages (man wget or man curl). 70 | While wget is typically used for downloading files (it can recursively download files), curl provides a more powerful set of features, 71 | including the ability to communicate with different protocols, method requests, and even work as a REST client. 72 | 73 | Extra: 74 | example using a free API from CoinGecko which provides Bitcoin price data. 75 | Please note that the availability and behavior of the API can change over time, 76 | so this example may not always work indefinitely. 77 | 78 | 79 | Using wget: 80 | wget is less commonly used for API interaction because it's primarily a file downloader. 81 | However, it can still be used for simple API requests. Here's how you might use wget with the CoinGecko API: 82 | 83 | wget -qO- --header='accept: application/json' 'https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd' 84 | 85 | In this example, -q is for quiet mode (not showing progress), O- is to print the content to standard output rather than a file, 86 | and --header='accept: application/json' is to send the specified header, indicating that we expect a JSON response. 87 | 88 | Using curl: 89 | Here's how you might use curl to make a request to the CoinGecko API to retrieve the current price of Bitcoin in US Dollars: 90 | 91 | curl -X 'GET' \ 92 | 'https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd' \ 93 | -H 'accept: application/json' 94 | This command uses curl to send an HTTP GET request to the specified URL. 95 | The -X 'GET' specifies the request method to use when communicating with the HTTP server. 96 | The -H 'accept: application/json' specifies an extra header to include in the request when sending HTTP to a server, 97 | which in this case is to accept JSON response. 98 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/02_theory.txt: -------------------------------------------------------------------------------- 1 | General Flow for Software Installation: Get Source Code -> Archive -> Compile 2 | 3 | 4 | 1. Get Source Code: 5 | This is where you download the raw, often uncompressed, 6 | source code of the software directly from the web. 7 | Developers share their projects' source code in repositories (like GitHub, GitLab, etc.) or on their websites. 8 | 9 | Examples: 10 | Using wget or curl to download source code from a provided URL. 11 | They both can retrieve files from servers using various types of network protocols. 12 | 13 | 14 | 2. Archive: 15 | Developers often archive (or compress) their code into tarballs (*.tar.gz or *.tar.bz2 files) 16 | to reduce the file size and make downloads faster and more manageable. 17 | 18 | Examples: 19 | Using tar to bundle or unbundle files. 20 | 21 | Bundling: Combining many files and directories into a single file (tarball). 22 | Unbundling: Extracting the files and directories from the tarball. 
23 | 24 | After unbundling the tarball, you'll have a directory containing the source code, 25 | which can typically be compiled and installed. 26 | 27 | 3. Compile: 28 | 29 | Compiling is translating the source code (often written in languages like C or C++) 30 | into machine code that can be executed by your computer's processor. 31 | 32 | Examples: 33 | 34 | Using gcc to compile the source code into an executable. 35 | It translates the high-level code into machine code suitable for your specific system architecture. 36 | 37 | Normally, open-source projects distributed in this manner come with 38 | a configure script and a Makefile which simplify the compilation process: 39 | 40 | 41 | ./configure 42 | make 43 | sudo make install 44 | 45 | ./configure: Checks your system for necessary tools and libraries, adjusting its internal settings to match your environment. 46 | make: Compiles the code using instructions in the Makefile. 47 | sudo make install: Installs the compiled code into the system directories. 48 | 49 | Additional Notes: 50 | 51 | - Dependencies: 52 | Software may rely on other software (dependencies) to run. 53 | Dependencies should be installed before compiling and installing the new software. 54 | 55 | - Version Control: 56 | Ensure you're installing a stable and secure version of the software. 57 | Always look for official documentation or repositories for reliable source code and installation instructions. 58 | 59 | - Uninstallation/Upgrade: 60 | Manual installation (compiling from source) often does not provide a straightforward way to 61 | uninstall or upgrade the software. For this, software managers like apt, yum, or zypper (depending on your distribution) 62 | are advantageous since they manage versions and uninstallation cleanly. -------------------------------------------------------------------------------- /07SoftwarePackageManagment/03LabInstallingJSONParser.txt: -------------------------------------------------------------------------------- 1 | Lab Exercise: Installing and Utilizing Jq 2 | 3 | https://jqlang.github.io/jq/ 4 | 5 | 6 | Objective: 7 | 8 | Download, compile, and install jq from source. 9 | Utilize jq to manipulate JSON data. 10 | 11 | Prerequisites: 12 | Basic knowledge of Linux command-line tools. 13 | Basic knowledge of JSON data structure. 14 | A Linux system with wget, curl, tar, and build-essential tools installed. 15 | 16 | Lab Overview: 17 | 18 | Downloading Jq Source Code 19 | Extracting and Installing Jq 20 | Testing Jq Installation 21 | Practical Utilization of Jq 22 | 23 | 24 | 1. Downloading Jq Source Code 25 | Objective: Download jq 1.6 source code from GitHub using wget or curl. 26 | 27 | Task A: Using wget 28 | 29 | wget https://github.com/stedolan/jq/releases/download/jq-1.6/jq-1.6.tar.gz 30 | Task B: Using curl 31 | 32 | curl -LO https://github.com/stedolan/jq/releases/download/jq-1.6/jq-1.6.tar.gz 33 | 34 | 35 | 2. Extracting and Installing Jq 36 | Objective: Extract the tarball and install jq. 37 | 38 | Task A: Extracting Jq 39 | 40 | tar xzf jq-1.6.tar.gz 41 | cd jq-1.6 42 | 43 | Task B: Installing Jq 44 | Note: Installation of additional dependencies might be necessary. 45 | 46 | autoreconf -i 47 | ./configure 48 | make 49 | make install 50 | 51 | 3. Testing Jq Installation 52 | Objective: Ensure jq is installed and functioning correctly. 53 | Task: Verify Jq Version 54 | 55 | jq --version 56 | 57 | Expected Output: 58 | jq-1.6 59 | 60 | 4. 
Practical Utilization of Jq 61 | 62 | Objective: Use jq to filter and format JSON data. 63 | 64 | Task A: Create Sample JSON Data 65 | 66 | echo '{"users": [{"name": "John", "age": 30}, {"name": "Jane", "age": 25}]}' > users.json 67 | 68 | Task B: Utilize Jq to Extract Data 69 | Use jq to extract the name of the first user. 70 | 71 | jq '.users[0].name' users.json 72 | Expected Output: 73 | 74 | "John" 75 | 76 | Task C: Advanced Jq Usage 77 | Filter users younger than 28 and get their names. 78 | 79 | jq '.users[] | select(.age < 28) | .name' users.json 80 | 81 | Expected Output: 82 | "Jane" 83 | 84 | Lab Summary 85 | Key Learning: Basic source code management and JSON data manipulation using command-line tools. 86 | 87 | Additional Notes: Exploring more advanced functionalities of jq and mastering 88 | JSON data handling are crucial in dealing with API responses and configuration files in 89 | DevOps, Data Engineering, and Software Development contexts. 90 | 91 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/03NewVersionGDB.txt: -------------------------------------------------------------------------------- 1 | 2 | Lab Exercise: Installing and Utilizing GDB 3 | Objective: 4 | Part #1: Learn how to download, compile, and install GDB from source. 5 | Do Part #1 at home; compiling the whole project is time-consuming. 6 | 7 | Part #2: Utilize GDB to debug a simple C program. 8 | 9 | Prerequisites: 10 | Basic knowledge of Linux command-line tools. 11 | Basic understanding of C programming. 12 | A Linux system with development tools like clang, make, wget, or curl. 13 | Lab Overview: 14 | Downloading GDB Source Code 15 | Compiling and Installing GDB 16 | Configuring GDB 17 | Basic Debugging with GDB 18 | 1. Downloading GDB Source Code 19 | Objective: Download the GDB 10.1 source code. 20 | 21 | Task: Use wget or curl to download the GDB source code. 22 | 23 | wget http://sourceware.org/pub/gdb/releases/gdb-10.1.tar.xz 24 | # or 25 | curl -O http://sourceware.org/pub/gdb/releases/gdb-10.1.tar.xz 26 | 27 | 2. Compiling and Installing GDB 28 | Objective: Compile GDB from the downloaded source and install it on the system. 29 | 30 | Task A: Extract the source code 31 | 32 | tar -xvf gdb-10.1.tar.xz 33 | cd gdb-10.1 34 | Task B: Compile and Install 35 | 36 | ./configure 37 | make 38 | sudo make install 39 | 40 | 3. Configuring GDB 41 | Objective: Ensure GDB is correctly installed and ready to use. 42 | 43 | Task: Verify the installation by checking the version of GDB. 44 | 45 | gdb --version 46 | 47 | Part #2 48 | For Part #2, do a regular installation of gdb and clang: 49 | sudo apt install gdb 50 | sudo apt install clang 51 | 52 | Confirm by running: 53 | which gdb 54 | which clang 55 | or 56 | gdb --version 57 | clang --version 58 | 59 | 60 | Basic Debugging with GDB 61 | Objective: 62 | Utilize GDB to identify and fix a bug in a simple C program. 63 | 64 | Background: 65 | A segmentation fault occurs when a program attempts to access a memory location that it's not allowed to access, or attempts to access a memory location in a way that is not allowed (for example, trying to write to a read-only location, or to overwrite part of the operating system). 66 | 67 | Task: Debugging a Program with GDB 68 | Write the Program 69 | 70 | First, create a file named example.c using a text editor like nano.
71 | Here's a simple program that will intentionally cause a segmentation fault: 72 | 73 | #include 74 | #include 75 | 76 | int main() { 77 | char *ptr = "hello world"; 78 | printf("%s\n", ptr); 79 | ptr[0] = 'H'; // Attempt to modify a string literal, undefined behavior 80 | printf("%s\n", ptr); 81 | return 0; 82 | } 83 | 84 | This program attempts to modify a string literal, 85 | which is stored in read-only memory, leading to undefined behavior and a segmentation fault. 86 | 87 | Compile the Program with Debugging Information 88 | Compile the program with -g option to include debugging information: 89 | clang -g example.c -o example 90 | or 91 | gcc -g example.c -o example 92 | 93 | 94 | Start GDB 95 | 96 | Start GDB with your program: 97 | 98 | gdb ./example 99 | 100 | 101 | Set a Breakpoint 102 | 103 | Before running the program, set a breakpoint at main function to start debugging: 104 | 105 | (gdb) break main 106 | 107 | Run the Program 108 | 109 | Now, run the program within GDB: 110 | 111 | (gdb) run 112 | The program will start and stop at the breakpoint you set, 113 | allowing you to inspect the state before it crashes. 114 | 115 | Step Through the Code 116 | 117 | Use the next command to execute the next line of code 118 | without stepping into functions: 119 | 120 | (gdb) next 121 | Continue using next until you reach the line that modifies the string literal. 122 | GDB will let you step over it, but when you proceed, the program will crash. 123 | 124 | Identify the Problem 125 | 126 | GDB will report a segmentation fault. 127 | At this point, you can inspect the variables and the line that caused the fault: 128 | 129 | (gdb) list 130 | The list command shows the part of the source code being executed. 131 | The line attempting to modify the string literal is the culprit. 132 | 133 | Fixing the Bug 134 | Exit GDB (type quit) and modify example.c 135 | to avoid modifying a string literal. 136 | One way to fix the bug is to change the string declaration 137 | to use an array so it's modifiable: 138 | 139 | char ptr[] = "hello world"; 140 | 141 | Recompile and rerun your program in GDB to verify the fix. 142 | 143 | Lab Summary: 144 | You've learned how to use GDB to debug a C program 145 | By setting breakpoints, 146 | Stepping through the code, and inspecting variables. 147 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/04LabBuildingOwnPackageDebian.txt: -------------------------------------------------------------------------------- 1 | Lab4 : Building a Debian Package with a Simple C++ Program using CMake and Clang++ 2 | 3 | Objective: 4 | In this lab, you will create a Debian package from a simple C++ program's source code. 5 | 6 | 1. Prerequisites: 7 | 8 | Ensure that you have the necessary utilities and compilers installed: 9 | 10 | sudo apt-get update 11 | sudo apt-get install dh-make fakeroot build-essential clang cmake 12 | 13 | 14 | 2. Write a Simple C++ Program: 15 | Create a file named myprogram.cpp with the following content: 16 | 17 | #include 18 | 19 | int main() { 20 | std::cout << "Hello, Debian!" << std::endl; 21 | return 0; 22 | } 23 | 24 | 3. Create a CMakeLists.txt: 25 | Create a CMakeLists.txt with the following content: 26 | 27 | cmake_minimum_required(VERSION 3.10) 28 | project(MyProgram) 29 | set(CMAKE_CXX_STANDARD 14) 30 | set(CMAKE_CXX_COMPILER "clang++") 31 | add_executable(myprogram myprogram.cpp) 32 | install(TARGETS myprogram DESTINATION bin) 33 | 34 | 35 | 4. 
Create Source Tarball: 36 | Package the myprogram.cpp and CMakeLists.txt into a tarball named myprogram-1.0.tar.gz: 37 | 38 | mkdir myprogram-1.0 39 | cp myprogram.cpp CMakeLists.txt myprogram-1.0 40 | tar czvf myprogram-1.0.tar.gz myprogram-1.0 41 | 42 | 5. Build the Debian Package: 43 | a) Create a Working Directory: 44 | 45 | rm -rf WORK && mkdir WORK && cd WORK 46 | cp ../myprogram-1.0.tar.gz . 47 | 48 | b) Expand the Source Tarball: 49 | 50 | tar xvf myprogram-1.0.tar.gz 51 | 52 | c) Navigate into Expanded Directory: 53 | 54 | cd myprogram-1.0 55 | 56 | d) Build the Package: 57 | 58 | dh_make -f ../*myprogram-1.0.tar.gz 59 | dpkg-buildpackage -uc -us 60 | 61 | 6. Install and Verify the Package: 62 | a) Install the Package: 63 | 64 | cd .. 65 | sudo dpkg --install *.deb 66 | 67 | b) Verify Installation: 68 | 69 | myprogram 70 | Output should be: Hello, Debian! 71 | 72 | c) Uninstall the Package: 73 | If you have multiple packages depending on myprogram, 74 | you might have to remove all of them before you can successfully remove myprogram. 75 | You can check the dependencies using the following command: 76 | 77 | apt rdepends myprogram 78 | 79 | You should remove the myprogram-dbgsym package first before removing the myprogram package. 80 | Here's how you can do it: 81 | sudo dpkg --remove myprogram-dbgsym 82 | sudo dpkg --remove myprogram 83 | 84 | 85 | The first command removes the myprogram-dbgsym package, which has a dependency on myprogram. 86 | Once the dependent package is removed, the second command should successfully remove 87 | the myprogram package without encountering any dependency issues. 88 | 89 | 90 | Conclusion: 91 | In this lab, you have successfully created, built, installed, and uninstalled 92 | a Debian package containing a simple C++ program, using cmake and clang++. -------------------------------------------------------------------------------- /07SoftwarePackageManagment/my_debian_package/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.10) 2 | project(MyProgram) 3 | set(CMAKE_CXX_STANDARD 14) 4 | set(CMAKE_CXX_COMPILER "clang++") 5 | add_executable(myprogram myprogram.cpp) 6 | install(TARGETS myprogram DESTINATION bin) 7 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/my_debian_package/myprogram.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | 3 | int main() { 4 | std::cout << "Hello, Debian!" << std::endl; 5 | return 0; 6 | } 7 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/simple-examples-zip-request.py: -------------------------------------------------------------------------------- 1 | import shutil 2 | import requests 3 | 4 | # Create a zip file 5 | # Parameters: (output_name, format, directory_to_compress) 6 | # This will create 'my_folder.zip' from the contents of 'path/to/folder' 7 | shutil.make_archive('my_folder', 'zip', 'path/to/folder') 8 | 9 | # Extract a zip file 10 | # Parameters: (zip_file_to_extract, where_to_extract) 11 | # This will extract 'my_folder.zip' into 'extract_here' directory 12 | shutil.unpack_archive('my_folder.zip', 'extract_here') 13 | 14 | # Download a file from the internet 15 | # 1. Get the file from URL 16 | # 2. 
Save it locally as 'downloaded_file.txt' 17 | response = requests.get('https://example.com/file.txt') 18 | with open('downloaded_file.txt', 'wb') as f: 19 | f.write(response.content) 20 | -------------------------------------------------------------------------------- /07SoftwarePackageManagment/simple-gdb-cpp-tutorial.md: -------------------------------------------------------------------------------- 1 | # Simple GDB C++ Tutorial 2 | 3 | ## Step 1: Create the Sample Program 4 | Create a file named `simple.cpp`: 5 | 6 | ```cpp 7 | #include 8 | #include 9 | 10 | // Function to calculate array sum 11 | int calculateSum(const std::vector& numbers) { 12 | int sum = 0; 13 | // Bug: Off-by-one error 14 | for (size_t i = 0; i <= numbers.size(); i++) { // Should be < 15 | sum += numbers[i]; 16 | } 17 | return sum; 18 | } 19 | 20 | // Function to print array 21 | void printArray(const std::vector& numbers) { 22 | std::cout << "Array contents: "; 23 | // Bug: Accessing invalid index 24 | for (size_t i = 0; i < numbers.size() + 1; i++) { // Should not have +1 25 | std::cout << numbers[i] << " "; 26 | } 27 | std::cout << std::endl; 28 | } 29 | 30 | int main() { 31 | std::vector numbers = {1, 2, 3, 4, 5}; 32 | 33 | std::cout << "Starting program...\n"; 34 | 35 | printArray(numbers); 36 | 37 | int sum = calculateSum(numbers); 38 | std::cout << "Sum of array: " << sum << std::endl; 39 | 40 | return 0; 41 | } 42 | ``` 43 | 44 | ## Step 2: Compile the Program 45 | ```bash 46 | g++ -g simple.cpp -o simple 47 | ``` 48 | 49 | ## Step 3: Basic Debugging Session 50 | 51 | Start GDB: 52 | ```bash 53 | gdb ./simple 54 | ``` 55 | 56 | ### Debug the Print Function 57 | ```bash 58 | # Set breakpoint at main 59 | (gdb) b main 60 | (gdb) run 61 | 62 | # Program stops at main 63 | (gdb) n # Execute vector initialization 64 | (gdb) p numbers # Examine vector contents 65 | Outputs: numbers = std::vector of length 5, capacity 5 = {1, 2, 3, 4, 5} 66 | 67 | # Set breakpoint in printArray 68 | (gdb) b printArray 69 | (gdb) c # Continue to printArray 70 | 71 | # Inside printArray, examine the loop 72 | (gdb) p numbers.size() # Check vector size 73 | (gdb) n # Step through loop 74 | (gdb) p i # Check loop counter 75 | ``` 76 | 77 | ### Debug the Sum Function 78 | ```bash 79 | # Set breakpoint in calculateSum 80 | (gdb) b calculateSum 81 | (gdb) c # Continue to calculateSum 82 | 83 | # Watch the sum variable 84 | (gdb) watch sum 85 | (gdb) n # Step through calculation 86 | (gdb) p i # Check loop counter 87 | (gdb) p numbers.size() # Check vector size 88 | ``` 89 | 90 | ## Step 4: Common Debugging Tasks 91 | 92 | ### Print Vector Contents 93 | ```bash 94 | (gdb) p numbers 95 | (gdb) p numbers.size() 96 | ``` 97 | 98 | ### Check Current Location 99 | ```bash 100 | (gdb) bt # Show backtrace 101 | (gdb) list # Show current source code 102 | ``` 103 | 104 | ### Examine Variables 105 | ```bash 106 | (gdb) info locals # Show all local variables 107 | (gdb) p sum # Print specific variable 108 | ``` 109 | 110 | ## Step 5: Finding the Bugs 111 | 112 | Bug 1 in printArray: 113 | ```cpp 114 | // Bug: 115 | for (size_t i = 0; i < numbers.size() + 1; i++) 116 | // Fix: 117 | for (size_t i = 0; i < numbers.size(); i++) 118 | ``` 119 | 120 | Bug 2 in calculateSum: 121 | ```cpp 122 | // Bug: 123 | for (size_t i = 0; i <= numbers.size(); i++) 124 | // Fix: 125 | for (size_t i = 0; i < numbers.size(); i++) 126 | ``` 127 | 128 | ## Quick Reference Commands 129 | - `run` (or `r`): Start program 130 | - `break` (or `b`): Set breakpoint 131 | - 
`continue` (or `c`): Continue execution 132 | - `next` (or `n`): Execute next line 133 | - `print` (or `p`): Print variable 134 | - `quit` (or `q`): Exit GDB 135 | 136 | ## Practice Exercises 137 | 138 | 1. Find the Array Access Bug: 139 | ```bash 140 | gdb ./simple 141 | (gdb) b printArray 142 | (gdb) run 143 | (gdb) p numbers.size() 144 | (gdb) watch i 145 | (gdb) n # Step until crash 146 | (gdb) p i # Check index at crash 147 | ``` 148 | 149 | 2. Find the Sum Bug: 150 | ```bash 151 | gdb ./simple 152 | (gdb) b calculateSum 153 | (gdb) run 154 | (gdb) watch sum 155 | (gdb) n # Watch sum change 156 | (gdb) p numbers[i] # Will crash at invalid index 157 | ``` 158 | 159 | 160 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/data.txt: -------------------------------------------------------------------------------- 1 | apple,red,5 2 | banana,yellow,6 3 | cherry,red,20 -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/file1.txt: -------------------------------------------------------------------------------- 1 | apple 2 | banana 3 | cherry -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/file2.txt: -------------------------------------------------------------------------------- 1 | red 2 | yellow 3 | red -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/file3.txt: -------------------------------------------------------------------------------- 1 | 1 2 | 2 3 | 3 4 | 4 5 | 5 6 | 6 -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/order_generator.py: -------------------------------------------------------------------------------- 1 | import random 2 | import string 3 | 4 | # List of example user names 5 | usernames = ["Alice", "Bob", "Charlie", "Diana", "Ethan", "Fiona", "George", "Hannah"] 6 | 7 | # Email domains 8 | email_domains = ["gmail.com", "yahoo.com"] 9 | 10 | # Street names 11 | streets = ["Baker Street", "Elm Street", "Maple Street", "Pine Street"] 12 | 13 | # Cities 14 | cities = ["Springfield", "Shelbyville", "Ogdenville", "North Haverbrook"] 15 | 16 | # Function to generate random strings 17 | def generate_random_string(length): 18 | letters = string.ascii_lowercase 19 | return ''.join(random.choice(letters) for _ in range(length)) 20 | 21 | # Function to generate realistic email addresses 22 | def generate_realistic_email(name, email_set): 23 | while True: 24 | domain = random.choice(email_domains) 25 | email = name.lower() + '.' 
+ generate_random_string(5) + "@" + domain 26 | if email not in email_set: 27 | email_set.add(email) 28 | return email 29 | 30 | # Function to generate realistic order IDs 31 | def generate_realistic_order_id(name, order_id_set): 32 | while True: 33 | order_id = name[:3].upper() + str(random.randint(100, 999)) + '-' + generate_random_string(2).upper() 34 | if order_id not in order_id_set: 35 | order_id_set.add(order_id) 36 | return order_id 37 | 38 | # Function to generate realistic addresses 39 | def generate_realistic_address(name, address_set): 40 | while True: 41 | street = random.choice(streets) + ' ' + str(random.randint(1, 100)) 42 | city = random.choice(cities) 43 | zipcode = str(random.randint(10000, 99999)) 44 | address = f"{name},{street},{city},{zipcode}" 45 | if address not in address_set: 46 | address_set.add(address) 47 | return address 48 | 49 | # Open files for writing 50 | with open("user_emails.csv", "w") as email_file, open("user_orders.csv", "w") as order_file, open("user_addresses.csv", "w") as address_file: 51 | # Sets to store used emails, order_ids, and addresses to ensure uniqueness 52 | used_emails = set() 53 | used_order_ids = set() 54 | used_addresses = set() 55 | 56 | # Generate 1000 entries 57 | for _ in range(1000): 58 | username = random.choice(usernames) 59 | email = generate_realistic_email(username, used_emails) 60 | order_id = generate_realistic_order_id(username, used_order_ids) 61 | address = generate_realistic_address(username, used_addresses) 62 | 63 | # Write data to files 64 | email_file.write(f"{username},{email}\n") 65 | order_file.write(f"{username},{order_id}\n") 66 | address_file.write(f"{address}\n") 67 | 68 | print("User emails, orders, and addresses generated successfully!") 69 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/theory_cut.txt: -------------------------------------------------------------------------------- 1 | CUT: Practical Guide 2 | The cut command in Linux is used to cut portions of each line from a file or stream and outputs the result. 3 | It's especially useful for extracting specific fields/columns from a text file or command output. 4 | Below are some useful options of the cut command: 5 | 6 | -f (field): Specifies the field(s) to be extracted. 7 | -d (delimiter): Sets the field delimiter. 8 | -c (characters): Cuts by character position. 9 | --complement: Inverts the selection for cutting. 10 | 11 | Understanding the Logic Behind cut in Linux 12 | The cut command in Linux is utilized for manipulating textual data at the line level, 13 | extracting specified portions of each line from a file or stream. 14 | The fundamental logic behind cut revolves around treating text data as a 15 | series of records and fields, treating each line as a single record and the 16 | designated portions (e.g., characters or delimited sections) as fields. 17 | 18 | Core Concepts: 19 | 20 | 1) Line as a Record: 21 | Each line in the input (file/stream) is treated as a single record. 22 | Processing is done line by line, treating each line independently. 23 | 24 | 2) Fields within a Line: 25 | A field is a specific section or portion of a line. 26 | Fields are typically determined by a delimiter, or by specific character positions. 27 | 28 | How cut Works: 29 | 30 | a) Read Line: 31 | cut reads an input line, whether from a file or standard input. 32 | b) Identify Fields: 33 | Using the specified delimiter (-d option), cut divides the line into fields. 
34 | If the delimiter is a comma: one,two,three -> [one][two][three] 35 | If character positions are specified (-c option), the specified characters are directly selected. 36 | 37 | c) Extract and Output: 38 | The fields/characters specified with -f or -c are extracted. 39 | The extracted data is written to standard output. 40 | This process is repeated for every line in the input. 41 | 42 | Example. 43 | 44 | Considering a text file data.txt with the following content: 45 | 46 | apple,red,5 47 | banana,yellow,6 48 | cherry,red,20 49 | 50 | The logic of cut would proceed as follows: 51 | 52 | First Line Processing: 53 | 54 | Line: apple,red,5 55 | Fields (with -d','): [apple][red][5] 56 | Extract (with -f2): red 57 | Output: red 58 | Second Line Processing: 59 | 60 | Line: banana,yellow,6 61 | Fields: [banana][yellow][6] 62 | Extract: yellow 63 | Output: yellow 64 | ... and so forth, for each subsequent line in the file or stream. 65 | 66 | Practical example Applied on books.csv: 67 | 68 | 69 | # Goal1: Find All Unique Genres of Book 70 | cut -d',' -f4 books.csv|sort|uniq 71 | 72 | # Goal2: Extract lines with Software Engineering, Keep only Title and Author 73 | grep "Software Engineering" books.csv | cut -d',' -f1,2 74 | 75 | alternatively :) 76 | 77 | cut -d',' -f1,2 | grep "Software Engineering" books.csv 78 | 79 | 80 | I don't know which one is the better, probably grep first -------------------------------------------------------------------------------- /08AdvancedTextProcessing/1_cut_paste/theory_paste.txt: -------------------------------------------------------------------------------- 1 | PASTE 2 | The paste command in Unix or Linux is used to merge lines of files side by side 3 | (horizontally), and it is often used in conjunction with other text processing 4 | tools like cut, sort, grep, etc. It is particularly useful when you are trying 5 | to combine data from two files or streams into a single output and in scenarios 6 | where you want to join corresponding lines from files together. 7 | 8 | Basic Syntax 9 | 10 | paste [OPTION]... [FILE]... 11 | 12 | Commonly Used Options 13 | 14 | -d, --delimiters=LIST: Use characters from LIST instead of TAB as delimiter. 15 | -s, --serial: Paste one file at a time instead of in parallel. 16 | 17 | Practical Examples 18 | 19 | Example 1: Merging Lines from Two Files 20 | Suppose you have two files: 21 | 22 | file1.txt: 23 | 24 | apple 25 | banana 26 | cherry 27 | 28 | file2.txt: 29 | 30 | red 31 | yellow 32 | red 33 | 34 | Using paste to merge the corresponding lines: 35 | 36 | paste file1.txt file2.txt 37 | 38 | Output: 39 | apple red 40 | banana yellow 41 | cherry red 42 | 43 | 44 | Example 2: Changing Delimiters 45 | If you want to change the delimiter (for example, using a comma): 46 | 47 | paste -d, file1.txt file2.txt 48 | 49 | 50 | Output: 51 | apple,red 52 | banana,yellow 53 | cherry,red 54 | 55 | Example 3: Merging Lines from the Same File 56 | You can also use paste to format data from a single file. 57 | 58 | Given file3.txt: 59 | 1 60 | 2 61 | 3 62 | 4 63 | 5 64 | 6 65 | 66 | You can organize it into two columns: 67 | 68 | paste - - < file3.txt 69 | Output: 70 | 71 | 1 2 72 | 3 4 73 | 5 6 74 | 75 | Here, - - tells paste to expect two inputs. 76 | Since we're redirecting file3.txt as input, 77 | it treats every two lines as those separate inputs. 
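If it helps to see the pairing logic outside the shell, here is a rough Python equivalent of Examples 1 and 2 above, assuming file1.txt and file2.txt from this directory:

# Pair up corresponding lines from two files, like: paste -d, file1.txt file2.txt
with open("file1.txt") as f1, open("file2.txt") as f2:
    for left, right in zip(f1, f2):
        print(left.rstrip("\n") + "," + right.rstrip("\n"))

One small behavioural difference: zip() stops at the shorter file, while paste keeps going and pads the missing side with empty fields.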
78 | 79 | Example 4: Merging All Lines into One 80 | Using the -s (serial) option, you can concatenate all lines of a file into a single line: 81 | 82 | paste -s -d',' file1.txt 83 | Output: 84 | apple,banana,cherry 85 | 86 | 87 | Here, -d',' changes the delimiter to a comma. 88 | 89 | 90 | Conclusion 91 | The paste command is quite versatile and is especially useful in 92 | shell scripting and data processing to format outputs and combine data from different sources. 93 | It is simple, yet it can be combined with other text processing commands for powerful 94 | data manipulation and analysis in Linux. -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/app.log: -------------------------------------------------------------------------------- 1 | [INFO] This is a log statement 2 | [DEBUG] This is a log statement 3 | [ERROR] There was an error on module_1.py 4 | [INFO] This is a log statement 5 | [INFO] This is a log statement 6 | [INFO] This is a log statement 7 | [ERROR] There was an error on module_1.py 8 | [INFO] This is a log statement 9 | [WARNING] This is a log statement 10 | [INFO] This is a log statement 11 | [WARNING] This is a log statement 12 | [WARNING] This is a log statement 13 | [INFO] This is a log statement 14 | [ERROR] There was an error on module_2.py 15 | [ERROR] There was an error on module_3.py 16 | [DEBUG] Accessing http://example.com/resource1 17 | [ERROR] There was an error on module_1.py 18 | [INFO] This is a log statement 19 | [ERROR] There was an error on module_2.py 20 | [INFO] This is a log statement 21 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/instructions.txt: -------------------------------------------------------------------------------- 1 | Grep Lab Exercise for Software Developers 2 | 3 | Objective: Apply `grep` to real-world software development scenarios. 4 | 5 | Setup: 6 | 1. Consider you have a codebase with multiple `.py` (Python) files. 7 | 2. You've also got application logs (`app.log`) recording the behavior of your application. 8 | 9 | Tasks: 10 | 1. Find all Python functions in the codebase that might be printing to the console. 11 | 2. Identify all error logs in `app.log`. 12 | 3. Search for all TODO comments in the codebase. 13 | 4. List all instances of a variable named `dbConnection` in the codebase, 14 | along with their line numbers. 15 | 5. Extract URLs (starting with "http:// or "https://") from `app.log`. 
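Optional: once you have attempted the tasks with grep, you can sanity-check your answer to task 5 with a few lines of Python. This is only a cross-check, not the intended solution, and it assumes app.log sits in the current directory:

import re

# Pull out anything that looks like an http:// or https:// URL, line by line.
url_pattern = re.compile(r"https?://\S+")
with open("app.log") as log:
    for line in log:
        for url in url_pattern.findall(line):
            print(url)

The output should be comparable to what your grep -o command produces.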
16 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/lab_setup.py: -------------------------------------------------------------------------------- 1 | import os 2 | import random 3 | 4 | # Generate .py files 5 | for i in range(5): 6 | with open(f"module_{i}.py", 'w') as f: 7 | f.write(f"def function_{i}():\n") 8 | if i % 2 == 0: 9 | f.write(" print('This is a print statement')\n") 10 | f.write("\n") 11 | f.write("# TODO: Check this function later\n") 12 | f.write("dbConnection = 'Some connection string here'\n") 13 | f.write(f"def another_function_{i}():\n") 14 | f.write(" pass\n") 15 | 16 | # Generate app.log 17 | log_levels = ["INFO", "DEBUG", "WARNING", "ERROR"] 18 | urls = [ 19 | "http://example.com/resource1", 20 | "https://secure-example.com/resource2", 21 | "http://another-example.com/page" 22 | ] 23 | 24 | with open("app.log", 'w') as f: 25 | for i in range(20): 26 | random_log = random.choice(log_levels) 27 | if random_log == "ERROR": 28 | f.write(f"[ERROR] There was an error on module_{random.randint(0,4)}.py\n") 29 | elif random_log == "DEBUG": 30 | f.write(f"[DEBUG] Accessing {random.choice(urls)}\n") 31 | else: 32 | f.write(f"[{random_log}] This is a log statement\n") 33 | 34 | print("Python files and app.log generated!") 35 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/module_0.py: -------------------------------------------------------------------------------- 1 | def function_0(): 2 | print('This is a print statement') 3 | 4 | # TODO: Check this function later 5 | dbConnection = 'Some connection string here' 6 | def another_function_0(): 7 | pass 8 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/module_1.py: -------------------------------------------------------------------------------- 1 | def function_1(): 2 | 3 | # TODO: Check this function later 4 | dbConnection = 'Some connection string here' 5 | def another_function_1(): 6 | pass 7 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/module_2.py: -------------------------------------------------------------------------------- 1 | def function_2(): 2 | print('This is a print statement') 3 | 4 | # TODO: Check this function later 5 | dbConnection = 'Some connection string here' 6 | def another_function_2(): 7 | pass 8 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/module_3.py: -------------------------------------------------------------------------------- 1 | def function_3(): 2 | 3 | # TODO: Check this function later 4 | dbConnection = 'Some connection string here' 5 | def another_function_3(): 6 | pass 7 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/module_4.py: -------------------------------------------------------------------------------- 1 | def function_4(): 2 | print('This is a print statement') 3 | 4 | # TODO: Check this function later 5 | dbConnection = 'Some connection string here' 6 | def another_function_4(): 7 | pass 8 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/2_greplab/solutions.txt: -------------------------------------------------------------------------------- 1 | Grep Lab Exercise Solutions and Explanations for 
Software Developers 2 | 3 | 1. Find all Python functions printing to the console: 4 | Command: 5 | $ grep -r "def.*print" *.py 6 | 7 | Explanation: 8 | - The `-r` flag enables recursive search. 9 | - The pattern "def.*print" searches for function definitions (`def`) that have a `print` statement within them. 10 | 11 | 2. Identify all error logs in `app.log`: 12 | Command: 13 | $ grep "ERROR" app.log 14 | 15 | Explanation: 16 | - This command searches for all lines containing the string "ERROR" in the `app.log` file, commonly used to indicate error logs. 17 | 18 | 3. Search for all TODO comments in the codebase: 19 | Command: 20 | $ grep -r "TODO" *.py 21 | 22 | Explanation: 23 | - Developers often use "TODO" comments to mark portions of the code that need future attention. 24 | 25 | 4. List `dbConnection` instances with line numbers: 26 | Command: 27 | $ grep -rn "dbConnection" *.py 28 | 29 | Explanation: 30 | - The `-n` flag prefixes each line of output with its line number. 31 | - This command searches for the variable `dbConnection` across Python files. 32 | 33 | 5. Extract URLs from `app.log`: 34 | Command: 35 | $ grep -o 'http\://[^ ]*' app.log; grep -o 'https\://[^ ]*' app.log 36 | 37 | Explanation: 38 | - The `-o` flag extracts only the matching part of the input. 39 | - These commands search for substrings starting with "http://" or "https://", respectively, and ending when a space character is encountered. 40 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/3_sedlab/app.py: -------------------------------------------------------------------------------- 1 | def main(): 2 | print("Debug: Starting application") 3 | # TODO: Implement feature X 4 | connect_database() 5 | # TODO: Implement feature Y 6 | 7 | def connect_database(): 8 | url = "http://localhost:8080" 9 | print("Debug: Connecting to database") 10 | # Connection logic here 11 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/3_sedlab/instructions.txt: -------------------------------------------------------------------------------- 1 | Exercises: 2 | 3 | Code Refactoring: Use sed to change all instances of the string Debug: to Info: in app.py. 4 | 5 | Configuration Management: 6 | Update settings.ini to change the database URL 7 | to http://production.database.com without modifying the original file. 8 | 9 | Code Analysis: Extract all lines from app.py that contain the word TODO. 10 | 11 | Function Renaming: In app.py, change the 12 | function name connect_database to establish_database_connection 13 | and update all its occurrences. 
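(For intuition only: the exercises above are meant to be solved with sed. The core idea of exercise 1, a global search-and-replace that prints the result and leaves the original file untouched, looks like this in Python, assuming app.py is in the current directory.)

import re

# Read app.py, replace every "Debug:" with "Info:", and print the result.
# Nothing is written back, which mirrors sed without the -i option.
with open("app.py") as src:
    text = src.read()
print(re.sub(r"Debug:", "Info:", text), end="")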
14 | 15 | Solutions: 16 | 17 | $ sed 's/Debug:/Info:/g' app.py 18 | 19 | $ sed 's#http://localhost:8080#http://production.database.com#g' settings.ini > settings_new.ini 20 | 21 | $ sed -n '/TODO/p' app.py 22 | 23 | $ sed 's/connect_database/establish_database_connection/g' app.py -------------------------------------------------------------------------------- /08AdvancedTextProcessing/3_sedlab/settings.ini: -------------------------------------------------------------------------------- 1 | [database] 2 | url = http://localhost:8080 3 | user = admin 4 | password = admin1234 5 | 6 | [server] 7 | mode = debug 8 | port = 8080 9 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/3_sedlab/settings_new.ini: -------------------------------------------------------------------------------- 1 | [database] 2 | url = http://production.database.com 3 | user = admin 4 | password = admin1234 5 | 6 | [server] 7 | mode = debug 8 | port = 8080 9 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/3_sedlab/solutions.txt: -------------------------------------------------------------------------------- 1 | 1. Code Refactoring 2 | Solution: $ sed 's/Debug:/Info:/g' app.py 3 | 4 | Explanation: 5 | 6 | sed: The stream editor command used for parsing and transforming text. 7 | 8 | 's/Debug:/Info:/g': This is a sed expression for substitution: 9 | 10 | s: The substitute command. 11 | Debug:: The pattern to search for. 12 | Info:: The replacement string. 13 | g: Stands for "global", meaning it will replace all occurrences in a line, 14 | not just the first one. 15 | app.py: The input file which sed will operate on. 16 | 17 | When this command is run, every occurrence of the string "Debug:" in app.py 18 | will be replaced by "Info:", and the result will be printed to the console. 19 | The file app.py remains unchanged unless -i option is used with sed. 20 | 21 | 2. Configuration Management 22 | Solution: 23 | $ sed 's#http://localhost:8080#http://production.database.com#g' settings.ini > settings_new.ini 24 | 25 | Explanation: 26 | 27 | sed: The stream editor. 28 | 29 | 's#http://localhost:8080#http://production.database.com#g': 30 | This is a sed substitution expression similar to the first, 31 | but note the use of the # delimiter instead of /. We've used a different delimiter 32 | because the search and replace strings contain forward slashes, 33 | and using / as a delimiter would complicate the expression. 34 | 35 | settings.ini: The input file that sed will operate on. 36 | 37 | > settings_new.ini: This redirects the output (the modified content) to a new file 38 | named settings_new.ini. Without this, the result would simply be printed to the console. 39 | 40 | When the command is run, it will search for the database 41 | URL "http://localhost:8080" in settings.ini and replace 42 | it with "http://production.database.com", 43 | saving the modified content to settings_new.ini. 44 | 45 | 3. Code Analysis 46 | Solution: $ sed -n '/TODO/p' app.py 47 | 48 | Explanation: 49 | 50 | sed -n: The -n option tells sed to suppress automatic printing of pattern space 51 | (i.e., it won't print anything unless explicitly told to). 52 | 53 | /TODO/p: This is a sed expression to search and print: 54 | 55 | /TODO/: The pattern to search for. 56 | p: Print command. This tells sed to print lines that match the pattern. 57 | app.py: The input file. 
58 | 59 | When this command is run, only the lines containing the word "TODO" 60 | in app.py will be printed to the console. 61 | 62 | 4. Function Renaming 63 | Solution: $ sed 's/connect_database/establish_database_connection/g' app.py 64 | 65 | Explanation: 66 | 67 | sed: The stream editor. 68 | 69 | 's/connect_database/establish_database_connection/g' 70 | 71 | This is another sed substitution expression: 72 | 73 | s: The substitute command. 74 | connect_database: The pattern to search for (the original function name). 75 | establish_database_connection: The replacement string (the new function name). 76 | g: Stands for "global". 77 | app.py: The input file. 78 | 79 | When run, this command will replace all occurrences of the function name 80 | "connect_database" with "establish_database_connection" in app.py and 81 | print the modified content to the console. 82 | The original file remains unchanged unless the -i option is used. -------------------------------------------------------------------------------- /08AdvancedTextProcessing/3_sedlab/theory_sed.txt: -------------------------------------------------------------------------------- 1 | The sed (stream editor) command in Linux is a powerful utility 2 | for parsing and transforming text in data streams or files. 3 | It works on the basis of applying a script of editing commands to each line of input. 4 | Here's a brief primer on using sed: 5 | 6 | 1. Basic Syntax 7 | sed [OPTIONS] 'command' file 8 | 9 | 2. Common Operations 10 | a) Print lines containing a specific pattern: 11 | sed -n '/pattern/p' file 12 | 13 | b) Delete lines containing a specific pattern: 14 | sed '/pattern/d' file 15 | 16 | c) Substitute (replace) first occurrence of a pattern in each line: 17 | sed 's/pattern/replacement/' file 18 | 19 | d) Substitute all occurrences of a pattern in each line: 20 | sed 's/pattern/replacement/g' file 21 | 22 | e) In-Place Editing 23 | By default, sed outputs the result to standard output. 24 | If you want to modify the file in place, use the -i option: 25 | 26 | sed -i 's/pattern/replacement/' file -------------------------------------------------------------------------------- /08AdvancedTextProcessing/4_awklab/instructions.txt: -------------------------------------------------------------------------------- 1 | AWK Lab Exercise Instructions 2 | 3 | Objective: Develop a solid understanding of 4 | AWK's capabilities by analyzing pseudo-log files. 5 | 6 | Setup: 7 | 8 | 1. Familiarize yourself with the structure of a hypothetical log file, application.log: 9 | Format: [TIMESTAMP] [LOG_LEVEL] [MODULE] Message 10 | Example: [2022-09-03 12:30:45] [ERROR] [AUTH] Failed login attempt. 11 | 12 | 2. Recognize that logs can be vital for software developers to debug and 13 | monitor system behavior. 14 | 15 | Tasks: 16 | 17 | 1. Identify and list all unique log levels in the log file. 18 | 2. Extract and display all logs originating from the DATABASE module. 19 | 3. Determine the number of unique error messages logged by the AUTH module. 20 | 4. Rank and display the software modules based on the number of logs they generated. 
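Optional: after you have worked through the tasks with awk, you can double-check task 4 with a short Python script. This is only a cross-check (the intended solutions are in solutions.txt), and it assumes application.log was produced by the log_generator.py script in this directory:

import re
from collections import Counter

# Count log lines per module, using the [TIMESTAMP] [LEVEL] [MODULE] layout.
counts = Counter()
with open("application.log") as log:
    for line in log:
        match = re.match(r"\[(.+?)\] \[(\w+)\] \[(\w+)\]", line)
        if match:
            counts[match.group(3)] += 1

# Print modules ranked by how many log lines they generated.
for module, count in counts.most_common():
    print(count, module)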
21 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/4_awklab/log_generator.py: -------------------------------------------------------------------------------- 1 | import random 2 | import datetime 3 | 4 | # Define constants 5 | LOG_LEVELS = ["INFO", "WARNING", "ERROR", "DEBUG"] 6 | MODULES = ["AUTH", "DATABASE", "NETWORK", "UI", "API"] 7 | LOG_MESSAGES = { 8 | "INFO": ["User logged in.", "New entry added.", "Connection established.", "Session started."], 9 | "WARNING": ["Login retries almost exhausted.", "DB nearing max capacity.", "Network latency detected.", "UI unresponsive."], 10 | "ERROR": ["Failed login attempt.", "DB connection lost.", "Network error.", "UI crashed."], 11 | "DEBUG": ["Auth method called.", "DB query executed.", "Network packet sent.", "UI button clicked."] 12 | } 13 | LOG_FILE = "application.log" 14 | NUM_OF_LOGS = 1000 # Number of log entries to generate 15 | 16 | def generate_log(): 17 | """ 18 | Generate a single pseudo-log entry. 19 | """ 20 | timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") 21 | log_level = random.choice(LOG_LEVELS) 22 | module = random.choice(MODULES) 23 | message = random.choice(LOG_MESSAGES[log_level]) 24 | 25 | return f"[{timestamp}] [{log_level}] [{module}]: {message}" 26 | 27 | def main(): 28 | """ 29 | Generate the pseudo-log file. 30 | """ 31 | with open(LOG_FILE, "w") as log_file: 32 | for _ in range(NUM_OF_LOGS): 33 | log_file.write(generate_log() + "\n") 34 | print(f"{NUM_OF_LOGS} pseudo-log entries generated in {LOG_FILE}.") 35 | 36 | if __name__ == "__main__": 37 | main() 38 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/4_awklab/solutions.txt: -------------------------------------------------------------------------------- 1 | AWK Lab Exercise Solutions and Explanations 2 | 3 | 1. Display all unique log levels: 4 | Command: 5 | 6 | $ awk -F'] \\[' '{print $2}' application.log | sort | uniq 7 | 8 | Explanation: 9 | - `-F'] \\['`: This sets the field separator to the pattern `] [`. 10 | The double backslash is needed because awk itself interprets escape sequences in the -F value; the single quotes keep the shell from altering it. 11 | - `{print $2}`: This AWK command prints the second field (i.e., the log level). 12 | 13 | 14 | 2. Extract logs from the DATABASE module: 15 | Command: 16 | 17 | $ awk -F'] \\[' '/DATABASE/ {print}' application.log 18 | 19 | Explanation: 20 | - The pattern `/DATABASE/` checks each line for the presence of the string "DATABASE". 21 | - The `{print}` action tells `awk` to print lines that match the pattern. 22 | 23 | 3. Display unique error messages from the AUTH module: 24 | Command: 25 | $ awk -F'] \\[' '$2 == "ERROR" && $3 ~ /^AUTH/ {sub(/^AUTH\]: /, "", $3); print $3}' application.log | sort | uniq 26 | 27 | Explanation: 28 | - With this field separator each line splits into three fields: the log level is the second field (just "ERROR", without brackets) and the third field is the module name followed by "]: " and the message. The command keeps lines whose level is ERROR and whose module is AUTH, strips the leading "AUTH]: " label, and prints only the message; `sort | uniq` then reduces the output to unique messages (append `| wc -l` to count them). 29 | 30 | 4. Rank modules by log count: 31 | Command: 32 | $ awk -F'] \\[' '{print $3}' application.log | cut -d':' -f1 | sort | uniq -c | sort -rn 33 | 34 | Explanation: 35 | - `-F'] \\['`: Sets the field separator. 36 | - `{print $3}`: Prints the third field, i.e., the module name followed by the message (e.g., DATABASE]: Connection established.). 37 | - `cut -d':' -f1`: Keeps only the part before the first colon, i.e., the module name (with its trailing bracket), dropping the message. 38 | - `uniq -c`: Prefixes lines by the number of occurrences. 39 | - `sort -rn`: Sorts in reverse numerical order to rank modules by log count.
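If you want to sanity-check the ranking produced by the awk pipeline in solution 4, here is a small Python cross-check (same counting idea, different tool; the regex assumes the "[TIMESTAMP] [LEVEL] [MODULE]: message" format produced by log_generator.py):

import re
from collections import Counter

entry = re.compile(r"\] \[(\w+)\] \[(\w+)\]:")   # captures log level and module

counts = Counter()
with open("application.log") as log:
    for line in log:
        match = entry.search(line)
        if match:
            counts[match.group(2)] += 1          # group 2 is the module name

for module, count in counts.most_common():       # descending by count, like sort -rn
    print(f"{count:5d} {module}")

The counts should match the output of the `uniq -c | sort -rn` pipeline above.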
40 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/4_awklab/theory_awk.txt: -------------------------------------------------------------------------------- 1 | 2 | NEED WORK 3 | 4 | AWK - A Brief Introduction 5 | 6 | AWK is a powerful text-processing language named after its creators: 7 | Aho, Weinberger, and Kernighan. Used predominantly for pattern scanning and processing, 8 | AWK is particularly efficient when working with structured data, 9 | such as columnar or delimited data in files. 10 | 11 | 12 | Key Concepts: 13 | 1. Records and Fields: 14 | - By default, AWK reads a file line by line, treating each line as a record. 15 | - Within a record, AWK further divides the data into fields based on a field separator, 16 | which is a space by default. 17 | 18 | 2. Built-in Variables: 19 | - FS: Field Separator 20 | - OFS: Output Field Separator 21 | - NF: Number of Fields in a record 22 | - NR: Current Record Number 23 | 24 | 3. Patterns and Actions: 25 | - AWK operates on the principle of patterns and actions. 26 | For every input line that matches a pattern, AWK performs the associated action. 27 | 28 | Example: 29 | 30 | $ echo "apple orange" | awk '{print $2, $1}' 31 | 32 | Output: orange apple 33 | 34 | This example swaps the two words, showcasing how AWK uses spaces as 35 | the default field separator and operates on fields. 36 | -------------------------------------------------------------------------------- /08AdvancedTextProcessing/books.csv: -------------------------------------------------------------------------------- 1 | Title,Author,Price ($),Genre 2 | Catcher in the Rye,J.D. Salinger,10.00,Fiction 3 | 1984,George Orwell,8.00,Fiction 4 | Clean Code,Robert C. Martin,25.00,Technology 5 | Thinking Fast and Slow,Daniel Kahneman,12.00,Psychology 6 | Pragmatic Programmer,Andrew Hunt,30.00,Technology 7 | Hitchhiker's Guide to the Galaxy,Douglas Adams,15.00,Science Fiction 8 | Art of Computer Programming,Donald E. Knuth,80.00,Computer Science 9 | Clean Code,Robert C. Martin,50.00,Software Engineering 10 | Pragmatic Programmer,Andrew Hunt,60.00,Software Engineering 11 | Sapiens,Yuval Noah Harari,30.00,History 12 | Deep Learning,Ian Goodfellow,70.00,Machine Learning 13 | Design of Everyday Things,Don Norman,40.00,Design 14 | Cosmos,Carl Sagan,25.00,Science 15 | 1984,George Orwell,20.00,Fiction 16 | Catcher in the Rye,J.D. Salinger,15.00,Fiction 17 | -------------------------------------------------------------------------------- /09LinuxArchitecture+FHS/01LinuxArchitecture.txt: -------------------------------------------------------------------------------- 1 | => Linux Architecture. Components - bird eye view. 2 | 3 | Hardware: This is the physical layer of computers where all the physical components reside. 4 | 5 | Kernel: The core part of Linux, responsible for managing 6 | the system's resources and interacting with hardware. 7 | 8 | System Libraries: These provide a set of standard functions that user applications 9 | can use to perform system-level tasks. 10 | 11 | Shell: This is a user interface to the kernel. 12 | It takes commands from the user and executes the system's programs. 13 | Different types of shells provide various features, but all serve as the command interpreter 14 | between the user and the system. 15 | 16 | System Utilities: These are tools and programs that perform individual, 17 | specialized tasks that users may invoke via the shell or directly as background services. 
18 | 19 | Users: Users interact with the shell using commands, 20 | and they are an essential part of the system's ecosystem. 21 | Their privileges and environment are managed by the system, 22 | but they are the actors who make use of the architecture to perform work. 23 | 24 | User Applications: 25 | These are the high-level applications that users interact with directly, 26 | such as office suites, browsers, and games. 27 | They often use system libraries and utilities to function and are accessed 28 | via a shell or a graphical user interface. 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /09LinuxArchitecture+FHS/02LinuxFilesystemHierarchy.txt: -------------------------------------------------------------------------------- 1 | => Linux Filesystem Hierarchy, or FHS (Filesystem Hierarchy Standard) 2 | 3 | The filesystem hierarchy standardizes the layout of directories and files in a Linux system, 4 | which reflects the architecture's modularity and organization. 5 | Here's how the filesystem hierarchy connects with the Linux architecture: 6 | 7 | / (Root): 8 | The root directory is at the base of the filesystem hierarchy. 9 | Everything on a Linux system falls under the root directory. 10 | 11 | /bin and /usr/bin (User Binaries): 12 | These directories contain essential user command binaries that are necessary 13 | for the system to boot and run. They connect with the shell in the architecture, 14 | as they contain the binaries that users directly interact with. 15 | 16 | /sbin and /usr/sbin (System Binaries): 17 | Here lie the system binaries that are usually run by the root user for system administration. 18 | This relates to the system utilities in the architecture, which perform specialized tasks. 19 | 20 | /etc (Configuration Files): 21 | This directory holds the system-wide configuration files read by applications at startup. 22 | It’s a critical part of the system utilities layer, 23 | affecting how the system behaves for users and the shell. 24 | 25 | /dev (Device Files): 26 | In Linux, hardware devices are treated as files, and the /dev directory contains these device files. 27 | 28 | /proc and /sys (System Information): 29 | These virtual filesystems don't contain real files but are interfaces 30 | to system and process information managed by the kernel. 31 | 32 | /var (Variable Files): 33 | This includes files that are expected to grow over time, 34 | such as logs, spool files, and cache data. It's part of the system's ongoing operation, 35 | tying back to the process and memory management aspects of the kernel. 36 | 37 | /home (User Home Directories): 38 | Each user has a directory in /home, where their personal files are stored. 39 | This relates directly to the user component of the architecture, providing a space for user data. 40 | 41 | /lib and /usr/lib (Libraries): 42 | These directories contain library files that support the binaries in /bin and /sbin. 43 | They are connected to the system libraries in the architecture, which provide 44 | higher-level functions to the system's programs. 45 | 46 | /boot (Boot Loader Files): 47 | Contains the files needed to boot the operating system. 48 | It's closely related to the kernel component since the kernel itself is 49 | typically stored here along with the boot loader. 50 | 51 | /tmp (Temporary Files): 52 | Used by applications to store temporary files, 53 | which aligns with the process management function of 54 | the kernel and its management of system resources.
55 | 56 | Each directory in the Linux filesystem hierarchy has a specific purpose 57 | and contains a certain type of file or directory entries, 58 | which helps maintain a well-organized system that reflects the architecture's principles. 59 | 60 | -------------------------------------------------------------------------------- /09LinuxArchitecture+FHS/03ConnectDots.txt: -------------------------------------------------------------------------------- 1 | LinuxArchitecture + Linux Filesystem Hierarchy Standard (FHS) 2 | 3 | 4 | Here is how each aspect of the FHS connects with the broader Linux architecture: 5 | 6 | 7 | => Linux Architecture Component: Kernel and System Files 8 | 9 | /boot: 10 | This directory contains the static bootloader and kernel files necessary to boot the system, 11 | directly connected to the Linux kernel layer in the architecture. 12 | 13 | /lib, /lib64: 14 | These contain essential shared libraries for system boot-up and running commands in root 15 | filesystem binaries, closely linked to the system libraries in the architecture. 16 | 17 | /etc: Configuration files critical for the system's operation, 18 | which the kernel and various system utilities read at startup or during operation. 19 | 20 | 21 | => Linux Architecture Component: User and Shell Interface 22 | /bin, /usr/bin: User binaries that include essential commands required for all users, 23 | such as ls, cp, etc., are necessary for user interaction with the shell. 24 | 25 | 26 | /sbin, /usr/sbin: System administration binaries that are generally not run 27 | by ordinary users but require superuser (root) privileges, 28 | reflecting the administrative functions accessible through the shell. 29 | 30 | 31 | => Linux Architecture Component: User Data and Applications 32 | 33 | /home: Home directories for user files, reflecting the user space in 34 | the architecture where each user's data and user-specific configurations are stored. 35 | 36 | /usr/local: Local hierarchy for user-installed software and files, 37 | which is separate from system software and reflects the modularity of user applications within the architecture. 38 | 39 | 40 | => Linux Architecture Component: System Operation and Process Management 41 | 42 | /var: Variable data files that include logs, spool files, and transient and temporary files, 43 | representing the system's ongoing operation managed by the kernel. 44 | 45 | /proc: A virtual filesystem providing process and kernel information as files, 46 | directly representing the kernel's process management in a user-accessible format. 47 | 48 | /dev: Device files representing hardware devices, which the kernel manages, 49 | providing the interface between the hardware (kernel management) and user-space utilities. 50 | 51 | 52 | => Linux Architecture Component: System State and Services 53 | 54 | /run: Holds runtime data, like the system's state since the last boot, 55 | which is tightly coupled with the kernel's management of system state. 56 | 57 | /sys: Another virtual filesystem that provides a window into the kernel, 58 | exposing the kernel's view of the hardware in a structured way similar to /proc. 59 | 60 | 61 | Temporary Data 62 | 63 | /tmp: Temporary files used by applications and the system, 64 | managed by the kernel's process and resource management functionality. 
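Before the lab that follows, a minimal Python sketch can make these connections concrete: it reads the kernel's bookkeeping for its own process from /proc, drops a scratch file into /tmp, and glances at /dev. The paths are standard FHS locations, but the exact output will differ from system to system:

import os
import tempfile

pid = os.getpid()

# /proc: the kernel's view of this very process, exposed as ordinary files
with open(f"/proc/{pid}/status") as status:
    print(status.readline().strip())              # first line, e.g. "Name:  python3"

# /tmp: scratch space for transient files
with tempfile.NamedTemporaryFile(dir="/tmp") as tmp:
    print("temporary file lives at", tmp.name)

# /dev: hardware and pseudo-devices represented as files
print("entries under /dev:", len(os.listdir("/dev")))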
65 | 66 | -------------------------------------------------------------------------------- /09LinuxArchitecture+FHS/05Lab1Continue.txt: -------------------------------------------------------------------------------- 1 | # cpu_load_simulator.py 2 | import threading 3 | import time 4 | import tempfile 5 | import syslog 6 | 7 | # Function to simulate CPU intensive task 8 | def cpu_intensive_task(): 9 | while True: 10 | # Perform a calculation that uses CPU resources 11 | [x**2 for x in range(10000)] 12 | 13 | # Create and start multiple threads to increase CPU load 14 | def create_load(threads=4): 15 | syslog.syslog(syslog.LOG_INFO, f"Simulating high CPU load with {threads} threads.") 16 | for _ in range(threads): 17 | thread = threading.Thread(target=cpu_intensive_task) 18 | thread.daemon = True # allows thread to exit when main thread exits 19 | thread.start() 20 | 21 | # Run the function to create CPU load 22 | if __name__ == "__main__": 23 | try: 24 | # You can change the number of threads to increase or decrease the load 25 | create_load(threads=8) 26 | 27 | # Demonstrate creating a temporary file 28 | with tempfile.NamedTemporaryFile() as temp_file: 29 | syslog.syslog(syslog.LOG_INFO, f"Created temporary file at {temp_file.name}") 30 | # Keep the program running 31 | while True: 32 | time.sleep(1) 33 | except KeyboardInterrupt: 34 | syslog.syslog(syslog.LOG_INFO, "Load simulation stopped by user.") 35 | 36 | 37 | Open two terminals to follow along. 38 | 39 | 40 | Running the Script: 41 | Create and Execute the cpu_load_simulator.py script in your terminal. 42 | This will start the process and begin the simulation. 43 | 44 | Identifying the Process ID (PID): 45 | To find the process ID of your script, use the following command: 46 | 47 | ps aux | grep cpu_load_simulator.py 48 | The PID will be listed in the output, typically the second column. 49 | 50 | Accessing the Temporary File: 51 | Use the PID to list all the open files associated with the process: 52 | 53 | ls -l /proc/[PID]/fd 54 | 55 | In the output, look for a symlink that points to a file within /tmp/. This is your temporary file. 56 | 57 | Viewing the Temporary File's Contents: 58 | To read the contents of the temporary file: 59 | 60 | cat /proc/[PID]/fd/[FD_NUMBER] 61 | Replace [FD_NUMBER] with the file descriptor number for the temporary file. 62 | 63 | Viewing Logs: 64 | Check the logs to confirm the script is executing as expected: 65 | 66 | tail -f /var/log/syslog 67 | 68 | tail -f /var/log/messages 69 | 70 | These commands will continuously output the log file contents. 71 | Look for entries from your script. 72 | 73 | Cleaning Up: 74 | Once you're ready to stop the simulation: 75 | 76 | First, terminate the script by pressing Ctrl+C in the terminal where it's running. 77 | If that doesn't work, kill the process using: 78 | 79 | kill [PID] 80 | Replace [PID] with the actual process ID you identified earlier. 81 | 82 | Confirming Cleanup: 83 | 84 | After killing the process, check that the temporary file has been removed by listing the open files again: 85 | 86 | ls -l /proc/[PID]/fd 87 | 88 | The temporary file descriptor should no longer appear in the list. 89 | 90 | Deliverables 91 | Submit the following screenshots as your lab deliverables: 92 | 93 | Screenshot showing the PID of the cpu_load_simulator.py script. 94 | Screenshot showing the symlink in the /proc/[PID]/fd directory pointing to the /tmp/ directory. 95 | Screenshot showing the contents of the temporary file. 
96 | Screenshot of the log entries corresponding to the script execution. 97 | 98 | Final Steps: 99 | 100 | If you've used tail -f to view the logs, you can stop the command by pressing Ctrl+C. 101 | Review the logs to see the final messages from your script, 102 | confirming that it has stopped and cleaned up the temporary file. 103 | 104 | Remember, the temporary file should be automatically cleaned up by the NamedTemporaryFile 105 | mechanism when the script exits properly. If not, and the file is still present, 106 | you may need to remove it manually using the file path found earlier. 107 | -------------------------------------------------------------------------------- /09LinuxArchitecture+FHS/cpu_load_simulator.py: -------------------------------------------------------------------------------- 1 | # cpu_load_simulator.py 2 | import threading 3 | import time 4 | import tempfile 5 | import syslog 6 | 7 | # Function to simulate CPU intensive task 8 | def cpu_intensive_task(): 9 | while True: 10 | # Perform a calculation that uses CPU resources 11 | [x**2 for x in range(10000)] 12 | 13 | # Create and start multiple threads to increase CPU load 14 | def create_load(threads=4): 15 | syslog.syslog(syslog.LOG_INFO, f"Simulating high CPU load with {threads} threads.") 16 | for _ in range(threads): 17 | thread = threading.Thread(target=cpu_intensive_task) 18 | thread.daemon = True # allows thread to exit when main thread exits 19 | thread.start() 20 | 21 | # Run the function to create CPU load 22 | if __name__ == "__main__": 23 | try: 24 | # You can change the number of threads to increase or decrease the load 25 | create_load(threads=8) 26 | 27 | # Demonstrate creating a temporary file 28 | with tempfile.NamedTemporaryFile() as temp_file: 29 | syslog.syslog(syslog.LOG_INFO, f"Created temporary file at {temp_file.name}") 30 | # Keep the program running 31 | while True: 32 | time.sleep(1) 33 | except KeyboardInterrupt: 34 | syslog.syslog(syslog.LOG_INFO, "Load simulation stopped by user.") 35 | -------------------------------------------------------------------------------- /09LinuxArchitecture+FHS/file_wait.py: -------------------------------------------------------------------------------- 1 | # file_wait.py 2 | with open('/tmp/testfile.txt', 'w') as f: 3 | print("File is open. Please enter any text and press enter:") 4 | user_input = input() 5 | f.write(user_input) 6 | print("You've written to the file!") 7 | -------------------------------------------------------------------------------- /10FileSystemsBasics/00VFS.txt: -------------------------------------------------------------------------------- 1 | Virtual File System (VFS) is a software layer in the kernel 2 | that provides a common interface to various kinds of file systems. 3 | VFS allows the operating system to access different types of file systems 4 | in a uniform way. It facilitates the integration of various file systems 5 | seamlessly into the Linux environment. 6 | 7 | How VFS works and its functionalities: 8 | 9 | Core Concepts 10 | File system independence: It provides a mechanism to support multiple 11 | file systems transparently, including ext4, XFS, NFS, etc. 12 | 13 | Common file model: It creates a common file model that represents 14 | all supported file systems to allow applications and system services 15 | to work with different file systems transparently. 16 | 17 | Components of VFS 18 | Superblock Object: It holds information about a mounted file system. 
19 | 20 | Inode Object: It represents an individual file in a file system. 21 | Each file has a unique inode that stores metadata like file permissions, 22 | ownership, etc. 23 | 24 | File Object: It represents an open file and maintains the status of the file, 25 | including current file position, access modes, etc. 26 | 27 | Dentry Object: It connects inodes to filenames and represents 28 | a cache of directory entries to speed up lookups. 29 | 30 | Major Interfaces of VFS 31 | 32 | System Calls: VFS implements system calls like open, read, write, close, 33 | etc., to work with files and directories. 34 | 35 | File Operations: This is a set of operations (functions pointers) 36 | for working with files, including opening files, reading data from files, 37 | writing data to files, etc. 38 | 39 | Inode Operations: This set of operations is for working with inodes, 40 | including creating new files, deleting files, looking up files in directories, 41 | etc. 42 | 43 | Superblock Operations: These operations work on file system superblocks, 44 | including mounting and unmounting file systems, getting statistics about 45 | file systems, etc. 46 | 47 | Address Space Operations: These are operations on file memory mappings, 48 | and they deal with memory-mapped files and page cache management. 49 | 50 | VFS Workflow 51 | Mounting a Filesystem: 52 | When a file system is mounted, VFS initializes a superblock object 53 | representing that file system. 54 | 55 | File Operations: 56 | When a system call is made to work with files (like opening a file), 57 | VFS invokes the appropriate file operations defined by the 58 | specific file system through the VFS interface. 59 | 60 | Path Resolution: 61 | VFS resolves file paths by traversing the directory entry 62 | (dentry) cache to find the inode associated with a filename. 63 | 64 | Buffered I/O and Caching: 65 | VFS employs caching mechanisms to speed up file system access, 66 | including the dentry cache for directory entries and the page cache 67 | for file data. 68 | 69 | Advantages of VFS 70 | 71 | Uniformity: It provides a uniform interface to different types of file systems, 72 | simplifying user and application interactions with files. 73 | 74 | Extensibility: New file systems can be added to the Linux kernel easily, 75 | as they just need to implement the VFS interfaces. 76 | 77 | Performance: Through caching mechanisms, it can enhance the performance of 78 | file system access. 79 | 80 | Only reason example is in Java, because it uses interface word explicitly. 81 | In Rust it's called Trait. 82 | In CPP, similar idea can be implemented with abstract classes. 83 | 84 | 85 | Refer to VFSIDEA.java file as a starter. 86 | 87 | In the above example: 88 | 89 | The VFS interface defines a set of methods that correspond to file operations. 90 | 91 | The Ext4 and XFS classes implement the VFS interface, 92 | each providing its own specific implementations for the file operations. 93 | 94 | In the main function, we instantiate Ext4 and XFS objects 95 | (representing file systems) and work with them through the VFS interface. 96 | This shows how you can work with different file systems 97 | through a common interface, similar to how the VFS in Linux allows 98 | you to work with different file systems through a common set of system calls. 99 | 100 | This Java example abstractly represents the concept of how VFS and 101 | different filesystems interact in the Linux kernel, 102 | albeit in a much simplified form. 
103 | 104 | It illustrates the underlying principle of using a common interface 105 | to work with different implementations, reflecting the abstraction 106 | mechanism employed in VFS. -------------------------------------------------------------------------------- /10FileSystemsBasics/Copy_on_Write.py: -------------------------------------------------------------------------------- 1 | class CowList: 2 | def __init__(self, original_list): 3 | self.original_list = original_list 4 | self.modified_indices = {} 5 | 6 | def __getitem__(self, index): 7 | # If index has been modified, return the modified value, otherwise return the value from the original list 8 | if index in self.modified_indices: 9 | return self.modified_indices[index] 10 | return self.original_list[index] 11 | 12 | def __setitem__(self, index, value): 13 | # The "write" happens here; we record the modification in the dictionary, creating an actual copy of that element 14 | self.modified_indices[index] = value 15 | 16 | def __str__(self): 17 | final_list = [] 18 | 19 | # Construct the final list using an explicit loop, checking each index for modifications 20 | for i in range(len(self.original_list)): 21 | if i in self.modified_indices: 22 | final_list.append(self.modified_indices[i]) 23 | else: 24 | final_list.append(self.original_list[i]) 25 | 26 | return str(final_list) 27 | 28 | 29 | # Creating an original list 30 | original_list = [1, 2, 3, 4, 5] 31 | 32 | # Creating a COW list based on the original list 33 | cow_list = CowList(original_list) 34 | 35 | # Displaying the COW list (it should match the original list) 36 | print("COW List before modification: ", cow_list) 37 | 38 | # Modifying an element in the COW list (this will trigger an actual copy of the modified element) 39 | cow_list[2] = 99 40 | 41 | # Displaying the COW list and the original list to demonstrate that the original list remains unchanged 42 | print("COW List after modification: ", cow_list) 43 | print("Original list: ", original_list) 44 | -------------------------------------------------------------------------------- /10FileSystemsBasics/Journaling.py: -------------------------------------------------------------------------------- 1 | class JournalingFileSystem: 2 | def __init__(self): 3 | self.file_system = {} 4 | self.journal = [] 5 | 6 | def write(self, filename, data): 7 | # Log the operation in the journal 8 | self.journal.append(("write", filename, data)) 9 | # Modify the file system immediately 10 | self.file_system[filename] = data 11 | 12 | def delete(self, filename): 13 | # Log the operation in the journal but do not modify the file system immediately 14 | self.journal.append(("delete", filename)) 15 | 16 | def crash_recovery(self): 17 | # Clear the current state of the file system 18 | self.file_system = {} 19 | # Replay the journal to restore the file system to the last consistent state 20 | for operation, filename, *args in self.journal: 21 | if operation == "write": 22 | self.file_system[filename] = args[0] 23 | elif operation == "delete": 24 | self.file_system.pop(filename, None) 25 | 26 | def display(self): 27 | print(f"File System: {self.file_system}") 28 | print(f"Journal: {self.journal}") 29 | 30 | 31 | # Example usage 32 | fs = JournalingFileSystem() 33 | fs.write("file1.txt", "Hello, World!") 34 | fs.write("file2.txt", "Hello, Journaling!") 35 | fs.delete("file1.txt") 36 | fs.display() 37 | 38 | print("\nSimulating crash...\n") 39 | fs.crash_recovery() 40 | fs.display() 41 | 
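A quick way to see what the journal buys you in the simulation above: before the simulated crash, file1.txt is still present because its delete was only logged, and only after crash_recovery() replays the journal does the deletion take effect. A minimal check, assuming the JournalingFileSystem class above is in scope:

fs = JournalingFileSystem()
fs.write("file1.txt", "Hello, World!")
fs.delete("file1.txt")
assert "file1.txt" in fs.file_system        # delete was journaled, not yet applied
fs.crash_recovery()
assert "file1.txt" not in fs.file_system    # replaying the journal applies the delete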
-------------------------------------------------------------------------------- /10FileSystemsBasics/VFSIDEA.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | 4 | // Abstract class to represent the VFS interface 5 | class Interface { 6 | public: 7 | virtual void read(const std::string& fileName) = 0; 8 | virtual void write(const std::string& fileName, const std::string& data) = 0; 9 | virtual void open(const std::string& fileName) = 0; 10 | virtual void close(const std::string& fileName) = 0; 11 | 12 | virtual ~Interface() = default; 13 | }; 14 | 15 | // Struct to represent the superblock data, including maximum file and volume size 16 | struct Superblock { 17 | uint64_t maxFileSize; 18 | uint64_t maxVolumeSize; 19 | }; 20 | 21 | // Ext4 class implementing the VFS interface and providing concrete implementations for the operations 22 | class Ext4 : public Interface { 23 | public: 24 | Ext4(Superblock sb, const std::string& ds) : superblock(sb), dataStructures(ds) {} 25 | 26 | void read(const std::string& fileName) override { 27 | std::cout << "Ext4 reading file: " << fileName << '\n'; 28 | } 29 | 30 | void write(const std::string& fileName, const std::string& data) override { 31 | std::cout << "Ext4 writing data to file: " << fileName << '\n'; 32 | } 33 | 34 | void open(const std::string& fileName) override { 35 | std::cout << "Ext4 opening file: " << fileName << '\n'; 36 | } 37 | 38 | void close(const std::string& fileName) override { 39 | std::cout << "Ext4 closing file: " << fileName << '\n'; 40 | } 41 | 42 | private: 43 | Superblock superblock; 44 | std::string dataStructures; 45 | }; 46 | 47 | // XFS class implementing the VFS interface and providing concrete implementations for the operations 48 | class XFS : public Interface { 49 | public: 50 | XFS(Superblock sb, const std::string& ds) : superblock(sb), dataStructures(ds) {} 51 | 52 | void read(const std::string& fileName) override { 53 | std::cout << "XFS reading file: " << fileName << '\n'; 54 | } 55 | 56 | void write(const std::string& fileName, const std::string& data) override { 57 | std::cout << "XFS writing data to file: " << fileName << '\n'; 58 | } 59 | 60 | void open(const std::string& fileName) override { 61 | std::cout << "XFS opening file: " << fileName << '\n'; 62 | } 63 | 64 | void close(const std::string& fileName) override { 65 | std::cout << "XFS closing file: " << fileName << '\n'; 66 | } 67 | 68 | private: 69 | Superblock superblock; 70 | std::string dataStructures; 71 | }; 72 | 73 | int main() { 74 | uint64_t default_size = 16 * 1024; 75 | Superblock ext4SB {default_size,default_size}; 76 | Ext4 ext4(ext4SB, "HTree"); 77 | 78 | Superblock xfsSB {default_size, default_size}; 79 | XFS xfs(xfsSB, "B+Tree"); 80 | 81 | ext4.open("document.txt"); 82 | ext4.read("document.txt"); 83 | ext4.write("document.txt", "Hello, World!"); 84 | ext4.close("document.txt"); 85 | 86 | xfs.open("document.txt"); 87 | xfs.read("document.txt"); 88 | xfs.write("document.txt", "Hello, C++!"); 89 | xfs.close("document.txt"); 90 | 91 | return 0; 92 | } 93 | -------------------------------------------------------------------------------- /10FileSystemsBasics/VFSIDEA.java: -------------------------------------------------------------------------------- 1 | // Defining a "VFS" interface with abstract methods resembling file operations 2 | interface VFS { 3 | void read(String fileName); 4 | void write(String fileName, String data); 5 | void open(String fileName); 6 | void 
close(String fileName); 7 | } 8 | 9 | // An "Ext4" class implementing the VFS interface and providing concrete implementations for the operations 10 | class Ext4 implements VFS { 11 | @Override 12 | public void read(String fileName) { 13 | System.out.println("Ext4 reading file: " + fileName); 14 | } 15 | 16 | @Override 17 | public void write(String fileName, String data) { 18 | System.out.println("Ext4 writing data to file: " + fileName); 19 | } 20 | 21 | @Override 22 | public void open(String fileName) { 23 | System.out.println("Ext4 opening file: " + fileName); 24 | } 25 | 26 | @Override 27 | public void close(String fileName) { 28 | System.out.println("Ext4 closing file: " + fileName); 29 | } 30 | } 31 | 32 | // An "XFS" class implementing the VFS interface, with its own concrete implementations for the operations 33 | class XFS implements VFS { 34 | @Override 35 | public void read(String fileName) { 36 | System.out.println("XFS reading file: " + fileName); 37 | } 38 | 39 | @Override 40 | public void write(String fileName, String data) { 41 | System.out.println("XFS writing data to file: " + fileName); 42 | } 43 | 44 | @Override 45 | public void open(String fileName) { 46 | System.out.println("XFS opening file: " + fileName); 47 | } 48 | 49 | @Override 50 | public void close(String fileName) { 51 | System.out.println("XFS closing file: " + fileName); 52 | } 53 | } 54 | 55 | // A test class to demonstrate the concept 56 | public class VFSIDEA { 57 | public static void main(String[] args) { 58 | VFS ext4 = new Ext4(); 59 | ext4.open("document.txt"); 60 | ext4.read("document.txt"); 61 | ext4.write("document.txt", "Hello, World!"); 62 | ext4.close("document.txt"); 63 | 64 | VFS xfs = new XFS(); 65 | xfs.open("document.txt"); 66 | xfs.read("document.txt"); 67 | xfs.write("document.txt", "Hello, World!"); 68 | xfs.close("document.txt"); 69 | } 70 | } 71 | -------------------------------------------------------------------------------- /10FileSystemsBasics/VFSIDEA.rs: -------------------------------------------------------------------------------- 1 | // Defining a trait to represent the VFS interface 2 | trait VFS { 3 | fn read(&self, file_name: &str); 4 | fn write(&self, file_name: &str, data: &str); 5 | fn open(&self, file_name: &str); 6 | fn close(&self, file_name: &str); 7 | } 8 | 9 | // Implementing the VFS trait for a hypothetical Ext4 file system 10 | struct Ext4; 11 | 12 | impl VFS for Ext4 { 13 | fn read(&self, file_name: &str) { 14 | println!("Ext4 reading file: {}", file_name); 15 | } 16 | 17 | fn write(&self, file_name: &str, data: &str) { 18 | println!("Ext4 writing data to file: {}", file_name); 19 | } 20 | 21 | fn open(&self, file_name: &str) { 22 | println!("Ext4 opening file: {}", file_name); 23 | } 24 | 25 | fn close(&self, file_name: &str) { 26 | println!("Ext4 closing file: {}", file_name); 27 | } 28 | } 29 | 30 | // Implementing the VFS trait for a hypothetical XFS file system 31 | struct XFS; 32 | 33 | impl VFS for XFS { 34 | fn read(&self, file_name: &str) { 35 | println!("XFS reading file: {}", file_name); 36 | } 37 | 38 | fn write(&self, file_name: &str, data: &str) { 39 | println!("XFS writing data to file: {}", file_name); 40 | } 41 | 42 | fn open(&self, file_name: &str) { 43 | println!("XFS opening file: {}", file_name); 44 | } 45 | 46 | fn close(&self, file_name: &str) { 47 | println!("XFS closing file: {}", file_name); 48 | } 49 | } 50 | 51 | fn main() { 52 | let ext4 = Ext4; 53 | ext4.open("document.txt"); 54 | ext4.read("document.txt"); 55 | 
ext4.write("document.txt", "Hello, World!"); 56 | ext4.close("document.txt"); 57 | 58 | let xfs = XFS; 59 | xfs.open("document.txt"); 60 | xfs.read("document.txt"); 61 | xfs.write("document.txt", "Hello, World!"); 62 | xfs.close("document.txt"); 63 | } 64 | -------------------------------------------------------------------------------- /11ProcessService/fibonacci.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | 4 | def fibonacci(n): 5 | if n <= 1: 6 | return n 7 | else: 8 | return fibonacci(n-1) + fibonacci(n-2) 9 | 10 | if __name__ == "__main__": 11 | if len(sys.argv) != 2: 12 | print("Usage: python3 fibonacci.py [n]") 13 | sys.exit(1) 14 | 15 | n = int(sys.argv[1]) 16 | print(f"Calculating Fibonacci({n})") 17 | 18 | start_time = time.time() 19 | result = fibonacci(n) 20 | end_time = time.time() 21 | 22 | print(f"Result: {result}") 23 | print(f"Execution Time: {end_time - start_time} seconds") 24 | -------------------------------------------------------------------------------- /11ProcessService/infinite_loop.py: -------------------------------------------------------------------------------- 1 | import time 2 | i = 1 3 | while True: 4 | print(i) 5 | i += 1 6 | time.sleep(1) -------------------------------------------------------------------------------- /11ProcessService/signal_handling.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import time 4 | 5 | pid = os.fork() 6 | 7 | if pid > 0: 8 | print(f"Parent process (PID: {os.getpid()}), Child PID: {pid}") 9 | os.wait() 10 | else: 11 | print(f"Child process (PID: {os.getpid()}) running") 12 | time.sleep(100) 13 | print("Child process finishing") 14 | 15 | print("Process (PID: {}) completed".format(os.getpid())) -------------------------------------------------------------------------------- /11ProcessService/wait_script.py: -------------------------------------------------------------------------------- 1 | input("Press Enter to continue...") -------------------------------------------------------------------------------- /12EncryptionDecryption/00LabPrepCommunication.txt: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Command ifconfig allows you to learn your local IP address 5 | look for eth0: inet you should see IPv4 6 | 7 | 8 | Setup ftp server on a fly using Python inside folder in Linux 9 | python3 -m http.server 10 | 11 | 12 | Obtain a file from an ftp server 13 | wget http://IP_ADDRESS_OF_FTP_SERVER:8000/path/to/your/file 14 | 15 | Ok now is assignment 16 | On server side 17 | Create an textfile,containing a message you plan to encode 18 | 19 | 20 | Then run decode 21 | python3 encrypt.py 22 | 23 | def encrypt(plain_text, shift): 24 | encrypted_text = "" 25 | for char in plain_text: 26 | if char.isalpha(): 27 | shifted = ord(char) + shift 28 | if char.isupper(): 29 | encrypted_text += chr((shifted - 65) % 26 + 65) 30 | else: 31 | encrypted_text += chr((shifted - 97) % 26 + 97) 32 | else: 33 | encrypted_text += char 34 | return encrypted_text 35 | 36 | def main(): 37 | filename = input("Enter filename to encrypt: ") 38 | shift = int(input("Enter shift value: ")) 39 | 40 | try: 41 | with open(filename, 'r') as file: 42 | plaintext = file.read() 43 | ciphertext = encrypt(plaintext, shift) 44 | 45 | with open(filename + "_encoded", 'w') as file: 46 | file.write(ciphertext) 47 | print(f"Encoded file saved as {filename}_encoded") 48 | except 
FileNotFoundError: 49 | print("File not found. Please check the filename and try again.") 50 | 51 | if __name__ == "__main__": 52 | main() 53 | 54 | then start the HTTP file server (python3 -m http.server) in the folder where the encrypted file is located 55 | 56 | 57 | 58 | On the client side 59 | get the encrypted file with wget 60 | 61 | then run the decryption script 62 | python3 decrypt.py 63 | 64 | def decrypt(encrypted_text, shift): 65 | decrypted_text = "" 66 | for char in encrypted_text: 67 | if char.isalpha(): 68 | shifted = ord(char) - shift 69 | if char.isupper(): 70 | decrypted_text += chr((shifted - 65) % 26 + 65) 71 | else: 72 | decrypted_text += chr((shifted - 97) % 26 + 97) 73 | else: 74 | decrypted_text += char 75 | return decrypted_text 76 | 77 | def main(): 78 | filename = input("Enter text filename to decrypt: ") 79 | shift = int(input("Enter shift value: ")) 80 | 81 | try: 82 | with open(filename, 'r') as file: 83 | ciphertext = file.read() 84 | plaintext = decrypt(ciphertext, shift) 85 | 86 | with open(filename + "_decoded", 'w') as file: 87 | file.write(plaintext) 88 | print(f"Decoded file saved as {filename}_decoded") 89 | except FileNotFoundError: 90 | print("File not found. Please check the filename and try again.") 91 | 92 | if __name__ == "__main__": 93 | main() 94 | 95 | 96 | Usage Instructions: 97 | On the Server Side (encrypt.py): Run this script to encrypt a file. It will ask for the filename and the shift value. 98 | The encrypted file will be saved with _encoded added to the original filename. 99 | 100 | On the Client Side (decrypt.py): Run this script to decrypt a file. It will ask for the filename of the encrypted file and the shift value used for encryption. 101 | The decrypted file will be saved with _decoded added to the original filename. 102 | 103 | These scripts use a simple substitution cipher for encryption and decryption. 104 | Please note that this method is not secure for sensitive or critical data encryption in real-world scenarios. 105 | 106 | -------------------------------------------------------------------------------- /12EncryptionDecryption/01basicEncryptionDecription.txt: -------------------------------------------------------------------------------- 1 | Practical Lab: Basic Encryption and Decryption with Python 2 | Objective: 3 | Understand the fundamental concept of encryption and decryption. 4 | Implement a simple substitution cipher using Python. 5 | Requirements: 6 | Basic knowledge of Python programming. 7 | Python installed on your machine. 8 | Task: 9 | Create a Python program that: 10 | 11 | Takes user input as plaintext. 12 | Encrypts the plaintext into ciphertext using a substitution cipher. 13 | Decrypts the ciphertext back into the original plaintext. 14 | 15 | 16 | Steps: 17 | 18 | Step 1: Input Data 19 | Take a user input string that needs to be encrypted. 20 | 21 | plaintext = input("Enter text to encrypt: ") 22 | 23 | 24 | Step 2: Create a Substitution Cipher 25 | Develop a simple substitution cipher. Here we'll shift each letter by a fixed number of positions in the alphabet.
26 | 27 | def encrypt(plain_text, shift): 28 | encrypted_text = "" 29 | for char in plain_text: 30 | if char.isalpha(): # Check if the character is an alphabet 31 | shifted = ord(char) + shift # Shift the character 32 | # Check if uppercase or lowercase and rotate if it goes beyond A-Z or a-z 33 | if char.isupper(): 34 | encrypted_text += chr((shifted - 65) % 26 + 65) 35 | else: 36 | encrypted_text += chr((shifted - 97) % 26 + 97) 37 | else: 38 | encrypted_text += char # if it's not an alphabet, add it without encryption 39 | return encrypted_text 40 | 41 | 42 | Step 3: Encrypt User Input 43 | Encrypt the user input using the substitution cipher. 44 | 45 | shift_value = 3 # Shifting each letter by 3 positions 46 | ciphertext = encrypt(plaintext, shift_value) 47 | print("Encrypted Text: ", ciphertext) 48 | 49 | 50 | Step 4: Decryption Function 51 | Implement a function to decrypt the ciphertext back into plaintext by reversing the shift applied during the encryption. 52 | 53 | def decrypt(encrypted_text, shift): 54 | decrypted_text = "" 55 | for char in encrypted_text: 56 | if char.isalpha(): 57 | shifted = ord(char) - shift 58 | if char.isupper(): 59 | decrypted_text += chr((shifted - 65) % 26 + 65) 60 | else: 61 | decrypted_text += chr((shifted - 97) % 26 + 97) 62 | else: 63 | decrypted_text += char 64 | return decrypted_text 65 | 66 | 67 | 68 | Step 5: Decrypt Ciphertext 69 | Decrypt the ciphertext and validate the result by comparing it to the original user input. 70 | 71 | decrypted_text = decrypt(ciphertext, shift_value) 72 | print("Decrypted Text: ", decrypted_text) 73 | 74 | Test: 75 | Run the program and enter a plaintext input when prompted. 76 | Verify the encryption and decryption by checking the displayed ciphertext and decrypted text. 77 | Change the shift_value and observe how the ciphertext changes. 78 | 79 | Notes: 80 | This is a very basic form of encryption (Caesar Cipher) and not suitable for secure communications 81 | in real-world applications. 82 | Modern encryption uses complex algorithms and keys to secure data. 83 | Always employ established encryption libraries and methods for handling sensitive data. 84 | 85 | Wrap-up: 86 | This lab provides a foundational understanding of the encryption and decryption processes 87 | using Python programming. Ensure to delve deeper into more advanced cryptographic methods and 88 | libraries like cryptography in Python for a thorough comprehension and 89 | practical knowledge of data security. -------------------------------------------------------------------------------- /12EncryptionDecryption/02LabSymmetricOpenSSL_AES.txt: -------------------------------------------------------------------------------- 1 | Practical Lab: Basic File Encryption and Decryption on Linux 2 | 3 | Objective: 4 | Understand basic file encryption and decryption using OpenSSL on Linux. 5 | Securely manage sensitive information. 6 | 7 | Requirements: 8 | Basic knowledge of the Linux command line. 9 | A Linux environment (physical or virtual machine). 10 | OpenSSL installed on your system. 11 | 12 | Task: 13 | Encrypt a text file using OpenSSL. 14 | Decrypt the file and verify the content. 15 | 16 | 17 | Steps: 18 | Step 1: Prepare the Environment 19 | Open your Linux terminal. 20 | Ensure OpenSSL is installed by running openssl version. 21 | If not installed, you can install it using your package manager, 22 | for instance: sudo apt-get install openssl (for Debian/Ubuntu systems). 
23 | 24 | Step 2: Create a Text File 25 | Create a text file with some content to encrypt. 26 | 27 | echo "This is secret information" > secret.txt 28 | 29 | Step 3: Encrypt the File 30 | Encrypt the secret.txt file using OpenSSL with symmetric encryption (AES). 31 | 32 | openssl enc -aes-256-cbc -salt -in secret.txt -out secret.txt.enc 33 | 34 | During this step, OpenSSL will ask you to provide a password 35 | which will be used to generate a key for encryption. Keep the password in a secure place 36 | – if you lose it, the encrypted data cannot be recovered. 37 | 38 | Step 4: Verify Encryption 39 | Ensure that the content of secret.txt.enc is not human-readable. 40 | 41 | cat secret.txt.enc 42 | 43 | 44 | Step 5: Decrypt the File 45 | Decrypt the file back into its original format using OpenSSL. 46 | 47 | openssl enc -aes-256-cbc -d -in secret.txt.enc -out secret_decrypted.txt 48 | 49 | You’ll be prompted to enter the password used during the encryption process. 50 | 51 | Step 6: Validate Decryption 52 | Verify that secret_decrypted.txt and secret.txt have identical content. 53 | 54 | cat secret_decrypted.txt 55 | 56 | Notes: 57 | Make sure you securely manage the passwords/keys used during encryption. 58 | Always ensure you have backups of data and understand the encryption/decryption process 59 | before applying it to important files. 60 | 61 | This example uses symmetric-key encryption where the same password is used for 62 | both encryption and decryption. 63 | In real-world scenarios, asymmetric encryption or a hybrid approach often 64 | provides stronger security. -------------------------------------------------------------------------------- /12EncryptionDecryption/03LabAssymetricOpenSSL_RSA.txt: -------------------------------------------------------------------------------- 1 | 2 | Lab Scenario 3 | Alice wants to send Bob a confidential message. 4 | To ensure that only Bob can read the message, Alice will encrypt it using Bob's public key. 5 | Bob will then decrypt it using his private key. 6 | 7 | Prerequisites 8 | 9 | Basic knowledge of the Linux command line 10 | OpenSSL installed on Linux 11 | 12 | Step 1: Key Generation 13 | Bob generates an RSA key pair, consisting of a public and private key. 14 | 15 | # Generate private key 16 | openssl genpkey -algorithm RSA -out bob_private_key.pem 17 | 18 | # Extract public key 19 | openssl rsa -pubout -in bob_private_key.pem -out bob_public_key.pem 20 | 21 | Bob shares bob_public_key.pem with Alice but keeps bob_private_key.pem secret. 22 | 23 | Step 2: Encryption 24 | Alice encrypts her message using Bob's public key. 25 | 26 | First, Alice writes her message to a file. 27 | 28 | echo "Hello, Bob! This is a secret message from Alice." > alice_message.txt 29 | 30 | Next, Alice encrypts the message using Bob's public key. 31 | 32 | openssl rsautl -encrypt -pubin -inkey bob_public_key.pem -in alice_message.txt -out encrypted_message.bin 33 | Alice sends encrypted_message.bin to Bob. 34 | 35 | Step 3: Decryption 36 | Bob decrypts the received message using his private key. 37 | 38 | openssl rsautl -decrypt -inkey bob_private_key.pem -in encrypted_message.bin -out decrypted_message.txt 39 | Bob can now read the original message from Alice by viewing decrypted_message.txt. 40 | 41 | cat decrypted_message.txt 42 | 43 | Lab Notes 44 | 45 | The encryption and decryption steps are abstracted for educational clarity. 
46 | In real-world applications, asymmetric encryption is typically used to encrypt a symmetric key, 47 | which in turn is used to encrypt the actual message/data. 48 | 49 | This approach melds the security advantages of asymmetric encryption with 50 | the computational efficiency of symmetric encryption. 51 | 52 | Key management, storage, and exchange should be handled with utmost security in mind to prevent 53 | unauthorized access or leakage. 54 | 55 | For larger messages, consider using asymmetric encryption to encrypt a symmetric key and 56 | then using symmetric encryption for the actual message to optimize for computational efficiency. 57 | 58 | Conclusion 59 | Through this lab, you’ve observed the utility of asymmetric encryption 60 | in securing communications without necessitating the secure exchange of secret keys. 61 | Bob and Alice successfully communicated securely: Alice encrypted a message that only Bob could 62 | decrypt with his private key. This paradigm underpins numerous secure communication protocols 63 | on the internet and beyond. 64 | 65 | Feel free to explore variations of this lab to deepen your understanding of 66 | encryption and decryption processes! -------------------------------------------------------------------------------- /12EncryptionDecryption/04LabAssymetric+SymmetricLevelUp.txt: -------------------------------------------------------------------------------- 1 | Lab: Utilizing Asymmetric Encryption to Secure Symmetric Keys 2 | Objective: 3 | Understand the usage of asymmetric encryption in securing symmetric key transmission, 4 | combining security with computational efficiency. 5 | 6 | Tools Required: 7 | 8 | OpenSSL on Linux 9 | 10 | Tasks: 11 | Generate Asymmetric Key Pair 12 | Generate Private Key: 13 | 14 | openssl genpkey -algorithm RSA -out private_key.pem 15 | 16 | Generate Public Key: 17 | 18 | openssl rsa -pubout -in private_key.pem -out public_key.pem 19 | 20 | 21 | Generate and Encrypt Symmetric Key 22 | 23 | Generate a Symmetric Key: 24 | 25 | openssl rand -base64 32 > symmetric_key.key 26 | 27 | Encrypt the Symmetric Key with the Public Key: 28 | 29 | openssl rsautl -encrypt -inkey public_key.pem -pubin -in symmetric_key.key -out encrypted_symmetric_key.key 30 | 31 | Note: Only someone with the private key can decrypt the symmetric key. 32 | 33 | Encrypt Message using Symmetric Key 34 | 35 | echo "This is a confidential message" > message.txt 36 | 37 | Encrypt the Message using Symmetric Key: 38 | 39 | openssl enc -aes-256-cbc -salt -in message.txt -out message.enc -pass file:symmetric_key.key 40 | 41 | Share message.enc and encrypted_symmetric_key.key 42 | 43 | Decrypt Symmetric Key using Asymmetric Encryption 44 | 45 | Decrypt Symmetric Key: 46 | 47 | openssl rsautl -decrypt -inkey private_key.pem -in encrypted_symmetric_key.key -out decrypted_symmetric_key.key 48 | 49 | Decrypt Message using Decrypted Symmetric Key 50 | 51 | Decrypt the Message: 52 | 53 | openssl enc -d -aes-256-cbc -in message.enc -out decrypted_message.txt -pass file:decrypted_symmetric_key.key 54 | 55 | Check if decrypted_message.txt matches the original message: 56 | 57 | cat decrypted_message.txt 58 | 59 | Notes: 60 | 61 | Asymmetric Encryption Usage: For securing the transmission of the symmetric key. 62 | Symmetric Encryption Usage: For efficiently encrypting the actual message. 63 | 64 | Explanation: 65 | Asymmetric Key Pair: Private key remains confidential, while the public key can be shared. 
66 | 67 | Symmetric Key: Provides computational efficiency in encrypting and decrypting messages, 68 | especially for larger data. 69 | 70 | Combining Asymmetric and Symmetric: The symmetric key is secured with asymmetric encryption for transmission, 71 | thus providing a balance between security and efficiency. 72 | 73 | This lab illustrates a practical usage of asymmetric and symmetric encryption in ensuring secure and 74 | computationally efficient communication. It's pivotal to note that in real-world applications, 75 | additional considerations regarding key management, integrity checks, and secure channels are vital and 76 | should be implemented in accordance with best practices. -------------------------------------------------------------------------------- /12EncryptionDecryption/OpenSSl + AES256-CBC-SALT-RSA: -------------------------------------------------------------------------------- 1 | OpenSSL 2 | OpenSSL is a robust, full-featured open-source toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, 3 | along with a full-strength general-purpose cryptography library. 4 | It's widely used for securing communications over computer networks and for various cryptographic operations. 5 | 6 | 7 | AES-256-CBC: 8 | 9 | AES: Advanced Encryption Standard, a symmetric encryption algorithm widely used across the globe. 10 | 256: Refers to the key size – 256 bits, offering a high level of security. 11 | CBC: Cipher Block Chaining, a mode of operation for block ciphers. 12 | It means each block of plaintext is XORed with the previous ciphertext block before being encrypted. 13 | This way, each ciphertext block depends on all plaintext blocks processed up to that point, enhancing security. 14 | 15 | 16 | Salt in Encryption: 17 | A "salt" is a random data that is used as an additional input to a one-way function that hashes data, a password, or passphrase. 18 | Salts are used to safeguard passwords in storage. In the context of OpenSSL's encryption, it adds an extra layer of security. 19 | When you encrypt a file with OpenSSL using a password, the salt ensures that even if two files are encrypted with the same password, 20 | the encrypted files will be different, due to the use of different salt values. 21 | 22 | 23 | RSA (Rivest-Shamir-Adleman) 24 | RSA is one of the first public-key cryptosystems and is widely used for secure data transmission. 25 | It's named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman, 26 | who first publicly described it in 1977. 27 | 28 | 29 | Here's a brief overview of how RSA works: 30 | Key Generation: 31 | 32 | RSA involves a pair of keys: a public key and a private key. 33 | The public key can be shared with anyone and is used for encrypting data. 34 | The private key is kept secret and is used for decrypting data. 35 | -------------------------------------------------------------------------------- /12EncryptionDecryption/theory.txt: -------------------------------------------------------------------------------- 1 | Understanding Encryption 2 | 3 | Encryption plays a pivotal role in safeguarding data and ensuring its confidentiality 4 | across various digital platforms and communications. Let's delve into a deeper understanding: 5 | 6 | Core Concept of Encryption 7 | 8 | Basic Definition: Encryption is the technological process through which data, 9 | referred to as plaintext, is converted into an unreadable format, known as ciphertext. 
10 | 11 | Primary Objective: The fundamental aim is to protect the data’s confidentiality, 12 | making it inaccessible and indecipherable to unauthorized entities. 13 | 14 | Key Elements in Encryption 15 | 16 | Plaintext: The original, readable information that needs to be protected. 17 | 18 | Ciphertext: The unreadable, encrypted data formed post the application of an encryption 19 | algorithm to the plaintext. 20 | 21 | Encryption Key: A secret key used within the encryption algorithm to determine 22 | the exact transformation of plaintext into ciphertext. 23 | 24 | Encryption Process 25 | Step 1: Begin with the plaintext - the initial, easily readable information. 26 | Step 2: Utilize an encryption algorithm and a key to transform plaintext into ciphertext. 27 | Step 3: The ciphertext, being uninterpretable, protects the original data from unauthorized access. 28 | 29 | 30 | Decryption 31 | 32 | Counterpart to Encryption: 33 | Decryption is the process of converting the ciphertext back into its 34 | original plaintext form using a decryption key. 35 | 36 | The Distinction: Encryption vs. Encoding 37 | 38 | Encryption: 39 | Purpose: Preserving confidentiality. 40 | Key Use: A secret key is imperative for the decryption of the data. 41 | Accessibility: Only authorized parties, who possess the decryption key, 42 | can revert the data to its original form. 43 | 44 | 45 | Encoding: 46 | Purpose: Ensuring data integrity and interoperability, not confidentiality. 47 | Algorithm Accessibility: The algorithm, or method used for encoding, is publicly accessible. 48 | Decoding: Data can be returned to its original format by anyone who has access 49 | to the decoding algorithm, without needing a specific key. 50 | 51 | Significance of Encryption in Cybersecurity 52 | Data Protection: Safeguards sensitive data during storage and transmission, 53 | making unauthorized access unfruitful due to the unreadability of the content. 54 | 55 | Secure Communication: Ensures that transmitted messages across networks remain 56 | confidential and unaltered, thereby establishing a secure communication channel. 57 | 58 | Authentication: By utilizing keys, encryption also helps in verifying the 59 | legitimacy of parties involved in digital communication. 60 | 61 | Various Forms of Encryption 62 | 63 | Symmetric Encryption: Uses the same key for both encryption and decryption processes. 64 | 65 | Asymmetric Encryption: Utilizes two keys - a public key for encryption and a 66 | private key for decryption. 67 | 68 | Concluding Note: 69 | Encryption, by transforming readable data into an unintelligible format, 70 | acts as a crucial defense mechanism against unauthorized data access and leakage, 71 | reinforcing the confidentiality and security of digital communications and stored information. 72 | This mechanism is imperative in modern digital communication and data storage, 73 | ensuring a secure medium for data transmission and safeguarding sensitive information 74 | from potential cyber threats. 
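To connect the symmetric-encryption idea above to working code, here is a minimal Python sketch using the third-party cryptography package (install it with pip install cryptography); the message is just an example:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret: the same key encrypts and decrypts
cipher = Fernet(key)

token = cipher.encrypt(b"meet me at noon")   # bytes in, opaque ciphertext token out
plaintext = cipher.decrypt(token)

print(token)       # unreadable without the key
print(plaintext)   # b'meet me at noon'

Unlike the Caesar-cipher labs in this section, Fernet layers authenticated, randomized encryption on top of AES, which is why it is suitable for real data while the substitution cipher is not.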
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# linuxprogramming

# Instructions

**02 Navigation -> Ready**

**03 Basic Text Manipulation, Pipes, Redirection, File Descriptors Concept -> Ready**

**04 User-Group Management + Permission Access -> Ready**

**05 Everything Is a File -> Ready**

**06 Compiling and Linking -> Ready**

**07 Software Package Management -> Ready**

**08 Advanced Text Processing -> Ready**

**09 Linux Architecture and Filesystem Hierarchy Standard -> Ready**

**10 File Systems Basics -> Ready**

**11 Processes and Services -> Ready**

**12 Encryption and Decryption -> Ready**

**13 Hashing -> Ready**

**14 GIT -> Ready -> https://github.com/alfazick/gitpractice**

**15 LLM Practice -> Ready -> https://github.com/alfazick/llm-locally**

--------------------------------------------------------------------------------
/SecureCopyExercise.txt:
--------------------------------------------------------------------------------
=> Practical Guide to Secure File Transfers

Access Credentials: Prepare the server's address,
the SSH port (in this example, we're using port 5031), and a user account with
the required permissions.


Part 1: Server-to-Local File Transfer
Step 1: Connect to the Server
Initiate an SSH connection to your remote server:

ssh -p 5031 student@xxxx.eastus.cloudapp.azure.com

Step 2: Create a Test File on the Server
Once connected, create a sample file to use for the transfer:

echo "Hello from the cloud!" > message.txt

To verify the file's content, use:

cat message.txt

Disconnect from the server after this step.

Step 3: Securely Transfer the File to the Local Machine
On your local machine, run:

scp -P 5031 student@xxxx.eastus.cloudapp.azure.com:message.txt /local/directory/path

Note: The -P flag specifies the port number used for the connection,
which is essential when the remote server uses a port other than the default SSH port.
The colon (:) is required to denote the path to the file or directory on the remote server.

Part 2: Local-to-Server File Transfer

Step 1: Update the File Locally
Edit the received file to include additional text:

echo "Acknowledged from the local machine." >> /local/directory/path/message.txt


Step 2: Transfer the Updated File Back to the Server

Use SCP to send the modified file back to the remote server:

scp -P 5031 /local/directory/path/message.txt student@xxxx.eastus.cloudapp.azure.com:/remote/directory/path


Part 3: Verification
Step 1: SSH into the Remote Server
Reconnect to the server using SSH:

ssh -p 5031 student@xxxx.eastus.cloudapp.azure.com


Step 2: Verify the File Content
Check the content of the transferred file:

cat /remote/directory/path/message.txt

You should see the updated text in the file, confirming a successful transfer.
An optional extension with directory copies and checksums follows below.
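
Optional extension - a small sketch, not part of the original exercise (the "reports"
directory and the paths are placeholders): scp can also copy whole directories, and comparing
checksums on both ends is a quick way to confirm a transfer was not corrupted.

# Copy an entire local directory to the server (recursive)
scp -r -P 5031 ./reports student@xxxx.eastus.cloudapp.azure.com:/remote/directory/path

# Compare checksums on both ends; matching output means identical content
sha256sum /local/directory/path/message.txt
ssh -p 5031 student@xxxx.eastus.cloudapp.azure.com "sha256sum /remote/directory/path/message.txt"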
--------------------------------------------------------------------------------
/Setup-Instructor-VM Labs on Azure.md:
--------------------------------------------------------------------------------

# Azure Lab Services Setup Guide

This guide outlines the process to set up Azure Lab Services with Linux VMs.

## Prerequisites

- **Azure Subscription**: If you don't have one, [sign up for Azure](https://azure.com/free).

## Steps

### 1. Create a Lab Plan

- Navigate to the Azure portal.
- Click on **Create a resource**.
- Search for "Lab Services" and select **Lab Plan**.
- Click **Create**.
- Fill out the necessary fields: Subscription, Resource Group, and a name for the Lab Plan.

### 2. Create a Lab Account

- Within the created Lab Plan, select **+ Add**.
- Provide a name for the Lab Account and fill in any other required details.

### 3. Create a Lab

- Inside the Lab Account, find and select the option to create a new lab.
- Define the VM type, software configurations, and other specifics for this lab.

### 4. Template VM Configuration

- Start the template VM from within the lab.
- Once active, connect to the VM. For Linux VMs, use SSH.
- Install and configure any required software or tools.
- After setting up, shut down the VM.

### 5. Publish the Lab

- In the lab settings, choose **Publish**.
- Define the number of VM instances and the level of user access.
- Share the registration link with users for them to access the VMs.

### 6. Accessing the Lab VMs

- Users can register using the provided link.
- After registration, they can start, stop, and connect to the VMs as per the permissions you've set.

### 7. Monitoring and Management

- Monitor VM usage, active VMs, and user activity.
- Adjust user access, start/stop VMs, or reset environments as necessary.

### 8. Clean-Up

- Delete labs or Lab Plans that are no longer required to prevent unnecessary charges.

## Notes

Always refer to the [official Azure documentation](https://docs.microsoft.com/azure) for the most up-to-date processes and best practices.

--------------------------------------------------------------------------------