├── Backtracking.md
├── Boyer Moore Voting.md
├── Bubble Sort.md
├── Bucket Sort.md
├── Circular Sort.md
├── Comparator function.md
├── Counting Sort.md
├── Dutch National Flag algorithm.md
├── Floyd's Tortoise and Hare (Cycle Detection) Algorithm.md
├── Heap Sort.md
├── Insertion Sort.md
├── KNP Algorithm.md
├── Kadane's Algorithm.md
├── Merge Sort.md
├── Morris Traversal.md
├── Quick Sort.md
├── README.md
├── Radix Sort.md
├── Selection Sort.md
├── Shell Sort.md
├── Sieve Of Eratosthenes.md
└── Sorting Algorithm.md

--------------------------------------------------------------------------------
/Backtracking.md:
--------------------------------------------------------------------------------

# Backtracking Algorithm Guide

## Introduction
Backtracking is a recursive algorithm used for solving problems by trying all possible solutions and eliminating those that fail to satisfy constraints. It follows a **depth-first search (DFS)** approach and is commonly used for combinatorial problems.

## How Backtracking Works
1. **Make a choice** – Add a candidate to the current solution.
2. **Check if it's valid** – If the solution is complete and meets constraints, store it.
3. **Recur** – Explore further by making additional choices.
4. **Backtrack** – If the choice leads to an invalid solution, undo the last step and try another option.

## General Pseudocode
```plaintext
Function Backtrack(currentState, result):
    If currentState is a complete solution:
        Store currentState in result
        Return

    For each possible choice:
        If choice is valid:
            Make choice
            Backtrack(currentState, result)
            Undo choice (backtrack)
```

---

# Backtracking Examples

## 1. **Subset Generation**
**Problem:** Generate all subsets of a given set `[1, 2, 3]`.
### **Pseudocode**
```plaintext
Function GenerateSubsets(index, currentSubset, nums, result):
    If index == size of nums:
        Store currentSubset in result
        Return

    // Include the current element
    Add nums[index] to currentSubset
    GenerateSubsets(index + 1, currentSubset, nums, result)
    Remove last element from currentSubset  // Backtrack

    // Exclude the current element
    GenerateSubsets(index + 1, currentSubset, nums, result)
```

### **C++ Implementation**
```cpp
#include <iostream>
#include <vector>
using namespace std;

void generateSubsets(int index, vector<int>& subset, vector<int>& nums, vector<vector<int>>& result) {
    if (index == (int)nums.size()) {
        result.push_back(subset);
        return;
    }

    // Include element
    subset.push_back(nums[index]);
    generateSubsets(index + 1, subset, nums, result);
    subset.pop_back();  // Backtrack

    // Exclude element
    generateSubsets(index + 1, subset, nums, result);
}

int main() {
    vector<int> nums = {1, 2, 3};
    vector<vector<int>> result;
    vector<int> subset;
    generateSubsets(0, subset, nums, result);

    for (auto& subset : result) {
        for (int num : subset) cout << num << " ";
        cout << endl;
    }
    return 0;
}
```

---

## 2. **N-Queens Problem**
**Problem:** Place `N` queens on an `N×N` chessboard so that no two queens attack each other.
### **Pseudocode**
```plaintext
Function SolveNQueens(board, row, N, result):
    If row == N:
        Store board in result
        Return

    For each column in 0 to N-1:
        If placing queen at (row, col) is valid:
            Place queen at (row, col)
            SolveNQueens(board, row + 1, N, result)
            Remove queen from (row, col)  // Backtrack
```

### **C++ Implementation**
```cpp
#include <iostream>
#include <vector>
#include <string>
using namespace std;

bool isSafe(vector<string>& board, int row, int col, int N) {
    // Check the column above
    for (int i = 0; i < row; i++)
        if (board[i][col] == 'Q') return false;

    // Check the upper-left diagonal
    for (int i = row, j = col; i >= 0 && j >= 0; i--, j--)
        if (board[i][j] == 'Q') return false;

    // Check the upper-right diagonal
    for (int i = row, j = col; i >= 0 && j < N; i--, j++)
        if (board[i][j] == 'Q') return false;

    return true;
}

void solveNQueens(int row, vector<string>& board, int N, vector<vector<string>>& result) {
    if (row == N) {
        result.push_back(board);
        return;
    }

    for (int col = 0; col < N; col++) {
        if (isSafe(board, row, col, N)) {
            board[row][col] = 'Q';
            solveNQueens(row + 1, board, N, result);
            board[row][col] = '.';  // Backtrack
        }
    }
}

int main() {
    int N = 4;
    vector<vector<string>> result;
    vector<string> board(N, string(N, '.'));

    solveNQueens(0, board, N, result);

    for (auto& sol : result) {
        for (auto& row : sol) cout << row << endl;
        cout << endl;
    }
    return 0;
}
```

---

## Complexity Analysis
- **Time Complexity:** Backtracking algorithms often have exponential time complexity `O(2^N)`, `O(N!)`, or `O(B^D)` depending on the branching factor and depth of recursion.
- **Space Complexity:** `O(N)` for the recursion stack, plus `O(N^2)` for problems like N-Queens that store the board state.

## When to Use Backtracking?
✅ Problems with **multiple solutions** where all possibilities must be explored (e.g., N-Queens, Sudoku).
✅ **Constraint satisfaction problems** where invalid solutions must be pruned early.
✅ **Combinatorial problems** requiring **all possible arrangements**, subsets, or sequences.

## Conclusion
Backtracking is a powerful technique for solving complex problems by exploring all possibilities and eliminating infeasible paths. By understanding its structure and applying optimizations (e.g., pruning, memoization), we can make it more efficient.

**Happy Coding! 🚀**

--------------------------------------------------------------------------------
/Boyer Moore Voting.md:
--------------------------------------------------------------------------------

# **Boyer-Moore Voting Algorithm**

## **Introduction**
The **Boyer-Moore Voting Algorithm** is an efficient algorithm used to find the **majority element** in a sequence. A majority element is an element that appears **more than ⌊n/2⌋ times** in an array of size `n`.

This algorithm runs in **O(n) time** and uses **O(1) space**, making it highly efficient for large datasets.

---

## **Intuition & Working**
### **Key Idea**
The algorithm works by maintaining a **candidate** for the majority element and a **counter** that helps verify the candidate.

1. **Candidate Selection Phase:**
   - Traverse the array and maintain a **count** of occurrences of the current candidate.
   - If the count becomes `0`, select the current element as the new **candidate**.

2. **Candidate Verification Phase (Optional):**
   - Since the first pass does not guarantee correctness in cases where no majority element exists, a second pass may be required to verify the count.

---

## **Algorithm Steps**
1. Initialize two variables:
   - `candidate` (stores the potential majority element)
   - `count` (tracks the candidate's frequency)
2. Iterate through the array:
   - If `count == 0`, set the **current element as candidate**.
   - If the current element is the **same as the candidate**, increment `count`.
   - Otherwise, decrement `count`.
3. (Optional) Verify if the candidate appears more than `⌊n/2⌋` times in a second pass.

---

## **Code Implementation**
### **C++ Implementation**
```cpp
#include <iostream>
#include <vector>
using namespace std;

class Solution {
public:
    int majorityElement(vector<int>& nums) {
        int candidate = 0, count = 0;

        // Phase 1: Find a candidate
        for (int num : nums) {
            if (count == 0) {
                candidate = num;
            }
            count += (num == candidate) ? 1 : -1;
        }

        // Phase 2: Verify the candidate (Optional)
        int freq = 0;
        for (int num : nums) {
            if (num == candidate) freq++;
        }

        return (freq > (int)nums.size() / 2) ? candidate : -1;  // -1 means no majority element
    }
};

int main() {
    Solution sol;
    vector<int> nums = {2, 2, 1, 1, 1, 2, 2};
    cout << "Majority Element: " << sol.majorityElement(nums) << endl;  // Output: 2
    return 0;
}
```

---

## **Time & Space Complexity**
- **Time Complexity:** `O(n)`, since we traverse the array once (or twice with the verification step).
- **Space Complexity:** `O(1)`, as we use only two variables.

---

## **Applications**
1. **Voting systems** – Finding the most voted candidate.
2. **Data stream processing** – Identifying dominant trends.
3. **Fraud detection** – Spotting frequently occurring transactions.
4. **Stock market analysis** – Recognizing patterns in stock price movements.
---

## **Advantages & Limitations**
### ✅ **Advantages**
- Runs in **linear time O(n)**.
- Uses **constant space O(1)**.
- Simple and elegant approach.

### ❌ **Limitations**
- Only works if a **majority element exists**.
- If no element appears more than `⌊n/2⌋` times, the result may be incorrect without verification.
- Requires a second pass for strict correctness in generic cases.

---

## **Example Walkthrough**
### **Example 1**
#### **Input:**
```cpp
nums = [3, 3, 4, 2, 3, 3, 3, 2, 3]
```
#### **Processing:**
1. `candidate = 3, count = 1`
2. `candidate = 3, count = 2`
3. `candidate = 3, count = 1` (decremented)
4. `candidate = 3, count = 0`
5. `candidate = 3 (new), count = 1`
6. `candidate = 3, count = 2`
7. `candidate = 3, count = 3`
8. `candidate = 3, count = 2` (decremented)
9. `candidate = 3, count = 3`

#### **Output:**
```cpp
Majority Element: 3
```

---

## **Final Thoughts**
- The **Boyer-Moore Voting Algorithm** is an optimal way to find the majority element in linear time with constant space.
- However, if a majority element is not guaranteed to exist, **verification is necessary**.
- It is widely used in **voting systems, large-scale data analysis, and fraud detection**.

**🚀 If you found this helpful, don't forget to ⭐ the repo!**

--------------------------------------------------------------------------------
/Bubble Sort.md:
--------------------------------------------------------------------------------

# Bubble Sort Algorithm

## Definition
Bubble Sort is a simple sorting algorithm that repeatedly swaps adjacent elements if they are in the wrong order. It is named for the way smaller elements "bubble" to the top of the array.
---

## Pseudocode
```plaintext
BubbleSort(arr, n):
    for i from 0 to n-2:
        for j from 0 to n-i-2:
            if arr[j] > arr[j+1]:
                swap(arr[j], arr[j+1])
```

---

## C++ Implementation
```cpp
#include <iostream>
#include <utility>
using namespace std;

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                swap(arr[j], arr[j + 1]);
            }
        }
    }
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);

    cout << "Original Array: ";
    printArray(arr, n);

    bubbleSort(arr, n);

    cout << "Sorted Array: ";
    printArray(arr, n);
    return 0;
}
```

---

## Step-by-Step Explanation
Let's take an example array:
```plaintext
arr[] = {64, 34, 25, 12, 22, 11, 90}
```
### **Pass 1:**
```plaintext
(64 34) 25 12 22 11 90 → Swap → (34 64) 25 12 22 11 90
34 (64 25) 12 22 11 90 → Swap → 34 (25 64) 12 22 11 90
34 25 (64 12) 22 11 90 → Swap → 34 25 (12 64) 22 11 90
34 25 12 (64 22) 11 90 → Swap → 34 25 12 (22 64) 11 90
34 25 12 22 (64 11) 90 → Swap → 34 25 12 22 (11 64) 90
34 25 12 22 11 (64 90) → No swap
```
### **Pass 2:**
```plaintext
(34 25) 12 22 11 64 90 → Swap → (25 34) 12 22 11 64 90
25 (34 12) 22 11 64 90 → Swap → 25 (12 34) 22 11 64 90
25 12 (34 22) 11 64 90 → Swap → 25 12 (22 34) 11 64 90
25 12 22 (34 11) 64 90 → Swap → 25 12 22 (11 34) 64 90
```
Repeating this process, we get:
```plaintext
Final Sorted Array: {11, 12, 22, 25, 34, 64, 90}
```

---

## Time and Space Complexity
| Case | Time Complexity |
|------------|----------------|
| Best Case | O(n) (already sorted, with the early-exit optimization) |
| Worst Case | O(n²) (reversed order) |
| Average Case | O(n²) |
| Space Complexity | O(1) (in-place sort) |

Note that the O(n) best case requires the early-exit check (stop when a pass performs no swaps); the basic implementation shown above always runs in O(n²).

---

## Applications and Uses
1. **Educational Purposes** - Used to teach sorting concepts due to its simplicity.
2. **Small Data Sets** - Works well for small arrays where performance is not a concern.
3. **Step-by-Step Visualization** - Used in animations and debugging for learning sorting concepts.
4. **Detecting Sorted Data** - If no swaps occur in a pass, the array is already sorted.

---

## Specific Problems Where Bubble Sort is Useful
1. **Detecting Nearly Sorted Data** - If the input array is almost sorted, the early-exit variant of Bubble Sort finishes in O(n) time.
2. **Sorting Small Lists** - In small datasets, where simplicity matters more than efficiency, Bubble Sort can be useful.
3. **Sorting Student Roll Numbers** - When working with small numbers of records in an educational system.
4. **Checking Array Sortedness** - Bubble Sort can be used to verify whether an array is sorted by checking for swaps in one pass.
5. **Bubble Down Effect in Simulations** - Used in some simulations where elements need to settle in order, like particle sorting.

---

## Conclusion
Bubble Sort is easy to implement but inefficient for large datasets due to its O(n²) complexity. It is useful in teaching, debugging, and handling small datasets efficiently.

--------------------------------------------------------------------------------
/Bucket Sort.md:
--------------------------------------------------------------------------------

# Bucket Sort Algorithm

## Definition
Bucket Sort is a sorting algorithm that divides elements into multiple buckets and sorts each bucket individually using another sorting technique, such as Insertion Sort.
---

## Pseudocode
```plaintext
BucketSort(arr, n):
    Create empty buckets
    Distribute elements into buckets based on range
    Sort each bucket using insertion sort
    Merge sorted buckets back into the array
```

---

## C++ Implementation
```cpp
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

void bucketSort(float arr[], int n) {
    vector<vector<float>> buckets(n);

    for (int i = 0; i < n; i++) {
        int bucketIndex = n * arr[i];  // assumes values in [0, 1)
        buckets[bucketIndex].push_back(arr[i]);
    }

    for (int i = 0; i < n; i++)
        sort(buckets[i].begin(), buckets[i].end());

    int index = 0;
    for (int i = 0; i < n; i++)
        for (float num : buckets[i])
            arr[index++] = num;
}

void printArray(float arr[], int size) {
    for (int i = 0; i < size; i++)
        cout << arr[i] << " ";
    cout << endl;
}

int main() {
    float arr[] = {0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51};
    int n = sizeof(arr) / sizeof(arr[0]);
    bucketSort(arr, n);
    cout << "Sorted array: ";
    printArray(arr, n);
    return 0;
}
```

---

## Step-by-Step Explanation
### Given array:
```plaintext
arr[] = {0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51}
```
1. Create `n = 7` empty buckets.
2. Distribute elements into buckets using `index = ⌊n × value⌋`:
   - `0.42 → bucket[2]` (⌊7 × 0.42⌋ = 2)
   - `0.32 → bucket[2]`
   - `0.33 → bucket[2]`
   - `0.52 → bucket[3]`
   - `0.37 → bucket[2]`
   - `0.47 → bucket[3]`
   - `0.51 → bucket[3]`
3. Sort individual buckets.
4. Merge all sorted buckets into the final array.
Final sorted array: `{0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52}`

---

## Time and Space Complexity
| Case | Time Complexity |
|------------|----------------|
| Best Case | O(n + k) |
| Worst Case | O(n²) |
| Average Case | O(n + k) |
| Space Complexity | O(n) |

where `k` is the number of buckets.

---

## Applications and Uses
1. **Sorting Floating-Point Numbers** - Best suited for numbers uniformly distributed over a range.
2. **Histogram Sorting** - Efficient when sorting elements within a fixed range.
3. **Data Clustering** - Used for clustering large datasets before further analysis.

---

## Specific Problems Where Bucket Sort is Useful
1. **Sorting Fractions or Floating-Point Numbers** - Sorting decimal values efficiently.
2. **Visualizing Data for Machine Learning** - Sorting datasets before further processing.
3. **Sorting in Computational Geometry** - Used in nearest neighbor problems and convex hull algorithms.

---

## Conclusion
Bucket Sort is a powerful algorithm when working with uniformly distributed data, but its efficiency depends on the proper selection of bucket sizes and the sorting technique used inside the buckets.

--------------------------------------------------------------------------------
/Circular Sort.md:
--------------------------------------------------------------------------------

# Cycle Sort (Circular Sort)

## Introduction
Cycle Sort is an in-place, non-comparative sorting algorithm that minimizes the number of writes to the array. It is optimal when writes are costly, such as in EEPROMs or flash memory, where the number of writes affects the lifespan of the memory.
### **Key Features:**
- In-place sorting (O(1) extra space)
- Runs in O(n²) time complexity in the worst case
- Minimizes the number of swaps (optimal for cases where write operations are costly)

---

## **Algorithm Explanation**
Cycle Sort works by identifying **cycles** in the permutation and rotating them into the correct position.

1. **Start from the first element** and determine where it should be in a sorted array.
2. **Place the element at its correct position** by swapping.
3. **Continue shifting displaced elements until the cycle completes.**
4. **Repeat the process for the remaining elements.**

---

## **Pseudocode**
```plaintext
Function cycleSort(arr, n):
    for cycle_start from 0 to n-2:
        item = arr[cycle_start]
        pos = cycle_start

        // Find the correct position of the item
        for i from cycle_start+1 to n-1:
            if arr[i] < item:
                pos += 1

        // If item is already in the correct position, continue
        if pos == cycle_start:
            continue

        // Skip duplicate elements
        while item == arr[pos]:
            pos += 1

        // Swap the item into its correct position
        swap(item, arr[pos])

        // Rotate the rest of the cycle
        while pos != cycle_start:
            pos = cycle_start
            for i from cycle_start+1 to n-1:
                if arr[i] < item:
                    pos += 1

            while item == arr[pos]:
                pos += 1

            swap(item, arr[pos])
```

---

## **C++ Implementation**
```cpp
#include <iostream>
#include <vector>
#include <utility>
using namespace std;

void cycleSort(vector<int>& arr, int n) {
    for (int cycle_start = 0; cycle_start < n - 1; cycle_start++) {
        int item = arr[cycle_start];
        int pos = cycle_start;

        // Find the position where we put the current item
        for (int i = cycle_start + 1; i < n; i++) {
            if (arr[i] < item) pos++;
        }

        // If item is already in the correct position, continue
        if (pos == cycle_start) continue;

        // Skip duplicate elements
        while (item == arr[pos]) pos++;
        swap(item, arr[pos]);

        // Rotate the rest of the cycle
        while (pos != cycle_start) {
            pos = cycle_start;
            for (int i = cycle_start + 1; i < n; i++) {
                if (arr[i] < item) pos++;
            }

            while (item == arr[pos]) pos++;
            swap(item, arr[pos]);
        }
    }
}

int main() {
    vector<int> arr = {4, 3, 2, 1, 5};
    int n = arr.size();

    cycleSort(arr, n);

    cout << "Sorted array: ";
    for (int num : arr) {
        cout << num << " ";
    }
    cout << endl;
    return 0;
}
```

---

## **Complexity Analysis**
- **Best Case:** `O(n²)` (even on sorted input, each element's position is found by scanning the rest of the array)
- **Average Case:** `O(n²)`
- **Worst Case:** `O(n²)`
- **Space Complexity:** `O(1)` (in-place sorting)
- **Number of Writes:** `O(n)` (optimal when minimizing writes)

---

## **Edge Cases**
✅ Already sorted array (no swaps needed)
✅ Reverse sorted array (max swaps needed)
✅ Array with duplicate elements
✅ Array with all elements the same (no swaps needed)

---

## **Conclusion**
Cycle Sort is efficient when minimizing write operations is crucial, but it is not the best general-purpose sorting algorithm due to its `O(n²)` time complexity. However, it remains an important sorting technique in scenarios where write operations are expensive, such as EEPROM memory devices.

🚀 **Happy Coding!**

--------------------------------------------------------------------------------
/Comparator function.md:
--------------------------------------------------------------------------------

# Comparator Function in C++

## 📌 Introduction
A **Comparator Function** in C++ is a function used to define custom sorting orders.
It is commonly used with:
- `std::sort()`
- `std::priority_queue`
- `std::set` & `std::map`
- Custom data structures

A comparator function should return:
- **`true`** → if the first element should come **before** the second.
- **`false`** → if the first element should come **after** the second.

---

## 🔹 Syntax
A comparator function takes two elements as arguments and returns a boolean.

```cpp
bool compare(int a, int b) {
    return a < b;  // Ascending order
}
```

---

## 🔷 1. Sorting with `sort()`
The `sort()` function in C++ allows a custom comparator to define the sorting order.

### **🟢 Example 1: Sorting in Descending Order**
```cpp
#include <bits/stdc++.h>
using namespace std;

bool compare(int a, int b) {
    return a > b;  // Descending order
}

int main() {
    vector<int> arr = {5, 2, 9, 1, 5, 6};

    sort(arr.begin(), arr.end(), compare);

    for (int x : arr) cout << x << " ";  // Output: 9 6 5 5 2 1
}
```

---

### **🔷 2. Sorting a Vector of Pairs**
Sorting a vector of pairs based on:
1. First element in **ascending** order.
2. If first elements are equal, sort the second element in **descending** order.

```cpp
#include <bits/stdc++.h>
using namespace std;

bool compare(pair<int, int> a, pair<int, int> b) {
    if (a.first == b.first)
        return a.second > b.second;  // Sort second value in descending order
    return a.first < b.first;        // Sort first value in ascending order
}

int main() {
    vector<pair<int, int>> v = {{3, 4}, {2, 3}, {3, 2}, {1, 5}};

    sort(v.begin(), v.end(), compare);

    for (auto p : v) cout << "(" << p.first << "," << p.second << ") ";
    // Output: (1,5) (2,3) (3,4) (3,2)
}
```

---

## 🔷 3. Using Comparator in `priority_queue`
### **🟢 Min Heap (Smallest element on top)**
```cpp
priority_queue<int, vector<int>, greater<int>> minHeap;
```

### **🔴 Max Heap with Custom Comparator**
```cpp
struct Compare {
    bool operator()(int a, int b) {
        return a < b;  // Max heap (reverse order)
    }
};

priority_queue<int, vector<int>, Compare> maxHeap;
```

---

## 🔷 4. Sorting Custom Structures
Comparators can be used for sorting custom data types.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Student {
    string name;
    int marks;
};

bool compare(Student a, Student b) {
    return a.marks > b.marks;  // Sort by marks in descending order
}

int main() {
    vector<Student> students = {{"Alice", 85}, {"Bob", 90}, {"Charlie", 80}};

    sort(students.begin(), students.end(), compare);

    for (auto s : students) cout << s.name << " " << s.marks << "\n";
    // Output: Bob 90
    //         Alice 85
    //         Charlie 80
}
```

---

## 🔹 Summary
✅ A **Comparator Function** defines custom sorting behavior.
✅ Used with `sort()`, `priority_queue`, `set`, `map`.
✅ Should return `true` if **first element should come before second**.
✅ Can be used with primitive types, pairs, or custom structures.

---

### 🚀 **Now you can implement custom sorting in your C++ programs!**

--------------------------------------------------------------------------------
/Counting Sort.md:
--------------------------------------------------------------------------------

# Counting Sort Algorithm

## Definition
Counting Sort is a non-comparison-based sorting algorithm that sorts elements by counting the number of occurrences of each unique element. It works efficiently for a limited range of integer values.
---

## Pseudocode
```plaintext
CountingSort(arr, n, maxVal):
    Create count array of size maxVal + 1 and initialize to 0
    for each element in arr:
        count[element] += 1

    index = 0
    for i from 0 to maxVal:
        while count[i] > 0:
            arr[index] = i
            index += 1
            count[i] -= 1
```

---

## C++ Implementation
```cpp
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

void countingSort(int arr[], int n) {
    int maxVal = *max_element(arr, arr + n);
    vector<int> count(maxVal + 1, 0);

    for (int i = 0; i < n; i++)
        count[arr[i]]++;

    int index = 0;
    for (int i = 0; i <= maxVal; i++) {
        while (count[i] > 0) {
            arr[index++] = i;
            count[i]--;
        }
    }
}

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++)
        cout << arr[i] << " ";
    cout << endl;
}

int main() {
    int arr[] = {4, 2, 2, 8, 3, 3, 1};
    int n = sizeof(arr) / sizeof(arr[0]);
    countingSort(arr, n);
    cout << "Sorted array: ";
    printArray(arr, n);
    return 0;
}
```

---

## Step-by-Step Explanation
### Given array:
```plaintext
arr[] = {4, 2, 2, 8, 3, 3, 1}
```
1. Find the maximum value: `maxVal = 8`
2. Create a count array for values `0..8`: `[0, 1, 2, 2, 1, 0, 0, 0, 1]`
3. Reconstruct sorted array: `{1, 2, 2, 3, 3, 4, 8}`

Final sorted array: `{1, 2, 2, 3, 3, 4, 8}`

---

## Time and Space Complexity
| Case | Time Complexity |
|------------|----------------|
| Best Case | O(n + k) |
| Worst Case | O(n + k) |
| Average Case | O(n + k) |
| Space Complexity | O(k) |

where `k` is the range of input values.

---

## Applications and Uses
1. **Sorting Small Range Values** - Best for cases where `k` is not significantly larger than `n`.
2. **Used in Radix Sort** - As a subroutine for sorting digits.
3. **Histogram-based Sorting** - Useful in scenarios like age grouping or frequency analysis.
4. **DNA Sequencing** - Used where values have a limited range.

---

## Specific Problems Where Counting Sort is Useful
1. **Sort Characters in a String** - Problems like "Sort Characters by Frequency" (LeetCode 451).
2. **Sort Elements with Limited Range** - Works well for problems involving small-range integers.
3. **Finding the K-th Most Frequent Element** - Useful in problems requiring frequency analysis.

---

## Conclusion
Counting Sort is a fast and efficient algorithm when `k` is small relative to `n`. However, it is not suitable for sorting large-range values due to its O(k) space complexity.

--------------------------------------------------------------------------------
/Dutch National Flag algorithm.md:
--------------------------------------------------------------------------------

# Dutch National Flag algorithm

The Dutch National Flag algorithm is designed to sort an array with three distinct values efficiently in one pass using constant extra space. It was proposed by Edsger W. Dijkstra and is particularly well-suited for problems like the "Sort Colors" problem where the values to be sorted are 0, 1, and 2.

### Key Idea:
The algorithm partitions the array into three sections:
1. Elements equal to 0.
2. Elements equal to 1.
3. Elements equal to 2.

The goal is to sort the array so that all 0s come first, followed by all 1s, and then all 2s.
### Dutch National Flag Algorithm Implementation:

```cpp
#include <vector>
#include <utility>
using namespace std;

class Solution {
public:
    void sortColors(vector<int>& nums) {
        int low = 0, mid = 0, high = (int)nums.size() - 1;

        while (mid <= high) {
            if (nums[mid] == 0) {
                swap(nums[low], nums[mid]);
                low++;
                mid++;
            } else if (nums[mid] == 1) {
                mid++;
            } else {  // nums[mid] == 2
                swap(nums[mid], nums[high]);
                high--;
            }
        }
    }
};
```

### How It Works:
We use three pointers:
1. `low` - This pointer marks the boundary between the section of 0s and the section of 1s and 2s.
2. `mid` - This pointer is used to traverse the array.
3. `high` - This pointer marks the boundary between the section of 2s and the section of 0s and 1s.

### Steps:
1. **Initialization**:
   - Set `low` to the beginning of the array (`0`).
   - Set `mid` to the beginning of the array (`0`).
   - Set `high` to the end of the array (`n - 1`).

2. **Traversal**:
   - While `mid` is less than or equal to `high`, check the value at `nums[mid]`:
     - If `nums[mid] == 0`:
       - Swap `nums[low]` and `nums[mid]`.
       - Increment both `low` and `mid` because the element `0` is now in the correct section.
     - If `nums[mid] == 1`:
       - Simply increment `mid` because the element `1` is already in the correct section.
     - If `nums[mid] == 2`:
       - Swap `nums[mid]` and `nums[high]`.
       - Decrement `high` because the element `2` is now in the correct section.
       - Do not increment `mid` in this case because the swapped element at `mid` needs to be examined.

3. **Termination**:
   - The loop terminates when `mid` surpasses `high`, indicating that all elements are sorted into their respective sections.
### Example Walkthrough:
For an array `[2, 0, 2, 1, 1, 0]`:

- Initial state:
```
low = 0, mid = 0, high = 5
Array: [2, 0, 2, 1, 1, 0]
```

- First iteration (`nums[mid] == 2`):
  - Swap `nums[mid]` and `nums[high]`, decrement `high`:
```
low = 0, mid = 0, high = 4
Array: [0, 0, 2, 1, 1, 2]
```

- Second iteration (`nums[mid] == 0`):
  - Swap `nums[low]` and `nums[mid]` (a no-op since `low == mid`), increment both:
```
low = 1, mid = 1, high = 4
Array: [0, 0, 2, 1, 1, 2]
```

- Third iteration (`nums[mid] == 0`):
  - Swap `nums[low]` and `nums[mid]` (again a no-op), increment both:
```
low = 2, mid = 2, high = 4
Array: [0, 0, 2, 1, 1, 2]
```

- Fourth iteration (`nums[mid] == 2`):
  - Swap `nums[mid]` and `nums[high]`, decrement `high`; `mid` stays put so the swapped-in element is examined:
```
low = 2, mid = 2, high = 3
Array: [0, 0, 1, 1, 2, 2]
```

- Fifth iteration (`nums[mid] == 1`):
  - Increment `mid`:
```
low = 2, mid = 3, high = 3
Array: [0, 0, 1, 1, 2, 2]
```

- Sixth iteration (`nums[mid] == 1`):
  - Increment `mid`:
```
low = 2, mid = 4, high = 3
Array: [0, 0, 1, 1, 2, 2]
```

- Loop terminates as `mid` (4) is now greater than `high` (3).

### Summary:
- The array is now sorted as `[0, 0, 1, 1, 2, 2]`.
- The algorithm runs in O(n) time complexity, as it makes a single pass through the array.
- It uses O(1) extra space, making it very space-efficient.

This algorithm ensures an efficient and optimal solution to the problem, meeting the constraints provided.
123 | 124 | 125 | 126 | 127 | 128 | -------------------------------------------------------------------------------- /Floyd's Tortoise and Hare (Cycle Detection) Algorithm.md: -------------------------------------------------------------------------------- 1 | # Floyd's Tortoise and Hare (Cycle Detection) Algorithm 2 | 3 | ## **Introduction** 4 | Floyd's Cycle Detection Algorithm, also known as the **Tortoise and Hare Algorithm**, is an efficient method to detect cycles in a sequence of values. It is widely used in computer science for problems involving linked lists, graph traversal, and numerical sequences. 5 | 6 | This algorithm uses two pointers (a slow-moving "tortoise" and a fast-moving "hare") to traverse the sequence. If there is a cycle, the two pointers will eventually meet. Otherwise, the fast pointer will reach the end. 7 | 8 | --- 9 | 10 | ## **Working Principle** 11 | ### **Step 1: Initialize Two Pointers** 12 | - Start with two pointers: `slow` (moves **one step** at a time) and `fast` (moves **two steps** at a time). 13 | 14 | ### **Step 2: Detect Cycle (Phase 1)** 15 | - Move the `slow` pointer by one step and the `fast` pointer by two steps. 16 | - If they meet at some point, a **cycle exists**. 17 | 18 | ### **Step 3: Find Cycle Start (Phase 2)** 19 | - Reset the `slow` pointer to the **start** of the sequence. 20 | - Move both `slow` and `fast` one step at a time. 21 | - The meeting point is the **start of the cycle**. 
22 | 23 | --- 24 | 25 | ## **Implementation in C++** 26 | ```cpp 27 | #include <iostream> 28 | #include <vector> 29 | 30 | using namespace std; 31 | 32 | int findDuplicate(vector<int>& nums) { 33 | int slow = nums[0], fast = nums[0]; 34 | 35 | // Phase 1: Detect cycle 36 | do { 37 | slow = nums[slow]; 38 | fast = nums[nums[fast]]; 39 | } while (slow != fast); 40 | 41 | // Phase 2: Find the duplicate 42 | slow = nums[0]; 43 | while (slow != fast) { 44 | slow = nums[slow]; 45 | fast = nums[fast]; 46 | } 47 | 48 | return slow; 49 | } 50 | 51 | int main() { 52 | vector<int> nums = {3, 1, 3, 4, 2}; 53 | cout << "Duplicate Number: " << findDuplicate(nums) << endl; 54 | return 0; 55 | } 56 | ``` 57 | 58 | --- 59 | 60 | ## **Complexity Analysis** 61 | - **Time Complexity:** `O(n)`, since each pointer moves at most `O(n)` times. 62 | - **Space Complexity:** `O(1)`, as only two pointers are used. 63 | 64 | --- 65 | 66 | ## **Applications of Floyd's Cycle Detection Algorithm** 67 | 1. **Detecting cycles in linked lists** (e.g., checking if a linked list has a loop). 68 | 2. **Finding duplicate numbers in an array** (as in the example above). 69 | 3. **Cycle detection in functional graphs** (directed graphs where each node has exactly one outgoing edge; general directed graphs are usually handled with DFS instead). 70 | 4. **Periodicity detection** in pseudo-random number generators. 71 | 5. **Cycle detection in functional mappings** (e.g., solving mathematical problems involving repeated function application). 72 | 73 | --- 74 | 75 | ## **Advantages** 76 | ✅ Uses only **O(1) extra space** (constant memory usage). 77 | ✅ Works in **O(n) time complexity**. 78 | ✅ Simple and efficient compared to other cycle detection methods. 79 | 80 | --- 81 | 82 | ## **Conclusion** 83 | Floyd's Cycle Detection Algorithm is an elegant and efficient method for detecting cycles in a sequence. Its wide range of applications, from linked lists to numerical sequences, makes it a fundamental concept in computer science. By understanding and implementing this algorithm, developers can efficiently handle cycle-related problems in various domains.
84 | 85 | --- 86 | 87 | 88 | -------------------------------------------------------------------------------- /Heap Sort.md: -------------------------------------------------------------------------------- 1 | # Heap Sort Algorithm 2 | 3 | ## Definition 4 | Heap Sort is a comparison-based sorting technique based on the Binary Heap data structure. It works by building a max heap and repeatedly extracting the largest element. 5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | HeapSort(arr): 11 | BuildMaxHeap(arr) 12 | for i from n-1 to 1: 13 | swap(arr[0], arr[i]) 14 | Heapify(arr, 0, i) 15 | 16 | BuildMaxHeap(arr): 17 | for i from n/2 down to 0: 18 | Heapify(arr, i, n) 19 | 20 | Heapify(arr, i, n): 21 | largest = i 22 | left = 2 * i + 1 23 | right = 2 * i + 2 24 | if left < n and arr[left] > arr[largest]: 25 | largest = left 26 | if right < n and arr[right] > arr[largest]: 27 | largest = right 28 | if largest != i: 29 | swap(arr[i], arr[largest]) 30 | Heapify(arr, largest, n) 31 | ``` 32 | 33 | --- 34 | 35 | ## C++ Implementation 36 | ```cpp 37 | #include <bits/stdc++.h> 38 | using namespace std; 39 | 40 | void heapify(int arr[], int n, int i) { 41 | int largest = i; 42 | int left = 2 * i + 1; 43 | int right = 2 * i + 2; 44 | 45 | if (left < n && arr[left] > arr[largest]) 46 | largest = left; 47 | 48 | if (right < n && arr[right] > arr[largest]) 49 | largest = right; 50 | 51 | if (largest != i) { 52 | swap(arr[i], arr[largest]); 53 | heapify(arr, n, largest); 54 | } 55 | } 56 | 57 | void heapSort(int arr[], int n) { 58 | for (int i = n / 2 - 1; i >= 0; i--) 59 | heapify(arr, n, i); 60 | 61 | for (int i = n - 1; i > 0; i--) { 62 | swap(arr[0], arr[i]); 63 | heapify(arr, i, 0); 64 | } 65 | } 66 | 67 | void printArray(int arr[], int size) { 68 | for (int i = 0; i < size; i++) 69 | cout << arr[i] << " "; 70 | cout << endl; 71 | } 72 | 73 | int main() { 74 | int arr[] = {12, 11, 13, 5, 6, 7}; 75 | int n =
sizeof(arr) / sizeof(arr[0]); 76 | heapSort(arr, n); 77 | cout << "Sorted array: "; 78 | printArray(arr, n); 79 | return 0; 80 | } 81 | ``` 82 | 83 | --- 84 | 85 | ## Step-by-Step Explanation 86 | ### Given array: 87 | ```plaintext 88 | arr[] = {12, 11, 13, 5, 6, 7} 89 | ``` 90 | 1. Build max heap from input array. 91 | 2. Extract maximum element and place it at the end. 92 | 3. Heapify the remaining heap. 93 | 4. Repeat until the array is sorted. 94 | 95 | Final sorted array: `{5, 6, 7, 11, 12, 13}` 96 | 97 | --- 98 | 99 | ## Time and Space Complexity 100 | | Case | Time Complexity | 101 | |------------|----------------| 102 | | Best Case | O(n log n) | 103 | | Worst Case | O(n log n) | 104 | | Average Case | O(n log n) | 105 | | Space Complexity | O(1) | 106 | 107 | --- 108 | 109 | ## Applications and Uses 110 | 1. **Priority Queues** - Heap Sort is useful in applications requiring priority-based operations. 111 | 2. **Graph Algorithms** - Used in Dijkstra’s and Prim’s algorithms. 112 | 3. **Real-Time Processing** - Used where consistent O(n log n) performance is required. 113 | 4. **Operating Systems** - Used for scheduling processes. 114 | 115 | --- 116 | 117 | ## Specific Problems Where Heap Sort is Useful 118 | 1. **Finding the k-th Largest/Smallest Element** - Efficient in problems like "Kth Largest Element in an Array" (LeetCode 215). 119 | 2. **Sorting Almost Sorted Arrays** - Ideal for problems where elements are close to their sorted positions. 120 | 3. **Merging k Sorted Lists** - Used in problems like "Merge k Sorted Lists" (LeetCode 23). 121 | 122 | --- 123 | 124 | ## Conclusion 125 | Heap Sort is an in-place sorting algorithm with O(n log n) complexity, making it efficient for large datasets where stable sorting is not required. 
126 | -------------------------------------------------------------------------------- /Insertion Sort.md: -------------------------------------------------------------------------------- 1 | # Insertion Sort Algorithm 2 | 3 | ## Definition 4 | Insertion Sort is a simple and efficient comparison-based sorting algorithm that builds the final sorted array one item at a time by inserting each element into its correct position. 5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | InsertionSort(arr, n): 11 | for i from 1 to n-1: 12 | key = arr[i] 13 | j = i - 1 14 | while j >= 0 and arr[j] > key: 15 | arr[j + 1] = arr[j] 16 | j = j - 1 17 | arr[j + 1] = key 18 | ``` 19 | 20 | --- 21 | 22 | ## C++ Implementation 23 | ```cpp 24 | #include <iostream> 25 | using namespace std; 26 | 27 | void insertionSort(int arr[], int n) { 28 | for (int i = 1; i < n; i++) { 29 | int key = arr[i]; 30 | int j = i - 1; 31 | while (j >= 0 && arr[j] > key) { 32 | arr[j + 1] = arr[j]; 33 | j = j - 1; 34 | } 35 | arr[j + 1] = key; 36 | } 37 | } 38 | 39 | void printArray(int arr[], int n) { 40 | for (int i = 0; i < n; i++) { 41 | cout << arr[i] << " "; 42 | } 43 | cout << endl; 44 | } 45 | 46 | int main() { 47 | int arr[] = {12, 11, 13, 5, 6}; 48 | int n = sizeof(arr) / sizeof(arr[0]); 49 | 50 | cout << "Original Array: "; 51 | printArray(arr, n); 52 | 53 | insertionSort(arr, n); 54 | 55 | cout << "Sorted Array: "; 56 | printArray(arr, n); 57 | return 0; 58 | } 59 | ``` 60 | 61 | --- 62 | 63 | ## Step-by-Step Explanation 64 | Let's take an example array: 65 | ```plaintext 66 | arr[] = {12, 11, 13, 5, 6} 67 | ``` 68 | ### **Pass 1:** 69 | - Compare **11** with **12**, insert 11 before 12 → `{11, 12, 13, 5, 6}` 70 | 71 | ### **Pass 2:** 72 | - Compare **13** with 12, no change → `{11, 12, 13, 5, 6}` 73 | 74 | ### **Pass 3:** 75 | - Compare **5** with 13, 12, and 11, insert before 11 → `{5, 11, 12, 13, 6}` 76 | 77 | ### **Pass 4:** 78 | - Compare **6** with 13, 12, and 11, insert before 11 → `{5, 6, 11, 12,
13}` 79 | 80 | Final sorted array: `{5, 6, 11, 12, 13}` 81 | 82 | --- 83 | 84 | ## Time and Space Complexity 85 | | Case | Time Complexity | 86 | |------------|----------------| 87 | | Best Case | O(n) (Already sorted) | 88 | | Worst Case | O(n²) (Reversed order) | 89 | | Average Case | O(n²) | 90 | | Space Complexity | O(1) (In-place sorting) | 91 | 92 | --- 93 | 94 | ## Applications and Uses 95 | 1. **Efficient for Small Data Sets** - Works well for small and nearly sorted datasets. 96 | 2. **Stable Sort** - Preserves the relative order of equal elements. 97 | 3. **Used in Online Sorting** - Suitable for situations where elements arrive one by one and need to be sorted dynamically. 98 | 4. **Sorting Playing Cards** - Similar to how people sort playing cards in hand. 99 | 100 | --- 101 | 102 | ## Specific Problems Where Insertion Sort is Useful 103 | 1. **Sorting Partially Sorted Arrays** - If an array is nearly sorted, Insertion Sort performs in O(n) time. 104 | 2. **Sorting Small Lists in Hybrid Algorithms** - Often used in Timsort and IntroSort for small subarrays. 105 | 3. **Maintaining Order in a Stream of Data** - Used in online algorithms where new elements are continuously added. 106 | 4. **Minimal Memory Overhead** - Used in memory-constrained applications where in-place sorting is required. 107 | 108 | --- 109 | 110 | ## Conclusion 111 | Insertion Sort is an easy-to-implement algorithm that works well for small or nearly sorted datasets. However, it is inefficient for large datasets due to its O(n²) complexity. 112 | -------------------------------------------------------------------------------- /KNP Algorithm.md: -------------------------------------------------------------------------------- 1 | # Knuth-Morris-Pratt (KMP) Algorithm 2 | 3 | ## Introduction 4 | The **Knuth-Morris-Pratt (KMP) algorithm** is an efficient string-searching algorithm used to find the first occurrence of a pattern within a text. 
Unlike the brute-force method, KMP preprocesses the pattern to avoid redundant comparisons, making it significantly faster for large texts. 5 | 6 | ## Algorithm Steps 7 | ### 1. Compute the LPS (Longest Prefix Suffix) Array: 8 | - The LPS array is crucial in reducing redundant comparisons. 9 | - It stores the length of the longest proper prefix which is also a suffix for each prefix of the pattern. 10 | - This helps determine how much the pattern should be shifted when a mismatch occurs. 11 | 12 | ### 2. Use the LPS Array to Search the Pattern in the Text: 13 | - Compare characters of the pattern with the text. 14 | - If characters match, move both pointers forward. 15 | - If a mismatch occurs, use the LPS array to determine the next comparison point, avoiding unnecessary resets. 16 | 17 | --- 18 | 19 | ## C++ Implementation 20 | ```cpp 21 | #include <iostream> 22 | #include <string> 23 | #include <vector> 24 | using namespace std; 25 | // Function to compute the LPS array 26 | vector<int> computeLPS(const string& pattern) { 27 | int m = pattern.length(); 28 | vector<int> lps(m, 0); 29 | int len = 0; 30 | int i = 1; 31 | 32 | while (i < m) { 33 | if (pattern[i] == pattern[len]) { 34 | len++; 35 | lps[i] = len; 36 | i++; 37 | } else { 38 | if (len != 0) { 39 | len = lps[len - 1]; 40 | } else { 41 | lps[i] = 0; 42 | i++; 43 | } 44 | } 45 | } 46 | return lps; 47 | } 48 | 49 | // KMP Algorithm for pattern searching 50 | void KMPSearch(const string& text, const string& pattern) { 51 | int n = text.length(); 52 | int m = pattern.length(); 53 | vector<int> lps = computeLPS(pattern); 54 | 55 | int i = 0, j = 0; 56 | while (i < n) { 57 | if (text[i] == pattern[j]) { 58 | i++; 59 | j++; 60 | } 61 | if (j == m) { 62 | cout << "Pattern found at index " << i - j << endl; 63 | j = lps[j - 1]; 64 | } else if (i < n && text[i] != pattern[j]) { 65 | if (j != 0) { 66 | j = lps[j - 1]; 67 | } else { 68 | i++; 69 | } 70 | } 71 | } 72 | } 73 | 74 | int main() { 75 | string text = "ababcababcabc"; 76 | string pattern = "abc"; 77 |
KMPSearch(text, pattern); 78 | return 0; 79 | } 80 | ``` 81 | 82 | --- 83 | 84 | ## Complexity Analysis 85 | - **Preprocessing LPS Array:** \(O(m)\) 86 | - **Pattern Searching:** \(O(n)\) 87 | - **Overall Time Complexity:** \(O(n + m)\) 88 | - **Space Complexity:** \(O(m)\) (for LPS array) 89 | 90 | ## Why is KMP Efficient? 91 | - Unlike brute force methods (\(O(nm)\)), KMP does not backtrack the text pointer after a mismatch. 92 | - Utilizes the LPS array to determine the next position for comparison, making it significantly faster. 93 | 94 | ## Advantages of KMP 95 | - **Eliminates unnecessary comparisons**, improving efficiency. 96 | - **Works well for long texts** where multiple pattern matches might occur. 97 | - **Optimized for multiple pattern searches**, especially in real-time applications. 98 | 99 | ## Applications 100 | - **Text Searching**: Searching for words in documents, search engines. 101 | - **Plagiarism Detection**: Comparing documents for similarities. 102 | - **DNA Sequence Matching**: Identifying gene patterns in biological sequences. 103 | - **Spam Filtering**: Detecting spam phrases in messages and emails. 104 | - **Intrusion Detection Systems**: Pattern matching in cybersecurity to detect malicious activities. 105 | - **Data Mining**: Identifying recurring patterns in large datasets. 106 | -------------------------------------------------------------------------------- /Kadane's Algorithm.md: -------------------------------------------------------------------------------- 1 | # Kadane's Algorithm 2 | 3 | ## Introduction 4 | Kadane's Algorithm is a famous algorithm used to find the maximum sum of a contiguous subarray within a one-dimensional numeric array. It runs in **O(n)** time complexity, making it an efficient solution for the **Maximum Subarray Sum** problem. 5 | 6 | ## Problem Statement 7 | Given an array `arr[]` of size `n`, find the **maximum sum of a contiguous subarray**. 
8 | 9 | ### Example: 10 | #### Input: 11 | ```cpp 12 | arr[] = {-2, 1, -3, 4, -1, 2, 1, -5, 4} 13 | ``` 14 | #### Output: 15 | ``` 16 | Maximum contiguous sum is 6 17 | ``` 18 | #### Explanation: 19 | The subarray `[4, -1, 2, 1]` has the maximum sum `6`. 20 | 21 | --- 22 | 23 | ## Approach 24 | 1. Initialize two variables: 25 | - `maxSum` to store the maximum sum found so far. 26 | - `currentSum` to store the sum of the current subarray. 27 | 2. Iterate through the array: 28 | - Add the current element to `currentSum`. 29 | - If `currentSum` exceeds `maxSum`, update `maxSum`. 30 | - If `currentSum` becomes negative, reset it to `0`. 31 | 3. Return `maxSum` as the result. 32 | 33 | --- 34 | 35 | ## Pseudo Code 36 | ``` 37 | function kadaneAlgorithm(arr, n): 38 | maxSum = -∞ // Initialize max sum as negative infinity 39 | currentSum = 0 40 | 41 | for i from 0 to n-1: 42 | currentSum = currentSum + arr[i] 43 | if currentSum > maxSum: 44 | maxSum = currentSum 45 | if currentSum < 0: 46 | currentSum = 0 47 | 48 | return maxSum 49 | ``` 50 | 51 | --- 52 | 53 | ## C++ Implementation 54 | ```cpp 55 | #include <iostream> 56 | #include <climits> 57 | using namespace std; 58 | 59 | int kadane(int arr[], int n) { 60 | int maxSum = INT_MIN, currentSum = 0; 61 | 62 | for (int i = 0; i < n; i++) { 63 | currentSum += arr[i]; 64 | if (currentSum > maxSum) 65 | maxSum = currentSum; 66 | if (currentSum < 0) 67 | currentSum = 0; 68 | } 69 | return maxSum; 70 | } 71 | 72 | int main() { 73 | int arr[] = {-2, 1, -3, 4, -1, 2, 1, -5, 4}; 74 | int n = sizeof(arr) / sizeof(arr[0]); 75 | cout << "Maximum contiguous sum is " << kadane(arr, n) << endl; 76 | return 0; 77 | } 78 | ``` 79 | 80 | ## C++ Implementation (Standard Form) 81 | ```cpp 82 | #include <iostream> 83 | #include <climits> 84 | #include <algorithm> 85 | using namespace std; 86 | int kadane(int arr[], int n) { 87 | int maxSum = INT_MIN, currentMax = 0; 88 | 89 | for (int i = 0; i < n; i++) { 90 | currentMax = max(arr[i], currentMax + arr[i]); // Standard Kadane’s formula 91 | maxSum = max(maxSum,
currentMax); 92 | } 93 | return maxSum; 94 | } 95 | 96 | int main() { 97 | int arr[] = {-2, 1, -3, 4, -1, 2, 1, -5, 4}; 98 | int n = sizeof(arr) / sizeof(arr[0]); 99 | cout << "Maximum contiguous sum is " << kadane(arr, n) << endl; 100 | return 0; 101 | } 102 | ``` 103 | 104 | --- 105 | 106 | ## Complexity Analysis 107 | - **Time Complexity:** `O(n)`, since we traverse the array once. 108 | - **Space Complexity:** `O(1)`, as we use only a few extra variables. 109 | 110 | --- 111 | 112 | ## Edge Cases Considered 113 | - All negative elements (choose the largest element as the max sum). 114 | - A mix of positive and negative numbers. 115 | - An already sorted increasing or decreasing array. 116 | - Single-element arrays. 117 | 118 | --- 119 | 120 | ## Variants of Kadane's Algorithm 121 | 1. **Finding the subarray itself**: 122 | - Maintain `start`, `end`, and `tempStart` indices. 123 | 2. **2D Kadane's Algorithm**: 124 | - Used for finding the maximum sum submatrix in a 2D array. 125 | 126 | --- 127 | 128 | ## Conclusion 129 | Kadane’s Algorithm efficiently finds the maximum contiguous subarray sum in linear time, making it a fundamental algorithm in competitive programming and interviews. 130 | -------------------------------------------------------------------------------- /Merge Sort.md: -------------------------------------------------------------------------------- 1 | # Merge Sort Algorithm 2 | 3 | ## Definition 4 | Merge Sort is a divide-and-conquer sorting algorithm that splits an array into smaller subarrays, recursively sorts them, and then merges them back together in sorted order. 
5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | MergeSort(arr, left, right): 11 | if left < right: 12 | mid = (left + right) / 2 13 | MergeSort(arr, left, mid) 14 | MergeSort(arr, mid + 1, right) 15 | Merge(arr, left, mid, right) 16 | 17 | Merge(arr, left, mid, right): 18 | Create leftSubArray and rightSubArray 19 | Merge the two subarrays in sorted order 20 | ``` 21 | 22 | --- 23 | 24 | ## C++ Implementation 25 | ```cpp 26 | #include <iostream> 27 | #include <vector> 28 | using namespace std; 29 | void merge(int arr[], int left, int mid, int right) { 30 | int n1 = mid - left + 1; 31 | int n2 = right - mid; 32 | 33 | vector<int> leftArr(n1), rightArr(n2); // vectors avoid non-standard variable-length arrays 34 | for (int i = 0; i < n1; i++) 35 | leftArr[i] = arr[left + i]; 36 | for (int i = 0; i < n2; i++) 37 | rightArr[i] = arr[mid + 1 + i]; 38 | 39 | int i = 0, j = 0, k = left; 40 | while (i < n1 && j < n2) { 41 | if (leftArr[i] <= rightArr[j]) { 42 | arr[k] = leftArr[i]; 43 | i++; 44 | } else { 45 | arr[k] = rightArr[j]; 46 | j++; 47 | } 48 | k++; 49 | } 50 | 51 | while (i < n1) { 52 | arr[k] = leftArr[i]; 53 | i++; 54 | k++; 55 | } 56 | 57 | while (j < n2) { 58 | arr[k] = rightArr[j]; 59 | j++; 60 | k++; 61 | } 62 | } 63 | 64 | void mergeSort(int arr[], int left, int right) { 65 | if (left < right) { 66 | int mid = left + (right - left) / 2; 67 | 68 | mergeSort(arr, left, mid); 69 | mergeSort(arr, mid + 1, right); 70 | merge(arr, left, mid, right); 71 | } 72 | } 73 | 74 | void printArray(int arr[], int n) { 75 | for (int i = 0; i < n; i++) { 76 | cout << arr[i] << " "; 77 | } 78 | cout << endl; 79 | } 80 | 81 | int main() { 82 | int arr[] = {12, 11, 13, 5, 6, 7}; 83 | int n = sizeof(arr) / sizeof(arr[0]); 84 | 85 | cout << "Original Array: "; 86 | printArray(arr, n); 87 | 88 | mergeSort(arr, 0, n - 1); 89 | 90 | cout << "Sorted Array: "; 91 | printArray(arr, n); 92 | return 0; 93 | } 94 | ``` 95 | 96 | --- 97 | 98 | ## Step-by-Step Explanation 99 | ### Given array: 100 | ```plaintext 101 | arr[] = {12, 11, 13, 5, 6, 7} 102 | ``` 103 | ### 
**Step 1: Divide the Array** 104 | - Split into `{12, 11, 13}` and `{5, 6, 7}` 105 | - Further split into `{12}`, `{11, 13}`, `{5}`, `{6, 7}` 106 | - Continue until single elements remain 107 | 108 | ### **Step 2: Merge and Sort** 109 | - Merge `{12}` and `{11, 13}` into `{11, 12, 13}` 110 | - Merge `{5}` and `{6, 7}` into `{5, 6, 7}` 111 | - Finally, merge `{11, 12, 13}` and `{5, 6, 7}` into `{5, 6, 7, 11, 12, 13}` 112 | 113 | Final sorted array: `{5, 6, 7, 11, 12, 13}` 114 | 115 | --- 116 | 117 | ## Time and Space Complexity 118 | | Case | Time Complexity | 119 | |------------|----------------| 120 | | Best Case | O(n log n) | 121 | | Worst Case | O(n log n) | 122 | | Average Case | O(n log n) | 123 | | Space Complexity | O(n) | 124 | 125 | --- 126 | 127 | ## Applications and Uses 128 | 1. **Sorting Large Data Sets** - Merge Sort is used when dealing with massive amounts of data. 129 | 2. **External Sorting** - Used in scenarios where data does not fit into memory. 130 | 3. **Used in Linked Lists** - Merge Sort performs efficiently on linked lists as it minimizes movement. 131 | 4. **Parallel Processing** - Suitable for multi-threaded implementations due to divide-and-conquer nature. 132 | 133 | --- 134 | 135 | ## Specific Problems Where Merge Sort is Useful 136 | 1. **Sorting Large Files** - Used in external sorting where data is too big to fit into RAM. 137 | 2. **Counting Inversions in an Array** - Helps find the number of inversions in an array in O(n log n) time. 138 | 3. **Sorting Linked Lists** - Efficient for linked list sorting since it does not require random access. 139 | 4. **TimSort (Hybrid Algorithm)** - TimSort (used in Python and Java) is a hybrid of Merge Sort and Insertion Sort. 140 | 141 | --- 142 | 143 | ## Conclusion 144 | Merge Sort is a highly efficient and stable sorting algorithm with O(n log n) time complexity. It is particularly useful for large datasets and external sorting but requires additional memory for merging.
145 | -------------------------------------------------------------------------------- /Morris Traversal.md: -------------------------------------------------------------------------------- 1 | # Morris Traversal 2 | 3 | ## Introduction 4 | Morris Traversal is a tree traversal algorithm that **does not use recursion or a stack**. It modifies the tree temporarily to achieve an **O(1) space complexity** while maintaining **O(n) time complexity**. It is mainly used for **inorder** and **preorder** traversal of binary trees. 5 | 6 | --- 7 | 8 | ## Algorithm 9 | Morris Traversal works by using **threaded binary trees**. The key idea is to create a temporary link (thread) to the inorder predecessor of the current node, which helps in backtracking. 10 | 11 | ### Steps for Morris Inorder Traversal: 12 | 1. Initialize the current node as the root. 13 | 2. If the current node has no left child, print the node and move to the right child. 14 | 3. Otherwise, find the inorder predecessor of the current node (rightmost node of the left subtree). 4. If the predecessor's right child is `NULL`, make the current node its right child and move to the left child. 15 | 5. If the predecessor's right child is already set to the current node, remove the thread, print the current node, and move to the right child. 16 | 6. Repeat until the entire tree is traversed. 17 | 18 | ### Steps for Morris Preorder Traversal: 19 | 1. Initialize the current node as the root. 20 | 2. If the current node has no left child, print the node and move to the right child. 21 | 3. Otherwise, find the inorder predecessor. 22 | 4. If the predecessor's right child is `NULL`, make the current node its right child, print the current node, and move to the left child. 23 | 5. If the predecessor's right child is already set to the current node, remove the thread and move to the right child. 24 | 6. Repeat until the entire tree is traversed.
25 | 26 | --- 27 | 28 | ## Pseudocode 29 | 30 | ### **Morris Inorder Traversal** 31 | ```plaintext 32 | function MorrisInorder(root): 33 | current = root 34 | while current is not NULL: 35 | if current.left is NULL: 36 | print current.data 37 | current = current.right 38 | else: 39 | predecessor = current.left 40 | while predecessor.right is not NULL and predecessor.right != current: 41 | predecessor = predecessor.right 42 | 43 | if predecessor.right is NULL: 44 | predecessor.right = current 45 | current = current.left 46 | else: 47 | predecessor.right = NULL 48 | print current.data 49 | current = current.right 50 | ``` 51 | 52 | ### **Morris Preorder Traversal** 53 | ```plaintext 54 | function MorrisPreorder(root): 55 | current = root 56 | while current is not NULL: 57 | if current.left is NULL: 58 | print current.data 59 | current = current.right 60 | else: 61 | predecessor = current.left 62 | while predecessor.right is not NULL and predecessor.right != current: 63 | predecessor = predecessor.right 64 | 65 | if predecessor.right is NULL: 66 | predecessor.right = current 67 | print current.data 68 | current = current.left 69 | else: 70 | predecessor.right = NULL 71 | current = current.right 72 | ``` 73 | 74 | --- 75 | 76 | ## C++ Implementation 77 | 78 | ### **Morris Inorder Traversal** 79 | ```cpp 80 | #include <iostream> 81 | using namespace std; 82 | 83 | struct Node { 84 | int data; 85 | Node* left; 86 | Node* right; 87 | }; 88 | 89 | Node* createNode(int data) { 90 | Node* newNode = new Node(); 91 | newNode->data = data; 92 | newNode->left = newNode->right = NULL; 93 | return newNode; 94 | } 95 | 96 | void morrisInorder(Node* root) { 97 | Node* current = root; 98 | while (current != NULL) { 99 | if (current->left == NULL) { 100 | cout << current->data << " "; 101 | current = current->right; 102 | } else { 103 | Node* predecessor = current->left; 104 | while (predecessor->right != NULL && predecessor->right != current) 105 | predecessor = predecessor->right; 106 | 107 |
if (predecessor->right == NULL) { 108 | predecessor->right = current; 109 | current = current->left; 110 | } else { 111 | predecessor->right = NULL; 112 | cout << current->data << " "; 113 | current = current->right; 114 | } 115 | } 116 | } 117 | } 118 | 119 | int main() { 120 | Node* root = createNode(1); 121 | root->left = createNode(2); 122 | root->right = createNode(3); 123 | root->left->left = createNode(4); 124 | root->left->right = createNode(5); 125 | 126 | cout << "Morris Inorder Traversal: "; 127 | morrisInorder(root); 128 | return 0; 129 | } 130 | ``` 131 | 132 | ### **Morris Preorder Traversal** 133 | ```cpp 134 | void morrisPreorder(Node* root) { 135 | Node* current = root; 136 | while (current != NULL) { 137 | if (current->left == NULL) { 138 | cout << current->data << " "; 139 | current = current->right; 140 | } else { 141 | Node* predecessor = current->left; 142 | while (predecessor->right != NULL && predecessor->right != current) 143 | predecessor = predecessor->right; 144 | 145 | if (predecessor->right == NULL) { 146 | predecessor->right = current; 147 | cout << current->data << " "; 148 | current = current->left; 149 | } else { 150 | predecessor->right = NULL; 151 | current = current->right; 152 | } 153 | } 154 | } 155 | } 156 | ``` 157 | 158 | --- 159 | 160 | ## Complexity Analysis 161 | | Algorithm | Time Complexity | Space Complexity | 162 | |------------|---------------|-----------------| 163 | | Morris Inorder | O(n) | O(1) | 164 | | Morris Preorder | O(n) | O(1) | 165 | 166 | --- 167 | 168 | ## Advantages 169 | - **O(1) space complexity** (does not use stack or recursion). 170 | - **Efficient traversal** without modifying the tree permanently. 171 | 172 | ## Disadvantages 173 | - **Modifies the tree temporarily**, which can be an issue in some applications. 174 | - **Not well-suited for certain tree structures** where modification is restricted. 
175 | 176 | --- 177 | 178 | ## Conclusion 179 | Morris Traversal is a powerful technique for tree traversal **without recursion or extra space**. It is useful in scenarios where memory is constrained, but care must be taken due to its temporary modifications to the tree. 180 | 181 | --- 182 | 183 | ### **References** 184 | - [Binary Tree Traversals - GeeksforGeeks](https://www.geeksforgeeks.org/inorder-tree-traversal-without-recursion-and-without-stack/) 185 | -------------------------------------------------------------------------------- /Quick Sort.md: -------------------------------------------------------------------------------- 1 | # Quick Sort Algorithm 2 | 3 | ## Definition 4 | Quick Sort is a divide-and-conquer algorithm that selects a pivot element, partitions the array around the pivot, and recursively sorts the partitions. 5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | QuickSort(arr, low, high): 11 | if low < high: 12 | pivotIndex = Partition(arr, low, high) 13 | QuickSort(arr, low, pivotIndex - 1) 14 | QuickSort(arr, pivotIndex + 1, high) 15 | 16 | Partition(arr, low, high): 17 | pivot = arr[high] 18 | i = low - 1 19 | for j from low to high - 1: 20 | if arr[j] < pivot: 21 | i++ 22 | swap(arr[i], arr[j]) 23 | swap(arr[i + 1], arr[high]) 24 | return i + 1 25 | ``` 26 | 27 | --- 28 | 29 | ## C++ Implementation 30 | ```cpp 31 | #include <bits/stdc++.h> 32 | using namespace std; 33 | 34 | int partition(int arr[], int low, int high) { 35 | int pivot = arr[high]; 36 | int i = (low - 1); 37 | for (int j = low; j < high; j++) { 38 | if (arr[j] < pivot) { 39 | i++; 40 | swap(arr[i], arr[j]); 41 | } 42 | } 43 | swap(arr[i + 1], arr[high]); 44 | return (i + 1); 45 | } 46 | 47 | void quickSort(int arr[], int low, int high) { 48 | if (low < high) { 49 | int pi = partition(arr, low, high); 50 | quickSort(arr, low, pi - 1); 51 | quickSort(arr, pi + 1, high); 52 | } 53 | } 54 | 55 | void printArray(int arr[], int size) { 56 | for (int i = 0; i < size; i++) 57 | cout <<
arr[i] << " "; 58 | cout << endl; 59 | } 60 | 61 | int main() { 62 | int arr[] = {10, 7, 8, 9, 1, 5}; 63 | int n = sizeof(arr) / sizeof(arr[0]); 64 | quickSort(arr, 0, n - 1); 65 | cout << "Sorted array: "; 66 | printArray(arr, n); 67 | return 0; 68 | } 69 | ``` 70 | 71 | --- 72 | 73 | ## Step-by-Step Explanation 74 | ### Given array: 75 | ```plaintext 76 | arr[] = {10, 7, 8, 9, 1, 5} 77 | ``` 78 | 1. Choose pivot (e.g., last element `5`) 79 | 2. Partition array into elements `< 5` and `>= 5` 80 | 3. Recursively apply Quick Sort on partitions 81 | 82 | Final sorted array: `{1, 5, 7, 8, 9, 10}` 83 | 84 | --- 85 | 86 | ## Time and Space Complexity 87 | | Case | Time Complexity | 88 | |------------|----------------| 89 | | Best Case | O(n log n) | 90 | | Worst Case | O(n^2) | 91 | | Average Case | O(n log n) | 92 | | Space Complexity | O(log n) | 93 | 94 | --- 95 | 96 | ## Applications and Uses 97 | 1. **Efficient Sorting** - Quick Sort is widely used in sorting algorithms due to its average O(n log n) complexity. 98 | 2. **Divide-and-Conquer Algorithms** - Forms a base for other recursive solutions. 99 | 3. **Database Sorting** - Used in indexing and searching operations. 100 | 4. **Competitive Programming** - Preferred for fast in-memory sorting. 101 | 102 | --- 103 | 104 | ## Specific Problems Where Quick Sort is Useful 105 | 1. **Sorting Large Arrays in Competitive Programming** - Quick Sort is often the fastest approach. 106 | 2. **Sorting in Database Queries** - Used for optimizing query operations. 107 | 3. **Median Finding Algorithms** - Quick Sort helps in efficiently finding medians. 108 | 4. **External Sorting** - Used in cases where memory efficiency is critical. 109 | 110 | --- 111 | 112 | ## Conclusion 113 | Quick Sort is one of the fastest sorting algorithms, widely used in practice, but it can have O(n²) worst-case complexity if pivot selection is poor. Choosing a good pivot (e.g., median of three) helps improve efficiency. 
114 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Algorithms -------------------------------------------------------------------------------- /Radix Sort.md: -------------------------------------------------------------------------------- 1 | # Radix Sort Algorithm 2 | 3 | ## Definition 4 | Radix Sort is a non-comparative integer sorting algorithm that sorts numbers digit by digit, from the least significant to the most significant digit, using a stable sorting algorithm like Counting Sort. 5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | RadixSort(arr, n): 11 | Find the maximum number to determine the number of digits 12 | for each digit position (1s, 10s, 100s, ...): 13 | Perform Counting Sort based on the current digit 14 | ``` 15 | 16 | --- 17 | 18 | ## C++ Implementation 19 | ```cpp 20 | #include <iostream> 21 | #include <vector> 22 | #include <algorithm> 23 | using namespace std; 24 | 25 | int getMax(int arr[], int n) { 26 | int maxVal = arr[0]; 27 | for (int i = 1; i < n; i++) 28 | if (arr[i] > maxVal) 29 | maxVal = arr[i]; 30 | return maxVal; 31 | } 32 | 33 | void countingSort(int arr[], int n, int exp) { 34 | vector<int> output(n); 35 | vector<int> count(10, 0); 36 | 37 | for (int i = 0; i < n; i++) 38 | count[(arr[i] / exp) % 10]++; 39 | 40 | for (int i = 1; i < 10; i++) 41 | count[i] += count[i - 1]; 42 | 43 | for (int i = n - 1; i >= 0; i--) { 44 | output[count[(arr[i] / exp) % 10] - 1] = arr[i]; 45 | count[(arr[i] / exp) % 10]--; 46 | } 47 | 48 | for (int i = 0; i < n; i++) 49 | arr[i] = output[i]; 50 | } 51 | 52 | void radixSort(int arr[], int n) { 53 | int maxVal = getMax(arr, n); 54 | for (int exp = 1; maxVal / exp > 0; exp *= 10) 55 | countingSort(arr, n, exp); 56 | } 57 | 58 | void printArray(int arr[], int size) { 59 | for (int i = 0; i < size; i++) 60 | cout << arr[i] << " "; 61 | cout << endl; 62 | } 63 | 64 | int main() { 65 | int arr[] = {170, 
45, 75, 90, 802, 24, 2, 66}; 66 | int n = sizeof(arr) / sizeof(arr[0]); 67 | radixSort(arr, n); 68 | cout << "Sorted array: "; 69 | printArray(arr, n); 70 | return 0; 71 | } 72 | ``` 73 | 74 | --- 75 | 76 | ## Step-by-Step Explanation 77 | ### Given array: 78 | ```plaintext 79 | arr[] = {170, 45, 75, 90, 802, 24, 2, 66} 80 | ``` 81 | 1. Find the maximum number (802) to determine the number of digit passes. 82 | 2. Sort numbers based on each digit position using Counting Sort: 83 | - **1st pass (1s place):** `{170, 90, 802, 2, 24, 45, 75, 66}` 84 | - **2nd pass (10s place):** `{802, 2, 24, 45, 66, 170, 75, 90}` 85 | - **3rd pass (100s place):** `{2, 24, 45, 66, 75, 90, 170, 802}` 86 | 3. The final sorted array: `{2, 24, 45, 66, 75, 90, 170, 802}` 87 | 88 | --- 89 | 90 | ## Time and Space Complexity 91 | | Case | Time Complexity | 92 | |------------|----------------| 93 | | Best Case | O(nk) | 94 | | Worst Case | O(nk) | 95 | | Average Case | O(nk) | 96 | | Space Complexity | O(n + k) | 97 | 98 | where `k` is the number of digits in the maximum number. 99 | 100 | --- 101 | 102 | ## Applications and Uses 103 | 1. **Large Numbers Sorting** - Used when sorting large integers efficiently. 104 | 2. **Sorting Strings** - Used in fixed-length string sorting. 105 | 3. **Used in DNA Sequencing** - Efficient for sorting large datasets in bioinformatics. 106 | 4. **Post Office Sorting** - Used for sorting postal codes. 107 | 108 | --- 109 | 110 | ## Specific Problems Where Radix Sort is Useful 111 | 1. **Sorting Phone Numbers** - Since phone numbers have fixed digits, Radix Sort is efficient. 112 | 2. **Sorting Large Data Without Comparisons** - Useful for scenarios where comparison-based sorting is inefficient. 113 | 3. **LSD vs. MSD Sorting Problems** - Used in problems requiring sorting based on different digit significance. 
114 | 115 | --- 116 | 117 | ## Conclusion 118 | Radix Sort is a powerful non-comparative sorting algorithm that excels at sorting fixed-length numbers or strings. However, it is not ideal when dealing with a large range of values due to its auxiliary space usage. 119 | -------------------------------------------------------------------------------- /Selection Sort.md: -------------------------------------------------------------------------------- 1 | # Selection Sort Algorithm 2 | 3 | ## Definition 4 | Selection Sort is a simple comparison-based sorting algorithm that repeatedly selects the smallest (or largest) element from the unsorted portion and places it in the correct position. 5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | SelectionSort(arr, n): 11 | for i from 0 to n-2: 12 | min_index = i 13 | for j from i+1 to n-1: 14 | if arr[j] < arr[min_index]: 15 | min_index = j 16 | swap(arr[i], arr[min_index]) 17 | ``` 18 | 19 | --- 20 | 21 | ## C++ Implementation 22 | ```cpp 23 | #include <iostream> 24 | using namespace std; 25 | 26 | void selectionSort(int arr[], int n) { 27 | for (int i = 0; i < n - 1; i++) { 28 | int min_index = i; 29 | for (int j = i + 1; j < n; j++) { 30 | if (arr[j] < arr[min_index]) { 31 | min_index = j; 32 | } 33 | } 34 | swap(arr[i], arr[min_index]); 35 | } 36 | } 37 | 38 | void printArray(int arr[], int n) { 39 | for (int i = 0; i < n; i++) { 40 | cout << arr[i] << " "; 41 | } 42 | cout << endl; 43 | } 44 | 45 | int main() { 46 | int arr[] = {64, 25, 12, 22, 11}; 47 | int n = sizeof(arr) / sizeof(arr[0]); 48 | 49 | cout << "Original Array: "; 50 | printArray(arr, n); 51 | 52 | selectionSort(arr, n); 53 | 54 | cout << "Sorted Array: "; 55 | printArray(arr, n); 56 | return 0; 57 | } 58 | ``` 59 | 60 | --- 61 | 62 | ## Step-by-Step Explanation 63 | Let's take an example array: 64 | ```plaintext 65 | arr[] = {64, 25, 12, 22, 11} 66 | ``` 67 | ### **Pass 1:** 68 | - Find the smallest element in `{64, 25, 12, 22, 11}` → **11** 69 | - Swap 11 with 
64 → `{11, 25, 12, 22, 64}` 70 | 71 | ### **Pass 2:** 72 | - Find the smallest element in `{25, 12, 22, 64}` → **12** 73 | - Swap 12 with 25 → `{11, 12, 25, 22, 64}` 74 | 75 | ### **Pass 3:** 76 | - Find the smallest element in `{25, 22, 64}` → **22** 77 | - Swap 22 with 25 → `{11, 12, 22, 25, 64}` 78 | 79 | ### **Pass 4:** 80 | - Find the smallest element in `{25, 64}` → **25** (no swap needed) 81 | - Final sorted array: `{11, 12, 22, 25, 64}` 82 | 83 | --- 84 | 85 | ## Time and Space Complexity 86 | | Case | Time Complexity | 87 | |------------|----------------| 88 | | Best Case | O(n²) | 89 | | Worst Case | O(n²) | 90 | | Average Case | O(n²) | 91 | | Space Complexity | O(1) (In-place sort) | 92 | 93 | --- 94 | 95 | ## Applications and Uses 96 | 1. **Small Data Sets** - Efficient for small datasets where simplicity is preferred. 97 | 2. **No Extra Space Needed** - Works in O(1) extra space (in-place sorting). 98 | 3. **Easy to Implement** - Simple algorithm for teaching sorting concepts. 99 | 4. **Caveat: Not Stable** - The standard implementation doesn't preserve the relative order of equal elements. 100 | 5. **Used in Embedded Systems** - Works well in environments with memory constraints. 101 | 102 | --- 103 | 104 | ## Specific Problems Where Selection Sort is Useful 105 | 1. **Sorting Students by Marks** - If the dataset is small, Selection Sort can be used to arrange students' scores in ascending order. 106 | 2. **Finding the Kth Smallest/Largest Element** - Instead of sorting the entire array, we can run Selection Sort for the first K iterations to find the Kth smallest/largest element efficiently. 107 | 3. **Arranging Players in a Tournament** - When ranking a small number of players based on their scores. 108 | 4. **Selection in Hardware Implementation** - Used where simple, minimal memory sorting is required in embedded systems. 
109 | 110 | --- 111 | 112 | ## Conclusion 113 | Selection Sort is an easy-to-understand sorting algorithm but inefficient for large datasets due to its O(n²) complexity. It is useful for teaching and small-scale applications where memory is a concern. 114 | -------------------------------------------------------------------------------- /Shell Sort.md: -------------------------------------------------------------------------------- 1 | # Shell Sort Algorithm 2 | 3 | ## Definition 4 | Shell Sort is an optimization of Insertion Sort that allows elements to move farther apart in a single step, reducing the total number of shifts and making it noticeably faster than plain Insertion Sort on larger inputs. 5 | 6 | --- 7 | 8 | ## Pseudocode 9 | ```plaintext 10 | ShellSort(arr, n): 11 | Start with a large gap, then reduce the gap 12 | While gap > 0: 13 | Perform insertion sort on elements separated by gap 14 | Reduce the gap 15 | ``` 16 | 17 | --- 18 | 19 | ## C++ Implementation 20 | ```cpp 21 | #include <iostream> 22 | using namespace std; 23 | 24 | void shellSort(int arr[], int n) { 25 | for (int gap = n / 2; gap > 0; gap /= 2) { 26 | for (int i = gap; i < n; i++) { 27 | int temp = arr[i]; 28 | int j; 29 | for (j = i; j >= gap && arr[j - gap] > temp; j -= gap) 30 | arr[j] = arr[j - gap]; 31 | arr[j] = temp; 32 | } 33 | } 34 | } 35 | 36 | void printArray(int arr[], int size) { 37 | for (int i = 0; i < size; i++) 38 | cout << arr[i] << " "; 39 | cout << endl; 40 | } 41 | 42 | int main() { 43 | int arr[] = {12, 34, 54, 2, 3}; 44 | int n = sizeof(arr) / sizeof(arr[0]); 45 | shellSort(arr, n); 46 | cout << "Sorted array: "; 47 | printArray(arr, n); 48 | return 0; 49 | } 50 | ``` 51 | 52 | --- 53 | 54 | ## Step-by-Step Explanation 55 | ### Given array: 56 | ```plaintext 57 | arr[] = {12, 34, 54, 2, 3} 58 | ``` 59 | 1. Start with `gap = n/2 = 5/2 = 2`. 60 | 2. Perform Insertion Sort with elements spaced by `gap`. 61 | 3. Reduce `gap` and repeat sorting until `gap = 1`. 62 | 4. 
Perform a final Insertion Sort on the nearly sorted array. 63 | 64 | Final sorted array: `{2, 3, 12, 34, 54}` 65 | 66 | --- 67 | 68 | ## Time and Space Complexity 69 | | Case | Time Complexity | 70 | |------------|----------------| 71 | | Best Case | O(n log n) | 72 | | Worst Case | O(n²) | 73 | | Average Case | O(n log n) | 74 | | Space Complexity | O(1) | 75 | 76 | --- 77 | 78 | ## Applications and Uses 79 | 1. **Efficient Sorting for Medium-Sized Datasets** - Works better than Insertion Sort. 80 | 2. **Embedded Systems** - Used in hardware with limited processing power. 81 | 3. **Online Data Streams** - Works well when data is received in chunks. 82 | 83 | --- 84 | 85 | ## Specific Problems Where Shell Sort is Useful 86 | 1. **Sorting Small to Medium-Sized Arrays** - Faster than Bubble and Insertion Sort. 87 | 2. **Nearly Sorted Data** - Performs better than Merge or Quick Sort. 88 | 3. **CPU Scheduling Algorithms** - Used in older operating systems. 89 | 90 | --- 91 | 92 | ## Conclusion 93 | Shell Sort improves upon Insertion Sort by reducing the number of swaps, making it efficient for certain datasets while maintaining an easy-to-implement approach. 94 | -------------------------------------------------------------------------------- /Sieve Of Eratosthenes.md: -------------------------------------------------------------------------------- 1 | # Sieve of Eratosthenes 2 | 3 | ## Introduction 4 | The **Sieve of Eratosthenes** is an efficient algorithm for finding all prime numbers up to a given limit. It works by iteratively marking the multiples of each prime number starting from 2. 5 | 6 | ## Algorithm 7 | 1. Create a boolean array `isPrime[]` and initialize all entries as `true`. 8 | 2. Start from the first prime number, 2. 9 | 3. Mark all multiples of 2 (except 2 itself) as `false`. 10 | 4. Move to the next unmarked number (which is the next prime) and repeat the process. 11 | 5. Continue until all numbers up to the given limit are processed. 
12 | 13 | ## Pseudocode 14 | ```plaintext 15 | SieveOfEratosthenes(n): 16 | Create an array isPrime of size n+1 and set all elements to true 17 | Set isPrime[0] and isPrime[1] to false (0 and 1 are not prime) 18 | 19 | For i from 2 to sqrt(n): 20 | If isPrime[i] is true: 21 | For j from i*i to n with step i: 22 | Set isPrime[j] to false 23 | 24 | Print all numbers where isPrime[i] is true 25 | ``` 26 | 27 | ## Implementation in C++ 28 | ```cpp 29 | #include <iostream> 30 | #include <vector> 31 | using namespace std; 32 | 33 | void sieveOfEratosthenes(int n) { 34 | vector<bool> isPrime(n + 1, true); 35 | isPrime[0] = isPrime[1] = false; 36 | 37 | for (int i = 2; i * i <= n; i++) { 38 | if (isPrime[i]) { 39 | for (int j = i * i; j <= n; j += i) { 40 | isPrime[j] = false; 41 | } 42 | } 43 | } 44 | 45 | cout << "Prime numbers up to " << n << ": "; 46 | for (int i = 2; i <= n; i++) { 47 | if (isPrime[i]) { 48 | cout << i << " "; 49 | } 50 | } 51 | cout << endl; 52 | } 53 | 54 | int main() { 55 | int n; 56 | cout << "Enter the limit: "; 57 | cin >> n; 58 | sieveOfEratosthenes(n); 59 | return 0; 60 | } 61 | ``` 62 | 63 | ## Complexity Analysis 64 | - **Time Complexity**: **O(n log log n)** (Highly efficient for large `n`) 65 | - **Space Complexity**: **O(n)** (Stores a boolean array of size `n`) 66 | 67 | ## Example Output 68 | ``` 69 | Enter the limit: 30 70 | Prime numbers up to 30: 2 3 5 7 11 13 17 19 23 29 71 | ``` 72 | 73 | ## Applications 74 | - Finding prime numbers efficiently. 75 | - Cryptography and security algorithms. 76 | - Generating prime numbers for mathematical problems. 77 | 78 | ## References 79 | - [Wikipedia: Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) 80 | - [GeeksforGeeks Explanation](https://www.geeksforgeeks.org/sieve-of-eratosthenes/) 81 | 82 | --- 83 | 84 | This repository contains a simple and efficient implementation of the **Sieve of Eratosthenes** algorithm in C++. Feel free to contribute and improve the code! 
85 | -------------------------------------------------------------------------------- /Sorting Algorithm.md: -------------------------------------------------------------------------------- 1 | # Analysis of Sorting Algorithms 2 | 3 | ## Introduction 4 | Sorting algorithms are fundamental in computer science and are used in various applications such as database management, searching, and data organization. Each sorting algorithm has its own advantages, use cases, and efficiency based on the dataset and problem constraints. 5 | 6 | --- 7 | 8 | ## Brief Working of Sorting Algorithms 9 | 10 | | Algorithm | Working Mechanism | 11 | |-----------|------------------| 12 | | **Bubble Sort** | Repeatedly swaps adjacent elements if they are in the wrong order, moving the largest element to the end in each pass. | 13 | | **Selection Sort** | Selects the smallest element in each iteration and swaps it with the first unsorted element. | 14 | | **Insertion Sort** | Inserts each element in its correct position by shifting larger elements to the right. | 15 | | **Merge Sort** | Recursively divides the array into halves, sorts each half, and merges them in sorted order. | 16 | | **Quick Sort** | Selects a pivot, partitions the array around it, and recursively sorts the subarrays. | 17 | | **Heap Sort** | Converts the array into a heap and repeatedly extracts the maximum (or minimum) element. | 18 | | **Counting Sort** | Uses a frequency array to count occurrences and place elements directly into the sorted array. | 19 | | **Radix Sort** | Sorts numbers digit by digit using counting sort as a subroutine. | 20 | | **Bucket Sort** | Divides the elements into buckets, sorts each bucket, and combines them. | 21 | | **Shell Sort** | Uses a decreasing sequence of gaps to perform insertion sort on elements spaced apart, reducing swaps. 
| 22 | 23 | --- 24 | 25 | ## Comparison of Sorting Algorithms 26 | 27 | | Algorithm | Best Case | Average Case | Worst Case | Space Complexity | Stable | Use Cases | 28 | |---------------|------------|-------------|------------|----------------|--------|-----------| 29 | | **Bubble Sort** | O(n) | O(n²) | O(n²) | O(1) | Yes | Small datasets, teaching purposes | 30 | | **Selection Sort** | O(n²) | O(n²) | O(n²) | O(1) | No | Small datasets, when swaps are expensive | 31 | | **Insertion Sort** | O(n) | O(n²) | O(n²) | O(1) | Yes | Nearly sorted arrays, small datasets | 32 | | **Merge Sort** | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Large datasets, linked lists | 33 | | **Quick Sort** | O(n log n) | O(n log n) | O(n²) | O(log n) | No | General-purpose sorting, quick execution | 34 | | **Heap Sort** | O(n log n) | O(n log n) | O(n log n) | O(1) | No | Priority queues, large datasets | 35 | | **Counting Sort** | O(n + k) | O(n + k) | O(n + k) | O(n + k) | Yes | Sorting integers with a limited range | 36 | | **Radix Sort** | O(nk) | O(nk) | O(nk) | O(n + k) | Yes | Sorting numbers, words, or fixed-length strings | 37 | | **Bucket Sort** | O(n + k) | O(n + k) | O(n²) | O(n + k) | Yes | Floating-point numbers, uniform distribution | 38 | | **Shell Sort** | O(n log n) | O(n log n) | O(n²) | O(1) | No | Medium-sized datasets, optimized insertion sort | 39 | 40 | --- 41 | --------------------------------------------------------------------------------