├── Data Structures ├── Arrays │ ├── README.md │ ├── Topkfrequentelements.md │ ├── 3Sum.md │ ├── Intro_to_Arrays.md │ ├── Implementation_of_queue_using_array.md │ ├── Subarray_vs_Subsequence.md │ ├── TwoSumII.md │ └── queue.md ├── Hashingtechnique.md ├── Linked List │ ├── Implementation_of_queue_using_linkedlist.md │ └── Insertion in Linked List.md ├── Stacks │ └── README.md ├── BinarySearch │ └── Peak_Element_in_MountainArray.md ├── LinkedList.md └── BinarySearchTrees │ └── SearchInsertDelete.md ├── Algorithms ├── Searching Algorithms │ ├── README.md │ ├── BFS.md │ ├── Linear_Search.md │ ├── Binary_Search.md │ └── ternary_Search.md ├── Sorting Algorithms │ ├── README.md │ ├── ShellSort.md │ ├── RadixSort.md │ ├── Insertion_Sort.md │ ├── Heap_Sort.md │ ├── Bubble_Sort.md │ ├── Merge_Sort.md │ ├── Selection_sort.md │ ├── Insertion_Sort_Java.md │ └── Quick sort.md ├── Dynammic programming │ ├── LCS-1.png │ ├── LCS-2.png │ └── Longest Common Subsequence.md ├── Asymptotic Notations │ ├── big-omega.md │ ├── Big-Oh.md │ ├── Introduction.md │ └── Big-theta.md ├── Tree │ ├── Inorder_Traversal.md │ ├── Invert_Binary_Tree.md │ └── Prim's_Algorithm.md ├── Floyd_Warshall │ └── Floyd_Warshall.md ├── Sieve of Eratosthenes │ └── Sieve of Eratosthenes.md ├── Recursive algorithm │ └── Akra_Bazzi.md └── Segmented Sieve │ └── Segmented Sieve.md ├── .github ├── ISSUE_TEMPLATE │ ├── script-addition.md │ └── improve-a-doc.md └── PULL_REQUEST_TEMPLATE.md ├── LICENSE ├── README.md └── CODE_OF_CONDUCT.md /Data Structures/Arrays/README.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /Algorithms/Searching Algorithms/README.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/README.md: 
-------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /Algorithms/Dynammic programming/LCS-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HackClubRAIT/Wizard-Of-Docs/HEAD/Algorithms/Dynammic programming/LCS-1.png -------------------------------------------------------------------------------- /Algorithms/Dynammic programming/LCS-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HackClubRAIT/Wizard-Of-Docs/HEAD/Algorithms/Dynammic programming/LCS-2.png -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/script-addition.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Script Addition 3 | about: Add a DSA doc. 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | ### Description 11 | Describe more about the issue. 12 | 13 | ### Programming language 14 | - [ ] C 15 | - [ ] C++ 16 | - [ ] Java 17 | - [ ] Python 18 | 19 | #### Are you contributing under any open-source program ? 20 | 21 | 22 |
23 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/improve-a-doc.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Improve a Doc 3 | about: Improve an already existing doc 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | ### Description of the change 11 | Describe more about the change. 12 | 13 | ### Domain of Docs 14 | 15 | - [ ] C 16 | - [ ] C++ 17 | - [ ] Java 18 | - [ ] Python 19 | 20 | #### Are you contributing under any open-source program ? 21 | 22 | 23 |
24 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | ## What is the change? 2 | Remove this line and add a description 3 | 4 | ## Related issue? 5 | closes: #issue_number 6 | 7 | ## Checklist: 8 | Before you create this PR, confirm all the requirements listed below by checking the checkboxes `[x]`: 9 | 10 | - [ ] I have followed the [Code of Conduct](https://github.com/HackClubRAIT/Wizard-Of-Docs/blob/ec224497bce316f7b4736a901f70688f251cca87/CODE_OF_CONDUCT.md). 11 | - [ ] I have checked there aren't other open [Pull Requests](https://github.com/siddhi-244/Embellish/pulls) for the same update/change. 12 | - [ ] I have tested the code before submission. 13 | - [ ] I have commented my code, particularly in hard-to-understand areas. 14 | - [ ] My changes generate no new warnings. 15 | - [ ] I'm a (HSOC) Hack-Club-Rait Summer of Code '22 Contributor 16 | 17 | ## Screenshots (if any) 18 | -------------------------------------------------------------------------------- /Algorithms/Searching Algorithms/BFS.md: -------------------------------------------------------------------------------- 1 | ## Breadth First Search 2 | 3 | Breadth-first search (BFS) is an algorithm for exploring a tree or graph. 4 | It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. 5 | 6 | #### Pseudocode: 7 | 8 | 9 | 1 procedure BFS(A, root) is 10 | 2 let l be a queue 11 | 3 label root as explored 12 | 4 l.enqueue(root) 13 | 5 while l is not empty do 14 | 6 v := l.dequeue() 15 | 7 if v is the goal then 16 | 8 return v 17 | 9 for all edges from v to w in A.adjacentEdges(v) do 18 | 10 if w is not labeled as explored then 19 | 11 label w as explored 20 | 12 l.enqueue(w) 21 | 22 | #### Time Complexity: 23 | O(V + E), where V is the number of vertices and E is the number of edges.
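The pseudocode above can be sketched as runnable Python. The adjacency-list dictionary `graph`, the `goal` parameter, and the name `bfs` follow the pseudocode but are otherwise illustrative assumptions, not part of the original doc:

```python
from collections import deque

def bfs(adj, root, goal):
    # adj: dict mapping each node to a list of its neighbours.
    explored = {root}          # label root as explored
    q = deque([root])          # let q be a queue
    while q:
        v = q.popleft()
        if v == goal:
            return v
        for w in adj[v]:       # all edges from v to w
            if w not in explored:
                explored.add(w)
                q.append(w)
    return None                # goal not reachable

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs(graph, 1, 4))  # 4
print(bfs(graph, 1, 5))  # None
```

Each vertex is enqueued at most once and each edge inspected once, which is where the O(V + E) bound comes from.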
24 | #### Space Complexity: 25 | O(|V|) = O(b^d) 26 | 27 | 28 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 HackClubRAIT 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Algorithms/Asymptotic Notations/big-omega.md: -------------------------------------------------------------------------------- 1 | 2 | # 🧭Big-Ω(Omega) 3 | Big-Omega (Ω) notations are used to define the lower bound for the runtime taken by the algorithm. Sometimes, we define the least amount of time while not stating the maximum limit. Big-Ω comes in handy in such cases. 4 | # 5 | ### Definition- 6 | Big-Omega (Ω) notation gives a lower bound for a function _f(n)_. 
7 | 8 | We write _f(n) = Ω(g(n))_, if there are positive constants *n0* and *c* such that, to the right of *n0*, *f(n)* always lies on or above _c.g(n)_. 9 | 10 | *Ω(g(n)) = { f(n) : There exist positive constants c and n0 such that 0 ≤ c g(n) ≤ _f(n)_, for all n ≥ n0}* 11 | ![](https://www.tutorialspoint.com/assets/questions/media/26169/big_omega.jpg) 12 | # 13 | We say that the running time is "big-Ω of *f(n)***.**" We use the big-Ω notation for **asymptotic lower bounds** since it bounds the growth of the running time from below for large enough input sizes. 14 | # 15 | #### Note- 16 | Just like Big-O, Big-Ω does not give a precise view of the running time. For example, we can correctly say that the worst-case running time of binary search is *Ω(1)*, but that lower bound is far from tight. 17 | -------------------------------------------------------------------------------- /Algorithms/Asymptotic Notations/Big-Oh.md: -------------------------------------------------------------------------------- 1 | # ⭕ Big-Oh(O) 2 | Big-O notation is used to define an upper bound. This means that it defines the maximum value or the upper limit of the time taken by the program to complete. 3 | The Big-O notation is the most commonly used notation to define an algorithm's run time. 4 |
5 | ![](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTDfxCAOEiRPGw7ip5x2nCOX4GDeJ3II1LRxxMordmViO5Ae7nFwP4LJJXgQvEW9NcxGfY&usqp=CAU) 6 |
7 | Let us understand this by taking the example of the binary search algorithm. Its worst case is *O(log2 n)*, but the algorithm can also find a solution in the first iteration. 8 | So does its time complexity become *Θ(1)*? 9 | No, it doesn't. But we can precisely say that its runtime never grows faster than *log2 n*. In such a case we need to use an upper bound, that is, the Big-O notation. 10 | # 11 | #### ✍Definition- 12 | *Big-Oh is about finding an asymptotic upper bound.* 13 | Formal definition of Big-Oh: 14 | f(N) = O(g(N)), if there exist positive constants c, N0 such that _f(N) ≤ c **.** g(N) ∀ N ≥ N0 ._ 15 | ![](https://cdn.programiz.com/sites/tutorial2program/files/big0.png) 16 | - The topic of concern here is growth, i.e. how _f_ grows when _N_ is large. 17 | 18 | - We are not concerned with small _N_ or constant factors. 19 | 20 | 21 | # 22 | 23 | #### 📝 Note- 24 | We cannot say that the running time of binary search is _always_ _Θ(log2 n)_, since *Θ* is a tight bound, unlike _O_. An _O_ bound is like saying: if you have 10 rupees in your pocket, you can truthfully say that you have no more than 10 million rupees in your pocket. 25 | The above statement, though true, doesn't give a precise view of the money in your pocket. 26 | -------------------------------------------------------------------------------- /Algorithms/Asymptotic Notations/Introduction.md: -------------------------------------------------------------------------------- 1 | 2 | # 🤖 Introduction 3 | Analysis of algorithms can't be done without comparing algorithms. We need to understand why a certain algorithm is preferred over another for certain scenarios. 4 | 5 | > We are interested in the time taken by an algorithm 6 | 7 | In simple words, we just need a measure of the runtime of an algorithm, i.e. its running time.
8 | The running time of an algorithm is simply how much time a computer takes to run the lines of code of the algorithm. 9 | That really depends on a variety of factors: 10 | 11 | 12 | 13 | - ⚡ The speed of the computer 14 | - 💬The programming language 15 | - 👾The compiler 16 | - 🙄And some other factors too 17 | 18 | 19 | 20 | The above-mentioned factors are not in the hands of a developer. This means that if a particular algorithm gives a better time on a particular machine with some specific setup, it will not necessarily give similar results on other machines. So what is a generalized way to measure the time taken by an algorithm? 21 | The universally accepted method is to define the runtime in terms of the input size. For example, if a selection sort algorithm is given an array of only 2 elements, its outer loop makes only 2 iterations to solve the problem, but for the same algorithm, if an array of size 5 is given, it makes 5 iterations. The time taken therefore grows with the input size. 22 | The idea is that to measure an algorithm's efficiency we must focus on how fast a function grows with the input size, also known as the **rate of growth** of the running time. 23 | 24 | ## Asymptotic Notations 25 | 26 | Asymptotic notations are a measure used to describe the running time of an algorithm: how much time an algorithm takes with a given input size n. Just as we use seconds to measure time, we use asymptotic notations to measure how long an algorithm will take to complete in terms of its input size. There are three different notations: big Oh (O), big Theta (Θ), and big Omega (Ω).
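The rate-of-growth idea above can be made concrete with a short Python sketch (the helper name `selection_sort_comparisons` is illustrative): the comparison count depends only on the input size n, never on the machine, language, or compiler.

```python
def selection_sort_comparisons(arr):
    # Sort a copy with selection sort and count element comparisons.
    a, comparisons = list(arr), 0
    for i in range(len(a)):
        min_idx = i
        for j in range(i + 1, len(a)):
            comparisons += 1
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return comparisons

# The count is exactly n*(n-1)/2, a function of the input size alone.
print(selection_sort_comparisons([5, 2]))           # 1
print(selection_sort_comparisons([9, 5, 2, 7, 1]))  # 10
```

Running the same inputs on any machine gives the same counts, which is why we analyse growth in n rather than wall-clock seconds.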
27 | -------------------------------------------------------------------------------- /Data Structures/Arrays/Topkfrequentelements.md: -------------------------------------------------------------------------------- 1 | # Top K Frequent Elements 2 | 3 | Problem Link : https://leetcode.com/problems/top-k-frequent-elements/ 4 | 5 | ### Problem : 6 | 7 | Given an integer array nums and an integer k, return the k most frequent elements. You may return the answer in any order. 8 | 9 | ### Example 1: 10 | 11 | Input: nums = [1,1,1,2,2,3], k = 2
12 | Output: [1,2] 13 | 14 | ### Example 2: 15 | 16 | Input: nums = [1], k = 1
17 | Output: [1] 18 | 19 | ### Explanation 20 | 21 | Sample input: [1,1,1,2,2,3] and k = 2 22 | 23 | 1) Use map/dictionary and store the frequency of the number and maximum frequency of all the numbers.
24 | So at the end of this operation, for the sample problem, the map would look like this: 1 → 3, 2 → 2, 3 → 1. Also, the maximum frequency will be 3.
25 | 2) Now, since we cannot use a regular sorting approach (it would cost O(n log n)), another thing that comes to mind is bucket sort.
26 | 3) Create a multi-level bucket with (maximum frequency + 1) as its size. Now, based on the frequency of each number, put it in the appropriate bucket level.
In our example, put 1 at level 3, put 2 at level 2, and put 3 at level 1.
27 | 4) There might be more than one number with the same frequency, so we can use a linked list to store more than one element at the same level.
28 | 5) Now, iterate over the bucket levels from the highest frequency down to the lowest, and keep a counter to match with the input value k. 29 | 30 | ### Solution 31 | 32 | - In Python 33 | 34 | ```py 35 | class Solution: 36 | def topKFrequent(self, nums: List[int], k: int) -> List[int]: 37 | count = {} 38 | freq = [[] for i in range(len(nums) + 1)] 39 | 40 | for n in nums: 41 | count[n] = 1 + count.get(n, 0) 42 | for n, c in count.items(): 43 | freq[c].append(n) 44 | 45 | res = [] 46 | for i in range(len(freq) - 1, 0, -1): 47 | for n in freq[i]: 48 | res.append(n) 49 | if len(res) == k: 50 | return res 51 | 52 | 53 | ``` 54 | ### Time Complexity - O(n) 55 | ### Space Complexity - O(n) 56 | -------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/ShellSort.md: -------------------------------------------------------------------------------- 1 | ## Description 2 | Shell sort is a variation of the insertion-sort algorithm. It initially sorts elements that are far apart from each other and successively reduces the interval between the elements to be sorted. 3 | 4 | Shell sort is similar to the insertion sort algorithm and is efficient at sorting arrays whose elements sit far from their sorted positions. If an element in an unsorted array is far from its sorted position, insertion sort becomes costly, as it compares and shifts every greater element one by one (i.e. it takes (element's original position - element's sorted position) swaps/shifts to sort that element). See figure 1, 5 | 6 | Shell sort addresses this problem and reduces the number of shifts/swaps by dividing the array into subarrays separated by an interval (gap) and then applying insertion sort on the sub-arrays. This process is repeated with a decreasing interval (gap) size until the gap becomes 0. As a result, the number of swaps reduces significantly, but at the cost of more comparisons.
7 | 8 | ## Code 9 | ```cpp 10 | 11 | #include <iostream> 12 | using namespace std; 13 | 14 | 15 | void ShellSort(int arr[], int n) { 16 | 17 | for (int level = n / 2; level > 0; level /= 2) { 18 | for (int i = level; i < n; i += 1) { 19 | int temp = arr[i]; 20 | int j; 21 | for (j = i; j >= level && arr[j - level] > temp; j -= level) { 22 | arr[j] = arr[j - level]; 23 | } 24 | arr[j] = temp; 25 | } 26 | } 27 | } 28 | 29 | 30 | void printArray(int array[], int size) { 31 | for (int i = 0; i < size; i++) 32 | cout << array[i] << " "; 33 | cout << endl; 34 | } 35 | 36 | 37 | int main() { 38 | int arr[] = {10, 2, 3, 7, 4, 6, 5, 1}; 39 | int size = sizeof(arr) / sizeof(arr[0]); 40 | ShellSort(arr, size); 41 | cout << "Sorted Array : \n"; 42 | printArray(arr, size); 43 | } 44 | 45 | ``` 46 | 47 | ## Complexities 48 | ### Time complexity : 49 | Best Case : O(n log n) 50 | 51 | Worst Case : O(n^2) 52 | 53 | Average : O(n log n) 54 | 55 | ### Space complexity : 56 | O(1) 57 | 58 | 59 | -------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/RadixSort.md: -------------------------------------------------------------------------------- 1 | ## Description 2 | Radix sort is a sorting technique that differs in concept from bubble, selection, merge, or quick sort: it sorts by processing the digits of the elements, from the rightmost (least significant) digit to the leftmost (most significant), grouping the elements by the digit at the 3 | nth position. 4 | Radix sort is a non-comparison based sorting algorithm. It depends on the place values of the numbers; elements are grouped according to their place values, and thus sorting is performed.
5 | 6 | ## Code 7 | ```cpp 8 | 9 | #include <iostream> 10 | using namespace std; 11 | 12 | 13 | int getMax(int arr[], int size) { 14 | int max = arr[0]; 15 | for (int i = 1; i < size; i++) 16 | if (arr[i] > max) 17 | max = arr[i]; 18 | return max; 19 | } 20 | 21 | 22 | void CountingSort(int arr[], int size, int place) { 23 | const int max = 10; 24 | int output[size]; 25 | int count[max]; 26 | 27 | for (int i = 0; i < max; ++i) 28 | count[i] = 0; 29 | 30 | 31 | for (int i = 0; i < size; i++) 32 | count[(arr[i] / place) % 10]++; 33 | 34 | 35 | for (int i = 1; i < max; i++) 36 | count[i] += count[i - 1]; 37 | 38 | 39 | for (int i = size - 1; i >= 0; i--) { 40 | output[count[(arr[i] / place) % 10] - 1] = arr[i]; 41 | count[(arr[i] / place) % 10]--; 42 | } 43 | 44 | for (int i = 0; i < size; i++) 45 | arr[i] = output[i]; 46 | } 47 | 48 | void RadixSort(int arr[], int size) { 49 | 50 | int max = getMax(arr, size); 51 | 52 | 53 | for (int place = 1; max / place > 0; place *= 10) 54 | CountingSort(arr, size, place); 55 | } 56 | 57 | 58 | void printArray(int array[], int size) { 59 | for (int i = 0; i < size; i++) 60 | cout << array[i] << " "; 61 | cout << endl; 62 | } 63 | 64 | 65 | int main() { 66 | int arr[] = {12, 132, 164, 23, 1, 45, 78}; 67 | int size = sizeof(arr) / sizeof(arr[0]); 68 | RadixSort(arr, size); 69 | printArray(arr, size); 70 | } 71 | ``` 72 | 73 | ## Complexities 74 | ### Time complexity : 75 | Best Case : O(n+k) 76 | 77 | Worst Case : O(n+k) 78 | 79 | Average : O(n+k) 80 | 81 | ### Space complexity : 82 | O(max) 83 | 84 | 85 | -------------------------------------------------------------------------------- /Algorithms/Asymptotic Notations/Big-theta.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # 🕒Big-θ (Big-Theta) notation 4 | Let us take an example of an array of *n* elements. To perform a linear search on the array, the maximum number of times that the for-loop can run is *n*.
This worst-case occurs when the value being searched for is not present in the array. 5 |
6 | For every iteration of the for loop there is a certain fixed number of computations, namely: 7 | 8 | - Accessing the ith element of the array. 9 | - Comparing the value of that element with the value to be found. 10 | - Returning the value if it is found. 11 | - Incrementing the iterator. 12 | 13 | Now, all of these computations take a constant amount of time for each iteration. Let us assume the computations collectively take c1 units of time. Therefore, for n iterations, it would take *c1.n* units of time. 14 | Also, there might be some extra time required for accepting the value to be found from the user, initialising the pointer for the array, initialising the declared variables, etc. Assume that takes c2 units. Therefore the total time taken sums up to c1.n + c2. 15 | But for a very large value of *n*, *c2* can be ignored. Also, since *c1* is constant, we can derive that the time taken is a function of the array size *n*. The notation we use for this running time is *Θ(n)* (theta of n, or big-theta of n). 16 | 17 | 18 | When the running time is *Θ(n)*, it implies that once *n* gets large enough, the running time is at least k1**.** *n* and at most k2**.** *n* for some constants k1 and k2. Here's how to think of *Θ(n)*: 19 | ![](https://cdn.kastatic.org/ka-perseus-images/c14a48f24cae3fd563cb3627ee2a74f56c0bcef6.png) 20 | We are not concerned with small values of *n*, only with values larger than the *n* at the dashed line, since for small values *c2* also plays a part. 21 | We can use any function of *n* here, like *n^2*, *n.log2 n*, etc. 22 | # 23 | When we use big-*Θ*, it provides, more precisely than an "average case", an asymptotically tight bound on the time required to run the code.
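As a rough illustration of the *c1.n + c2* model above, here is a Python sketch (the counting helper is a hypothetical name, not part of the original doc) that counts the iterations of an unsuccessful linear search; the dominant term grows linearly with *n*:

```python
def linear_search_ops(arr, target):
    # Count loop iterations; each iteration does a constant amount of work (the c1 part).
    ops = 0
    for i, value in enumerate(arr):
        ops += 1
        if value == target:
            return i, ops
    return -1, ops

# Worst case (value absent): the loop runs exactly n times, so time ~ c1*n + c2.
_, ops_small = linear_search_ops(list(range(10)), -1)
_, ops_big = linear_search_ops(list(range(1000)), -1)
print(ops_small, ops_big)  # 10 1000
```

Growing the input 100-fold grows the count 100-fold, which is exactly what a Θ(n) running time predicts once the constant c2 is negligible.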
24 | -------------------------------------------------------------------------------- /Data Structures/Arrays/3Sum.md: -------------------------------------------------------------------------------- 1 | # 3Sum 2 | This problem in some ways is a continuation of the two sum problem. So, if you have not yet solved the two Sum problem then you have to do so because it will help you understand the 3 Sum problem better. 3 | 4 | Problem Link : https://leetcode.com/problems/3sum/ 5 | 6 | ### Problem : 7 | 8 | Given an integer array nums , return all the triplets [nums[i], nums[j], nums[k]] such that i != j, i != k, and j != k, and
nums[i] + nums[j] + nums[k] == 0. 9 | 10 | Notice that the solution set must not contain duplicate triplets. 11 | 12 | ### Example 1: 13 | 14 | Input: nums = [-1,0,1,2,-1,-4]
15 | Output: [[-1,-1,2],[-1,0,1]] 16 | 17 | ### Example 2: 18 | 19 | Input: nums = [0,1,1]
20 | Output: []
21 | Explanation: The only possible triplet does not sum up to 0. 22 | 23 | ### Solution : Efficient Solution 24 | - Using three pointers 25 | - The idea is to sort the array first, then run two loops to process the triplets. We fix the outer loop and move the two pointers (indexes) of the inner loop inwards to arrive at the result. 26 | 27 | ### Algorithm 28 | 29 | 1) Sort the given array.
30 | 2) Loop over the array and fix the first element of the possible triplet, arr[i].
31 | 3) Then fix two pointers, one at i + 1 and the other at n – 1. And look at the sum,
32 | a) If the sum is smaller than the required sum, increment the first pointer.
33 | b) Else, if the sum is bigger, decrease the end pointer to reduce the sum.
34 | c) Else, if the sum of the elements at the two pointers is equal to the given sum, then record the triplet and continue. 35 | 36 | 37 | ### Implementation 38 | 39 | #### In Python 40 | ```py 41 | class Solution: 42 | def threeSum(self, nums: List[int]) -> List[List[int]]: 43 | res = [] 44 | nums.sort() 45 | 46 | for i, a in enumerate(nums): 47 | if i > 0 and a == nums[i - 1]: 48 | continue 49 | 50 | l, r = i + 1, len(nums) - 1 51 | while l < r: 52 | threeSum = a + nums[l] + nums[r] 53 | if threeSum > 0: 54 | r -= 1 55 | elif threeSum < 0: 56 | l += 1 57 | else: 58 | res.append([a, nums[l], nums[r]]) 59 | l += 1 60 | while nums[l] == nums[l - 1] and l < r: 61 | l += 1 62 | return res 63 | ``` 64 | ### Time Complexity - O(n^2) 65 | ### Space Complexity - O(1) 66 | -------------------------------------------------------------------------------- /Algorithms/Searching Algorithms/Linear_Search.md: -------------------------------------------------------------------------------- 1 | ## Linear Search 2 | 3 | The linear search algorithm is the simplest algorithm for sequential search: it iterates over the sequence and checks one item at a time, until the desired item is found or all items have been examined. There are two types of linear search methods : 4 | 5 | * **Unordered Linear Search** 6 | 7 | * **Ordered Linear Search** 8 | 9 | ### Unordered Linear Search: 10 | Let us assume we are given an array where the order of elements is not known. That means the elements of the array are not sorted. In this case, to search for an element we have to scan the complete array and see if the element is there in the given list or not. 11 | 12 | #### Pseudocode: 13 | 14 | ```cpp 15 | int UnorderedLS(int A[], int n, int data) { 16 | for(int i = 0; i < n; i++) { 17 | if(A[i] == data) 18 | return i; 19 | } 20 | return -1; 21 | } 22 | ``` 23 | #### Time Complexity: 24 | O(n); in the worst case we need to scan the complete array.
25 | #### Space Complexity: 26 | O(1) 27 | 28 | ### Ordered Linear Search: 29 | If the elements of the array are already sorted (i.e. the user inputs sorted data), then in many cases we don't have to scan the complete array to see if the element is there in the given array or not. In the pseudocode below, you can see that, at any point, if the value at A[i] is greater than the data to be searched, then we just return -1 without searching the remaining array. 30 | 31 | #### Pseudocode: 32 | 33 | ```cpp 34 | int OrderedLS(int A[], int n, int data) { 35 | for(int i = 0; i < n; i++) { 36 | if(A[i] == data) 37 | return i; 38 | else if(A[i] > data) 39 | return -1; 40 | } 41 | return -1; 42 | } 43 | ``` 44 | #### Time Complexity: 45 | O(n); in the worst case we need to scan the complete array. 46 | #### Space Complexity: 47 | O(1) 48 | 49 | ### Program 50 | 51 | ```cpp 52 | #include <iostream> 53 | 54 | using namespace std; 55 | 56 | int main() 57 | { 58 | int a[20],n,x,i,p=0; 59 | cout<<"Enter the size of the array max[20]"; 60 | cin>>n; 61 | cout<<"\nEnter elements of the array\n"; 62 | for(i=0;i<n;i++) 63 | cin>>a[i]; 64 | cout<<"\nEnter element to search:"; 65 | cin>>x; 66 | for(i=0;i<n;i++) 67 | { 68 | if(a[i]==x) 69 | { 70 | p=1; 71 | break; 72 | } 73 | } 74 | if(p==1) 75 | cout<<"\nElement found at position "<<i+1; 76 | else 77 | cout<<"\nElement not found"; 78 | return 0; 79 | } 80 | ``` -------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/Insertion_Sort.md: -------------------------------------------------------------------------------- 1 | # Insertion Sort 2 |
3 |
4 | 5 | ## What is an Insertion Sort ? 6 | 7 | Insertion sort is a simple sorting algorithm that works similar to the way you sort playing cards in your hands. The array is virtually split into a sorted and an unsorted part.
8 | Values from the unsorted part are picked and placed at the correct position in the sorted part. 9 |
10 |
11 | 12 | ## Algorithm 13 | 14 | To sort an array of size n in ascending order: 15 | 16 | 1. Iterate from arr[1] to arr[n-1] over the array. 17 | 2. Compare the current element (key) to its predecessor. 18 | 3. If the key element is smaller than its predecessor, compare it to the elements before. Move the greater elements one position up to make space for the swapped element. 19 | 20 |

21 | 22 | ## Code: 23 | 24 | Here is the code for Insertion Sort using C++
25 |
26 | 27 | ```cpp 28 | #include <iostream> 29 | using namespace std; 30 | 31 | int main() 32 | { 33 | int n; 34 | cout << "Enter the total elements" << endl; 35 | cin >> n; 36 | 37 | int arr[n]; 38 | 39 | // taking input 40 | 41 | cout << "Enter the elements" << endl; 42 | for (int i = 0; i < n; i++) 43 | { 44 | cin >> arr[i]; 45 | } 46 | 47 | // sorting the array 48 | 49 | for (int i = 1; i < n; i++) 50 | { 51 | int current = arr[i]; 52 | int j = i - 1; 53 | 54 | while (j >= 0 && arr[j] > current) 55 | { 56 | arr[j + 1] = arr[j]; 57 | j--; 58 | } 59 | 60 | arr[j + 1] = current; 61 | } 62 | 63 | cout << "The sorted array :" << endl; 64 | for (int i = 0; i < n; i++) 65 | { 66 | cout << arr[i] << " "; 67 | } 68 | 69 | return 0; 70 | } 71 | 72 | 73 | ``` 74 | 75 |
76 | 77 | ``` 78 | Input : 12, 11, 13, 5, 6 79 | Output : 5, 6, 11, 12, 13 80 | ``` 81 | 82 |
83 | 84 | ## Time Complexity 85 | 86 | The time complexity for insertion sort is O(n^2) in the worst case 87 | 88 |
89 | 90 | ### Auxiliary Space: 91 | 92 | The sorting algorithm takes a constant space O(1) 93 | 94 |
95 | 96 | ### Boundary Cases: 97 | 98 | Insertion sort takes maximum time to sort if elements are sorted in reverse order. And it takes minimum time (Order of n) when elements are already sorted. 99 | 100 |
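The boundary cases above can be demonstrated with a small Python sketch (the shift-counting helper is hypothetical, not part of the original doc): an already sorted array needs no shifts, while a reverse-sorted one needs n*(n-1)/2.

```python
def insertion_sort_shifts(arr):
    # Sort a copy with insertion sort and count how many element shifts occur.
    a, shifts = list(arr), 0
    for i in range(1, len(a)):
        current, j = a[i], i - 1
        while j >= 0 and a[j] > current:
            a[j + 1] = a[j]  # shift the greater element one position up
            shifts += 1
            j -= 1
        a[j + 1] = current
    return shifts

print(insertion_sort_shifts([1, 2, 3, 4, 5]))  # 0  (already sorted: minimum time)
print(insertion_sort_shifts([5, 4, 3, 2, 1]))  # 10 (reverse sorted: maximum time)
```

The first call shows the Order-of-n best case (the inner loop never runs); the second shows the O(n^2) worst case.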
101 | 102 | ### Algorithmic Paradigm: 103 | 104 | Insertion sort uses the Incremental Approach 105 | 106 |
107 | 108 | ### Sorting In Place: 109 | 110 | The sorting is done in place to avoid extra memory. -------------------------------------------------------------------------------- /Data Structures/Arrays/Intro_to_Arrays.md: -------------------------------------------------------------------------------- 1 | # Arrays 2 | 3 | ``` 4 | An array in C/C++, or in any programming language, 5 | is a collection of similar data items stored at contiguous 6 | memory locations, and its elements can be accessed randomly 7 | using the indices of the array 8 | ``` 9 | 10 | # Declaration of Arrays in C++ 11 | 12 | ## 1-> Size Specification 13 | int arr[10]; //this is one of the ways to declare an array, with size *10* in this case 14 | int n = 10; 15 | int arr[n]; // the size of the array can be a user input as well 16 | 17 | ## 2-> With values but no size 18 | int arr[] = {1,2,3,4,5}; //This way we can declare an array of some elements 19 | ## 3-> With values and size as well 20 | int arr2[6] = {19,10,8,17,9,15}; //This way we can declare an array of n elements, where n = 6 in this case 21 | 22 | ![array](https://cdn.programiz.com/sites/tutorial2program/files/cpp-array-initialization.png) 23 |

24 | ## 4-> Dynamic array allocation 25 | int * arr = new int[5];//this way we can allocate dynamic contiguous memory for our array 26 | 27 | --One more way to do it is with a user-input size-- 28 | 29 | int n; 30 | cin>>n; 31 | 32 | int * arr = new int[n];//This way the array memory will be allocated at the runtime of our program 33 | 34 | Accessing indices in this type of array is quite interesting; for example, 35 | 36 | arr[i] //will allow us to access the integer value at the ith index 37 | 38 | i[arr] //this works the same as the above 39 | 40 | * (i + arr) //even this is just another way of accessing the ith index 41 | 42 | -- all thanks to the pointers in C++ -- 43 | 44 | 45 | ## Code Snippets 46 | 47 | 48 | # Example1 : 49 | 50 | #include <iostream> 51 | using namespace std; 52 | 53 | int main() 54 | { 55 | int arr[5]; 56 | arr[0] = 5; 57 | arr[2] = -10; 58 | 59 | // this is same as arr[1] = 2 60 | arr[3 / 3] = 2; 61 | arr[3] = arr[0]; 62 | 63 | cout << arr[0] << " " << arr[1] << " " << arr[2] << " " 64 | << arr[3]; 65 | 66 | return 0; 67 | } 68 | 69 | 70 | # Example2 : 71 | 72 | #include <iostream> 73 | using namespace std; 74 | 75 | int main() 76 | { 77 | int * arr = new int[4]; 78 | arr[0] = 5; 79 | arr[2] = -10; 80 | 81 | // this is same as arr[1] = 2 82 | arr[3 / 3] = 2; 83 | arr[3] = arr[0]; 84 | 85 | cout << arr[0] << " " << arr[1] << " " << arr[2] << " " 86 | << arr[3]; 87 | 88 | return 0; 89 | } 90 | 91 | 92 | Output : 5 2 -10 5 93 | 94 | 95 | 96 | 97 | -------------------------------------------------------------------------------- /Algorithms/Tree/Inorder_Traversal.md: -------------------------------------------------------------------------------- 1 | # ⭐ Inorder Tree Traversal (Left👈 -> Root☝ -> Right👉) 2 | In computer science, tree traversal (also known as tree search and walking the tree) is a form of graph traversal and refers to the process of visiting (e.g. retrieving, updating, or deleting) each node in a tree data structure, exactly once.
Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well. 3 | #### Example: 4 | ##### Input: `root node` *(Pointer)* 5 | ##### Output: `A B C D E F G H I ` 6 | ##### Explanation: 7 | ### Consider structure of Tree Node for clear understanding 8 | ##### Node consist of value and pointer to its left and right child 9 | ```py 10 | class Node: 11 | def __init__(self, data): 12 | self.left = None 13 | self.right = None 14 | self.data = data 15 | ``` 16 | 17 | 18 | ![tree](https://upload.wikimedia.org/wikipedia/commons/7/75/Sorted_binary_tree_ALL_RGB.svg)
- Recursively traverse the current node's left subtree.
- Visit the current node (in the figure: position green).
- Recursively traverse the current node's right subtree.

##### Note
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, in-order traversal retrieves the keys in ascending sorted order.
> #### Recursive
### Pseudo Code
``` js
procedure inorder(node)
    // if no node then backtrack
    if node = null
        return

    inorder(node.left)
    visit(node)
    inorder(node.right)
```
### Code `Python`
``` py
def inorder(root):
    if not root:
        return
    inorder(root.left)
    print(root.data)
    inorder(root.right)
```

> #### Iterative
### Pseudo Code
``` js
procedure iterativeInorder(node)
    stack ← empty stack
    while not stack.isEmpty() or node ≠ null
        if node ≠ null
            stack.push(node)
            node ← node.left
        else
            node ← stack.pop()
            visit(node)
            node ← node.right
```
### Code `Python`
``` py
def iterativeInorder(root):
    stack = []
    temp = root
    # loop while there is a node to process OR nodes left on the stack
    while stack or temp:
        if temp:
            stack.append(temp)
            temp = temp.left
        else:
            temp = stack.pop()
            print(temp.data)
            temp = temp.right
```


#### ⏲️ Time Complexities:
`O(n)` *As we are visiting every node*
#### 👾 Space complexities:
`O(h)` *recursion (or explicit) stack space, where h is the height of the tree; O(n) in the worst case of a skewed tree*
-------------------------------------------------------------------------------- /Data Structures/Hashingtechnique.md: --------------------------------------------------------------------------------
# Hashing
- The technique of mapping a large chunk of data into small tables with the help of a hashing function is called hashing.
- Hashing is a technique designed to solve the problem of efficiently storing and finding data in an array.
- Hash tables are used to store the data in an array format.
- Hashing is a two-step process.
  1. The hash function converts the item into a small integer (hash value), and this integer is used as the index at which the original data is stored.
  2. The data is stored in a hash table. A hash key can be used to locate data quickly.



## Examples of Hashing in Data Structure

- In school, a teacher assigns a unique roll number to each student. Later, the teacher uses that roll number to retrieve information about that student.

## Hash Function
- A function that maps data of arbitrary size to fixed-size data is called a hash function.
- Its output is called a hash value, hash code, or hash sum.
    hash = hashfunc(key)
    index = hash % array_size
- The hash function must satisfy the following requirements:
  1. A good hash function should be easy to compute.
  2. A good hash function should avoid clustering and should distribute keys evenly across the hash table.
  3. A good hash function should minimize collisions, i.e. situations where two elements or items get assigned the same hash value.

## Hash Table
- Hashing uses hash tables to store the key-value pairs.
- The hash table uses the hash function to generate an index.
- This unique index is used to perform insert, update, and search operations.
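The two-step process above, `hash = hashfunc(key)` followed by `index = hash % array_size`, can be sketched in Python. The toy hash function and the stored records here are illustrative assumptions, not a prescribed scheme:

```py
def hashfunc(key):
    # Toy hash function: sum of character codes (illustrative only;
    # a real hash function would distribute keys far more evenly).
    return sum(ord(c) for c in str(key))

array_size = 10
table = [None] * array_size

# Store key-value pairs at the index computed by the hash function.
for key, value in [("roll42", "Alice"), ("roll7", "Bob")]:
    index = hashfunc(key) % array_size  # index = hash % array_size
    table[index] = (key, value)

# Lookup repeats the same computation to find the slot directly.
index = hashfunc("roll7") % array_size
print(table[index])  # ('roll7', 'Bob')
```

Note that this sketch ignores collisions; the next section covers how they are resolved.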

# Collision Resolution Techniques
- If two keys are assigned the same index number in the hash table, hashing runs into a collision.
- As each index in a hash table is supposed to store only one value, a collision creates a problem.
- Hashing uses several collision resolution techniques to manage the performance of a hash table.

## Linear Probing
- Hashing may produce an array index that is already occupied by another value.
- In such a case, linear probing searches linearly for the next empty cell and stores the value there.

## Double Hashing
- The double hashing technique uses two hash functions.
- The second hash function is used only when the first function causes a collision.
- It provides an offset from the original index at which to store the value.
- The formula for the double hashing technique is as follows:
    (firstHash(key) + i * secondHash(key)) % sizeOfTable

- Compared to linear probing, double hashing has a higher computation cost per probe, but it avoids clustering and therefore typically reaches a free slot in fewer probes.
-------------------------------------------------------------------------------- /Data Structures/Arrays/Implementation_of_queue_using_array.md: --------------------------------------------------------------------------------
# C++ Program to Implement Queue using Array

```
A queue is an abstract data structure that contains a collection of elements. Queue implements the FIFO mechanism i.e. the element that is inserted first is also deleted first. In other words, the least recently added element is removed first in a queue.
```


### 1-> The function Insert() inserts an element into the queue. If rear is equal to n-1, then the queue is full and overflow is displayed. If front is -1, it is incremented by 1. Then rear is incremented by 1 and the element is inserted at index rear.
This is shown below −
```cpp
void Insert() {
   int val;
   if (rear == n - 1)
      cout<<"Queue Overflow"<<endl;
   else {
      if (front == - 1)
         front = 0;
      cout<<"Insert the element in queue : "<<endl;
      cin>>val;
      rear++;
      queue[rear] = val;
   }
}
```

### 2-> In the function Delete(), if there are no elements in the queue then it is an underflow condition. Otherwise the element at front is displayed and front is incremented by one. This is shown below −
```cpp
void Delete() {
   if (front == - 1 || front > rear) {
      cout<<"Queue Underflow "<<endl;
      return ;
   }
   else {
      cout<<"Element deleted from queue is : "<< queue[front] <<endl;
      front++;
   }
}
```

### 3-> In the function Display(), if front is -1 then the queue is empty. Otherwise all the queue elements are displayed using a for loop. This is shown below −
```cpp
void Display() {
   if (front == - 1)
      cout<<"Queue is empty"<<endl;
   else {
      cout<<"Queue elements are : ";
      for (int i = front; i <= rear; i++)
         cout<<queue[i]<<" ";
      cout<<endl;
   }
}
```

### 4-> The function main() provides a choice to the user if they want to insert, delete or display the queue. According to the user response, the appropriate function is called using switch. If the user enters an invalid response, then that is printed. The code snippet for this is given below −
```cpp
int main() {
   int ch;
   cout<<"1) Insert element to queue"<<endl;
   cout<<"2) Delete element from queue"<<endl;
   cout<<"3) Display all the elements of queue"<<endl;
   cout<<"4) Exit"<<endl;
   do {
      cout<<"Enter your choice : "<<endl;
      cin>>ch;
      switch (ch) {
         case 1: Insert();
            break;
         case 2: Delete();
            break;
         case 3: Display();
            break;
         case 4: cout<<"Exit"<<endl;
            break;
         default: cout<<"Invalid choice"<<endl;
      }
   } while (ch != 4);
   return 0;
}
```
-------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/Heap_Sort.md: --------------------------------------------------------------------------------
# Heap Sort

## What is Heap Sort?

Heap Sort is a sorting technique which uses the Binary Heap data structure to sort an array.
Heap Sort is similar to Selection Sort: on each step the minimum element is fetched from the heap and placed at the beginning of the sorted output.
10 |

## Algorithm

To arrange a list of elements in ascending order, the heap sort algorithm works as follows:

1. Construct a binary tree with the given list of elements.
2. Transform the binary tree into a minimum heap.
3. Delete the root element from the minimum heap using the heapify method.
4. Put the deleted element into a sorted list.
5. Repeat the same procedure until the minimum heap becomes empty.
6. Lastly, display the sorted list.
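The min-heap steps above can be sketched with Python's `heapq` module. This is an illustration of the described procedure, separate from the C++ implementation that follows:

```py
import heapq

def heap_sort(items):
    # Steps 1-2: build a minimum heap from the list.
    heap = list(items)
    heapq.heapify(heap)
    # Steps 3-5: repeatedly delete the root (minimum) of the heap
    # and append it to the sorted list until the heap is empty.
    result = []
    while heap:
        result.append(heapq.heappop(heap))
    return result

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```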


## Code:

Here is the code for Heap Sort in C++. Note that, unlike the min-heap description above, this implementation builds a max-heap and sorts the array in place.
28 |

```cpp

#include <iostream>

using namespace std;

void heapify(int arr[], int n, int i)
{
    int largest = i;
    int l = 2 * i + 1;
    int r = 2 * i + 2;

    if (l < n && arr[l] > arr[largest])
        largest = l;

    if (r < n && arr[r] > arr[largest])
        largest = r;

    if (largest != i) {
        swap(arr[i], arr[largest]);

        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n)
{

    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);

    for (int i = n - 1; i > 0; i--) {

        swap(arr[0], arr[i]);

        heapify(arr, i, 0);
    }
}

void printArray(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
        cout << arr[i] << " ";
    cout << "\n";
}

int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int n = sizeof(arr) / sizeof(arr[0]);

    heapSort(arr, n);

    cout << "Sorted array is \n";
    printArray(arr, n);
}

```

### Output

```
Input : 12, 11, 13, 5, 6, 7
Output : 5, 6, 7, 11, 12, 13
```


## Time Complexity

The time complexity of heap sort is O(n log n).

105 | 106 | ## Space Complexity 107 | 108 | The space complexity for heap sort is O(1). 109 | 110 |
111 | 112 | ### Auxiliary Space: 113 | 114 | Heap Sort uses O(1) auxiliary space. 115 | 116 |
117 | 118 | ### Stability 119 | 120 | Heap Sort is not stable by nature. 121 | 122 |
123 | 124 | ### Sorting In Place: 125 | 126 | Heap sort is inplace algorithm. 127 | 128 |

### Advantages of Heap Sort:

1. Heap Sort is efficient.
2. Heap Sort's memory usage is minimal.
3. Heap Sort is simple to understand.
-------------------------------------------------------------------------------- /Data Structures/Arrays/Subarray_vs_Subsequence.md: --------------------------------------------------------------------------------
# Subarrays Vs Subsequence

## Subarrays

```
Subarrays are arrays within another array.
A subarray contains contiguous elements.
Example: Let's consider an array
A = {1,2,3,4,5}
Then the subarrays of the given array are {}, {1}, {2}, {3}, {4}, {5}, {1,2},
{1,2,3}, {1,2,3,4}, {1,2,3,4,5}, {2,3}, {2,3,4}, {2,3,4,5}, {3,4},
{3,4,5}, {4,5}.
Number of subarrays an array of 'n' elements can have (excluding the empty subarray) = (n*(n+1))/2 .
```

### Program to print all non empty subarrays:

```
#include <iostream>
using namespace std;

int main() {
    int arr[5] = {1,2,3,4,5};
    for (int i=0; i<5; i++) {
        for (int j=i; j<5; j++) {
            // print the subarray arr[i..j]
            for (int k=i; k<=j; k++) {
                cout<<arr[k]<<" ";
            }
            cout<<endl;
        }
    }
    return 0;
}
```

## Subsequences

```
A subsequence is derived from the array by deleting zero or more elements
without changing the order of the remaining elements.
Unlike a subarray, the elements of a subsequence need not be contiguous.
Example: For A = {1,2,3,4,5}, {1,3,5} is a subsequence but not a subarray.
Number of subsequences an array of 'n' elements can have = 2^n (including the empty one).
```

### Program to print all non empty subsequences:

```
#include <bits/stdc++.h>
using namespace std;

int main() {
    int arr[5] = {1,2,3,4,5};
    int n = 5;
    int noOfSubseq = pow(2, n); //to find no of non zero subsequence i.e. (2^n-1)
    for (int i=1; i<noOfSubseq; i++) {
        // the set bits of i select which elements belong to this subsequence
        for (int j=0; j<n; j++) {
            if (i & (1<<j))
                cout<<arr[j]<<" ";
        }
        cout<<endl;
    }
    return 0;
}
```
-------------------------------------------------------------------------------- /Algorithms/Tree/Invert_Binary_Tree.md: --------------------------------------------------------------------------------
# Invert Binary Tree

Given the root of a binary tree, invert the tree (mirror it around its root) and return its root.

```cpp
class Solution {
public:
    TreeNode* invertTree(TreeNode* root) {
        // Base case: empty subtree
        if (root == NULL)
            return root;
        // Get Left Side of the Tree
        TreeNode *lft = invertTree(root->left);
        // Get Right Side of the Tree
        TreeNode *rght = invertTree(root->right);
        // Put Right Side to Left Side of the Tree
        root->left = rght;
        // Put Left Side to Right Side of the Tree
        root->right = lft;
        return root;
    }
};
```
## Explanation :
The invertTree function in the above code sample first determines whether the tree is empty. If not, it recursively inverts the two sub-trees and then swaps the root's two children. The recursive calls terminate when the root is NULL.

## Time Complexity :
Since each node in the tree is visited only once, the time complexity is O(n), where n is the number of nodes in the tree. We can't do better than that, since we have to visit each node to invert it.

## Space Complexity :
In the worst-case scenario, O(h) function calls will be placed on the stack due to recursion, where h is the height of the tree. Since h ∈ O(n), the space complexity is O(n).
-------------------------------------------------------------------------------- /Data Structures/Arrays/TwoSumII.md: --------------------------------------------------------------------------------
# Two Sum II

Problem Link : https://leetcode.com/problems/two-sum-ii-input-array-is-sorted/

Given a 1-indexed array of integers numbers that is already sorted in non-decreasing order, find two numbers such that they add up to a specific target number.
Let these two numbers be numbers[index1] and numbers[index2] where 1 <= index1 < index2 <= numbers.length. 6 | Return the indices of the two numbers, index1 and index2, added by one as an integer array [index1, index2] of length 2. 7 | You may not use the same element twice. 8 | 9 | 10 | ### Example : 11 | 12 | Input: numbers = [2,7,11,15], target = 9
13 | Output: [1,2]
14 | Explanation: The sum of 2 and 7 is 9. Therefore, index1 = 1, index2 = 2. We return [1, 2]. 15 | 16 | 17 | ### Simple Python Solution with HashMap 18 | 19 | This method works in O(n) time if range of numbers is known.
20 | Let sum be the given sum and A[] be the array in which we need to find pair. 21 | 22 | #### Algorithm 23 | 24 | 1) Initialize Binary Hash Map M[] = {0, 0}
25 | 2) Do following for each element A[i] in A[]
26 | (a) If M[x - A[i]] is set then print the pair (A[i], x A[i])
27 | (b) Set M[A[i]] 28 | 29 | #### Implementation 30 | ```c 31 | class Solution: 32 | def twoSum(self, numbers: List[int], target: int) -> List[int]: 33 | subtract = {} 34 | for i, num in enumerate(numbers): 35 | if num in subtract: 36 | return [subtract[num]+1, i+1] 37 | subtract[target-num] = i 38 | return [] 39 | ``` 40 | #### Time Complexity - O(n) 41 | #### Space Complexity - O(R) where R is range of integers 42 | 43 | ### Using Two pointers 44 | 45 | #### Algorithm: 46 | hasArrayTwoCandidates (A[], ar_size, sum) 47 | 48 | 1) Initialize two index variables to find the candidate 49 | elements in the sorted array.
50 | (a) Initialize first to the left most index: l = 0
51 | (b) Initialize second the right most index: r = ar_size-1
52 | 2) Loop while l < r.
53 | (a) If (A[l] + A[r] == sum) then return 1
54 | (b) Else if( A[l] + A[r] < sum ) then l++
55 | (c) Else r--
56 | 3) No candidates in whole array - return 0 57 | 58 | #### Example: 59 | Let Array be A= {-8, 1, 4, 6, 10, 45} and sum to find be 16 60 | 61 | Initialize l= 0, r = 5
A[l] + A[r] ( -8 + 45) = 37 > 16 => decrement r. Now r = 4
A[l] + A[r] ( -8 + 10) = 2 < 16 => increment l. Now l = 1
A[l] + A[r] ( 1 + 10) = 11 < 16 => increment l. Now l = 2
A[l] + A[r] ( 4 + 10) = 14 < 16 => increment l. Now l = 3
A[l] + A[r] ( 6 + 10) == 16 => Found candidates (return [l+1, r+1])

#### Implementation

```py
class Solution:
    def twoSum(self, numbers: List[int], target: int) -> List[int]:
        l, r = 0, len(numbers) - 1

        while l < r:
            curSum = numbers[l] + numbers[r]

            if curSum > target:
                r -= 1
            elif curSum < target:
                l += 1
            else:
                return [l + 1, r + 1]
```

#### Time Complexity - O(n)
#### Space Complexity - O(1)
-------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/Bubble_Sort.md: --------------------------------------------------------------------------------
# ⭐ BUBBLE SORT

Bubble Sort is the simplest and most classical sorting algorithm. It works by comparing adjacent elements and swapping them if they are in the wrong order. While this may not be the most efficient way to sort, it is certainly the easiest to understand and implement.

#### Example:

##### Input: [5 1 4 2 8]

##### First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements, and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not swap them.

##### Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm needs one whole pass without any swap to know it is sorted.

##### Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

##### Output: [1 2 4 5 8]

Now, generally we would resort to the above method of implementing bubble sort. Since we are iterating through the array N number of times (N is the number of elements in the input array), the time complexity would be O(N^2). We are not using any additional space, so the space complexity would be O(1).

However, if we can somehow inform our algorithm that the array is already sorted, then we won't have to traverse the remaining passes. To do this we can use a flag. Two solutions implementing this approach are given below. This won't change the worst-case time complexity, but the best-case time complexity improves to O(N).

### SOLUTION 1
```py
def bubbleSort(array):
    for i in range(len(array)):
        # This check is to ensure that the array is not already sorted before going through another pass.
        alreadySorted = True
        for j in range(len(array) - i - 1):
            if array[j] > array[j+1]:
                # Setting alreadySorted to False when a swap is needed.
                alreadySorted = False
                array[j], array[j+1] = array[j+1], array[j]
        if alreadySorted:
            break

    return array
```

### SOLUTION 2. A variation of the above solution using a while loop.
```py
def bubbleSort(array):
    alreadySorted = False
    i = 0
    # This check is to ensure that the array is not already sorted before going through another pass.
    while not alreadySorted:
        alreadySorted = True
        for j in range(len(array) - i - 1):
            if array[j] > array[j+1]:
                alreadySorted = False
                array[j], array[j+1] = array[j+1], array[j]
        i += 1

    return array
```

#### ⏲️ Time Complexities:
Best: O(N)
73 | Average: O(N^2) 74 |
75 | Worst: O(N^2) 76 | 77 | #### 👾 Space complexities: 78 | Best: O(1) 79 |
80 | Average: O(1) 81 |
Worst: O(1)
-------------------------------------------------------------------------------- /Data Structures/Arrays/queue.md: --------------------------------------------------------------------------------
# Queue

A queue is a linear data structure in which insertion can take place at one
end, called the rear of the queue, and deletion can take place at the other end,
called the front of the queue.

- The terms front and rear are used in describing a linear list only when it is implemented as a queue.
- A queue is also called a FIFO (First In First Out) list.
- Example: People standing in a queue at an ATM.

### Types of Queue

- There are 4 types of queue.

1. Linear Queue (Simple Queue)
2. Circular Queue
3. Priority Queue
4. Double Ended Queue

### Operations in Queue

1. Enqueue : The process of adding an element to the queue is called enqueue.
2. Dequeue : The process of removing an element from the queue is called dequeue.
3. Overflow (Is full) : If there is no space to add a new element to the list, it is called overflow.
4. Underflow (Is empty) : If there is no element to remove from the list, it is called underflow.

### Linear Queue (Simple Queue)

- In this type of queue, an array is used for the implementation.
- The elements are arranged in sequential order such that the front position
  is always less than or equal to the rear position.
- The rear is incremented when an element is added, and the front is
  incremented when an element is removed. Thus, the front follows the
  rear.
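Before the array-based C implementation below, the four operations can be sketched in Python. The names (`enqueue`, `dequeue`, `is_full`, `is_empty`) and `MAX = 5` mirror the description above but are illustrative choices:

```py
MAX = 5
q = [None] * MAX
front = rear = -1

def is_full():             # Overflow check
    return rear == MAX - 1

def is_empty():            # Underflow check
    return front == -1

def enqueue(item):
    global front, rear
    if is_full():
        raise OverflowError("queue overflow")
    if front == -1:        # first element: front starts following rear
        front = 0
    rear += 1
    q[rear] = item

def dequeue():
    global front, rear
    if is_empty():
        raise IndexError("queue underflow")
    item = q[front]
    if front == rear:      # queue becomes empty again
        front = rear = -1
    else:
        front += 1
    return item

enqueue(10); enqueue(20)
print(dequeue())  # 10 (FIFO: first in, first out)
```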

#### Program to implement a Linear Queue and perform Insert, Delete, Traverse operations in C

```c

#include <stdio.h>
#define MAX 5
int q[MAX],front=-1, rear=-1; //Global Data
void Insert(); //Functions Declarations
void Delete();
void Display();
int main()
{
    int ch;
    do
    {
        printf("\n\nMENU");
        printf("\n1 Insert\n2 Delete\n3 Display\n4 Exit");
        printf("\nChoice ? ");
        scanf("%d",&ch);
        switch(ch)
        {
            case 1: Insert(); break;
            case 2: Delete(); break;
            case 3: Display(); break;
            case 4: break;
            default:printf("\nWrong Choice ");
        }
    }while(ch!=4);
    printf("\nThank You");
    return 0;
} // end of main

//functions
void Insert()
{
    int item;
    if(rear==MAX-1)
        printf("\nOVERFLOW");
    else
    {
        printf("\nEnter the element to insert in Q ");
        scanf("%d",&item);
        if(rear==-1) //initially empty
            front=rear=0;
        else
            rear++;
        q[rear]=item;
        printf("\n%d is inserted in Q ",item);
    }
}
void Delete()
{
    int item;
    if(front==-1)
        printf("\nUNDERFLOW");
    else
    {
        item = q[front];
        if(front==rear) // Q contains single element
            front=rear=-1;
        else
            front++;
        printf("\n%d is deleted from Q",item);
    }
}
void Display()
{
    int i;
    if(front==-1)
        printf("\nQ is empty");
    else
    {
        printf("\nQ is ..\nFRONT-->");
        for(i=front;i<=rear;i++)
            printf("\t%d",q[i]);
        printf("\t<--REAR");
    }
}

```
-------------------------------------------------------------------------------- /README.md: --------------------------------------------------------------------------------
# Wizard-Of-Docs

An open source project to bring all the data structures and algorithms docs under one repository.
4 | 5 | Data Structures are the main part of many computer science algorithms as they enable the programmers to handle the data in an efficient way. 6 |

# What is the type of contribution?

1) Contribution to this repository is going to be in the form of documentation

2) The preferred language is English

3) The documentation should be clear, concise and complete

4) The starting letter of every word should be in uppercase; do not use spaces or hyphens (-), instead use underscores (_) to join words

5) There are two separate folders to contribute data structures & algorithms respectively

6) Make sure the issue you are creating does not exist or is merged already; an issue can be created to write the same code with different logic in different languages
(but not the theoretical part)

# How to contribute?

**1.** Fork [this](https://github.com/HackClubRAIT/Wizard-Of-Docs) repository.

**2.** Clone your forked copy of the project.

```
git clone https://github.com/<your_user_name>/Wizard-Of-Docs
```

**3.** Navigate to the project directory :file_folder: .

```
cd Wizard-Of-Docs
```

**4.** Add a reference (remote) to the original repository.

```
git remote add upstream https://github.com/HackClubRAIT/Wizard-Of-Docs
```

**5.** Check the remotes for this repository.
```
git remote -v
```

**6.** Always take a pull from the upstream repository to your master branch to keep it at par with the main project (updated repository).

```
git pull upstream main
```

**7.** Create a new branch.

```
git checkout -b <your_branch_name>
```

**8.** Perform your desired changes to the code base.


**9.** Track your changes :heavy_check_mark: .

```
git add .
```

**10.** Commit your changes.

```
git commit -m "Relevant message"
```

**11.** Push the committed changes in your feature branch to your remote repo.
```
git push -u origin <your_branch_name>
```

**12.** To create a pull request, click on `compare and pull requests`. Please ensure you compare your feature branch to the desired branch of the repository you are supposed to make a PR to.


**13.** Add an appropriate title and description to your pull request explaining your changes and efforts.


**14.** Click on `Create Pull Request`.


**15.** Voila! You have made a PR to Wizard-Of-Docs. Sit back patiently and relax while your PR is reviewed.
94 | 95 | # **Project Contributors** 96 | 97 | 98 | 99 | 100 | 101 |
102 |
103 | 104 | ## This repository is a part of the following Open Source Program:
Hack Club RAIT


![1632670084686](https://user-images.githubusercontent.com/80090908/179052180-5067b5fe-9c98-421e-b818-ae4bd7976ca8.jpg)

-------------------------------------------------------------------------------- /Data Structures/Linked List/Implementation_of_queue_using_linkedlist.md: --------------------------------------------------------------------------------
# C++ Program to Implement Queue using Linked List
```
A queue is an abstract data structure that contains a collection of elements. Queue implements the FIFO mechanism i.e. the element that is inserted first is also deleted first. In other words, the least recently added element is removed first in a queue.
```


### 1-> The function Insert() inserts an element into the queue. If rear is NULL, then the queue is empty and a single element is inserted. Otherwise, a node is inserted after rear with the required element and then that node is set to rear. This is shown below −
```cpp
void Insert() {
   int val;
   cout<<"Insert the element in queue : "<<endl;
   cin>>val;
   if (rear == NULL) {
      rear = (struct node *)malloc(sizeof(struct node));
      rear->next = NULL;
      rear->data = val;
      front = rear;
   } else {
      temp = (struct node *)malloc(sizeof(struct node));
      rear->next = temp;
      temp->data = val;
      temp->next = NULL;
      rear = temp;
   }
}
```

### 2-> In the function Delete(), if there are no elements in the queue then it is an underflow condition. If there is only one element in the queue, it is deleted and front and rear are set to NULL. Otherwise, the element at front is deleted and front points to the next element. This is shown below −
```cpp
void Delete() {
   temp = front;
   if (front == NULL) {
      cout<<"Underflow"<<endl;
      return;
   }
   if (temp->next != NULL) {
      temp = temp->next;
      cout<<"Element deleted from queue is : "<<front->data<<endl;
      free(front);
      front = temp;
   } else {
      cout<<"Element deleted from queue is : "<<front->data<<endl;
      free(front);
      front = NULL;
      rear = NULL;
   }
}
```

### 3-> In the function Display(), if front and rear are NULL then the queue is empty.
Otherwise, all the queue elements are displayed using a while loop with the help of the temp variable. This is shown below −
```cpp
void Display() {
   temp = front;
   if ((front == NULL) && (rear == NULL)) {
      cout<<"Queue is empty"<<endl;
      return;
   }
   while (temp != NULL) {
      cout<<temp->data<<" ";
      temp = temp->next;
   }
}
```
### 4-> The function main() provides a choice to the user if they want to insert, delete or display the queue. According to the user response, the appropriate function is called using switch. If the user enters an invalid response, then that is printed. The code snippet for this is given below −
```cpp
int main() {
   int ch;
   cout<<"1) Insert element to queue"<<endl;
   cout<<"2) Delete element from queue"<<endl;
   cout<<"3) Display all the elements of queue"<<endl;
   cout<<"4) Exit"<<endl;
   do {
      cout<<"Enter your choice : "<<endl;
      cin>>ch;
      switch (ch) {
         case 1: Insert();
            break;
         case 2: Delete();
            break;
         case 3: Display();
            break;
         case 4: cout<<"Exit"<<endl;
            break;
         default: cout<<"Invalid choice"<<endl;
      }
   } while (ch != 4);
   return 0;
}
```
-------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/Merge_Sort.md: --------------------------------------------------------------------------------
# Merge Sort

### Auxiliary Space:

Merge Sort takes O(n) auxiliary space.
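The Merge Sort notes above arrive without the implementation itself; the following is a minimal Python sketch of the divide-and-conquer idea, an illustration rather than the file's original code:

```py
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # divide: sort each half recursively
    right = merge_sort(a[mid:])
    # conquer: merge the two sorted halves using O(n) auxiliary space
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```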
95 | 96 | ### Algorithmic Paradigm: 97 | 98 | Merge Sort uses Divide and Conquer approach. 99 | 100 | ### Stability 101 | 102 | Merge Sort is stable by nature. 103 | 104 | ### Sorting In Place: 105 | 106 | Merge Sort is not in place because it requires additional memory space to store the auxiliary arrays. -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our 6 | community a harassment-free experience for everyone, regardless of age, body 7 | size, visible or invisible disability, ethnicity, sex characteristics, gender 8 | identity and expression, level of experience, education, socio-economic status, 9 | nationality, personal appearance, race, religion, or sexual identity 10 | and orientation. 11 | 12 | We pledge to act and interact in ways that contribute to an open, welcoming, 13 | diverse, inclusive, and healthy community. 
14 | 15 | ## Our Standards 16 | 17 | Examples of behavior that contributes to a positive environment for our 18 | community include: 19 | 20 | * Demonstrating empathy and kindness toward other people 21 | * Being respectful of differing opinions, viewpoints, and experiences 22 | * Giving and gracefully accepting constructive feedback 23 | * Accepting responsibility and apologizing to those affected by our mistakes, 24 | and learning from the experience 25 | * Focusing on what is best not just for us as individuals, but for the 26 | overall community 27 | 28 | Examples of unacceptable behavior include: 29 | 30 | * The use of sexualized language or imagery, and sexual attention or 31 | advances of any kind 32 | * Trolling, insulting or derogatory comments, and personal or political attacks 33 | * Public or private harassment 34 | * Publishing others' private information, such as a physical or email 35 | address, without their explicit permission 36 | * Other conduct which could reasonably be considered inappropriate in a 37 | professional setting 38 | 39 | ## Our Responsibilities 40 | 41 | Project maintainers are responsible for clarifying the standards of acceptable 42 | behavior and are expected to take appropriate and fair corrective action in 43 | response to any instances of unacceptable behavior. 44 | 45 | Project maintainers have the right and responsibility to remove, edit, or 46 | reject comments, commits, code, wiki edits, issues, and other contributions 47 | that are not aligned to this Code of Conduct, or to ban temporarily or 48 | permanently any contributor for other behaviors that they deem inappropriate, 49 | threatening, offensive, or harmful. 50 | 51 | 52 | ## Scope 53 | 54 | This Code of Conduct applies within all community spaces, and also applies when 55 | an individual is officially representing the community in public spaces. 
56 | Examples of representing our community include using an official e-mail address, 57 | posting via an official social media account, or acting as an appointed 58 | representative at an online or offline event. 59 | 60 | ## Enforcement 61 | 62 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 63 | reported to the community leaders responsible for enforcement. 64 | All complaints will be reviewed and investigated promptly and fairly. 65 | 66 | All community leaders are obligated to respect the privacy and security of the 67 | reporter of any incident. 68 | 69 | 70 | ## Attribution 71 | 72 | This Code of Conduct is adapted from the [Contributor Covenant][homepage]. 73 | 74 | Community Impact Guidelines were inspired by [Mozilla's code of conduct 75 | enforcement ladder](https://github.com/mozilla/diversity). 76 | 77 | [homepage]: https://www.contributor-covenant.org 78 | 79 | For answers to common questions about this code of conduct, see the FAQ at 80 | [Contributor Covenant FAQ](https://www.contributor-covenant.org/faq). 81 | -------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/Selection_sort.md: -------------------------------------------------------------------------------- 1 | # Selection Sort 2 | 3 | ## What is Selection sort? 4 | 5 | It is a simple sorting algorithm and is an in-place comparison-based algorithm which sort the given array repeatedly by finding the minimum element from unsorted part and putting it at the beginning of the array. 6 | 7 | ## Algorithm 8 | ``` 9 | Step 1 - Set the MIN to location 0. 10 | Step 2 - Search the minimum element in the array. 11 | Step 3 - Swap the value at location MIN. 12 | Step 4 - Increment MIN to point to next element. 13 | Step 5 - Repeat until array is sorted. 14 | ``` 15 | #### Following Example explain the above steps. 16 | 17 | 1.First pass: 18 | A[ ] = {8, 6, 3, 2, 5, 4} 19 | Set MIN at index 0. 
20 | Find the minimum element in A[0 . . . 5] and swap it with the element at MIN. 21 | 22 | 2.Second pass: 23 | A[ ] = {2, 6, 3, 8, 5, 4} 24 | Increment MIN to index 1. 25 | Find the minimum element in A[1 . . . 5] and swap it with the element at MIN. 26 | 27 | 3.Third pass: 28 | A[ ] = {2, 3, 6, 8, 5, 4} 29 | Increment MIN to index 2. 30 | Find the minimum element in A[2 . . . 5] and swap it with the element at MIN. 31 | 32 | 4.Fourth pass: 33 | A[ ] = {2, 3, 4, 8, 5, 6} 34 | Increment MIN to index 3. 35 | Find the minimum element in A[3 . . . 5] and swap it with the element at MIN. 36 | 37 | 5.Fifth pass: 38 | A[ ] = {2, 3, 4, 5, 8, 6} 39 | Increment MIN to index 4. 40 | Find the minimum element in A[4 . . . 5] and swap it with the element at MIN. 41 | 42 | A[ ] = {2, 3, 4, 5, 6, 8} 43 | We got the sorted array. 44 | 45 | Here we can observe that for 6 elements we need 5 passes (iterations), so for n elements (n-1) passes are required. 46 | 47 | ## Code:

```c
#include <stdio.h>

// Swap function for swapping the minimum element with the element at MIN.
void swap(int *x, int *y)
{
    int temp = *x;
    *x = *y;
    *y = temp;
}

void SelectionSort(int A[], int n)
{
    int i, j, k;
    // i refers to index of MIN.
    for(i = 0; i < n - 1; i++)
    {
        // k tracks the index of the current minimum in A[i . . . n-1].
        for(j = i, k = i; j < n; j++)
        {
            if(A[j] < A[k])
                k = j;
        }
        swap(&A[i], &A[k]);
    }
}

int main()
{
    int A[] = {8, 6, 3, 2, 5, 4};
    int n = 6, i;
    SelectionSort(A, n);
    for(i = 0; i < n; i++)
        printf("%d ", A[i]);   // prints 2 3 4 5 6 8
    return 0;
}
```
-------------------------------------------------------------------------------- /Data Structures/Linked List/Insertion in Linked List.md: -------------------------------------------------------------------------------- 1 | # Insertion in Linked List 2 | 🎯 Insertion at the beginning
3 | 🎯 Insertion at the end
4 | 🎯 Insertion at a given position
5 | 6 | ## Let's start by creating a ListNode: 7 | ``` 8 | #include <iostream> 9 | #include <cstdlib> // for malloc 10 | using namespace std; 11 | 12 | struct ListNode{ 13 | int data; 14 | struct ListNode *next; 15 | }; 16 | ``` 17 | 18 | ## Inserting an element at the beginning:
18 | 1. We check if *head* already exists. 19 | 2. If it does, we point the new node's next to it and make the new node the new head. 20 | 21 | ``` 22 | struct ListNode *insertAtBeginning(struct ListNode *head, int data){ 23 | 24 | struct ListNode *temp; 25 | 26 | temp = (struct ListNode *)malloc(sizeof(struct ListNode)); 27 | temp ->data = data; 28 | temp ->next = NULL; 29 | 30 | if (head == NULL){ 31 | head = temp; 32 | } 33 | 34 | else{ 35 | temp -> next = head; 36 | head = temp; 37 | } 38 | 39 | return head; 40 | } 41 | ``` 42 | 43 | ## Inserting an element at the end:
45 | 1. We traverse the list till the next pointer points to *NULL* 46 | 2. Then point the next pointer to the new node. 47 | 3. The new node's next pointer points to *NULL* 48 | 49 | ``` 50 | struct ListNode *insertAtEnd(struct ListNode *head, int data){ 51 | struct ListNode *temp, *curr; 52 | 53 | temp = (struct ListNode *)malloc(sizeof(struct ListNode)); 54 | temp -> data = data; 55 | temp -> next = NULL; 56 | 57 | curr = head; 58 | if (curr == NULL) 59 | head = temp; 60 | 61 | else{ 62 | 63 | while (curr -> next != NULL) 64 | curr = curr -> next; 65 | 66 | curr -> next = temp; 67 | } 68 | 69 | return head; 70 | } 71 | ``` 72 | 73 | ## Inserting an element at the given position:
74 | 1. Run a loop to reach the given position. 75 | 2. Point the new node's next pointer to the previous node's next. 76 | 3. Make the new node the next of the previous node. 77 | ``` 78 | struct ListNode *insertAtGivenPosition(struct ListNode *head, struct ListNode *newNode, int n){ 79 | 80 | struct ListNode *pred = head; 81 | 82 | if (n <= 1){ 83 | newNode -> next = head; 84 | return newNode; 85 | } 86 | 87 | while (--n && pred != NULL) 88 | pred = pred -> next; 89 | 90 | if (pred == NULL) 91 | return NULL; 92 | 93 | newNode -> next = pred -> next; 94 | pred -> next = newNode; 95 | return head; 96 | } 97 | ``` 98 | 99 | ## Print List: 100 | ``` 101 | void printList(ListNode* n) 102 | { 103 | while (n != NULL) { 104 | cout << n->data << " "; 105 | n = n->next; 106 | } 107 | } 108 | ``` 109 | 110 | ## *MAIN* function: 111 | ``` 112 | int main() 113 | { 114 | ListNode* head = NULL; 115 | ListNode* second = NULL; 116 | ListNode* third = NULL; 117 | ListNode* newNode = NULL; 118 | 119 | //allocate 3 nodes in the heap 120 | head = new ListNode(); 121 | second = new ListNode(); 122 | third = new ListNode(); 123 | 124 | head->data = 1; // assign data in first node 125 | head->next = second; // Link first node with second 126 | 127 | second->data = 2; // assign data to second node 128 | second->next = third; 129 | 130 | third->data = 3; // assign data to third node 131 | third->next = NULL; 132 | 133 | head = insertAtBeginning(head, 4); 134 | 135 | newNode = new ListNode(); 136 | newNode->data = 5; 137 | newNode->next = NULL; 138 | 139 | head = insertAtGivenPosition(head, newNode, 5); 140 | 141 | head = insertAtEnd(head, 6); 142 | 143 | printList(head); 144 | 145 | return 0; 146 | } 147 | ``` 148 | -------------------------------------------------------------------------------- /Algorithms/Floyd_Warshall/Floyd_Warshall.md: -------------------------------------------------------------------------------- 1 | # ⭐ Floyd_Warshall 2 | 3 | 4 | The Floyd–Warshall algorithm is an 
algorithm for finding the shortest paths between every pair of vertices in a weighted graph with positive or negative edge weights (but with no negative cycles).
5 | 6 | ##### Algorithm 7 | Create a |V| x |V| matrix // It represents the distance between every pair of vertices as given
8 | For each cell (i,j) in M do-
9 | if i == j
10 | M[ i ][ j ] = 0 // For all diagonal elements, value = 0
11 | if (i , j) is an edge in E
12 | M[ i ][ j ] = weight(i,j) // If there exists a direct edge between the vertices, value = weight of edge
13 | else
14 | M[ i ][ j ] = infinity // If there is no direct edge between the vertices, value = ∞
15 | for k from 1 to |V|
16 | for i from 1 to |V|
17 | for j from 1 to |V|
18 | if M[ i ][ j ] > M[ i ][ k ] + M[ k ][ j ]
19 | M[ i ][ j ] = M[ i ][ k ] + M[ k ][ j ]
20 | 21 | ##### Problem : 22 | ![fw](https://user-images.githubusercontent.com/65402647/136140246-dcb5f9c5-76ff-42a4-a6a5-d82870251d75.png) 23 | 24 | #### STEPS 25 | As a first step, we initialize the solution matrix to be the same as the input graph matrix. Then we update the solution matrix by considering each vertex as an intermediate vertex.
26 | The idea is to pick each vertex one by one and update all shortest paths which include the picked vertex as an intermediate vertex.
27 | When we pick vertex number k as an intermediate vertex, we already have considered vertices {0, 1, 2, .. k-1} as intermediate vertices.
28 | For every pair (i, j) of source and destination vertices respectively, there are two possible cases.
29 | k is not an intermediate vertex in shortest path from i to j. We keep the value of dist[i][j] as it is.
30 | k is an intermediate vertex in shortest path from i to j. We update the value of dist[i][j] as dist[i][k] + dist[k][j].
31 | 32 | ### Program in C : 33 | #include<stdio.h>
34 | int min(int,int);
35 | void floyds(int p[10][10],int n)
36 | {
37 | int i,j,k;
38 | for(k=1;k<=n;k++)
39 | for(i=1;i<=n;i++)
40 | for(j=1;j<=n;j++)
41 | if(i==j)
42 | p[i][j]=0;
43 | else
44 | p[i][j]=min(p[i][j],p[i][k]+p[k][j]);
45 | }
46 | int min(int a,int b)
47 | {
48 | if(a<b) 49 | return(a);
50 | else
51 | return(b);
52 | }
53 | int main()
54 | {
55 | int p[10][10],w,n,e,u,v,i,j;
56 | printf("\n Enter the number of vertices:");
57 | scanf("%d",&n);
58 | printf("\n Enter the number of edges:\n");
59 | scanf("%d",&e);
60 | for(i=1;i<=n;i++)
61 | {
62 | for(j=1;j<=n;j++)
63 | p[i][j]=999;
64 | }
65 | for(i=1;i<=e;i++)
66 | {
67 | printf("\n Enter the end vertices of edge%d with its weight \n",i);
68 | scanf("%d%d%d",&u,&v,&w);
69 | p[u][v]=w;
70 | }
71 | printf("\n Matrix of input data:\n");
72 | for(i=1;i<=n;i++)
73 | {
74 | for(j=1;j<=n;j++)
75 | printf("%d \t",p[i][j]);
76 | printf("\n");
77 | }
78 | floyds(p,n);
79 | printf("\n All-pairs shortest distances:\n");
80 | for(i=1;i<=n;i++)
81 | {
82 | for(j=1;j<=n;j++)
83 | printf("%d \t",p[i][j]);
84 | printf("\n");
85 | }
86 | printf("\n The shortest paths are:\n");
87 | for(i=1;i<=n;i++)
88 | for(j=1;j<=n;j++)
89 | {
90 | if(i!=j)
91 | printf("\n <%d,%d>=%d",i,j,p[i][j]);
92 | }
93 | }
94 | 95 | ##### Output: 96 | ![image6](https://user-images.githubusercontent.com/65402647/136140129-856840e8-fca4-4a8c-be71-b1090821b702.png) 97 | 98 | #### ⏲️ Time Complexities: 99 | The Floyd–Warshall algorithm consists of three nested loops over all the nodes.
100 | The innermost loop consists of only constant-time operations.
101 | Hence, the asymptotic complexity of the Floyd–Warshall algorithm is O(n³).
102 | Here, n is the number of nodes in the given graph.
103 | 104 | #### 👾 Space complexities: 105 | Space complexity: O(n²) 106 | 107 | 108 | 109 | -------------------------------------------------------------------------------- /Data Structures/Stacks/README.md: -------------------------------------------------------------------------------- 1 |

# Stack

2 | 3 | Stack is a linear data structure built upon the **LIFO** principle, i.e. Last-In-First-Out. It can perform insertion and deletion at only one of its ends, called the **top**. 4 | 5 | ## Implementation 6 | 7 | ### 1. Using array 8 | 9 | A stack can be implemented using an array, and an integer representing the current number of elements in the stack. The size of the array can be anything as per a problem's requirement; here a constant MAX bounds the stack's capacity and guards against overflow. 10 | 11 | ``` cpp 12 | #define MAX 100000 13 | int arr[MAX]; 14 | int i; 15 | ``` 16 | 17 | ### 2. Using linked list 18 | 19 | A stack can also be implemented using a linked list. One of the members of the structure is a 'next' pointer pointing to the next node in the stack, and a variable 'data' for storing the value at that node. This implementation is useful when hardcoding the capacity of the stack is not preferred. 20 | 21 | ``` cpp 22 | struct stack 23 | { 24 | stack *next; 25 | int data; 26 | }; 27 | ``` 28 | 29 | ## Operations 30 | 31 | A stack should be able to perform a certain number of standard operations like push, pop, is_empty, and display. 32 | 33 | Insertion of an element into a stack is known as 'push' while deletion of an element from it is known as 'pop'. Both operations can be performed only at the top of the stack. 34 | 35 | An is_empty function is used to determine whether or not the stack has no elements left. It comes in handy while performing the pop operation. 36 | 37 | ### 1. In Array Implementation 38 | 39 | #### Insertion 40 | 41 | - Check if the stack is full or not 42 | - If not, assign the value of data to the ith position of arr 43 | - Increment the value of i, to ensure the next element is inserted at the correct position. 44 | 
 45 | ``` cpp 46 | void push(int data) 47 | { 48 | if(i == MAX) 49 | cout << "Insertion failed! 
Stack is full"; 50 | else 51 | { 52 | arr[i] = data; 53 | i++; 54 | } 55 | } 56 | ``` 57 | 58 | #### Deletion 59 | 60 | - Check if the stack is empty or not 61 | - If not, decrement the value of i 62 | 63 | Note: We do not need to actually remove the value or replace it with a default value since it will be overwritten when insertion is performed. 64 | 65 | ``` cpp 66 | void pop() 67 | { 68 | if(is_empty()) 69 | cout << "Deletion failed! Stack is empty"; 70 | else 71 | i--; 72 | } 73 | ``` 74 | 75 | #### is_empty() 76 | 77 | - i represents the current number of elements in the stack 78 | - if i is zero, the stack is empty. 79 | 80 | ``` cpp 81 | bool is_empty() 82 | { 83 | if(i == 0) 84 | return 1; 85 | return 0; 86 | } 87 | ``` 88 | 89 | #### Display 90 | 91 | ``` cpp 92 | void display() 93 | { 94 | for(int j = 0; j < i; ++j) 95 | cout << arr[j] << " "; 96 | } 97 | ``` 98 | 99 | ### 2. In Linked List Implementation 100 | 101 | #### Insertion 102 | 103 | - Create a new node temp 104 | - Initialise temp with data 105 | - Point the next pointer of temp to the top of the stack stk 106 | - Now that temp is the topmost node, assign it to stk 107 | 108 | 109 | ``` cpp 110 | void push(int data) 111 | { 112 | stack *temp = new stack; temp -> data = data; 113 | // stk is the topmost node of the stack 114 | temp -> next = stk; 115 | stk = temp; 116 | } 117 | ``` 118 | 119 | #### Deletion 120 | 121 | - Check if the stack is empty or not 122 | - If not, reassign the topmost node stk to its next node 123 | 124 | ``` cpp 125 | void pop() 126 | { 127 | if(is_empty()) 128 | cout << "Deletion failed! 
Stack is empty"; 129 | else 130 | stk = stk -> next; 131 | } 132 | ``` 133 | 134 | #### is_empty() 135 | 136 | ``` cpp 137 | bool is_empty() 138 | { 139 | if(stk == NULL) 140 | return 1; 141 | return 0; 142 | } 143 | ``` 144 | 145 | #### Display 146 | - Initialise a temp pointer to the topmost node stk 147 | - Traverse the stack via the temp pointer until NULL is encountered 148 | - Display the value at each node throughout the iteration 149 | ``` cpp 150 | void display() 151 | { 152 | stack *temp = stk; 153 | while(temp != NULL) 154 | { 155 | cout << temp -> data << " "; 156 | temp = temp -> next; 157 | } 158 | } 159 | ``` 160 | -------------------------------------------------------------------------------- /Algorithms/Searching Algorithms/Binary_Search.md: -------------------------------------------------------------------------------- 1 | # ⭐ Binary search 2 | 3 | In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. 4 | > Input must be in sorted order 5 | #### Example 1: 6 | 7 | ##### Input: `[20, 30, 40, 50, 80, 90, 100], 40` 8 | ###### input1: `array` 9 | ###### input2: `Target` 10 | 11 | ##### Explanation: 12 | `[20, 30, 40, 50, 80, 90, 100]`
13 | Consider the above array as a `Binary search tree`. 14 | 15 | ![tree](https://upload.wikimedia.org/wikipedia/commons/f/f4/Binary_search_example_tree.svg)
16 | Here, the middle element becomes the root of the tree, the left part forms the left subtree, and the right part forms the right subtree.
17 | Calculate the middle index and compare the value there with the target.
18 | If the target is equal to the value at the middle index, simply return the current index.
19 | If the value at the current index is greater than the target, we can conclude that the target is present in the left half of the array.
20 | If the value at the current index is less than the target, we can conclude that the target is present in the right half of the array.
21 | Perform the same operation again on that half until the target is reached.
22 | 23 | ##### Visual explanation 24 | ```py 25 | Target = 40 26 | [20, 30, 40, 50, 80, 90, 100] -> mid [50] ( target < 50 ) // skipping right half of array 27 | | 28 | [20, 30, 40] -> mid [30] (target > 30) // skipping left half of array 29 | | 30 | [40] -> mid [40] (target == 40) // return index 31 | 32 | output -> 2 33 | ``` 34 | 35 | 36 | 37 | ##### Output: `2` 38 | 39 | 40 | ### Pseudo Code 41 | ``` js 42 | function binary_search(A, n, T) is 43 | L := 0 44 | R := n − 1 45 | while L ≤ R do 46 | m := floor((L + R) / 2) 47 | if A[m] < T then 48 | L := m + 1 49 | else if A[m] > T then 50 | R := m − 1 51 | else 52 | return m 53 | return unsuccessful 54 | ``` 55 | 56 | ### Code `Python` 57 | ``` py 58 | def binarySearch(arr, size, target): 59 | leftBound = 0 60 | rightBound = size - 1 61 | 62 | while leftBound <= rightBound: 63 | mid = leftBound + (rightBound - leftBound) // 2 # also take care of overflow situation 64 | if arr[mid] < target: 65 | leftBound = mid + 1 66 | elif arr[mid] > target: 67 | rightBound = mid - 1 68 | else: 69 | return mid 70 | return -1 71 | ``` 72 | ### `Output` 73 | Target element `40` is found at `index 2`.
74 | Output: `2` 75 |
76 | #### Example 2: 77 | 78 | ##### Input: `[-21 -19 -18 1 4 6 8 9 11 18 22], -18` 79 | ###### input1: `array` 80 | ###### input2: `Target` 81 | ### Code `Java` 82 | ```java 83 | static int binarySearch(int[] arr, int target) { //Declaring the binary search function 84 | int start = 0; 85 | int end = arr.length - 1; 86 | 87 | while (start <= end) { 88 | // Find the index of middle element 89 | int mid = start + (end - start) / 2; 90 | 91 | // check if element at the mid index is greater or smaller or equal to the target element 92 | if (target > arr[mid]) { 93 | start = mid + 1; 94 | } else if (target < arr[mid]) { 95 | end = mid - 1; 96 | } else { //the case when arr[mid]==target 97 | return mid; //ans found 98 | } 99 | } 100 | return -1; //when target not found return -1, since 0 is a valid index 101 | } 102 | ``` 103 | >Note: The while loop runs until the value of the start index is less than or equal to the end index. When start=end, mid=start=end and thus arr[mid] 104 | >is the target if it is present. After that start>end, which breaks the while loop. 105 | 106 | ### `Output` 107 | Target element `-18` is found at `index 2`.
108 | Output: `2` 109 | 110 | #### ⏲️ Time Complexities: 111 | `The time complexity of the binary search algorithm is O(log n).`
112 | `The best-case time complexity would be O(1) when the central index would directly match the desired value.`
113 |
114 | #### 👾 Space complexities: 115 | `O(1)` 116 | -------------------------------------------------------------------------------- /Data Structures/BinarySearch/Peak_Element_in_MountainArray.md: -------------------------------------------------------------------------------- 1 | ## Problem Description 2 | 3 | An array `arr` is a **mountain** if the following properties hold: 4 | 5 | - `arr.length >= 3` 6 | - There exists some `i` with `0 < i < arr.length - 1` such that: 7 | - `arr[0] < arr[1] < ... < arr[i - 1] < arr[i]` 8 | - `arr[i] > arr[i + 1] > ... > arr[arr.length - 1]` 9 | 10 | Given a mountain array `arr`, return the index `i` such that `arr[0] < arr[1] < ... < arr[i - 1] < arr[i] > arr[i + 1] > ... > arr[arr.length - 1]`. 11 | 12 | You must solve it in `O(log(arr.length))` time complexity. 13 | 14 | The problem asks us to find the index of the element that has an increasing list of elements on its left side and a decreasing list of elements on its right side. 15 | 16 | Basically, if you imagine a mountain 🗻 and the elements to be trees, we need to find the element on top of the mountain where the pattern of elements changes. Below are a few examples 👇🏽 17 | 18 | **Example 1:** 19 | 20 | ```markdown 21 | Input: arr = [0,1,0] 22 | Output: 1 23 | ``` 24 | Here value 1 is the mountain peak element and its index is 1 25 | 26 | **Example 2:** 27 | 28 | ```markdown 29 | Input: arr = [0,2,1,0] 30 | Output: 1 31 | ``` 32 | Here value 2 is the mountain peak element and its index is 1 33 | 34 | **Example 3:** 35 | ```markdown 36 | Input: arr = [0,5,10,2] 37 | Output: 2 38 | ``` 39 | Here value 10 is the mountain peak element and its index is 2 40 | 41 | ## Solution 42 | 43 | ## With Java 44 | 45 | By using the basic principle of binary search, with a small adjustment for this question, we can solve it. 
46 | 47 | ```java 48 | public class Mountain { 49 | public static void main(String[] args) { 50 | int[] array = { 10, 20, 30, 40, 30, 20, 10}; 51 | int result = peakIndexInMountainArray(array); 52 | System.out.println(result); 53 | 54 | } 55 | // the method is static so that we can access it without creating an object (i.e., we can call it directly by its name). 56 | 57 | public static int peakIndexInMountainArray(int[] arr) { 58 | int start = 0; 59 | int end = arr.length - 1; 60 | 61 | while (start < end) { 62 | int mid = start + (end - start) / 2; 63 | if (arr[mid] > arr[mid+1]) { 64 | // you are in dec part of array 65 | // this may be the ans, but look at left 66 | // this is why end != mid - 1 67 | end = mid; 68 | } else { 69 | // you are in asc part of array 70 | start = mid + 1; // because we know that mid+1 element > mid element 71 | } 72 | } 73 | // in the end, start == end and pointing to the largest number because of the 2 checks above 74 | // start and end are always trying to find max element in the above 2 checks 75 | // hence, when they are pointing to just one element, that is the max one because that is what the checks say 76 | // more elaboration: at every point of time for start and end, they have the best possible answer till that time 77 | // and if we are saying that only one item is remaining, hence cuz of above line that is the best possible ans 78 | return start; // or return end as both are equal 79 | } 80 | } 81 | ``` 82 | ## With Python 83 | ```python 84 | 85 | def Peak_Index_In_Mountainarray(arr): 86 | start = 0 87 | end = len(arr)-1 88 | while(start < end): 89 | mid = start + (end - start)//2 90 | if(arr[mid] > arr[mid+1]): 91 | # you are in dec part of array 92 | # this may be the ans, but look at left 93 | # this is why end != mid - 1 94 | end = mid 95 | else: 96 | # you are in asc part of array 97 | start = mid+1 # because we know that mid+1 element > mid element 98 | # in the end, start == end and pointing to the largest number because of 
the 2 checks above 99 | # start and end are always trying to find max element in the above 2 checks 100 | # hence, when they are pointing to just one element, that is the max one because that is what the checks say 101 | # more elaboration: at every point of time for start and end, they have the best possible answer till that time 102 | # and if we are saying that only one item is remaining, hence cuz of above line that is the best possible ans 103 | return start # or return end as both are the same 104 | 105 | array = [ 10, 20, 30, 40, 30, 20, 10] # just an array of integers 106 | result = Peak_Index_In_Mountainarray(array) # calling the function and storing the returned value 107 | print(result) 108 | ``` 109 | -------------------------------------------------------------------------------- /Data Structures/LinkedList.md: -------------------------------------------------------------------------------- 1 | # Linked List 2 | ● A linked list is a linear data structure where each element is a separate object.\ 3 | ● Each element or node of a list comprises two items: 4 |
1. Data
5 | 2. Pointer (reference) to the next node.
6 | ● A linked list is a linear data structure, in which the elements are not stored at 7 | contiguous memory locations.\ 8 | ● The first node of a linked list is known as head.\ 9 | ● The last node of a linked list is known as tail.\ 10 | ● The last node has a reference to null. 11 | 12 | ## Linked list class 13 | ``` 14 | class Node { 15 | public : 16 | int data; // to store the data 17 | Node *next; // to store the address of the next node 18 | Node(int data) { 19 | this -> data = data; 20 | next = NULL; 21 | } 22 | }; 23 | ``` 24 | Note: The first node in the linked list is known as the Head pointer and the last node is 25 | referenced as the Tail pointer. We must never lose the address of the head pointer as it 26 | references the starting address of the linked list and, if lost, would lead to losing the 27 | list. 28 | 29 | ## Printing of the linked list 30 | To print the linked list, we will start traversing the list from the beginning of the list (head) 31 | until we reach the NULL pointer, which will always follow the tail pointer. Follow the code 32 | below: 33 | ``` 34 | void print(Node *head) { 35 | Node *tmp = head; 36 | while(tmp != NULL) { 37 | cout << tmp->data << " "; 38 | tmp = tmp->next; 39 | } 40 | cout << endl; 41 | } 42 | ``` 43 | 44 | ## Types Of LinkedList 45 | There are generally three types of linked list:\ 46 | ● Singly: Each node contains only one link which points to the subsequent node in the 47 | list.\ 48 | ● Doubly: It's a two-way linked list as each node points not only to the next pointer 49 | but also to the previous pointer.\ 50 | ● Circular: There is no tail node i.e., the next field is never NULL and the next field for 51 | the last node points to the head node. 
51 | 52 | ## Taking Input in a list 53 | ``` 54 | Node* takeInput() { 55 | int data; 56 | cin >> data; 57 | Node *head = NULL; 58 | Node *tail = NULL; 59 | while(data != -1) { // -1 is used for terminating 60 | Node *newNode = new Node(data); 61 | if(head == NULL) { 62 | head = newNode; 63 | tail = newNode; 64 | } 65 | else { 66 | tail -> next = newNode; 67 | tail = tail -> next; 68 | // OR 69 | // tail = newNode; 70 | } 71 | cin >> data; 72 | } 73 | return head; 74 | } 75 | ``` 76 | To take input from the user, we need to keep a few things in mind:\ 77 | ● Always use the first pointer as the head pointer.\ 78 | ● When initialising a new node, its next pointer should always be set to 79 | NULL.\ 80 | ● The current node's next pointer should always point to the next node to connect the 81 | linked list. 82 | 83 | ## Operations on Linked Lists 84 | 85 | ### Insertion 86 | There are 3 cases:\ 87 | ● Case 1: Insert node at the last\ 88 | This can be directly done by normal insertion as discussed above while we took input.\ 89 | 90 | ● Case 2: Insert node at the beginning\ 91 | ○ First-of-all store the head pointer in some other pointer.\ 92 | ○ Now, mark the new pointer as the head and store the previous head to the\ 93 | next pointer of the current head.\ 94 | ○ Update the new head.\ 95 | 96 | ● Case 3: Insert node anywhere in the middle\ 97 | ○ For this case, we always need to store the address of the previous pointer as 98 | well as the current pointer of the location at which the new pointer is to be 99 | inserted.\ 100 | ○ Now let the newly inserted pointer be curr. Point the previous pointer's next to 101 | curr and curr's next to the original pointer at the given location.\ 102 | ○ This way the new pointer will be inserted easily. 
103 | 104 | ``` 105 | Node* insertNode(Node *head, int i, int data) { 106 | Node *newNode = new Node(data); 107 | int count = 0; 108 | Node *temp = head; 109 | if(i == 0) { //Case 2 110 | newNode -> next = head; 111 | head = newNode; 112 | return head; 113 | } 114 | while(temp != NULL && count < i - 1) { //Case 3 115 | temp = temp -> next; 116 | count++; 117 | } 118 | if(temp != NULL) { 119 | Node *a = temp -> next; 120 | temp -> next = newNode; 121 | newNode -> next = a; 122 | } 123 | return head; //Returns the new head pointer after insertion 124 | } 125 | ``` 126 | 127 | ## Deletion of node 128 | There are 2 cases:\ 129 | ● Case 1: Deletion of the head pointer\ 130 | In order to delete the head node, we can directly remove it from the linked list by 131 | pointing the head to the next.\ 132 | ● Case 2: Deletion of any node in the list\ 133 | In order to delete the node from the middle/last, we would need the previous 134 | pointer as well as the next pointer to the node to be deleted. Now directly point the 135 | previous pointer to the current node's next pointer. 136 | 137 | 138 | 139 | 140 | -------------------------------------------------------------------------------- /Algorithms/Sieve of Eratosthenes/Sieve of Eratosthenes.md: -------------------------------------------------------------------------------- 1 | # Sieve of Eratosthenes Algorithm 2 | 3 | A prime number has a unique property: it is divisible only by itself and 1. 4 | 5 | ## What does it do? 6 | For a given upper limit, this algorithm computes all the prime numbers up to the given limit by using the precomputed prime numbers repeatedly. 7 | 8 | The traditional algorithm for checking the prime property would test each number separately, iterating over its potential divisors every time. 9 | This algorithm, by contrast, iterates over the numbers only once, crossing out the composites and marking the primes. 
10 | Once all the primes are marked, they are collected inside a list/vector and are used as required. 11 | Hence the time complexity of the traditional approach grows with both the range and the size of the numbers, whereas the Sieve of Eratosthenes takes only O(N log log N). 12 | 13 | The Sieve of Eratosthenes is among the most efficient ways to collect all the primes over a huge range of numbers. 14 | 15 | 16 | ## Steps 17 | **Step 1)** A list/vector is created where all the primes would be stored. 18 | 19 | **Step 2)** All the numbers up to the given range are initially marked as Prime (true) [except for 0 and 1]. 20 | 21 | **Step 3)** As the primes are marked true, all the multiples of those primes are marked as composites (false). If a number is already marked false, its multiples are skipped. 22 | 23 | **Step 4)** All the numbers which were multiples are marked false, and only those numbers remain marked as prime (true) which are not a multiple of any other number; hence they are prime numbers. 24 | 25 | **Step 5)** The marked primes are then collected in the list/vector for their required use. 26 | 27 | 28 | ## Code in C++ 29 | ```cpp 30 | #include <bits/stdc++.h> 31 | using namespace std; 32 | 33 | vector<int> sieveOfEratosthenes(int up) { 34 | vector<int> primes; 35 | 36 | // First marking all numbers as prime numbers 37 | vector<bool> mark(up + 1, true); 38 | 39 | // Marking each of the multiples of the primes as a composite number 40 | for(int i= 2; i*i<= up; i++) { 41 | if(mark[i] == true) { 42 | // Logically all multiples below square of prime will automatically be marked as multiples of smaller primes, 43 | // Eg. If i=7, upto i*i=49, all multiples of 7, that is, 7*2, 7*3... are already marked by 2, 3 and so on. 44 | // If i=13, upto 13*13=169, all multiples of 13 including 13*11, 13*7, 13*2, etc are all marked as the multiples of smaller primes. 45 | // So no need to mark them again, hence starting from the square of the prime... 
46 | for(int j= i*i; j<=up; j+=i) 47 | mark[j] = false; 48 | } 49 | } 50 | 51 | // All the numbers that are still marked as primes are then stored inside the primes vector while omitting 0 and 1 52 | for(int i=2; i<=up; i++) 53 | if(mark[i]) 54 | primes.push_back(i); 55 | 56 | return primes; 57 | } 58 | 59 | int main() { 60 | 61 | vector<int> primes; 62 | int up; 63 | 64 | cout << "\nEnter the upper limit: "; 65 | cin >> up; 66 | cout << endl; 67 | 68 | primes = sieveOfEratosthenes(up); 69 | 70 | printf("\nHere are the primes in range 1-%d:\n", up); 71 | 72 | for(auto p : primes) 73 | cout << p << ", "; 74 | cout << endl; 75 | 76 | return 0; 77 | } 78 | ``` 79 | 80 | ## Code in Python 81 | ```python 82 | 83 | def sieveOfEratosthenes(limit : int) -> list: 84 | # All numbers upto limit [except 0 and 1] are initially marked as primes 85 | mark = [False]*2 + [True]*(limit-1) 86 | primes = list() 87 | 88 | for i in range(2, limit+1): 89 | if mark[i]: 90 | # Multiples before the square of the prime are already marked as multiples of smaller primes 91 | # Eg. 
For prime=13, 13*2, 13*3, 13*5, 13*7, 13*11 will already be marked as multiples of 2, 3, 5, 7, 11 respectively 92 | # Only multiples from 13*13 should begin marking as composites 93 | for j in range(i*i, limit+1, i): 94 | mark[j] = False # Marked as composite 95 | 96 | # All numbers still marked as primes are primes, as they are not multiples of any other prime numbers 97 | # Collecting all primes 98 | for i, m in enumerate(mark): 99 | if m: 100 | primes.append(i) 101 | 102 | return primes 103 | 104 | def main(): 105 | up = int(input("Enter the upper limit: ")) 106 | 107 | primes = sieveOfEratosthenes(up) 108 | 109 | print(f"\nHere are the primes in range 1-{up}:") 110 | print(*primes, sep= ", ", end= "\n\n") 111 | 112 | 113 | if __name__ == "__main__": 114 | main() 115 | 116 | ``` -------------------------------------------------------------------------------- /Algorithms/Sorting Algorithms/Insertion_Sort_Java.md: -------------------------------------------------------------------------------- 1 | # Insertion Sort 2 | ## What is Insertion Sort? 3 | Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each iteration. 4 | 5 | Insertion sort works similarly to the way we sort cards in our hand in a card game. 6 | 7 | We assume that the first card is already sorted; then, we select an unsorted card. If the unsorted card is greater than the card in hand, it is placed on the right; otherwise, to the left. In the same way, other unsorted cards are taken and put in their right place. 8 | 9 | A similar approach is used by insertion sort. 10 | ## Algorithm 11 | To sort an array in ascending order:
12 | Step 1: Iterate from arr[1] to arr[N-1] over the array.
13 | Step 2: Compare the current element (key) to its predecessor.
14 | Step 3: If the key element is smaller than its predecessor, compare it to the elements before. Move the greater elements one position up to make space for the swapped element. 15 |
16 | ## Code
17 | Here is the code of Insertion Sort in Java
18 | ``` Java
19 | package com.company;
20 | import java.util.*;
21 | public class InsertionSort {
22 |     static void swap(int[] arr, int a, int b)
23 |     {
24 |         int temp = arr[a];
25 |         arr[a] = arr[b];
26 |         arr[b] = temp;
27 |     }
28 |     static void insertionSort(int[] arr)
29 |     {
30 |         int n = arr.length;
31 |         // Number of Passes
32 |         for(int i = 0; i <= n-2; i++)
33 |         {
34 |             // no. of comparisons
35 |             for(int j = i+1; j > 0; j--)
36 |             {
37 |                 if(arr[j-1] > arr[j])
38 |                     swap(arr, j, j-1);
39 |                 else
40 |                     break;
41 |             }
42 |         }
43 |         System.out.println(Arrays.toString(arr));
44 |     }
45 |     public static void main(String[] args) {
46 |         Scanner sc = new Scanner(System.in);
47 |         int n;
48 |         System.out.println("Enter Array Size");
49 |         n = sc.nextInt();
50 |         int[] arr = new int[n];
51 |         System.out.println("Enter Array elements");
52 |         for(int i = 0; i < n; i++)
53 |             arr[i] = sc.nextInt();
54 |         System.out.println("Array before Sorting");
55 |         System.out.println(Arrays.toString(arr));
56 |         System.out.println("Array after Sorting");
57 |         insertionSort(arr);
58 |     }
59 | }
60 | ```
61 | ## Input and Output
62 | 
63 | Input:
64 | 
65 | 9 7 5 4 1
66 | 
67 | Output: Array before Sorting
68 | [9, 7, 5, 4, 1]
69 | Array after Sorting
70 | [1, 4, 5, 7, 9] 71 |
72 | ## Time Complexity
73 | **Worst Case:** When elements are arranged in descending order. We take two iterators, i (for counting passes) and j (for counting comparisons).
74 | [9, 7, 5, 4, 1]
75 | For Pass 1: when i=0 76 | [9, 7, 5, 4, 1] -> [7, 9, 5, 4, 1]
no. of comparisons here is 1 hence j=1.
77 | For Pass 2: when i=1 78 | [7, 9, 5, 4, 1] -> [7, 5, 9, 4, 1]->[5, 7, 9, 4, 1]
no. of comparisons here is 2 hence j=2.
79 | For Pass 3: when i=2 80 | [5, 7, 9, 4, 1] -> [5, 7, 4, 9, 1]->[5, 4, 7, 9, 1]->[4, 5, 7, 9, 1]
no. of comparisons here is 3 hence j=3.
81 | For Pass 4: when i=3
82 | [4, 5, 7, 9, 1]->[4, 5, 7, 1, 9]->[4, 5, 1, 7, 9]->[4, 1, 5, 7, 9]->[1, 4, 5, 7, 9]
83 | no. of comparisons here is 4 hence j=4.

84 | **Note:** Here we see that for 5 elements there are 4 passes and i runs from 0 to 3. Therefore, if there are n elements there will be n-1 passes and i will run from 0 to n-2, which matches the loop bounds in the code section. 85 |

86 | **Analysis:** For pass 1 we have made 1 comparison, for pass 2 we have made 2 comparisons and so on.
87 | For 5 elements we have made 1+2+3+4 comparisons.
So accordingly, for n elements, we would make 1+2+3+...+(n-1) comparisons.
88 | Summation of the (n-1) terms: 1+2+...+(n-1) = (n-1)n/2 = (n*n-n)/2
89 | Therefore, time complexity in worst case is O(n^2). 90 |

91 | 
92 | **Best Case:** When elements are already arranged in ascending order
93 | For pass 1: [1, 2, 3, 4, 5,] no of comparisons 1,j=1.
94 | For pass 2: [1, 2, 3, 4, 5,] no of comparisons 1,j=1.
95 | For pass 3: [1, 2, 3, 4, 5,] no of comparisons 1,j=1.
96 | For pass 4: [1, 2, 3, 4, 5,] no of comparisons 1,j=1.
97 |
98 | Here, for 5 elements we have made only 1 comparison in each pass, that is, 4 comparisons in total. Therefore, for n elements already sorted we would make (n-1) comparisons in total; hence the time complexity in the best case is O(n). 99 |
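The comparison counts in the analysis above can be checked with a short Python sketch (illustrative only; the helper name `insertion_sort_comparisons` is hypothetical and separate from the Java implementation above):

```python
def insertion_sort_comparisons(arr):
    # Counts the comparisons performed by the insertion sort shown above
    arr = list(arr)
    comparisons = 0
    for i in range(len(arr) - 1):        # n-1 passes: i runs from 0 to n-2
        for j in range(i + 1, 0, -1):    # comparisons within one pass
            comparisons += 1
            if arr[j - 1] > arr[j]:
                arr[j - 1], arr[j] = arr[j], arr[j - 1]
            else:
                break                    # key already in place, pass ends
    return comparisons

print(insertion_sort_comparisons([9, 7, 5, 4, 1]))  # worst case: 1+2+3+4 = 10
print(insertion_sort_comparisons([1, 2, 3, 4, 5]))  # best case: 1 per pass = 4
```

The descending input costs 1+2+3+4 = 10 comparisons and the already-sorted input only 4, matching the O(n^2) worst case and O(n) best case derived above.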

100 | ## Algorithm Paradigm:
101 | It uses an incremental approach.
102 | 

103 | ---
104 | # Conclusion
105 | This is a documentation of Insertion Sort in Java.
106 | 
107 | Resource for detailed study of Insertion Sort: 108 | [GeeksforGeeks](https://www.geeksforgeeks.org/insertion-sort/)
109 | Resource for detailed study of other DSA topics: [Wizard-Of-Docs github repo](https://github.com/HackClubRAIT/Wizard-Of-Docs)
110 | ---
111 | ---
112 | Don't forget to give a ⭐ to [Wizard-Of-Docs](https://github.com/HackClubRAIT/Wizard-Of-Docs) and keep contributing.
113 | 
114 | Happy Coding! 115 | --- -------------------------------------------------------------------------------- /Algorithms/Dynammic programming/Longest Common Subsequence.md: -------------------------------------------------------------------------------- 1 | # Longest Common Subsequence 2 | 3 | A subsequence is a sequence that can be derived from another sequence by deleting some elements without changing the order of the remaining elements. 4 | 5 | Longest common subsequence (LCS) of 2 sequences is a subsequence, with maximal length, which is common to both the sequences. 6 | eg. Given strings "ace" and "abcde" , longest common subsequence is 3, which is "ace" 7 | 8 | ### Note : Subsequence doesn't need to be contiguous. 9 | 10 | ## Solution 11 | We can solve this problem either recursively or by using Dynamic Programming. 12 | 13 | ### 1. Recursive Approach 14 | 15 | 1. If any one of the string is empty then longest common subsequence will be of length 0. (Base case) 16 | e.g. "" and "abc" the longest common substring will be of length 0, because there is nothing common, between these two strings. 17 | 18 | 2. If str1[i] == str2[j], then move to next character for both the strings (str1 and str2) 19 | 20 | 3. If str1[i] != str2[j], then try both the cases and return the one which results in longest common subsequence. 21 | 1. Move to the next character in str1 22 | 2. 
Move to the next character in str2 23 | 24 | 25 | ```java 26 | public class App { 27 | 28 | public static void main(String[] args) { 29 | System.out.println(longestCommonSubsequence("pmjghexybyrgzczy", "hafcdqbgncrcbihkd")); 30 | } 31 | 32 | public static int longestCommonSubsequence(String text1, String text2) { 33 | if (text1.length() == 0 || text2.length() == 0) { 34 | return 0; 35 | } 36 | 37 | if (text1.charAt(0) == text2.charAt(0)) { 38 | return 1 + longestCommonSubsequence(text1.substring(1), text2.substring(1)); 39 | } else { 40 | return Math.max(longestCommonSubsequence(text1.substring(1), text2), 41 | longestCommonSubsequence(text1, text2.substring(1))); 42 | } 43 | } 44 | } 45 | 46 | ``` 47 | 48 | The recursive approach solves the same subproblem everytime, we can improve the runtime by using the Dynamic Programming approach. 49 | 50 | ### Recursive implementation will result in Time Limit Exceeded error on Leetcode 😟 51 | 52 | ### 2. Dynamic Programming - Bottom Up (Tabulation) Approach 53 | For example lets find the longest common subsequence for strings, "abc" and "cab". 54 | 55 | Approach: We start filling the dpTable, row by row, and we fill all the columns in a single row, before moving to next row. 56 | By doing this we are solving the subproblems, which will help us, to get to the result of our actual problem. 57 | 58 | Since there can't be anything common when anyone of the two strings is empty, the longest common subsequence will be 0. So in dpTable all the values in first row and first column will be 0. 59 | 60 | 61 | Now while filling the cell dpTable[i][j], there can be two cases 62 | 1. str1[i] == str2[j], in this case dpTable[i][j] = dpTable[i - 1][j - 1] + 1 63 | 2. 
str1[i] != str2[j], in this case dpTable[i][j] = Math.max(dpTable[i - 1][j], dpTable[i][j - 1]) 64 | 65 | ### Case 1 : When str1[i] == str2[j] 66 | When we can move to only right left 67 | LCS-1 68 | 69 | 70 | ### Case 2 : When str1[i] != str2[j] 71 | When we can move to only right left 72 | LCS-2 73 | 74 | ```java 75 | public class App { 76 | 77 | public static void main(String[] args) { 78 | System.out.println(longestCommonSubsequence("abc", "cab")); 79 | } 80 | 81 | public static int longestCommonSubsequence(String text1, String text2) { 82 | 83 | int rows = text1.length(); 84 | int columns = text2.length(); 85 | 86 | if(rows == 0 || columns == 0) 87 | return 0; 88 | 89 | int[][] dpTable = new int[rows+1][columns+1]; 90 | for(int i = 1; i <= rows; i++) { 91 | for(int j = 1; j <= columns; j++) { 92 | if(text1.charAt(i-1) == text2.charAt(j-1)) { 93 | dpTable[i][j] = dpTable[i-1][j-1] + 1; 94 | } else { 95 | dpTable[i][j] = Math.max(dpTable[i-1][j], dpTable[i][j-1]); 96 | } 97 | } 98 | } 99 | 100 | System.out.println(subSequence(text1, text2, dpTable)); 101 | return dpTable[rows][columns]; 102 | } 103 | 104 | public static StringBuilder subSequence(String text1, String text2, int[][] dpTable) { 105 | String subsequence = ""; 106 | int row = text1.length(); 107 | int column = text2.length(); 108 | while(row > 0 && column > 0 && dpTable[row][column] != 0) { 109 | if(dpTable[row][column] == dpTable[row - 1][column]) { 110 | row = row - 1; 111 | } else if(dpTable[row][column] == dpTable[row][column-1]) { 112 | column = column -1; 113 | } else { 114 | subsequence += text1.charAt(row-1); 115 | row = row - 1; 116 | column = column - 1; 117 | } 118 | } 119 | StringBuilder sb = new StringBuilder(subsequence); 120 | return sb.reverse(); 121 | } 122 | 123 | } 124 | ``` 125 | **Note the order of checks in the `subSequence()` method 💥 , for constructing the subsequence.** 126 | -------------------------------------------------------------------------------- 
/Algorithms/Tree/Prim's_Algorithm.md: -------------------------------------------------------------------------------- 1 | # Prim's Algorithm 2 | 3 | - Prim’s algorithm is used to find the Minimum Spanning Tree(MST) of a connected or undirected graph. Spanning Tree of a graph is a subgraph that is also a tree and includes all the vertices. Minimum Spanning Tree is the spanning tree with a minimum edge weight sum. 4 | 5 | ## Algorithm 6 | - Step 1: Keep a track of all the vertices that have been visited and added to the spanning tree. 7 | 8 | - Step 2: Initially the spanning tree is empty. 9 | 10 | - Step 3: Choose a random vertex, and add it to the spanning tree. This becomes the root node. 11 | 12 | - Step 4: Add a new vertex, say x, such that x is not in the already built spanning tree. x is connected to the built spanning tree using minimum weight edge. (Thus, x can be adjacent to any of the nodes that have already been added in the spanning tree). 13 | Adding x to the spanning tree should not form cycles. 14 | - Step 5: Repeat the Step 4, till all the vertices of the graph are added to the spanning tree. 15 | 16 | - Step 6: Print the total cost of the spanning tree. 
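The steps above can be sketched in Python using a binary heap (`heapq`). This is an illustrative sketch separate from the C++ program below, and `prim_mst_cost` is a hypothetical helper name:

```python
import heapq

def prim_mst_cost(adj, start=0):
    # adj maps each vertex to a list of (weight, neighbour) pairs (undirected graph)
    visited = set()
    heap = [(0, start)]           # (edge weight, vertex); the root enters at cost 0 (Steps 2-3)
    total_cost = 0
    while heap and len(visited) < len(adj):
        w, u = heapq.heappop(heap)    # cheapest edge reaching an unvisited vertex (Step 4)
        if u in visited:
            continue                  # skipping visited vertices avoids forming cycles
        visited.add(u)
        total_cost += w
        for weight, v in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (weight, v))
    return total_cost                 # Step 6: total cost of the spanning tree

# Triangle graph: edges 0-1 (weight 1), 1-2 (weight 2), 0-2 (weight 3);
# the MST keeps edges 0-1 and 1-2, with total cost 3
graph = {0: [(1, 1), (3, 2)], 1: [(1, 0), (2, 2)], 2: [(3, 0), (2, 1)]}
print(prim_mst_cost(graph))  # 3
```

Using a heap this way gives the O(E log V) behaviour mentioned in the time complexity section below, as opposed to the O(V^2) adjacency-matrix version.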
17 | 18 | 19 | ## Code (In C++) 20 | 21 | #include 22 | 23 | using namespace std; 24 | 25 | // Number of vertices in the graph 26 | const int V=6; 27 | 28 | // Function to find the vertex with minimum key value 29 | int min_Key(int key[], bool visited[]) 30 | { 31 | int min = 999, min_index; // 999 represents an Infinite value 32 | 33 | for (int v = 0; v < V; v++) { 34 | if (visited[v] == false && key[v] < min) { 35 | // vertex should not be visited 36 | min = key[v]; 37 | min_index = v; 38 | } 39 | } 40 | return min_index; 41 | } 42 | 43 | // Function to print the final MST stored in parent[] 44 | int print_MST(int parent[], int cost[V][V]) 45 | { 46 | int minCost=0; 47 | cout<<"Edge \tWeight\n"; 48 | for (int i = 1; i< V; i++) { 49 | cout<>cost[i][j]; 108 | } 109 | } 110 | find_MST(cost); 111 | 112 | return 0; 113 | } 114 | 115 | 116 | - Output: 117 | ![image](https://user-images.githubusercontent.com/71593494/136537495-0108de97-e885-49bd-81c4-676b6bf3a367.png) 118 | 119 | 120 | 121 | ## Time Complexity: 122 | 123 | - Time complexity of the above C++ program is O(V2) since it uses adjacency matrix representation for the input graph. However, using an adjacency list representation, with the help of binary heap, can reduce the complexity of Prim's algorithm to O(ElogV). 124 | 125 | 126 | ## Real Time Examples: 127 | 1>> Designing the networks including computer networks, telecommunication networks, transportation networks, electricity grid and water supply networks. 128 | 129 | 2>> Used in algorithms for approximately finding solutions to problems like Travelling Salesman problem, minimum cut problem, etc. 130 | - The objective of a Travelling Salesman problem is to find the shortest route in a graph that visits each vertex only once and returns back to the source vertex. 131 | - A minimum cut problem is used to find the minimum number of cuts between all the pairs of vertices in a planar graph. 
A graph can be classified as planar if it can be drawn in a plane with no edges crossing each other. For example, 132 | 3>> Analysis of clusters. 133 | 4>> Handwriting recognition of mathematical expressions. 134 | 5>> Image registration and segmentation 135 | -------------------------------------------------------------------------------- /Data Structures/BinarySearchTrees/SearchInsertDelete.md: -------------------------------------------------------------------------------- 1 | # Introduction to Binary Search Trees 2 | ● In a binary search tree (BST), the left node of a vertex should always be lesser than the root. 3 | 4 | ● Let's say a node containing value 9 can’t be stored anywhere in the left subtree of a node containing value 5. It has to be stored in somewhere in the right subtree only. 5 | 6 | ● Thus, every node in the left subtree of a node always has lesser value than it, and every node in the right subtree of a node always has a greater value than it. 7 | 8 | ## SEARCHING for a node with given value in BST:- 9 | Searching in a BST is a very important concept since the binary search algorithm is based on this. The binary search is an excellent searching algorithm due to its time complexity of O(log N), where N is the size of the sample set. Logarithmic complexity programs take way less time as compared to those with linear time complexity, for very large number of inputs. 10 | 11 |
Algorithm: The rule is, if the given value (to be searched) is greater than the current node's value, we continue the search only in the right subtree of the current node. And if the given value is lesser, the search goes on only in the left subtree. Since we are using recursion here, the recursive calls for the left and the right subtree are referred to as the traversal here.
12 | 13 | ### Code for SEARCHING in a BST:- 14 | 15 | ``` 16 | bool searchInTree(node *root, int val) 17 | { 18 | bool a = false, b = false; 19 | if (!root) 20 | return false; 21 | if (root->val == val) 22 | return true; 23 | else if (val > root->val) 24 | a = searchInTree(root->right, val); 25 | else 26 | b = searchInTree(root->left, val); 27 | return a || b; //if it finds the given value in either left or right subtree, TRUE will be returned, otherwise FALSE 28 | } 29 | ``` 30 | 31 | ### Time Complexity:- 32 | For searching an element, we have to traverse all elements. Therefore, searching in binary search tree has worst case time complexity of O(n). In general, time complexity is O(h) where h is height of BST. 33 | 34 | ## INSERTING a node with given value in BST:- 35 | Insertion in BST takes place quite similar to the search algorithm, the only difference being that whenever any of the nodes while traversal is found to be NULL, the given value is inserted there. 36 | 37 |
ALGORITHM: Similar to binary search, if the given value to be inserted is greater than the value of the current node, we traverse in its right subtree, and if the given value is lesser than the current node value, we traverse in the left subtree. Our aim is to find an empty branch where we can insert the given value node.
38 | 
39 | ### Code for INSERTING in a BST:-
40 | 
41 | ```
42 | node *insert(node *root, int val)
43 | {
44 |     if (!root) //whenever the current node passed to the recursive function is NULL, we know that there's a vacancy here
45 |         return new node(val); // creating a new node with the given value and returning that
46 |     if (val > root->val)
47 |         root->right = insert(root->right, val);
48 |     else
49 |         root->left = insert(root->left, val);
50 |     return root;
51 | }
52 | 
53 | ```
54 | 
55 | ### Time Complexity:-
56 | In the worst case (a skewed tree), we have to traverse all elements to reach the insertion point. Therefore, insertion in a binary search tree has worst case complexity of O(n); in general it is O(h), where h is the height of the BST.
57 | 
58 | ## DELETION of a node with given value in BST:-
59 | The deletion algorithm in a BST is only slightly more involved. If the node to be deleted has at most one child, we replace it with that child (or simply remove it). If it has two children, we replace its value with its inorder successor (the smallest node of its right subtree) and then delete that successor.
60 | 
61 | 
ALGORITHM: Suppose we want to delete the node with value x. We traverse to that node first. If it has no left child, we connect its right child in its place; if it has no right child, we connect its left child in its place. If both children exist, we copy the value of the minimum node of the right subtree (the inorder successor) into the current node, and then delete that successor from the right subtree.
62 | 
63 | ### Code for Deleting a node in a BST:-
64 | 
65 | ```
66 | struct node* deleteNode(struct node* root, int key){
67 |     if (root == NULL) return root; //base case of recursion
68 |     if (key < root->key)
69 |         root->left = deleteNode(root->left, key); //traversing in the left subtree
70 |     else if (key > root->key)
71 |         root->right = deleteNode(root->right, key); //traversing in the right subtree
72 |     else{
73 |         if (root->left == NULL){ //if only right node present
74 |             struct node *temp = root->right;
75 |             free(root);
76 |             return temp;
77 |         }
78 |         else if (root->right == NULL){ //if only left node present
79 |             struct node *temp = root->left;
80 |             free(root);
81 |             return temp;
82 |         }
83 |         struct node* temp = minValueNode(root->right); //if both children are present, take the inorder successor
84 |         root->key = temp->key;
85 |         root->right = deleteNode(root->right, temp->key);
86 |     }
87 |     return root;
88 | }
89 | 
90 | ```
91 | ### Time Complexity:-
92 | For deletion of an element, in the worst case (a skewed tree) we have to traverse all elements to find it. Therefore, deletion in a binary search tree has worst case complexity of O(n); in general it is O(h), where h is the height of the BST.
93 | 
94 | ## Note:
95 | So, now you know why a BST is a really time-efficient data structure. Its basic operations such as insertion, deletion and search take O(h) time, which is O(log N) when the tree is balanced.
--------------------------------------------------------------------------------
/Algorithms/Sorting Algorithms/Quick sort.md:
--------------------------------------------------------------------------------
1 | # ⭐ QUICK SORT
2 | 
3 | Quicksort is an in-place sorting algorithm. Quicksort is a divide-and-conquer algorithm.
4 | It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays,
5 | according to whether they are less than or greater than the pivot. For this reason,
6 | it is sometimes called partition-exchange sort. The sub-arrays are then sorted recursively.
7 | This can be done in-place, requiring small additional amounts of memory to perform the sorting.
8 | #### Example:
9 | 
10 | ##### Input: `[9, 0, 1, 12, 3], 0, 4`
11 | ###### input1: `array`
12 | ###### input2: `start index`
13 | ###### input3: `end index`
14 | 
15 | ##### Partition phase:
16 | `[9, 0, 1, 12, 3]`
17 | Here, the algorithm selects `9` as the pivot element and swaps all the elements less than the pivot to the left side of the pivot,
18 | and the larger elements to the right side, scanning from the start index to the end index.
19 | `[3, 0, 1, 9, 12]`
20 | this is the array after the first pass; as you can see, all the elements less than 9 (the pivot) are on the left side and the larger elements on the right side
21 | **we can also conclude one thing: the pivot is at its sorted position**
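The partition pass walked through above can be sketched in Python (an illustration of the first-element-pivot partition; the document's full implementations follow below):

```python
def partition(arr, start, end):
    # First element as pivot; elements smaller than the pivot are swapped
    # to the front, then the pivot is moved to its final sorted position
    pivot = arr[start]
    i = start
    for j in range(start + 1, end + 1):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[start], arr[i] = arr[i], arr[start]
    return i

arr = [9, 0, 1, 12, 3]
idx = partition(arr, 0, len(arr) - 1)
print(arr, idx)  # [3, 0, 1, 9, 12] 3 -- the pivot 9 sits at its sorted index
```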
22 | ##### Recursion phase
23 | Once we get the sorted index of the pivot element, we just have to perform the same quick sort operation on the two remaining sides of the array,
24 | ie 25 | ```py 26 | [9, 0, 1, 12, 3] -> pivot [9] 27 | | 28 | [3, 0, 1, 9, 12] -> 9 is at its sorted index 29 | | 30 | | | 31 | [3, 0, 1] -> pivot [3] [12] -> pivot [12] -> sorted 32 | | 33 | [0, 1, 3] -> 3 is its srted index 34 | | 35 | | | 36 | [0, 1] -> pivot [0] no element to right side 37 | | 38 | [0, 1] 39 | | 40 | [1] -> pivot [1] -> sorted 41 | 42 | output -> [0, 1, 3, 9, 12] 43 | ``` 44 | 45 | 46 | 47 | ##### Output: `[0, 1, 3, 9, 12]` 48 | 49 | 50 | ### Pseudo Code 51 | ``` js 52 | // Sorts a (portion of an) array, divides it into partitions, then sorts those 53 | algorithm quicksort(A, lo, hi) is 54 | // If indices are in correct order 55 | if lo < hi then 56 | // Partition array and get pivot index 57 | p := partition(A, lo, hi) 58 | 59 | // Sort the two partitions 60 | quicksort(A, lo, p - 1) // Left side of pivot 61 | quicksort(A, p + 1, hi) // Right side of pivot 62 | 63 | // Divides array into two partitions 64 | algorithm partition(A, lo, hi) is 65 | pivot := A[lo] // The pivot as first element 66 | 67 | // Pivot index 68 | i := lo 69 | 70 | for j := lo+1 to hi do 71 | // If the current element is less than or equal to the pivot 72 | if A[j] <= pivot then 73 | // Move the pivot index forward 74 | i := i + 1 75 | 76 | // Swap the current element with the element at the pivot 77 | swap A[i] with A[j] 78 | // swap last pivot element with low index 79 | swap A[i] with A[lo] 80 | return i // the pivot index 81 | ``` 82 | 83 | ### Code `Python` 84 | ``` py 85 | def partition(arr, start, end): 86 | pivot = arr[start] 87 | i = start 88 | j = i + 1 89 | 90 | while j <= end: 91 | if arr[j] < pivot: 92 | i += 1 93 | temp = arr[j] 94 | arr[j] = arr[i] 95 | arr[i] = temp 96 | j += 1 97 | 98 | temp = arr[start] 99 | arr[start] = arr[i] 100 | arr[i] = temp 101 | 102 | return i 103 | 104 | 105 | def quickSort(arr, start, end): 106 | if start <= end: 107 | index = partition(arr, start, end) 108 | quickSort(arr, start, index - 1) 109 | quickSort(arr, index + 1, 
end)
110 | 
111 | ```
112 | 
113 | ### Quick Sort Optimization
114 | 
115 | Quick Sort's worst-case recursion stack space can be reduced from O(N) to O(log N) using tail recursion: recurse into the smaller partition first and loop over the larger one, so the recursion depth stays logarithmic. (The time complexity is unchanged; only the stack usage improves.)
116 | ### Code `C++`
117 | ```
118 | void QuickSortOptimized(int arr[],int start,int end){
119 |     while(start < end){
120 |         // uses the same partition routine shown in the pseudo code above
121 |         int pivot = partition(arr, start, end);
122 |         // Recurse into the smaller partition first,
123 |         // then loop on the larger one (tail-call elimination)
124 |         if(pivot - start < end - pivot){
125 |             QuickSortOptimized(arr, start, pivot - 1);
126 |             start = pivot + 1;
127 |         }
128 |         else{
129 |             QuickSortOptimized(arr, pivot + 1, end);
130 |             end = pivot - 1;
131 |         }
132 |     }
133 | }
134 | ```
135 | 
136 | #### 👾 Time complexities:
137 | `Best: O(N logN)`
138 | `Average: O(N logN)`
139 | `Worst: O(N^2)` *if all elements are same, or the array is already sorted (with the first element as pivot)*
140 | 
141 | #### 👾 Space complexities:
142 | `O(n)` *worst-case recursion stack space; O(log n) with the tail-recursion optimization above*
143 | 
--------------------------------------------------------------------------------
/Algorithms/Recursive algorithm/Akra_Bazzi.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | We use the Akra–Bazzi theorem to analyze the complexity of divide-and-conquer algorithms.
3 | # The Complexity of Divide and Conquer Algorithm
4 | Divide-and-conquer is an algorithm design technique that solves a problem by breaking it down into smaller sub-problems and combining their solutions. Each sub-problem is treated the same way: we divide it into smaller parts, solve them recursively, and unify the sub-solutions.

5 | **Master Theorem:** If the sub-problems are of the same size, we use the Master Theorem. Those are the ones where T(n), the number of steps an algorithm makes while solving a problem of size n, is a recurrence of the form:
6 | $$T(n) = aT\left(\frac{n}{b}\right) + g(n)$$
7 |
where g(n) is a real-valued function. More specifically, the recurrence describes the complexity of an algorithm that divides a problem of size n into a sub-problems each of size n/b. It performs g(n) steps to break down the problem and combine the solutions to its sub-problems. 8 | 9 | # Akra Bazzi Theorem 10 | The Akra Bazzi theorem was developed by Mohammad Akra and Louay Bazzi in the year 1996. It can be applied in the recurrence of the form: 11 |
![](https://4.bp.blogspot.com/-PepSqXo9UC8/WWOa6V8gL9I/AAAAAAAAG28/8qNynp2wiGUObZb6bxzErm_euQpIK--RQCPcBGAYYCw/s1600/akra_bazzi_img.jpg)
12 | or $T(x)=a_1T(b_1x+E_1(x))+a_2T(b_2x+E_2(x))+\dots+a_kT(b_kx+E_k(x))+g(x)$
13 |
14 | where, $a_{i}$ and $b_{i}$ are constants such that:
15 | - $n_0 \in \mathbb{R}$ is large enough so that T is well-defined.
16 | - For each $i=1,2,\ldots,k$:
17 |   - the constant $a_i > 0$
18 |   - the constant $b_i$ lies in $(0, 1)$
19 |   - $|E_i(x)| \in O\left(\frac{x}{\log^2 x}\right)$
20 | ## Formula
21 | $$T(x)=\Theta\left(x^p + x^p\int_1^x \frac{g(u)}{u^{p+1}}\,du\right)$$
22 | ## What is p?
23 | $$a_1b_1^p + a_2b_2^p + \dots + a_kb_k^p = 1$$
24 | 

25 | Therefore, $\sum_{i=1}^{k} a_ib_i^p = 1$.
26 | ## Examples
27 | 1. T(n) = 2T(n/2) + (n-1)
28 | 
Here, $a_1=2$, $b_1=\frac{1}{2}$, $g(x)=x-1$
29 | 
30 | $\sum_{i=1}^{k=1} a_ib_i^p = 1$
31 | 
32 | $\Rightarrow 2 \times \left(\frac{1}{2}\right)^p = 1$
33 | 
34 | $\Rightarrow$ for $p=1$, the equation is satisfied.
35 | Putting p in the formula:
36 | $T(x)=\Theta\left(x^p + x^p\int_1^x \frac{g(u)}{u^{p+1}}\,du\right)$
37 | $T(x)=\Theta\left(x^1 + x^1\int_1^x \frac{u-1}{u^2}\,du\right)$
38 | $\Rightarrow \Theta\left(x + x\left([\log u]_1^x - \left[-\frac{1}{u}\right]_1^x\right)\right)$
39 | $\Rightarrow \Theta\left(x + x\left[\log x + \frac{1}{x} - 1\right]\right)$
40 | $\Rightarrow \Theta(x + x\log x + 1 - x)$
41 | $\Rightarrow \Theta(x\log x + 1)$
42 | $\Rightarrow \Theta(x\log x)$
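When the characteristic equation cannot be solved by inspection, p can be found numerically: for $a_i>0$ and $0<b_i<1$ the sum $\sum a_ib_i^p$ is strictly decreasing in p, so bisection works. A small sketch (the helper name `akra_bazzi_p` is hypothetical, not part of the theorem):

```python
def akra_bazzi_p(terms, lo=-10.0, hi=10.0, tol=1e-9):
    # terms: list of (a_i, b_i) pairs; solves sum(a_i * b_i**p) == 1 for p.
    # f is strictly decreasing in p because every a_i > 0 and 0 < b_i < 1.
    def f(p):
        return sum(a * b ** p for a, b in terms) - 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid      # sum still above 1: root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

print(round(akra_bazzi_p([(2, 1/2)]), 6))              # 1.0 (example 1 above)
print(round(akra_bazzi_p([(7/4, 1/2), (1, 3/4)]), 6))  # 2.0 (example 2 below)
```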

44 | 2. Suppose $T(n)$ is defined as $1$ for integers $0 \le n \le 3$ and as $n^2+\frac{7}{4}T\left(\left\lfloor\frac{n}{2}\right\rfloor\right)+T\left(\left\lceil\frac{3n}{4}\right\rceil\right)$ for integers $n>3$. In applying the Akra–Bazzi method, the first step is to find the value of p for which $\frac{7}{4}\left(\frac{1}{2}\right)^p+\left(\frac{3}{4}\right)^p=1$. In this example, $p=2$. Then, using the Akra–Bazzi theorem:
45 | 
46 | $T(x)=\Theta\left(x^p + x^p\int_1^x \frac{g(u)}{u^{p+1}}\,du\right)$
47 | 
48 | $\Rightarrow \Theta\left(x^2 + x^2\int_1^x \frac{u^2}{u^3}\,du\right)$
49 | $\Rightarrow \Theta(x^2 + x^2\ln x) = \Theta(x^2\ln x)$

50 | 3. T(n) = $\frac{1}{3}T\left(\frac{n}{3}\right)+\frac{1}{n}$
51 | Here, $a_1=\frac{1}{3}$, $b_1=\frac{1}{3}$, $g(n)=\frac{1}{n}$
52 | 
$\frac{1}{3}\cdot\left(\frac{1}{3}\right)^p=1$
53 | Here, $p=-1$ satisfies the equation
54 | => $n^{-1}(1+\int_{1}^{n}\large\frac{\frac{1}{u}}{u^{-1+1}}\normalsize du)$ 55 | 56 |
=> $\large\frac{1}{n}\normalsize(1+\int_{1}^{n}$ $\large\frac{1}{u}\normalsize du)$ 57 | 58 | => $\large\frac{1}{n}\normalsize(1+[\log u]_{1}^{n})$ 59 | 60 | => $\large\frac{1}{n}\normalsize(1+\log n)$ 61 | 62 | 63 | => $\theta(\large\frac{\log n}{n}\normalsize)$

64 | 4. T(n) = 9T(n/3 + log n) + n
65 | 
Here, $a_1=9$, $b_1=\frac{1}{3}$, $g(n)=n$; the $\log n$ term is the perturbation $E_1(n)$, which satisfies the $O\left(\frac{n}{\log^2 n}\right)$ condition.
66 | From $\sum_{i=1}^{k=1} a_ib_i^p=1$:
67 | 
$9*\large\frac{1}{3}^p\normalsize=1$ 68 |
we get p =2 69 |
=> $n^{2}(1+\int_{1}^{n}\large\frac{u}{u^{2+1}}\normalsize du)$ 70 | 71 | => $n^{2}(1+\int_{1}^{n}\large\frac{1}{u^{2}}\normalsize du)$ 72 | 73 | => $n^{2}(1+[-\large\frac{1}{u}\normalsize]_{1}^{n})$ 74 | 75 | => $n^{2}(2-\large\frac{1}{n}\normalsize)$ 76 | 77 | => $2n^{2}-n$ 78 | 79 | => $\theta(n^{2})$ 80 | ## Significance 81 | The Akra–Bazzi method is more useful than most other techniques for determining asymptotic behavior because it covers such a wide variety of cases. Its primary application is the approximation of the running time of many divide-and-conquer algorithms. For example, in the merge sort, the number of comparisons required in the worst case, which is roughly proportional to its runtime, is given recursively as T(1)=0 and 82 |
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + n - 1
83 | for integers n>0, and can thus be computed using the Akra–Bazzi method to be θ(nlogn) . 84 | 85 | 86 | 87 | 88 | ## Advantages 89 | - Works for many divides and conquer algorithms. 90 | - Has a lesser constraint over the format of the recurrence than Master’s Theorem. 91 | - p can be calculated using numerical methods for complex recurrence relations. 92 | ## Disadvantages 93 | - Doesn't work if growth of the function g(n) is not bounded polynomial. 94 | - Doesn't deal with ceil and floor functions. 95 |

96 | --- 97 | # Resource 98 | For More Practice You Can Visit: [Link](https://github.com/kunal-kushwaha/DSA-Bootcamp-Java/blob/main/assignments/13-complexities.md) 99 |
100 | For getting clear conception on Time Complexities: [Video](https://youtu.be/mV3wrLBbuuE) 101 | 102 | --- 103 | # Conclusion 104 | This is a documentation of Akra Bazzi theorem. 105 |
106 | Resource for other examples of Akra Bazzi:
107 | [GeeksforGeeks](https://www.geeksforgeeks.org/akra-bazzi-method-for-finding-the-time-complexities/)
108 | Resource for detailed study of other DSA topics: [Wizard-Of-Docs github repo](https://github.com/HackClubRAIT/Wizard-Of-Docs)
109 | ---
110 | ---
111 | Don't forget to give a ⭐ to [Wizard-Of-Docs](https://github.com/HackClubRAIT/Wizard-Of-Docs) and keep contributing.
112 | 
113 | Happy Coding! 114 | --- -------------------------------------------------------------------------------- /Algorithms/Segmented Sieve/Segmented Sieve.md: -------------------------------------------------------------------------------- 1 | # Segmented Sieve Algorithm 2 | 3 | A Prime Number has a unique property which states that the number can only be divisible by itself or 1. 4 | 5 | ### Sieve of Eratosthenes 6 | An algorithm known as Sieve of Eratosthenes can distinguish primes upto a given maximum number by marking off the multiples of smaller primes as composite numbers. 7 | 8 | But this algorithm has a disadvantage. 9 | If there is also given a lower limit, then this algorithm would not be much efficient as it would be also finding out primes below the lower limit as part of the process. 10 | This algorithm takes up a lot of unnecessary memory in order to process the calculations. 11 | 12 | 13 | Hence to find out prime numbers within a certain range having a lower limit, Segmented Sieve Algorithm is used which is an adaptation of the Sieve of Eratosthenes Algorithm and works perfectly for given lower and upper boundaries. 14 | 15 | 16 | ## Steps 17 | **Step 1)** A list/vector is created which is going to store all primes upto the root of the higher limit. This is because primes after the root of the higher limit would not be used for determining the primes in the given range. 18 | 19 | **Step 2)** The list is filled with the inital primes, using the Sieve of Eratosthenes method, as it is the most efficient in this case. 20 | 21 | **Step 3)** Then similar to the Sieve of Eratosthenes, the multiples of the initial primes inside that range/segment are marked as composites. 22 | 23 | **Step 4)** The numbers left unmarked are the required primes and is extracted. 
24 | 
25 | 
26 | ## Code using C++
27 | ```cpp
28 | #include <bits/stdc++.h>
29 | using namespace std;
30 | 
31 | vector<int> simpleSieve(int limit) {
32 |     // This function is based on the Sieve of Eratosthenes and is used to get the initial prime numbers
33 |     vector<int> initialPrimes;
34 | 
35 |     // First marking all numbers as prime numbers
36 |     vector<bool> mark(limit + 1, true);
37 | 
38 |     // Marking each of the multiples of the primes as a composite number
39 |     for(int i= 2; i*i<= limit; i++) {
40 |         if(mark[i] == true) {
41 |             // Logically all multiples below the square of a prime will automatically be marked as multiples of smaller primes,
42 |             // Eg. If i=7, upto i*i=49, all multiples of 7, that is, 7*2, 7*3... are already marked by 2, 3 and so on.
43 |             // If i=13, upto 13*13=169, all multiples of 13 including 13*11, 13*7, 13*2, etc are all marked as the multiples of smaller primes.
44 |             // So no need to mark them again, hence starting from the square of the prime...
45 |             for(int j= i*i; j<=limit; j+=i)
46 |                 mark[j] = false;
47 |         }
48 |     }
49 | 
50 |     // All the numbers that are still marked as primes are then stored inside the primes vector while omitting 0 and 1
51 |     for(int i=2; i<=limit; i++)
52 |         if(mark[i])
53 |             initialPrimes.push_back(i);
54 | 
55 |     return initialPrimes;
56 | }
57 | 
58 | 
59 | vector<int> segmentedSieve(int lp, int up) {
60 |     vector<bool> mark(up-lp+1, true);
61 |     vector<int> primes;
62 |     vector<int> initialPrimes = simpleSieve(sqrt(up));
63 | 
64 |     for(auto p : initialPrimes) {
65 |         // Calculating the first multiple of the prime in this segment
66 |         int first = lp/p * p;
67 |         if(first < lp)
68 |             first += p;
69 | 
70 |         // Marking multiples of the primes as composites
71 |         // Omitting checks for multiples below the square of the prime
72 |         for(int i= max(first, p*p); i<= up; i+= p) {
73 |             mark[i-lp] = false;
74 |         }
75 |     }
76 |     // Collecting All Primes
77 |     for(int i=0; i<= up-lp; i++)
78 |         if(mark[i] && (i+lp) > 1)
79 |             primes.push_back(i+lp);
80 | 
81 |     return primes;
82 | }
83 | 
84 | 
85 | int main() {
86 | 
87 |     vector<int> primes;
88 |     int lp = 0, up = 0;
89 | 
90 |     cout << "\nEnter the lower limit: ";
91 |     cin >> lp;
92 |     cout << "\nEnter the upper limit: ";
93 |     cin >> up;
94 |     cout << endl;
95 | 
96 |     primes = segmentedSieve(lp, up);
97 | 
98 |     for(auto p : primes)
99 |         cout << p << ", ";
100 |     cout << endl;
101 | 
102 |     return 0;
103 | }
104 | ```
105 | 
106 | ## Code using Python
107 | ```python
108 | def simpleSieve(limit : int) -> list:
109 |     # All numbers upto limit [except 0 and 1] are initially marked as primes
110 |     mark = [False]*2 + [True]*(limit-1)
111 |     initialPrimes = list()
112 | 
113 |     for i, m in enumerate(mark):
114 |         if m:
115 |             # Multiples before the square of a prime are already marked as multiples of smaller primes
116 |             # Eg. For prime=13, 13*2, 13*3, 13*5, 13*7, 13*11 will already be marked as multiples of 2, 3, 5, 7, 11 respectively
117 |             # Only multiples from 13*13 onward need to be marked as composites
118 |             for j in range(i*i, len(mark), i):
119 |                 mark[j] = False # Marked as composite
120 | 
121 |     # All numbers still marked as primes are primes, as they are not multiples of any other prime numbers
122 |     # Collecting all primes
123 |     for i, m in enumerate(mark):
124 |         if m:
125 |             initialPrimes.append(i)
126 | 
127 |     return initialPrimes
128 | 
129 | 
130 | def segmentedSieve(lp : int, up : int) -> list:
131 |     mark = [True] * (up - lp + 1)
132 |     primes = list()
133 |     initialPrimes = simpleSieve(limit= int(up**0.5))
134 | 
135 | 
136 |     for p in initialPrimes:
137 |         # Finding out the first multiple of the prime in this segment
138 |         first = lp//p * p
139 |         if first < lp:
140 |             first += p
141 | 
142 |         # Marking multiples of the primes as composites
143 |         for i in range(max(first, p*p), up+1, p):
144 |             mark[i-lp] = False
145 | 
146 |     # Collecting all primes (omitting 0 and 1)
147 |     for i, m in enumerate(mark):
148 |         if m and (i+lp) > 1:
149 |             primes.append(i+lp)
150 | 
151 |     return primes
152 | 
153 | 
154 | def main():
155 |     lp = int(input("Enter the lower limit: "))
156 |     up = int(input("Enter the upper limit: "))
157 | 
158 |     primes = segmentedSieve(lp, up)
159 | 
160 |     print(f"\nHere are the primes in range {lp}-{up}:")
161 |     print(*primes, sep= ", ", end= "\n\n")
162 | 
163 | 
164 | if __name__ == "__main__":
165 |     main()
166 | ```
--------------------------------------------------------------------------------
/Algorithms/Searching Algorithms/ternary_Search.md:
--------------------------------------------------------------------------------
1 | # Ternary Search
2 | 
3 | Contributed by @saikatsahana77
4 | 
5 | Ternary search is a divide and conquer algorithm that can be used to find an element in an array. It is similar to binary search where we divide the array into two parts but in this algorithm, we divide the given array into three parts and determine which has the key (searched element). We can divide the array into three parts by taking mid1 and mid2 which can be calculated as shown below. Initially, l and r will be equal to 0 and n-1 respectively, where n is the length of the array.
> Note: Input must be in sorted order
#### Example:

##### Input: `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 6`
###### input1: `array`
###### input2: `Target`

##### Explanation:
`[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`
`Search term: 6`

## Steps to Perform Ternary Search:

- First, compute mid1 and mid2 for the current range [l, r].
- Compare the key with the element at mid1. If they are equal, return mid1.
- If not, compare the key with the element at mid2. If they are equal, return mid2.
- If not, check whether the key is less than the element at mid1. If yes, recur on the first part.
- If not, check whether the key is greater than the element at mid2. If yes, recur on the third part.
- Otherwise, recur on the second (middle) part.

## Visual Explanation:
![ternary](https://media.geeksforgeeks.org/wp-content/uploads/ternaryS-3.png)
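The steps above can be sketched as one short, self-contained function (a minimal illustration; the fully commented iterative and recursive versions follow below):

```python
def ternary_search(ar, key):
    # Repeatedly narrow [l, r] to whichever third can contain the key
    l, r = 0, len(ar) - 1
    while r >= l:
        mid1 = l + (r - l) // 3
        mid2 = r - (r - l) // 3
        if ar[mid1] == key:            # step: key at mid1
            return mid1
        if ar[mid2] == key:            # step: key at mid2
            return mid2
        if key < ar[mid1]:             # step: recur on the first part
            r = mid1 - 1
        elif key > ar[mid2]:           # step: recur on the third part
            l = mid2 + 1
        else:                          # step: recur on the middle part
            l, r = mid1 + 1, mid2 - 1
    return -1

print(ternary_search([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 6))   # → 5
print(ternary_search([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 50))  # → -1
```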
### Pseudo Code (Iterative Approach)
```
function ternary_search(ar, n, key) is
    l := 0
    r := n - 1
    while r >= l do
        mid1 := l + (r - l) / 3
        mid2 := r - (r - l) / 3
        if key == ar[mid1] then
            return mid1
        if key == ar[mid2] then
            return mid2
        if key < ar[mid1] then
            r := mid1 - 1
        else if key > ar[mid2] then
            l := mid2 + 1
        else
            l := mid1 + 1
            r := mid2 - 1
    return unsuccessful
```

### Code `Python` (Iterative Approach)
``` python
def ternarySearch(r, key, ar):
    # Setting the upper and lower bounds
    l = 0
    r = r - 1

    while r >= l:
        # Find mid1 and mid2
        mid1 = l + (r - l) // 3
        mid2 = r - (r - l) // 3

        # Check if key is at either mid
        if key == ar[mid1]:
            return mid1
        if key == ar[mid2]:
            return mid2

        # Since key is not present at either mid,
        # check in which region it is present,
        # then repeat the search in that region
        if key < ar[mid1]:
            # key lies between l and mid1
            r = mid1 - 1
        elif key > ar[mid2]:
            # key lies between mid2 and r
            l = mid2 + 1
        else:
            # key lies between mid1 and mid2
            l = mid1 + 1
            r = mid2 - 1

    # key not found
    return -1

# Driver code
if __name__ == '__main__':
    # Getting the sorted list
    ar = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # Length of list
    r = len(ar)

    ### Test 1 ###

    # Checking for 5
    # Key to be searched in the list
    key = 5

    # Search the key using ternary search
    p = ternarySearch(r, key, ar)

    # Print the result
    print(p)

    ### Test 2 ###

    # Checking for 50
    # Key to be searched in the list
    key = 50

    # Search the key using ternary search
    p = ternarySearch(r, key, ar)

    # Print the result
    print(p)
```

### Pseudo Code (Recursive Approach)
```
function ternary_search(l, r, key, ar) is
    if r >= l then
        mid1 := l + (r - l) / 3
        mid2 := r - (r - l) / 3
        if key == ar[mid1] then
            return mid1
        if key == ar[mid2] then
            return mid2
        if key < ar[mid1] then
            return ternary_search(l, mid1 - 1, key, ar)
        else if key > ar[mid2] then
            return ternary_search(mid2 + 1, r, key, ar)
        else
            return ternary_search(mid1 + 1, mid2 - 1, key, ar)
    return unsuccessful
```
### Code `Python` (Recursive Approach)
``` python
# Function to perform Ternary Search
def ternarySearch(l, r, key, ar):

    if r >= l:
        # Find the mid1 and mid2
        mid1 = l + (r - l) // 3
        mid2 = r - (r - l) // 3

        # Check if key is present at either mid
        if ar[mid1] == key:
            return mid1

        if ar[mid2] == key:
            return mid2

        # Since key is not present at either mid,
        # check in which region it is present,
        # then repeat the search in that region
        if key < ar[mid1]:
            # The key lies between l and mid1
            return ternarySearch(l, mid1 - 1, key, ar)

        elif key > ar[mid2]:
            # The key lies between mid2 and r
            return ternarySearch(mid2 + 1, r, key, ar)

        else:
            # The key lies between mid1 and mid2
            return ternarySearch(mid1 + 1, mid2 - 1, key, ar)

    # Key not found
    return -1

# Driver code
if __name__ == '__main__':
    # Getting the sorted array
    ar = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # Starting index
    l = 0

    # Ending index
    r = len(ar) - 1

    ### Test 1 ###

    # Checking for 5
    # Key to be searched in the array
    key = 5

    # Search the key using ternarySearch
    p = ternarySearch(l, r, key, ar)

    # Print the result
    print(p)

    ### Test 2 ###

    # Checking for 50
    # Key to be searched in the array
    key = 50

    # Search the key using ternarySearch
    p = ternarySearch(l, r, key, ar)

    # Print the result
    print(p)
```


#### ⏲️ Time Complexity:
`O(log₃ n)` (the search space is reduced to one third of its size at each iteration/recursive call).

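As a rough sanity check of this bound, the sketch below counts the loop iterations when the key is absent and the search keeps narrowing to the middle part, then compares the count with log₃ n (an illustrative experiment, not a proof):

```python
import math

def ternary_search_steps(n):
    # Count how many times the range [l, r] shrinks before the search terminates
    # (worst-case sketch: the key is absent and we always keep the middle part)
    l, r, steps = 0, n - 1, 0
    while r >= l:
        steps += 1
        mid1 = l + (r - l) // 3
        mid2 = r - (r - l) // 3
        l, r = mid1 + 1, mid2 - 1  # keep only the middle third
    return steps

for n in (10, 100, 1000, 10**6):
    print(n, ternary_search_steps(n), round(math.log(n, 3), 1))
```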
#### 👾 Space Complexity:
`O(1)` for the iterative approach (no extra array space is used); the recursive approach uses `O(log n)` stack space for the recursive calls.
--------------------------------------------------------------------------------