4 |
5 |
6 |
--------------------------------------------------------------------------------
/.idea/vcs.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ZTM-DS-and-Algo-Python
2 | Contains all the code samples, implementations, and exercises from the Zero to Mastery : Master the Coding Interview - Data Structures + Algorithms course by Andrei Neagoie, in Python.
3 |
4 | ### Content
5 | - venv/Scripts
6 | - Algorithms
7 | - Dynamic Programming
8 | - [Fibonacci.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Dynamic%20Programming/Fibonacci.py)
9 | - [Memoization.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Dynamic%20Programming/Memoization.py)
10 |
11 | - Recursion
12 | - [Factorial.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Recursion/Factorial.py)
13 | - [Fibonacci.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Recursion/Fibonacci.py)
14 | - [Reverse_String.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Recursion/Reverse_String.py)
15 |
16 | - Sorting
17 | - [Bubble_Sort.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Sorting/Bubble_Sort.py)
18 | - [Heap_Sort.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Sorting/Heap_Sort.py)
19 | - [Insertion_Sort.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Sorting/Insertion_Sort.py)
20 | - [Merge_Sort.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Sorting/Merge_Sort.py)
21 | - [Quick_Sort.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Sorting/Quick_Sort.py)
22 | - [Selection_Sort.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Sorting/Selection_Sort.py)
23 |
24 | - Traversals
25 | - [BFS.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Traversals/BFS.py)
26 | - [DFS.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Algorithms/Traversals/DFS.py)
27 |
28 | - Big-O
29 | - [O(1).py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Big-O/O(1).py)
30 | - [O(m + n).py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Big-O/O(m%20%2B%20n).py)
31 | - [O(m x n).py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Big-O/O(m%20x%20n).py)
32 | - [O(n).py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Big-O/O(n).py)
33 | - [O(n^2).py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Big-O/O(n%5E2).py)
34 |
35 | - Data Structures
36 | - Arrays
37 | - [Contains_Duplicate.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Contains_Duplicate.py)
38 | - [Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Implementation.py)
39 | - [Introduction.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Introduction.py)
40 | - [Longest_Word.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Longest_Word.py)
41 | - [Maximum_SubArray.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Maximum_SubArray.py)
42 | - [Merging_sorted_arrays.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Merging_sorted_arrays.py)
43 | - [Move_Zeroes.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Move_Zeroes.py)
44 | - [Reverse_String.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Reverse_String.py)
45 | - [Rotate_Array.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Arrays/Rotate_Array.py)
46 |
47 | - Graphs
48 | - [Undirected_Graph_Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Graphs/Undirected_Graph_Implementation.py)
49 |
50 | - Hash Tables
51 | - [First_Recurring_Character.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Hash%20Tables/First_Recurring_Character.py)
52 | - [Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Hash%20Tables/Implementation.py)
53 | - [Introduction.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Hash%20Tables/Introduction.py)
54 | - [Pattern_Matching.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Hash%20Tables/Pattern_Matching.py)
55 |
56 | - Linked Lists
57 | - [Doubly_Linked_Lists.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Linked%20Lists/Doubly_Linked_Lists.py)
58 | - [Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Linked%20Lists/Implementation.py)
59 | - [Reverse.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Linked%20Lists/Reverse.py)
60 |
61 | - Queues
62 | - [Linked_List_Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Queues/Linked_List_Implementation.py)
63 | - [Queue_Using_Stacks.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Queues/Queue_Using_Stacks.py)
64 |
65 | - Stacks
66 | - [Array_Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Stacks/Array_Implementation.py)
67 | - [Linked_List_Implementation.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Stacks/Linked_List_Implementation.py)
68 |
69 | - Trees
70 | - [Binary_Search_Tree.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Trees/Binary_Search_Tree.py)
71 | - [Heap.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Trees/Heap.py)
72 | - [Priority_Queues_Using_Heap.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Trees/Priority_Queues_Using_Heap.py)
73 | - [Trie.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/Data%20Structures/Trees/Trie.py)
74 |
75 | - How To Solve Coding Problems
76 | - [Google Interview Question.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/How%20to%20solve%20coding%20problems/Google%20Interview%20Question.py)
77 | - [Interview Question 1.py](https://github.com/VicodinAbuser/ZTM-DS-and-Algo-Python/blob/master/venv/Scripts/How%20to%20solve%20coding%20problems/Interview%20Question%201.py)
78 |
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Dynamic Programming/Fibonacci.py:
--------------------------------------------------------------------------------
1 | #Now we will implement our old Fibonacci program using Dynamic Programming
2 | #Fibonacci Sequence : 0 1 1 2 3 5 8 13 21 34 55 89 144 233 . . .
3 |
4 | import time
5 |
6 | def fibonacci(n):
7 | if n<2:
8 | return n
9 | else:
10 | return fibonacci(n-1) + fibonacci(n-2)
11 |
12 |
13 | cache = {}
14 | def dynamic_fibonacci(n):
15 | if n in cache:
16 | return cache[n]
17 | else:
18 | if n < 2:
19 | return n
20 | else:
21 | cache[n] = dynamic_fibonacci(n-1) + dynamic_fibonacci(n-2)
22 | return cache[n]
23 |
24 |
25 | t1 = time.time()
26 | print(fibonacci(30))
27 | t2 = time.time()
28 | print(t2-t1)
29 | #832040
30 | #0.39888763427734375
31 |
32 | t1 = time.time()
33 | print(dynamic_fibonacci(30))
34 | t2 = time.time()
35 | print(t2-t1)
36 | #832040
37 | #0.0
38 |
39 |
40 | t1 = time.time()
41 | print(dynamic_fibonacci(60))
42 | t2 = time.time()
43 | print(t2-t1)
44 | #1548008755920
45 | #0.0
46 |
47 |
48 | t1 = time.time()
49 | print(dynamic_fibonacci(100))
50 | t2 = time.time()
51 | print(t2-t1)
52 | #354224848179261915075
53 | #0.0
54 |
55 | t1 = time.time()
56 | print(dynamic_fibonacci(1000))
57 | t2 = time.time()
58 | print(t2-t1)
59 | #43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
60 | #0.0009982585906982422
61 |
62 | #I won't even dare to try calculating fibonacci(1000) using the normal recursive function!
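#As a rough sketch (not part of the course code), the same values can also be computed bottom-up,
#iterating from the smallest subproblem upwards. This avoids recursion (and its depth limit) entirely.
def bottom_up_fibonacci(n):
    if n < 2:
        return n
    previous, current = 0, 1
    for _ in range(2, n+1):
        previous, current = current, previous + current
    return current

print(bottom_up_fibonacci(30))    #832040, matches fibonacci(30) above
print(bottom_up_fibonacci(1000))  #Matches dynamic_fibonacci(1000) above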
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Dynamic Programming/Memoization.py:
--------------------------------------------------------------------------------
1 | #Memoization is an optimization technique used to speed up programs by storing the results of expensive function calls
2 | #and returning the cached result when the same inputs occur again.
3 | #In Python there's a module named functools with a decorator named lru_cache() which allows us to use this optimization technique
4 | #First, we'll implement memoization on our own with an example function, then with the help of lru_cache
5 |
6 | import time, random
7 |
8 | times =[]
9 |
10 | def squaring_without_memoization(number): #Function to calculate the square of a number
11 | return number**2
12 |
13 | array = [random.randint(1,10) for _ in range(10000000)] #Generates an array of size 10000000 with random integers between 1 and 10 (both included)
14 | t1 = time.time()
15 | for i in range(len(array)):
16 | print(squaring_without_memoization(array[i]))
17 | t2 = time.time()
18 | times.append(t2-t1)
19 |
20 |
21 | cache = {}
22 | def squaring_with_memoization(number):
23 | if number in cache:
24 | return cache[number]
25 | else:
26 | cache[number] = number**2
27 | return cache[number]
28 |
29 | t1 = time.time()
30 | for i in range(len(array)):
31 | print(squaring_with_memoization(array[i]))
32 | t2 = time.time()
33 | times.append(t2-t1)
34 |
35 |
36 | from functools import lru_cache
37 |
38 | @lru_cache(maxsize=10000)
39 | def squaring(number):
40 | return number**2
41 |
42 | print(array)
43 | t1 = time.time()
44 | for i in range(len(array)):
45 | print(squaring(array[i]))
46 | t2 = time.time()
47 | times.append(t2-t1)
48 |
49 | print(times)
50 | #[203.95188665390015, 148.48580384254456, 148.26833629608154] --- When array size was 10000000
51 | #[7.06306266784668, 6.145563125610352, 5.758295774459839] --- When array size was 1000000
52 |
53 | print(cache)
54 | #{8: 64, 7: 49, 6: 36, 1: 1, 4: 16, 9: 81, 2: 4, 5: 25, 3: 9, 10: 100}
55 |
56 | print(squaring.cache_info())
57 | #CacheInfo(hits=999990, misses=10, maxsize=10000, currsize=10) --- When array size was 1000000
58 | #CacheInfo(hits=9999990, misses=10, maxsize=10000, currsize=10) --- When array size was 10000000
59 |
60 |
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Recursion/Factorial.py:
--------------------------------------------------------------------------------
1 | #Given a number, we have to return its factorial.
2 | #For example, factorial(5) should return 5! = 5*4*3*2*1 = 120
3 | #We can solve this recursively, or iteratively.
4 | #First we are going to solve it iteratively.
5 |
6 | def iterative_factorial(number):
7 | f = 1
8 | for i in range(1, number+1):
9 | f = f * i
10 | return f
11 |
12 | print(iterative_factorial(0))
13 | #1
14 | print(iterative_factorial(5))
15 | #120
16 | print(iterative_factorial(50))
17 | #30414093201713378043612608166064768844377641568960512000000000000
18 |
19 | def recursive_factorial(number):
20 | if number <= 1:
21 | return 1
22 | else:
23 | return number * recursive_factorial(number-1)
24 |
25 | print(recursive_factorial(0))
26 | #1
27 | print(recursive_factorial(5))
28 | #120
29 | print(recursive_factorial(50))
30 | #30414093201713378043612608166064768844377641568960512000000000000
31 | print(recursive_factorial(1000))
32 | #RecursionError: maximum recursion depth exceeded in comparison
33 |
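#As a side note (a sketch, not from the course): the RecursionError above happens because Python
#caps the call stack at roughly 1000 frames by default. The limit can be raised with
#sys.setrecursionlimit, although the iterative version is the safer choice for large inputs.
#(To actually run this, the failing recursive_factorial(1000) call above would need to be removed first.)
import sys
sys.setrecursionlimit(2000)
print(recursive_factorial(1000))
#Prints the full value of 1000! instead of raising RecursionError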
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Recursion/Fibonacci.py:
--------------------------------------------------------------------------------
1 | #Given a number, we have to return the number at that index of the fibonacci sequence.
2 | #Fibonacci Sequence - 0 1 1 2 3 5 8 13 21 34 55 89 144 . . . .
3 | #For example, fibonacci(5) should return 5, as the element at index 5 (starting from 0) of the fibonacci sequence is the number 5
4 | #Again, we will write both the iterative and recursive solutions
5 |
6 | def iterative_fibonacci(index):
7 | first_number = 0
8 | second_number = 1
9 | if index == 0:
10 | return first_number
11 | if index == 1:
12 | return second_number
13 | for i in range(2,index +1):
14 | third_number = first_number + second_number
15 | first_number = second_number
16 | second_number = third_number
17 | return third_number
18 |
19 | print(iterative_fibonacci(0)) #0
20 | print(iterative_fibonacci(1)) #1
21 | print(iterative_fibonacci(5)) #5
22 | print(iterative_fibonacci(7)) #13
23 | print(iterative_fibonacci(10)) #55
24 | print(iterative_fibonacci(12)) #144
25 |
26 |
27 | def recursive_fibonacci(index):
28 | if index == 0: #Base case 1
29 | return 0
30 | if index == 1: #Base case 2
31 | return 1
32 | return recursive_fibonacci(index-1) + recursive_fibonacci(index-2) #Every term in fib sequence = sum of previous two terms
33 |
34 | print(recursive_fibonacci(0)) #0
35 | print(recursive_fibonacci(1)) #1
36 | print(recursive_fibonacci(5)) #5
37 | print(recursive_fibonacci(7)) #13
38 | print(recursive_fibonacci(10)) #55
39 | print(recursive_fibonacci(12)) #144
40 |
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Recursion/Reverse_String.py:
--------------------------------------------------------------------------------
1 | #Given a string, we need to reverse it using recursion (and iteration)
2 | #For example, input = "Zero To Mastery", output = "yretsaM oT oreZ"
3 |
4 | #First we will implement the iterative solution
5 | def iterative_reverse(string): #Here we use a second string to store the reversed version. Time and Space complexity = O(n)
6 | reversed_string = ''
7 | for i in range(len(string)):
8 | reversed_string = reversed_string + string[len(string)-i-1]
9 | return reversed_string
10 |
11 | print(iterative_reverse("Zero To Mastery"))
12 | #yretsaM oT oreZ
13 |
14 | #Here we append the characters in reverse order onto the original string itself and then slice it to keep only the 2nd half, i.e., the reversed part.
15 | #Time complexity = O(n). Space complexity = O(n)
16 | def second_iterative_reverse(string):
17 | original_length = len(string)
18 | for i in range(original_length):
19 | string = string + string[original_length - i - 1]
20 | string = string[original_length:]
21 | return string
22 |
23 | print(second_iterative_reverse("Zero To Mastery"))
24 | #yretsaM oT oreZ
25 |
26 |
27 | def recursive_reverse(string):
28 | print(string)
29 | if len(string) == 0:
30 | return string
31 | else:
32 | return recursive_reverse(string[1:]) + string[0]
33 |
34 | print(recursive_reverse("Zero To Mastery"))
35 | '''
36 | Zero To Mastery
37 | ero To Mastery
38 | ro To Mastery
39 | o To Mastery
40 | To Mastery
41 | To Mastery
42 | o Mastery
43 | Mastery
44 | Mastery
45 | astery
46 | stery
47 | tery
48 | ery
49 | ry
50 | y
51 |
52 | yretsaM oT oreZ
53 | '''
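#As a brief aside (not one of the course solutions): idiomatic Python usually reverses a string
#with slicing, which is also O(n) in time and space.
def slice_reverse(string):
    return string[::-1]

print(slice_reverse("Zero To Mastery"))
#yretsaM oT oreZ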
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Sorting/Bubble_Sort.py:
--------------------------------------------------------------------------------
1 | #In Bubble Sort, the largest value is bubbled up in every pass.
2 | #Every two adjacent items are compared and they are swapped if they are in the wrong order.
3 | #This way, after every pass, the largest remaining element reaches the end of the array.
4 | #Time complexity of Bubble Sort in the worst and average cases is O(n^2); in the best case it is O(n) (for the optimized version below, which stops early when no swaps occur)
5 |
6 | def bubble_sort(array):
7 | count = 0
8 | for i in range(len(array)-1): #-1 because when only 1 item will be left, we don't need to sort that
9 | print(array)
10 | for j in range(len(array)-i-1): #In every iteration of the outer loop, one number gets sorted. So the inner loop will run only for the unsorted part
11 | count += 1
12 | if array[j] > array[j+1]: #If two adjacent elements in the wrong order are found, they are swapped
13 | array[j], array[j+1] = array[j+1], array[j]
14 | #print(f'Number of comparisons = {count}')
15 | return (f'{array} \nNumber of comparisons = {count}')
16 |
17 | array = [5,9,3,10,45,2,0]
18 | print(bubble_sort(array))
19 |
20 | '''
21 | [5, 9, 3, 10, 45, 2, 0]
22 | [5, 3, 9, 10, 2, 0, 45]
23 | [3, 5, 9, 2, 0, 10, 45]
24 | [3, 5, 2, 0, 9, 10, 45]
25 | [3, 2, 0, 5, 9, 10, 45]
26 | [2, 0, 3, 5, 9, 10, 45]
27 | [0, 2, 3, 5, 9, 10, 45]
28 | [0, 2, 3, 5, 9, 10, 45]
29 | Number of comparisons = 21
30 | '''
31 |
32 |
33 | sorted_array = [5,6,7,8,9]
34 | print(bubble_sort(sorted_array))
35 |
36 | '''
37 | [5, 6, 7, 8, 9]
38 | [5, 6, 7, 8, 9]
39 | [5, 6, 7, 8, 9]
40 | [5, 6, 7, 8, 9]
41 | [5, 6, 7, 8, 9]
42 | Number of comparisons = 10
43 | '''
44 |
45 |
46 |
47 | #We can optimize the bubble sort slightly by adding a new boolean variable
48 | #which keeps track of whether any swaps were done in the last pass or not
49 | #This way, if the array becomes completely sorted, say, halfway through the passes, we won't do unnecessary comparisons
50 | def optimized_bubble_sort(array):
51 | count = 0
52 | for i in range(len(array) - 1):
53 | swap = False
54 | print(array)
55 | for j in range(len(array) - i - 1):
56 | count += 1
57 | if array[j] > array[j+1]:
58 | array[j], array[j+1] = array[j+1], array[j]
59 | swap = True
60 | if swap==False:
61 | return (f'{array} \nNumber of comparisons = {count}')
62 | return (f'{array} \nNumber of comparisons = {count}')
63 |
64 |
65 | array1 = [5,9,3,10,45,2,0]
66 | print(optimized_bubble_sort(array1))
67 |
68 | '''
69 | [5, 9, 3, 10, 45, 2, 0]
70 | [5, 3, 9, 10, 2, 0, 45]
71 | [3, 5, 9, 2, 0, 10, 45]
72 | [3, 5, 2, 0, 9, 10, 45]
73 | [3, 2, 0, 5, 9, 10, 45]
74 | [2, 0, 3, 5, 9, 10, 45]
75 | [0, 2, 3, 5, 9, 10, 45]
76 | Number of comparisons = 21
77 | '''
78 |
79 |
80 | sorted_array1 = [5,6,7,8,9]
81 | print(optimized_bubble_sort(sorted_array1))
82 |
83 | '''
84 | [5, 6, 7, 8, 9]
85 | [5, 6, 7, 8, 9]
86 | Number of comparisons = 4
87 | '''
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Sorting/Heap_Sort.py:
--------------------------------------------------------------------------------
1 | #Heap Sort as the name suggests, uses the heap data structure.
2 | #First the array is converted into a binary heap. Then the first element, which is the maximum element in the case of a max-heap,
3 | #is swapped with the last element so that the maximum element goes to the end of the array as it should be in a sorted array.
4 | #Then the heap size is reduced by 1 and max-heapify function is called on the root.
5 | #Time complexity is O(n log n) in all cases and space complexity is O(1)
6 |
7 | count = 0
8 | def max_heapify(array, heap_size, i):
9 | left = 2 * i + 1
10 | right = 2 * i + 2
11 | largest = i
12 | global count
13 | if left < heap_size:
14 | count += 1
15 | if array[left] > array[largest]:
16 | largest = left
17 | if right < heap_size:
18 | count += 1
19 | if array[right] > array[largest]:
20 | largest = right
21 | if largest != i:
22 | array[i], array[largest] = array[largest], array[i]
23 | max_heapify(array, heap_size, largest)
24 |
25 | def build_heap(array):
26 | heap_size = len(array)
27 | for i in range ((heap_size//2),-1,-1):
28 | max_heapify(array,heap_size, i)
29 |
30 | def heap_sort(array):
31 | heap_size = len(array)
32 | build_heap(array)
33 | print (f'Heap : {array}')
34 | for i in range(heap_size-1,0,-1):
35 | array[0], array[i] = array[i], array[0]
36 | heap_size -= 1
37 | max_heapify(array, heap_size, 0)
38 |
39 | array = [5,9,3,10,45,2,0]
40 | heap_sort(array)
41 | print (array)
42 | print(f'Number of comparisons = {count}')
43 | '''
44 | Heap : [45, 10, 3, 5, 9, 2, 0]
45 | [0, 2, 3, 5, 9, 10, 45]
46 | Number of comparisons = 22
47 | '''
48 |
49 | sorted_array = [5,6,7,8,9]
50 | heap_sort(sorted_array)
51 | print(sorted_array)
52 | print(f'Number of comparisons = {count}')
53 | '''
54 | Heap : [9, 8, 7, 5, 6]
55 | [5, 6, 7, 8, 9]
56 | Number of comparisons = 12
57 | '''
58 |
59 | reverse_sorted_array = [9,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10]
60 | heap_sort(reverse_sorted_array)
61 | print(reverse_sorted_array)
62 | print(f'Number of comparisons = {count}')
63 | '''
64 | Heap : [9, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
65 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
66 | Number of comparisons = 105
67 | '''
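#As a rough sketch (not part of the lesson): the standard library's heapq module maintains a min-heap,
#so heapify followed by repeated heappop yields the elements in ascending order. Unlike the in-place
#heap sort above, this builds a new list.
import heapq

def heapq_sort(array):
    heap = list(array)          #Copy so the input is left untouched
    heapq.heapify(heap)         #O(n) build of a min-heap
    return [heapq.heappop(heap) for _ in range(len(heap))]   #Each pop is O(log n)

print(heapq_sort([5,9,3,10,45,2,0]))
#[0, 2, 3, 5, 9, 10, 45]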
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Sorting/Insertion_Sort.py:
--------------------------------------------------------------------------------
1 | #In Insertion sort, for the first iteration we fix the first element, assuming it is at its correct position
2 | #Then we loop through the rest of the elements and insert them into their correct positions with respect to the already sorted part of the array
3 | #Time complexity is O(n^2) in the worst case
4 |
5 | def insertion_sort(array):
6 | count = 0
7 | for i in range(1, len(array)):
8 | print(array)
9 | last_sorted_position = array[i-1] #We store the last element which is sorted
10 | count += 1
11 | if array[i] < last_sorted_position: #We check if the current element is less than the last sorted element
12 | temp = array[i] #If yes, we store the current element in a temporary variable.
13 | for j in range(i-1,-1,-1): #We loop backwards through the sorted part of the array to check where the current element fits
14 | count += 1
15 | if temp < array[j]: #For every element we find in the sorted part which is greater than the current element, we shift them one place towards right to make room for the current element
16 | if j == 0: #If we reach the beginning of the array in the process, we shift the elements right and we assign the current element to the 0th position
17 | array[j+1] = array[j]
18 | array[j] = temp
19 | else: #Otherwise we just keep shifting
20 | array[j+1] = array[j]
21 | else: #Once we find an element that is smaller than the current element, it means we have found the position to insert our current element at
22 | array[j+1] = temp #So we just assign the element to its correct position
23 | break #And break out of this loop
24 | return (f'{array} \nNumber of comparisons = {count}')
25 |
26 | array = [5,9,3,10,45,2,0]
27 | print(insertion_sort(array))
28 | '''
29 | [5, 9, 3, 10, 45, 2, 0]
30 | [5, 9, 3, 10, 45, 2, 0]
31 | [3, 5, 9, 10, 45, 2, 0]
32 | [3, 5, 9, 10, 45, 2, 0]
33 | [3, 5, 9, 10, 45, 2, 0]
34 | [2, 3, 5, 9, 10, 45, 0]
35 | [0, 2, 3, 5, 9, 10, 45]
36 | Number of comparisons = 19
37 | '''
38 |
39 | sorted_array = [5,6,7,8,9]
40 | print(insertion_sort(sorted_array))
41 | '''
42 | [5, 6, 7, 8, 9]
43 | [5, 6, 7, 8, 9]
44 | [5, 6, 7, 8, 9]
45 | [5, 6, 7, 8, 9]
46 | [5, 6, 7, 8, 9]
47 | Number of comparisons = 4
48 | '''
49 |
50 | #It is fast for sorted or nearly sorted inputs as can be seen with the number of comparisons above.
51 |
52 | reverse_sorted_array = [9,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10]
53 | print(insertion_sort(reverse_sorted_array))
54 | '''
55 | [9, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
56 |
57 | [8, 9, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
58 |
59 | [7, 8, 9, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
60 |
61 | [6, 7, 8, 9, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
62 |
63 | [5, 6, 7, 8, 9, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
64 |
65 | [4, 5, 6, 7, 8, 9, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
66 |
67 | [3, 4, 5, 6, 7, 8, 9, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
68 |
69 | [2, 3, 4, 5, 6, 7, 8, 9, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
70 |
71 | [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
72 |
73 | [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
74 |
75 | [-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -2, -3, -4, -5, -6, -7, -8, -9, -10]
76 |
77 | [-2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -3, -4, -5, -6, -7, -8, -9, -10]
78 |
79 | [-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -4, -5, -6, -7, -8, -9, -10]
80 |
81 | [-4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -5, -6, -7, -8, -9, -10]
82 |
83 | [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -6, -7, -8, -9, -10]
84 |
85 | [-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -7, -8, -9, -10]
86 |
87 | [-7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -8, -9, -10]
88 |
89 | [-8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -9, -10]
90 |
91 | [-9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -10]
92 |
93 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
94 |
95 | Number of comparisons = 209
96 | '''
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Sorting/Merge_Sort.py:
--------------------------------------------------------------------------------
1 | #Merge Sort uses the Divide and Conquer approach. It involves breaking up the array from the middle until
2 | #arrays of only 1 element remain, and then merging them back up in sorted order.
3 | #Time complexity is O(n log n) and space complexity is O(n)
4 |
5 | count = 0
6 |
7 | def merge_sort(array):
8 | if len(array) == 1:
9 | return array
10 | else:
11 | mid = len(array)//2
12 | left_array = array[:mid]
13 | right_array = array[mid:]
14 | print(f'Left : {left_array}')
15 | print(f'Right : {right_array}')
16 | return merge(merge_sort(left_array),merge_sort(right_array))
17 |
18 |
19 | def merge(left, right):
20 | l = len(left)
21 | r = len(right)
22 | left_index = 0
23 | right_index = 0
24 | sorted_array = []
25 | while (left_index < l and right_index < r):
26 | global count
27 | count += 1
28 | if left[left_index] < right[right_index]:
29 | sorted_array.append(left[left_index])
30 | left_index += 1
31 | else:
32 | sorted_array.append(right[right_index])
33 | right_index += 1
34 | print(sorted_array + left[left_index:] + right[right_index:])
35 | return sorted_array + left[left_index:] + right[right_index:]
36 |
37 |
38 |
39 | array = [5,9,3,10,45,2,0]
40 | print(merge_sort(array))
41 | print(f'Number of comparisons = {count}')
42 | '''
43 | Left : [5, 9, 3]
44 | Right : [10, 45, 2, 0]
45 | Left : [5]
46 | Right : [9, 3]
47 | Left : [9]
48 | Right : [3]
49 | [3, 9]
50 | [3, 5, 9]
51 | Left : [10, 45]
52 | Right : [2, 0]
53 | Left : [10]
54 | Right : [45]
55 | [10, 45]
56 | Left : [2]
57 | Right : [0]
58 | [0, 2]
59 | [0, 2, 10, 45]
60 | [0, 2, 3, 5, 9, 10, 45]
61 | [0, 2, 3, 5, 9, 10, 45]
62 | Number of comparisons = 12
63 | '''
64 |
65 | sorted_array = [5,6,7,8,9]
66 | print(merge_sort(sorted_array))
67 | print(f'Number of comparisons = {count}')
68 | '''
69 | Left : [5, 6]
70 | Right : [7, 8, 9]
71 | Left : [5]
72 | Right : [6]
73 | [5, 6]
74 | Left : [7]
75 | Right : [8, 9]
76 | Left : [8]
77 | Right : [9]
78 | [8, 9]
79 | [7, 8, 9]
80 | [5, 6, 7, 8, 9]
81 | [5, 6, 7, 8, 9]
82 | Number of comparisons = 5
83 | '''
84 |
85 | reverse_sorted_array = [9,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10]
86 | print(merge_sort(reverse_sorted_array))
87 | print(f'Number of comparisons = {count}')
88 | '''
89 | Left : [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
90 | Right : [-1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
91 |
92 | Left : [9, 8, 7, 6, 5]
93 | Right : [4, 3, 2, 1, 0]
94 |
95 | Left : [9, 8]
96 | Right : [7, 6, 5]
97 |
98 | Left : [9]
99 | Right : [8]
100 |
101 | [8, 9]
102 |
103 | Left : [7]
104 | Right : [6, 5]
105 |
106 | Left : [6]
107 | Right : [5]
108 |
109 | [5, 6]
110 |
111 | [5, 6, 7]
112 |
113 | [5, 6, 7, 8, 9]
114 |
115 | Left : [4, 3]
116 | Right : [2, 1, 0]
117 |
118 | Left : [4]
119 | Right : [3]
120 |
121 | [3, 4]
122 |
123 | Left : [2]
124 | Right : [1, 0]
125 |
126 | Left : [1]
127 | Right : [0]
128 |
129 | [0, 1]
130 |
131 | [0, 1, 2]
132 |
133 | [0, 1, 2, 3, 4]
134 |
135 | [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
136 |
137 | Left : [-1, -2, -3, -4, -5]
138 | Right : [-6, -7, -8, -9, -10]
139 |
140 | Left : [-1, -2]
141 | Right : [-3, -4, -5]
142 |
143 | Left : [-1]
144 | Right : [-2]
145 |
146 | [-2, -1]
147 |
148 | Left : [-3]
149 | Right : [-4, -5]
150 |
151 | Left : [-4]
152 | Right : [-5]
153 |
154 | [-5, -4]
155 |
156 | [-5, -4, -3]
157 |
158 | [-5, -4, -3, -2, -1]
159 |
160 | Left : [-6, -7]
161 | Right : [-8, -9, -10]
162 |
163 | Left : [-6]
164 | Right : [-7]
165 |
166 | [-7, -6]
167 |
168 | Left : [-8]
169 | Right : [-9, -10]
170 |
171 | Left : [-9]
172 | Right : [-10]
173 |
174 | [-10, -9]
175 |
176 | [-10, -9, -8]
177 |
178 | [-10, -9, -8, -7, -6]
179 |
180 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1]
181 |
182 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
183 |
184 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
185 |
186 | Number of comparisons = 48
187 | '''
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Sorting/Quick_Sort.py:
--------------------------------------------------------------------------------
1 | #Quick Sort is another sorting algorithm which follows the divide and conquer approach.
2 | #It requires choosing a pivot, then placing all elements smaller than the pivot to its left and all larger elements to its right
3 | #Then the array is partitioned at the pivot position, and the left and right subarrays follow the same procedure until the base case is reached.
4 | #After each pass the pivot element occupies its correct position in the array.
5 | #Time complexity in the best and average cases is O(n log n), whereas in the worst case it jumps up to O(n^2). Space complexity is O(log n)
6 |
7 | #In this implementation, we will take the last element as pivot.
8 | count = 0
9 |
10 | def partition(array, left, right):
11 | smaller_index = left - 1
12 | pivot = array[right]
13 | for i in range(left, right):
14 | global count
15 | count += 1
16 | if array[i] < pivot:
17 | smaller_index += 1
18 | array[smaller_index], array[i] = array[i], array[smaller_index]
19 | array[smaller_index+1], array[right] = array[right], array[smaller_index+1]
20 | print(array)
21 | return (smaller_index+1)
22 |
23 | def quick_sort(array, left, right):
24 | if left < right:
25 | partitioning_index = partition(array, left, right)
26 | print(partitioning_index)
27 | quick_sort(array,left,partitioning_index-1)
28 | quick_sort(array,partitioning_index+1,right)
29 |
30 | array = [5,9,3,10,45,2,0]
31 | quick_sort(array, 0, (len(array)-1))
32 | print(array)
33 | print(f'Number of comparisons = {count}')
34 | '''
35 | [0, 9, 3, 10, 45, 2, 5]
36 | 0
37 | [0, 3, 2, 5, 45, 9, 10]
38 | 3
39 | [0, 2, 3, 5, 45, 9, 10]
40 | 1
41 | [0, 2, 3, 5, 9, 10, 45]
42 | 5
43 | [0, 2, 3, 5, 9, 10, 45]
44 | #Number of comparisons = 14
45 | '''
46 |
47 | sorted_array = [5,6,7,8,9]
48 | quick_sort(sorted_array, 0, len(sorted_array)-1)
49 | print(sorted_array)
50 | print(f'Number of comparisons = {count}')
51 | '''
52 | [5, 6, 7, 8, 9]
53 | 4
54 | [5, 6, 7, 8, 9]
55 | 3
56 | [5, 6, 7, 8, 9]
57 | 2
58 | [5, 6, 7, 8, 9]
59 | 1
60 | [5, 6, 7, 8, 9]
61 | Number of comparisons = 10
62 | '''
63 |
64 | reverse_sorted_array = [9,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10]
65 | quick_sort(reverse_sorted_array, 0, len(reverse_sorted_array) - 1)
66 | print(reverse_sorted_array)
67 | print(f'Number of comparisons = {count}')
68 | '''
69 | [-10, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, 9]
70 | 0
71 | [-10, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, 9]
72 | 19
73 | [-10, -9, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, 8, 9]
74 | 1
75 | [-10, -9, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, 8, 9]
76 | 18
77 | [-10, -9, -8, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, 7, 8, 9]
78 | 2
79 | [-10, -9, -8, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, 7, 8, 9]
80 | 17
81 | [-10, -9, -8, -7, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, 6, 7, 8, 9]
82 | 3
83 | [-10, -9, -8, -7, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, 6, 7, 8, 9]
84 | 16
85 | [-10, -9, -8, -7, -6, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, 5, 6, 7, 8, 9]
86 | 4
87 | [-10, -9, -8, -7, -6, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, 5, 6, 7, 8, 9]
88 | 15
89 | [-10, -9, -8, -7, -6, -5, 3, 2, 1, 0, -1, -2, -3, -4, 4, 5, 6, 7, 8, 9]
90 | 5
91 | [-10, -9, -8, -7, -6, -5, 3, 2, 1, 0, -1, -2, -3, -4, 4, 5, 6, 7, 8, 9]
92 | 14
93 | [-10, -9, -8, -7, -6, -5, -4, 2, 1, 0, -1, -2, -3, 3, 4, 5, 6, 7, 8, 9]
94 | 6
95 | [-10, -9, -8, -7, -6, -5, -4, 2, 1, 0, -1, -2, -3, 3, 4, 5, 6, 7, 8, 9]
96 | 13
97 | [-10, -9, -8, -7, -6, -5, -4, -3, 1, 0, -1, -2, 2, 3, 4, 5, 6, 7, 8, 9]
98 | 7
99 | [-10, -9, -8, -7, -6, -5, -4, -3, 1, 0, -1, -2, 2, 3, 4, 5, 6, 7, 8, 9]
100 | 12
101 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, 0, -1, 1, 2, 3, 4, 5, 6, 7, 8, 9]
102 | 8
103 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, 0, -1, 1, 2, 3, 4, 5, 6, 7, 8, 9]
104 | 11
105 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
106 | 9
107 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
108 | Number of comparisons = 190
109 | '''
110 |
111 | #If the same algorithm is implemented with the pivot element = array[(left+right)//2], i.e., the middle element of the array,
112 | #then the number of comparisons for the reverse_sorted_array drops down to 91
113 | #Number of comparisons for the sorted array = 6 and
114 | #Number of comparisons for the first unsorted array = 11
115 |
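#A rough sketch of that variation (illustrative names, same Lomuto-style partition as above):
#pick the middle element, swap it into the last position, and then partition exactly as before.
def partition_middle_pivot(array, left, right):
    mid = (left + right) // 2
    array[mid], array[right] = array[right], array[mid]   #Move the middle element into the pivot slot
    return partition(array, left, right)

def quick_sort_middle_pivot(array, left, right):
    if left < right:
        partitioning_index = partition_middle_pivot(array, left, right)
        quick_sort_middle_pivot(array, left, partitioning_index-1)
        quick_sort_middle_pivot(array, partitioning_index+1, right)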
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Sorting/Selection_Sort.py:
--------------------------------------------------------------------------------
1 | #Selection sort involves finding the minimum element in one pass through the array
2 | #and then swapping it with the first position of the unsorted part of the array.
3 | #Time complexity of selection sort is O(n^2) in all cases
4 |
5 | def selection_sort(array):
6 | count = 0
7 | for i in range(len(array)-1): #-1 because when only 1 element remains, it will already be sorted
8 | print(array)
9 | minimum = array[i] #We set the minimum element to be the ith element
10 | minimum_index = i #And the minimum index to be the ith index
11 | for j in range(i+1,len(array)): #Then we check the array from the i+1th element to the end
12 | count += 1
13 | if array[j] < minimum: #If an element smaller than the minimum element is found, we re-assign the minimum element and the minimum index
14 | minimum = array[j]
15 | minimum_index = j
16 | if minimum_index != i: #If minimum index has changed, i.e, a smaller element has been found, then we swap that element with the ith element
17 | array[minimum_index], array[i] = array[i], array[minimum_index]
18 | return (f'{array} \nNumber of comparisons = {count}')
19 |
20 | array = [5,9,3,10,45,2,0]
21 | print(selection_sort(array))
22 | '''
23 | [5, 9, 3, 10, 45, 2, 0]
24 | [0, 9, 3, 10, 45, 2, 5]
25 | [0, 2, 3, 10, 45, 9, 5]
26 | [0, 2, 3, 10, 45, 9, 5]
27 | [0, 2, 3, 5, 45, 9, 10]
28 | [0, 2, 3, 5, 9, 45, 10]
29 | [0, 2, 3, 5, 9, 10, 45]
30 | Number of comparisons = 21
31 | '''
32 | sorted_array = [5,6,7,8,9]
33 | print(selection_sort(sorted_array))
34 | """
35 | [5, 6, 7, 8, 9]
36 | [5, 6, 7, 8, 9]
37 | [5, 6, 7, 8, 9]
38 | [5, 6, 7, 8, 9]
39 | [5, 6, 7, 8, 9]
40 | Number of comparisons = 10
41 | """
42 |
43 | reverse_sorted_array = [9,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10]
44 | print(selection_sort(reverse_sorted_array))
45 | '''
46 | [9, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
47 |
48 | [-10, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, 9]
49 |
50 | [-10, -9, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, 8, 9]
51 |
52 | [-10, -9, -8, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, 7, 8, 9]
53 |
54 | [-10, -9, -8, -7, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, 6, 7, 8, 9]
55 |
56 | [-10, -9, -8, -7, -6, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, 5, 6, 7, 8, 9]
57 |
58 | [-10, -9, -8, -7, -6, -5, 3, 2, 1, 0, -1, -2, -3, -4, 4, 5, 6, 7, 8, 9]
59 |
60 | [-10, -9, -8, -7, -6, -5, -4, 2, 1, 0, -1, -2, -3, 3, 4, 5, 6, 7, 8, 9]
61 |
62 | [-10, -9, -8, -7, -6, -5, -4, -3, 1, 0, -1, -2, 2, 3, 4, 5, 6, 7, 8, 9]
63 |
64 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, 0, -1, 1, 2, 3, 4, 5, 6, 7, 8, 9]
65 |
66 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
67 |
68 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
69 |
70 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
71 |
72 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
73 |
74 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
75 |
76 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
77 |
78 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
79 |
80 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
81 |
82 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
83 |
84 | [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
85 |
86 | Number of comparisons = 190
87 | '''
88 |
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Traversals/BFS.py:
--------------------------------------------------------------------------------
1 | #BFS or Breadth First Search is a traversal algorithm for a tree or graph, where we start from the root node (for a tree)
2 | #and visit all the nodes level by level from left to right. It requires us to keep track of the children of each node we visit
3 | #in a queue, so that after traversal through a level is complete, our algorithm knows which node to visit next.
4 | #Time complexity is O(n) but the space complexity can become a problem in some cases.
5 |
6 | #To implement BFS, we'll need a Binary Search Tree, which we have already coded. So we'll use that.
7 |
8 | class Node():
9 | def __init__(self, data):
10 | self.data = data
11 | self.left = None
12 | self.right = None
13 |
14 |
15 | class BST():
16 | def __init__(self):
17 | self.root = None
18 | self.number_of_nodes = 0
19 |
20 |
21 | #For the insert method, we check if the root node is None, then we make the root node point to the new node
22 | #Otherwise, we create a temporary pointer which points to the root node at first.
23 | #Then we compare the data of the new node to the data of the node pointed by the temporary node.
24 | #If it is greater then first we check if the right child of the temporary node exists, if it does, then we update the temporary node to its right child
25 | #Otherwise we make the new node the right child of the temporary node
26 | #And if the new node data is less than the temporary node data, we follow the same procedure as above this time with the left child.
27 | #The complexity is O(log N) in avg case and O(n) in worst case.
28 | def insert(self, data):
29 | new_node = Node(data)
30 | if self.root == None:
31 | self.root = new_node
32 | self.number_of_nodes += 1
33 | return
34 | else:
35 | current_node = self.root
36 | while(current_node.left != new_node) and (current_node.right != new_node):
37 | if new_node.data > current_node.data:
38 | if current_node.right == None:
39 | current_node.right = new_node
40 | else:
41 | current_node = current_node.right
42 | elif new_node.data < current_node.data:
43 | if current_node.left == None:
44 | current_node.left = new_node
45 | else:
46 | current_node = current_node.left
47 | self.number_of_nodes += 1
48 | return
49 |
50 |
51 | #Now we will implement the lookup method.
52 | #It will follow similar logic as to the insert method to reach the correct position.
53 | #Only instead of inserting a new node we will return "Found" if the node pointed by the temporary node contains the same value we are looking for
54 | def search(self,data):
55 | if self.root == None:
56 | return "Tree Is Empty"
57 | else:
58 | current_node = self.root
59 | while True:
60 | if current_node == None:
61 | return "Not Found"
62 | if current_node.data == data:
63 | return "Found"
64 | elif current_node.data > data:
65 | current_node = current_node.left
66 | elif current_node.data < data:
67 | current_node = current_node.right
68 |
69 |
70 | #Finally comes the very complicated remove method.
71 | #This one is too complicated for me to explain while writing. So I'll just write the code down with some comments
72 | #explaining which conditions are being checked
73 | def remove(self, data):
74 | if self.root == None: #Tree is empty
75 | return "Tree Is Empty"
76 | current_node = self.root
77 | parent_node = None
78 | while current_node!=None: #Traversing the tree to reach the desired node or the end of the tree
79 | if current_node.data > data:
80 | parent_node = current_node
81 | current_node = current_node.left
82 | elif current_node.data < data:
83 | parent_node = current_node
84 | current_node = current_node.right
85 | else: #Match is found. Different cases to be checked
86 | #Node has left child only
87 | if current_node.right == None:
88 | if parent_node == None:
89 | self.root = current_node.left
90 | return
91 | else:
92 | if parent_node.data > current_node.data:
93 | parent_node.left = current_node.left
94 | return
95 | else:
96 | parent_node.right = current_node.left
97 | return
98 |
99 | #Node has right child only
100 | elif current_node.left == None:
101 | if parent_node == None:
102 | self.root = current_node.right
103 | return
104 | else:
105 | if parent_node.data > current_node.data:
106 | parent_node.left = current_node.right
107 | return
108 | else:
109 | parent_node.right = current_node.right
110 | return
111 |
112 | #Node has neither left nor right child
113 | elif current_node.left == None and current_node.right == None:
114 | if parent_node == None: #Node to be deleted is root
115 | current_node = None
116 | return
117 | if parent_node.data > current_node.data:
118 | parent_node.left = None
119 | return
120 | else:
121 | parent_node.right = None
122 | return
123 |
124 | #Node has both left and right child
125 | elif current_node.left != None and current_node.right != None:
126 | del_node = current_node.right
127 | del_node_parent = current_node.right
128 | while del_node.left != None: #Loop to reach the leftmost node of the right subtree of the current node
129 | del_node_parent = del_node
130 | del_node = del_node.left
131 | current_node.data = del_node.data #The value to be replaced is copied
132 | if del_node == del_node_parent: #If the node to be deleted is the exact right child of the current node
133 | current_node.right = del_node.right
134 | return
135 | if del_node.right == None: #If the leftmost node of the right subtree of the current node has no right subtree
136 | del_node_parent.left = None
137 | return
138 | else: #If it has a right subtree, we simply link it to the parent of the del_node
139 | del_node_parent.left = del_node.right
140 | return
141 | return "Not Found"
142 |
143 |
144 | #Now we implement the BFS method.
145 | def BFS(self):
146 | current_node = self.root #We start with the root node
147 | if current_node is None: #In case we don't insert anything in tree and then run BFS function
148 | return 'Tree is empty'
149 | else:
150 | BFS_result = [] #This will store the result of the BFS
151 | queue = [] #Queue to keep track of the children of each node
152 | queue.append(current_node) #We add the root to the queue first
153 | #queue = [current_node] #the above two statements can be combined, as per the PEP guidelines
154 | while len(queue) > 0:
155 | current_node = queue.pop(0) #We extract the first element of the queue and make it the current node
156 | BFS_result.append(current_node.data) #We push the data of the current node to the result list as we are currently visiting the current node
157 | if current_node.left: #If left child of the current node exists, we append it to the queue
158 | queue.append(current_node.left)
159 | if current_node.right: #Similarly, if right child exists, we append it to the queue
160 | queue.append(current_node.right)
161 | return BFS_result
162 |
163 | #Finally, we will implement the Recursive version of the BFS.
164 | def Recursive_BFS(self, queue, BFS_list):
165 | if self.root is None: #In case we don't insert anything in tree and then run BFS function
166 | return 'Tree is empty'
167 | if len(queue) == 0:
168 | return BFS_list
169 | current_node = queue.pop(0)
170 | BFS_list.append(current_node.data)
171 | if current_node.left:
172 | queue.append(current_node.left)
173 | if current_node.right:
174 | queue.append(current_node.right)
175 | return self.Recursive_BFS(queue, BFS_list)
176 |
177 |
178 | my_bst = BST()
179 | my_bst.insert(5)
180 | my_bst.insert(3)
181 | my_bst.insert(7)
182 | my_bst.insert(1)
183 | my_bst.insert(13)
184 | my_bst.insert(65)
185 | my_bst.insert(0)
186 | my_bst.insert(10)
187 | '''
188 | 5
189 | 3 7
190 | 1 13
191 | 0 10 65
192 | '''
193 |
194 | #The BFS Traversal for this tree should be : [5,3,7,1,13,0,10,65]
195 |
196 | print(my_bst.BFS())
197 | #[5, 3, 7, 1, 13, 0, 10, 65]
198 |
199 | print(my_bst.Recursive_BFS([my_bst.root],[])) #We need to pass the root node as an array and an empty array for the result
200 | #[5, 3, 7, 1, 13, 0, 10, 65]
201 |
202 |
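#A small aside (a sketch, not from the lesson): list.pop(0) is O(n) because every remaining element
#shifts left, so for large trees the BFS queue is usually a collections.deque, whose popleft() is O(1).
from collections import deque

def bfs_with_deque(root):
    if root is None:
        return 'Tree is empty'
    result = []
    queue = deque([root])
    while queue:
        current_node = queue.popleft()      #O(1) instead of O(n)
        result.append(current_node.data)
        if current_node.left:
            queue.append(current_node.left)
        if current_node.right:
            queue.append(current_node.right)
    return result

print(bfs_with_deque(my_bst.root))
#[5, 3, 7, 1, 13, 0, 10, 65]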
--------------------------------------------------------------------------------
/venv/Scripts/Algorithms/Traversals/DFS.py:
--------------------------------------------------------------------------------
1 | #DFS or Depth First Search is another traversal algorithm.
2 | #In this, we traverse to the depths of the tree/graph until we can't go further, in which case, we go back up and repeat the process for the unvisited nodes
3 | #DFS Traversals can be of 3 types - PreOrder, InOrder, and PostOrder.
4 | #Again , to implement this, we'll need a BST which we have already coded. So we'll use that.
5 |
6 | class Node():
7 | def __init__(self, data):
8 | self.data = data
9 | self.left = None
10 | self.right = None
11 |
12 |
13 | class BST():
14 | def __init__(self):
15 | self.root = None
16 | self.number_of_nodes = 0
17 |
18 |
19 | #For the insert method, we check if the root node is None, then we make the root node point to the new node
20 | #Otherwise, we create a temporary pointer which points to the root node at first.
21 | #Then we compare the data of the new node to the data of the node pointed by the temporary node.
22 | #If it is greater then first we check if the right child of the temporary node exists, if it does, then we update the temporary node to its right child
23 | #Otherwise we make the new node the right child of the temporary node
24 | #And if the new node data is less than the temporary node data, we follow the same procedure as above this time with the left child.
25 | #The complexity is O(log N) in avg case and O(n) in worst case.
26 | def insert(self, data):
27 | new_node = Node(data)
28 | if self.root == None:
29 | self.root = new_node
30 | self.number_of_nodes += 1
31 | return
32 | else:
33 | current_node = self.root
34 | while(current_node.left != new_node) and (current_node.right != new_node):
35 | if new_node.data > current_node.data:
36 | if current_node.right == None:
37 | current_node.right = new_node
38 | else:
39 | current_node = current_node.right
40 | elif new_node.data < current_node.data:
41 | if current_node.left == None:
42 | current_node.left = new_node
43 | else:
44 | current_node = current_node.left
45 | self.number_of_nodes += 1
46 | return
47 |
48 |
49 | #Now we will implement the lookup method.
50 | #It will follow similar logic as to the insert method to reach the correct position.
51 | #Only instead of inserting a new node we will return "Found" if the node pointed by the temporary node contains the same value we are looking for
52 | def search(self,data):
53 | if self.root == None:
54 | return "Tree Is Empty"
55 | else:
56 | current_node = self.root
57 | while True:
58 | if current_node == None:
59 | return "Not Found"
60 | if current_node.data == data:
61 | return "Found"
62 | elif current_node.data > data:
63 | current_node = current_node.left
64 | elif current_node.data < data:
65 | current_node = current_node.right
66 |
67 |
68 | #Finally comes the very complicated remove method.
69 | #This one is too complicated for me to explain while writing. So I'll just write the code down with some comments
70 | #explaining which conditions are being checked
71 | def remove(self, data):
72 | if self.root == None: #Tree is empty
73 | return "Tree Is Empty"
74 | current_node = self.root
75 | parent_node = None
76 | while current_node!=None: #Traversing the tree to reach the desired node or the end of the tree
77 | if current_node.data > data:
78 | parent_node = current_node
79 | current_node = current_node.left
80 | elif current_node.data < data:
81 | parent_node = current_node
82 | current_node = current_node.right
83 | else: #Match is found. Different cases to be checked
84 | #Node has left child only
85 | if current_node.right == None:
86 | if parent_node == None:
87 | self.root = current_node.left
88 | return
89 | else:
90 | if parent_node.data > current_node.data:
91 | parent_node.left = current_node.left
92 | return
93 | else:
94 | parent_node.right = current_node.left
95 | return
96 |
97 | #Node has right child only
98 | elif current_node.left == None:
99 | if parent_node == None:
100 | self.root = current_node.right
101 | return
102 | else:
103 | if parent_node.data > current_node.data:
104 | parent_node.left = current_node.right
105 | return
106 | else:
107 | parent_node.right = current_node.right
108 | return
109 |
110 | #Node has neither left nor right child
111 | elif current_node.left == None and current_node.right == None:
112 | if parent_node == None: #Node to be deleted is root
113 | current_node = None
114 | return
115 | if parent_node.data > current_node.data:
116 | parent_node.left = None
117 | return
118 | else:
119 | parent_node.right = None
120 | return
121 |
122 | #Node has both left and right child
123 | elif current_node.left != None and current_node.right != None:
124 | del_node = current_node.right
125 | del_node_parent = current_node.right
126 | while del_node.left != None: #Loop to reach the leftmost node of the right subtree of the current node
127 | del_node_parent = del_node
128 | del_node = del_node.left
129 | current_node.data = del_node.data #The value to be replaced is copied
130 | if del_node == del_node_parent: #If the node to be deleted is the exact right child of the current node
131 | current_node.right = del_node.right
132 | return
133 | if del_node.right == None: #If the leftmost node of the right subtree of the current node has no right subtree
134 | del_node_parent.left = None
135 | return
136 | else: #If it has a right subtree, we simply link it to the parent of the del_node
137 | del_node_parent.left = del_node.right
138 | return
139 | return "Not Found"
140 |
141 | #Now we'll implement the three kinds of DFS Traversals.
142 | def DFS_Inorder(self):
143 | return inorder_traversal(self.root, [])
144 |
145 | def DFS_Preorder(self):
146 | return preorder_traversal(self.root, [])
147 |
148 | def DFS_Postorder(self):
149 | return postorder_traversal(self.root, [])
150 |
151 |
152 |
153 | def inorder_traversal(node, DFS_list):
154 | if node.left:
155 | inorder_traversal(node.left, DFS_list)
156 | DFS_list.append(node.data)
157 | if node.right:
158 | inorder_traversal(node.right, DFS_list)
159 | return DFS_list
160 |
161 |
162 | def preorder_traversal(node,DFS_list):
163 | DFS_list.append(node.data)
164 | if node.left:
165 | preorder_traversal(node.left, DFS_list)
166 | if node.right:
167 | preorder_traversal(node.right, DFS_list)
168 | return DFS_list
169 |
170 |
171 | def postorder_traversal(node, DFS_list):
172 | if node.left:
173 | postorder_traversal(node.left, DFS_list)
174 | if node.right:
175 | postorder_traversal(node.right, DFS_list)
176 | DFS_list.append(node.data)
177 | return DFS_list
178 |
179 |
180 | my_bst = BST()
181 | my_bst.insert(5)
182 | my_bst.insert(3)
183 | my_bst.insert(7)
184 | my_bst.insert(1)
185 | my_bst.insert(13)
186 | my_bst.insert(65)
187 | my_bst.insert(0)
188 | my_bst.insert(10)
189 | '''
190 | 5
191 | 3 7
192 | 1 13
193 | 0 10 65
194 | '''
195 |
196 | #Inorder traversal for this tree : [0, 1, 3, 5, 7, 10, 13, 65]
197 | #Preorder Traversal for this tree : [5, 3, 1, 0, 7, 13, 10, 65]
198 | #Postorder Traversal for this tree : [0, 1, 3, 10, 65, 13, 7, 5]
199 |
200 |
201 | print(my_bst.DFS_Inorder())
202 | #[0, 1, 3, 5, 7, 10, 13, 65]
203 |
204 | print(my_bst.DFS_Preorder())
205 | #[5, 3, 1, 0, 7, 13, 10, 65]
206 |
207 | print(my_bst.DFS_Postorder())
208 | #[0, 1, 3, 10, 65, 13, 7, 5]
209 |
210 |
--------------------------------------------------------------------------------
/venv/Scripts/Big-O/O(1).py:
--------------------------------------------------------------------------------
1 | #O(1) - Constant Time
2 | #The number of operations does not depend on the size of the input and is always constant.
3 | import time
4 |
5 | array_small = ['nemo' for i in range(10)]
6 | array_medium = ['nemo' for i in range(100)]
7 | array_large = ['nemo' for i in range(100000)]
8 |
9 | def finding_nemo(array):
10 | t0 = time.time()
11 | print(array[0]) #O(1) operation
12 | print(array[1]) #O(1) operation
13 | t1 = time.time()
14 | print(f'Time taken = {t1-t0}')
15 |
16 | finding_nemo(array_small)
17 | finding_nemo(array_medium)
18 | finding_nemo(array_large)
19 |
20 | #Time taken in all 3 cases would be roughly 0.0 seconds because we are only extracting the first and second elements of the arrays.
21 | #We are not looping over the entire array.
22 | #We are performing two O(1) operations, which amounts to O(2)
23 | #Any constant number can be considered as 1. Therefore we can say this function is of O(1) - Constant Time Complexity.
24 |
25 |
--------------------------------------------------------------------------------
/venv/Scripts/Big-O/O(m + n).py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 | large1 = ['nemo' for i in range(100000)]
4 | large2 = ['nemo' for i in range(100000)]
5 |
6 | def find_nemo(array1, array2):
7 |
8 | #Here there are two different variables array1 and array2.
9 | #They have to be represented by 2 different variables in the Big-O representation as well.
10 | #Let array1 correspond to m and array2 correspond to n
11 |
12 | t0 = time.time() #O(1)
13 | for i in range(0,len(array1)): #O(m)
14 | if array1[i] == 'nemo': #m*O(1)
15 | print("Found Nemo!!") #k1*O(1) where k1 <= m because this statement will be executed only if the if statement returns True, which can be k1(<=m) times
16 | t1 = time.time() #O(1)
17 | print(f'The search took {t1-t0} seconds.') #O(1)
18 |
19 | t0 = time.time() #O(1)
20 | for i in range(0, len(array2)): #O(n)
21 | if array2[i] == 'nemo': #n*O(1)
22 | print("Found Nemo!!") #k2*O(1) where k2 <= m because this statement will be executed only if the if statement returns True, which can be k2(<=m) times
23 | t1 = time.time() #O(1)
24 | print(f'The search took {t1 - t0} seconds.') #O(1)
25 |
26 | find_nemo(large1, large2)
27 |
28 | #Total time complexity of the find_nemo function =
29 | #O(1 + m + m*1 + k1*1 + 1 + 1 + 1 + n + n*1 + k2*1 + 1 + 1) = O(6 + 2m + 2n + k1 + k2)
30 | #Now k1<=m and k2<=n. In the worst case, k1 can be m and k2 can be n. We'll consider the worst case and calculate the Big-O
31 | #O(6 + 2m + 2n + m + n) = O(3m + 3n + 6) = O(3(m + n + 2))
32 | #The constants can be safely ignored.
33 | #Therefore, O(m + n + 2) = O(m + n)
34 |
--------------------------------------------------------------------------------
/venv/Scripts/Big-O/O(m x n).py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 | array1 = ['a','b','c','d','e']
4 | array2 = [1,2,3,4,5]
5 |
6 | def pairs(array1, array2):
7 |
8 | # Here there are two different variables array1 and array2.
9 | # They have to be represented by 2 different variables in the Big-O representation as well.
10 | # Let array1 correspond to m and array2 correspond to n
11 |
12 | for i in range(len(array1)): #n*O(m)
13 | for j in range(len(array2)): #m*O(n)
14 | print(array1[i],array2[j]) #m*n*O(1)
15 |
16 | pairs(array1,array2)
17 |
18 | #Total time complexity of the pairs function =
19 | #O(n*m + m*n + m*n*1) = O(3*m*n)
20 | #The constants can be safely ignored.
21 | #Therefore, O(m * n * 3) = O(m * n)
22 |
23 |
--------------------------------------------------------------------------------
/venv/Scripts/Big-O/O(n).py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 | nemo = ['nemo']
4 | everyone = ['dory', 'bruce', 'marlin', 'nemo', 'gill', 'bloat', 'nigel', 'squirt', 'darla']
5 | large = ['nemo' for i in range(100000)]
6 | def find_nemo(array):
7 | t0 = time.time()
8 | for i in range(0,len(array)):
9 | if array[i] == 'nemo':
10 | print("Found Nemo!!")
11 | t1 = time.time()
12 | print(f'The search took {t1-t0} seconds.')
13 | find_nemo(nemo)
14 | find_nemo(everyone)
15 | find_nemo(large)
16 |
17 |
18 | def funchallenge(input):
19 | temp = 10 #O(1)
20 | temp = temp + 50 #O(1)
21 | for i in range(len(input)): #O(n)
22 | var = True #n*O(1)
23 | temp += 1 #n*O(1) - incrementing the already initialized temp; the original a += 1 would fail because a was never defined
24 | return temp #O(1)
25 |
26 | funchallenge(nemo)
27 | funchallenge(everyone)
28 | funchallenge(large)
29 |
30 | #Total running time of the funchallenge function =
31 | #O(1 + 1 + n + n*1 + n*1 + 1) = O(3n + 3) = O(3(n+1))
32 | #Any constant in the Big-O representation can be replaced by 1, as it doesn't really matter what the constant is.
33 | #Therefore, O(3(n+1)) becomes O(n+1)
34 | #Similarly, any constant added to, subtracted from, multiplied with or divided into n can also be safely written as just n
35 | #This is because that constant doesn't depend on n, i.e., on the input size
36 | #Therefore, the funchallenge function can be said to be of O(n) or Linear Time Complexity.
--------------------------------------------------------------------------------
/venv/Scripts/Big-O/O(n^2).py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 | array = ['a','b','c','d','e']
4 |
5 | def log_all_pairs(array):
6 |
7 | #There are nested for loops in this function but there is only one variable array. So we don't need two variables for the Big-O
8 |
9 | for i in range(len(array)): #n*O(n)
10 | for j in range(len(array)): #n*O(n)
11 | print(array[i], array[j]) #n*n*O(1)
12 |
13 | log_all_pairs(array)
14 |
15 | #Total time complexity of the log_all_pairs function =
16 | #O(n*n + n*n + n*n*1) = O(3n^2)
17 | #The constants can be safely ignored.
18 | #Therefore, O(3n^2) = O(n^2)
19 |
20 | new_array = [1,2,3,4,5]
21 | def print_numbers_then_pairs(array):
22 |
23 | #There are a total of three loops here but only one variable. So we need only one variable for our Big-O notation
24 |
25 | print("The numbers are : ") #O(1)
26 | for i in range(len(array)): #O(n)
27 | print(array[i]) #n*O(1)
28 |
29 | print("The pairs are :") #O(1)
30 | for i in range(len(array)): #n*O(n)
31 | for j in range(len(array)): #n*O(n)
32 | print(array[i], array[j]) #n*n*O(1)
33 |
34 | print_numbers_then_pairs(new_array)
35 |
36 | #Total time complexity of the print_numbers_then_pairs function =
37 | #O(1 + n + n*1 + 1 + n*n + n*n + n*n*1) = O(3n^2 + 2n + 2)
38 | #Now, Big-O describes the scalability of the code, i.e., how the code will behave as the inputs grow larger and larger
39 | #So if the expression contains terms of different degrees and the size of the input is huge, the lower-degree terms become negligible in comparison to the higher-degree ones
40 | #Therefore, we can ignore the lower-degree terms and only keep the highest-degree term
41 | #O(3n^2 + 2n + 2) = O(3n^2)
42 | #The constants can be safely ignored.
43 | #Therefore, O(3n^2) = O(n^2)
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Contains_Duplicate.py:
--------------------------------------------------------------------------------
1 | #Given an array of integers, find if the array contains any duplicates.
2 | #Your function should return true if any value appears at least twice in the array, and it should return false if every element is distinct.
3 | #Example 1:
4 | #Input: [1,2,3,1]
5 | #Output: true
6 | #Example 2:
7 | #Input: [1,2,3,4]
8 | #Output: false
9 |
10 | #As usual we'll get the naive approach out of the way first.
11 |
12 | def brute_force_duplicate_search(array):
13 | for i in range(len(array)-1):
14 | for j in range(i+1,len(array)):
15 | if array[i] == array[j]:
16 | return True
17 | return False
18 |
19 | array = [1,2,46,32,98,61,34,46]
20 | print(brute_force_duplicate_search(array))
21 |
22 | #This is pretty simple, as we go through every possible pair of elements to check if they are the same.
23 | #If we find a pair having the same elements we return True, else we return False
24 | #Time Complexity - O(n^2)
25 |
26 | #A slightly better solution can be :
27 | #First we sort the array using Python's O(n log n) built-in sort.
28 | #Then we loop through the array once to check if any two consecutive elements are the same, which is O(n).
29 | #So the overall complexity will be O(n log n)
30 |
31 | def better_duplicate_search(array):
32 | array.sort()
33 | for i in range(len(array)-1):
34 | if array[i] == array[i+1]:
35 | return True
36 | return False
37 |
38 | print(better_duplicate_search(array))
39 |
40 | #An even better solution can be using a dictionary.
41 | #As we loop through the array, we'll check first if the current element is present in the dictionary
42 | #If yes, we return True
43 | #If no, we add the element to the dictionary.
44 | #Since looking up in a dictionary is O(1) time, overall complexity would be O(n)
45 |
46 | def smart_duplicate_search(array):
47 | dictionary = dict()
48 | if len(array)<2:
49 | return False
50 | else:
51 | for i in range(len(array)):
52 | if array[i] in dictionary:
53 | return True
54 | else:
55 | dictionary[array[i]] = True
56 | return False
57 |
58 | print(smart_duplicate_search(array))
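#As a side note, Python's built-in set gives an equally compact O(n) check (a quick sketch, not part of the course code):

def set_duplicate_search(array):
    #A set keeps only unique elements, so if its size is smaller than the array's, a duplicate must exist
    return len(set(array)) != len(array)

print(set_duplicate_search(array))
#True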
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Implementation.py:
--------------------------------------------------------------------------------
1 | #Although arrays are pre-defined in Python in the form of lists, we can implement our own arrays.
2 | #Here, we will implement our own array with some common methods such as access, push, pop, insert, delete
3 |
4 | class MyArray():
5 | def __init__(self):
6 | self.length = 0 #We initialize the array's length to be zero
7 | self.data = {} #We initialize the data of the array using an empty dictionary. The keys will correspond to the index and the values to the data
8 |
9 | #The attributes of an instance are stored in a dictionary by default.
10 | #Accessing __dict__ on an instance returns its attributes along with their values in dictionary format
11 | #Now, when an instance of a class is printed, Python normally just shows the class name and its location in memory.
12 | #But we expect print(array) to show the elements of the array as output
13 | #Since print calls the built-in __str__ method of the instance, we can override __str__ inside the class
14 | #to suit our needs.
15 | def __str__(self):
16 | return str(self.__dict__) #This will print the attributes of the array class (length and data) in string format when print(array_instance) is executed
17 |
18 | def get(self, index):
19 | return self.data[index] #This method takes in the index of the element as a parameter and returns the corresponding element in O(1) time.
20 |
21 | def push(self, item):
22 | self.length += 1
23 | self.data[self.length - 1] = item #Adds the item provided to the end of the array
24 |
25 | def pop(self):
26 | last_item = self.data[self.length-1] #Collects the last element
27 | del self.data[self.length - 1] #Deletes the last element from the array
28 | self.length -= 1 #Decrements the length attribute of the array by 1
29 | return last_item #Returns the popped element. O(1) time
30 |
31 | def insert(self, index, item):
32 | self.length += 1
33 | for i in range(self.length-1, index, -1):
34 | self.data[i] = self.data[i-1] #Shifts every element from the index to the end by one place towards right. Thus making space at the specified index
35 | self.data[index] = item #Adds the element at the given index. O(n) operation
36 |
37 |
38 | def delete(self,index):
39 | for i in range(index, self.length-1):
40 | self.data[i] = self.data[i+1] #Shifts elements from the given index to the end by one place towards left
41 | del self.data[self.length - 1] #The last element, which is now present twice in the array, is deleted
42 | self.length -= 1 #The length is decremented by 1. O(n) operation overall
43 |
44 |
45 |
46 | arr = MyArray()
47 | arr.push(6)
48 | #{'length': 1, 'data': {0: 6}}
49 |
50 | arr.push(2)
51 | #{'length': 2, 'data': {0: 6, 1: 2}}
52 |
53 | arr.push(9)
54 | #{'length': 3, 'data': {0: 6, 1: 2, 2: 9}}
55 |
56 | arr.pop()
57 | #{'length': 2, 'data': {0: 6, 1: 2}}
58 |
59 | arr.push(45)
60 | arr.push(12)
61 | arr.push(67)
62 | #{'length': 5, 'data': {0: 6, 1: 2, 2: 45, 3: 12, 4: 67}}
63 |
64 | arr.insert(3,10)
65 | #{'length': 6, 'data': {0: 6, 1: 2, 2: 45, 3: 10, 4: 12, 5: 67}}
66 |
67 | arr.delete(4)
68 | #{'length': 5, 'data': {0: 6, 1: 2, 2: 45, 3: 10, 4: 67}}
69 |
70 | print(arr.get(1))
71 | #2
72 |
73 | print(arr)
74 | #The outputs given after each function call are the outputs obtained by calling print(arr) and not by the function calls themselves
75 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Introduction.py:
--------------------------------------------------------------------------------
1 | #Arrays are one of the most commonly-used data structures
2 | #The elements of an array are stored in contiguous memory locations
3 | #Arrays are of two types : Static and Dynamic
4 | #Static arrays have fixed, pre-defined amount of memory that they can use, whereas in dynamic arrays this is flexible
5 | #In Python we only have dynamic arrays
6 | #Some basic operations and their complexities are given below :
7 |
8 | #Look-up/Access - O(1)
9 | #Push/Pop - O(1)*
10 | #Insert - O(n)
11 | #Delete - O(n)
12 |
13 | array = [5,8,2,9,17,43,25,10]
14 |
15 | #Look-up/Access
16 | #Any element of an array can be accessed by its index.
17 | #We just need to ask for the particular index of the element we are interested in and we will get the element in constant time
18 | first_element = array[0] #This will return the first element of the array, in this case, 5, in O(1) time
19 | sixth_element = array[5] #sixth-element = 43 Again, in O(1) time
20 |
21 |
22 | #Push/Pop
23 | #Push corresponds to pushing or adding an element at the end of the array.
24 | #Similarly, pop corresponds to removing the element at the end of the array.
25 | #Since the index of the end of the array is known, finding it and pushing or popping an element will only require O(1) time
26 | array.append(87) #Adds 87 at the end of the array in O(1) time
27 |
28 | #In some special cases, the append(push) operation may take longer. This is because, as mentioned earlier, Python has dynamic arrays
29 | #So when an element is to be appended and the array is already full, the entire array has to be copied to a new location
30 | #with more space allocated (generally double the space) so that more items can be appended.
31 | #Therefore, some individual operations may require O(n) time or more, but when averaged over a large number of operations,
32 | #The complexity can be safely considered to be O(1)
33 |
34 | array.pop() #Pops/removes the element at the end of the array in O(1) time.
35 |
36 | print(array)
37 |
38 |
39 | #Insert
40 | #Insert operation inserts an element at the beginning of the array, or at any location specified.
41 | #This is an O(n) operation since after inserting the element at the desired location,
42 | #the elements to its right have to be updated with the correct index as they have all shifted by one place.
43 | #This requires looping through the array. Hence, O(n) time.
44 | array.insert(0,50) #Inserts 50 at the beginning of the array and shifts all other elements one place towards right. O(n)
45 | array.insert(4,0) #inserts '0' at index '4', thus shifting all elements starting from index 4 one place towards right. O(n)
46 |
47 | print(array)
48 |
49 |
50 | #Delete
51 | #Similar to insert, it deletes an element from the specified location in O(n) time
52 | #The elements to the right of the deleted element have to be shifted one place left, which requires looping over the array
53 | #Hence, O(n) time complexity
54 | array.pop(0) #This pops the first element of the array, shifting the remaining elements of the array one place left. O(n)
55 | print(array)
56 | array.remove(17) #This command removes the first occurrence of the element 17 in the array, for which it may need to traverse the entire array, requiring O(n) time
57 | print(array)
58 | del array[2:4] #This command deletes the elements at positions 2 and 3 (the end index is exclusive), again in O(n) time
59 | print(array)
60 |
61 |
62 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Longest_Word.py:
--------------------------------------------------------------------------------
1 | #Find the longest word in a given string
2 | #Examples
3 | #Input: "fun&!! time"
4 | #Output: time
5 |
6 | #The simplest and easiest solution that comes to mind is :
7 | #We check for every character if it is an alphanumeric character or not
8 | #If it is, we increase a counter and update a variable which stores the maximum value of counter
9 | #If we encounter a non-alphanumeric character, we reset the counter to zero and start again when the next alphanumeric character arrives
10 |
11 | def easy_longest_word(string):
12 | count = 0
13 | maximum = 0
14 | for char in string:
15 | if char.isalnum():
16 | count += 1
17 | else:
18 | maximum = max(maximum, count)
19 | count = 0
20 | maximum = max(maximum, count)
21 | return maximum
22 |
23 | string = 'fun!@#$# times'
24 | print(easy_longest_word(string))
25 |
26 | #This prints the length of the longest word, but after writing this function I realized we have to print the word as well 😂
27 | #We can do that using the same logic as above. We just have to create two new arrays:
28 | #one to hold all the words and another to hold the current word. Then we'll find the word with the maximum length and print that
29 |
30 | def naive_longest_word(string):
31 | count = 0
32 | maximum = 0
33 | words = []
34 | word = []
35 | for char in string:
36 | if char.isalnum():
37 | count += 1
38 | word.append(char)
39 | else:
40 | if word not in words and word:
41 | words.append(''.join(word))
42 | print(words)
43 | print(word)
44 | word = []
45 | maximum = max(maximum, count)
46 | count = 0
47 | maximum = max(maximum, count)
48 | if word not in words and word:
49 | words.append(''.join(word))
50 | print(words)
51 | print(word)
52 | for item in words:
53 | if len(item) == maximum:
54 | return item
55 |
56 | print(naive_longest_word(string))
57 | #As can be seen, this has become a pretty complicated solution.
58 | #We loop over every character and check if it is an alphanumeric character.
59 | #If yes, we increase count by 1 and append the character to the word list.
60 | #If not, we first check whether the word we have accumulated so far is already in the words list.
61 | #If it isn't, we convert the list word into a string using the join method and add the string to the words list
62 | #If it is, we ignore it. This is done so that the same word is not added more than once to the words list
63 | #Then we reset word to an empty list in anticipation of the next word and reset count to 0.
64 | #This way, by the end of the loop, words contains all the words in the string except for the last one, which we add manually after the for loop
65 | #Finally, we check which word's length is equal to maximum, which has been keeping track of the length of the longest word
66 | #And we return the longest word, albeit only its first occurrence if there is more than one word with the maximum length.
67 |
68 | #The complexity is bad on all fronts. There is a join function used inside a for loop.
69 | #Complexity of .join(string) is O(len(string)). So overall time complexity is O(mn)
70 | #Also, two new arrays are created. So space complexity = O(m + n)
71 |
72 |
73 | #A different method to solve this problem can be using Regex,or Regular Expressions
74 | #First we split the string into groups of alphanumeric characters
75 | #Then we find the maximum length among all the words
76 | #Finally we find the word corresponding to the maximum length
77 |
78 | import re
79 |
80 | def regex(string):
81 | string = re.findall(r'\w+', string) #Using a raw string avoids the invalid escape sequence warning for \w
82 | maximum = max([len(item) for item in string])
83 | for item in string:
84 | if len(item) == maximum:
85 | return item
86 | sss = "Hello there how are you"
87 | print(regex(sss))
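#For reference, the same idea can be compressed further with max and key=len (a small sketch, not part of the course code)
#Like the regex solution above, \w also matches digits and underscores

def one_liner_longest_word(s):
    return max(re.findall(r'\w+', s), key=len) #max picks the longest of the alphanumeric groups

print(one_liner_longest_word('fun!@#$# times'))
#times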
88 |
89 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Maximum_SubArray.py:
--------------------------------------------------------------------------------
1 | #Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum
2 | #and return its sum.
3 | #Example:
4 | #Input: [-2,1,-3,4,-1,2,1,-5,4],
5 | #Output: 6
6 | #Explanation: [4,-1,2,1] has the largest sum = 6.
7 |
8 |
9 | #The first solution that comes to mind, as always, is the brute force solution.
10 | #Here we will extract the sum of every possible subarray of the given array and return the maximum value.
11 | #We can predict right off the bat that it is not going to be an optimal solution, but it is going to be a solution nonetheless.
12 | #So let's try it.
13 |
14 | def brute_force_max_subarray(array):
15 | if len(array)==0:
16 | return None
17 | maximum = array[0] #Initializing with the first element rather than 0 keeps the result correct even when all the elements are negative
18 | for i in range(len(array)):
19 | cum_sum = 0
20 | for j in range(i,len(array)):
21 | cum_sum += array[j]
22 | maximum = max(maximum, cum_sum)
23 | return maximum
24 |
25 | array = [-2,1,-3,4,-1,2,1,-5,4]
26 | print(brute_force_max_subarray(array))
27 |
28 | #What's happening here is each iteration of the outer loop is giving us all the possible subarrays starting from the ith index.
29 | #For example, in the first iteration we get all the subarrays starting with the first element and so on.
30 | #Now with the second for loop we are building a subarray starting from the ith index to the last index
31 | #In the process we are cumulating the sum at every iteration which is giving us the sum of the subarray from the ith index to the jth index
32 | #Then we are checking if the cumulative sum is greater than all other cumulative sums we have found so far and storing the greatest sum in "maximum"
33 | #Finally we return maximum which contains the greatest sum of any subarray in the array.
34 | #Since we are looping through two nested for loops the time complexity is O(n^2)
35 |
36 |
37 | #There's a much faster way to solve this though, and that is by using Kadane's algorithm.
38 | #In this, we loop over the array and at every iteration we compute the maximum subarray sum ending at that index.
39 | #Now the maximum subarray ending at a particular index can be either of two things:
40 | #1. Sum of the maximum subarray ending at the previous index + the element at the current index, or
41 | #2. The current element only.
42 | #So lets implement this algorithm.
43 |
44 | def kadane(array):
45 | maximum = maxarray = array[0]
46 | for i in range(1,len(array)):
47 | maxarray = max(array[i], maxarray+array[i])
48 | maximum = max(maxarray, maximum)
49 | return maximum
50 |
51 | print(kadane(array))
52 |
53 | #We set the variables maximum and maxarray to the value of the first element of the array.
54 | #Then we loop over the entire array from the first index.
55 | #At every iteration, we update the value of maxarray to be the maximum among the current element
56 | #and the sum of the maxarray ending at the previous index and the current element
57 | #This way, maxarray stores the maximum subarray ending at index i
58 | #And the variable maximum stores the maximum among all the maxarrays ending at every index, effectively storing the global maximum
59 | #Since this requires only one for loop, the time complexity is an efficient O(n)!
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Merging_sorted_arrays.py:
--------------------------------------------------------------------------------
1 | #Given two sorted arrays, we need to merge them and create one big sorted array.
2 | #For example, array1 = [1,3,5,7], array2 = [2,4,6,8]
3 | #The result should be array = [1,2,3,4,5,6,7,8]
4 |
5 | #One solution can be : we compare the corresponding elements of both arrays
6 | #We add the smaller element to a new array and increment the index of the array from which the element was added.
7 | #Again we compare the elements of both arrays and repeat the procedure until all the elements have been added.
8 |
9 | def merge(array1, array2):
10 | new_array = []
11 | flag = 0
12 | first_array_index = second_array_index = 0
13 | while not (first_array_index>=len(array1) or second_array_index>=len(array2)): #The loop runs until we reach the end of either of the arrays
14 | if array1[first_array_index] <= array2[second_array_index]:
15 | new_array.append(array1[first_array_index])
16 | first_array_index += 1
17 | else:
18 | new_array.append(array2[second_array_index])
19 | second_array_index += 1
20 |
21 | if first_array_index==len(array1): #When the loop finishes, we need to know which array's end was reached, so that the remaining elements of the other array can be appended to the new array
22 | flag = 1 #This flag will tell us if we reached the end of the first array or the second array
23 |
24 | if flag == 1: #If the end of the first array was reached, the remaining elements of the second array are added to the new array
25 | for item in array2[second_array_index:]:
26 | new_array.append(item)
27 | else: #And if the end of the second array was reached, the remaining elements of the first array are added to the new array
28 | for item in array1[first_array_index:]:
29 | new_array.append(item)
30 |
31 | return new_array
32 |
33 | array1 = [1,3,5,7]
34 | array2 = [2,4,6,8,10,12]
35 | print(merge(array1,array2))
36 | #[1, 2, 3, 4, 5, 6, 7, 8, 10, 12]
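#For comparison, the standard library can merge already-sorted inputs in O(m + n) time as well (an extra illustration, not part of the course code):

import heapq

def merge_with_heapq(array1, array2):
    return list(heapq.merge(array1, array2)) #heapq.merge lazily yields the elements of the sorted inputs in order

print(merge_with_heapq([1,3,5,7], [2,4,6,8,10,12]))
#[1, 2, 3, 4, 5, 6, 7, 8, 10, 12]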
37 |
38 |
39 |
40 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Move_Zeroes.py:
--------------------------------------------------------------------------------
1 | #Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements.
2 | #Example:
3 | #Input: [0,1,0,3,12]
4 | #Output: [1,3,12,0,0]
5 |
6 | #The first solution that comes to mind is we can traverse the original length of the array and for every zero we find,
7 | #We append a 0 at the end of the array.
8 | #Then we again traverse the array, only for the original length of the array,
9 | #And we pop every 0 that we find, thus removing every 0 from the original part of the array and moving them all to the end.
10 | #This seems inefficient as a pop operation anywhere apart from the end of the array costs O(n) time
11 | #And we have to loop over the entire array, so it becomes O(n^2), but let's see.
12 |
13 | def naive_zero_mover(array):
14 | l = len(array)
15 | for i in range(l):
16 | if array[i] == 0:
17 | array.append(0) #We have appended 0's at the end for every 0 in the original array
18 | j = 0
19 | c = 0
20 | while c < l: #We run the while loop only for l times, i.e., the original length of the array
21 | if array[j]!=0:
22 | j += 1 #If the element is non-zero we increment j by 1, which keeps track of every index of the array upto l
23 | else:
24 | array.pop(j) #If we find a 0, we pop it and do not increase j, as all the later elements have shifted left and the next element is now at index j
25 | c += 1
26 | return array
27 |
28 | array = [0,0,0,0,1,0,3,0,0,0,12,9,7]
29 | print(naive_zero_mover(array))
30 |
31 |
32 | #A far better solution can be swapping every non-zero element we find with the first un-swapped zero
33 |
34 | def swap_move(array):
35 | z = 0
36 | for i in range(len(array)):
37 | if array[i] != 0:
38 | array[i], array[z] = array[z], array[i]
39 | z += 1
40 | return array
41 | print(swap_move(array))
42 | #In this solution, we traverse the array and swap every non-zero element with itself until we find a 0
43 | #Then we swap the next non-zero element with the 0 and we keep doing this until we have looped over the entire array.
44 | #This seems like a cleaner solution than the first one, but there are still lots of unnecessary swaps going on here.
45 | #Still, we are looping over the array once and the swapping is done in constant time, so overall time complexity is O(n)
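#One way to avoid those redundant self-swaps is to copy the non-zero elements forward and then fill the tail with zeroes
#(a small sketch, not part of the course code; still O(n) time and in place)

def fill_move(array):
    write = 0
    for value in array:
        if value != 0:
            array[write] = value #Every non-zero element is written to the next free position at the front
            write += 1
    for i in range(write, len(array)):
        array[i] = 0 #Whatever is left after the last non-zero element is overwritten with zeroes
    return array

print(fill_move([0,1,0,3,12]))
#[1, 3, 12, 0, 0]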
46 |
47 |
48 | #A very elegant solution to this problem can be the following one-liner :
49 |
50 | def one_liner_move(array):
51 | array.sort(key=bool, reverse=True)
52 | return array
53 |
54 | print(one_liner_move(array))
55 |
56 | #What this does is sort the array in place using bool as the key.
57 | #The integer 0 is treated as boolean False and every other integer as boolean True
58 | #So by providing bool as the key, we are telling the sort method to sort the array on the basis of these boolean values
59 | #With an ascending sort the 0's (False) would come first and the non-zero numbers (True) next, in their original order.
60 | #reverse=True sorts in descending key order instead, so the non-zero numbers come first and the 0's all end up at the end.
61 | #Python's sort is stable even with reverse=True, so the non-zero numbers keep their original relative order.
62 | #The complexity for this is O(n log n), as that is the complexity of Python's built-in sort method
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Reversing_String.py:
--------------------------------------------------------------------------------
1 | # A string is given. We have to print the reversed string.
2 | # For example, the string is "Hi how are you?"
3 | # The output should be "?ouy era woh iH"
4 |
5 | #The first solution that comes to mind is we can create a new array and append the characters of the original array,
6 | #one by one from the end to the beginning.
7 |
8 | def simple_reverse(string):
9 | new_string = []
10 | for i in range(len(string)-1, -1, -1): #The for loop runs from the last element to the first element of the original string
11 | new_string.append(string[i]) #The characters of the original string are added to the new string
12 | return ''.join(new_string) #The characters of the reversed array are joined to form a string
13 |
14 | string = "Hello"
15 | print(simple_reverse(string))
16 | #Since we only have to traverse the string once, the time complexity is O(n)
17 | #But since we are also creating a new array of the same size , the space complexity is also O(n)
18 |
19 |
20 | #Another way to do this can be taking a pair of elements from either end of the string and swapping them, starting at both ends and continuing till we reach the middle
21 | #But strings in Python are immutable, so the swap function below rebuilds the string (list + join) on every call
22 | #That makes each swap O(n), so with n/2 swaps this version is actually O(n^2) in time, with O(n) extra space per swap; converting to a list once and swapping in place would keep it at O(n)
23 |
24 | def swap(string, a, b): #Function which swaps two characters of a string
25 | string = list(string)
26 | temp = string[a]
27 | string[a] = string[b]
28 | string[b] = temp
29 | return ''.join(string)
30 |
31 | def smarter_reverse(string):
32 | for i in range(len(string)//2):
33 | string = swap(string, i, len(string)-i-1)
34 | return string
35 |
36 | print(smarter_reverse(string))
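#Converting the string to a list only once keeps the pair-swapping approach at O(n) time (a small sketch, not part of the course code):

def in_place_reverse(string):
    chars = list(string) #Lists are mutable, so the characters can be swapped in place
    left, right = 0, len(chars) - 1
    while left < right:
        chars[left], chars[right] = chars[right], chars[left]
        left += 1
        right -= 1
    return ''.join(chars)

print(in_place_reverse("Hello"))
#olleH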
37 |
38 |
39 | #Apart from these, some built-in functions that can be used to reverse a string are as follows:
40 |
41 | string1 = 'abcde'
42 | string2 = reversed(string1)
43 | print(''.join(string2))
44 |
45 | list1 = list(string1)
46 | list1.reverse()
47 | print(''.join(list1))
48 |
49 | #Both these methods are of O(n) time complexity
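#Apart from these, the most common Python idiom for reversing a sequence is slicing with a step of -1, which is also O(n):
print(string1[::-1])
#edcba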
50 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Arrays/Rotate_Array.py:
--------------------------------------------------------------------------------
1 | #Given an array, rotate the array to the right by k steps, where k is non-negative.
2 | #Example 1:
3 | #Input: nums = [1,2,3,4,5,6,7], k = 3
4 | #Output: [5,6,7,1,2,3,4]
5 |
6 | #The instant solution for this that comes to mind is :
7 | #We create a new array and initialize the first k elements of the new array with the last k elements of the original array.
8 | #Then we fill in the remaining elements
9 | #The time complexity is O(n) and the space complexity is also O(n)
10 |
11 | def naive_rotation(array, k):
12 | new_array = []
13 | for i in range(k%len(array)):
14 | new_array.append(array[len(array) - k%len(array) + i]) #Using k modulo the array length keeps the index valid even when k is larger than the array
15 | for i in range(len(array)-k%len(array)):
16 | new_array.append(array[i])
17 | return new_array
18 |
19 | array = [1,2,3,4,5,6,7,8,9]
20 | k = 11
21 | print(naive_rotation(array,k))
22 |
23 |
24 | #Another inefficient but correct approach can be the brute force approach where we rotate the array by 1 element in each traversal of the array
25 | #This way we won't have to use another array, so we'll save on space complexity
26 | #But the time complexity would be O(n*k) as we'll have to rotate the array k times and for each rotation, we need to traverse the entire array
27 |
28 | def brute_force_rotation(array, k):
29 | for i in range(k):
30 | temp = array[-1]
31 | for i in range(len(array)-1,0,-1):
32 | array[i] = array[i-1]
33 | array[0] = temp
34 | return array
35 |
36 | print(brute_force_rotation(array, k))
37 |
38 |
39 | #A better solution can be using the Reversal Algorithm
40 | #In this, we first reverse the entire array, then we reverse the first k elements, followed by reversing the last n-k elements
41 | #Since, the time complexity of reversing is O(n)
42 | #Therefore, overall time complexity for this algorithm would be O(3n) which is equal to O(n), with no extra space required
43 |
44 | def reverse(array, start, end): #Function to reverse the elements of array from index start to index end
45 | while start < end: #Two-pointer swap: exchange the elements at the two ends and move both pointers towards the middle
46 | array[start], array[end] = array[end], array[start]
47 | start += 1
48 | end -= 1
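#A small sketch of how the reversal algorithm described above can be put together, assuming the reverse helper defined above:

def reversal_rotation(array, k):
    k = k % len(array) #Rotating by the array's length leaves it unchanged, so only k modulo the length matters
    reverse(array, 0, len(array)-1) #Reverse the whole array
    reverse(array, 0, k-1) #Reverse the first k elements
    reverse(array, k, len(array)-1) #Reverse the remaining n-k elements
    return array

print(reversal_rotation([1,2,3,4,5,6,7], 3))
#[5, 6, 7, 1, 2, 3, 4]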
43 |
44 |
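#The Graph class used below stores the graph as an adjacency list; a minimal sketch consistent with the calls that follow
#(an assumed shape, kept deliberately small):

class Graph():
    def __init__(self):
        self.number_of_nodes = 0
        self.adjacency_list = {} #Maps every node to the list of nodes it is connected to

    def insert_node(self, node):
        self.adjacency_list[node] = [] #A new node starts with no connections
        self.number_of_nodes += 1

    def insert_edge(self, node1, node2): #The graph is undirected, so each node is added to the other's list
        self.adjacency_list[node1].append(node2)
        self.adjacency_list[node2].append(node1)

    def show_connections(self):
        for node in self.adjacency_list:
            print(f'{node} -->> {" ".join(map(str, self.adjacency_list[node]))}')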
45 | my_graph = Graph()
46 | my_graph.insert_node(1)
47 | my_graph.insert_node(2)
48 | my_graph.insert_node(3)
49 | my_graph.insert_edge(1,2)
50 | my_graph.insert_edge(1,3)
51 | my_graph.insert_edge(2,3)
52 | my_graph.show_connections()
53 |
54 | """
55 | 1 -->> 2 3
56 | 2 -->> 1 3
57 | 3 -->> 1 2
58 | """
59 |
60 | print(my_graph.adjacency_list)
61 | #{1: [2, 3], 2: [1, 3], 3: [1, 2]}
62 |
63 | print(my_graph.number_of_nodes)
64 | #3
65 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Hash Tables/First_Recurring_Character.py:
--------------------------------------------------------------------------------
1 | #Google Question
2 | #Given an array, return the first recurring character
3 | #Example1 : array = [2,1,4,2,6,5,1,4]
4 | #It should return 2
5 | #Example 2 : array = [2,6,4,6,1,3,8,1,2]
6 | #It should return 6
7 |
8 | #First thing that comes to mind is we can create a dictionary and keep storing each element of the array in the dictionary
9 | #as we go along the array. But before adding the element to the dictionary, we'll check if the element is already present in the dictionary
10 | #If yes, then we simply return the element and break out
11 | #If not, then we add the element to the dictionary and move forward
12 |
13 | def simple_frc(array):
14 | dictionary = dict()
15 | for item in array:
16 | if item in dictionary:
17 | return item
18 | else:
19 | dictionary[item] = True
20 | return None
21 |
22 | array = [2,1,4,1,5,2,6]
23 | #print(simple_frc(array))
24 |
25 | #The time complexity is O(n) as we are looping through the array only once
26 | #And the search which we are doing in the dictionary, is of O(1) time, since it is basically an implementation of hash table.
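#The same idea works with a set instead of a dictionary, since we only care about membership (a quick sketch, not part of the course code):

def set_frc(array):
    seen = set()
    for item in array:
        if item in seen: #O(1) average membership test, just like the dictionary lookup
            return item
        seen.add(item)
    return None

print(set_frc([2,6,4,6,1,3,8,1,2]))
#6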
27 |
28 |
29 | #Another approach can be the naive one using two nested loops
30 |
31 | def naive_frc(array):
32 | l = len(array)
33 | i= 0
34 | frc = None
35 | while i < l: #We consider every element as a potential first occurrence
36 | for j in range(i+1, l): #And look for a later copy of it, but only before the earliest recurrence found so far
37 | if array[i] == array[j]:
38 | frc = array[i] #A recurring element is found, so we remember it
39 | l = j #Only a recurrence that happens before this index can still be the "first" one, so we shrink the search limit
40 | break
41 | i += 1
42 | return frc
39 | if len(self.data[i]) > 1: #If the current bucket holds more than one key,value pair, all of them are looped over below
40 | for j in range(len(self.data[i])): #Looping over all the lists(key,value pairs) in the current bucket
41 | keys_array.append(self.data[i][j][0]) #Adding the key of each list to the keys_array
42 | else:
43 | keys_array.append(self.data[i][0][0])
44 | return keys_array
45 |
46 | def values(self): #Function to return all the values, with exactly the same logic as the keys function
47 | values_array = []
48 | for i in range(self.size):
49 | if self.data[i]:
50 | for j in range(len(self.data[i])):
51 | values_array.append(self.data[i][j][1]) #Only difference from the keys function is instead of appending the first element, we are appending the last element of each list
52 | return values_array
53 |
54 |
55 | new_hash = hash_table(2)
56 | print(new_hash)
57 | #{'size': 2, 'data': [None, None]}
58 |
59 | new_hash.set('one',1)
60 | new_hash.set('two',2)
61 | new_hash.set('three',3)
62 | new_hash.set('four',4)
63 | new_hash.set('five',5)
64 | print(new_hash)
65 | #{'size': 2, 'data': [[['one', 1], ['five', 5]], [['two', 2], ['three', 3], ['four', 4]]]}
66 |
67 | print(new_hash.get('one'))
68 | #1
69 |
70 | print(new_hash.keys())
71 | #['one', 'five', 'two', 'three', 'four']
72 | print(new_hash.values())
73 | #[1, 5, 2, 3, 4]
74 |
75 |
76 | #Now although there are some for loops running in the hash_table class,
77 | #the set and get operations are not O(n).
78 | #This is because n stands for the size of the input, which corresponds to the number of key,value pairs in the table
79 | #But the for loop in the _hash function runs only for the length of the key, which is insignificantly small in comparison to the number of entries in general.
80 | #Also, the for loop in the get function runs only for the length of the collided bucket, which in most cases won't be long,
81 | #At least nowhere near the total number of entries, so lookups and insertions stay close to O(1) on average
82 | #The keys and values methods are roughly O(n) though, as we have to loop over every bucket of the table once
83 | #And over all the key,value pairs in the buckets which have collisions
84 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Hash Tables/Introduction.py:
--------------------------------------------------------------------------------
1 | #Hash Tables are data structures which generally provide very fast(O(1)) lookups, insertions and deletions
2 | #In Python, dictionaries are implemented as hash tables.
3 |
4 | #The way hashing works is that there is a bucket containing slots to fill with elements.
5 | #Just as elements of an array are referenced by their integer indexes, in dictionaries, or hash tables,
6 | #values are referenced by their keys, which can be of any immutable (hashable) data type.
7 | #Now there are different kinds of hash functions (eg: MD5, SHA1, SHA256) which are used to convert the keys into hashes, which are ideally different for each key
8 | #And the hashes are then mapped to some slot in the bucket. And the key and value pair get stored in the slot,
9 | #or in some accompanying data structure within the slot (like, linked lists)
10 |
11 | #In general, the lookup, insert and delete operations are all very fast, in the order of O(1)
12 | #But in some cases, more than one key can map to the same slot and that increases the time complexity by some margin,
13 | #although not by a lot in most cases. This is known as a collision.
14 | #Now, just like almost every problem in the computer science world has some sort of a solution,
15 | #collisions can also be resolved by numerous collision resolution techniques like open addressing and closed addressing
16 |
17 | #Enough details, let's look at how hash tables are implemented in Python using dictionaries.
18 |
19 | dictionary = dict()
20 | dictionary = {'one':1, 'two':2, 'three':3, 'four':4, 'five':5}
21 | print(dictionary)
22 | #{'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5}
23 |
24 | print(dictionary.keys())
25 | #dict_keys(['one', 'two', 'three', 'four', 'five'])
26 |
27 | print(dictionary.values())
28 | #dict_values([1, 2, 3, 4, 5])
29 |
30 | print(dictionary.items())
31 | #dict_items([('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)])
32 |
33 | print(dictionary['one']) #Accessing a value by its key in O(1) time
34 | #1
35 |
36 | dictionary['six'] = 6 #Inserting the value 6 for the key 'six' in O(1) time.
37 | print(dictionary)
38 | #{'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6}
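#For completeness, deleting a key and safely looking up a missing key are also O(1) on average (a short extra illustration):
del dictionary['six'] #Removes the key 'six' and its value
print(dictionary.get('ten', 'not found')) #get returns the given default instead of raising a KeyError for a missing key
#not found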
39 |
40 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Hash Tables/Pattern_Matching.py:
--------------------------------------------------------------------------------
1 | #Given a string and a pattern, write a program to find all occurences of the pattern in the string
2 | #For example, string = "THIS IS A TEST TEXT", pattern = "TEST"
3 | #Output = Pattern found at index 10
4 | #Example 2, string = "AABAACAADAABAABA", pattern = "AABA"
5 | #Output: Pattern found at index 0, Pattern found at index 9, Pattern found at index 12
6 |
7 | #The naive approach would be to slide over the entire string in windows of length equal to the length of the pattern
8 | #And check every character of each window against the pattern
9 |
10 | def naive_pattern_matching(string, pattern):
11 | matched_indices = [] #We create a list to store the indices where the pattern is found in the string
12 | l = len(pattern)
13 | flag = 0 #This flag will be used to check if the pattern has been matched or not
14 | for i in range(len(string) - l + 1): #We loop over the string only til the start of the potential last match of the pattern that can be found
15 | k = 0
16 | for j in range(i,i+l): #We loop for the length of the pattern
17 | if pattern[k] != string[j]: #If we find a non-matching character, we set the flag to 1 and break out of the loop to go to the next window
18 | flag = 1
19 | break;
20 | else: #If character matches, we increment k by 1
21 | k += 1
22 | if flag==0: #If flag is zero, i.e., if we haven't broken out after encountering a non-matching character, we append i to the list as this is where the matched pattern starts
23 | matched_indices.append(i)
24 | flag = 0 #We reset the flag to be 0 for the next window
25 | if matched_indices:
26 | return matched_indices
27 | else:
28 | return None
29 |
30 | string = "AABAACAADAABAABA"
31 | pattern = "AABA"
32 | print(f'Pattern found at {naive_pattern_matching(string,pattern)}')
33 | #Pattern found at [0, 9, 12]
34 |
35 |
36 | #As can be clearly made out this isn't a very efficient solution.
37 | #The outer for loop is of order O(n-m+1) where n is the size of the string and m is the size of the pattern
38 | #And the inner for loop is of order O(m).
39 | #Therefore, overall time complexity is equal to O(n*m) plus we also have some space complexity
40 |
41 |
42 | #Now, since this problem is not there in the hash tables portion of the course, and I am solving it here, there must be some method using hashing to solve this
43 | #And there is. It is an algorithm known as the Rabin-Karp Algorithm
44 | #In this, we loop over the entire string similarly as our naive algorithm in windows of length equal to that of the pattern
45 | #But we don't compare the characters of every window.
46 | #We calculate the hash value of the pattern and the substring we are checking and start comparing the characters only if the hash values match
47 | #For this, a hash function is required which should be pretty fast and can give the hash value of the substring in the next window,
48 | #by using the hash value of the substring of the present window
49 | #The hash function suggested by Rabin and Karp is as follows:
50 |
51 | '''hash( txt[s+1 .. s+m] ) = ( d ( hash( txt[s .. s+m-1]) – txt[s]*h ) + txt[s + m] ) mod q
52 |
53 | hash( txt[s .. s+m-1] ) : Hash value at shift s.
54 | hash( txt[s+1 .. s+m] ) : Hash value at next shift (or shift s+1)
55 | d: Number of characters in the alphabet (256)
56 | q: A prime number
57 | h: d^(m-1) '''
58 |
59 | #So let's implement this
60 |
61 | def rabin_karp(string, pattern, prime):
62 | n = len(string)
63 | m = len(pattern)
64 | h = 1
65 | d = 256
66 | p = 0
67 | t = 0
68 | flag = 0
69 | matched_indices = []
70 |
71 | #Calculating h according to formula h = (d^(m-1))%prime
72 | for i in range(m-1):
73 | h = (h*d)%prime
74 |
75 | #Calculating the hash values of the pattern and the first window of substring inside the given string
76 | for i in range(m):
77 | p = (p*d + ord(pattern[i]))%prime
78 | t = (t*d + ord(string[i]))%prime
79 |
80 | #Sliding over the string in windows and checking if the hash values match
81 | for i in range(n-m+1):
82 | if p==t:
83 | for j in range(m): #Comparing every character in the substring whose hash value matches that of the pattern
84 | if pattern[j] != string[i+j]:
85 | flag = 1
86 | break #If a non-matching character is found, we break out of the loop
87 | if flag == 0:
88 | matched_indices.append(i)
89 | else:
90 | flag = 0
91 | #Calculating the hash value for the next substring
92 | if i < n-m: #The hash for the next window is only needed if there is a next window
93 | t = (d*(t - ord(string[i])*h) + ord(string[i+m]))%prime #Rolling hash computed from the current window's hash using the formula given above
70 | if position >= self.length: #Positions at or beyond the current length of the list are handled like an append
71 | if position > self.length:
72 | print('This position is not available. Inserting at the end of the list')
73 | self.append(data) #Similarly, inserting at a position >= the length of the list is equivalent to appending, so we call the append method
74 | return
75 | else:
76 | new_node = Node(data)
77 | current_node = self.head
78 | for i in range(position - 1): #We traverse up to one position before the one where we want to insert the new node
79 | current_node = current_node.next
80 | new_node.previous = current_node #We make the previous of the new node point to the current node
81 | new_node.next = current_node.next #And the next point to the next of the current node.
82 | current_node.next = new_node #Then we break the link between the current node and the next node and make the next of the current node point to the new node
83 | new_node.next.previous = new_node #And finally we update the previous of the next node to point to the new node instead of the current node. This way, the new node gets inserted in between the current and the next nodes.
84 | self.length += 1
85 | return
86 |
87 |
88 | def delete_by_value(self, data):
89 | if self.head == None:
90 | print("Linked List is empty. Nothing to delete.")
91 | return
92 |
93 | current_node = self.head
94 | if current_node.data == data:
95 | self.head = self.head.next
96 | if self.head == None or self.head.next==None: #If after deleting the first node the list becomes empty or there remains only one node, we set the tail equal to the head
97 | self.tail = self.head
98 | if self.head != None:
99 | self.head.previous = None #We set the previous pointer of the new head to be None
100 | self.length -= 1
101 | return
102 | try: # Try block required because if the value is not found, current_node.next will eventually be None, which has no data attribute to compare.
103 | while current_node!= None and current_node.next.data != data:
104 | current_node = current_node.next
105 | if current_node!=None:
106 | current_node.next = current_node.next.next
107 | if current_node.next != None: #If the node deleted is not the last node (i.e., the node next to the next of the current node is != None),
108 | current_node.next.previous = current_node #Then we set the previous of the node next to the deleted node equal to the current node, so a two-way link is established
109 | else:
110 | self.tail = current_node #If the deleted node is the last node then we update the tail to be the current node
111 | self.length -= 1
112 | return
113 | except AttributeError:
114 | print("Given value not found.")
115 | return
116 |
117 |
118 | def delete_by_position(self, position):
119 | if self.head == None:
120 | print("Linked List is empty. Nothing to delete.")
121 | return
122 |
123 | if position == 0:
124 | self.head = self.head.next
125 | #print(self.head)
126 | if self.head == None or self.head.next == None:
127 | self.tail = self.head
128 | if self.head != None:
129 | self.head.previous = None #We update the previous of the new head to be equal to None
130 | self.length -= 1
131 | return
132 |
133 | if position>=self.length:
134 | position = self.length-1
135 |
136 | current_node = self.head
137 | for i in range(position - 1):
138 | current_node = current_node.next
139 | current_node.next = current_node.next.next
140 | if current_node.next != None: #Similar logic to the delete_by_value method
141 | current_node.next.previous = current_node
142 | else:
143 | self.tail = current_node
144 | self.length -= 1
145 | return
146 |
147 |
148 | #I'll create a Doubly linked list and call all its methods in the same sequence as I did in the Singly Linked List implementation
149 | #The answers should come out to be the same
150 | my_linked_list = DoublyLinkedList()
151 | my_linked_list.print_list()
152 | #Empty
153 |
154 | my_linked_list.append(5)
155 | my_linked_list.append(2)
156 | my_linked_list.append(9)
157 | my_linked_list.print_list()
158 | #5 2 9
159 |
160 | my_linked_list.prepend(4)
161 | my_linked_list.print_list()
162 | #4 5 2 9
163 |
164 | my_linked_list.insert(2,7)
165 | my_linked_list.print_list()
166 | #4 5 7 2 9
167 |
168 | my_linked_list.insert(0,0)
169 | my_linked_list.insert(6,0)
170 | my_linked_list.insert(9,3)
171 | my_linked_list.print_list()
172 | #This position is not available. Inserting at the end of the list
173 | #0 4 5 7 2 9 0 3
174 |
175 | my_linked_list.delete_by_value(3)
176 | my_linked_list.print_list()
177 | #0 4 5 7 2 9 0
178 |
179 | my_linked_list.delete_by_value(0)
180 | my_linked_list.print_list()
181 | #4 5 7 2 9 0
182 |
183 | my_linked_list.delete_by_position(3)
184 | my_linked_list.print_list()
185 | #4 5 7 9 0
186 |
187 | my_linked_list.delete_by_position(0)
188 | my_linked_list.print_list()
189 | #5 7 9 0
190 |
191 | my_linked_list.delete_by_position(8)
192 | my_linked_list.print_list()
193 | #5 7 9
194 |
195 | my_linked_list.delete_by_value(3)
196 | my_linked_list.print_list()
197 | #Given value not found.
198 |
199 | print(my_linked_list.length)
200 | #3
201 |
202 |
203 | #The answers are all the same, meaning our doubly linked list works properly
204 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Linked Lists/Implementation.py:
--------------------------------------------------------------------------------
1 | #Linked lists are, as the name suggests, a list which is linked.
2 | #It consists of nodes which contain data and a pointer to the next node in the list.
3 | #The list is connected with the help of these pointers.
4 | #These nodes are scattered in memory, quite like the buckets in a hash table.
5 | #The node where the list starts is called the head of the list and the node where it ends, i.e., the last node, is called the tail of the list.
6 | #The average time complexities of some operations involving linked lists are as follows:
7 | #Look-up : O(n)
8 | #Insert : O(n)
9 | #Delete : O(n)
10 | #Append : O(1)
11 | #Prepend : O(1)
12 | #Python doesn't have a built-in implementation of linked lists, we have to build it on our own
13 | #So, here we go.
14 |
15 |
16 | #First we define a class Node which will act as a blueprint for each of our nodes
17 | class Node():
18 | def __init__(self, data): #When instantiating a Node, we will pass the data we want the node to hold
19 | self.data = data #The data passed during instantiation will be stored in self.data
20 | self.next = None #This self.next will act as a pointer to the next node in the list. When creating a new node, it always points to null(or None).
21 |
22 |
23 | #Next we define the class LinkedList which will have a head pointer to point to the beginning of the list and a tail pointer to
24 | #point to the end of the list. An optional value of length can also be stored to keep track of the length of the linked list.
25 | #When the list is created , it is empty and there is no node to point to. So head will point to None at the time of creation of linked list
26 | #And since the list is empty at the time of creation, we will point the tail to whatever the head is pointing to, i.e., None
27 | class LinkedList():
28 | def __init__(self):
29 | self.head = None
30 | self.tail = self.head
31 | self.length = 0
32 |
33 | #Next comes the append method with which we will add nodes to the end of the linked list.
34 | #To do this, we will just pass the data we want to append. The append method will create a new instance of the Node class,
35 | #Effectively creating a new node, with the data passed to the instance, so that the new node will contain the data the user wants to enter
36 | #Then we will check if the list is empty. If it is, we will point the head to the new node just created and the tail to the head,
37 | #as there is only one node in the list, so the head and tail point to the same node. Also, we will set the length equal to 1.
38 | #If the list isn't empty, then we will make the 'next' pointer of the last node(pointed at by 'tail') point to the new node
39 | #And update the tail to point to the new node as this has become the last node in the list now. And we'll increase the length.
40 | def append(self, data):
41 | new_node = Node(data)
42 | if self.head == None:
43 | self.head = new_node
44 | self.tail = self.head
45 | self.length = 1
46 | else:
47 | self.tail.next = new_node
48 | self.tail = new_node
49 | self.length += 1
50 |
51 |
52 | #The next operation we'll implement is prepend, where we add a node at the head of the list.
53 | #For this, we will call the prepend method and pass the value we want to enter, which will create a new object of the Node class
54 | #Then we will make the 'next' of the new node point to the head, as the head is currently pointing to the first node of the list
55 | #And then we will update the head to point to the new node, as we want the new node to be the new first node, i.e., the new head.
56 | #And of course, we'll increase the length by 1
57 | def prepend(self, data):
58 | new_node = Node(data)
59 | if self.head == None:
60 | self.head = new_node
61 | self.tail = self.head
62 | self.length += 1
63 | else:
64 | new_node.next = self.head
65 | self.head = new_node
66 | self.length += 1
67 |
68 |
69 | #Now we will implement the print function to print the values in the nodes of the linked list
70 | #We will check if the list is empty or not. If it is, we will printout "Empty"
71 | #Else, we will create a new node which will point to the head. Then we will loop until the node we created becomes None
72 | #Inside the loop we will print the data of the current node and then make the current node equal to the node pointed by the current node
73 | #Since this requires us to traverse the entire length of the linked list, this is an O(n) operation.
74 | def print_list(self):
75 | if self.head == None:
76 | print("Empty")
77 | else:
78 | current_node = self.head
79 | while current_node!= None:
80 | print(current_node.data, end= ' ')
81 | current_node = current_node.next
82 | print()
83 |
84 | #Next comes the insert operation, where we insert a data at a specified position
85 | #If the position is greater than the length of the list, we simply follow the procedure of the append method where we add the node to the end of the list
86 | #If the position is equal to 0, we follow the prepend procedure, where we append the node at the head
87 | #If the position is somewhere in between, then we create a temporary node which traverses the list up to the position just before the one where we want to insert the new node
88 | #Now the 'next' of the temporary node is pointing to the next node in the list, where we want to insert our new node
89 | #So first we link the new node and the node at the desired position by making the 'next' of the new node equal to the 'next' of the temporary node
90 | #The temporary node and the new node point to the same position now, the position we want to insert the new node
91 | #So we update the 'next' of the temporary node to point to the new node.
92 | #This way, our new node occupies the position it intended to and the node which was originally there, gets pushed to the next position
93 | #Since this requires traversal of the list, it is an O(n) operation.
94 | def insert(self, position, data):
95 | if position >= self.length:
96 | if position>self.length:
97 | print("This position is not available. Inserting at the end of the list")
98 | new_node = Node(data)
99 | self.tail.next = new_node
100 | self.tail = new_node
101 | self.length += 1
102 | elif position == 0:
103 | new_node = Node(data)
104 | new_node.next = self.head
105 | self.head = new_node
106 | self.length += 1
107 | else:
108 | new_node = Node(data)
109 | current_node = self.head
110 | for i in range(position-1):
111 | current_node = current_node.next
112 | new_node.next = current_node.next
113 | current_node.next = new_node
114 | self.length += 1
115 |
116 |
117 | #Next comes the delete_by_value method where the user can enter a value and if the value is found in the list, it will be deleted.
118 | #(If the value is found multiple times, only the first occurrence of the value will be deleted.)
119 | #First we check if the list is empty. If yes, we print appropriate message. If not, then we create a temporary node.
120 | #Then we check if the value of the head is equal to the value we want deleted.
121 | #If yes, we make the head equal to the node pointed by the 'next' of the head. Then we check if there are only one or zero nodes in the list
122 | #If yes, then we update the tail to be equal to the head.
123 | #By doing this, the original 'head' gets disconnected from the list and the head becomes updated to what was originally the second node
124 | #If these two cases are not encountered, then we have to traverse the list and check every node.
125 | #So we loop until either the current node becomes None, signifying the end of the list, or until the data of the node next to the current node equals the data we want deleted.
126 | #After coming out of the loop, we check whether the 'next' of the current node is not None; if so, the node next to the current node is the one we want deleted
127 | #So we make the 'next' of the current node point to the next to the next of the current node.
128 | #Effectively, we bypass the node we want deleted and establish a connection between the current and the next to next of the current nodes.
129 | #After deleting the required node, we check if the current node's 'next' points to None, i.e., if it is the last node. If yes, then we update the tail
130 | #And if the 'next' of the current node is None, it means we traversed the entire list but couldn't find the value.
131 | #Time complexity is pretty clearly O(n)
132 | def delete_by_value(self, data):
133 | if self.head == None:
134 | print("Linked List is empty. Nothing to delete.")
135 | return
136 | current_node = self.head
137 | if current_node.data == data:
138 | self.head = self.head.next
139 | if self.head == None or self.head.next==None:
140 | self.tail = self.head
141 | self.length -= 1
142 | return
143 | while current_node.next!= None and current_node.next.data != data:
144 | #if current_node.data == data:
145 | # previous_node.next = current_node.next
146 | # return
147 | current_node = current_node.next
148 | if current_node.next!=None:
149 | current_node.next = current_node.next.next
150 | if current_node.next == None:
151 | self.tail = current_node
152 | self.length -= 1
153 | return
154 | else:
155 | print("Given value not found.")
156 |
157 |
158 | #Another functionality of linked lists can be deleting a node based on its position.
159 | #It follows more or less the same procedure as delete_by_value method.
160 | #The only real difference is that instead of traversing the list till the 'next' of the current node becomes None or the next node's data equals the required data,
161 | #Here we traverse the list till the position one place behind the position we want deleted, similar to the insert operation
162 | #And then we bypass the next node to the current node and link it to the next to the next node of the current node.
163 | #We do a similar check for tail and update it just like in the delete_by_value method.
164 | #This operation too has a time complexity of O(n) in the worst case.
165 | def delete_by_position(self, position):
166 | if self.head == None:
167 | print("Linked List is empty. Nothing to delete.")
168 | return
169 | if position == 0:
170 | self.head = self.head.next
171 | if self.head == None or self.head.next == None:
172 | self.tail = self.head
173 | self.length -= 1
174 | return
175 | if position>=self.length:
176 | position = self.length-1
177 | current_node = self.head
178 | for i in range(position - 1):
179 | current_node = current_node.next
180 | current_node.next = current_node.next.next
181 | self.length -= 1
182 | if current_node.next == None:
183 | self.tail = current_node
184 | return
185 |
186 |
187 | #We will import this file while reversing a linked list. So we must make sure that it runs only
188 | #when it is the main file being run and not also when it is being imported in some other file.
189 | if __name__ == '__main__':
190 |
191 | my_linked_list = LinkedList()
192 | my_linked_list.print_list()
193 | #Empty
194 |
195 | my_linked_list.append(5)
196 | my_linked_list.append(2)
197 | my_linked_list.append(9)
198 | my_linked_list.print_list()
199 | #5 2 9
200 |
201 | my_linked_list.prepend(4)
202 | my_linked_list.print_list()
203 | #4 5 2 9
204 |
205 | my_linked_list.insert(2,7)
206 | my_linked_list.print_list()
207 | #4 5 7 2 9
208 |
209 | my_linked_list.insert(0,0)
210 | my_linked_list.insert(6,0)
211 | my_linked_list.insert(9,3)
212 | my_linked_list.print_list()
213 | #This position is not available. Inserting at the end of the list
214 | #0 4 5 7 2 9 0 3
215 |
216 | my_linked_list.delete_by_value(3)
217 | my_linked_list.print_list()
218 | #0 4 5 7 2 9 0
219 |
220 | my_linked_list.delete_by_value(0)
221 | my_linked_list.print_list()
222 | #4 5 7 2 9 0
223 |
224 | my_linked_list.delete_by_position(3)
225 | my_linked_list.print_list()
226 | #4 5 7 9 0
227 |
228 | my_linked_list.delete_by_position(0)
229 | my_linked_list.print_list()
230 | #5 7 9 0
231 |
232 | my_linked_list.delete_by_position(8)
233 | my_linked_list.print_list()
234 | #5 7 9
235 | print(my_linked_list.length)
236 | #3
237 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Linked Lists/Reverse.py:
--------------------------------------------------------------------------------
1 | #Given a linked list we have to reverse it.
2 | #For this we need a linked list implementation first, so we will import our Implementation.py file
3 | #And use the LinkedList and Node classes defined there so that we don't have to create a Linked List from scratch
4 |
5 | from Implementation import LinkedList, Node
6 |
7 | #Now we create a Linked List by appending some values
8 | my_linked_list = LinkedList()
9 | my_linked_list.append(2)
10 | my_linked_list.append(3)
11 | my_linked_list.append(4)
12 | my_linked_list.append(5)
13 | my_linked_list.append(6)
14 | my_linked_list.print_list()
15 | #2 3 4 5 6
16 |
17 |
18 | #Linked list has been created. Now we need to create a reverse function, which will reverse the list.
19 | #It will take the linked list as an argument and return the reversed list.
20 | #If the list is empty or consists of 1 item only we return the list as it is.
21 | #Otherwise, we create two pointers, first and second, which point to the first and second nodes of the list respectively
22 | #Then we update the tail of the list to point to the head as after reversing the present head will become the last node
23 | #Then we run a loop until second becomes None
24 | #Inside the loop we create a temporary node which points to the 'next' of the second node
25 | #Then we update the 'next' of the second node to point to the first node so that the link is now reversed (2nd node points to 1st node instead of 3rd).
26 | #And then we will update the first and second nodes to be equal to the second and temporary nodes respectively.
27 | #What this does is, in the next iteration, 'second' will point to the 3rd node and 'first' to the 2nd
28 | #And the 'second.next = first' statement will make the 3rd node point to the 2nd node instead of the 4th.
29 | #And this will go on till 'second' becomes None and by then all the links will be reversed.
30 | #Finally, we will update the 'next' of the head (which is still the original head) to point to None as it is effectively the last node
31 | #And then we will update the head to be equal to 'first', which by now points to the last node of the original list, and return the now reversed linked list
32 | #Time complexity pretty clearly will be O(n)
33 | def reverse(linked_list):
34 | if linked_list.length <=1:
35 | return linked_list
36 | else:
37 | first = linked_list.head
38 | second = first.next
39 | linked_list.tail = linked_list.head
40 | while second:
41 | temp = second.next
42 | second.next = first
43 | first = second
44 | second = temp
45 | linked_list.head.next = None
46 | linked_list.head = first
47 | return linked_list
48 |
49 | reversed_linked_list = reverse(my_linked_list)
50 | reversed_linked_list.print_list()
51 | #6 5 4 3 2
52 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Queues/Linked_List_Implementation.py:
--------------------------------------------------------------------------------
1 | #Queues are another form of linear data structure very similar to stacks.
2 | #The difference is queues follow the FIFO rule - First In First Out, much like real life queues,
3 | #Where the person who gets in first gets to leave first.
4 | #Queues can be implemented with both arrays and linked lists, but the array implementation is not efficient
5 | #Because for removing an element from the queue, which happens from the front of the array (queue),
6 | #the indices of the array have to be updated every time, essentially making it an O(n) operation,
7 | #Whereas the same operation can be done in O(1) time with linked lists. (A short illustration of this is at the end of this file.)
8 | #Queues have enqueue and dequeue operations which correspond to the push and pop operations of stacks, the only difference being that dequeue removes an element from the front
9 | #Time complexities are as follows:
10 | #Peek - O(1)
11 | #Enqueue - O(1)
12 | #Dequeue - O(1)
13 |
14 |
15 | #Like for stacks, we need a node class which will contain the data and a pointer to the next node
16 | class Node():
17 | def __init__(self, data):
18 | self.data = data
19 | self.next = None
20 |
21 |
22 | #Next we need the Queue class itself which will contain a constructor initialising the queue and then the methods we require
23 | class Queue():
24 |
25 |     #The 'first' pointer will always point to the front of the queue, that is, the element which is to be removed next
26 | #The 'last' pointer will always point to the end of the queue, i.e., the element which has last been entered
27 | def __init__(self):
28 | self.first = None
29 | self.last = None
30 | self.length = 0
31 |
32 |     #Now comes the peek method which will return the element at the front of the queue (or None if the queue is empty, so that peeking an empty queue doesn't crash)
33 |     def peek(self):
34 |         return self.first.data if self.first != None else None
35 |
36 | #The enqueue operation will add an element at the end of the queue
37 | #If the queue is empty, it will make both the first and last pointer point to the new node
38 |     #Else, it will first make the 'next' of the present last node point to the new node and then it will update the last pointer to point to the new node
39 | #Time complexity will be O(1)
40 | def enqueue(self, data):
41 | new_node = Node(data)
42 | if self.last == None:
43 | self.last = new_node
44 | self.first = self.last
45 | self.length += 1
46 | return
47 | else:
48 | self.last.next = new_node
49 | self.last = new_node
50 | self.length += 1
51 | return
52 |
53 |
54 | #Next comes the dequeue operation which removes the front element of the queue
55 |     #If the queue is empty, it will print an appropriate message
56 |     #Else, it will make the first pointer point to the next node, decrease the length, and reset the last pointer to None if this was the only element.
57 | def dequeue(self):
58 | if self.last == None:
59 |             print("Queue Empty")
60 | return
61 | if self.last == self.first:
62 | self.last = None
63 | self.first = self.first.next
64 | self.length -= 1
65 | return
66 |
67 | #Finally we'll create the print method which prints the elements of the queue in, well, a queue like format
68 | def print_queue(self):
69 | if self.length == 0:
70 | print("Queue Empty")
71 | return
72 | else:
73 | current_pointer = self.first
74 | while(current_pointer!= None):
75 | if current_pointer.next == None:
76 | print(current_pointer.data)
77 | else:
78 | print(f'{current_pointer.data} <<-- ', end='')
79 | current_pointer = current_pointer.next
80 | return
81 |
82 | my_queue = Queue()
83 | my_queue.enqueue("This")
84 | my_queue.enqueue("is")
85 | my_queue.enqueue("a")
86 | my_queue.enqueue("Queue")
87 | my_queue.print_queue()
88 | #This <<-- is <<-- a <<-- Queue
89 |
90 | print(my_queue.peek())
91 | #This
92 |
93 | my_queue.dequeue()
94 | my_queue.dequeue()
95 | my_queue.print_queue()
96 | #a <<-- Queue
97 |
98 | print(my_queue.__dict__)
99 | #{'first': <__main__.Node object at 0x0000020CE99AED48>, 'last': <__main__.Node object at 0x0000020CE99AED88>, 'length': 2}
100 | print(my_queue.first)
101 | #<__main__.Node object at 0x000001A3F633ED48>
102 | print(my_queue.first.data)
103 | #a
104 |
105 | my_queue.dequeue()
106 | my_queue.dequeue()
107 | my_queue.print_queue()
108 | #Queue Empty
109 |
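110 | #As a quick, hedged illustration of the point made at the top of this file (not part of the original course code):
111 | #removing from the front of a plain Python list shifts every remaining element, so it is O(n),
112 | #whereas collections.deque supports O(1) removal from the front.
113 | from collections import deque
114 | 
115 | array_queue = [1, 2, 3, 4]
116 | print(array_queue.pop(0))      #O(n) - every remaining element gets shifted one place to the left
117 | #1
118 | 
119 | deque_queue = deque([1, 2, 3, 4])
120 | print(deque_queue.popleft())   #O(1)
121 | #1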
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Queues/Queue_Using_Stacks.py:
--------------------------------------------------------------------------------
1 | #This is a popular interview question: implementing a queue using stacks.
2 | #We have access to a stack's push and pop operations. Using those, we need to implement a queue's enqueue and dequeue operations.
3 | #It can be done in two ways, by either making the enqueue operation costly (O(n)) or the dequeue operation costly (O(n))
4 | #In the first method, we need two stacks, say s1 and s2, and we have to maintain them such that the element entered first
5 | #Is always at the top of stack s1. This way, for dequeue, we just need to pop from s1.
6 | #But for enqueueing, we have to make the enqueued item reach the bottom of stack s1.
7 | #For that, we will have to pop the elements of s1 one by one and push them onto stack s2, then push the new item onto s1,
8 | #And then again pop everything from s2 and push it back onto s1, so the new item is now at the bottom.
9 |
10 | #Let's implement a queue using stacks (implemented with Python lists) using this first method
11 |
12 | class Queue():
13 | def __init__(self):
14 | self.s1 = []
15 | self.s2 = []
16 |
17 |
18 | def peek(self):
19 | if len(self.s1) == 0:
20 | print("Queue empty")
21 | else:
22 | return self.s1[len(self.s1)-1]
23 |
24 |
25 | def enqueue(self, data):
26 | for i in range(len(self.s1)):
27 | item = self.s1.pop()
28 | self.s2.append(item)
29 | self.s1.append(data)
30 | for i in range(len(self.s2)):
31 | item = self.s2.pop()
32 | self.s1.append(item)
33 | return
34 |
35 | def dequeue(self):
36 | if len(self.s1)==0:
37 | print("Queue Empty")
38 | return
39 | else:
40 | return self.s1.pop()
41 |
42 | def print_queue(self):
43 | if len(self.s1) == 0:
44 | print("Queue Empty")
45 | return
46 | for i in range(len(self.s1) - 1,0,-1):
47 | print(f'{self.s1[i]} <<-- ',end='')
48 | print(self.s1[0])
49 | return
50 |
51 |
52 | my_queue = Queue()
53 | my_queue.enqueue(2)
54 | my_queue.enqueue(5)
55 | my_queue.enqueue(0)
56 | my_queue.print_queue()
57 | #2 <<-- 5 <<-- 0
58 |
59 | my_queue.dequeue()
60 | my_queue.print_queue()
61 | #5 <<-- 0
62 |
63 | print(my_queue.peek())
64 | #5
65 | my_queue.enqueue(9)
66 | my_queue.print_queue()
67 | #5 <<-- 0 <<-- 9
68 |
69 | my_queue.dequeue()
70 | my_queue.dequeue()
71 | my_queue.dequeue()
72 | my_queue.print_queue()
73 | #Queue Empty
74 |
75 |
76 | '''
77 | For the second method, we can make the dequeue operation costly just like we made the enqueue operation costly above.
78 | For enqueueing, we will simply push onto s1.
79 | For dequeueing, we will pop all but the bottom element of s1 and push them onto s2. Then we will pop that remaining element of s1,
80 | Which is the element we want to dequeue. After that we pop all the items of s2 and push them back onto s1.
81 | This makes the dequeue operation O(n) while enqueue and peek remain O(1)
82 | '''
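83 | 
84 | #A minimal, hedged sketch of this second method (not part of the original course code):
85 | #enqueue simply pushes onto s1, peek reads the bottom of s1 (the front of the queue),
86 | #and dequeue moves everything except that bottom element over to s2, pops it, and moves the rest back.
87 | class QueueDequeueCostly():
88 |     def __init__(self):
89 |         self.s1 = []
90 |         self.s2 = []
91 | 
92 |     def peek(self):
93 |         if len(self.s1) == 0:
94 |             print("Queue Empty")
95 |             return
96 |         return self.s1[0]                  #The front of the queue sits at the bottom of s1
97 | 
98 |     def enqueue(self, data):
99 |         self.s1.append(data)               #O(1) - just push onto s1
100 | 
101 |     def dequeue(self):
102 |         if len(self.s1) == 0:
103 |             print("Queue Empty")
104 |             return
105 |         while len(self.s1) > 1:            #Move all but the bottom element onto s2
106 |             self.s2.append(self.s1.pop())
107 |         front = self.s1.pop()              #The bottom of s1 is the front of the queue
108 |         while len(self.s2) > 0:            #Move the remaining elements back onto s1
109 |             self.s1.append(self.s2.pop())
110 |         return front
111 | 
112 | my_queue2 = QueueDequeueCostly()
113 | my_queue2.enqueue(2)
114 | my_queue2.enqueue(5)
115 | my_queue2.enqueue(0)
116 | print(my_queue2.dequeue())
117 | #2
118 | print(my_queue2.peek())
119 | #5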
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Stacks/Array_Implementation.py:
--------------------------------------------------------------------------------
1 | #Stacks can be implemented with the help of arrays as well.
2 | #We can insert and delete elements only at the end of the array(the top of the stack)
3 | #Python comes built-in with lists which are basically arrays.
4 | #They contain functionalities like append and pop which correspond to the push and pop methods of stacks respectively
5 | #So implementing stacks using arrays is pretty simple in Python
6 | #The time complexities of the different operations are the same as those for the linked list implementation of stacks
7 |
8 |
9 | #We define a class Stack with the array which will store the elements and the methods we require for a stack
10 | class Stack():
11 |
12 | #The constructor consists of only an empty array as length comes built-in with arrays(lists)
13 | def __init__(self):
14 | self.array = []
15 |
16 | #In the peek method we access the last element of the array(top element of the stack) by using the built-in length functionality of arrays
17 | def peek(self):
18 | return self.array[len(self.array)-1]
19 |
20 | #For push operation, we use the built-in append method of lists, which appends/pushes/inserts an element at the end of the list(top of the stack)
21 | def push(self, data):
22 | self.array.append(data)
23 | return
24 |
25 |     #For the pop operation, we use the built-in pop method of lists, which removes the last element of the list (the top element of the stack)
26 | #Time complexity of pop operation for the last element of the list is O(1).
27 | def pop(self):
28 | if len(self.array)!= 0:
29 | self.array.pop()
30 | return
31 | else:
32 | print("Stack Empty")
33 | return
34 |
35 | #Stack follows LIFO, so for the print operation, we have to print the last element of the list first.
36 | #This will require a loop traversing the entire array, so the complexity is O(n)
37 | def print_stack(self):
38 | for i in range(len(self.array)-1, -1, -1):
39 | print(self.array[i])
40 | return
41 |
42 |
43 |
44 | my_stack = Stack()
45 | my_stack.push("Andrei's")
46 | my_stack.push("Courses")
47 | my_stack.push("Are")
48 | my_stack.push("Awesome")
49 | my_stack.print_stack()
50 | #Awesome
51 | #Are
52 | #Courses
53 | #Andrei's
54 |
55 | my_stack.pop()
56 | my_stack.pop()
57 | my_stack.print_stack()
58 | #Courses
59 | #Andrei's
60 |
61 | print(my_stack.peek())
62 | #Courses
63 |
64 | print(my_stack.__dict__)
65 | #{'array': ["Andrei's", 'Courses']}
66 |
67 |
68 | '''Stacks can be implemented in Python in two more ways.
69 | 1. Using the 'deque' class from the 'collections' module. The same methods used with lists, append and pop, are used with deques.
70 | 2. Using 'LifoQueue' from the 'queue' module. 'put()' and 'get()' methods are used for pushing and popping. It comes with some other useful methods as well.
71 | '''
72 |
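73 | #A quick, hedged illustration of those two alternatives (standard library only; not part of the original course code):
74 | from collections import deque
75 | from queue import LifoQueue
76 | 
77 | deque_stack = deque()
78 | deque_stack.append('a')        #push
79 | deque_stack.append('b')
80 | print(deque_stack.pop())       #pop - last element in comes out first
81 | #b
82 | 
83 | lifo_stack = LifoQueue()
84 | lifo_stack.put('a')            #push
85 | lifo_stack.put('b')
86 | print(lifo_stack.get())        #pop - last element in comes out first
87 | #b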
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Stacks/Linked_List_Implementation.py:
--------------------------------------------------------------------------------
1 | #Stacks are linear data-structures which can be implemented using either arrays or linked lists
2 | #Insertion and deletion of elements in a stack take place from one end only.
3 | #Stacks follow the LIFO rule - Last In First Out, where the last element that is inserted, is the first element that comes out.
4 | #The main operations that can be performed on a stack, with their time complexities, are as follows:
5 | #Push (Insert) - O(1)
6 | #Pop (Remove) - O(1)
7 | #Peek (Retrieve the top element) - O(1)
8 |
9 | #Here we'll implement a stack using linked lists
10 |
11 | #Linked Lists are made of nodes. So we create a node class.
12 | #It will contain the data and the pointer to the next node.
13 | class Node():
14 | def __init__(self, data):
15 | self.data = data
16 | self.next = None
17 |
18 |
19 | #Now we create the Stack class
20 | #It will consist of a constructor having the top pointer, i.e., the pointer which points to the top element of the stack at any given time
21 | #The length variable which keeps track of the length of the stack, and a bottom pointer which points to bottom most element of the stack
22 | #After this will come the methods associated with a stack
23 | class Stack():
24 | def __init__(self):
25 | self.top = None
26 | self.bottom = None
27 | self.length = 0
28 |
29 | #The peek method will allow us to peek at the top element,i.e.,
30 | #It will return the element at the top of the stack without removing it from the stack.
31 | #Since for this we only need to see what the top pointer points at, the time complexity will be O(1)
32 | def peek(self):
33 | if self.top is None:
34 | return None
35 | return self.top.data
36 |
37 |
38 | #Next comes the push operation, where we insert an element at the top of the stack
39 |     #Again this only requires access to the top pointer and involves no looping.
40 | #So time complexity is O(1)
41 | def push(self, data):
42 | new_node = Node(data)
43 | if self.top == None: #If the stack is empty, we make the top and bottom pointer both point to the new node
44 | self.top = new_node
45 | self.bottom = new_node
46 | else: #Otherwise, we make the next of the new node, which was pointing to None, point to the present top and then update the top pointer
47 | new_node.next = self.top
48 | self.top = new_node
49 | self.length += 1
50 |
51 |     #Next comes the pop operation where we remove the top element from the stack
52 | #Its time complexity is O(1) as well
53 | def pop(self):
54 | if self.top == None: #If the stack is empty, we print an appropriate message
55 | print("Stack empty")
56 | else: #Else we make the top pointer point to the next of the top pointer and decrease the length by 1, effectively deleting the top element.
57 | self.top = self.top.next
58 | self.length -= 1
59 | if(self.length == 0): #We make the bottom pointer None if there was only 1 element in the stack and that gets popped
60 | self.bottom = None
61 |
62 | #Finally we'll implement a print method which prints the elements of the stack from top to bottom
63 |     #This will be an O(n) operation as we'll obviously have to traverse the entire linked list to print all elements
64 | def print_stack(self):
65 | if self.top == None:
66 | print("Stack empty")
67 | else:
68 | current_pointer = self.top
69 | while(current_pointer!=None):
70 | print(current_pointer.data)
71 | current_pointer = current_pointer.next
72 |
73 |
74 | my_stack = Stack()
75 | print(my_stack.peek())
76 | #None
77 |
78 | my_stack.push('google')
79 | my_stack.push('udemy')
80 | my_stack.push('discord')
81 | my_stack.print_stack()
82 | #discord
83 | #udemy
84 | #google
85 |
86 | print(my_stack.top.data)
87 | #discord
88 |
89 | print(my_stack.bottom.data)
90 | #google
91 |
92 | my_stack.pop()
93 | my_stack.print_stack()
94 | #udemy
95 | #google
96 |
97 | my_stack.pop()
98 | my_stack.pop()
99 | my_stack.print_stack()
100 | #Stack empty
101 |
102 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Trees/Binary_Search_Tree.py:
--------------------------------------------------------------------------------
1 | #Binary Search Trees are a non-linear data structure.
2 | #They consist of a root node with zero, one or two children, where each child can again have zero, one or two children of its own, and so on
3 | #In most cases, the time complexity of operations on a BST, which include lookups, insertions and deletions, is O(log n)
4 | #Except for the worst case, where the tree is heavily unbalanced with all the nodes being on one side of the tree.
5 | #In that case, it basically becomes a linked list and the time complexities go up to O(n)
6 |
7 | #Lets implement an unbalanced Binary Search Tree first
8 | #We will need a node class to store information about each node
9 | #It will store the data and the pointers to its left and right children
10 | class Node():
11 | def __init__(self, data):
12 | self.data = data
13 | self.left = None
14 | self.right = None
15 |
16 |
17 | #Now we will implement the Binary Search Tree having a constructor with the root node initialised to None
18 | #And the three methods, lookup, insert and delete
19 | class BST():
20 | def __init__(self):
21 | self.root = None
22 | self.number_of_nodes = 0
23 |
24 |
25 | #For the insert method, we check if the root node is None, then we make the root node point to the new node
26 | #Otherwise, we create a temporary pointer which points to the root node at first.
27 | #Then we compare the data of the new node to the data of the node pointed by the temporary node.
28 | #If it is greater, then we first check if the right child of the temporary node exists; if it does, then we update the temporary node to its right child
29 | #Otherwise we make the new node the right child of the temporary node
30 | #And if the new node data is less than the temporary node data, we follow the same procedure as above this time with the left child.
31 | #The complexity is O(log n) in the average case and O(n) in the worst case.
32 | def insert(self, data):
33 | new_node = Node(data)
34 | if self.root == None:
35 | self.root = new_node
36 | self.number_of_nodes += 1
37 | return
38 | else:
39 | current_node = self.root
40 |             while(current_node.left != new_node) and (current_node.right != new_node):    #Note: duplicate values are not handled; inserting an already existing value would loop here forever
41 | if new_node.data > current_node.data:
42 | if current_node.right == None:
43 | current_node.right = new_node
44 | else:
45 | current_node = current_node.right
46 | elif new_node.data < current_node.data:
47 | if current_node.left == None:
48 | current_node.left = new_node
49 | else:
50 | current_node = current_node.left
51 | self.number_of_nodes += 1
52 | return
53 |
54 |
55 | #Now we will implement the lookup method.
56 | #It will follow similar logic as to the insert method to reach the correct position.
57 | #Only instead of inserting a new node we will return "Found" if the node pointed by the temporary node contains the same value we are looking for
58 | def search(self,data):
59 | if self.root == None:
60 | return "Tree Is Empty"
61 | else:
62 | current_node = self.root
63 | while True:
64 | if current_node == None:
65 | return "Not Found"
66 | if current_node.data == data:
67 | return "Found"
68 | elif current_node.data > data:
69 | current_node = current_node.left
70 | elif current_node.data < data:
71 | current_node = current_node.right
72 |
73 |
74 | #Finally comes the very complicated remove method.
75 | #This one is too complicated for me to explain while writing. So I'll just write the code down with some comments
76 | #explaining which conditions are being checked
77 | def remove(self, data):
78 | if self.root == None: #Tree is empty
79 | return "Tree Is Empty"
80 | current_node = self.root
81 | parent_node = None
82 | while current_node!=None: #Traversing the tree to reach the desired node or the end of the tree
83 | if current_node.data > data:
84 | parent_node = current_node
85 | current_node = current_node.left
86 | elif current_node.data < data:
87 | parent_node = current_node
88 | current_node = current_node.right
89 | else: #Match is found. Different cases to be checked
90 |                     #Node has no right child (either a left child only, or no children at all)
91 | if current_node.right == None:
92 | if parent_node == None:
93 | self.root = current_node.left
94 | return
95 | else:
96 | if parent_node.data > current_node.data:
97 | parent_node.left = current_node.left
98 | return
99 | else:
100 | parent_node.right = current_node.left
101 | return
102 |
103 | #Node has right child only
104 | elif current_node.left == None:
105 | if parent_node == None:
106 | self.root = current_node.right
107 | return
108 | else:
109 | if parent_node.data > current_node.data:
110 | parent_node.left = current_node.right
111 | return
112 | else:
113 | parent_node.right = current_node.right
114 | return
115 |
116 |                 #Node has neither left nor right child (note: this case is already covered by the 'no right child' branch above, so this elif is never actually reached)
117 | elif current_node.left == None and current_node.right == None:
118 | if parent_node == None: #Node to be deleted is root
119 | current_node = None
120 | return
121 | if parent_node.data > current_node.data:
122 | parent_node.left = None
123 | return
124 | else:
125 | parent_node.right = None
126 | return
127 |
128 | #Node has both left and right child
129 | elif current_node.left != None and current_node.right != None:
130 | del_node = current_node.right
131 | del_node_parent = current_node.right
132 | while del_node.left != None: #Loop to reach the leftmost node of the right subtree of the current node
133 | del_node_parent = del_node
134 | del_node = del_node.left
135 | current_node.data = del_node.data #The value to be replaced is copied
136 | if del_node == del_node_parent: #If the node to be deleted is the exact right child of the current node
137 | current_node.right = del_node.right
138 | return
139 | if del_node.right == None: #If the leftmost node of the right subtree of the current node has no right subtree
140 | del_node_parent.left = None
141 | return
142 | else: #If it has a right subtree, we simply link it to the parent of the del_node
143 | del_node_parent.left = del_node.right
144 | return
145 | return "Not Found"
146 |
147 |
148 |
149 |
150 | my_bst = BST()
151 | my_bst.insert(5)
152 | my_bst.insert(3)
153 | my_bst.insert(7)
154 | my_bst.insert(1)
155 | my_bst.insert(13)
156 | my_bst.insert(65)
157 | my_bst.insert(0)
158 | my_bst.insert(10)
159 | '''
160 | 5
161 | 3 7
162 | 1 13
163 | 0 10 65
164 | '''
165 |
166 | (my_bst.remove(13))
167 | '''
168 | 5
169 | 3 7
170 | 1 65
171 | 0 10
172 | '''
173 | my_bst.remove(5)
174 | '''
175 | 7
176 | 3 65
177 | 1 10
178 | 0
179 | '''
180 | my_bst.remove(3)
181 | '''
182 | 7
183 | 1 65
184 | 0 10
185 | '''
186 | my_bst.remove(7)
187 | '''
188 | 10
189 | 1 65
190 | 0
191 | '''
192 | my_bst.remove(1)
193 | '''
194 | 10
195 | 0 65
196 |
197 | '''
198 | my_bst.remove(0)
199 | '''
200 | 10
201 | 65
202 |
203 | '''
204 | my_bst.remove(10)
205 | '''
206 | 65
207 |
208 |
209 | '''
210 | my_bst.remove(65)
211 | '''
212 |
213 |
214 | '''
215 |
216 | my_bst.insert(10)
217 | '''
218 | 10
219 |
220 |
221 | '''
222 | print(my_bst.root.data)
223 | #10
224 |
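225 | #A small, hedged helper (not part of the original course code) that can be used to verify the tree after insertions and removals:
226 | #an in-order traversal of a BST visits the nodes in sorted order.
227 | def in_order(node, result):
228 |     if node is None:
229 |         return result
230 |     in_order(node.left, result)
231 |     result.append(node.data)
232 |     in_order(node.right, result)
233 |     return result
234 | 
235 | print(in_order(my_bst.root, []))
236 | #[10]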
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Trees/Heap.py:
--------------------------------------------------------------------------------
1 | # A heap is a tree where every parent is greater than its children (for max-heaps) or smaller than its children (for min-heaps)
2 | #A max-heap (and min-heap) is typically represented as an array. The root element will be at Arr[0]. Arr[(i-1)/2] returns the parent node.
3 | #Arr[(2*i)+1] Returns the left child node. Arr[(2*i)+2] Returns the right child node. Operations on Max Heap (and min heap) include:
4 | #getMax(): It returns the root element of Max Heap. Time Complexity of this operation is O(1).
5 | #extractMax(): Removes the maximum element from MaxHeap. Time Complexity of this Operation is O(Log n)
6 | #as this operation needs to maintain the heap property (by calling heapify()) after removing root.
7 | #insert(): Inserting a new key takes O(Log n) time. We add a new key at the end of the tree.
8 | #If new key is smaller than its parent, then we don’t need to do anything. Otherwise, we need to traverse up to fix the violated heap property.
9 | #Here we are going to implement a max-heap
10 |
11 |
12 | import sys
13 | class MaxHeap:
14 |
15 | #The constructor initializes the heap with a maxsize entered by the user, size set to 0, all the elements of heap set to 0
16 |     #For the sake of easier calculation of parent and child nodes, we do the indexing from 1 instead of 0. So we fill the 0th index of the heap with sys.maxsize, a sentinel larger than any element, which also makes the swap-up loop in insert stop once it reaches the root
17 | def __init__(self, maxsize):
18 | self.maxsize = maxsize
19 | self.size = 0
20 | self.Heap = [0] * (self.maxsize + 1)
21 | self.Heap[0] = sys.maxsize
22 | self.FRONT = 1
23 |
24 | #Method to return the position of parent for the node currently at pos. Because of the 1-indexing the formula for the parents and children becomes simpler
25 | def parent(self, pos):
26 | return pos // 2
27 |
28 | #Method to return the position of the left child for the node currently at pos
29 | def left_child(self, pos):
30 | return 2 * pos
31 |
32 | #Method to return the position of the right child for the node currently at pos
33 | def right_child(self, pos):
34 | return (2 * pos) + 1
35 |
36 | #Method that returns true if the passed node is a leaf node.
37 | #All the nodes in the second half of the heap(when viewed as an array) are leaf nodes.
38 | #So we just check if the position entered is >= half of the size of the heap and <= size of the heap
39 | def is_leaf(self, pos):
40 | if pos >= (self.size // 2) and pos <= self.size:
41 | return True
42 | return False
43 |
44 | #Method to swap two nodes of the heap
45 | def swap(self, fpos, spos):
46 | self.Heap[fpos], self.Heap[spos] = self.Heap[spos], self.Heap[fpos]
47 |
48 | #Method to heapify the node at pos. This method will be called whenever the heap property is disturbed, to restore the heap property of the heap
49 | #We will check if the concerned node is a leaf node or not first. If it is, then no need to do anything.
50 | #If it is not and it is smaller than any of its children, then we will check which of its children is largest
51 | #and swap the node with its largest child. After doing this, the heap property may be disturbed. So we will call max_heapify again.
52 | def max_heapify(self, pos):
53 |
54 | #If the node is a non-leaf node and smaller than any of its child
55 | if not self.is_leaf(pos):
56 | if (self.Heap[pos] < self.Heap[self.left_child(pos)] or
57 | self.Heap[pos] < self.Heap[self.right_child(pos)]):
58 |
59 | #Swap with the left child and heapify the left child
60 | if self.Heap[self.left_child(pos)] > self.Heap[self.right_child(pos)]:
61 | self.swap(pos, self.left_child(pos))
62 | self.max_heapify(self.left_child(pos))
63 |
64 | #Swap with the right child and heapify the right child
65 | else:
66 | self.swap(pos, self.right_child(pos))
67 | self.max_heapify(self.right_child(pos))
68 |
69 | #Method to insert a node into the heap . First we will increase the size of the heap by 1.
70 | #Then we will insert the element to end of the heap. Now this new element may violate the heap property.
71 | #So we will keep checking its value with its parent's value. And keep swapping it with its parent as long as the parent is smaller than the element.
72 | def insert(self, element):
73 | if self.size >= self.maxsize:
74 | return
75 | self.size += 1
76 | self.Heap[self.size] = element
77 | current = self.size
78 | while self.Heap[current] > self.Heap[self.parent(current)]:
79 | self.swap(current, self.parent(current))
80 | current = self.parent(current)
81 |
82 |     #Method to print the contents of the heap in a detailed format (note: when the heap holds an even number of elements, the last parent's printed 'right child' is the slot just past the end of the heap, so a stale value can appear there, as in the second printout below)
83 | def print_heap(self):
84 | for i in range(1, (self.size // 2) + 1):
85 | print(" PARENT : " + str(self.Heap[i]) + " LEFT CHILD : " +
86 | str(self.Heap[2 * i]) + " RIGHT CHILD : " +
87 | str(self.Heap[2 * i + 1]))
88 |
89 | #Method to remove and return the maximum element from the heap . The maximum element will be at the root.
90 | #So we will copy the element at the end of the heap into the root node and delete the last node, which will leave the heap property disturbed
91 | #So we will finally call heapify on the root node to restore the heap property
92 | def extract_max(self):
93 | popped = self.Heap[self.FRONT]
94 | self.Heap[self.FRONT] = self.Heap[self.size]
95 | self.size -= 1
96 | self.max_heapify(self.FRONT)
97 | return popped
98 |
99 |
100 | if __name__ == "__main__":
101 | my_heap = MaxHeap(15)
102 | my_heap.insert(5)
103 | my_heap.insert(3)
104 | my_heap.insert(17)
105 | my_heap.insert(10)
106 | my_heap.insert(84)
107 | my_heap.insert(19)
108 | my_heap.insert(6)
109 | my_heap.insert(22)
110 | my_heap.insert(9)
111 | my_heap.print_heap()
112 | '''
113 | PARENT : 84 LEFT CHILD : 22 RIGHT CHILD : 19
114 | PARENT : 22 LEFT CHILD : 17 RIGHT CHILD : 10
115 | PARENT : 19 LEFT CHILD : 5 RIGHT CHILD : 6
116 | PARENT : 17 LEFT CHILD : 3 RIGHT CHILD : 9
117 | '''
118 |
119 | print("The Max val is " + str(my_heap.extract_max()))
120 | #The Max val is 84
121 |
122 | my_heap.print_heap()
123 | '''
124 | PARENT : 22 LEFT CHILD : 17 RIGHT CHILD : 19
125 | PARENT : 17 LEFT CHILD : 9 RIGHT CHILD : 10
126 | PARENT : 19 LEFT CHILD : 5 RIGHT CHILD : 6
127 | PARENT : 9 LEFT CHILD : 3 RIGHT CHILD : 9
128 | '''
129 |
130 | my_heap.insert(100)
131 | my_heap.print_heap()
132 | '''
133 | PARENT : 100 LEFT CHILD : 22 RIGHT CHILD : 19
134 | PARENT : 22 LEFT CHILD : 17 RIGHT CHILD : 10
135 | PARENT : 19 LEFT CHILD : 5 RIGHT CHILD : 6
136 | PARENT : 17 LEFT CHILD : 3 RIGHT CHILD : 9
137 | '''
138 |
139 | print(my_heap.Heap[0])
140 | #9223372036854775807
141 |
142 |
143 |
144 |
145 |
146 |
147 |
148 |
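149 |     #Side note (a hedged sketch, not part of the original course code): Python's built-in heapq module,
150 |     #used in the next file, provides a min-heap, so the max-heap behaviour above can be emulated
151 |     #by pushing negated values and negating them again on the way out.
152 |     import heapq
153 |     negated_heap = []
154 |     for value in [5, 3, 17, 10, 84]:
155 |         heapq.heappush(negated_heap, -value)   #store negatives so the smallest stored value is the largest original
156 |     print(-heapq.heappop(negated_heap))
157 |     #84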
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Trees/Priority_Queues_Using_Heap.py:
--------------------------------------------------------------------------------
1 | # Priority Queues, as the name suggests, are queues where the elements have different priorities
2 | # And it does not always follow the FIFO rule.
3 | # They can be implemented in a number of ways, out of which heap is the most commonly used one.
4 | #In Python, it is available using the “heapq” module. The property of this data structure in Python is that the smallest heap element is popped each time (it is a min-heap).
5 | #Whenever elements are pushed or popped, the heap structure is maintained. The heap[0] element returns the smallest element each time.
6 | #Operations we can perform on heap using heapq module are:
7 | #heapify(iterable) : This function converts the iterable into a heap, i.e., into heap order.
8 | #heappush(heap, ele) : This function inserts an element into the heap. The order is adjusted so that the heap structure is maintained.
9 | #heappop(heap) : This function removes and returns the smallest element from the heap. Again, the order is adjusted so that the heap structure is maintained.
10 | #heappushpop(heap, ele) : This function combines the functioning of both push and pop operations in one statement, increasing efficiency. Heap order is maintained after this operation.
11 | #heapreplace(heap, ele) : This function also inserts and pops an element in one statement, but here the minimum element is popped first and then the new element is pushed.
12 | #heapreplace() returns the smallest value originally in heap regardless of the pushed element as opposed to heappushpop().
13 |
14 |
15 | import heapq
16 |
17 | # initializing list
18 | li = [5, 7, 9, 1, 3]
19 |
20 | #using heapify to convert list into heap
21 | heapq.heapify(li)
22 |
23 | #printing created heap
24 | print("The created heap is : ", end="")
25 | print(list(li))
26 | #The created heap is : [1, 3, 9, 7, 5]
27 |
28 |
29 | #using heappush() to push elements into heap
30 | heapq.heappush(li, 4)
31 |
32 | #printing modified heap
33 | print("The modified heap after push is : ", end="")
34 | print(list(li))
35 | #The modified heap after push is : [1, 3, 4, 7, 5, 9]
36 |
37 |
38 | #using heappop() to pop smallest element
39 | print("The popped and smallest element is : ", end="")
40 | print(heapq.heappop(li))
41 | #The popped and smallest element is : 1
42 |
43 |
44 | #Creating two identical heaps to demonstrate the difference between heappushpop and heapreplace
45 | li1 = [5, 7, 9, 4, 3]
46 | li2 = [5, 7, 9, 4, 3]
47 | heapq.heapify(li1)
48 | heapq.heapify(li2)
49 |
50 | # using heappushpop() to push and pop items simultaneously
51 | print("The popped item using heappushpop() is : ", end="")
52 | print(heapq.heappushpop(li1, 2))
53 | #The popped item using heappushpop() is : 2
54 |
55 |
56 | # using heapreplace() to push and pop items simultaneously
57 | print("The popped item using heapreplace() is : ", end="")
58 | print(heapq.heapreplace(li2, 2))
59 | #The popped item using heapreplace() is : 3
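60 | 
61 | 
62 | #Since heapq compares tuples element by element, a simple priority queue of (priority, task) pairs can be built on top of it.
63 | #A minimal, hedged sketch (the task names below are made up purely for illustration):
64 | tasks = []
65 | heapq.heappush(tasks, (2, "write report"))
66 | heapq.heappush(tasks, (1, "fix production bug"))
67 | heapq.heappush(tasks, (3, "update docs"))
68 | print(heapq.heappop(tasks))        #the pair with the smallest priority comes out first
69 | #(1, 'fix production bug')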
60 |
--------------------------------------------------------------------------------
/venv/Scripts/Data Structures/Trees/Trie.py:
--------------------------------------------------------------------------------
1 | #Trie is an efficient information reTrieval data structure.
2 | #It is mostly used for searching through strings to see if certain desired words are present or not
3 | #Doing this task using a list, or a balanced BST, costs O(n*m) and O(m log n) respectively, where n is the number of stored words and m is the length of the string being searched
4 | #But using tries, it can be done in O(m) time.
5 | #Tries are like trees, with each node having multiple branches, generally equal to the number of letters in the alphabet.
6 | #Each node represents a single letter. Each node also consists of an end_of_word variable which tells us whether it marks the end of a word or not
7 | #Here we will implement two of its major operations, insert and search, both of which have O(m) time complexity
8 |
9 |
10 | #First we define a TrieNode class containing 26 children, each initialized to None, and an is_end_of_word flag to determine whether the node marks the end of a word or not
11 | class TrieNode():
12 | def __init__(self):
13 | self.children = [None]*26
14 | self.is_end_of_word = False
15 |
16 | #Next we define the Trie class itself containing a constructor which initializes the trie and the insert and search methods
17 | class Trie():
18 | def __init__(self):
19 | self.root = TrieNode()
20 |
21 | #We define a private helper function to calculate the numerical index of each character in the range of 0-25
22 | def _character_index(self, char):
23 | if char.isupper():
24 | return ord(char) - ord('A')
25 | else:
26 | return ord(char) - ord('a')
27 |
28 | #Now we come to the insert function.
29 | #We will create a pointer which will start at the root node. Then for every character in the word to be inserted,
30 | #We will check if the character already exists in the trie by matching it with the pointer's children.
31 | #If it does, we will simply update the pointer to that child of the current node and repeat the process for the next character of the word
32 | #Otherwise, we will initialize a new node at the index of the character that is to be inserted, which was equal to None until now,
33 | #And then we will update the pointer to point to this newly created node and repeat the process for the next character
34 | #Once we reach the end of the word, we will set the is_end_of_word to True for the node containing the last character.
35 | #The entire process will take O(m) time where m is the length of the string
36 | def insert(self, string):
37 | pointer = self.root
38 | for character in string:
39 | index = self._character_index(character)
40 | if not pointer.children[index]:
41 | pointer.children[index] = TrieNode()
42 | pointer = pointer.children[index]
43 | pointer.is_end_of_word = True
44 | return
45 |
46 | #Finally, for the search method, we will follow the exact same approach
47 | #Only this time, instead of creating a new TrieNode when we don't find a character in the Trie, we will simply return False
48 | #And if after the loop terminates and is_end_of_word equals True and the node isn't equal to None, it means we have found the word
49 | def search(self, string):
50 | pointer = self.root
51 | for character in string:
52 | index = self._character_index(character)
53 | if not pointer.children[index]:
54 | return False
55 | pointer = pointer.children[index]
56 | return pointer and pointer.is_end_of_word
57 |
58 |
59 | my_trie = Trie()
60 | my_trie.insert('Data')
61 | my_trie.insert("Structures")
62 | my_trie.insert("and")
63 | my_trie.insert("Algorithms")
64 | print(my_trie.search("and"))
65 | #True
66 | print(my_trie.search("Data"))
67 | #True
68 | print(my_trie.search("woohoo"))
69 | #False
70 | print(my_trie.search("STructures"))
71 | #True
72 |
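73 | #A quick, hedged check (not in the original course code) of the is_end_of_word flag described above:
74 | #"Struct" is a prefix of the stored word "Structures" but was never inserted as a word itself,
75 | #so the search walks down existing nodes yet ends on one whose is_end_of_word is still False.
76 | print(my_trie.search("Struct"))
77 | #False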
--------------------------------------------------------------------------------
/venv/Scripts/How to solve coding problems/Google Interview Question.py:
--------------------------------------------------------------------------------
1 | #This question is the same as the one in the Google interview question video.
2 | #We are given an array of integers and a particular sum.
3 | #We have to check if there are any two elements in the array that add up to the given sum.
4 | #For example, array = [1,2,4,5], sum = 6
5 | #This should return True as 2+4 = 6
6 |
7 |
8 | #Again, the very first solution that comes to mind is the naive, or brute force, approach. Let's implement that.
9 |
10 | array = [1,2,4,5]
11 | sum = 3
12 | def brute_force_pair_sum(array, sum):
13 | for i in range(len(array)):
14 | for j in range(i+1,len(array)):
15 | if array[i] + array[j] == sum:
16 | return "Yes"
17 | return "No"
18 |
19 | #print(brute_force_pair_sum(array, sum))
20 |
21 | #As we can clearly see, the complexity of this function is O(n^2), which will become very inefficient for large inputs
22 | #We need to optimize the function
23 |
24 |
25 | #One solution better than O(n^2) that comes to mind is :
26 | #We loop through the array once and for every element we encounter, we calculate its complement,i.e., the number which when added
27 | #to the element at hand, will give the sum.
28 | #Then we do a binary search for this complement in the remaining portion of the array
29 | #Since binary search is O(log n) and we loop through the array once, the overall complexity is O(n log n), which is better than O(n^2)
30 |
31 | def binary_search(array,left,right, ele):
32 | if right >= left:
33 | mid = (left+right)//2
34 | if (array[mid]) == ele:
35 | return True
36 | elif array[mid] > ele:
37 | return binary_search(array, left, mid - 1, ele)
38 | else:
39 | return binary_search(array, mid + 1, right, ele)
40 | else:
41 | return False
42 |
43 | def slightly_better_pair_sum(array, sum):
44 | for i in range(len(array)):
45 | comp = sum - array[i]
46 | if binary_search(array,i+1,len(array)-1,comp):
47 | return "Yes"
48 | return "No"
49 |
50 | print(slightly_better_pair_sum(array,sum))
51 |
52 | #We have arrived at an O(n log n) function from the naive O(n^2)
53 | #But this is still not great and it seems like there is some scope for improvement.
54 | #Let's try to make it O(n)
55 |
56 | #One solution is :
57 | #We take one element from each end and calculate their sum. If it equals the given sum, job done!
58 | #If not, and if the given sum is greater than what we get, it means we require the sum of the pair to be higher
59 | #To achieve that we move the left index by one and add the corresponding element to the element at the right index
60 | #And if the given sum is less than what we get, it means we require the sum of the pair to be smaller,
61 | #So we move the right index one step left and add the corresponding element to the element at the left index.
62 | #We keep on moving like this until we find a pair which adds up to the given sum or until the left and right indices cross
63 | #Since this procedure requires only one traversal of the array, the complexity is O(n)!
64 |
65 |
66 | def smart_pair_sum(array, sum):
67 | left = 0
68 | right = len(array)-1
69 | while right > left:
70 | if array[left] + array[right] == sum:
71 | return "Yes"
72 | elif array[left] + array[right] > sum:
73 | right -= 1
74 | else:
75 | left += 1
76 | return "No"
77 |
78 | print(smart_pair_sum(array,sum))
79 |
80 |
81 | #Although we have achieved an efficient time complexity of O(n), we've done so under the assumption that the array will be sorted
82 | #What if the array isn't sorted?
83 | #In that case the first solution that comes to mind is that we can sort the array in O(n log n) time
84 | #And then perform the smart_pair_sum operation on the sorted array to give us a final time complexity of O(n log n)
85 | #Python's built-in sort method uses Timsort which has an average case time complexity of O(n log n)
86 | #So we can simply use that, or use a different sorting algorithm such as quicksort or heapsort.
87 | #Here, we are going with the built-in method
88 |
89 |
90 | def sort_pair_sum(array, sum):
91 | array.sort()
92 | left = 0
93 | right = len(array)-1
94 | while right > left:
95 | if array[left] + array[right] == sum:
96 | return "Yes"
97 | elif array[left] + array[right] > sum:
98 | right -= 1
99 | else:
100 | left += 1
101 | return "No"
102 |
103 | print(sort_pair_sum(array,sum))
104 |
105 | #This solves our problem if the array is unsorted to begin with.
106 | #But we have lost out on time complexity as we have gone from O(n) to O(n log n)
107 | #One thing we can do to get back to O(n) complexity is :
108 | #We can create a dictionary as we go along the array and add each element we encounter to the dictionary
109 | #If the complement of the present element isn't already present in the dictionary
110 | #That is, we loop through the array once, and first check if the complement of the current element is present in the dictionary
111 | #If yes, then we return "Yes". If no, then we add the current element to the dictionary.
112 |
113 | def smartest_pair_sum(array, sum):
114 | dictionary = dict()
115 | for item in array:
116 | comp = sum - item
117 | if not comp in dictionary:
118 | dictionary[item] = True
119 | else:
120 | return "Yes"
121 | return "No"
122 |
123 | print(smartest_pair_sum(array, sum))
124 |
125 | #We have achieved an O(n) algorithm taking into consideration that the array can be unsorted!
126 |
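127 | #A quick, hedged check with an unsorted array (the values below are made up just for illustration):
128 | #smartest_pair_sum does not rely on the input being sorted, unlike the two-pointer solutions above.
129 | print(smartest_pair_sum([4, 1, 9, 2], 6))
130 | #Yes
131 | print(smartest_pair_sum([4, 1, 9, 2], 100))
132 | #No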
--------------------------------------------------------------------------------
/venv/Scripts/How to solve coding problems/Interview Question 1.py:
--------------------------------------------------------------------------------
1 | # We are given two arrays. We have to find if these two arrays contain any matching elements.
2 | # For example, array1 = ['a','b','c','x'] , array2 = ['x','y','z']
3 | # This should return True as element 'x' appears in both arrays.
4 |
5 | # Now, the very first solution that comes to mind is the naive, or brute force solution.
6 | # So let's code that down.
7 |
8 | array1 = ['a','b','c','x']
9 | array2 = ['x','y','z']
10 |
11 | def brute_force_matching_element(array1, array2):
12 | for i in range(len(array1)):
13 | for j in range(len(array2)):
14 | if array1[i] == array2[j]:
15 | return True
16 | return False
17 |
18 | #Complexity of brute_force_matching_element function = ?
19 | #At first glance it looks like O(n^2) but since array1 and array2 are separate inputs with potentially different sizes (say m and n),
20 | #The complexity actually is O(m*n), which is just as bad for large inputs.
21 |
22 | #This is a good first solution to come up with but we can easily recognize here that as the input size gets large,
23 | #This function becomes very inefficient.
24 | #So we need to come up with something better.
25 |
26 | #One solution can be creating a dictionary for one of the arrays and then looping over the other array
27 | #To check if any of its elements are present in the dictionary's keys.
28 | #Dictionaries are implemented as hash tables in Python, and the average time complexity for look-ups, or searching for an item, is O(1).
29 | #Therefore, the dictionary can be created in O(n) time and then we loop over one array of size say m,
30 | #Where we check for every element if it is present in the dictionary's keys, which is an O(1) operation.
31 | #Therefore, the overall complexity of the function becomes O(n + m*1) = O(m + n)
32 | #Which is significantly better than our previous function.
33 |
34 | def smarter_matching(array1, array2):
35 | dictionary = dict()
36 | for i in range(len(array1)):
37 | dictionary[array1[i]] = True
38 |
39 | for i in range(len(array2)):
40 | if array2[i] in dictionary:
41 | return True
42 |
43 | return False
44 |
45 | print(smarter_matching(array1, array2))
46 |
47 |
48 | #In this solution, we've made a number of assumptions, like there being no repetitions of any element in an array,
49 | #Or that our function will receive exactly 2 inputs which will be arrays.
50 | #Now that we have come up with a better solution in terms of time complexity, we can look to iron out the minor flaws.
51 | #Let's make our function such that it can receive arrays with repeated elements, and if it receives anything other than two arrays
52 | #It gives an error message instead of just crashing out.
53 |
54 | def smarter_matching2(array1, array2):
55 | try:
56 | dictionary = dict()
57 | for i in range(len(array1)):
58 | if array1[i] not in dictionary:
59 | dictionary[array1[i]] = True
60 |
61 | for i in range(len(array2)):
62 | if array2[i] in dictionary:
63 | return True
64 | return False
65 |
66 | except TypeError:
67 | return "Exactly two arrays required."
68 |
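69 | #A quick, hedged demonstration of the two improvements (the inputs below are made up just for illustration):
70 | print(smarter_matching2(['a','a','b','c'], ['z','b']))
71 | #True
72 | print(smarter_matching2(['a','b'], 5))
73 | #Exactly two arrays required.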
--------------------------------------------------------------------------------