├── .gitignore ├── 1 - Data Structures.md ├── 1.1 - Linked List.md ├── 1.2 - Stack.md ├── 1.3 - Queue.md ├── 1.4 - Binary Search Tree.md ├── 1.5 - Binary Heap.md ├── 1.6 - Trie.md ├── 2 - Algorithms.md ├── 2.1 - Search Algorithms.md ├── 2.2 - Sorting Algorithms.md ├── 2.3 - Tree & Graph Traversal Algorithms.md ├── 2.4 - Pathfinding Algorithms.md ├── 2.5 - Other Graph Algorithms.md ├── 3 - Object Oriented Programming.md ├── 4 - Design Patterns.md ├── 5 - OS Fundamentals.md ├── 6 - Concurrency in Java.md ├── 7 - Bit Manipulation.md ├── 8 - Miscellaneous.md ├── LICENSE ├── README.md ├── References.md └── assets ├── Bubble-Sort.gif ├── Dijkstra.gif ├── Heap-Sort.gif ├── Insertion-Sort.gif ├── Kruskal.gif ├── Merge-Sort.png ├── Prim.gif ├── Quicksort.gif └── Selection-Sort.gif /.gitignore: -------------------------------------------------------------------------------- 1 | TODO.md 2 | -------------------------------------------------------------------------------- /1 - Data Structures.md: -------------------------------------------------------------------------------- 1 | # Data Structures 2 | 3 | ## Primitive Data Types (Java) 4 | 5 | Data Type | Description | Default | Size 6 | :-------: | :---------------------: | :------: | :-----: 7 | `boolean` | `true` or `false` | `false` | 1 bit 8 | `char` | Unicode character | `\u0000` | 16 bits 9 | `byte` | twos complement integer | `0` | 8 bits 10 | `short` | twos complement integer | `0` | 16 bits 11 | `int` | twos complement integer | `0` | 32 bits 12 | `long` | twos complement integer | `0` | 64 bits 13 | `float` | IEEE 754 floating point | `0.0` | 32 bits 14 | `double` | IEEE 754 floating point | `0.0` | 64 bits 15 | 16 | ## Array 17 | 18 | - Stores data elements based on a sequential (usually zero-based) index. 19 | 20 | ### Important Points 21 | 22 | - Optimal for indexing. Bad for searching, inserting and deleting. 23 | - **Linear Arrays**, or one dimensional arrays, are the most basic. 24 | - **Two Dimensional Arrays** have `x` & `y` indices like a grid or nested arrays. 25 | - **Dynamic Arrays** are like one dimensional arrays but have reserved space for additional elements. 26 | - If a dynamic array is full, it copies its contents to a larger array (usually 2x in size). 27 | 28 | ### Time Complexity 29 | 30 | - **Access:** `O(1)` (unsorted & sorted) 31 | - **Search:** `O(n)` (unsorted), `O(log n)` (sorted) 32 | - **Insertion:** `O(1)` (without shifting / unsorted), `O(n)` (with shifting / sorted) 33 | - **Deletion:** `O(1)` (without shifting / unsorted), `O(n)` (with shifting / sorted) 34 | - **Insertion at End of Dynamic Array:** `O(1)` amortized 35 | 36 | ### Java 37 | 38 | - **`Array`:** [Oracle Docs](https://docs.oracle.com/javase/tutorial/java/nutsandbolts/arrays.html), [TutorialsPoint](https://www.tutorialspoint.com/java/java_arrays.htm). 39 | - **`ArrayList`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html), [TutorialsPoint](https://www.tutorialspoint.com/java/java_arraylist_class.htm). 40 | - **`Vector`:** Similar to `ArrayList` but synchronized. [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/Vector.html). 41 | - **`String`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html). 42 | - **`StringBuilder`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/lang/StringBuilder.html). 43 | 44 | ## Linked List 45 | 46 | - Stores data using **nodes** that have a datum and pointers to other nodes. 
47 | 48 | ### Important Points 49 | 50 | - Designed to optimize insertion and deletion during iteration. Slow at indexing and searching. 51 | - **Singly Linked Lists** have nodes that reference the next node. 52 | - **Doubly Linked Lists** have nodes that also reference the previous node. 53 | - **Circularly Linked Lists** are linked lists whose *tail* references the *head*. 54 | - **Stacks** are commonly implemented using linked lists. 55 | - Stacks are **last in, first out (LIFO)** data structures. `push()` & `pop()`. 56 | - Implemented with a linked list where the head is the only place for insertion and removal. 57 | - **Queues** are commonly implemented using linked lists. 58 | - Queues are **first in, first out (FIFO)** data structures. `enqueue()` & `dequeue()`. 59 | - Implemented with a linked list that only removes from the head and adds to the tail. 60 | - **Double-Ended Queues** or **Deques** are also commonly implemented using linked lists. 61 | - Deques allow elements to be added and removed from either the head or tail of the queue. 62 | 63 | ### Time Complexity 64 | 65 | - **Access:** `O(n)` (unsorted & sorted) 66 | - **Search:** `O(n)` (unsorted & sorted) 67 | - **Insertion:** `O(1)` (unsorted), `O(n)` (sorted) 68 | - **Deletion:** `O(1)` (unsorted & sorted) 69 | 70 | ### Java 71 | 72 | - [Java Implementation of Linked List](1.1%20-%20Linked%20List.md) 73 | - [Java Implementation of Stack](1.2%20-%20Stack.md) 74 | - [Java Implementation of Queue](1.3%20-%20Queue.md) 75 | - **`LinkedList`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/LinkedList.html), [TutorialsPoint](https://www.tutorialspoint.com/java/util/java_util_linkedlist.htm). 76 | - **`Iterator`:** [TutorialsPoint](https://www.tutorialspoint.com/java/java_using_iterator.htm). 77 | - **`Stack`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/Stack.html). 78 | - **`Queue`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/Queue.html). 79 | - **`Deque`:** [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/Deque.html). 80 | 81 | ## Hash Table 82 | 83 | - Stores data as key-value pairs in a direct access table. 84 | - Designed to optimize searching, insertion and deletion. 85 | 86 | ### Important Points 87 | 88 | - **Hash Functions** accept a key (from an arbitrarily sized dataset) and map it to an output i.e. hash code (from a fixed sized dataset). This hash code is then mapped to an index for storage. 89 | - This is known as **hashing**, whose motivation is to assign a unique index to every possible key. 90 | - This is done because the actual key space may be too large while only a fraction of those keys may appear. 91 | - A good **hash function** must: 92 | - return a value within the hash table range. 93 | - achieve an even distribution of indices from the keys that actually occur. 94 | - be easy and quick to compute. 95 | - **Hash Collisions** occur when a hash function returns the same hash code for two distinct keys or two different hash codes are mapped to the same index. 96 | - Most hash functions have this problem. 97 | - **Closed Address Hashing -** Chain together the keys that generate the same index in a linked list (`O(n)`) or binary search tree (`O(log n)`). 98 | - **Load Factor** is `n/h`, where `n` is the number of records and `h` is the number of hash cells. 99 | - **Open Address Hashing -** Store all the elements in the hash table and handle hash collisions by **rehashing** i.e. look for an alternative slot. 
100 | - **Linear Probing -** Insert the colliding record in the next slot recursively. However, this can result in long runs of occupied slots. 101 | - **Quadratic Probing/Double Hashing -** Use two hash functions to hash the key twice in case of a collision. 102 | - Using open address hashing requires a hash cell to be marked as *obsolete* when a record is deleted to avoid stopping search. 103 | - Hashes are important for associative arrays (i.e. key-value pairs) and database indexing. 104 | 105 | ### Time Complexity 106 | 107 | - **Search:** Average Case: `O(1)`, Worst Case: `O(n)` 108 | - **Insertion:** Average Case: `O(1)`, Worst Case: `O(n)` 109 | - **Deletion:** Average Case: `O(1)`, Worst Case: `O(n)` 110 | 111 | ### Java 112 | 113 | - **`HashMap`:** ``, no guarantee about iteration order, `O(1)`. [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html), [TutorialsPoint](https://www.tutorialspoint.com/java/java_hashmap_class.htm). 114 | - **`TreeMap`:** ``, iteration according to natural order of keys or externally supplied `Comparator`, `O(log n)`. [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/TreeMap.html). 115 | - **`LinkedHashMap`:** ``, iteration according to insertion order, `O(1)`. [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/LinkedHashMap.html). 116 | - **`HashSet`:** ``, no guarantee about iteration order, `O(1)`. [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/HashSet.html), [TutorialsPoint](https://www.tutorialspoint.com/java/java_hashset_class.htm). 117 | - **`TreeSet`:** ``, iteration according to natural order of keys or externally supplied `Comparator`, `O(log n)`. [Oracle Docs](https://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html). 118 | - **`LinkedHashSet`:** ``, iteration according to insertion order, `O(1)`. [Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/LinkedHashSet.html). 119 | 120 | ## Tree 121 | 122 | - A tree is a data structure composed of nodes. 123 | - Each tree has one *root node* that has zero or more child nodes and each of these child nodes has zero or more child nodes. 124 | - Nodes without children are called *leaves*. 125 | - A tree cannot contain cycles and nodes may not have links back to their parent nodes. 126 | 127 | ## Binary Tree 128 | 129 | - Is a tree data structure where every node has at most two children. 130 | - There is one left and one right child node. 131 | 132 | ### Important Points 133 | 134 | - A **complete binary tree** is one where every level of the tree is fully filled, except for perhaps the last level. To the extent that the last level is filled, it is filled from left to right. 135 | - A **full binary tree** is one where every node has either zero or two children. 136 | - A **perfect binary tree** is one that is both full and complete. Perfect binary trees must have exactly `2^k - 1` nodes, where `k` is the number of levels. 137 | - A **degenerate tree** is an unbalanced tree, which if entirely one-sided is essentially a linked list. 138 | - Used to implement [**binary search trees**](1.4%20-%20Binary%20Search%20Tree.md) & [**binary heaps**](1.5%20-%20Binary%20Heap.md). 139 | 140 | ## Trie (Prefix Tree) 141 | 142 | - Is a variant of an n-ary tree in which characters are stored at each node. 143 | - Each path down the tree represents a word. 144 | 145 | ### Important Points 146 | 147 | - `*` nodes (or `null` nodes) are used to indicate complete words. 
148 | - A node in a trie could have anywhere from 1 through `ALPHABET_SIZE + 1` children (or 0 through `ALPHABET_SIZE` children if a boolean value is used to indicate `*` nodes). 149 | - Tries are usually used to store words for quick prefix lookups. While a hash table can quickly look up whether a string is a valid word, it cannot quickly verify if a string is a prefix of any valid words. 150 | 151 | ### Time Complexity 152 | 153 | - **Validate Prefix:** `O(n)` (where `n` is length of prefix) 154 | 155 | ### Java 156 | 157 | - [Java Implementation of Trie](1.6%20-%20Trie.md) 158 | 159 | ## Segment Tree 160 | 161 | - A segment tree is a tree data structure for storing intervals or segments. 162 | - It allows querying which of the stored segments contain a given element. 163 | - It is usually a static structure i.e. it cannot be modified easily once it's built. 164 | 165 | ### Time Complexity 166 | 167 | - A segment tree for a set `I` of `n` intervals uses `O(n log n)` storage and can be built in `O(n log n)` time. 168 | - Segment trees support searching for all the intervals that contain a query point in `O(log n + k)`, `k` being the number of retrieved intervals or segments. 169 | 170 | ## Graph 171 | 172 | - A linked list based data structure where links (edges) can exist between any two nodes (vertices). 173 | - Edges may be directed/undirected and weighted/unweighted. The graph may be cyclic. 174 | - **LL-based Implementation:** 175 | - A 1-D array is used to represent the vertices. 176 | - A linked list or array is used for each vertex `v`, which contains the vertices that are adjacent to `v` (adjacency list). 177 | - Vertices adjacent to a vertex can be found quickly. 178 | - In an undirected graph, edges will be stored twice. 179 | - **Array-based Implementation:** 180 | - A 1-D array is used to represent the vertices. 181 | - A 2-D boolean array (adjacency matrix) is used to represent the edges. 182 | - Connectivity between two vertices can be verified quickly. 183 | - In an undirected graph, the adjacency matrix will be symmetrical. 184 | - An adjacency matrix is usually better for storing dense graphs while an adjacency list is better for storing sparse graphs. 185 | - Traversal in an adjacency list implementation is more efficient than an adjacency matrix implementation since it is not necessary to iterate through all the nodes to find a node's neighbors. 186 | 187 | ### Important Points 188 | 189 | - Two nodes are **adjacent** if they are connected by an edge. 190 | - The **degree** of a node `i` is the number of nodes adjacent to `i`. 191 | - A **path** is a sequence of edges that connect two nodes. 192 | - A **connected graph** is a graph where a path exists between any two nodes. 193 | - A **complete graph** is a graph where every vertex is directly connected to every other vertex. 194 | - A **clique** is a complete subgraph. 195 | - Traversal is similar to trees but vertices must be marked as visited (due to multiple possible paths) and all adjacent vertices must be visited (as opposed to just two in binary trees). 
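Below is a minimal sketch of the adjacency-list representation described above. It is illustrative only (class and method names such as `AdjacencyListGraph`, `addEdge` and `isAdjacent` are not from these notes), assuming vertices are identified by integer indices.

```java
import java.util.ArrayList;
import java.util.List;

// Unweighted graph stored as an adjacency list: one list of neighbors per vertex.
class AdjacencyListGraph {
    private final List<List<Integer>> adj; // adj.get(v) holds the vertices adjacent to v
    private final boolean directed;

    AdjacencyListGraph(int vertexCount, boolean directed) {
        this.directed = directed;
        adj = new ArrayList<>(vertexCount);
        for (int v = 0; v < vertexCount; v++) adj.add(new ArrayList<>());
    }

    // O(1): append to the source vertex's list (and to the target's list if undirected,
    // which is why undirected edges end up stored twice).
    void addEdge(int u, int v) {
        adj.get(u).add(v);
        if (!directed) adj.get(v).add(u);
    }

    // O(degree(u)): scan u's neighbor list.
    boolean isAdjacent(int u, int v) {
        return adj.get(u).contains(v);
    }

    // O(1): the neighbors of u are immediately available, which is what makes
    // traversal cheaper than with an adjacency matrix.
    List<Integer> neighbors(int u) {
        return adj.get(u);
    }
}
```

An adjacency-matrix version would replace `adj` with a 2-D `boolean[][]`, giving `O(1)` adjacency checks at the cost of `O(|V|^2)` storage, as summarized below.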
196 | 197 | ### Time Complexity 198 | 199 | - **Add Vertex:** Adjacency List: `O(1)`, Adjacency Matrix: `O(|V|^2)` 200 | - **Add Edge:** Adjacency List: `O(1)`, Adjacency Matrix: `O(1)` 201 | - **Remove Vertex:** Adjacency List: `O(|V| + |E|)`, Adjacency Matrix: `O(|V|^2)` 202 | - **Remove Edge:** Adjacency List: `O(|E|)`, Adjacency Matrix: `O(1)` 203 | - **Query for Adjacency:** Adjacency List: `O(|V|)`, Adjacency Matrix: `O(1)` 204 | - **Storage Size:** Adjacency List: `O(|V| + |E|)`, Adjacency Matrix: `O(|V|^2)` 205 | -------------------------------------------------------------------------------- /1.1 - Linked List.md: -------------------------------------------------------------------------------- 1 | # Linked List 2 | 3 | ```java 4 | class LinkedList { 5 | class Node { 6 | Node next; 7 | int val; 8 | 9 | Node(int val) { 10 | this.val = val; 11 | } 12 | } 13 | 14 | Node head; 15 | 16 | void printList() { 17 | Node n = head; 18 | while (n != null) { 19 | System.out.println(n.val); 20 | n = n.next; 21 | } 22 | } 23 | 24 | void append(int val) { 25 | if (head == null) { 26 | head = new Node(val); 27 | } else { 28 | Node last = head; 29 | while (last.next != null) last = last.next; 30 | last.next = new Node(val); 31 | } 32 | } 33 | 34 | void delete(int val) { 35 | if (head == null) return; 36 | 37 | if (head.val == val) { 38 | head = head.next; 39 | return; 40 | } 41 | 42 | Node node = head; 43 | while (node.next != null) { 44 | if (node.next.val == val) { 45 | node.next = node.next.next; 46 | return; 47 | } 48 | node = node.next; 49 | } 50 | } 51 | 52 | Node search(int val) { 53 | Node node = head; 54 | while (node != null) { 55 | if (node.val == val) return node; 56 | node = node.next; 57 | } 58 | return null; 59 | } 60 | 61 | void reverse() { 62 | Node current = head; 63 | Node prev = null; 64 | while (current != null) { 65 | Node next = current.next; 66 | current.next = prev; 67 | prev = current; 68 | current = next; 69 | } 70 | head = prev; 71 | } 72 | 73 | void setHead(int val) { 74 | Node node = new Node(val); 75 | node.next = head; 76 | head = node; 77 | } 78 | } 79 | ``` 80 | -------------------------------------------------------------------------------- /1.2 - Stack.md: -------------------------------------------------------------------------------- 1 | # Stack 2 | 3 | ```java 4 | class Stack { 5 | class StackNode { 6 | int val; 7 | StackNode next; 8 | 9 | public StackNode(int val) { 10 | this.val = val; 11 | } 12 | } 13 | 14 | StackNode top; 15 | 16 | int pop() { 17 | if (top == null) throw new EmptyStackException(); 18 | int val = top.val; 19 | top = top.next; 20 | return val; 21 | } 22 | 23 | void push(int val) { 24 | StackNode node = new StackNode(val); 25 | node.next = top; 26 | top = node; 27 | } 28 | 29 | int peek() { 30 | if (top == null) throw new EmptyStackException(); 31 | return top.val; 32 | } 33 | 34 | boolean isEmpty() { 35 | return top == null; 36 | } 37 | } 38 | ``` 39 | -------------------------------------------------------------------------------- /1.3 - Queue.md: -------------------------------------------------------------------------------- 1 | # Queue 2 | 3 | ```java 4 | class Queue { 5 | class QueueNode { 6 | int val; 7 | QueueNode next; 8 | 9 | public QueueNode(int val) { 10 | this.val = val; 11 | } 12 | } 13 | 14 | QueueNode first, last; 15 | 16 | void add(int val) { 17 | QueueNode node = new QueueNode(val); 18 | 19 | if (last != null) last.next = node; 20 | last = node; 21 | 22 | if (first == null) first = last; 23 | } 24 | 25 | int remove() { 26 | 
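        // Dequeue from the head ('first'), since the queue is FIFO.
        // EmptyQueueException is assumed to be a custom exception defined elsewhere;
        // the JDK's own Queue implementations throw NoSuchElementException instead.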
if (first == null) throw new EmptyQueueException(); 27 | 28 | int val = first.val; 29 | first = first.next; 30 | if (first == null) last = null; 31 | 32 | return val; 33 | } 34 | 35 | int peek() { 36 | if (first == null) throw new EmptyQueueException(); 37 | return first.val; 38 | } 39 | 40 | boolean isEmpty() { 41 | return first == null; 42 | } 43 | } 44 | ``` 45 | -------------------------------------------------------------------------------- /1.4 - Binary Search Tree.md: -------------------------------------------------------------------------------- 1 | # Binary Search Tree 2 | 3 | - BST is a binary tree that uses comparable keys to assign which direction a child is. 4 | - The left child is always smaller than its parent node. 5 | - The right child is always greater than its parent node. 6 | - There can be no duplicate nodes. 7 | - Designed to optimize searching and sorting. 8 | - In-order traversal of a BST produces a sorted list. 9 | 10 | #### Time Complexity 11 | 12 | - **Search:** Average Case: `O(log n)`, Worst Case: `O(n)` 13 | 14 | ### Implementation in Java 15 | 16 | ```java 17 | class BinarySearchTree { 18 | class Node { 19 | int val; 20 | Node right, left; 21 | 22 | Node(int val) { 23 | this.val = val; 24 | } 25 | } 26 | 27 | Node root; 28 | } 29 | ``` 30 | 31 | ```java 32 | Node search(Node node, int key) { 33 | // Return null if not found or node if found. 34 | if (node == null || node.val == key) 35 | return node; 36 | 37 | // Search left subtree if key < node.val. 38 | if (key < node.val) 39 | return search(node.left, key); 40 | 41 | // Search right subtree if key > node.val. 42 | return search(node.right, key); 43 | } 44 | ``` 45 | 46 | ```java 47 | void insert(int key) { 48 | root = insertRec(root, key); 49 | } 50 | 51 | Node insertRec(Node node, int key) { 52 | if (node == null) { 53 | node = new Node(key); 54 | return node; 55 | } 56 | 57 | if (key < node.key) 58 | node.left = insertRec(node.left, key); 59 | else if (key > node.key) 60 | node.right = insertRec(node.right, key); 61 | 62 | return node; 63 | } 64 | ``` 65 | 66 | ```java 67 | void delete(int key) { 68 | root = deleteRec(root, key); 69 | } 70 | 71 | Node deleteRec(Node node, int key) { 72 | // If tree is empty or node doesn't exist, return null. 73 | if (node == null) return null; 74 | 75 | // Else, recursively traverse the tree. 76 | if (key < node.key) { 77 | node.left = deleteRec(node.left, key); 78 | } else if (key > node.key) { 79 | node.right = deleteRec(node.right, key); 80 | } else { 81 | // If node has only one child or no children. 82 | if (node.left == null) 83 | return node.right; 84 | else if (node.right == null) 85 | return node.left; 86 | 87 | // If node has two children, replace current node's value with the 88 | // minimum value in the node's right subtree (in-order successor). 89 | node.key = minValue(node.right); 90 | 91 | // Delete the in-order successor. 92 | node.right = deleteRec(node.right, node.key); 93 | } 94 | 95 | return node; 96 | } 97 | 98 | int minValue(Node node) { 99 | while (node.left != null) node = node.left; 100 | return node.key; 101 | } 102 | ``` 103 | 104 | ## Self-Balancing Binary Search Tree 105 | 106 | - A self-balancing binary search tree is a BST that automatically keeps its height at a minimum regardless of arbitrary insertions and deletions. This allows for worst-case lookup performance of `O(log n)` instead of `O(n)`. 107 | - Self-balancing binary search trees are useful to construct and maintain ordered lists, such as priority queues. 
108 | - They are also useful to store key-value pairs with an ordering based on the key alone. As opposed to hash tables, they provide enumeration of the items in key order and better worst-case performance. However, hash tables have better average case performance (`O(1)`). 109 | 110 | ### AVL Tree 111 | 112 | - An AVL tree stores in each node the height of the subtrees rooted at this node. Then, for any node, it is possible to check if it is height balanced i.e. the height of the left subtree and right subtree should differ by no more than one. 113 | 114 | ``` 115 | balance(n) = n.left.height - n.right.height 116 | ``` 117 | 118 | ``` 119 | -1 <= balance(n) <= 1 120 | ``` 121 | 122 | - During insertion, the balance of some nodes may change to -2 or 2. Therefore, when the recursive stack is unwinded after insertion, the balance is checked and fixed at each node going upwards from the inserted node. 123 | - Imbalances are fixed by performing left or right rotations at each node. These operations do not violate the BST rule. 124 | 125 | ``` 126 | y x 127 | / \ Right Rotation / \ 128 | x T3 – – – – – – – > T1 y 129 | / \ < - - - - - - - / \ 130 | T1 T2 Left Rotation T2 T3 131 | 132 | keys(T1) < key(x) < keys(T2) < key(y) < keys(T3) 133 | ``` 134 | 135 | #### Time Complexity 136 | 137 | - **Search:** `O(log n)` 138 | - **Insertion:** `O(log n)` 139 | - **Deletion:** `O(log n)` 140 | 141 | ### Red-Black Tree 142 | 143 | - Red-black trees do not ensure quite as strict balancing as AVL trees but it is still good enough for `O(log n)` insertions, deletions & retrievals. 144 | - They require less memory and can rebalance faster (thus, quicker insertions and deletions). 145 | - `TreeMap` and `TreeSet` in Java are implemented using red-black trees. 146 | 147 | #### Time Complexity 148 | 149 | - **Search:** `O(log n)` 150 | - **Insertion:** `O(log n)` 151 | - **Deletion:** `O(log n)` 152 | -------------------------------------------------------------------------------- /1.5 - Binary Heap.md: -------------------------------------------------------------------------------- 1 | # Binary Heap 2 | 3 | - Is a complete binary tree that satisfies the **heap property**. 4 | - **Heap Property:** If `A` is a parent node of `B`, then the value of node `A` is ordered with respect to the value of node `B` with the same ordering applying across the heap. 5 | - A heap can be further classified as either a **max heap** or a **min heap**. 6 | - In a **min heap**, the keys of parent nodes are less than or equal to those of the children and the lowest key is in the root node. 7 | - In a **max heap**, the keys of parent nodes are always greater than or equal to those of the children and the highest key is in the root node. 8 | - Heaps are crucial in several efficient graph algorithms such as Dijkstra's algorithm and sorting algorithms such as Heap Sort. 9 | - The `PriorityQueue` class in Java is based on heaps and can be used as a heap ([Oracle Docs](https://docs.oracle.com/javase/8/docs/api/java/util/PriorityQueue.html)). 10 | - A heap can be implemented using an array, where the left child of `node[i]` is at `node[2 * i + 1]` and the right child is at `node[2 * i + 2]`. 
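As noted above, `PriorityQueue` can serve as a ready-made heap. A small usage sketch (illustrative only, independent of the array-based implementation that follows):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

class PriorityQueueDemo {
    public static void main(String[] args) {
        // Min heap: natural ordering, so the smallest key sits at the head.
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        minHeap.add(5);
        minHeap.add(1);
        minHeap.add(3);
        System.out.println(minHeap.peek()); // 1
        System.out.println(minHeap.poll()); // 1 (removes the minimum and re-heapifies)

        // Max heap: reverse the ordering via a Comparator.
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());
        maxHeap.add(5);
        maxHeap.add(1);
        maxHeap.add(3);
        System.out.println(maxHeap.peek()); // 5
    }
}
```

For `PriorityQueue`, `add`/`offer` and `poll` are `O(log n)` while `peek` is `O(1)`, in line with the complexities listed below.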
11 | 12 | #### Time Complexity 13 | 14 | - **Insertion:** `O(log n)` 15 | - **Extract Min/Max Element:** `O(1)` 16 | - **Fix Heap After Retrieval:** `O(log n)` 17 | 18 | ### Implementation in Java 19 | 20 | ```java 21 | class BinaryMinHeap { 22 | private int[] data; 23 | private int heapSize; 24 | 25 | BinaryMinHeap(int size) { 26 | this.data = new int[size]; 27 | this.heapSize = 0; 28 | } 29 | 30 | int peekMinimum() { 31 | if (!isEmpty()) return data[0]; 32 | throw new HeapException("Heap is empty!"); 33 | } 34 | 35 | boolean isEmpty() { 36 | return this.heapSize == 0; 37 | } 38 | 39 | private int getLeftChildIndex(int nodeIndex) { 40 | return 2 * nodeIndex + 1; 41 | } 42 | 43 | private int getRightChildIndex(int nodeIndex) { 44 | return 2 * nodeIndex + 2; 45 | } 46 | 47 | private int getParentIndex(int nodeIndex) { 48 | return (nodeIndex - 1) / 2; 49 | } 50 | 51 | private void swap(int i, int j) { 52 | int tmp = data[i]; 53 | data[i] = data[j]; 54 | data[j] = tmp; 55 | } 56 | 57 | public class HeapException extends RuntimeException { 58 | public HeapException(String message) { 59 | super(message); 60 | } 61 | } 62 | } 63 | ``` 64 | 65 | ```java 66 | void insert(int key) { 67 | if (heapSize == data.length) { 68 | throw new HeapException("Heap is full!"); 69 | } else { 70 | heapSize++; 71 | heap[heapSize - 1] = key; 72 | heapifyUp(heapSize - 1); 73 | } 74 | } 75 | 76 | private void heapifyUp(int index) { 77 | if (index == 0) return; 78 | 79 | int parentIndex = getParentIndex(index); 80 | if (arr[parentIndex] > data[index]) { 81 | swap(index, parentIndex); 82 | heapifyUp(parentIndex); 83 | } 84 | } 85 | ``` 86 | 87 | ```java 88 | int pollMinimum() { 89 | if (isEmpty()) { 90 | throw new HeapException("Heap is empty!"); 91 | } else { 92 | int min = data[0]; 93 | data[0] = data[heapSize - 1]; 94 | heapSize--; 95 | heapifyDown(0); 96 | return min; 97 | } 98 | } 99 | 100 | private void heapifyDown(int index) { 101 | int leftChild = getLeftChildIndex(index); 102 | int rightChild = getRightChildIndex(index); 103 | 104 | if (rightChild >= heapSize) { // no right child 105 | if (leftChild >= heapSize) { // no left child 106 | return; 107 | } else { 108 | if (data[leftChild] < data[index]) { 109 | swap(leftChild, index); 110 | heapifyDown(leftChild); 111 | } 112 | } 113 | } else { 114 | int minIndex = data[leftChild] < data[rightChild] ? leftChild : rightChild; 115 | if (data[minIndex] < data[index]) { 116 | swap(minIndex, index); 117 | heapifyDown(minIndex); 118 | } 119 | } 120 | } 121 | ``` 122 | -------------------------------------------------------------------------------- /1.6 - Trie.md: -------------------------------------------------------------------------------- 1 | # Trie 2 | 3 | ```java 4 | class TrieNode { 5 | public TrieNode[] children = new TrieNode[26]; 6 | public boolean isEnd; 7 | 8 | public TrieNode() {} 9 | } 10 | 11 | public class Trie { 12 | private TrieNode root; 13 | 14 | public Trie() { 15 | root = new TrieNode(); 16 | } 17 | 18 | // Inserts a word into the trie. 19 | public void insert(String word) { 20 | TrieNode node = root; 21 | 22 | for (int i = 0; i < word.length(); i++) { 23 | char c = word.charAt(i); 24 | if (node.children[c - 'a'] == null) { 25 | node.children[c - 'a'] = new TrieNode(); 26 | } 27 | node = node.children[c - 'a']; 28 | } 29 | 30 | node.isEnd = true; 31 | } 32 | 33 | // Returns true if the word is in the trie. 
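    // Note: the trie indexes children with (c - 'a') throughout, so words and
    // prefixes are assumed to contain only the lowercase letters 'a' through 'z'.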
34 | public boolean search(String word) { 35 | TrieNode node = searchNode(word); 36 | 37 | if (node == null) { 38 | return false; 39 | } else { 40 | return node.isEnd; 41 | } 42 | } 43 | 44 | // Returns true if there is any word in the trie that starts with the given prefix. 45 | public boolean startsWith(String prefix) { 46 | TrieNode node = searchNode(prefix); 47 | 48 | if (node == null) { 49 | return false; 50 | } else { 51 | return true; 52 | } 53 | } 54 | 55 | // Returns the node that corresponds to the last character in 's', else null. 56 | public TrieNode searchNode(String s) { 57 | TrieNode node = root; 58 | 59 | for (int i = 0; i < s.length(); i++) { 60 | char c = s.charAt(i); 61 | if (node.children[c - 'a'] != null) { 62 | node = node.children[c - 'a']; 63 | } else { 64 | return null; 65 | } 66 | } 67 | 68 | return node; 69 | } 70 | } 71 | ``` 72 | -------------------------------------------------------------------------------- /2 - Algorithms.md: -------------------------------------------------------------------------------- 1 | # Algorithms 2 | 3 | ## Types of Algorithms 4 | 5 | ### Iterative 6 | 7 | - An algorithm that is called repeatedly but for a finite number of times, each time being a single iteration. 8 | - Often used to move incrementally through a dataset. 9 | 10 | ### Recursive 11 | 12 | - An algorithm that calls itself in its definition. 13 | - The **recursive case** in a conditional statement is used to trigger the recursion. 14 | - The **base case** in a conditional statement is used to break the recursion. 15 | - Note that recursive algorithms can be very space inefficient as each recursive call adds a new layer to the stack. 16 | 17 | ### Greedy 18 | 19 | - An algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. 20 | - The general five components, taken from [Wikipedia](http://en.wikipedia.org/wiki/Greedy_algorithm#Specifics): 21 | - A **candidate set**, from which a solution is created. 22 | - A **selection function**, which chooses the best candidate to be added to the solution. 23 | - A **feasibility function**, which is used to determine if a candidate can be used to contribute to a solution. 24 | - An **objective function**, which assigns a value to a solution, or a partial solution. 25 | - A **solution function**, which will indicate when we have discovered a complete solution. 26 | 27 | ## Dynamic Programming 28 | 29 | Dynamic programming is a general method for solving a problem with *optimal substructure* by breaking it down into *overlapping subproblems*. 30 | 31 | - A problem exhibits *optimal substructure* if an optimal solution to the problem contains within it optimal solutions to subproblems. Additionally, the solution to one subproblem should not affect the solution to another subproblem of the same problem. 32 | - A problem has *overlapping subproblems* when a recursive solution revisits the same problem repeatedly. 33 | 34 | **Top-Down:** Memoize (store) the solutions to subproblems and solve problem recursively. 35 | 36 | ```java 37 | int fibonacci(int n) { 38 | return fibonacci(n, new int[n + 1]); 39 | } 40 | 41 | int fibonacci(int i, int[] memo) { 42 | if (i == 0 || i == 1) return i; 43 | 44 | if (memo[i] == 0) { 45 | memo[i] = fibonacci(i - 1, memo) + fibonacci(i - 2, memo); 46 | } 47 | 48 | return memo[i]; 49 | } 50 | ``` 51 | 52 | **Bottom-Up:** Build up subproblems from base case up and avoid recursive overhead. 
Order subproblems by topologically sorting the DAG of dependencies. 53 | 54 | ```java 55 | int fibonacci(int n) { 56 | if (n == 0 || n == 1) return n; 57 | 58 | int[] memo = new int[n]; 59 | memo[0] = 0; 60 | memo[1] = 1; 61 | for (int i = 2; i < n; i++) { 62 | memo[i] = memo[i - 1] + memo[i - 2]; 63 | } 64 | 65 | return memo[n - 1] + memo[n - 2]; 66 | } 67 | ``` 68 | 69 | ## Important Algorithms 70 | 71 | - [Search Algorithms](2.1%20-%20Search%20Algorithms.md) 72 | - Sequential Search 73 | - Binary Search 74 | - [Sorting Algorithms](2.2%20-%20Sorting%20Algorithms.md) 75 | - Selection Sort 76 | - Bubble Sort 77 | - Insertion Sort 78 | - Merge Sort 79 | - Quicksort 80 | - Heap Sort 81 | - Bucket Sort 82 | - Radix Sort 83 | - [Tree & Graph Traversal Algorithms](2.3%20-%20Tree%20&%20Graph%20Traversal%20Algorithms.md) 84 | - Breadth-First Traversal 85 | - Depth-First Traversal (Pre-Order, In-Order, Post-Order) 86 | - [Pathfinding Algorithms](2.4%20-%20Pathfinding%20Algorithms.md) 87 | - Dijkstra's Algorithm 88 | - A* Search Algorithm 89 | - Bellman-Ford Algorithm 90 | - Floyd-Warshall Algorithm 91 | - [Other Graph Algorithms](2.5%20-%20Other%20Graph%20Algorithms.md) 92 | - Prim's Algorithm 93 | - Kruskal's Algorithm 94 | - Topological Sorting 95 | -------------------------------------------------------------------------------- /2.1 - Search Algorithms.md: -------------------------------------------------------------------------------- 1 | # Search Algorithms 2 | 3 | ## Sequential Search 4 | 5 | ### Algorithm 6 | 7 | ```java 8 | int sequentialSearch(int[] arr, int k) { 9 | for (int i = 0; i < arr.length; i++) { 10 | if (k == arr[i]) return i; 11 | } 12 | return -1; 13 | } 14 | ``` 15 | 16 | **Time Complexity:** `O(n)` 17 | 18 | ## Binary Search 19 | 20 | ### Important Points 21 | 22 | - Requires the search array to be ordered. 23 | - Uses the *divide & conquer* approach. 24 | 25 | ### Algorithm 26 | 27 | ```java 28 | int binarySearch(int[] arr, int k) { 29 | int low = 0; 30 | int high = arr.length - 1; 31 | int mid; 32 | 33 | while (low <= high) { 34 | mid = low + ((high - low) / 2); 35 | if (a[mid] < k) { 36 | low = mid + 1; 37 | } else if (a[mid] > k) { 38 | high = mid - 1; 39 | } else { 40 | return mid; 41 | } 42 | } 43 | 44 | return -1; 45 | } 46 | 47 | int binarySearchRecursive(int[] arr, int k, int low, int high) { 48 | if (low > high) return -1; 49 | 50 | int mid = low + ((high - low) / 2); 51 | if (arr[mid] < k) { 52 | return binarySearchRecursive(arr, k, mid + 1, high); 53 | } else if (arr[mid] > k) { 54 | return binarySearchRecursive(arr, k, low, mid - 1); 55 | } else { 56 | return mid; 57 | } 58 | } 59 | ``` 60 | 61 | **Time Complexity:** `O(log n)` 62 | -------------------------------------------------------------------------------- /2.2 - Sorting Algorithms.md: -------------------------------------------------------------------------------- 1 | # Sorting Algorithms 2 | 3 | ## Selection Sort ([GIF](assets/Selection-Sort.gif)) 4 | 5 | - Selection sort finds the minimum element in the unsorted part of the array and swaps it with the first element in the unsorted part of the array. 6 | - The sorted part of the array grows from left to right with every iteration. 7 | - After `i` iterations, the first `i` elements of the array are sorted. 8 | - Sorts in-place. 
Not stable.[1](#footnote1) 9 | 10 | ### Algorithm 11 | 12 | ```java 13 | void selectionSort(int[] arr) { 14 | for (int i = 0; i < arr.length; i++) { 15 | int min = i; 16 | for (int j = i; j < arr.length; j++) { 17 | if (arr[j] <= arr[min]) min = j; 18 | } 19 | 20 | if (min != i) { 21 | swap(arr, i, min); 22 | } 23 | } 24 | } 25 | ``` 26 | 27 | ### Time Complexity 28 | 29 | - **Best Case:** `O(n^2)` 30 | - **Average Case:** `O(n^2)` 31 | - **Worst Case:** `O(n^2)` 32 | 33 | ## Bubble Sort ([GIF](assets/Bubble-Sort.gif)) 34 | 35 | - In every iteration, bubble sort compares every couplet, moving the larger element to the right as it iterates through the array. 36 | - The sorted part of the array grows from right to left with every iteration. 37 | - After `i` iterations, the last `i` elements of the array are the largest and sorted. 38 | - Sorts in-place. Stable.[1](#footnote1) 39 | 40 | ### Algorithm 41 | 42 | ```java 43 | void bubbleSort(int[] arr) { 44 | boolean swapped = true; 45 | int j = 0; 46 | 47 | while (swapped) { 48 | swapped = false; 49 | for (int i = 1; i < arr.length - j; i++) { 50 | if (arr[i - 1] > arr[i]) { 51 | swap(arr, i - 1, i); 52 | swapped = true; 53 | } 54 | } 55 | j++; 56 | } 57 | } 58 | ``` 59 | 60 | ### Time Complexity 61 | 62 | - **Best Case:** `O(n)` 63 | - **Average Case:** `O(n^2)` 64 | - **Worst Case:** `O(n^2)` 65 | 66 | ## Insertion Sort ([GIF](assets/Insertion-Sort.gif)) 67 | 68 | - In every iteration, insertion sort takes the first element in the unsorted part of the array, finds the location it belongs to within the sorted part of the array and inserts it there. 69 | - The sorted part of the array grows from left to right with every iteration. 70 | - After `i` iterations, the first `i` elements of the array are sorted. 71 | - Sorts in-place. Stable.[1](#footnote1) 72 | 73 | ### Algorithm 74 | 75 | ```java 76 | void insertionSort(int[] arr) { 77 | for (int i = 1; i < arr.length; i++) { 78 | for (int j = i; j > 0; j--) { 79 | if (arr[j - 1] > arr[j]) { 80 | swap(arr, j - 1, j); 81 | } else { 82 | break; 83 | } 84 | } 85 | } 86 | } 87 | ``` 88 | 89 | ### Time Complexity 90 | 91 | - **Best Case:** `O(n)` 92 | - **Average Case:** `O(n^2)` 93 | - **Worst Case:** `O(n^2)` 94 | 95 | ## Merge Sort ([GIF](assets/Merge-Sort.png)) 96 | 97 | - Uses the *divide & conquer* approach. 98 | - Merge sort divides the original array into smaller arrays recursively until the resulting subarrays have one element each. 99 | - Then, it starts merging the divided subarrays by comparing each element and moving the smaller one to the left of the merged array. 100 | - This is done recursively till all the subarrays are merged into one sorted array. 101 | - Requires `O(n)` space. Stable.[1](#footnote1) 102 | 103 | ### Algorithm 104 | 105 | ```java 106 | void mergesort(int[] arr) { 107 | int[] helper = new int[arr.length]; 108 | mergesort(arr, helper, 0, arr.length - 1); 109 | } 110 | 111 | void mergesort(int[] arr, int[] helper, int low, int high) { 112 | // Check if low is smaller than high, if not then the array is sorted. 113 | if (low < high) { 114 | int mid = low + ((high - low) / 2); // Get index of middle element 115 | mergesort(arr, helper, low, mid); // Sort left side of the array 116 | mergesort(arr, helper, mid + 1, high); // Sort right side of the array 117 | merge(arr, helper, low, mid, high); // Combine both sides 118 | } 119 | } 120 | 121 | void merge(int[] arr, int[] helper, int low, int mid, int high) { 122 | // Copy both halves into a helper array. 
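    // Only the range arr[low..high] is copied; the merge step below reads from
    // helper and writes the merged result back into arr.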
123 | for (int i = low; i <= high; i++) { 124 | helper[i] = arr[i]; 125 | } 126 | 127 | int helperLeft = low; 128 | int helperRight = mid + 1; 129 | int current = low; 130 | 131 | // Iterate through helper array. Compare the left and right half, copying back 132 | // the smaller element from the two halves into the original array. 133 | while (helperLeft <= mid && helperRight <= high) { 134 | if (helper[helperLeft] <= helper[helperRight]) { 135 | arr[current] = helper[helperLeft]; 136 | helperLeft++; 137 | } else { 138 | arr[current] = helper[helperRight]; 139 | helperRight++; 140 | } 141 | current++; 142 | } 143 | 144 | // Copy the rest of the left half of the array into the target array. Right half 145 | // is already there. 146 | while (helperLeft <= mid) { 147 | arr[current] = helper[helperLeft]; 148 | current++; 149 | helperLeft++; 150 | } 151 | } 152 | ``` 153 | 154 | ### Time Complexity 155 | 156 | - **Best Case:** `O(n log n)` 157 | - **Average Case:** `O(n log n)` 158 | - **Worst Case:** `O(n log n)` 159 | 160 | ## Quicksort ([GIF](assets/Quicksort.gif)) 161 | 162 | - Quicksort starts by selecting one element as the *pivot*. The array is then divided into two subarrays with all the elements smaller than the pivot on the left side of the pivot and all the elements greater than the pivot on the right side. 163 | - It recursively repeats this process on the left side until it is comparing only two elements at which point the left side is sorted. 164 | - Once the left side is sorted, it performs the same recursive operation on the right side. 165 | - Quicksort is the fastest general purpose in-memory sorting algorithm in practice. 166 | - Best case occurs when the pivot always splits the array into equal halves. 167 | - Usually used in conjunction with Insertion Sort when the subarrays become smaller and *almost* sorted. 168 | - Requires `O(log n)` space on average. Not stable.[1](#footnote1) 169 | 170 | ### Algorithm 171 | 172 | ```java 173 | void startQuicksort(int[] arr) { 174 | quicksort(arr, 0, arr.length - 1); 175 | } 176 | 177 | void quicksort(int[] arr, int low, int high) { 178 | if (low >= high) return; 179 | 180 | int mid = low + ((high - low) / 2); 181 | int pivot = arr[mid]; // pick pivot point 182 | 183 | int i = low, j = high; 184 | while (i <= j) { 185 | // Find element on left that should be on right. 186 | while (arr[i] < pivot) i++; 187 | 188 | // Find element on right that should be on left. 189 | while (arr[j] > pivot) j--; 190 | 191 | // Swap elements and move left and right indices. 192 | if (i <= j) { 193 | swap(arr, i, j); 194 | i++; 195 | j--; 196 | } 197 | } 198 | 199 | // Sort left half. 200 | if (low < i - 1) 201 | quicksort(arr, low, i - 1); 202 | 203 | // Sort right half. 204 | if (i < high) 205 | quicksort(arr, i, high); 206 | } 207 | ``` 208 | 209 | ### Time Complexity 210 | 211 | - **Best Case:** `O(n log n)` 212 | - **Average Case:** `O(n log n)` 213 | - **Worst Case:** `O(n^2)` 214 | 215 | ## Heap Sort ([GIF](assets/Heap-Sort.gif)) 216 | 217 | - Heap sort takes the maximum element in the array and places it at the end of the array. 218 | - At every iteration, the maximum element from the unsorted part of the array is selected by taking advantage of the binary heap data structure and placed at the end. Then, the unsorted part is heapified and the process is repeated. 219 | - After `i` iterations, the last `i` elements of the array are sorted. 220 | - Sorts in-place. 
Not stable.[1](#footnote1) 221 | 222 | ### Binary Heap (Array Implementation) 223 | 224 | - We can implement a binary heap with `n` nodes using an array with the following conditions: 225 | - The left child of `nodes[i]` is `nodes[2i + 1]`. 226 | - The right child of `nodes[i]` is `nodes[2i + 2]`. 227 | - `nodes[i]` is a leaf if `2i + 1` > `n`. 228 | - Therefore, in a binary max heap, `nodes[i]` > `nodes[2i + 1]` & `nodes[i]` > `nodes[2i + 2]`. 229 | 230 | ### Algorithm 231 | 232 | ```java 233 | void heapSort(int[] arr) { 234 | int n = arr.length; 235 | 236 | // Construct initial max-heap (rearrange array) 237 | for (int i = n / 2 - 1; i >= 0; i--) { 238 | heapify(arr, n, i); 239 | } 240 | 241 | // Extract an element one by one from heap 242 | for (int i = n - 1; i >= 0; i--) { 243 | // Move current root to end 244 | int temp = arr[0]; 245 | arr[0] = arr[i]; 246 | arr[i] = temp; 247 | 248 | // Call heapify() on the reduced heap 249 | heapify(arr, i, 0); 250 | } 251 | } 252 | 253 | // Heapifies a subtree rooted at arr[i]. n is the size of the entire heap. 254 | void heapify(int arr[], int n, int i) { 255 | int largest = i; // initialize largest as root 256 | int l = 2*i + 1; // left child 257 | int r = 2*i + 2; // right child 258 | 259 | // If left child is larger than root 260 | if (l < n && arr[l] > arr[largest]) 261 | largest = l; 262 | 263 | // If right child is larger than largest so far 264 | if (r < n && arr[r] > arr[largest]) 265 | largest = r; 266 | 267 | // If largest is not root 268 | if (largest != i) { 269 | int temp = arr[i]; 270 | arr[i] = arr[largest]; 271 | arr[largest] = temp; 272 | 273 | // Recursively heapify the affected subtree 274 | heapify(arr, n, largest); 275 | } 276 | } 277 | ``` 278 | 279 | ### Time Complexity 280 | 281 | - **Best Case:** `O(n log n)` 282 | - **Average Case:** `O(n log n)` 283 | - **Worst Case:** `O(n log n)` 284 | 285 | ## Bucket Sort 286 | 287 | - Bucket sort is a sorting algorithm that works by distributing the elements of an array into a number of buckets. 288 | - Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sorting algorithm. 289 | 290 | ### Time Complexity 291 | 292 | - **Average Case:** `O(n + k)` (where `k` is the number of buckets) 293 | - **Worst Case:** `O(n^2)` 294 | 295 | ## Radix Sort 296 | 297 | - Radix sort is a sorting algorithm for integers (and some other data types) that groups the numbers by each digit from left to right (most significant digit radix sort) or right to left (least significant digit radix sort) on every pass. 298 | - This process is repeated for each subsequent digit until the whole array is sorted. 299 | 300 | ### Time Complexity 301 | 302 | - **Worst Case:** `O(kn)` (where `k` is the number of passes of the algorithm) 303 | 304 | --- 305 | 306 | [1](#footnote1): A sorting algorithm is said to be **stable** if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted. 307 | -------------------------------------------------------------------------------- /2.3 - Tree & Graph Traversal Algorithms.md: -------------------------------------------------------------------------------- 1 | # Tree/Graph Traversal Algorithms 2 | 3 | ## Breadth-First Traversal 4 | 5 | - An algorithm that traverses through a tree (or graph) level-by-level, starting at the root. 6 | - Breadth-First Traversal is iterative and uses a queue to keep track of unvisited nodes. 
7 | - In a tree, the bottom-right node is evaluated last (i.e. the node that is deepest and is farthest right). 8 | 9 | ### Binary Tree BFT 10 | 11 | ```java 12 | // Breadth-First Traversal of a Binary Tree 13 | void BFTraversal(BTNode root) { 14 | Queue queue = new LinkedList(); 15 | queue.add(root); 16 | 17 | while (!queue.isEmpty()) { 18 | BTNode node = queue.poll(); // poll() removes the present head 19 | System.out.println(node.data); 20 | 21 | // Enqueue left child 22 | if (node.left != null) queue.add(node.left); 23 | 24 | // Enqueue right child 25 | if (node.right != null) queue.add(node.right); 26 | } 27 | } 28 | ``` 29 | 30 | ### Graph BFS 31 | 32 | ```java 33 | // Breadth-First Search of a Graph 34 | void BFS(Node root) { 35 | Queue queue = new LinkedList(); 36 | root.visited = true; 37 | queue.add(root); 38 | 39 | while(!queue.isEmpty()) { 40 | Node r = queue.poll(); 41 | visit(r); 42 | 43 | for (int i = 0; i < root.adjacent.length; i++) { 44 | if (root.adjacent[i].visited == false) { 45 | root.adjacent[i].visited = true; 46 | queue.add(root.adjacent[i]); 47 | } 48 | } 49 | } 50 | } 51 | ``` 52 | 53 | ## Time Complexity 54 | 55 | ``` 56 | O(|V| + |E|) 57 | ``` 58 | 59 | ## Depth-First Traversal 60 | 61 | - An algorithm that traverses through a tree (or graph) by traversing the depth of the tree first, starting at the root. 62 | - Depth-First Traversal is usually recursive and uses a stack to keep track of unvisited nodes. 63 | - In pre-order & in-order traversal, the right-most node is evaluated last (the node that is right of all its ancestors). 64 | - In post-order traversal, the root node is evaluated last. 65 | 66 | #### Pre-Order Traversal 67 | 68 | 1. Process the current node. 69 | 2. Visit the left child subtree. 70 | 3. Visit the right child subtree. 71 | 72 | #### In-Order Traversal 73 | 74 | 1. Visit the left child subtree. 75 | 2. Process the current node. 76 | 3. Visit the right child subtree. 77 | 78 | #### Post-Order Traversal 79 | 80 | 1. Visit the left child subtree. 81 | 2. Visit the right child subtree. 82 | 3. Process the current node. 83 | 84 | ### Binary Tree DFT (Recursive) 85 | 86 | ```java 87 | // Pre-Order Depth-First Traversal of a Binary Tree 88 | void DFTraversal(BTNode node) { 89 | if (node == null) return; 90 | 91 | System.out.println(node.data); // visit node 92 | DFTraversal(node.left); // left subtree 93 | DFTraversal(node.right); // right subtree 94 | } 95 | ``` 96 | 97 | ### Binary Tree DFT (Iterative) 98 | 99 | ```java 100 | // Iterative Pre-Order DFT of a Binary Tree 101 | void IterativePreOrderDFT(BTNode node) { 102 | if (node == null) return; 103 | 104 | // The stack will keep track of the order to visit the nodes, from top to bottom. 105 | Stack stack = new Stack<>(); 106 | stack.push(root); 107 | 108 | // Traverse the tree. 109 | while (!stack.isEmpty()) { 110 | BTNode node = stack.pop(); 111 | System.out.print(node.data + " "); 112 | 113 | // Push right then left child to stack so left child is visited first. 114 | if (node.right != null) stack.push(node.right); 115 | if (node.left != null) stack.push(node.left); 116 | } 117 | } 118 | ``` 119 | 120 | ```java 121 | // Iterative In-Order DFT of a Binary Tree 122 | void IterativeInOrderDFT(BTNode node) { 123 | if (root == null) return; 124 | 125 | // The stack will keep track of the order to visit the nodes, from top to bottom. 126 | Stack stack = new Stack<>(); 127 | BTNode node = root; 128 | 129 | // Set the top of the stack to the leftmost node. 
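    // (This pushes the whole left spine from the root downwards, so the leftmost
    // node ends up on top of the stack and is visited first.)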
130 | while (node != null) { 131 | stack.push(node); 132 | node = node.left; 133 | } 134 | 135 | // Traverse the tree. 136 | while (!stack.isEmpty()) { 137 | node = stack.pop(); 138 | System.out.print(node.data + " "); 139 | 140 | if (node.right != null) { 141 | node = node.right; 142 | while (node != null) { 143 | stack.push(node); 144 | node = node.left; 145 | } 146 | } 147 | } 148 | } 149 | ``` 150 | 151 | ```java 152 | // Iterative Post-Order DFT of a Binary Tree 153 | void IterativePostOrderDFT(BTNode node) { 154 | if (root == null) return; 155 | 156 | // The stack will keep track of the order to visit the nodes, from top to bottom. 157 | Stack stack = new Stack<>(); 158 | stack.push(root); 159 | 160 | BTNode prev = null; 161 | while (!stack.isEmpty()) { 162 | BTNode curr = stack.peek(); 163 | 164 | // Go down the tree. If current node is leaf, process it and pop stack, 165 | // otherwise, keep going down. 166 | if (prev == null || prev.left == curr || prev.right == curr) { 167 | if (curr.left != null) { 168 | stack.push(curr.left); 169 | } else if (curr.right != null) { 170 | stack.push(curr.right); 171 | } else { 172 | System.out.println(curr.data); 173 | stack.pop(); 174 | } 175 | 176 | // Go up the tree from the left node. If there is a right child, push it 177 | // onto stack. Else, process parent node and pop stack. 178 | } else if (curr.left == prev) { 179 | if (curr.right != null) { 180 | stack.push(curr.right); 181 | } else { 182 | System.out.println(curr.data); 183 | stack.pop(); 184 | } 185 | 186 | // Go up the tree from the right node. Process parent node and pop stack. 187 | } else if (curr.right == prev) { 188 | System.out.println(curr.data); 189 | stack.pop(); 190 | } 191 | 192 | prev = curr; 193 | } 194 | } 195 | ``` 196 | 197 | ### Graph DFS 198 | 199 | ```java 200 | // Depth-First Search of a Graph 201 | void DFS(Node root) { 202 | if (root == null) return; 203 | 204 | visit(root); 205 | root.visited = true; 206 | 207 | for (int i = 0; i < root.adjacent.length; i++) { 208 | if (root.adjacent[i].visited == false) DFS(root.adjacent[i]); 209 | } 210 | } 211 | ``` 212 | 213 | ## Time Complexity 214 | 215 | ``` 216 | O(|V| + |E|) 217 | ``` 218 | 219 | ## BFS vs. DFS & Other Search Algorithms 220 | 221 | - Breadth-first search is guaranteed to find a shortest possible path between two vertices in a graph. Depth-first search is not (and usually does not). 222 | - **Iterative deepening depth-first search** is a graph search strategy in which a depth-limited version of depth-first search is run repeatedly with increasing depth limits until the goal is found. This is equivalent to breadth-first search but uses much less memory since on each iteration, it visits the nodes in the search tree in the same order as depth-first search but the cumulative order in which nodes are visited is effectively breadth-first. 223 | - **Bidirectional search** is used to find the shortest path between a source and a destination node. It operates by essentially running two simultaneous breadth-first searches, one from each node. When the searches collide, the path is found. 
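As a rough sketch of the bidirectional search idea above, the version below runs two breadth-first searches on an unweighted graph and reports whether they meet. It only answers reachability; recovering the actual path would additionally require keeping parent maps on both sides. The method names and the balanced-expansion choice are illustrative assumptions, and `Node` is assumed to expose the same `adjacent` array used in the graph examples above.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Bidirectional BFS: grow one frontier per step from each end until they collide.
boolean bidirectionalSearch(Node start, Node goal) {
    if (start == goal) return true;

    Set<Node> visitedFromStart = new HashSet<>();
    Set<Node> visitedFromGoal = new HashSet<>();
    Queue<Node> queueFromStart = new ArrayDeque<>();
    Queue<Node> queueFromGoal = new ArrayDeque<>();

    visitedFromStart.add(start);
    visitedFromGoal.add(goal);
    queueFromStart.add(start);
    queueFromGoal.add(goal);

    while (!queueFromStart.isEmpty() && !queueFromGoal.isEmpty()) {
        // Expand the smaller frontier first to keep the two searches balanced.
        if (queueFromStart.size() <= queueFromGoal.size()) {
            if (expandLevel(queueFromStart, visitedFromStart, visitedFromGoal)) return true;
        } else {
            if (expandLevel(queueFromGoal, visitedFromGoal, visitedFromStart)) return true;
        }
    }

    return false; // the frontiers never met, so no path exists
}

// Expands one full BFS level. Returns true if it reaches a node already visited by
// the search coming from the other direction, i.e. the two searches have collided.
boolean expandLevel(Queue<Node> queue, Set<Node> visited, Set<Node> otherVisited) {
    int levelSize = queue.size();
    for (int i = 0; i < levelSize; i++) {
        Node current = queue.poll();
        for (Node neighbor : current.adjacent) {
            if (otherVisited.contains(neighbor)) return true; // collision: a path exists
            if (visited.add(neighbor)) queue.add(neighbor);
        }
    }
    return false;
}
```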
224 | -------------------------------------------------------------------------------- /2.4 - Pathfinding Algorithms.md: -------------------------------------------------------------------------------- 1 | # Pathfinding Algorithms 2 | 3 | ## Dijkstra's Algorithm (Adapted from [Wikipedia](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm), [GIF](assets/Dijkstra.gif)) 4 | 5 | - Finds the shortest path from a start node to a goal node in a weighted, directed graph with non-negative edge weights. 6 | 7 | 1. Assign every node a tentative distance value. Zero for the start node and infinity for all other nodes. 8 | 2. Set the start node as current. Mark all other nodes unvisited. Create a set of all the unvisited nodes. 9 | 3. For the current node, consider all of its neighbors and calculate their tentative distances. Compare the newly calculated tentative distance to the current assigned value and assign the smaller one. If the newly calculated tentative distance is smaller, assign the current node as the neighbor's previous node. 10 | 4. When we are done considering all of the neighbors of the current node, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again. 11 | 5. If there are still unvisited nodes, select an unvisited node with the smallest tentative distance, set it as the new "current node" and go back to Step 3. 12 | 6. Use the goal's previous node to recursively backtrack to the start node. 13 | 14 | ```java 15 | HashMap Dijkstra(Node[] graph, Node start, Node goal) { 16 | // Keeps track of the previous node for a node in the path. Used to construct the path after 17 | // it is determined. 18 | HashMap prev = new HashMap<>(); 19 | 20 | // Queue that stores nodes in ascending order of distance. 21 | PriorityQueue queue = new PriorityQueue<>(new Comparator() { 22 | @Override 23 | public int compare(Node n1, Node n2) { 24 | return n1.dijkstraDistance - n2.dijkstraDistance; 25 | } 26 | }); 27 | 28 | for (Node n : graph) { 29 | if (n != start) { 30 | n.dijkstraDistance = Integer.MAX_VALUE; 31 | prev.put(n, null); 32 | } 33 | queue.add(n); 34 | } 35 | start.dijkstraDistance = 0; 36 | 37 | while (!queue.isEmpty()) { 38 | Node current = queue.poll(); // get minimum distance node 39 | if (current == goal) return prev; 40 | 41 | for (Node neighbor : current.neighbors) { 42 | int altDistance = current.dijkstraDistance + distance(current, neighbor); 43 | if (altDistance < neighbor.dijkstraDistance) { 44 | neighbor.dijkstraDistance = altDistance; 45 | prev.put(neighbor, current); 46 | 47 | // Update position of node in queue. 48 | queue.remove(neighbor); 49 | queue.add(neighbor); 50 | } 51 | } 52 | } 53 | 54 | return null; // no path found! 55 | } 56 | ``` 57 | 58 | ### Time Complexity 59 | 60 | - Each node `V` can be adjacent to a maximum `|V - 1|` nodes. 61 | - Finding and updating the weight of each adjacent node in a binary heap is `O(log |V|)`. 62 | - Therefore, updating all the adjacent nodes of one node is `O(|V - 1| log |V|)`. 63 | - Hence, time complexity for updating for all nodes is `O(|V| |V - 1| log |V|)`, which can be simplified to: 64 | 65 | ``` 66 | O(|E| log |V|) 67 | ``` 68 | 69 | where `|E|` represents the total number of edges in the graph. 70 | 71 | Generally, Dijkstra's algorithm takes `O(|E| * T_dk + |V| * T_em)` where `T_dk` & `T_em` denote the complexities of the *decrease-key* and *extract-minimum* operation (both are `O(log |V|)` for a binary heap). 
If a graph is sparse, a binary heap implementation is preferred but if a graph is dense, an array implementation (`O(|V|^2)`) is preferred for the priority queue. 72 | 73 | ## A* Search Algorithm (Adapted from [Wikipedia](https://en.wikipedia.org/wiki/A*_search_algorithm)) 74 | 75 | - Finds the shortest path from a start node to a goal node in a weighted, directed graph with non-negative edge weights. 76 | 77 | The A* search algorithm is based on minimizing: 78 | 79 | ``` 80 | f(n) = g(n) + h(n) 81 | ``` 82 | 83 | where `f(n)` is the estimated distance of `START` to `GOAL` through `n`, `g(n)` is the known distance from `START` to `n` and `h(n)` is the heuristic distance from `n` to `GOAL`. 84 | 85 | The heuristic must follow the the triangle inequality theorem i.e. `|h(A) - h(b)| ≤ dist(A, B)` for all nodes `A, B` and must never overestimate the actual minimal cost of reaching a goal. 86 | 87 | > Dijkstra's algorithm is a special case of A*, where h(n) = 0. 88 | 89 | ```java 90 | HashMap AStar(Node[] graph, Node start, Node goal) { 91 | HashSet openSet = new HashSet<>(); 92 | HashSet closedSet = new HashSet<>(); 93 | 94 | openSet.add(start) 95 | 96 | HashMap prev = new HashMap<>(); 97 | HashMap gScore = new HashMap<>(); 98 | HashMap fScore = new HashMap<>(); 99 | 100 | for (Node n : graph) { 101 | gScore.put(n, Integer.MAX_VALUE); 102 | fScore.put(n, Integer.MAX_VALUE); 103 | } 104 | 105 | gScore.put(start, 0) 106 | fScore.put(start, 0 + heuristicEstimate(start, goal)) 107 | 108 | while (!openSet.isEmpty()) { 109 | Node current = getMinimum(openSet, fScore); // get node with lowest fScore value 110 | if (current == goal) return prev; 111 | 112 | openSet.remove(current); 113 | closedSet.add(current); 114 | 115 | for (Node neighbor : current.neighbors) { 116 | if (closedSet.contains(neighbor)) continue; 117 | if (!openSet.contains(neighbor)) openSet.add(neighbor); 118 | 119 | int altGScore = gScore.get(current) + distance(current, neighbor); 120 | if (altGScore < gScore.get(neighbor)) { 121 | prev.put(neighbor, current); 122 | gScore.put(neighbor, altGScore); 123 | fScore.put(neighbor, altGScore + heuristicEstimate(neighbor, goal)); 124 | } 125 | } 126 | } 127 | 128 | return null; // no path found! 129 | } 130 | ``` 131 | 132 | ### Time Complexity 133 | 134 | Since A* algorithm's runtime complexity is heavily dependent on the heuristic chosen, the worst case time complexity for an unbounded search space is: 135 | 136 | ``` 137 | O(b^d) 138 | ``` 139 | 140 | where `b` is the branching factor (i.e. average number of successors per state) and `d` is the depth of the solution. 141 | 142 | ## Bellman-Ford Algorithm (Adapted from [Wikipedia](https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm)) 143 | 144 | - Finds the shortest path from a single source node to all other nodes in a weighted, directed graph, where edge weights may be negative. If there is a negative weight cycle, then the shortest distances are not calculated and the cycle is reported. 145 | 146 | 1. Initialize distance for all nodes as infinite, except the source node, which is initialized as zero. 147 | 2. Repeat the following `|V| - 1` times: 148 | 1. For each edge `u, v`, if `dist[v] > dist[u] + weight(u, v)`, then `dist[v] = dist[u] + weight(u, v)`. 149 | 3. Repeat the following for each edge `u, v`: 150 | 1. If `dist[v] > ist[u] + weight(u, v)`, graph contains a negative weight cycle. 
151 | 152 | - Step 3 is performed because Step 2 guarantees shortest distances only if the graph doesn't contain a negative weight cycle. 153 | - Unlike Dijkstra's algorithm, Bellman-Ford is capable of handling graphs with *some* negative weight edges. 154 | 155 | ```java 156 | class Graph { 157 | class Node { 158 | int id; 159 | } 160 | 161 | class Edge { 162 | int src; // ID of source node 163 | int dest; // ID of destination node 164 | int weight; // weight of edge 165 | } 166 | 167 | Node[] nodes; 168 | Edge[] edges; 169 | } 170 | 171 | int[] BellmanFord(Graph graph, int src) { 172 | int V = graph.nodes.length; 173 | int E = graph.edges.length; 174 | int[] dist = new int[V]; // distances to nodes 175 | 176 | for (int i = 0; i < V; i++) dist[i] = Integer.MAX_VALUE; 177 | dist[src] = 0; 178 | 179 | for (int i = 1; i < V; i++) { 180 | for (int j = 0; j < E; j++) { 181 | int u = graph.edges[j].src; 182 | int v = graph.edges[j].dest; 183 | int weight = graph.edges[j].weight; 184 | if (dist[u] != Integer.MAX_VALUE && dist[u] + weight < dist[v]) 185 | dist[v] = dist[u] + weight; 186 | } 187 | } 188 | 189 | for (int j = 0; j < E; j++) { 190 | int u = graph.edges[j].src; 191 | int v = graph.edges[j].dest; 192 | int weight = graph.edges[j].weight; 193 | if (dist[u] != Integer.MAX_VALUE && dist[u] + weight < dist[v]) { 194 | System.out.println("Graph contains negative weight cycle!"); 195 | return null; 196 | } 197 | } 198 | 199 | return dist; 200 | } 201 | ``` 202 | 203 | ### Time Complexity 204 | 205 | ``` 206 | O(|V| |E|) 207 | ``` 208 | 209 | ## Floyd-Warshall Algorithm (Adapted from [Wikipedia](https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm)) 210 | 211 | - Finds the shortest paths between all pairs of nodes in a weighted, directed graph, where edge weights may be negative but there are no negative weight cycles. 212 | 213 | 1. Initialize the solution matrix as a copy of the input graph matrix i.e. the shortest paths are initialized as the paths with no intermediate nodes. 214 | 2. For every node `k`, consider it as the intermediate node for a path between `u` & `v`. Then, either: 215 | 1. `k` is not an intermediate node in the shortest path from `u` to `v` and the value of `dist[u][v]` is kept as it is. 216 | 2. `k` is an intermediate node in the shortest path from `u` to `v` and the value of `dist[u][v]` is updated to `dist[u][k] + dist[k][v]`. 217 | 218 | ```java 219 | int[][] FloydWarshall(int[][] graph) { 220 | int V = graph.length; 221 | int[][] dist = new int[V][V]; 222 | int i, j, k, u, v; 223 | 224 | for (i = 0; i < V; i++) 225 | for (j = 0; j < V; j++) 226 | dist[i][j] = graph[i][j]; 227 | 228 | for (k = 0; k < V; k++) { 229 | for (u = 0; u < V; u++) { 230 | for (v = 0; v < V; v++) { 231 | if (dist[u][k] + dist[k][v] < dist[u][v]) 232 | dist[u][v] = dist[u][k] + dist[k][v]; 233 | } 234 | } 235 | } 236 | 237 | return dist; 238 | } 239 | ``` 240 | 241 | ### Time Complexity 242 | 243 | ``` 244 | O(|V|^3) 245 | ``` 246 | -------------------------------------------------------------------------------- /2.5 - Other Graph Algorithms.md: -------------------------------------------------------------------------------- 1 | # Other Graph Algorithms 2 | 3 | ## Prim's Algorithm (Adapted from [Wikipedia](https://en.wikipedia.org/wiki/Prim%27s_algorithm), [GIF](assets/Prim.gif)) 4 | 5 | - Finds a minimum spanning tree for a weighted, undirected graph.
6 | - A minimum spanning tree is a subset of the edges that forms a tree, which contains every vertex, where the total weight of all the edges in the tree is minimized. 7 | 8 | 1. Create a set `mstSet` that keeps track of the vertices already included in the MST. 9 | 2. Initialize key values for all the vertices in the graph as infinite, except for the starting node, which is assigned a value of 0 and is the root node for the MST. 10 | 4. While `mstSet` does not include all the vertices: 11 | 1. Pick a vertex `u`, which is not in `mstSet` and has the minimum key value. 12 | 2. Add `u` to `mstSet`. 13 | 3. Update the key values of all adjacent vertices to `u`. For every adjacent vertex `v`, update the key value (and parent node) if the weight of the edge `u, v` is less than the previous key value of `v`. 14 | 15 | ```java 16 | int[] primMST(int graph[][]) { 17 | int V = graph.length; 18 | int[] parent = new int[V]; // stores constructed MST 19 | int[] key = new int[V]; // key values 20 | boolean[] mstSet = new boolean[V]; // keeps track of which vertex is in MST 21 | 22 | for (int i = 0; i < V; i++) key[i] = Integer.MAX_VALUE; 23 | 24 | key[0] = 0; 25 | parent[0] = -1; // root of tree 26 | for (int count = 0; count < V - 1; count++) { 27 | int u = minKey(key, mstSet); // pick the minimum key vertex not in mstSet 28 | mstSet[u] = true; 29 | 30 | for (int v = 0; v < V; v++) { 31 | if (graph[u][v] != 0 && mstSet[v] == false && graph[u][v] < key[v]) { 32 | parent[v] = u; 33 | key[v] = graph[u][v]; 34 | } 35 | } 36 | } 37 | 38 | return parent; 39 | } 40 | ``` 41 | 42 | ### Time Complexity 43 | 44 | - Adjacency Matrix Implementation: `O(|V|^2)` 45 | - Binary Heap & Adjacency List Implementation: `O(|E| log |V|)` 46 | 47 | ## Kruskal's Algorithm (Adapted from [Wikipedia](https://en.wikipedia.org/wiki/Kruskal%27s_algorithm), [GIF](assets/Kruskal.gif)) 48 | 49 | - Finds a minimum spanning tree for a weighted, undirected graph. 50 | 51 | 1. Create a graph `F` (a set of trees), where each vertex of the input graph is a separate tree. 52 | 2. Create a set `S` containing all the edges of the graph. 53 | 3. While `S` is not empty and `F` is not yet spanning: 54 | 1. Remove an edge with minimum weight from `S`. 55 | 2. If the removed edge connects two different trees, then add it to the forest `F`, combining two trees into a single tree. 56 | 57 | ### Time Complexity 58 | 59 | ``` 60 | O(|E| log |E|) = O(|E| log |V|^2) = O(|E| * 2 log |V|) = O(|E| log |V|) 61 | ``` 62 | 63 | ## Topological Sort 64 | 65 | - A topological sort of a directed acyclic graph's (DAG) nodes is a linear ordering such that for every edge `(u, v)`, `u` comes before `v` in the ordering. 66 | - A DAG has at least one vertex with in-degree 0 and one vertex with out-degree 0. 67 | 68 | ```java 69 | class Node { 70 | int index; 71 | int val; 72 | Node[] neighbors; 73 | } 74 | ``` 75 | 76 | ### Kahn's Algorithm 77 | 78 | 1. Compute the in-degree for each vertex present in the DAG and initialize the count of visited nodes to 0. 79 | 2. Pick all the vertices with in-degree 0 and enqueue them. 80 | 3. Remove a vertex from the queue and repeat the following till the queue is empty: 81 | 1. Increment the count of visited nodes by 1. 82 | 2. Decrease in-degree by 1 for all its neighboring nodes. 83 | 3. If the in-degree of a neighboring node is reduced to 0, enqueue it. 84 | 4. If the count of visited nodes is not equal to the number of nodes in the graph, a topological sort does not exist. 
85 | 86 | ```java 87 | void topologicalSort(Node[] graph) { 88 | int V = graph.length; 89 | 90 | int[] indegree = new int[V]; 91 | for (Node node : graph) { 92 | for (Node neighbor : node.neighbors) indegree[neighbor.index]++; 93 | } 94 | 95 | Queue<Integer> queue = new LinkedList<>(); 96 | for (Node node : graph) { 97 | if (indegree[node.index] == 0) queue.add(node.index); 98 | } 99 | 100 | int c = 0; // count of visited nodes 101 | ArrayList<Integer> topOrder = new ArrayList<>(); 102 | while (!queue.isEmpty()) { 103 | int u = queue.poll(); 104 | topOrder.add(u); 105 | 106 | for (Node neighbor : graph[u].neighbors) { 107 | indegree[neighbor.index]--; 108 | if (indegree[neighbor.index] == 0) queue.add(neighbor.index); 109 | } 110 | 111 | c++; 112 | } 113 | 114 | if (c != V) { 115 | System.out.println("A cycle exists. Topological sort not possible."); 116 | return; 117 | } 118 | 119 | for (int i : topOrder) System.out.print(graph[i].val + " "); 120 | } 121 | ``` 122 | 123 | #### Time Complexity 124 | 125 | ``` 126 | O(|V| + |E|) 127 | ``` 128 | 129 | ### DFS-like Implementation 130 | 131 | ```java 132 | void topologicalSort(Node[] graph) { 133 | Stack<Integer> stack = new Stack<>(); 134 | int V = graph.length; 135 | boolean[] visited = new boolean[V]; 136 | 137 | for (int i = 0; i < V; i++) 138 | if (visited[i] == false) 139 | topologicalSortUtil(i, visited, stack, graph); 140 | 141 | while (!stack.isEmpty()) System.out.print(stack.pop() + " "); 142 | } 143 | 144 | void topologicalSortUtil(int index, boolean[] visited, Stack<Integer> stack, Node[] graph) { 145 | visited[index] = true; 146 | 147 | for (Node neighbor : graph[index].neighbors) { 148 | if (!visited[neighbor.index]) topologicalSortUtil(neighbor.index, visited, stack, graph); 149 | } 150 | 151 | stack.push(graph[index].val); 152 | } 153 | ``` 154 | 155 | #### Time Complexity 156 | 157 | ``` 158 | O(|V| + |E|) 159 | ``` 160 | -------------------------------------------------------------------------------- /3 - Object Oriented Programming.md: -------------------------------------------------------------------------------- 1 | # Object Oriented Programming 2 | 3 | ## Object Oriented Model 4 | 5 | - In OOP, computation is represented as the interaction among or communication between *objects*. 6 | - An object is an entity that contains both the attributes and the actions of the real-world object it models. 7 | - The attributes of an object encompass the data/variables that characterize its state. 8 | - The state of an object encompasses all of the (usually static) properties of the object plus the current (usually dynamic) values of each of these properties. 9 | - The behavior/actions of an object encompass the methods that represent the services & operations an object provides. 10 | - A **class** is a template/blueprint for objects. It contains data properties & methods. An **object** is a specific instance of a class. 11 | - **Object Composition –** An object can include other objects as its data member(s). *has-a* relationship. 12 | 13 | ## OOP Concepts 14 | 15 | - **Abstraction –** An abstraction denotes the essential characteristics of an object that distinguish it from all other kinds of objects, and thus, it provides crisply defined conceptual boundaries. 16 | - **Encapsulation –** Encapsulation builds a barrier to protect an object's private data. Access to the private data can only be done through public methods of the object's class, such as accessors & mutators. 17 | - **Information Hiding –** Hides the implementation details of the class from users of the class.
18 | - **Inheritance –** A mechanism that defines a new class that inherits the properties and behaviors (methods) of a parent class. Superclass/Base Class (Parent) → Subclass/Derived Class (Child). Any inherited behavior may be redefined and overridden in the subclass. Avoids duplication of code. 19 | - Multiple inheritance is when a class inherits from more than one superclass. A problem arises when there is more than one property/method to inherit with the same name. 20 | - **Polymorphism –** The same method can be invoked on different objects through the same reference type with different results. Sending object does not need to know the class of the receiving object or how the object will respond. 21 | 22 | ## Inheritance 23 | 24 | - Inheritance is an important OOP feature that allows the derivation of new classes from existing classes by absorbing their attributes and behaviors while also adding new capabilities. This enables code reuse and can greatly reduce programming effort. *is-a* relationship. 25 | - The **superclass** is a generalization of the subclasses. 26 | - The **subclasses** are specializations of the superclass. 27 | - **Method Overriding –** A subclass inherits properties and methods from the superclass. When a subclass alters a method from a superclass by defining a method with exactly the same signature, it overrides that method. This can either be a refinement or a replacement of the superclass' method. 28 | - Implementing abstract methods of an abstract class or implementing methods of an interface is also method overriding. 29 | - When a method is invoked on an object, the search for a matching method begins at the class of the object → immediate superclass → and so on... 30 | - **Method Overloading –** When a method is overloaded, it is designed to perform differently when supplied with different signatures i.e. same method name but different number of parameters or parameter types. *Not a behavior due to inheritance.* 31 | 32 | ### Types of Classes in Java 33 | 34 | - A **concrete class** is a class with implementation for all methods. 35 | - **Abstract Classes & Methods (`abstract`) –** Abstract methods don't have any implementation in the abstract class. The implementation must be provided by the subclass(es). 36 | - `public abstract class Quadrilateral {}` 37 | - `public abstract double findArea();` 38 | - Multiple inheritance is not supported by Java. However, Java does support implementing multiple interfaces. 39 | - An **interface** is like an abstract class except it contains only abstract methods and constants (i.e. `static final`). The `abstract` keyword is not needed when defining the methods in an interface. 40 | - `public interface Figure {}` 41 | - `static final int constant;` 42 | - `public double findArea();` 43 | - A class implementing an interface has to provide an implementation for all the abstract methods. Otherwise, the new class will be abstract. 44 | - Interfaces can inherit each other as per normal i.e. `extends`. 45 | 46 | | Abstract Class | Interface | 47 | |:--------------:|:--------------:| 48 | | `extends` | `implements` | 49 | | Real base class. | Not a real base class. | 50 | | Can have object attributes (data members). | Cannot have object attributes. | 51 | | May have some methods declared as `abstract`. | Can only have abstract methods. | 52 | | May have `final` & non-final data attributes. | Limited to only static constants i.e. `static final`. | 53 | | Cannot be instantiated as objects with `new`. 
| Cannot be instantiated as objects with `new`. | 54 | 55 | ## Polymorphism 56 | 57 | - In OOP, polymorphism is the ability of an object reference to refer to different object types; knowing which method to execute depends on where it is in the inheritance hierarchy. 58 | - When a program invokes a method through a superclass variable, the appropriate subclass version of the method is executed based on the actual type of object stored in the superclass variable. 59 | - The same method name & signature can cause different actions to occur, depending on the actual type of object on which the method is invoked. 60 | - Benefits of Polymorphism: 61 | - Simplicity – Code can ignore type-specific details and just interact with the base type of the family. Makes it easier to write and understand the code. 62 | - Extensibility – New functionality can be added by creating new derived classes without modifying other derived classes. 63 | 64 | ### Binding 65 | 66 | - **Binding –** Defines which method is to be executed (i.e. connecting a method call to a method body). 67 | - **Static Binding –** Occurs when the method call is bound at compile time. 68 | - **Dynamic Binding –** The selection of the method body to be executed is delayed until runtime (based on the actual object being referred). 69 | - Java uses this by default for all methods except `private`, `final` & `static`. 70 | 71 | ### Object Variable vs. Object Reference 72 | 73 | - **Upcasting –** When an object of a derived class is assigned to a variable of a base class (or any ancestor class). However, subclass-only members cannot be referred to by a superclass variable. 74 | - **Downcasting –** When an object of a base class is assigned to a variable of a derived class. This doesn't make sense in many cases and may be illegal. 75 | - In Java, `object instanceof ClassName` will return true if object is an instance of `ClassName` or any descendent class of `ClassName`. 76 | 77 | ## Java Notes 78 | 79 | >`this` 80 | 81 | - `this` references the receiver object. 82 | 83 | > `super` 84 | 85 | - `super()` calls the constructor of the superclass and `super.X()` can be used to call a superclass' method. 86 | 87 | > `static` 88 | 89 | - `static` declares a class variable or class method that applies to the whole class instead of individual objects. 90 | 91 | > `final` 92 | 93 | - A `final` variable cannot be initialized more than once. 94 | - A `final` method cannot be overridden in subclasses and a `final` class cannot be a superclass. 95 | - Improves security by ensuring no change in behavior and improves efficiency by reducing runtime type checking and binding. 96 | 97 | ### Visibility Modifiers 98 | 99 | - `public` - Visible anywhere in an application. 100 | - `protected` - Visible anywhere within the same package. 101 | - `private` - Visible only within that class. 102 | 103 | ### Package 104 | 105 | - A package contains a set of classes that are grouped together in the same directory. Non-private data can be accessed by any object in the same package. 106 | - Packages allow for namespacing, thus, the same class name can be used in two different packages. For eg: `X.Deck` & `Y.Deck`. 107 | 108 | ## Design Principles 109 | 110 | - Symptoms of Rotting Design: 111 | - Rigidity – The tendency of software to be difficult to change, even in simple ways. Every change causes a cascade of subsequent changes. 112 | - Fragility – The tendency of software to break in many places every time it is changed. 
Breakage may occur in areas that have no conceptual relationship with the area that was changed. 113 | - Immobility – The inability to reuse software/module from other projects or from parts of the same project. The module may have too much baggage that it depends on. 114 | - Good design & programming must be easy to read, easy to maintain and modify, efficient, reliable and secure. 115 | - The main design goal of OOD is to make software easier to change i.e. minimize impact of change. 116 | - A modular program has well-defined, conceptually simple and independent units interacting through well-defined interfaces. Achieved through encapsulation, low coupling & high cohesion. 117 | 118 | ### SOLID(D) 119 | 120 | - **Single Responsibility Principle –** There should never be more than one reason for a class to change. If the class has more than one responsibility, then the responsibilities become coupled. 121 | - **Open-Closed Principle –** A module should be open for extension but closed for modification. We want to be able to change what the modules do, without changing the source code of the modules. 122 | - **Liskov Substitution Principle –** Subtypes must be substitutable for their base types. A user of a base class should continue to function if a derivative of that base class is passed to it. 123 | - **Interface Segregation Principle –** Many client specific interfaces are better than one general purpose interface. Classes should not depend on interfaces that they do not use. 124 | - **Don't Repeat Yourself –** Refactor to eliminate duplicated code and functionality. 125 | - **Dependency Injection Principle –** High level modules should not depend upon low level modules. Both should depend upon abstractions. This allows the simple reuse of high level modules. 126 | 127 | ## Handling Object-Oriented Design Questions 128 | 129 | 1. Handle Ambiguity 130 | 2. Define the Core Objects 131 | 3. Analyze Relationships (i.e. inheritance, one-to-many, many-to-many, has-a, etc.) 132 | 4. Investigate Actions 133 | -------------------------------------------------------------------------------- /4 - Design Patterns.md: -------------------------------------------------------------------------------- 1 | # Design Patterns 2 | 3 | ## Observer Pattern 4 | 5 | The Observer pattern involves having observers register for changes to subjects. Whenever the subject changes, each of the registered observers is notified as opposed to the observers continuously polling the subject's state. 
6 | 7 | ```java 8 | public class Subject { 9 | private List observers = new ArrayList<>(); 10 | private int state; 11 | 12 | public int getState() { return state; } 13 | 14 | public void setState(int state) { 15 | this.state = state; 16 | notifyAllObservers(); 17 | } 18 | 19 | public void attach(Observer observer) { 20 | observers.add(observer); 21 | } 22 | 23 | public void notifyAllObservers() { 24 | for (Observer observer : observers) { 25 | observer.update(); 26 | } 27 | } 28 | } 29 | ``` 30 | 31 | ```java 32 | public abstract class Observer { 33 | protected Subject subject; 34 | public abstract void update(); 35 | } 36 | ``` 37 | 38 | ```java 39 | public class BinaryObserver extends Observer { 40 | public BinaryObserver(Subject subject) { 41 | this.subject = subject; 42 | this.subject.attach(this); 43 | } 44 | 45 | @Override 46 | public void update() { 47 | System.out.println("Binary String: " + Integer.toBinaryString(subject.getState())); 48 | } 49 | } 50 | ``` 51 | 52 | ```java 53 | public class HexaObserver extends Observer { 54 | public HexaObserver(Subject subject) { 55 | this.subject = subject; 56 | this.subject.attach(this); 57 | } 58 | 59 | @Override 60 | public void update() { 61 | System.out.println("Hex String: " + Integer.toHexString(subject.getState()).toUpperCase()); 62 | } 63 | } 64 | ``` 65 | 66 | ## Singleton Class 67 | 68 | The Singleton pattern ensures that a class has only one instance and ensures access to that instance through the application. It can be useful when you need a *global* object with exactly one instance. 69 | 70 | ```java 71 | public class Singleton { 72 | private static Singleton _instance = null; 73 | protected Singleton() { ... } 74 | public static Singleton getInstance() { 75 | if (_instance == null) { 76 | _instance = new Singleton(); 77 | } 78 | return _instance; 79 | } 80 | } 81 | ``` 82 | 83 | ## Factory Method 84 | 85 | The Factory Method offers an interface for creating an instance of a class, with its subclasses deciding which class to instantiate. The Factory method can also be implemented with a parameter representing which class to instantiate. 86 | 87 | ```java 88 | public class CardGameFactory { 89 | public static CardGame createCardGame(String type) { 90 | if (type.equalsIgnoreCase("POKER")) { 91 | return new PokerGame(); 92 | } else if (type.equalsIgnoreCase("BLACKJACK")) { 93 | return new BlackJackGame(); 94 | } 95 | return null; 96 | } 97 | } 98 | ``` 99 | 100 | ## Model-View-Controller (MVC) 101 | 102 | Model‐view‐controller (MVC) is a design pattern commonly used in user interfaces. The goal is to keep the "data" separate from the user interface. Essentially, a program that uses MVC uses separate programming entities to store the data (the "model"), display the data (the "view") and modify the data (the "controller"). In MVC, the view usually makes heavy use of listeners to listen to changes and events in the model. 103 | -------------------------------------------------------------------------------- /5 - OS Fundamentals.md: -------------------------------------------------------------------------------- 1 | # OS Fundamentals 2 | 3 | ## Big Endian vs. Little Endian 4 | 5 | - Big Endian stores the MSB in the smallest address. 6 | - Little Endian stores the LSB in the smallest address. 7 | 8 | ## Stack & Heap (Memory Space) 9 | 10 | - The stack is used for static memory allocation while the heap is used for dynamic memory allocation. 
11 | - Variables allocated on the stack have their memory allocated at compile time and access is very fast. 12 | - Variables allocated on the heap have their memory allocated at runtime and accessing this memory is slightly slower. 13 | 14 | ## Processes & Threads 15 | 16 | - A process is an instance of a program in execution. It is an independent entity to which system resources are allocated. Each process executes in a separate address space and one process cannot access the data of another process. However, inter-process communication is possible using pipes, files, sockets, etc. 17 | - A thread is a particular execution path of a process. It exists within a process and shares the process' resources. Multiple threads within the same process will share the same heap space but each thread still has its own registers and its own stack. 18 | 19 | ## Mutexes & Semaphores 20 | 21 | - A mutex is like a lock. Mutexes are used in parallel programming to ensure that only one thread can access a shared resource at a time. 22 | - A mutex provides mutual exclusion. For example, either a producer or a consumer can have the key (mutex) to proceed with their work. As long as the buffer is being filled by the producer, the consumer needs to wait and vice-versa. 23 | - A semaphore is a signalling mechanism that restricts the number of simultaneous users for a shared resource up to a maximum number. Threads can request access to the resource (by decrementing the semaphore) and can signal that they have finished using the resource (by incrementing the semaphore). 24 | - Even though the implementation of a mutex is similar to that of a binary semaphore they are not the same. A mutex is a locking mechanism used to synchronize access to a resource while a semaphore is more like a signaling mechanism. 25 | 26 | ## Deadlocks 27 | 28 | A deadlock is a situation where a thread is waiting for a resource that another thread holds, while the second thread is waiting for the resource held by the first thread (or an equivalent situation with several threads). Since each thread is waiting for the other to unlock the resource, both threads remain waiting forever. 29 | 30 | ### Conditions for Deadlock 31 | 32 | 1. **Mutual Exclusion:** Only one process can access a resource at a given time (or there are limited number of the same resource). 33 | 2. **Hold and Wait:** Processes already holding a resource can request additional resources without releasing their current resources. 34 | 3. **No Preemption:** One process cannot forcibly remove another process' resource. 35 | 4. **Circular Wait:** Two or more processes form a circular chain where each process is waiting on another resource in the chain. 36 | 37 | Deadlocks can be prevented by removing any one of the four conditions above. However, most deadlock prevention algorithms focus on avoiding circular wait. 38 | 39 | ## Livelock 40 | 41 | A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked — they are simply too busy responding to each other to resume work. 42 | 43 | ## Starvation 44 | 45 | Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. This happens when shared resources are made unavailable for long periods by "greedy" threads. 
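To make the deadlock conditions above concrete, here is a minimal Java sketch of a lock-ordering deadlock (the class, lock and method names are illustrative, not from any particular library). Each thread holds one lock and waits forever for the other, exhibiting hold-and-wait and circular wait:

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {         // t1 holds LOCK_A
                pause();
                synchronized (LOCK_B) {     // ...and waits for LOCK_B (held by t2)
                    System.out.println("t1 acquired both locks");
                }
            }
        });

        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {         // t2 holds LOCK_B
                pause();
                synchronized (LOCK_A) {     // ...and waits for LOCK_A (held by t1)
                    System.out.println("t2 acquired both locks");
                }
            }
        });

        t1.start();
        t2.start();
    }

    // Small delay to make the unlucky interleaving (and hence the deadlock) likely.
    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

Making both threads acquire the locks in the same global order (e.g. always `LOCK_A` before `LOCK_B`) removes the circular wait and prevents the deadlock.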
46 | 47 | ## Scheduling 48 | 49 | - **Long-Term Scheduler:** Determines which programs are admitted into the system for processing. It selects processes from the queue and loads them into memory for execution. 50 | - **Short-Term Scheduler:** Selects a process among the processes that are ready to execute and allocates the CPU to one of them. Its main objective is to increase system performance according to certain criteria. 51 | - **Medium-Term Scheduler:** Responsible for removing processes from memory in case of long waits (for eg. I/O wait). The suspended process is moved to secondary storage and swapped for another process in the queue. Once the wait is over, the suspended process is swapped back in for continued execution. 52 | -------------------------------------------------------------------------------- /6 - Concurrency in Java.md: -------------------------------------------------------------------------------- 1 | # Concurrency in Java 2 | 3 | > Adapted from the [Oracle tutorial](https://docs.oracle.com/javase/tutorial/essential/concurrency/index.html) on Java concurrency. 4 | 5 | In concurrent programming, there are two basic units of execution - processes & threads. A process has a self-contained execution environment and is often seen as synonymous with programs and applications (though this may not be true). Inter-process communication usually happens via pipes or sockets as processes have their own memory space. Threads, on the other hand, exist within a process (each process has at least one) and share the process's resources. From the application programmer's point of view, you start with just one `main` thread (not counting "system" threads for memory management, signal handling, etc.), which has the ability to create new threads. 6 | 7 | ## Thread Objects 8 | 9 | Each thread is associated with an instance of the `Thread` class. An application that creates a `Thread` instance must provide the code that will run in the thread by either providing a `Runnable` object or by sub-classing `Thread` as the `Thread` class itself implements the `Runnable` interface. 10 | 11 | ```java 12 | public class HelloRunnable implements Runnable { 13 | public void run() { 14 | System.out.println("Hello from a thread!"); 15 | } 16 | 17 | public static void main(String args[]) { 18 | (new Thread(new HelloRunnable())).start(); 19 | } 20 | } 21 | ``` 22 | 23 | ```java 24 | public class HelloThread extends Thread { 25 | public void run() { 26 | System.out.println("Hello from a thread!"); 27 | } 28 | 29 | public static void main(String args[]) { 30 | (new HelloThread()).start(); 31 | } 32 | } 33 | ``` 34 | 35 | The first approach is more flexible and separates the `Runnable` task from the `Thread` object that executes the task. Additionally, since Java does not support multiple inheritance, implementing the `Runnable` interface will still allow the runnable class to extend another class. 36 | 37 | ### Sleep 38 | 39 | `Thread.sleep(int milliseconds)` causes the current thread to suspend execution for a specified period of time. However, the sleep period may be terminated by interrupts. 40 | 41 | ### Interrupt 42 | 43 | An interrupt is an indication to a thread that it should stop what it is doing and do something else. A thread sends an interrupt by invoking `.interrupt()` on the `Thread` object for the thread to be interrupted. For the interrupt mechanism to work correctly, the interrupted thread must support its own interruption. 
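For example, a hypothetical `Runnable` (the class and method names here are illustrative) can support its own interruption in the two ways described below: by catching the `InterruptedException` thrown from blocking calls such as `Thread.sleep()`, and by periodically polling the interrupt status in CPU-bound loops:

```java
public class InterruptibleWorker implements Runnable {
    @Override
    public void run() {
        // Poll the interrupt status between units of CPU-bound work.
        while (!Thread.interrupted()) {
            doUnitOfWork();
            try {
                Thread.sleep(1000); // blocking call: reports interruption via InterruptedException
            } catch (InterruptedException e) {
                return; // interrupted while sleeping, so stop this thread's work
            }
        }
    }

    private void doUnitOfWork() { /* illustrative placeholder */ }
}
```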
44 | 45 | If the interrupted thread is frequently invoking methods that throw `InterruptedException`, it simply returns from the run method after it catches that exception. If a thread is not invoking such methods, it must periodically invoke `Thread.interrupted()`, which returns true if an interrupt has been received. 46 | 47 | Note that the interrupt mechanism is implemented using an internal flag known as the interrupt status. Invoking `Thread.interrupt()` sets this flag. When a thread checks for an interrupt by invoking the static method `Thread.interrupted()`, interrupt status is cleared. The non-static `.isInterrupted()` method, which is used by one thread to query the interrupt status of another, does not change the interrupt status flag. 48 | 49 | ### Joins 50 | 51 | The `.join()` method allows one thread to wait for the completion of another. For example, if a thread wants to wait for the completion of `t`, 52 | 53 | ```java 54 | t.join(); 55 | ``` 56 | 57 | causes the current thread to pause execution until `t`'s thread terminates. Like `.sleep()`, `join()` responds to an interrupt by throwing a `InterruptedException`. 58 | 59 | ## Synchronization 60 | 61 | Threads communicate each other primarily by sharing access to fields and the object references fields refer to. While this is extremely efficient, it leads to *thread interference* and *memory consistency errors*. 62 | 63 | ### Thread Interference 64 | 65 | Interference happens when two operations, running in different threads, but acting on the same data, *interleave*. This means that the two operations consist of multiple steps and the sequences of steps overlap. For example, `a++` & `a--` being invoked by two concurrent threads. Thread interference bugs (AKA race conditions) are unpredictable and hard to debug. 66 | 67 | ### Memory Consistency Errors 68 | 69 | Memory consistency errors happen when two or more threads have an inconsistent view of the data. Avoiding memory consistency errors requires understanding of the *happens-before* relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. 70 | 71 | Invoking `Thread.start()` and `Thread.join()` at the right places ensure that the statements have a predictable happens-before relationship. 72 | 73 | ### Synchronized Methods 74 | 75 | ```java 76 | public synchronized void methodName() { } 77 | ``` 78 | 79 | Adding the `synchronized` keyword to a method guarantees two things: 80 | 81 | 1. It is not possible for two invocations of (the same or different) synchronized methods on the same object to interleave. 82 | 2. When a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. 83 | 84 | ### Intrinsic Locks 85 | 86 | Synchronization is built around an internal entity known as the *intrinsic lock* or *monitor lock*. Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and consistent access to an object's fields has to acquire the object's intrinsic lock (a.k.a. own the intrinsic lock) before accessing them and release the intrinsic lock when it's done with them. Other threads will block when they attempt to acquire the lock. 87 | 88 | When a thread invokes a synchronized method, it automatically acquires the intrinsic lock for that method's object. 
When a static synchronized method is invoked, the thread acquires the intrinsic lock for the `Class` object associated with the class. 89 | 90 | Another way to create synchronized code is through synchronized statements. Unlike synchronized methods, synchronized statements must specify the object that provides the intrinsic lock: 91 | 92 | ```java 93 | public void addSomething(int a) { 94 | synchronized(this) { 95 | counter += a; 96 | } 97 | System.out.println(a); 98 | } 99 | ``` 100 | 101 | ```java 102 | public class ABC { 103 | private int a = 0; 104 | private int b = 0; 105 | private Object lock1 = new Object(); 106 | private Object lock2 = new Object(); 107 | 108 | public void inc() { 109 | synchronized(lock1) { // guard `a` with its own lock 110 | a++; 111 | } 112 | 113 | synchronized(lock2) { // guard `b` with a separate lock so updates to `a` & `b` can interleave 114 | b++; 115 | } 116 | } 117 | } 118 | ``` 119 | 120 | ```java 121 | public class LockedClass { 122 | private Lock lock; 123 | private int i = 0; 124 | 125 | public LockedClass() { 126 | lock = new ReentrantLock(); 127 | } 128 | 129 | public void inc() { 130 | lock.lock(); 131 | try { 132 | i++; 133 | } finally { 134 | lock.unlock(); // always release the lock, even if an exception is thrown 135 | } 136 | } 137 | } 138 | ``` 139 | 140 | Note that a thread can acquire a lock it already owns (reentrant synchronization). For example, this may be needed if a synchronized method invokes another synchronized method for the same object. 141 | 142 | ### Atomic Access 143 | 144 | In programming, an atomic action is one that effectively happens all at once. An atomic action cannot stop in the middle. It either happens completely or it doesn't happen at all. In Java, the following actions are atomic: 145 | 146 | 1. Reads and writes are atomic for reference variables and for most primitive variables (all types except `long` and `double`). 147 | 2. Reads and writes are atomic for all variables declared `volatile`. 148 | -------------------------------------------------------------------------------- /7 - Bit Manipulation.md: -------------------------------------------------------------------------------- 1 | # Bit Manipulation 2 | 3 | ## Bitwise Operators 4 | 5 | - `&`: AND 6 | - `|`: OR 7 | - `^`: XOR 8 | - `~`: NOT 9 | - `<<`: Binary Left Shift 10 | - `>>`: Binary Right Shift 11 | - `>>>`: Zero Fill Right Shift 12 | 13 | ## Bit Facts & Tricks 14 | 15 | ``` 16 | x ^ 0s = x    x & 0s = 0    x | 0s = x 17 | x ^ 1s = ~x   x & 1s = x    x | 1s = 1s 18 | x ^ x = 0     x & x = x     x | x = x 19 | ``` 20 | 21 | ## Two's Complement 22 | 23 | - Computers typically store integers in two's complement representation. 24 | - Range of unsigned numbers that can be stored with `N` bits is `0` - `+(2^N - 1)`. 25 | - Range of signed numbers that can be stored with `N` bits in two's complement representation is `-(2^(N - 1))` - `+(2^(N - 1) - 1)`. 26 | - Binary representation of `-K` is `concat(1, bin(2^(N - 1) - K))`. Another way to compute it is to flip the bits of the binary representation of `K`, add `1` and then prepend the sign bit `1`. 27 | 28 | ## Arithmetic vs. Logical Shift 29 | 30 | - In an arithmetic right shift (`>>`), the bits are shifted and the sign bit is put in the MSB. 31 | - In a logical right shift (`>>>`), the bits are shifted and a `0` is put in the MSB (see the example below).
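As a quick sanity check of the two's complement and shift rules above, the following sketch shows what Java prints for a negative 32-bit `int`:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int x = -8; // two's complement: 0xFFFFFFF8

        System.out.println(Integer.toBinaryString(x)); // 11111111111111111111111111111000
        System.out.println(x >> 1);                    // -4 (arithmetic shift copies the sign bit)
        System.out.println(x >>> 1);                   // 2147483644 (logical shift fills the MSB with 0)
    }
}
```

Note that `>>>` on a negative number yields a large positive value, so `>>` is the shift that preserves the sign of a signed integer.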
32 | 33 | ## Common Bit Tasks 34 | 35 | ### Get Bit 36 | 37 | ```java 38 | boolean getBit(int num, int i) { 39 | return ((num & (1 << i)) != 0); 40 | } 41 | ``` 42 | 43 | ### Set Bit 44 | 45 | ```java 46 | int setBit(int num, int i) { 47 | return num | (1 << i); 48 | } 49 | ``` 50 | 51 | ### Clear Bit(s) 52 | 53 | ```java 54 | int clearBit(int num, int i) { 55 | return num & ~(1 << i); 56 | } 57 | 58 | int clearBitsMSBThroughI(int num, int i) { 59 | int mask = (1 << i) - 1; 60 | return num & mask; 61 | } 62 | 63 | int clearBitsIThrough0(int num, int i) { 64 | int mask = (-1 << (i + 1)); 65 | return num & mask; 66 | } 67 | ``` 68 | 69 | ### Toggle Bit 70 | 71 | ```java 72 | int toggleBit(int num, int i) { 73 | return num ^ (1 << i); 74 | } 75 | ``` 76 | 77 | ### Update Bit 78 | 79 | ```java 80 | int updateBit(int num, int i, boolean setBit) { 81 | int value = setBit ? 1 : 0; 82 | int mask = ~(1 << i); 83 | return (num & mask) | (value << i); 84 | } 85 | ``` 86 | 87 | ### Multiply/Divide by 2n 88 | 89 | ```java 90 | num = num << n; // multiply 91 | num = num >> n; // divide 92 | ``` 93 | -------------------------------------------------------------------------------- /8 - Miscellaneous.md: -------------------------------------------------------------------------------- 1 | # Miscellaneous 2 | 3 | ## Time Complexity 4 | 5 | ### Asymptotic Bounds 6 | 7 | > f(n) = O(g(n)) 8 | 9 | - `g(n)` is the asymptotic upper bound of `f(n)`. 10 | 11 | > f(n) = Θ(g(n)) 12 | 13 | - `g(n)` is the asymptotic tight bound of `f(n)`. 14 | 15 | > f(n) = Ω(g(n)) 16 | 17 | - `g(n)` is the asymptotic lower bound of `f(n)`. 18 | 19 | ### Complexity Classes 20 | 21 | - `O(1)`: Constant 22 | - `O(log n)`: Logarithmic 23 | - `O(n)`: Linear 24 | - `O(n log n)`: Loglinear 25 | - `O(n^2)`: Quadratic 26 | - `O(n^c)`: Polynomial 27 | - `O(c^n)`: Exponential 28 | - `O(n!)`: Factorial 29 | 30 | ## Math 31 | 32 | ### Combinatorics 33 | 34 | - `2^(n + 1) - 1`: Sum of powers of two from 1 though `n`. 35 | - `(n (n + 1)) / 2`: Sum of integers from 1 through `n`. 36 | - `(n (n - 1)) / 2`: No. of handshakes in a group. 37 | - `n - 1`: No. of matches in a knockout tournament. 38 | - `2^k`: No. of binary strings of length `k`. 39 | - `n! / ((n - k)!)`: Permutations of `n` items taken `k` at a time. 40 | - `n! / (k! (n - k)!)`: Combinations of `n` items taken `k` at a time. 41 | 42 | ### Probability 43 | 44 | ``` 45 | P(A and B) = P(B | A) P(A) 46 | P(A or B) = P(A) + P(B) - P(A and B) 47 | 48 | P(A and B) = P(A) P(B) // if A & B are independent 49 | P(A or B) = P(A) + P(B) // if A & B are mutually exclusive 50 | 51 | P(A | B) = (P(B | A) P(A)) / P(B) 52 | ``` 53 | 54 | ## Internet 55 | 56 | ### HTTP Methods 57 | 58 | - `GET`: Used to retrieve data, no other effect on the data. 59 | - `POST`: Used to send data to the server (e.g. form). 60 | - `PUT`: Replaces current representation of resource (idempotent). 61 | - `DELETE`: Removes current representation resource. 62 | 63 | ### HTTP Status Codes 64 | 65 | - `200 OK`: Success 66 | - `400 Bad Request`: Syntax could not be understood. 67 | - `401 Unauthorized`: Request not fulfilled due to lack of authorization. 68 | - `403 Forbidden`: Request understood but not fulfilled, authorization will not help. 69 | - `404 Not Found`: URI could not be matched. 70 | - `408 Request Timeout`: Server did not receive a timely response from client. 71 | - `500 Internal Server Error`: Server exception. 72 | - `503 Service Unavailable`: Server unable to handle the request (temporary). 
73 | - `504 Gateway Timeout`: Server did not receive a timely response from an upstream server. 74 | 75 | ## Rabin-Karp Substring Search 76 | 77 | The brute force way to search for a substring of length `b` in a larger string of length `S` takes `O(b (S - b))` time, since the first `S - b + 1` characters are searched in the larger string and for each, the next `b` characters are checked. 78 | 79 | The Rabin-Karp algorithm takes advantage of the fact that two identical strings will have the same hash value. Note that two different strings may also have the same hash value. 80 | 81 | Therefore, using this trick can reduce the search time (in the best case) to `O(S)` since we just need to compute the hash value for every sequence of `b` characters in the larger string and subsequently validate those substrings. 82 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Suyash Lakhotia 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Tech Interviews 2 | 3 | > Here's a cheat sheet to prepare for technical interviews as a CS major. All the best for your interviews! Go get that **dream job**! 4 | 5 | ### Table of Contents 6 | 7 | 1. [Data Structures](1%20-%20Data%20Structures.md) 8 | 2. [Algorithms](2%20-%20Algorithms.md) 9 | 3. [Object Oriented Programming](3%20-%20Object%20Oriented%20Programming.md) 10 | 4. [Design Patterns](4%20-%20Design%20Patterns.md) 11 | 5. [OS Fundamentals](5%20-%20OS%20Fundamentals.md) 12 | 6. [Concurrency in Java](6%20-%20Concurrency%20in%20Java.md) 13 | 7. [Bit Manipulation](7%20-%20Bit%20Manipulation.md) 14 | 8. [Miscellaneous](8%20-%20Miscellaneous.md) 15 | 16 | ### Disclaimers 17 | 18 | If you found a mistake (sorry!) or know a better way of doing/explaining something or if you think I may have missed out something important, please [post an issue](https://github.com/SuyashLakhotia/TechInterview/issues) or [submit a pull request](https://github.com/SuyashLakhotia/TechInterview/pulls)! :blush: 19 | 20 | This repo is an assimilation of knowledge from various sources that I didn't completely keep track of when I was learning. 
So, if you found a resource I might have referenced and paraphrased (or straight up regurgitated) in this repo, please do let me know and I would be happy to add it to the growing list of [references](References.md). 21 | -------------------------------------------------------------------------------- /References.md: -------------------------------------------------------------------------------- 1 | # References 2 | 3 | This repository really doesn't do justice to these great resources and is in no way an alternative to them. Please do check them out to truly understand the concepts covered in this cheat sheet! 4 | 5 | - Computer Science Courses @ Nanyang Technological University, Singapore 6 | - [Cracking the Coding Interview, 6th Edition - Gayle Laakmann McDowell](https://www.amazon.com/Cracking-Coding-Interview-Programming-Questions/dp/0984782850) 7 | - Introduction to Algorithms, 3rd Edition - Cormen et. al 8 | - [Hacking a Google Interview Handouts](http://courses.csail.mit.edu/iap/interview/Hacking_a_Google_Interview_Handout_1.pdf) 9 | - [Big O Cheat Sheet](http://bigocheatsheet.com/) 10 | - [GeeksforGeeks](http://www.geeksforgeeks.org/) 11 | - [TutorialsPoint](http://tutorialspoint.com/) 12 | - [schmatz/cs-interview-guide](https://github.com/schmatz/cs-interview-guide) 13 | - [kdn251/Interviews](https://github.com/kdn251/Interviews) 14 | -------------------------------------------------------------------------------- /assets/Bubble-Sort.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Bubble-Sort.gif -------------------------------------------------------------------------------- /assets/Dijkstra.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Dijkstra.gif -------------------------------------------------------------------------------- /assets/Heap-Sort.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Heap-Sort.gif -------------------------------------------------------------------------------- /assets/Insertion-Sort.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Insertion-Sort.gif -------------------------------------------------------------------------------- /assets/Kruskal.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Kruskal.gif -------------------------------------------------------------------------------- /assets/Merge-Sort.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Merge-Sort.png -------------------------------------------------------------------------------- /assets/Prim.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Prim.gif 
-------------------------------------------------------------------------------- /assets/Quicksort.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Quicksort.gif -------------------------------------------------------------------------------- /assets/Selection-Sort.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SuyashLakhotia/TechInterview/e163f0570fdcf2bcd324934069d5005ac0be9944/assets/Selection-Sort.gif --------------------------------------------------------------------------------