├── 1-intro
│   └── 1-intro.md
├── 10-k-nearest
│   └── 10-k-nearest.md
├── 2-selection-sort
│   ├── 2-selection-sort.md
│   └── 2-selection-sort.py
├── 3-recursion
│   ├── 3-recursion.md
│   └── 3-recursion.py
├── 4-quicksort
│   ├── 4-quicksort.md
│   └── 4-quicksort.py
├── 5-hash-tables
│   ├── 5-hash-tables.md
│   └── 5-hash-tables.py
├── 6-graphs
│   ├── 6-graphs.md
│   └── 6-graphs.py
├── 7-dijkstra
│   ├── 7-dijkstra.py
│   ├── 7.1A-dijkstra.py
│   └── 7.1B-dijkstra.py
├── 8-greedy-algorithms
│   ├── 8-greedy-algorithms.md
│   └── 8-greedy-algorithms.py
├── 9-dynamic-programming
│   └── 9-dynamic-programming.md
└── README.md

/1-intro/1-intro.md:
--------------------------------------------------------------------------------
1.1:
log2(128) = 7
Binary search takes log2(n) steps in the worst case for any list of n elements.

1.2:
log2(256) = 8

1.3:
O(log(n))

1.4:
O(n)

1.5:
O(n)

1.6:
O(n)
Constants are ignored in Big O notation.
--------------------------------------------------------------------------------
/10-k-nearest/10-k-nearest.md:
--------------------------------------------------------------------------------
10.1:
Adjust ratings based on each user's average rating - "normalization".

10.2:
Add more weight to influencers' ratings.

10.3:
Looking at only 5 neighbours is too few when there are millions of users.
--------------------------------------------------------------------------------
/2-selection-sort/2-selection-sort.md:
--------------------------------------------------------------------------------
2.1:
Linked lists.
Arrays have fast reads and slow inserts. Linked lists have slow reads and fast inserts. Because you’ll be inserting more often than reading, it makes sense to use a linked list. Also, linked lists have slow reads only if you’re accessing random elements in the list. Because you’re reading every element in the list, linked lists will do well on reads too.

2.2:
Linked list. Lots of inserts (orders).
Orders are taken in order and not randomly accessed.

2.3:
(Sorted) array. Binary search needs random access.

2.4:
Arrays have slow inserts because elements must be shifted down to keep the array sorted, which binary search requires.

2.5:
An array of linked lists would be slower than an array for searching but faster for inserting, because the elements in the array need not be shifted down.

An array of linked lists would be faster than a linked list for searching but take the _same amount of time_ for inserting. The elements are split across 26 linked lists instead of one big linked list to search through.
--------------------------------------------------------------------------------
/2-selection-sort/2-selection-sort.py:
--------------------------------------------------------------------------------
def findSmallest(arr):
    smallest = arr[0]  # smallest value seen so far
    smallest_index = 0
    for i in range(1, len(arr)):
        if arr[i] < smallest:
            smallest = arr[i]
            smallest_index = i
    return smallest_index

def selectionSort(arr):
    result = []
    for i in range(0, len(arr)):
        smallest = findSmallest(arr)  # repeatedly pull out the smallest remaining element
        result.append(arr.pop(smallest))
    return result

print(selectionSort([5, 3, 6, 2, 10]))
--------------------------------------------------------------------------------
/3-recursion/3-recursion.md:
--------------------------------------------------------------------------------
3.1:

- The greet function is called first with name=maggie
- The greet function calls the greet2 function with name=maggie
- The current function call is greet2 with name=maggie
- The greet function is suspended while greet2 is being called, and will resume after greet2 completes

3.2:
Stack overflow.
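The stack overflow in 3.2 can be demonstrated directly in Python, which raises `RecursionError` when the call stack fills up. A minimal sketch; `greet_forever` and `countdown_iteratively` are made-up helpers, not from the book:

```python
def greet_forever(name):
    # no base case: every call pushes another stack frame,
    # so the call stack grows until Python raises RecursionError
    return greet_forever(name)

def countdown_iteratively(i):
    # the loop version reuses a single stack frame, so it cannot overflow
    while i > 0:
        i -= 1
    return i

try:
    greet_forever("maggie")
except RecursionError:
    print("stack overflow!")
```

Rewriting the recursion as a loop, as in `countdown_iteratively`, is the usual way around the limit.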
--------------------------------------------------------------------------------
/3-recursion/3-recursion.py:
--------------------------------------------------------------------------------
def countdown(i):
    print(i)
    if i <= 0:  # base case
        return
    else:
        countdown(i - 1)  # recursive case

# countdown(3)

def factorial(x):
    if x == 1:  # base case
        return 1
    else:
        return x * factorial(x - 1)  # recursive case

# print(factorial(3))

# factorial(3) calls factorial(2), which calls factorial(1)
# factorial(1) = 1
# factorial(2) = 2 * factorial(1) = 2 * 1 = 2
# factorial(3) = 3 * factorial(2) = 3 * 2 = 6
--------------------------------------------------------------------------------
/4-quicksort/4-quicksort.md:
--------------------------------------------------------------------------------
4.4:
Binary search

X = element you're searching for; C = middle element of the current range
Base case: C = X
Recursive case:
if C < X, call the function on the half of the list after C
if C > X, call the function on the half of the list before C

4.5:
O(n)

4.6:
O(n)

4.7:
O(1)

4.8:
O(n^2)
--------------------------------------------------------------------------------
/4-quicksort/4-quicksort.py:
--------------------------------------------------------------------------------
# 4.1 Sum function
def sum(arr):  # shadows the built-in sum
    if len(arr) == 0:
        return 0
    elif len(arr) == 1:
        return arr[0]
    else:
        return arr[0] + sum(arr[1:])  # slicing avoids mutating arr with pop()

# print(sum([1, 2, 3]))  # 6
# print(sum([7]))  # 7
# print(sum([]))  # 0

# 4.2 Count the number of items in a list
def count(arr):
    if len(arr) == 0:
        return 0
    else:
        return 1 + count(arr[1:])  # [1:] means from the second item to the end

# print(count([1, 2, 3]))
# print(count([]))

# 4.3 Maximum number in a list
def max(arr):  # shadows the built-in max
    if len(arr) == 1:
        return arr[0]
    elif len(arr) == 0:
        raise ValueError("list must contain at least one number!")
    else:
        sub_max = max(arr[1:])
        return sub_max if sub_max > arr[0] else arr[0]

# print(max([1, 5, 2, 0, 100]))
# print(max([]))  # raises ValueError
# print(max([9]))

# Quicksort code
def quicksort(arr):
    if len(arr) < 2:
        return arr
    else:
        pivot = arr[0]
        # new sub-array of the remaining elements that are <= pivot
        less = [i for i in arr[1:] if i <= pivot]
        # new sub-array of the remaining elements that are > pivot
        greater = [i for i in arr[1:] if i > pivot]
        return quicksort(less) + [pivot] + quicksort(greater)

# print(quicksort([2, 1, 109, 20, 40, 10]))
--------------------------------------------------------------------------------
/5-hash-tables/5-hash-tables.md:
--------------------------------------------------------------------------------
5.1: Yes
5.2: No
5.3: No
5.4: Yes
5.5: C, D
5.6: B, D
5.7: B, C, D
--------------------------------------------------------------------------------
/5-hash-tables/5-hash-tables.py:
--------------------------------------------------------------------------------
voted = {}

def check_voter(name):
    if voted.get(name):
        print("kick them out!")
    else:
        voted[name] = True
        print("let them vote!")
--------------------------------------------------------------------------------
/6-graphs/6-graphs.md:
--------------------------------------------------------------------------------
6.1: 2
6.2: 2

6.3:
A = invalid
B = valid
C = invalid

6.4:

1. Wake up
2. Pack lunch
3. Brush teeth
4. Eat breakfast
5. Exercise
6. Shower
7. Get dressed

6.5: A and C

# graph implementation in python

1. keep a queue of nodes to check
2. pop a node off the queue
3. check if the node fulfils the condition
4. yes? done
5. no? add the node's neighbours to the queue
6. loop
7. if the queue is empty, the condition can't be met
--------------------------------------------------------------------------------
/6-graphs/6-graphs.py:
--------------------------------------------------------------------------------
from collections import deque  # a deque from the standard library works as a queue

# create graph
graph = {}
graph["you"] = ["alice", "bob", "claire"]
graph["bob"] = ["anuj", "peggy"]
graph["alice"] = ["peggy"]
graph["claire"] = ["thom", "jonny"]
graph["anuj"] = []
graph["peggy"] = []
graph["thom"] = []
graph["jonny"] = []

def is_seller(person):
    return person[-1] == "m"

def search(name):
    # initialize queue
    search_queue = deque()
    search_queue += graph[name]

    # keep track of people already searched
    searched = []

    # BFS
    while search_queue:
        person = search_queue.popleft()  # pop the first item (like shift in JS)
        if person not in searched:
            if is_seller(person):
                print(person + " is a mango seller!")
                return True
            else:
                search_queue += graph[person]
                searched.append(person)
    return False

search("you")
--------------------------------------------------------------------------------
/7-dijkstra/7-dijkstra.py:
--------------------------------------------------------------------------------
# init graph
graph = {}
graph["start"] = {}  # init start node
graph["start"]["a"] = 6  # edge from start to a, cost of 6
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["fin"] = 1
graph["b"] = {}
graph["b"]["a"] = 3
graph["b"]["fin"] = 5
graph["fin"] = {}

# init costs table
# tracks the lowest known cost to reach each node
# if unknown, use infinity
infinity = float("inf")  # infinity in python
costs = {}
costs["a"] = 6
costs["b"] = 2
costs["fin"] = infinity

# init parents table
# used to trace the final route backwards
parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["fin"] = None

# init array to keep track of processed nodes
processed = []

# function to find the lowest-cost unprocessed node
def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node

# Dijkstra's Algorithm
node = find_lowest_cost_node(costs)  # find the lowest-cost unprocessed node --> b
while node is not None:  # while there is a remaining node to be processed
    cost = costs[node]  # costs["b"] = 2
    neighbours = graph[node]  # graph["b"].keys() --> ["a", "fin"]
    for n in neighbours.keys():
        # add the cost of getting to b (2)
        # and the cost of getting from b to the neighbour (fin)
        # e.g. 2 + graph["b"]["fin"] = 7
        new_cost = cost + neighbours[n]
        if costs[n] > new_cost:  # if the existing cost of reaching n > new cost
            costs[n] = new_cost  # record the new, lower cost
            parents[n] = node  # update the parent node
    processed.append(node)  # add node to the processed list
    node = find_lowest_cost_node(costs)  # find the next lowest-cost node

print(costs["fin"])

# 7.1
# A: shortest weight is 8
# B: shortest weight is 60
# C: Dijkstra's Algorithm cannot be used with negative-weight edges
--------------------------------------------------------------------------------
/7-dijkstra/7.1A-dijkstra.py:
--------------------------------------------------------------------------------
# A: shortest weight is 8
# init graph
graph = {}
graph["start"] = {}  # init start node
graph["start"]["a"] = 5  # edge from start to a, cost of 5
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["d"] = 4
graph["a"]["c"] = 2
graph["b"] = {}
graph["b"]["a"] = 8
graph["b"]["c"] = 7
graph["c"] = {}
graph["c"]["fin"] = 1
graph["d"] = {}
graph["d"]["c"] = 6
graph["d"]["fin"] = 3
graph["fin"] = {}

# init costs table
# tracks the lowest known cost to reach each node
# if unknown, use infinity
infinity = float("inf")  # infinity in python
costs = {}
costs["a"] = 5
costs["b"] = 2
costs["c"] = infinity
costs["d"] = infinity
costs["fin"] = infinity

# init parents table
# used to trace the final route backwards
parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["c"] = None
parents["d"] = None
parents["fin"] = None

# init array to keep track of processed nodes
processed = []

# function to find the lowest-cost unprocessed node
def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node

# Dijkstra's Algorithm
node = find_lowest_cost_node(costs)  # find the lowest-cost unprocessed node --> b
while node is not None:  # while there is a remaining node to be processed
    cost = costs[node]
    neighbours = graph[node]
    for n in neighbours.keys():
        new_cost = cost + neighbours[n]
        if costs[n] > new_cost:  # if the existing cost of reaching n > new cost
            costs[n] = new_cost  # record the new, lower cost
            parents[n] = node  # update the parent node
    processed.append(node)  # add node to the processed list
    node = find_lowest_cost_node(costs)  # find the next lowest-cost node

print(costs["fin"])
--------------------------------------------------------------------------------
/7-dijkstra/7.1B-dijkstra.py:
--------------------------------------------------------------------------------
# B: shortest weight is 60
# init graph
graph = {}
graph["start"] = {}  # init start node
graph["start"]["a"] = 10  # edge from start to a, cost of 10
graph["a"] = {}
graph["a"]["b"] = 20
graph["b"] = {}
graph["b"]["fin"] = 30
graph["b"]["c"] = 1
graph["c"] = {}
graph["c"]["a"] = 1
graph["fin"] = {}

# init costs table
# tracks the lowest known cost to reach each node
# if unknown, use infinity
infinity = float("inf")  # infinity in python
costs = {}
costs["a"] = 10
costs["b"] = infinity
costs["c"] = infinity
costs["fin"] = infinity

# init parents table
# used to trace the final route backwards
parents = {}
parents["a"] = "start"
parents["b"] = None
parents["c"] = None
parents["fin"] = None

# init array to keep track of processed nodes
processed = []

# function to find the lowest-cost unprocessed node
def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node

# Dijkstra's Algorithm
node = find_lowest_cost_node(costs)  # find the lowest-cost unprocessed node --> a
while node is not None:  # while there is a remaining node to be processed
    cost = costs[node]
    neighbours = graph[node]
    for n in neighbours.keys():
        new_cost = cost + neighbours[n]
        if costs[n] > new_cost:  # if the existing cost of reaching n > new cost
            costs[n] = new_cost  # record the new, lower cost
            parents[n] = node  # update the parent node
    processed.append(node)  # add node to the processed list
    node = find_lowest_cost_node(costs)  # find the next lowest-cost node

print(costs["fin"])
--------------------------------------------------------------------------------
/8-greedy-algorithms/8-greedy-algorithms.md:
--------------------------------------------------------------------------------
8.1:
Greedy: Pick the biggest box that will fit in the remaining space at each stage. Stop when there is no more space.

Not optimal: you may only have space for one box.

8.2:
Greedy: Pick the highest-priority item that will fit into the remaining time at each stage. Stop when there is no more time.

Not optimal because you may only have time for one item.

8.3:
Quicksort: not greedy; picking a pivot is not picking the locally optimal solution

8.4:
BFS: not greedy; at each stage the node's children are added to the queue in any order

8.5:
Dijkstra: greedy; at each stage pick the lowest-cost / closest node

8.6:
Yes, similar to the traveling salesman problem.

8.7:
Yes, set-covering problem.

8.8:
Yes, sequence of states.
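The "not optimal" answers in 8.1 and 8.2 are easy to demonstrate in code. A minimal sketch of the box-packing greedy from 8.1; `pack_greedy` is a made-up helper, and the numbers stand in for box volumes:

```python
def pack_greedy(capacity, boxes):
    # at each stage take the biggest box that still fits
    packed = []
    remaining = capacity
    for box in sorted(boxes, reverse=True):
        if box <= remaining:
            packed.append(box)
            remaining -= box
    return packed

# greedy grabs the 7-unit box and then nothing else fits,
# while [5, 5] would have filled the whole 10 units
print(pack_greedy(10, [7, 5, 5]))  # [7]
```

The same shape of counterexample works for the itinerary in 8.2: one big high-priority item can crowd out two smaller items worth more together.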
--------------------------------------------------------------------------------
/8-greedy-algorithms/8-greedy-algorithms.py:
--------------------------------------------------------------------------------
# Problem: figure out the smallest set of stations you can play on to cover
# all the states you need (all 50 in the book; 8 in this example)
# Optimal solution O(2^n):
## List every possible subset of stations, pick the smallest set that covers all the states.
# Greedy solution O(n^2):
## Pick the station that covers the most uncovered states; repeat until all states are covered.

# Init set of states to cover
states_needed = set(["mt", "wa", "or", "id", "nv", "ut", "ca", "az"])

# Init hash map of stations
stations = {}
stations["kone"] = set(["id", "nv", "ut"])
stations["ktwo"] = set(["wa", "id", "mt"])
stations["kthree"] = set(["or", "nv", "ca"])
stations["kfour"] = set(["nv", "ut"])
stations["kfive"] = set(["ca", "az"])

# Init set of final stations
final_stations = set()

# Find the best station (covers the most uncovered states) at each step
while states_needed:  # iterate until all states are covered
    best_station = None
    states_covered = set()
    for station, states in stations.items():  # e.g. station = "kfive"; states = {"ca", "az"}
        covered = states_needed & states  # set intersection
        if len(covered) > len(states_covered):
            best_station = station
            states_covered = covered
    states_needed -= states_covered
    final_stations.add(best_station)  # add the best station from each iteration to the final set

print(final_stations)  # {'kfive', 'kthree', 'kone', 'ktwo'}
--------------------------------------------------------------------------------
/9-dynamic-programming/9-dynamic-programming.md:
--------------------------------------------------------------------------------
9.1:
No, the MP3 player is worth less than the guitar and weighs the same.

| | 1lb | 2lbs | 3lbs | 4lbs |
| ----------------- | ----- | ----- | ----- | --------------------- |
| Guitar (1lb) | $1500 | $1500 | $1500 | $1500 |
| Stereo (4lbs) | $1500 | $1500 | $1500 | $3000 |
| Laptop (3lbs) | $1500 | $1500 | $2000 | $2000 + $1500 = $3000 |
| MP3 Player (1lb) | $1500 | $1500 | $2000 | $2000 + $1500 = $3000 |

9.2:
Camera + Food + Water

| | 1lb | 2lbs | 3lbs | 4lbs | 5lbs | 6lbs |
| ----------------- | --- | ---- | ------ | ---- | ------ | ------ |
| Water (3lbs) (10) | 0 | 0 | **10** | 10 | 10 | 10 |
| Book (1lb) (3) | 3 | 3 | 10 | 13 | 13 | 13 |
| Food (2lbs) (9) | 3 | 9 | 12 | 13 | **19** | 19 |
| Jacket (2lbs) (5) | 3 | 9 | 12 | 14 | 19 | 19 |
| Camera (1lb) (6) | 6 | 9 | 15 | 18 | 20 | **25** |

9.3:
Longest common substring (not subsequence!)

| | C | L | U | E | S |
| --- | --- | --- | --- | --- | --- |
| B | 0 | 0 | 0 | 0 | 0 |
| L | 0 | 1 | 0 | 0 | 0 |
| U | 0 | 0 | 2 | 0 | 0 |
| E | 0 | 0 | 0 | 3 | 0 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Big O Notation

- O(1): constant time
- O(n): touch every element in a list once

# Arrays vs. Linked Lists

- Arrays allow fast reads and random access.
- Linked lists allow fast inserts and deletes but only sequential access.

# Selection sort

- O(n^2): going through a list of n items n times
- E.g. pushing the most-played songs into a new sorted array
- Repeatedly finds the min/max element and puts it at the beginning

# Recursion

- A function that calls itself
- Needs a recursive case and a base case (to avoid stack overflow)
- May take up a lot of memory: every function call goes onto the call stack

# Quicksort

- O(n log n): average case (stack size is O(log n))
- O(n^2): worst case (stack size is O(n))
- Pick a _random_ pivot and recursively call quicksort on the two sub-arrays around it
- Faster than merge sort (smaller constant)

# Merge sort

- Divide and conquer
- O(n log n)

# Hash tables

- O(1) average-case reads, inserts, and deletes
- Takes up extra space

# Graphs

- Consist of nodes and edges
- Can use BFS if unweighted
- Topological sort: turns a graph into an ordered list; if A depends on B, B comes first
- Directed graph: unilateral edge/relationship
- Undirected graph: bilateral edge/relationship (both directions, i.e. a cycle)

# Breadth-First Search (BFS)

- Checks whether a path exists and finds the shortest path (smallest number of nodes) in an unweighted graph
- Runtime is O(V+E)
- The queue operations are O(1) each, O(V) in total; following every edge is O(E)

# Stacks

- First in, LAST out
- Push and pop
- E.g. the matching-brackets problem

# Queues

- First in, FIRST out
- Enqueue and dequeue

# Trees

- Basically a graph where all edges point one way only and there are no cycles, e.g. a family tree

# Dijkstra's Algorithm

- Finds the fastest path in a weighted graph
- Works on directed graphs, including ones with cycles
- Cannot be used with negative-weight edges (use Bellman-Ford instead)

# Greedy Algorithms

- At each step, pick the locally optimal solution, hoping to end up with the globally optimal solution (or a good approximation of it)
- E.g. the scheduling problem: pick the event that ends the earliest, then the event that starts after it and ends the earliest, and so on
- Good for approximation algorithms, when calculating the exact optimal solution would take too much time

# NP-complete problems

- Problems with no known fast algorithmic solution
- How to tell if a problem might be NP-complete:
  1. "All combinations" or "every possible version"
  2. Involves a sequence or a set
  3. Can be restated as the 'traveling salesman' or 'set-covering' problem
  4. Runs quickly with few items but really slowly with more items

# Dynamic programming

- The problem can be broken into subproblems
- Involves a grid where each cell is a subproblem

## Longest common substring

| | C | L | U | E | S |
| --- | --- | --- | --- | --- | --- |
| B | 0 | 0 | 0 | 0 | 0 |
| L | 0 | 1 | 0 | 0 | 0 |
| U | 0 | 0 | 2 | 0 | 0 |
| E | 0 | 0 | 0 | 3 | 0 |

## Longest common subsequence

| | F | I | S | H |
| --- | --- | --- | --- | --- |
| F | 1 | 1 | 1 | 1 |
| O | 1 | 1 | 1 | 1 |
| S | 1 | 1 | 2 | 2 |
| H | 1 | 1 | 2 | 3 |

# K-nearest neighbours

- Used for classification and regression
- Graph elements by their features and look at the k nearest neighbours by distance or cosine similarity
- Machine-learning applications like Netflix recommendations, OCR, spam filters
- Rule of thumb: with N users, look at k = sqrt(N) neighbours

# 10 more algorithms to explore

1. Trees: B-trees, red-black trees, heaps, splay trees
2. Inverted indexes
3. Fourier transform
4. Parallel algorithms
5. MapReduce
6. Bloom filters and HyperLogLog
7. Secure hash algorithms (SHA)
8. Locality-sensitive hashing
9. Diffie-Hellman key exchange
10. Linear programming
--------------------------------------------------------------------------------
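The longest-common-substring grid in the README maps directly to code. A minimal sketch; the function name `longest_common_substring` is mine, not from the notes:

```python
def longest_common_substring(a, b):
    # cell[i][j] = length of the common substring ending at a[i-1] and b[j-1]
    cell = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # on a match, extend the diagonal neighbour by one
                cell[i][j] = cell[i - 1][j - 1] + 1
                best = max(best, cell[i][j])
    return best

print(longest_common_substring("blue", "clues"))  # 3 ("lue")
```

For the longest common _subsequence_, mismatching cells would copy max(left, up) instead of staying 0, which is why the FISH/FOSH grid never resets.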