├── Week 5 ├── Images │ ├── Graph-Tree.jpg │ ├── Fibonacci-Tree-1.png │ └── Fibonacci-Tree-2.png ├── 2. Intermediate DP.md ├── 3. Introduction to Graphs.md ├── 4. Graph Traversal.md └── 1. Introduction to DP.md ├── Week 1 ├── Images │ ├── rating-groups.png │ └── time-complexity-examples.png ├── 1. Introduction.md ├── 2. Codeforces.md ├── 4. Time Complexity.md ├── 3. Sample Problem.md ├── 5. C++ Quickstart.md └── 6. Basic Math.md ├── Week 3 ├── Images │ └── 2d-prefix-sum.png ├── 2. Prefix Sums.md ├── 3. Bit Manipulation.md └── 1. Two Pointers.md ├── Week 2 ├── Images │ ├── load-imbalance-1.png │ ├── load-imbalance-2.png │ └── load-imbalance-3.png ├── 2. Frequency Table.md ├── 1. Brute Force.md └── 3. Sorting & Greedy Algorithms.md ├── Week 4 ├── Images │ └── bisection-simulation.png ├── 3. Intermediate Math.md ├── 2. Binary Search.md └── 1. Data Structures.md ├── README.md └── Resources.md /Week 5/Images/Graph-Tree.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 5/Images/Graph-Tree.jpg -------------------------------------------------------------------------------- /Week 1/Images/rating-groups.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 1/Images/rating-groups.png -------------------------------------------------------------------------------- /Week 3/Images/2d-prefix-sum.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 3/Images/2d-prefix-sum.png -------------------------------------------------------------------------------- /Week 2/Images/load-imbalance-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 2/Images/load-imbalance-1.png -------------------------------------------------------------------------------- /Week 2/Images/load-imbalance-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 2/Images/load-imbalance-2.png -------------------------------------------------------------------------------- /Week 2/Images/load-imbalance-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 2/Images/load-imbalance-3.png -------------------------------------------------------------------------------- /Week 5/Images/Fibonacci-Tree-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 5/Images/Fibonacci-Tree-1.png -------------------------------------------------------------------------------- /Week 5/Images/Fibonacci-Tree-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 5/Images/Fibonacci-Tree-2.png -------------------------------------------------------------------------------- /Week 4/Images/bisection-simulation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 4/Images/bisection-simulation.png 
-------------------------------------------------------------------------------- /Week 1/Images/time-complexity-examples.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/crux-bphc/CC-Summer-Group-2023/HEAD/Week 1/Images/time-complexity-examples.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # cruX Summer Group 2023 2 | 3 | Find all the material regarding cruX's Summer Group 2023 for competitive coding here. 4 | 5 | ## Introduction 6 | 7 | The summer series is targeted at beginners and aims to build up conceptual knowledge in competitive programming from the basics. 8 | 9 | Make sure to join the [Discord server](https://discord.gg/xs54Pew4C6) and the [Facebook group](https://www.facebook.com/groups/BPHCCompetitiveCoding/) if you haven't already. 10 | -------------------------------------------------------------------------------- /Resources.md: -------------------------------------------------------------------------------- 1 | # Additional Resources 2 | 3 | For exploring competitive coding beyond what we've covered here in this short timeframe, here are some additional resources that will help you a lot: 4 | - [Competitive Programmer’s Handbook](https://cses.fi/book/book.pdf) - one of the best and most concise books to start out with 5 | - [CP-Algorithms](https://www.cp-algorithms.com) - a great website covering most well-known algorithms 6 | - [Principles of Algorithmic Problem Solving](http://www.csc.kth.se/~jsannemo/slask/main.pdf) - longer and more detailed book, covers some more algorithms in greater depth, best used as a reference 7 | - [Codeforces EDU](https://codeforces.com/edu/courses) has a few courses on specific topics while [Codeforces Catalog](https://codeforces.com/catalog) is a good place to look for blogs explaining a certain topic. 8 | 9 | 10 | Another great website is [USACO Guide](https://usaco.guide/), which has structured topic-wise explanations along with example problems and links to further resources. This structured path might take time but will definitely pay off in the long run. 11 | 12 | 13 | Here are the major platforms that hold regular CC competitions: 14 | - [Codeforces](https://codeforces.com) 15 | - [Codechef](https://www.codechef.com) 16 | - [Leetcode](https://leetcode.com/contest/) 17 | - [Atcoder](https://atcoder.jp/) 18 | -------------------------------------------------------------------------------- /Week 1/1. Introduction.md: -------------------------------------------------------------------------------- 1 | # Introduction to Competitive Coding 2 | 3 |

This section provides basic information about competitive coding.

4 |

If you are already familiar with what competitive coding is, proceed to the next section.

5 | 6 | ## What is Competitive Coding? 7 | 8 |

Competitive coding is a mind sport which involves coding efficient solutions to logical problems.

9 |

Participating teams / individuals compete with each other to solve the most problems in the least amount of time.

10 | 11 | ## Why do I need to learn CC? 12 | 13 |

Competitive coding is quite popular among college students primarily because IT companies ask CC-related questions in the coding rounds of their recruitment process.

14 |

However, it is not necessary that you learn CC just to boost your placement prospects.

15 |

CC improves your logical skills as it encourages you to think of different approaches to solve a particular problem.

16 |

This in turn makes you a better programmer in general.

17 | 18 | ## What are the best programming languages for CC? 19 | 20 |

Major competitive programming platforms generally let you submit solutions written in most major programming languages.

21 |

However, C++ is by far the most popular programming language used for CC.

22 |

As C++ is closer to low-level programming languages than languages like Java and Python, C++ programs take less time to execute, making it ideal for CC.

23 |

C++ also offers a wide range of functions through its Standard Template Library (STL), which you will learn about over the course of the workshop.
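
For a small taste of why the STL is so convenient, here is a minimal sketch (the specific functions shown are standard C++ and are covered later in the workshop):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    cin >> n;
    vector<int> a(n);           // dynamic array from the STL (assumes n >= 1)
    for (auto &x : a)
        cin >> x;
    sort(a.begin(), a.end());   // O(n log n) sort, no need to write it yourself
    cout << a[0] << " " << a[n - 1] << "\n";   // smallest and largest element
    return 0;
}
```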

24 |

That being said, it is okay if you start CC with another language like Python or Java for the time being if you are not familiar with C++ yet.

25 | 26 | ## What is the ICPC? 27 | 28 |

The International Collegiate Programming Contest (ICPC) is an annual competitive coding competition for university students from around the world.

29 |

The contest consists of multiple rounds held at different levels, starting with the regionals and culminating in the world finals.

30 |

Teams from BPHC have put up strong performances in the ICPC.

31 |

At the ICPC 2021 World Finals, the team from BPHC consisting of Ashish Gupta, Mahir Shah and Kunal Verma finished rank 61 in the world.

32 |

They were among the top 7 Indian teams that competed at the finals.

33 |

The team from BPHC, consisting of Hriday Gajulapalli, Jeevan Jyot Singh and Pranav Rajagopalan, will compete at the upcoming ICPC 2023 World Finals.

34 | 35 | ## What are the most popular websites for CC? 36 | 37 |

There are several websites that hold competitive coding contests and provide problems to practice on.

38 |

Here is a list of the most popular ones:

39 | 40 | 1. [Codeforces](https://codeforces.com/) 41 | 2. [CodeChef](https://www.codechef.com/) 42 | 3. [CSES](https://cses.fi/) 43 | 4. [AtCoder](https://atcoder.jp/) 44 | 5. [LeetCode](https://leetcode.com/) 45 | 46 |

Codeforces is generally regarded as the most popular competitive coding platform.

47 |

It is known for holding contests quite frequently and for its extensive problemset.

48 |

In the next section, we will look at Codeforces in detail.

49 | -------------------------------------------------------------------------------- /Week 1/2. Codeforces.md: -------------------------------------------------------------------------------- 1 | # Codeforces 2 | 3 |

This section goes into detail about Codeforces, which is the most popular website for competitive coding.

4 | 5 | ## What is Codeforces? 6 | 7 |

Codeforces is a Russian competitive coding platform.

8 |

It is maintained by a team of competitive coders from ITMO University led by Mike Mirzayanov.

9 |

It is widely regarded as the most popular platform for CC.

10 | 11 | ## User rating 12 | 13 |

Codeforces users are ranked according to their user rating.

14 |

The rating system used in Codeforces is similar to the Elo rating system used in chess.

15 |

On the basis of rating, users are classified into different rating groups.

16 | 17 | 18 | 19 | ## Contests on Codeforces 20 | 21 |

A typical Codeforces contest consists of 6-8 problems.

22 |

Users are ranked on the basis of the number of problems they solve, the amount of time taken to solve them, and the number of incorrect submissions they made.

23 |

The exact way in which these metrics are taken into account depends on the contest.

24 |

Contests on Codeforces are primarily of four types - Division 1, Division 2, Division 3 and Division 4.

25 | 26 | 1. Division 1 contests are for users with a rating greater than or equal to 1900. 27 | 2. Division 2 contests are for users with a rating below 1900. 28 | 3. Division 3 contests are for users with a rating below 1600. 29 | 4. Division 4 contests are for users with a rating below 1400. 30 | 31 | ## Problems on Codeforces 32 | 33 |

In addition to hosting contests, Codeforces allows users to practice problems from previous ones.

34 |

Each problem on Codeforces has a 'rating' which is roughly indicative of its difficulty.

35 |

The easiest problems on Codeforces are rated 800 while the toughest are rated 3500.

36 |

Problems on Codeforces are also tagged by the topics on which they are based.

37 |

The Codeforces problemset page allows you to sort problems by rating and by the number of users that have solved them, and also filter by rating and tags.

38 | 39 | ## Problem Verdicts 40 | 41 | After you submit your source code to a problem, the judging system will run your code for a large number of inputs (test cases) and verify if the output matches the correct output. There are various messages (verdicts) that the judging system may display: 42 | - **Accepted (AC):** 43 | 44 | Congratulations, you've solved the problem! 45 | 46 | - **Wrong Answer (WA):** 47 | 48 | Your code's output was not right for one or more test cases. You should try to debug your code to see what's going wrong. 49 | 50 | - **Run Time Error (RTE):** 51 | 52 | Your code crashed while executing one or more test cases. You should try to debug your code by checking for situations where it might crash. 53 | 54 | - **Time Limit Exceeded (TLE):** 55 | 56 | Your code took too long to execute during one or more test cases. Your code may or may not be right and you should try to think of a more optimal solution. 57 | 58 | - **Memory Limit Exceeded (MLE):** 59 | 60 | Your code consumed too much memory while executing one or more test cases. Try to think of a more memory efficient solution. 61 | 62 | - **Compile Error (CE):** 63 | 64 | Your code failed to compile. You should check for syntax errors in your code. 65 | -------------------------------------------------------------------------------- /Week 1/4. Time Complexity.md: -------------------------------------------------------------------------------- 1 | # Time Complexity 2 | 3 | _In this section, we will discuss time complexity, the concept behind evaluating the efficiency of an algorithm._ 4 | 5 | ## Big O notation 6 | 7 | Big O notation is a mathematical notation that is used to represent how a particular function changes with respect to change in one of its independent variables. 8 | 9 | It shows the asymptotic behaviour of the function, which is when the independent variables tend to infinity. 10 | 11 | Let us take the example of the function $f(x) = 5x$. 12 | 13 | As the change in $f(x)$ is directly proportional to the change in $x$, $f(x)$ is said to be linear with respect to $x$. 14 | 15 | The big O notation for such functions is $O(x)$. 16 | 17 | Similarly, the big O notation for function $f(x) = 3x^2$ is $O(x^2)$. 18 | 19 | Note that the big O notation only contains information about the power of $x$ and not the constants it is multiplied by. 20 | 21 | Now, let us take the function $f(x) = x^2 + 5x$. 22 | 23 | When a function contains multiple terms, only the term which increases fastest with respect to the independent variable is considered. 24 | 25 | Thus, the time complexity of this function is also $O(x^2)$. 26 | 27 | 28 | ## How do I use big O notation? 29 | 30 | Big O notation is helpful in CC as it helps determine the efficiency of an algorithm. 31 | 32 | First, we think of the amount of time taken by an algorithm to run as a function of an input-dependent value $n$. 33 | 34 | This function is roughly proportional to the number of operations performed by an algorithm. 35 | 36 | Thus, if we can find out how the number of operations performed by our an algorithm scales with the input-dependent value, we can determine the time complexity of the algorithm. 37 | 38 | For each time complexity, there is a set of input values which can execute within the time constraint. 39 | 40 | Therefore, we must first look at the input constraints and then choose an appropriate algorithm accordingly. 
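
For example, here is a rough sketch (with made-up array contents) of how the operation count grows for a single loop versus two nested loops over the same input size:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n = 1000;
    vector<int> a(n, 1);

    // single loop: about n operations in total, so O(n)
    long long sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];

    // nested loops: about n * n operations in total, so O(n^2)
    long long equalPairs = 0;
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            if (a[i] == a[j])
                equalPairs++;
        }
    }

    cout << sum << " " << equalPairs << "\n";
    return 0;
}
```

With $n = 1000$ the nested loops already perform about a million operations; with $n = 10^5$ they would need about $10^{10}$, while the single loop would still finish almost instantly.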
41 | 42 | A general rule-of-thumb is that an average computer can execute around $10^7$ to $10^8$ elementary operations in one second. 43 | 44 | ![Time complexity](Images/time-complexity-examples.png) 45 | 46 | ## Common time complexities 47 | 48 | Let us take a look at some of the most common time complexities and examples of algorithms for each. 49 | 50 | Don't worry about not understanding the derivation of the big O notation for each algorithm at this point, you will eventually get a hang of it as you solve CC problems. 51 | 52 | ### $O(1)$ - constant time 53 | 54 | Adding two numbers takes constant time, as if the numbers are not extremely large, this involves a single addition operation. 55 | 56 | ### $O(log n)$ - logarithmic time 57 | 58 | Binary search takes logarithmic time, as during each step of the algorithm, the length of the array to be searched is halved. 59 | 60 | ### $O(n)$ - linear time 61 | 62 | Finding the maximum number in an array takes linear time, as performing this task requires processing each element in the array. 63 | 64 | ### $O(n log n)$ - log-linear time 65 | 66 | Merge sort, one of the most efficient sorting algorithm, takes log-linear time. 67 | 68 | ### $O(n^2)$ - quadratic time 69 | 70 | The algorithm written in the previous section takes quadratic time as it consists of two nested input-dependent for loops. 71 | 72 | ### $O(2^n)$ - exponential time 73 | 74 | A naive recursive implementation of finding a particular term in the Fibonacci sequence takes exponential time, as you need the previous two terms to calculate a specific term. 75 | 76 | ### $O(n!)$ - factorial time 77 | 78 | An algorithm that processes every possible permutation of an array of numbers takes factorial time. 79 | 80 | ## Links to external resources 81 | 82 | 1. [Big O notation in 100 seconds](https://www.youtube.com/watch?v=g2o22C3CRfU) 83 | 2. [GeeksForGeeks article on time and space complexity](https://www.geeksforgeeks.org/time-complexity-and-space-complexity/) 84 | 3. [Sorting algorithms compared using big O notation](https://www.youtube.com/watch?v=kgBjXUE_Nwc) 85 | 4. [Big O notation for coding interviews](https://www.youtube.com/watch?v=BgLTDT03QtU) 86 | -------------------------------------------------------------------------------- /Week 1/3. Sample Problem.md: -------------------------------------------------------------------------------- 1 | # A sample CC problem 2 | 3 |

In this section, we will describe the structure of a typical CC problem with an example, and provide its solution at the end.

4 | 5 | ## Structure of a typical CC problem 6 | 7 |

A CC problem typically consists of five parts.

8 | 9 | 1. Problem statement 10 | 2. Description of input 11 | 3. Description of output 12 | 4. Sample input 13 | 5. Sample output 14 | 15 | ## Problem statement 16 | 17 | >

You are given a number n.

18 | >

You are required to find the nth prime number.

19 | >

For example, if n = 1, you must output 2 (as the 1st prime number is 2).

20 | >

Similarly, if n = 4, you must output 7 (as the 4th prime number is 7).

21 | >

How do you solve this problem?

22 | 23 |

In this example, the problem statement is quite direct and the problem at hand is explained clearly.

24 |

However, this is not the case with Codeforces problem statements in general.

25 |

Codeforces problem statements are generally accompanied by a story, and it is up to the problem solver to gauge what the problem is really asking for.

26 | 27 | ## Description of input and output 28 | 29 | >

The first line of input consists of a single integer t - the number of testcases.

30 | >

The following t lines each consist of a single integer n (1 ≤ n ≤ 1000).

31 | 32 |

The input provided to a problem always follows a well-defined format, which is described following the problem statement.

33 |

Each input value is accompanied by its constraints, which will come in handy while designing your algorithm, as you will see in the next section.

34 | 35 | >

The output should consist of t lines, each line containing the answer to its corresponding testcase.

36 | >

The answer to each testcase consists of a single integer - the nth prime number.

37 | 38 |

Similarly, the user is expected to output the answer to the problem in a particular format.

39 | 40 | ## Solution 41 | 42 |

The simplest approach to solving this problem is to check whether each number, starting from 1, is a prime or not.

43 |

A separate variable will keep track of the number of primes we have found.

44 |

When the value of this variable reaches n, we stop, since that means we have found the nth prime number.

45 |

The question now is: how do you check if a number x is prime?

46 |

We will count how many numbers from 1 to x divide x (this is the number of factors of x); if that count is exactly 2, x is prime.

47 | 48 |
49 | C++ Implementation 50 | 51 | ```cpp 52 | #include 53 | using namespace std; 54 | 55 | void solve() 56 | { 57 | int n; 58 | cin >> n; 59 | int x = 1, count = 0; 60 | 61 | while (count < n) 62 | { 63 | int factors = 0; 64 | for (int i = 1; i <= x; i++) 65 | { 66 | if (x % i == 0) 67 | factors++; 68 | } 69 | 70 | if (factors == 2) 71 | count++; 72 | 73 | if (count == n) 74 | cout << x << endl; 75 | else 76 | x++; 77 | } 78 | } 79 | 80 | int main() 81 | { 82 | int t; 83 | cin >> t; 84 | for (int i = 0; i < t; i++) 85 | solve(); 86 | return 0; 87 | } 88 | ``` 89 |
90 | 91 |
92 | Python Implementation 93 | 94 | ```py 95 | def solve(): 96 | n = int(input()) 97 | x = 1 98 | count = 0 99 | 100 | while count < n: 101 | factors = 0 102 | for i in range(1, x + 1): 103 | if x % i == 0: 104 | factors += 1 105 | 106 | if factors == 2: 107 | count += 1 108 | 109 | if count == n: 110 | print(x) 111 | else: 112 | x += 1 113 | 114 | t = int(input()) 115 | for i in range(t): 116 | solve() 117 | ``` 118 |
119 | 120 |

Do note that this is not the most efficient way to solve this problem.

121 |

This is because the nested loops will result in the program taking a large amount of time to execute when n is large.

122 |

In competitive coding, for tougher problems, simple approaches (although correct) may not work due to constraints on time of execution and memory.

123 |

As you become better at competitive coding, you should find better and more efficient ways to solve a given problem.
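
For instance, one common improvement (a sketch of just the primality check, not a full rewrite) is to test divisors only up to the square root of x, since any factor larger than the square root must be paired with one smaller than it:

```cpp
// returns true if x is prime; checks divisors only up to sqrt(x)
bool isPrime(int x)
{
    if (x < 2)
        return false;
    for (int i = 2; (long long)i * i <= x; i++)
    {
        if (x % i == 0)
            return false;
    }
    return true;
}
```

Replacing the factor-counting loop with this check reduces the work per candidate number from roughly x operations to roughly sqrt(x) operations.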

124 |

In the next section, we will discuss time complexity, which is an essential tool in helping you evaluate the efficiency of your algorithms.

125 | -------------------------------------------------------------------------------- /Week 2/2. Frequency Table.md: -------------------------------------------------------------------------------- 1 | # Frequency Table 2 | 3 | Frequency tables are generally used to keep track of how many times a certain element occurs within a set of elements. 4 | 5 | To calculate this, we can use a `vector` `cnt` where `cnt[i] = number of occurences of i`. We can compute the `cnt1` values with a basic for loop: 6 | 7 | ```cpp 8 | for (auto ele : arr) 9 | cnt[ele]++; 10 | ``` 11 | 12 | The only disadvantage of this approach is that the size of the `cnt` array must be greater than the maximum element, which can be a problem when the elements are upto $10^9$. 13 | 14 | Also, it should be obvious that a lot of space is wasted since most indices of `cnt` will hold zero (especially when its size is large). 15 | 16 |
17 | 18 | ## `map` 19 | 20 | To help with this, C++ has a `map` data structure, which allows you to **store any type of key-value pair in $O(n)$ space**, where $n$ is the numnber of key-value pairs. 21 | 22 | The main disadvantage is that instead of read and write operations being $O(1)$ (constant time), each of these operations is $O(log n)$. 23 | 24 | Here's how you use it: 25 | 26 | ```cpp 27 | // you define it as map variable_name; 28 | map m; 29 | // yup, they can be any type you want 30 | map, string> interesting_map; 31 | map>, map>> surely_there_is_a_better_way_map; 32 | 33 | // to insert a key-value pair: 34 | m.insert({3, 2}); 35 | // to get the value of a key: 36 | cout << m[3] << "\n"; // 2 37 | 38 | // the array-index notation is a reference and can be used to update a value quickly 39 | m[3] += 5; 40 | cout << m[3] << "\n"; // 7 41 | // if the key doesn't exist, simply mentioning the array index notation somewhere will create it with a default value 42 | cout << m[-3] << "\n"; // 0 43 | 44 | // if you don't want this, use m.find(...) 45 | if (m.find(4) != m.end()) 46 | { 47 | // element found 48 | } 49 | else 50 | { 51 | // element not found 52 | } 53 | ``` 54 | 55 |
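
Coming back to the frequency-table use case, the same counting idea from before works with a `map` even when the elements go up to $10^9$ (a minimal sketch, assuming the usual includes and an input array `arr`):

```cpp
map<long long, long long> cnt;
for (auto ele : arr)
    cnt[ele]++;   // each update is O(log n) instead of O(1), but no huge array is needed
```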
56 | 57 | ## `set` 58 | 59 | Another cool data structure in C++ is the `set`, which only stores unique values. Like the `map`, you can insert, remove or check if a particular element is present in $O(log n)$. 60 | 61 | Here's how you use it: 62 | 63 | ```cpp 64 | // set variable_name; 65 | set s; 66 | set dictionary; 67 | 68 | // insert an element 69 | s.insert(5) 70 | // inserting an element that is already present does nothing 71 | s.insert(5) 72 | 73 | // to check if an element is present, use .count 74 | cout << s.count(5) << "\n" // 1 75 | // remove an element 76 | s.erase(5); 77 | cout << s.count(5) << "\n" // 0 78 | ``` 79 | 80 | Another cool thing about `set` and `map` is that elements are not stored in a random order or in the order of insertion: **they are usually always sorted in ascending order** (for map, key is considered for ordering). This is a very useful property that we will look at later. 81 | 82 |
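
As a quick illustration of this ordering (a minimal sketch, assuming the usual includes), iterating over a `map` or `set` always visits keys in ascending order:

```cpp
map<int, int> m;
m[5] = 1;
m[1] = 2;
m[3] = 7;

// keys are visited in ascending order: prints "1 2", then "3 7", then "5 1"
for (auto &p : m)
    cout << p.first << " " << p.second << "\n";

set<int> s = {4, 2, 9};
// prints "2 4 9"
for (auto x : s)
    cout << x << " ";
cout << "\n";
```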
83 | 84 | ## Example Problem: 2-SUM 85 | 86 | You're given an array of $n$ elements and an integer $k$. 87 | 88 | Find the number of distinct pairs $(i, j)$ such that $a_i + a_j = k$. 89 | 90 | The naive solution would be to loop over all possible $i$ and $j$ and count how many times the condition is satisfied. 91 | 92 | However, the time complexity of this would be $O(n^2)$, which is too slow when $n \approx 10^5$. 93 | 94 | Instead, we can construct the frequency map of the array and then loop over its key-value pairs (distinct element - its count). 95 | 96 | For each value of the element (call it `x`), we look for the number of occurences of `k - x`. Then, we can add `res[x] * res[k - x]` to the answer since each occurence of `x` can pair with an occurence of `k - x` to form a pair with sum `k`. 97 | 98 |
99 | C++ Implementation 100 | 101 | ```cpp 102 | void solve() { 103 | ll n, k; 104 | cin >> n >> k; 105 | vector a(n); 106 | map cnt; 107 | for (auto &i : a) { 108 | cin >> i; 109 | cnt[i]++; 110 | } 111 | ll ans = 0; 112 | for (auto &i : cnt) { 113 | int x = i->first; 114 | ans += cnt[x] * cnt[k - x]; 115 | } 116 | cout << ans << "\n"; 117 | } 118 | ``` 119 | 120 |
121 | 122 |
123 | Python Implementation 124 | 125 | ```py 126 | def solve (): 127 | n, k = tuple (map (int, input ().split (' '))) 128 | a = list (map (int, input ().split (' '))) 129 | cnt = {} 130 | for ele in a: 131 | cnt[ele] = cnt.get(ele, 0) + 1 132 | ans = 0 133 | for x in cnt.keys(): 134 | ans += cnt[x] * cnt.get(k - x, 0) 135 | print(ans) 136 | ``` 137 | 138 |
139 | 140 |
141 | 142 | ## Links: 143 | 1. https://www.geeksforgeeks.org/counting-frequencies-of-array-elements/ 144 | 2. [CSES - Distinct Numbers](https://cses.fi/problemset/task/1621) 145 | 3. [CSES - Sum of Two Values](https://cses.fi/problemset/task/1640) 146 | 4. [CSES - Sum of Three Values](https://cses.fi/problemset/task/1641) 147 | 5. [CF 525A - Vitaliy and Pie](https://codeforces.com/problemset/problem/525/A) 148 | 6. [CF 1144D - Equalize Them All](https://codeforces.com/problemset/problem/1144/D) 149 | 7. [CF 1520D - Same Differences](https://codeforces.com/problemset/problem/1520/D) 150 | 8. [CF 1269B - Modulo Equality](https://codeforces.com/contest/1269/problem/B) 151 | -------------------------------------------------------------------------------- /Week 4/3. Intermediate Math.md: -------------------------------------------------------------------------------- 1 | # Intermediate Math 2 | 3 | ## Modular Inverse 4 | 5 | Previously, we saw some properties of modular arithmetic and modular exponenetiation by squaring. Instead of division, modular systems have the concept of a (multiplicative) inverse: 6 | 7 | $`a \cdot a^{-1} \equiv 1 \, (mod \, m)`$ 8 | 9 | It can be proven that the inverse exists iff $gcd(a, m) = 1$ and is unique when it exists. Instead of division, we multiply by the inverse. 10 | 11 | To compute the modular inverse, we use Fermat's Little Theorem: for any prime $m$ and integer $a$: 12 | 13 | $`a^m \equiv a \, (mod \, m)`$ 14 | 15 | If $a$ is not divisible by $m$, it is equivalent to: 16 | 17 | $`a^{m-1} \equiv 1 \, (mod \, m)`$ 18 | 19 | For a prime modulus $m$, all $a \in \\{ 1, 2 ... m - 1 \\}$ are coprime to $m$ and so modular inverse exists for these $a$. Multiplying by the modular inverse, we get: 20 | 21 | $`a^{m-2} \equiv a^{-1} \, (mod \, m)`$ 22 | 23 | This means that to find the modular inverse, we simply have to raise it to the power $m - 2$ using modular exponentiation. 24 | 25 | Note that this only works for prime $m$ and $a \in \\{ 1, 2 ... m - 1 \\}$. 26 | 27 | We can apply this in combinatorics related questions. 28 | 29 | ## Modular Combinatorics 30 | 31 | We usually precompute factorials once before running the test cases to prevent repetition. 32 | 33 | Note that for $i \ge 1$: 34 | 35 | $$ i! = i * (i - 1)! $$ 36 | 37 | We can use this to easily compute all factorials upto $MAX$ in $O(MAX)$ time: 38 | 39 | ```cpp 40 | ll MOD = 1000000007; 41 | ll MAX = 100000; 42 | vector fact(MAX + 1); 43 | 44 | void precomp() 45 | { 46 | fact[0] = 1; 47 | for (int i = 1; i <= MAX; i++) 48 | fact[i] = (i * fact[i - 1]) % MOD; 49 | } 50 | ``` 51 | 52 | One of the most used functions in combinatorics is the 'choose' function defined as: 53 | 54 | $$ \binom{n}{r} = C_{r}^{n} = \frac{n!}{r!(n-r)!} $$ 55 | 56 | We can use the modular inverse to calculate this easily. 57 | 58 | ## Linear Sieve 59 | 60 | Previously, we looked at the Sieve of Eratosthenes as a method of finding all primes less than $n$. While its complexity was $`O(n \, log(log \, n)))`$, the linear sieve runs in $O(n)$. While this may not seem like much of a difference, it also allows you to compute the prime factorisation of any number in $`O(log \, n)`$, which is very useful in many problems. 61 | 62 | We compute two arrays: $spf$, which stores the smallest prime factor of each number, and $primes$, a ordinary vector that holds all primes. We initialise $spf$ with zeroes to assume that they are all prime. 63 | 64 | We then iterate from 2 to $n$. 
For the current number $i$, we have two cases: 65 | - If $spf_i = 0$, $i$ is prime since we could not find any smaller factors to mark it as composite. So, we assign $spf_i = i$ and append to the $primes$ vector. 66 | - If $spf_i \neq 0$, we must update the appropriate multiples of $i$. For all $primes_j \le spf_i$, we set $spf_{i * primes_j} = primes_j$. 67 | 68 | While it should be easy to see that this will not miss any primes, it takes a little more effort to show that it marks all composite numbers correctly. Note that by definition, every number $i$ has a unique representation of the form: 69 | 70 | $$ i = spf_i \cdot x $$ 71 | 72 | where $x$ doesn't have any prime factors less than $spf_i$. This implies that: 73 | 74 | $$ spf_i \le spf_x $$ 75 | 76 | For every $x$, our algorithm goes through all the primes it could be multiplied with (up to $spf_x$) to get the numbers of the above form. This proves that the algorithm goes through every prime & compositr number exactly once ad so the time complexity is $O(n)$. 77 | 78 | ```cpp 79 | ll N = 1000000; 80 | vector spf(N + 1, 0); 81 | vector primes; 82 | 83 | void precomp() 84 | { 85 | for (int i = 2; i <= N; i++) 86 | { 87 | if (spf[i] == 0) 88 | { 89 | spf[i] = i; 90 | primes.pb(i); 91 | } 92 | for (int j = 0; j < primes.size() && primes[j] <= spf[i] && i * primes[j] <= N; j++) 93 | spf[i * primes[j]] = primes[j]; 94 | } 95 | } 96 | ``` 97 | 98 | Now, to find the prime factorisation of a number, we can simply repeatedly divide by its smallest prime factor while storing their values in a `map`: 99 | 100 | ```cpp 101 | map prime_factorise(ll x) 102 | { 103 | map res; 104 | while (x != 1) 105 | { 106 | res[spf[x]]++; 107 | x /= spf[x]; 108 | } 109 | return res; 110 | } 111 | ``` 112 | 113 | The number of times the loop runs will be the number of prime factors (counted with multiplicity). It is not too hard to see that the worst case will be when it is a power of two and in that case the time complexity will be $O(log n)$. 114 | 115 | ## Links 116 | 117 | - [Modular Inverse](https://cp-algorithms.com/algebra/module-inverse.html#finding-the-modular-inverse-using-binary-exponentiation) 118 | - [Linear Sieve](https://cp-algorithms.com/algebra/prime-sieve-linear.html) 119 | - [Binomial Coefficients](https://cses.fi/problemset/task/1079) 120 | - [Counting Divisors](https://cses.fi/problemset/task/1713) 121 | - [Divisor Analysis](https://cses.fi/problemset/task/2182) 122 | - [Creating Strings II](https://cses.fi/problemset/task/1715) 123 | - [Distributing Apples](https://cses.fi/problemset/task/1716) 124 | - [Christmas Party](https://cses.fi/problemset/task/1717) 125 | - [Santa's Bot](https://codeforces.com/contest/1279/problem/D) 126 | -------------------------------------------------------------------------------- /Week 5/2. Intermediate DP.md: -------------------------------------------------------------------------------- 1 | # Intermediate DP 2 | 3 | ## Using DP with other techniques 4 | 5 | For many problems, after figuring out the DP approach, we'll need to make some observations and optimise it using other techniques like prefix sums, binary search or other data structures. 6 | 7 | Consider this problem: given a set of $n$ projects with start date $s_i$, end date $e_i$ and reward $w_i$, what is the maximum reward you can earn if you can attend atmost one project in a day? 8 | 9 | Take $dp_i$ to be the maximum reward you can earn doing till the $i$ th project. We have two options: 10 | - We don't do the $i$ th project. 
11 | - If we're doing the $i$ th project, we can only do other projects that have end date $e_j$ before its start date $s_i$. 12 | 13 | $$ dp_i = max(dp_{i-1}, w_i + \max_{e_j < s_i}{(dp_j)}) $$ 14 | 15 | We can sort the projects in ascending order of $e_i$ to allow us to stop checking once $e_j >= s_i$. Still, the time complexity of this approach is $O(n^2)$ since the $max$ calculation takes $O(n)$. 16 | 17 | To optimise this, we notice that $dp_i$ is non-decreasing: as we get more available projects, our maximum reward cannot decrease, it will either stay the same or increase. 18 | 19 | This means we can use binary search to find the maximum $j$ such that $e_j < s_i$ and then use the corresponding $dp_j$ for the $max$ calculation, which only takes $O(log n)$. This makes the overall time complexity $O(n log n)$. 20 | 21 | ```cpp 22 | #include 23 | using namespace std; 24 | typedef long long ll; 25 | int main() 26 | { 27 | int n; 28 | cin >> n; 29 | // start, end, reward 30 | vector> arr(n); 31 | for (auto &i : arr) 32 | cin >> i[0] >> i[1] >> i[2]; 33 | sort(arr.begin(), arr.end(), [](auto &i, auto &j) { 34 | return i[1] < j[1]; 35 | }); 36 | vector end(n); 37 | for (int i = 0; i < n; i++) 38 | end[i] = arr[i][1]; 39 | vector dp(n, 0); 40 | for (int i = 0; i < n; i++) 41 | { 42 | int j = lower_bound(end.begin(), end.end(), arr[i][0]) - end.begin() - 1; 43 | dp[i] = max((i ? dp[i - 1] : 0), (j >= 0 ? dp[j] : 0) + arr[i][2]); 44 | } 45 | cout << dp[n - 1] << "\n"; 46 | return 0; 47 | } 48 | ``` 49 | 50 | 51 | ## Bitmask DP 52 | 53 | One common technique that can be used when certain parameters are small is using a bitmask as one of the dimensions of the DP array. 54 | 55 | For example, consider this problem: given the prices of $k$ products over $n$ days, what is the minimum total price we need to pay if we can buy atmost one product in a day? 56 | 57 | Let `price[i][j]` denote the price of the $i$ th product on day $j$. We take `dp[mask][i]` as the required minimum total price for buying a subset of the products denoted by $mask$ in $i$ days. 58 | 59 | For `dp[mask][i]`, on day $i$, we can either not buy any product or buy some product $x$ that belongs to the $mask$, which gives us: 60 | 61 | $$ dp(mask, i) = min(dp(mask, i - 1), \min_{x \in mask}{(dp(mask \\ x, i - 1) + price[x][i])}) $$ 62 | 63 | ```cpp 64 | int main() 65 | { 66 | int k, n; 67 | cin >> k >> n; 68 | int prices[k][n]; 69 | for (auto &i : prices) 70 | { 71 | for (auto &j : i) 72 | cin >> j; 73 | } 74 | int dp[1 << k][n]; 75 | for (int x = 0; x < k; x++) 76 | dp[1 << x][0] = price[x][0]; 77 | for (int i = 1; i < n; i++) 78 | { 79 | for (int mask = 0; mask < (1 << k); mask++) 80 | { 81 | dp[mask][i] = dp[mask][i - 1]; 82 | for (int x = 0; x < k; x++) 83 | { 84 | if (mask & (1 << x)) 85 | dp[mask][i] = min(dp[mask][i], dp[mask ^ (1 << x)][i - 1] + price[x][i]); 86 | } 87 | } 88 | } 89 | } 90 | ``` 91 | 92 | The time complexity of this is $O(n \cdot 2 ^ k \cdot k)$. 93 | 94 | Now, we can only represent bitmasks of upto 64 bits using a `long long` variable but we can use the `bitset` STL data structure to represent longer bitsets: 95 | 96 | ```cpp 97 | // creates a bitset of length 8 with all bits 0 98 | bitset<8> b1; 99 | 100 | // we can initialise it with the binary representation of a number 101 | bitset<16> b2(69); 102 | bitset<16> b3("010100111"); 103 | 104 | // they can be printed directly! 
105 | cout << b1 << "\n" << b2 << "\n" << b3 << "\n"; 106 | 107 | // it behaves similar to a boolean array / vector 108 | for (int i = 0; i < 16; i++) 109 | cout << b1[i]; 110 | b2[0] = b1[0] ^ b3[1] 111 | 112 | // it supports all normal bitwise operations (&, ^, |, <<, >>) as long as both operands have same size 113 | b2 &= b3; 114 | bitset<16> b4 = b2 << 4; 115 | ``` 116 | 117 | The main disadvantage of `bitset` is that its **size must be known at compile-time**, which means that you cannot intialise the size to some non-constant variable. 118 | 119 | Internally, the implementation of `bitset` can be thought of as the compiler splitting the bitstring into chunks of 64 bits and performing the required operations on these chunks as normal numbers. 120 | 121 | This means that most bitwise operations on `bitset` have time complexity $O (n / 64)$ where $n$ is the length of the bitset. This means that using `bitset` is around 64x faster than a regular boolean array most of the time. 122 | 123 | While you might be confused as to why we mention the $64$ despite it being a constant factor that should be ignored in time complexity, don't forget that at the end of the day, we're not just theoretically analysing algorithms, we're practically implementing them as well and large constant factors like this cannot be ignored in certain cases. 124 | 125 | For example, there have been DP problems where a regular $O(n^2)$ gave TLE but a `bitset` optimised $O(n^2 / 64)$ managed to just squeeze under the time limit and give AC. 126 | 127 | # Links 128 | 1. [Bitmask DP by USACO](https://usaco.guide/gold/dp-bitmasks?lang=cpp) 129 | 2. [AtCoder DP Contest](https://atcoder.jp/contests/dp/tasks) 130 | 3. [Projects](https://cses.fi/problemset/task/1140) 131 | 4. [Elevator Rides](https://cses.fi/problemset/task/1653) 132 | 5. [Empty String](https://cses.fi/problemset/task/1080) 133 | 6. [Zuma](https://codeforces.com/problemset/problem/607/B) 134 | 7. [Posting](https://www.codechef.com/INOIPRAC/problems/INOI2201) 135 | -------------------------------------------------------------------------------- /Week 3/2. Prefix Sums.md: -------------------------------------------------------------------------------- 1 | # Prefix Sums 2 | 3 | Prefix sums is a very useful technique for computing static range queries on arrays (a value that can be calculated for a subarray of an array where the elements are not updated). 4 | 5 | Let's understand it using the classic problem prefix sums tries to solve: static range sum queries. 6 | 7 |
8 | 9 | ## Subarray Sums 10 | 11 | You have an array of $n$ elements: $a_1, a_2, ..., a_n$. You have to answer $q$ queries: given $l_j$ and $r_j$ ($1 \le j \le q$), what is the sum of all the array elements from index $l_j$ to index $r_j$ (both inclusive)? 12 | 13 | The obvious approach would be to use a `for` loop for each query: 14 | ```cpp 15 | for (int i = 0; i < q; i++) 16 | { 17 | int l, r; 18 | cin >> l >> r; 19 | int sum = 0; 20 | for (int j = l; j <= r; j++) 21 | sum += a[j]; 22 | cout << sum << "\n"; 23 | } 24 | ``` 25 | 26 | For the time complexity, we see that the inner loop takes $r - l$ iterations per query, which can be $n$ in the worst case. Since the outer loop runs $q$ times, the overall time complexity is $O(q \cdot n)$. 27 | 28 | While this is fine for small $q$, when $q$ is comparable to $n$, it becomes rather inefficient. Let's think of a way to speed this up with some precomputation. 29 | 30 | Observe that: 31 | 32 | $$ \sum_{i=l}^r a_i = \sum_{i=1}^r a_i - \sum_{i=1}^{l - 1} a_i $$ 33 | 34 | We define the prefix sum function $p_i$ as the sum of the array elements from the starting index till index $i$: 35 | 36 | $$ p_i = \sum_{j=1}^i a_j $$ 37 | 38 | Notice that if we can calculate this function for all indices, we can find the sum of any subarray just by subtracting (we assume p_0 = 0 to simplify things): 39 | 40 | $$ \sum_{i=l}^r a_i = p_r - p_{l - 1} $$ 41 | 42 | While it might seem like we still need to use a `for` loop for each calculation of $p_i$, we actually don't: 43 | 44 | $$ p_1 = a_1 $$ 45 | 46 | $$ p_i = p_{i - 1} + a_i $$ 47 | 48 | This means we can use just one `for` loop to calculate all the prefix sum values and then subtract the appropriate values for answering the queries. 49 | 50 | ```cpp 51 | void solve() { 52 | int n; 53 | cin >> n; 54 | vector a(n + 1); 55 | for (int i = 1; i <= n; i++) 56 | cin >> arr[i]; 57 | 58 | vector p(n + 1); 59 | p[1] = a[1]; 60 | for (int i = 2; i <= n; i++) 61 | p[i] = p[i - 1] + a[i]; 62 | 63 | int q; 64 | cin >> q; 65 | for (int i = 0; i < q; i++) 66 | { 67 | int l, r; 68 | cin >> l >> r; 69 | cout << p[r] - (l == 1 ? 0 : p[l - 1]) << "\n"; 70 | } 71 | } 72 | ``` 73 | 74 | For the time complexity, computing the prefix sums is $O(n)$ and we answer each of the $q$ queries in $O(1)$, giving us $O(n + q)$, which is must faster. 75 | 76 |
77 | 78 | ## 2-D Prefix Sums: 79 | 80 | Consider the 2-D variant of this problem: given a 2-D array $a_{ij}$ $(1 \le i \le n, 1 \le j \le m)$, answer $q$ queries of the sum of rectanglular subarray formed by $i = x_1, i = x_2, j = y_1$ and $j = y_2$ $(x_1 \le x_2, y_1 \le y_2)$. 81 | 82 | Let's define our prefix function as the sum of the subarray formed by $(1, 1)$ and $(i, j)$: 83 | 84 | $$ p(i, j) = \sum_{x=1}^i \sum_{y=1}^j a_{ij} $$ 85 | 86 | To answer the query, we can think geometrically and use some inclusion-exclusion intuition: 87 | 88 | ![2D Prefix Sum](Images/2d-prefix-sum.png) 89 | 90 | $$ S = p(x_2, y_2) - p(x_2, y_1 - 1) - p(x_1 - 1, y_2) + p(x_1 - 1, y_1 - 1) $$ 91 | 92 | To calculate the values of the prefix sum function, we first do the prefix sums rowwise and then columnwise (or vice versa): 93 | 94 | ```cpp 95 | void solve() { 96 | int n, m; 97 | cin >> n >> m; 98 | vector a(n + 1, vector(m + 1)); 99 | for (int i = 1; i <= n; i++) 100 | { 101 | for (int j = 1; j <= m; j++) 102 | { 103 | cin >> a[i][j]; 104 | } 105 | } 106 | 107 | vector p(n + 1, vector(m + 1)); 108 | for (int i = 1; i <= n; i++) 109 | { 110 | p[i][1] = a[i][1]; 111 | for (int j = 2; j <= m; j++) 112 | { 113 | p[i][j] = p[i][j - 1] + a[i][j]; 114 | } 115 | } 116 | for (int j = 1; j <= m; j++) 117 | { 118 | for (int i = 2; i <= n; i++) 119 | { 120 | p[i][j] += p[i - 1][j]; 121 | } 122 | } 123 | 124 | int q; 125 | cin >> q; 126 | for (int i = 0; i < q; i++) 127 | { 128 | int x1, x2, y1, y2; 129 | cin >> x1 >> x2 >> y1 >> y2; 130 | int ans = p[x2][y2]; 131 | if (x1 > 1) ans -= p[x1 - 1][y2]; 132 | if (y1 > 1) ans -= p[x2][y1 - 1]; 133 | if (x1 > 1 && y1 > 1) ans += p[x1 - 1][y1 - 1]; 134 | cout << ans << "\n"; 135 | } 136 | } 137 | ``` 138 | 139 | Challenge: can you think of how you would implement 3-D prefix sums? What about n-D prefix sums? 140 | 141 |
142 | 143 | ## Extending Prefix Sums 144 | 145 | Even thought the most common application is with range sums, this type of 'prefix precomputation' can be applied to a number of operations like multiplication or XOR. 146 | 147 | Consider the problem of finding the range query values of some binary operator $\ast$ on the subarray $[l, r]$: 148 | 149 | $$ Q(l, r) = a_l \ast ... \ast a_r $$ 150 | 151 | The prefix function $p$ is: 152 | 153 | $$ p(i) = a_1 \ast ... \ast a_i $$ 154 | 155 | For the queries, let $\circ$ be the inverse operator of $\ast$: 156 | 157 | $$ Q(l, r) = p(r) \circ p(l - 1) $$ 158 | 159 | As you can see, this method works for any binary operator $\ast$ that is associative and has an inverse (if the inverse does not exist, we can only answer queries with $l = 1$). 160 | 161 | The main drawback of this technique is that it does not support update operations on the array (dynamic range queries). However, more advanced data structures like segment trees and Fenwick trees can be used in such cases. 162 | 163 |
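
As a concrete instance of this idea, here is a sketch of prefix XOR: XOR is associative and is its own inverse, so $Q(l, r) = p_r \oplus p_{l - 1}$ works exactly like the sum version:

```cpp
void solve() {
    int n;
    cin >> n;
    vector<long long> a(n + 1), p(n + 1, 0);
    for (int i = 1; i <= n; i++)
    {
        cin >> a[i];
        p[i] = p[i - 1] ^ a[i];   // prefix XOR, with p[0] = 0
    }

    int q;
    cin >> q;
    while (q--)
    {
        int l, r;
        cin >> l >> r;
        // XOR of a[l..r]: "undo" the prefix up to l - 1 by XORing it again
        cout << (p[r] ^ p[l - 1]) << "\n";
    }
}
```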
164 | 165 | ## Problems: 166 | 167 | - [Static Range Sum Queries](https://cses.fi/problemset/task/1646) 168 | - [Forest Queries](https://cses.fi/problemset/task/1652) 169 | - [Range Xor Queries](https://cses.fi/problemset/task/1650) 170 | - [Maximum Subarray Sum](https://cses.fi/problemset/task/1643) 171 | - [Subarray Sums II](https://cses.fi/problemset/task/1661) 172 | - [1398C - Good Subarrays](https://codeforces.com/contest/1398/problem/C) 173 | - [313B - Ilya and Queries](https://codeforces.com/problemset/problem/313/B) 174 | - [433B - Kuriyama Mirai's Stones](https://codeforces.com/problemset/problem/433/B) 175 | -------------------------------------------------------------------------------- /Week 5/3. Introduction to Graphs.md: -------------------------------------------------------------------------------- 1 | # Introduction to Graphs 2 | 3 | Graphs and networks are one of the most natural ways in which humans analyse real world problems because modelling many problems as a network can simplify the problem. 4 | 5 | You will understand how powerful graph theory is when you apply it in multiple problems through your CP journey. However, before we dwell into the concepts of graph theory, let's look into some terminolgy that will be used in further sections. 6 | 7 | ## Parts of a Graph 8 | 9 | Any graph is a set of 2 components. 10 | - Nodes (or vertices) 11 | - Edges 12 | 13 | Edges connect nodes with each other. Consider a map of highways connecting cities. Each highway is an edge and each city is a node. 14 | 15 | ## Types of Graphs 16 | 17 | Graphs can be of many types and so are categorised by many parameters. We'll illustrate the most commonly used ones. 18 | 19 | You can visualize them on [this website](https://visualgo.net/en/graphds?slide=1). You may also play around with the parameters to understand better. 20 | 21 | - **Directed and Undirected Graphs** 22 | 23 | Graphs in which each edge has a direction (unidirectional) are known as directed graphs, whereas, a graph in which edges don't have a direction (can also be thought of as bidirectional) is known as an undirected graph. 24 | 25 | Direction of an edge refers to the direction in which it can be travelled, i.e $A \rightarrow B$ is a directed edge from A to B such that one may go from A to B through that edge but not the other way round. 26 | 27 | $A \leftrightarrow B$ is an undirected edge. 28 | 29 | Some basic terminology associated with the system of nodes and edges are: 30 | - Number of edges incident to a node is called degree of the node. 31 | - Number of edges going into a node in a directed graph is known as its indegree. 32 | - Number of edges going out of a node in a directed graph is known as its outdegree. 33 | 34 | The terms indegree and outdegree only makes sense for a directed graph. 35 | 36 | - **Weighted and Unweighted Graphs** 37 | 38 | Graphs in which each edge has weight/value/cost associated with it are called weighted graphs and graphs in which edges do not have a weight/value/cost are knowns as unweighted graphs. 39 | 40 | - **Cyclic And Acyclic Graphs** 41 | 42 | A cycle in a graph is a sequence of nodes such that you can start from any node of the cycle and end up at the starting node without tranversing the same edge twice. A graph with atleast one cyle is a cyclic graph, otherwise it is acyclic. 43 | 44 | - **Simple and Complete Graphs** 45 | 46 | A graph without cycles and with at most one edge between any two nodes is called a simple graph. 
If there is an edge between all pairs of nodes, the graph is called a complete graph. 47 | 48 | - **Bipartite Graphs** 49 | 50 | A bipartite graph is a graph whose nodes can be partitioned into two sets such that each edge is from one set to another and no edge connects nodes in the same set. 51 | 52 | - **Trees** 53 | 54 | An undirected, acyclic and connected graph is known as a Tree. Trees are incredibly important in competitive programming - many algorithms you'll learn are based on trees, and many problems you will encounter will involve trees. 55 | 56 | Trees are special because of the following features : 57 | - A tree with $n$ nodes has $n-1$ edges. 58 | - Any node can be reached from another node through an unique path. 59 | 60 | ![A typical tree](Images/Graph-Tree.jpg) 61 | 62 | ## Graph Terminology 63 | 64 | - The nodes in a directed tree with indegree 0 is called the root node. 65 | - In an undirected tree, any node can be made a root node by anchoring the tree about that node. 66 | - Nodes below node X become its children and X is the parent of those nodes. 67 | - Nodes without children are leaf nodes. 68 | - Nodes at the same heirarchial level under a parent node are sibling nodes. 69 | - Any node on the path between node X and the root is an ancestor of X. 70 | - The number of edges from the root to the deepest leaf is called height of the tree. 71 | - In a rooted tree, node X and all the nodes below X form the subtree of X. 72 | 73 | ## Representation of Graphs 74 | 75 | There are 3 main ways we represent graphs in CC: 76 | 77 | - **Adjacency Matrix** 78 | 79 | If the number of vertices $V$ is small enough, we can build a static 2D array `int AM[V][V]` with $O(V^2)$ space complexity. 80 | 81 | For an unweighted graph, set `AM[u][v]` to a non-zero value (usually 1) if there is an edge between vertex `u-v` and zero otherwise. 82 | 83 | For a weighted graph, set `AM[u][v] = weight(u, v)` if there is an edge between vertex `u-v` with `weight(u, v)` and zero otherwise. 84 | 85 | This is useful for small and dense graphs but is not recommended for large sparse graphs as it would require too much space and there would be many blank cells in the 2D array. 86 | 87 | Another drawback is that it also takes $O(V)$ time to enumerate the list of neighbors of any vertex — an operation common to many graph algorithms — even if that vertex only has a handful of neighbors. 88 | 89 | ```cpp 90 | void makegraph() { 91 | int V = 0, E = 0; 92 | cin >> V >> E; 93 | int adj_matrix[V + 1][V + 1]; 94 | for(int i = 1; i <= V; i++) { 95 | for(int j = 1; j <= V; j++) { 96 | adj_matrix[i][j] = 0; 97 | } 98 | } 99 | 100 | for(int i = 1; i <= E; i++) { 101 | int node1, node2; 102 | cin >> node1 >> node2; 103 | adj_matrix[node1][node2] = 1; 104 | } 105 | } 106 | ``` 107 | - **Adjacency List** 108 | 109 | In an adjacency list, we have a 2D vector of pairs, where `list[u]` stores pairs `(v, w)` where `v` is a neighbour of `u` and `w` is the weight of the edge that connects them. If the graph is unweighted, we can just ignore the weight completely. 110 | 111 | Adjacency lists offer a space complexity of $O(V+E)$. These are more space efficient than adjacency matrices but do not offer then same $O(1)$ lookup time for the existence of an edge between any pair of vertices. 112 | 113 | Despite this, the space efficiency and quick access to any node's neighbours make adjacency lists the most common way of representing graphs. 
114 | 115 | ```cpp 116 | void makegraph() { 117 | int V = 0, E = 0; 118 | cin >> V >> E; 119 | vector>> list(V+1); 120 | for(int i = 1; i <= E; i++) { 121 | int node1, node2, weight; 122 | cin >> node1 >> node2 >> weight; 123 | list[node1].push_back({node2, w}); 124 | // if undirected, we would also do `list[node2].push_back({node1, w});` 125 | } 126 | } 127 | ``` 128 | 129 | - **Edge List** 130 | 131 | Here, we simply store a list of all E edges, with a space complexity of O(E). 132 | 133 | This is used rarely only for some specific algorithms that involve sorting the edges in a particular order (one good example is Kruskal's MST algorithm). 134 | 135 | ```cpp 136 | void makegraph() { 137 | int V = 0, E = 0; 138 | cin >> V >> E; 139 | pair edge_list[E]; 140 | 141 | for(int i = 0; i < E; i++) { 142 | int node1, node2; 143 | cin >> node1 >> node2; 144 | edge_list[i] = {node1, node2}; 145 | } 146 | } 147 | ``` 148 | 149 | 150 | 151 | 152 | -------------------------------------------------------------------------------- /Week 3/3. Bit Manipulation.md: -------------------------------------------------------------------------------- 1 | # Bit Manipulation 2 | 3 | ## Introduction 4 | 5 | An integer is stored in computer as a sequence of bits. So, we can use integer data types (like `int` and `float`) to represent a lightweight small set of boolean values. All set operations then involve only bitwise manipulation of the corresponding integer, which makes it a much more efficient choice when compared to using an array of boolean variables. 6 | 7 | You need to familiarise yourself with binary representation of data before proceeding further. Go through the following links to catch up if you are not familiar with booean algebra. 8 | 9 | [Binary System, Representation and Logic Gates](https://youtube.com/playlist?list=PL2ONYsvCDiDsb311caRSwgMmcIXR27UCi&feature=shared) 10 | 11 |
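
For a flavour of the "integer as a set" idea before diving into the operators themselves (a minimal sketch — the `<<`, `&`, `|` and `~` operators used here are explained in the next section), bit $i$ of the integer simply records whether element $i$ is in the set:

```cpp
int s = 0;                  // empty set
s |= (1 << 2);              // insert element 2 -> s = 0b00100
s |= (1 << 4);              // insert element 4 -> s = 0b10100

bool has3 = s & (1 << 3);   // is element 3 present? false here
bool has4 = s & (1 << 4);   // is element 4 present? true here

s &= ~(1 << 2);             // remove element 2 -> s = 0b10000
```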
12 | 13 | ## Bitwise Operations: 14 | 15 | Most bitwise operations are practically $O(1)$ (to be precise, they are $O(number of bits)$). Keep in mind that the value of some of these operations will depend on whether you're using a 32-bit (`int`) data type or 64-bit (`long`) data type. 16 | 17 | 1. The bitwise NOT operation: 18 | 19 | Using `~` on any integer flips all its bits. 20 | 21 | ``` 22 | P = 14 : 1110 23 | _____________ 24 | 25 | ~P = 1 : 0001 26 | ``` 27 | 28 | 2. The bitwise OR operation: 29 | 30 | Using `|` on two numbers as operands performs a logical 'OR' operation on all the coresponding bits. The resulting bit is 1 if either of the operands' corresponding bits are 1 and 0 otherwise. 31 | 32 | ``` 33 | P = 10 : 1010 34 | Q = 9 : 1001 35 | _______________ 36 | 37 | P|Q = 11 : 1011 38 | ``` 39 | 40 | 3. The bitwise AND operation: 41 | 42 | Using `&` on two numbers as operands performs a logical 'AND' operation on all the coresponding bits. The resulting bit is 1 if both the operands' corresponding bits are 1 and 0 otherwise. 43 | 44 | ``` 45 | P = 10 : 1010 46 | Q = 9 : 1001 47 | _______________ 48 | 49 | P&Q = 8 : 1000 50 | ``` 51 | 52 | 4. The bitwise XOR operation: 53 | 54 | Using `^` on two numbers as operands performs a logical 'XOR' operation on all the coresponding bits. The resulting bit is 1 if the operands' corresponding bits are different and 0 otherwise. 55 | 56 | ``` 57 | P = 10 : 1010 58 | Q = 9 : 1001 59 | _______________ 60 | 61 | P^Q = 3 : 0011 62 | ``` 63 | 64 | 5. The bitshift operations: 65 | 66 | Using right shift `>>` and left shift `<<` on a pair of operands performs a bitshifting operation on the first operand by the numbere of places denoted by the second operand. 67 | 68 | ``` 69 | P = 10 : 01010 70 | Q = 1 71 | _______________ 72 | P>>Q = 5 : 00101 73 | P< 77 | 78 | ## Cool bit tricks: 79 | 80 | C++ (`g++` in specific) also has a few other inbuilt functions related to bit manipulation: 81 | 82 | - `__builtin_clz(x)` counts number of leading zeroes 83 | - `__builtin_ctz(x)` counts number of trailing zeroes 84 | - `__builtin_popcount(x)` counts number of ones (set bits) 85 | 86 | Note that these are only for the `int` data type, use `__builtin_clzl` for `long` and `__builtin_clzll` for `long long`. 87 | 88 | Here are some cool things you can do using bitwise operations: 89 | - Check Parity: 90 | 91 | A number is odd iff its rightmost bit (LSB) is 1, which means ANDing it with 1 should give you 1. For even numbers, the rightmost bit is 0 and so ANDing with 1 will give 0. 92 | ```cpp 93 | bool isOdd(int x) { 94 | return x & 1; 95 | } 96 | ``` 97 | 98 | - Multiplication & Division: 99 | 100 | To multiply or divide an integer by $2^k$, we only need to shift all bits $k$ times to the left / right respectively. Note that truncation in the shift right operation automatically rounds the division down (floor division). 101 | 102 | ``` 103 | P = 44 (base 10) = 0101100 (base 2) 104 | P<<1 = 88 (base 10) = 1011000 (base 2) 105 | P>>1 = 22 (base 10) = 0010110 (base 2) 106 | P>>2 = 5 (base 10) = 0000101 (base 2) 107 | ``` 108 | 109 | - Checking if the $i$ th bit is set : 110 | 111 | Bitwise AND operation can be used to check the state of bits in general. We can AND with $2^i$ and it will give 0 if the bit is not set and will give $2^i$ if the bit is set: 112 | 113 | ``` 114 | P = 44 (base 10) = 0101100 (base 2) 115 | i = 4 (base 10) 116 | X = 1< 164 | 165 | ## Problems 166 | 167 | 1. 
Warmup: 168 | - [BIT by BIT](https://open.kattis.com/problems/bitbybit) 169 | - [Splitting Numbers](https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=3084) 170 | - [Snapper Chain (easy)](https://open.kattis.com/problems/snappereasy) 171 | - [Equal by XORing](https://www.codechef.com/problems/EQBYXOR) 172 | 173 | 2. Core Workout: 174 | - [Snapper Chain (harder)](https://open.kattis.com/problems/snapperhard) 175 | - [Death Star](https://open.kattis.com/problems/deathstar) 176 | - [Possible or Not](https://www.codechef.com/problems/CS2023_PON) 177 | - [Hypercube](https://open.kattis.com/problems/hypercube) 178 | - [And Then There Were K](https://codeforces.com/problemset/problem/1527/A) 179 | - [Counting Bits](https://cses.fi/problemset/task/1146) 180 | 181 |
182 | 183 | -------------------------------------------------------------------------------- /Week 1/5. C++ Quickstart.md: -------------------------------------------------------------------------------- 1 | # C++ Quickstart 2 | 3 | 4 | While most major platforms let you use any programming language, most competitive programmers use either C++, Java or Python. Out of these, C++ is the most popular due to its concise syntax, fast execution and large standard template library (STL). 5 | 6 | Here, we'll look at setting up a local C++ environment and where to learn the basics of the language, along with some specific tips for competitive coding. 7 | 8 | A bit of advice: yes, I know learning a new programming language will seem quite daunting, especially if you try to do it in a short time frame. However, the important thing to understand is that **concepts in competitive coding (and algorithmic thinking in general) are independent of programming language**. 9 | 10 | This means that **you can start CC in any programming language you choose and switch later if you want**. So, instead of deep diving and trying to finish all of this at once, you could do certain parts while solving actual CC problems over the next week or two. This would serve as practice and strengthen your understanding of C++ and CC as a whole. 11 | 12 |
13 | 
14 | ## Installation:
15 | 
16 | We will be using `g++`, the C++ compiler from the GNU Compiler Collection (GCC). While most compilers will behave identically in most situations, `g++` has a few advantages we will see later.
17 | 
18 | - **Windows:**
19 | 
20 | Install [MinGW](https://www.mingw-w64.org/) and add the location of the executable (`C:\MinGW\bin`) to your `PATH`.
21 | 
22 | - **Linux:**
23 | 
24 | Use your favourite package manager to install the `g++` package.
25 | 
26 | - **Mac:**
27 | 
28 | Install [Homebrew](https://brew.sh/) and then run:
29 | ```
30 | brew install gcc
31 | ```
32 | As of writing, this installs `g++-13` ([source](https://formulae.brew.sh/formula/gcc)). To make it available simply as `g++`, run:
33 | ```
34 | sudo ln -s $(which g++-13) /usr/local/bin/g++
35 | ```
36 | 
37 | 
38 | To check your installation, running this command should print your `g++` version:
39 | ```
40 | g++ --version
41 | ```
42 | 
43 | We recommend using a lightweight text editor like [Neovim](https://neovim.io/) or [Sublime Text](https://www.sublimetext.com/) for CC.
44 | 
45 | When compiling and running your code, you can easily read from an input file or write to an output file using `<` or `>`:
46 | ```
47 | g++ -o prog prog.cpp && ./prog < in.txt > out.txt
48 | ```
49 | 
50 | We **do not recommend using the default build system** ("the shiny green button") of these editors as these sometimes use different versions / compilers of C++, causing strange errors.
51 | 
52 | Instead, run the commands in your terminal.
53 | 
55 | 56 | ## Resources: 57 | 58 | Some nice resources for learning C++: 59 | - https://cplusplus.com/doc/tutorial/ (fairly concise) 60 | - https://www.learncpp.com/ (more in depth) 61 | - Chapter 2 and parts of Chapter 3 of [PAPS](https://www.csc.kth.se/~jsannemo/slask/main.pdf) (catered specifically for CC) 62 | 63 | The things you should know: 64 | - Variables & Data Types 65 | - I/O & Operators 66 | - Control Flow (`if-else`, `switch`) 67 | - Loops (`for`, `while`, `do-while`) 68 | - Functions & Recursion 69 | - Arrays, Vectors & Strings 70 | 71 | You don't need to focus too much on topics like pointers, structs, templates and classes for now since they are rarely used in competitive coding. 72 | 73 |
74 | 
75 | ## C++ for CP:
76 | 
77 | While most resources will teach you about C++ in general, here are a few things specifically useful for CC:
78 | 
79 | - **Importing Libraries:**
80 | 
81 | Instead of including many header files separately, you can import the entire standard library in one line:
82 | ```cpp
83 | #include <bits/stdc++.h>
84 | ```
85 | This is not common to all C++ compilers and is one of the reasons competitive programmers prefer `g++`.
86 | 
87 | 
88 | 
89 | 
90 | - **Arrays, Vectors & Strings**:
91 | 
92 | While normal arrays have fixed size (must be known at compile time), `vector` is a dynamic array:
93 | ```cpp
94 | int arr1[5];              // 1D array
95 | vector<int> vec1(5, -1);  // 1D vector, filled with -1
96 | int arr2[3][4];           // 2D array
97 | vector<vector<int>> vec2; // 2D vector, with size 0
98 | 
99 | // Add an element to the back
100 | vec1.push_back(24);
101 | // Get size of vector
102 | cout << vec1.size() << " " << vec1[0] << " " << vec1[5];
103 | // Output: 6 -1 24
104 | ```
105 | `string` is basically just `vector<char>`.
106 | 
107 | 
108 | - **Range based loops**:
109 | 
110 | You can directly iterate over many data structures without an index variable:
111 | ```cpp
112 | vector<int> v = {2, 4, 5, 6, 1};
113 | for (int ele : v)
114 | {
115 |     cout << ele << " ";
116 | }
117 | // prints all elements in v
118 | ```
119 | To get the references of the elements (if you want to modify them), use `&` after the data type:
120 | ```cpp
121 | vector<int> v(100);
122 | // takes input into v while adding 1 to all elements
123 | for (int &ele : v)
124 | {
125 |     cin >> ele;
126 |     ele++;
127 | }
128 | ```
129 | We can replace the data type with `auto` to let the compiler decide for us:
130 | ```cpp
131 | // input into vector
132 | for (auto &ele : vec)
133 |     cin >> ele;
134 | 
135 | // print vector
136 | for (auto ele : vec)
137 |     cout << ele << " ";
138 | ```
139 | 
140 | 
141 | 
142 | - **Fast I/O**:
143 | 
144 | Add these lines at the start of the `main` function:
145 | ```cpp
146 | ios::sync_with_stdio(false);
147 | cin.tie(NULL);
148 | ```
149 | Also, always use `"\n"` instead of `endl` while using `cout`, since `endl` forces the output buffer to be flushed every time and can noticeably slow your program down.
150 | 
151 | 
152 | 
153 | - **Typedefs & Macros**:
154 | 
155 | These can be used to make your code much more concise and are usually placed right after `#include <bits/stdc++.h>`.
156 | 
157 | `typedef` allows you to give existing data types new names:
158 | ```cpp
159 | // Now you can use `ll` anywhere instead of `long long`
160 | typedef long long ll;
161 | // You can nest your typedefs as well
162 | typedef vector<ll> vll;
163 | ```
164 | 
165 | `#define` works in an almost find-and-replace manner and can be used for small snippets:
166 | ```cpp
167 | // Now, we can simply use `cout << ln` instead of `cout << "\n"`
168 | #define ln "\n"
169 | // You can make new names for functions
170 | #define pb push_back
171 | // Snippets can also be multiple lines long
172 | #define fastio ios_base::sync_with_stdio(false); \
173 |                cin.tie(NULL);
174 | ```
175 | 
176 | 177 | - **Integer Overflow**: 178 | 179 | This is one of the most common bugs in competitive coding and is due to the fixed size of `int` in most languages. While `int` is a 32-bit integer (stores upto `2e9`), **doing any operation on two `int`s will always give you an `int`, no matter what the actual size of the result is**. 180 | 181 | For example, if you multiplied two `int`s with value `1e9`, it would try to store `1e9 * 1e9 = 1e18` into an `int` and will chop off the bits beyond the 32-bit limit to do so. 182 | 183 | We can instead use a 64-bit integer like `long long` (stores upto `9e18`) since most results in CC are guaranteed to be within this limit. Since `long long` is used very frequently, most competitive programmers use: 184 | 185 | ```cpp 186 | // Now you can use `ll` anywhere instead of `long long` 187 | typedef long long ll; 188 | ``` 189 | 190 | However, this is almost never an issue in Python as its integers have no fixed size and can expand to the limit of available memory. 191 | 192 |
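Putting these pieces together, here is a rough starting template (a minimal sketch of my own, not an official template from this workshop) that reads `n` numbers and prints their sum, using fast I/O, the `ll` typedef and `"\n"`:

```cpp
#include <bits/stdc++.h>
using namespace std;

typedef long long ll;

int main()
{
    // fast I/O, as described above
    ios::sync_with_stdio(false);
    cin.tie(NULL);

    ll n;
    cin >> n;
    vector<ll> a(n);
    for (auto &x : a)
        cin >> x;

    // keeping the running sum in a `ll` avoids integer overflow
    // even when every element is close to the `int` limit
    ll total = 0;
    for (auto x : a)
        total += x;

    cout << total << "\n";
    return 0;
}
```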
193 | 194 | 195 | ## Practice: 196 | 197 | Two good collections of problems that let you practice your general C++ skills are: 198 | - HackerRank - 199 | - HackerEarth - 200 | 201 | Apart from this, you can try implementing basic programs in C++ to get a feel for the language. 202 | -------------------------------------------------------------------------------- /Week 3/1. Two Pointers.md: -------------------------------------------------------------------------------- 1 | # Two Pointers 2 | 3 |

Two pointers is an approach for solving problems which require you to find a pair of numbers in an array (or a subarray bounded by them) that satisfies a certain property.

4 |

It is used to obtain an $O(n)$ solution to certain problems whose solutions would otherwise take $O(n^2)$ time.

5 |

For example, one problem that can be solved using two pointers is finding whether there exist two integers in an array whose sum is equal to some number x (commonly known as the 2SUM problem).

6 |

Let us examine the working of two pointers by constructing a solution to the above problem using it.

7 | 8 |
9 | 10 | ## 2SUM Problem 11 | 12 |

Given an array of numbers, we must find whether there exist two numbers in the array such that their sum is equal to a given x.

13 |

Let us say that the array of numbers is [7, 12, 3, 15, 8, 2, 14, 9] and the required sum is x = 20.

14 |

The first approach that might occur to you is to use two nested for loops to iterate through every possible pair of two numbers from the array and check whether their sum is 20.

15 | 16 | ```cpp 17 | for (int i = 0; i < n; i++) { 18 | for (int j = i + 1; j < n; j++) { 19 | if (arr[i] + arr[j] == x) { 20 | cout << "YES\n"; 21 | return; 22 | } 23 | } 24 | } 25 | cout << "NO\n"; 26 | ``` 27 | 28 |

Although this approach works, it is not very efficient as it makes use of two nested for loops, resulting in a time complexity of $O(n^2)$.

29 |

Now, let us use two pointers to solve the problem with lower time complexity.

30 |

The first step is to sort the elements in the array in non-decreasing order.

31 | 32 |

[2, 3, 7, 8, 9, 12, 14, 15]

33 | 34 |

Next, we define two 'pointer' variables l and r.

35 |

Note that 'pointer' here does not refer to the C++ pointer data type; it is just a name given to two int variables that keep track of indices in the array.

36 |

l is initialised to 0 (the position of the first element in the array) and r is initialised to 7 (the position of the last element in the array).

37 | 38 | ``` 39 | [2, 3, 7, 8, 9, 12, 14, 15] 40 | ^ ^ 41 | l r 42 | ``` 43 | 44 |

The elements at l and r, that is 2 and 15, add upto 17, which is less than our required sum 20.

45 |

As 2 paired with the maximum element cannot sum upto 20, we can conclude that any pair containing 2 cannot add upto 20.

46 |

Since 2 has been eliminated, we increment l by 1, so that it now points to the next element, 3.

47 | 48 | ``` 49 | [2, 3, 7, 8, 9, 12, 14, 15] 50 | ^ ^ 51 | ``` 52 |  53 |

As 3 and 15 add upto 18, which is still less than 20, we can eliminate 3 too and move onto the next element 7.

54 | 55 | ``` 56 | [2, 3, 7, 8, 9, 12, 14, 15] 57 | ^ ^ 58 | ``` 59 | 60 |

7 and 15 add upto 22, which is greater than 20, making this case different from the previous two cases.

61 |

Now, instead of altering l, we alter r, decrementing it until the sum is less than or equal to 20.

62 |

As it turns out, r has to be decremented until it becomes 5 and points to 12.

63 | 64 | ``` 65 | [2, 3, 7, 8, 9, 12, 14, 15] 66 | ^ ^ 67 | ``` 68 | 69 |

l is incremented to 3 (and now points to 8) as 19 is less than 20.

70 | 71 | ``` 72 | [2, 3, 7, 8, 9, 12, 14, 15] 73 | ^ ^ 74 | ``` 75 | 76 |

If you notice, the numbers that the two pointers now point to add upto exactly 20.

77 |

Thus, we got the solution to our problem.

78 |

The important thing to notice here is that incrementing l increases our sum while decrementing r decreases our sum. 79 |

What about the case where no such pair exists, such as when x = 13?

80 |

Then, we continue the process until l becomes equal to r, at which point we break out of the loop and conclude that no answer exists.

81 | 82 |
83 | C++ Implementation
84 | 
85 | ```cpp
86 | void solve() {
87 |     int n, x;
88 |     cin >> n >> x;
89 |     vector<int> arr(n);
90 |     for (int i = 0; i < n; i++) cin >> arr[i];
91 |     sort(arr.begin(), arr.end());
92 |     int l = 0, r = n - 1;
93 |     while (l < r) {
94 |         if (arr[l] + arr[r] < x)
95 |             l++;
96 |         else if (arr[l] + arr[r] > x)
97 |             r--;
98 |         else {
99 |             cout << "YES\n";
100 |             cout << arr[l] << ' ' << arr[r] << "\n";
101 |             return;
102 |         }
103 |     }
104 |     cout << "NO\n";
105 | }
106 | ```
107 | 
109 | 110 |
111 | Python Implementation
112 | 
113 | ```py
114 | def solve():
115 |     n, x = tuple(map(int, input().split()))
116 |     arr = list(map(int, input().split()))
117 |     arr.sort()
118 |     l = 0
119 |     r = n - 1
120 |     while l < r:
121 |         if arr[l] + arr[r] < x:
122 |             l += 1
123 |         elif arr[l] + arr[r] > x:
124 |             r -= 1
125 |         else:
126 |             print('YES')
127 |             print(arr[l], arr[r])
128 |             return
129 |     print('NO')
130 | ```
132 | 133 |
134 | 135 | ## Time Complexity 136 | 137 |

It turns out that two pointers actually takes $O(n)$ time.

138 |

If you notice, in every iteration of the loop, either l or r takes on a value that it has never taken before, while maintaining the l ≤ r condition.

139 |

It is not too hard to see that the maximum number of iterations is n - 1 (the two pointers will always collide after that and cannot move past each other due to the condition).

140 |

Hence, there can only be a maximum of n - 1 iterations of the while loop in total, resulting in the time complexity of the algorithm being $O(n)$.

141 |

Note that the time complexity of the overall code is $O(n \log n)$, as the array is sorted first in this case.

142 |

In general, you can apply two pointers for greedy solutions when you can find certain properties that guarantee that both your two pointers always move in a specific direction (either left or right) without crossing over each other. 143 | 144 | ## Practice Problems 145 | 146 | 1. [Codeforces 381A - Sereja and Dima](https://codeforces.com/problemset/problem/381/A) 147 | 2. [Codeforces 1462A - Favorite Sequence](https://codeforces.com/problemset/problem/1462/A) 148 | 3. [Codeforces 1791 - Prepend and Append](https://codeforces.com/problemset/problem/1791/C) 149 | 4. [Codeforces 6C - Alex, Bob and Chocolate](https://codeforces.com/problemset/problem/6/C) 150 | 5. [Sum of Two Values](https://cses.fi/problemset/task/1640) 151 | 6. [Subarray Sums I](https://cses.fi/problemset/task/1660) 152 | 153 | ## Links 154 | 155 | 1. [GeeksForGeeks tutorial](https://www.geeksforgeeks.org/two-pointers-technique/) 156 | 2. [USACO tutorial](https://usaco.guide/silver/two-pointers?lang=cpp) 157 | 3. [Video visualisation by Josh's Dev Box](https://www.youtube.com/watch?v=On03HWe2tZM) 158 | 4. [Overview by Team AlgoDaily](https://www.youtube.com/watch?v=-gjxg6Pln50) 159 | 5. [Errichto solving LeetCode problems](https://www.youtube.com/watch?v=QwN-weNSrAg) 160 | -------------------------------------------------------------------------------- /Week 2/1. Brute Force.md: -------------------------------------------------------------------------------- 1 | # Brute Force 2 | 3 | ## What is it? 4 | 5 |

Brute force is an approach to problem-solving that involves examining every potential solution to a problem.

6 |

It is generally used in problems where you are asked to count the number of solutions or find the 'best' solution when the input size is quite small.

7 |

Writing a brute force algorithm does not require a lot of logic building as it is the most simple and naive way of solving a particular problem.

8 | 9 | ## Number of subarrays with even sum 10 | 11 | ### Problem Statement 12 | 13 |

You are given an array consisting of n integers.

14 |

Your task is to find the number of subarrays of the array whose sum is even.

15 |

Note that a subarray of an array is a contiguous section of an array that can be obtained by deleting elements at the beginning and at the end of the array.

16 | 17 | ### Sample Case 18 | 19 |

Let us take the array [3, 7, 2, 6, 3, 4].

20 |

[7, 2, 6, 3] is one subarray of this array whose sum is even.

21 |

Our task is to write a program that calculates the total number of such subarrays.

22 |

As it turns out, the answer for this particular array is 9.

23 | 24 | ### Solution 25 | 26 |

The first step is to input the size of the original array and the values it contains.

27 |

We declare a variable result that keeps track of the number of even sum subarrays.

28 |

This is initialised to 0 as we have not examined any subarrays yet.

29 |

As we are writing a brute force algorithm, we must examine every possible subarray of the array and count the number of subarrays whose sum is even.

30 |

To iterate over every subarray, we use a nested for loop.

31 |

The outer loop goes through every value for the start index of the subarray (l from 0 to n - 1) and the inner loop goes through all corresponding possible values for the end index (r from l to n - 1).

32 |

Another loop is written to calculate the sum of the subarray - that is the sum of the elements in the array from index l to r.

33 |

If the sum happens to be even, result is incremented by 1, else it is left unchanged.

34 |

The code which is an implementation of this solution to the problem is given below.

35 | 36 |
37 | C++ Implementation 38 | 39 | ```cpp 40 | void solve () { 41 | int n; 42 | cin >> n; 43 | int array [n]; 44 | for (int i = 0; i < n; i ++) { 45 | cin >> array [i]; 46 | } 47 | int result = 0; 48 | for (int l = 0; l < n; l ++) { 49 | for (int r = l; r < n; r ++) { 50 | int sum = 0; 51 | for (int i = l; i <= r; i ++) { 52 | sum += array [i]; 53 | } 54 | if (sum % 2 == 0) { 55 | result++; 56 | } 57 | } 58 | } 59 | cout << result << '\n'; 60 | } 61 | ``` 62 | 63 |
64 | 65 |
66 | Python Implementation
67 | 
68 | ```py
69 | def solve():
70 |     n = int(input())
71 |     array = list(map(int, input().split()))
72 |     result = 0
73 |     for l in range(n):
74 |         for r in range(l, n):
75 |             sum = 0
76 |             for i in range(l, r + 1):
77 |                 sum += array[i]
78 |             if sum % 2 == 0:
79 |                 result += 1
80 |     print(result)
81 | ```
82 | 
84 | 
85 | To calculate the time complexity of the code, we see that the `l` and `r` loops will together run in $O(n^2)$ (to be precise, the number of iterations will be $n * (n + 1) / 2$, but we can ignore constants and lower order terms).
86 | 
87 | The innermost loop that calculates the `sum` will (in the worst case) run in $O(n)$, making the total time complexity $O(n^3)$. (bonus: can you think of an algorithm to solve it in $O(n^2)$?)
88 | 
89 | ## Number of Fibonacci sequences
90 | 
91 | ### Problem statement
92 | 
93 | 

In this problem, you have to find the number of Fibonacci sequences of length n and ending with the element k.

94 |

A Fibonacci sequence here is a sequence of non-decreasing non-negative integers defined by two starting elements ($a_1 \le a_2$) and the relation $a_i = a_{i-1} + a_{i-2}$ for $i \ge 3$.

95 | 96 | ### Sample case 97 | 98 |

Let us take n = 6 and k = 40.

99 |

There are two valid sequences that can be constructed for these parameters, and they have been listed below.

100 | 101 |
102 | - 0, 8, 8, 16, 24, 40
103 | - 5, 5, 10, 15, 25, 40
104 | 
105 | 106 |

Hence, the answer to this case is 2.

107 | 108 | ### Solution 109 | 110 |

One way to approach this problem is to iterate over the possible values for the first two elements and construct the corresponding sequence.

111 |

However, this is not ideal: each of the two starting elements can take any value from 0 to k, so iterating over all such pairs would result in a high time complexity (roughly $O(k^2)$ candidate sequences to check).

112 |

The best alternative is to instead iterate over the values that the second-last element can take on.

113 |

The values that it can take on are all integers from 0 to k.

114 |

Using the last two elements, the sequence is constructed backwards using the relation $a_i = a_{i+2} - a_{i+1}$ for $1 \le i \le n - 2$.

115 |

If the constructed sequence is valid (non-decreasing with no negative integers), the sequence is considered as a possible solution.

116 | 117 |
118 | C++ Implementation
119 | 
120 | ```cpp
121 | void solve () {
122 |     int n, k;
123 |     cin >> n >> k;
124 |     int result = 0;
125 |     for (int i = 0; i <= k; i ++) {
126 |         int array [n];
127 |         array [n - 1] = k;
128 |         array [n - 2] = i;
129 |         bool valid = true;
130 |         for (int j = n - 3; j >= 0; j --) {  // build the sequence backwards
131 |             array [j] = array [j + 2] - array [j + 1];
132 |             if (array [j] < 0 || array [j] > array [j + 1]) {
133 |                 valid = false;
134 |                 break;
135 |             }
136 |         }
137 |         if (valid) {
138 |             result++;
139 |         }
140 |     }
141 |     cout << result << '\n';
142 | }
143 | ```
144 | 
146 | 147 |
148 | Python Implementation
149 | 
150 | ```py
151 | def solve ():
152 |     n, k = tuple (map (int, input ().split (' ')))
153 |     result = 0
154 |     for i in range (k + 1):
155 |         array = [0] * n
156 |         array [n - 1] = k
157 |         array [n - 2] = i
158 |         valid = True
159 |         for j in range (n - 3, -1, -1):  # build the sequence backwards
160 |             array [j] = array [j + 2] - array [j + 1]
161 |             if array [j] < 0 or array [j] > array [j + 1]:
162 |                 valid = False
163 |                 break
164 |         if valid:
165 |             result += 1
166 |     print (result)
167 | ```
168 | 
170 | 171 | Looking at the code, the time complexity is clearly $O(k \cdot n)$. 172 | 173 | ## Note 174 | 175 |

While brute force algorithms are very easy to implement most of the time as they do not require a lot of thinking, sometimes you will need to make some clever observations to simplify your implementation.

176 |

However, the downside to using brute force algorithms is that they take a lot of time to execute, resulting in a high chance that your code gets the Time Limit Exceeded verdict.

177 |

Brute force algorithms generally work for easy problems, but as you move towards solving tougher problems with tighter constraints, you will need to find better algorithms.

178 |

In general, before coding anything, plan out your solution and calculate its time complexity. Based on the input size (and the rule of thumb about execution time), you will know if your approach will TLE or not.

179 | 
180 | ## Practice problems
181 | 
182 | 1. [Codeforces 4A - Watermelon](https://codeforces.com/problemset/problem/4/A)
183 | 2. [Codeforces 271A - Beautiful Year](https://codeforces.com/problemset/problem/271/A)
184 | 3. [Codeforces 1703C - Cypher](https://codeforces.com/problemset/problem/1703/C)
185 | 4. [Codeforces 25A - IQ test](https://codeforces.com/problemset/problem/25/A)
186 | 5. [Codeforces 1368A - C+=](https://codeforces.com/problemset/problem/1368/A)
187 | 6. [Codeforces 320A - Magic Numbers](https://codeforces.com/problemset/problem/320/A)
188 | 7. [Codeforces 734B - Anton and Digits](https://codeforces.com/problemset/problem/734/B)
189 | 8. [Codeforces 1382A - Common Subsequence](https://codeforces.com/problemset/problem/1382/A)
190 | 
191 | ## Links to external resources
192 | 
193 | 1. [Brute force algorithms](https://www.youtube.com/watch?v=BYWf6-tpQ4k)
194 | 2. [Introduction to brute force by GeeksForGeeks](https://www.geeksforgeeks.org/brute-force-approach-and-its-pros-and-cons/)
195 | 3. [Using brute force to crack passwords](https://www.kaspersky.com/resource-center/definitions/brute-force-attack)
196 | 
--------------------------------------------------------------------------------
/Week 5/4. Graph Traversal.md:
--------------------------------------------------------------------------------
1 | # Graph Traversal
2 | 
3 | There are two basic graph traversal algorithms:
4 | - Depth First Search (DFS)
5 | - Breadth First Search (BFS)
6 | 
7 | Both do similar things: from one vertex $u$, go to another unvisited vertex $v$ by following the edge $(u, v)$. They just use different underlying data structures (a stack, usually implicit via recursion, for DFS versus a queue for BFS) and so the order of visited nodes is different.
8 | 
9 | Both DFS and BFS are $O(V+E)$ algorithms. However, on a case by case basis, one is more natural and efficient to implement than the other.
10 | 
11 | You can look at [this demo](https://visualgo.net/en/dfsbfs?slide=1) for a cool visualization of graph traversal.
12 | 
13 | ## Breadth First Search
14 | 
15 | BFS begins from a source and traverses nodes "breadth" wise. Nodes are traversed in order of shortest distance from the source node, which makes it useful to find distances from the source. The algorithm works the same way for both directed and undirected graphs. Finding distances using BFS, however, works only for unweighted graphs.
16 | 
17 | BFS uses a queue to simulate the graph traversal. We push nodes into the queue in the order of distances (the BFS order). Note that the front of the queue will always have the closest node to the source, and the source itself at the beginning of the BFS.
18 | 
19 | In every iteration, we pop the first node from the queue and push all its adjacent unvisited vertices into the queue. Therefore, each node is visited at most once. We can find the distances from the source while doing the BFS traversal itself.
20 | 
21 | Suppose the node $a$ has a distance $d$ from the source $s$. The node $a$ is then popped from the queue and an adjacent node $b$ is pushed into the queue. Since the graph is unweighted, the distance from $b$ to the source $s$ will be $d + 1$. Using this property, we find the distances from the source $s$. The initial node pushed into the queue is the source itself with distance 0.
22 | 
23 | ```cpp
24 | void bfs(int source) {
25 |     vector<int> d(n, -1); // -1 means unvisited
26 |     d[source] = 0;
27 |     queue<int> q;
28 |     q.push(source);
29 |     while(!q.empty()) {
30 |         int v = q.front(); q.pop();
31 |         for(int u : g[v]) { // g is the adjacency list of the graph
32 |             if(d[u] == -1) {
33 |                 d[u] = d[v] + 1;
34 |                 q.push(u);
35 |             }
36 |         }
37 |     }
38 | }
39 | ```
40 | 
41 | You can also do a BFS from multiple sources: instead of pushing just the single source into the queue initially, you push all the sources into the queue (with their distances marked as 0). We can use this to find the minimum distance of any node v to any of the sources. This is a technique known as multi-source BFS.
42 | 
43 | ## Depth First Search (DFS)
44 | 
45 | Starting from a source vertex, DFS will traverse the graph ‘depth-first’. Every time it hits a branching point (a vertex with more than one neighbour), DFS will choose one of the unvisited neighbours and visit this neighbour vertex.
46 | 
47 | DFS repeats this process and goes deeper until it reaches a vertex where it cannot go any deeper. When this happens, DFS will ‘backtrack’ and explore the remaining unvisited neighbours, if any. One call of `dfs(u)` will only visit all vertices that are directly / indirectly connected to (or reachable from) vertex u.
48 | 
49 | There are two ways to implement DFS: using recursion or a stack. The stack implementation is rarely used in competitive programming, mainly because using recursion is much shorter and cleaner.
50 | 
51 | (Sidenote: the recursive implementation actually uses the fact that function calls also operate on an internal stack, which means that conceptually the two approaches are still the same.)
52 | 
53 | ```cpp
54 | set<int> seen;
55 | void dfs(int cur)
56 | {
57 |     seen.insert(cur);
58 |     for (auto &v : g[cur]) // g is the adjacency list, as in the BFS code
59 |     {
60 |         if (!seen.count(v))
61 |             dfs(v);
62 |     }
63 | }
64 | ```
65 | 
66 | One easy application of DFS is in checking if a directed graph is acyclic or not, as directed acyclic graphs (DAGs) have very useful properties that can simplify certain types of problems greatly.
67 | 
68 | We can simply run DFS from some source vertex. If we reach a vertex that is still on the current DFS path (i.e. we have entered it but not finished exploring it yet), we have just traversed a cycle. Storing the sequence of nodes on the path between the two visits will give you the cycle.
69 | 
70 | ## Connected Components
71 | 
72 | A connected component is a maximal connected subgraph (it is a connected subgraph that is not part of any larger connected subgraph). Any graph can be partitioned into a number of connected components and in many graph problems, we come up with a solution for a connected graph and extend it for all graphs by repeating for each connected component.
73 | 
74 | To list all connected components, we first pick some vertex and then run DFS / BFS to find all vertices that are connected to it. The set of vertices we visit forms a connected component. We then pick some vertex we have not visited and repeat until we visit every single vertex.
75 | 
76 | ```cpp
77 | vector<int> seen; // resize to n zeroes after reading the input; g is the adjacency list
78 | vector<int> comp;
79 | 
80 | void dfs(int cur)
81 | {
82 |     if (seen[cur]) return;
83 |     seen[cur] = 1;
84 |     comp.push_back(cur);
85 |     for (auto &i : g[cur])
86 |     {
87 |         dfs(i);
88 |     }
89 | }
90 | 
91 | int main()
92 | {
93 |     // ...read n and the graph into g, initialise seen...
94 |     vector<vector<int>> comps;
95 |     for (int i = 0; i < n; i++)
96 |     {
97 |         if (seen[i]) continue;
98 |         comp.clear();
99 |         dfs(i);
100 |         comps.push_back(comp);
101 |     }
102 | }
103 | ```
104 | 
105 | ## Flood Fill
106 | 
107 | Flood fill is similar to BFS and DFS.
The only change is that instead of nodes of a graph, we are dealing with adjacent squares in a 2D grid. You can think of the cells in the grid as nodes, and when two cells are next to each other, you can make an edge between them.
108 | 
109 | Here is some code that finds the size of the connected component that encloses $(r, c)$ in a 2D grid:
110 | ```cpp
111 | int dr[] = { 1, 1, 0,-1,-1,-1, 0, 1};
112 | int dc[] = { 0, 1, 1, 1, 0,-1,-1,-1}; // the order is: S/SE/E/NE/N/NW/W/SW
113 | 
114 | int floodfill(int r, int c, char c1, char c2) // returns the size of CC
115 | {
116 |     if ((r < 0) || (r >= R)) return 0; // outside grid, part 1
117 |     if ((c < 0) || (c >= C)) return 0; // outside grid, part 2
118 |     if (grid[r][c] != c1) return 0;    // does not have color c1
119 |     int ans = 1;                       // (r, c) has color c1
120 |     grid[r][c] = c2;                   // to avoid cycling
121 | 
122 |     for (int d = 0; d < 8; ++d)
123 |         ans += floodfill(r+dr[d], c+dc[d], c1, c2); // the code is neat as we use dr[] and dc[]
124 |     return ans;
125 | }
126 | ```
127 | 
128 | ## Bipartite Graphs
129 | 
130 | A bipartite graph is a graph whose nodes can be partitioned into two sets such that each edge is from one set to another and no edge connects nodes in the same set.
131 | 
132 | Another way to think about it is that each node can be 'coloured' in one of two colours and no two nodes of the same colour are adjacent. Many problems that do not even look like graph problems at first can be solved by modelling them as bipartite graph problems.
133 | 
134 | Let's look at the problem [Among Us](https://www.codechef.com/INOIPRAC/problems/AMONGUS2) from INOI 2021.
135 | 
136 | There are $N$ astronauts (numbered $1$ to $N$) who suspect there are parasites among them, and $Q$ statements from them. Each statement made by astronaut $i$ about $j$ is of one of two types:
137 | - Type 1 : i accuses j of being a parasite
138 | - Type 2 : i vouches for j being a human.
139 | 
140 | Given that no human tells a lie and no parasite tells the truth, you need to find whether the statements made by the astronauts are consistent. If they are consistent, report the maximum possible number of parasites.
141 | 
142 | For the solution, we'll call the two types of people liars and truthfuls.
143 | 
144 | There are two types of statements.
145 | 
146 | - `1 i j` : $i$ accuses $j$ of being a liar.
147 | 
148 | If $i$ is truthful, then $j$ is a liar.
149 | 
150 | If $i$ is a liar, then $j$ is truthful.
151 | 
152 | - `2 i j` : $i$ vouches for $j$ being truthful.
153 | 
154 | If $i$ is truthful, then $j$ is truthful.
155 | 
156 | If $i$ is a liar, then $j$ is a liar.
157 | 
158 | This means that any pair $(i,j)$ associated with a type 1 statement are of opposite types and those associated with a type 2 statement are of the same type.
159 | 
160 | This means that we can model the problem as a graph with $N$ nodes and $Q$ edges where each edge is of 2 types: a type 1 edge connects two nodes $i$ and $j$ of opposite colours while a type 2 edge connects two nodes $i$ and $j$ of the same colour.
161 | 
162 | We can then colour the nodes using DFS by arbitrarily assigning a colour to the starting node. If the same node is forced to take two opposite colours, the statements are inconsistent.
163 | 
164 | Since the colours within each connected component can be flipped independently and still give a valid colouring, count the number of nodes of each colour in every component and add the larger of the two counts to the answer.
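To make the colouring step concrete, here is a rough sketch of the 2-colouring DFS described above (the names `adj`, `colour` and `consistent` are my own, not from an official solution); each edge stores whether its two endpoints must get the same colour or opposite colours:

```cpp
#include <bits/stdc++.h>
using namespace std;

// adj[u] holds pairs (v, same): same = 1 if u and v must have the same
// colour (type 2 statement), 0 if they must have opposite colours (type 1)
vector<vector<pair<int, int>>> adj;
vector<int> colour; // -1 = uncoloured, otherwise 0 or 1
bool consistent = true;

void dfs(int u)
{
    for (auto &[v, same] : adj[u])
    {
        int want = same ? colour[u] : 1 - colour[u];
        if (colour[v] == -1)
        {
            colour[v] = want;
            dfs(v);
        }
        else if (colour[v] != want)
        {
            // the same node is forced to take two opposite colours
            consistent = false;
        }
    }
}
```

For each connected component, you would colour an arbitrary uncoloured node with 0, run `dfs` from it, and then add the larger of the two colour counts of that component to the parasite total.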
165 | 
166 | ## Problems
167 | 
168 | - [Labyrinth](https://cses.fi/problemset/task/1193)
169 | - [Maze](https://codeforces.com/problemset/problem/377/A)
170 | - [Wealth Disparity](https://www.codechef.com/INOIPRAC/problems/INOI1601)
171 | - [Multihedgehog](https://codeforces.com/contest/1068/problem/E)
172 | - [Department Strengths](https://www.codechef.com/INOIPRAC/problems/INOI2001)
173 | - [Counting Rooms](https://cses.fi/problemset/task/1192)
174 | - [Building Roads](https://cses.fi/problemset/task/1666)
175 | - [Icy Perimeter](http://www.usaco.org/index.php?page=viewproblem2&cpid=895)
176 | - [Fence](http://www.usaco.org/index.php?page=viewproblem2&cpid=895)
177 | - [UVa 00469 - Wetlands of Florida](https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=410)
178 | - [Building Teams](https://cses.fi/problemset/task/1668)
179 | 
--------------------------------------------------------------------------------
/Week 4/2. Binary Search.md:
--------------------------------------------------------------------------------
1 | # Binary Search
2 | 
3 | ## Introduction
4 | 
5 | Divide and Conquer (D&C) is a problem-solving paradigm in which a problem is made simpler by ‘dividing’ it into smaller parts and then conquering each part. The steps:
6 | 
7 | 1. Divide the original problem into sub-problems—usually by half or nearly half,
8 | 2. Find (sub)-solutions for each of these sub-problems—which are now easier,
9 | 3. If needed, combine the sub-solutions to get a complete solution for the main problem.
10 | 
11 | You have learnt in the previous weeks about a few divide and conquer techniques. $O(n \log n)$ sorting algorithms like merge sort, heap sort etc. work on this principle. Data structures like `set` use D&C methods to store data. However, the most common use of it is binary search.
12 | 
13 | 
14 | 
15 | ## Standard Usage
16 | 
17 | Binary search refers to searching for an element in a sorted array by repeatedly splitting it into halves and referring to the middle element. Consider an array sorted in ascending order. The following code can be used to search for an element in the array.
18 | 
19 | ```cpp
20 | while (l <= r)
21 | {
22 |     mid = (l + r) / 2;
23 |     if (arr[mid] == x)
24 |     {
25 |         cout << "Found!" << "\n";
26 |         break;
27 |     }
28 |     if (x < arr[mid])
29 |         r = mid - 1;
30 |     if (x > arr[mid])
31 |         l = mid + 1;
32 | }
33 | ```
34 | 
35 | Notice how the left and right limits of the search range are dynamically changed to narrow down on the element we want to search.
36 | 
37 | We check the middle of the sorted array to determine if it contains what we are looking for. If it does, or there are no more items to consider (l > r), we stop.
38 | 
39 | Otherwise, we can decide whether the answer is to the left or right of the middle element and continue searching. If `x` is smaller than the middle element, it will obviously be smaller than everything to the right of the middle (because the array is in ascending order) and so we can update `r` to ignore that part.
40 | 
41 | As the size of the search space is halved (in binary fashion) after each check, we can see that:
42 | 
43 | $$ 2^{ops} \approx n $$
44 | 
45 | This means that its time complexity is $O(\log n)$.
46 | 
47 | (If you're curious about analysing divide and conquer time complexities in general, read up on the [master theorem](https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)))
48 | 
49 | As mentioned before, two C++ STL functions `lower_bound` and `upper_bound` are very useful in this context and will save you the effort of implementing the binary search.
50 | 
51 | - `lower_bound(start_ptr, end_ptr, num)`:
52 | 
53 | Returns an iterator to the first position of a number greater than or equal to num
54 | 
55 | - `upper_bound(start_ptr, end_ptr, num)`:
56 | 
57 | Returns an iterator to the first position of a number strictly greater than num
58 | 
59 | The `start_ptr` variable holds the starting point of the binary search, `end_ptr` holds the ending position of the binary search space and `num` is the value to be found.
60 | 
61 | To get the actual index, just subtract the first position i.e. `vec.begin()` from the returned iterator.
62 | 
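As a quick illustration of that index arithmetic (the vector `v` and the searched value here are just made-up examples):

```cpp
vector<int> v = {2, 4, 4, 7, 9}; // must be sorted

// index of the first element >= 4 (here 1)
int first_ge = lower_bound(v.begin(), v.end(), 4) - v.begin();
// index of the first element > 4 (here 3)
int first_gt = upper_bound(v.begin(), v.end(), 4) - v.begin();

// if no such element exists, both return v.end(), i.e. the index n,
// so check for that before using the result
```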
63 | 
64 | ## Bisection Method
65 | 
66 | The binary search principle can be extended to compute the root of a function (its zero) **as long as it is monotonic**.
67 | 
68 | Assume you are taking a loan of $x$ rupees from the bank at an interest rate of $i$ % per month for $m$ months. You need to compute the installment $d$ that you need to pay per month such that the loan is paid off in the given time.
69 | 
70 | The bank charges interest on the unpaid loan at the end of each month. In short, if $f(d)$ is the amount of loan left after $m$ months of paying an installment of $d$, you need to compute $d$ such that $f(d) \approx 0$ for the given $x$, $m$ and $i$.
71 | 
72 | Let us take an example with $m$ = 2 months, $x$ = 1000 rupees and $i$ = 10%.
73 | 
74 | An easy way to solve this root finding problem is to use the bisection method.
75 | 
76 | We pick a reasonable range as a starting point. We want to find $d$ within the range $[a, b]$ where $a = 0.01$ (we have to pay at least one paisa) and $b = (1 + \frac{i}{100}) * x$ (the earliest we can complete is $m = 1$ if we pay exactly $(1 + \frac{i}{100}) * x$ rupees after one month). In this example, $b = (1 + 0.1) * 1000 = 1100.00$ rupees.
77 | 
78 | ![Bisection simulation](Images/bisection-simulation.png)
79 | 
80 | Notice that the bisection method only requires $O(\log((b-a)/\epsilon))$ iterations to get an answer with error smaller than $\epsilon$. In this example, that is $\log_2(1099.99/\epsilon) \approx 40$ iterations for $\epsilon = 10^{-9}$, which is far faster than a brute force scan over all candidate values of $d$.
81 | 
82 | However, this is still not the most general form of binary search.
83 | 
84 | 
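To make the idea concrete, here is a rough sketch of the bisection for this loan example (the helper name `remaining` and the fixed iteration count are my own choices, not from the original write-up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// amount of loan left after m months if we pay an installment of d every month
double remaining(double x, double interest, int m, double d)
{
    double balance = x;
    for (int month = 0; month < m; month++)
    {
        balance *= (1 + interest / 100.0); // interest charged on the unpaid loan
        balance -= d;                      // pay the installment
    }
    return balance;
}

int main()
{
    double x = 1000, interest = 10;
    int m = 2;

    double lo = 0.01, hi = (1 + interest / 100.0) * x;
    // paying more per month leaves less owed, so remaining() is monotonic in d
    for (int iter = 0; iter < 100; iter++)
    {
        double mid = (lo + hi) / 2;
        if (remaining(x, interest, m, mid) > 0)
            lo = mid; // still in debt after m months, need to pay more
        else
            hi = mid; // fully paid (or overpaid), can try paying less
    }
    cout << fixed << setprecision(2) << hi << "\n"; // ~576.19 for this example
    return 0;
}
```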
85 | 
86 | ## Binary Search on Answer
87 | 
88 | Consider a boolean function $f: \{l, l + 1, ..., r\} \rightarrow \{0, 1\}$ that is monotonic on $[l, r]$, that is:
89 | 
90 | $$ f(l) \le f(l + 1) \le ... \le f(r) $$
91 | 
92 | or
93 | 
94 | $$ f(l) \ge f(l + 1) \ge ... \ge f(r) $$
95 | 
96 | Binary search finds the unique index $x$ such that $f(x) = f(l)$ and $f(x + 1) = f(r)$, or reports that it does not exist.
97 | 
98 | The time complexity will be $O(T \cdot \log(r - l))$ where $T$ is the time complexity of one evaluation of the function $f$. This means that we can use binary search for **finding the changing point of any monotonic function we define**.
99 | 
100 | This can be very useful for certain problems where checking if a solution is valid is easy but finding the solution with a specific property is much harder (as long as the associated boolean function is monotonic).
101 | 
102 | We can take the initial range $[l, r]$ as the range of all possible solutions and run our binary search by checking if the current solution is valid:
103 | 
104 | ```cpp
105 | // This finds the first index on [l, r] at which `f` is true
106 | ll ans = r + 1;
107 | while (l <= r)
108 | {
109 |     ll mid = (l + r) / 2;
110 |     if (f(mid))
111 |     {
112 |         ans = mid;
113 |         r = mid - 1;
114 |     }
115 |     else
116 |     {
117 |         l = mid + 1;
118 |     }
119 | }
120 | cout << ans << "\n";
121 | 
122 | ```
123 | To make things clear, let's look at an example.
124 | 
126 | 127 | ## Array Division 128 | 129 | Given an array of positive integers $a_1, a_2...a_n$, divide it into $k$ subarrays such that the maximum sum of a subarray is as small as possible. 130 | 131 | On its own, this type of minimising the maximum along with the partitions seems pretty hard to solve using normal methods. 132 | 133 | Let us think of the related checking problem: given a value $x$, can you divide array $a$ into $k$ subarrays such that the maximum sum of a subarray is atmost $x$? 134 | 135 | Note that this would mean that all subarray would have to have sum less than $x$. This means we can use a greedy strategy: greedily extend the current subarray as long as its sum is atmost $x$ and start a new subarray once it exceeds $x$. 136 | 137 | Finally, we will check if the number of subarrays created is atmost $k$ to see if using $x$ as the solution is valid or not. This will have time complexity $O(n)$. 138 | 139 | (Note that having fewer partitions is never a problem, we can always split it up more and the sum will still never cross $k$) 140 | 141 | Now, we can binary search on the answer $x$ since if $x$ is a valid solution, any $y \gt x$ is also a valid solution (if the maximum sum of a subarray is less than $x$, then clearly it is less than $y$ as well), which means it is monotonic. 142 | 143 | The total time complexity will be $O(n * log (MAXSUM))$. 144 | 145 |
146 | 
147 | Implementation
148 | 
149 | ```cpp
150 | #include <bits/stdc++.h>
151 | using namespace std;
152 | typedef long long ll;
153 | 
154 | bool f(const vector<ll> &arr, const ll k, ll maxsum)
155 | {
156 |     ll count = 0;
157 |     ll cur = 0;
158 | 
159 |     for (auto &i : arr)
160 |     {
161 |         if (i > maxsum)
162 |             return false;
163 |         if (cur + i > maxsum)
164 |         {
165 |             count++;
166 |             cur = 0;
167 |         }
168 |         cur += i;
169 |     }
170 | 
171 |     if (cur > 0)
172 |         count++;
173 | 
174 |     return count <= k;
175 | }
176 | 
177 | 
178 | int main()
179 | {
180 |     ll n, k;
181 |     cin >> n >> k;
182 | 
183 |     vector<ll> arr(n);
184 |     for (ll &i : arr)
185 |         cin >> i;
186 | 
187 |     ll l = 0, r = 3e14;
188 | 
189 |     ll ans = r + 1;
190 |     while (l <= r)
191 |     {
192 |         ll mid = (l + r) / 2;
193 |         if (f(arr, k, mid))
194 |         {
195 |             ans = mid;
196 |             r = mid - 1;
197 |         }
198 |         else
199 |         {
200 |             l = mid + 1;
201 |         }
202 |     }
203 |     cout << ans << "\n";
204 |     return 0;
205 | }
206 | ```
207 | 
209 | 210 |
211 | 212 | ## Problems 213 | 214 | A great resource for binary search in general is the Codeforces EDU course. 215 | 216 | **1. Workout Prep:** 217 | - [Binary Search](https://leetcode.com/problems/binary-search) 218 | - [Search Insert Position](https://leetcode.com/problems/search-insert-position/) 219 | - [Sum of Two Values](https://cses.fi/problemset/task/1640) 220 | - [Sum of Three Values](https://cses.fi/problemset/task/1641) 221 | 222 | **2. Warmup:** 223 | - [Firefly](https://open.kattis.com/problems/firefly) 224 | - [Room Painting](https://open.kattis.com/problems/roompainting) [hint : use `lower_bound`] 225 | - [Out of Sorts](https://open.kattis.com/problems/outofsorts) 226 | - [Fibonaccharsis](https://codeforces.com/problemset/problem/1853/B) 227 | - [Search a 2D Matrix II](https://leetcode.com/problems/search-a-2d-matrix-ii/) 228 | - [Valid Perfect Square](https://leetcode.com/problems/valid-perfect-square/) 229 | - [Subarray Sums II](https://cses.fi/problemset/task/1661) 230 | 231 | **3. Core Workout:** 232 | - [Vika and the Bridge](https://codeforces.com/problemset/problem/1848/B) 233 | - [XOR Partition](https://codeforces.com/problemset/problem/1849/F) 234 | - [Toy Blocks](https://codeforces.com/problemset/problem/1452/B) 235 | - [Sage's Birthday](https://codeforces.com/problemset/problem/1419/D2) 236 | - [Keshi Is Throwing a Party](https://codeforces.com/problemset/problem/1610/C) 237 | - [Dubious Cyrpto](https://codeforces.com/problemset/problem/1379/B) 238 | - [Subarray Divisibility](https://cses.fi/problemset/task/1662) 239 | - [Array Division](https://cses.fi/problemset/task/1085) 240 | 241 |
242 | 
--------------------------------------------------------------------------------
/Week 1/6. Basic Math.md:
--------------------------------------------------------------------------------
1 | # Basic Math
2 | ## Modular Arithmetic:
3 | 
4 | In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, which is called the modulus. This means that a "modulo $m$" system has only $m$ numbers: $0, 1 ... m - 1$.
5 | 
6 | Meanwhile in programming, the modulo operator `a % b` calculates the remainder on dividing `a` by `b`. Most elementary properties of modular arithmetic are quite obvious:
7 | - $`(a + b) \, mod \, m \equiv (a \, mod \, m + b \, mod \, m) \, mod \, m`$
8 | - $`(a - b) \, mod \, m \equiv (a \, mod \, m - b \, mod \, m) \, mod \, m`$
9 | - $`(a \cdot b) \, mod \, m \equiv (a \, mod \, m \cdot b \, mod \, m) \, mod \, m`$
10 | 
11 | Often in competitive coding, you will come across problems in which the input constraint would be such that the output exceeds even $10^{18}$! Such problems often ask you to output the answer modulo $m$ where $m$ is a large number like $10^9+7$. In such scenarios, you would require modular arithmetic to compute the answer efficiently and accurately.
12 | 
13 | Given 3 numbers $p$, $q$ and $m$, we may need to compute the value of $p^q \, mod \, m$. However, if $q$ is very large ($\approx 10^{18}$), simply multiplying $p$ with itself $q$ times (while taking modulo) will be too inefficient as the time complexity is $O(q)$.
14 | 
15 | Instead, notice this property:
16 | - $`p^2 \, mod \, m \equiv (p)^2 \, mod \, m`$
17 | - $`p^4 \, mod \, m \equiv (p^2)^2 \, mod \, m`$
18 | - $`p^8 \, mod \, m \equiv (p^4)^2 \, mod \, m`$
19 | 
20 | ...and so on.
21 | 
22 | So, we can calculate $p^k$ where $k$ is a power of 2 simply by repeated squaring.
23 | 
24 | Since $a^{b+c} = a^b \cdot a^c$, if we can write $q$ as a sum of some powers of 2, we could multiply the corresponding values from the squaring (while taking modulo) and get our answer.
25 | 
26 | Since the binary representation of $q$ tells us exactly that, we can go over each bit from right to left, multiply the current power value if the bit is set and update the power value by squaring it.
27 | 
28 | Since our loop goes over each bit of $q$ once and the number of bits in $q$ is approximately $`log \, q`$, the time complexity of this approach is $`O(log \, q)`$.
29 | 
30 | 
31 | C++ Implementation 32 | 33 | ```cpp 34 | long long mod_exp(long long p, long long q , long long m) 35 | { 36 | long long ans = 1; 37 | // stores the current value of p^k where k is a power of 2 representing the current bit 38 | long long cur = p; 39 | 40 | while (q != 0) 41 | { 42 | // if last bit is 1, multiply cur to ans 43 | if (q % 2 == 1) 44 | ans = (ans * cur) % m; 45 | 46 | // remove last bit 47 | q /= 2; 48 | 49 | // update cur by squaring 50 | cur = (cur * cur) % m; 51 | } 52 | return ans; 53 | } 54 | ``` 55 |
56 | 57 |
58 | Python Implementation
59 | 
60 | ```py
61 | def expo(p, q, m):
62 |     ans = 1
63 |     # stores the current value of p^k where k is a power of 2 representing the current bit
64 |     cur = p
65 | 
66 |     while q != 0:
67 |         # if the last bit is 1, multiply cur into ans
68 |         if q % 2 == 1:
69 |             ans = (ans * cur) % m
70 | 
71 |         # remove last bit
72 |         q //= 2
73 | 
74 |         # update cur by squaring
75 |         cur = (cur * cur) % m
76 | 
77 |     return ans
78 | ```
79 | 
80 | 81 | **Links:** 82 | 1. https://cp-algorithms.com/algebra/binary-exp.html 83 | 2. [Exponentiation](https://cses.fi/problemset/task/1095) 84 | 3. [Count Good Numbers](https://leetcode.com/problems/count-good-numbers/) 85 | 86 |
87 | 
88 | ## Primes and Factors:
89 | 
90 | ### Sieve of Eratosthenes:
91 | 
92 | Given a number $n$, we can find all prime numbers up to $n$ using the Sieve of Eratosthenes.
93 | 
94 | The algorithm works on the principle that multiples of a prime are not prime numbers. We first mark every number as a prime and then iterate over them. If a number is marked as a prime, we mark all of its multiples as being not prime.
95 | 
96 | While the below implementation may seem like an $O(n^2)$ algorithm because of the two nested loops, its actual time complexity is $`O(n \, log(log \, n))`$ ([here](https://cp-algorithms.com/algebra/sieve-of-eratosthenes.html#asymptotic-analysis) is the derivation if you're interested).
97 | 
98 | You can use this to precompute all primes (up to a certain number) and then use them for every test case/query with $O(1)$ access time.
99 | 
100 | 
101 | C++ Implementation
102 | 
103 | ```cpp
104 | // prime[i] = 1 if prime, 0 if not
105 | // first, we mark everything as prime
106 | vector<int> prime(n + 1, 1);
107 | prime[1] = 0;
108 | prime[0] = 0;
109 | for (int i = 2; i <= n; i++)
110 | {
111 |     if (!prime[i]) continue;
112 |     // Here, we know i is prime
113 |     // The multiples of i below i^2 would have already been marked previously under the other prime factor (that would be smaller than i) and so we start marking from i^2
114 |     for (int j = i * i; j <= n; j += i)
115 |         prime[j] = 0;
116 | }
117 | ```
118 | 
119 | 120 |
121 | Python Implementation 122 | 123 | ```py 124 | # prime[i] = True if prime, False if not 125 | # first, we mark everything as prime 126 | n = 100 127 | prime = [True] * (n + 1) 128 | prime[0] = False 129 | prime[1] = False 130 | for i in range(2, n + 1): 131 | if not prime[i]: 132 | continue 133 | # Here, we know i is prime 134 | # The multiples of i below i^2 would have already been marked previously under the other prime factor (that would be smaller than i) and so we start marking from i^2 135 | j = i * i 136 | while j <= n: 137 | prime[j] = False 138 | j += i 139 | ``` 140 |
141 | 142 |
143 | 
144 | ### Prime Factorisation:
145 | 
146 | Every positive integer $n$ can be expressed in the form of:
147 | $$n = p_1^{a_1} \cdot p_2^{a_2} ... p_i^{a_i}$$
148 | where each $p_i$ is a prime number and each $a_i$ is a positive integer.
149 | 
150 | A simple way of computing the prime factors of $n$ is to repeatedly divide $n$ by its smallest factor. Since at most one prime factor of $n$ can be bigger than
151 | $\sqrt{n}$, we only need to test factors till $\sqrt{n}$, making the time complexity of this approach $O(\sqrt{n})$.
152 | 
153 | 
154 | C++ Implementation
155 | 
156 | ```cpp
157 | vector<int> fact(int n)
158 | {
159 |     vector<int> pf;
160 |     // Iterate over all possible divisors from 2 to sqrt(n)
161 |     for (int i = 2; i * i <= n; i++)
162 |     {
163 |         // as long as it is a factor, repeatedly divide it out
164 |         while (n % i == 0)
165 |         {
166 |             pf.push_back(i);
167 |             n /= i;
168 |         }
169 |     }
170 |     // whatever remains is either 1 or a prime factor larger than sqrt(n)
171 |     if (n > 1)
172 |         pf.push_back(n);
173 |     return pf;
174 | }
175 | ```
176 | 
177 | 178 |
179 | Python Implementation 180 | 181 | ```py 182 | def fact(n): 183 | pf = [] 184 | # Iterate over all possible divisors from 2 to sqrt(n) 185 | i = 2 186 | while i * i <= n: 187 | # as long as it is a factor, repeatedly divide it out 188 | while n % i == 0: 189 | pf.append(i) 190 | n //= i 191 | i += 1 192 | # if we could not find any factors from 2 to sqrt(n), the number itself is prime 193 | if n > 1: 194 | pf.append(n) 195 | return pf 196 | ``` 197 |
198 | 199 |
200 | 201 | ### LCM and GCD: 202 | 203 | Given two integers $a$ and $b$ such that $c = gcd(a,b)$ and $d = lcm(a,b)$, then : 204 | $$a \cdot b = c \cdot d$$ 205 | 206 | The Euclidean algorithm for finding $gcd(a,b)$ is 207 | 208 | ```math 209 | 210 | \text{gcd}(a, b) = 211 | \begin{cases} 212 | a & \text{if } b = 0 \\ 213 | \text{gcd}(b, a \bmod b) & \text{otherwise} 214 | \end{cases} 215 | 216 | 217 | ``` 218 | 219 | GCD and LCM computations are quite common in many competitive coding problems owing to their vast applications. 220 | 221 |
222 | C++ Implementation 223 | 224 | ```cpp 225 | int gcd(int a, int b) 226 | { 227 | return b == 0 ? a : gcd(b, a % b); 228 | } 229 | ``` 230 |
231 | 232 |
233 | Python Implementation
234 | 
235 | ```py
236 | def gcd(a, b):
237 |     return a if b == 0 else gcd(b, a % b)
238 | ```
239 | 
240 | 241 |
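As a side note, the identity above gives a one-line way to compute the LCM from the GCD. Here is a small sketch (using a `long long` version of the `gcd` above; dividing before multiplying keeps the intermediate value from overflowing):

```cpp
typedef long long ll;

ll gcd(ll a, ll b)
{
    return b == 0 ? a : gcd(b, a % b);
}

ll lcm(ll a, ll b)
{
    // a * b = gcd(a, b) * lcm(a, b), so lcm(a, b) = (a / gcd(a, b)) * b
    return a / gcd(a, b) * b;
}
```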
242 | 243 | ### Links: 244 | 1. https://cp-algorithms.com/algebra/sieve-of-eratosthenes.html 245 | 2. https://cp-algorithms.com/algebra/factorization.html#trial-division 246 | 3. [Count Primes](https://leetcode.com/problems/count-primes/) 247 | 4. [Factorial Trailing Zeroes](https://leetcode.com/problems/factorial-trailing-zeroes/) 248 | 5. [Sum of Divisors](https://cses.fi/problemset/task/1082) 249 | 6. [Counting Divisors](https://cses.fi/problemset/task/1713) 250 | 7. [Omkar and Last Class of Math](https://codeforces.com/problemset/problem/1372/B) 251 | 252 |
253 | 
254 | ## Binomial Coefficients:
255 | The binomial coefficient, which is denoted by $n \choose k$, is the number of ways of choosing $k$ elements from a set of $n$ elements. Combinatorics problems often involve manipulating the properties of binomial coefficients to calculate answers in the most efficient way. Some of the key properties are:
256 | 
257 | ```math
258 | {n \choose k} = {n-1 \choose k-1} + {n-1 \choose k} \\
259 | \text{and} \\
260 | {n \choose k} = {n \choose n-k}
261 | ```
262 | 
263 | This recursive nature can be used to compute the binomial coefficients of a required system:
264 | 
265 | 
266 | C++ Implementation
267 | 
268 | ```cpp
269 | int choose(int n, int k)
270 | {
271 |     // n choose 0 = n choose n = 1
272 |     if (k == 0 || k == n)
273 |         return 1;
274 |     return choose(n - 1, k - 1) + choose(n - 1, k);
275 | }
276 | ```
277 | 
278 | 279 |
280 | Python Implementation 281 | 282 | ```py 283 | def choose(n, k): 284 | # n choose 0 = n choose n = 1 285 | if k == 0 or k == n: 286 | return 1 287 | return choose(n - 1, k - 1) + choose(n - 1, k) 288 | ``` 289 |
290 | 
291 | However, this method will fail after about $n = 20$ (the number of recursive calls grows exponentially and the values quickly overflow `int`). Better methods that are required to compute the same will be explored in further sections.
292 | 
--------------------------------------------------------------------------------
/Week 4/1. Data Structures.md:
--------------------------------------------------------------------------------
1 | # Data Structures
2 | 
3 | ## Introduction
4 | 
5 | 

A data structure is a particular way of organising, managing, and storing data so that it can be accessed and modified efficiently.

6 |

Arrays, sets, maps, stacks, queues etc. are some commonly-used data structures.

7 |

We have already covered several basic data structures over the previous weeks of this workshop. In this section, we will be jumping straight ahead into some of the more advanced and useful ones.

8 | 9 | ## `pair` 10 | 11 |

A pair is an ordered collection of two elements.

12 |

One use case of pairs is representing points on the cartesian plane, as each point has two values associated with it - its x-coordinate and y-coordinate.

13 | 
14 | ```cpp
15 | pair<int, int> p;
16 | p.first = 3;
17 | p.second = 5;
18 | 
19 | // different types + nesting also supported
20 | pair<string, pair<int, int>> q;
21 | q.first = "hello";
22 | q.second = {1, 2}; // initialiser lists can be used to initialise pairs
23 | ```
24 | 
25 | 

While comparing two pairs, the first elements in each pair are compared first; the second elements are compared only if the first elements are equal.

26 |

This means that they also easily support sorting like other data types.
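For instance, sorting a vector of pairs (a small sketch with made-up values) orders them by their first elements and breaks ties using the second elements:

```cpp
vector<pair<int, int>> points = {{3, 5}, {1, 9}, {3, 2}};
sort(points.begin(), points.end());
// points is now {1, 9}, {3, 2}, {3, 5}
```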

27 | 28 | ## `stack` 29 | 30 |

A stack is an ordered collection of elements in which elements are added and removed from only one side (usually called the top).

31 |

To visualise the working of a stack, think of a stack of books.

32 |

When you want to add a new book to the stack, you can only place it at the top.

33 |

Similarly, the only book in a stack that can be directly removed is the top-most one.

34 |

Removing books apart from the topmost one would require removing all the books above it first.

35 |

A stack is often described as a Last In First Out (LIFO) data structure as the last element to be inserted into a stack is also the first to be removed.

36 | 
37 | ```cpp
38 | stack<int> s;
39 | ```
40 | 
41 | 

Here are some commonly-used stack methods.

42 | 
43 | 1. `push (x)` - adds the element `x` to the top of the stack
44 | 2. `top ()` - returns the topmost element
45 | 3. `pop ()` - removes the topmost element
46 | 4. `size ()` - returns the number of elements
47 | 5. `empty ()` - returns `true` if the stack contains no elements
48 | 
49 | Don't forget to check if the stack is non-empty before calling `top()` or `pop()` to prevent runtime errors.
50 | 
51 | Two great use cases for stacks in CP are to find the position of the matching bracket in an expression and to find the next smaller / larger element of all elements in an array.
52 | 
53 | ## `queue`
54 | 
55 | 

A queue is also an ordered collection of elements, but what differentiates queues from stacks is the fact that elements are removed from the side opposite to the one where elements are added.

56 |

Queues are named as such because queues in the real world are also First In First Out (FIFO) - the first element to be inserted (enter) into a queue is the first to be removed (exit).

57 |

The below code instantiates a queue of integers.

58 | 
59 | ```cpp
60 | queue<int> q;
61 | ```
62 | 
63 | 

Given below are some commonly-used queue methods.

64 | 
65 | 1. `push (x)` - adds the element `x` to the end of the queue
66 | 2. `front ()` - returns the first element
67 | 3. `back ()` - returns the last element
68 | 4. `pop ()` - removes the first element
69 | 5. `size ()` - returns the number of elements
70 | 6. `empty ()` - returns `true` if the queue contains no elements
71 | 
72 | ## `priority_queue`
73 | 
74 | 

A priority queue is similar to a regular queue, except that the elements in a priority queue are ordered according to their values and not according to the order in which they are inserted.

75 |

This means that no matter what order elements are inserted in, you can think of them as always being kept in sorted order inside the container, with the largest element accessible at the top by default (internally a heap is used rather than a fully sorted sequence).

76 |

The below code instantiates a priority queue of integers.

77 | 78 | ```cpp 79 | priority_queue<int> pq; 80 | ``` 81 | 82 |

To demonstrate the difference between a regular queue and a priority queue, let us consider a queue q and a priority queue pq, both of which currently hold the elements [2, 3, 5, 11, 13, 17].

83 |

We now add the element 7 to both q and pq.

84 |

In q, the elements now follow the order [2, 3, 5, 11, 13, 17, 7], as new elements always get added to the end regardless of their value.

85 |

However, in pq, the elements are stored in the order [2, 3, 5, 7, 11, 13, 17], as new elements are added in such a way that the ascending order is maintained.

86 |
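The same idea in code (an illustrative sketch, not part of the original notes):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main () {
    priority_queue<int> pq;
    for (int x : {2, 3, 5, 11, 13, 17}) pq.push (x);
    pq.push (7);               // 7 is placed according to its value, not its insertion order
    cout << pq.top () << "\n"; // 17 - the greatest element is always at the top
    pq.pop ();
    cout << pq.top () << "\n"; // 13
    return 0;
}
```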

Now, let us look at some commonly-used priority queue methods.

87 | 88 | 1. `push (x)` - adds the element `x` to the priority queue 89 | 2. `top ()` - returns the greatest element 90 | 3. `pop ()` - removes the greatest element 91 | 4. `size ()` - returns the number of elements 92 | 5. `empty ()` - returns whether the priority queue is empty 93 | 94 | You can specify the comparator used for the ordering as well: 95 | ```cpp 96 | // to get the smallest element on top instead (a min-heap), use the inbuilt `greater` comparator 97 | priority_queue<int, vector<int>, greater<int>> pq; 98 | ``` 99 | 100 | An interesting thing to note is that a `set` / `map` can often be used in place of a `priority_queue` - a `set` / `map` keeps its elements in ascending order and allows access to any of them, while a `priority_queue` by default only exposes the greatest element. 101 | 102 | ## `iterator` 103 | 104 |

An iterator is an object that can point to elements of C++ STL data structures.

105 |

It is the STL equivalent of regular C++ pointers.

106 |

For the purpose of simplicity, in this section, we will be dealing with iterators to vector elements specifically.

107 |

Here is the syntax for defining an iterator it.

108 | 109 | ```cpp 110 | vector<int> :: iterator it; 111 | ``` 112 | 113 |

Let us look at how to loop over the elements of a vector using iterators.

114 | 115 | ```cpp 116 | for (it = arr.begin (); it != arr.end (); it ++) { 117 | 118 | } 119 | ``` 120 | 121 |

The begin () method returns an iterator to the first element of the vector.

122 |

The end () method, however, does not return an iterator to the last element as you might expect.

123 |

Instead, it returns an iterator to an imaginary element that lies right after the last element.

124 |

Incrementing an iterator shifts the iterator one element to the right, and decrementing it shifts it to the left.

125 | 126 | ``` 127 | [1, 2, 3, 4, 5, 6, 7] 128 | ^ ^ 129 | begin () end () 130 | ``` 131 | 132 |
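To make this concrete, here is a small illustrative snippet (not from the original notes) showing dereferencing and moving iterators; note that modern C++ code often uses a range-based `for` loop instead of writing the iterator loop out by hand:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main () {
    vector<int> arr = {1, 2, 3, 4, 5, 6, 7};
    vector<int>::iterator it = arr.begin ();
    cout << *it << "\n";               // 1 - dereference the iterator to get the element
    it ++;                             // move one element to the right
    cout << *it << "\n";               // 2
    cout << *(arr.end () - 1) << "\n"; // 7 - one step back from the imaginary past-the-end element
    for (auto x : arr)                 // range-based for: iterators handled for you
        cout << x << " ";
    cout << "\n";
    return 0;
}
```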

Here are two cool methods used on sorted vectors that work using iterators.

133 |

lower_bound (arr.begin (), arr.end (), x) returns an iterator pointing to the smallest element in arr whose value is greater than or equal to x.

134 |

upper_bound (arr.begin (), arr.end (), x) returns an iterator pointing to the first element in arr whose value is strictly greater than x.

135 | 136 | Since these methods use binary search, their time complexity is only $O(\log n)$. 137 | 138 |

Note that sort(), lower_bound() and upper_bound() all take two iterators as parameters, the first one pointing to the left-most element of the range to be considered and the second one pointing to the position just past the right-most element (the range is half-open).
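For example (an illustrative sketch, assuming a vector that is already sorted):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main () {
    vector<int> arr = {1, 3, 3, 5, 8};                    // must already be sorted
    auto lo = lower_bound (arr.begin (), arr.end (), 3);  // first element >= 3
    auto hi = upper_bound (arr.begin (), arr.end (), 3);  // first element > 3
    cout << (lo - arr.begin ()) << "\n";                  // 1 - index of the first 3
    cout << (hi - arr.begin ()) << "\n";                  // 3 - index of 5
    cout << (hi - lo) << "\n";                            // 2 - number of elements equal to 3
    if (lower_bound (arr.begin (), arr.end (), 9) == arr.end ())
        cout << "no element >= 9\n";                      // end () is returned when nothing qualifies
    return 0;
}
```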

139 | 140 | 141 | ## Optimal Task Scheduling 142 | 143 |

We are given a list of tasks that need to be completed.

144 |

Each task has its own duration and deadline before which it needs to be completed.

145 |

The problem is to find the maximum number of tasks that can be completed if we schedule the tasks optimally.

146 | 147 | ### Example 148 |

| ID | Deadline (minute) | Duration (minutes) |
| --- | --- | --- |
| 1 | 5 | 3 |
| 2 | 8 | 4 |
| 3 | 9 | 2 |
| 4 | 15 | 5 |
| 5 | 18 | 6 |
181 | 182 |

The highest number of tasks that can be completed is 4.

183 |

One strategy to achieve this is given below.

184 | 185 | 1. Start task 1 at the 0th minute. 186 | 2. Start task 2 at the 3rd minute. 187 | 3. Start task 3 at the 7th minute. 188 | 4. Start task 5 at the 9th minute. 189 | 190 | ### Solution 191 | 192 |

There is a greedy approach to solving this problem.

193 |

First, we store all the elements in a vector, and sort them in descending order of their deadlines.

194 |

Each task will be represented by a pair - the deadline and duration.

195 |

We process the tasks one-by-one in the decreasing order of their deadlines.

196 |

The ith task is first inserted into a priority queue.

197 |

We then calculate the time window t between the deadline of the ith task and the deadline of the next task to be processed (the one with the next-smaller deadline); for the last task, this window extends down to time 0.

198 |

We use the priority queue to find the pending task with the least remaining duration and execute it.

199 |

This process continues as long as there are pending tasks and the total duration of the executed tasks fits within the window t.

200 |

If a task cannot be completely executed within the window t, the portion of it that fits is executed and the remainder is put back into the priority queue.
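The implementation below also uses a common trick: since `priority_queue` exposes the greatest element by default, pushing negated values turns it into a min-heap. A tiny illustrative sketch of just that trick (values are arbitrary):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main () {
    priority_queue<int> pq;               // max-heap by default
    for (int x : {5, 1, 3}) pq.push (-x); // store negated values
    cout << -pq.top () << "\n";           // 1 - the smallest of the original values
    return 0;
}
```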

201 | 202 | The running time of this algorithm is $O(n \log n)$. 203 | 204 |
205 | 206 | Implementation 207 | 208 | ```cpp 209 | void solve () { 210 | int n; 211 | cin >> n; 212 | vector<pair<int, int>> jobs; 213 | for (int i = 0; i < n; i ++) { 214 | pair<int, int> j; 215 | int dl, dur; 216 | cin >> dl >> dur; 217 | j.first = dl; 218 | j.second = dur; 219 | jobs.push_back (j); 220 | } 221 | sort (jobs.rbegin (), jobs.rend ()); 222 | priority_queue<pair<int, int>> q; 223 | int result = 0; 224 | for (int i = 0; i < n; i ++) { 225 | pair<int, int> j_inv; 226 | // The minus sign is a common trick to make the elements come out in ascending order 227 | j_inv.first = - jobs [i].second; 228 | j_inv.second = jobs [i].first; 229 | q.push (j_inv); 230 | int t = jobs[i].first - (i != n - 1 ? jobs [i + 1].first : 0); 231 | while (t > 0 && !q.empty ()) { // also stop when no pending tasks remain 232 | pair<int, int> j = q.top (); 233 | q.pop (); 234 | // Don't forget to flip the sign back :) 235 | j.first = - j.first; 236 | if (j.first <= t) { 237 | t -= j.first; 238 | result ++; 239 | } else { 240 | pair<int, int> j_new; 241 | j_new.first = t - j.first; 242 | j_new.second = j.second; 243 | q.push (j_new); 244 | break; 245 | } 246 | } 247 | } 248 | cout << result << "\n"; 249 | } 250 | ``` 251 | 252 |
253 | 254 | ## Links 255 | 256 | 1. [Pairs](https://www.geeksforgeeks.org/pair-in-cpp-stl/) 257 | 2. [Stacks](https://www.geeksforgeeks.org/stack-in-cpp-stl/) 258 | 3. [Queues](https://www.geeksforgeeks.org/queue-cpp-stl/) 259 | 4. [Priority Queues](https://www.geeksforgeeks.org/priority-queue-in-cpp-stl/) 260 | 5. [Video on scheduling problem](https://www.youtube.com/watch?v=nnHRrnZsPwo) 261 | 6. [Nearest Smaller Values](https://cses.fi/problemset/task/1645) 262 | 7. [Course Schedule III](https://leetcode.com/problems/course-schedule-iii/) 263 | 8. [The Skyline Problem](https://leetcode.com/problems/the-skyline-problem/) 264 | 9. [Task Scheduler](https://leetcode.com/problems/task-scheduler/) 265 | 10. [Concert Tickets](https://cses.fi/problemset/task/1091) 266 | 267 | 268 | -------------------------------------------------------------------------------- /Week 2/3. Sorting & Greedy Algorithms.md: -------------------------------------------------------------------------------- 1 | # Sorting 2 | 3 | ## Introduction: 4 | 5 | Sorting forms the primary subroutine to solve many problems and forms the basis for many algorithms. In fact, you would have come across many problems that you would have solved (without realizing) using sorting. 6 | 7 | Some of the basic yet very hard to compute; questions that arise when we deal with an array of data can be answered by preprocessing the data first by sorting. 8 | 9 | Given an array of numbers, questions like "What is the largest number in the set that is $\le k$ for some integer $k$?" or "What is the most frequent number in the array?" can be answered by initially preprocessing the array by sorting. 10 | 11 | There are many sorting algorithms and they are classified on the basis of time complexity. 12 | 13 | 0. $O(n^2)$ comparision based algorithms: 14 | 15 | Some examples of this are bubble, selection and insertion sort. These are often quite slow and are not usable in most problems and are avoided. However, a rudimentary understanding on their working principle does help in a few problems. 16 | 17 | 1. $O(n \log(n))$ comparison based algorithms: 18 | 19 | Some examples of this are merge, quick and heap sort. These algorithms are the default choice in programming contests as the time complexity is optimal for comparison-based sorting. Therefore, these sorting algorithms run in the ‘best possible’ time in most cases. 20 | 21 | To avoid reinventing the wheel, we use the STL implementation of these using `sort`, `stable_sort` or `partial_sort` in C++, `Collections.sort` in Java or `sorted(list_name)` in Python. 22 | 23 | Watch the video on merge sort in `Sorting Algorithms` and `Technique Analysis` to understand how C++ STL sorts data. 24 | 25 | 3. $O(n)$ special purpose algorithms: 26 | 27 | Counting, radix and bucket sort are a few examples. Although not as generic as the others, these special purpose algorithms can reduce the required sorting time if the data has certain special characteristics. 28 | 29 | Additionally, you can use the links below for a better understanding by visualizing these algorithms : 30 | - [Sorting Algorithms](https://www.youtube.com/watch?feature=shared&v=WaNLJf8xzC4) 31 | - [Technique Analysis](https://youtube.com/playlist?list=PL2ONYsvCDiDsQ2AwqRxh0EE6AKm4jW7hp&feature=shared) 32 | - [Sort Visualization ](https://visualgo.net/en/sorting) 33 | 34 |
35 | 36 | ## C++ Implementation: 37 | 38 | Sorting can be done on arrays or any sequence container like vectors and deques. 39 | 40 | The C++ function for sort is `std::sort` and it runs in $O(n\log(n))$. 41 | 42 | - On arrays, it is called as: 43 | 44 | ```cpp 45 | // For an array 'a' of length 'n' 46 | 47 | sort(a, a + n); // ascending order 48 | 49 | sort(a, a + n, greater<int>()); // descending order 50 | ``` 51 | 52 | - On sequence containers, it can be done as: 53 | 54 | ```cpp 55 | 56 | sort(v.begin(), v.end()); //sort vector 'v' in ascending order 57 | 58 | sort(dq.begin(), dq.end(), greater<int>()); //sort deque 'dq' in descending order 59 | 60 | ``` 61 | 62 | - Custom comparators: 63 | 64 | Sorting can also be done using customised sorting rules according to various problem requirements. These arbitrary comparison rules are incorporated using custom comparator functions. 65 | 66 | Assume that we need to sort a vector `v` of pairs $(x, y)$ in descending order of the value of $x^3 - y^2$: 67 | 68 | ```cpp 69 | bool comp(pair<int, int> a, pair<int, int> b) 70 | { 71 | int x = pow(a.first, 3), y = pow(a.second, 2); 72 | int p = pow(b.first, 3), q = pow(b.second, 2); 73 | 74 | if((x - y) > (p - q)) return true; //place a before b 75 | else return false; //place a after b 76 | } 77 | 78 | ... 79 | 80 | void solve() 81 | { 82 | ... 83 | sort(v.begin(), v.end(), comp); 84 | ... 85 | } 86 | 87 | ``` 88 | 89 | When you write a comparator `bool cmp(a, b)`, return true if you want a to be placed before b in the sorted list and false if you want b to be placed before a. 90 | 91 | As this rather (in)famous [CF blogpost](https://codeforces.com/blog/entry/70237) points out, `cmp` must return false when `a = b` (failing to do this can make your operator non-transitive in certain situations, leading to runtime errors and very weird side effects). 92 | 93 | Alternatively, we can use anonymous 'lambda' functions to do the same in a more concise format: 94 | 95 | ```cpp 96 | // we can take advantage of type inference and use `auto` instead of mentioning the actual data type 97 | // this snippet sorts a vector of pairs (x, y) in ascending order of x * y 98 | sort(v.begin(), v.end(), [] (auto a, auto b) { 99 | return a.first * a.second < b.first * b.second; 100 | }); 101 | ``` 102 | 103 | PS: While this is what most people use lambda functions for, they are far more powerful and versatile. Their conciseness and variable capturing mean that they can be used instead of regular functions most of the time, with the added benefit that you can declare them inside normal / lambda functions. This can simplify variable scope management for slightly more complicated CC problems and is slightly more efficient. (perhaps a topic for a future doc?) 104 | 105 |
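To illustrate the point from the blogpost above, here is a minimal, hypothetical example (not from the original notes) of a comparator that breaks the rule and one that follows it - only the second is safe to pass to `std::sort`:

```cpp
#include <bits/stdc++.h>
using namespace std;

// BAD: returns true when a == b, so it is not a strict ordering;
// std::sort may read out of bounds or loop forever with such a comparator
bool bad_cmp (int a, int b) { return a <= b; }

// GOOD: returns false when a == b - this is a valid strict weak ordering
bool good_cmp (int a, int b) { return a < b; }

int main () {
    vector<int> v = {3, 1, 2, 2};
    sort (v.begin (), v.end (), good_cmp); // fine: 1 2 2 3
    for (int x : v) cout << x << " ";
    cout << "\n";
    return 0;
}
```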
106 | 107 | # Greedy Algorithms 108 | 109 | ## Introduction: 110 | 111 | Greedy algorithms are exactly what they sound like - they exploit certain properties of the given problem that optimise the computation of the result by eliminating or minimizing the simulation process. Since greedy algorithms are mostly problem-specific and aren't particularly classified into typical cases, we will explore some illustrations to understand the concept of greedy algorithms. 112 | 113 | There are two common traits that most greedy problems exhibit: 114 | - Optimal substructure (the optimal solution to the problem contains the optimal solutions to its sub-problems) 115 | - The greedy property (the globally optimal solution can be built by repeatedly making the locally optimal choice) 116 | 117 | Sometimes, it can be quite difficult to prove the absolute validity of a greedy solution, so you have to rely on a mix of intuition, sample tests and edge cases you've thought about. However, this process will eventually start to feel more natural as your intuition develops with practice. 118 | 119 | 120 |
121 | 122 | ## Example 0 - [Station Balance](https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=351): 123 | 124 | ![Fig 1](Images/load-imbalance-1.png) 125 | 126 | How would you go about this problem? Simulate all the possible options? In the worst case scenario, you have C = 5 and S = 10, which would give you around $45^5 \approxeq 10^9$ possible combinations, which is sure to exceed the time limit for this problem. 127 | 128 | ![Fig 2](Images/load-imbalance-2.png) 129 | 130 | Let's make a few observations: 131 | 132 | - If there exists an empty chamber, it is usually beneficial and **never worse** to move one specimen from a chamber with two specimens to an empty chamber! Otherwise, the empty chamber contributes more to the imbalance (see fig 2 top). 133 | 134 | - If $S \gt C$, then $S - C$ specimens must be paired with a chamber already containing other specimens — the pigeonhole principle! (see fig 2 bottom) 135 | 136 | The key insight is that the solution to this problem can be simplified with sorting: if $S \lt 2C$ , add $2C - S$ dummy specimens with mass 0. For example, C = 3, S = 4, M = {5, 1, 2, 7} $\rightarrow$ C = 3, S = 6, M = {5, 1, 2, 7, 0, 0}. 137 | 138 | Then, sort the specimens on their mass such that $M_1 \leq M_2 \leq .. . \leq M_{2C-1} \leq M_{2C}$. 139 | 140 | In this example, M = {5, 1, 2, 7, 0, 0} $\rightarrow$ {0, 0, 1, 2, 5, 7} . By adding dummy specimens and then sorting them, a greedy strategy becomes ‘apparent’: 141 | 142 | - Pair the specimens with masses M_1 & M_{2C} and put them in chamber 1, then 143 | - Pair the specimens with masses M_2 & M_{2C-1} and put them in chamber 2, and so on .. . 144 | 145 | This greedy algorithm - commonly called 'load balancing' - works for this problem! 146 | 147 | ![Fig 3](Images/load-imbalance-3.png) 148 | 149 | Designing greedy algorithms is an art and it is hard to impart the techniques used to deriving greedy solutions. However, one of the best ways to get started if you think that there exists a greedy solution is to sort the data in some fasion and then compute the required terms to see if some greedy strategy emerges. 150 | 151 | Two common ways to prove greedy algorithms are to prove that "greedy stays ahead" (the greedy strategy is never worse than the optimal strategy at any point) and exchange arguments (exchanging two elements with certain properties always makes the answer better / worse). 152 | 153 | It is also true that greedy strategies often fail to provide complete solutions to a broader testcases. We will explore an example of the same in the next illustration. 154 | 155 |
156 | 157 | ## Example 1 - Coin Change: 158 | 159 | Problem description: 160 | 161 | We are given a target amount $V$ cents and a list of denominations of $n$ coins, i.e., we have $coin_i$ (in cents) for coin types $i \epsilon [0..n-1]$. 162 | What is the minimum number of coins that we must use to represent amount $V$? Assume that we have an unlimited supply of coins of any type. 163 | 164 | Example: 165 | 166 | `If n = 4, coinValue = {25, 10, 5, 1} cents and we want to represent V = 42 cents.` 167 | 168 | We can use this greedy algorithm: repeatedly select the largest coin denomination which is not greater than the remaining amount, i.e., 42 - 25 = 17 $\rightarrow$ 17 - 10 = 7 $\rightarrow$ 7 - 5 = 2 $\rightarrow$ 2 - 1 = 1 $\rightarrow$ 1 - 1 = 0, a total of 5 coins. This is optimal. 169 | 170 | The problem above has the two ingredients required for a successful greedy algorithm: 171 | 172 | - It has optimal sub-structures. 173 | We have seen that in our quest to represent 42 cents, we use 25 + 10 + 5 + 1 + 1 . This is an optimal 5-coin solution to the original problem! 174 | Optimal solutions to sub-problem are contained within the 5-coin solution, i.e., 175 | a. To represent 17 cents, we use 10 + 5 + 1 + 1 (part of the solution for 42 cents), 176 | b. To represent 7 cents, we use 5 + 1 + 1 (also part of the solution for 42 cents), etc. 177 | 178 | - It has the greedy property: 179 | Given every amount V,we can greedily subtract V with the largest coin denomination which is not greater than this amount V. 180 | It can be proven (not shown here for brevity) that using any other strategies will not lead to an optimal solution for this set of coin denominations. 181 | 182 | However, this greedy algorithm does not work for all sets of coin denominations. Take for example {4, 3, 1} cents. To make 6 cents with that set, a greedy algorithm would choose 3 coins {4, 1, 1} instead of the optimal solution that uses 2 coins {3, 3}. 183 | 184 | The complete solution to this problem will be dealt with in further weeks when we will discuss another more powerful problem solving paradigm - dynamic programming. 185 | 186 |
187 | 188 | ## Example 2 - [Dragons of Loowater](https://open.kattis.com/problems/loowater): 189 | 190 | There are several ways to solve this problem, but we will illustrate one of the easiest. This problem is a bipartite matching problem, in the sense that we are required to match (pair) knights to dragons in a minimal cost way. 191 | 192 | We shall try to develop a greedy algorithm for this: a dragon head with a certain diameter $D$ should be chopped by a knight with the shortest height $H$ such that $D \leq H$. 193 | 194 | However, the input is given in an arbitrary order. This is frequently done by the problem authors to mask the greedy strategy. If we sort both the array of dragon head diameters head and knight heights height in $O(n \log n + m \log m)$, we can use the following $O(max(n,m))$ scan to determine the answer. This is yet another example where sorting the input can help produce the required greedy strategy. 195 | 196 | ```cpp 197 | sort(D.begin(), D.end()); 198 | sort(H.begin(), H.end()); 199 | 200 | int gold = 0, d = 0, k = 0; 201 | 202 | while ((d < n) && (k < m)) //while not done yet 203 | { 204 | while ((k < m) && (D[d] > H[k])) 205 | ++k; //find required knight 206 | if (k == m) break; //loowater is doomed 207 | gold += H[k]; //pay this amount of gold 208 | ++d; ++k; //next dragon & knight 209 | } 210 | 211 | if (d == n) cout << gold << "\n"; 212 | else cout << "loowater is doomed" << "\n"; 213 | 214 | ``` 215 | 216 |
217 | 218 | # Problems on Greedy and Sorting Algorithms 219 | 220 | 1. Warmup: 221 | - [Cow Tipping](http://www.usaco.org/index.php?page=viewproblem2&cpid=689) 222 | - [Amr and Music](https://codeforces.com/problemset/problem/507/A) 223 | - [Middle Class](https://codeforces.com/problemset/problem/1334/B) 224 | - [Elephants](https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=5020) 225 | 226 | 2. Sorting and Greedy: 227 | - [Too Many Segments](https://codeforces.com/contest/1249/problem/D2) 228 | - [Olya and Game with Arrays](https://codeforces.com/problemset/problem/1859/B) 229 | - [Similar Pairs](https://codeforces.com/problemset/problem/1334/B) 230 | - [Moamen and k-subarrays](https://codeforces.com/problemset/problem/1557/B) 231 | - [ICPC Team Selection](https://open.kattis.com/problems/icpcteamselection) 232 | 233 | 3. Classic and Not so Classic Greedy Problems (Core Workout): 234 | - [Carnival Tickets](https://oj.uz/problem/view/IOI20_tickets) 235 | - [Alien DNA](https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=2630P) 236 | - [Teacher Evaluation](https://open.kattis.com/problems/teacherevaluation) 237 | - [Hippo Circus](https://onlinejudge.org/index.php?option=onlinejudge&Itemid=8&page=show_problem&problem=4952) 238 | - [Square Pegs in a Circular Hole](https://open.kattis.com/problems/squarepegs) 239 | 240 | 4. [CSES Problemset - Sorting and Searching](https://cses.fi/problemset/list/) 241 | -------------------------------------------------------------------------------- /Week 5/1. Introduction to DP.md: -------------------------------------------------------------------------------- 1 | # Dynamic Programming 2 | 3 | ## Introduction 4 | 5 |

Dynamic programming is a technique for solving problems by dividing the larger problem into smaller, overlapping sub-problems, solving each of them only once and reusing their results.

6 |

Dynamic programming is generally used in two cases:

7 | 8 |
9 | - Finding the optimal solution
10 | - Finding the number of solutions
11 |
12 | 13 |

Before we look at how dynamic programming works, we first need to look at the underlying trick that makes dynamic programming quick - memoisation.

14 | 15 | ## Memoisation 16 | 17 | ### Regular recursion - is it efficient? 18 | 19 |

In dynamic programming, we make use of recursion - for example, we could express the solution for the case when n = k in terms of the solutions to cases when n < k.

20 |

As an example, let us consider the problem of computing the nth (0-based indexing) term in the Fibonacci sequence.

21 |

fib (n) can be expressed as fib (n - 1) + fib (n - 2).

22 |

We define the base cases fib (0) = 1 and fib (1) = 1 separately.

23 |

Let us implement this logic using recursion.

24 | 25 |
26 | C++ Implementation 27 | 28 | ```cpp 29 | int fib (int k) { 30 | if (k == 0 || k == 1) return 1; 31 | else return fib (k - 1) + fib (k - 2); 32 | } 33 | ``` 34 | 35 |
36 | 37 |
38 | Python Implementation 39 | 40 | ```py 41 | def fib (k): 42 | if k == 0 or k == 1: 43 | return 1 44 | else: 45 | return fib (k - 1) + fib (k - 2) 46 | ``` 47 | 48 |
49 | 50 |

Every time we call the function fib, we re-call it twice, creating a tree of recursive fib calls until the base cases fib (0) and fib (1) are reached.

51 |

In the case of computing fib (4), the tree looks something like this.

52 | 53 | ![Fibonacci tree](Images/Fibonacci-Tree-1.png) 54 | 55 | The main thing to notice is that the tree gets exponentially bigger with every increase in $n$, which shows that this algorithm has a time complexity of $O (2 ^ n)$. 56 | 57 | However, we can improve the algorithm to work in $O (n)$ complexity. 58 | 59 | ### Recursion powered by memoisation 60 | 61 |

If you look at the tree, you will notice that several function calls are identical.

62 |

For example, fib (2) is called twice.

63 |

If we could somehow save the values of the function calls, we would end up massively reducing the time taken by the algorithm.

64 |

Let us create an array arr where arr [i] stores the value of fib (i).

65 |

The values in arr are initialised to -1, indicating that no values have been computed yet.

66 |

The function fib (k) is only called when arr [k] = -1, otherwise, the value of arr [k] is directly used.

67 |

This prevents us from making redundant function calls when we already know the answer.

68 | 69 |
70 | C++ Implementation 71 | 72 | ```cpp 73 | int arr [n + 1]; // arr[i] caches fib(i); -1 means "not computed yet" 74 | for (int i = 0; i <= n; i ++) 75 | arr [i] = -1; 76 | 77 | int fib (int k) { 78 | if (k == 0 || k == 1) return 1; 79 | else { 80 | if (arr[k] == -1) 81 | arr[k] = fib (k - 1) + fib (k - 2); 82 | return arr[k]; 83 | } 84 | } 85 | ``` 86 | 87 |
88 | 89 |
90 | Python Implementation 91 | 92 | ```py 93 | arr = [] 94 | for i in range (n + 1): 95 | arr.append (-1) 96 | 97 | def fib (k): 98 | if k == 0 or k == 1: 99 | return 1 100 | else: 101 | if arr[k] == -1: 102 | arr[k] = fib (k - 1) + fib (k - 2) 103 | return arr[k] 104 | ``` 105 | 106 |
107 | 108 |

Let us now examine the time complexity of our new algorithm.

109 |

Here is the function call tree in the case of memoisation.

110 | 111 | ![Fibonacci tree](Images/Fibonacci-Tree-2.png) 112 | 113 | As you can see, the number of function calls is now the same as the index of the term that we are looking for - making the time complexity $O (n)$, which is much better than the $O (2 ^ n)$ time complexity without memoisation. 114 | 115 | ### Do we really need recursion? 116 | 117 |

Now that we are using memoisation, it turns out that we do not need to use recursion.

118 |

We can simply build arr iteratively, term-by-term, with each term being constructed from its preceding two terms.

119 |

Of course, we need to initialise arr [0] and arr [1] before we begin iterating.

120 | 121 |
122 | C++ Implementation 123 | 124 | ```cpp 125 | int arr [n + 1]; 126 | arr [0] = 1; 127 | arr [1] = 1; 128 | for (int i = 2; i <= n; i ++) 129 | arr [i] = arr [i - 1] + arr [i - 2]; 130 | ``` 131 | 132 |
133 | 134 |
135 | Python Implementation 136 | 137 | ```py 138 | arr = [] 139 | arr.append (1) 140 | arr.append (1) 141 | for i in range (2, n + 1): 142 | arr.append (arr [i - 1] + arr [i - 2]) 143 | 144 | ``` 145 |
146 | 147 | The approach that involves recursion with memoisation is usually called top-down dynamic programming (recursive DP) while the iterative solution is usually called bottom-up dynamic programming (iterative DP). 148 | 149 | While the time complexities of both approaches should be the same if implemented correctly, bottom-up is usually more efficient since there is much less function call overhead due to the absence of recursion. 150 | 151 | However, in more complicated problems, the bottom-up solution can be harder to think of since you need to guarantee that all the subproblems a particular state depends on have been computed **before** we compute the answer for the current state. 152 | 153 | ## Minimum number of notes required to reach a certain amount 154 | 155 | The problem is to find the minimum number of Indian Rupee notes (and coins) required to reach a certain amount $x$. 156 | 157 |

As we know, Indian Rupee notes and coins come in the following denominations - 1, 2, 5, 10, 20, 50, 100 and 500. 158 | 159 | ### The greedy approach - does it really work? 160 | 161 |

It turns out that in this case, the problem can be solved greedily - keep picking the note with the largest possible denomination as long as the total sum in your hand is less than x.

162 |

Once the total sum becomes x, you have your answer.

163 | 164 |
165 | C++ Implementation 166 | 167 | ```cpp 168 | void solve () { 169 | int x; 170 | cin >> x; 171 | int result = 0, rupees [8] = {1, 2, 5, 10, 20, 50, 100, 500}; 172 | while (x > 0) { 173 | for (int i = 7; i >= 0; i --) { 174 | if (rupees [i] <= x) { 175 | x -= rupees [i]; 176 | result ++; 177 | break; 178 | } 179 | } 180 | } 181 | cout << result << '\n'; 182 | } 183 | ``` 184 | 185 |
186 | 187 |
188 | Python Implementation 189 | 190 | ```py 191 | def solve (): 192 | x = int (input ()) 193 | result = 0 194 | rupees = [1, 2, 5, 10, 20, 50, 100, 500] 195 | while x > 0: 196 | for denum in rupees [ : : -1]: 197 | if (denum <= x): 198 | x -= denum 199 | result += 1 200 | break 201 | print (result) 202 | ``` 203 | 204 |
205 | 206 |

But, will this algorithm work for every set of denominations of money?

207 |

It turns out that the greedy algorithm does not always work.

208 |

Let us say that a hypothetical country has only three denominations of money - 1, 3, and 4.

209 |

If we require a sum x = 6, the greedy algorithm would give us an answer of 4, 1, and 1 - which means that the minimum number of notes required according to our greedy algorithm is 3.

210 |

However, this is not the best solution, as by taking two 3 notes we get the required sum using just 2 notes.

211 |

Now that we have shown that the greedy approach does not always work, let us solve this problem by dynamic programming.

212 | 213 | ### The correct approach using dynamic programming 214 | 215 |

Let us say that the function f (x) returns the minimum number of notes required to obtain a sum x.

216 |

The key to solving the problem using DP is to express f (x) recursively.

217 |

How do we do this?

218 |

We know that a sum x can be reached in only three ways - by adding 1 to x - 1, by adding 3 to x - 3, or by adding 4 to x - 4.

219 |

Therefore, we can state that f (x) is equal to 1 added to the minimum among f (x - 1), f (x - 3), and f (x - 4).

220 |

We can define an array dp such that dp [x] stores the value of f (x).

221 |

First, we will define the base case dp [0] = 0, as 0 notes are required to reach a sum of 0.

222 |

We then find dp [i] for further values of i iteratively, using the relation dp [i] = 1 + min (dp [i - a1], dp [i - a2], ..., dp [i - an]), where a1, a2, ..., an denote the available denominations.

223 |

Note that dp [i - aj] should only be considered while calculating the minimum when i - aj is non-negative and dp [i - aj] is not -1 (i.e., the smaller sum is actually reachable), as a sum can never be built from a negative or unreachable one.
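As a quick sanity check (hand-computed here for the hypothetical denominations 1, 3 and 4 from above), filling the table up to x = 6 gives:

| $x$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| dp [x] | 0 | 1 | 2 | 1 | 1 | 2 | 2 |

so dp [6] = 2, matching the optimal answer of two notes of denomination 3.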

224 | 225 |
226 | C++ Implementation 227 | 228 | ```cpp 229 | void solve () { 230 | int n, x; 231 | cin >> n >> x; 232 | int arr [n]; 233 | for (int i = 0; i < n; i ++) 234 | cin >> arr [i]; 235 | int dp [x + 1]; 236 | dp [0] = 0; 237 | for (int i = 1; i <= x; i ++) { 238 | dp [i] = -1; 239 | for (auto ele : arr) { 240 | if (i - ele >= 0 && dp [i - ele] != -1) { // only use reachable smaller sums 241 | if (dp [i] == -1 || dp [i - ele] + 1 < dp [i]) 242 | dp [i] = dp [i - ele] + 1; 243 | } 244 | } 245 | } 246 | cout << dp [x] << '\n'; 247 | } 248 | ``` 249 | 250 |
251 | 252 |
253 | Python Implementation 254 | 255 | ```py 256 | def solve (): 257 | n, x = tuple (map (int, input ().split (' '))) 258 | arr = tuple (map (int, input ().split (' '))) 259 | dp = [] 260 | dp.append (0) 261 | for i in range (1, x + 1): 262 | dp.append (-1) 263 | for ele in arr: 264 | if i - ele >= 0 and dp [i - ele] != -1: 265 | if dp [i] == -1 or dp [i - ele] + 1 < dp [i]: 266 | dp [i] = dp [i - ele] + 1 267 | print (dp [x]) 268 | ``` 269 | 270 |
271 | 272 | ## Multidimensional DP 273 | In many problems, the recurrence depends on more than one variable, giving rise to multidimensional DP. 274 | 275 | As an example, consider this problem: given an $n \times n$ grid with `.` indicating a free square and `*` indicating a blocked square, how many paths from the upper-left square to the bottom-right square do not pass through any blocked squares if you can only move right or down? 276 | 277 | Let us take `dp[x][y]` to be the number of required paths from the upper-left square `(0, 0)` to `(x, y)` (which means that `dp[n - 1][n - 1]` will be the final answer). Now, there are two ways to reach `(x, y)`: move down from `(x - 1, y)` or move right from `(x, y - 1)`. 278 | 279 | However, if `(x, y)` is a blocked square, `dp[x][y]` should be zero (we cannot move to a blocked square). Combining this, we get: 280 | 281 | $$ \texttt{dp}[x][y] = 282 | \begin{cases} 283 | \texttt{dp}[x-1][y] + \texttt{dp}[x][y-1] & \text{if $(x, y)$ is free} \\ 284 | 0 & \text{if $(x, y)$ is blocked} 285 | \end{cases} $$ 286 | 287 | ```cpp 288 | int main() 289 | { 290 | long long n; 291 | cin >> n; 292 | vector<string> grid(n); 293 | for (auto &i : grid) 294 | cin >> i; 295 | vector<vector<long long>> dp(n, vector<long long>(n, 0)); 296 | dp[0][0] = 1; 297 | for (int i = 0; i < n; i++) 298 | { 299 | for (int j = 0; j < n; j++) 300 | { 301 | if (grid[i][j] == '*') 302 | { 303 | dp[i][j] = 0; 304 | } 305 | else 306 | { 307 | if (i > 0) dp[i][j] += dp[i-1][j]; 308 | if (j > 0) dp[i][j] += dp[i][j-1]; 309 | } 310 | } 311 | } 312 | cout << dp[n-1][n-1] << '\n'; 313 | return 0; 314 | } 315 | ``` 316 | 317 | ## Links to external resources 318 | 319 | 1. [DP explained at different levels by TopCoder](https://www.topcoder.com/thrive/articles/Dynamic%20Programming:%20From%20Novice%20to%20Advanced) 320 | 2. [Multi-dimensional DP](https://itnext.io/introduction-to-multi-dimensional-dynamic-programming-666b095b2e7b) 321 | 3. [DP visualised by Reducible](https://www.youtube.com/watch?v=aPQY__2H3tE) 322 | 4. [MIT lecture on DP](https://www.youtube.com/watch?v=OQ5jsbhAv_M) 323 | 324 | ## Practice problems 325 | 326 | 1. [CSES 1633 - Dice Combinations](https://cses.fi/problemset/task/1633) 327 | 2. [Codeforces 189A - Cut Ribbon](https://codeforces.com/problemset/problem/189/A) 328 | 3. [Codeforces 474D - Flowers](https://codeforces.com/problemset/problem/474/D) 329 | 4. [CSES 1637 - Removing Digits](https://cses.fi/problemset/task/1637) 330 | 5. [Codeforces 545C - Woodcutters](https://codeforces.com/problemset/problem/545/C) 331 | 6. [Codeforces 1195C - Basketball Exercise](https://codeforces.com/problemset/problem/1195/C) 332 | 7. [CSES 1638 - Grid Paths](https://cses.fi/problemset/task/1638) 333 | 8. [Codeforces 698A - Vacations](https://codeforces.com/problemset/problem/698/A) 334 | 9. [CSES 1639 - Edit Distance](https://cses.fi/problemset/task/1639) 335 | 336 | --------------------------------------------------------------------------------