├── .gitignore
├── README.md
├── custom
│   └── main.html
├── docs
│   ├── Chap01
│   │   ├── 1.1.md
│   │   ├── 1.2.md
│   │   └── Problems
│   │       └── 1-1.md
│   ├── Chap02
│   │   ├── 2.1.md
│   │   ├── 2.2.md
│   │   ├── 2.3.md
│   │   └── Problems
│   │       ├── 2-1.md
│   │       ├── 2-2.md
│   │       ├── 2-3.md
│   │       └── 2-4.md
│   ├── Chap03
│   │   ├── 3.1.md
│   │   ├── 3.2.md
│   │   └── Problems
│   │       ├── 3-1.md
│   │       ├── 3-2.md
│   │       ├── 3-3.md
│   │       ├── 3-4.md
│   │       ├── 3-5.md
│   │       └── 3-6.md
│   ├── Chap04
│   │   ├── 4.1.md
│   │   ├── 4.2.md
│   │   ├── 4.3.md
│   │   ├── 4.4.md
│   │   ├── 4.5.md
│   │   ├── 4.6.md
│   │   └── Problems
│   │       ├── 4-1.md
│   │       ├── 4-2.md
│   │       ├── 4-3.md
│   │       ├── 4-4.md
│   │       ├── 4-5.md
│   │       └── 4-6.md
│   ├── Chap05
│   │   ├── 5.1.md
│   │   ├── 5.2.md
│   │   ├── 5.3.md
│   │   ├── 5.4.md
│   │   └── Problems
│   │       ├── 5-1.md
│   │       └── 5-2.md
│   ├── Chap06
│   │   ├── 6.1.md
│   │   ├── 6.2.md
│   │   ├── 6.3.md
│   │   ├── 6.4.md
│   │   ├── 6.5.md
│   │   └── Problems
│   │       ├── 6-1.md
│   │       ├── 6-2.md
│   │       └── 6-3.md
│   ├── Chap07
│   │   ├── 7.1.md
│   │   ├── 7.2.md
│   │   ├── 7.3.md
│   │   ├── 7.4.md
│   │   └── Problems
│   │       ├── 7-1.md
│   │       ├── 7-2.md
│   │       ├── 7-3.md
│   │       ├── 7-4.md
│   │       ├── 7-5.md
│   │       └── 7-6.md
│   ├── Chap08
│   │   ├── 8.1.md
│   │   ├── 8.2.md
│   │   ├── 8.3.md
│   │   ├── 8.4.md
│   │   └── Problems
│   │       ├── 8-1.md
│   │       ├── 8-2.md
│   │       ├── 8-3.md
│   │       ├── 8-4.md
│   │       ├── 8-5.md
│   │       ├── 8-6.md
│   │       └── 8-7.md
│   ├── Chap09
│   │   ├── 9.1.md
│   │   ├── 9.2.md
│   │   ├── 9.3.md
│   │   └── Problems
│   │       ├── 9-1.md
│   │       ├── 9-2.md
│   │       ├── 9-3.md
│   │       └── 9-4.md
│   ├── Chap10
│   │   ├── 10.1.md
│   │   ├── 10.2.md
│   │   ├── 10.3.md
│   │   ├── 10.4.md
│   │   └── Problems
│   │       ├── 10-1.md
│   │       ├── 10-2.md
│   │       └── 10-3.md
│   ├── Chap11
│   │   ├── 11.1.md
│   │   ├── 11.2.md
│   │   ├── 11.3.md
│   │   ├── 11.4.md
│   │   ├── 11.5.md
│   │   └── Problems
│   │       ├── 11-1.md
│   │       ├── 11-2.md
│   │       ├── 11-3.md
│   │       └── 11-4.md
│   ├── Chap12
│   │   ├── 12.1.md
│   │   ├── 12.2.md
│   │   ├── 12.3.md
│   │   ├── 12.4.md
│   │   └── Problems
│   │       ├── 12-1.md
│   │       ├── 12-2.md
│   │       ├── 12-3.md
│   │       └── 12-4.md
│   ├── Chap13
│   │   ├── 13.1.md
│   │   ├── 13.2.md
│   │   ├── 13.3.md
│   │   ├── 13.4.md
│   │   └── Problems
│   │       ├── 13-1.md
│   │       ├── 13-2.md
│   │       ├── 13-3.md
│   │       └── 13-4.md
│   ├── Chap14
│   │   ├── 14.1.md
│   │   ├── 14.2.md
│   │   ├── 14.3.md
│   │   └── Problems
│   │       ├── 14-1.md
│   │       └── 14-2.md
│   ├── Chap15
│   │   ├── 15.1.md
│   │   ├── 15.2.md
│   │   ├── 15.3.md
│   │   ├── 15.4.md
│   │   ├── 15.5.md
│   │   └── Problems
│   │       ├── 15-1.md
│   │       ├── 15-10.md
│   │       ├── 15-11.md
│   │       ├── 15-12.md
│   │       ├── 15-2.md
│   │       ├── 15-3.md
│   │       ├── 15-4.md
│   │       ├── 15-5.md
│   │       ├── 15-6.md
│   │       ├── 15-7.md
│   │       ├── 15-8.md
│   │       └── 15-9.md
│   ├── Chap16
│   │   ├── 16.1.md
│   │   ├── 16.2.md
│   │   ├── 16.3.md
│   │   ├── 16.4.md
│   │   ├── 16.5.md
│   │   └── Problems
│   │       ├── 16-1.md
│   │       ├── 16-2.md
│   │       ├── 16-3.md
│   │       ├── 16-4.md
│   │       └── 16-5.md
│   ├── Chap17
│   │   ├── 17.1.md
│   │   ├── 17.2.md
│   │   ├── 17.3.md
│   │   ├── 17.4.md
│   │   └── Problems
│   │       ├── 17-1.md
│   │       ├── 17-2.md
│   │       ├── 17-3.md
│   │       ├── 17-4.md
│   │       └── 17-5.md
│   ├── Chap18
│   │   ├── 18.1.md
│   │   ├── 18.2.md
│   │   ├── 18.3.md
│   │   └── Problems
│   │       ├── 18-1.md
│   │       └── 18-2.md
│   ├── Chap19
│   │   ├── 19.1.md
│   │   ├── 19.2.md
│   │   ├── 19.3.md
│   │   ├── 19.4.md
│   │   └── Problems
│   │       ├── 19-1.md
│   │       ├── 19-2.md
│   │       ├── 19-3.md
│   │       └── 19-4.md
│   ├── Chap20
│   │   ├── 20.1.md
│   │   ├── 20.2.md
│   │   ├── 20.3.md
│   │   └── Problems
│   │       ├── 20-1.md
│   │       └── 20-2.md
│   ├── Chap21
│   │   ├── 21.1.md
│   │   ├── 21.2.md
│   │   ├── 21.3.md
│   │   ├── 21.4.md
│   │   └── Problems
│   │       ├── 21-1.md
│   │       ├── 21-2.md
│   │       └── 21-3.md
│   ├── Chap22
│   │   ├── 22.1.md
│   │   ├── 22.2.md
│   │   ├── 22.3.md
│   │   ├── 22.4.md
│   │   ├── 22.5.md
│   │   └── Problems
│   │       ├── 22-1.md
│   │       ├── 22-2.md
│   │       ├── 22-3.md
│   │       └── 22-4.md
│   ├── Chap23
│   │   ├── 23.1.md
│   │   ├── 23.2.md
│   │   └── Problems
│   │       ├── 23-1.md
│   │       ├── 23-2.md
│   │       ├── 23-3.md
│   │       └── 23-4.md
│   ├── Chap24
│   │   ├── 24.1.md
│   │   ├── 24.2.md
│   │   ├── 24.3.md
│   │   ├── 24.4.md
│   │   ├── 24.5.md
│   │   └── Problems
│   │       ├── 24-1.md
│   │       ├── 24-2.md
│   │       ├── 24-3.md
│   │       ├── 24-4.md
│   │       ├── 24-5.md
│   │       └── 24-6.md
│   ├── Chap25
│   │   ├── 25.1.md
│   │   ├── 25.2.md
│   │   ├── 25.3.md
│   │   └── Problems
│   │       ├── 25-1.md
│   │       └── 25-2.md
│   ├── Chap26
│   │   ├── 26.1.md
│   │   ├── 26.2.md
│   │   ├── 26.3.md
│   │   ├── 26.4.md
│   │   ├── 26.5.md
│   │   └── Problems
│   │       ├── 26-1.md
│   │       ├── 26-2.md
│   │       ├── 26-3.md
│   │       ├── 26-4.md
│   │       ├── 26-5.md
│   │       └── 26-6.md
│   ├── Chap27
│   │   ├── 27.1.md
│   │   ├── 27.2.md
│   │   ├── 27.3.md
│   │   └── Problems
│   │       ├── 27-1.md
│   │       ├── 27-2.md
│   │       ├── 27-3.md
│   │       ├── 27-4.md
│   │       ├── 27-5.md
│   │       └── 27-6.md
│   ├── Chap28
│   │   ├── 28.1.md
│   │   ├── 28.2.md
│   │   ├── 28.3.md
│   │   └── Problems
│   │       ├── 28-1.md
│   │       └── 28-2.md
│   ├── Chap29
│   │   ├── 29.1.md
│   │   ├── 29.2.md
│   │   ├── 29.3.md
│   │   ├── 29.4.md
│   │   ├── 29.5.md
│   │   └── Problems
│   │       ├── 29-1.md
│   │       ├── 29-2.md
│   │       ├── 29-3.md
│   │       ├── 29-4.md
│   │       └── 29-5.md
│   ├── Chap30
│   │   ├── 30.1.md
│   │   ├── 30.2.md
│   │   ├── 30.3.md
│   │   └── Problems
│   │       ├── 30-1.md
│   │       ├── 30-2.md
│   │       ├── 30-3.md
│   │       ├── 30-4.md
│   │       ├── 30-5.md
│   │       └── 30-6.md
│   ├── Chap31
│   │   ├── 31.1.md
│   │   ├── 31.2.md
│   │   ├── 31.3.md
│   │   ├── 31.4.md
│   │   ├── 31.5.md
│   │   ├── 31.6.md
│   │   ├── 31.7.md
│   │   ├── 31.8.md
│   │   ├── 31.9.md
│   │   └── Problems
│   │       ├── 31-1.md
│   │       ├── 31-2.md
│   │       ├── 31-3.md
│   │       └── 31-4.md
│   ├── Chap32
│   │   ├── 32.1.md
│   │   ├── 32.2.md
│   │   ├── 32.3.md
│   │   ├── 32.4.md
│   │   └── Problems
│   │       └── 32-1.md
│   ├── Chap33
│   │   ├── 33.1.md
│   │   ├── 33.2.md
│   │   ├── 33.3.md
│   │   ├── 33.4.md
│   │   └── Problems
│   │       ├── 33-1.md
│   │       ├── 33-2.md
│   │       ├── 33-3.md
│   │       ├── 33-4.md
│   │       └── 33-5.md
│   ├── Chap34
│   │   ├── 34.1.md
│   │   ├── 34.2.md
│   │   ├── 34.3.md
│   │   ├── 34.4.md
│   │   ├── 34.5.md
│   │   └── Problems
│   │       ├── 34-1.md
│   │       ├── 34-2.md
│   │       ├── 34-3.md
│   │       └── 34-4.md
│   ├── Chap35
│   │   ├── 35.1.md
│   │   ├── 35.2.md
│   │   ├── 35.3.md
│   │   ├── 35.4.md
│   │   ├── 35.5.md
│   │   └── Problems
│   │       ├── 35-1.md
│   │       ├── 35-2.md
│   │       ├── 35-3.md
│   │       ├── 35-4.md
│   │       ├── 35-5.md
│   │       ├── 35-6.md
│   │       └── 35-7.md
│   ├── assets
│   │   └── favicon.png
│   ├── color.md
│   ├── css
│   │   └── mathjax.css
│   ├── img
│   │   ├── 10.4-1.png
│   │   ├── 12.1-1-1.png
│   │   ├── 12.1-1-2.png
│   │   ├── 12.1-1-3.png
│   │   ├── 12.1-1-4.png
│   │   ├── 12.1-1-5.png
│   │   ├── 13.1-1-1.png
│   │   ├── 13.1-1-2.png
│   │   ├── 13.1-1-3.png
│   │   ├── 13.1-1-4.png
│   │   ├── 13.1-2-1.png
│   │   ├── 13.1-2-2.png
│   │   ├── 13.3-2-1.png
│   │   ├── 13.3-2-2.png
│   │   ├── 13.3-2-3.png
│   │   ├── 13.3-2-4.png
│   │   ├── 13.3-2-5.png
│   │   ├── 13.3-2-6.png
│   │   ├── 13.3-3-1.png
│   │   ├── 13.3-3-2.png
│   │   ├── 13.4-3-1.png
│   │   ├── 13.4-3-2.png
│   │   ├── 13.4-3-3.png
│   │   ├── 13.4-3-4.png
│   │   ├── 13.4-3-5.png
│   │   ├── 13.4-3-6.png
│   │   ├── 13.4-3-7.png
│   │   ├── 13.4-7.png
│   │   ├── 18.3-1-1.png
│   │   ├── 18.3-1-2.png
│   │   ├── 18.3-1-3.png
│   │   ├── 18.3-1-4.png
│   │   ├── 21.3-1.png
│   │   ├── 6.4-1.png
│   │   ├── 6.5-1-1.png
│   │   ├── 6.5-1-2.png
│   │   ├── 6.5-1-3.png
│   │   ├── 6.5-1-4.png
│   │   ├── 6.5-1-5.png
│   │   ├── 6.5-2-1.png
│   │   ├── 6.5-2-2.png
│   │   ├── 6.5-2-3.png
│   │   ├── 6.5-2-4.png
│   │   └── 6.5-2-5.png
│   ├── index.md
│   └── js
│       └── mathjax.js
├── makefile
└── mkdocs.yml

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.DS_Store
site/

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Solutions to **Introduction to Algorithms** *Third Edition*

## Getting Started

This [website](https://walkccc.github.io/CLRS/) contains nearly complete solutions to the bible textbook, [**Introduction to Algorithms** *Third Edition*](https://mitpress.mit.edu/books/introduction-algorithms-third-edition), written by [Thomas H. Cormen](https://mitpress.mit.edu/contributors/thomas-h-cormen), [Charles E. Leiserson](https://mitpress.mit.edu/contributors/charles-e-leiserson), [Ronald L. Rivest](https://mitpress.mit.edu/contributors/ronald-l-rivest) and [Clifford Stein](https://mitpress.mit.edu/contributors/clifford-stein).

I hope this reorganization of the solutions helps more people (myself included) study algorithms. By using [Markdown (.md)](https://en.wikipedia.org/wiki/Markdown) files, the solutions are now much more readable on portable devices.

*"Many a little makes a mickle."*

## Special Note

This copy was excerpted from the GitHub repository linked here for personal study and reference only, not for any profit. If anything here infringes your rights, please contact QQ 438660228 and it will be removed promptly.

## Contributors

Thanks to: the Instructor's Manual by [Thomas H. Cormen](https://mitpress.mit.edu/contributors/thomas-h-cormen), [@skanev](https://github.com/skanev), [@CyberZHG](https://github.com/CyberZHG), [@yinyanghu](https://github.com/yinyanghu), @ajl213, etc.

Special thanks to [@JeffreyCA](https://github.com/JeffreyCA), who fixed MathJax rendering on iOS Safari in [#26](https://github.com/walkccc/CLRS/pull/26).

Please don't hesitate to send feedback if any of the solutions needs adjustment. You can simply press the pencil icon in the upper right corner to edit the contents, or [open an issue](https://github.com/walkccc/CLRS/issues/new) in [my repository](https://github.com/walkccc/CLRS/).

## Working on the following exercises

[18.2-1](https://walkccc.github.io/CLRS/Chap18/18.2/#182-1), [19.2-1](https://walkccc.github.io/CLRS/Chap19/19.2/#192-1).

I will continue to complete Part VII, Selected Topics.

## How I generate this website

I use the static site generator [MkDocs](http://www.mkdocs.org/) and the beautiful theme [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/) to build this website.

Since [KaTeX](https://khan.github.io/KaTeX/) does not support some of the LaTeX equations used here, I use [MathJax](https://www.mathjax.org/) to render the math on this website.

I also add [overflow-x: auto](https://www.w3schools.com/cssref/css3_pr_overflow-x.asp) to prevent overflow on mobile devices, so you can scroll display equations horizontally.

## More Information

For more information, please visit [**my GitHub site**](https://github.com/walkccc).

Updated to this new site on April 13, 2018 at 04:48 [(GMT+8)](https://time.is/GMT+8).

--------------------------------------------------------------------------------
/custom/main.html:
--------------------------------------------------------------------------------

{% extends "base.html" %}

{% block site_meta %}
  {{ super() }}

{% endblock %}

--------------------------------------------------------------------------------
/docs/Chap01/1.1.md:
--------------------------------------------------------------------------------
## 1.1-1

> Give a real-world example that requires sorting or a real-world example that requires computing a convex hull.

- Sorting: browsing restaurants on NTU Street in order of ascending price.
- Convex hull: computing the diameter of a set of points.

## 1.1-2

> Other than speed, what other measures of efficiency might one use in a real-world setting?

Memory efficiency and coding efficiency.

## 1.1-3

> Select a data structure that you have seen previously, and discuss its strengths and limitations.

Linked list:

- Strengths: constant-time insertion and deletion.
- Limitations: no constant-time random access; reaching the $i$-th element requires traversing $i$ links.
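To make the trade-off concrete, here is a minimal C++ sketch (not part of the original solution) using the standard `std::forward_list`, a singly linked list:

```cpp
#include <forward_list>
#include <iostream>
#include <iterator>

int main() {
    std::forward_list<int> list = {2, 3, 5};

    // Strength: inserting at the front takes O(1) time.
    list.push_front(1);

    // Limitation: there is no operator[]; reaching the i-th element
    // requires walking i links, i.e., O(n) time.
    auto it = list.begin();
    std::advance(it, 3);       // linear traversal under the hood
    std::cout << *it << '\n';  // prints 5
}
```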
## 1.1-4

> How are the shortest-path and traveling-salesman problems given above similar? How are they different?

- Similar: both problems ask for a path of shortest distance.
- Different: the traveling-salesman problem has more constraints.

## 1.1-5

> Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough.

- Best: finding the GCD of two positive integers.
- Approximately: finding a numerical solution of a differential equation.

--------------------------------------------------------------------------------
/docs/Chap01/1.2.md:
--------------------------------------------------------------------------------
## 1.2-1

> Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.

Driving navigation.

## 1.2-2

> Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size $n$, insertion sort runs in $8n^2$ steps, while merge sort runs in $64n\lg n$ steps. For which values of $n$ does insertion sort beat merge sort?

\begin{align}
8n^2 & < 64n\lg n \\\\
n & < 8\lg n \\\\
2^n & < n^8,
\end{align}

which holds for $2 \le n \le 43$. Thus insertion sort beats merge sort for $2 \le n \le 43$.

## 1.2-3

> What is the smallest value of $n$ such that an algorithm whose running time is $100n^2$ runs faster than an algorithm whose running time is $2^n$ on the same machine?

\begin{align}
100n^2 & < 2^n \\\\
n & \ge 15.
\end{align}
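Both cutoffs are easy to verify numerically; a brute-force check (a sketch, not part of the original solutions):

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Exercise 1.2-2: largest n with 8n^2 < 64 n lg n.
    int largest = 0;
    for (int n = 2; n <= 1000; ++n)
        if (8.0 * n * n < 64.0 * n * std::log2(n)) largest = n;
    std::cout << "insertion sort wins up to n = " << largest << '\n';  // 43

    // Exercise 1.2-3: smallest n with 100n^2 < 2^n.
    int n = 1;
    while (100.0 * n * n >= std::pow(2.0, n)) ++n;
    std::cout << "smallest n is " << n << '\n';  // 15
}
```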
--------------------------------------------------------------------------------
/docs/Chap01/Problems/1-1.md:
--------------------------------------------------------------------------------
> For each function $f(n)$ and time $t$ in the following table, determine the largest size $n$ of a problem that can be solved in time $t$, assuming that the algorithm to solve the problem takes $f(n)$ microseconds.

\begin{array}{cccccccc}
& \text{1 second} & \text{1 minute} & \text{1 hour} & \text{1 day} & \text{1 month} & \text{1 year} & \text{1 century} \\\\
\hline
\lg n & 2^{10^6} & 2^{6 \times 10^7} & 2^{3.6 \times 10^9} & 2^{8.64 \times 10^{10}} & 2^{2.59 \times 10^{12}} & 2^{3.15 \times 10^{13}} & 2^{3.15 \times 10^{15}} \\\\
\sqrt n & 10^{12} & 3.6 \times 10^{15} & 1.3 \times 10^{19} & 7.46 \times 10^{21} & 6.72 \times 10^{24} & 9.95 \times 10^{26} & 9.95 \times 10^{30} \\\\
n & 10^6 & 6 \times 10^7 & 3.6 \times 10^9 & 8.64 \times 10^{10} & 2.59 \times 10^{12} & 3.15 \times 10^{13} & 3.15 \times 10^{15} \\\\
n\lg n & 6.24 \times 10^4 & 2.8 \times 10^6 & 1.33 \times 10^8 & 2.76 \times 10^9 & 7.19 \times 10^{10} & 7.98 \times 10^{11} & 6.86 \times 10^{13} \\\\
n^2 & 1000 & 7745 & 60000 & 293938 & 1609968 & 5615692 & 56156922 \\\\
n^3 & 100 & 391 & 1532 & 4420 & 13736 & 31593 & 146645 \\\\
2^n & 19 & 25 & 31 & 36 & 41 & 44 & 51 \\\\
n! & 9 & 11 & 12 & 13 & 15 & 16 & 17
\end{array}

--------------------------------------------------------------------------------
/docs/Chap02/2.2.md:
--------------------------------------------------------------------------------
## 2.2-1

> Express the function $n^3 / 1000 - 100n^2 - 100n + 3$ in terms of $\Theta$-notation.

$\Theta(n^3)$.

## 2.2-2

> Consider sorting $n$ numbers stored in array $A$ by first finding the smallest element of $A$ and exchanging it with the element in $A[1]$. Then find the second smallest element of $A$, and exchange it with $A[2]$. Continue in this manner for the first $n - 1$ elements of $A$. Write pseudocode for this algorithm, which is known as ***selection sort***. What loop invariant does this algorithm maintain? Why does it need to run for only the first $n - 1$ elements, rather than for all $n$ elements? Give the best-case and worst-case running times of selection sort in $\Theta$-notation.

```cpp
SELECTION-SORT(A)
    n = A.length
    for j = 1 to n - 1
        smallest = j
        for i = j + 1 to n
            if A[i] < A[smallest]
                smallest = i
        exchange A[j] with A[smallest]
```

The algorithm maintains the loop invariant that at the start of each iteration of the outer **for** loop, the subarray $A[1..j - 1]$ consists of the $j - 1$ smallest elements in the array $A[1..n]$, and this subarray is in sorted order. After sorting the first $n - 1$ elements, the subarray $A[1..n - 1]$ contains the $n - 1$ smallest elements, sorted, and therefore element $A[n]$ must be the largest element.

The running time of the algorithm is $\Theta(n^2)$ for all cases.
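A runnable C++ version of the same procedure (a sketch; the pseudocode above is the answer itself):

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Selection sort: after the j-th outer iteration, a[0..j] holds the
// j + 1 smallest elements in sorted order (the loop invariant above).
void selectionSort(std::vector<int>& a) {
    for (std::size_t j = 0; j + 1 < a.size(); ++j) {
        std::size_t smallest = j;
        for (std::size_t i = j + 1; i < a.size(); ++i)
            if (a[i] < a[smallest]) smallest = i;
        std::swap(a[j], a[smallest]);
    }
}

int main() {
    std::vector<int> a = {5, 2, 4, 6, 1, 3};
    selectionSort(a);
    for (int x : a) std::cout << x << ' ';  // 1 2 3 4 5 6
}
```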
## 2.2-3

> Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in $\Theta$-notation? Justify your answers.

If the element is present in the sequence, half of the elements are likely to be checked before it is found in the average case. In the worst case, all of them will be checked. That is, $(n + 1) / 2$ checks for the average case and $n$ for the worst case. Both of them are $\Theta(n)$.

## 2.2-4

> How can we modify almost any algorithm to have a good best-case running time?

Modify the algorithm so it tests whether the input satisfies some special-case condition and, if it does, output a pre-computed answer. The best-case running time is generally not a good measure of an algorithm.

--------------------------------------------------------------------------------
/docs/Chap02/Problems/2-3.md:
--------------------------------------------------------------------------------
> The following code fragment implements Horner's rule for evaluating a polynomial
>
> \begin{align}
> P(x) & = \sum_{k = 0}^n a_k x^k \\\\
>      & = a_0 + x(a_1 + x (a_2 + \cdots + x(a_{n - 1} + x a_n) \cdots)),
> \end{align}
>
> given the coefficients $a_0, a_1, \ldots, a_n$ and a value of $x$:
>
> ```cpp
> y = 0
> for i = n downto 0
>     y = a[i] + x * y
> ```
>
> **a.** In terms of $\Theta$-notation, what is the running time of this code fragment for Horner's rule?
>
> **b.** Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner's rule?
>
> **c.** Consider the following loop invariant: At the start of each iteration of the **for** loop of lines 2-3,
>
> $$y = \sum_{k = 0}^{n - (i + 1)} a_{k + i + 1} x^k.$$
>
> Interpret a summation with no terms as equaling $0$. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination, $y = \sum_{k = 0}^n a_k x^k$.
>
> **d.** Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients $a_0, a_1, \ldots, a_n$.

**a.** $\Theta(n)$.

**b.**

```cpp
NAIVE-HORNER()
    y = 0
    for k = 0 to n
        temp = 1
        for i = 1 to k
            temp = temp * x    // temp = x^k
        y = y + a[k] * temp
```

The running time is $\Theta(n^2)$, because of the nested loop. It is obviously slower than Horner's rule.

**c.** **Initialization:** It is trivial, since at the start of the first iteration the summation has no terms, which implies $y = 0$.

**Maintenance:** By using the loop invariant, at the end of the iteration with counter value $i$, we have

\begin{align}
y & = a_i + x \sum_{k = 0}^{n - (i + 1)} a_{k + i + 1} x^k \\\\
  & = a_i x^0 + \sum_{k = 0}^{n - i - 1} a_{k + i + 1} x^{k + 1} \\\\
  & = a_i x^0 + \sum_{k = 1}^{n - i} a_{k + i} x^k \\\\
  & = \sum_{k = 0}^{n - i} a_{k + i} x^k,
\end{align}

which is the invariant with $i$ replaced by $i - 1$.

**Termination:** The loop terminates at $i = -1$. If we substitute,

$$y = \sum_{k = 0}^{n - i - 1} a_{k + i + 1} x^k = \sum_{k = 0}^n a_k x^k.$$

**d.** By the loop invariant, at termination $y = \sum_{k = 0}^n a_k x^k$, so the code fragment correctly evaluates the polynomial with the given coefficients.
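A runnable comparison of the two evaluation strategies (a sketch with made-up coefficients):

```cpp
#include <iostream>
#include <vector>

// Horner's rule: Theta(n) multiplications.
double horner(const std::vector<double>& a, double x) {
    double y = 0;
    for (int i = static_cast<int>(a.size()) - 1; i >= 0; --i)
        y = a[i] + x * y;
    return y;
}

// Naive evaluation: computes each power x^k from scratch,
// Theta(n^2) multiplications in total.
double naive(const std::vector<double>& a, double x) {
    double y = 0;
    for (std::size_t k = 0; k < a.size(); ++k) {
        double temp = 1;
        for (std::size_t i = 1; i <= k; ++i) temp *= x;
        y += a[k] * temp;
    }
    return y;
}

int main() {
    std::vector<double> a = {1, -2, 0, 3};  // 1 - 2x + 3x^3
    std::cout << horner(a, 2) << ' ' << naive(a, 2) << '\n';  // 21 21
}
```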
--------------------------------------------------------------------------------
/docs/Chap03/Problems/3-1.md:
--------------------------------------------------------------------------------
> Let
>
> $$p(n) = \sum_{i = 0}^d a_i n^i,$$
>
> where $a_d > 0$, be a degree-$d$ polynomial in $n$, and let $k$ be a constant. Use the definitions of the asymptotic notations to prove the following properties.
>
> **a.** If $k \ge d$, then $p(n) = O(n^k)$.
>
> **b.** If $k \le d$, then $p(n) = \Omega(n^k)$.
>
> **c.** If $k = d$, then $p(n) = \Theta(n^k)$.
>
> **d.** If $k > d$, then $p(n) = o(n^k)$.
>
> **e.** If $k < d$, then $p(n) = \omega(n^k)$.

Let's first see that $p(n) = O(n^d)$. We need to pick $c = a_d + b$ such that

$$\sum_{i = 0}^d a_i n^i = a_d n^d + a_{d - 1}n^{d - 1} + \cdots + a_1n + a_0 \le cn^d.$$

When we divide by $n^d$, we get

$$c = a_d + b \ge a_d + \frac{a_{d - 1}}n + \frac{a_{d - 2}}{n^2} + \cdots + \frac{a_0}{n^d}$$

and

$$b \ge \frac{a_{d - 1}}n + \frac{a_{d - 2}}{n^2} + \cdots + \frac{a_0}{n^d}.$$

If we choose $b = 1$, it suffices to make each of the $d$ terms on the right at most $1 / d$, which holds for all $n \ge n_0$ with

$$n_0 = \max\big(da_{d - 1}, \sqrt{da_{d - 2}}, \ldots, \sqrt[d]{da_0}\big).$$

Now we have $n_0$ and $c$, such that

$$p(n) \le cn^d \quad \text{for } n \ge n_0,$$

which is the definition of $O(n^d)$.

By choosing $b = -1$ we can prove the $\Omega(n^d)$ inequality and thus the $\Theta(n^d)$ equality.

It is very similar to prove the other inequalities.

--------------------------------------------------------------------------------
/docs/Chap03/Problems/3-2.md:
--------------------------------------------------------------------------------
> Indicate for each pair of expressions $(A, B)$ in the table below, whether $A$ is $O$, $o$, $\Omega$, $\omega$, or $\Theta$ of $B$. Assume that $k \ge 1$, $\epsilon > 0$, and $c > 1$ are constants. Your answer should be in the form of the table with "yes" or "no" written in each box.

\begin{array}{ccccccc}
A & B & O & o & \Omega & \omega & \Theta \\\\
\hline
\lg^k n & n^\epsilon & yes & yes & no & no & no \\\\
n^k & c^n & yes & yes & no & no & no \\\\
\sqrt n & n^{\sin n} & no & no & no & no & no \\\\
2^n & 2^{n / 2} & no & no & yes & yes & no \\\\
n^{\lg c} & c^{\lg n} & yes & no & yes & no & yes \\\\
\lg(n!) & \lg(n^n) & yes & no & yes & no & yes
\end{array}

--------------------------------------------------------------------------------
/docs/Chap03/Problems/3-4.md:
--------------------------------------------------------------------------------
> Let $f(n)$ and $g(n)$ be asymptotically positive functions. Prove or disprove each of the following conjectures.
>
> **a.** $f(n) = O(g(n))$ implies $g(n) = O(f(n))$.
>
> **b.** $f(n) + g(n) = \Theta(\min(f(n), g(n)))$.
>
> **c.** $f(n) = O(g(n))$ implies $\lg(f(n)) = O(\lg(g(n)))$, where $\lg(g(n)) \ge 1$ and $f(n) \ge 1$ for all sufficiently large $n$.
>
> **d.** $f(n) = O(g(n))$ implies $2^{f(n)} = O(2^{g(n)})$.
>
> **e.** $f(n) = O((f(n))^2)$.
>
> **f.** $f(n) = O(g(n))$ implies $g(n) = \Omega(f(n))$.
>
> **g.** $f(n) = \Theta(f(n / 2))$.
>
> **h.** $f(n) + o(f(n)) = \Theta(f(n))$.

**a.** Disprove, $n = O(n^2)$, but $n^2 \ne O(n)$.

**b.** Disprove, $n^2 + n \ne \Theta(\min(n^2, n)) = \Theta(n)$.

**c.** Prove, because $f(n) \ge 1$ after a certain $n \ge n_0$.

\begin{align}
\exists c, n_0: \forall n \ge n_0, 0 \le f(n) \le cg(n) \\\\
\Rightarrow 0 \le \lg f(n) \le \lg (cg(n)) = \lg c + \lg g(n).
\end{align}

We need to prove that

$$\lg f(n) \le d\lg g(n).$$

We can find $d$,

$$d = \frac{\lg c + \lg g(n)}{\lg g(n)} = \frac{\lg c}{\lg g(n)} + 1 \le \lg c + 1,$$

where the last step is valid, because $\lg g(n) \ge 1$.

**d.** Disprove, because $2n = O(n)$, but $2^{2n} = 4^n \ne O(2^n)$.

**e.** The claim holds whenever $f(n) \ge 1$ for all sufficiently large $n$, since then $0 \le f(n) \le f^2(n)$. But it fails for functions with $f(n) < 1$: for example, $f(n) = 1 / n$ gives $f^2(n) = 1 / n^2$, and there is no constant $c$ with $1 / n \le c / n^2$ for all large $n$, so the conjecture as stated is false in general.

**f.** Prove, from the first, we know that $0 \le f(n) \le cg(n)$ and we need to prove that $0 \le df(n) \le g(n)$, which is straightforward with $d = 1 / c$.

**g.** Disprove, let's pick $f(n) = 2^n$. We will need to prove that

$$\exists c_1, c_2, n_0: \forall n \ge n_0, 0 \le c_1 \cdot 2^{n / 2} \le 2^n \le c_2 \cdot 2^{n / 2},$$

which is obviously untrue.

**h.** Prove, let $g(n) = o(f(n))$. We need to prove that

$$\exists c_1, c_2, n_0: \forall n \ge n_0, 0 \le c_1f(n) \le f(n) + g(n) \le c_2f(n).$$

Thus, if we pick $c_1 = 1$ and $c_2 = 2$, it holds.
--------------------------------------------------------------------------------
/docs/Chap03/Problems/3-6.md:
--------------------------------------------------------------------------------
> We can apply the iteration operator $^\*$ used in the $\lg^\*$ function to any monotonically increasing function $f(n)$ over the reals. For a given constant $c \in \mathbb R$, we define the iterated function ${f_c}^\*$ by ${f_c}^\*(n) = \min \\{i \ge 0 : f^{(i)}(n) \le c \\}$ which need not be well defined in all cases. In other words, the quantity ${f_c}^\*(n)$ is the number of iterated applications of the function $f$ required to reduce its argument down to $c$ or less.
>
> For each of the following functions $f(n)$ and constants $c$, give as tight a bound as possible on ${f_c}^\*(n)$.

\begin{array}{ccl}
f(n) & c & {f_c}^\* \\\\
\hline
n - 1 & 0 & \Theta(n) \\\\
\lg n & 1 & \Theta(\lg^\*{n}) \\\\
n / 2 & 1 & \Theta(\lg n) \\\\
n / 2 & 2 & \Theta(\lg n) \\\\
\sqrt n & 2 & \Theta(\lg\lg n) \\\\
\sqrt n & 1 & \text{does not converge} \\\\
n^{1 / 3} & 2 & \Theta(\log_3{\lg n}) \\\\
n / \lg n & 2 & \omega(\lg\lg n), o(\lg n)
\end{array}

--------------------------------------------------------------------------------
/docs/Chap04/4.6.md:
--------------------------------------------------------------------------------
## 4.6-1 $\star$

> Give a simple and exact expression for $n_j$ in equation $\text{(4.27)}$ for the case in which $b$ is a positive integer instead of an arbitrary real number.

$n_j$ is obtained by shifting the base-$b$ representation of $n$ by $j$ positions to the right, and adding $1$ if any of the $j$ least significant positions are non-zero.

## 4.6-2 $\star$

> Show that if $f(n) = \Theta(n^{\log_b a}\lg^k{n})$, where $k \ge 0$, then the master recurrence has solution $T(n) = \Theta(n^{\log_b a}\lg^{k + 1}n)$. For simplicity, confine your analysis to exact powers of $b$.

The cost of the driving terms summed over the recursion tree is

\begin{align}
g(n) & = \sum_{j = 0}^{\log_b n - 1} a^j f(n / b^j), \\\\
f(n / b^j) & = \Theta\Big((n / b^j)^{\log_b a} \lg^k(n / b^j) \Big),
\end{align}

so $g(n) = \Theta(A)$, where

\begin{align}
A & = \sum_{j = 0}^{\log_b n - 1} a^j \big(\frac{n}{b^j}\big)^{\log_b a}\lg^k\frac{n}{b^j} \\\\
& = n^{\log_b a} \sum_{j = 0}^{\log_b n - 1}\Big(\frac{a}{b^{\log_b a}}\Big)^j\lg^k\frac{n}{b^j} \\\\
& = n^{\log_b a}\sum_{j = 0}^{\log_b n - 1}\lg^k\frac{n}{b^j} \quad \text{(since $b^{\log_b a} = a$)} \\\\
& = n^{\log_b a} B.
\end{align}

Substituting $t = \log_b n - j$, so that $n / b^j = b^t$,

\begin{align}
B & = \sum_{j = 0}^{\log_b n - 1}\lg^k\frac{n}{b^j} \\\\
& = \sum_{t = 1}^{\log_b n} (t\lg b)^k \\\\
& = \lg^k b \cdot \Theta\big((\log_b n)^{k + 1}\big) \\\\
& = \Theta(\lg^{k + 1}{n}).
\end{align}

Therefore,

$$g(n) = \Theta(A) = \Theta(n^{\log_b a}B) = \Theta(n^{\log_b a}\lg^{k + 1}{n}).$$

## 4.6-3 $\star$

> Show that case 3 of the master method is overstated, in the sense that the regularity condition $af(n / b) \le cf(n)$ for some constant $c < 1$ implies that there exists a constant $\epsilon > 0$ such that $f(n) = \Omega(n^{\log_b a + \epsilon})$.

\begin{align}
af(n / b) & \le cf(n) \\\\
f(n) & \ge \frac{a}{c} f(n / b) = \alpha f(n / b), \quad \alpha = a / c \\\\
f(b^i) & \ge \alpha^i f(1).
\end{align}

For $n = b^i$ we have $i = \log_b n$, so

$$f(n) \ge \alpha^{\log_b n} f(1) = f(1) \cdot n^{\log_b \alpha}.$$

Since $c < 1$, we have $\alpha = a / c > a$, hence

$$\log_b \alpha = \log_b a + \log_b(1 / c) = \log_b a + \epsilon, \quad \text{where } \epsilon = \log_b(1 / c) > 0,$$

and therefore $f(n) = \Omega(n^{\log_b a + \epsilon})$.
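A quick numeric sanity check of Exercise 4.6-2 (a sketch, not part of the original solution): for $T(n) = 2T(n / 2) + n\lg n$ we have $a = b = 2$ and $k = 1$, so the claim is $T(n) = \Theta(n\lg^2 n)$, and the printed ratio should settle near a constant (about $0.5$):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // T(n) = 2 T(n/2) + n lg n, with T(1) = 1 and n a power of 2.
    double T = 1.0;
    for (int i = 1; i <= 30; ++i) {
        double n = std::pow(2.0, i);
        T = 2.0 * T + n * i;  // here lg n = i
        if (i % 5 == 0)
            std::printf("n = 2^%d  T / (n lg^2 n) = %.4f\n",
                        i, T / (n * i * i));
    }
}
```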
--------------------------------------------------------------------------------
/docs/Chap04/Problems/4-2.md:
--------------------------------------------------------------------------------

> Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an $N$-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:
>
> 1. An array is passed by pointer. Time $= \Theta(1)$.
> 2. An array is passed by copying. Time $= \Theta(N)$, where $N$ is the size of the array.
> 3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time $= \Theta(q - p + 1)$ if the subarray $A[p..q]$ is passed.
>
> **a.** Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let $N$ be the size of the original problem and $n$ be the size of a subproblem.
>
> **b.** Redo part (a) for the $\text{MERGE-SORT}$ algorithm from Section 2.3.1.

**a.**

1. $T(n) = T(n / 2) + c = \Theta(\lg n)$. (master method)
2. Each call copies the whole original array, costing $\Theta(N)$:

\begin{align}
T(n) & = T(n / 2) + cN \\\\
& = \sum_{i = 0}^{\lg n - 1} cN \\\\
& = cN\lg n \\\\
& = \Theta(N\lg n),
\end{align}

which is $\Theta(n\lg n)$, since $N = n$ at the top level.

3. $T(n) = T(n / 2) + cn = \Theta(n)$. (master method)

**b.**

1. $T(n) = 2T(n / 2) + cn = \Theta(n\lg n)$. (master method)
2. Each call copies the whole original array:

\begin{align}
T(n) & = 2T(n / 2) + cn + 2N \\\\
& = 4T(n / 4) + 2cn + (2 + 4)N \\\\
& = 8T(n / 8) + 3cn + (2 + 4 + 8)N \\\\
& = \sum_{i = 0}^{\lg n - 1}(cn + 2^{i + 1}N) \\\\
& = cn\lg n + (2n - 2)N \\\\
& = \Theta(nN) \\\\
& = \Theta(n^2).
\end{align}

3. Only the accessed subranges are copied, adding $\Theta(n)$ per level:

\begin{align}
T(n) & = 2T(n / 2) + cn + 2 \cdot n / 2 \\\\
& = 2T(n / 2) + (c + 1)n \\\\
& = \Theta(n\lg n).
\end{align}
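To make strategy 2's $\Theta(N)$ cost per call concrete, here is a sketch of recursive binary search written both ways; the by-value version copies the whole vector on every call (the function names are illustrative, not from the text):

```cpp
#include <iostream>
#include <vector>

// Strategy 1/3 flavor: the array is passed by reference in constant
// time, so each call costs Theta(1): T(n) = T(n/2) + c.
bool searchByRef(const std::vector<int>& a, int lo, int hi, int key) {
    if (lo > hi) return false;
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key) return true;
    if (a[mid] < key) return searchByRef(a, mid + 1, hi, key);
    return searchByRef(a, lo, mid - 1, key);
}

// Strategy 2 flavor: passing by value copies all N elements on every
// call: T(n) = T(n/2) + Theta(N), i.e., Theta(N lg n) overall.
bool searchByValue(std::vector<int> a, int lo, int hi, int key) {
    if (lo > hi) return false;
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key) return true;
    if (a[mid] < key) return searchByValue(a, mid + 1, hi, key);
    return searchByValue(a, lo, mid - 1, key);
}

int main() {
    std::vector<int> a = {1, 3, 5, 7, 9};
    std::cout << searchByRef(a, 0, 4, 7)
              << searchByValue(a, 0, 4, 2) << '\n';  // prints 10
}
```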
--------------------------------------------------------------------------------
/docs/Chap04/Problems/4-5.md:
--------------------------------------------------------------------------------

> Professor Diogenes has $n$ supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the professor cannot trust the answer of a bad chip. Thus, the four possible outcomes of a test are as follows:
>
> \begin{array}{lll}
> \text{Chip $A$ says} & \text{Chip $B$ says} & \text{Conclusion} \\\\
> \hline
> \text{$B$ is good} & \text{$A$ is good} & \text{both are good, or both are bad} \\\\
> \text{$B$ is good} & \text{$A$ is bad} & \text{at least one is bad} \\\\
> \text{$B$ is bad} & \text{$A$ is good} & \text{at least one is bad} \\\\
> \text{$B$ is bad} & \text{$A$ is bad} & \text{at least one is bad}
> \end{array}
>
> **a.** Show that if more than $n / 2$ chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.
>
> **b.** Consider the problem of finding a single good chip from among $n$ chips, assuming that more than $n / 2$ of the chips are good. Show that $\lfloor n / 2 \rfloor$ pairwise tests are sufficient to reduce the problem to one of nearly half the size.
>
> **c.** Show that the good chips can be identified with $\Theta(n)$ pairwise tests, assuming that more than $n / 2$ chips are good. Give and solve the recurrence that describes the number of tests.

**a.** Let's say that there are $g < n / 2$ good chips. An equal number of the remaining bad chips can choose to behave exactly like the good chips: they identify each other as good and all other chips as faulty. Since this is what the good chips would do, the two groups are symmetric with respect to pairwise tests. No strategy can distinguish between them.

**b.** We split the chips into pairs and compare them. If the outcome is the first one (both report the other as good), we keep one chip of the pair; otherwise we discard both. When discarding, we remove at least one bad chip for every good one we remove. Among the pairs from which we kept a chip, the kept good chips outnumber the kept bad chips (there are more all-good pairs than all-bad pairs, because the good chips are more than half). Now we have at most $n / 2$ chips, more than half of which are good.

**c.** The recurrence for finding at least one good chip is

$$T(n) = T(n / 2) + n / 2.$$

By the master method, this is $\Theta(n)$. Once we have found a good chip, we can compare it with all the others, which is a $\Theta(n)$ operation, identifying every good chip.

--------------------------------------------------------------------------------
/docs/Chap06/Problems/6-1.md:
--------------------------------------------------------------------------------
> We can build a heap by repeatedly calling $\text{MAX-HEAP-INSERT}$ to insert the elements into the heap. Consider the following variation of the $\text{BUILD-MAX-HEAP}$ procedure:
>
> ```cpp
> BUILD-MAX-HEAP'(A)
>     A.heap-size = 1
>     for i = 2 to A.length
>         MAX-HEAP-INSERT(A, A[i])
> ```
>
> **a.** Do the procedures $\text{BUILD-MAX-HEAP}$ and $\text{BUILD-MAX-HEAP}'$ always create the same heap when run on the same input array? Prove that they do, or provide a counterexample.
>
> **b.** Show that in the worst case, $\text{BUILD-MAX-HEAP}'$ requires $\Theta(n\lg n)$ time to build a $n$-element heap.

**a.** The procedures $\text{BUILD-MAX-HEAP}$ and $\text{BUILD-MAX-HEAP}'$ do not always create the same heap when run on the same input array. Consider the following counterexample.

Input array $A = \langle 1, 2, 3 \rangle$:

$\text{BUILD-MAX-HEAP}(A)$: $A = \langle 3, 2, 1 \rangle$.
$\text{BUILD-MAX-HEAP}'(A)$: $A = \langle 3, 1, 2 \rangle$.

**b.** An upper bound of $O(n\lg n)$ time follows immediately from there being $n - 1$ calls to $\text{MAX-HEAP-INSERT}$, each taking $O(\lg n)$ time. For a lower bound of $\Omega(n\lg n)$, consider the case in which the input array is given in strictly increasing order. Each call to $\text{MAX-HEAP-INSERT}$ causes $\text{HEAP-INCREASE-KEY}$ to go all the way up to the root. Since the depth of node $i$ is $\lfloor \lg i \rfloor$, the total time is

\begin{align}
\sum_{i = 1}^n \Theta(\lfloor \lg i \rfloor)
& \ge \sum_{i = \lceil n / 2 \rceil}^n \Theta(\lfloor \lg \lceil n / 2 \rceil\rfloor) \\\\
& \ge \sum_{i = \lceil n / 2 \rceil}^n \Theta(\lfloor \lg (n / 2) \rfloor) \\\\
& = \sum_{i = \lceil n / 2 \rceil}^n \Theta(\lfloor \lg n - 1 \rfloor) \\\\
& \ge n / 2 \cdot \Theta(\lg n) \\\\
& = \Omega(n\lg n).
\end{align}

In the worst case, therefore, $\text{BUILD-MAX-HEAP}'$ requires $\Theta(n\lg n)$ time to build an $n$-element heap.
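The counterexample in part (a) is easy to check in code; this sketch builds the heap both ways for $A = \langle 1, 2, 3 \rangle$ (0-based indexing):

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Sift a[i] down within a[0..n-1] (MAX-HEAPIFY, 0-based).
void maxHeapify(std::vector<int>& a, int n, int i) {
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) { std::swap(a[i], a[largest]); maxHeapify(a, n, largest); }
}

// BUILD-MAX-HEAP: sift down from the last internal node.
void buildMaxHeap(std::vector<int>& a) {
    for (int i = (int)a.size() / 2 - 1; i >= 0; --i)
        maxHeapify(a, a.size(), i);
}

// BUILD-MAX-HEAP': repeated MAX-HEAP-INSERT (sift each new key up).
void buildMaxHeapPrime(std::vector<int>& a) {
    for (int s = 2; s <= (int)a.size(); ++s)
        for (int i = s - 1; i > 0 && a[(i - 1) / 2] < a[i]; i = (i - 1) / 2)
            std::swap(a[i], a[(i - 1) / 2]);
}

int main() {
    std::vector<int> x = {1, 2, 3}, y = {1, 2, 3};
    buildMaxHeap(x);       // 3 2 1
    buildMaxHeapPrime(y);  // 3 1 2
    for (int v : x) std::cout << v << ' ';
    std::cout << "| ";
    for (int v : y) std::cout << v << ' ';
}
```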
--------------------------------------------------------------------------------
/docs/Chap07/7.1.md:
--------------------------------------------------------------------------------
## 7.1-1

> Using Figure 7.1 as a model, illustrate the operation of $\text{PARTITION}$ on the array $A = \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle$.

\begin{align}
\langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 9, 19, 13, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 13, 19, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 13, 19, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 8, 19, 12, 13, 7, 4, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 8, 7, 12, 13, 19, 4, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 8, 7, 4, 13, 19, 12, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 8, 7, 4, 13, 19, 12, 21, 2, 6, 11 \rangle \\\\
\langle 9, 5, 8, 7, 4, 2, 19, 12, 21, 13, 6, 11 \rangle \\\\
\langle 9, 5, 8, 7, 4, 2, 6, 12, 21, 13, 19, 11 \rangle \\\\
\langle 9, 5, 8, 7, 4, 2, 6, 11, 21, 13, 19, 12 \rangle
\end{align}

## 7.1-2

> What value of $q$ does $\text{PARTITION}$ return when all elements in the array $A[p..r]$ have the same value? Modify $\text{PARTITION}$ so that $q = \lfloor (p + r) / 2 \rfloor$ when all elements in the array $A[p..r]$ have the same value.

It returns $r$.

We can modify $\text{PARTITION}$ by counting the number of comparisons in which $A[j] = A[r]$ and then subtracting half that number from the pivot index.
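One way to realize such a modification in code (a sketch, using an alternating toggle in place of explicit counting): treat every other key that equals the pivot as if it were smaller, so ties are split evenly between the two sides and an all-equal subarray yields $q = \lfloor (p + r) / 2 \rfloor$.

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Lomuto partition on a[p..r] that alternates its handling of keys
// equal to the pivot, so ties are split evenly across both sides.
int partitionEqual(std::vector<int>& a, int p, int r) {
    int x = a[r];
    bool takeEqual = false;  // flips on every tie with the pivot
    int i = p - 1;
    for (int j = p; j < r; ++j) {
        bool smaller = a[j] < x || (a[j] == x && takeEqual);
        if (a[j] == x) takeEqual = !takeEqual;
        if (smaller) std::swap(a[++i], a[j]);
    }
    std::swap(a[i + 1], a[r]);
    return i + 1;
}

int main() {
    std::vector<int> a(8, 7);                      // all elements equal
    std::cout << partitionEqual(a, 0, 7) << '\n';  // 3 = floor((0 + 7) / 2)
}
```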
## 7.1-3

> Give a brief argument that the running time of $\text{PARTITION}$ on a subarray of size $n$ is $\Theta(n)$.

There is a **for** statement whose body executes $r - p = \Theta(n)$ times. In the worst case the body of the **if** executes on every iteration, but it takes constant time, and so does the code outside the loop. Thus the running time is $\Theta(n)$.

## 7.1-4

> How would you modify $\text{QUICKSORT}$ to sort into nonincreasing order?

We only need to flip the comparison on line 4 of $\text{PARTITION}$ (test $A[j] \ge x$ instead of $A[j] \le x$).

--------------------------------------------------------------------------------
/docs/Chap07/7.3.md:
--------------------------------------------------------------------------------
## 7.3-1

> Why do we analyze the expected running time of a randomized algorithm and not its worst-case running time?

We may be interested in the worst-case performance, but in that case, the randomization is irrelevant: it won't improve the worst case. What randomization can do is make the chance of encountering a worst-case scenario small.

## 7.3-2

> When $\text{RANDOMIZED-QUICKSORT}$ runs, how many calls are made to the random number generator $\text{RANDOM}$ in the worst case? How about in the best case? Give your answer in terms of $\Theta$-notation.

In the worst case, the number of calls to $\text{RANDOM}$ is

$$T(n) = T(n - 1) + 1 = n = \Theta(n).$$

As for the best case,

$$T(n) = 2T(n / 2) + 1 = \Theta(n).$$

This is not too surprising: each call to $\text{RANDOMIZED-PARTITION}$ makes exactly one call to $\text{RANDOM}$ and removes one element (the pivot) from further consideration, so there are $\Theta(n)$ calls in every case.

--------------------------------------------------------------------------------
/docs/Chap07/Problems/7-1.md:
--------------------------------------------------------------------------------
> The version of $\text{PARTITION}$ given in this chapter is not the original partitioning algorithm. Here is the original partition algorithm, which is due to C.A.R. Hoare:
>
> ```cpp
> HOARE-PARTITION(A, p, r)
>     x = A[p]
>     i = p - 1
>     j = r + 1
>     while true
>         repeat
>             j = j - 1
>         until A[j] ≤ x
>         repeat
>             i = i + 1
>         until A[i] ≥ x
>         if i < j
>             exchange A[i] with A[j]
>         else return j
> ```
>
> **a.** Demonstrate the operation of $\text{HOARE-PARTITION}$ on the array $A = \langle 13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21 \rangle$, showing the values of the array and auxiliary values after each iteration of the **while** loop in lines 4-13.
>
> The next three questions ask you to give a careful argument that the procedure $\text{HOARE-PARTITION}$ is correct. Assuming that the subarray $A[p..r]$ contains at least two elements, prove the following:
>
> **b.** The indices $i$ and $j$ are such that we never access an element of $A$ outside the subarray $A[p..r]$.
>
> **c.** When $\text{HOARE-PARTITION}$ terminates, it returns a value $j$ such that $p \le j < r$.
>
> **d.** Every element of $A[p..j]$ is less than or equal to every element of $A[j + 1..r]$ when $\text{HOARE-PARTITION}$ terminates.
>
> The $\text{PARTITION}$ procedure in section 7.1 separates the pivot value (originally in $A[r]$) from the two partitions it forms. The $\text{HOARE-PARTITION}$ procedure, on the other hand, always places the pivot value (originally in $A[p]$) into one of the two partitions $A[p..j]$ and $A[j + 1..r]$. Since $p \le j < r$, this split is always nontrivial.
>
> **e.** Rewrite the $\text{QUICKSORT}$ procedure to use $\text{HOARE-PARTITION}$.

**a.** The pivot is $x = 13$. After the first iteration of the **while** loop, $i = 1$ and $j = 11$, and $A[1]$ has been exchanged with $A[11]$:

$$A = \langle 6, 19, 9, 5, 12, 8, 7, 4, 11, 2, 13, 21 \rangle.$$

After the second iteration, $i = 2$ and $j = 10$, and $A[2]$ has been exchanged with $A[10]$:

$$A = \langle 6, 2, 9, 5, 12, 8, 7, 4, 11, 19, 13, 21 \rangle.$$

In the third iteration the indices cross, with $j = 9$ and $i = 10$, so no exchange occurs and the procedure returns $j = 9$.

**b.** The $j$ loop always stops at or before position $p$: initially $A[p] = x$, and any value later placed at position $p$ was moved there because it was $\le x$, so the test $A[j] \le x$ succeeds by then. Symmetrically, the $i$ loop always stops at or before the position where $j$ stopped in the previous iteration (or at $p$ in the first iteration), since the value there is known to satisfy the test. Hence $p \le i, j \le r$ throughout, and no element outside $A[p..r]$ is accessed.

**c.** In the first iteration, $i$ stops at $p$ (since $A[p] = x$). If the procedure returns in the first iteration, then $j \le i = p$, so $j = p < r$. Otherwise there is at least a second iteration, in which $j$ strictly decreases from a value that is at most $r$, so the returned value satisfies $j < r$; and $j \ge p$ by part (b).

**d.** After each completed iteration that performs an exchange, every position $\le i$ holds a value $\le x$ and every position $\ge j$ holds a value $\ge x$. When the procedure terminates with $i \ge j$, every element of $A[p..j]$ is therefore $\le x$ and every element of $A[j + 1..r]$ is $\ge x$, so each element of the first part is less than or equal to each element of the second.

**e.**

```cpp
QUICKSORT(A, p, r)
    if p < r
        q = HOARE-PARTITION(A, p, r)
        QUICKSORT(A, p, q)
        QUICKSORT(A, q + 1, r)
```
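A runnable C++ transcription of $\text{HOARE-PARTITION}$ and the rewritten $\text{QUICKSORT}$ (a sketch mirroring the pseudocode):

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Hoare partition: returns j with p <= j < r such that every
// element of a[p..j] is <= every element of a[j+1..r].
int hoarePartition(std::vector<int>& a, int p, int r) {
    int x = a[p], i = p - 1, j = r + 1;
    while (true) {
        do { --j; } while (a[j] > x);  // repeat ... until a[j] <= x
        do { ++i; } while (a[i] < x);  // repeat ... until a[i] >= x
        if (i < j) std::swap(a[i], a[j]);
        else return j;
    }
}

void quicksort(std::vector<int>& a, int p, int r) {
    if (p < r) {
        int q = hoarePartition(a, p, r);
        quicksort(a, p, q);  // note: q, not q - 1, since the pivot
        quicksort(a, q + 1, r);  // value stays inside a[p..q]
    }
}

int main() {
    std::vector<int> a = {13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21};
    quicksort(a, 0, a.size() - 1);
    for (int v : a) std::cout << v << ' ';
}
```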
--------------------------------------------------------------------------------
/docs/Chap07/Problems/7-6.md:
--------------------------------------------------------------------------------
> Consider the problem in which we do not know the numbers exactly. Instead, for each number, we know an interval on the real line to which it belongs. That is, we are given $n$ closed intervals of the form $[a_i, b_i]$, where $a_i \le b_i$. We wish to ***fuzzy-sort*** these intervals, i.e., to produce a permutation $\langle i_1, i_2, \ldots, i_n \rangle$ of the intervals such that for $j = 1, 2, \ldots, n$, there exists $c_j \in [a_{i_j}, b_{i_j}]$ satisfying $c_1 \le c_2 \le \cdots \le c_n$.
>
> **a.** Design a randomized algorithm for fuzzy-sorting $n$ intervals. Your algorithm should have the general structure of an algorithm that quicksorts the left endpoints (the $a_i$ values), but it should take advantage of overlapping intervals to improve the running time. (As the intervals overlap more and more, the problem of fuzzy-sorting the intervals becomes progressively easier. Your algorithm should take advantage of such overlapping, to the extent that it exists.)
>
> **b.** Argue that your algorithm runs in expected time $\Theta(n\lg n)$ in general, but runs in expected time $\Theta(n)$ when all of the intervals overlap (i.e., when there exists a value $x$ such that $x \in [a_i, b_i]$ for all $i$). Your algorithm should not be checking for this case explicitly; rather, its performance should naturally improve as the amount of overlap increases.

**a.**

```cpp
FUZZY-PARTITION(A, p, r)
    x = A[r]
    exchange A[r] with A[p]
    i = p - 1
    k = p
    for j = p + 1 to r - 1
        if b[j] < x.a
            i = i + 1
            k = i + 2
            exchange A[i] with A[j]
            exchange A[k] with A[j]
        if b[j] ≥ x.a and a[j] ≤ x.b
            x.a = max(a[j], x.a) and x.b = min(b[j], x.b)
            k = k + 1
            exchange A[k] with A[j]
    exchange A[i + 1] with A[r]
    return i + 1 and k + 1
```

When intervals overlap the pivot, we shrink the pivot interval to the common intersection and treat those intervals as equal elements, thus cutting down on the time required to sort.

**b.** For distinct intervals the algorithm runs exactly as regular quicksort does, so its expected runtime will be $\Theta(n\lg n)$ in general. If all of the intervals overlap, then the overlap condition will be satisfied on every iteration of the **for** loop. Thus the algorithm returns $p$ and $r$, so only empty arrays remain to be sorted. $\text{FUZZY-PARTITION}$ will only be called a single time, and since its runtime remains $\Theta(n)$, the total expected runtime is $\Theta(n)$.

--------------------------------------------------------------------------------
/docs/Chap08/8.2.md:
--------------------------------------------------------------------------------
## 8.2-1

> Using Figure 8.2 as a model, illustrate the operation of $\text{COUNTING-SORT}$ on the array $A = \langle 6, 0, 2, 0, 1, 3, 4, 6, 1, 3, 2 \rangle$.

We have that $C = \langle 2, 4, 6, 8, 9, 9, 11 \rangle$. Then, after successive iterations of the loop on lines 10-12, we have

\begin{align}
B & = \langle , , , , , 2, , , , , \rangle, \\\\
B & = \langle , , , , , 2, , 3, , , \rangle, \\\\
B & = \langle , , , 1, , 2, , 3, , , \rangle
\end{align}

and at the end,

$$B = \langle 0, 0, 1, 1, 2, 2, 3, 3, 4, 6, 6 \rangle.$$

## 8.2-2

> Prove that $\text{COUNTING-SORT}$ is stable.

Consider two equal elements of the input array, $A[s] = A[t]$ with $s < t$. The final **for** loop of $\text{COUNTING-SORT}$ processes $A$ from right to left, so $A[t]$ is placed into $B$ first, at index $C[A[t]]$, after which $C[A[t]]$ is decremented. When $A[s]$ is placed later, it therefore lands at a strictly smaller index. Hence $A[s]$ and $A[t]$ appear in the output array $B$ in the same order as they appear in $A$, and $\text{COUNTING-SORT}$ is stable.

## 8.2-3

> Suppose that we were to rewrite the **for** loop header in line 10 of the $\text{COUNTING-SORT}$ as
>
> ```cpp
> 10 for j = 1 to A.length
> ```
>
> Show that the algorithm still works properly. Is the modified algorithm stable?

*[The following solution also answers Exercise 8.2-2.]*

Notice that the correctness argument in the text does not depend on the order in which $A$ is processed. The algorithm is correct no matter what order is used!

But the modified algorithm is not stable. As before, in the final **for** loop an element equal to one taken from $A$ earlier is placed before the earlier one (i.e., at a lower index position) in the output array $B$. The original algorithm was stable because an element taken from $A$ later started out with a lower index than one taken earlier. But in the modified algorithm, an element taken from $A$ later started out with a higher index than one taken earlier.

In particular, the algorithm still places the elements with value $k$ in positions $C[k - 1] + 1$ through $C[k]$, but in the reverse order of their appearance in $A$.

## 8.2-4

> Describe an algorithm that, given $n$ integers in the range $0$ to $k$, preprocesses its input and then answers any query about how many of the $n$ integers fall into a range $[a..b]$ in $O(1)$ time. Your algorithm should use $\Theta(n + k)$ preprocessing time.

Compute the $C$ array as is done in counting sort. The number of integers in the range $[a..b]$ is $C[b] - C[a - 1]$, where we interpret $C[-1]$ as $0$.
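A sketch of the Exercise 8.2-4 scheme (the `RangeCounter` name is illustrative): build the prefix-count array $C$ in $\Theta(n + k)$ time, exactly as counting sort does, then answer each range query in $O(1)$:

```cpp
#include <iostream>
#include <vector>

struct RangeCounter {
    std::vector<int> C;  // C[v] = how many inputs are <= v

    // Theta(n + k) preprocessing, as in COUNTING-SORT.
    RangeCounter(const std::vector<int>& a, int k) : C(k + 1, 0) {
        for (int v : a) ++C[v];
        for (int v = 1; v <= k; ++v) C[v] += C[v - 1];
    }

    // Number of inputs in [lo..hi], in O(1) time.
    int query(int lo, int hi) const {
        return C[hi] - (lo > 0 ? C[lo - 1] : 0);
    }
};

int main() {
    RangeCounter rc({6, 0, 2, 0, 1, 3, 4, 6, 1, 3, 2}, 6);
    std::cout << rc.query(1, 3) << '\n';  // 6 values lie in [1..3]
}
```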
--------------------------------------------------------------------------------
/docs/Chap08/Problems/8-2.md:
--------------------------------------------------------------------------------
> Suppose that we have an array of $n$ data records to sort and that the key of each record has the value $0$ or $1$. An algorithm for sorting such a set of records might possess some subset of the following three desirable characteristics:
>
> 1. The algorithm runs in $O(n)$ time.
> 2. The algorithm is stable.
> 3. The algorithm sorts in place, using no more than a constant amount of storage space in addition to the original array.
>
> **a.** Give an algorithm that satisfies criteria 1 and 2 above.
>
> **b.** Give an algorithm that satisfies criteria 1 and 3 above.
>
> **c.** Give an algorithm that satisfies criteria 2 and 3 above.
>
> **d.** Can you use any of your sorting algorithms from parts (a)–(c) as the sorting method used in line 2 of $\text{RADIX-SORT}$, so that $\text{RADIX-SORT}$ sorts $n$ records with $b$-bit keys in $O(bn)$ time? Explain how or why not.
>
> **e.** Suppose that the $n$ records have keys in the range from $1$ to $k$. Show how to modify counting sort so that it sorts the records in place in $O(n + k)$ time. You may use $O(k)$ storage outside the input array. Is your algorithm stable? ($\textit{Hint:}$ How would you do it for $k = 3$?)

**a.** Counting sort: it is stable and runs in $O(n)$ time.

**b.** A single quicksort-style partition around the key value $1$: it runs in place in $O(n)$ time, but it is not stable.

**c.** Insertion sort: it is stable and in place, but runs in $\Theta(n^2)$ time.

**d.** Only the algorithm from part (a). Radix sort needs each per-digit pass to be both stable and $O(n)$; counting sort from (a) qualifies, while (b) is not stable and (c) is not $O(n)$.

**e.** Using $O(k)$ storage outside the input array:

```cpp
COUNTING-SORT(A, k)
    let C[0..k] be a new array
    for i = 0 to k
        C[i] = 0
    for i = 1 to A.length
        C[A[i]] = C[A[i]] + 1    // C[i] now contains the number of elements equal to i
    p = 0
    for i = 0 to k
        for j = 1 to C[i]
            p = p + 1
            A[p] = i
```

This variant is not stable, sorts in place, and runs in $O(n + k)$ time.
--------------------------------------------------------------------------------
/docs/Chap08/Problems/8-5.md:
--------------------------------------------------------------------------------
> Suppose that, instead of sorting an array, we just require that the elements increase on average. More precisely, we call an $n$-element array $A$ ***k-sorted*** if, for all $i = 1, 2, \ldots, n - k$, the following holds:
>
> $$\frac{\sum_{j = i}^{i + k - 1} A[j]}{k} \le \frac{\sum_{j = i + 1}^{i + k} A[j]}{k}.$$
>
> **a.** What does it mean for an array to be $1$-sorted?
>
> **b.** Give a permutation of the numbers $1, 2, \ldots, 10$ that is $2$-sorted, but not sorted.
>
> **c.** Prove that an $n$-element array is $k$-sorted if and only if $A[i] \le A[i + k]$ for all $i = 1, 2, \ldots, n - k$.
>
> **d.** Give an algorithm that $k$-sorts an $n$-element array in $O(n\lg (n / k))$ time.
>
> We can also show a lower bound on the time to produce a $k$-sorted array, when $k$ is a constant.
>
> **e.** Show that we can sort a $k$-sorted array of length $n$ in $O(n\lg k)$ time. ($\textit{Hint:}$ Use the solution to Exercise 6.5-9.)
>
> **f.** Show that when $k$ is a constant, $k$-sorting an $n$-element array requires $\Omega(n\lg n)$ time. ($\textit{Hint:}$ Use the solution to the previous part along with the lower bound on comparison sorts.)

**a.** Being $1$-sorted is the same as being sorted in the ordinary sense.

**b.** $2, 1, 4, 3, 6, 5, 8, 7, 10, 9$.

**c.** The two sums share all terms except the first of the left sum and the last of the right sum, so

\begin{align}
\frac{\sum_{j = i}^{i + k - 1} A[j]}{k} & \le \frac{\sum_{j = i + 1}^{i + k}A[j]}{k} \\\\
\sum_{j = i}^{i + k - 1} A[j] & \le \sum_{j = i + 1}^{i + k} A[j] \\\\
A[i] & \le A[i + k].
\end{align}

**d.** As in Shellsort, split the array into $k$ interleaved subsequences $\langle A[i], A[i + k], A[i + 2k], \ldots \rangle$ for $i = 1, \ldots, k$, each of length about $n / k$, and sort each one with merge sort or heapsort in $O((n / k)\lg(n / k))$ time. By part (c) the result is $k$-sorted. The total running time is $k \cdot O((n / k) \lg(n / k)) = O(n\lg(n / k))$.

**e.** Using a heap of size $k$, we can sort a $k$-sorted array of length $n$ in $O(n\lg k)$ time, just as when merging $k$ sorted lists (the solution to Exercise 6.5-9); see the sketch below.

**f.** $k$-sorting amounts to sorting each of the $k$ subsequences above, and the comparison-sort lower bound applied to them gives $\Omega(n\lg(n / k))$ in total. Since $k$ is a constant, $\Omega(n\lg(n / k)) = \Omega(n\lg n)$.
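A sketch of part (e): in a $k$-sorted array, $A[i] \le A[i + k]$, so the minimum of the remaining elements always lies among the next $k$ of them; a size-$k$ min-heap therefore fully sorts the array in $O(n\lg k)$ time (the function name is illustrative):

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// Fully sorts a k-sorted array in O(n lg k) time with a k-element min-heap.
void sortKSorted(std::vector<int>& a, int k) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    std::size_t out = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        heap.push(a[i]);
        if ((int)heap.size() > k) {  // overall minimum of the rest is in the heap
            a[out++] = heap.top();
            heap.pop();
        }
    }
    while (!heap.empty()) { a[out++] = heap.top(); heap.pop(); }
}

int main() {
    std::vector<int> a = {2, 1, 4, 3, 6, 5, 8, 7, 10, 9};  // 2-sorted, part (b)
    sortKSorted(a, 2);
    for (int v : a) std::cout << v << ' ';  // 1 2 3 4 5 6 7 8 9 10
}
```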
--------------------------------------------------------------------------------
/docs/Chap08/Problems/8-6.md:
--------------------------------------------------------------------------------
> The problem of merging two sorted lists arises frequently. We have seen a procedure for it as the subroutine $\text{MERGE}$ in Section 2.3.1. In this problem, we will prove a lower bound of $2n - 1$ on the worst-case number of comparisons required to merge two sorted lists, each containing $n$ items.
>
> First we will show a lower bound of $2n - o(n)$ comparisons by using a decision tree.
>
> **a.** Given $2n$ numbers, compute the number of possible ways to divide them into two sorted lists, each with $n$ numbers.
>
> **b.** Using a decision tree and your answer to part (a), show that any algorithm that correctly merges two sorted lists must perform at least $2n - o(n)$ comparisons.
>
> Now we will show a slightly tighter $2n - 1$ bound.
>
> **c.** Show that if two elements are consecutive in the sorted order and from different lists, then they must be compared.
>
> **d.** Use your answer to the previous part to show a lower bound of $2n - 1$ comparisons for merging two sorted lists.

**a.** There are $\binom{2n}{n}$ ways to divide $2n$ numbers into two sorted lists, each with $n$ numbers.

**b.** Based on Exercise C.1-13, a decision tree of height $h$ must have at least $\binom{2n}{n}$ leaves, so

\begin{align}
2^h & \ge \binom{2n}{n} \\\\
h & \ge \lg\frac{(2n)!}{(n!)^2} \\\\
& = \lg (2n)! - 2\lg (n!) \\\\
& = 2n - O(\lg n) \quad \text{(by Stirling's approximation)} \\\\
& = 2n - o(n).
\end{align}

**c.** Let $x$ and $y$ be consecutive in the sorted order, with $x$ from one list and $y$ from the other. Every other element is either smaller than both or larger than both, so no comparison involving a third element can settle their relative order. If the algorithm never compared $x$ with $y$, we could exchange their values (both lists would remain sorted, since no element lies strictly between them), and the algorithm would produce the same, now incorrect, output. Hence $x$ and $y$ must be compared.

**d.** Let list $A = 1, 3, 5, \ldots, 2n - 1$ and $B = 2, 4, 6, \ldots, 2n$. By part (c), we must compare $1$ with $2$, $2$ with $3$, $3$ with $4$, and so on up until we compare $2n - 1$ with $2n$. This amounts to a total of $2n - 1$ comparisons.

--------------------------------------------------------------------------------
/docs/Chap09/9.1.md:
--------------------------------------------------------------------------------
## 9.1-1

> Show that the second smallest of $n$ elements can be found with $n + \lceil \lg n \rceil - 2$ comparisons in the worst case. ($\textit{Hint:}$ Also find the smallest element.)

The smallest of $n$ numbers can be found with $n - 1$ comparisons by conducting a tournament as follows: Compare all the numbers in pairs. Only the smaller of each pair could possibly be the smallest of all $n$, so the problem has been reduced to that of finding the smallest of $\lceil n / 2 \rceil$ numbers. Compare those numbers in pairs, and so on, until there's just one number left, which is the answer.

To see that this algorithm does exactly $n - 1$ comparisons, notice that each number except the smallest loses exactly once. To show this more formally, draw a binary tree of the comparisons the algorithm does. The $n$ numbers are the leaves, and each number that came out smaller in a comparison is the parent of the two numbers that were compared. Each non-leaf node of the tree represents a comparison, and there are $n - 1$ internal nodes in an $n$-leaf full binary tree (see Exercise B.5-3), so exactly $n - 1$ comparisons are made.

In the search for the smallest number, the second smallest number must have come out smallest in every comparison made with it until it was eventually compared with the smallest. So the second smallest is among the elements that were compared with the smallest during the tournament. To find it, conduct another tournament (as above) to find the smallest of these numbers. At most $\lceil \lg n \rceil$ (the height of the tree of comparisons) elements were compared with the smallest, so finding the smallest of these takes $\lceil \lg n \rceil - 1$ comparisons in the worst case.

The total number of comparisons made in the two tournaments was

$$n - 1 + \lceil \lg n \rceil - 1 = n + \lceil \lg n \rceil - 2$$

in the worst case.
## 9.1-2 $\star$

> Prove the lower bound of $\lceil 3n / 2 \rceil - 2$ comparisons in the worst case to find both the maximum and minimum of $n$ numbers. ($\textit{Hint:}$ Consider how many numbers are potentially either the maximum or minimum, and investigate how a comparison affects these counts.)

If $n$ is odd, there are

\begin{align}
1 + \frac{3(n-3)}{2} + 2
& = \frac{3n}{2} - \frac{3}{2} \\\\
& = \bigg(\bigg\lceil \frac{3n}{2} \bigg\rceil - \frac{1}{2}\bigg) - \frac{3}{2} \\\\
& = \bigg\lceil \frac{3n}{2} \bigg\rceil - 2
\end{align}

comparisons.

If $n$ is even, there are

\begin{align}
1 + \frac{3(n - 2)}{2}
& = \frac{3n}{2} - 2 \\\\
& = \bigg\lceil \frac{3n}{2} \bigg\rceil - 2
\end{align}

comparisons.

--------------------------------------------------------------------------------
/docs/Chap09/9.2.md:
--------------------------------------------------------------------------------
## 9.2-1

> Show that $\text{RANDOMIZED-SELECT}$ never makes a recursive call to a $0$-length array.

A $0$-length recursive call could arise only on line 8 or line 9. The call on line 8 is on $A[p..q - 1]$, which has length $q - p$; this is $0$ only if $q = p$, i.e., $k = q - p + 1 = 1$. But to execute line 8 we would need $i < k = 1$, which is impossible since $i \ge 1$. The call on line 9 is on $A[q + 1..r]$, which has length $r - q$; this is $0$ only if $q = r$, i.e., $k = r - p + 1$, the size of the whole subarray. But to execute line 9 we would need $i > k$, which would mean asking for the $i$-th smallest of fewer than $i$ elements, a nonsensical original call.

## 9.2-2

> Argue that the indicator random variable $X_k$ and the value $T(\max(k - 1, n - k))$ are independent.

The probability that $X_k$ is equal to $1$ is unchanged when we know the max of $k - 1$ and $n - k$. In other words, $\Pr\\{X_k = a \mid \max(k - 1, n - k) = m\\} = \Pr\\{X_k = a\\}$ for $a = 0, 1$ and $m = k - 1, n - k$, so $X_k$ and $\max(k - 1, n - k)$ are independent.

By Exercise C.3-5, so are $X_k$ and $T(\max(k - 1, n - k))$.

## 9.2-3

> Write an iterative version of $\text{RANDOMIZED-SELECT}$.

```cpp
PARTITION(A, p, r)
    x = A[r]
    i = p - 1
    for k = p to r - 1
        if A[k] < x
            i = i + 1
            swap A[i] with A[k]
    i = i + 1
    swap A[i] with A[r]
    return i
```

```cpp
RANDOMIZED-PARTITION(A, p, r)
    x = RANDOM(p, r)
    swap A[x] with A[r]
    return PARTITION(A, p, r)
```

```cpp
RANDOMIZED-SELECT(A, p, r, i)
    while true
        if p == r
            return A[p]
        q = RANDOMIZED-PARTITION(A, p, r)
        k = q - p + 1
        if i == k
            return A[q]
        if i < k
            r = q - 1
        else
            p = q + 1
            i = i - k
```
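A runnable C++ version of the iterative procedure (a sketch using `std::mt19937`; $i$ is 1-based as in the pseudocode, and the array is taken by value so the caller's copy is untouched):

```cpp
#include <iostream>
#include <random>
#include <utility>
#include <vector>

std::mt19937 gen(std::random_device{}());

int partition(std::vector<int>& a, int p, int r) {
    int x = a[r], i = p - 1;
    for (int k = p; k < r; ++k)
        if (a[k] < x) std::swap(a[++i], a[k]);
    std::swap(a[i + 1], a[r]);
    return i + 1;
}

int randomizedPartition(std::vector<int>& a, int p, int r) {
    std::uniform_int_distribution<int> dist(p, r);
    std::swap(a[dist(gen)], a[r]);
    return partition(a, p, r);
}

// Iteratively finds the i-th smallest element of a[p..r] (i is 1-based).
int randomizedSelect(std::vector<int> a, int p, int r, int i) {
    while (p < r) {
        int q = randomizedPartition(a, p, r);
        int k = q - p + 1;
        if (i == k) return a[q];
        if (i < k) r = q - 1;
        else { p = q + 1; i -= k; }
    }
    return a[p];
}

int main() {
    std::vector<int> a = {3, 2, 9, 0, 7, 5, 4, 8, 6, 1};
    std::cout << randomizedSelect(a, 0, a.size() - 1, 4) << '\n';  // 3
}
```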
19 | 20 | ```cpp 21 | PARTITION(A, p, r) 22 |     x = A[r] 23 |     i = p - 1 24 |     for k = p to r - 1 25 |         if A[k] < x 26 |             i = i + 1 27 |             swap A[i] with A[k] 28 |     // place the pivot between the two sides of the partition 29 |     swap A[i + 1] with A[r] 30 |     return i + 1 31 | ``` 32 | 33 | ```cpp 34 | RANDOMIZED-PARTITION(A, p, r) 35 |     j = RANDOM(p, r) 36 |     swap A[j] with A[r] 37 |     return PARTITION(A, p, r) 38 | ``` 39 | 40 | ```cpp 41 | RANDOMIZED-SELECT(A, p, r, i) 42 |     while true 43 |         if p == r 44 |             return A[p] 45 |         q = RANDOMIZED-PARTITION(A, p, r) 46 |         k = q - p + 1 47 |         if i == k 48 |             return A[q] 49 |         if i < k 50 |             r = q - 1 51 |         else 52 |             p = q + 1 53 |             i = i - k 54 | ``` 55 | 56 | ## 9.2-4 57 | 58 | > Suppose we use $\text{RANDOMIZED-SELECT}$ to select the minimum element of the array $A = \langle 3, 2, 9, 0, 7, 5, 4, 8, 6, 1 \rangle$. Describe a sequence of partitions that results in a worst-case performance of $\text{RANDOMIZED-SELECT}$. 59 | 60 | When the pivot selected is always the largest element of the remaining subarray, we get worst-case performance: each partition removes only the pivot from consideration. In the example, the sequence of pivots would be $\langle 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 \rangle$. 61 | -------------------------------------------------------------------------------- /docs/Chap09/Problems/9-1.md: -------------------------------------------------------------------------------- 1 | > Given a set of $n$ numbers, we wish to find the $i$ largest in sorted order using a comparison-based algorithm. Find the algorithm that implements each of the following methods with the best asymptotic worst-case running time, and analyze the running times of the algorithms in terms of $n$ and $i$. 2 | > 3 | > **a.** Sort the numbers, and list the $i$ largest. 4 | > 5 | > **b.** Build a max-priority queue from the numbers, and call $\text{EXTRACT-MAX}$ $i$ times. 6 | > 7 | > **c.** Use an order-statistic algorithm to find the $i$th largest number, partition around that number, and sort the $i$ largest numbers. 8 | 9 | We assume that the numbers start out in an array. 10 | 11 | **a.** Sort the numbers using merge sort or heapsort, which take $\Theta(n\lg n)$ worst-case time. (Don't use quicksort or insertion sort, which can take $\Theta(n^2)$ time.) Put the $i$ largest elements (directly accessible in the sorted array) into the output array, taking $\Theta(i)$ time. 12 | 13 | Total worst-case running time: $\Theta(n\lg n + i) = \Theta(n\lg n)$ (because $i \le n$). 14 | 15 | **b.** Implement the priority queue as a heap. Build the heap using $\text{BUILD-HEAP}$, which takes $\Theta(n)$ time, then call $\text{HEAP-EXTRACT-MAX}$ $i$ times to get the $i$ largest elements, in $\Theta(i\lg n)$ worst-case time, and store them in reverse order of extraction in the output array. The worst-case extraction time is $\Theta(i\lg n)$ because 16 | 17 | - $i$ extractions from a heap with $O(n)$ elements take $i \cdot O(\lg n) = O(i\lg n)$ time, and 18 | - half of the $i$ extractions are from a heap with $\ge n / 2$ elements, so those $i / 2$ extractions take $(i / 2) \cdot \Omega(\lg(n / 2)) = \Omega(i\lg n)$ time in the worst case. 19 | 20 | Total worst-case running time: $\Theta(n + i\lg n)$. 21 | 22 | **c.** Use the $\text{SELECT}$ algorithm of Section 9.3 to find the $i$th largest number in $\Theta(n)$ time. Partition around that number in $\Theta(n)$ time. Sort the $i$ largest numbers in $\Theta(i\lg i)$ worst-case time (with merge sort or heapsort). 23 | 24 | Total worst-case running time: $\Theta(n + i\lg i)$.
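As a concrete illustration of method (c), here is a minimal C++ sketch (our own, not from the text) that substitutes the standard library's expected-linear-time `std::nth_element` for the worst-case linear-time $\text{SELECT}$ of Section 9.3:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Return the i largest elements of a in sorted order.
// nth_element partitions a so that the i largest elements occupy the last
// i positions; sorting just those positions then costs O(i lg i), for a
// total of O(n + i lg i) expected time.
std::vector<int> topILargestSorted(std::vector<int> a, std::size_t i) {
    if (i > a.size()) i = a.size();
    std::nth_element(a.begin(), a.end() - i, a.end());  // partition step
    std::vector<int> top(a.end() - i, a.end());         // the i largest
    std::sort(top.begin(), top.end());                  // O(i lg i)
    return top;
}
```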
25 | 26 | Note that method (c) is always asymptotically at least as good as the other two methods, and that method (b) is asymptotically at least as good as (a). (Comparing (c) to (b) is easy, but it is less obvious how to compare (c) and (b) to (a). (c) and (b) are asymptotically at least as good as (a) because $n$, $i\lg i$, and $i\lg n$ are all $O(n\lg n)$. The sum of two things that are $O(n\lg n)$ is also $O(n\lg n)$.) 27 | -------------------------------------------------------------------------------- /docs/Chap10/Problems/10-1.md: -------------------------------------------------------------------------------- 1 | > For each of the four types of lists in the following table, what is the asymptotic worst-case running time for each dynamic-set operation listed? 2 | > 3 | > \begin{array}{l|c|c|c|c|} 4 | > & \text{unsorted, singly linked} 5 | > & \text{sorted, singly linked} 6 | > & \text{unsorted, doubly linked} 7 | > & \text{sorted, doubly linked} \\\\ 8 | > \hline 9 | > \text{SEARCH($L, k$)} & & & & \\\\ 10 | > \hline 11 | > \text{INSERT($L, x$)} & & & & \\\\ 12 | > \hline 13 | > \text{DELETE($L, x$)} & & & & \\\\ 14 | > \hline 15 | > \text{SUCCESSOR($L, x$)} & & & & \\\\ 16 | > \hline 17 | > \text{PREDECESSOR($L, x$)} & & & & \\\\ 18 | > \hline 19 | > \text{MINIMUM($L$)} & & & & \\\\ 20 | > \hline 21 | > \text{MAXIMUM($L$)} & & & & \\\\ 22 | > \hline 23 | > \end{array} 24 | 25 | \begin{array}{l|c|c|c|c|} 26 | & \text{unsorted, singly linked} 27 | & \text{sorted, singly linked} 28 | & \text{unsorted, doubly linked} 29 | & \text{sorted, doubly linked} \\\\ 30 | \hline 31 | \text{SEARCH($L, k$)} & \Theta(n) & \Theta(n) & \Theta(n) & \Theta(n) \\\\ 32 | \hline 33 | \text{INSERT($L, x$)} & \Theta(1) & \Theta(n) & \Theta(1) & \Theta(n) \\\\ 34 | \hline 35 | \text{DELETE($L, x$)} & \Theta(n) & \Theta(n) & \Theta(1) & \Theta(1) \\\\ 36 | \hline 37 | \text{SUCCESSOR($L, x$)} & \Theta(n) & \Theta(1) & \Theta(n) & \Theta(1) \\\\ 38 | \hline 39 | \text{PREDECESSOR($L, x$)} & \Theta(n) & \Theta(n) & \Theta(n) & \Theta(1) \\\\ 40 | \hline 41 | \text{MINIMUM($L$)} & \Theta(n) & \Theta(1) & \Theta(n) & \Theta(1) \\\\ 42 | \hline 43 | \text{MAXIMUM($L$)} & \Theta(n) & \Theta(n) & \Theta(n) & \Theta(n) \\\\ 44 | \hline 45 | \end{array} 46 | -------------------------------------------------------------------------------- /docs/Chap11/11.5.md: -------------------------------------------------------------------------------- 1 | ## 11.5-1 $\star$ 2 | 3 | > Suppose that we insert $n$ keys into a hash table of size $m$ using open addressing and uniform hashing. Let $p(n, m)$ be the probability that no collisions occur. Show that $p(n, m) \le e^{-n(n - 1) / 2m}$. ($\textit{Hint:}$ See equation $\text{(3.12)}$.) Argue that when $n$ exceeds $\sqrt m$, the probability of avoiding collisions goes rapidly to zero. 4 | 5 | \begin{align} 6 | p(n, m) & = \frac{m}{m} \cdot \frac{m - 1}{m} \cdots \frac{m - n + 1}{m} \\\\ 7 | & = \frac{m \cdot (m - 1) \cdots (m - n + 1)}{m^n}. 8 | \end{align} 9 | 10 | For $1 \le i \le n - 1$, pair the factor $m - i$ with the factor $m - n + i$: 11 | 12 | \begin{align} 13 | (m - i) \cdot (m - n + i) 14 | & = (m - \frac{n}{2} + \frac{n}{2} - i) \cdot (m - \frac{n}{2} - \frac{n}{2} + i) \\\\ 15 | & = (m - \frac{n}{2})^2 - (i - \frac{n}{2})^2 \\\\ 16 | & \le (m - \frac{n}{2})^2. 17 | \end{align} 18 | 19 | Each such pair of factors is therefore at most $(m - \frac{n}{2})^2$, and the factor $m$ remains unpaired, so 20 | 21 | \begin{align} 22 | p(n, m) & \le \frac{m \cdot (m - \frac{n}{2})^{n - 1}}{m^n} \\\\ 23 | & = (1 - \frac{n}{2m}) ^ {n - 1}.
\end{align} 24 | 25 | Based on equation $\text{(3.12)}$, which states that $e^x \ge 1 + x$, we have 26 | 27 | \begin{align} 28 | p(n, m) & \le (e^{-n / 2m})^{n - 1} \\\\ 29 | & = e^{-n(n - 1) / 2m}. 30 | \end{align} 31 | 32 | When $n$ exceeds $\sqrt m$, the exponent $n(n - 1) / 2m$ exceeds roughly $1 / 2$ and keeps growing quadratically in $n$, so $p(n, m)$ goes to zero exponentially fast. 33 | -------------------------------------------------------------------------------- /docs/Chap12/Problems/12-1.md: -------------------------------------------------------------------------------- 1 | > Equal keys pose a problem for the implementation of binary search trees. 2 | > 3 | > **a.** What is the asymptotic performance of $\text{TREE-INSERT}$ when used to insert $n$ items with identical keys into an initially empty binary search tree? 4 | > 5 | > We propose to improve $\text{TREE-INSERT}$ by testing before line 5 to determine whether $z.key = x.key$ and by testing before line 11 to determine whether $z.key = y.key$. 6 | > 7 | > If equality holds, we implement one of the following strategies. For each strategy, find the asymptotic performance of inserting $n$ items with identical keys into an initially empty binary search tree. (The strategies are described for line 5, in which we compare the keys of $z$ and $x$. Substitute $y$ for $x$ to arrive at the strategies for line 11.) 8 | > 9 | > **b.** Keep a boolean flag $x.b$ at node $x$, and set $x$ to either $x.left$ or $x.right$ based on the value of $x.b$, which alternates between $\text{FALSE}$ and $\text{TRUE}$ each time we visit $x$ while inserting a node with the same key as $x$. 10 | > 11 | > **c.** Keep a list of nodes with equal keys at $x$, and insert $z$ into the list. 12 | > 13 | > **d.** Randomly set $x$ to either $x.left$ or $x.right$. (Give the worst-case performance and informally derive the expected running time.) 14 | 15 | 16 | **a.** Each insertion will add the new node to the right of the rightmost node, because the comparison on line 11 always evaluates to false for equal keys. The $i$th insertion thus walks down a path of length $i - 1$, so the runtime is $\sum_{i = 1}^n i \in \Theta(n^2)$. 17 | 18 | **b.** This strategy results in the two subtrees of each node differing in size by at most one, so the height of the tree is $\Theta(\lg n)$. The total runtime is therefore $\sum_{i = 1}^n \lg n \in \Theta(n\lg n)$. 19 | 20 | **c.** This takes only linear time: the tree itself has height $0$ (every item is stored in the list at the root), and a single insertion into a list can be done in constant time. 21 | 22 | **d.** 23 | 24 | - **Worst-case:** every random choice is to the right (or all to the left), which gives the same behavior as in part (a): $\Theta(n^2)$. 25 | - **Expected running time:** each random choice goes left or right with probability $1 / 2$, so the tree is roughly balanced and its expected depth is roughly $\lg n$; the $n$ insertions then take $\Theta(n\lg n)$ expected time. 26 | -------------------------------------------------------------------------------- /docs/Chap12/Problems/12-2.md: -------------------------------------------------------------------------------- 1 | > Given two strings $a = a_0a_1 \ldots a_p$ and $b = b_0b_1 \ldots b_q$, where each $a_i$ and each $b_j$ is in some ordered set of characters, we say that string $a$ is ***lexicographically less than*** string $b$ if either 2 | > 3 | > 1. there exists an integer $j$, where $0 \le j \le \min(p, q)$, such that $a_i = b_i$ for all $i = 0, 1, \ldots j - 1$ and $a_j < b_j$, or 4 | > 2. $p < q$ and $a_i = b_i$ for all $i = 0, 1, \ldots, p$. 5 | > 6 | > For example, if $a$ and $b$ are bit strings, then $10100 < 10110$ by rule 1 (letting $j = 3$) and $10100 < 101000$ by rule 2.
This ordering is similar to that used in English-language dictionaries. 7 | > 8 | > The ***radix tree*** data structure shown in Figure 12.5 stores the bit strings $1011, 10, 011, 100$, and $0$. When searching for a key $a = a_0a_1 \ldots a_p$, we go left at a node of depth $i$ if $a_i = 0$ and right if $a_i = 1$. Let $S$ be a set of distinct bit strings whose lengths sum to $n$. Show how to use a radix tree to sort $S$ lexicographically in $\Theta(n)$ time. For the example in Figure 12.5, the output of the sort should be the sequence $0, 011, 10, 100, 1011$. 9 | 10 | To sort the strings of $S$, we first insert them into a radix tree, and then use a preorder tree walk to extract them in lexicographically sorted order. The tree walk outputs strings only for nodes that indicate the existence of a string (i.e., those that are lightly shaded in Figure 12.5 of the text). 11 | 12 | ***Correctness:*** The preorder ordering is the correct order because: 13 | 14 | - Any node's string is a prefix of all its descendants' strings and hence belongs before them in the sorted order (rule 2). 15 | - A node's left descendants belong before its right descendants because the corresponding strings are identical up to that parent node, and in the next position the left subtree's strings have $0$ whereas the right subtree's strings have $1$ (rule 1). 16 | 17 | ***Time:*** $\Theta(n)$. 18 | 19 | - Insertion takes $\Theta(n)$ time, since the insertion of each string takes time proportional to its length (traversing a path through the tree whose length is the length of the string), and the sum of all the string lengths is $n$. 20 | - The preorder tree walk takes $O(n)$ time. It is just like $\text{INORDER-TREE-WALK}$ (it prints the current node and calls itself recursively on the left and right subtrees), so it takes time proportional to the number of nodes in the tree. The number of nodes is at most $1$ plus the sum $(n)$ of the lengths of the binary strings in the tree, because a length-$i$ string corresponds to a path through the root and $i$ other nodes, but a single node may be shared among many string paths. 21 | -------------------------------------------------------------------------------- /docs/Chap15/Problems/15-6.md: -------------------------------------------------------------------------------- 1 | > Professor Stewart is consulting for the president of a corporation that is planning a company party. The company has a hierarchical structure; that is, the supervisor relation forms a tree rooted at the president. The personnel office has ranked each employee with a conviviality rating, which is a real number. In order to make the party fun for all attendees, the president does not want both an employee and his or her immediate supervisor to attend. 2 | > 3 | > Professor Stewart is given the tree that describes the structure of the corporation, using the left-child, right-sibling representation described in Section 10.4. Each node of the tree holds, in addition to the pointers, the name of an employee and that employee's conviviality ranking. Describe an algorithm to make up a guest list that maximizes the sum of the conviviality ratings of the guests. Analyze the running time of your algorithm. 4 | 5 | The problem exhibits optimal substructure in the following way: If the root $r$ is included in an optimal solution, then we must solve the optimal subproblems rooted at the grandchildren of $r$. If $r$ is not included, then we must solve the optimal subproblems on trees rooted at the children of $r$. 
The dynamic programming algorithm to solve this problem works as follows: We make a table $C$ indexed by vertices, where $C[x]$ is the optimal conviviality of a guest list drawn from the subtree rooted at $x$, and a table $G$, where $G[x]$ is a guest list achieving $C[x]$. Let $T$ be the tree of employees. To solve the problem, we examine the guest list stored at $G[T.root]$. First solve the problem at each leaf $L$. If the conviviality ranking at $L$ is positive, $G[L] = \{L\}$ and $C[L] = L.conviv$. Otherwise $G[L] = \emptyset$ and $C[L] = 0$. Then solve the subproblem at each vertex all of whose descendants' subproblems have been solved; a post-order traversal produces the vertices in such an order. In general, for a node $x$, 6 | 7 | $$C[x] = \max\bigg(x.conviv + \sum_{y\text{ is a grandchild of } x} C[y], \sum_{y\text{ is a child of } x} C[y]\bigg),$$ 8 | where the first term corresponds to inviting $x$ (so that no child of $x$ may attend) and the second to leaving $x$ off the list. 9 | The runtime of the algorithm is $O(n)$, where $n$ is the number of vertices: the work done at a vertex is proportional to its numbers of children and grandchildren, and summed over the whole tree each of these counts is at most $n$. 10 | -------------------------------------------------------------------------------- /docs/Chap16/16.5.md: -------------------------------------------------------------------------------- 1 | ## 16.5-1 2 | 3 | > Solve the instance of the scheduling problem given in Figure 16.7, but with each penalty $w_i$ replaced by $80 - w_i$. 4 | 5 | \begin{array}{c|ccccccc} 6 | a_i & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ 7 | \hline 8 | d_i & 4 & 2 & 4 & 3 & 1 & 4 & 6 \\\\ 9 | w_i & 10 & 20 & 30 & 40 & 50 & 60 & 70 10 | \end{array} 11 | 12 | We greedily build an independent set of tasks, considering tasks in order of decreasing penalty and skipping any task whose addition would make the set dependent. So, we add tasks $7, 6, 5, 4, 3$. Scheduling task $1$ or task $2$ on time would force some more heavily penalized task to be late. So, our final schedule is $\langle 5, 3, 4, 6, 7, 1, 2 \rangle$, with a total penalty of only $w_1 + w_2 = 30$. 13 | 14 | ## 16.5-2 15 | 16 | > Show how to use property 2 of Lemma 16.12 to determine in time $O(|A|)$ whether or not a given set $A$ of tasks is independent. 17 | 18 | Create an array $B[1..|A|]$ of zeros. For each element $a \in A$, add $1$ to $B[\min(a.deadline, |A|)]$. Then scan $t = 1, 2, \ldots, |A|$, maintaining the running sum $N_t = B[1] + \cdots + B[t]$; if at any point $N_t > t$, return that the set is not independent. If the scan completes, return that the set is independent. (It suffices to check $t \le |A|$: since $N_t \le |A|$, no violation $N_t > t$ can occur for $t \ge |A|$, and replacing a deadline larger than $|A|$ by $|A|$ changes no count $N_t$ with $t < |A|$.) The total time is $O(|A|)$. 19 | -------------------------------------------------------------------------------- /docs/Chap17/17.1.md: -------------------------------------------------------------------------------- 1 | ## 17.1-1 2 | 3 | > If the set of stack operations included a $\text{MULTIPUSH}$ operation, which pushes $k$ items onto the stack, would the $O(1)$ bound on the amortized cost of stack operations continue to hold? 4 | 5 | No. A single $\text{MULTIPUSH}$ of $k$ items takes $\Theta(k)$ time, so a sequence of $n$ $\text{MULTIPUSH}$ operations takes $\Theta(nk)$ time in total, which makes the amortized cost per operation $\Theta(k)$ rather than $O(1)$. 6 | 7 | ## 17.1-2 8 | 9 | > Show that if a $\text{DECREMENT}$ operation were included in the $k$-bit counter example, $n$ operations could cost as much as $\Theta(nk)$ time. 10 | 11 | With $\text{DECREMENT}$ allowed, the argument that an expensive operation must be preceded by many cheap ones no longer holds. Start the counter at $2^{k - 1}$, whose binary representation is $100 \ldots 0$, and alternate $\text{DECREMENT}$ (which flips all $k$ bits, producing $011 \ldots 1$) with $\text{INCREMENT}$ (which flips all $k$ bits back). Every one of the $n$ operations then costs $\Theta(k)$, yielding $\Theta(nk)$.
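To make the worst case concrete, here is a small C++ simulation (ours, not from the text) of a $k$-bit counter under the alternating sequence just described, counting every bit flip:

```cpp
#include <cstdio>
#include <vector>

// Simulate a k-bit binary counter, starting at 2^(k-1) = 100...0, under an
// alternating DECREMENT/INCREMENT sequence. Every operation flips all k bits,
// so n operations cost Theta(nk) bit flips in total.
int main() {
    const int k = 16, n = 1000;
    std::vector<int> A(k, 0);
    A[k - 1] = 1;                 // counter holds 2^(k-1)
    long long flips = 0;
    auto increment = [&] {
        int i = 0;
        while (i < k && A[i] == 1) { A[i] = 0; ++flips; ++i; }  // clear trailing 1s
        if (i < k) { A[i] = 1; ++flips; }
    };
    auto decrement = [&] {
        int i = 0;
        while (i < k && A[i] == 0) { A[i] = 1; ++flips; ++i; }  // set trailing 0s
        if (i < k) { A[i] = 0; ++flips; }
    };
    for (int op = 0; op < n; ++op) {
        if (op % 2 == 0) decrement();   // 100...0 -> 011...1, k flips
        else increment();               // 011...1 -> 100...0, k flips
    }
    std::printf("%lld bit flips in %d operations (k = %d)\n", flips, n, k);
    return 0;                           // prints 16000 = n * k
}
```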
12 | 13 | ## 17.1-3 14 | 15 | > Suppose we perform a sequence of $n$ operations on a data structure in which the $i$th operation costs $i$ if $i$ is an exact power of $2$, and $1$ otherwise. Use aggregate analysis to determine the amortized cost per operation. 16 | 17 | Let $c_i =$ cost of $i$th operation. 18 | 19 | $$ 20 | c_i = 21 | \begin{cases} 22 | i & \text{if $i$ is an exact power of $2$}, \\\\ 23 | 1 & \text{otherwise}. 24 | \end{cases} 25 | $$ 26 | 27 | \begin{array}{cc} 28 | \text{Operation} & \text{Cost} \\\\ 29 | \hline 30 | 1 & 1 \\\\ 31 | 2 & 2 \\\\ 32 | 3 & 1 \\\\ 33 | 4 & 4 \\\\ 34 | 5 & 1 \\\\ 35 | 6 & 1 \\\\ 36 | 7 & 1 \\\\ 37 | 8 & 8 \\\\ 38 | 9 & 1 \\\\ 39 | 10 & 1 \\\\ 40 | \vdots & \vdots 41 | \end{array} 42 | 43 | $n$ operations cost: 44 | 45 | $$\sum_{i = 1}^n c_i \le n + \sum_{j = 0}^{\lg n} 2^j = n + (2n - 1) < 3n.$$ 46 | 47 | (Note: Ignoring floor in upper bound of $\sum 2^j$.) 48 | 49 | Average cost of operation: 50 | 51 | $$\frac{\text{Total cost}}{\text{\# operations}} < 3.$$ 52 | 53 | By aggregate analysis, the amortized cost per operation $= O(1)$. 54 | -------------------------------------------------------------------------------- /docs/Chap17/17.4.md: -------------------------------------------------------------------------------- 1 | ## 17.4-1 2 | 3 | > Suppose that we wish to implement a dynamic, open-address hash table. Why might we consider the table to be full when its load factor reaches some value $\alpha$ that is strictly less than $1$? Describe briefly how to make insertion into a dynamic, open-address hash table run in such a way that the expected value of the amortized cost per insertion is $O(1)$. Why is the expected value of the actual cost per insertion not necessarily $O(1)$ for all insertions? 4 | 5 | By Theorems 11.6–11.8, the expected cost of performing insertions and searches in an open-address hash table approaches infinity as the load factor approaches $1$; for any load factor bounded away from $1$, however, the expected time is bounded by a constant. We therefore consider the table full once its load factor reaches some $\alpha < 1$, and when an insertion finds the table full, we allocate a larger table (say, twice the size) and rehash the elements into it, just as for dynamic tables in this section; the usual doubling analysis then gives an expected amortized cost of $O(1)$ per insertion. The expected value of the actual cost may not be $O(1)$ for every insertion, because an insertion that triggers expansion must copy all of the current values into the larger table, which takes time linear in the number of elements stored. 6 | 7 | ## 17.4-2 8 | 9 | > Show that if $\alpha_{i - 1} \ge 1 / 2$ and the $i$th operation on a dynamic table is $\text{TABLE-DELETE}$, then the amortized cost of the operation with respect to the potential function $\text{(17.6)}$ is bounded above by a constant. 10 | 11 | \begin{align} 12 | \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ 13 | & = 1 + (2 \cdot num_i - size_i) - (2 \cdot (num_i + 1) - size_i) \\\\ 14 | & = -1. 15 | \end{align} 16 | 17 | ## 17.4-3 18 | 19 | > Suppose that instead of contracting a table by halving its size when its load factor drops below $1 / 4$, we contract it by multiplying its size by $2 / 3$ when its load factor drops below $1 / 3$. Using the potential function 20 | > 21 | > $$\Phi(T) = | 2 \cdot T.num - T.size |,$$ 22 | > 23 | > show that the amortized cost of a $\text{TABLE-DELETE}$ that uses this strategy is bounded above by a constant. 24 | 25 | If $1 / 3 < \alpha_i \le 1 / 2$ (so that no contraction occurs and $\Phi_i = size_i - 2 \cdot num_i$), 26 | 27 | \begin{align} 28 | \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ 29 | & = 1 + (size_i - 2 \cdot num_i) - (size_i - 2 \cdot (num_i + 1)) \\\\ 30 | & = 3.
31 | \end{align} 32 | 33 | If the $i$th operation does trigger a contraction, 34 | 35 | \begin{align} 36 | \frac{1}{3} size_{i - 1} & = num_i + 1 \\\\ 37 | size_{i - 1} & = 3 (num_i + 1) \\\\ 38 | size_{i} & = \frac{2}{3} size_{i - 1} = 2 (num_i + 1). 39 | \end{align} 40 | \begin{align} 41 | \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ 42 | & = (num_i + 1) + [2 \cdot (num_i + 1) - 2 \cdot num_i] - [3 \cdot (num_i + 1) - 2 \cdot (num_i + 1)] \\\\ 43 | & = 2. 44 | \end{align} 45 | -------------------------------------------------------------------------------- /docs/Chap18/18.1.md: -------------------------------------------------------------------------------- 1 | ## 18.1-1 2 | 3 | > Why don't we allow a minimum degree of $t = 1$? 4 | 5 | According to the definition, minimum degree $t$ means every node other than the root must have at least $t - 1$ keys, and every internal node other than the root thus has at least $t$ children. So, when $t = 1$, every node other than the root would be allowed to have $t - 1 = 0$ keys, and an internal node other than the root could have just $t = 1$ child. 6 | 7 | A node with no keys, and likewise an internal node with a single child, stores no information and provides no branching, so allowing $t = 1$ would permit degenerate trees that are no better than the trees obtained by removing such nodes. Requiring $t \ge 2$ rules them out. 8 | 9 | ## 18.1-2 10 | 11 | > For what values of $t$ is the tree of Figure 18.1 a legal B-tree? 12 | 13 | According to property 5 of B-trees, every node other than the root must have at least $t - 1$ keys and may contain at most $2t - 1$ keys. In Figure 18.1, the number of keys of each node (except the root) is either $2$ or $3$. So to make it a legal B-tree, we need to guarantee that $t - 1 \le 2 \text{ and } 2 t - 1 \ge 3$, which yields $2 \le t \le 3$. So $t$ can be $2$ or $3$. 14 | 15 | ## 18.1-3 16 | 17 | > Show all legal B-trees of minimum degree $2$ that represent $\\{1, 2, 3, 4, 5\\}$. 18 | 19 | We know that every node except the root must have at least $t - 1 = 1$ key, and every node may contain at most $2t - 1 = 3$ keys. Also remember that the leaves stay in the same depth. A single node cannot hold all $5$ keys, and a tree of height $2$ would need at least $7$ keys, so the tree has height $1$ and its root has either $1$ or $2$ keys. Thus, there are $4$ possible legal B-trees: 20 | 21 | - $$| 2 |$$ 22 | 23 | $$\swarrow \quad \searrow$$ 24 | 25 | $$| 1 | \qquad\qquad | 3, 4, 5 |$$ 26 | 27 | - $$| 3 |$$ 28 | 29 | $$\swarrow \quad \searrow$$ 30 | 31 | $$| 1, 2 | \qquad\qquad | 4, 5 |$$ 32 | 33 | - $$| 4 |$$ 34 | 35 | $$\swarrow \quad \searrow$$ 36 | 37 | $$| 1, 2, 3 | \qquad\qquad | 5 |$$ 38 | 39 | - $$| 2, 4 |$$ 40 | 41 | $$\swarrow \quad \downarrow \quad \searrow$$ 42 | 43 | $$| 1 | \qquad | 3 | \qquad | 5 |$$ 44 | 45 | ## 18.1-4 46 | 47 | > As a function of the minimum degree $t$, what is the maximum number of keys that can be stored in a B-tree of height $h$? 48 | 49 | \begin{align} 50 | n & = (1 + 2t + (2t) ^ 2 + \cdots + (2t) ^ {h}) \cdot (2t - 1) \\\\ 51 | & = (2t)^{h + 1} - 1. 52 | \end{align} 53 | 54 | ## 18.1-5 55 | 56 | > Describe the data structure that would result if each black node in a red-black tree were to absorb its red children, incorporating their children with its own. 57 | 58 | After absorbing each red node into its black parent, each black node may contain $1, 2$ ($1$ red child), or $3$ ($2$ red children) keys, and all leaves of the resulting tree have the same depth, according to property 5 of red-black trees (for each node, all paths from the node to descendant leaves contain the same number of black nodes). Therefore, a red-black tree will become a B-tree with minimum degree $t = 2$, i.e., a 2-3-4 tree. 59 | -------------------------------------------------------------------------------- /docs/Chap18/18.3.md: -------------------------------------------------------------------------------- 1 | ## 18.3-1 2 | 3 | > Show the results of deleting $C$, $P$, and $V$, in order, from the tree of Figure 18.8(f).
4 | 5 | - Figure 18.8(f) 6 | 7 | ![](../img/18.3-1-1.png) 8 | 9 | - delete $C$ 10 | 11 | ![](../img/18.3-1-2.png) 12 | 13 | - delete $P$ 14 | 15 | ![](../img/18.3-1-3.png) 16 | 17 | - delete $V$ 18 | 19 | ![](../img/18.3-1-4.png) 20 | 21 | ## 18.3-2 22 | 23 | > Write pseudocode for $\text{B-TREE-DELETE}$. 24 | 25 | The algorithm $\text{B-TREE-DELETE}(x, k)$ is a recursive procedure which deletes key $k$ from the B-tree rooted at node $x$; its full pseudocode is lengthy and is omitted here. 26 | 27 | The functions $\text{PREDECESSOR}(k, x)$ and $\text{SUCCESSOR}(k, x)$ return the predecessor and successor of $k$ in the B-tree rooted at $x$ respectively. 28 | 29 | The cases where $k$ is the last key in a node make the pseudocode especially unwieldy; for these, we simply use the left sibling as opposed to the right sibling, making the appropriate modifications to the indexing in the for-loops. 30 | -------------------------------------------------------------------------------- /docs/Chap19/19.1.md: -------------------------------------------------------------------------------- 1 | There is no exercise in this section. 2 | -------------------------------------------------------------------------------- /docs/Chap19/19.2.md: -------------------------------------------------------------------------------- 1 | ## 19.2-1 2 | 3 | > Show the Fibonacci heap that results from calling $\text{FIB-HEAP-EXTRACT-MIN}$ on the Fibonacci heap shown in Figure 19.4(m). 4 | 5 | (Omit!) 6 | -------------------------------------------------------------------------------- /docs/Chap19/19.3.md: -------------------------------------------------------------------------------- 1 | ## 19.3-1 2 | 3 | > Suppose that a root $x$ in a Fibonacci heap is marked. Explain how $x$ came to be a marked root. Argue that it doesn't matter to the analysis that $x$ is marked, even though it is not a root that was first linked to another node and then lost one child. 4 | 5 | The root $x$ became marked when, as a non-root node, it lost a child through a cut caused by a key decrease; later, $x$'s parent was removed (for example, by $\text{FIB-HEAP-EXTRACT-MIN}$, which moves the children of the extracted node to the root list without clearing their marks), leaving $x$ a marked root. The mark doesn't create the potential for any additional actual work: markedness is examined only in line 3 of $\text{CASCADING-CUT}$, which is only ever run on nodes whose parent is non-$\text{NIL}$. Since every root has $\text{NIL}$ as its parent, $\text{CASCADING-CUT}$ will never be run on this marked root. The mark does make the potential function larger than necessary, but the extra potential paid in to set it will simply never be spent. 6 | 7 | ## 19.3-2 8 | 9 | > Justify the $O(1)$ amortized time of $\text{FIB-HEAP-DECREASE-KEY}$ as an average cost per operation by using aggregate analysis. 10 | 11 | Recall that the actual cost of $\text{FIB-HEAP-DECREASE-KEY}$ is $O(c)$, where $c$ is the number of calls made to $\text{CASCADING-CUT}$. If $c_i$ is the number of calls made on the $i$th key decrease, then the total time of $n$ calls to $\text{FIB-HEAP-DECREASE-KEY}$ is $\sum_{i = 1}^n O(c_i)$. 12 | 13 | Next observe that every recursive call to $\text{CASCADING-CUT}$ moves a node to the root list, and each call does only $O(1)$ work of its own. Since no roots ever become children during the course of these calls, we must have that $\sum_{i = 1}^n c_i = O(n)$. Therefore the aggregate cost is $O(n)$, so the average, or amortized, cost is $O(1)$.
-------------------------------------------------------------------------------- /docs/Chap19/Problems/19-1.md: -------------------------------------------------------------------------------- 1 | > Professor Pisano has proposed the following variant of the $\text{FIB-HEAP-DELETE}$ procedure, claiming that it runs faster when the node being deleted is not the node pointed to by $H.min$. 2 | > 3 | > ```cpp 4 | > PISANO-DELETE(H, x) 5 | >     if x == H.min 6 | >         FIB-HEAP-EXTRACT-MIN(H) 7 | >     else y = x.p 8 | >         if y != NIL 9 | >             CUT(H, x, y) 10 | >             CASCADING-CUT(H, y) 11 | >         add x's child list to the root list of H 12 | >         remove x from the root list of H 13 | > ``` 14 | > 15 | > **a.** The professor's claim that this procedure runs faster is based partly on the assumption that line 7 can be performed in $O(1)$ actual time. What is wrong with this assumption? 16 | > 17 | > **b.** Give a good upper bound on the actual time of $\text{PISANO-DELETE}$ when $x$ is not $H.min$. Your bound should be in terms of $x.degree$ and the number $c$ of calls to the $\text{CASCADING-CUT}$ procedure. 18 | > 19 | > **c.** Suppose that we call $\text{PISANO-DELETE}(H, x)$, and let $H'$ be the Fibonacci heap that results. Assuming that node $x$ is not a root, bound the potential of $H'$ in terms of $x.degree$, $c$, $t(H)$, and $m(H)$. 20 | > 21 | > **d.** Conclude that the amortized time for $\text{PISANO-DELETE}$ is asymptotically no better than for $\text{FIB-HEAP-DELETE}$, even when $x \ne H.min$. 22 | 23 | **a.** Line 7 can take actual time proportional to the number of children that $x$ had, because each child placed in the root list needs its parent pointer updated from $x$ to $\text{NIL}$. 24 | 25 | **b.** Moving the children of $x$ takes actual time bounded by $x.degree$, since updating each child takes constant time. So, if $c$ is the number of calls to $\text{CASCADING-CUT}$, the actual cost is $O(c + x.degree)$. 26 | 27 | **c.** Each of the $c - 1$ cuts performed within $\text{CASCADING-CUT}$ removes the mark from a marked node, and the final call marks at most one node, so $m(H') \le m(H) - (c - 1) + 1$. The root list gains those $c - 1$ cut nodes plus the $x.degree$ children of $x$ (node $x$ itself is added to the root list and then removed), so $t(H') = t(H) + x.degree + c - 1$. 28 | 29 | Putting these together, and using $c \ge 1$, we get that 30 | 31 | $$\Phi(H') = t(H') + 2m(H') \le t(H) + x.degree + 2(1 + m(H)).$$ 32 | 33 | **d.** The amortized time is the actual cost $O(c + x.degree)$ plus the change in potential, which by part (c) is at most $x.degree + 3 - c$; with a suitable scaling of the potential, the terms in $c$ cancel, leaving 34 | 35 | $$O(x.degree) = O(\lg(n)),$$ 36 | 37 | which is the same asymptotic time that was required for the original deletion method. 38 | -------------------------------------------------------------------------------- /docs/Chap19/Problems/19-3.md: -------------------------------------------------------------------------------- 1 | > We wish to augment a Fibonacci heap $H$ to support two new operations without changing the amortized running time of any other Fibonacci-heap operations. 2 | > 3 | > **a.** The operation $\text{FIB-HEAP-CHANGE-KEY}(H, x, k)$ changes the key of node $x$ to the value $k$. Give an efficient implementation of $\text{FIB-HEAP-CHANGE-KEY}$, and analyze the amortized running time of your implementation for the cases in which $k$ is greater than, less than, or equal to $x.key$. 4 | > 5 | > **b.** Give an efficient implementation of $\text{FIB-HEAP-PRUNE}(H, r)$, which deletes $q = \min(r, H.n)$ nodes from $H$. You may choose any $q$ nodes to delete. Analyze the amortized running time of your implementation.
($\textit{Hint:}$ You may need to modify the data structure and potential function.) 6 | 7 | **a.** If $k \le x.key$, just run $\text{FIB-HEAP-DECREASE-KEY}$, which takes $O(1)$ amortized time. If $k > x.key$, delete $x$ and insert it again with key $k$; the delete followed by the insert takes $O(\lg n)$ amortized time. In every case the amortized running time is $O(\lg n)$. 8 | 9 | **b.** Add to the potential function a term proportional to the size of the structure. This term increases only when we do an insertion, and then only by a constant amount, so it does not raise the amortized cost of any existing operation. Along with this modification to the potential function, we also modify the heap itself by threading a doubly linked list through all of the leaf nodes. 10 | 11 | To prune, we pick any leaf node, remove it from its parent's child list, and remove it from the list of leaves, repeating this $q = \min(r, H.n)$ times. Each removal takes constant actual time, since the deletions from the linked lists take only constant time each, and the size term of the potential drops by an amount proportional to $q$, which is on the order of the actual cost of what just happened. So, the amortized time is constant. -------------------------------------------------------------------------------- /docs/Chap20/20.1.md: -------------------------------------------------------------------------------- 1 | ## 20.1-1 2 | 3 | > Modify the data structures in this section to support duplicate keys. 4 | 5 | To modify these structures to allow for multiple elements, instead of just storing a bit in each of the entries, we can store the head of a linked list representing how many elements of that value are contained in the structure, with a $\text{NIL}$ value to represent having no elements of that value. 6 | 7 | ## 20.1-2 8 | 9 | > Modify the data structures in this section to support keys that have associated satellite data. 10 | 11 | All operations will remain the same, except instead of the leaves of the tree being an array of integers, they will be an array of nodes, each of which stores $x.key$ in addition to whatever additional satellite data you wish. 12 | 13 | ## 20.1-3 14 | 15 | > Observe that, using the structures in this section, the way we find the successor and predecessor of a value $x$ does not depend on whether $x$ is in the set at the time. Show how to find the successor of $x$ in a binary search tree when $x$ is not stored in the tree. 16 | 17 | To find the successor of a given key $k$ in a binary search tree, call the procedure $\text{SUCC}(k, T.root)$, where $\text{SUCC}(k, x)$ returns the successor of $k$ in the subtree rooted at $x$ as follows: if $x = \text{NIL}$, return $\text{NIL}$; if $k < x.key$, recursively compute $\text{SUCC}(k, x.left)$ and return $x$ if that call returns $\text{NIL}$, and its result otherwise; if $k \ge x.key$, return $\text{SUCC}(k, x.right)$. Note that this will return $\text{NIL}$ if there is no entry in the tree with a larger key. 18 | 19 | ## 20.1-4 20 | 21 | > Suppose that instead of superimposing a tree of degree $\sqrt u$, we were to superimpose a tree of degree $u^{1 / k}$, where $k > 1$ is a constant. What would be the height of such a tree, and how long would each of the operations take? 22 | 23 | The new tree would have height $k$. $\text{INSERT}$ would take $O(k)$, $\text{MINIMUM}$, $\text{MAXIMUM}$, $\text{SUCCESSOR}$, $\text{PREDECESSOR}$, and $\text{DELETE}$ would take $O(ku^{1 / k})$. -------------------------------------------------------------------------------- /docs/Chap22/Problems/22-1.md: -------------------------------------------------------------------------------- 1 | > A depth-first forest classifies the edges of a graph into tree, back, forward, and cross edges.
A breadth-first tree can also be used to classify the edges reachable from the source of the search into the same four categories. 2 | > 3 | > **a.** Prove that in a breadth-first search of an undirected graph, the following properties hold: 4 | > 5 | > 1. There are no back edges and no forward edges. 6 | > 2. For each tree edge $(u, v)$, we have $v.d = u.d + 1$. 7 | > 3. For each cross edge $(u, v)$, we have $v.d = u.d$ or $v.d = u.d + 1$. 8 | > 9 | > **b.** Prove that in a breadth-first search of a directed graph, the following properties hold: 10 | > 11 | > 1. There are no forward edges. 12 | > 2. For each tree edge $(u, v)$, we have $v.d = u.d + 1$. 13 | > 3. For each cross edge $(u, v)$, we have $v.d \le u.d + 1$. 14 | > 4. For each back edge $(u, v)$, we have $0 \le v.d \le u.d$. 15 | 16 | **a.** 17 | 18 | 1. Suppose $(u, v)$ is a back edge or a forward edge in a $\text{BFS}$ of an undirected graph. Then one of $u$ and $v$, say $u$, is a proper ancestor of the other ($v$) in the breadth-first tree. Since we explore all edges of $u$ before exploring any edges of any of $u$'s descendants, we must explore the edge $(u, v)$ at the time we explore $u$. But then $(u, v)$ must be a tree edge. 19 | 2. In $\text{BFS}$, an edge $(u, v)$ is a tree edge when we set $v.\pi \leftarrow u$. But we only do so when we set $v.d \leftarrow u.d + 1$. Since neither $u.d$ nor $v.d$ ever changes thereafter, we have $v.d=u.d+1$ when $\text{BFS}$ completes. 20 | 3. Consider a cross edge $(u, v)$ where, without loss of generality, $u$ is visited before $v$. At the time we visit $u$, vertex $v$ must already be on the queue, for otherwise $(u, v)$ would be a tree edge. Because $v$ is on the queue, we have $v.d \le u.d + 1$ by Lemma 22.3. By Corollary 22.4, we have $v.d \ge u.d$. Thus, either $v.d = u.d$ or $v.d = u.d + 1$. 21 | 22 | **b.** 23 | 24 | 1. Suppose $(u, v)$ is a forward edge. Then we would have explored it while visiting $u$, and it would have been a tree edge. 25 | 2. Same as for undirected graphs. 26 | 3. For any edge $(u, v)$, whether or not it's a cross edge, we cannot have $v.d > u.d + 1$, since we visit $v$ at the latest when we explore edge $(u, v)$. Thus, $v.d \le u.d + 1$. 27 | 4. Clearly, $v.d \ge 0$ for all vertices $v$. For a back edge $(u, v)$, $v$ is an ancestor of $u$ in the breadth-first tree, which means that $v.d\le u.d$. (Note that since self-loops are considered to be back edges, we could have $u = v$.) 28 | -------------------------------------------------------------------------------- /docs/Chap22/Problems/22-4.md: -------------------------------------------------------------------------------- 1 | > Let $G = (V, E)$ be a directed graph in which each vertex $u \in V$ is labeled with a unique integer $L(U)$ from the set $\\{1, 2, \ldots, |V|\\}$. For each vertex $u \in V$, let $R(u) = \\{v \in V: u \leadsto v \\}$ be the set of vertices that are reachable from $u$. Define $\min(u)$ to be the vertex in $R(u)$ whose label is minimum, i.e., $\min(u)$ is the vertex $v$ such that $L(v) = \min \\{L(w): w \in R(u) \\}$. Give an $O(V + E)$-time algorithm that computes $\min(u)$ for all vertices $u \in V$. 2 | 3 | Compute $G^\text T$ in the usual way, so that $G^\text T$ is $G$ with its edges reversed. Then do a depth-first search on $G^\text T$ , but in the main loop of $\text{DFS}$, consider the vertices in order of increasing values of $L(v)$. If vertex $u$ is in the depth-first tree with root $v$, then $\min(u) = v$. Clearly, this algorithm takes $O(V + E)$ time. 
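The algorithm is short enough to sketch in C++ (our own illustration; the $0$-indexed vertices with $L(v) = v + 1$ are an assumption made for simplicity, not part of the problem statement):

```cpp
#include <utility>
#include <vector>

// Compute min(u) for every vertex of a digraph given as an edge list.
// Vertices are 0..n-1, labeled so that a smaller index means a smaller label.
// We DFS in G^T from vertices in increasing order; every vertex discovered
// in the search rooted at v gets min(u) = v.
std::vector<int> computeMin(int n, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> rev(n);            // adjacency lists of G^T
    for (const auto& e : edges) rev[e.second].push_back(e.first);
    std::vector<int> minv(n, -1);                    // -1 = not yet discovered
    for (int v = 0; v < n; ++v) {
        if (minv[v] != -1) continue;                 // already in an earlier tree
        std::vector<int> stack{v};                   // iterative DFS from root v
        minv[v] = v;
        while (!stack.empty()) {
            int u = stack.back();
            stack.pop_back();
            for (int w : rev[u])
                if (minv[w] == -1) { minv[w] = v; stack.push_back(w); }
        }
    }
    return minv;                                     // total time O(V + E)
}
```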
4 | 5 | To show correctness, first note that if $u$ is in the depth-first tree rooted at $v$ in $G^\text T$, then there is a path $v \leadsto u$ in $G^\text T$, and so there is a path $u \leadsto v$ in $G$. Thus, the minimum vertex label of all vertices reachable from $u$ is at most $L(v)$, or in other words, $L(v) \ge \min \\{L(w): w \in R(u)\\}$. 6 | 7 | Now suppose that $L(v) > \min \\{L(w): w \in R(u) \\}$, so that there is a vertex $w \in R(u)$ such that $L(w) < L(v)$. At the time $v.d$ that we started the depthfirst search from $v$, we would have already discovered $w$, so that $w.d < v.d$. By the parenthesis theorem, either the intervals $[v.d, v.f]$, and $[w.d, w.f]$ are disjoint and neither $v$ nor $w$ is a descendant of the other, or we have the ordering $w.d < v.d < v.f < w.f$ and $v$ is a descendant of $w$. The latter case cannot occur, since $v$ is a root in the depth-first forest (which means that $v$ cannot be a descendant of any other vertex). In the former case, since $w.d < v.d$, we must have $w.d < w.f < v.d < v.f$. In this case, since $u$ is reachable from $w$ in $G^\text T$ , we would have discovered $u$ by the time $w.f$, so that $u.d < w.f$. Since we discovered $u$ during a search that started at $v$, we have $v.d \le u.d$. Thus, $v.d \le u.d < w.f < v.d$, which is a contradiction. We conclude that no such vertex $w$ can exist. -------------------------------------------------------------------------------- /docs/Chap24/Problems/24-2.md: -------------------------------------------------------------------------------- 1 | > A $d$-dimensional box with dimensions $(x_1, x_2, \ldots, x_d)$ ***nests*** within another box with dimensions $(y_1, y_2, \ldots, y_d)$ if there exists a permutation $\pi$ on $\\{1, 2, \ldots, d\\}$ such that $x_{\pi(1)} < y_1$, $x_{\pi(2)} < y_2$, $\ldots$, $x_{\pi(d)} < y_d$. 2 | > 3 | > **a.** Argue that the nesting relation is transitive. 4 | > 5 | > **b.** Describe an efficient method to determine whether or not one $d$-dimensional box nests inside another. 6 | > 7 | > **c.** Suppose that you are given a set of $n$ $d$-dimensional boxes $\\{B_1, B_2, \ldots, B_n\\}$. Give an efficient algorithm to find the longest sequence $\langle B_{i_1}, B_{i_2}, \ldots, B_{i_k} \rangle$ of boxes such that $B_{i_j}$ nests within $B_{i_{j + 1}}$ for $j = 1, 2, \ldots, k - 1$. Express the running time of your algorithm in terms of $n$ and $d$. 8 | 9 | **a.** Consider boxes with dimensions $x = (x_1, \ldots, x_d)$, $y = (y_1, \ldots, y_d)$, and $z = (z_1, \ldots, z_d)$. Suppose there exists a permutation $\pi$ such that $x_{\pi(i)} < y_i$ for $i = 1, \ldots, d$ and there exists a permutation $\pi'$ such that $y_{\pi'(i)} < z_i$ for $i = 1, \ldots, d$, so that $x$ nests inside $y$ and $y$ nests inside $z$. Construct a permutation $\pi''$, where $\pi''(i) = \pi'(\pi(i))$. Then for $i = 1, \ldots, d$, we have $x_{\pi''(i)} = x_{\pi'(\pi(i))} < y_{\pi'(i)} < z_i$, and so $x$ nests inside $z$. 10 | 11 | **b.** Sort the dimensions of each box from longest to shortest. A box $X$ with sorted dimensions $(x_1, x_2, \ldots, x_d)$ nests inside a box $Y$ with sorted dimensions $(y_1, y_2, \ldots, y_d)$ if and only if $x_i < y_i$ for $i = 1, 2, \ldots, d$. The sorting can be done in $O(d\lg d)$ time, and the test for nesting can be done in $O(d)$ time, and so the algorithm runs in $O(d\lg d)$ time. This algorithm works because a $d$-dimensional box can be oriented so that every permutation of its dimensions is possible. 
(Experiment with a $3$-dimensional box if you are unsure of this.) 12 | 13 | **c.** Construct a dag $G = (V, E)$, where each vertex $v_i$ corresponds to box $B_i$, and $(v_i, v_j) \in E$ if and only if box $B_i$ nests inside box $B_j$. Graph $G$ is indeed a dag, because nesting is transitive and antireflexive (i.e., no box nests inside itself). The time to construct the dag is $O(dn^2 + dn\lg d)$, from comparing each of the $\binom{n}{2}$ pairs of boxes after sorting the dimensions of each. 14 | 15 | Add a supersource vertex $s$ and a supersink vertex $t$ to $G$, and add edges $(s, v_i)$ for all vertices $v_i$ with in-degree $0$ and $(v_j, t)$ for all vertices $v_j$ with out-degree $0$. Call the resulting dag $G'$. The time to do so is $O(n)$. 16 | 17 | Find a longest path from $s$ to $t$ in $G'$. (Section 24.2 discusses how to find a longest path in a dag.) This path corresponds to a longest sequence of nesting boxes. The time to find a longest path is $O(n^2)$, since $G'$ has $n + 2$ vertices and $O(n^2)$ edges. 18 | 19 | Overall, this algorithm runs in $O(dn^2 + dn\lg d)$ time. 20 | -------------------------------------------------------------------------------- /docs/Chap25/Problems/25-2.md: -------------------------------------------------------------------------------- 1 | > A graph $G = (V, E)$ is ***$\epsilon$-dense*** if $|E| = \Theta(V^{1 + \epsilon})$ for some constant $\epsilon$ in the range $0 < \epsilon \le 1$. By using $d$-ary min-heaps (see Problem 6-2) in shortest-paths algorithms on $\epsilon$-dense graphs, we can match the running times of Fibonacci-heap-based algorithms without using as complicated a data structure. 2 | > 3 | > **a.** What are the asymptotic running times for $\text{INSERT}$, $\text{EXTRACT-MIN}$, and $\text{DECREASE-KEY}$, as a function of $d$ and the number $n$ of elements in a $d$-ary min-heap? What are these running times if we choose $d = \Theta(n^\alpha)$ for some constant $0 < \alpha \le 1$? Compare these running times to the amortized costs of these operations for a Fibonacci heap. 4 | > 5 | > **b.** Show how to compute shortest paths from a single source on an $\epsilon$-dense directed graph $G = (V, E)$ with no negative-weight edges in $O(E)$ time. ($\textit{Hint:}$ Pick $d$ as a function of $\epsilon$.) 6 | > 7 | > **c.** Show how to solve the all-pairs shortest-paths problem on an $\epsilon$-dense directed graph $G = (V, E)$ with no negative-weight edges in $O(VE)$ time. 8 | > 9 | > **d.** Show how to solve the all-pairs shortest-paths problem in $O(VE)$ time on an $\epsilon$-dense directed graph $G = (V, E)$ that may have negative-weight edges but has no negative-weight cycles. 10 | 11 | **a.** 12 | 13 | - $\text{INSERT}$: $\Theta(\log_d n) = \Theta(1 / \alpha)$. 14 | - $\text{EXTRACT-MIN}$: $\Theta(d\log_d n) = \Theta(n^\alpha / \alpha)$. 15 | - $\text{DECREASE-KEY}$: $\Theta(\log_d n) = \Theta(1 / \alpha)$. 16 | 17 | For constant $\alpha$, $\text{INSERT}$ and $\text{DECREASE-KEY}$ match the $O(1)$ amortized costs of a Fibonacci heap, while $\text{EXTRACT-MIN}$'s $\Theta(n^\alpha)$ is worse than the Fibonacci heap's $O(\lg n)$ amortized cost. 18 | 19 | **b.** Run Dijkstra's algorithm with a $d$-ary min-heap, which takes $O(V \cdot d\log_d V + E\log_d V)$ time. If $d = V^\epsilon$, then 20 | 21 | \begin{align} 22 | O(V \cdot d \log_d V + E \log_d V) 23 | & = O(V^\epsilon \cdot V / \epsilon + E / \epsilon) \\\\ 24 | & = O((V^{1+\epsilon} + E) / \epsilon) \\\\ 25 | & = O((E + E) / \epsilon) \\\\ 26 | & = O(E). 27 | \end{align} 28 | 29 | **c.** Run the single-source algorithm from part (b) once from each of the $|V|$ vertices; since each run takes $O(E)$ time, the total time is $O(VE)$. 30 | 31 | **d.** Use Johnson's technique: one run of Bellman-Ford computes the reweighting in $O(VE)$ time, after which all edge weights are nonnegative and part (c) applies. The total time is again $O(VE)$.
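For part (a), the following C++ sketch (ours, not CLRS code) implements a $d$-ary min-heap whose operations have exactly the costs claimed above; a shortest-paths implementation would additionally maintain a handle array mapping vertices to their current positions so that `decreaseKey` can be given a valid index:

```cpp
#include <utility>
#include <vector>

// d-ary min-heap: parent(i) = (i - 1) / d, children of i are d*i + 1 .. d*i + d.
// The height is Theta(log_d n), so insert and decreaseKey cost O(log_d n),
// while extractMin scans up to d children per level and costs O(d log_d n).
struct DaryMinHeap {
    int d;
    std::vector<int> a;
    explicit DaryMinHeap(int degree) : d(degree) {}
    void insert(int key) { a.push_back(key); siftUp((int)a.size() - 1); }
    void decreaseKey(int i, int key) { a[i] = key; siftUp(i); }  // assumes key <= a[i]
    int extractMin() {                       // assumes the heap is nonempty
        int min = a[0];
        a[0] = a.back();
        a.pop_back();
        for (int i = 0; ; ) {
            int best = i;
            for (int c = d * i + 1; c <= d * i + d && c < (int)a.size(); ++c)
                if (a[c] < a[best]) best = c;  // smallest of <= d children
            if (best == i) break;
            std::swap(a[i], a[best]);
            i = best;
        }
        return min;
    }
private:
    void siftUp(int i) {
        while (i > 0 && a[(i - 1) / d] > a[i]) {
            std::swap(a[i], a[(i - 1) / d]);
            i = (i - 1) / d;
        }
    }
};
```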
30 | -------------------------------------------------------------------------------- /docs/Chap26/Problems/26-1.md: -------------------------------------------------------------------------------- 1 | 2 | > An $n \times n$ ***grid*** is an undirected graph consisting of $n$ rows and $n$ columns of vertices, as shown in Figure 26.11. We denote the vertex in the $i$th row and the $j$th column by $(i, j)$. All vertices in a grid have exactly four neighbors, except for the boundary vertices, which are the points $(i, j)$ for which $i = 1$, $i = n$, $j = 1$, or $j = n$. 3 | > 4 | > Given $m \le n^2$ starting points $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$ in the grid, the ***escape problem*** is to determine whether or not there are $m$ vertex-disjoint paths from the starting points to any $m$ different points on the boundary. For example, the grid in Figure 26.11(a) has an escape, but the grid in Figure 26.11(b) does not. 5 | > 6 | > **a.** Consider a flow network in which vertices, as well as edges, have capacities. That is, the total positive flow entering any given vertex is subject to a capacity constraint. Show that determining the maximum flow in a network with edge and vertex capacities can be reduced to an ordinary maximum-flow problem on a flow network of comparable size. 7 | > 8 | > **b.** Describe an efficient algorithm to solve the escape problem, and analyze its running time. 9 | 10 | **a.** This problem is identical to Exercise 26.1-7. 11 | 12 | **b.** Construct a vertex-constrained flow network from the instance of the escape problem by giving the network a vertex (each with unit capacity) for each intersection of grid lines, and a bidirectional edge with unit capacity for each pair of vertices that are adjacent in the grid. Then, put a unit-capacity edge from $s$ to each of the distinguished vertices, and a unit-capacity edge from each vertex on the boundary of the grid to $t$. A maximum flow in this network yields a solution to the escape problem: because every edge has unit capacity, each augmenting path carries one unit of flow, and the flow paths through the grid are the paths taken. The escape problem is solvable exactly when the maximum flow equals $m$: the flow cannot exceed $m$ (consider the cut that has $s$ by itself), and if the maximum flow is less than $m$, the escape problem is not solvable, because otherwise we could construct a flow of value $m$ from a list of $m$ disjoint escape paths. Applying the reduction from part (a) and, for example, Ford-Fulkerson, the resulting network has $O(n^2)$ vertices and edges, and the flow value is at most $m$, so at most $m$ augmentations, each taking $O(n^2)$ time, suffice; the total running time is $O(mn^2) = O(n^4)$. 13 | -------------------------------------------------------------------------------- /docs/Chap27/Problems/27-6.md: -------------------------------------------------------------------------------- 1 | > Just as with ordinary serial algorithms, we sometimes want to implement randomized multithreaded algorithms. This problem explores how to adapt the various performance measures in order to handle the expected behavior of such algorithms. It also asks you to design and analyze a multithreaded algorithm for randomized quicksort. 2 | > 3 | > **a.** Explain how to modify the work law $\text{(27.2)}$, span law $\text{(27.3)}$, and greedy scheduler bound $\text{(27.4)}$ to work with expectations when $T_P$, $T_1$, and $T_\infty$ are all random variables. 4 | > 5 | > **b.** Consider a randomized multithreaded algorithm for which $1\%$ of the time we have $T_1 = 10^4$ and $T_{10,000} = 1$, but for $99\%$ of the time we have $T_1 = T_{10,000} = 10^9$.
Argue that the ***speedup*** of a randomized multithreaded algorithm should be defined as $\text E[T_1]/\text E[T_P]$, rather than $\text E[T_1 / T_P]$. 6 | > 7 | > **c.** Argue that the ***parallelism*** of a randomized multithreaded algorithm should be defined as the ratio $\text E[T_1] / \text E[T_\infty]$. 8 | > 9 | > **d.** Multithread the $\text{RANDOMIZED-QUICKSORT}$ algorithm on page 179 by using nested parallelism. (Do not parallelize $\text{RANDOMIZED-PARTITION}$.) Give the pseudocode for your $\text{P-RANDOMIZED-QUICKSORT}$ algorithm. 10 | > 11 | > **e.** Analyze your multithreaded algorithm for randomized quicksort. ($\textit{Hint:}$ Review the analysis of $\text{RANDOMIZED-SELECT}$ on page 216.) 12 | 13 | **a.** Each law holds with the quantities replaced by their expectations: 14 | 15 | \begin{align} 16 | \text E[T_P] & \ge \text E[T_1] / P \\\\ 17 | \text E[T_P] & \ge \text E[T_\infty] \\\\ 18 | \text E[T_P] & \le \text E[T_1]/P + \text E[T_\infty]. 19 | \end{align} 20 | 21 | **b.** Here $\text E[T_1] = 0.01 \cdot 10^4 + 0.99 \cdot 10^9$ and $\text E[T_{10,000}] = 0.01 \cdot 1 + 0.99 \cdot 10^9$, so 22 | 23 | $$\text E[T_1] \approx \text E[T_{10,000}] \approx 9.9 \times 10^8, \quad \text E[T_1]/\text E[T_P] \approx 1.$$ 24 | 25 | $$\text E[T_1 / T_{10,000}] = 10^4 \cdot 0.01 + 0.99 = 100.99.$$ 26 | 27 | The expected running time barely improves with $10,000$ processors, which the ratio of expectations reports correctly, whereas $\text E[T_1 / T_P]$ suggests a hundredfold speedup on the strength of the rare cheap runs. So speedup should be defined as $\text E[T_1] / \text E[T_P]$. 28 | 29 | **c.** By the same reasoning as in part (b): parallelism is meant to bound the achievable speedup, and since speedup is properly a ratio of expectations, parallelism should likewise be defined as $\text E[T_1] / \text E[T_\infty]$. 30 | 31 | **d.** 32 | 33 | ```cpp 34 | P-RANDOMIZED-QUICKSORT(A, p, r) 35 |     if p < r 36 |         q = RANDOMIZED-PARTITION(A, p, r) 37 |         spawn P-RANDOMIZED-QUICKSORT(A, p, q - 1) 38 |         P-RANDOMIZED-QUICKSORT(A, q + 1, r) 39 |         sync 40 | ``` 41 | 42 | **e.** 43 | 44 | \begin{align} 45 | \text E[T_1] & = O(n\lg n) \\\\ 46 | \text E[T_\infty] & = O(n) \\\\ 47 | \text E[T_1] / \text E[T_\infty] & = O(\lg n). 48 | \end{align} 49 | 50 | The work is that of ordinary randomized quicksort. Since $\text{RANDOMIZED-PARTITION}$ is not parallelized, the span satisfies $\text E[T_\infty(n)] = \text E[T_\infty(\text{larger subarray})] + \Theta(n)$, which, as in the analysis of $\text{RANDOMIZED-SELECT}$, solves to $O(n)$. 51 | -------------------------------------------------------------------------------- /docs/Chap28/28.3.md: -------------------------------------------------------------------------------- 1 | ## 28.3-1 2 | 3 | > Prove that every diagonal element of a symmetric positive-definite matrix is positive. 4 | 5 | To see this, let $e_i$ be the vector that is all $0$s except for a $1$ in the $i$th 6 | position, and consider the quantity $e_i^T A e_i$ for each $i$. The product $Ae_i$ is simply the $i$th column of $A$, and multiplying it on the left by $e_i^T$ selects the $i$th entry of that column, so $e_i^T A e_i = A_{i, i}$. 7 | 8 | By positive-definiteness, since $e_i$ is nonzero, this quantity must be positive. Since we can do this for every $i$, every entry along the diagonal must be positive. 9 | 10 | ## 28.3-2 11 | 12 | > Let 13 | > 14 | > $$ 15 | > A = 16 | > \begin{pmatrix} 17 | > a & b \\\\ 18 | > b & c 19 | > \end{pmatrix} 20 | > $$ 21 | > 22 | > be a $2 \times 2$ symmetric positive-definite matrix. Prove that its determinant $ac - b^2$ is positive by "completing the square" in a manner similar to that used in the proof of Lemma 28.5. 23 | 24 | Let $y \ne 0$ and $x = -by / a$ (note that $a > 0$ by Exercise 28.3-1, so this is well defined). Since $A$ is positive-definite, we have 25 | 26 | \begin{align} 27 | 0 & < \begin{pmatrix} x & y \end{pmatrix} A \begin{pmatrix} x \\\\ y \end{pmatrix} \\\\ 28 | & = \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} ax + by \\\\ bx + cy \end{pmatrix} \\\\ 29 | & = ax^2 + 2bxy + cy^2 \\\\ 30 | & = cy^2 - \frac{b^2y^2}{a} \\\\ 31 | & = (c - b^2 / a)y^2. 32 | \end{align} 33 | 34 | Thus, $c - b^2 / a > 0$, which implies $ac - b^2 > 0$, since $a > 0$. 35 | 36 | ## 28.3-3 37 | 38 | > Prove that the maximum element in a symmetric positive-definite matrix lies on the diagonal. 39 | 40 | (Omit!)
41 | 42 | ## 28.3-4 43 | 44 | > Prove that the determinant of each leading submatrix of a symmetric positive-definite matrix is positive. 45 | 46 | (Omit!) 47 | 48 | ## 28.3-5 49 | 50 | > Let $A_k$ denote the $k$th leading submatrix of a symmetric positive-definite matrix $A$. Prove that $\text{det}(A_k) / \text{det}(A_{k - 1})$ is the $k$th pivot during $\text{LU}$ decomposition, where, by convention, $\text{det}(A_0) = 1$. 51 | 52 | (Omit!) 53 | 54 | ## 28.3-6 55 | 56 | > Find the function of the form 57 | > 58 | > $$F(x) = c_1 + c_2x\lg x + c_3 e^x$$ 59 | > 60 | > that is the best least-squares fit to the data points 61 | > 62 | > $$(1, 1), (2, 1), (3, 3), (4, 8).$$ 63 | 64 | (Omit!) 65 | 66 | ## 28.3-7 67 | 68 | > Show that the pseudoinverse $A^+$ satisfies the following four equations: 69 | > 70 | > \begin{align} 71 | > AA^+A & = A, \\\\ 72 | > A^+AA^+ & = A^+, \\\\ 73 | > (AA^+)^{\text T} & = AA^+, \\\\ 74 | > (A^+A)^{\text T} & = A^+A. 75 | > \end{align} 76 | 77 | (Omit!) 78 | -------------------------------------------------------------------------------- /docs/Chap28/Problems/28-1.md: -------------------------------------------------------------------------------- 1 | > Consider the tridiagonal matrix 2 | > 3 | > $$ 4 | > A = 5 | > \begin{pmatrix} 6 | > 1 & -1 & 0 & 0 & 0 \\\\ 7 | > -1 & 2 & -1 & 0 & 0 \\\\ 8 | > 0 & -1 & 2 & -1 & 0 \\\\ 9 | > 0 & 0 & -1 & 2 & -1 \\\\ 10 | > 0 & 0 & 0 & -1 & 2 11 | > \end{pmatrix}. 12 | > $$ 13 | > 14 | > **a.** Find an $\text{LU}$ decomposition of $A$. 15 | > 16 | > **b.** Solve the equation $Ax = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \end{pmatrix}^{\text T}$ by using forward and back substitution. 17 | > 18 | > **c.** Find the inverse of $A$. 19 | > 20 | > **d.** Show how, for any $n \times n$ symmetric positive-definite, tridiagonal matrix $A$ and any $n$-vector $b$, to solve the equation $Ax = b$ in $O(n)$ time by performing an $\text{LU}$ decomposition. Argue that any method based on forming $A^{-1}$ is asymptotically more expensive in the worst case. 21 | > 22 | > **e.** Show how, for any $n \times n$ nonsingular, tridiagonal matrix $A$ and any $n$-vector $b$, to solve the equation $Ax = b$ in $O(n)$ time by performing an $\text{LUP}$ decomposition. 23 | 24 | (Omit!) 25 | -------------------------------------------------------------------------------- /docs/Chap29/29.1.md: -------------------------------------------------------------------------------- 1 | ## 29.1-1 2 | 3 | > If we express the linear program in $\text{(29.24)}$–$\text{(29.28)}$ in the compact notation of $\text{(29.19)}$–$\text{(29.21)}$, what are $n$, $m$, $A$, $b$, and $c$? 4 | 5 | (Omit!) 6 | 7 | ## 29.1-2 8 | 9 | > Give three feasible solutions to the linear program in $\text{(29.24)}$–$\text{(29.28)}$. What is the objective value of each one? 10 | 11 | (Omit!) 12 | 13 | ## 29.1-3 14 | 15 | > For the slack form in $\text{(29.38)}$–$\text{(29.41)}$, what are $N$, $B$, $A$, $b$, $c$, and $v$? 16 | 17 | (Omit!) 18 | 19 | ## 29.1-4 20 | 21 | > Convert the following linear program into standard form: 22 | > 23 | > \begin{array}{lccccccc} 24 | > \text{minimize} & 2x_1 & + & 7x_2 & + & x_3 & & \\\\ 25 | > \text{subject to} & & & & & \\\\ 26 | > & x_1 & & & - & x_3 & = & 7 \\\\ 27 | > & 3x_1 & + & x_2 & & & \ge & 24 \\\\ 28 | > & & & x_2 & & & \ge & 0 \\\\ 29 | > & & & & & x_3 & \le & 0. 30 | > \end{array} 31 | 32 | (Omit!)
33 | 34 | ## 29.1-5 35 | 36 | > Convert the following linear program into slack form: 37 | > 38 | > \begin{array}{lccccccc} 39 | > \text{maximize} & 2x_1 & & & - & 6x_3 \\\\ 40 | > \text{subject to} & \\\\ 41 | > & x_1 & + & x_2 & - & x_3 & \le & 7 \\\\ 42 | > & 3x_1 & - & x_2 & & & \ge & 8 \\\\ 43 | > & -x_1 & + & 2x_2 & + & 2x_3 & \ge & 0 \\\\ 44 | > & & x_1, x_2, x_3 & & & & \ge & 0. 45 | > \end{array} 46 | > 47 | > What are the basic and nonbasic variables? 48 | 49 | (Omit!) 50 | 51 | ## 29.1-6 52 | 53 | > Show that the following linear program is infeasible: 54 | > 55 | > \begin{array}{lccccc} 56 | > \text{maximize} & 3x_1 & - & 2x_2 \\\\ 57 | > \text{subject to} & \\\\ 58 | > & x_1 & + & x_2 & \le & 2 \\\\ 59 | > & -2x_1 & - & 2x_2 & \le & -10 \\\\ 60 | > & & x_1, x_2 & & \ge & 0. 61 | > \end{array} 62 | 63 | (Omit!) 64 | 65 | ## 29.1-7 66 | 67 | > Show that the following linear program is unbounded: 68 | > 69 | > \begin{array}{lccccc} 70 | > \text{maximize} & x_1 & - & x_2 \\\\ 71 | > \text{subject to} & \\\\ 72 | > & -2x_1 & + & x_2 & \le & -1 \\\\ 73 | > & -x_1 & - & 2x_2 & \le & -2 \\\\ 74 | > & & x_1, x_2 & & \ge & 0. 75 | > \end{array} 76 | 77 | (Omit!) 78 | 79 | ## 29.1-8 80 | 81 | > Suppose that we have a general linear program with $n$ variables and $m$ constraints, and suppose that we convert it into standard form. Give an upper bound on the number of variables and constraints in the resulting linear program. 82 | 83 | (Omit!) 84 | 85 | ## 29.1-9 86 | 87 | > Give an example of a linear program for which the feasible region is not bounded, but the optimal objective value is finite. 88 | 89 | (Omit!) 90 | -------------------------------------------------------------------------------- /docs/Chap29/29.2.md: -------------------------------------------------------------------------------- 1 | ## 29.2-1 2 | 3 | > Put the single-pair shortest-path linear program from $\text{(29.44)}$–$\text{(29.46)}$ into standard form. 4 | 5 | (Omit!) 6 | 7 | ## 29.2-2 8 | 9 | > Write out explicitly the linear program corresponding to finding the shortest path from node $s$ to node $y$ in Figure 24.2(a). 10 | 11 | (Omit!) 12 | 13 | ## 29.2-3 14 | 15 | > In the single-source shortest-paths problem, we want to find the shortest-path weights from a source vertex $s$ to all vertices $v \in V$. Given a graph $G$, write a linear program for which the solution has the property that $d_v$ is the shortest-path weight from $s$ to $v$ for each vertex $v \in V$. 16 | 17 | (Omit!) 18 | 19 | ## 29.2-4 20 | 21 | > Write out explicitly the linear program corresponding to finding the maximum flow in Figure 26.1(a). 22 | 23 | (Omit!) 24 | 25 | ## 29.2-5 26 | 27 | > Rewrite the linear program for maximum flow $\text{(29.47)}$–$\text{(29.50)}$ so that it uses only $O(V + E)$ constraints. 28 | 29 | (Omit!) 30 | 31 | ## 29.2-6 32 | 33 | > Write a linear program that, given a bipartite graph $G = (V, E)$, solves the maximum-bipartite-matching problem. 34 | 35 | (Omit!) 36 | 37 | ## 29.2-7 38 | 39 | > In the ***minimum-cost multicommodity-flow problem***, we are given a directed graph $G = (V, E)$ in which each edge $(u, v) \in E$ has a nonnegative capacity $c(u, v) \ge 0$ and a cost $a(u, v)$. As in the multicommodity-flow problem, we are given $k$ different commodities, $K_1, K_2, \ldots, K_k$, where we specify commodity $i$ by the triple $K_i = (s_i, t_i, d_i)$. We define the flow $f_i$ for commodity $i$ and the aggregate flow $f_{uv}$ on edge $(u, v)$ as in the multicommodity-flow problem.
A feasible flow is one in which the aggregate flow on each edge $(u, v)$ is no more than the capacity of edge $(u, v)$. The cost of a flow is $\sum_{u, v \in V} a(u, v)f_{uv}$, and the goal is to find the feasible flow of minimum cost. Express this problem as a linear program. 40 | 41 | (Omit!) 42 | -------------------------------------------------------------------------------- /docs/Chap29/29.3.md: -------------------------------------------------------------------------------- 1 | ## 29.3-1 2 | 3 | > Complete the proof of Lemma 29.4 by showing that it must be the case that $c = c'$ and $v = v'$. 4 | 5 | (Omit!) 6 | 7 | ## 29.3-2 8 | 9 | > Show that the call to $\text{PIVOT}$ in line 12 of $\text{SIMPLEX}$ never decreases the value of $v$. 10 | 11 | (Omit!) 12 | 13 | ## 29.3-3 14 | 15 | > Prove that the slack form given to the $\text{PIVOT}$ procedure and the slack form that the procedure returns are equivalent. 16 | 17 | (Omit!) 18 | 19 | ## 29.3-4 20 | 21 | > Suppose we convert a linear program $(A, b, c)$ in standard form to slack form. Show that the basic solution is feasible if and only if $b_i \ge 0$ for $i = 1, 2, \ldots, m$. 22 | 23 | (Omit!) 24 | 25 | ## 29.3-5 26 | 27 | > Solve the following linear program using $\text{SIMPLEX}$: 28 | > 29 | > \begin{array}{lccccc} 30 | > \text{maximize} & 18x_1 & + & 12.5x_2 \\\\ 31 | > \text{subject to} & \\\\ 32 | > & x_1 & + & x_2 & \le & 20 \\\\ 33 | > & x_1 & & & \le & 12 \\\\ 34 | > & & & x_2 & \le & 16 \\\\ 35 | > & & x_1, x_2 & & \ge & 0. 36 | > \end{array} 37 | 38 | (Omit!) 39 | 40 | ## 29.3-6 41 | 42 | > Solve the following linear program using $\text{SIMPLEX}$: 43 | > 44 | > \begin{array}{lccccc} 45 | > \text{maximize} & 5x_1 & - & 3x_2 \\\\ 46 | > \text{subject to} & \\\\ 47 | > & x_1 & - & x_2 & \le & 1 \\\\ 48 | > & 2x_1 & + & x_2 & \le & 2 \\\\ 49 | > & & x_1, x_2 & & \ge & 0. 50 | > \end{array} 51 | 52 | (Omit!) 53 | 54 | ## 29.3-7 55 | 56 | > Solve the following linear program using $\text{SIMPLEX}$: 57 | > 58 | > \begin{array}{lccccc} 59 | > \text{minimize} & x_1 & + & x_2 & + & x_3 \\\\ 60 | > \text{subject to} & \\\\ 61 | > & 2x_1 & + & 7.5x_2 & + & 3x_3 & \ge & 10000 \\\\ 62 | > & 20x_1 & + & 5x_2 & + & 10x_3 & \ge & 30000 \\\\ 63 | > & & x_1, x_2, x_3 & & & & \ge & 0. 64 | > \end{array} 65 | 66 | (Omit!) 67 | 68 | ## 29.3-8 69 | 70 | > In the proof of Lemma 29.5, we argued that there are at most $\binom{m + n}{n}$ ways to choose a set $B$ of basic variables. Give an example of a linear program in which there are strictly fewer than $\binom{m + n}{n}$ ways to choose the set $B$. 71 | 72 | (Omit!) 73 | -------------------------------------------------------------------------------- /docs/Chap29/29.4.md: -------------------------------------------------------------------------------- 1 | ## 29.4-1 2 | 3 | > Formulate the dual of the linear program given in Exercise 29.3-5. 4 | 5 | (Omit!) 6 | 7 | ## 29.4-2 8 | 9 | > Suppose that we have a linear program that is not in standard form. We could produce the dual by first converting it to standard form, and then taking the dual. It would be more convenient, however, to be able to produce the dual directly. Explain how we can directly take the dual of an arbitrary linear program. 10 | 11 | (Omit!) 12 | 13 | ## 29.4-3 14 | 15 | > Write down the dual of the maximum-flow linear program, as given in lines $\text{(29.47)}$–$\text{(29.50)}$ on page 860. Explain how to interpret this formulation as a minimum-cut problem. 16 | 17 | (Omit!)
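For 29.4-3, a sketch of the minimum-cut reading of the dual (the variable names $y_{uv}$ and $z_u$ are ours, and the exact signs depend on how one orients the conservation constraints): give each capacity constraint a dual variable $y_{uv} \ge 0$ and each conservation constraint a free dual variable $z_u$, and adopt the conventions $z_s = 1$ and $z_t = 0$. The dual then reads

\begin{array}{lll}
\text{minimize} & \sum_{u, v} c(u, v) y_{uv} \\\\
\text{subject to} & y_{uv} \ge z_u - z_v & \text{for all } u, v, \\\\
& y_{uv} \ge 0 & \text{for all } u, v.
\end{array}

A $0$-$1$ feasible solution encodes a cut: taking $S = \\{u : z_u = 1\\}$ (so $s \in S$ and $t \notin S$) forces $y_{uv} = 1$ exactly on the edges from $S$ to $V - S$, and the objective equals the capacity of the cut $(S, V - S)$; an optimal dual solution is therefore a (fractional) minimum cut.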
18 | 19 | ## 29.4-4 20 | 21 | > Write down the dual of the minimum-cost-flow linear program, as given in lines $\text{(29.51)}$–$\text{(29.52)}$ on page 862. Explain how to interpret this problem in terms of graphs and flows. 22 | 23 | (Omit!) 24 | 25 | ## 29.4-5 26 | 27 | > Show that the dual of the dual of a linear program is the primal linear program. 28 | 29 | (Omit!) 30 | 31 | ## 29.4-6 32 | 33 | > Which result from Chapter 26 can be interpreted as weak duality for the maximum-flow problem? 34 | 35 | (Omit!) 36 | -------------------------------------------------------------------------------- /docs/Chap29/29.5.md: -------------------------------------------------------------------------------- 1 | ## 29.5-1 2 | 3 | > Give detailed pseudocode to implement lines 5 and 14 of $\text{INITIALIZE-SIMPLEX}$. 4 | 5 | (Omit!) 6 | 7 | ## 29.5-2 8 | 9 | > Show that when the main loop of $\text{SIMPLEX}$ is run by $\text{INITIALIZE-SIMPLEX}$, it can never return "unbounded." 10 | 11 | (Omit!) 12 | 13 | ## 29.5-3 14 | 15 | > Suppose that we are given a linear program $L$ in standard form, and suppose that for both $L$ and the dual of $L$, the basic solutions associated with the initial slack forms are feasible. Show that the optimal objective value of $L$ is $0$. 16 | 17 | (Omit!) 18 | 19 | ## 29.5-4 20 | 21 | > Suppose that we allow strict inequalities in a linear program. Show that in this case, the fundamental theorem of linear programming does not hold. 22 | 23 | (Omit!) 24 | 25 | ## 29.5-5 26 | 27 | > Solve the following linear program using $\text{SIMPLEX}$: 28 | > 29 | > \begin{array}{lccccc} 30 | > \text{maximize} & x_1 & + & 3x_2 \\\\ 31 | > \text{subject to} & \\\\ 32 | > & x_1 & - & x_2 & \le & 8 \\\\ 33 | > & -x_1 & - & x_2 & \le & -3 \\\\ 34 | > & -x_1 & + & 4x_2 & \le & 2 \\\\ 35 | > & & x_1, x_2 & & \ge & 0. 36 | > \end{array} 37 | 38 | (Omit!) 39 | 40 | ## 29.5-6 41 | 42 | > Solve the following linear program using $\text{SIMPLEX}$: 43 | > 44 | > \begin{array}{lccccc} 45 | > \text{maximize} & x_1 & - & 2x_2 \\\\ 46 | > \text{subject to} & \\\\ 47 | > & x_1 & + & 2x_2 & \le & 4 \\\\ 48 | > & -2x_1 & - & 6x_2 & \le & -12 \\\\ 49 | > & & & x_2 & \le & 1 \\\\ 50 | > & & x_1, x_2 & & \ge & 0. 51 | > \end{array} 52 | 53 | (Omit!) 54 | 55 | ## 29.5-7 56 | 57 | > Solve the following linear program using $\text{SIMPLEX}$: 58 | > 59 | > \begin{array}{lccccc} 60 | > \text{maximize} & x_1 & + & 3x_2 \\\\ 61 | > \text{subject to} & \\\\ 62 | > & -x_1 & + & x_2 & \le & -1 \\\\ 63 | > & -x_1 & - & x_2 & \le & -3 \\\\ 64 | > & -x_1 & + & 4x_2 & \le & 2 \\\\ 65 | > & & x_1, x_2 & & \ge & 0. 66 | > \end{array} 67 | 68 | (Omit!) 69 | 70 | ## 29.5-8 71 | 72 | > Solve the linear program given in $\text{(29.6)}$–$\text{(29.10)}$. 73 | 74 | (Omit!) 75 | 76 | ## 29.5-9 77 | 78 | > Consider the following $1$-variable linear program, which we call $P$: 79 | > 80 | > \begin{array}{lccc} 81 | > \text{maximize} & tx \\\\ 82 | > \text{subject to} & rx & \le & s \\\\ 83 | > & x & \ge & 0, 84 | > \end{array} 85 | > 86 | > where $r$, $s$, and $t$ are arbitrary real numbers. Let $D$ be the dual of $P$. 87 | > 88 | > State for which values of $r$, $s$, and $t$ you can assert that 89 | > 90 | > 1. Both $P$ and $D$ have optimal solutions with finite objective values. 91 | > 2. $P$ is feasible, but $D$ is infeasible. 92 | > 3. $D$ is feasible, but $P$ is infeasible. 93 | > 4. Neither $P$ nor $D$ is feasible. 94 | 95 | (Omit!)
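For 29.5-9, the dual $D$ is: minimize $sy$ subject to $ry \ge t$ and $y \ge 0$. Two observations (our own case analysis, offered as a sketch) settle all four parts: $P$ is feasible iff $s \ge 0$ or $r < 0$, and $D$ is feasible iff $t \le 0$ or $r > 0$; when both are feasible, weak duality bounds each objective, so both optima are finite. Hence:

1. Both $P$ and $D$ have finite optimal values: ($s \ge 0$ or $r < 0$) and ($t \le 0$ or $r > 0$).
2. $P$ feasible but $D$ infeasible: $r \le 0$ and $t > 0$, together with ($s \ge 0$ or $r < 0$); here $P$ is unbounded.
3. $D$ feasible but $P$ infeasible: $s < 0$ and $r \ge 0$, together with ($t \le 0$ or $r > 0$).
4. Neither feasible: $r = 0$, $s < 0$, and $t > 0$.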
96 | -------------------------------------------------------------------------------- /docs/Chap29/Problems/29-1.md: -------------------------------------------------------------------------------- 1 | > Given a set of $m$ linear inequalities on $n$ variables $x_1, x_2, \dots, x_n$, the ***linear-inequality feasibility problem*** asks whether there is a setting of the variables that simultaneously satisfies each of the inequalities. 2 | > 3 | > **a.** Show that if we have an algorithm for linear programming, we can use it to solve a linear-inequality feasibility problem. The number of variables and constraints that you use in the linear-programming problem should be polynomial in $n$ and $m$. 4 | > 5 | > **b.** Show that if we have an algorithm for the linear-inequality feasibility problem, we can use it to solve a linear-programming problem. The number of variables and linear inequalities that you use in the linear-inequality feasibility problem should be polynomial in $n$ and $m$, the number of variables and constraints in the linear program. 6 | 7 | (Omit!) 8 | -------------------------------------------------------------------------------- /docs/Chap29/Problems/29-2.md: -------------------------------------------------------------------------------- 1 | > ***Complementary slackness*** describes a relationship between the values of primal variables and dual constraints and between the values of dual variables and primal constraints. Let $\bar x$ be a feasible solution to the primal linear program given in $\text{(29.16)–(29.18)}$, and let $\bar y$ be a feasible solution to the dual linear program given in $\text{(29.83)–(29.85)}$. Complementary slackness states that the following conditions are necessary and sufficient for $\bar x$ and $\bar y$ to be optimal: 2 | > 3 | > $$\sum_{i = 1}^m a_{ij}\bar y_i = c_j \text{ or } \bar x_j = 0 \text{ for } j = 1, 2, \dots, n$$ 4 | > 5 | > and 6 | > 7 | > $$\sum_{j = 1}^n a_{ij}\bar x_j = b_i \text{ or } \bar y_i = 0 \text{ for } i = 1, 2, \dots, m.$$ 8 | > 9 | > **a.** Verify that complementary slackness holds for the linear program in lines $\text{(29.53)–(29.57)}$. 10 | > 11 | > **b.** Prove that complementary slackness holds for any primal linear program and its corresponding dual. 12 | > 13 | > **c.** Prove that a feasible solution $\bar x$ to a primal linear program given in lines $\text{(29.16)–(29.18)}$ is optimal if and only if there exist values $\bar y = (\bar y_1, \bar y_2, \dots, \bar y_m)$ such that 14 | > 15 | > 1. $\bar y$ is a feasible solution to the dual linear program given in $\text{(29.83)–(29.85)}$, 16 | > 2. $\sum_{i = 1}^m a_{ij}\bar y_i = c_j$ for all $j$ such that $\bar x_j > 0$, and 17 | > 3. $\bar y_i = 0$ for all $i$ such that $\sum_{j = 1}^n a_{ij}\bar x_j < b_i$. 18 | 19 | (Omit!) 20 | -------------------------------------------------------------------------------- /docs/Chap29/Problems/29-3.md: -------------------------------------------------------------------------------- 1 | > An ***integer linear-programming problem*** is a linear-programming problem with the additional constraint that the variables $x$ must take on integral values. Exercise 34.5-3 shows that just determining whether an integer linear program has a feasible solution is NP-hard, which means that there is no known polynomial-time algorithm for this problem. 2 | > 3 | > **a.** Show that weak duality (Lemma 29.8) holds for an integer linear program.
4 | 5 | > **b.** Show that duality (Theorem 29.10) does not always hold for an integer linear program. 6 | > 7 | > **c.** Given a primal linear program in standard form, let us define $P$ to be the optimal objective value for the primal linear program, $D$ to be the optimal objective value for its dual, $IP$ to be the optimal objective value for the integer version of the primal (that is, the primal with the added constraint that the variables take on integer values), and $ID$ to be the optimal objective value for the integer version of the dual. Assuming that both the primal integer program and the dual integer program are feasible and bounded, show that 8 | > 9 | > $$IP \le P = D \le ID.$$ 10 | 11 | (Omit!) 12 | -------------------------------------------------------------------------------- /docs/Chap29/Problems/29-4.md: -------------------------------------------------------------------------------- 1 | > Let $A$ be an $m \times n$ matrix and $c$ be an $n$-vector. Then Farkas's lemma states that exactly one of the systems 2 | > 3 | > \begin{align} 4 | > Ax & \le 0, \\\\ 5 | > c^Tx & > 0 6 | > \end{align} 7 | > 8 | > and 9 | > 10 | > \begin{align} 11 | > A^Ty & = c, \\\\ 12 | > y & \ge 0 13 | > \end{align} 14 | > 15 | > is solvable, where $x$ is an $n$-vector and $y$ is an $m$-vector. Prove Farkas's lemma. 16 | 17 | (Omit!) 18 | -------------------------------------------------------------------------------- /docs/Chap29/Problems/29-5.md: -------------------------------------------------------------------------------- 1 | > In this problem, we consider a variant of the minimum-cost-flow problem from Section 29.2 in which we are not given a demand, a source, or a sink. Instead, we are given, as before, a flow network and edge costs $a(u, v)$. A flow is feasible if it satisfies the capacity constraint on every edge and flow conservation at *every* vertex. The goal is to find, among all feasible flows, the one of minimum cost. We call this problem the ***minimum-cost-circulation problem***. 2 | > 3 | > **a.** Formulate the minimum-cost-circulation problem as a linear program. 4 | > 5 | > **b.** Suppose that for all edges $(u, v) \in E$, we have $a(u, v) > 0$. Characterize an optimal solution to the minimum-cost-circulation problem. 6 | > 7 | > **c.** Formulate the maximum-flow problem as a minimum-cost-circulation problem linear program. That is, given a maximum-flow problem instance $G = (V, E)$ with source $s$, sink $t$, and edge capacities $c$, create a minimum-cost-circulation problem by giving a (possibly different) network $G' = (V', E')$ with edge capacities $c'$ and edge costs $a'$ such that you can discern a solution to the maximum-flow problem from a solution to the minimum-cost-circulation problem. 8 | > 9 | > **d.** Formulate the single-source shortest-path problem as a minimum-cost-circulation problem linear program. 10 | 11 | (Omit!) 12 | -------------------------------------------------------------------------------- /docs/Chap30/30.1.md: -------------------------------------------------------------------------------- 1 | ## 30.1-1 2 | 3 | > Multiply the polynomials $A(x) = 7x^3 - x^2 + x - 10$ and $B(x) = 8x^3 - 6x + 3$ using equations $\text{(30.1)}$ and $\text{(30.2)}$. 4 | 5 | (Omit!)
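For 30.1-1, the arithmetic of equation $\text{(30.2)}$ works out as follows (our own computation; note $b_2 = 0$ because $B$ has no $x^2$ term):

\begin{align}
c_6 & = 7 \cdot 8 = 56, \\\\
c_5 & = (-1) \cdot 8 = -8, \\\\
c_4 & = 1 \cdot 8 + 7 \cdot (-6) = -34, \\\\
c_3 & = (-10) \cdot 8 + (-1)(-6) + 7 \cdot 3 = -53, \\\\
c_2 & = 1 \cdot (-6) + (-1) \cdot 3 = -9, \\\\
c_1 & = (-10)(-6) + 1 \cdot 3 = 63, \\\\
c_0 & = (-10) \cdot 3 = -30,
\end{align}

so $A(x)B(x) = 56x^6 - 8x^5 - 34x^4 - 53x^3 - 9x^2 + 63x - 30$.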
6 | 7 | ## 30.1-2 8 | 9 | > Another way to evaluate a polynomial $A(x)$ of degree-bound $n$ at a given point $x_0$ is to divide $A(x)$ by the polynomial $(x - x_0)$, obtaining a quotient polynomial $q(x)$ of degree-bound $n - 1$ and a remainder $r$, such that 10 | > 11 | > $$A(x) = q(x)(x - x_0) + r.$$ 12 | > 13 | > Clearly, $A(x_0) = r$. Show how to compute the remainder $r$ and the coefficients of $q(x)$ in time $\Theta(n)$ from $x_0$ and the coefficients of $A$. 14 | 15 | (Omit!) 16 | 17 | ## 30.1-3 18 | 19 | > Derive a point-value representation for $A^\text{rev}(x) = \sum_{j = 0}^{n - 1} a_{n - 1 - j}x^j$ from a point-value representation for $A(x) = \sum_{j = 0}^{n - 1} a_jx^j$, assuming that none of the points is $0$. 20 | 21 | (Omit!) 22 | 23 | ## 30.1-4 24 | 25 | > Prove that $n$ distinct point-value pairs are necessary to uniquely specify a polynomial of degree-bound $n$, that is, if fewer than $n$ distinct point-value pairs are given, they fail to specify a unique polynomial of degree-bound $n$. ($\textit{Hint:}$ Using Theorem 30.1, what can you say about a set of $n - 1$ point-value pairs to which you add one more arbitrarily chosen point-value pair?) 26 | 27 | (Omit!) 28 | 29 | ## 30.1-5 30 | 31 | > Show how to use equation $\text{(30.5)}$ to interpolate in time $\Theta(n^2)$. ($\textit{Hint:}$ First compute the coefficient representation of the polynomial $\prod_j (x - x_j)$ and then divide by $(x - x_k)$ as necessary for the numerator of each term; see Exercise 30.1-2. You can compute each of the $n$ denominators in time $O(n)$.) 32 | 33 | (Omit!) 34 | 35 | ## 30.1-6 36 | 37 | > Explain what is wrong with the "obvious" approach to polynomial division using a point-value representation, i.e., dividing the corresponding $y$ values. Discuss separately the case in which the division comes out exactly and the case in which it doesn't. 38 | 39 | (Omit!) 40 | 41 | ## 30.1-7 42 | 43 | > Consider two sets $A$ and $B$, each having $n$ integers in the range from $0$ to $10n$. We wish to compute the ***Cartesian sum*** of $A$ and $B$, defined by 44 | > 45 | > $$C = \\{x + y: x \in A \text{ and } y \in B\\}.$$ 46 | > 47 | > Note that the integers in $C$ are in the range from $0$ to $20n$. We want to find the elements of $C$ and the number of times each element of $C$ is realized as a sum of elements in $A$ and $B$. Show how to solve the problem in $O(n\lg n)$ time. ($\textit{Hint:}$ Represent $A$ and $B$ as polynomials of degree at most $10n$.) 48 | 49 | (Omit!) 50 | -------------------------------------------------------------------------------- /docs/Chap30/30.2.md: -------------------------------------------------------------------------------- 1 | ## 30.2-1 2 | 3 | > Prove Corollary 30.4. 4 | 5 | (Omit!) 6 | 7 | ## 30.2-2 8 | 9 | > Compute the $\text{DFT}$ of the vector $(0, 1, 2, 3)$. 10 | 11 | (Omit!) 12 | 13 | ## 30.2-3 14 | 15 | > Do Exercise 30.1-1 by using the $\Theta(n\lg n)$-time scheme. 16 | 17 | (Omit!) 18 | 19 | ## 30.2-4 20 | 21 | > Write pseudocode to compute $\text{DFT}_n^{-1}$ in $\Theta(n\lg n)$ time. 22 | 23 | (Omit!) 24 | 25 | ## 30.2-5 26 | 27 | > Describe the generalization of the $\text{FFT}$ procedure to the case in which $n$ is a power of $3$. Give a recurrence for the running time, and solve the recurrence. 28 | 29 | (Omit!) 
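A sketch for 30.2-5 (our outline): when $n$ is a power of $3$, split $A(x)$ into three residue polynomials by index modulo $3$, so that $A(x) = A^{[0]}(x^3) + xA^{[1]}(x^3) + x^2A^{[2]}(x^3)$, recurse on each at the $(n / 3)$th roots of unity, and combine each triple of results with powers of $\omega_n$ (a "3-butterfly"). The running time satisfies

$$T(n) = 3T(n / 3) + \Theta(n) = \Theta(n\lg n)$$

by case 2 of the master theorem.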
30 | 31 | ## 30.2-6 $\star$ 32 | 33 | > Suppose that instead of performing an $n$-element $\text{FFT}$ over the field of complex numbers (where $n$ is even), we use the ring $\mathbb Z_m$ of integers modulo $m$, where $m = 2^{tn / 2} + 1$ and $t$ is an arbitrary positive integer. Use $\omega = 2^t$ instead of $\omega_n$ as a principal nth root of unity, modulo $m$. Prove that the $\text{DFT}$ and the inverse $\text{DFT}$ are well defined in this system. 34 | 35 | (Omit!) 36 | 37 | ## 30.2-7 38 | 39 | > Given a list of values $z_0, z_1, \dots, z_{n - 1}$ (possibly with repetitions), show how to find the coefficients of a polynomial $P(x)$ of degree-bound $n + 1$ that has zeros only at $z_0, z_1, \dots, z_{n - 1}$ (possibly with repetitions). Your procedure should run in time $O(n\lg^2 n)$. ($\textit{Hint:}$ The polynomial $P(x)$ has a zero at $z_j$ if and only if $P(x)$ is a multiple of $(x - z_j)$.) 40 | 41 | (Omit!) 42 | 43 | ## 30.2-8 $\star$ 44 | 45 | > The ***chirp transform*** of a vector $a = (a_0, a_1, \dots, a_{n - 1})$ is the vector $y = (y_0, y_1, \dots, y_{n - 1})$, where $y_k = \sum_{j = 0}^{n - 1} a_jz^{kj}$ and $z$ is any complex number. The $\text{DFT}$ is therefore a special case of the chirp transform, obtained by taking $z = \omega_n$. Show how to evaluate the chirp transform in time $O(n\lg n)$ for any complex number $z$. ($\textit{Hint:}$ Use the equation 46 | > 47 | > $$y_k = z^{k^2 / 2} \sum_{j = 0}^{n - 1} \Big(a_jz^{j^2 / 2}\Big) \Big(z^{-(k - j)^2 / 2}\Big)$$ 48 | > 49 | > to view the chirp transform as a convolution.) 50 | 51 | (Omit!) 52 | -------------------------------------------------------------------------------- /docs/Chap30/30.3.md: -------------------------------------------------------------------------------- 1 | ## 30.3-1 2 | 3 | > Show how $\text{ITERATIVE-FFT}$ computes the $\text{DFT}$ of the input vector $(0, 2, 3, -1, 4, 5, 7, 9)$. 4 | 5 | (Omit!) 6 | 7 | ## 30.3-2 8 | 9 | > Show how to implement an $\text{FFT}$ algorithm with the bit-reversal permutation occurring at the end, rather than at the beginning, of the computation. ($\textit{Hint:}$ Consider the inverse $\text{DFT}$.) 10 | 11 | (Omit!) 12 | 13 | ## 30.3-3 14 | 15 | > How many times does $\text{ITERATIVE-FFT}$ compute twiddle factors in each stage? Rewrite $\text{ITERATIVE-FFT}$ to compute twiddle factors only $2^{s - 1}$ times in stage $s$. 16 | 17 | (Omit!) 18 | 19 | ## 30.3-4 $\star$ 20 | 21 | > Suppose that the adders within the butterfly operations of the $\text{FFT}$ circuit sometimes fail in such a manner that they always produce a zero output, independent of their inputs. Suppose that exactly one adder has failed, but that you don't know which one. Describe how you can identify the failed adder by supplying inputs to the overall $\text{FFT}$ circuit and observing the outputs. How efficient is your method? 22 | 23 | (Omit!) 24 | -------------------------------------------------------------------------------- /docs/Chap30/Problems/30-1.md: -------------------------------------------------------------------------------- 1 | > **a.** Show how to multiply two linear polynomials $ax + b$ and $cx + d$ using only three multiplications. ($\textit{Hint:}$ One of the multiplications is $(a + b) \cdot (c + d)$.) 2 | > 3 | > **b.** Give two divide-and-conquer algorithms for multiplying two polynomials of degree-bound $n$ in $\Theta(n^{\lg 3})$ time. 
The first algorithm should divide the input polynomial coefficients into a high half and a low half, and the second algorithm should divide them according to whether their index is odd or even. 4 | > 5 | > **c.** Show how to multiply two $n$-bit integers in $O(n^{\lg 3})$ steps, where each step operates on at most a constant number of $1$-bit values. 6 | 7 | (Omit!) 8 | -------------------------------------------------------------------------------- /docs/Chap30/Problems/30-2.md: -------------------------------------------------------------------------------- 1 | > A ***Toeplitz matrix*** is an $n \times n$ matrix $A = (a_{ij})$ such that $a_{ij} = a_{i - 1, j - 1}$ for $i = 2, 3, \dots, n$ and $j = 2, 3, \dots, n$. 2 | > 3 | > **a.** Is the sum of two Toeplitz matrices necessarily Toeplitz? What about the product? 4 | > 5 | > **b.** Describe how to represent a Toeplitz matrix so that you can add two $n \times n$ Toeplitz matrices in $O(n)$ time. 6 | > 7 | > **c.** Give an $O(n\lg n)$-time algorithm for multiplying an $n \times n$ Toeplitz matrix by a vector of length $n$. Use your representation from part (b). 8 | > 9 | > **d.** Give an efficient algorithm for multiplying two $n \times n$ Toeplitz matrices. Analyze its running time. 10 | 11 | (Omit!) 12 | -------------------------------------------------------------------------------- /docs/Chap30/Problems/30-3.md: -------------------------------------------------------------------------------- 1 | > We can generalize the $1$-dimensional discrete Fourier transform defined by equation $\text{(30.8)}$ to $d$ dimensions. The input is a $d$-dimensional array $A = (a_{j_1, j_2, \dots, j_d})$ whose dimensions are $n_1, n_2, \dots, n_d$, where $n_1n_2 \cdots n_d = n$. We define the $d$-dimensional discrete Fourier transform by the equation 2 | > 3 | > $$y_{k_1, k_2, \dots, k_d} = \sum_{j_1 = 0}^{n_1 - 1} \sum_{j_2 = 0}^{n_2 - 1} \cdots \sum_{j_d = 0}^{n_d - 1} a_{j_1, j_2, \cdots, j_d} \omega_{n_1}^{j_1k_1}\omega_{n_2}^{j_2k_2} \cdots \omega_{n_d}^{j_dk_d}$$ 4 | > 5 | > for $0 \le k_1 < n_1, 0 \le k_2 < n_2, \dots, 0 \le k_d < n_d$. 6 | > 7 | > **a.** Show that we can compute a $d$-dimensional $\text{DFT}$ by computing $1$-dimensional $\text{DFT}$s on each dimension in turn. That is, we first compute $n / n_1$ separate $1$-dimensional $\text{DFT}$s along dimension $1$. Then, using the result of the $\text{DFT}$s along dimension $1$ as the input, we compute $n / n_2$ separate $1$-dimensional $\text{DFT}$s along dimension $2$. Using this result as the input, we compute $n / n_3$ separate $1$-dimensional $\text{DFT}$s along dimension $3$, and so on, through dimension $d$. 8 | > 9 | > **b.** Show that the ordering of dimensions does not matter, so that we can compute a $d$-dimensional $\text{DFT}$ by computing the $1$-dimensional $\text{DFT}$s in any order of the $d$ dimensions. 10 | > 11 | > **c.** Show that if we compute each $1$-dimensional $\text{DFT}$ by computing the fast Fourier transform, the total time to compute a $d$-dimensional $\text{DFT}$ is $O(n\lg n)$, independent of $d$. 12 | 13 | (Omit!) 
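For intuition on part (a) of 30-3, writing out the case $d = 2$ (our own regrouping of the defining sum):

$$y_{k_1, k_2} = \sum_{j_1 = 0}^{n_1 - 1} \omega_{n_1}^{j_1k_1} \Bigg( \sum_{j_2 = 0}^{n_2 - 1} a_{j_1, j_2} \omega_{n_2}^{j_2k_2} \Bigg),$$

so the inner sums are $n_1 = n / n_2$ independent $1$-dimensional $\text{DFT}$s along dimension $2$ (one per fixed $j_1$), and the outer sums are then $n_2 = n / n_1$ independent $1$-dimensional $\text{DFT}$s along dimension $1$ (one per fixed $k_2$).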
14 | -------------------------------------------------------------------------------- /docs/Chap30/Problems/30-4.md: -------------------------------------------------------------------------------- 1 | > Given a polynomial $A(x)$ of degree-bound $n$, we define its $t$th derivative by 2 | > 3 | > $$ 4 | > A^{(t)}(x) = 5 | > \begin{cases} 6 | > A(x) & \text{ if } t = 0, \\\\ 7 | > \frac{d}{dx} A^{(t - 1)}(x) & \text{ if } 1 \le t \le n - 1, \\\\ 8 | > 0 & \text{ if } t \ge n. 9 | > \end{cases} 10 | > $$ 11 | > 12 | > From the coefficient representation $(a_0, a_1, \dots, a_{n - 1})$ of $A(x)$ and a given point $x_0$, we wish to determine $A^{(t)}(x_0)$ for $t = 0, 1, \dots, n- 1$. 13 | > 14 | > **a.** Given coefficients $b_0, b_1, \dots, b_{n - 1}$ such that 15 | > 16 | > $$A(x) = \sum_{j = 0}^{n - 1} b_j(x - x_0)^j,$$ 17 | > 18 | > show how to compute $A^{(t)}(x_0)$ for $t = 0, 1, \dots, n - 1$, in $O(n)$ time. 19 | > 20 | > **b.** Explain how to find $b_0, b_1, \dots, b_{n - 1}$ in $O(n\lg n)$ time, given $A(x_0 + \omega_n^k)$ for $k = 0, 1, \dots, n - 1$. 21 | > 22 | > **c.** Prove that 23 | > 24 | > $$A(x_0 + \omega_n^k) = \sum_{r = 0}^{n - 1} \Bigg(\frac{\omega_n^{kr}}{r!} \sum_{j = 0}^{n - 1} f(j)g(r - j)\Bigg),$$ 25 | > 26 | > where $f(j) = a_j \cdot j!$ and 27 | > 28 | > $$ 29 | > g(l) = 30 | > \begin{cases} 31 | > x_0^{-l} / (-l)! & \text{ if } -(n - 1) \le l \le 0, \\\\ 32 | > 0 & \text{ if } 1 \le l \le n - 1. 33 | > \end{cases} 34 | > $$ 35 | > 36 | > **d.** Explain how to evaluate $A(x_0 + \omega_n^k)$ for $k = 0, 1, \dots, n - 1$ in $O(n\lg n)$ time. Conclude that we can evaluate all nontrivial derivatives of $A(x)$ at $x_0$ in $O(n\lg n)$ time. 37 | 38 | (Omit!) 39 | -------------------------------------------------------------------------------- /docs/Chap30/Problems/30-5.md: -------------------------------------------------------------------------------- 1 | > We have seen how to evaluate a polynomial of degree-bound $n$ at a single point in $O(n)$ time using Horner's rule. We have also discovered how to evaluate such a polynomial at all $n$ complex roots of unity in $O(n\lg n)$ time using the $\text{FFT}$. We shall now show how to evaluate a polynomial of degree-bound $n$ at $n$ arbitrary points in $O(n\lg^2 n)$ time. 2 | > 3 | > To do so, we shall assume that we can compute the polynomial remainder when one such polynomial is divided by another in $O(n\lg n)$ time, a result that we state without proof. For example, the remainder of $3x^3 + x^2 - 3x + 1$ when divided by $x^2 + x + 2$ is 4 | > 5 | > $$(3x^3 + x^2 - 3x + 1) \mod (x^2 + x + 2) = -7x + 5.$$ 6 | > 7 | > Given the coefficient representation of a polynomial $A(x) = \sum_{k = 0}^{n - 1} a_kx^k$ and $n$ points $x_0, x_1, \dots, x_{n - 1}$, we wish to compute the $n$ values $A(x_0), A(x_1), \dots, A(x_{n - 1})$. For $0 \le i \le j \le n - 1$, define the polynomials $P_{ij}(x) = \prod_{k = i}^j (x - x_k)$ and $Q_{ij}(x) = A(x) \mod P_{ij}(x)$. Note that $Q_{ij}(x)$ has degree at most $j - i$. 8 | > 9 | > **a.** Prove that $A(x) \mod (x - z) = A(z)$ for any point $z$. 10 | > 11 | > **b.** Prove that $Q_{kk}(x) = A(x_k)$ and that $Q_{0, n - 1}(x) = A(x)$. 12 | > 13 | > **c.** Prove that for $i \le k \le j$, we have $Q_{ik}(x) = Q_{ij}(x) \mod P_{ik}(x)$ and $Q_{kj}(x) = Q_{ij}(x) \mod P_{kj}(x)$. 14 | > 15 | > **d.** Give an $O(n\lg^2 n)$-time algorithm to evaluate $A(x_0), A(x_1), \dots, A(x_{n - 1})$. 16 | 17 | (Omit!) 
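Part (a) of 30-5 is the division argument spelled out: writing $A(x) = q(x)(x - z) + r$, the remainder $r$ has degree less than $\deg(x - z) = 1$ and so is a constant; substituting $x = z$ gives $A(z) = q(z) \cdot 0 + r = r$, that is, $A(x) \mod (x - z) = A(z)$.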
18 | -------------------------------------------------------------------------------- /docs/Chap30/Problems/30-6.md: -------------------------------------------------------------------------------- 1 | > As defined, the discrete Fourier transform requires us to compute with complex numbers, which can result in a loss of precision due to round-off errors. For some problems, the answer is known to contain only integers, and by using a variant of the $\text{FFT}$ based on modular arithmetic, we can guarantee that the answer is calculated exactly. An example of such a problem is that of multiplying two polynomials with integer coefficients. Exercise 30.2-6 gives one approach, using a modulus of length $\Omega(n)$ bits to handle a $\text{DFT}$ on $n$ points. This problem gives another approach, which uses a modulus of the more reasonable length $O(\lg n)$; it requires that you understand the material of Chapter 31. Let $n$ be a power of $2$. 2 | > 3 | > **a.** Suppose that we search for the smallest $k$ such that $p = kn + 1$ is prime. Give a simple heuristic argument why we might expect $k$ to be approximately $\ln n$. (The value of $k$ might be much larger or smaller, but we can reasonably expect to examine $O(\lg n)$ candidate values of $k$ on average.) How does the expected length of $p$ compare to the length of $n$? 4 | > 5 | > Let $g$ be a generator of $\mathbb Z_p^\*$, and let $w = g^k \mod p$. 6 | > 7 | > **b.** Argue that the $\text{DFT}$ and the inverse $\text{DFT}$ are well-defined inverse operations modulo $p$, where $w$ is used as a principal $n$th root of unity. 8 | > 9 | > **c.** Show how to make the $\text{FFT}$ and its inverse work modulo $p$ in time $O(n\lg n)$, where operations on words of $O(\lg n)$ bits take unit time. Assume that the algorithm is given $p$ and $w$. 10 | > 11 | > **d.** Compute the $\text{DFT}$ modulo $p = 17$ of the vector $(0, 5, 3, 7, 7, 2, 1, 6)$. Note that $g = 3$ is a generator of $\mathbb Z_{17}^\*$. 12 | 13 | (Omit!) 14 | -------------------------------------------------------------------------------- /docs/Chap31/31.3.md: -------------------------------------------------------------------------------- 1 | ## 31.3-1 2 | 3 | > Draw the group operation tables for the groups $(\mathbb Z_4, +_4)$ and $(\mathbb Z_5^\*, \cdot_5)$. Show that these groups are isomorphic by exhibiting a one-to-one correspondence $\alpha$ between their elements such that $a + b \equiv c (\mod 4)$ if and only if $\alpha(a) \cdot \alpha(b) \equiv \alpha(c) (\mod 5)$. 4 | 5 | - $(\mathbb Z_4, +_4)$: $\{ 0, 1, 2, 3 \}$. 6 | - $(\mathbb Z_5^\*, \cdot_5)$: $\{ 1, 2, 3, 4 \}$. 7 | 8 | $\alpha(x) = 2^x$, i.e., $\alpha(0) = 1$, $\alpha(1) = 2$, $\alpha(2) = 4$, $\alpha(3) = 3$: since $2^4 \equiv 1 (\mod 5)$, we have $\alpha(a) \cdot \alpha(b) = 2^{a + b} \equiv \alpha(a +_4 b) (\mod 5)$. 9 | 10 | ## 31.3-2 11 | 12 | > List all subgroups of $\mathbb Z_9$ and of $\mathbb Z_{13}^\*$. 13 | 14 | - $\mathbb Z_9$: 15 | 16 | - $\langle 0 \rangle = \{ 0 \}$, 17 | - $\langle 3 \rangle = \{ 0, 3, 6 \}$, 18 | - $\langle 1 \rangle = \{ 0, 1, 2, 3, 4, 5, 6, 7, 8 \}$. 19 | 20 | - $\mathbb Z_{13}^\*$: 21 | - $\langle 1 \rangle = \{ 1 \}$, 22 | - $\langle 12 \rangle = \{ 1, 12 \}$, 23 | - $\langle 3 \rangle = \{ 1, 3, 9 \}$, 24 | - $\langle 5 \rangle = \{ 1, 5, 8, 12 \}$, 25 | - $\langle 4 \rangle = \{ 1, 3, 4, 9, 10, 12 \}$, 26 | - $\langle 2 \rangle = \{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 \}$. 27 | 28 | ## 31.3-3 29 | 30 | > Prove Theorem 31.14. 31 | 32 | A nonempty closed subset of a finite group is a subgroup. 33 | 34 | - Closure: the subset is closed by assumption. 35 | - Identity: suppose $a \in S'$; then $a^{(k)} \in S'$ for all $k \ge 1$. Since the subset is finite, there exist $m > n$ with $a^{(m)} = a^{(n)}$; cancelling $a^{(n)}$ in the ambient group gives $a^{(m - n)} = 1$, so the subset contains the identity. 36 | - Associativity: inherited from the ambient group.
37 | - Inverses: suppose $a^{(k)} = 1$; then the inverse of $a$ is $a^{(k - 1)}$, since $aa^{(k - 1)} = a^{(k)} = 1$. 38 | 39 | ## 31.3-4 40 | 41 | > Show that if $p$ is prime and $e$ is a positive integer, then 42 | > 43 | > $\phi(p^e) = p^{e - 1}(p - 1)$. 44 | 45 | $\phi(p^e) = p^e \cdot \left ( 1 - \frac{1}{p} \right ) = p^{e - 1}(p - 1)$. 46 | 47 | ## 31.3-5 48 | 49 | > Show that for any integer $n > 1$ and for any $a \in \mathbb Z_n^\*$, the function $f_a : \mathbb Z_n^\* \rightarrow \mathbb Z_n^\*$ defined by $f_a(x) = ax \mod n$ is a permutation of $\mathbb Z_n^\*$. 50 | 51 | To prove it is a permutation, we need to prove that 52 | - for each element $x \in \mathbb Z_n^\*$, $f_a(x) \in \mathbb Z_n^\*$, 53 | - the numbers generated by $f_a$ are distinct. 54 | 55 | Since $a \in \mathbb Z_{n}^\*$ and $x \in \mathbb Z_n^\*$, then $f_a(x) = ax \mod n \in \mathbb Z_n^\*$ by the closure property. 56 | 57 | Suppose there are two distinct numbers $x \in \mathbb Z_n^\*$ and $y \in \mathbb Z_n^\*$ such that $f_a(x) = f_a(y)$. Since $a \in \mathbb Z_n^\*$, $a$ has an inverse $a^{-1}$ modulo $n$, so 58 | 59 | \begin{align} 60 | ax & \equiv ay (\mod n), \\\\ 61 | a^{-1}ax & \equiv a^{-1}ay (\mod n), \\\\ 62 | x & \equiv y (\mod n), 63 | \end{align} 64 | 65 | which contradicts the assumption; therefore the numbers generated by $f_a$ are distinct. 66 | -------------------------------------------------------------------------------- /docs/Chap31/31.4.md: -------------------------------------------------------------------------------- 1 | ## 31.4-1 2 | 3 | > Find all solutions to the equation $35x \equiv 10 (\mod 50)$. 4 | 5 | $\\{6, 16, 26, 36, 46\\}$. 6 | 7 | ## 31.4-2 8 | 9 | > Prove that the equation $ax \equiv ay (\mod n)$ implies $x \equiv y (\mod n)$ whenever $\gcd(a, n) = 1$. Show that the condition $\gcd(a, n) = 1$ is necessary by supplying a counterexample with $\gcd(a, n) > 1$. 10 | 11 | Since $\gcd(a, n) = 1$, the extended Euclidean algorithm gives $x'$ and $y'$ with $ax' + ny' = 1$, so $x' \equiv a^{-1} (\mod n)$. Multiplying both sides of $ax \equiv ay (\mod n)$ by $x'$ yields $x \equiv y (\mod n)$. 12 | 13 | The condition is necessary: for $a = 2$ and $n = 4$ we have $\gcd(a, n) = 2$ and $2 \cdot 1 \equiv 2 \cdot 3 (\mod 4)$, yet $1 \not\equiv 3 (\mod 4)$. 14 | 15 | ## 31.4-3 16 | 17 | > Consider the following change to line 3 of the procedure $\text{MODULAR-LINEAR-EQUATION-SOLVER}$: 18 | > 19 | > ```cpp 20 | > 3 x0 = x'(b / d) mod (n / d) 21 | > ``` 22 | > 23 | > Will this work? Explain why or why not. 24 | 25 | Yes. The $d$ solutions are $x_0 + i(n / d) \mod n$ for $i = 0, 1, \ldots, d - 1$, and reducing $x_0$ modulo $n / d$ changes it only by a multiple of $n / d$, so the same set of solutions is produced. Moreover, the new $x_0$ is the smallest solution: if we had $x_0 \ge n / d$, the largest solution would be $x_0 + (d - 1)(n / d) \ge n$, which is impossible. 26 | 27 | ## 31.4-4 $\star$ 28 | 29 | > Let $p$ be prime and $f(x) \equiv f_0 + f_1 x + \cdots + f_tx^t (\mod p)$ be a polynomial of degree $t$, with coefficients $f_i$ drawn from $\mathbb Z_p$. We say that $a \in \mathbb Z_p$ is a ***zero*** of $f$ if $f(a) \equiv 0 (\mod p)$. Prove that if $a$ is a zero of $f$, then $f(x) \equiv (x - a) g(x) (\mod p)$ for some polynomial $g(x)$ of degree $t - 1$. Prove by induction on $t$ that if $p$ is prime, then a polynomial $f(x)$ of degree $t$ can have at most $t$ distinct zeros modulo $p$. 30 | 31 | (Omit!) 32 | -------------------------------------------------------------------------------- /docs/Chap31/31.5.md: -------------------------------------------------------------------------------- 1 | ## 31.5-1 2 | 3 | > Find all solutions to the equations $x \equiv 4 (\mod 5)$ and $x \equiv 5 (\mod 11)$. 4 | 5 | \begin{align} 6 | m_1 & = 11, m_2 = 5. \\\\ 7 | m_1^{-1} & = 1, m_2^{-1} = 9.
\\\\ 8 | c_1 & = 11, c_2 = 45. \\\\ 9 | a & = (c_1 \cdot a_1 + c_2 \cdot a_2) \mod (n_1 \cdot n_2) \\\\ 10 | & = (11 \cdot 4 + 45 \cdot 5) \mod 55 = 49. 11 | \end{align} 12 | 13 | ## 31.5-2 14 | 15 | > Find all integers $x$ that leave remainders $1$, $2$, $3$ when divided by $9$, $8$, $7$ respectively. 16 | 17 | $10 + 504i$, $i \in \mathbb Z$. 18 | 19 | ## 31.5-3 20 | 21 | > Argue that, under the definitions of Theorem 31.27, if $\gcd(a, n) = 1$, then 22 | > 23 | > $$(a^{-1} \mod n) \leftrightarrow ((a_1^{-1} \mod n_1), (a_2^{-1} \mod n_2), \ldots, (a_k^{-1} \mod n_k)).$$ 24 | 25 | $$\gcd(a, n) = 1 \rightarrow \gcd(a, n_i) = 1.$$ 26 | 27 | ## 31.5-4 28 | 29 | > Under the definitions of Theorem 31.27, prove that for any polynomial $f$, the number of roots of the equation $f(x) \equiv 0 (\mod n)$ equals the product of the number of roots of each of the equations 30 | > 31 | > $$f(x) \equiv 0 (\mod n_1), f(x) \equiv 0 (\mod n_2), \ldots, f(x) \equiv 0 (\mod n_k).$$ 32 | 33 | Based on $\text{31.28}$–$\text{31.30}$. 34 | -------------------------------------------------------------------------------- /docs/Chap31/31.6.md: -------------------------------------------------------------------------------- 1 | ## 31.6-1 2 | 3 | > Draw a table showing the order of every element in $\mathbb Z_{11}^*$. Pick the smallest primitive root $g$ and compute a table giving $\text{ind}\_{11, g}(x)$ for all $x \in \mathbb Z_{11}^\*$. 4 | 5 | $g = 2$, $\\{1, 2, 4, 8, 5, 10, 9, 7, 3, 6\\}$. 6 | 7 | ## 31.6-2 8 | 9 | > Give a modular exponentiation algorithm that examines the bits of $b$ from right to left instead of left to right. 10 | 11 | ```cpp 12 | MODULAR-EXPONENTIATION(a, b, n) 13 | i = 0 14 | d = 1 15 | while (1 << i) ≤ b 16 | if (b & (1 << i)) > 0 17 | d = (d * a) % n 18 | a = (a * a) % n 19 | i = i + 1 20 | return d 21 | ``` 22 | 23 | ## 31.6-3 24 | 25 | > Assuming that you know $\phi(n)$, explain how to compute $a^{-1} \mod n$ for any $a \in \mathbb Z_n^\*$ using the procedure $\text{MODULAR-EXPONENTIATION}$. 26 | 27 | \begin{array}{rlll} 28 | a^{\phi(n)} & \equiv & 1 & (\mod n), \\\\ 29 | a\cdot a^{\phi(n) - 1} & \equiv & 1 & (\mod n), \\\\ 30 | a^{-1} & \equiv & a^{\phi(n)-1} & (\mod n). 31 | \end{array} 32 | -------------------------------------------------------------------------------- /docs/Chap31/31.7.md: -------------------------------------------------------------------------------- 1 | ## 31.7-1 2 | 3 | > Consider an RSA key set with $p = 11$, $q = 29$, $n = 319$, and $e = 3$. What value of $d$ should be used in the secret key? What is the encryption of the message $M = 100$? 4 | 5 | $\phi(n) = (p - 1) \cdot (q - 1) = 280$. 6 | 7 | $d = e^{-1} \mod \phi(n) = 187$. 8 | 9 | $P(M) = M^e \mod n = 254$. 10 | 11 | $S(C) = C^d \mod n = 254^{187} \mod n = 100$. 12 | 13 | ## 31.7-2 14 | 15 | > Prove that if Alice's public exponent $e$ is $3$ and an adversary obtains Alice's secret exponent $d$, where $0 < d < \phi(n)$, then the adversary can factor Alice's modulus $n$ in time polynomial in the number of bits in $n$. (Although you are not asked to prove it, you may be interested to know that this result remains true even if the condition $e = 3$ is removed. See Miller [255].) 16 | 17 | $$ed \equiv 1 \mod \phi(n)$$ 18 | 19 | $$ed - 1 = 3d - 1 = k \phi(n)$$ 20 | 21 | If $p, q < n / 4$, then 22 | 23 | $$\phi(n) = n - (p + q) + 1 > n - n / 2 + 1 = n / 2 + 1 > n / 2.$$ 24 | 25 | $kn / 2 < 3d - 1 < 3d < 3n$, so $k < 6$; for each candidate $k \in \\{1, 2, 3, 4, 5\\}$ we can solve $3d - 1 = k(n - p - n / p + 1)$ for $p$.
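Spelling out that last step (our own algebra): substituting $q = n / p$ into $3d - 1 = k\phi(n) = k(n - p - n / p + 1)$ and multiplying through by $p$ gives

$$kp^2 - \big(k(n + 1) - (3d - 1)\big)p + kn = 0,$$

a quadratic in $p$ with known coefficients for each candidate $k \in \\{1, 2, 3, 4, 5\\}$; solving it and checking which root divides $n$ recovers the factorization, all in time polynomial in the number of bits of $n$.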
26 | 27 | ## 31.7-3 $\star$ 28 | 29 | > Prove that RSA is multiplicative in the sense that 30 | > 31 | > $P_A(M_1) P_A(M_2) \equiv P_A(M_1 M_2) (\mod n)$. 32 | > 33 | > Use this fact to prove that if an adversary had a procedure that could efficiently decrypt $1$ percent of messages from $\mathbb Z_n$ encrypted with $P_A$, then he could employ a probabilistic algorithm to decrypt every message encrypted with $P_A$ with high probability. 34 | 35 | Multiplicative: $P_A(M_1) P_A(M_2) = M_1^e M_2^e = (M_1 M_2)^e \equiv P_A(M_1 M_2) (\mod n)$. 36 | 37 | Decrypt: in each iteration, randomly choose $m \in \mathbb Z_n^\*$ and ask the procedure to decrypt $P_A(m) \cdot C \equiv P_A(mM) (\mod n)$. Each attempt succeeds with probability about $1 / 100$, so the expected number of iterations is about $100$; on success we hold $mM$, and returning $m^{-1} \cdot mM \mod n$ recovers $M$, where $m^{-1}$ is computed with the extended Euclidean algorithm. 38 | -------------------------------------------------------------------------------- /docs/Chap31/31.8.md: -------------------------------------------------------------------------------- 1 | ## 31.8-1 2 | 3 | > Prove that if an odd integer $n > 1$ is not a prime or a prime power, then there exists a nontrivial square root of $1$ modulo $n$. 4 | 5 | (Omit!) 6 | 7 | ## 31.8-2 $\star$ 8 | 9 | > It is possible to strengthen Euler's theorem slightly to the form 10 | > 11 | > $a^{\lambda(n)} \equiv 1 (\mod n)$ for all $a \in \mathbb Z_n^\*$, 12 | > 13 | > where $n = p_1^{e_1} \cdots p_r^{e_r}$ and $\lambda(n)$ is defined by 14 | > 15 | > $$\lambda(n) = \text{lcm}(\phi(p_1^{e_1}), \ldots, \phi(p_r^{e_r})). \tag{31.42}$$ 16 | > 17 | > Prove that $\lambda(n) \mid \phi(n)$. A composite number $n$ is a Carmichael number if $\lambda(n) \mid n - 1$. The smallest Carmichael number is $561 = 3 \cdot 11 \cdot 17$; here, $\lambda(n) = \text{lcm}(2, 10, 16) = 80$, which divides $560$. Prove that Carmichael numbers must be both "square-free" (not divisible by the square of any prime) and the product of at least three primes. (For this reason, they are not very common.) 18 | 19 | (Omit!) 20 | 21 | ## 31.8-3 22 | 23 | > Prove that if $x$ is a nontrivial square root of $1$, modulo $n$, then $\gcd(x - 1, n)$ and $\gcd(x + 1, n)$ are both nontrivial divisors of $n$. 24 | 25 | \begin{array}{rlll} 26 | x^2 & \equiv & 1 & (\mod n), \\\\ 27 | x^2 - 1 & \equiv & 0 & (\mod n), \\\\ 28 | (x + 1)(x - 1) & \equiv & 0 & (\mod n). 29 | \end{array} 30 | 31 | Since $n \mid (x + 1)(x - 1)$: suppose $\gcd(x - 1, n) = 1$; then $n \mid (x + 1)$, so $x \equiv -1 (\mod n)$, which is trivial and contradicts the fact that $x$ is nontrivial. Hence $\gcd(x - 1, n) \ne 1$, and symmetrically $\gcd(x + 1, n) \ne 1$. Neither gcd can equal $n$ either, since $n \mid (x - 1)$ or $n \mid (x + 1)$ would again make $x$ trivial; so both are nontrivial divisors of $n$. 32 | -------------------------------------------------------------------------------- /docs/Chap31/31.9.md: -------------------------------------------------------------------------------- 1 | ## 31.9-1 2 | 3 | > Referring to the execution history shown in Figure 31.7(a), when does $\text{POLLARD-RHO}$ print the factor $73$ of $1387$? 4 | 5 | $x = 84$, $y = 814$. 6 | 7 | ## 31.9-2 8 | 9 | > Suppose that we are given a function $f : \mathbb Z_n \rightarrow \mathbb Z_n$ and an initial value $x_0 \in \mathbb Z_n$. Define $x_i = f(x_{i - 1})$ for $i = 1, 2, \ldots$. Let $t$ and $u > 0$ be the smallest values such that $x_{t + i} = x_{t + u + i}$ for $i = 0, 1, \ldots$. In the terminology of Pollard's rho algorithm, $t$ is the length of the tail and $u$ is the length of the cycle of the rho. Give an efficient algorithm to determine $t$ and $u$ exactly, and analyze its running time. 10 | 11 | (Omit!) 12 | 13 | ## 31.9-3 14 | 15 | > How many steps would you expect $\text{POLLARD-RHO}$ to require to discover a factor of the form $p^e$, where $p$ is prime and $e > 1$?
16 | 17 | $\Theta(\sqrt p)$. 18 | 19 | ## 31.9-4 $\star$ 20 | 21 | > One disadvantage of $\text{POLLARD-RHO}$ as written is that it requires one gcd computation for each step of the recurrence. Instead, we could batch the gcd computations by accumulating the product of several $x_i$ values in a row and then using this product instead of $x_i$ in the gcd computation. Describe carefully how you would implement this idea, why it works, and what batch size you would pick as the most effective when working on a $\beta$-bit number $n$. 22 | 23 | (Omit!) 24 | -------------------------------------------------------------------------------- /docs/Chap31/Problems/31-1.md: -------------------------------------------------------------------------------- 1 | > Most computers can perform the operations of subtraction, testing the parity (odd or even) of a binary integer, and halving more quickly than computing remainders. This problem investigates the ***binary gcd algorithm***, which avoids the remainder computations used in Euclid's algorithm. 2 | > 3 | > **a.** Prove that if $a$ and $b$ are both even, then $\gcd(a, b) = 2 \cdot \gcd(a / 2, b / 2)$. 4 | > 5 | > **b.** Prove that if $a$ is odd and $b$ is even, then $\gcd(a, b) = \gcd(a, b / 2)$. 6 | > 7 | > **c.** Prove that if $a$ and $b$ are both odd, then $\gcd(a, b) = \gcd((a - b) / 2, b)$. 8 | > 9 | > **d.** Design an efficient binary gcd algorithm for input integers $a$ and $b$, where $a \ge b$, that runs in $O(\lg a)$ time. Assume that each subtraction, parity test, and halving takes unit time. 10 | 11 | (Omit!) 12 | 13 | **d.** 14 | 15 | ```cpp 16 | BINARY-GCD(a, b) 17 | if a < b 18 | return BINARY-GCD(b, a) 19 | if b == 0 20 | return a 21 | if (a & 1 == 1) and (b & 1 == 1) 22 | return BINARY-GCD((a - b) >> 1, b) 23 | if (a & 1 == 0) and (b & 1 == 0) 24 | return BINARY-GCD(a >> 1, b >> 1) << 1 25 | if a & 1 == 1 26 | return BINARY-GCD(a, b >> 1) 27 | return BINARY-GCD(a >> 1, b) 28 | ``` 29 | -------------------------------------------------------------------------------- /docs/Chap31/Problems/31-2.md: -------------------------------------------------------------------------------- 1 | > **a.** Consider the ordinary "paper and pencil" algorithm for long division: dividing $a$ by $b$, which yields a quotient $q$ and remainder $r$. Show that this method requires $O((1 + \lg q) \lg b)$ bit operations. 2 | > 3 | > **b.** Define $\mu(a, b) = (1 + \lg a)(1 + \lg b)$. Show that the number of bit operations performed by $\text{EUCLID}$ in reducing the problem of computing $\gcd(a, b)$ to that of computing $\gcd(b, a \mod b)$ is at most $c(\mu(a, b) - \mu(b, a \mod b))$ for some sufficiently large constant $c > 0$. 4 | > 5 | > **c.** Show that $\text{EUCLID}(a, b)$ requires $O(\mu(a, b))$ bit operations in general and $O(\beta^2)$ bit operations when applied to two $\beta$-bit inputs. 6 | 7 | **a.** 8 | 9 | - Number of comparisons and subtractions: $\lceil \lg a \rceil - \lceil \lg b \rceil + 1 \approx \lg q + 1$. 10 | - Length of subtraction: $\lceil \lg b \rceil$. 11 | - Total: $O((1 + \lg q) \lg b)$.
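To make the accounting in part (a) concrete, here is a small sketch of schoolbook binary long division (our own code, not from the text; the function and variable names are ours): the quotient loop runs $\lfloor \lg q \rfloor + 1$ times, and each comparison and subtraction touches $O(\lg b)$ significant bits, matching the $O((1 + \lg q) \lg b)$ bound.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Returns (q, r) with a = q * b + r and 0 <= r < b, by shift-and-subtract.
std::pair<std::uint64_t, std::uint64_t> long_divide(std::uint64_t a, std::uint64_t b) {
    assert(b != 0);
    if (a < b) return {0, a};
    // Align b under the leading bit of a; k ends up as floor(lg q).
    int k = 0;
    while (k + 1 < 64 && (b << (k + 1)) > (b << k) && (b << (k + 1)) <= a)
        ++k;
    std::uint64_t q = 0, r = a;
    for (; k >= 0; --k) {      // floor(lg q) + 1 iterations, one per quotient bit
        q <<= 1;
        // Invariant: r < (b << (k + 1)), so only ~lg b bits are in play here.
        if ((b << k) <= r) {   // an O(lg b)-bit comparison
            r -= b << k;       // an O(lg b)-bit subtraction
            q |= 1;
        }
    }
    return {q, r};
}
```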
12 | 13 | **b.** 14 | 15 | \begin{array}{rlll} 16 | & \mu(a, b) - \mu(b, a \mod b) \\\\ 17 | = & \mu(a, b) - \mu(b, r) \\\\ 18 | = & (1 + \lg a)(1 + \lg b) - (1 + \lg b)(1 + \lg r) \\\\ 19 | = & (1 + \lg b)(\lg a - \lg r) \\\\ 20 | \ge & (1 + \lg b) \cdot \max(\lg q, 1) & (a \ge qb \text{ and } a > 2r) \\\\ 21 | \ge & \frac{1}{2}(1 + \lg b)(1 + \lg q) \\\\ 22 | \ge & \frac{1}{2}(1 + \lg q) \lg b, 23 | \end{array} 24 | 25 | so a sufficiently large constant $c$ covers the $O((1 + \lg q)\lg b)$ cost of the reduction. 26 | 27 | **c.** $\mu(a, b) = (1 + \lg a)(1 + \lg b) \approx \beta^2$ 28 | -------------------------------------------------------------------------------- /docs/Chap31/Problems/31-4.md: -------------------------------------------------------------------------------- 1 | > Let $p$ be an odd prime. A number $a \in \mathbb Z_p^\*$ is a ***quadratic residue*** if the equation $x^2 = a ~(\text{mod}~p)$ has a solution for the unknown $x$. 2 | > 3 | > **a.** Show that there are exactly $(p - 1) / 2$ quadratic residues, modulo $p$. 4 | > 5 | > **b.** If $p$ is prime, we define the ***Legendre symbol*** $(\frac{a}{p})$, for $a \in \mathbb Z_p^\*$, to be $1$ if $a$ is a quadratic residue modulo $p$ and $-1$ otherwise. Prove that if $a \in \mathbb Z_p^\*$, then 6 | > 7 | > $$(\frac{a}{p}) \equiv a^{(p - 1) / 2} (\mod p).$$ 8 | > 9 | > Give an efficient algorithm that determines whether a given number $a$ is a quadratic residue modulo $p$. Analyze the efficiency of your algorithm. 10 | > 11 | > **c.** Prove that if $p$ is a prime of the form $4k + 3$ and $a$ is a quadratic residue in $\mathbb Z_p^\*$, then $a^{k + 1} \mod p$ is a square root of $a$, modulo $p$. How much time is required to find the square root of a quadratic residue $a$ modulo $p$? 12 | > 13 | > **d.** Describe an efficient randomized algorithm for finding a nonquadratic residue, modulo an arbitrary prime $p$, that is, a member of $\mathbb Z_p^\*$ that is not a quadratic residue. How many arithmetic operations does your algorithm require on average? 14 | 15 | (Omit!) 16 | -------------------------------------------------------------------------------- /docs/Chap32/32.2.md: -------------------------------------------------------------------------------- 1 | ## 32.2-1 2 | 3 | > Working modulo $q = 11$, how many spurious hits does the Rabin-Karp matcher encounter in the text $T = 3141592653589793$ when looking for the pattern $P = 26$? 4 | 5 | $|\\{15, 59, 92\\}| = 3$ (the window $26$ is a genuine match, not a spurious hit). 6 | 7 | ## 32.2-2 8 | 9 | > How would you extend the Rabin-Karp method to the problem of searching a text string for an occurrence of any one of a given set of $k$ patterns? Start by assuming that all $k$ patterns have the same length. Then generalize your solution to allow the patterns to have different lengths. 10 | 11 | Keep the hashes of all $k$ (equal-length) patterns in a set and test each window's hash for membership, verifying on a hit. For patterns of different lengths, truncate every pattern to the shortest length for the rolling-hash filter, and verify each candidate against its full pattern. 12 | 13 | ## 32.2-3 14 | 15 | > Show how to extend the Rabin-Karp method to handle the problem of looking for a given $m \times m$ pattern in an $n \times n$ array of characters. (The pattern may be shifted vertically and horizontally, but it may not be rotated.) 16 | 17 | Calculate the hashes in each column just like the Rabin-Karp in one-dimension, then treat the hashes in each row as the characters and hashing again. 18 | 19 | ## 32.2-4 20 | 21 | > Alice has a copy of a long $n$-bit file $A = \langle a_{n - 1}, a_{n - 2}, \ldots, a_0 \rangle$, and Bob similarly has an $n$-bit file $B = \langle b_{n - 1}, b_{n - 2}, \ldots, b_0 \rangle$. Alice and Bob wish to know if their files are identical. To avoid transmitting all of $A$ or $B$, they use the following fast probabilistic check.
Together, they select a prime $q > 1000n$ and randomly select an integer $x$ from $\\{ 0, 1, \ldots, q - 1 \\}$. Then, Alice evaluates 22 | > 23 | > $$A(x) = (\sum_{i = 0}^{n - 1} a_i x^i) \mod q$$ 24 | > 25 | > and Bob similarly evaluates $B(x)$. Prove that if $A \ne B$, there is at most one chance in $1000$ that $A(x) = B(x)$, whereas if the two files are the same, $A(x)$ is necessarily the same as $B(x)$. ($\textit{Hint:}$ See Exercise 31.4-4.) 26 | 27 | (Omit!) 28 | -------------------------------------------------------------------------------- /docs/Chap32/32.3.md: -------------------------------------------------------------------------------- 1 | ## 32.3-1 2 | 3 | > Construct the string-matching automaton for the pattern $P = aabab$ and illustrate its operation on the text string $T = \text{aaababaabaababaab}$. 4 | 5 | $$0 \rightarrow 1 \rightarrow 2 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 5 \rightarrow 1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 5 \rightarrow 1 \rightarrow 2 \rightarrow 3.$$ 6 | 7 | ## 32.3-2 8 | 9 | > Draw a state-transition diagram for a string-matching automaton for the pattern 10 | > $ababbabbababbababbabb$ over the alphabet $\Sigma = \\{a, b\\}$. 11 | 12 | \begin{array}{c|c|c} 13 | q & a & b \\\\ \hline 14 | 0 & 1 & 0 \\\\ 15 | 1 & 1 & 2 \\\\ 16 | 2 & 3 & 0 \\\\ 17 | 3 & 1 & 4 \\\\ 18 | 4 & 3 & 5 \\\\ 19 | 5 & 6 & 0 \\\\ 20 | 6 & 1 & 7 \\\\ 21 | 7 & 3 & 8 \\\\ 22 | 8 & 9 & 0 \\\\ 23 | 9 & 1 & 10 \\\\ 24 | 10 & 11 & 0 \\\\ 25 | 11 & 1 & 12 \\\\ 26 | 12 & 3 & 13 \\\\ 27 | 13 & 14 & 0 \\\\ 28 | 14 & 1 & 15 \\\\ 29 | 15 & 16 & 8 \\\\ 30 | 16 & 1 & 17 \\\\ 31 | 17 & 3 & 18 \\\\ 32 | 18 & 19 & 0 \\\\ 33 | 19 & 1 & 20 \\\\ 34 | 20 & 3 & 21 \\\\ 35 | 21 & 9 & 0 36 | \end{array} 37 | 38 | ## 32.3-3 39 | 40 | > We call a pattern $P$ ***nonoverlappable*** if $P_k \sqsupset P_q$ implies $k = 0$ or $k = q$. Describe the state-transition diagram of the string-matching automaton for a nonoverlappable pattern. 41 | 42 | $\delta(q, a) \in \\{0, 1, q + 1\\}$. 43 | 44 | ## 32.3-4 $\star$ 45 | 46 | > Given two patterns $P$ and $P'$, describe how to construct a finite automaton that determines all occurrences of either pattern. Try to minimize the number of states in your automaton. 47 | 48 | Combine the common prefix and suffix. 49 | 50 | ## 32.3-5 51 | 52 | > Given a pattern $P$ containing gap characters (see Exercise 32.1-4), show how to build a finite automaton that can find an occurrence of $P$ in a text $T$ in $O(n)$ matching time, where $n = |T|$. 53 | 54 | Split the pattern at the gap characters, build a finite automaton for each substring, and when a substring is matched, move on to the next automaton. 55 | -------------------------------------------------------------------------------- /docs/Chap32/32.4.md: -------------------------------------------------------------------------------- 1 | ## 32.4-1 2 | 3 | > Compute the prefix function $\pi$ for the pattern $\text{ababbabbabbababbabb}$. 4 | 5 | $$\pi = \\{ 0, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 3, 4, 5, 6, 7, 8 \\}.$$ 6 | 7 | ## 32.4-2 8 | 9 | > Give an upper bound on the size of $\pi^\*[q]$ as a function of $q$. Give an example to show that your bound is tight. 10 | 11 | $|\pi^\*[q]| \le q$; the bound is tight for a pattern of identical characters, e.g., $P = \text{aaa}\ldots$, where $\pi[q] = q - 1$ and $\pi^\*[q] = \\{q - 1, q - 2, \ldots, 0\\}$. 12 | 13 | ## 32.4-3 14 | 15 | > Explain how to determine the occurrences of pattern $P$ in the text $T$ by examining the $\pi$ function for the string $PT$ (the string of length $m + n$ that is the concatenation of $P$ and $T$). 16 | 17 | $\\{ q \mid m \in \pi^\*[q] \text{ and } q \ge 2m \\}$: $P$ occurs ending at position $q$ iff some iterate of $\pi$ at $q$ equals $m$. (Testing $\pi[q] = m$ alone is not enough, since $\pi[q]$ can exceed $m$ at an occurrence, e.g., $P = \text{aa}$, $T = \text{aaa}$; placing a separator character not in the alphabet between $P$ and $T$ caps $\pi$ at $m$ and makes the simple test exact.)
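As a cross-check on this answer, here is a small self-contained sketch (our own code; the separator `#` is our assumption, chosen to be outside the alphabet so that $\pi[i] = m$ pinpoints occurrences exactly):

```cpp
#include <iostream>
#include <string>
#include <vector>

// 0-indexed port of COMPUTE-PREFIX-FUNCTION: pi[i] is the length of the
// longest proper prefix of s[0..i] that is also a suffix of s[0..i].
std::vector<int> prefix_function(const std::string& s) {
    std::vector<int> pi(s.size(), 0);
    for (int i = 1; i < (int)s.size(); ++i) {
        int k = pi[i - 1];
        while (k > 0 && s[i] != s[k]) k = pi[k - 1];
        if (s[i] == s[k]) ++k;
        pi[i] = k;
    }
    return pi;
}

int main() {
    std::string P = "aabab", T = "aaababaabaababaab";  // the strings of 32.3-1
    int m = (int)P.size();
    std::vector<int> pi = prefix_function(P + '#' + T);
    for (int i = 0; i < (int)pi.size(); ++i)
        if (pi[i] == m)                 // occurrence ends at T[i - m - 1] (0-indexed)
            std::cout << "shift " << i - 2 * m << '\n';   // prints 1 and 9 here
}
```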
18 | 19 | ## 32.4-4 20 | 21 | > Use an aggregate analysis to show that the running time of $\text{KMP-MATCHER}$ is $\Theta(n)$. 22 | 23 | The statement $q = q + 1$ executes at most $n$ times, once per text character. Each execution of $q = \pi[q]$ strictly decreases $q$, and $q \ge 0$, so it too executes at most $n$ times in total; hence the running time is $\Theta(n)$. 24 | 25 | ## 32.4-5 26 | 27 | > Use a potential function to show that the running time of $\text{KMP-MATCHER}$ is $\Theta(n)$. 28 | 29 | $\Phi = q.$ 30 | 31 | ## 32.4-6 32 | 33 | > Show how to improve $\text{KMP-MATCHER}$ by replacing the occurrence of $\pi$ in line 7 (but not line 12) by $\pi'$, where $\pi'$ is defined recursively for $q = 1, 2, \ldots, m - 1$ by the equation 34 | > 35 | > $$ 36 | > \pi'[q] = 37 | > \begin{cases} 38 | > 0 & \text{ if } \pi[q] = 0, \\\\ 39 | > \pi'[\pi[q]] & \text{ if } \pi[q] \ne 0 \text{ and } P[\pi[q] + 1] = P[q + 1] \\\\ 40 | > \pi[q] & \text{ if } \pi[q] \ne 0 \text{ and } P[\pi[q] + 1] \ne P[q + 1]. 41 | > \end{cases} 42 | > $$ 43 | > 44 | > Explain why the modified algorithm is correct, and explain in what sense this change constitutes an improvement. 45 | 46 | If $P[q + 1] \ne T[i]$, then if $P[\pi[q] + 1] = P[q + 1] \ne T[i]$, there is no need to compare $P[\pi[q] + 1]$ with $T[i]$. 47 | 48 | ## 32.4-7 49 | 50 | > Give a linear-time algorithm to determine whether a text $T$ is a cyclic rotation of another string $T'$. For example, $\text{arc}$ and $\text{car}$ are cyclic rotations of each other. 51 | 52 | Check that $|T| = |T'|$, then find $T'$ in $TT$. 53 | 54 | ## 32.4-8 $\star$ 55 | 56 | > Give an $O(m|\Sigma|)$-time algorithm for computing the transition function $\delta$ for the string-matching automaton corresponding to a given pattern $P$. ($\textit{Hint:}$ Prove that $\delta(q, a) = \delta(\pi[q], a)$ if $q = m$ or $P[q + 1] \ne a$.) 57 | 58 | Compute $\pi$ once; then, for $q = 0, 1, \ldots, m$ in increasing order and each $a \in \Sigma$, set $\delta(q, a) = q + 1$ if $q < m$ and $a = P[q + 1]$, and $\delta(q, a) = \delta(\pi[q], a)$ otherwise (taking $\delta(0, a) = 0$ for $a \ne P[1]$), for $O(m|\Sigma|)$ time in total. 59 | -------------------------------------------------------------------------------- /docs/Chap32/Problems/32-1.md: -------------------------------------------------------------------------------- 1 | > Let $y^i$ denote the concatenation of string $y$ with itself $i$ times. For example, $(\text{ab})^3 = \text{ababab}$. We say that a string $x \in \Sigma^\*$ has ***repetition factor*** $r$ if $x = y^r$ for some string $y \in \Sigma^\*$ and some $r > 0$. Let $\rho(x)$ denote the largest $r$ such that $x$ has repetition factor $r$. 2 | > 3 | > **a.** Give an efficient algorithm that takes as input a pattern $P[1 \ldots m]$ and computes the value $\rho(P_i)$ for $i = 1, 2, \ldots, m$. What is the running time of your algorithm? 4 | > 5 | > **b.** For any pattern $P[1 \ldots m]$, let $\rho^\*(P)$ be defined as $\max_{1 \le i \le m} \rho(P_i)$. Prove that if the pattern $P$ is chosen randomly from the set of all binary strings of length $m$, then the expected value of $\rho^\*(P)$ is $O(1)$. 6 | > 7 | > **c.** Argue that the following string-matching algorithm correctly finds all occurrences of pattern $P$ in a text $T[1 \ldots n]$ in time $O(\rho^\*(P)n + m)$: 8 | > 9 | > ```cpp 10 | > REPETITION_MATCHER(P, T) 11 | > m = P.length 12 | > n = T.length 13 | > k = 1 + ρ*(P) 14 | > q = 0 15 | > s = 0 16 | > while s ≤ n - m 17 | > if T[s + q + 1] == P[q + 1] 18 | > q = q + 1 19 | > if q == m 20 | > print "Pattern occurs with shift" s 21 | > if q == m or T[s + q + 1] != P[q + 1] 22 | > s = s + max(1, ceil(q / k)) 23 | > q = 0 24 | > ``` 25 | > This algorithm is due to Galil and Seiferas. By extending these ideas greatly, they obtained a linear-time string-matching algorithm that uses only $O(1)$ storage beyond what is required for $P$ and $T$.
26 | 27 | **a.** Compute $\pi$ for $P$. For each $i$, let $l = i - \pi[i]$ (the length of the shortest period of $P_i$); if $l \mid i$, then $\rho(P_i) = i / l$, otherwise $\rho(P_i) = 1$. The running time is $\Theta(m)$. 28 | 29 | **b.** 30 | 31 | \begin{align} 32 | P(\rho^\*(P) \ge 2) & = \frac{1}{2} + \frac{1}{8} + \frac{1}{32} + \cdots \approx \frac{2}{3} \\\\ 33 | P(\rho^\*(P) \ge 3) & = \frac{1}{4} + \frac{1}{32} + \frac{1}{256} + \cdots \approx \frac{2}{7} \\\\ 34 | P(\rho^\*(P) \ge 4) & = \frac{1}{8} + \frac{1}{128} + \frac{1}{2048} + \cdots \approx \frac{2}{15} \\\\ 35 | P(\rho^\*(P) = 1) & = \frac{1}{3} \\\\ 36 | P(\rho^\*(P) = 2) & = \frac{8}{21} \\\\ 37 | P(\rho^\*(P) = 3) & = \frac{16}{105} \\\\ 38 | \text E[\rho^\*(P)] & = 1 \cdot \frac{1}{3} + 2 \cdot \frac{8}{21} + 3 \cdot \frac{16}{105} + \ldots \approx 2.21 39 | \end{align} 40 | 41 | **c.** 42 | 43 | (Omit!) 44 | -------------------------------------------------------------------------------- /docs/Chap33/33.2.md: -------------------------------------------------------------------------------- 1 | ## 33.2-1 2 | 3 | > Show that a set of $n$ line segments may contain $\Theta(n ^ 2)$ intersections. 4 | 5 | A "star": $n$ segments that pairwise cross near a common center give $\binom{n}{2} = \Theta(n^2)$ intersection points. 6 | 7 | ## 33.2-2 8 | 9 | > Given two segments $a$ and $b$ that are comparable at $x$, show how to determine in $O(1)$ time which of $a \succeq_x b$ or $b \succeq_x a$ holds. Assume that neither segment is vertical. 10 | 11 | Suppose $a = \overline{(x_1, y_1)(x_2, y_2)}$ and $b = \overline{(x_3, y_3)(x_4, y_4)}$, with the endpoints ordered so that $x_2 > x_1$ and $x_4 > x_3$. At abscissa $x$, 12 | 13 | $$y = y_1 + (x - x_1) \cdot \frac{y_2 - y_1}{x_2 - x_1} \quad \text{and} \quad y' = y_3 + (x - x_3) \cdot \frac{y_4 - y_3}{x_4 - x_3}.$$ 14 | 15 | To compare $y$ and $y'$ without division, multiply both by the positive quantity $(x_2 - x_1)(x_4 - x_3)$ and compare 16 | 17 | $$(x_4 - x_3)\big((x_2 - x_1)y_1 + (x - x_1)(y_2 - y_1)\big) \quad \text{with} \quad (x_2 - x_1)\big((x_4 - x_3)y_3 + (x - x_3)(y_4 - y_3)\big).$$ 18 | 19 | ## 33.2-3 20 | 21 | > Professor Mason suggests that we modify $\text{ANY-SEGMENTS-INTERSECT}$ so that instead of returning upon finding an intersection, it prints the segments that intersect and continues on to the next iteration of the **for** loop. The professor calls the resulting procedure $\text{PRINT-INTERSECTING-SEGMENTS}$ and claims that it prints all intersections, from left to right, as they occur in the set of line segments. Professor Dixon disagrees, claiming that Professor Mason's idea is incorrect. Which professor is right? Will $\text{PRINT-INTERSECTING-SEGMENTS}$ always find the leftmost intersection first? Will it always find all the intersections? 22 | 23 | Professor Dixon is right. 24 | 25 | No: intersections are reported in the order the sweep processes endpoint events, which need not be the left-to-right order of the intersection points. 26 | 27 | No: only pairs of segments that become consecutive in the sweep-line order at some event are ever tested, so some intersections can be missed entirely. 28 | 29 | ## 33.2-4 30 | 31 | > Give an $O(n\lg n)$-time algorithm to determine whether an $n$-vertex polygon is simple. 32 | 33 | Same as $\text{ANY-SEGMENTS-INTERSECT}$. 34 | 35 | ## 33.2-5 36 | 37 | > Give an $O(n\lg n)$-time algorithm to determine whether two simple polygons with a total of $n$ vertices intersect. 38 | 39 | Same as $\text{ANY-SEGMENTS-INTERSECT}$. 40 | 41 | ## 33.2-6 42 | 43 | > A ***disk*** consists of a circle plus its interior and is represented by its center point and radius. Two disks intersect if they have any point in common. Give an $O(n\lg n)$- time algorithm to determine whether any two disks in a set of $n$ intersect. 44 | 45 | Same as $\text{ANY-SEGMENTS-INTERSECT}$. 46 | 47 | ## 33.2-7 48 | 49 | > Given a set of $n$ line segments containing a total of $k$ intersections, show how to output all $k$ intersections in $O((n + k) \lg n)$ time. 50 | 51 | Treat the intersection points as event points.
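The cleared-denominator comparison in 33.2-2 is mechanical enough to code directly; a minimal sketch (our own function and parameter names, assuming coordinates small enough that the 64-bit products below cannot overflow):

```cpp
#include <cstdint>
#include <utility>

// Is segment a = (x1,y1)-(x2,y2) at or above segment b = (x3,y3)-(x4,y4)
// at abscissa x?  Assumes neither segment is vertical and both x-spans
// contain x; uses only additions and multiplications (no division).
bool a_at_or_above_b(std::int64_t x,
                     std::int64_t x1, std::int64_t y1, std::int64_t x2, std::int64_t y2,
                     std::int64_t x3, std::int64_t y3, std::int64_t x4, std::int64_t y4) {
    // Order endpoints so both denominators are positive; multiplying an
    // inequality by them then preserves its direction.
    if (x2 < x1) { std::swap(x1, x2); std::swap(y1, y2); }
    if (x4 < x3) { std::swap(x3, x4); std::swap(y3, y4); }
    std::int64_t lhs = ((x2 - x1) * y1 + (x - x1) * (y2 - y1)) * (x4 - x3);
    std::int64_t rhs = ((x4 - x3) * y3 + (x - x3) * (y4 - y3)) * (x2 - x1);
    return lhs >= rhs;
}
```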
52 | 53 | ## 33.2-8 54 | 55 | > Argue that ANY-SEGMENTS-INTERSECT works correctly even if three or more segments intersect at the same point. 56 | 57 | (Omit!) 58 | 59 | ## 33.2-9 60 | 61 | > Show that ANY-SEGMENTS-INTERSECT works correctly in the presence of vertical segments if we treat the bottom endpoint of a vertical segment as if it were a left endpoint and the top endpoint as if it were a right endpoint. How does your answer to Exercise 33.2-2 change if we allow vertical segments? 62 | 63 | (Omit!) 64 | -------------------------------------------------------------------------------- /docs/Chap33/33.3.md: -------------------------------------------------------------------------------- 1 | ## 33.3-1 2 | 3 | > Prove that in the procedure $\text{GRAHAM-SCAN}$, points $p_1$ and $p_m$ must be vertices of $\text{CH}(Q)$. 4 | 5 | ## 33.3-2 6 | 7 | > Consider a model of computation that supports addition, comparison, and multiplication and for which there is a lower bound of $\Omega(n\lg n)$ to sort $n$ numbers. Prove that $\Omega(n\lg n)$ is a lower bound for computing, in order, the vertices of the convex hull of a set of $n$ points in such a model. 8 | 9 | ## 33.3-3 10 | 11 | > Given a set of points $Q$, prove that the pair of points farthest from each other must be vertices of $\text{CH}(Q)$. 12 | 13 | ## 33.3-4 14 | 15 | > For a given polygon $P$ and a point $q$ on its boundary, the ***shadow*** of $q$ is the set of points $r$ such that the segment $\overline{qr}$ is entirely on the boundary or in the interior of $P$. As Figure 33.10 illustrates, a polygon $P$ is ***star-shaped*** if there exists a point $p$ in the interior of $P$ that is in the shadow of every point on the boundary of $P$. The set of all such points $p$ is called the ***kernel*** of $P$. Given an $n$-vertex, star-shaped polygon $P$ specified by its vertices in counterclockwise order, show how to compute $\text{CH}(P)$ in $O(n)$ time. 16 | 17 | ## 33.3-5 18 | 19 | > In the ***on-line convex-hull problem***, we are given the set $Q$ of $n$ points one point at a time. After receiving each point, we compute the convex hull of the points seen so far. Obviously, we could run Graham's scan once for each point, with a total running time of $O(n^2\lg n)$. Show how to solve the on-line convex-hull problem in a total of $O(n^2)$ time. 20 | 21 | ## 33.3-6 $\star$ 22 | 23 | > Show how to implement the incremental method for computing the convex hull of $n$ points so that it runs in $O(n\lg n)$ time. 24 | 25 | (Omit!) 26 | -------------------------------------------------------------------------------- /docs/Chap33/33.4.md: -------------------------------------------------------------------------------- 1 | ## 33.4-1 2 | 3 | > Professor Williams comes up with a scheme that allows the closest-pair algorithm to check only $5$ points following each point in array $Y'$. The idea is always to place points on line $l$ into set $P_L$. Then, there cannot be pairs of coincident points on line $l$ with one point in $P_L$ and one in $P_R$. Thus, at most $6$ points can reside in the $\delta \times 2\delta$ rectangle. What is the flaw in the professor's scheme? 4 | 5 | (Omit!) 6 | 7 | ## 33.4-2 8 | 9 | > Show that it actually suffices to check only the points in the $5$ array positions following each point in the array $Y'$. 10 | 11 | (Omit!) 12 | 13 | ## 33.4-3 14 | 15 | > We can define the distance between two points in ways other than euclidean. 
In the plane, the ***$L_m$-distance*** between points $p_1$ and $p_2$ is given by the expression $(|x_1 - x_2|^m + |y_1 - y_2|^m)^{1 / m}$. Euclidean distance, therefore, is $L_2$-distance. Modify the closest-pair algorithm to use the $L_1$-distance, which is also known as the ***Manhattan distance***.
16 | 
17 | (Omit!)
18 | 
19 | ## 33.4-4
20 | 
21 | > Given two points $p_1$ and $p_2$ in the plane, the $L_\infty$-distance between them is given by $\max(|x_1 - x_2|, |y_1 - y_2|)$. Modify the closest-pair algorithm to use the $L_\infty$-distance.
22 | 
23 | (Omit!)
24 | 
25 | ## 33.4-5
26 | 
27 | > Suppose that $\Omega(n)$ of the points given to the closest-pair algorithm are covertical. Show how to determine the sets $P_L$ and $P_R$ and how to determine whether each point of $Y$ is in $P_L$ or $P_R$ so that the running time for the closest-pair algorithm remains $O(n\lg n)$.
28 | 
29 | (Omit!)
30 | 
31 | ## 33.4-6
32 | 
33 | > Suggest a change to the closest-pair algorithm that avoids presorting the $Y$ array but leaves the running time as $O(n\lg n)$. ($\textit{Hint:}$ Merge sorted arrays $Y_L$ and $Y_R$ to form the sorted array $Y$.)
34 | 
35 | (Omit!)
36 | 
-------------------------------------------------------------------------------- /docs/Chap33/Problems/33-1.md: --------------------------------------------------------------------------------
1 | > Given a set $Q$ of points in the plane, we define the ***convex layers*** of $Q$ inductively. The first convex layer of $Q$ consists of those points in $Q$ that are vertices of $\text{CH}(Q)$. For $i > 1$, define $Q_i$ to consist of the points of $Q$ with all points in convex layers $1, 2, \dots, i - 1$ removed. Then, the $i$th convex layer of $Q$ is $\text{CH}(Q_i)$ if $Q_i \ne \emptyset$ and is undefined otherwise.
2 | > 
3 | > **a.** Give an $O(n^2)$-time algorithm to find the convex layers of a set of $n$ points.
4 | > 
5 | > **b.** Prove that $\Omega(n\lg n)$ time is required to compute the convex layers of a set of $n$ points with any model of computation that requires $\Omega(n\lg n)$ time to sort $n$ real numbers.
6 | 
7 | (Omit!)
8 | 
-------------------------------------------------------------------------------- /docs/Chap33/Problems/33-2.md: --------------------------------------------------------------------------------
1 | > Let $Q$ be a set of $n$ points in the plane. We say that point $(x, y)$ ***dominates*** point $(x', y')$ if $x \ge x'$ and $y \ge y'$. A point in $Q$ that is dominated by no other points in $Q$ is said to be ***maximal***. Note that $Q$ may contain many maximal points, which can be organized into ***maximal layers*** as follows. The first maximal layer $L_1$ is the set of maximal points of $Q$. For $i > 1$, the $i$th maximal layer $L_i$ is the set of maximal points in $Q - \bigcup_{j = 1}^{i - 1} L_j$.
2 | > 
3 | > Suppose that $Q$ has $k$ nonempty maximal layers, and let $y_i$ be the $y$-coordinate of the leftmost point in $L_i$ for $i = 1, 2, \dots, k$. For now, assume that no two points in $Q$ have the same $x$- or $y$-coordinate.
4 | > 
5 | > **a.** Show that $y_1 > y_2 > \cdots > y_k$.
6 | > 
7 | > Consider a point $(x, y)$ that is to the left of any point in $Q$ and for which $y$ is distinct from the $y$-coordinate of any point in $Q$. Let $Q' = Q \cup \\{(x, y)\\}$.
8 | > 
9 | > **b.** Let $j$ be the minimum index such that $y_j < y$, unless $y < y_k$, in which case we let $j = k + 1$.
Show that the maximal layers of $Q'$ are as follows:
10 | > 
11 | > - If $j \le k$, then the maximal layers of $Q'$ are the same as the maximal layers of $Q$, except that $L_j$ also includes $(x, y)$ as its new leftmost point.
12 | > 
13 | > - If $j = k + 1$, then the first $k$ maximal layers of $Q'$ are the same as for $Q$, but in addition, $Q'$ has a nonempty $(k + 1)$st maximal layer: $L_{k + 1} = \\{(x, y)\\}$.
14 | > 
15 | > **c.** Describe an $O(n\lg n)$-time algorithm to compute the maximal layers of a set $Q$ of $n$ points. ($\textit{Hint:}$ Move a sweep line from right to left.)
16 | > 
17 | > **d.** Do any difficulties arise if we now allow input points to have the same $x$- or $y$-coordinate? Suggest a way to resolve such problems.
18 | 
19 | (Omit!)
20 | 
-------------------------------------------------------------------------------- /docs/Chap33/Problems/33-3.md: --------------------------------------------------------------------------------
1 | > A group of $n$ Ghostbusters is battling $n$ ghosts. Each Ghostbuster carries a proton pack, which shoots a stream at a ghost, eradicating it. A stream goes in a straight line and terminates when it hits the ghost. The Ghostbusters decide upon the following strategy. They will pair off with the ghosts, forming $n$ Ghostbuster-ghost pairs, and then simultaneously each Ghostbuster will shoot a stream at his chosen ghost. As we all know, it is very dangerous to let streams cross, and so the Ghostbusters must choose pairings for which no streams will cross.
2 | > 
3 | > Assume that the position of each Ghostbuster and each ghost is a fixed point in the plane and that no three positions are collinear.
4 | > 
5 | > **a.** Argue that there exists a line passing through one Ghostbuster and one ghost such that the number of Ghostbusters on one side of the line equals the number of ghosts on the same side. Describe how to find such a line in $O(n\lg n)$ time.
6 | > 
7 | > **b.** Give an $O(n^2\lg n)$-time algorithm to pair Ghostbusters with ghosts in such a way that no streams cross.
8 | 
9 | (Omit!)
10 | 
-------------------------------------------------------------------------------- /docs/Chap33/Problems/33-4.md: --------------------------------------------------------------------------------
1 | > Professor Charon has a set of $n$ sticks, which are piled up in some configuration. Each stick is specified by its endpoints, and each endpoint is an ordered triple giving its $(x, y, z)$ coordinates. No stick is vertical. He wishes to pick up all the sticks, one at a time, subject to the condition that he may pick up a stick only if there is no other stick on top of it.
2 | > 
3 | > **a.** Give a procedure that takes two sticks $a$ and $b$ and reports whether $a$ is above, below, or unrelated to $b$.
4 | > 
5 | > **b.** Describe an efficient algorithm that determines whether it is possible to pick up all the sticks, and if so, provides a legal order in which to pick them up.
6 | 
7 | (Omit!)
8 | 
-------------------------------------------------------------------------------- /docs/Chap33/Problems/33-5.md: --------------------------------------------------------------------------------
1 | > Consider the problem of computing the convex hull of a set of points in the plane that have been drawn according to some known random distribution. Sometimes, the number of points, or size, of the convex hull of $n$ points drawn from such a distribution has expectation $O(n^{1 - \epsilon})$ for some constant $\epsilon > 0$. We call such a distribution ***sparse-hulled***.
Sparse-hulled distributions include the following:
2 | > 
3 | > - Points drawn uniformly from a unit-radius disk. The convex hull has expected size $\Theta(n^{1 / 3})$.
4 | > 
5 | > - Points drawn uniformly from the interior of a convex polygon with $k$ sides, for any constant $k$. The convex hull has expected size $\Theta(\lg n)$.
6 | > 
7 | > - Points drawn according to a two-dimensional normal distribution. The convex hull has expected size $\Theta(\sqrt{\lg n})$.
8 | > 
9 | > **a.** Given two convex polygons with $n_1$ and $n_2$ vertices respectively, show how to compute the convex hull of all $n_1 + n_2$ points in $O(n_1 + n_2)$ time. (The polygons may overlap.)
10 | > 
11 | > **b.** Show how to compute the convex hull of a set of $n$ points drawn independently according to a sparse-hulled distribution in $O(n)$ average-case time. ($\textit{Hint:}$ Recursively find the convex hulls of the first $n / 2$ points and the second $n / 2$ points, and then combine the results.)
12 | 
13 | (Omit!)
14 | 
-------------------------------------------------------------------------------- /docs/Chap34/34.1.md: --------------------------------------------------------------------------------
1 | ## 34.1-1
2 | 
3 | > Define the optimization problem $\text{LONGEST-PATH-LENGTH}$ as the relation that associates each instance of an undirected graph and two vertices with the number of edges in a longest simple path between the two vertices. Define the decision problem $\text{LONGEST-PATH}$ $= \\{\langle G, u, v, k\rangle: G = (V, E)$ is an undirected graph, $u, v \in V, k \ge 0$ is an integer, and there exists a simple path from $u$ to $v$ in $G$ consisting of at least $k$ edges $\\}$. Show that the optimization problem $\text{LONGEST-PATH-LENGTH}$ can be solved in polynomial time if and only if $\text{LONGEST-PATH} \in P$.
4 | 
5 | (Omit!)
6 | 
7 | ## 34.1-2
8 | 
9 | > Give a formal definition for the problem of finding the longest simple cycle in an undirected graph. Give a related decision problem. Give the language corresponding to the decision problem.
10 | 
11 | (Omit!)
12 | 
13 | ## 34.1-3
14 | 
15 | > Give a formal encoding of directed graphs as binary strings using an adjacency-matrix representation. Do the same using an adjacency-list representation. Argue that the two representations are polynomially related.
16 | 
17 | (Omit!)
18 | 
19 | ## 34.1-4
20 | 
21 | > Is the dynamic-programming algorithm for the 0-1 knapsack problem that is asked for in Exercise 16.2-2 a polynomial-time algorithm? Explain your answer.
22 | 
23 | (Omit!)
24 | 
25 | ## 34.1-5
26 | 
27 | > Show that if an algorithm makes at most a constant number of calls to polynomial-time subroutines and performs an additional amount of work that also takes polynomial time, then it runs in polynomial time. Also show that a polynomial number of calls to polynomial-time subroutines may result in an exponential-time algorithm.
28 | 
29 | (Omit!)
30 | 
31 | ## 34.1-6
32 | 
33 | > Show that the class $P$, viewed as a set of languages, is closed under union, intersection, concatenation, complement, and Kleene star. That is, if $L_1, L_2 \in P$, then $L_1 \cup L_2 \in P$, $L_1 \cap L_2 \in P$, $L_1L_2 \in P$, $\bar L_1 \in P$, and $L_1^\* \in P$.
34 | 
35 | (Omit!)
36 | 
-------------------------------------------------------------------------------- /docs/Chap34/34.2.md: --------------------------------------------------------------------------------
1 | ## 34.2-1
2 | 
3 | > Consider the language $\text{GRAPH-ISOMORPHISM}$ $= \\{\langle G_1, G_2 \rangle: G_1$ and $G_2$ are isomorphic graphs$\\}$. Prove that $\text{GRAPH-ISOMORPHISM} \in \text{NP}$ by describing a polynomial-time algorithm to verify the language.
4 | 
5 | (Omit!)
6 | 
7 | ## 34.2-2
8 | 
9 | > Prove that if $G$ is an undirected bipartite graph with an odd number of vertices, then $G$ is nonhamiltonian.
10 | 
11 | (Omit!)
12 | 
13 | ## 34.2-3
14 | 
15 | > Show that if $\text{HAM-CYCLE} \in P$, then the problem of listing the vertices of a hamiltonian cycle, in order, is polynomial-time solvable.
16 | 
17 | (Omit!)
18 | 
19 | ## 34.2-4
20 | 
21 | > Prove that the class $\text{NP}$ of languages is closed under union, intersection, concatenation, and Kleene star. Discuss the closure of $\text{NP}$ under complement.
22 | 
23 | (Omit!)
24 | 
25 | ## 34.2-5
26 | 
27 | > Show that any language in $\text{NP}$ can be decided by an algorithm running in time $2^{O(n^k)}$ for some constant $k$.
28 | 
29 | (Omit!)
30 | 
31 | ## 34.2-6
32 | 
33 | > A ***hamiltonian path*** in a graph is a simple path that visits every vertex exactly once. Show that the language $\text{HAM-PATH}$ $= \\{\langle G, u, v \rangle:$ there is a hamiltonian path from $u$ to $v$ in graph $G\\}$ belongs to $\text{NP}$.
34 | 
35 | (Omit!)
36 | 
37 | ## 34.2-7
38 | 
39 | > Show that the hamiltonian-path problem from Exercise 34.2-6 can be solved in polynomial time on directed acyclic graphs. Give an efficient algorithm for the problem.
40 | 
41 | (Omit!)
42 | 
43 | ## 34.2-8
44 | 
45 | > Let $\phi$ be a boolean formula constructed from the boolean input variables $x_1, x_2, \dots, x_k$, negations ($\neg$), ANDs ($\wedge$), ORs ($\vee$), and parentheses. The formula $\phi$ is a ***tautology*** if it evaluates to $1$ for every assignment of $1$ and $0$ to the input variables. Define $\text{TAUTOLOGY}$ as the language of boolean formulas that are tautologies. Show that $\text{TAUTOLOGY} \in \text{co-NP}$.
46 | 
47 | (Omit!)
48 | 
49 | ## 34.2-9
50 | 
51 | > Prove that $\text P \subseteq \text{co-NP}$.
52 | 
53 | (Omit!)
54 | 
55 | ## 34.2-10
56 | 
57 | > Prove that if $\text{NP} \ne \text{co-NP}$, then $\text P \ne \text{NP}$.
58 | 
59 | (Omit!)
60 | 
61 | ## 34.2-11
62 | 
63 | > Let $G$ be a connected, undirected graph with at least $3$ vertices, and let $G^3$ be the graph obtained by connecting all pairs of vertices that are connected by a path in $G$ of length at most $3$. Prove that $G^3$ is hamiltonian. ($\textit{Hint:}$ Construct a spanning tree for $G$, and use an inductive argument.)
64 | 
65 | (Omit!)
66 | 
-------------------------------------------------------------------------------- /docs/Chap34/34.3.md: --------------------------------------------------------------------------------
1 | ## 34.3-1
2 | 
3 | > Verify that the circuit in Figure 34.8(b) is unsatisfiable.
4 | 
5 | (Omit!)
6 | 
7 | ## 34.3-2
8 | 
9 | > Show that the $\le_\text P$ relation is a transitive relation on languages. That is, show that if $L_1 \le_\text P L_2$ and $L_2 \le_\text P L_3$, then $L_1 \le_\text P L_3$.
10 | 
11 | (Omit!)
12 | 
13 | ## 34.3-3
14 | 
15 | > Prove that $L \le_\text P \bar L$ if and only if $\bar L \le_\text P L$.
16 | 
17 | (Omit!)
18 | 
19 | ## 34.3-4
20 | 
21 | > Show that we could have used a satisfying assignment as a certificate in an alternative proof of Lemma 34.5. Which certificate makes for an easier proof?
22 | 
23 | (Omit!)
24 | 
25 | ## 34.3-5
26 | 
27 | > The proof of Lemma 34.6 assumes that the working storage for algorithm $A$ occupies a contiguous region of polynomial size. Where in the proof do we exploit this assumption? Argue that this assumption does not involve any loss of generality.
28 | 
29 | (Omit!)
30 | 
31 | ## 34.3-6
32 | 
33 | > A language $L$ is ***complete*** for a language class $C$ with respect to polynomial-time reductions if $L \in C$ and $L' \le_\text P L$ for all $L' \in C$. Show that $\emptyset$ and $\\{0, 1\\}^\*$ are the only languages in $\text P$ that are not complete for $\text P$ with respect to polynomial-time reductions.
34 | 
35 | (Omit!)
36 | 
37 | ## 34.3-7
38 | 
39 | > Show that, with respect to polynomial-time reductions (see Exercise 34.3-6), $L$ is complete for $\text{NP}$ if and only if $\bar L$ is complete for $\text{co-NP}$.
40 | 
41 | (Omit!)
42 | 
43 | ## 34.3-8
44 | 
45 | > The reduction algorithm $F$ in the proof of Lemma 34.6 constructs the circuit $C = f(x)$ based on knowledge of $x$, $A$, and $k$. Professor Sartre observes that the string $x$ is input to $F$, but only the existence of $A$, $k$, and the constant factor implicit in the $O(n^k)$ running time is known to $F$ (since the language $L$ belongs to $\text{NP}$), not their actual values. Thus, the professor concludes that $F$ can't possibly construct the circuit $C$ and that the language $\text{CIRCUIT-SAT}$ is not necessarily $\text{NP-hard}$. Explain the flaw in the professor's reasoning.
46 | 
47 | (Omit!)
48 | 
-------------------------------------------------------------------------------- /docs/Chap34/34.4.md: --------------------------------------------------------------------------------
1 | ## 34.4-1
2 | 
3 | > Consider the straightforward (nonpolynomial-time) reduction in the proof of Theorem 34.9. Describe a circuit of size $n$ that, when converted to a formula by this method, yields a formula whose size is exponential in $n$.
4 | 
5 | (Omit!)
6 | 
7 | ## 34.4-2
8 | 
9 | > Show the $\text{3-CNF}$ formula that results when we use the method of Theorem 34.10 on the formula $\text{(34.3)}$.
10 | 
11 | (Omit!)
12 | 
13 | ## 34.4-3
14 | 
15 | > Professor Jagger proposes to show that $\text{SAT} \le_\text P \text{3-CNF-SAT}$ by using only the truth-table technique in the proof of Theorem 34.10, and not the other steps. That is, the professor proposes to take the boolean formula $\phi$, form a truth table for its variables, derive from the truth table a formula in $\text{3-DNF}$ that is equivalent to $\neg\phi$, and then negate and apply DeMorgan's laws to produce a $\text{3-CNF}$ formula equivalent to $\phi$. Show that this strategy does not yield a polynomial-time reduction.
16 | 
17 | (Omit!)
18 | 
19 | ## 34.4-4
20 | 
21 | > Show that the problem of determining whether a boolean formula is a tautology is complete for $\text{co-NP}$. ($\textit{Hint:}$ See Exercise 34.3-7.)
22 | 
23 | (Omit!)
24 | 
25 | ## 34.4-5
26 | 
27 | > Show that the problem of determining the satisfiability of boolean formulas in disjunctive normal form is polynomial-time solvable.
28 | 
29 | (Omit!)
30 | 
31 | ## 34.4-6
32 | 
33 | > Suppose that someone gives you a polynomial-time algorithm to decide formula satisfiability. Describe how to use this algorithm to find satisfying assignments in polynomial time.
34 | 
35 | (Omit!)
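The omitted argument for 34.4-6 is the usual self-reducibility trick: call the decision procedure repeatedly, fixing one variable at a time while keeping the formula satisfiable. A minimal sketch, in which `Formula` and the oracle type are hypothetical stand-ins for whatever representation is at hand:

```cpp
#include <functional>
#include <vector>

// Hypothetical formula representation; only the oracle inspects it.
struct Formula { /* ... */ };

// Assumed polynomial-time decision oracle: with the first fixed.size()
// variables forced to the given 0/1 values, does phi have a satisfying
// extension?
using SatOracle = std::function<bool(const Formula&, const std::vector<int>&)>;

// If sat(phi, {}) holds, this returns a full satisfying assignment for
// the n variables using n further oracle calls, each in polynomial time.
std::vector<int> findAssignment(const Formula& phi, int n, const SatOracle& sat)
{
    std::vector<int> fixed;
    for (int i = 0; i < n; ++i) {
        fixed.push_back(0);          // tentatively set x_{i+1} = 0
        if (!sat(phi, fixed))
            fixed.back() = 1;        // otherwise x_{i+1} = 1 must work
    }
    return fixed;
}
```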
36 | 
37 | ## 34.4-7
38 | 
39 | > Let $\text{2-CNF-SAT}$ be the set of satisfiable boolean formulas in $\text{CNF}$ with exactly $2$ literals per clause. Show that $\text{2-CNF-SAT} \in P$. Make your algorithm as efficient as possible. ($\textit{Hint:}$ Observe that $x \vee y$ is equivalent to $\neg x \to y$. Reduce $\text{2-CNF-SAT}$ to an efficiently solvable problem on a directed graph.)
40 | 
41 | (Omit!)
42 | 
-------------------------------------------------------------------------------- /docs/Chap34/34.5.md: --------------------------------------------------------------------------------
1 | ## 34.5-1
2 | 
3 | > The ***subgraph-isomorphism problem*** takes two undirected graphs $G_1$ and $G_2$, and it asks whether $G_1$ is isomorphic to a subgraph of $G_2$. Show that the subgraph-isomorphism problem is $\text{NP-complete}$.
4 | 
5 | (Omit!)
6 | 
7 | ## 34.5-2
8 | 
9 | > Given an integer $m \times n$ matrix $A$ and an integer $m$-vector $b$, the ***0-1 integer-programming problem*** asks whether there exists an integer $n$-vector $x$ with elements in the set $\\{0, 1\\}$ such that $Ax \le b$. Prove that 0-1 integer programming is $\text{NP-complete}$. ($\textit{Hint:}$ Reduce from $\text{3-CNF-SAT}$.)
10 | 
11 | (Omit!)
12 | 
13 | ## 34.5-3
14 | 
15 | > The integer ***linear-programming problem*** is like the 0-1 integer-programming problem given in Exercise 34.5-2, except that the values of the vector $x$ may be any integers rather than just $0$ or $1$. Assuming that the 0-1 integer-programming problem is $\text{NP-hard}$, show that the integer linear-programming problem is $\text{NP-complete}$.
16 | 
17 | (Omit!)
18 | 
19 | ## 34.5-4
20 | 
21 | > Show how to solve the subset-sum problem in polynomial time if the target value $t$ is expressed in unary.
22 | 
23 | (Omit!)
24 | 
25 | ## 34.5-5
26 | 
27 | > The ***set-partition problem*** takes as input a set $S$ of numbers. The question is whether the numbers can be partitioned into two sets $A$ and $\bar A = S - A$ such that $\sum_{x \in A} x = \sum_{x \in \bar A} x$. Show that the set-partition problem is $\text{NP-complete}$.
28 | 
29 | (Omit!)
30 | 
31 | ## 34.5-6
32 | 
33 | > Show that the hamiltonian-path problem is $\text{NP-complete}$.
34 | 
35 | (Omit!)
36 | 
37 | ## 34.5-7
38 | 
39 | > The ***longest-simple-cycle problem*** is the problem of determining a simple cycle (no repeated vertices) of maximum length in a graph. Formulate a related decision problem, and show that the decision problem is $\text{NP-complete}$.
40 | 
41 | (Omit!)
42 | 
43 | ## 34.5-8
44 | 
45 | > In the ***half 3-CNF satisfiability*** problem, we are given a $\text{3-CNF}$ formula $\phi$ with $n$ variables and $m$ clauses, where $m$ is even. We wish to determine whether there exists a truth assignment to the variables of $\phi$ such that exactly half the clauses evaluate to $0$ and exactly half the clauses evaluate to $1$. Prove that the half $\text{3-CNF}$ satisfiability problem is $\text{NP-complete}$.
46 | 
47 | (Omit!)
48 | 
-------------------------------------------------------------------------------- /docs/Chap34/Problems/34-1.md: --------------------------------------------------------------------------------
1 | > An ***independent set*** of a graph $G = (V, E)$ is a subset $V' \subseteq V$ of vertices such that each edge in $E$ is incident on at most one vertex in $V'$. The ***independent-set problem*** is to find a maximum-size independent set in $G$.
2 | > 3 | > **a.** Formulate a related decision problem for the independent-set problem, and prove that it is $\text{NP-complete}$. ($\textit{Hint:}$ Reduce from the clique problem.) 4 | > 5 | > **b.** Suppose that you are given a "black-box" subroutine to solve the decision problem you defined in part (a). Give an algorithm to find an independent set of maximum size. The running time of your algorithm should be polynomial in $|V|$ and $|E|$, counting queries to the black box as a single step. 6 | > 7 | > Although the independent-set decision problem is $\text{NP-complete}$, certain special cases are polynomial-time solvable. 8 | > 9 | > **c.** Give an efficient algorithm to solve the independent-set problem when each vertex in $G$ has degree $2$. Analyze the running time, and prove that your algorithm works correctly. 10 | > 11 | > **d.** Give an efficient algorithm to solve the independent-set problem when $G$ is bipartite. Analyze the running time, and prove that your algorithm works correctly. ($\text{Hint:}$ Use the results of Section 26.3.) 12 | 13 | (Omit!) 14 | -------------------------------------------------------------------------------- /docs/Chap34/Problems/34-2.md: -------------------------------------------------------------------------------- 1 | > Bonnie and Clyde have just robbed a bank. They have a bag of money and want to divide it up. For each of the following scenarios, either give a polynomial-time algorithm, or prove that the problem is $\text{NP-complete}$. The input in each case is a list of the $n$ items in the bag, along with the value of each. 2 | > 3 | > **a.** The bag contains $n$ coins, but only $2$ different denominations: some coins are worth $x$ dollars, and some are worth $y$ dollars. Bonnie and Clyde wish to divide the money exactly evenly. 4 | > 5 | > **b.** The bag contains $n$ coins, with an arbitrary number of different denominations, but each denomination is a nonnegative integer power of $2$, i.e., the possible denominations are $1$ dollar, $2$ dollars, $4$ dollars, etc. Bonnie and Clyde wish to divide the money exactly evenly. 6 | > 7 | > **c.** The bag contains $n$ checks, which are, in an amazing coincidence, made out to "Bonnie or Clyde." They wish to divide the checks so that they each get the exact same amount of money. 8 | > 9 | > **d.** The bag contains $n$ checks as in part (c), but this time Bonnie and Clyde are willing to accept a split in which the difference is no larger than $100$ dollars. 10 | 11 | (Omit!) 12 | -------------------------------------------------------------------------------- /docs/Chap34/Problems/34-3.md: -------------------------------------------------------------------------------- 1 | > Mapmakers try to use as few colors as possible when coloring countries on a map, as long as no two countries that share a border have the same color. We can model this problem with an undirected graph $G = (V, E)$ in which each vertex represents a country and vertices whose respective countries share a border are adjacent. Then, a ***$k$-coloring*** is a function $c: V \to \\{1, 2, \dots, k \\}$ such that $c(u) \ne c(v)$ for every edge $(u, v) \in E$. In other words, the numbers $1, 2, \dots, k$ represent the $k$ colors, and adjacent vertices must have different colors. The ***graph-coloring problem*** is to determine the minimum number of colors needed to color a given graph. 2 | > 3 | > **a.** Give an efficient algorithm to determine a $2$-coloring of a graph, if one exists. 
4 | > 5 | > **b.** Cast the graph-coloring problem as a decision problem. Show that your decision problem is solvable in polynomial time if and only if the graph-coloring problem is solvable in polynomial time. 6 | > 7 | > **c.** Let the language $\text{3-COLOR}$ be the set of graphs that can be $3$-colored. Show that if $\text{3-COLOR}$ is $\text{NP-complete}$, then your decision problem from part (b) is $\text{NP-complete}$. 8 | > 9 | > To prove that $\text{3-COLOR}$ is $\text{NP-complete}$, we use a reduction from $\text{3-CNF-SAT}$. Given a formula $\phi$ of $m$ clauses on $n$ variables $x_1, x_2, \dots, x_n$, we construct a graph $G = (V, E)$ as follows. The set $V$ consists of a vertex for each variable, a vertex for the negation of each variable, $5$ vertices for each clause, and $3$ special vertices: $\text{TRUE}$, $\text{FALSE}$, and $\text{RED}$. The edges of the graph are of two types: "literal" edges that are independent of the clauses and "clause" edges that depend on the clauses. The literal edges form a triangle on the special vertices and also form a triangle on $x_i, \neg x_i$, and $\text{RED}$ for $i = 1, 2, \dots, n$. 10 | > 11 | > **d.** Argue that in any $3$-coloring $c$ of a graph containing the literal edges, exactly one of a variable and its negation is colored $c(\text{TRUE})$ and the other is colored $c(\text{FALSE})$. Argue that for any truth assignment for $\phi$, there exists a $3$-coloring of the graph containing just the literal edges. 12 | > 13 | > The widget shown in Figure 34.20 helps to enforce the condition corresponding to a clause $(x \vee y \vee z)$. Each clause requires a unique copy of the $5$ vertices that are heavily shaded in the figure; they connect as shown to the literals of the clause and the special vertex $\text{TRUE}$. 14 | > 15 | > **e.** Argue that if each of $x$, $y$, and $z$ is colored $c(\text{TRUE})$ or $c(\text{FALSE})$, then the widget is $3$-colorable if and only if at least one of $x$, $y$, or $z$ is colored $c(\text{TRUE})$. 16 | > 17 | > **f.** Complete the proof that $\text{3-COLOR}$ is $\text{NP-complete}$. 18 | 19 | (Omit!) 20 | -------------------------------------------------------------------------------- /docs/Chap34/Problems/34-4.md: -------------------------------------------------------------------------------- 1 | > Suppose that we have one machine and a set of $n$ tasks $a_1, a_2, \dots, a_n$, each of which requires time on the machine. Each task $a_j$ requires $t_j$ time units on the machine (its processing time), yields a profit of $p_j$, and has a deadline $d_j$. The machine can process only one task at a time, and task $a_j$ must run without interruption for $t_j$ consecutive time units. If we complete task $a_j$ by its deadline $d_j$, we receive a profit $p_j$, but if we complete it after its deadline, we receive no profit. As an optimization problem, we are given the processing times, profits, and deadlines for a set of $n$ tasks, and we wish to find a schedule that completes all the tasks and returns the greatest amount of profit. The processing times, profits, and deadlines are all nonnegative numbers. 2 | > 3 | > **a.** State this problem as a decision problem. 4 | > 5 | > **b.** Show that the decision problem is $\text{NP-complete}$. 6 | > 7 | > **c.** Give a polynomial-time algorithm for the decision problem, assuming that all processing times are integers from $1$ to $n$. ($\textit{Hint:}$ Use dynamic programming.) 
8 | > 9 | > **d.** Give a polynomial-time algorithm for the optimization problem, assuming that all processing times are integers from $1$ to $n$. 10 | 11 | (Omit!) 12 | -------------------------------------------------------------------------------- /docs/Chap35/35.1.md: -------------------------------------------------------------------------------- 1 | ## 35.1-1 2 | 3 | > Give an example of a graph for which $\text{APPROX-VERTEX-COVER}$ always yields a suboptimal solution. 4 | 5 | (Omit!) 6 | 7 | ## 35.1-2 8 | 9 | > Prove that the set of edges picked in line 4 of $\text{APPROX-VERTEX-COVER}$ forms a maximal matching in the graph $G$. 10 | 11 | (Omit!) 12 | 13 | ## 35.1-3 $\star$ 14 | 15 | > Professor Bündchen proposes the following heuristic to solve the vertex-cover problem. Repeatedly select a vertex of highest degree, and remove all of its incident edges. Give an example to show that the professor's heuristic does not have an approximation ratio of $2$. ($\textit{Hint:}$ Try a bipartite graph with vertices of uniform degree on the left and vertices of varying degree on the right.) 16 | 17 | (Omit!) 18 | 19 | ## 35.1-4 20 | 21 | > Give an efficient greedy algorithm that finds an optimal vertex cover for a tree in linear time. 22 | 23 | (Omit!) 24 | 25 | ## 35.1-5 26 | 27 | > From the proof of Theorem 34.12, we know that the vertex-cover problem and the $\text{NP-complete}$ clique problem are complementary in the sense that an optimal vertex cover is the complement of a maximum-size clique in the complement graph. Does this relationship imply that there is a polynomial-time approximation algorithm with a constant approximation ratio for the clique problem? Justify your answer. 28 | 29 | (Omit!) 30 | -------------------------------------------------------------------------------- /docs/Chap35/35.2.md: -------------------------------------------------------------------------------- 1 | ## 35.2-1 2 | 3 | > Suppose that a complete undirected graph $G = (V, E)$ with at least $3$ vertices has a cost function $c$ that satisfies the triangle inequality. Prove that $c(u, v) \ge 0$ for all $u, v \in V$. 4 | 5 | (Omit!) 6 | 7 | ## 35.2-2 8 | 9 | > Show how in polynomial time we can transform one instance of the traveling-salesman problem into another instance whose cost function satisfies the triangle inequality. The two instances must have the same set of optimal tours. Explain why such a polynomial-time transformation does not contradict Theorem 35.3, assuming that $\text P \ne \text{NP}$. 10 | 11 | (Omit!) 12 | 13 | ## 35.2-3 14 | 15 | > Consider the following ***closest-point heuristic*** for building an approximate traveling-salesman tour whose cost function satisfies the triangle inequality. Begin with a trivial cycle consisting of a single arbitrarily chosen vertex. At each step, identify the vertex $u$ that is not on the cycle but whose distance to any vertex on the cycle is minimum. Suppose that the vertex on the cycle that is nearest $u$ is vertex $v$. Extend the cycle to include $u$ by inserting $u$ just after $v$. Repeat until all vertices are on the cycle. Prove that this heuristic returns a tour whose total cost is not more than twice the cost of an optimal tour. 16 | 17 | (Omit!) 18 | 19 | ## 35.2-4 20 | 21 | > In the ***bottleneck traveling-salesman problem***, we wish to find the hamiltonian cycle that minimizes the cost of the most costly edge in the cycle. 
Assuming that the cost function satisfies the triangle inequality, show that there exists a polynomial-time approximation algorithm with approximation ratio $3$ for this problem. ($\textit{Hint:}$ Show recursively that we can visit all the nodes in a bottleneck spanning tree, as discussed in Problem 23-3, exactly once by taking a full walk of the tree and skipping nodes, but without skipping more than two consecutive intermediate nodes. Show that the costliest edge in a bottleneck spanning tree has a cost that is at most the cost of the costliest edge in a bottleneck hamiltonian cycle.) 22 | 23 | (Omit!) 24 | 25 | ## 35.2-5 26 | 27 | > Suppose that the vertices for an instance of the traveling-salesman problem are points in the plane and that the cost $c(u, v)$ is the euclidean distance between points $u$ and $v$. Show that an optimal tour never crosses itself. 28 | 29 | (Omit!) 30 | -------------------------------------------------------------------------------- /docs/Chap35/35.3.md: -------------------------------------------------------------------------------- 1 | ## 35.3-1 2 | 3 | > Consider each of the following words as a set of letters: $\\{\text{arid}$, $\text{dash}$, $\text{drain}$, $\text{heard}$, $\text{lost}$, $\text{nose}$, $\text{shun}$, $\text{slate}$, $\text{snare}$, $\text{thread}\\}$. Show which set cover $\text{GREEDY-SET-COVER}$ produces when we break ties in favor of the word that appears first in the dictionary. 4 | 5 | (Omit!) 6 | 7 | ## 35.3-2 8 | 9 | > Show that the decision version of the set-covering problem is $\text{NP-complete}$ by reducing it from the vertex-cover problem. 10 | 11 | (Omit!) 12 | 13 | ## 35.3-3 14 | 15 | > Show how to implement $\text{GREEDY-SET-COVER}$ in such a way that it runs in time $O\Big(\sum_{S \in \mathcal F} |S|\Big)$. 16 | 17 | (Omit!) 18 | 19 | ## 35.3-4 20 | 21 | > Show that the following weaker form of Theorem 35.4 is trivially true: 22 | > 23 | > $$|\mathcal C| \le |\mathcal C^\*| \max\\{|S|: S \in \mathcal F\\}.$$ 24 | 25 | (Omit!) 26 | 27 | ## 35.3-5 28 | 29 | > $\text{GREEDY-SET-COVER}$ can return a number of different solutions, depending on how we break ties in line 4. Give a procedure $\text{BAD-SET-COVER-INSTANCE}(n)$ that returns an $n$-element instance of the set-covering problem for which, depending on how we break ties in line 4, $\text{GREEDY-SET-COVER}$ can return a number of different solutions that is exponential in $n$. 30 | 31 | (Omit!) 32 | -------------------------------------------------------------------------------- /docs/Chap35/35.4.md: -------------------------------------------------------------------------------- 1 | ## 35.4-1 2 | 3 | > Show that even if we allow a clause to contain both a variable and its negation, randomly setting each variable to 1 with probability $1 / 2$ and to $0$ with probability $1 / 2$ still yields a randomized $8 / 7$-approximation algorithm. 4 | 5 | (Omit!) 6 | 7 | ## 35.4-2 8 | 9 | > The ***MAX-CNF satisfiability problem*** is like the $\text{MAX-3-CNF}$ satisfiability problem, except that it does not restrict each clause to have exactly $3$ literals. Give a randomized $2$-approximation algorithm for the $\text{MAX-CNF}$ satisfiability problem. 10 | 11 | (Omit!) 12 | 13 | ## 35.4-3 14 | 15 | > In the $\text{MAX-CUT}$ problem, we are given an unweighted undirected graph $G = (V, E)$. We define a cut $(S, V - S)$ as in Chapter 23 and the ***weight*** of a cut as the number of edges crossing the cut. The goal is to find a cut of maximum weight. 
Suppose that for each vertex $v$, we randomly and independently place $v$ in $S$ with probability $1 / 2$ and in $V - S$ with probability $1 / 2$. Show that this algorithm is a randomized $2$-approximation algorithm. 16 | 17 | (Omit!) 18 | 19 | ## 35.4-4 20 | 21 | > Show that the constraints in line $\text{(35.19)}$ are redundant in the sense that if we remove them from the linear program in lines $\text{(35.17)}–\text{(35.20)}$, any optimal solution to the resulting linear program must satisfy $x(v) \le 1$ for each $v \in V$. 22 | 23 | (Omit!) 24 | -------------------------------------------------------------------------------- /docs/Chap35/35.5.md: -------------------------------------------------------------------------------- 1 | ## 35.5-1 2 | 3 | > Prove equation $\text{(35.23)}$. Then show that after executing line 5 of $\text{EXACT-SUBSET-SUM}$, $L_i$ is a sorted list containing every element of $P_i$ whose value is not more than $t$. 4 | 5 | (Omit!) 6 | 7 | ## 35.5-2 8 | 9 | > Using induction on $i$, prove inequality $\text{(35.26)}$. 10 | 11 | (Omit!) 12 | 13 | ## 35.5-3 14 | 15 | > Prove inequality $\text{(35.29)}$. 16 | 17 | (Omit!) 18 | 19 | ## 35.5-4 20 | 21 | > How would you modify the approximation scheme presented in this section to find a good approximation to the smallest value not less than $t$ that is a sum of some subset of the given input list? 22 | 23 | (Omit!) 24 | 25 | ## 35.5-5 26 | 27 | > Modify the $\text{APPROX-SUBSET-SUM}$ procedure to also return the subset of $S$ that sums to the value $z^\*$. 28 | 29 | (Omit!) 30 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-1.md: -------------------------------------------------------------------------------- 1 | > Suppose that we are given a set of $n$ objects, where the size $s_i$ of the $i$th object satisfies $0 < s_i < 1$. We wish to pack all the objects into the minimum number of unit-size bins. Each bin can hold any subset of the objects whose total size does not exceed $1$. 2 | > 3 | > **a.** Prove that the problem of determining the minimum number of bins required is $\text{NP-hard}$. ($\textit{Hint:}$ Reduce from the subset-sum problem.) 4 | > 5 | > The ***first-fit*** heuristic takes each object in turn and places it into the first bin that can accommodate it. Let $S = \sum_{i = 1}^n s_i$. 6 | > 7 | > **b.** Argue that the optimal number of bins required is at least $\lceil S \rceil$. 8 | > 9 | > **c.** Argue that the first-fit heuristic leaves at most one bin less than half full. 10 | > 11 | > **d.** Prove that the number of bins used by the first-fit heuristic is never more than $\lceil 2S \rceil$. 12 | > 13 | > **e.** Prove an approximation ratio of $2$ for the first-fit heuristic. 14 | > 15 | > **f.** Give an efficient implementation of the first-fit heuristic, and analyze its running time. 16 | 17 | (Omit!) 18 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-2.md: -------------------------------------------------------------------------------- 1 | > Let $G = (V, E)$ be an undirected graph. For any $k \ge 1$, define $G^{(k)}$ to be the undirected graph $(V^{(k)}, E^{(k)})$, where $V^{(k)}$ is the set of all ordered $k$-tuples of vertices from $V$ and $E^{(k)}$ is defined so that $(v_1, v_2, \dots, v_k)$ is adjacent to $(w_1, w_2, \dots, w_k)$ if and only if for $i = 1, 2, \dots, k$, either vertex $v_i$ is adjacent to $w_i$ in $G$, or else $v_i = w_i$. 
2 | > 3 | > **a.** Prove that the size of the maximum clique in $G^{(k)}$ is equal to the $k$th power of the size of the maximum clique in $G$. 4 | > 5 | > **b.** Argue that if there is an approximation algorithm that has a constant approximation ratio for finding a maximum-size clique, then there is a polynomial-time approximation scheme for the problem. 6 | 7 | (Omit!) 8 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-3.md: -------------------------------------------------------------------------------- 1 | > Suppose that we generalize the set-covering problem so that each set $S_i$ in the family $\mathcal F$ has an associated weight $w_i$ and the weight of a cover $\mathcal C$ is $\sum_{S_i \in \mathcal C} w_i$. We wish to determine a minimum-weight cover. (Section 35.3 handles the case in which $w_i = 1$ for all $i$.) 2 | > 3 | > Show how to generalize the greedy set-covering heuristic in a natural manner to provide an approximate solution for any instance of the weighted set-covering problem. Show that your heuristic has an approximation ratio of $H(d)$, where $d$ is the maximum size of any set $S_i$. 4 | 5 | (Omit!) 6 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-4.md: -------------------------------------------------------------------------------- 1 | > Recall that for an undirected graph $G$, a matching is a set of edges such that no two edges in the set are incident on the same vertex. In Section 26.3, we saw how to find a maximum matching in a bipartite graph. In this problem, we will look at matchings in undirected graphs in general (i.e., the graphs are not required to be bipartite). 2 | > 3 | > **a.** A ***maximal matching*** is a matching that is not a proper subset of any other matching. Show that a maximal matching need not be a maximum matching by exhibiting an undirected graph $G$ and a maximal matching $M$ in $G$ that is not a maximum matching. ($\textit{Hint:}$ You can find such a graph with only four vertices.) 4 | > 5 | > **b.** Consider an undirected graph $G = (V, E)$. Give an $O(E)$-time greedy algorithm to find a maximal matching in $G$. 6 | > 7 | > In this problem, we shall concentrate on a polynomial-time approximation algorithm for maximum matching. Whereas the fastest known algorithm for maximum matching takes superlinear (but polynomial) time, the approximation algorithm here will run in linear time. You will show that the linear-time greedy algorithm for maximal matching in part (b) is a $2$-approximation algorithm for maximum matching. 8 | > 9 | > **c.** Show that the size of a maximum matching in $G$ is a lower bound on the size of any vertex cover for $G$. 10 | > 11 | > **d.** Consider a maximal matching $M$ in $G = (V, E)$. Let 12 | > 13 | > $$T = \\{v \in V: \text{ some edge in } M \text{ is incident on } v\\}.$$ 14 | > 15 | > What can you say about the subgraph of $G$ induced by the vertices of $G$ that are not in $T$? 16 | > 17 | > **e.** Conclude from part (d) that $2|M|$ is the size of a vertex cover for $G$. 18 | > 19 | > **f.** Using parts (c) and (e), prove that the greedy algorithm in part (b) is a $2$-approximation algorithm for maximum matching. 20 | 21 | (Omit!) 
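The greedy routine of part (b) is simple enough to sketch concretely. A minimal sketch assuming an edge-list representation with vertices numbered $0, \ldots, n - 1$ (the representation is our assumption, not the problem's):

```cpp
#include <utility>
#include <vector>

// Problem 35-4(b) sketch: greedy maximal matching in O(V + E) time.
std::vector<std::pair<int, int>>
maximalMatching(int n, const std::vector<std::pair<int, int>>& edges)
{
    std::vector<bool> matched(n, false);
    std::vector<std::pair<int, int>> M;
    for (const auto& [u, v] : edges) {
        if (!matched[u] && !matched[v]) {   // both endpoints free: take it
            matched[u] = matched[v] = true;
            M.push_back({u, v});
        }
    }
    // M is maximal: every edge not in M has a matched endpoint, so no
    // edge can be added to M without sharing a vertex.
    return M;
}
```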
22 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-5.md: -------------------------------------------------------------------------------- 1 | > In the ***parallel-machine-scheduling problem***, we are given $n$ jobs, $J_1, J_2, \dots, J_n$, where each job $J_k$ has an associated nonnegative processing time of $p_k$. We are also given $m$ identical machines, $M_1, M_2, \dots, M_m$. Any job can run on any machine. A ***schedule*** specifies, for each job $J_k$, the machine on which it runs and the time period during which it runs. Each job $J_k$ must run on some machine $M_i$ for $p_k$ consecutive time units, and during that time period no other job may run on $M_i$. Let $C_k$ denote the ***completion time*** of job $J_k$, that is, the time at which job $J_k$ completes processing. Given a schedule, we define $C_\max = \max_{1 \le j \le n} C_j$ to be the ***makespan*** of the schedule. The goal is to find a schedule whose makespan is minimum. 2 | > 3 | > For example, suppose that we have two machines $M_1$ and $M_2$ and that we have four jobs $J_1, J_2, J_3, J_4$, with $p_1 = 2$, $p_2 = 12$, $p_3 = 4$, and $p_4 = 5$. Then one possible schedule runs, on machine $M_1$, job $J_1$ followed by job $J_2$, and on machine $M_2$, it runs job $J_4$ followed by job $J_3$. For this schedule, $C_1 = 2$, $C_2 = 14$, $C_3 = 9$, $C_4 = 5$, and $C_\max = 14$. An optimal schedule runs $J_2$ on machine $M_1$, and it runs jobs $J_1$, $J_3$, and $J_4$ on machine $M_2$. For this schedule, $C_1 = 2$, $C_2 = 12$, $C_3 = 6$, $C_4 = 11$, and $C_\max = 12$. 4 | > 5 | > Given a parallel-machine-scheduling problem, we let $C_\max^\*$ denote the makespan of an optimal schedule. 6 | > 7 | > **a.** Show that the optimal makespan is at least as large as the greatest processing time, that is, 8 | > 9 | > $$C_\max^\* \ge \max_{1 \le k \le n} p_k.$$ 10 | > 11 | > **b.** Show that the optimal makespan is at least as large as the average machine load, that is, 12 | > 13 | > $$C_\max^\* \ge \frac 1 m \sum_{1 \le k \le n} p_k.$$ 14 | > 15 | > Suppose that we use the following greedy algorithm for parallel machine scheduling: whenever a machine is idle, schedule any job that has not yet been scheduled. 16 | > 17 | > **c.** Write pseudocode to implement this greedy algorithm. What is the running time of your algorithm? 18 | > 19 | > **d.** For the schedule returned by the greedy algorithm, show that 20 | > 21 | > $$C_\max \le \frac 1 m \sum_{1 \le k \le n} p_k + \max_{1 \le k \le n} p_k.$$ 22 | > 23 | > Conclude that this algorithm is a polynomial-time $2$-approximation algorithm. 24 | 25 | (Omit!) 26 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-6.md: -------------------------------------------------------------------------------- 1 | > Let $G = (V, E)$ be an undirected graph with distinct edge weights $w(u, v)$ on each edge $(u, v) \in E$. For each vertex $v \in V$, let $\max(v) = \max_{(u, v) \in E} \\{w(u, v)\\}$ be the maximum-weight edge incident on that vertex. Let $S_G = \\{\max(v): v \in V\\}$ be the set of maximum-weight edges incident on each vertex, and let $T_G$ be the maximum-weight spanning tree of $G$, that is, the spanning tree of maximum total weight. For any subset of edges $E' \subseteq E$, define $w(E') = \sum_{(u, v) \in E'} w(u, v)$. 2 | > 3 | > **a.** Give an example of a graph with at least $4$ vertices for which $S_G = T_G$. 
4 | > 5 | > **b.** Give an example of a graph with at least $4$ vertices for which $S_G \ne T_G$. 6 | > 7 | > **c.** Prove that $S_G \subseteq T_G$ for any graph $G$. 8 | > 9 | > **d.** Prove that $w(T_G) \ge w(S_G) / 2$ for any graph $G$. 10 | > 11 | > **e.** Give an $O(V + E)$-time algorithm to compute a $2$-approximation to the maximum spanning tree. 12 | 13 | (Omit!) 14 | -------------------------------------------------------------------------------- /docs/Chap35/Problems/35-7.md: -------------------------------------------------------------------------------- 1 | > Recall the knapsack problem from Section 16.2. There are $n$ items, where the $i$th item is worth $v_i$ dollars and weighs $w_i$ pounds. We are also given a knapsack that can hold at most $W$ pounds. Here, we add the further assumptions that each weight $w_i$ is at most $W$ and that the items are indexed in monotonically decreasing order of their values: $v_1 \ge v_2 \ge \cdots \ge v_n$. 2 | > 3 | > In the 0-1 knapsack problem, we wish to find a subset of the items whose total weight is at most $W$ and whose total value is maximum. The fractional knapsack problem is like the 0-1 knapsack problem, except that we are allowed to take a fraction of each item, rather than being restricted to taking either all or none of each item. If we take a fraction $x_i$ of item $i$, where $0 \le x_i \le 1$, we contribute $x_iw_i$ to the weight of the knapsack and receive value $x_iv_i$. Our goal is to develop a polynomial-time $2$-approximation algorithm for the 0-1 knapsack problem. 4 | > 5 | > In order to design a polynomial-time algorithm, we consider restricted instances of the 0-1 knapsack problem. Given an instance $I$ of the knapsack problem, we form restricted instances $I_j$, for $j = 1, 2, \dots, n$, by removing items $1, 2, \dots, j - 1$ and requiring the solution to include item $j$ (all of item $j$ in both the fractional and 0-1 knapsack problems). No items are removed in instance $I_1$. For instance $I_j$, let $P_j$ denote an optimal solution to the 0-1 problem and $Q_j$ denote an optimal solution to the fractional problem. 6 | > 7 | > **a.** Argue that an optimal solution to instance $I$ of the 0-1 knapsack problem is one of $\\{P_1, P_2, \dots, P_n\\}$. 8 | > 9 | > **b.** Prove that we can find an optimal solution $Q_j$ to the fractional problem for instance $I_j$ by including item $j$ and then using the greedy algorithm in which at each step, we take as much as possible of the unchosen item in the set $\\{j + 1, j + 2, \dots, n\\}$ with maximum value per pound $v_i / w_i$. 10 | > 11 | > **c.** Prove that we can always construct an optimal solution $Q_j$ to the fractional problem for instance $I_j$ that includes at most one item fractionally. That is, for all items except possibly one, we either include all of the item or none of the item in the knapsack. 12 | > 13 | > **d.** Given an optimal solution $Q_j$ to the fractional problem for instance $I_j$, form solution $R_j$ from $Q_j$ by deleting any fractional items from $Q_j$. Let $v(S)$ denote the total value of items taken in a solution $S$. Prove that $v(R_j) \ge v(Q_j) / 2 \ge v(P_j) / 2$. 14 | > 15 | > **e.** Give a polynomial-time algorithm that returns a maximum-value solution from the set $\\{R_1, R_2, \dots, R_n\\}$, and prove that your algorithm is a polynomial-time $2$-approximation algorithm for the 0-1 knapsack problem. 16 | 17 | (Omit!) 
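For part (e), a closely related standard routine achieves the same factor-$2$ guarantee and is easy to sketch: take the better of the greedy-by-density packing and the single most valuable item. This is a sketch of that standard variant, not the exact $\\{R_1, R_2, \dots, R_n\\}$ construction; 64-bit integer values and weights (each $w_i \le W$) are assumed, and the cross-multiplied density comparison can overflow for extreme inputs.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Item { std::int64_t value, weight; };

// 2-approximation for 0-1 knapsack: max(greedy-by-density, best item).
std::int64_t approxKnapsack(std::vector<Item> items, std::int64_t W)
{
    // Sort by value/weight density, descending (division-free compare).
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value * b.weight > b.value * a.weight;
    });
    std::int64_t greedy = 0, used = 0, best = 0;
    for (const Item& it : items) {
        best = std::max(best, it.value);
        if (used + it.weight <= W) {        // item still fits: take it whole
            used += it.weight;
            greedy += it.value;
        }
    }
    return std::max(greedy, best);          // at least OPT / 2
}
```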
18 | -------------------------------------------------------------------------------- /docs/assets/favicon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/assets/favicon.png -------------------------------------------------------------------------------- /docs/css/mathjax.css: -------------------------------------------------------------------------------- 1 | /* .md-typeset .MathJax_CHTML { 2 | overflow-y: hidden; 3 | } */ 4 | /* The math equations behave awkwardly in Chrome now */ -------------------------------------------------------------------------------- /docs/img/10.4-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/10.4-1.png -------------------------------------------------------------------------------- /docs/img/12.1-1-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/12.1-1-1.png -------------------------------------------------------------------------------- /docs/img/12.1-1-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/12.1-1-2.png -------------------------------------------------------------------------------- /docs/img/12.1-1-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/12.1-1-3.png -------------------------------------------------------------------------------- /docs/img/12.1-1-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/12.1-1-4.png -------------------------------------------------------------------------------- /docs/img/12.1-1-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/12.1-1-5.png -------------------------------------------------------------------------------- /docs/img/13.1-1-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.1-1-1.png -------------------------------------------------------------------------------- /docs/img/13.1-1-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.1-1-2.png -------------------------------------------------------------------------------- /docs/img/13.1-1-3.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.1-1-3.png -------------------------------------------------------------------------------- /docs/img/13.1-1-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.1-1-4.png -------------------------------------------------------------------------------- /docs/img/13.1-2-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.1-2-1.png -------------------------------------------------------------------------------- /docs/img/13.1-2-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.1-2-2.png -------------------------------------------------------------------------------- /docs/img/13.3-2-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-2-1.png -------------------------------------------------------------------------------- /docs/img/13.3-2-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-2-2.png -------------------------------------------------------------------------------- /docs/img/13.3-2-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-2-3.png -------------------------------------------------------------------------------- /docs/img/13.3-2-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-2-4.png -------------------------------------------------------------------------------- /docs/img/13.3-2-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-2-5.png -------------------------------------------------------------------------------- /docs/img/13.3-2-6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-2-6.png -------------------------------------------------------------------------------- /docs/img/13.3-3-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-3-1.png -------------------------------------------------------------------------------- /docs/img/13.3-3-2.png: 
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.3-3-2.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-1.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-2.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-3.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-4.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-5.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-6.png
--------------------------------------------------------------------------------
/docs/img/13.4-3-7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-3-7.png
--------------------------------------------------------------------------------
/docs/img/13.4-7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/13.4-7.png
--------------------------------------------------------------------------------
/docs/img/18.3-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/18.3-1-1.png
--------------------------------------------------------------------------------
/docs/img/18.3-1-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/18.3-1-2.png
--------------------------------------------------------------------------------
/docs/img/18.3-1-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/18.3-1-3.png
--------------------------------------------------------------------------------
/docs/img/18.3-1-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/18.3-1-4.png
--------------------------------------------------------------------------------
/docs/img/21.3-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/21.3-1.png
--------------------------------------------------------------------------------
/docs/img/6.4-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.4-1.png
--------------------------------------------------------------------------------
/docs/img/6.5-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-1-1.png
--------------------------------------------------------------------------------
/docs/img/6.5-1-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-1-2.png
--------------------------------------------------------------------------------
/docs/img/6.5-1-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-1-3.png
--------------------------------------------------------------------------------
/docs/img/6.5-1-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-1-4.png
--------------------------------------------------------------------------------
/docs/img/6.5-1-5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-1-5.png
--------------------------------------------------------------------------------
/docs/img/6.5-2-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-2-1.png
--------------------------------------------------------------------------------
/docs/img/6.5-2-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-2-2.png
--------------------------------------------------------------------------------
/docs/img/6.5-2-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-2-3.png
--------------------------------------------------------------------------------
/docs/img/6.5-2-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-2-4.png
--------------------------------------------------------------------------------
/docs/img/6.5-2-5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuwei881/Introduction_to_Algorithms_result/9ee91994710eecfe70839470435ac9da3a9f9743/docs/img/6.5-2-5.png
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # Solutions to **Introduction to Algorithms** *Third Edition*
2 | 
3 | ## Getting Started
4 | 
5 | This [website](https://walkccc.github.io/CLRS/) contains nearly complete solutions to the classic textbook [**Introduction to Algorithms** *Third Edition*](https://mitpress.mit.edu/books/introduction-algorithms-third-edition), often called the bible of the field, written by [Thomas H. Cormen](https://mitpress.mit.edu/contributors/thomas-h-cormen), [Charles E. Leiserson](https://mitpress.mit.edu/contributors/charles-e-leiserson), [Ronald L. Rivest](https://mitpress.mit.edu/contributors/ronald-l-rivest), and [Clifford Stein](https://mitpress.mit.edu/contributors/clifford-stein).
6 | 
7 | I hope these reorganized solutions help more people (myself included) study algorithms. Since the solutions are written in [Markdown (.md)](https://en.wikipedia.org/wiki/Markdown) files, they are much more readable on portable devices.
8 | 
9 | *"Many a little makes a mickle."*
10 | 
11 | ## Contributors
12 | 
13 | Thanks to the Instructor's Manual by [Thomas H. Cormen](https://mitpress.mit.edu/contributors/thomas-h-cormen), as well as [@skanev](https://github.com/skanev), [@CyberZHG](https://github.com/CyberZHG), [@yinyanghu](https://github.com/yinyanghu), @ajl213, and others.
14 | 
15 | Special thanks to [@JeffreyCA](https://github.com/JeffreyCA), who fixed MathJax rendering on iOS Safari in [#26](https://github.com/walkccc/CLRS/pull/26).
16 | 
17 | Please don't hesitate to give me feedback if any solution needs adjustment. You can press the pencil icon in the upper-right corner to edit the contents, or [open an issue](https://github.com/walkccc/CLRS/issues/new) in [my repository](https://github.com/walkccc/CLRS/).
18 | 
19 | ## Working on the following exercises
20 | 
21 | [18.2-1](https://walkccc.github.io/CLRS/Chap18/18.2/#182-1), [19.2-1](https://walkccc.github.io/CLRS/Chap19/19.2/#192-1).
22 | 
23 | I will continue working through Part VII, Selected Topics.
24 | 
25 | ## How I generate this website
26 | 
27 | I use the static site generator [MkDocs](http://www.mkdocs.org/) and the beautiful theme [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/) to build this website!
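The repository's actual `mkdocs.yml` is not included in this listing, but a minimal configuration wiring the Material theme together with the `docs/css/mathjax.css` and `docs/js/mathjax.js` files shown in this listing might look like the following sketch. Every value here, including the `site_name` and the MathJax CDN URL, is an illustrative assumption rather than the repository's real configuration.

```yaml
# Hypothetical mkdocs.yml sketch; treat all values as assumptions.
site_name: Solutions to Introduction to Algorithms
theme:
  name: material            # the Material for MkDocs theme mentioned above
extra_css:
  - css/mathjax.css         # paths are relative to the docs/ directory
extra_javascript:
  - js/mathjax.js           # the config object must load before MathJax itself
  - https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-MML-AM_CHTML
```

With a configuration like this, `mkdocs serve` previews the site locally and `mkdocs gh-deploy` (as in the makefile at the end of this listing) builds and publishes it.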
28 | 
29 | Since [KaTeX](https://khan.github.io/KaTeX/) does not support some of the LaTeX equations used here, I use [MathJax](https://www.mathjax.org/) to render the math equations on this website.
30 | 
31 | I also add [overflow-x: auto](https://www.w3schools.com/cssref/css3_pr_overflow-x.asp) so that wide display equations can be scrolled horizontally on mobile devices instead of overflowing the page.
32 | 
33 | ## More Information
34 | 
35 | For more information, please visit [**my GitHub site**](https://github.com/walkccc).
36 | 
37 | Updated to this new site on April 13, 2018 at 04:48 [(GMT+8)](https://time.is/GMT+8).
38 | 
--------------------------------------------------------------------------------
/docs/js/mathjax.js:
--------------------------------------------------------------------------------
1 | window.MathJax = {
2 |   tex2jax: {
3 |     inlineMath: [["$","$"]],     // delimiters for inline math
4 |     displayMath: [["$$", "$$"]]  // delimiters for display math
5 |   },
6 |   TeX: {
7 |     TagSide: "right",            // place equation tags on the right-hand side
8 |     TagIndent: ".8em",
9 |     MultLineWidth: "85%",
10 |     unicode: {
11 |       fonts: "STIXGeneral,'Arial Unicode MS'"
12 |     }
13 |   },
14 |   CommonHTML: {
15 |     scale: 90                    // render math at 90% of the surrounding font size
16 |   },
17 |   showProcessingMessages: false, // suppress MathJax's status messages
18 |   messageStyle: "none"
19 | };
--------------------------------------------------------------------------------
/makefile:
--------------------------------------------------------------------------------
1 | run:
2 | 	mkdocs gh-deploy              # build the site and push it to the gh-pages branch
3 | 	git add .
4 | 	git commit -m 'update master'
5 | 	git push origin master
6 | 	git checkout gh-pages
7 | 	rm -rf site                   # remove the leftover local build directory
8 | 	git checkout master
9 | 
--------------------------------------------------------------------------------
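As a footnote to the `overflow-x: auto` remark in `docs/index.md` above: `docs/css/mathjax.css` in this listing contains only a commented-out rule, so the following is a hedged sketch of what such a scrolling rule might look like. The `.MathJax_Display` selector is an assumption based on MathJax 2.x's standard class names, not something taken from the repository.

```css
/* Sketch only: let wide display equations scroll horizontally on narrow
   screens. The selector is assumed (MathJax 2.x wraps display math in a
   .MathJax_Display element); the repository's own stylesheet ships a
   related rule commented out. */
.md-typeset .MathJax_Display {
  overflow-x: auto;   /* scroll instead of overflowing the page */
  overflow-y: hidden; /* avoid a spurious vertical scrollbar */
}
```

Scrolling the equation container is generally preferable to scaling the math down, since shrunken glyphs quickly become unreadable on phones.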