9 | collapse: section
10 | edit: https://github.com/dlsun/probability/edit/master/%s
11 | download: ["pdf", "epub"]
12 | bookdown::pdf_book:
13 | includes:
14 | in_header: preamble.tex
15 | latex_engine: xelatex
16 | citation_package: natbib
17 | keep_tex: yes
18 | bookdown::epub_book: default
19 |
--------------------------------------------------------------------------------
/autocorrelation.Rmd:
--------------------------------------------------------------------------------
1 | # (PART) Random Signal Processing {-}
2 |
3 | # Autocorrelation Function {#autocorrelation}
4 |
5 | ## Theory {-}
6 |
7 | In this lesson, we introduce another summary of a random process, one that is closely
8 | related to the mean and autocovariance functions. This function plays a
9 | crucial role in signal processing.
10 |
11 | ```{definition autocorrelation, name="Autocorrelation Function"}
12 | The **autocorrelation function** $R_X(s, t)$ of a random process $\{ X(t) \}$
13 | is a function of _two_ times $s$ and $t$. It specifies
14 | \begin{equation}
15 | R_X(s, t) \overset{\text{def}}{=} E[X(s)X(t)].
16 | (\#eq:autocorrelation)
17 | \end{equation}
18 |
19 | By the shortcut formula for covariance \@ref(eq:cov-shortcut),
20 | $E[X(s) X(t)] = \text{Cov}[X(s), X(t)] + E[X(s)] E[X(t)]$, so the autocorrelation
21 | function can be related to the mean function \@ref(eq:mean-function)
22 | and autocovariance function:
23 | \[ R_X(s, t) = C_X(s, t) + \mu_X(s) \mu_X(t). \]
24 |
25 | For a stationary process (Definition \@ref(def:stationary)), the autocorrelation
26 | function depends only on the difference between the times $\tau = s - t$:
27 | \[ R_X(\tau) = C_X(\tau) + \mu_X^2. \]
28 |
29 | For a discrete-time process, we write the autocorrelation function with square brackets:
30 | \[ R_X[m, n] \overset{\text{def}}{=} E[X[m] X[n]] = C_X[m, n] + \mu_X[m] \mu_X[n]. \]
31 | ```
32 |
33 | Let's calculate the autocorrelation function of some random processes.
34 |
35 | ```{example random-amplitude-autocorrelation, name="Random Amplitude Process"}
36 | Consider the random amplitude process
37 | \begin{equation}
38 | X(t) = A\cos(2\pi f t)
39 | (\#eq:random-amplitude)
40 | \end{equation}
41 | introduced in Example \@ref(exm:random-amplitude).
42 |
43 | Its autocorrelation function is
44 | \begin{equation}
45 | R_X(s, t) = C_X(s, t) + \mu_X(s) \mu_X(t) = (1.25 + 2.5^2) \cos(2\pi f s) \cos(2\pi f t).
46 | (\#eq:random-amplitude-autocorrelation)
47 | \end{equation}
48 | ```
49 |
50 | ```{example poisson-process-autocorrelation, name="Poisson Process"}
51 | Consider the Poisson process
52 | $\{ N(t); t \geq 0 \}$ of rate $\lambda$,
53 | defined in Example \@ref(exm:poisson-process-as-process).
54 |
55 | Its autocorrelation function is
56 | \begin{equation}
57 |   R_N(s, t) = C_N(s, t) + \mu_N(s) \mu_N(t) = \lambda\min(s, t) + \lambda^2 s t.
58 | (\#eq:poisson-process-autocorrelation)
59 | \end{equation}
60 | ```
61 |
62 | ```{example white-noise-autocorrelation, name="White Noise"}
63 | Consider the white noise process $\{ Z[n] \}$ defined in Example \@ref(exm:white-noise),
64 | which consists of i.i.d. random variables with mean $\mu = E[Z[n]]$ and
65 | variance $\sigma^2 \overset{\text{def}}{=} \text{Var}[Z[n]]$.
66 |
67 | We showed in Example \@ref(exm:white-noise-stationary) that this process is stationary,
68 | so its autocorrelation function depends only on $k = m - n$:
69 | \begin{equation}
70 | R_Z[k] = C_Z[k] + \mu_Z^2 = \sigma^2\delta[k] + \mu^2.
71 | (\#eq:white-noise-autocorrelation)
72 | \end{equation}
73 | ```
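
We can check a formula like this by simulation. Below is a quick sketch in R, assuming (purely for illustration) Gaussian white noise with $\mu = 1$ and $\sigma = 2$; the two estimates should be close to $R_Z[0] = \sigma^2 + \mu^2 = 5$ and $R_Z[1] = \mu^2 = 1$.

```{r}
# Monte Carlo check of the white noise autocorrelation function
# (illustration only: Gaussian white noise with mu = 1, sigma = 2)
set.seed(1)
z <- rnorm(100000, mean=1, sd=2)
mean(z^2)                          # estimate of R_Z[0] = sigma^2 + mu^2
mean(head(z, -1) * tail(z, -1))    # estimate of R_Z[1] = mu^2
```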
74 |
75 | ```{example random-walk-autocorrelation, name="Random Walk"}
76 | Consider the random walk process $\{ X[n]; n\geq 0 \}$ from
77 | Example \@ref(exm:random-walk-process).
78 |
79 | Its autocorrelation function is:
80 | \begin{equation}
81 |   R_X[m, n] = C_X[m, n] + \underbrace{\mu_X[m]}_{0} \mu_X[n] = \sigma^2 \min(m, n).
82 | (\#eq:random-walk-autocorrelation)
83 | \end{equation}
84 | ```
85 |
86 | ```{example gaussian-process-autocorrelation}
87 | Consider the stationary process $\{X(t)\}$ from Example \@ref(exm:gaussian-process),
88 | whose mean and autocovariance functions are
89 | \begin{align*}
90 | \mu_X &= 2 & C_X(\tau) &= 5e^{-3\tau^2}.
91 | \end{align*}
92 |
93 | Its autocorrelation function likewise depends on $\tau = s - t$ only:
94 | \begin{equation}
95 | R_X(\tau) = C_X(\tau) + \mu_X^2 = 5 e^{-3\tau^2} + 4.
96 | (\#eq:gaussian-process-autocorrelation)
97 | \end{equation}
98 | ```
99 |
100 |
101 | ## Essential Practice {-}
102 |
103 | For these questions, you may want to refer to the mean and
104 | autocovariance functions you calculated in Lessons \@ref(mean-function)
105 | and \@ref(cov-function).
106 |
107 | 1. Consider a grain of pollen suspended in water, whose horizontal position can be modeled by
108 | Brownian motion $\{B(t); t \geq 0\}$ with parameter $\alpha=4 \text{mm}^2/\text{s}$, as in Example \@ref(exm:pollen).
109 | Calculate the autocorrelation function of $\{ B(t); t \geq 0 \}$.
110 |
111 | 2. Radioactive particles hit a Geiger counter according to a Poisson process
112 | at a rate of $\lambda=0.8$ particles per second. Let $\{ N(t); t \geq 0 \}$ represent this Poisson process.
113 |
114 | Define the new process $\{ D(t); t \geq 3 \}$ by
115 | \[ D(t) = N(t) - N(t - 3). \]
116 | This process represents the number of particles that hit the Geiger counter in the
117 | last 3 seconds. Calculate the autocorrelation function of $\{ D(t); t \geq 3 \}$.
118 |
119 | 3. Consider the moving average process $\{ X[n] \}$ of Example \@ref(exm:ma1), defined by
120 | \[ X[n] = 0.5 Z[n] + 0.5 Z[n-1], \]
121 | where $\{ Z[n] \}$ is a sequence of i.i.d. standard normal random variables.
122 | Calculate the autocorrelation function of $\{ X[n] \}$.
123 |
124 | 4. Let $\Theta$ be a $\text{Uniform}(a=-\pi, b=\pi)$ random variable, and let $f$ be a constant. Define the random phase process $\{ X(t) \}$ by
125 | \[ X(t) = \cos(2\pi f t + \Theta). \]
126 | Calculate the autocorrelation function of $\{ X(t) \}$.
127 |
128 | 5. Let $\{ X(t) \}$ be a continuous-time random process with mean function
129 | $\mu_X(t) = -1$ and autocovariance function $C_X(s, t) = 2e^{-|s - t|/3}$.
130 | Calculate the autocorrelation function of $\{ X(t) \}$.
131 |
--------------------------------------------------------------------------------
/book.bib:
--------------------------------------------------------------------------------
1 | @Book{xie2015,
2 | title = {Dynamic Documents with {R} and knitr},
3 | author = {Yihui Xie},
4 | publisher = {Chapman and Hall/CRC},
5 | address = {Boca Raton, Florida},
6 | year = {2015},
7 | edition = {2nd},
8 | note = {ISBN 978-1498716963},
9 | url = {http://yihui.name/knitr/},
10 | }
11 |
--------------------------------------------------------------------------------
/brownian-motion.Rmd:
--------------------------------------------------------------------------------
1 | # Brownian Motion {#brownian-motion}
2 |
3 |
4 |
5 |
6 |
7 | The videos above discussed Brownian motion of particles moving in two or three dimensions; for simplicity, we
8 | will only consider Brownian motion in one dimension. This can be used to model, among other things, a particle
9 | moving along a line.
10 |
11 | Shown below are 5 sample paths of Brownian motion.
12 |
13 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
14 | set.seed(1)
15 | colors = c("black", "red", "blue", "orange", "green")
16 | plot(NA, NA,
17 | xaxt="n", yaxt="n", bty="n",
18 | xlim=c(0, 3), ylim=c(-3, 3),
19 | xlab="Time (t)", ylab="B(t)")
20 | axis(1, pos=0)
21 | axis(2, pos=0)
22 | for(i in 1:5) {
23 | n <- 300
24 | t <- 0:n
25 | x <- c(0, cumsum(rnorm(n)))  # a random walk with standard normal steps
26 |
27 | lines(t / 100, x / sqrt(100), type="l", col=colors[i])  # rescale: time by 1/100, steps by 1/sqrt(100)
28 | }
29 | ```
30 |
31 | ```{definition bm, name="Brownian Motion"}
32 | **Brownian motion** $\{ B(t); t \geq 0 \}$ is a continuous-time process with the
33 | following properties:
34 |
35 | 1. The "particle" starts at the origin at $t=0$: $B(0) = 0$.
36 | 2. The displacement over any interval $(t_0, t_1)$, denoted by $B(t_1) - B(t_0)$, follows a
37 | $\text{Normal}(\mu=0, \sigma=\sqrt{\alpha(t_1 - t_0)})$ distribution.
38 | 3. **Independent increments:** The displacements over non-overlapping intervals are independent.
39 |
40 | The parameter $\alpha$ controls the scale of Brownian motion.
41 | ```
42 |
43 | These three properties allow us to calculate most probabilities of interest.
44 |
45 | ```{example pollen, name="Motion of a Pollen Grain"}
46 | The horizontal position of a grain of pollen suspended in water can be modeled by Brownian motion
47 | with scale $\alpha = 4 \text{mm}^2/\text{s}$.
48 |
49 | a. What is the probability the pollen grain moves by more than 10 mm (in the horizontal direction)
50 | between 1 and 4 seconds?
51 | b. If the particle is more than 5 mm away from the origin after 1 second, what is the probability the pollen grain
52 | moves by more than 10 mm between 1 and 4 seconds?
53 | ```
54 |
55 | ```{solution}
56 | .
57 |
58 | a. The displacement of the particle between 1 and 4 seconds is represented by $B(4) - B(1)$, which
59 | we know follows a $\text{Normal}(\mu=0, \sigma=\underbrace{\sqrt{4(4-1)}}_{\sqrt{12}})$ distribution.
60 |
61 | Since the question does not indicate whether the displacement is positive or negative, we’re really interested in determining $P(|B(4) - B(1)| > 10)$. Because the distribution of
62 | $B(4) - B(1)$ is symmetric about $0$, this is the same as $2 P(B(4) - B(1) > 10)$ or
63 | $2 P(B(4) - B(1) < -10)$.
64 | \begin{align*}
65 | P(|B(4) - B(1)| > 10) &= 2 P(B(4) - B(1) > 10) \\
66 | &= 2 P\big(\underbrace{\frac{B(4) - B(1) - 0}{\sqrt{12}}}_Z > \frac{10 - 0}{\sqrt{12}}\big) \\
67 | &= 2 (1 - \Phi(\frac{10}{\sqrt{12}})) \\
68 | &\approx .0039.
69 | \end{align*}
70 | b. The interval $(0, 1)$ does not overlap with the interval $(1, 4)$. Therefore, by the independent
71 | increments property, $B(1) = B(1) - B(0)$ is independent of $B(4) - B(1)$, and
72 | \[ P\Big(|B(4) - B(1)| > 10\ \big|\ |B(1)| > 5\Big) = P(|B(4) - B(1)| > 10) = .0039. \]
73 | ```
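
The normal probability in part (a) is easy to evaluate in R; this is just a check of the arithmetic above, using the fact that $B(4) - B(1)$ has standard deviation $\sqrt{12}$.

```{r}
# P(|B(4) - B(1)| > 10) for Brownian motion with alpha = 4
2 * (1 - pnorm(10 / sqrt(12)))
```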
74 |
75 | ### Brownian Motion as the Limit of a Random Walk {-}
76 |
77 | Brownian motion is the extension of a (discrete-time) random walk $\{ X[n]; n \geq 0 \}$
78 | to a continuous-time process $\{ B(t); t \geq 0 \}$. The recipe is as follows:
79 |
80 | 1. Suppose the steps of the random walk happen at intervals of $\Delta t$ seconds. That is,
81 | \[ X(t) = X\left[\frac{t}{\Delta t}\right]. \]
82 | 2. We let $\Delta t \to 0$. Since each step happens so quickly, it does not make sense to take steps of
83 | the same size. We should also rescale the size of the steps to be commensurate with how quickly they
84 | happen. The right rescaling is:
85 | \[ B(t) = \sqrt{\Delta t} X(t). \]
86 |
87 | If we start with a discrete-time random walk and rescale time and step sizes in this way, we get
88 | Brownian motion. The animation below illustrates this.
89 |
90 |
91 |
92 |
93 | ## Essential Practice {-}
94 |
95 | 1. Brownian motion is used in finance to model short-term asset price fluctuation. Suppose the price (in dollars) of a barrel of crude oil varies according to a Brownian motion process; specifically, suppose the change in a barrel’s price
96 | $t$ days from now is modeled by Brownian motion $B(t)$ with $\alpha = .15$.
97 |
98 | a. Find the probability that the price of a barrel of crude oil has changed by more than \$1, in either direction, after 5 days.
99 | b. Repeat (a) for a time interval of 10 days.
100 | c. Given that the price has increased by \$1 after one week (7 days), what is the probability that the price has increased by at least \$2 after two weeks (14 days)?
101 |
--------------------------------------------------------------------------------
/cdf.Rmd:
--------------------------------------------------------------------------------
1 | # Cumulative Distribution Functions {#cdf}
2 |
3 | ## Theory {-}
4 |
5 | The p.m.f. \@ref(def:pmf-function) is one way to describe a random variable, but
6 | it is not the only way. The **cumulative distribution function** is a different
7 | representation that contains the same information as the p.m.f.
8 |
9 | ```{definition cdf-function, name="Cumulative Distribution Function"}
10 | The **cumulative distribution function** (c.d.f.) is a function that returns the probability
11 | that a random variable is _less than or equal_ to a particular value:
12 | \begin{equation}
13 | F(x) \overset{\text{def}}{=} P(X \leq x).
14 | (\#eq:cdf)
15 | \end{equation}
16 | It is called "cumulative" because it includes all the probability up to (and including) $x$.
17 | ```
18 |
19 | ```{example, name="Calculating the C.D.F."}
20 | Let's calculate the c.d.f. of $X$, the number of diamonds among the community cards, using the
21 | p.m.f. that we calculated in Lesson \@ref(rv). Note that
22 | $F(x)$ is the sum of all the probabilities up to $x$. So, for example,
23 | \begin{align*}
24 | F(2.8) = P(X \leq 2.8) &= f(0) + f(1) + f(2) \\
25 | &= .2546 + .4243 + .2496 \\
26 | &= .9284.
27 | \end{align*}
28 | There is no simple formula for $F(x)$. However, we can describe it piecewise.
29 | \[ F(x) = \begin{cases} 0 & x < 0 \\
30 | .2546 & 0 \leq x < 1 \\
31 | .6788 & 1 \leq x < 2 \\
32 | .9284 & 2 \leq x < 3 \\
33 | .9926 & 3 \leq x < 4 \\
34 | .9997 & 4 \leq x < 5 \\
35 | 1.0 & x \geq 5
36 | \end{cases}. \]
37 | Note that the c.d.f. of $X$ has the following properties:
38 |
39 | - It is constant between integers. Because it is impossible for the random variable $X$
40 | to assume non-integer values, $F(1.2) = P(X \leq 1.2)$ and $F(1) = P(X \leq 1)$ must be the same.
41 | - The value of $F(x)$ increases from 0 to 1 as $x$ increases. This makes sense because as $x$ increases,
42 | we accumulate more and more probability.
43 | ```
44 |
45 | ```{example, name="Graphing the C.D.F."}
46 | The properties of the c.d.f. become clearer on a graph, like the one below.
47 | Because the random variable $X$ cannot take on decimal values, the c.d.f. of $X$ does not change
48 | between integers, giving it its step-function appearance.
49 | ```
50 |
51 | ```{r cdf-graph, echo=FALSE, fig.show = "hold", fig.align = "default", fig.cap="Graph of a cumulative distribution function"}
52 | plot(0:5, phyper(0:5, 11, 37, 5), pch=19, xlab="x", ylab="F(x)", xlim=c(-0.5, 5.5), ylim=c(-0.05, 1.05))
53 | points(0:5, phyper(-1:4, 11, 37, 5))  # open circles mark the left limit at each jump
54 | lines(c(-1, 0), c(0, 0), lwd=2)
55 | for(x in 0:5) {
56 | p <- phyper(x, 11, 37, 5)
57 | lines(c(x, x + 1), c(p, p), lwd=2)
58 | }
59 | ```
60 |
61 | Note that the c.d.f. $F(x)$ can be evaluated at all values $x$, not just at integer values.
62 |
63 | ```{example, name="Using the C.D.F."}
64 | Some probabilities are easier to calculate using the c.d.f. than using the p.m.f.
65 | For example, the probability that Alice gets a flush, $P(X \geq 3)$, can be calculated
66 | by using the complement rule (\@ref(thm:complement)) and looking up the appropriate
67 | probability directly from the c.d.f. $F(x)$.
68 |
69 | \begin{align*}
70 | P(X \geq 3) &= 1 - P(X < 3) \\
71 | &= 1 - P(X \leq 2) \\
72 | &= 1 - F(2) \\
73 | &= 1 - .9284 \\
74 | &= .0716
75 | \end{align*}
76 |
77 | Remember that the c.d.f. $F(x)$ always includes the probability of $x$. Since
78 | $P(X < 3)$ should not include the probability of $3$, we use $F(2)$ instead of
79 | $F(3)$.
80 |
81 | At the beginning of this lesson, we mentioned that the c.d.f. contains the exact same
82 | information as the p.m.f., no more and no less. Therefore, it should be possible
83 | to recover the p.m.f. from the c.d.f.
84 |
85 | For example, how would we calculate $f(3) = P(X = 3)$, if we only knew the c.d.f. $F(x)$? We could
86 | subtract $P(X \leq 2)$ from $P(X \leq 3)$ to get just the probability that it is equal to 3.
87 | \begin{align*}
88 | f(3) = P(X = 3) &= P(X \leq 3) - P(X \leq 2) \\
89 | &= F(3) - F(2) \\
90 | &= .9926 - .9284 \\
91 | &= .0642,
92 | \end{align*}
93 | which agrees with the p.m.f. from Lesson \@ref(rv).
94 | ```
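
Calculations like these are easy to do in R, using the same `phyper` values that were used to draw the graph above.

```{r}
# P(X >= 3) = 1 - F(2)
1 - phyper(2, 11, 37, 5)
# f(3) = P(X = 3) = F(3) - F(2)
phyper(3, 11, 37, 5) - phyper(2, 11, 37, 5)
```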
95 |
96 | ## Examples {-}
97 |
98 | 1. Continuing with the example from the lesson, let $Y$ be the number of Jacks among the community cards.
99 | Calculate and graph the c.d.f. of $Y$.
100 |
101 | 2. Consider a random variable $Z$ with c.d.f. given by the formula
102 | \begin{align*}
103 | F(x) &= \begin{cases} 1 - 3^{-\lfloor x \rfloor} & x \geq 0 \\ 0 & \text{otherwise} \end{cases}.
104 | \end{align*}
105 | (Note that $\lfloor x \rfloor$ denotes the _floor_ operator, which rounds $x$ down to the nearest integer.
106 | So $\lfloor 3.9 \rfloor = 3$ and $\lfloor 7.1 \rfloor = 7$.)
107 |
108 | Graph the c.d.f. $F(x)$. Then, use it to calculate:
109 | a. $P(Z > 3)$
110 | b. $P(Z = 2)$
111 |
112 |
--------------------------------------------------------------------------------
/colabs/py/Calculating_Binomial_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Hypergeometric Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "authorship_tag": "ABX9TyN8TA4s5/z3nA9yUQTa/tDQ",
10 | "include_colab_link": true
11 | },
12 | "kernelspec": {
13 | "name": "python3",
14 | "display_name": "Python 3"
15 | }
16 | },
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {
21 | "id": "view-in-github",
22 | "colab_type": "text"
23 | },
24 | "source": [
25 | ""
26 | ]
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {
31 | "id": "bloGjuMJ2W-U",
32 | "colab_type": "text"
33 | },
34 | "source": [
35 | "# Calculating Binomial Probabilities\n",
36 | "\n",
37 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/binomial.html)."
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "metadata": {
43 | "id": "QjLIYY6V2GlV",
44 | "colab_type": "code",
45 | "colab": {}
46 | },
47 | "source": [
48 | "# Install Symbulate in this Colab\n",
49 | "!pip install -q symbulate"
50 | ],
51 | "execution_count": null,
52 | "outputs": []
53 | },
54 | {
55 | "cell_type": "code",
56 | "metadata": {
57 | "id": "JV7NADSZRk5V",
58 | "colab_type": "code",
59 | "colab": {}
60 | },
61 | "source": [
62 | "list(range(2, 51, 2))"
63 | ],
64 | "execution_count": null,
65 | "outputs": []
66 | },
67 | {
68 | "cell_type": "code",
69 | "metadata": {
70 | "id": "VYhkwSmc2Nvm",
71 | "colab_type": "code",
72 | "colab": {}
73 | },
74 | "source": [
75 | "from symbulate import *\n",
76 | "probs = Binomial(n=51, p=0.01).pmf(range(2, 51, 2))\n",
77 | "probs"
78 | ],
79 | "execution_count": null,
80 | "outputs": []
81 | },
82 | {
83 | "cell_type": "code",
84 | "metadata": {
85 | "id": "LhJRUssW2UBq",
86 | "colab_type": "code",
87 | "colab": {}
88 | },
89 | "source": [
90 | "sum(probs)"
91 | ],
92 | "execution_count": null,
93 | "outputs": []
94 | }
95 | ]
96 | }
--------------------------------------------------------------------------------
/colabs/py/Calculating_Hypergeometric_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Hypergeometric Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "authorship_tag": "ABX9TyMwOCU/1qk5gNZjjQ7GBsXs",
10 | "include_colab_link": true
11 | },
12 | "kernelspec": {
13 | "name": "python3",
14 | "display_name": "Python 3"
15 | }
16 | },
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {
21 | "id": "view-in-github",
22 | "colab_type": "text"
23 | },
24 | "source": [
25 | ""
26 | ]
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {
31 | "id": "bloGjuMJ2W-U",
32 | "colab_type": "text"
33 | },
34 | "source": [
35 | "# Calculating Hypergeometric Probabilities\n",
36 | "\n",
37 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/hypergeometric.html)."
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "metadata": {
43 | "id": "QjLIYY6V2GlV",
44 | "colab_type": "code",
45 | "colab": {}
46 | },
47 | "source": [
48 | "# Install Symbulate in this Colab\n",
49 | "!pip install -q symbulate"
50 | ],
51 | "execution_count": null,
52 | "outputs": []
53 | },
54 | {
55 | "cell_type": "code",
56 | "metadata": {
57 | "id": "VYhkwSmc2Nvm",
58 | "colab_type": "code",
59 | "colab": {}
60 | },
61 | "source": [
62 | "from symbulate import *\n",
63 | "probs = Hypergeometric(n=30, N1=20, N0=80).pmf(range(7, 31))"
64 | ],
65 | "execution_count": null,
66 | "outputs": []
67 | },
68 | {
69 | "cell_type": "code",
70 | "metadata": {
71 | "id": "LhJRUssW2UBq",
72 | "colab_type": "code",
73 | "colab": {}
74 | },
75 | "source": [
76 | "sum(probs)"
77 | ],
78 | "execution_count": null,
79 | "outputs": []
80 | },
81 | {
82 | "cell_type": "code",
83 | "metadata": {
84 | "id": "K2XZD-7N2VBz",
85 | "colab_type": "code",
86 | "colab": {}
87 | },
88 | "source": [
89 | "1 - Hypergeometric(n=30, N1=20, N0=80).cdf(6)"
90 | ],
91 | "execution_count": null,
92 | "outputs": []
93 | }
94 | ]
95 | }
--------------------------------------------------------------------------------
/colabs/py/Calculating_NegBinom_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Negative Binomial Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "authorship_tag": "ABX9TyPMQKYUrNWX8xYoKZb3Haj8",
10 | "include_colab_link": true
11 | },
12 | "kernelspec": {
13 | "name": "python3",
14 | "display_name": "Python 3"
15 | }
16 | },
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {
21 | "id": "view-in-github",
22 | "colab_type": "text"
23 | },
24 | "source": [
25 | ""
26 | ]
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {
31 | "id": "bloGjuMJ2W-U",
32 | "colab_type": "text"
33 | },
34 | "source": [
35 | "# Calculating Negative Binomial Probabilities\n",
36 | "\n",
37 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/negative-binomial.html)."
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "metadata": {
43 | "id": "QjLIYY6V2GlV",
44 | "colab_type": "code",
45 | "colab": {}
46 | },
47 | "source": [
48 | "# Install Symbulate in this Colab\n",
49 | "!pip install -q symbulate"
50 | ],
51 | "execution_count": null,
52 | "outputs": []
53 | },
54 | {
55 | "cell_type": "code",
56 | "metadata": {
57 | "id": "O_4kgMBBi7HD",
58 | "colab_type": "code",
59 | "colab": {}
60 | },
61 | "source": [
62 | "from symbulate import *"
63 | ],
64 | "execution_count": null,
65 | "outputs": []
66 | },
67 | {
68 | "cell_type": "code",
69 | "metadata": {
70 | "id": "Cww2cCxci8Re",
71 | "colab_type": "code",
72 | "colab": {}
73 | },
74 | "source": [
75 | "# plot the negative binomial distribution\n",
76 | "NegativeBinomial(r=10, p=0.15).plot()"
77 | ],
78 | "execution_count": null,
79 | "outputs": []
80 | },
81 | {
82 | "cell_type": "code",
83 | "metadata": {
84 | "id": "JV7NADSZRk5V",
85 | "colab_type": "code",
86 | "colab": {}
87 | },
88 | "source": [
89 | "probs = NegativeBinomial(r=10, p=0.15).pmf(range(10, 41)) # note that range(..., 41) does not include 41\n",
90 | "1 - sum(probs)"
91 | ],
92 | "execution_count": null,
93 | "outputs": []
94 | },
95 | {
96 | "cell_type": "code",
97 | "metadata": {
98 | "id": "LhJRUssW2UBq",
99 | "colab_type": "code",
100 | "colab": {}
101 | },
102 | "source": [
103 | "1 - NegativeBinomial(r=10, p=0.15).cdf(40)"
104 | ],
105 | "execution_count": null,
106 | "outputs": []
107 | }
108 | ]
109 | }
--------------------------------------------------------------------------------
/colabs/py/Calculating_Poisson_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Poisson Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "authorship_tag": "ABX9TyMJfHEJ70iuMyEm8b+nKxA4",
10 | "include_colab_link": true
11 | },
12 | "kernelspec": {
13 | "name": "python3",
14 | "display_name": "Python 3"
15 | }
16 | },
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {
21 | "id": "view-in-github",
22 | "colab_type": "text"
23 | },
24 | "source": [
25 | ""
26 | ]
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {
31 | "id": "bloGjuMJ2W-U",
32 | "colab_type": "text"
33 | },
34 | "source": [
35 | "# Calculating Poisson Probabilities\n",
36 | "\n",
37 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/poisson.html)."
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "metadata": {
43 | "id": "QjLIYY6V2GlV",
44 | "colab_type": "code",
45 | "colab": {}
46 | },
47 | "source": [
48 | "# Install Symbulate in this Colab\n",
49 | "!pip install -q symbulate"
50 | ],
51 | "execution_count": null,
52 | "outputs": []
53 | },
54 | {
55 | "cell_type": "code",
56 | "metadata": {
57 | "id": "O_4kgMBBi7HD",
58 | "colab_type": "code",
59 | "colab": {}
60 | },
61 | "source": [
62 | "from symbulate import *"
63 | ],
64 | "execution_count": null,
65 | "outputs": []
66 | },
67 | {
68 | "cell_type": "code",
69 | "metadata": {
70 | "id": "Cww2cCxci8Re",
71 | "colab_type": "code",
72 | "colab": {}
73 | },
74 | "source": [
75 | "# Plot the exact binomial and approximate Poisson distributions\n",
76 | "Binomial(n=10 ** 4, p=10 ** -3).plot(xlim=(0, 30))\n",
77 | "Poisson(10).plot(xlim=(0, 30))"
78 | ],
79 | "execution_count": null,
80 | "outputs": []
81 | },
82 | {
83 | "cell_type": "markdown",
84 | "metadata": {
85 | "id": "7W7NhcjqWT-r",
86 | "colab_type": "text"
87 | },
88 | "source": [
89 | "The two distributions agree almost perfectly, which is why you only see one curve (even though we plotted two curves)."
90 | ]
91 | },
92 | {
93 | "cell_type": "code",
94 | "metadata": {
95 | "id": "JV7NADSZRk5V",
96 | "colab_type": "code",
97 | "colab": {}
98 | },
99 | "source": [
100 | "probs = Poisson(10).pmf(\n",
101 | " range(12) # range(12) is [0, 1, 2, ..., 11]\n",
102 | ")\n",
103 | "1 - sum(probs)"
104 | ],
105 | "execution_count": null,
106 | "outputs": []
107 | },
108 | {
109 | "cell_type": "code",
110 | "metadata": {
111 | "id": "LhJRUssW2UBq",
112 | "colab_type": "code",
113 | "colab": {}
114 | },
115 | "source": [
116 | "(1 - Poisson(10).cdf(11), # approximate\n",
117 | " 1 - Binomial(n=10 ** 4, p=10 ** -3).cdf(11) # exact\n",
118 | ")"
119 | ],
120 | "execution_count": null,
121 | "outputs": []
122 | },
123 | {
124 | "cell_type": "code",
125 | "metadata": {
126 | "id": "uf6ZKq_vXWL5",
127 | "colab_type": "code",
128 | "colab": {}
129 | },
130 | "source": [
131 | "x = 12\n",
132 | "while Poisson(10).cdf(x) < .99:\n",
133 | " x += 1\n",
134 | "\n",
135 | "# print out the value of x and the probability\n",
136 | "x, Poisson(10).cdf(x)"
137 | ],
138 | "execution_count": null,
139 | "outputs": []
140 | }
141 | ]
142 | }
--------------------------------------------------------------------------------
/colabs/py/Exponential_Distribution.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Exponential Distribution",
7 | "provenance": [],
8 | "authorship_tag": "ABX9TyP0p/nC7uA8mKsZ9y97Cy/B"
9 | },
10 | "kernelspec": {
11 | "name": "python3",
12 | "display_name": "Python 3"
13 | }
14 | },
15 | "cells": [
16 | {
17 | "cell_type": "markdown",
18 | "metadata": {
19 | "id": "eRvpLpTSJNiU",
20 | "colab_type": "text"
21 | },
22 | "source": [
23 | "# Exponential Distribution"
24 | ]
25 | },
26 | {
27 | "cell_type": "code",
28 | "metadata": {
29 | "id": "I9CuCqDMJSx7",
30 | "colab_type": "code",
31 | "colab": {}
32 | },
33 | "source": [
34 | "!pip install -q symbulate\n",
35 | "from symbulate import *"
36 | ],
37 | "execution_count": null,
38 | "outputs": []
39 | },
40 | {
41 | "cell_type": "code",
42 | "metadata": {
43 | "id": "iklHjiWNLl_N",
44 | "colab_type": "code",
45 | "colab": {}
46 | },
47 | "source": [
48 | "Exponential(0.3).plot()"
49 | ],
50 | "execution_count": null,
51 | "outputs": []
52 | },
53 | {
54 | "cell_type": "code",
55 | "metadata": {
56 | "id": "ulEZqozFJRQM",
57 | "colab_type": "code",
58 | "colab": {}
59 | },
60 | "source": [
61 | "Exponential(0.3).cdf(5) - Exponential(0.3).cdf(1)"
62 | ],
63 | "execution_count": null,
64 | "outputs": []
65 | }
66 | ]
67 | }
--------------------------------------------------------------------------------
/colabs/r/Calculating_Binomial_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Hypergeometric Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "ir",
13 | "display_name": "R"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "zkn9Nei63dUS",
31 | "colab_type": "text"
32 | },
33 | "source": [
34 | "# Calculating Binomial Probabilities (R Version)\n",
35 | "\n",
36 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/binomial.html)."
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "metadata": {
42 | "id": "xxXfK3AwSfCi",
43 | "colab_type": "code",
44 | "colab": {}
45 | },
46 | "source": [
47 | "seq(from=2, to=51, by=2)"
48 | ],
49 | "execution_count": null,
50 | "outputs": []
51 | },
52 | {
53 | "cell_type": "code",
54 | "metadata": {
55 | "id": "Z60PpIfIeF4Z",
56 | "colab_type": "code",
57 | "colab": {}
58 | },
59 | "source": [
60 | "probs <- dbinom(seq(from=2, to=51, by=2), size=51, prob=0.01)\n",
61 | "probs"
62 | ],
63 | "execution_count": null,
64 | "outputs": []
65 | },
66 | {
67 | "cell_type": "code",
68 | "metadata": {
69 | "id": "AouC3Wt83VR7",
70 | "colab_type": "code",
71 | "colab": {}
72 | },
73 | "source": [
74 | "sum(probs)"
75 | ],
76 | "execution_count": null,
77 | "outputs": []
78 | }
79 | ]
80 | }
--------------------------------------------------------------------------------
/colabs/r/Calculating_Hypergeometric_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Hypergeometric Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "ir",
13 | "display_name": "R"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "zkn9Nei63dUS",
31 | "colab_type": "text"
32 | },
33 | "source": [
34 | "# Calculating Hypergeometric Probabilities (R Version)\n",
35 | "\n",
36 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/hypergeometric.html)."
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "metadata": {
42 | "id": "Z60PpIfIeF4Z",
43 | "colab_type": "code",
44 | "colab": {}
45 | },
46 | "source": [
47 | "probs <- dhyper(x=7:30, m=20, n=80, k=30)\n",
48 | "probs"
49 | ],
50 | "execution_count": null,
51 | "outputs": []
52 | },
53 | {
54 | "cell_type": "code",
55 | "metadata": {
56 | "id": "AouC3Wt83VR7",
57 | "colab_type": "code",
58 | "colab": {}
59 | },
60 | "source": [
61 | "sum(probs)"
62 | ],
63 | "execution_count": null,
64 | "outputs": []
65 | },
66 | {
67 | "cell_type": "code",
68 | "metadata": {
69 | "id": "HLIiBFfS3TbV",
70 | "colab_type": "code",
71 | "colab": {}
72 | },
73 | "source": [
74 | "1 - phyper(6, m=20, n=80, k=30)"
75 | ],
76 | "execution_count": null,
77 | "outputs": []
78 | }
79 | ]
80 | }
--------------------------------------------------------------------------------
/colabs/r/Calculating_NegBinom_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Negative Binomial Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "ir",
13 | "display_name": "R"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "zkn9Nei63dUS",
31 | "colab_type": "text"
32 | },
33 | "source": [
34 | "# Calculating Negative Binomial Probabilities (R Version)\n",
35 | "\n",
36 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/negative-binomial.html)."
37 | ]
38 | },
39 | {
40 | "cell_type": "markdown",
41 | "metadata": {
42 | "id": "BUfO_woYkJt4",
43 | "colab_type": "text"
44 | },
45 | "source": [
46 | "R uses the names `size=` and `prob=` for $r$ and $p$, respectively. \n",
47 | "\n",
48 | "R also defines the negative binomial distribution a bit differently than we do. It only counts the number of $\\fbox{0}$s that were drawn, rather than the total number of draws. So we have to remember to subtract the $r=10$ $\\fbox{1}$s from the total number of draws before passing values to `dnbinom` or `pnbinom`."
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "metadata": {
54 | "id": "xxXfK3AwSfCi",
55 | "colab_type": "code",
56 | "colab": {}
57 | },
58 | "source": [
59 | "# PMF of negative binomial\n",
60 | "probs <- dnbinom((10:40) - 10 , size=10, prob=0.15)\n",
61 | "1 - sum(probs)"
62 | ],
63 | "execution_count": null,
64 | "outputs": []
65 | },
66 | {
67 | "cell_type": "code",
68 | "metadata": {
69 | "id": "Z60PpIfIeF4Z",
70 | "colab_type": "code",
71 | "colab": {}
72 | },
73 | "source": [
74 | "# CDF of negative binomial\n",
75 | "1 - pnbinom(40 - 10 , size=10, prob=0.15)"
76 | ],
77 | "execution_count": null,
78 | "outputs": []
79 | }
80 | ]
81 | }
--------------------------------------------------------------------------------
/colabs/r/Calculating_Poisson_Probabilities.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Calculating Poisson Probabilities",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "ir",
13 | "display_name": "R"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "zkn9Nei63dUS",
31 | "colab_type": "text"
32 | },
33 | "source": [
34 | "# Calculating Poisson Probabilities (R Version)\n",
35 | "\n",
36 | "This notebook accompanies [this lesson](http://dlsun.github.io/probability/poisson.html)."
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "metadata": {
42 | "id": "xxXfK3AwSfCi",
43 | "colab_type": "code",
44 | "colab": {}
45 | },
46 | "source": [
47 | "# PMF of Poisson\n",
48 | "probs <- dpois(0:11, 10)\n",
49 | "1 - sum(probs)"
50 | ],
51 | "execution_count": null,
52 | "outputs": []
53 | },
54 | {
55 | "cell_type": "code",
56 | "metadata": {
57 | "id": "Z60PpIfIeF4Z",
58 | "colab_type": "code",
59 | "colab": {}
60 | },
61 | "source": [
62 | "# CDF of Poisson\n",
63 | "1 - ppois(11, 10)"
64 | ],
65 | "execution_count": null,
66 | "outputs": []
67 | },
68 | {
69 | "cell_type": "code",
70 | "metadata": {
71 | "id": "-ELh3tFXXKOA",
72 | "colab_type": "code",
73 | "colab": {}
74 | },
75 | "source": [
76 | "x <- 12\n",
77 | "while(ppois(x, 10) < .99) {\n",
78 | " x <- x + 1\n",
79 | "}\n",
80 | " \n",
81 | "# print out the value of x and the probability\n",
82 | "c(x, ppois(x, 10))"
83 | ],
84 | "execution_count": null,
85 | "outputs": []
86 | }
87 | ]
88 | }
--------------------------------------------------------------------------------
/complex.Rmd:
--------------------------------------------------------------------------------
1 | # Complex Numbers {#complex}
2 |
3 | ## Motivation {-}
4 |
5 | The following videos explain why imaginary numbers are _necessary_ in mathematics. Note that these videos
6 | use $i$ to denote the imaginary number $\sqrt{-1}$, whereas we use $j$ (which is common in electrical engineering,
7 | to avoid confusion with current).
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 | ## Theory {-}
18 |
19 | The following videos explain the core concepts: the complex plane and the geometric interpretations of
20 | complex addition and multiplication.
21 |
22 |
23 |
24 |
25 |
26 | Be sure to try the calculations suggested at the end of the previous video before moving on to the next video:
27 |
28 | - $(4 + 3j) \cdot j$
29 | - $(4 + 3j) \cdot 2j$
30 | - $(4 + 3j) \cdot (4 + 3j)$
31 | - $(2 + j) \cdot (1 + 2j)$
32 |
33 |
34 |
35 | ```{definition magnitude, name="Magnitude of a Complex Number"}
36 | The **magnitude** of a complex number $z = a + jb$ is
37 | \[ |z| \overset{\text{def}}{=} \sqrt{a^2 + b^2}. \]
38 | This follows from the Pythagorean Theorem.
39 | ```
40 |
41 | ```{definition complex-conjugate, name="Complex Conjugate"}
42 | The **complex conjugate** of a complex number $z = a + jb$ is
43 | \[ z^* \overset{\text{def}}{=} a - jb. \]
44 | ```
45 |
46 | To conjugate a complex number, we simply flip the sign in front of any $j$s in the expression. For example:
47 | \[ \left(\frac{1 + 2j}{2 - j} \right)^* = \frac{1 - 2j}{2 + j}. \]
48 |
49 | Because it is so easy to calculate the complex conjugate, conjugation is often the preferred way to
50 | calculate magnitudes.
51 |
52 | ```{theorem calculating-magnitude, name="Calculating Magnitude"}
53 | The _squared_ magnitude of a complex number $z$ is the product of the number and its complex conjugate:
54 | \[ |z|^2 = z z^*. \]
55 | This means that the magnitude of a complex number can be calculated as
56 | \[ |z| = \sqrt{z z^*}. \]
57 | ```
58 |
59 | ```{proof}
60 | If $z = a + jb$, then $z^* = a - jb$, and
61 | \[ z z^* = (a + jb)(a - jb) = a^2 - j^2 b^2 = a^2 + b^2 = |z|^2. \]
62 | ```
63 |
64 | For example, the magnitude of the number above is:
65 | \[ \left|\frac{1 + 2j}{2 - j} \right| = \sqrt{\frac{1 + 2j}{2 - j} \cdot \frac{1 - 2j}{2 + j}} = \sqrt{\frac{1 + 4}{4 + 1}} = 1. \]
66 | It would have been much more tedious to calculate this magnitude by finding $a$ and $b$ such that
67 | $\frac{1 + 2j}{2 - j} = a + jb$ and using Definition \@ref(def:magnitude).
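
R has built-in support for complex numbers (written with `i` instead of $j$), which gives a quick way to check calculations like this one.

```{r}
# the number (1 + 2j)/(2 - j), in R's notation
z <- (1 + 2i) / (2 - 1i)
Mod(z)             # magnitude directly
sqrt(z * Conj(z))  # magnitude via the conjugate formula |z| = sqrt(z z*)
```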
68 |
69 | ```{theorem euler, name="Euler's Identity"}
70 | Euler's identity relates the complex exponential $e^{j\theta}$ to the trigonometric functions:
71 | \[ e^{j\theta} = \cos \theta + j \sin\theta \]
72 | Two immediate corollaries of Euler's identity are:
73 | \begin{align*}
74 | \cos\theta &= \frac{e^{j\theta} + e^{-j\theta}}{2} \\
75 | \sin\theta &= \frac{e^{j\theta} - e^{-j\theta}}{2j}.
76 | \end{align*}
77 | ```
78 |
79 | ## Essential Practice {-}
80 |
81 | 1. Express the complex number $\frac{1 - j}{1 + j}$ in $a + jb$ form, where $a$ and $b$ are real numbers.
82 | 2. Calculate $e^{j\pi/4} + e^{-j\pi/4}$. Your answer should be a real number.
83 | 3. Calculate $\left| \frac{1 - j}{1 + e^{j\pi/4}} \right|$.
84 |
85 |
86 |
--------------------------------------------------------------------------------
/conditional.Rmd:
--------------------------------------------------------------------------------
1 | # Conditional Probability {#conditional}
2 |
3 | ## Motivating Example {-}
4 |
5 | You know that your coworker has two children. Absent any other information, the probability that
6 | both are boys is $P(\text{both boys}) = \frac{1}{4}$.
7 |
8 | One day, she mentions to you, "I need to stop by St. Joseph's after work for a PTA meeting."
9 | St. Joseph's is a local all-boys school. So now you know that at least one of her children is a boy.
10 | What is the probability now that both her children are boys?
11 |
12 | Most people assume that the answer is $\frac{1}{2}$, since the other child is equally likely to be a
13 | boy or a girl, and the gender of one child does not affect the gender of another. The actual
14 | answer may surprise you....
15 |
16 | ## Theory {-}
17 |
18 | To quantify how probabilities change in light of new information, we calculate the **conditional probability**.
19 |
20 | $$ P(\text{both boys}\ |\ \text{at least one boy}) $$
21 |
22 | The $|$ symbol is read "given" and the event after the $|$ symbol represents information that we know.
23 |
24 | In general, to calculate a conditional probability, we use the following formula.
25 |
26 | ```{definition conditional, name="Conditional Probability"}
27 | $$ P(B | A) = \frac{P(A \textbf{ and } B)}{P(A)}. $$
28 | ```
29 |
30 | The probability $P(A \textbf{ and } B)$ is called the **joint probability** of the two events $A$ and $B$.
31 |
32 | So the conditional probability above is
33 |
34 | \begin{align*}
35 | P(\text{both boys}\ |\ \text{at least one boy}) &= \frac{P(\text{both boys} \textbf{ and } \text{at least one boy})}{P(\text{at least one boy})} \\
36 | &= \frac{P(\text{both boys})}{P(\text{at least one boy})} \\
37 | &= \frac{1/4}{3/4} \\
38 | &= \frac{1}{3}.
39 | \end{align*}
40 |
41 | In the above example, the joint probability $P(\text{both boys} \textbf{ and } \text{at least one boy})$ is easy to calculate because the two events are redundant. If we know that both are boys, then we automatically know that at least one is a boy.
42 |
43 | The information that at least one of her children attends St. Joseph's (and, thus, is a boy) increases the probability that she has two boys from $1/4$ to $1/3$.
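
If you find this hard to believe, you can check it by simulation: generate many two-child families and keep only those with at least one boy. Here is a minimal sketch in R (assuming each child is equally likely to be a boy or a girl, independently of the other).

```{r}
# simulate 100,000 two-child families (1 = boy, 0 = girl)
set.seed(1)
child1 <- rbinom(100000, 1, 0.5)
child2 <- rbinom(100000, 1, 0.5)
# among families with at least one boy, the fraction with two boys should be near 1/3
at_least_one_boy <- (child1 + child2) >= 1
mean(child1[at_least_one_boy] == 1 & child2[at_least_one_boy] == 1)
```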
44 |
45 | If this result was counterintuitive to you, the video below gives some intuition.
46 |
47 |
48 |
49 |
50 | We can rearrange the conditional probability formula to get a formula that is useful for calculating the joint probability
51 | when the conditional probability is known.
52 | ```{theorem multiplication-rule, name="Multiplication Rule"}
53 | $$ P(A \textbf{ and } B) = P(A) \cdot P(B | A). $$
54 | ```
55 |
56 | ```{example}
57 | Two cards are dealt off the top of a shuffled deck of cards. What is the probability that both are queens?
58 | ```
59 | ```{solution}
60 | In this case, we want to calculate the joint probability
61 | \[ P(\text{1st card is Q} \textbf{ and } \text{2nd card is Q}), \]
62 | and the conditional probability
63 | \[ P(\text{2nd card is Q} | \text{1st card is Q}) \]
64 | is simple. If it is known that
65 | the 1st card was a queen, then there are only 51 cards remaining in the deck, of which 3 are queens, so
66 | \[ P(\text{2nd card is Q} | \text{1st card is Q}) = \frac{3}{51}. \]
67 |
68 | By the Multiplication Rule (\@ref(thm:multiplication-rule)),
69 | \begin{align*}
70 | P(\text{1st card is Q} \textbf{ and } \text{2nd card is Q}) &= P(\text{1st card is Q}) \cdot P(\text{2nd card is Q} | \text{1st card is Q}) \\
71 | &= \frac{4}{52} \cdot \frac{3}{51}
72 | \end{align*}
73 |
74 | Let's compare this with a solution based on counting the outcomes directly. There are $52 \cdot 51$ equally likely (ordered) outcomes,
75 | of which $4 \cdot 3$ are both queens. Therefore, the probability is
76 | \[ P(\text{1st card is queen} \textbf{ and } \text{2nd card is queen}) = \frac{4 \cdot 3}{52 \cdot 51}. \]
77 | We get the same answer; the only thing that changes is the order of operations:
78 |
79 | - In counting, we first multiply to get the number of outcomes, then divide to get probabilities.
80 | - In the multiplication rule, we first divide to get probabilities, then multiply the probabilities.
81 | ```
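
As a quick numerical check, both calculations give the same probability.

```{r}
# multiplication rule: P(1st is Q) * P(2nd is Q | 1st is Q)
(4 / 52) * (3 / 51)
# counting: favorable ordered outcomes / total ordered outcomes
(4 * 3) / (52 * 51)
```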
82 |
83 |
84 | ## Examples {-}
85 |
86 | 1. You and your friend Amy are each dealt two cards: hers face up and yours face down.
87 | In which of the following scenarios are you more likely to have a pair:
88 | - when she has a pair of queens
89 | - when she has a queen and a 5?
90 |
91 | 2. Dr. No has captured James Bond and forces him to play a game of Russian roulette. (Note: Russian roulette is very different from the casino game roulette!) Dr. No shows him a revolver with 6 chambers, all initially empty. He places 2 bullets into adjacent chambers. He makes Bond spin the cylinder, place the muzzle against his head, and pull the trigger. He survives! Luckily for Bond, the cylinder stopped on one of the empty chambers.
92 |
93 | Now Dr. No gives Bond two options: he can re-spin the cylinder before firing again or he can fire with the gun in its current state. (Keep in mind that the cylinder rotates to the next chamber each time the gun is fired.) What option should Bond choose to maximize his chance of surviving?
94 |
95 | a. Clearly write out the conditional probability of interest using $P(B|A)$ notation.
96 | b. Find the probability. (**Hint:** You should not need to do any calculations. You should be able to find the probability just by thinking carefully about the information you have. Make sure you explain your reasoning carefully.)
97 |
98 | ## Additional Exercises {-}
99 |
--------------------------------------------------------------------------------
/cov-continuous.Rmd:
--------------------------------------------------------------------------------
1 | # Covariance of Continuous Random Variables {#cov-continuous}
2 |
3 | ## Theory {-}
4 |
5 | This lesson summarizes results about the covariance of
6 | continuous random variables. The statements of these
7 | results are exactly the same as for discrete random variables,
8 | but keep in mind that the expected values are now computed
9 | using integrals and p.d.f.s, rather than sums and p.m.f.s.
10 |
11 | ```{definition, name="Covariance"}
12 | Let $X$ and $Y$ be random variables. Then, the **covariance** of $X$ and $Y$, symbolized
13 | $\text{Cov}[X, Y]$ is defined as
14 | \begin{equation}
15 | \text{Cov}[X, Y] \overset{\text{def}}{=} E[(X - E[X])(Y - E[Y])].
16 | \end{equation}
17 | ```
18 |
19 | ```{theorem, name="Shortcut Formula for Covariance"}
20 | The covariance can also be computed as:
21 | \begin{equation}
22 | \text{Cov}[X, Y] = E[XY] - E[X]E[Y].
23 | (\#eq:cov-shortcut-continuous)
24 | \end{equation}
25 | ```
26 |
27 | ```{example cov-arrival-times, name="Covariance Between the First and Second Arrival Times"}
28 | In Example \@ref(exm:pp-joint-arrivals), we saw that the joint distribution of the first arrival time $X$ and
29 | the second arrival time $Y$ in a Poisson process of rate $\lambda = 0.8$ is
30 | \[ f(x, y) = \begin{cases} 0.64 e^{-0.8 y} & 0 < x < y \\ 0 & \text{otherwise} \end{cases}. \]
31 | What is the covariance between $X$ and $Y$? Intuitively, we expect the covariance to be positive.
32 | The longer it takes for the first arrival to happen, the longer we will have to wait for the second
33 | arrival, since the second arrival has to happen after the first arrival. Let's calculate the exact value
34 | of the covariance using the shortcut formula \@ref(eq:cov-shortcut-continuous).
35 |
36 | First, we need to calculate $E[XY]$. We do this using 2D LOTUS \@ref(eq:lotus2d-continuous).
37 | If $S = \{ (x, y): 0 < x < y \}$ denotes the support of the distribution, then
38 | \begin{align*}
39 | E[XY] &= \iint_S xy \cdot 0.64 e^{-0.8 y}\,dx\,dy \\
40 | &= \int_0^\infty \int_0^y xy \cdot 0.64 e^{-0.8 y}\,dx\,dy \\
41 | &= \frac{75}{16}.
42 | \end{align*}
43 |
44 | What about $E[X]$? We know that the first arrival follows an $\text{Exponential}(\lambda=0.8)$ distribution,
45 | so its expected value is $1/\lambda = 1/0.8$ seconds.
46 |
47 | What about $E[Y]$? We showed in Example \@ref(exm:pp-arrival-ev) that the $r$th arrival is expected to happen
48 | at $r / 0.8$ seconds (by linearity of expectation, since the $r$th arrival is the sum of the
49 | $r$ $\text{Exponential}(\lambda=0.8)$ interarrival times). Therefore, the second arrival is expected
50 | to happen at $E[Y] = 2 / 0.8$ seconds.
51 |
52 | Putting everything together, we have
53 | \[ \text{Cov}[X, Y] = E[XY] - E[X]E[Y] = \frac{75}{16} - \frac{1}{0.8} \cdot \frac{2}{0.8} = 1.5625. \]
54 | ```
55 |
56 | ```{theorem, name="Properties of Covariance"}
57 | Let $X, Y, Z$ be random variables, and let $c$ be a constant. Then:
58 |
59 | 1. Covariance-Variance Relationship: $\displaystyle\text{Var}[X] = \text{Cov}[X, X]$ (This was also Theorem \@ref(thm:cov-var).)
60 | 2. Pulling Out Constants:
61 |
62 | $\displaystyle\text{Cov}[cX, Y] = c \cdot \text{Cov}[X, Y]$
63 |
64 | $\displaystyle\text{Cov}[X, cY] = c \cdot \text{Cov}[X, Y]$
65 |
66 | 3. Distributive Property:
67 |
68 | $\displaystyle\text{Cov}[X + Y, Z] = \text{Cov}[X, Z] + \text{Cov}[Y, Z]$
69 |
70 | $\displaystyle\text{Cov}[X, Y + Z] = \text{Cov}[X, Y] + \text{Cov}[X, Z]$
71 |
72 | 4. Symmetry: $\displaystyle\text{Cov}[X, Y] = \text{Cov}[Y, X]$
73 | 5. Constants cannot covary: $\displaystyle\text{Cov}[X, c] = 0$.
74 | ```
75 |
76 | ```{example}
77 | Here is an easier way to do Example \@ref(exm:cov-arrival-times), using properties of covariance.
78 |
79 | Write $Y = X + Z$, where $Z$ is the time between the first and second arrivals. We know that
80 | $Z$ is $\text{Exponential}(\lambda=0.8)$ and independent of $X$, so $\text{Cov}[X, Z] = 0$.
81 |
82 | Now, we have
83 | \begin{align*}
84 | \text{Cov}[X, Y] &= \text{Cov}[X, X + Z] = \underbrace{\text{Cov}[X, X]}_{\text{Var}[X]} + \underbrace{\text{Cov}[X, Z]}_0 = \frac{1}{0.8^2} = 1.5625,
85 | \end{align*}
86 | where in the last step we used the fact that the variance of an $\text{Exponential}(\lambda)$ random variable is $1/\lambda^2$. This matches the answer we got in Example \@ref(exm:cov-arrival-times), but it required much less
87 | calculation.
88 | ```
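
We can also check this covariance by simulation. The sketch below simulates the first arrival time $X$ and the interarrival time $Z$ as independent $\text{Exponential}(\lambda=0.8)$ random variables and estimates $\text{Cov}[X, Y]$ for $Y = X + Z$.

```{r}
# simulate the first two arrival times of a Poisson process with rate 0.8
set.seed(1)
x <- rexp(100000, rate=0.8)   # first arrival time
z <- rexp(100000, rate=0.8)   # time between the first and second arrivals
y <- x + z                    # second arrival time
cov(x, y)                     # should be close to 1 / 0.8^2 = 1.5625
```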
89 |
90 | ```{example pp-arrival-sd, name="Standard Deviation of Arrival Times"}
91 | In Example \@ref(exm:pp-arrival-ev), we saw that the expected value of the $r$th arrival time in a
92 | Poisson process of rate $\lambda=0.8$ is $r / 0.8$. What is the standard deviation of the $r$th arrival time?
93 |
94 | The $r$th arrival time $S_r$ is the sum of $r$ independent $\text{Exponential}(\lambda=0.8)$ random variables:
95 | \[ S_r = T_1 + T_2 + \ldots + T_r. \]
96 | By properties of covariance:
97 | \begin{align*}
98 | \text{Var}[S_r] &= \text{Cov}[S_r, S_r] \\
99 | &= \text{Cov}[T_1 + T_2 + \ldots + T_r, T_1 + T_2 + \ldots + T_r] \\
100 | &= \sum_{i=1}^r \underbrace{\text{Cov}[T_i, T_i]}_{\text{Var}[T_i]} + \sum_{i\neq j} \underbrace{\text{Cov}[T_i, T_j]}_0 \\
101 | &= r \text{Var}[T_1] \\
102 | &= r \frac{1}{0.8^2}.
103 | \end{align*}
104 |
105 | Therefore, the standard deviation is
106 | \[ \text{SD}[S_r] = \frac{\sqrt{r}}{0.8}. \]
107 | ```
108 |
109 |
110 | ## Essential Practice {-}
111 |
112 | 1. Let $A$ be an $\text{Exponential}(\lambda=1.5)$ random variable, and let $\Theta$ be a
113 | $\text{Uniform}(a=-\pi, b=\pi)$ random variable. What is
114 | $\text{Cov}[A\cos(\Theta + 2\pi s), A\cos(\Theta + 2\pi t)]$?
115 |
116 | 2. In a standby system, a component is used until it wears out and is then immediately
117 | replaced by another, not necessarily identical, component. (The second component is said to be "in
118 | standby mode," i.e., waiting to be used.) The overall lifetime of a standby system is just the sum of the
119 | lifetimes of its individual components. Let $X$ and $Y$ denote the lifetimes of the two components of a
120 | standby system, and suppose $X$ and $Y$ are independent exponentially distributed random variables
121 | with expected lifetimes 3 weeks and 4 weeks, respectively. Let $T = X + Y$, the lifetime of the
122 | standby system. What is the _standard deviation_ of the lifetime of the system?
123 |
124 | 3. Let $U_1, U_2, ..., U_n$ be independent and identically distributed (i.i.d.) $\text{Uniform}(a=0, b=1)$
125 | random variables. Let $S_n = U_1 + ... + U_n$ denote their sum.
126 | Calculate $E[S_n]$ and $\text{SD}[S_n]$ in terms of $n$.
127 |
--------------------------------------------------------------------------------
/covariance.Rmd:
--------------------------------------------------------------------------------
1 | # Covariance {#covariance}
2 |
3 |
4 | ## Theory {-}
5 |
6 |
7 |
8 | The covariance measures the relationship between two random variables.
9 |
10 | ```{definition covariance, name="Covariance"}
11 | Let $X$ and $Y$ be random variables. Then, the **covariance** of $X$ and $Y$, symbolized
12 | $\text{Cov}[X, Y]$ is defined as
13 | \begin{equation}
14 | \text{Cov}[X, Y] \overset{\text{def}}{=} E[(X - E[X])(Y - E[Y])].
15 | (\#eq:cov)
16 | \end{equation}
17 | ```
18 |
19 | The _sign_ of the covariance is most meaningful:
20 |
21 | - If $\text{Cov}[X, Y] > 0$, then $X$ and $Y$ tend to move together. When $X$ is high,
22 | $Y$ tends to also be high.
23 | - If $\text{Cov}[X, Y] < 0$, then $X$ and $Y$ tend to move in opposite directions.
24 | When $X$ is high, $Y$ tends to be low.
25 | - If $\text{Cov}[X, Y] = 0$, then $X$ and $Y$ do not consistently move together.
26 | This does not mean that they are independent, just that there is no consistent
27 | pattern in how they move together.
28 |
29 | By comparing the definitions of variance \@ref(eq:var) and covariance \@ref(eq:cov),
30 | we have the following obvious, but important, relationship between variance and covariance.
31 | ```{theorem cov-var, name="Covariance-Variance Relationship"}
32 | Let $X$ be a random variable. Then:
33 | \begin{equation}
34 | \text{Var}[X] = \text{Cov}[X, X].
35 | \end{equation}
36 | ```
37 |
38 | For calculations, it is often easier to use the following "shortcut formula" for the covariance.
39 |
40 | ```{theorem cov-shortcut, name="Shortcut Formula for Covariance"}
41 | The covariance can also be computed as:
42 | \begin{equation}
43 | \text{Cov}[X, Y] = E[XY] - E[X]E[Y].
44 | (\#eq:cov-shortcut)
45 | \end{equation}
46 | ```
47 | ```{proof}
48 | \begin{align*}
49 | \text{Cov}[X, Y] &= E[(X - E[X])(Y - E[Y])] & \text{(definition of covariance)} \\
50 | &= E[XY - X E[Y] - E[X] Y + E[X]E[Y]] & \text{(expand expression inside expectation)}\\
51 | &= E[XY] -E[X] E[Y] - E[X] E[Y] + E[X]E[Y] & \text{(linearity of expectation)} \\
52 | &= E[XY] - E[X]E[Y] & \text{(simplify)}
53 | \end{align*}
54 | ```
55 |
56 | Here is an example where we use the shortcut formula.
57 |
58 | ```{example roulette-cov, name="Roulette Covariance"}
59 | Let's calculate the covariance between the number of bets that Xavier wins, $X$, and
60 | the number of bets that Yolanda wins, $Y$.
61 |
62 | We calculated $E[XY] \approx 4.11$ in Lessons \@ref(lotus2d) and \@ref(ev-product). But if we did not
63 | already know this, we would have to calculate it (usually by 2D LOTUS).
64 |
65 | Since $X$ and $Y$ are binomial, we also know their expected values are
66 | $E[X] = 3 \cdot \frac{18}{38}$ and $E[Y] = 5 \cdot \frac{18}{38}$.
67 |
68 | Therefore, the covariance is
69 | \[ \text{Cov}[X, Y] = E[XY] - E[X]E[Y] = 4.11 - 3 \cdot \frac{18}{38} \cdot 5 \cdot \frac{18}{38} = .744. \]
70 |
71 | This covariance is positive, which makes sense: the more bets Xavier wins, the more bets Yolanda tends to win.
72 | ```
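
Plugging the numbers into the shortcut formula (with the approximate value $E[XY] \approx 4.11$ from before):

```{r}
# Cov[X, Y] = E[XY] - E[X] E[Y]
4.11 - (3 * 18/38) * (5 * 18/38)
```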
73 |
74 | Finally, we note that if $X$ and $Y$ are independent, then their covariance is zero.
75 | ```{theorem, name="Independence Implies Zero Covariance"}
76 | If $X$ and $Y$ are independent, then $\text{Cov}[X, Y] = 0$.
77 | ```
78 | ```{proof}
79 | We use the shortcut formula \@ref(eq:cov-shortcut) and Theorem \@ref(thm:ev-product).
80 | \begin{align*}
81 | \text{Cov}[X, Y] &= E[XY] - E[X]E[Y] \\
82 | &= E[X]E[Y] - E[X]E[Y] \\
83 | &= 0
84 | \end{align*}
85 | ```
86 |
87 | However, the converse is not true.
88 | It is possible for the covariance to be 0, even when the random variables are not independent.
89 | An example of such a distribution can be found in the Essential Practice below.
90 |
91 |
92 | ## Essential Practice {-}
93 |
94 | 1. Suppose $X$ and $Y$ are random variables with joint p.m.f.
95 | \[ \begin{array}{rr|ccc}
96 | y & 1 & .3 & 0 & .3 \\
97 | & 0 & 0 & .4 & 0 \\
98 | \hline
99 | & & 0 & 1 & 2 \\
100 | & & & x \\
101 | \end{array}. \]
102 | Are $X$ and $Y$ independent? What is $\text{Cov}[X, Y]$?
103 |
104 | 2. Two tickets are drawn from a box with $N_1$ $\fbox{1}$s and $N_0$ $\fbox{0}$s. Let $X$ be the number of
105 | $\fbox{1}$s on the first draw and $Y$ be the number of $\fbox{1}$s on the second draw. (Note that $X$ and $Y$
106 | can only be 0 or 1.)
107 |
108 | a. Calculate $\text{Cov}[X, Y]$ when the draws are made with replacement.
109 | b. Calculate $\text{Cov}[X, Y]$ when the draws are made without replacement.
110 |
111 | (Hint: You worked out $E[XY]$ in Lesson \@ref(lotus2d). Use it!)
112 |
113 | 3. At Diablo Canyon nuclear plant, radioactive particles hit a Geiger counter according to a Poisson process
114 | with a rate of 3.5 particles per second. Let $X$ be the number of particles detected in the first 2 seconds.
115 | Let $Y$ be the number of particles detected in the second after that (i.e., the 3rd second).
116 | Calculate $\text{Cov}[X, Y]$.
117 |
118 | ## Additional Practice {-}
119 |
120 | 1. Consider the following three scenarios:
121 |
122 | - A fair coin is tossed 3 times. $X$ is the number of heads and $Y$ is the number of tails.
123 | - A fair coin is tossed 4 times. $X$ is the number of heads in the first 3 tosses, $Y$ is the number of heads in the last 3 tosses.
124 | - A fair coin is tossed 6 times. $X$ is the number of heads in the first 3 tosses, $Y$ is the number of heads in the last 3 tosses.
125 |
126 | Calculate $\text{Cov}[X, Y]$ for each of these three scenarios. Interpret the sign of the covariance.
127 |
--------------------------------------------------------------------------------
/distribution-table.Rmd:
--------------------------------------------------------------------------------
1 | # (APPENDIX) Appendix {-}
2 |
3 | # Distribution Tables {#distribution-table}
4 |
5 | ## Discrete Distributions {#discrete-distributions}
6 |
7 | | Distribution of $X$ | $f(x)$ | Support | $E[X]$ | $\text{Var}[X]$ |
8 | |:------|:---|:---|:--:|:--:|
9 | | [Hypergeometric](#hypergeometric)$(n, N_1, N_0)$ | $\displaystyle \frac{\binom{N_1}{x} \binom{N_0}{n-x}}{\binom{N}{n}}$ | $x=0, 1, \ldots, n$ | $n \frac{N_1}{N}$ | $n \frac{N_1}{N} \frac{N_0}{N} \left(1 - \frac{n-1}{N-1}\right)$ |
10 | | [Binomial](#binomial)$(n, N_1, N_0)$ | $\displaystyle \frac{\binom{n}{x} N_1^x N_0^{n-x}}{N^n}$ | $x=0, 1, \ldots, n$ | $n \frac{N_1}{N}$ | $n \frac{N_1}{N} \frac{N_0}{N}$ |
11 | | [Binomial](#binomial)$(n, p)$ | $\binom{n}{x} p^x (1-p)^{n-x}$ | $x=0, 1, \ldots, n$ | $np$ | $np(1-p)$ |
12 | | [Geometric](#geometric)$(p)$ | $(1-p)^{x-1} p$ | $x=1, 2, \ldots$ | $\frac{1}{p}$ | $\frac{1-p}{p^2}$ |
13 | | [NegativeBinomial](#negative-binomial)$(r, p)$ | $\binom{x-1}{r-1} (1-p)^{x-r} p^r$ | $x=r, r+1, \ldots$ | $\frac{r}{p}$ | $\frac{r(1-p)}{p^2}$ |
14 | | [Poisson](#poisson)$(\mu)$ | $e^{-\mu} \frac{\mu^x}{x!}$ | $x=0, 1, 2, \ldots$ | $\mu$ | $\mu$ |
15 |
16 | ## Continuous Distributions {#continuous-distributions}
17 |
18 | | Distribution of $X$ | $f(x)$ | Support | $E[X]$ | $\text{Var}[X]$ |
19 | |:------|:---|:---|:--:|:--:|
20 | | [Uniform](#uniform)$(a, b)$ | $\frac{1}{b-a}$ | $a < x < b$ | $\frac{a+b}{2}$ | $\frac{(b-a)^2}{12}$ |
21 | | [Exponential](#exponential)$(\lambda)$ | $\lambda e^{-\lambda x}$ | $0 < x < \infty$ | $\frac{1}{\lambda}$ | $\frac{1}{\lambda^2}$ |
22 | | [Gamma](#sums-continuous)$(r, \lambda)$ | $\frac{\lambda^r}{(r-1)!}x^{r-1} e^{-\lambda x}$ | $0 < x < \infty$ | $\frac{r}{\lambda}$ | $\frac{r}{\lambda^2}$ |
23 | | [Normal](#normal)$(\mu, \sigma)$ | $\frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}$ | $-\infty < x < \infty$ | $\mu$ | $\sigma^2$ |
24 |
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/binomial-pmf-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/binomial-pmf-1-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/binomial-pmf-2-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/binomial-pmf-2-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/cdf-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/cdf-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/conditional-pmf-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/conditional-pmf-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/conditional-x-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/conditional-x-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/ev-continuous-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/ev-continuous-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/ev-long-run-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/ev-long-run-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/expo-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/expo-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/filter-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/filter-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/geometric-pmf-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/geometric-pmf-1-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/hypergeometric-pmf-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/hypergeometric-pmf-1-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/hypergeometric-pmf-2-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/hypergeometric-pmf-2-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/iir-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/iir-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/lti-filter-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/lti-filter-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/marginal-x-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/marginal-x-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/marginal-y-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/marginal-y-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/negbinom-pmf-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/negbinom-pmf-1-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/negbinom-pmf-2-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/negbinom-pmf-2-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/normal-approx-binomial-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/normal-approx-binomial-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/normal-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/normal-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/pdf-to-cdf-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/pdf-to-cdf-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/pmf-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/pmf-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/poisson-pmf-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/poisson-pmf-1-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/poisson-process-conditional-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/poisson-process-conditional-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/roulette-center-of-mass-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/roulette-center-of-mass-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/standard-normal-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/standard-normal-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/t-cdf-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/t-cdf-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/t-pdf-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/t-pdf-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/uniform-graph-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/uniform-graph-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-1-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-101-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-101-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-102-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-102-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-104-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-104-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-110-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-110-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-120-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-120-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-122-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-122-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-124-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-124-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-126-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-126-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-133-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-133-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-134-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-134-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-135-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-135-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-137-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-137-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-139-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-139-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-146-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-146-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-152-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-152-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-153-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-153-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-154-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-154-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-155-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-155-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-156-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-156-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-157-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-157-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-16-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-16-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-169-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-169-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-170-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-170-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-171-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-171-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-172-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-172-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-173-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-173-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-174-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-174-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-175-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-175-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-176-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-176-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-177-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-177-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-178-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-178-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-180-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-180-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-181-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-181-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-182-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-182-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-183-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-183-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-184-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-184-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-185-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-185-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-186-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-186-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-187-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-187-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-188-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-188-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-189-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-189-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-190-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-190-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-192-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-192-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-193-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-193-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-198-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-198-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-199-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-199-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-2-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-2-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-200-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-200-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-201-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-201-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-202-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-202-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-206-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-206-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-207-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-207-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-208-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-208-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-209-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-209-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-3-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-3-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-4-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-4-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-5-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-5-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-6-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-6-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-66-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-66-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-7-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-7-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-70-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-70-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-92-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-92-1.png
--------------------------------------------------------------------------------
/docs/bookdown-demo_files/figure-html/unnamed-chunk-94-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/bookdown-demo_files/figure-html/unnamed-chunk-94-1.png
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/css/fontawesome/fontawesome-webfont.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dlsun/probability/61a8014d9aeffa220ab53dcf112953f1064a8297/docs/libs/gitbook-2.6.7/css/fontawesome/fontawesome-webfont.ttf
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/css/plugin-bookdown.css:
--------------------------------------------------------------------------------
1 | .book .book-header h1 {
2 | padding-left: 20px;
3 | padding-right: 20px;
4 | }
5 | .book .book-header.fixed {
6 | position: fixed;
7 | right: 0;
8 | top: 0;
9 | left: 0;
10 | border-bottom: 1px solid rgba(0,0,0,.07);
11 | }
12 | span.search-highlight {
13 | background-color: #ffff88;
14 | }
15 | @media (min-width: 600px) {
16 | .book.with-summary .book-header.fixed {
17 | left: 300px;
18 | }
19 | }
20 | @media (max-width: 1240px) {
21 | .book .book-body.fixed {
22 | top: 50px;
23 | }
24 | .book .book-body.fixed .body-inner {
25 | top: auto;
26 | }
27 | }
28 | @media (max-width: 600px) {
29 | .book.with-summary .book-header.fixed {
30 | left: calc(100% - 60px);
31 | min-width: 300px;
32 | }
33 | .book.with-summary .book-body {
34 | transform: none;
35 | left: calc(100% - 60px);
36 | min-width: 300px;
37 | }
38 | .book .book-body.fixed {
39 | top: 0;
40 | }
41 | }
42 |
43 | .book .book-body.fixed .body-inner {
44 | top: 50px;
45 | }
46 | .book .book-body .page-wrapper .page-inner section.normal sub, .book .book-body .page-wrapper .page-inner section.normal sup {
47 | font-size: 85%;
48 | }
49 |
50 | @media print {
51 | .book .book-summary, .book .book-body .book-header, .fa {
52 | display: none !important;
53 | }
54 | .book .book-body.fixed {
55 | left: 0px;
56 | }
57 | .book .book-body,.book .book-body .body-inner, .book.with-summary {
58 | overflow: visible !important;
59 | }
60 | }
61 | .kable_wrapper {
62 | border-spacing: 20px 0;
63 | border-collapse: separate;
64 | border: none;
65 | margin: auto;
66 | }
67 | .kable_wrapper > tbody > tr > td {
68 | vertical-align: top;
69 | }
70 | .book .book-body .page-wrapper .page-inner section.normal table tr.header {
71 | border-top-width: 2px;
72 | }
73 | .book .book-body .page-wrapper .page-inner section.normal table tr:last-child td {
74 | border-bottom-width: 2px;
75 | }
76 | .book .book-body .page-wrapper .page-inner section.normal table td, .book .book-body .page-wrapper .page-inner section.normal table th {
77 | border-left: none;
78 | border-right: none;
79 | }
80 | .book .book-body .page-wrapper .page-inner section.normal table.kable_wrapper > tbody > tr, .book .book-body .page-wrapper .page-inner section.normal table.kable_wrapper > tbody > tr > td {
81 | border-top: none;
82 | }
83 | .book .book-body .page-wrapper .page-inner section.normal table.kable_wrapper > tbody > tr:last-child > td {
84 | border-bottom: none;
85 | }
86 |
87 | div.theorem, div.lemma, div.corollary, div.proposition, div.conjecture {
88 | font-style: italic;
89 | }
90 | span.theorem, span.lemma, span.corollary, span.proposition, span.conjecture {
91 | font-style: normal;
92 | }
93 | div.proof:after {
94 | content: "\25a2";
95 | float: right;
96 | }
97 | .header-section-number {
98 | padding-right: .5em;
99 | }
100 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/css/plugin-clipboard.css:
--------------------------------------------------------------------------------
1 | div.sourceCode {
2 | position: relative;
3 | }
4 |
5 | .copy-to-clipboard-button {
6 | position: absolute;
7 | right: 0;
8 | top: 0;
9 | visibility: hidden;
10 | }
11 |
12 | .copy-to-clipboard-button:focus {
13 | outline: 0;
14 | }
15 |
16 | div.sourceCode:hover > .copy-to-clipboard-button {
17 | visibility: visible;
18 | }
19 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/css/plugin-search.css:
--------------------------------------------------------------------------------
1 | .book .book-summary .book-search {
2 | padding: 6px;
3 | background: transparent;
4 | position: absolute;
5 | top: -50px;
6 | left: 0px;
7 | right: 0px;
8 | transition: top 0.5s ease;
9 | }
10 | .book .book-summary .book-search input,
11 | .book .book-summary .book-search input:focus,
12 | .book .book-summary .book-search input:hover {
13 | width: 100%;
14 | background: transparent;
15 | border: 1px solid #ccc;
16 | box-shadow: none;
17 | outline: none;
18 | line-height: 22px;
19 | padding: 7px 4px;
20 | color: inherit;
21 | box-sizing: border-box;
22 | }
23 | .book.with-search .book-summary .book-search {
24 | top: 0px;
25 | }
26 | .book.with-search .book-summary ul.summary {
27 | top: 50px;
28 | }
29 | .with-search .summary li[data-level] a[href*=".html#"] {
30 | display: none;
31 | }
32 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/css/plugin-table.css:
--------------------------------------------------------------------------------
1 | .book .book-body .page-wrapper .page-inner section.normal table{display:table;width:100%;border-collapse:collapse;border-spacing:0;overflow:auto}.book .book-body .page-wrapper .page-inner section.normal table td,.book .book-body .page-wrapper .page-inner section.normal table th{padding:6px 13px;border:1px solid #ddd}.book .book-body .page-wrapper .page-inner section.normal table tr{background-color:#fff;border-top:1px solid #ccc}.book .book-body .page-wrapper .page-inner section.normal table tr:nth-child(2n){background-color:#f8f8f8}.book .book-body .page-wrapper .page-inner section.normal table th{font-weight:700}
2 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/js/jquery.highlight.js:
--------------------------------------------------------------------------------
1 | gitbook.require(["jQuery"], function(jQuery) {
2 |
3 | /*
4 | * jQuery Highlight plugin
5 | *
6 | * Based on highlight v3 by Johann Burkard
7 | * http://johannburkard.de/blog/programming/javascript/highlight-javascript-text-higlighting-jquery-plugin.html
8 | *
9 | * Code a little bit refactored and cleaned (in my humble opinion).
10 | * Most important changes:
11 | * - has an option to highlight only entire words (wordsOnly - false by default),
12 | * - has an option to be case sensitive (caseSensitive - false by default)
13 | * - highlight element tag and class names can be specified in options
14 | *
15 | * Copyright (c) 2009 Bartek Szopka
16 | *
17 | * Licensed under MIT license.
18 | *
19 | */
20 |
21 | jQuery.extend({
22 | highlight: function (node, re, nodeName, className) {
23 | if (node.nodeType === 3) {
24 | var match = node.data.match(re);
25 | if (match) {
26 | var highlight = document.createElement(nodeName || 'span');
27 | highlight.className = className || 'highlight';
28 | var wordNode = node.splitText(match.index);
29 | wordNode.splitText(match[0].length);
30 | var wordClone = wordNode.cloneNode(true);
31 | highlight.appendChild(wordClone);
32 | wordNode.parentNode.replaceChild(highlight, wordNode);
33 | return 1; //skip added node in parent
34 | }
35 | } else if ((node.nodeType === 1 && node.childNodes) && // only element nodes that have children
36 | !/(script|style)/i.test(node.tagName) && // ignore script and style nodes
37 | !(node.tagName === nodeName.toUpperCase() && node.className === className)) { // skip if already highlighted
38 | for (var i = 0; i < node.childNodes.length; i++) {
39 | i += jQuery.highlight(node.childNodes[i], re, nodeName, className);
40 | }
41 | }
42 | return 0;
43 | }
44 | });
45 |
46 | jQuery.fn.unhighlight = function (options) {
47 | var settings = { className: 'highlight', element: 'span' };
48 | jQuery.extend(settings, options);
49 |
50 | return this.find(settings.element + "." + settings.className).each(function () {
51 | var parent = this.parentNode;
52 | parent.replaceChild(this.firstChild, this);
53 | parent.normalize();
54 | }).end();
55 | };
56 |
57 | jQuery.fn.highlight = function (words, options) {
58 | var settings = { className: 'highlight', element: 'span', caseSensitive: false, wordsOnly: false };
59 | jQuery.extend(settings, options);
60 |
61 | if (words.constructor === String) {
62 | words = [words];
63 | // also match 'foo-bar' if search for 'foo bar'
64 | if (/\s/.test(words[0])) words.push(words[0].replace(/\s+/, '-'));
65 | }
66 | words = jQuery.grep(words, function(word, i){
67 | return word !== '';
68 | });
69 | words = jQuery.map(words, function(word, i) {
70 | return word.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, "\\$&");
71 | });
72 | if (words.length === 0) { return this; }
73 |
74 | var flag = settings.caseSensitive ? "" : "i";
75 | var pattern = "(" + words.join("|") + ")";
76 | if (settings.wordsOnly) {
77 | pattern = "\\b" + pattern + "\\b";
78 | }
79 | var re = new RegExp(pattern, flag);
80 |
81 | return this.each(function () {
82 | jQuery.highlight(this, re, settings.element, settings.className);
83 | });
84 | };
85 |
86 | });
87 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/js/plugin-clipboard.js:
--------------------------------------------------------------------------------
1 | gitbook.require(["gitbook", "jQuery"], function(gitbook, $) {
2 |
3 | var copyButton = '';
4 | var clipboard;
5 |
6 | gitbook.events.bind("page.change", function() {
7 |
8 | if (!ClipboardJS.isSupported()) return;
9 |
10 | // the page.change event is thrown twice: before and after the page changes
11 | if (clipboard) {
12 | // clipboard is already defined
13 | // we can deduct that we are before page changes
14 | clipboard.destroy(); // destroy the previous events listeners
15 | clipboard = undefined; // reset the clipboard object
16 | return;
17 | }
18 |
19 | $(copyButton).prependTo("div.sourceCode");
20 |
21 | clipboard = new ClipboardJS(".copy-to-clipboard-button", {
22 | text: function(trigger) {
23 | return trigger.parentNode.textContent;
24 | }
25 | });
26 |
27 | });
28 |
29 | });
30 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/js/plugin-fontsettings.js:
--------------------------------------------------------------------------------
1 | gitbook.require(["gitbook", "lodash", "jQuery"], function(gitbook, _, $) {
2 | var fontState;
3 |
4 | var THEMES = {
5 | "white": 0,
6 | "sepia": 1,
7 | "night": 2
8 | };
9 |
10 | var FAMILY = {
11 | "serif": 0,
12 | "sans": 1
13 | };
14 |
15 | // Save current font settings
16 | function saveFontSettings() {
17 | gitbook.storage.set("fontState", fontState);
18 | update();
19 | }
20 |
21 | // Increase font size
22 | function enlargeFontSize(e) {
23 | e.preventDefault();
24 | if (fontState.size >= 4) return;
25 |
26 | fontState.size++;
27 | saveFontSettings();
28 | };
29 |
30 | // Decrease font size
31 | function reduceFontSize(e) {
32 | e.preventDefault();
33 | if (fontState.size <= 0) return;
34 |
35 | fontState.size--;
36 | saveFontSettings();
37 | };
38 |
39 | // Change font family
40 | function changeFontFamily(index, e) {
41 | e.preventDefault();
42 |
43 | fontState.family = index;
44 | saveFontSettings();
45 | };
46 |
47 | // Change type of color
48 | function changeColorTheme(index, e) {
49 | e.preventDefault();
50 |
51 | var $book = $(".book");
52 |
53 | if (fontState.theme !== 0)
54 | $book.removeClass("color-theme-"+fontState.theme);
55 |
56 | fontState.theme = index;
57 | if (fontState.theme !== 0)
58 | $book.addClass("color-theme-"+fontState.theme);
59 |
60 | saveFontSettings();
61 | };
62 |
63 | function update() {
64 | var $book = gitbook.state.$book;
65 |
66 | $(".font-settings .font-family-list li").removeClass("active");
67 | $(".font-settings .font-family-list li:nth-child("+(fontState.family+1)+")").addClass("active");
68 |
69 | $book[0].className = $book[0].className.replace(/\bfont-\S+/g, '');
70 | $book.addClass("font-size-"+fontState.size);
71 | $book.addClass("font-family-"+fontState.family);
72 |
73 | if(fontState.theme !== 0) {
74 | $book[0].className = $book[0].className.replace(/\bcolor-theme-\S+/g, '');
75 | $book.addClass("color-theme-"+fontState.theme);
76 | }
77 | };
78 |
79 | function init(config) {
80 | var $bookBody, $book;
81 |
82 | //Find DOM elements.
83 | $book = gitbook.state.$book;
84 | $bookBody = $book.find(".book-body");
85 |
86 | // Instantiate font state object
87 | fontState = gitbook.storage.get("fontState", {
88 | size: config.size || 2,
89 | family: FAMILY[config.family || "sans"],
90 | theme: THEMES[config.theme || "white"]
91 | });
92 |
93 | update();
94 | };
95 |
96 |
97 | gitbook.events.bind("start", function(e, config) {
98 | var opts = config.fontsettings;
99 | if (!opts) return;
100 |
101 | // Create buttons in toolbar
102 | gitbook.toolbar.createButton({
103 | icon: 'fa fa-font',
104 | label: 'Font Settings',
105 | className: 'font-settings',
106 | dropdown: [
107 | [
108 | {
109 | text: 'A',
110 | className: 'font-reduce',
111 | onClick: reduceFontSize
112 | },
113 | {
114 | text: 'A',
115 | className: 'font-enlarge',
116 | onClick: enlargeFontSize
117 | }
118 | ],
119 | [
120 | {
121 | text: 'Serif',
122 | onClick: _.partial(changeFontFamily, 0)
123 | },
124 | {
125 | text: 'Sans',
126 | onClick: _.partial(changeFontFamily, 1)
127 | }
128 | ],
129 | [
130 | {
131 | text: 'White',
132 | onClick: _.partial(changeColorTheme, 0)
133 | },
134 | {
135 | text: 'Sepia',
136 | onClick: _.partial(changeColorTheme, 1)
137 | },
138 | {
139 | text: 'Night',
140 | onClick: _.partial(changeColorTheme, 2)
141 | }
142 | ]
143 | ]
144 | });
145 |
146 |
147 | // Init current settings
148 | init(opts);
149 | });
150 | });
151 |
152 |
153 |
--------------------------------------------------------------------------------
/docs/libs/gitbook-2.6.7/js/plugin-sharing.js:
--------------------------------------------------------------------------------
1 | gitbook.require(["gitbook", "lodash", "jQuery"], function(gitbook, _, $) {
2 | var SITES = {
3 | 'github': {
4 | 'label': 'Github',
5 | 'icon': 'fa fa-github',
6 | 'onClick': function(e) {
7 | e.preventDefault();
8 | var repo = $('meta[name="github-repo"]').attr('content');
9 | if (typeof repo === 'undefined') throw("Github repo not defined");
10 | window.open("https://github.com/"+repo);
11 | }
12 | },
13 | 'facebook': {
14 | 'label': 'Facebook',
15 | 'icon': 'fa fa-facebook',
16 | 'onClick': function(e) {
17 | e.preventDefault();
18 | window.open("http://www.facebook.com/sharer/sharer.php?u="+encodeURIComponent(location.href));
19 | }
20 | },
21 | 'twitter': {
22 | 'label': 'Twitter',
23 | 'icon': 'fa fa-twitter',
24 | 'onClick': function(e) {
25 | e.preventDefault();
26 | window.open("http://twitter.com/intent/tweet?text="+document.title+"&url="+encodeURIComponent(location.href)+"&hashtags=rmarkdown,bookdown");
27 | }
28 | },
29 | 'linkedin': {
30 | 'label': 'LinkedIn',
31 | 'icon': 'fa fa-linkedin',
32 | 'onClick': function(e) {
33 | e.preventDefault();
34 | window.open("https://www.linkedin.com/shareArticle?mini=true&url="+encodeURIComponent(location.href)+"&title="+encodeURIComponent(document.title));
35 | }
36 | },
37 | 'weibo': {
38 | 'label': 'Weibo',
39 | 'icon': 'fa fa-weibo',
40 | 'onClick': function(e) {
41 | e.preventDefault();
42 | window.open("http://service.weibo.com/share/share.php?content=utf-8&url="+encodeURIComponent(location.href)+"&title="+encodeURIComponent(document.title));
43 | }
44 | },
45 | 'instapaper': {
46 | 'label': 'Instapaper',
47 | 'icon': 'fa fa-italic',
48 | 'onClick': function(e) {
49 | e.preventDefault();
50 | window.open("http://www.instapaper.com/text?u="+encodeURIComponent(location.href));
51 | }
52 | },
53 | 'vk': {
54 | 'label': 'VK',
55 | 'icon': 'fa fa-vk',
56 | 'onClick': function(e) {
57 | e.preventDefault();
58 | window.open("http://vkontakte.ru/share.php?url="+encodeURIComponent(location.href));
59 | }
60 | }
61 | };
62 |
63 |
64 |
65 | gitbook.events.bind("start", function(e, config) {
66 | var opts = config.sharing;
67 | if (!opts) return;
68 |
69 | // Create dropdown menu
70 | var menu = _.chain(opts.all)
71 | .map(function(id) {
72 | var site = SITES[id];
73 | if (!site) return;
74 | return {
75 | text: site.label,
76 | onClick: site.onClick
77 | };
78 | })
79 | .compact()
80 | .value();
81 |
82 | // Create main button with dropdown
83 | if (menu.length > 0) {
84 | gitbook.toolbar.createButton({
85 | icon: 'fa fa-share-alt',
86 | label: 'Share',
87 | position: 'right',
88 | dropdown: [menu]
89 | });
90 | }
91 |
92 | // Direct actions to share
93 | _.each(SITES, function(site, sideId) {
94 | if (!opts[sideId]) return;
95 |
96 | gitbook.toolbar.createButton({
97 | icon: site.icon,
98 | label: site.label,
99 | title: site.label,
100 | position: 'right',
101 | onClick: site.onClick
102 | });
103 | });
104 | });
105 | });
106 |
--------------------------------------------------------------------------------
/docs/style.css:
--------------------------------------------------------------------------------
1 | p.caption {
2 | color: #777;
3 | margin-top: 10px;
4 | }
5 | p code {
6 | white-space: inherit;
7 | }
8 | pre {
9 | word-break: normal;
10 | word-wrap: normal;
11 | }
12 | pre code {
13 | white-space: inherit;
14 | }
15 |
--------------------------------------------------------------------------------
/double-counting.Rmd:
--------------------------------------------------------------------------------
1 | # Double Counting {#double-counting}
2 |
3 | ## Motivating Example {-}
4 |
5 | The French nobleman (and avid gambler) Chevalier de Méré knew that betting on at least one
6 | six (⚅) in 4 rolls of a die was a favorable bet for him. Once other gamblers caught on, he
7 | devised a new bet: at least one double-six (⚅⚅) in 24 rolls of two dice. Although he did not know
8 | how to calculate the probabilities, he reasoned that the two bets should be equivalent, since
9 |
10 | - double-sixes are $1/6$ as likely as a single six,
11 | - but there are $6$ times as many rolls to compensate
12 |
13 | Are the two bets equivalent?
14 |
15 | ## Theory {-}
16 |
17 | Here is a common (but wrong) way of calculating the probability of getting at least one six in
18 | 4 rolls of a die.
19 |
20 | \begin{align*}
21 | P(\text{at least one ⚅}) &= P(\text{⚅ on 1st roll, or ⚅ on 2nd roll, or ⚅ on 3rd roll or ⚅ on 4th roll}) \\
22 | &= P(\text{⚅ on 1st roll}) + P(\text{⚅ on 2nd roll}) + P(\text{⚅ on 3rd roll}) + P(\text{⚅ on 4th roll}) \\
23 | &= \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} \\
24 | &= \frac{4}{6} ?
25 | \end{align*}
26 |
27 | What's wrong with the above reasoning? The problem is that
28 | \[ P(A \text{ or } B) \neq P(A) + P(B) \]
29 | in general. The reason is that $P(A) + P(B)$ double counts outcomes where $A$ and $B$ both happen.
30 |
31 | For example, suppose $A$ is "⚅ on 1st roll" and $B$ is "⚅ on 2nd roll". Then, it is not hard to see
32 | that the event $A \text{ or } B$ happens on 11 out of 36 outcomes, so
33 | \[ P(A \text{ or } B) = \frac{11}{36} \neq \frac{12}{36} = \frac{1}{6} + \frac{1}{6} = P(A) + P(B). \]
34 | $P(A) + P(B)$ double counts the outcome where _both_ rolls were ⚅s.
35 |
36 | ```{r, echo=FALSE, engine='tikz', out.width='60%', fig.ext='png', fig.align='center', fig.cap='Double Counting Dice Rolls', engine.opts = list(template = "tikz_template.tex")}
37 | \begin{tikzpicture}
38 |
39 | \foreach \i in {1, 2, 3, 4, 5, 6} {
40 | \fill [blue!50,opacity=.3] (1.5*6 - .75, -1*\i - .5) rectangle (1.5*6 + .75, -1*\i + .5);
41 | }
42 | \foreach \i in {1, 2, 3, 4, 5, 6} {
43 | \fill [red!50,opacity=.3] (1.5*\i - .75, -1*6 - .5) rectangle (1.5*\i + .75, -1*6 + .5);
44 | }
45 |
46 | \foreach \i in {1, ..., 6} {
47 | \foreach \j in {1, ..., 6} {
48 | \node[rectangle,draw] at (1.5*\i, -\j) {\Large\epsdice{\j}\ \epsdice{\i}};
49 | }
50 | }
51 |
52 | \foreach \i in {1, ..., 6} {
53 | \foreach \j in {1, ..., 6} {
54 | \node[rectangle,draw,opacity=.3] at (1.5*\i, -\j) {\Large\epsdice{\j}\ \epsdice{\i}};
55 | }
56 | }
57 |
58 | \foreach \i in {1, ..., 6} {
59 | \node[rectangle,draw] at (1.5*\i, -1*6) {\Large\epsdice{6}\ \epsdice{\i}};
60 | \node[rectangle,draw] at (1.5*6, -1*\i) {\Large\epsdice{\i}\ \epsdice{6}};
61 | }
62 | \end{tikzpicture}
63 | ```
64 |
65 | One way to avoid double counting is to subtract the cases that are double counted.
66 |
67 | ```{theorem, name="Inclusion-Exclusion Principle"}
68 | \[ P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B) \]
69 | ```
70 |
71 | ```{r, echo=FALSE, engine='tikz', out.width='60%', fig.ext='png', fig.align='center', fig.cap='Intuition for the Inclusion-Exclusion Principle'}
72 |
73 | \tikzset{filled/.style={fill=circle area, draw=circle edge, thick},
74 | outline/.style={draw=circle edge, thick}}
75 |
76 | \begin{tikzpicture}
77 | \draw[fill=red!50,fill opacity=0.5] (0, 0) circle (30pt);
78 | \draw[fill=blue!50,fill opacity=0.5] (1, 0) circle (30pt);
79 |
80 | \node at (-0.4, 0) {{\scriptsize $A$}};
81 | \node at (1.4, 0) {{\scriptsize $B$}};
82 | \node at (0.5, 0) {{\scriptsize $A \text{ and } B$}};
83 | \end{tikzpicture}
84 | ```
85 |
86 | So, for example:
87 | \begin{align*}
88 | P(\text{at least one ⚅ in 2 rolls}) &= P(\text{⚅ on 1st roll}) + P(\text{⚅ on 2nd roll}) - P(\text{⚅ on both rolls}) \\
89 | &= \frac{1}{6} + \frac{1}{6} - \frac{1}{36} \\
90 | &= \frac{11}{36}
91 | \end{align*}
92 |
93 | However, this approach does not scale well to calculating the probability of at least one ⚅ in 4 rolls.
94 |
95 | In many situations, it is easier to calculate the probability that an event does _not_ happen, also known as the
96 | **complement** of the event. Because the total probability has to be 1, the two probabilities are related by the following formula.
97 |
98 | ```{theorem complement, name="Complement Rule"}
99 | \[ P(\text{not } A) = 1 - P(A). \]
100 | ```
101 |
102 | Let's apply the Complement Rule to the Chevalier de Méré's problem. To calculate the probability of getting at least one ⚅ in 4 rolls,
103 | we can calculate the probability of the complement. If we did not get at least one ⚅, that must mean that we got no ⚅s. This means that
104 | every roll was one of the other 5 outcomes. This probability is much easier to calculate using the counting tricks we have learned.
105 | \begin{align*}
106 | P(\text{at least one ⚅}) &= 1 - P(\text{no ⚅s}) \\
107 | &= 1 - \frac{5^4}{6^4} \\
108 | &\approx 0.5177.
109 | \end{align*}
110 |
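One way to double-check this answer is with a quick simulation. The chunk below is only a sketch (the seed and the number of simulated games are arbitrary choices), but the empirical frequency should land close to $0.5177$. The same idea can be adapted to the 24-roll bet in the exercises below.

```{r}
# Simulate many rounds of "at least one six in 4 rolls" (a sketch; settings are arbitrary)
set.seed(1)
n_sims <- 100000
rolls <- matrix(sample(1:6, 4 * n_sims, replace = TRUE), ncol = 4)
mean(rowSums(rolls == 6) > 0)  # should be close to 1 - (5/6)^4 = 0.5177
```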
111 |
112 | ## Examples {-}
113 |
114 | 1. In poker, a "two pair" hand has 2 cards of one rank,
115 | 2 cards of another rank, and 1 card of a third rank. For example, the hand
116 | 2, 2, Q, Q, J is a "two pair". Your friend calculates the probability of "two pair" as follows:
117 |
118 | - There are $\binom{52}{5}$ equally likely hands (where order does not matter).
119 | - We count the number of ways to choose the first pair. There are $13$ choices for the rank and $\binom{4}{2}$ choices for the two cards within the rank, so there are $13 \times \binom{4}{2}$ ways.
120 | - Next, we count the ways to choose the second pair. Since one rank has already been chosen, there are $12 \times \binom{4}{2}$ ways to do this.
121 | - Finally, we choose the remaining card. There are $11 \times \binom{4}{1} = 44$ ways to do this.
122 |
123 | Your friend calculates the probability as
124 | \[ \frac{13 \times \binom{4}{2} \times 12 \times \binom{4}{2} \times 44}{\binom{52}{5}} \approx .095, \]
125 | but then [finds online](http://www.math.hawaii.edu/~ramsey/Probability/PokerHands.html) that the actual probability of "two pair" is only $.0475$. This number is exactly half the probability that your friend got, so he suspects that he double-counted. But where?
126 |
127 | 2. Complete the calculation for the Chevalier de Méré. Calculate the probability of
128 | getting at least one ⚅⚅ in 24 rolls of two dice.
129 |
130 |
--------------------------------------------------------------------------------
/ev-continuous.Rmd:
--------------------------------------------------------------------------------
1 | # Expected Value of Continuous Random Variables {#ev-continuous}
2 |
3 | ## Theory {-}
4 |
5 |
6 |
7 | ```{definition, name="Expected Value of a Continuous Random Variable"}
8 | Let $X$ be a continuous random variable with p.d.f. $f(x)$. Then, the
9 | expected value of $X$ is defined as
10 | \begin{equation}
11 | E[X] = \int_{-\infty}^\infty x \cdot f(x)\,dx.
12 | (\#eq:ev-continuous)
13 | \end{equation}
14 | ```
15 | Compare this definition with the definition of expected value for a discrete random
16 | variable \@ref(eq:ev). We simply replaced the p.m.f. by the p.d.f. and
17 | the sum by an integral.
18 |
19 | ```{example ev-unif, name="Expected Value of the Uniform Distribution"}
20 | Let $X$ be a $\text{Uniform}(a, b)$ random variable. What is $E[X]$?
21 |
22 | First, the p.d.f. is
23 | \[ f(x) = \begin{cases} \frac{1}{b-a} & a \leq x \leq b \\ 0 & \text{otherwise} \end{cases}, \]
24 | which is non-zero only between $a$ and $b$. So, even though \@ref(eq:ev-continuous) says we should
25 | integrate from $-\infty$ to $\infty$, the integrand will only be non-zero between $a$ and $b$.
26 |
27 | \begin{align*}
28 | E[X] &= \int_a^b x \cdot \frac{1}{b-a} \,dx \\
29 | &= \frac{1}{b-a} \left. \frac{x^2}{2} \right]_a^b \\
30 | &= \frac{1}{b-a} (b^2 - a^2) \frac{1}{2} \\
31 | &= \frac{a + b}{2}.
32 | \end{align*}
33 |
34 | This is just the midpoint of the possible values of this uniform random variable: the
35 | point where the p.d.f. would balance if we placed it on a fulcrum.
36 | ```{r ev-continuous, echo=FALSE, engine='tikz', out.width='70%', fig.ext='png', fig.align='center', fig.cap='Expected Value of the Uniform Distribution', engine.opts = list(template = "tikz_template.tex")}
37 | \begin{tikzpicture}
38 |
39 | \draw [gray,<->] (-2, 0) coordinate -- (2, 0) coordinate;
40 |
41 | \fill[red!50] (-.3, -0.5) -- (0, 0) -- (.3, -0.5) -- cycle;
42 | \node[anchor=north,red] at (0, -0.5) {$\frac{a + b}{2}$};
43 |
44 | \node[anchor=north] at (-1, 0) {$a$};
45 | \node[anchor=north] at (1, 0) {$b$};
46 |
47 | \draw[thick] (-2, 0) coordinate -- (-1, 0) coordinate -- (-1, 1) coordinate -- (1, 1) coordinate -- (1, 0) coordinate -- (2, 0) coordinate;
48 |
49 | \end{tikzpicture}
50 | ```
51 |
52 | ```{example ev-expo, name="Expected Value and Median of the Exponential Distribution"}
53 | Let $X$ be an $\text{Exponential}(\lambda)$ random variable. What is $E[X]$? Does the
54 | random variable have an equal chance of being above or below the expected value?
55 |
56 | First, we calculate the expected value using \@ref(eq:ev-continuous) and the
57 | p.d.f. of the exponential distribution \@ref(eq:exponential-pdf). This is an
58 | exercise in integration by parts.
59 | \begin{align*}
60 | E[X] &= \int_0^\infty x \cdot \lambda e^{-\lambda x} \,dx \\
61 | &= -x e^{-\lambda x} \Big|_0^\infty - \int_0^\infty -e^{-\lambda x}\,dx \\
62 | &= \ (-0 + 0) - \underbrace{\frac{1}{\lambda} e^{-\lambda x} \Big|_0^\infty}_{0 - \frac{1}{\lambda}} \\
63 | &= \frac{1}{\lambda}
64 | \end{align*}
65 |
66 | Now, let's calculate the probability that the random variable is below expected value.
67 | \[ P(X < E[X]) = P(X < \frac{1}{\lambda}) = \int_0^{1/\lambda} \lambda e^{-\lambda x}\,dx = 1 - e^{-1} \approx .632. \]
68 | The random variable does not have a 50/50 chance of being above or below its expected value.
69 |
70 | The value that a random variable has an equal chance of being above or below is called its **median**. To
71 | calculate the median, we have to solve for $m$ such that
72 | \[ P(X < m) = 0.5. \]
73 | (Equivalently, we could solve $P(X > m) = 0.5$. It also doesn't matter whether we use
74 | $<$ or $\leq$, since this is a continuous random variable, so $P(X = m) = 0$.)
75 |
76 | Calculating the probability in terms of $m$, we have
77 | \[ 0.5 = P(X < m) = \int_0^m \lambda e^{-\lambda x}\,dx = 1 - e^{-\lambda m}. \]
78 | Solving for $m$, we see that the median is $-\log(0.5) / \lambda$. (Note: $\log$ here is the natural logarithm,
79 | base $e$.)
80 | ```{r, echo=FALSE, engine='tikz', out.width='70%', fig.ext='png', fig.align='center', fig.cap='Mean vs. Median of the Exponential Distribution', engine.opts = list(template = "tikz_template.tex")}
81 | \begin{tikzpicture}
82 |
83 |
84 | \draw[red!50,dashed,thick] (1, 0) -- (1, 2);
85 | \fill[red!50] (.8, -0.4) -- (1, 0) -- (1.2, -0.4) -- cycle;
86 | \node[anchor=east,red,rotate=90] at (1.01, -0.4) {$E[X]$};
87 |
88 | \fill [blue!40,smooth,samples=50,domain=0:.693] (0, 0) -- plot(\x,{2*exp(-(\x))}) -- (.693, 0);
89 | \draw[blue!40,dashed,thick] (.693, 0) -- (.693, 2);
90 | \node[anchor=east,blue,rotate=90] at (.65, 0) {median};
91 | \node[blue] at (0.33, 0.65) {{\small 50\%}};
92 |
93 | \draw [gray,->] (-0.1, 0) coordinate -- (3.5, 0) coordinate;
94 | \draw [gray,->] (0, -0.1) coordinate -- (0, 2.1) coordinate;
95 | \node[anchor=north] at (3.5, 0) {$x$};
96 | \node[anchor=east] at (0, 1.8) {$f(x)$};
97 | \draw [thick,smooth,samples=100,domain=0:3.5] plot(\x,{2*exp(-(\x))});
98 |
99 |
100 | \end{tikzpicture}
101 | ```
102 |
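If you would like a numerical check of these formulas, the chunk below simulates a large number of draws from an exponential distribution (with $\lambda = 2$ chosen arbitrarily) and compares the sample mean, the sample median, and the fraction of draws below the mean to $1/\lambda$, $-\log(0.5)/\lambda$, and $1 - e^{-1}$.

```{r}
# Simulated check of the exponential mean and median (lambda = 2 is an arbitrary choice)
set.seed(1)
lambda <- 2
x <- rexp(100000, rate = lambda)
mean(x)            # should be close to 1 / lambda = 0.5
median(x)          # should be close to -log(0.5) / lambda, about 0.347
mean(x < mean(x))  # should be close to 1 - exp(-1), about 0.632
```
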
103 | Formulas for the variance of named continuous distributions can be found
104 | in Appendix \@ref(continuous-distributions).
105 |
106 |
107 | ## Essential Practice {-}
108 |
109 | 1. The distance (in hundreds of miles) driven by a trucker in one day is a continuous
110 | random variable $X$ whose cumulative distribution function (c.d.f.) is given by:
111 | \[ F(x) = \begin{cases} 0 & x < 0 \\ x^3 / 216 & 0 \leq x \leq 6 \\ 1 & x > 6 \end{cases}. \]
112 |
113 | a. Calculate $E[X]$, the expected value of $X$.
114 | b. Calculate the median of $X$.
115 | c. Sketch a graph of the p.d.f., along with the locations of the expected value and median.
116 |
117 | 2. Suppose that an electronic device has a lifetime $T$ (in hours) that follows
118 | an $\text{Exponential}(\lambda=\frac{1}{1000})$ distribution.
119 |
120 | a. What is $E[T]$?
121 | b. Suppose that the cost of manufacturing one such item is $2. The manufacturer sells
122 | the item for $5, but guarantees a total refund if the lifetime ends up being less than 900 hours.
123 | What is the manufacturer's expected profit per item?
124 |
125 |
--------------------------------------------------------------------------------
/ev-infinity.Rmd:
--------------------------------------------------------------------------------
1 | # Expected Value and Infinity {#ev-infinity}
2 |
3 | ## Pascal's Wager {#pascals-wager}
4 |
5 | The mathematician Blaise Pascal (1623-1662) made an argument for why it was rational
6 | to believe in God---at least the Judeo-Christian God. Pascal acknowledged that God may or
7 | may not exist and assigned the probability $p > 0$ to God's existence. Then, he
8 | considered the following bets:
9 |
10 | 1. If you believe in God, and He exists, then your reward is infinite (i.e., eternal salvation).
11 | If He does not exist, then your reward is some number $a$, which may be negative but finite.
12 | 2. If you do not believe in God, and He exists, then your reward is negative infinity
13 | (i.e., eternal damnation). If He does not exist, then your reward is some number $b$, which is
14 | finite.
15 |
16 | Looking at the first bet (belief in God), Pascal calculated the expected reward $E[R]$ as follows. The p.m.f. of $R$ is
17 | \[ \begin{array}{r|cc}
18 | r & a & \infty \\
19 | \hline
20 | f(r) & 1-p & p,
21 | \end{array} \]
22 | and the expected value of this bet is
23 | \[ E[R] = a\cdot (1-p) + \infty \cdot p = \infty. \]
24 |
25 | Can you do the same for the second bet?
26 |
27 | Pascal argued that we should pick the bet with the higher expected value, which is to believe in God.
28 |
29 | I present this only as an illustration of how expected value can be applied in surprising
30 | contexts, not to convince you to believe or not to believe. There have been
31 | many philosophical objections to Pascal's wager. If you are interested, this video discusses
32 | some of the debates around Pascal's wager. You can find many more discussions on YouTube.
33 |
34 |
35 |
36 |
37 | ## St. Petersburg Paradox {#st-petersburg}
38 |
39 | In the Pascal's Wager example, we dealt with random variables which could take on
40 | values such as $\infty$ or $-\infty$. It is not surprising that the expected value
41 | is infinite when infinity is a possible value. However, the expected value can be infinite,
42 | even if the random variable is finite-valued. Let's look at an example.
43 |
44 | You are offered the following game at a carnival.
45 | The game starts with $1 in a jar.
46 | You toss a coin repeatedly. Each time the coin lands heads,
47 | the amount of money in the jar is doubled. As soon as the
48 | coin lands tails, you cash out whatever money is in the jar, and
49 | the game ends.
50 |
51 | For example, if you toss the sequence HHHT, then you win $8 because:
52 |
53 | - After the first H, the amount in the jar is doubled to $2.
54 | - After the second H, the amount in the jar is doubled to $4.
55 | - After the third H, the amount in the jar is doubled to $8.
56 | - After the T, the game ends, and you win the $8 in the jar.
57 |
58 | How much would you be willing to pay to play this game?
59 |
60 | We have seen that the expected value represents the "fair value" of a game.
61 | First, let's work out the p.m.f. of the amount of money you win, $W$. Fill in the first
62 | few probabilities of this p.m.f. table.
63 | \[ \begin{array}{rcccccc}
64 | w & 1 & 2 & 4 & ? & ? & \ldots \\
65 | \hline
66 | f(w) & 1/2 & 1/4 & ? & ? & ? & \ldots \\
67 | \end{array}\]
68 |
69 | Now calculate $E[W]$ from the p.m.f. What does this suggest about how much
70 | you should be willing to pay? This is a famous puzzle in probability called the
71 | **St. Petersburg Paradox**.
72 |
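You can also get a feel for the answer by simulating the game. The sketch below (the seed and number of games are arbitrary choices) uses R's `rgeom`, which returns the number of heads before the first tails; notice how unstable the average winnings are from one run to the next.

```{r}
# Simulate many plays of the St. Petersburg game (a sketch; settings are arbitrary)
set.seed(1)
n_games <- 100000
heads <- rgeom(n_games, prob = 1/2)  # number of heads before the first tails
winnings <- 2^heads                  # $1 in the jar, doubled once for each head
mean(winnings)  # rerun with different seeds and compare the averages
```
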
73 | Here is a video discussing the St. Petersburg paradox.
74 |
75 |
76 |
--------------------------------------------------------------------------------
/ev-product.Rmd:
--------------------------------------------------------------------------------
1 | # Expected Value of a Product {#ev-product}
2 |
3 |
4 | ## Theory {-}
5 |
6 | ```{theorem ev-product, name="Expected Value of a Product"}
7 | If $X$ and $Y$ are _independent_ random variables, then
8 | \begin{equation}
9 | E[XY] = E[X] E[Y].
10 | (\#eq:ev-product)
11 | \end{equation}
12 | In fact, if $X$ and $Y$ are independent, then for any functions $g$ and $h$,
13 | \begin{equation}
14 | E[g(X)h(Y)] = E[g(X)] E[h(Y)].
15 | (\#eq:ev-product-general)
16 | \end{equation}
17 | ```
18 |
19 | ```{example, name="Xavier and Yolanda Revisited"}
20 | In Lesson \@ref(lotus2d), we calculated $E[XY]$, the expected product of the
21 | numbers of times that Xavier and Yolanda win. There, we used 2D LOTUS.
22 | Now, let's repeat the calculation using Theorem \@ref(thm:ev-product).
23 |
24 | You might be tempted to multiply $E[X]$ and $E[Y]$. However, this is wrong
25 | because $X$ and $Y$ are not independent. Every time Xavier wins, Yolanda
26 | also wins. So we cannot apply Theorem \@ref(thm:ev-product) directly.
27 |
28 | We can express the number of times Yolanda wins as:
29 | \[ Y = X + Z, \]
30 | where $Z$ is the number of wins in the last two spins of the roulette wheel.
31 | Now, $X$ and $Z$ are independent. Furthermore, we know that
32 | $X$ is $\text{Binomial}(n=3, N_1=18, N_0=20)$, and $Z$ is
33 | $\text{Binomial}(n=2, N_1=18, N_0=20)$.
34 |
35 | Therefore,
36 | \begin{align*}
37 | E[XY] &= E[X(X + Z)] & \text{(by the representation above)} \\
38 | &= E[X^2 + XZ] & \text{(expand expression inside expected value)} \\
39 | &= E[X^2] + E[XZ] & \text{(linearity of expectation)} \\
40 | &= E[X^2] + E[X]E[Z] & \text{(by independence of $X$ and $Z$ and \ref{eq:ev-product})}
41 | \end{align*}
42 |
43 | Now, $X$ and $Z$ are binomial, so there is a simple formula for their expected value:
44 | \begin{align*}
45 | E[X] &= n\frac{N_1}{N} = 3\frac{18}{38} \\
46 | E[Z] &= n\frac{N_1}{N} = 2\frac{18}{38}.
47 | \end{align*}
48 | The only non-trivial part is calculating $E[X^2]$. However, we showed in
49 | Examples \@ref(exm:binomial-lotus) and \@ref(exm:binomial-lotus-2) that for a binomial random variable $X$,
50 | \[ E[X(X-1)] = n(n-1) \frac{N_1^2}{N^2}. \]
51 | Since $E[X(X-1)] = E[X^2 - X] = E[X^2] - E[X]$, we can solve for $E[X^2]$:
52 | \begin{align*}
53 | E[X^2] &= E[X(X-1)] + E[X] \\
54 | &= n(n-1) \frac{N_1^2}{N^2} + n\frac{N_1}{N}
55 | \end{align*}
56 | For the number of bets that Xavier wins,
57 | \[ E[X^2] = 3(2)\frac{18^2}{38^2} + 3\frac{18}{38}. \]
58 |
59 | Putting it all together, we get
60 | \[ E[XY] = 3(2)\frac{18^2}{38^2} + 3\frac{18}{38} + \left( 3\frac{18}{38} \right) \left( 2\frac{18}{38} \right) \approx 4.11, \]
61 | which matches the answer from Lesson \@ref(lotus2d).
62 | ```
63 |
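As a sanity check, we can also estimate $E[XY]$ by simulation. The sketch below encodes the setup as five independent spins that each win with probability $18/38$, with Xavier betting on the first three spins and Yolanda on all five (the seed and number of simulations are arbitrary choices).

```{r}
# Simulation check of E[XY] for Xavier and Yolanda (a sketch; settings are arbitrary)
set.seed(1)
n_sims <- 100000
wins <- matrix(rbinom(5 * n_sims, size = 1, prob = 18/38), ncol = 5)
x <- rowSums(wins[, 1:3])  # Xavier's wins (first 3 spins)
y <- rowSums(wins)         # Yolanda's wins (all 5 spins)
mean(x * y)                # should be close to 4.11
```
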
64 | ## Essential Practice {-}
65 |
66 |
67 | 1. Consider the following three scenarios:
68 |
69 | - A fair coin is tossed 3 times. $X$ is the number of heads and $Y$ is the number of tails.
70 | - A fair coin is tossed 4 times. $X$ is the number of heads in the first 3 tosses, $Y$ is the number of heads in the last 3 tosses.
71 | - A fair coin is tossed 6 times. $X$ is the number of heads in the first 3 tosses, $Y$ is the number of heads in the last 3 tosses.
72 |
73 | In Lesson \@ref(lotus2d), you showed that $E[X + Y]$ was the same for all three scenarios, but
74 | $E[XY]$ was different. In light of Theorems \@ref(thm:linearity) and \@ref(thm:ev-product), explain why
75 | this makes sense.
76 |
77 | 2. Two fair dice are rolled. Let $X$ be the outcome of the first die. Let $Y$ be the outcome of the
78 | second die. Calculate the expected ratio between the numbers on the two dice, $\displaystyle E[X / Y]$.
79 | (You can use Theorem \@ref(thm:ev-product), since $X$ and $Y$ are independent. However,
80 | be careful because $E[X / Y] \neq E[X] / E[Y]$.)
81 |
82 | What is $E[Y / X]$? Why does this seem paradoxical?
83 |
84 | 3. At Diablo Canyon nuclear plant, radioactive particles hit a Geiger counter according to a Poisson process
85 | with a rate of 3.5 particles per second. Let $X$ be the number of particles detected in the first 2 seconds.
86 | Let $Y$ be the number of particles detected in the second after that (i.e., the 3rd second). Find $E[XY]$.
87 |
--------------------------------------------------------------------------------
/factorial.Rmd:
--------------------------------------------------------------------------------
1 | # The Factorial {#factorial}
2 |
3 | ## Motivating Example {-}
4 |
5 | How many ways are there to arrange a deck of 52 cards?
6 |
7 | ## Theory {-}
8 |
9 |
10 |
11 | We can use the multiplication rule to determine the number of ways to arrange the deck.
12 | The first card can be any one of 52 cards. No matter which one it is,
13 | the second card can be any one of the remaining 51 cards. So there are $52 \cdot 51$ ways to
14 | choose the first 2 cards. For every one of these $52 \cdot 51$ ways, there are $50$ remaining cards
15 | to choose as the third card, which makes $52 \cdot 51 \cdot 50$ ways to choose the first 3 cards.
16 | And so on. By the time we get to the last card in the deck, there is only $1$ card left. So there are
17 | \[ 52 \cdot 51 \cdot 50 \cdot \ldots \cdot 2 \cdot 1 \]
18 | ways to arrange the 52 cards in a deck.
19 |
20 | This is such an important quantity in probability and counting that it has been given a special name.
21 | ```{definition, factorial, name="Factorial"}
22 | The quantity $n!$ (pronounced: "n factorial") is defined as
23 | \[ n! = n \cdot (n-1) \cdot \ldots \cdot 1. \]
24 | It represents the number of ways to arrange $n$ objects.
25 | ```
26 |
27 | So the number of ways to arrange a deck of cards can be expressed as $52!$. Using
28 | [Wolfram Alpha](https://www.wolframalpha.com/input/?i=52%21), we can calculate this to be
29 | about $8 \times 10^{67}$, an astronomical number. In particular, it is greater than:
30 |
31 | - the number of seconds since the universe began.
32 | - the number of atoms on Earth.
33 |
34 | So the next time you hold a shuffled deck of cards in your hands, spend a moment appreciating the fact
35 | that you are holding an arrangement that has likely never before existed in the history of
36 | the universe.
37 |
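You can also check this number yourself in R, which computes $52!$ as a floating-point number:

```{r}
factorial(52)  # about 8.07 x 10^67 (stored as a double, so the value is approximate)
```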
38 |
39 | ## Examples {-}
40 |
41 | 1. Each year, as part of a "Secret Santa" tradition, a group of 4 friends write their names
42 | on slips of paper and place the slips into a hat. Each member of the group draws a name at random
43 | from the hat and must buy a gift for that person. Of course, it is possible that they
44 | draw their own name, in which case they buy a gift for themselves. What is the probability
45 | that everyone in the group ends up buying a gift for themselves?
46 | (Note that the names are not placed back in the hat once they are drawn, so each
47 | person receives exactly one gift.)
48 | 2. A deck of 52 cards is shuffled thoroughly. What is the probability that the four aces are
49 | all next to each other? (Hint: First, count the number of positions that the block of four
50 | aces can go, then multiply this by the number of ways of ordering the four aces.)
51 | 3. If a five-letter word is formed at random (meaning that all sequences of five letters are
52 | equally likely), what is the probability that no letter occurs more than once?
53 | 4. The "bootstrap" is a statistical method for generating a new data set that is like an existing one.
54 | Suppose we have a data set consisting of $6$ observations: $x_1, x_2, x_3, x_4, x_5, x_6$. To generate
55 | a "bootstrap" data set, we sample from the original data set _with replacement_, meaning that it is
56 | possible for each observation to be sampled more than once. Examples of bootstrap data sets include:
57 | \begin{align*}
58 | x_4, x_2, x_4, x_3, x_2, x_4 \\
59 | x_3, x_1, x_6, x_1, x_1, x_2 \\
60 | x_2, x_1, x_4, x_3, x_6, x_5
61 | \end{align*}
62 | Notice that in the last example, each observation appears exactly once.
63 | What is the probability that the bootstrap data set contains each observation exactly once?
64 |
65 | ## Additional Exercises {-}
66 |
67 |
68 |
--------------------------------------------------------------------------------
/fourier-table.Rmd:
--------------------------------------------------------------------------------
1 | # Fourier Tables {#fourier-table}
2 |
3 | ## Continuous-Time Fourier Transforms {#ctft}
4 |
5 | \[ G(f) = \int_{-\infty}^\infty g(t) e^{-j2\pi f t}\,dt,\ \ -\infty < f < \infty \]
6 |
7 | | Time-Domain $g(t)$ | Frequency-Domain $G(f) = \mathscr{F}[g(t)]$ |
8 | |:---|:---|
9 | $1$ | $\delta(f)$ |
10 | $u(t) \overset{\text{def}}{=} \begin{cases} 1 & t \geq 0 \\ 0 & t < 0 \end{cases}$ | $\displaystyle\frac{1}{2}\delta(f) + \frac{1}{j2\pi f}$ |
11 | $\cos(2\pi f_0 t)$ | $\frac{1}{2}(\delta(f - f_0) + \delta(f + f_0))$ |
12 | $\sin(2\pi f_0 t)$ | $\frac{1}{2j}(\delta(f - f_0) - \delta(f + f_0))$ |
13 | $e^{-t}u(t)$ | $\displaystyle\frac{1}{1 + j2\pi f}$ |
14 | $e^{-\lvert t \rvert}$ | $\displaystyle\frac{2}{1 + (2\pi f)^2}$ |
15 | $e^{-\pi t^2}$ | $e^{-\pi f^2}$ |
16 | $\text{rect}(t) \overset{\text{def}}{=} \begin{cases} 1 & \lvert t \rvert \leq 0.5 \\ 0 & \lvert t \rvert > 0.5 \end{cases}$ | $\displaystyle\text{sinc}(f) \overset{\text{def}}{=} \frac{\sin(\pi f)}{\pi f}$ |
17 | $\text{tri}(t) \overset{\text{def}}{=} \begin{cases} 1 - \lvert t \rvert & \lvert t \rvert \leq 1 \\ 0 & \lvert t \rvert > 1 \end{cases}$ | $\text{sinc}^2(f)$ |
18 |
19 |
20 | ## Discrete-Time Fourier Transforms {#dtft}
21 |
22 | Note that $f$ here denotes _normalized frequency_ (cycles/sample).
23 | \[ G(f) = \sum_{n=-\infty}^\infty g[n] e^{-j2\pi f n},\ \ -0.5 < f < 0.5 \]
24 |
25 | | Time-Domain $g[n]$ | Frequency-Domain $G(f) = \mathscr{F}[g[n]]$ | Restrictions |
26 | |:--|:---|:--|
27 | | $1$ | $\delta(f)$ |
28 | | $\delta[n]$ | $1$ |
29 | | $\cos(2\pi f_0 n)$ | $\frac{1}{2}(\delta(f - f_0) + \delta(f + f_0))$ | $-0.5 < f_0 < 0.5$ |
30 | | $\sin(2\pi f_0 n)$ | $\frac{1}{2j}(\delta(f - f_0) - \delta(f + f_0))$ | $-0.5 < f_0 < 0.5$ |
31 | | $\alpha^{\lvert n \rvert}$ | $\displaystyle\frac{1 - \alpha^{2}}{1 + \alpha^{2} - 2\alpha \cos(2\pi f)}$ | $\lvert \alpha \rvert < 1$ |
32 | | $\alpha^{n} u[n]$ | $\displaystyle\frac{1}{1 - \alpha e^{-j2\pi f}}$ | $\lvert \alpha \rvert < 1$ |
33 |
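As a quick numerical check of one of these pairs, the sketch below sums the series in the definition directly for $g[n] = \alpha^n u[n]$, using $\alpha = 0.5$ and $f = 0.2$ (both arbitrary choices), and compares the result to the closed form in the table.

```{r}
# Check the pair alpha^n u[n] <-> 1 / (1 - alpha e^{-j 2 pi f}) at one frequency (a sketch)
alpha <- 0.5
f <- 0.2
n <- 0:200  # truncating the sum is fine because |alpha| < 1
sum(alpha^n * exp(-2i * pi * f * n))  # partial sum of the DTFT definition
1 / (1 - alpha * exp(-2i * pi * f))   # closed form from the table
```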
34 |
35 | ## Fourier Properties {#fourier-properties}
36 |
37 | Suppose $g$, $g_1$, and $g_2$ are time-domain signals with Fourier transforms $G$, $G_1$, and $G_2$, respectively.
38 |
39 | Property | When It Applies | Time-Domain | | Frequency-Domain |
40 | |:---|:---:|----:|:-:|:----|
41 | Linearity | continuous-time, discrete-time | $a g_1(t) + b g_2(t)$ | $\overset{\mathscr{F}}{\longleftrightarrow}$ | $a G_1(f) + b G_2(f)$ |
42 | Scaling | continuous-time only | $g(at)$ | $\overset{\mathscr{F}}{\longleftrightarrow}$ | $\frac{1}{\lvert a \rvert} G\left(\frac{f}{a}\right)$ |
43 | Shifting | continuous-time, discrete-time | $g(t + b)$ | $\overset{\mathscr{F}}{\longleftrightarrow}$ | $G(f) e^{j2\pi b f}$ |
44 | Convolution | continuous-time, discrete-time | $(g_1 * g_2)(t)$ | $\overset{\mathscr{F}}{\longleftrightarrow}$ | $G_1(f) \cdot G_2(f)$ |
45 | Reversal | continuous-time, discrete-time | $g(-t)$ | $\overset{\mathscr{F}}{\longleftrightarrow}$ | $G(-f)$ |
46 | DC Offset | continuous-time, discrete-time | $\int_{-\infty}^\infty g(t)\,dt$ | $\overset{\mathscr{F}}{\longleftrightarrow}$ | $G(0)$ |
47 |
--------------------------------------------------------------------------------
/fourier.Rmd:
--------------------------------------------------------------------------------
1 | # Fourier Transforms {#fourier}
2 |
3 | ## Continuous-Time Fourier Transforms {-}
4 |
5 | The Fourier transform is an alternative representation of a signal. It describes the frequency content
6 | of the signal. It is defined as
7 | \begin{equation}
8 | G(f) \overset{\text{def}}{=} \int_{-\infty}^\infty g(t)e^{-j2\pi f t}\,dt.
9 | (\#eq:fourier)
10 | \end{equation}
11 | Notice that it is a function of frequency $f$, rather than time $t$. Notice also that it is complex-valued,
12 | since its definition involves the imaginary number $j \overset{\text{def}}{=} \sqrt{-1}$.
13 |
14 | The following video explains the visual intuition behind the Fourier transform. Note that this video
15 | uses $i$ to denote the imaginary number $\sqrt{-1}$, whereas we use $j$ (which is common in electrical engineering,
16 | to avoid confusion with current).
17 |
18 |
19 |
20 | To calculate the Fourier transform of a signal, we will rarely use \@ref(eq:fourier). Instead, we will
21 | look up the Fourier transform of the signal in a table like Appendix \@ref(ctft) and use properties of the
22 | Fourier transform (as shown in Appendix \@ref(fourier-properties)).
23 |
24 | ```{example}
25 | Calculate the Fourier transform of $g(t) = 80e^{-20t} u(t)$.
26 | ```
27 |
28 | ```{solution}
29 | This signal is most similar to $e^{-t} u(t)$ in Appendix \@ref(ctft), whose Fourier transform is
30 | $\frac{1}{1 + j2\pi f}$. The only differences are:
31 |
32 | - Our signal has an extra factor of 80 in front. This factor comes along for the ride by linearity of the Fourier transform.
33 | - Our signal is time-scaled by a factor of 20, so we have to apply the scaling property.
34 | Since we _multiplied_ by 20 in the time domain, we have to _divide_ by 20 in the
35 | frequency domain, on both the _inside_ and the _outside_.
36 |
37 | Putting everything together, we see that the Fourier transform is:
38 | \[ G(f) = 80 \frac{1}{20} \frac{1}{1 + j 2\pi \frac{f}{20}}. \]
39 |
40 | This is a complex-valued function. That is, at each frequency $f$, $G(f)$ is a complex number, with a
41 | real and an imaginary component. To make it easier to visualize, we calculate its magnitude
42 | using Theorem \@ref(thm:calculating-magnitude).
43 | \begin{align*}
44 | |G(f)| = \sqrt{|G(f)|^2} &= \sqrt{G(f) \cdot G^*(f)} \\
45 | &= \sqrt{\frac{4}{1 + j2\pi \frac{f}{20}} \cdot \frac{4}{1 - j2\pi \frac{f}{20}}} \\
46 | &= \sqrt{\frac{16}{1 + (2\pi \frac{f}{20})^2}} = \frac{4}{\sqrt{1 + (2\pi \frac{f}{20})^2}}
47 | \end{align*}
48 |
49 | Now, the _magnitude_ of the Fourier transform is a function we can easily visualize.
50 | From the graph, we see that the signal has more low-frequency content than high-frequency content.
51 | ```
52 |
53 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
54 | fs <- seq(-50, 50, by=.1)
55 | plot(fs, sqrt(16 / (1 + (2*pi*fs/20)^2)), col="orange", type="l", lwd=2,
56 | xaxt="n", yaxt="n", bty="n",
57 | xlab="Frequency (f)", ylab="Fourier Transform Magnitude |G(f)|")
58 | axis(1, pos=0)
59 | axis(2, pos=0, at=0:4)
60 | ```
61 |
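If you want to double-check this answer, you can also evaluate the defining integral \@ref(eq:fourier) numerically at a single frequency and compare it to the closed form above. The sketch below does this at $f = 3$ Hz (an arbitrary choice), integrating the real and imaginary parts separately.

```{r}
# Numerically evaluate G(f) for g(t) = 80 exp(-20 t) u(t) at f = 3 Hz (a sketch)
f0 <- 3
re <- integrate(function(t)  80 * exp(-20 * t) * cos(2 * pi * f0 * t), 0, Inf)$value
im <- integrate(function(t) -80 * exp(-20 * t) * sin(2 * pi * f0 * t), 0, Inf)$value
complex(real = re, imaginary = im)  # numerical value of the integral
4 / (1 + 1i * 2 * pi * f0 / 20)     # closed form from the example
```
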
62 | ## Discrete-Time Fourier Transforms {-}
63 |
64 | Discrete-time signals are obtained by sampling a continuous-time signal
65 | at regular time intervals. The sampling rate $f_s$ (in Hz) specifies the
66 | number of samples per second. The signal below is sampled at a rate of
67 | $f_s = 16$ Hz.
68 |
69 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
70 | ts <- seq(0, 1, .001)
71 | plot(ts, sin(2 * pi * 4 * ts) + sin(2 * pi * 6 * ts), type="l",
72 | xlab="Time (seconds)", ylab="Amplitude")
73 |
74 | ts <- seq(0, 1, length.out=17)
75 | points(ts, sin(2 * pi * 4 * ts) + sin(2 * pi * 6 * ts), pch=19)
76 | ```
77 |
78 | We write $x[n] = x(t_n)$ for the $n$th time sample, where $t_n = \frac{n}{f_s}$.
79 |
80 | It's possible to choose a sampling rate $f_s$ that is too low. In the graph below,
81 | the underlying continuous-time signal (in black) is a sinusoid with a frequency of 14 Hz.
82 | However, because the discrete-time signal (in red) was sampled at a rate of 16 Hz, the
83 | sinusoid appears to have a frequency of 2 Hz.
84 |
85 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
86 | ts <- seq(0, 1, .001)
87 | plot(ts, sin(2 * pi * 14 * ts), type="l", col="black",
88 | xlab="Time (seconds)", ylab="Amplitude")
89 | lines(ts, -sin(2 * pi * 2 * ts), col="red")
90 |
91 | ts <- seq(0, 1, length.out=17)
92 | points(ts, sin(2 * pi * 14 * ts), pch=19, col="red")
93 | ```
94 |
95 | We say that the higher frequency is **aliased** by the lower one.
96 |
97 | At a sampling rate of $f_s$, any frequencies outside the range
98 | \[ (-f_s / 2, f_s / 2) \]
99 | will be aliased by a frequency inside this range. This "maximum frequency"
100 | of $f_s / 2$ is known as the **Nyquist limit**.
101 |
102 | Therefore, the Fourier transform of a discrete-time signal is effectively only
103 | defined for frequencies from $-f_s/2$ to $f_s/2$. Contrast this with the Fourier transform of a
104 | continuous-time signal, which is defined on all frequencies from $-\infty$ to $\infty$.
105 |
106 | Aliasing is not just a theoretical problem. For example, if helicopter blades are spinning
107 | too fast for the frame rate of a video, then they can look like they are not spinning at all!
108 |
109 |
110 |
111 | When calculating the Fourier transform of a discrete-time signal, we typically work with
112 | _normalized_ frequencies. That is, the frequencies are in units of "cycles per _sample_" instead of
113 | "cycles per _second_". (Another way to think about this is that we assume all discrete-time signals are
114 | sampled at 1 Hz.) As a result, the Nyquist limit is $1/2$, so the **Discrete-Time Fourier Transform**
115 | is defined only for $|f| < 0.5$.
116 |
117 | \begin{align}
118 | G(f) &\overset{\text{def}}{=} \sum_{n=-\infty}^\infty g[n] e^{-j2\pi f n}, & -0.5 < f < 0.5
119 | (\#eq:discrete-fourier)
120 | \end{align}
121 |
122 | To calculate the Fourier transform of a signal, we will rarely use \@ref(eq:discrete-fourier). Instead, we will
123 | look up the Fourier transform of the signal in a table like Appendix \@ref(dtft) and use properties of the
124 | Fourier transform (as shown in Appendix \@ref(fourier-properties)).
125 |
126 |
127 | ## Essential Practice {-}
128 |
129 | 1. Calculate the Fourier transform $G(f)$ of the continuous-time signal
130 | \[ g(t) = \begin{cases} 1/3 & 0 < t < 3 \\ 0 & \text{otherwise} \end{cases}. \]
131 | Graph its magnitude $|G(f)|$ as a function of frequency.
132 |
133 | 2. Calculate the Fourier transform $G(f)$ of the discrete-time signal
134 | \[ g[n] = \delta[n] + 0.4 \delta[n-1] + 0.4 \delta[n+1]. \]
135 | Graph $G(f)$. (The Fourier transform should be a real-valued function, so you do not need to take its magnitude.)
136 |
137 | 3. Which of the following terms fill in the blank? The Fourier transform $G(f)$ of a real-valued time signal $g(t)$
138 | is necessarily ....
139 |
140 | a. real-valued: $G(f) \in \mathbb{R}$
141 | b. positive-valued: $G(f) > 0$
142 | c. symmetric: $G(-f) = G(f)$
143 | d. conjugate symmetric: $G(-f) = G^*(f)$
144 |
145 | (_Hint:_ Write out $G(f)$ and $G(-f)$ according to the definition \@ref(eq:fourier), and
146 | use Euler's identity (Theorem \@ref(thm:euler)). Simplify as much as possible, using the facts that
147 | $\cos(-\theta) = \cos(\theta)$ and $\sin(-\theta) = -\sin(\theta)$.)
148 |
149 | 4. Which of the following terms fill in the blank? The Fourier transform $G(f)$ of a real-valued and _symmetric_
150 | time signal $g(t)$ is necessarily ....
151 |
152 | a. real-valued: $G(f) \in \mathbb{R}$
153 | b. positive-valued: $G(f) > 0$
154 | c. symmetric: $G(-f) = G(f)$
155 | d. conjugate symmetric: $G(-f) = G^*(f)$
156 |
157 |
158 |
--------------------------------------------------------------------------------
/independence.Rmd:
--------------------------------------------------------------------------------
1 | # Independence {#independence}
2 |
3 | ## Motivating Example {-}
4 |
5 | Many gamblers believe that after a string of losses, they are "due"
6 | for a win. Consider a gambler who repeatedly bets on reds in
7 | roulette, an event with probability $18 / 38 \approx 0.474$. The ball has
8 | not landed in a red pocket on any of the last 4 spins of the wheel.
9 | Does this make it more likely that he will win on the next spin?
10 |
11 | This is really a conditional probability question in disguise.
12 | We want to know the probability that the ball will land in a
13 | red pocket on the 5th spin, _given_ that it did not land in a
14 | red pocket on any of the first 4 spins:
15 | $$ P(\text{red on 5th spin}\ |\ \text{not red on first 4 spins}). $$
16 |
17 | ## Theory {-}
18 |
19 | To verify that the two probabilities are _exactly_ the same, we
20 | do the calculation:
21 |
22 | \begin{align*}
23 | P(\text{red on 5th spin}\ |\ \text{not red on first 4 spins}) &= \frac{P(\text{not red on first 4 spins} \textbf{ and } \text{red on 5th spin}) }{P(\text{not red on first 4 spins})} \\
24 | &= \frac{ 20^4 \cdot 18 \big/ 38^5 }{20^4 \big/ 38^4} \\
25 | &= \frac{18}{38}.
26 | \end{align*}
27 |
28 | We see that the conditional probability is $18 / 38$, which is the same as the probability that the 5th
29 | spin is red if we did not know the outcome of the first 4 spins. In mathematical notation,
30 | \begin{align*}
31 | P(\text{red on 5th spin}\ |\ \text{not red on first 4 spins}) &= P(\text{red on 5th spin})
32 | \end{align*}
33 | When conditioning on one event (e.g., "not red on first 4 spins") does not change the probability
34 | of another event (e.g., "red on 5th spin"), we say that the two
35 | events are **independent**. In this case, whether the gambler wins on the 5th spin is
36 | independent of the fact that he has lost each of the last 4 spins. The folk belief that one is
37 | "due" for a win after a series of losses is plain wrong and is known as the **gambler's fallacy**.
38 |
39 | ```{definition independence, name="Independence"}
40 | Two events $A$ and $B$ are said to be **independent** if
41 | \begin{equation}
42 | P(B | A) = P(B).
43 | (\#eq:independence)
44 | \end{equation}
45 | ```
46 |
47 | The next result follows by applying the Multiplication Rule (\@ref(thm:multiplication-rule)) to
48 | the definition of independence.
49 | ```{theorem independence-mult}
50 | Two events $A$ and $B$ are independent if and only if their probabilities multiply:
51 | \begin{equation}
52 | P(A \textbf{ and } B) = P(A) P(B).
53 | (\#eq:independence-mult)
54 | \end{equation}
55 | ```
56 | ```{proof}
57 | First, we assume \@ref(eq:independence) and show that \@ref(eq:independence-mult) holds:
58 | \begin{align*}
59 | P(A \textbf{ and } B) &= P(A) P(B | A) & \text{by the Multiplication Rule} \\
60 | &= P(A) P(B) & \text{by assumption}.
61 | \end{align*}
62 |
63 | Conversely, we assume \@ref(eq:independence-mult) and show that \@ref(eq:independence) holds:
64 | \begin{align*}
65 | P(B | A) &= \frac{P(A \textbf{ and } B)}{P(A)} & \text{by the definition of conditional probability} \\
66 | &= \frac{P(A) P(B)}{P(A)} & \text{by assumption} \\
67 | &= P(B).
68 | \end{align*}
69 | ```
70 |
71 | Here is an example that combines several concepts from the past few lessons.
72 |
73 |
74 |
75 | ```{example}
76 | You and a fellow castaway are stranded on a desert island, playing dice for the last banana. Two dice will be rolled.
77 | If the biggest number is 1, 2, 3, or 4, then Player 1 wins. If the biggest number is 5 or 6, then Player 2 wins.
78 | Would you rather be Player 1 or Player 2?
79 | ```
80 | ```{solution}
81 | Player 2 has a $20/36 = 0.5556$ chance of winning. Here's why:
82 | \begin{align*}
83 | P(\text{biggest number is 5, 6}) &= 1 - P(\text{biggest number is 1, 2, 3, 4}) \\
84 | &= 1 - P(\text{1st die is 1, 2, 3, 4} \textbf{ and } \text{2nd die is 1, 2, 3, 4}) \\
85 | &= 1 - \frac{4}{6} \cdot \frac{4}{6}\ \ \text{(by independence)} \\
86 | &= 1 - \frac{16}{36} \\
87 | &= \frac{20}{36}
88 | \end{align*}
89 | ```
90 |
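A quick simulation agrees with this solution (the seed and number of simulated games are arbitrary choices):

```{r}
# Simulate the dice game: Player 2 wins when the biggest number is 5 or 6 (a sketch)
set.seed(1)
n_sims <- 100000
die1 <- sample(1:6, n_sims, replace = TRUE)
die2 <- sample(1:6, n_sims, replace = TRUE)
mean(pmax(die1, die2) >= 5)  # should be close to 20/36, about 0.556
```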
91 |
92 | ## Examples {-}
93 |
94 | 1. One card is dealt off the top of a well-shuffled deck of cards. Is the event that the card is a
95 | heart independent of the event that the card is an ace?
96 | 2. Two cards are dealt off the top of a well-shuffled deck of cards. Is the event that the
97 | first card is a heart independent of the event that the second card is a heart?
98 | 3. In the dice game Yahtzee, five dice are rolled. The outcomes of the five dice are independent.
99 | What is the probability of rolling a "Yahtzee" (i.e., when all five dice show the same number)?
100 |
101 | ## Additional Exercises {-}
102 |
--------------------------------------------------------------------------------
/index.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Introduction to Probability"
3 | author: "Dennis Sun"
4 | date: "`r Sys.Date()`"
5 | site: bookdown::bookdown_site
6 | output: bookdown::gitbook
7 | documentclass: book
8 | bibliography: [book.bib, packages.bib]
9 | biblio-style: apalike
10 | link-citations: yes
11 | github-repo: dlsun/probability
12 | description: "Introduction to probability textbook."
13 | ---
14 |
15 | # Preface {-}
16 |
17 |
--------------------------------------------------------------------------------
/law-of-large-numbers.Rmd:
--------------------------------------------------------------------------------
1 | # Law of Large Numbers {#lln}
2 |
3 | ## Motivating Example {-}
4 |
5 | So far, we have seen that casino games like roulette have
6 | negative expected value (for the player). But we have also seen that
7 | there is a lot of variance. How can casinos be sure that they
8 | will not lose all of their money to players in a stroke of bad luck?
9 |
10 | The answer lies in the Law of Large Numbers. In the long run, as a
11 | player makes more and more bets with negative expected value, it is a
12 | virtual certainty that the player will lose money.
13 |
14 |
15 | ## Theory {-}
16 |
17 | Suppose we make the same bet repeatedly. Let $X_i$ be our
18 | net winnings from bet $i$. Then $X_1, X_2, X_3, \ldots$ are independent,
19 | and furthermore, they are identically distributed (since the bets are the same).
20 | We say that $X_1, X_2, X_3, \ldots$ are **i.i.d.** (which is short for
21 | "independent and identically distributed").
22 |
23 | ```{theorem lln, name="Law of Large Numbers"}
24 | Let $X_1, X_2, X_3, \ldots$ be i.i.d. random variables.
25 | Then, the average of the random variables approaches the expected value:
26 | \begin{equation}
27 | \frac{X_1 + X_2 + \ldots + X_n}{n} \to E[X_1]
28 | (\#eq:lln)
29 | \end{equation}
30 | as $n\to\infty$.
31 | ```
32 |
33 | ```{proof}
34 | Here is a heuristic proof. We calculate the expected value and the variance of
35 | \[ \bar X_n \overset{\text{def}}{=} \frac{X_1 + X_2 + \ldots + X_n}{n}. \]
36 |
37 | The expected value is:
38 | \begin{align*}
39 | E[\bar X_n] &= E\left[ \frac{X_1 + X_2 + \ldots + X_n}{n} \right] \\
40 | &= \frac{1}{n} E[X_1 + X_2 + \ldots + X_n] \\
41 | &= \frac{1}{n} (E[X_1] + E[X_2] + \ldots + E[X_n]) \\
42 | &= \frac{1}{n} nE[X_1] \\
43 | &= E[X_1].
44 | \end{align*}
45 |
46 | The variance is:
47 | \begin{align*}
48 | \text{Var}[\bar X_n] &= \text{Var}\left[ \frac{X_1 + X_2 + \ldots + X_n}{n} \right] \\
49 | &= \frac{1}{n^2} \text{Var}\left[ X_1 + X_2 + \ldots + X_n \right] \\
50 | &= \frac{1}{n^2} (\text{Var}[X_1] + \text{Var}[X_2] + \ldots + \text{Var}[X_n]) \\
51 | &= \frac{1}{n^2} n \text{Var}[X_1] \\
52 | &= \frac{\text{Var}[X_1]}{n}.
53 | \end{align*}
54 |
55 | So we see that as $n\to\infty$, the expected value is fixed at $E[X_1]$, and the
56 | variance approaches $0$. A random variable with zero variance is a constant. Therefore,
57 | in the limit, the random variable $\bar X_n$ approaches the constant $E[X_1]$.
58 | ```
59 |
60 | To appreciate the Law of Large Numbers, consider betting on reds in roulette, a
61 | bet with an expected value of -$0.05, represented by a red dashed line in the figure below.
62 | There are many possible ways it could turn out; three sample paths are shown below.
63 |
64 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
65 | set.seed(4)
66 |
67 | n <- 1000
68 | i <- 19
69 | plot(NULL, xaxt="n", yaxt="n",
70 | xlim=c(0,1.05 * n), ylim=c(-1, 1),
71 | xlab="", ylab="", bty="n")
72 | axis(1, pos=0, at=seq(0, n, 100), labels=c("", seq(100, n, 100)))
73 | axis(2, pos=0)
74 | mtext(expression("X"[n]), side=2, line=1, at=0)
75 | text(x=1.02 * n, y=0.05, "n")
76 | for(col in c("blue", "orange", "green")) {
77 | x <- cumsum(sample(c(-1, 1), n, prob=c(20/38, 18/38), replace=T)) / 1:n
78 | lines(1:n, x, pch=i, col=col)
79 | i <- i - 1
80 | }
81 |
82 | abline(-2/38, 0, lty=2, lwd=2, col="red")
83 | ```
84 |
85 | There is a lot of uncertainty about how much you will make (on average) in the first
86 | 200 bets, but by the time we have made 1000 bets, there is not much uncertainty at all.
87 | We are virtually guaranteed to lose money in the long run. The game of chance involves very little
88 | chance, especially when you are a large casino, which handles millions of such bets every year.
89 |
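We can also check the variance calculation from the proof numerically. The sketch below simulates many sequences of $n = 1000$ bets on red (each bet coded as $\pm 1$) and compares the variance of the resulting averages to $\text{Var}[X_1]/n$; the seed and the number of sequences are arbitrary choices.

```{r}
# Check that Var[X-bar_n] is close to Var[X_1] / n for n = 1000 bets on red (a sketch)
set.seed(1)
n <- 1000
xbars <- replicate(10000, mean(sample(c(1, -1), n, replace = TRUE, prob = c(18/38, 20/38))))
var(xbars)           # simulated variance of the average of n bets
(1 - (2/38)^2) / n   # Var[X_1] / n, since Var[X_1] = E[X_1^2] - E[X_1]^2 = 1 - (2/38)^2
```
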
90 | ```{example}
91 |
92 | ```
93 |
94 | ## Essential Practice {-}
95 |
96 | These questions make reference to the opening scene from the Tom Stoppard play
97 | _Rosencrantz and Guildenstern are Dead_ (a retelling of Shakespeare's _Hamlet_).
98 | You do not need to watch the scene to answer these questions, but I've included it just for fun.
99 |
100 |
101 |
102 | 1. Rosencrantz and Guildenstern have just learned the Law of Large Numbers.
103 | It turns out that they have a different understanding of what the law says...
104 |
105 | **Guildenstern:** The Law of Large Numbers says that in the long run,
106 | the coin will land heads as often as it lands tails.
107 |
108 | **Rosencrantz:** I don't think that's what it says. The Law of Large Numbers says that
109 | the fraction of heads will get closer and closer to 1/2, which is the expected value of
110 | each toss.
111 |
112 | **Guildenstern:** Isn't that the same as what I said?
113 |
114 | **Rosencrantz:** No, you said, "The coin will land heads as often as it lands tails,"
115 | which implies that the difference between the _number_ of heads and the _number_ of tails
116 | will get smaller as we toss the coin more and more. I don't think the number of heads
117 | will be close to the number of tails.
118 |
119 | **Guildenstern:** If the fraction of heads is close to 1/2, then the
120 | _number_ of heads must be close to the _number_ of tails. How could it be otherwise?
121 |
122 | Who is right: Rosencrantz or Guildenstern? Calculate the variance of
123 | \[ \text{number of heads in $n$ tosses} - \text{number of tails in $n$ tosses} \]
124 | as a function of $n$. In light of this calculation, do you agree with Guildenstern
125 | that the difference between the number of heads and the number of tails
126 | approaches 0 as the number of tosses increases?
127 |
128 | 2. Rosencrantz and Guildenstern are tossing a coin. The coin has landed heads
129 | 85 times in a row. They discuss the probability that the next toss is heads.
130 |
131 | **Rosencrantz:** We are due for tails! We have had so many heads that the next toss
132 | _has_ to be more likely to land tails.
133 |
134 | **Guildenstern:** Not so fast. Didn't we call this the "gambler's fallacy"
135 | in Lesson \@ref(independence)?
136 | The coin tosses are independent, so each toss is still equally likely to land
137 | heads or tails, regardless of past experience.
138 |
139 | **Rosencrantz:** But what about the Law of Large Numbers? It says that
140 | we have to end up with 50\% tails eventually. We are in a deep hole
141 | from the 85 heads in a row. How can we get back to 50\% tails if the coin is not
142 | more likely now to land tails?
143 |
144 | Of course, Guildenstern is right. The coin is not more likely to land
145 | tails. But how would you answer Rosencrantz's question? Why does the Law of Large
146 | Numbers not contradict the gambler's fallacy?
147 |
148 |
--------------------------------------------------------------------------------
/lotus-continuous.Rmd:
--------------------------------------------------------------------------------
1 | # LOTUS for Continuous Random Variables {#lotus-continuous}
2 |
3 | ## Theory {-}
4 |
5 | ```{theorem, name="LOTUS for a Continuous Random Variable"}
6 | Let $X$ be a continuous random variable with p.d.f. $f(x)$. Then, the
7 | expected value of $g(X)$ is
8 | \begin{equation}
9 | E[g(X)] = \int_{-\infty}^\infty g(x) \cdot f(x)\,dx.
10 | (\#eq:lotus-continuous)
11 | \end{equation}
12 | ```
13 | Compare this definition with LOTUS for a discrete random
14 | variable \@ref(eq:lotus). We simply replaced the p.m.f. by the p.d.f. and
15 | the sum by an integral.
16 |
17 | ```{example ev-unif-sq, name="Expected Value of the Square of a Uniform"}
18 | Suppose the current (in Amperes) flowing through a 1-ohm resistor is a $\text{Uniform}(a, b)$
19 | random variable $I$ for $a, b > 0$. The power dissipated by this resistor is
20 | $X = I^2$. What is the expected power dissipated by the resistor?
21 |
22 | There are two ways to calculate this.
23 |
24 | **Method 1 (The Long Way)** We can first derive the p.d.f. of the power, $X = I^2$,
25 | using the methods of Lesson \@ref(transformations).
26 | \begin{align*}
27 | F_X(x) &= P(X \leq x) \\
28 | &= P(I^2 \leq x) \\
29 | &= P(I \leq \sqrt{x}) & (\text{if $x \geq 0$, since $a, b > 0$})\\
30 | &= \begin{cases} 0 & x < a^2 \\ \frac{\sqrt{x} - a}{b - a} & a^2 \leq x \leq b^2 \\ 1 & x > b^2 \end{cases} \\
31 | f_X(x) &= \frac{d}{dx} F_X(x) \\
32 | &= \begin{cases} \frac{1}{2(b-a)\sqrt{x}} & a^2 \leq x \leq b^2 \\ 0 & \text{otherwise} \end{cases}
33 | \end{align*}
34 | Now that we have the p.d.f., we calculate the expected value using the definition \@ref(eq:ev-continuous). Note the
35 | limits of integration:
36 | \[ E[X] = \int_{a^2}^{b^2} x \cdot \frac{1}{2(b-a)\sqrt{x}}\,dx = \frac{b^3 - a^3}{3(b-a)}. \]
37 |
38 | **Method 2 (Using LOTUS)** LOTUS allows us to calculate the expected value of $X$
39 | without working out the p.d.f. of $X$. All we need is the p.d.f. of $I$, which we
40 | already have and is simple.
41 | \[ E[X] = E[I^2] = \int_a^b i^2 \cdot \frac{1}{b-a}\,di = \frac{i^{3}}{3} \frac{1}{b-a} \Big|_a^b = \frac{b^3 - a^3}{3(b-a)}. \]
42 | In fact, we did not use the fact that $a, b > 0$ at all in this calculation, so this formula is
43 | valid for all uniform distributions, not just uniform distributions on positive values.
44 | ```
45 |
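If you would like to check this formula numerically, the sketch below simulates a uniform current with $a = 1$ and $b = 3$ (arbitrary choices) and compares the simulated average power to the formula above.

```{r}
# Simulation check of E[I^2] for I ~ Uniform(a, b), with a = 1 and b = 3 chosen arbitrarily
set.seed(1)
a <- 1
b <- 3
i <- runif(100000, min = a, max = b)
mean(i^2)                    # simulated E[I^2]
(b^3 - a^3) / (3 * (b - a))  # formula from the example
```
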
46 | ```{example ev-expo-sq, name="Expected Value of the Square of an Exponential"}
47 | Continuing with the previous example, suppose that the current $I$ instead follows an $\text{Exponential}(\lambda)$
48 | distribution. What is the expected power dissipated by the resistor?
49 |
50 | Again, we will use LOTUS. This is an unpleasant exercise in integration by parts. We
51 | simply set up the integral and [use software to evaluate the integral](https://www.wolframalpha.com/input/?i=integrate+i%5E2+*+lambda+*+exp%28-lambda+*+i%29+from+i%3D0+to+infinity).
52 | \[ E[I^2] = \int_0^\infty i^2 \cdot \lambda e^{-\lambda i}\,di = \frac{2}{\lambda^2}. \]
53 | ```
54 |
55 | ```{example, name="Expected Value of a Cosine"}
56 | Let $\Theta$ be a random angle, from a $\text{Uniform}(-\pi, \pi)$ distribution. Then, the p.d.f.
57 | of $\Theta$ is
58 | \[ f(\theta) = \begin{cases} \frac{1}{\pi - (-\pi)} & -\pi \leq \theta \leq \pi \\ 0 & \text{otherwise} \end{cases}, \]
59 | and the expected value of the cosine is:
60 | \begin{align*}
61 | E[\cos(\Theta)] &= \int_{-\pi}^\pi \cos(\theta)\cdot \frac{1}{\pi - (-\pi)}\,d\theta \\
62 | &= \frac{1}{2\pi} \sin\theta \Big|_{-\pi}^\pi \\
63 | &= \frac{1}{2\pi} (0 - 0) \\
64 | &= 0.
65 | \end{align*}
66 | ```
67 |
68 | ## Essential Practice {-}
69 |
70 | 1. You inflate a spherical balloon in a single breath. If the volume of air you exhale in a
71 | single breath (in cubic inches) is a $\text{Uniform}(a=36\pi, b=288\pi)$ random variable, what
72 | is the expected radius of the balloon (in inches)?
73 |
74 | (Use LOTUS, but feel free to check
75 | your answer using the p.d.f. you derived in Lesson \@ref(transformations).)
76 |
77 | 2. The distance (in hundreds of miles) driven by a trucker in one day is a continuous
78 | random variable $X$ whose cumulative distribution function (c.d.f.) is given by:
79 | \[ F(x) = \begin{cases} 0 & x < 0 \\ x^3 / 216 & 0 \leq x \leq 6 \\ 1 & x > 6 \end{cases}. \]
80 |
81 | Let the random variable $D$ represent the time (in days) required for the trucker
82 | to make a 1500-mile trip, so $D = 15 / X$. Calculate the expected value of $D$.
83 |
84 |
--------------------------------------------------------------------------------
/lotus2d.Rmd:
--------------------------------------------------------------------------------
1 | # 2D LOTUS {#lotus2d}
2 |
3 |
4 | ## Theory {-}
5 |
6 | In this lesson, we calculate the expected value of a function of
7 | _two_ random variables, $E[g(X, Y)]$. For example,
8 | $E[X + Y]$ and $E[XY]$ are both examples of expected values that
9 | involve more than one random variable.
10 |
11 | ```{theorem lotus2d, name="2D LOTUS"}
12 | Let $X$ and $Y$ be random variables with joint p.m.f. $f(x, y)$. Let $g$ be some function. Then,
13 | \begin{equation}
14 | E[g(X, Y)] = \sum_x \sum_y g(x, y) \cdot f(x, y).
15 | (\#eq:lotus2d)
16 | \end{equation}
17 | ```
18 |
19 | Again, the result is intuitive. Now that there are _two_ random variables, the probabilities are
20 | given by the _joint_ p.m.f. We use these probabilities to weight the possible values of
21 | $g(X, Y)$.
22 |
23 | Let's apply Theorem \@ref(thm:lotus2d) to the Xavier and Yolanda example from
24 | Lesson \@ref(joint-discrete).
25 |
26 | ```{example, name="Xavier and Yolanda Revisited"}
27 | In Lesson \@ref(joint-discrete), we showed that the joint distribution of the
28 | number of times Xavier and Yolanda win is
29 | \[ \begin{array}{rr|cccc}
30 | & 5 & 0 & 0 & 0 & .0238 \\
31 | & 4 & 0 & 0 & .0795 & .0530 \\
32 | y & 3 & 0 & .0883 & .1766 & .0294 \\
33 | & 2 & .0327 & .1963 & .0981 & 0 \\
34 | & 1 & .0727 & .1090 & 0 & 0 \\
35 | & 0 & .0404 & 0 & 0 & 0 \\
36 | \hline
37 | & & 0 & 1 & 2 & 3\\
38 | & & & & x
39 | \end{array}. \]
40 | Let's calculate $E[Y - X]$, the expected number of _additional_ times that Yolanda wins,
41 | compared to Xavier, as well as $E[XY]$, the expected product of the number of times they win.
42 |
43 | Applying 2D LOTUS to the function $g(x, y) = y-x$, we can calculate $E[g(X, Y)] = E[Y - X]$.
44 | In the sum below, we will omit outcomes with probability 0.
45 | \begin{align*}
46 | E[Y - X] &= \sum_x \sum_y (y - x) f(x, y) \\
47 | &\approx (0 - 0) .0404 + (1 - 0) .0727 + (1 - 1) .1090 \\
48 | &\ \ \ + (2 - 0) .0327 + (2 - 1) .1963 + (2 - 2) .0981 \\
49 | &\ \ \ + (3 - 1) .0883 + (3 - 2) .1766 + (3 - 3) .0294 \\
50 | &\ \ \ + (4 - 2) .0795 + (4 - 3) .0530 + (5 - 3) .0238 \\
51 | &= .947.
52 | \end{align*}
53 |
54 | Of course, there is an easier way to calculate this particular expected value. $Y - X$ is just the number of
55 | wins in the last two bets, which follows a $\text{Binomial}(n=2, p=18/38)$ distribution.
56 | From Appendix \@ref(discrete-distributions), we know that the expected value of a binomial is
57 | $np = 2 (18/38) \approx .947$, which matches the answer above.
58 |
59 | We can calculate $E[XY]$ in the same way, by applying 2D LOTUS to the function $g(x, y) = xy$.
60 | \begin{align*}
61 | E[XY] &= \sum_x \sum_y xy f(x, y) \\
62 | &= (0 \cdot 0) .0404 + (1 \cdot 0) .0727 + (1 \cdot 1) .1090 \\
63 | &\ \ \ + (2 \cdot 0) .0327 + (2 \cdot 1) .1963 + (2 \cdot 2) .0981 \\
64 | &\ \ \ + (3 \cdot 1) .0883 + (3 \cdot 2) .1766 + (3 \cdot 3) .0294 \\
65 | &\ \ \ + (4 \cdot 2) .0795 + (4 \cdot 3) .0530 + (5 \cdot 3) .0238 \\
66 | &= 4.11.
67 | \end{align*}
68 | ```
69 |
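Because the double sum in 2D LOTUS is just a sum over the cells of the table, it is easy to carry out in R. The sketch below stores the joint p.m.f. as a matrix (rows indexed by $x$, columns by $y$) and recomputes both expected values; small discrepancies come from the rounding of the table's probabilities.

```{r}
# 2D LOTUS as a double sum over the joint p.m.f. table above (a sketch)
x_vals <- 0:3
y_vals <- 0:5
# f[i, j] = P(X = x_vals[i], Y = y_vals[j]), read off the table
f <- rbind(c(.0404, .0727, .0327,     0,     0,     0),
           c(    0, .1090, .1963, .0883,     0,     0),
           c(    0,     0, .0981, .1766, .0795,     0),
           c(    0,     0,     0, .0294, .0530, .0238))
sum(outer(x_vals, y_vals, function(x, y) y - x) * f)  # E[Y - X], about .947
sum(outer(x_vals, y_vals) * f)                        # E[XY], about 4.11
```
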
70 | ## Essential Practice {-}
71 |
72 | 1. Two tickets are drawn from a box with $N_1$ $\fbox{1}$s and $N_0$ $\fbox{0}$s. Let $X$ be the number of
73 | $\fbox{1}$s on the first draw and $Y$ be the number of $\fbox{1}$s on the second draw. (Note that $X$ and $Y$
74 | can only be 0 or 1.)
75 |
76 | a. Calculate $E[XY]$ when the draws are made with replacement.
77 | b. Calculate $E[XY]$ when the draws are made without replacement.
78 |
79 | (Hint: You worked out the joint p.m.f. of $X$ and $Y$ in Lesson \@ref(joint-discrete). Use it!)
80 |
81 | 2. You roll two fair, six-sided dice. Let $X$ be the number on the first die. Let $Y$ be the number on the
82 | second die. Calculate $E[\max(X, Y)]$, the expected value of the _larger_ of the two numbers. There are
83 | several ways you can do this. You should try to do this by applying 2D LOTUS to the joint distribution of
84 | $X$ and $Y$, which is extremely simple. To check your answer, you can use the p.m.f. of
85 | $L = \max(X, Y)$ that you derived in Lesson \@ref(marginal-discrete).
86 |
87 | 3. Consider the following three scenarios:
88 |
89 | - A fair coin is tossed 3 times. $X$ is the number of heads and $Y$ is the number of tails.
90 | - A fair coin is tossed 4 times. $X$ is the number of heads in the first 3 tosses, $Y$ is the number of heads in the last 3 tosses.
91 | - A fair coin is tossed 6 times. $X$ is the number of heads in the first 3 tosses, $Y$ is the number of heads in the last 3 tosses.
92 |
93 | In all three scenarios, $X$ and $Y$ are both marginally $\text{Binomial}(n=3, p=1/2)$. However, they have
94 | different joint distributions. In Lessons \@ref(joint-discrete) and \@ref(marginal-discrete), you
95 | worked out the joint p.m.f.s.
96 |
97 | Calculate $E[X + Y]$ and $E[XY]$ for each of the scenarios. Does $E[X + Y]$ change, depending on the
98 | joint distribution? What about $E[XY]$?
99 |
--------------------------------------------------------------------------------
/mean-function.Rmd:
--------------------------------------------------------------------------------
1 | # Mean Function {#mean-function}
2 |
3 | In the next few lessons, we discuss three functions that are
4 | commonly used to summarize random processes: the mean function,
5 | the variance function, and the autocovariance function.
6 |
7 | ## Theory {-}
8 |
9 | ```{definition mean-function, name="Mean Function"}
10 | The **mean function** $\mu_X(t)$ of a random process $\{ X(t) \}$
11 | is a function that specifies the expected value at each time $t$.
12 | That is,
13 | \begin{equation}
14 | \mu_X(t) \overset{\text{def}}{=} E[X(t)].
15 | (\#eq:mean-function)
16 | \end{equation}
17 |
18 | For a discrete-time process, we notate the mean function as
19 | \[ \mu_X[n] \overset{\text{def}}{=} E[X[n]]. \]
20 | ```
21 |
22 | The following video explains how to think about a mean function intuitively.
23 |
24 |
25 |
26 | Let's calculate the mean function of some random processes.
27 |
28 | ```{example random-amplitude-mean, name="Random Amplitude Process"}
29 | Consider the random amplitude process
30 | \begin{equation}
31 | X(t) = A\cos(2\pi f t)
32 | (\#eq:random-amplitude)
33 | \end{equation}
34 | introduced in Example \@ref(exm:random-amplitude).
35 |
36 | To be concrete, suppose $A$ is a $\text{Binomial}(n=5, p=0.5)$ random variable
37 | and $f = 1$. To calculate the mean function, observe that the only thing that is
38 | random in \@ref(eq:random-amplitude) is $A$. Everything else is constant,
39 | so it can be pulled outside the expectation.
40 | \begin{align*}
41 | \mu_X(t) = E[X(t)] &= E[A\cos(2\pi ft)] \\
42 | &= E[A] \cos(2\pi ft) \\
43 | &= 2.5 \cos(2\pi ft),
44 | \end{align*}
45 | where in the last step, we used the formula for the expected value of a binomial random variable,
46 | $E[A] = np = 5 \cdot 0.5 = 2.5$.
47 | ```
48 |
49 | Shown below (in blue) are 30 realizations of this process, along with the mean function (in red). Notice
50 | how the mean function passes through the heart of the realizations.
51 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
52 | plot(NA, NA,
53 | xaxt="n", yaxt="n", bty="n",
54 | xlim=c(-2, 3), ylim=c(-5.1, 5.1),
55 | xlab="Time (t)", ylab="X(t)")
56 | axis(1, pos=0)
57 | axis(2, pos=0)
58 | set.seed(1)
59 | ts <- seq(-2, 3, by=0.01)
60 | a <- rbinom(30, 5, 0.5)
61 | for(i in 1:30) {
62 | lines(ts, a[i] * cos(2 * pi * ts), col=rgb(0, 0, 1, 0.2))
63 | }
64 | lines(ts, 2.5 * cos(2 * pi * ts), col="red", lty=2, lwd=2)
65 | ```
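
If you would like to verify the mean function by simulation, here is a minimal sketch. (The time $t = 0.15$ is an arbitrary choice for illustration.)

```{r, eval=FALSE}
# Approximate E[X(t)] at one time point by averaging many simulated amplitudes.
set.seed(1)
a <- rbinom(100000, 5, 0.5)    # many draws of the random amplitude A
t <- 0.15                      # an arbitrary time
mean(a * cos(2 * pi * 1 * t))  # simulated E[X(t)] (here f = 1)
2.5 * cos(2 * pi * 1 * t)      # mean function mu_X(t) = 2.5 cos(2 pi f t)
```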
66 |
67 | ```{example poisson-process-mean, name="Poisson Process"}
68 | Consider the Poisson process
69 | $\{ N(t); t \geq 0 \}$ of rate $\lambda$,
70 | defined in Example \@ref(exm:poisson-process-as-process).
71 |
72 | Recall that $N(t)$ represents the number of arrivals on the
73 | interval $(0, t)$, which we know (by properties of the Poisson process)
74 | follows a $\text{Poisson}(\mu=\lambda t)$ distribution. Since the
75 | expected value of a $\text{Poisson}(\mu)$ random variable is $\mu$, we
76 | must have
77 | \[ \mu_N(t) = E[N(t)] = \lambda t \]
78 | ```
79 |
80 | Shown below (in blue) are 30 realizations of the Poisson process, with
81 | the mean function superimposed (in red).
82 |
83 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
84 | plot(NA, NA,
85 | xaxt="n", yaxt="n", bty="n",
86 | xlim=c(0, 5), ylim=c(0, 8.5),
87 | xlab="Time (t)", ylab="Number of arrivals N(t)")
88 | axis(1, pos=0, at=0:5)
89 | axis(2, pos=0)
90 | set.seed(1)
91 | for(i in 1:30) {
92 | arrivals <- cumsum(rexp(10, 0.8))
93 | ts <- seq(0, 6, by=.01)
94 | N <- c()
95 | for(t in ts) {
96 | N <- c(N, sum(arrivals < t))
97 | }
98 | lines(ts, N, col=rgb(0, 0, 1, 0.2))
99 | }
100 | abline(a=0, b=0.8, col="red", lty=2, lwd=2)
101 | ```
102 |
103 | ```{example white-noise-mean, name="White Noise"}
104 | Consider the white noise process $\{ Z[n] \}$ defined in Example \@ref(exm:white-noise),
105 | which consists of i.i.d. random variables. Suppose the expected value of these
106 | random variables is $\mu \overset{\text{def}}{=} E[Z[n]]$. Then, the mean function
107 | is constant, equal to this expected value.
108 | \[ \mu_Z[n] = E[Z[n]] \equiv \mu. \]
109 | ```
110 |
111 | Shown below (in blue) are 30 realizations of a white noise process consisting of i.i.d.
112 | standard normal random variables, along with the mean function (in red), which in this case is
113 | $\mu_Z[n] \equiv 0$.
114 |
115 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
116 | plot(NA, NA,
117 | xaxt="n", yaxt="n", bty="n",
118 | xlim=c(-2, 3), ylim=c(-3, 3),
119 | xlab="Time Sample (n)", ylab="Z[n]")
120 | axis(1, pos=0, at=-2:3)
121 | axis(2, pos=0)
122 | set.seed(1)
123 | for(i in 1:30) {
124 | ts <- -3:4
125 | Z <- rnorm(8)
126 | points(ts, Z, pch=19, col=rgb(0, 0, 1, 0.2))
127 | lines(ts, Z, col=rgb(0, 0, 1, 0.2))
128 | }
129 | points(-2:3, rep(0, 6), col="red", pch=19)
130 | abline(h=0, col="red", lty=2, lwd=2)
131 | ```
132 |
133 | ```{example random-walk-mean, name="Random Walk"}
134 | Consider the random walk process $\{ X[n]; n\geq 0 \}$ of
135 | Example \@ref(exm:random-walk-process).
136 | Since
137 | \[ X[n] = Z[1] + Z[2] + \ldots + Z[n], \]
138 | the mean function is
139 | \begin{align*}
140 | \mu_X[n] = E[X[n]] &= E[Z[1]] + E[Z[2]] + \ldots + E[Z[n]] \\
141 | &= n E[Z[1]].
142 | \end{align*}
143 | ```
144 |
145 | Shown below (in blue) are 30 realizations of the random walk process where the steps $Z[n]$ are
146 | i.i.d. standard normal, so the mean function (shown in red) is $\mu_X[n] = n \cdot 0 \equiv 0$.
147 |
148 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
149 | plot(NA, NA,
150 | xaxt="n", yaxt="n", bty="n",
151 | xlim=c(0, 5), ylim=c(-5.1, 5.1),
152 | xlab="Time Sample (n)", ylab="X[n]")
153 | axis(1, pos=0)
154 | axis(2, pos=0)
155 | set.seed(1)
156 | for(i in 1:30) {
157 | ts <- 0:5
158 | X <- c(0, cumsum(rnorm(5)))
159 | points(ts, X, pch=19, col=rgb(0, 0, 1, 0.2))
160 | lines(ts, X, col=rgb(0, 0, 1, 0.2))
161 | }
162 | points(0:5, rep(0, 6), col="red", pch=19)
163 | abline(h=0, col="red", lty=2, lwd=2)
164 | ```
165 |
166 | ## Essential Practice {-}
167 |
168 | 1. Consider a grain of pollen suspended in water, whose horizontal position can be modeled by
169 | Brownian motion $\{B(t)\}$ with parameter $\alpha=4\,\text{mm}^2/\text{s}$, as in Example \@ref(exm:pollen).
170 | Calculate the mean function of $\{ B(t) \}$.
171 |
172 | 2. Radioactive particles hit a Geiger counter according to a Poisson process
173 | at a rate of $\lambda=0.8$ particles per second. Let $\{ N(t); t \geq 0 \}$ represent this Poisson process.
174 |
175 | Define the new process $\{ D(t); t \geq 3 \}$ by
176 | \[ D(t) = N(t) - N(t - 3). \]
177 | This process represents the number of particles that hit the Geiger counter in the last 3 seconds.
178 | Calculate the mean function of $\{ D(t); t \geq 3 \}$.
179 |
180 | 3. Consider the moving average process $\{ X[n] \}$ of Example \@ref(exm:ma1), defined by
181 | \[ X[n] = 0.5 Z[n] + 0.5 Z[n-1], \]
182 | where $\{ Z[n] \}$ is a sequence of i.i.d. standard normal random variables.
183 | Calculate the mean function of $\{ X[n] \}$.
184 |
185 | 4. Let $\Theta$ be a $\text{Uniform}(a=-\pi, b=\pi)$ random variable, and let $f$ be a constant.
186 | Define the random phase process $\{ X(t) \}$ by
187 | \[ X(t) = \cos(2\pi f t + \Theta). \]
188 | Calculate the mean function of $\{ X(t) \}$. (Hint: LOTUS)
189 |
--------------------------------------------------------------------------------
/packages.bib:
--------------------------------------------------------------------------------
1 | @Manual{R-base,
2 | title = {R: A Language and Environment for Statistical Computing},
3 | author = {{R Core Team}},
4 | organization = {R Foundation for Statistical Computing},
5 | address = {Vienna, Austria},
6 | year = {2019},
7 | url = {https://www.R-project.org/},
8 | }
9 | @Manual{R-bookdown,
10 | title = {bookdown: Authoring Books and Technical Documents with R Markdown},
11 | author = {Yihui Xie},
12 | year = {2020},
13 | note = {R package version 0.18},
14 | url = {https://CRAN.R-project.org/package=bookdown},
15 | }
16 | @Manual{R-knitr,
17 | title = {knitr: A General-Purpose Package for Dynamic Report Generation in R},
18 | author = {Yihui Xie},
19 | year = {2019},
20 | note = {R package version 1.23},
21 | url = {https://CRAN.R-project.org/package=knitr},
22 | }
23 | @Manual{R-rmarkdown,
24 | title = {rmarkdown: Dynamic Documents for R},
25 | author = {JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and Winston Chang and Richard Iannone},
26 | year = {2020},
27 | note = {R package version 2.1},
28 | url = {https://CRAN.R-project.org/package=rmarkdown},
29 | }
30 |
--------------------------------------------------------------------------------
/power.Rmd:
--------------------------------------------------------------------------------
1 | # Power of a Stationary Process {#power}
2 |
3 | ## Motivation {-}
4 |
5 | This lesson is the first in a series of lessons about the processing
6 | of _random signals_. The two most common kinds of random signals that
7 | are studied in electrical engineering are voltage signals $\{ V(t) \}$
8 | and current signals $\{ I(t) \}$. These signals are often modulated while
9 | the other aspects of the circuit (e.g., the resistance) are held constant.
10 | The (instantaneous) power dissipated by the circuit is then
11 | \[ \text{Power}(t) = I(t)^2 R = \frac{V(t)^2}{R}. \]
12 | In other words, the power is proportional to the _square_ of the signal.
13 |
14 | ## Theory {-}
15 |
16 | For this reason, the (instantaneous) power of a general signal $\{ X(t) \}$ is defined as
17 | \[ \text{Power}_X(t) = X(t)^2. \]
18 | When the signal is random, its power at any given time is also random, so we
19 | report the _expected_ power, rather than the power of any single realization.
20 |
21 | ```{definition power, name="Expected Power"}
22 | The **expected power** of a random process $\{ X(t) \}$ is defined as
23 | \[ E[X(t)^2]. \]
24 |
25 | Notice that the expected power is related to the autocorrelation function
26 | \@ref(eq:autocorrelation) by
27 | \[ E[X(t)^2] = R_X(t, t). \]
28 |
29 | For a stationary process, the autocorrelation function only depends on the
30 | difference between the times, $R_X(\tau)$, so the expected power of a
31 | _stationary_ process is
32 | \[ E[X(t)^2] = R_X(0). \]
33 | ```
34 |
35 | Since most noise signals are stationary, we will only calculate expected
36 | power for stationary signals.
37 |
38 | ```{example white-noise-power, name="White Noise"}
39 | Consider the white noise process $\{ Z[n] \}$ defined in Example \@ref(exm:white-noise),
40 | which consists of i.i.d. random variables with mean $\mu = E[Z[n]]$ and
41 | variance $\sigma^2 \overset{\text{def}}{=} \text{Var}[Z[n]]$.
42 |
43 | This process is stationary, with autocorrelation function
44 | \[ R_Z[k] = \sigma^2\delta[k] + \mu^2, \]
45 | so its expected power is
46 | \[ R_Z[0] = \sigma^2\delta[0] + \mu^2 = \sigma^2 + \mu^2. \]
47 | ```
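
As a quick sanity check, we can estimate the expected power of a white noise process by simulation. The sketch below assumes normal white noise with $\mu = 1$ and $\sigma = 2$, so the expected power should be $\sigma^2 + \mu^2 = 5$.

```{r, eval=FALSE}
# Estimate E[Z[n]^2] for i.i.d. Normal(mean=1, sd=2) white noise.
set.seed(1)
z <- rnorm(100000, mean=1, sd=2)
mean(z^2)   # should be close to 2^2 + 1^2 = 5
```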
48 |
49 | ```{example gaussian-process-power}
50 | Consider the process $\{X(t)\}$ from Example \@ref(exm:gaussian-process).
51 | This process is stationary, with autocorrelation function
52 | \[ R_X(\tau) = 5 e^{-3\tau^2} + 4, \]
53 | so its expected power is
54 | \[ R_X(0) = 5 e^{-3 \cdot 0^2} + 4 = 9. \]
55 | ```
56 |
57 |
58 | ## Essential Practice {-}
59 |
60 | For these questions, you may want to refer to the autocorrelation function
61 | you calculated in Lesson \@ref(autocorrelation).
62 |
63 | 1. Radioactive particles hit a Geiger counter according to a Poisson process
64 | at a rate of $\lambda=0.8$ particles per second. Let $\{ N(t); t \geq 0 \}$ represent this Poisson process.
65 |
66 | Define the new process $\{ D(t); t \geq 3 \}$ by
67 | \[ D(t) = N(t) - N(t - 3). \]
68 | This process represents the number of particles that hit the Geiger counter in the
69 | last 3 seconds. Calculate the expected power in $\{ D(t); t \geq 3 \}$.
70 |
71 | 2. Consider the moving average process $\{ X[n] \}$ of Example \@ref(exm:ma1), defined by
72 | \[ X[n] = 0.5 Z[n] + 0.5 Z[n-1], \]
73 | where $\{ Z[n] \}$ is a sequence of i.i.d. standard normal random variables.
74 | Calculate the expected power in $\{ X[n] \}$.
75 |
76 | 3. Let $\Theta$ be a $\text{Uniform}(a=-\pi, b=\pi)$ random variable, and let $f$ be a constant. Define the random phase process $\{ X(t) \}$ by
77 | \[ X(t) = \cos(2\pi f t + \Theta). \]
78 | Calculate the expected power in $\{ X(t) \}$.
79 |
80 | 4. Let $\{ X(t) \}$ be a continuous-time random process with mean function
81 | $\mu_X(t) = -1$ and autocovariance function $C_X(s, t) = 2e^{-|s - t|/3}$.
82 | Calculate the expected power of $\{ X(t) \}$.
83 |
--------------------------------------------------------------------------------
/preamble.tex:
--------------------------------------------------------------------------------
1 | \usepackage{booktabs}
2 | \usepackage{amsthm}
3 | \makeatletter
4 | \def\thm@space@setup{%
5 | \thm@preskip=8pt plus 2pt minus 4pt
6 | \thm@postskip=\thm@preskip
7 | }
8 | \makeatother
9 |
--------------------------------------------------------------------------------
/random-process-examples.Rmd:
--------------------------------------------------------------------------------
1 | # Examples of Random Processes {#random-process-examples}
2 |
3 | In this lesson, we cover a few more examples of random processes.
4 |
5 | ```{example random-amplitude, name="Random Amplitude Process"}
6 | Let $A$ be a random variable. Let $f$ be a constant. Then the continuous-time process
7 | \[ X(t) = A\cos(2\pi f t) \]
8 | is called a **random amplitude** process.
9 |
10 | Note that once the value of $A$ is simulated, the random process $\{ X(t) \}$ is
11 | completely specified for all times $t$.
12 |
13 | Shown below are 30 realizations of
14 | the random amplitude process, where $A$ is $\text{Binomial}(n=5, p=0.5)$.
15 | ```
16 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
17 | plot(NA, NA,
18 | xaxt="n", yaxt="n", bty="n",
19 | xlim=c(-2, 3), ylim=c(-5.1, 5.1),
20 | xlab="Time (t)", ylab="X(t)")
21 | axis(1, pos=0)
22 | axis(2, pos=0)
23 | set.seed(1)
24 | ts <- seq(-2, 3, by=0.01)
25 | a <- rbinom(30, 5, 0.5)
26 | for(i in 1:30) {
27 | lines(ts, a[i] * cos(2 * pi * ts), col=rgb(0, 0, 1, 0.2))
28 | }
29 | ```
30 |
31 | Here is a video that animates the random amplitude process.
32 |
33 |
34 |
35 |
36 | ```{example ma1, name="Moving Average Process"}
37 | Let $\{ Z[n] \}$ be a white noise process. Then, a **moving average process** (of order 1) $\{ X[n] \}$
38 | is a discrete-time process defined by
39 | \begin{equation}
40 | X[n] = b_0 Z[n] + b_1 Z[n-1].
41 | (\#eq:ma1)
42 | \end{equation}
43 |
44 | Below, we show one realization of a white noise process $\{ Z[n] \}$ (in orange),
45 | along with the corresponding moving average process
46 | \[ X[n] = 0.5 Z[n] + 0.5 Z[n-1]. \]
47 | Notice that the moving average process is just the average of successive values of the
48 | white noise process. As a result, the moving average process is "smoother" than the white noise process.
49 | ```
50 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
51 | set.seed(1)
52 | ts <- -2:7
53 | Z <- rnorm(10)
54 | plot(ts, Z, col="orange", pch=19,
55 | xaxt="n", yaxt="n", bty="n",
56 | xlim=c(0, 6), ylim=c(-3, 3),
57 | xlab="Time Sample (n)", ylab="Value of the Process")
58 | lines(ts, Z, col="orange")
59 | axis(1, pos=0)
60 | axis(2, pos=0)
61 | X <- c()
62 | for(i in 2:10) {
63 | X <- c(X, (Z[i] + Z[i-1]) / 2)
64 | }
65 | points(-1:7, X, col="blue", pch=19)
66 | lines(-1:7, X, col="blue", lwd=2)
67 | legend(4, 3, c("White Noise", "Moving Average"), lty=1, lwd=c(1, 2), col=c("orange", "blue"))
68 | ```
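
If you want to generate a moving average series yourself, here is one way to do it in R (a sketch; the vectorized line below simply averages successive values of the white noise).

```{r, eval=FALSE}
# Simulate a white noise sequence and form X[n] = 0.5 Z[n] + 0.5 Z[n-1].
set.seed(1)
Z <- rnorm(10)
X <- 0.5 * Z[-1] + 0.5 * Z[-length(Z)]   # averages of successive white noise values
X
```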
69 |
70 | Now, we show 30 realizations of the same moving average process. Superficially, this might
71 | look like the white noise of Example \@ref(exm:white-noise), but if you look closely, you will
72 | see that each individual function fluctuates less.
73 |
74 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
75 | set.seed(1)
76 | plot(NA, NA,
77 | xaxt="n", yaxt="n", bty="n",
78 | xlim=c(0, 6), ylim=c(-3, 3),
79 | xlab="Time Sample (n)", ylab="X[n]")
80 | axis(1, pos=0)
81 | axis(2, pos=0)
82 | for(i in 1:30) {
83 | Z <- rnorm(10)
84 | X <- c()
85 |     for(j in 2:10) {
86 |       X <- c(X, (Z[j] + Z[j-1]) / 2)
87 |     }
88 | points(-1:7, X, col=rgb(0, 0, 1, 0.2), pch=19)
89 | lines(-1:7, X, col=rgb(0, 0, 1, 0.2), lwd=2)
90 | }
91 | ```
92 |
93 |
--------------------------------------------------------------------------------
/random-walk.Rmd:
--------------------------------------------------------------------------------
1 | # Random Walk {#random-walk}
2 |
3 | ## Theory {-}
4 |
5 | The **random walk** (also known as the "drunkard's walk") is an example of a
6 | random process evolving over time, like the
7 | Poisson process (Lesson \@ref(poisson-process)).
8 |
9 | The setup for the random walk is as follows.
10 | A drunk man is stumbling home from a bar. Because of his inebriated state,
11 | each step he takes is equally likely to be one step forward or one step backward,
12 | independent of any other step.
13 |
14 | In other words, the $i$th step is a random variable $Z_i$, with p.m.f.
15 |
16 | \[ \begin{array}{r|cc}
17 | z & -1 & 1 \\
18 | \hline
19 | f(z) & 0.5 & 0.5
20 | \end{array} \]
21 |
22 | and the $Z_i$s are independent.
23 |
24 | The drunkard's position after $n$ steps is
25 | $$ X_n = Z_1 + Z_2 + \ldots + Z_n. $$
26 | (We assume that he starts at the origin $X_0 = 0$.)
27 |
28 | Shown below are three possible realizations of the random walk. A realization of a
29 | random process over time is called a _sample path_. We plot the three sample
30 | paths below, each of which shows the position of the drunkard $X_n$ as a function of $n$.
31 | Although all three sample paths start at 0 at $n=0$, they diverge very quickly.
32 | ```{r, echo=FALSE, fig.show = "hold", fig.align = "default"}
33 | set.seed(1)
34 |
35 | i <- 19
36 | plot(NULL, xaxt="n", yaxt="n",
37 | xlim=c(0,22), ylim=c(-10, 10),
38 | xlab="", ylab="", bty="n")
39 | axis(1, pos=0, at=c(0, 5, 10, 15, 20), labels=c("", 5, 10, 15, 20))
40 | axis(2, pos=0)
41 | mtext(expression("X"[n]), side=2, line=1, at=0)
42 | text(x=21, y=0, "n")
43 | for(col in c("blue", "orange", "green")) {
44 | x <- cumsum(sample(c(-1, 1), 20, replace=T))
45 | points(0:20, c(0, x), pch=i, col=col)
46 | lines(0:20, c(0, x), pch=i, col=col)
47 | i <- i - 1
48 | }
49 | ```
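
If you would like to simulate sample paths like these yourself, a minimal sketch is shown below.

```{r, eval=FALSE}
# Simulate one sample path of the random walk for 20 steps.
set.seed(1)
z <- sample(c(-1, 1), 20, replace=TRUE)  # the steps Z_1, ..., Z_20
x <- c(0, cumsum(z))                     # the positions X_0, X_1, ..., X_20
plot(0:20, x, type="o", xlab="n", ylab=expression("X"[n]))
```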
50 |
51 | ## Essential Practice {-}
52 |
53 | Answer the following questions about the random walk. You should use linearity of expectation and
54 | properties of covariance as much as possible.
55 |
56 | 1. What is the distribution of $X_2$, the drunkard's position after just $2$ steps? (This is not a named distribution. Just make a table showing the possible values and their probabilities.)
57 | 2. Calculate $E[Z_i]$ and $\text{Var}[Z_i]$.
58 | 3. Calculate $E[X_3]$ and $\text{Var}[X_3]$.
59 | 4. Consider $X_3$ and $X_5$. Do you think their covariance is positive, negative, or zero? Calculate $\text{Cov}[X_3, X_5]$.
60 |
--------------------------------------------------------------------------------
/replacement.Rmd:
--------------------------------------------------------------------------------
1 | # Sampling With Replacement {#replacement}
2 |
3 | ## Motivating Example {-}
4 |
5 | Recall Galileo's problem from Lesson \@ref(counting). He wanted to know whether
6 | a sum of 9 or a sum of 10 was more likely when 3 dice are rolled. In fact,
7 | Galileo's peers reasoned that the two events should be equally likely,
8 | since there are six ways to get a sum of 9
9 |
10 | \begin{align*}
11 | 3 + 3 + 3 & & 2 + 3 + 4 & & 2 + 2 + 5 & & 1 + 4 + 4 & & 1 + 3 + 5 & & 1 + 2 + 6
12 | \end{align*}
13 |
14 | and also six ways to get a sum of 10:
15 |
16 | \begin{align*}
17 | 3 + 3 + 4
18 | & & 2 + 4 + 4
19 | & & 2 + 3 + 5
20 | & & 2 + 2 + 6
21 | & & 1 + 4 + 5
22 | & & 1 + 3 + 6
23 | \end{align*}
24 |
25 | But gamblers of the day knew better. From experience, they knew that
26 | a sum of 10 was more likely than a sum of 9. So where did Galileo's
27 | peers go wrong in their reasoning?
28 |
29 | ## Discussion {-}
30 |
31 | Recall from Lesson \@ref(box-models) that the three rolls of a die can be
32 | modeled as $n=3$ draws _with replacement_ from the box
33 | \[ \fbox{$\fbox{1}\ \fbox{2}\ \fbox{3}\ \fbox{4}\ \fbox{5}\ \fbox{6}$}. \]
34 | Galileo's peers showed that there are 6 ways to get a sum of 9
35 | _if you ignore the order in which the tickets were drawn_.
36 | (Notice that they included 2 + 3 + 5, but not 3 + 5 + 2 and other orderings of the same three rolls.)
37 | They also showed that there are 6 ways to get a sum of 10. Why does this not
38 | guarantee the probabilities of the two events are the same?
39 |
40 | Notice that this case (draws with replacement, order doesn't matter) is one that
41 | we have not studied yet.
42 |
43 | | | with replacement | without replacement |
44 | |-------------------:|:----------------:|:-------------------:|
45 | | order matters| $N^n$ |$\frac{N!}{(N-n)!}$ |
46 | |order doesn't matter| ??? | $\binom{N}{n} = \frac{N!}{n!(N-n)!}$ |
47 |
48 | To settle the question, let's go back to counting ordered outcomes.
49 |
50 | - The outcome $2 + 3 + 4$ corresponds to $3! = 6$ outcomes, when you account for the possible orderings:
51 | - $2 + 3 + 4$
52 | - $2 + 4 + 3$
53 | - $3 + 2 + 4$
54 | - $3 + 4 + 2$
55 | - $4 + 2 + 3$
56 | - $4 + 3 + 2$
57 | - On the other hand, the outcome $2 + 2 + 5$ can only be reordered $3$ different ways:
58 | - $2 + 2 + 5$
59 | - $2 + 5 + 2$
60 | - $5 + 2 + 2$
61 | - And the outcome $3 + 3 + 3$ only has one possible ordering.
62 |
63 | In other words, when we draw with replacement, the same ticket can be drawn more than once, and
64 | repetitions reduce the number of ways that tickets can be reordered. The problem with
65 | simply counting the number of outcomes is that the 6 outcomes
66 | \begin{align*}
67 | 3 + 3 + 3 & & 2 + 3 + 4 & & 2 + 2 + 5 & & 1 + 4 + 4 & & 1 + 3 + 5 & & 1 + 2 + 6
68 | \end{align*}
69 | are not all equally likely. $2 + 3 + 4$ is twice as likely as $2 + 2 + 5$ and
70 | six times as likely as $3 + 3 + 3$. When draws are made with replacement, only ordered outcomes
71 | are equally likely.
72 |
73 | This was Galileo's insight. When he took into account the number of possible orderings associated
74 | with each unordered outcome:
75 | \begin{array}{rr}
76 | \hline
77 | \text{Unordered Outcome} & \text{Possible Orderings} \\
78 | \hline
79 | 3 + 3 + 3 & \text{1 way}\ \\
80 | 2 + 3 + 4 & \text{6 ways} \\
81 | 2 + 2 + 5 & \text{3 ways} \\
82 | 1 + 4 + 4 & \text{3 ways} \\
83 | 1 + 3 + 5 & \text{6 ways} \\
84 | 1 + 2 + 6 & + \text{6 ways} \\
85 | \hline
86 | & \text{25 ways}
87 | \end{array}
88 | he found that the probability of a 9 was actually
89 | \[ \frac{25}{216}. \]
90 | (Remember that there are $6^3 = 216$ equally likely outcomes when order matters.)
91 |
92 | Repeating the calculation for the probability of a 10, Galileo showed that the two
93 | probabilities were indeed different.
94 |
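We can also confirm Galileo's count by brute force: enumerate all $6^3 = 216$ equally likely ordered outcomes and count how many sum to 9. (This sketch is not part of Galileo's argument, but it reaches the same number; the same approach settles the question for a sum of 10.)

```{r, eval=FALSE}
# Enumerate all ordered outcomes of three dice and count the sums of 9.
rolls <- expand.grid(die1=1:6, die2=1:6, die3=1:6)
sums <- rowSums(rolls)
sum(sums == 9)        # 25 ordered outcomes, matching the table above
sum(sums == 9) / 216  # probability 25/216
```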
95 |
96 | ## Examples {-}
97 |
98 | 1. Here's a different illustration of the fact that not all unordered outcomes are
99 | equally likely when draws are made with replacement. In the Pick 3 Lotto, a winning number is
100 | chosen between 000 and 999. Contestants win if the digits in their chosen number match
101 | the digits of the winning number, _in any order_.
102 | a. What is your chance of winning if you bet on 053?
103 | b. What is your chance of winning if you bet on 055?
104 | c. What is your chance of winning if you bet on 555?
105 | 2. Complete the solution to Galileo's problem. What is the probability that the sum is 10
106 | when 3 fair dice are rolled? How does this compare with the probability that the sum is 9?
107 |
108 |
109 | ## Bonus Material {-}
110 |
111 | You will rarely need to
112 | count the _unordered_ ways that $n$ tickets can be drawn _with replacement_, since the
113 | unordered outcomes are not equally likely. However, in case you are curious how this
114 | can be calculated, the following video explains how. This video is completely optional!
115 |
116 |
--------------------------------------------------------------------------------
/style.css:
--------------------------------------------------------------------------------
1 | p.caption {
2 | color: #777;
3 | margin-top: 10px;
4 | }
5 | p code {
6 | white-space: inherit;
7 | }
8 | pre {
9 | word-break: normal;
10 | word-wrap: normal;
11 | }
12 | pre code {
13 | white-space: inherit;
14 | }
15 |
--------------------------------------------------------------------------------
/sums-discrete.Rmd:
--------------------------------------------------------------------------------
1 | # Sums of Random Variables {#sums-discrete}
2 |
3 | ## Theory {-}
4 |
5 | Let $X$ and $Y$ be random variables. What is the distribution of their sum---that is,
6 | the random variable $T = X + Y$?
7 |
8 | In principle, we already know how to calculate this. To determine the distribution of
9 | $T$, we need to calculate
10 | \[ f_T(t) \overset{\text{def}}{=} P(T = t) = P(X + Y = t), \]
11 | which we can do by summing the joint p.m.f. over the appropriate values:
12 | \begin{equation}
13 | \sum_{(x, y):\ x + y = t} f(x, y).
14 | (\#eq:sum-rv-discrete)
15 | \end{equation}
16 |
17 | For example, to calculate the _total_ number of bets that Xavier and Yolanda win, we
18 | calculate $P(X + Y = t)$ for $t = 0, 1, 2, \ldots, 8$. The probabilities that we
19 | would need to sum for $t=4$ are highlighted in the joint p.m.f. table below:
20 | \[ \begin{array}{rr|cccc}
21 | & 5 & 0 & 0 & 0 & .0238 \\
22 | & 4 & \fbox{0} & 0 & .0795 & .0530 \\
23 | y & 3 & 0 & \fbox{.0883} & .1766 & .0294 \\
24 | & 2 & .0327 & .1963 & \fbox{.0981} & 0 \\
25 | & 1 & .0726 & .1090 & 0 & \fbox{0} \\
26 | & 0 & .0404 & 0 & 0 & 0 \\
27 | \hline
28 | & & 0 & 1 & 2 & 3\\
29 | & & & & x
30 | \end{array}. \]
31 |
32 | For a fixed value of $t$, $x$ determines the value of $y$ (and vice versa). In particular,
33 | $y = t - x$. So we can write \@ref(eq:sum-rv-discrete) as a sum over $x$:
34 | \begin{equation}
35 | f_T(t) = \sum_x f(x, t-x).
36 | (\#eq:sum-rv-general)
37 | \end{equation}
38 |
39 | This is the general equation for the p.m.f. of the sum $T$. If the random variables are independent,
40 | then we can actually say more.
41 |
42 | ```{theorem sum-independent, name="Sum of Independent Random Variables"}
43 | Let $X$ and $Y$ be independent random variables.
44 | Then, the p.m.f. of $T = X + Y$ is the **convolution** of the p.m.f.s of $X$ and $Y$:
45 | \begin{equation} f_T = f_X * f_Y.
46 | (\#eq:convolution)
47 | \end{equation}
48 | The convolution operator $*$ in \@ref(eq:convolution) is defined as follows:
49 | \[ f_T(t) = \sum_x f_X(x) \cdot f_Y(t-x). \]
50 | Note that the verb form of "convolution" is **convolve**, not "convolute", even though many
51 | students find convolution quite convoluted!
52 | ```
53 |
54 | ```{proof}
55 | This follows from \@ref(eq:sum-rv-general), after observing that independence means that the joint distribution
56 | is the product of the marginal distributions (Theorem \@ref(thm:joint-independent)):
57 | \[ f(x, t-x) = f_X(x) \cdot f_Y(t-x). \]
58 | ```
59 |
60 | ```{theorem sum-binomials, name="Sum of Independent Binomials"}
61 | Let $X$ and $Y$ be independent $\text{Binomial}(n, p)$ and $\text{Binomial}(m, p)$ random variables,
62 | respectively. Then $T = X + Y$ follows a $\text{Binomial}(n + m, p)$ distribution.
63 | ```
64 |
65 | ```{proof}
66 | We apply Theorem \@ref(thm:sum-independent) to binomial p.m.f.s.
67 |
68 | \begin{align*}
69 | f_T(t) &= \sum_{x=0}^t f_X(x) \cdot f_Y(t-x) \\
70 | &= \sum_{x=0}^t \binom{n}{x} p^x (1-p)^{n-x} \cdot \binom{m}{t-x} p^{t-x} (1-p)^{m-(t-x)} \\
71 | &= \sum_{x=0}^t \binom{n}{x} \binom{m}{t-x} p^t (1-p)^{n+m-t} \\
72 | &= \binom{n+m}{t} p^t (1-p)^{n+m-t},
73 | \end{align*}
74 | which is the p.m.f. of a $\text{Binomial}(n + m, p)$ random variable.
75 |
76 | In the last equality, we used the fact that
77 | \begin{equation}
78 | \sum_{x=0}^t \binom{n}{x} \binom{m}{t-x} = \binom{n+m}{t}.
79 | (\#eq:vandermonde)
80 | \end{equation}
81 | This equation is known as Vandermonde's identity. One way to see it is to observe
82 | \[ \sum_{x=0}^t \frac{\binom{n}{x} \binom{m}{t-x}}{\binom{n+m}{t}} = 1, \]
83 | since we are summing the p.m.f. of a $\text{Hypergeometric}(t, n, m)$ random variable over
84 | all of its possible values $0, 1, 2, \ldots, t$. Now, if we multiply both sides of this equality by
85 | \[ \binom{n+m}{t}, \]
86 | we obtain Vandermonde's identity \@ref(eq:vandermonde).
87 |
88 | However, we can see in another way that the sum of two independent binomials must be binomial.
89 | $X$ represents the number of $\fbox{1}$s in $n$ draws with replacement from a box.
90 | $Y$ represents the number of $\fbox{1}$s in $m$ _separate_ draws with replacement from the _same_ box:
91 |
92 | - The draws must be _separate_ because we need $X$ to be independent of $Y$.
93 | - We can use the _same_ box because $p$ (which corresponds to how many $\fbox{1}$s and $\fbox{0}$s there
94 | are in the box) is the same for $X$ and $Y$.
95 |
96 | Instead of drawing $n$ times for $X$ and $m$ times for $Y$, we could
97 | simply draw $m+n$ times with replacement from this box. $X+Y$ is then the number of
98 | $\fbox{1}$s in these $m+n$ draws and must also be binomial.
99 | ```
100 |
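Here is a numerical sketch of Theorem \@ref(thm:sum-binomials), using (arbitrarily) $n = 3$, $m = 2$, and $p = 0.5$: convolving the two binomial p.m.f.s reproduces the $\text{Binomial}(n + m, p)$ p.m.f.

```{r, eval=FALSE}
# Convolve the p.m.f.s of Binomial(3, 0.5) and Binomial(2, 0.5).
f_X <- dbinom(0:3, size=3, prob=0.5)
f_Y <- dbinom(0:2, size=2, prob=0.5)
f_T <- convolve(f_X, rev(f_Y), type="open")  # f_T(t) = sum_x f_X(x) f_Y(t - x)
round(f_T, 4)
round(dbinom(0:5, size=5, prob=0.5), 4)      # the Binomial(5, 0.5) p.m.f. agrees
```
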
101 | ## Essential Practice {-}
102 |
103 | 1. Let $X$ and $Y$ be independent $\text{Poisson}(\mu)$ and $\text{Poisson}(\nu)$ random variables.
104 | Use convolution to find the distribution of $X + Y$. (_Hint:_ It is a named distribution.) Then,
105 | by making an analogy to a Poisson process, explain why this must be the distribution of $X + Y$.
106 |
107 | (The binomial theorem will come in handy: $\sum_{x=0}^n \binom{n}{x} a^x b^{n-x} = (a + b)^n$.)
108 |
109 | 2. Let $X$ and $Y$ be independent $\text{Geometric}(p)$ random variables. Use convolution to
110 | find the distribution of $X + Y$. (_Hint:_ It is a named distribution. It may help to remember
111 | that $\sum_{i=1}^m 1 = m = \binom{m}{1}$.) Then, by making an
112 | analogy to a box model, explain why this has to be the distribution of $X + Y$.
113 |
114 | 3. Give an example of two $\text{Binomial}(n=3, p=0.5)$ random variables $X$ and $Y$,
115 | where $T = X + Y$ does not follow a $\text{Binomial}(n=6, p=0.5)$ distribution. Why does this
116 | not contradict Theorem \@ref(thm:sum-binomials)?
117 |
118 |
119 | ## Additional Exercises {-}
120 |
121 | 1. Let $X$ and $Y$ be independent random variables with the p.m.f.
122 |
123 | | $x$ | 1 | 2 | 3 | 4 | 5 | 6 |
124 | |------:|:--:|:--:|:--:|:--:|:--:|:--:|
125 | |$f(x)$ | $1/6$ | $1/6$ | $1/6$ | $1/6$ | $1/6$ | $1/6$
126 |
127 | Use convolution to find the p.m.f. of $T = X + Y$. Why does the answer make sense?
128 | (_Hint:_ $X$ and $Y$ represent the outcomes when you roll two fair dice.)
129 |
--------------------------------------------------------------------------------
/sums.Rmd:
--------------------------------------------------------------------------------
1 | # Calculating Sums by Distribution Matching {#sums}
2 |
3 | ## Motivating Example {-}
4 |
5 | What is the value of the sum
6 | \begin{equation}
7 | \sum_{k=0}^5 \binom{5}{k}?
8 | (\#eq:sum-subsets)
9 | \end{equation}
10 |
11 | One way to calculate this is to use the fact that the probabilities in a binomial distribution have to add up to
12 | $1$. First, to make the summation \@ref(eq:sum-subsets) match a binomial distribution, we add factors of
13 | $(1/2)^k$ and $(1/2)^{5-k}$ inside the sum:
14 | $$\sum_{k=0}^5 \binom{5}{k} = \sum_{k=0}^5 \binom{5}{k} \left(\frac{1}{2}\right)^k \left(\frac{1}{2}\right)^{5-k} 2^5. $$
15 | Of course, we have to be sure to cancel out any terms that we add. The $2^5$ term precisely cancels out the
16 | $(1/2)^k \cdot (1/2)^{5-k} = (1/2)^5$ term that we added.
17 |
18 | Next, noting that $2^5$ does not depend on $k$ (the index of the
19 | summation), we can bring the $2^5$ outside the sum, leaving us with the p.m.f. of the
20 | $\text{Binomial}(n=5, p=1/2)$ distribution inside the summation. This sum must equal 1, since
21 | probabilities have to add up to 1.
22 | $$\sum_{k=0}^5 \binom{5}{k} = 2^5 \underbrace{\sum_{k=0}^5 \overbrace{\binom{5}{k} \left(\frac{1}{2}\right)^k \left(\frac{1}{2}\right)^{5-k}}^{\text{$f[k]$ for $\text{Binomial}(n=5, p=1/2)$}}}_{=1} = 2^5. $$
23 | So we see that the sum equals $2^5$. This technique of calculating sums is called "distribution matching"
24 | because we manipulate the summation until we recognize one of the probability distributions that we have
25 | learned.
26 |
27 | There is another way to see why the answer is $2^5$, without evaluating the summation.
28 | Remembering that $\binom{n}{k}$ represents the number of distinct
29 | subsets of $\{ 1, ..., n \}$ of size $k$, the sum \@ref(eq:sum-subsets) is just the
30 | total number of subsets (of any size) of $\{ 1, ..., 5 \}$. There is another way
31 | to count the number of subsets; each element of $\{ 1, ..., 5 \}$ can either be
32 | in the subset or not. Since there are 2 choices for each element, the total number of
33 | subsets must be:
34 | $$ 2 \times 2 \times 2 \times 2 \times 2 = 2^5 $$
35 |
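As a quick check, R agrees with both ways of counting:

```{r, eval=FALSE}
sum(choose(5, 0:5))  # the sum of the binomial coefficients
2^5                  # the number of subsets of a 5-element set
```
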
36 | ## Theory {-}
37 |
38 | Many useful sums can be calculated by using the fact that the sum of a p.m.f. over all
39 | of the possible values must equal $1$.
40 | ```{theorem, name="Binomial Theorem"}
41 | For any real numbers $a, b \geq 0$ and integers $n \geq 0$:
42 | \begin{equation}
43 | (a + b)^n = \sum_{k=0}^n \binom{n}{k} a^k b^{n-k}.
44 | (\#eq:binomial-theorem)
45 | \end{equation}
46 | (Note that \@ref(eq:binomial-theorem) holds, even when $a$ and $b$ are negative. However, only the
47 | non-negative case can be proven using the distribution matching technique, and we only need the
48 | Binomial Theorem for $a, b \geq 0$ in the remainder of this book.)
49 | ```
50 | ```{proof}
51 | \begin{align*}
52 | \sum_{k=0}^n \binom{n}{k} a^k b^{n-k} &= \sum_{k=0}^n \binom{n}{k} \left(\frac{a}{a+b} \right)^k \left(\frac{b}{a + b}\right)^{n-k} (a + b)^k (a + b)^{n-k} \\
53 | &= (a + b)^n \sum_{k=0}^n \binom{n}{k} \left(\frac{a}{a+b} \right)^k \left(\frac{b}{a + b}\right)^{n-k} \\
54 | &= (a + b)^n \underbrace{\sum_{k=0}^n \binom{n}{k} p^k (1-p)^{n-k}}_{=1} & p := \frac{a}{a+b} \\
55 | &= (a + b)^n
56 | \end{align*}
57 | ```
58 |
59 | ```{theorem, name="Geometric Series"}
60 | For $0 \leq r < 1$,
61 | \begin{equation}
62 | \sum_{k=0}^\infty r^k = 1 + r + r^2 + r^3 + \ldots = \frac{1}{1 - r}.
63 | (\#eq:geometric-series)
64 | \end{equation}
65 | (Note that \@ref(eq:geometric-series) holds for all $|r| < 1$. However, only the
66 | non-negative case can be proven using the distribution matching technique.)
67 | ```
68 | ```{proof}
69 | \begin{align*}
70 | \sum_{k=0}^\infty r^k &= \sum_{x=1}^\infty (1 - p)^{x-1} & x := k + 1, p := 1 - r \\
71 | &= \sum_{x=1}^\infty (1 - p)^{x-1} p \frac{1}{p} \\
72 | &= \frac{1}{p} \underbrace{\sum_{x=1}^\infty (1 - p)^{x-1} p}_{=1} \\
73 | &= \frac{1}{1 - r}
74 | \end{align*}
75 | ```
76 |
77 | ## Worked Examples {-}
78 |
79 | 1. Show Vandermonde's Identity:
80 | $$ \sum_{k=0}^r \binom{m}{k} \binom{n}{r - k} = \binom{n + m}{r}. $$
81 | 2. Show that $e$ (the mathematical constant approximately equal to $2.718$) is equal to
82 | $$ e = 1 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots. $$
83 | 3. Equation \@ref(eq:geometric-series) is the sum of an _infinite_ geometric series.
84 | Show the sum of a _finite_ geometric series is:
85 | $$ \sum_{k=0}^n r^k = \frac{1 - r^{n+1}}{1 - r}. $$
86 |
87 | ## Exercises {-}
88 |
--------------------------------------------------------------------------------
/tikz_template.tex:
--------------------------------------------------------------------------------
1 | \documentclass{article}
2 | \include{preview}
3 | \usepackage[pdftex,active,tightpage]{preview}
4 | \usepackage{amsmath}
5 | \usepackage{tikz}
6 | \usetikzlibrary{positioning}
7 | \usetikzlibrary{matrix}
8 |
9 | \usepackage{epsdice}
10 |
11 | % packages for tables
12 | \usepackage{colortbl}
13 | \usepackage{multirow}
14 | \usepackage{rotating}
15 |
16 | \begin{document}
17 | \begin{preview}
18 | %% TIKZ_CODE %%
19 | \end{preview}
20 | \end{document}
21 |
--------------------------------------------------------------------------------
/toc.css:
--------------------------------------------------------------------------------
1 | #TOC ul,
2 | #TOC li,
3 | #TOC span,
4 | #TOC a {
5 | margin: 0;
6 | padding: 0;
7 | position: relative;
8 | }
9 | #TOC {
10 | line-height: 1;
11 | border-radius: 5px 5px 0 0;
12 | background: #141414;
13 | background: linear-gradient(to bottom, #333333 0%, #141414 100%);
14 | border-bottom: 2px solid #0fa1e0;
15 | width: auto;
16 | }
17 | #TOC:after,
18 | #TOC ul:after {
19 | content: '';
20 | display: block;
21 | clear: both;
22 | }
23 | #TOC a {
24 | background: #141414;
25 | background: linear-gradient(to bottom, #333333 0%, #141414 100%);
26 | color: #ffffff;
27 | display: block;
28 | padding: 19px 20px;
29 | text-decoration: none;
30 | text-shadow: none;
31 | }
32 | #TOC ul {
33 | list-style: none;
34 | }
35 | #TOC > ul > li {
36 | display: inline-block;
37 | float: left;
38 | margin: 0;
39 | }
40 | #TOC > ul > li > a {
41 | color: #ffffff;
42 | }
43 | #TOC > ul > li:hover:after {
44 | content: '';
45 | display: block;
46 | width: 0;
47 | height: 0;
48 | position: absolute;
49 | left: 50%;
50 | bottom: 0;
51 | border-left: 10px solid transparent;
52 | border-right: 10px solid transparent;
53 | border-bottom: 10px solid #0fa1e0;
54 | margin-left: -10px;
55 | }
56 | #TOC > ul > li:first-child > a {
57 | border-radius: 5px 0 0 0;
58 | }
59 | #TOC.align-right > ul > li:first-child > a,
60 | #TOC.align-center > ul > li:first-child > a {
61 | border-radius: 0;
62 | }
63 | #TOC.align-right > ul > li:last-child > a {
64 | border-radius: 0 5px 0 0;
65 | }
66 | #TOC > ul > li.active > a,
67 | #TOC > ul > li:hover > a {
68 | color: #ffffff;
69 | box-shadow: inset 0 0 3px #000000;
70 | background: #070707;
71 | background: linear-gradient(to bottom, #262626 0%, #070707 100%);
72 | }
73 | #TOC .has-sub {
74 | z-index: 1;
75 | }
76 | #TOC .has-sub:hover > ul {
77 | display: block;
78 | }
79 | #TOC .has-sub ul {
80 | display: none;
81 | position: absolute;
82 | width: 200px;
83 | top: 100%;
84 | left: 0;
85 | }
86 | #TOC .has-sub ul li a {
87 | background: #0fa1e0;
88 | border-bottom: 1px dotted #31b7f1;
89 | filter: none;
90 | display: block;
91 | line-height: 120%;
92 | padding: 10px;
93 | color: #ffffff;
94 | }
95 | #TOC .has-sub ul li:hover a {
96 | background: #0c7fb0;
97 | }
98 | #TOC ul ul li:hover > a {
99 | color: #ffffff;
100 | }
101 | #TOC .has-sub .has-sub:hover > ul {
102 | display: block;
103 | }
104 | #TOC .has-sub .has-sub ul {
105 | display: none;
106 | position: absolute;
107 | left: 100%;
108 | top: 0;
109 | }
110 | #TOC .has-sub .has-sub ul li a {
111 | background: #0c7fb0;
112 | border-bottom: 1px dotted #31b7f1;
113 | }
114 | #TOC .has-sub .has-sub ul li a:hover {
115 | background: #0a6d98;
116 | }
117 | #TOC ul ul li.last > a,
118 | #TOC ul ul li:last-child > a,
119 | #TOC ul ul ul li.last > a,
120 | #TOC ul ul ul li:last-child > a,
121 | #TOC .has-sub ul li:last-child > a,
122 | #TOC .has-sub ul li.last > a {
123 | border-bottom: 0;
124 | }
125 | #TOC ul {
126 | font-size: 1.2rem;
127 | }
128 |
--------------------------------------------------------------------------------
/total-probability.Rmd:
--------------------------------------------------------------------------------
1 | # Law of Total Probability {#ltp}
2 |
3 | ## Motivating Example {-}
4 |
5 | You watch a magician place 4 ordinary quarters and 1 double-headed quarter into a box. If you select a coin
6 | from the box at random and toss it, what is the probability that it lands heads?
7 |
8 | ## Theory {-}
9 |
10 |
11 |
12 | In some situations, calculating the probability of an event is easy, once you condition on the right information.
13 | For example, in the coin problem above, if we knew that the coin we chose was ordinary, then:
14 | \[ P(\text{heads} | \text{ordinary}) = \frac{1}{2}. \]
15 | On the other hand, if the coin we chose was double-headed, then
16 | \[ P(\text{heads} | \text{double-headed}) = 1. \]
17 |
18 | How do we combine these conditional probabilities to come up with $P(\text{heads})$,
19 | the overall probability that the coin lands heads?
20 |
21 | ```{theorem, ltp, name="Law of Total Probability"}
22 | Let $A_1, ..., A_n$ be a partition of the possible outcomes. Then:
23 | \[ P(B) = \sum_{i=1}^n P(A_i) P(B | A_i). \]
24 | ```
25 |
26 | A **partition** is a collection of non-overlapping events that cover all the possible outcomes.
27 | For example, in the coin problem above, $A_1 = \{ \text{ordinary}\}$ and $A_2 = \{ \text{double-headed} \}$
28 | is a partition, since the coin that we selected has to be one of the two and cannot be both.
29 |
30 | Applying the Law of Total Probability to this problem, we have
31 | \begin{align*}
32 | P(\text{heads}) &= P(\text{ordinary}) P(\text{heads} | \text{ordinary}) + P(\text{double-headed}) P(\text{heads} | \text{double-headed}) \\
33 | &= 0.8 \cdot \frac{1}{2} + 0.2 \cdot 1 \\
34 | &= 0.6
35 | \end{align*}
36 |
37 | So the overall probability $P(B)$ is just a weighted average of the conditional probabilities
38 | $P(B | A_i)$, where the "weights" are $P(A_i)$. (Note that these "weights" have to add up to 1, since
39 | $A_1, ..., A_n$ are a partition of all the possible outcomes, whose total probability is $1.0$.)
40 | This means that the overall probability $P(B)$ will always lie somewhere between the conditional
41 | probabilities $P(B | A_i)$, with more weight given to the more probable scenarios. For example,
42 | we could have predicted that $P(\text{heads})$ would lie somewhere between $\frac{1}{2}$ and $1$;
43 | the fact that it is much closer to the former is because choosing the ordinary coin was more probable.
44 |
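Here is a sketch that carries out the same weighted average in R and then checks it with a simulation of the magician's box (the simulation size of 100,000 tosses is an arbitrary choice).

```{r, eval=FALSE}
# Law of Total Probability as a weighted average of conditional probabilities.
weights <- c(ordinary = 4/5, double_headed = 1/5)
cond_probs <- c(ordinary = 1/2, double_headed = 1)
sum(weights * cond_probs)   # 0.6

# Simulation: pick a coin at random, then toss it.
set.seed(1)
coin <- sample(c("ordinary", "double_headed"), 100000,
               replace=TRUE, prob=c(4/5, 1/5))
heads <- ifelse(coin == "double_headed", TRUE, runif(100000) < 0.5)
mean(heads)                 # approximately 0.6
```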
45 |
70 |
71 | ## Examples {-}
72 |
73 | 1. Here are some things we already know about a deck of cards:
74 | - The top card in a shuffled deck of cards has a $13/52$ chance of being a diamond.
75 | - If the top card is a diamond, then the second card has a $12/51$ chance of being a diamond.
76 | - If the top card is not a diamond, then the second card has a $13/51$ chance of being a diamond.
77 |
78 | Now, suppose we "burn" (i.e., discard) the top card without looking at it. What is the
79 | probability that the second card is a diamond? Use the Law of Total Probability, conditioning on the top card.
80 | Does burning cards affect probabilities?
81 |
82 | 2. Anny is a fan of chess competitor Hikaru Nakamura, and tomorrow is the World Chess Championship.
83 | She is superstitious and believes that the weather influences how he will perform. Hikaru has a 60% chance of winning
84 | if it rains, a 25% chance if it is cloudy, and a 10% chance if it is sunny. Anny checks the weather the night before,
85 | and the forecast says that the chance of rain tomorrow is 40%; otherwise, it is equally likely to be cloudy as sunny.
86 | What is the probability that Hikaru wins the World Chess Championship?
87 |
--------------------------------------------------------------------------------
/uniform.Rmd:
--------------------------------------------------------------------------------
1 | # Uniform Distribution {#uniform}
2 |
3 | ## Motivation {-}
4 |
5 | How do we model a random variable that is equally likely to take on any real
6 | number between $a$ and $b$?
7 |
8 |
9 | ## Theory {-}
10 |
11 | ```{definition uniform, name="Uniform Distribution"}
12 | A random variable $X$ is said to follow a $\text{Uniform}(a, b)$ distribution if
13 | its p.d.f. is
14 | \[ f(x) = \begin{cases} \frac{1}{b-a} & a \leq x \leq b \\ 0 & \text{otherwise} \end{cases}, \]
15 | or equivalently, if its c.d.f. is
16 | \[ F(x) = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & a \leq x \leq b \\ 1 & x > b \end{cases}. \]
17 | ```
18 |
19 | The p.d.f. and c.d.f. are graphed below.
20 |
21 | ```{r uniform-graph, echo=FALSE, fig.show = "hold", fig.align = "center",fig.cap="PDF and CDF of the Uniform Distribution", fig.asp=1.2}
22 | par(mfrow=c(2, 1), mgp=c(1, 1, 0))
23 |
24 | t <- seq(0, 2, by=0.1)
25 | plot(t, rep(1, length(t)), type="l", lwd=3, col="blue",
26 | xaxt="n", yaxt="n", bty="n",
27 | xlab="x", ylab="Probability Density Function f(x)",
28 | xlim=c(-0.5, 2.5), ylim=c(-0.05, 1.1))
29 | axis(1, pos=0, at=c(0, 2), labels=c("a", "b"))
30 | axis(2, pos=0, at=c(0, 1), labels=c("", expression(frac(1, b - a))))
31 | lines(c(-0.5, 2.5), c(0, 0))
32 | lines(c(0, 0), c(-0.05, 1.1))
33 | lines(c(-0.5, 0), c(0, 0), lwd=3, col="blue")
34 | lines(c(2, 2.5), c(0, 0), lwd=3, col="blue")
35 |
36 | plot(t, t / 2, type="l", lwd=3, col="red",
37 | xaxt="n", yaxt="n", bty="n",
38 | xlab="x", ylab="Cumulative Distribution Function F(x)",
39 | xlim=c(-0.5, 2.5), ylim=c(-0.05, 1.1))
40 | axis(1, pos=0, at=c(0, 2), labels=c("a", "b"))
41 | axis(2, pos=0, at=c(0, 1), labels=c("", 1))
42 | lines(c(-0.5, 2.5), c(0, 0))
43 | lines(c(0, 0), c(-0.05, 1.1))
44 | lines(c(-0.5, 0), c(0, 0), lwd=3, col="red")
45 | lines(c(2, 2.5), c(1, 1), lwd=3, col="red")
46 | ```
47 |
48 | Why is the p.d.f. of a $\text{Uniform}(a, b)$ random variable what it is?
49 |
50 | - Since we want all values between $a$ and $b$ to be equally likely, the p.d.f. must be constant
51 | between $a$ and $b$.
52 | - This constant is chosen so that the total area under the p.d.f. (i.e., the total probability)
53 | is 1. Since the p.d.f. is a rectangle of width $b-a$, the height must be $\frac{1}{b-a}$ to make
54 | the total area 1.
55 |
56 | ```{example fencing}
57 | You buy fencing for a square enclosure. The length of fencing that you buy is uniformly
58 | distributed between 0 and 4 meters. What is the probability that the enclosed area will be
59 | larger than $0.5$ square meters?
60 | ```
61 |
62 | ```{solution}
63 | Let $L$ be a $\text{Uniform}(a=0, b=4)$ random variable. Note that $L$ represents the perimeter of
64 | the square enclosure, so $L/4$ is the length of a side and the area is
65 | \[ A = \left( \frac{L}{4} \right)^2 = \frac{L^2}{16}. \]
66 | We want to calculate $P(\frac{L^2}{16} > 0.5)$. To do this, we rearrange the expression inside the
67 | probability to isolate $L$, at which point we can calculate the probability by integrating the p.d.f.
68 | of $L$ over the appropriate range.
69 | \begin{align*}
70 | P(\frac{L^2}{16} > 0.5) &= P(L > \sqrt{8}) \\
71 | &= \int_{\sqrt{8}}^{4} \frac{1}{4 - 0}\,dx \\
72 | &= \frac{4 - \sqrt{8}}{4} \\
73 | &\approx .293.
74 | \end{align*}
75 | ```
76 |
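A Monte Carlo check of this answer (a sketch, not part of the solution): simulate many fencing lengths and see how often the enclosed area exceeds $0.5$ square meters.

```{r, eval=FALSE}
# Simulate the fencing example; the proportion should be close to .293.
set.seed(1)
L <- runif(100000, min=0, max=4)
mean(L^2 / 16 > 0.5)
```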
77 |
78 | ## Essential Practice {-}
79 |
80 | 1. A point is chosen uniformly along the length of a stick, and the stick is broken at that point.
81 | What is the probability the left segment is more than twice as long as the right segment?
82 |
83 | (Hint: Assume the length of the stick is 1. Let $X$ be the point at which the stick is
84 | broken, and observe that $X$ is the length of the left segment.)
85 |
86 | 2. You inflate a spherical balloon in a single breath. If the volume of air you exhale in a
87 | single breath (in cubic inches) is $\text{Uniform}(a=36\pi, b=288\pi)$ random variable, what
88 | is the probability that the radius of the balloon is less than 5 inches?
89 |
--------------------------------------------------------------------------------
/var-continuous.Rmd:
--------------------------------------------------------------------------------
1 | # Variance of Continuous Random Variables {#var-continuous}
2 |
3 | ## Theory {-}
4 |
5 | The variance is defined for continuous random variables in exactly the
6 | same way as for discrete random variables, except the expected
7 | values are now computed with integrals and p.d.f.s, as in Lessons
8 | \@ref(ev-continuous) and \@ref(lotus-continuous), instead of sums and p.m.f.s.
9 | ```{definition var-continuous, name="Variance"}
10 | Let $X$ be a random variable. Then, the **variance** of $X$, symbolized
11 | $\text{Var}[X]$ is defined as
12 | \begin{equation}
13 | \text{Var}[X] \overset{\text{def}}{=} E[(X - E[X])^2].
14 | (\#eq:var-continuous)
15 | \end{equation}
16 | ```
17 |
18 | The "shortcut formula" also works for continuous random variables.
19 | ```{theorem var-shortcut-continuous, name="Shortcut Formula for Variance"}
20 | The variance can also be computed as:
21 | \begin{equation}
22 | \text{Var}[X] = E[X^2] - E[X]^2.
23 | (\#eq:var-shortcut-continuous)
24 | \end{equation}
25 | ```
26 |
27 | The standard deviation is also defined in the same way, as the square root of the variance;
28 | it corrects the units of the variance back to the original units of $X$.
29 | \[ \text{SD}[X] = \sqrt{\text{Var}[X]}. \]
30 |
31 | We apply the shortcut formula to derive formulas for the variances of the
32 | uniform and exponential distributions.
33 |
34 | ```{example uniform-var, name="Variance of the Uniform Distribution"}
35 | Let $X$ be a $\text{Uniform}(a, b)$ random variable. We calculated
36 | $E[X]$ in Example \@ref(exm:ev-unif) and $E[X^2]$ in Example \@ref(exm:ev-unif-sq), so
37 | most of the work has already been done.
38 | \begin{align*}
39 | \text{Var}[X] &= E[X^2] - E[X]^2 \\
40 | &= \frac{b^3 - a^3}{3(b-a)} - \left( \frac{a + b}{2} \right)^2 \\
41 | &= \frac{(b-a)^2}{12}
42 | \end{align*}
43 | ```
44 |
45 | ```{example exponential-var, name="Variance of the Exponential Distribution"}
46 | Let $X$ be an $\text{Exponential}(\lambda)$ random variable. We calculated
47 | $E[X]$ in Example \@ref(exm:ev-expo) and $E[X^2]$ in Example \@ref(exm:ev-expo-sq), so
48 | most of the work has already been done.
49 | \begin{align*}
50 | \text{Var}[X] &= E[X^2] - E[X]^2 \\
51 | &= \frac{2}{\lambda^2} - \left( \frac{1}{\lambda} \right)^2 \\
52 | &= \frac{1}{\lambda^2}
53 | \end{align*}
54 | ```
55 |
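Both formulas are easy to check by simulation. The sketch below uses arbitrary parameter values ($a = 2$, $b = 10$ for the uniform and $\lambda = 0.5$ for the exponential).

```{r, eval=FALSE}
# Sample variances should be close to the formulas above.
set.seed(1)
var(runif(100000, min=2, max=10))  # about (10 - 2)^2 / 12 = 5.33
var(rexp(100000, rate=0.5))        # about 1 / 0.5^2 = 4
```
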
56 | Formulas for the variance of named continuous distributions can be found
57 | in Appendix \@ref(continuous-distributions).
58 |
59 |
60 | ## Essential Practice {-}
61 |
62 | 1. The distance (in hundreds of miles) driven by a trucker in one day is a continuous
63 | random variable $X$ whose cumulative distribution function (c.d.f.) is given by:
64 | \[ F(x) = \begin{cases} 0 & x < 0 \\ x^3 / 216 & 0 \leq x \leq 6 \\ 1 & x > 6 \end{cases}. \]
65 |
66 | a. Calculate the standard deviation of $X$.
67 | b. What is the probability that $X$ is within 1 standard deviation of the mean
68 | (i.e., expected value)?
69 |
70 | 2. Small aircraft arrive at San Luis Obispo airport according to a Poisson process at a
71 | rate of 6 per hour.
72 |
73 | a. What is the expected value and standard deviation of the time between two arrivals (in hours)?
74 | b. What is the probability that the time between two arrivals will be more than 1
75 | standard deviation above the mean (i.e., expected value)?
76 |
77 |
78 | ## Additional Practice {-}
79 |
80 | 1. The article "Modeling Sediment and Water Column Interactions for Hydrophobic Pollutants"
81 | (Water Res., 1984: 1169–1174) suggests the uniform distribution on the interval $[7.5, 20]$ as a
82 | model for depth (cm) of the bioturbation layer in sediment in a certain region.
83 |
84 | a. What are the mean and variance of depth?
85 | b. What is the probability that the observed depth is within 1 standard deviation of the
86 | expected value?
87 |
--------------------------------------------------------------------------------