├── .gitignore ├── LICENSE ├── Lecture 1 - Introduction.ipynb ├── Lecture 10 - Policy Gradient III.ipynb ├── Lecture 11 - Fast RL I.ipynb ├── Lecture 12 - Fast RL II.ipynb ├── Lecture 13 - Fast RL III.ipynb ├── Lecture 14 - Batch RL.ipynb ├── Lecture 15 - Monte Carlo Tree Search.ipynb ├── Lecture 2 - Given a Model of the World.ipynb ├── Lecture 3 - Model-Free Policy Evaluation.ipynb ├── Lecture 4 - Model Free Control.ipynb ├── Lecture 5 - Value Function Approximation.ipynb ├── Lecture 6 - CNNs and Deep Q Learning.ipynb ├── Lecture 7 - Imitation Learning.ipynb ├── Lecture 8 - Policy Gradient I.ipynb ├── Lecture 9 - Policy Gradient II.ipynb ├── README.md └── img ├── CUT.PNG ├── LVFA.PNG ├── OPE.PNG ├── SARSA_theorem.PNG ├── SPI.PNG ├── VFA.PNG ├── bias_variance.PNG ├── convergence_VFA.PNG ├── diagram.PNG ├── dp_mc_td.PNG ├── dp_mdp.PNG ├── dp_mrp.PNG ├── dp_tree.PNG ├── dueling_dqn.PNG ├── experience_replay.PNG ├── forward_search.PNG ├── mc_td.PNG ├── monotonic_improvement.PNG ├── policy_improvement.PNG ├── policy_iteration.PNG ├── prove_monotonic.PNG ├── rl_agent_types.PNG ├── search_tree_path.PNG └── value_iteration.PNG /.gitignore: -------------------------------------------------------------------------------- 1 | .ipynb_checkpoints/* -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 Vincent Tu 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Lecture 10 - Policy Gradient III.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 10 - Policy Gradient III\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 34 | "
" 35 | ] 36 | }, 37 | { 38 | "cell_type": "markdown", 39 | "id": "a8837110", 40 | "metadata": {}, 41 | "source": [ 42 | "# 1. Introduction" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "id": "e3a1a301", 48 | "metadata": {}, 49 | "source": [ 50 | "Today's lecture will cover the 2 other methods for automatic step-size tuning: trust regions and the TRPO algorithm." 51 | ] 52 | }, 53 | { 54 | "cell_type": "markdown", 55 | "id": "ac9f36cf", 56 | "metadata": {}, 57 | "source": [ 58 | "# 2. Need for Automatic Step Size Tuning" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "id": "83a14379", 64 | "metadata": {}, 65 | "source": [ 66 | "Recall the objective function we defined in Lecture 9.\n", 67 | "\n", 68 | "$$\n", 69 | "\\begin{equation}\n", 70 | " \\begin{split}\n", 71 | " L_{\\pi}(\\tilde{\\pi}) = V(\\tilde{\\theta}) & = V(\\theta) + \\mathbb{E}_{\\pi_{\\tilde{\\theta}}}[\\sum_{t = 0}^{\\infty} \\gamma^{t} A_{\\pi}(s_{t}, a_{t})]\\\\\n", 72 | " & = V(\\theta) + \\sum_{s} \\mu_{\\tilde{\\pi}}(s) \\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a)\\\\\n", 73 | " \\mu_{\\tilde{\\pi}}(s) & = \\mathbb{E}_{\\tilde{\\pi}}[\\sum_{t = 0}^{\\infty} \\gamma^{t} I(s_{t} = s)]\n", 74 | " \\end{split}\n", 75 | "\\end{equation} \\hspace{1em} (Eq.~1)\\\\\n", 76 | "$$" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "id": "d454d7b9", 82 | "metadata": {}, 83 | "source": [ 84 | "## 2.1. Local Approximation" 85 | ] 86 | }, 87 | { 88 | "cell_type": "markdown", 89 | "id": "5fc1f912", 90 | "metadata": {}, 91 | "source": [ 92 | "I copied over the text from the previous lecture for completeness." 93 | ] 94 | }, 95 | { 96 | "cell_type": "markdown", 97 | "id": "c975292d", 98 | "metadata": {}, 99 | "source": [ 100 | "We can slightly rewrite Eq. 4 so that we have a substitute for $\\mu_{\\tilde{\\pi}}$:\n", 101 | "\n", 102 | "$$\n", 103 | "L_{\\pi}(\\tilde{\\pi}) = V(\\theta) + \\sum_{s} \\mu_{\\pi}(s) \\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a) \\hspace{1em} (Eq.~2)\\\\\n", 104 | "$$\n", 105 | "\n", 106 | "Eq. 5, instead of using the discounted weighted frequency of state $s$ under policy $\\mu_{\\tilde{\\pi}}$, uses $\\mu_{\\pi}$, the current policy's discounted weighted frequency of state $s$" 107 | ] 108 | }, 109 | { 110 | "cell_type": "markdown", 111 | "id": "8d10d428", 112 | "metadata": {}, 113 | "source": [ 114 | "This begs the question: how do Eq. 3 and Eq. 4 fit into our current understanding of policy gradients? Over Lecture 8 and Lecture 9, we have seen a lot of formulas involving value functions.\n", 115 | "\n", 116 | "For now, I'm still not too sure. Let's give it some time." 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "id": "a8cad738", 122 | "metadata": {}, 123 | "source": [ 124 | "## 2.2. Trust Region" 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "id": "a30a8182", 130 | "metadata": {}, 131 | "source": [ 132 | "Disclaimer: I won't cover too much of the theory!\n", 133 | "\n", 134 | "With our new formulation in Eq. 1, we want to ask: is there a bound to the new policy's performance by optimizing on the surrogate objective (the local approximation)?\n", 135 | "\n", 136 | "$$\n", 137 | "\\pi_{new}(a~|~s) = (1 - \\alpha) \\pi_{old}(a~|~s) + \\alpha \\pi'(a~|~s)\n", 138 | "$$\n", 139 | "\n", 140 | "Consider a __mixture policy__ (a blend of 2 policies), it will have a percentage of the old and a percentage of the new policy. For general stochastic policies we have the theorem (Eq. 
3):\n", 141 | "\n", 142 | "$$\n", 143 | "D_{TV}^{max}(\\pi_{1}, \\pi_{2}) = \\underset{s}{max} D_{TV}(\\pi_{1}(\\cdot~|~s), \\pi_{2}(\\cdot~|~s))\\\\\n", 144 | "\\epsilon = \\underset{s}{max}[\\mathbb{E}_{a \\sim \\pi'(a~|~s)}[A_{\\pi}(s, a)]]\\\\\n", 145 | "D_{TV}(p, q)^{2} \\le D_{KL}(p, q)\\\\\n", 146 | "C = \\frac{4 \\epsilon \\gamma}{(1 - \\gamma)^{2}}\\\\\n", 147 | "\\begin{equation}\n", 148 | " \\begin{split}\n", 149 | "V^{\\pi_{new}} & \\ge L_{\\pi_{old}}(\\pi_{new}) - \\frac{4 \\epsilon \\gamma}{(1 - \\gamma)^{2}}(D_{TV}^{max}(\\pi_{old}, \\pi_{new}))^{2}\\\\\n", 150 | " & \\ge L_{\\pi_{old}}(\\pi_{new}) - \\frac{4 \\epsilon \\gamma}{(1 - \\gamma)^{2}}D_{KL}^{max}(\\pi_{old}, \\pi_{new}) = M_{i}(\\pi)\n", 151 | " \\end{split}\n", 152 | "\\end{equation} \\hspace{1em} (Eq.~3)\\\\\n", 153 | "V^{\\pi_{i + 1}} - V^{\\pi_{i}} \\ge M_{i}(\\pi_{i + 1}) - M_{i}(\\pi_{i}) \\hspace{1em} (Eq.~4)\\\\\n", 154 | "$$\n", 155 | "\n", 156 | "From the theorem (Eq. 3), we can derive Eq. 4. Eq. 4 simply says that we can have a monotonically improving general stochastic policy. For $C$, we tend to make this a hyperparameter." 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "id": "7e15c253", 162 | "metadata": {}, 163 | "source": [ 164 | "With the theorem established, we can put it into practice. Let's formulate our objective function: \n", 165 | "\n", 166 | "$$\n", 167 | "\\underset{\\theta}{max} L_{\\pi_{old}}(\\pi_{new}) - CD_{KL}^{max}(\\pi_{old}, \\pi_{new}) \\hspace{1em} (Eq.~4)\\\\\n", 168 | "$$\n", 169 | "\n", 170 | "We can rewrite this to:\n", 171 | "\n", 172 | "$$\n", 173 | "\\underset{\\theta}{max} L_{\\pi_{old}}(\\pi_{new}) \\hspace{1em} (Eq.~5)\\\\\n", 174 | "subject ~ to ~ D_{KL}^{s \\sim \\mu_{\\theta_{old}}}(\\pi_{old}, \\pi_{new}) \\le \\delta\n", 175 | "$$\n", 176 | "\n", 177 | "This formulation leverages not only the previous objective function in local approximation, but also the new lower bound theorem. The constraint serves as a __trust region__ for the KL divergence between the old and new policies.\n", 178 | "\n", 179 | "We make 3 substitutions (note $\\tilde{\\pi} = \\pi_{new} = \\theta_{new}$ and the same goes for the old):\n", 180 | "\n", 181 | "1. Substituting $\\sum_{s} \\mu_{\\theta_{old}}(s)$\n", 182 | "\n", 183 | "$$\n", 184 | "L_{\\pi_{old}} = V(\\theta) + \\sum_{s} \\mu_{\\tilde{\\pi}}(s) \\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a)\\\\\n", 185 | "becomes\\\\\n", 186 | "L_{\\pi_{old}} = V(\\theta) + \\frac{1}{1 - \\gamma} \\mathbb{E}_{s \\sim \\mu_{\\theta_{old}}} [\\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a)] \\hspace{1em} (Eq.~5)\\\\\n", 187 | "$$\n", 188 | "\n", 189 | "This substitution is made because our state space can be continuous and infinite so summing over it would be impossible in practice.\n", 190 | "\n", 191 | "2. Substituting $\\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a)$\n", 192 | "\n", 193 | "$$\n", 194 | "(Eq.~5)\\\\\n", 195 | "becomes\\\\\n", 196 | "L_{\\pi_{old}} = V(\\theta) + \\frac{1}{1 - \\gamma} \\mathbb{E}_{s \\sim \\mu_{\\theta_{old}}} [\\mathbb{E}_{a \\sim q}[\\frac{\\pi_{\\theta}(a~|~s_{n})}{q(a~|~s_{n})}A_{\\theta_{old}}(s_{n}, a)]] \\hspace{1em} (Eq.~6)\\\\\n", 197 | "$$\n", 198 | "\n", 199 | "Again, summing over actions can be a continuous set. So instead, we use importance sampling.\n", 200 | "\n", 201 | "3. 
Substituting $A_{\\theta_{old}}$\n", 202 | "\n", 203 | "$$\n", 204 | "(Eq.~6)\\\\\n", 205 | "becomes\\\\\n", 206 | "\\underset{\\theta}{max} \\mathbb{E}_{s \\sim \\mu_{\\theta_{old}}, a \\sim q} [\\frac{\\pi_{\\theta}(a~|~s)}{q(a~|~s)}Q_{\\theta_{old}}(s, a)] \\hspace{1em} (Eq.~7)\\\\\n", 207 | "subject ~ to ~ \\mathbb{E}_{s \\sim \\mu_{\\theta_{old}}} D_{KL}(\\pi_{old}(\\cdot~|~s), \\pi_{new}(\\cdot~|~s)) \\le \\delta\n", 208 | "$$" 209 | ] 210 | }, 211 | { 212 | "cell_type": "markdown", 213 | "id": "24662685", 214 | "metadata": {}, 215 | "source": [ 216 | "## 2.3. TRPO" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "id": "020e6565", 222 | "metadata": {}, 223 | "source": [ 224 | "for iteration = 1,2, ... do
\n", 225 | "$\\quad$ Run policy for $T$ timesteps or $N$ trajectories
\n", 226 | "$\\quad$ Estimate advantage function at all time steps
\n", 227 | "$\\quad$ Compute policy gradient $g$
\n", 228 | "$\\quad$ Use CG to compute $F^{-1}$g where $F$ is the Fisher information matrix
\n", 229 | "$\\quad$ Do line search on surrogate loss and KL constraint
\n", 230 | "\n", 231 | "_Algorithm 1. Trust Region Policy Optimization (TRPO)._" 232 | ] 233 | }, 234 | { 235 | "cell_type": "markdown", 236 | "id": "61137305", 237 | "metadata": {}, 238 | "source": [ 239 | "Algorithm 1 is just a brief look into TRPO! " 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "id": "0f657f6b", 245 | "metadata": {}, 246 | "source": [ 247 | "# 3. Resource" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "id": "32e04a27", 253 | "metadata": {}, 254 | "source": [ 255 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 256 | "\n", 257 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 258 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 259 | "\n", 260 | "This is a series of 15 lectures provided by Stanford.\n" 261 | ] 262 | }, 263 | { 264 | "cell_type": "code", 265 | "execution_count": null, 266 | "id": "f5c15c5c", 267 | "metadata": {}, 268 | "outputs": [], 269 | "source": [] 270 | } 271 | ], 272 | "metadata": { 273 | "kernelspec": { 274 | "display_name": "Python 3", 275 | "language": "python", 276 | "name": "python3" 277 | }, 278 | "language_info": { 279 | "codemirror_mode": { 280 | "name": "ipython", 281 | "version": 3 282 | }, 283 | "file_extension": ".py", 284 | "mimetype": "text/x-python", 285 | "name": "python", 286 | "nbconvert_exporter": "python", 287 | "pygments_lexer": "ipython3", 288 | "version": "3.8.8" 289 | } 290 | }, 291 | "nbformat": 4, 292 | "nbformat_minor": 5 293 | } 294 | -------------------------------------------------------------------------------- /Lecture 11 - Fast RL I.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 11 - Fast RL I\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 33 | "
" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "id": "a8837110", 39 | "metadata": {}, 40 | "source": [ 41 | "# 1. Introduction" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "id": "0c0b90ce", 47 | "metadata": {}, 48 | "source": [ 49 | "2 Categories:\n", 50 | "* _Computationally efficiency_\n", 51 | " * takes a long time to compute something (an AV needs to calculate something fast as it is moving)\n", 52 | "* _Sample efficiency_\n", 53 | " * sometimes experience/data is hard to gather\n", 54 | " \n", 55 | "How do we evaluate our algorithm?\n", 56 | "* how good is it?\n", 57 | "* does it converge?\n", 58 | "* how quickly does it converge?\n", 59 | "\n", 60 | "We usually evaluate an algorithm based on its performance, but today we will evaluate it based on the amount of data it needs to make good decisions.\n", 61 | "\n", 62 | "The next 3 lectures (including this one) will cover the following:\n", 63 | "* __settings__: (bandits, MDPs, etc)\n", 64 | "* __frameworks__: evaluation criteria for evaluating RL algorithms\n", 65 | "* __approaches__: classes of algorithms for achieving particular evaluation criterias\n", 66 | "\n", 67 | "Specifically, for today's lecture we will cover:\n", 68 | "* setting: multi-armed bandits\n", 69 | "* framework: regret\n", 70 | "* approach: optimism under uncertainty\n", 71 | "* framework: bayesian regret\n", 72 | "* approach: probability matching/thompson sampling" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "id": "3892c2d3", 78 | "metadata": {}, 79 | "source": [ 80 | "# 2. Multi-Armed Bandits" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "id": "33d94b9a", 86 | "metadata": {}, 87 | "source": [ 88 | "* Multi-armed bandits is a tuple of $(\\mathcal{A}, \\mathcal{R})$\n", 89 | "* $\\mathcal{A}$ is a known set of $m$ actions (arms)\n", 90 | "* $R^{a}(r) = \\mathbb{P}[r~|~a]$ is an unknown probability distribution over rewards\n", 91 | "* each step $t$, the agent selects an action $a_{t} \\in \\mathcal{A}$ (pulling an arm)\n", 92 | "* environment produces a reward $r_{t} \\sim \\mathcal{R}^{a_{t}}$\n", 93 | "* Goal: maximize cumulative reward $\\sum_{\\tau = 1}^{t}r_{\\tau}$\n", 94 | "\n", 95 | "$$\n", 96 | "Q(a) = \\mathbb{E}[r~|~a]\\hspace{1em} (Eq.~1)\\\\\n", 97 | "V^{*} = Q(a^{*}) = \\underset{a \\in \\mathcal{A}}{max}Q(a) \\hspace{1em} (Eq.~2)\\\\\n", 98 | "l_{t} = \\mathbb{E}[V^{*} - Q(a_{t})] \\hspace{1em} (Eq.~3)\\\\\n", 99 | "L_{t} = \\mathbb{E}[\\sum_{\\tau = 0}^{t} V^{*} - Q(a_{\\tau})] \\hspace{1em} (Eq.~4)\n", 100 | "$$\n", 101 | "\n", 102 | "Eq. 1 is the expected reward (mean reward) given an action (we ignore states in the multi-armed bandit setting). Eq. 2: Take the action $a$ that yields the largest action-value. Eq. 3: opportunity loss (the difference between the optimal action-value at time step $t$ and the action you took). Eq. 
4 is the total opportunity loss (the sum of regrets across all time steps for an episode).\n", 103 | "\n", 104 | "$$\n", 105 | "\\Delta_{i} = V^{*} - Q(a_{i})\\\\\n", 106 | "\\begin{equation}\n", 107 | " \\begin{split}\n", 108 | " L_{t} & = \\mathbb{E}[\\sum_{\\tau = 1}^{t}V^{*} - Q(a_{\\tau})]\\\\\n", 109 | " & = \\sum_{a \\in \\mathcal{A}} \\mathbb{E}[N_{t}(a)](v^{*} - Q(a))\\\\\n", 110 | " & = \\sum_{a \\in \\mathcal{A}} \\mathbb{E}[N_{t}(a)]\\Delta_{a}\\\\\n", 111 | " \\end{split}\n", 112 | "\\end{equation} \\hspace{1em} (Eq.~5)\n", 113 | "$$\n", 114 | "\n", 115 | "$N_{t}(a)$ is the number of times action $a$ has been picked up to time step $t$.\n", 116 | "\n", 117 | "By maximizing cumulative reward, we minimize total regret." 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "id": "64dfd2d6", 123 | "metadata": {}, 124 | "source": [ 125 | "## 2.1. Greedy Algorithm" 126 | ] 127 | }, 128 | { 129 | "cell_type": "markdown", 130 | "id": "cf977655", 131 | "metadata": {}, 132 | "source": [ 133 | "The simplest approach is the greedy algorithm.\n", 134 | "\n", 135 | "$$\n", 136 | "\\hat{Q}_{t}(a) = \\frac{1}{N_{t}(a)} \\sum_{t = 1}^{T} r_{t} \\mathcal{1}(a_{t} = a) \\hspace{1em} (Eq.~6)\\\\\n", 137 | "a^{*}_{t} = \\underset{a \\in \\mathcal{A}}{argmax} \\hat{Q}_{t}(a) \\hspace{1em} (Eq.~7)\\\\\n", 138 | "$$\n", 139 | "\n", 140 | "A slightly more nuanced version of this is the $\\epsilon$-greedy algorithm. It will select $a_{t} = \\underset{a \\in \\mathcal{A}}{argmax} \\hat{Q}_{t}(a)$ with probability $1 - \\epsilon$ and a random action with probability $\\epsilon$.\n", 141 | "\n", 142 | "The problem with these is that they can get stuck in suboptimal actions forever." 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "id": "466a4566", 148 | "metadata": {}, 149 | "source": [ 150 | "## 2.2. Upper Confidence Bounds" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "id": "b97a2764", 156 | "metadata": {}, 157 | "source": [ 158 | "Types of Regret bounds:\n", 159 | "* __problem independent__: bound on regret is a function of $T$\n", 160 | "* __problem dependent__: bound regret as a function of number of times we pull each arm and gap between reward and optimal action\n", 161 | "\n", 162 | "From past work, we find that the lower bound is sublinear.\n", 163 | "\n", 164 | "$$\n", 165 | "\\underset{t \\rightarrow \\infty}{lim} L_{t} \\ge log(t) \\sum_{a~|~\\Delta_{a} > 0} \\frac{\\Delta_{a}}{D_{KL}(\\mathcal{R}^{a}||\\mathcal{R}^{a*})} \\hspace{1em} (Eq.~8)\\\\\n", 166 | "$$" 167 | ] 168 | }, 169 | { 170 | "cell_type": "markdown", 171 | "id": "25a2ab8c", 172 | "metadata": {}, 173 | "source": [ 174 | "In Upper Confidence Bounds (UCB),\n", 175 | "* we estimate an upper confidence $U_{t}(a)$ for each action value such that $Q(a) \\le U_{t}(a)$\n", 176 | "* depends on number of times $N_{t}(a)$\n", 177 | "* select action to maximize UCB: $a_{t} = \\underset{a \\in \\mathcal{A}}{argmax}[U_{t}(a)]$\n", 178 | "\n", 179 | "We leverage __Hoeffding's Inequality__:\n", 180 | "\n", 181 | "Consider X to be an i.i.d. random variable in $[0, 1]$ from 1 to n. $\\bar{X}_{n}$ to be the sample mean.\n", 182 | "$$\n", 183 | "\\mathcal{P}[\\mathcal{E}[X] > \\bar{X}_{n} + u] \\le exp(-2nu^{2}) \\hspace{1em} (Eq.~9)\\\\\n", 184 | "$$\n", 185 | "\n", 186 | "We then derive the UCB equation for selecting an action at timestep $t$. 
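\n",
    "\n",
    "Before stating Eq. 10 formally, here is a tiny simulation sketch of the rule on a Bernoulli bandit (my own illustration; the reward probabilities and horizon are made up, not from the lecture).\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "true_means = np.array([0.2, 0.5, 0.7])   # unknown Bernoulli reward means (made up)\n",
    "T = 5000\n",
    "counts = np.zeros(len(true_means))       # N_t(a): number of pulls of each arm\n",
    "sums = np.zeros(len(true_means))         # running sum of rewards per arm\n",
    "regret = 0.0\n",
    "\n",
    "for t in range(1, T + 1):\n",
    "    if t <= len(true_means):\n",
    "        a = t - 1                         # pull each arm once so N_t(a) > 0\n",
    "    else:\n",
    "        q_hat = sums / counts\n",
    "        ucb = q_hat + np.sqrt(2.0 * np.log(t) / counts)   # upper confidence bound U_t(a)\n",
    "        a = int(np.argmax(ucb))\n",
    "    r = rng.binomial(1, true_means[a])    # pull the arm, observe a Bernoulli reward\n",
    "    counts[a] += 1\n",
    "    sums[a] += r\n",
    "    regret += true_means.max() - true_means[a]\n",
    "\n",
    "print('pulls per arm:', counts, 'total regret:', round(regret, 1))\n",
    "```\n",
    "\n",
    "The square-root exploration bonus shrinks as $N_{t}(a)$ grows, which is what drives the logarithmic regret bound quoted below.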
\n", 187 | "\n", 188 | "$$\n", 189 | "\\begin{equation}\n", 190 | " \\begin{split}\n", 191 | " U_{t}(a) = \\hat{Q}(a) + \\sqrt{\\frac{2 log(t)}{N_{t}(a)}}\\\\\n", 192 | " a_{t} = \\underset{a \\in \\mathcal{A}}{argmax}[U_{t}(a)]\n", 193 | " \\end{split}\n", 194 | "\\end{equation} \\hspace{1em} (Eq.~10)\\\\\n", 195 | "$$\n", 196 | "\n", 197 | "UCB achieves logarithmic asymptotic total regret:\n", 198 | "\n", 199 | "$$\n", 200 | "\\underset{t \\rightarrow \\infty}{lim} L_{t} \\le 8 log(t) \\sum_{a~|~\\Delta_{a} > 0} \\Delta_{a} \\hspace{1em} (Eq.~11)\\\\\n", 201 | "$$\n", 202 | "\n", 203 | "For an example of how UCB works refer to this: https://www.youtube.com/watch?v=FgmMK6RPU1c&t=507s&ab_channel=ritvikmath." 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "id": "0f657f6b", 209 | "metadata": {}, 210 | "source": [ 211 | "# 3. Resource" 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "id": "32e04a27", 217 | "metadata": {}, 218 | "source": [ 219 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 220 | "\n", 221 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 222 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 223 | "\n", 224 | "This is a series of 15 lectures provided by Stanford.\n" 225 | ] 226 | }, 227 | { 228 | "cell_type": "code", 229 | "execution_count": null, 230 | "id": "f5c15c5c", 231 | "metadata": {}, 232 | "outputs": [], 233 | "source": [] 234 | } 235 | ], 236 | "metadata": { 237 | "kernelspec": { 238 | "display_name": "Python 3", 239 | "language": "python", 240 | "name": "python3" 241 | }, 242 | "language_info": { 243 | "codemirror_mode": { 244 | "name": "ipython", 245 | "version": 3 246 | }, 247 | "file_extension": ".py", 248 | "mimetype": "text/x-python", 249 | "name": "python", 250 | "nbconvert_exporter": "python", 251 | "pygments_lexer": "ipython3", 252 | "version": "3.8.8" 253 | } 254 | }, 255 | "nbformat": 4, 256 | "nbformat_minor": 5 257 | } 258 | -------------------------------------------------------------------------------- /Lecture 12 - Fast RL II.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 12 - Fast RL II\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 31 | "
" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "id": "a8837110", 37 | "metadata": {}, 38 | "source": [ 39 | "# 1. Introduction" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "id": "679241e8", 45 | "metadata": {}, 46 | "source": [ 47 | "We have been covering algorithms that fall under the concept of __optimism under uncertainty__. \n", 48 | "\n", 49 | "We have looked at the following approaches:\n", 50 | "* __Greedy__: linear total regret\n", 51 | "* __Constant $\\epsilon$-greedy__: linear total regret\n", 52 | "* __Decaying $\\epsilon$-greedy__: sublinear regret\n", 53 | "* __UCB__: sublinear regret" 54 | ] 55 | }, 56 | { 57 | "cell_type": "markdown", 58 | "id": "80d9035e", 59 | "metadata": {}, 60 | "source": [ 61 | "# 2. Bayesian Bandits" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "id": "42491d95", 67 | "metadata": {}, 68 | "source": [ 69 | "Before in UCB, we made no assumptions about the unknown reward distribution $R$ except for the bounds on the rewards.\n", 70 | "\n", 71 | "Another approach is called __bayesian bandits__ and exploits prior knowledge of rewards $p[R]$. It computes a posterior distribution of rewards $p[R~|~h_{t}]$ based on a history of action reward pairs.\n", 72 | "\n", 73 | "It leverages Bayes' rule:\n", 74 | "\n", 75 | "$$\n", 76 | "p(\\phi_{i}~|~r_{i1}) = \\frac{p(r_{i1}~|~\\phi_{i})p(\\phi_{i})}{p(r_{i1})} \\hspace{1em} (Eq.~1)\\\\\n", 77 | "$$\n", 78 | "\n", 79 | "If $p(\\phi_{i}~|~r_{i1})$ and $p(\\phi_{i})$ are the same, then we can call the prior $p(\\phi_{i})$ and model $p(r_{i1}~|~\\phi_{i})$ a __conjugate__. Why is this useful? It means we can do our posterior updating analytically.\n", 80 | "\n", 81 | "Framework:\n", 82 | "* __frequentist regret__ : (the framework we have been using before) assumes a true unknown set of parameters\n", 83 | "\n", 84 | "$$\n", 85 | "Regret(\\mathcal{A}, T; \\theta) = \\sum_{t = 1}^{t} \\mathbb{E}[Q(a^{*}) - Q(a_{t})] \\hspace{1em} (Eq.~2)\\\\\n", 86 | "$$\n", 87 | "\n", 88 | "* __bayesian regret__ : assumes there's a prior over parameters\n", 89 | "\n", 90 | "$$\n", 91 | "BayesRegret(\\mathcal{A}, T; \\theta) = \\mathbb{E}_{\\theta \\sim p_{\\theta}} [\\sum_{t = 1}^{t} \\mathbb{E}[Q(a^{*}) - Q(a_{t})~|~\\theta]] \\hspace{1em} (Eq.~3)\\\\\n", 92 | "$$\n", 93 | "\n", 94 | "We tackle this framework with __probability matching__." 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "id": "26489369", 100 | "metadata": {}, 101 | "source": [ 102 | "# 3. Probability Matching" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "id": "d9c73893", 108 | "metadata": {}, 109 | "source": [ 110 | "We assume we have a parametric distribution over rewards for each arm. \n", 111 | "\n", 112 | "__Probability Matching__ selects the best action (optimal action) based on a history.\n", 113 | "\n", 114 | "$$\n", 115 | "\\pi(a~|~h_{t}) = \\mathbb{P}[Q(a) > Q(a'), \\forall a' \\ne a ~|~ h_{t}] \\hspace{1em} (Eq.~4)\\\\\n", 116 | "$$\n", 117 | "\n", 118 | "Uncertain actions have higher probability of being max." 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "id": "a45caa0c", 124 | "metadata": {}, 125 | "source": [ 126 | "Initialize prior over each arm $a, p(R_{a})$
\n", 127 | "loop
\n", 128 | "$\\quad$ For each arm $a$ _sample_ a reward distribution $R_{a}$ from posterior
\n", 129 | "$\\quad$ Compute action-value function $Q(a) = \\mathbb{E}[R_{a}]$
\n", 130 | "$\\quad$ $a_{t} = \\underset{a \\in \\mathcal{A}}{argmax}Q(a)$
\n", 131 | "$\\quad$ Observe reward $r$
\n", 132 | "$\\quad$ Update posterior $p(R_{a}~|~r)$ using Bayes law
\n", 133 | "\n", 134 | "_Algorithm 1. Thompson Sampling._" 135 | ] 136 | }, 137 | { 138 | "cell_type": "markdown", 139 | "id": "4a16e1bb", 140 | "metadata": {}, 141 | "source": [ 142 | "I found this resource to be really helpful for understanding thompson sampling: https://www.youtube.com/watch?v=Zgwfw3bzSmQ.\n", 143 | "\n", 144 | "_Thompson sampling has the same regret bounds as UCB._" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "id": "02c7c09a", 150 | "metadata": {}, 151 | "source": [ 152 | "# 4. Framework: Probably Approximately Correct" 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "id": "545ffd8d", 158 | "metadata": {}, 159 | "source": [ 160 | " Because we evaluate based on total regret, we don't know if regret is caused by a lot of little mistakes or a few large ones.\n", 161 | " \n", 162 | " We can tackle this problem with the __Probably Approximately Correct (PAC)__ framework.\n", 163 | " \n", 164 | " $$\n", 165 | " Q(a) \\ge Q(a^{*}) - \\epsilon \\hspace{1em} (Eq.~5)\\\\\n", 166 | " $$\n", 167 | " \n", 168 | " Basically it will operate much like before (optimism or Thompson sampling) however a small $\\epsilon$ is added to give room for other actions to be selected.\n", 169 | " \n", 170 | "From what I'm understanding, this framework can be applied to optimism under uncertainty and probability matching/thompson sampling." 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "id": "e9ae85ac", 176 | "metadata": {}, 177 | "source": [ 178 | "# 5. Fast RLs in MDPs" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "id": "8bad3cc8", 184 | "metadata": {}, 185 | "source": [ 186 | "For the MDP setting (we've been covering the multi-armed bandit setting), we can use the same frameworks. This section focuses on the PAC framework.\n", 187 | "\n", 188 | "Not too sure, but from what I understand, I would think UCB and Thompson sampling are only applicable to the multi-armed bandit setting. In the (tabular) MDP setting, they carry the same ideas but aren't exactly the same.\n", 189 | "\n", 190 | "The lecture begins with __optimistic initialization__. In the MDP setting, we can use any of the model-free algorithms (e.g. SARSA, MC, Q-learning) we've learned to estimate $Q(s, a)$. \n", 191 | "\n", 192 | "We can initialize our q-values optimistically like setting them to $\\frac{r_{max}}{1 - \\gamma}$ or initializing $V(s) = \\frac{r_{max}}{(1 - \\gamma) \\Pi_{i=1}^{T} \\alpha_{i}}$. We consider $r_{max}$ to be the state-action pair that maximizes the reward. $\\gamma$ is the discount factor. $\\alpha_{i}$ is the learning rate at the $i$-th timestep which goes up till $T$, the number of samples to learn near optimal q-values.\n", 193 | "\n", 194 | "Optimistic initialization is one way to make RL faster in the MDP setting.\n", 195 | "Other approaches include:\n", 196 | "* be very optimistic till confident empirical estimates close to true parameters\n", 197 | "* be optimistic given information you have\n", 198 | " * compute confidence sets on dynamics/reward models\n", 199 | " * add reward bonuses" 200 | ] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "id": "7eda8aa3", 205 | "metadata": {}, 206 | "source": [ 207 | "Given $\\epsilon, \\delta, m$
\n", 208 | "$\\beta = \\frac{1}{1 - \\gamma} \\sqrt{0.5 ln(2|S||A|\\frac{m}{\\delta})}$
\n", 209 | "$n_{sas}(s, a, s') = 0; s \\in S, a \\in A, s' \\in S$
\n", 210 | "$rc(s, a) = 0, n_{sa}(s, a) = 0, \\tilde{Q}(s, a) = \\frac{1}{1 - \\gamma} \\forall s \\in S, a \\in A$
\n", 211 | "$t = 0; s_{t} = s_{init}$
\n", 212 | "loop
\n", 213 | "$\\quad$ $a_{t} = \\underset{a \\in A}{argmax} Q(s_{t}, a)$
\n", 214 | "$\\quad$ Observe reward $r_{t}$ and state $s_{t + 1}$
\n", 215 | "$\\quad$ $n_{sa}(s_{t}, a_{t}) += 1$
\n", 216 | "$\\quad$ $n_{sas}(s_{t}, a_{t}, s_{t + 1}) += 1$
\n", 217 | "$\\quad$ $rc(s_{t}, a_{t}) = \\frac{rc(s_{t}, a_{t})n_{sa}(s_{t}, a_{t}) + r_{t}}{n_{sa}(s_{t}, a_{t}) + 1}$
\n", 218 | "$\\quad$ $\\hat{R}(s, a) = \\frac{rc(s_{t}, a_{t})}{n(s_{t}, a_{t})}$
\n", 219 | "$\\quad$ $\\hat{T}(s'~|~s, a) = \\frac{n_{sas}(s_{t}, a_{t}, s_{t + 1})}{n_{sa}(s_{t}, a_{t})} \\forall s' \\in S$
\n", 220 | "$\\quad$ while not converged do
\n", 221 | "$\\quad\\quad$ $\\hat{Q}(s, a) = \\hat{R}(s, a) + \\gamma \\sum_{s'}\\hat{T}(s'~|~s, a)\\underset{a'}{max}\\tilde{Q}(s', a') + \\underbrace{\\frac{\\beta}{\\sqrt{n_{sa}(s, a)}}}_{reward ~ bonus} \\forall s \\in S, a \\in A$\n", 222 | "\n", 223 | "_Algorithm 2. Model-Based Interval Estimation with Exploration Bonus (MBIE-EB)._" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "id": "95799536", 229 | "metadata": {}, 230 | "source": [ 231 | "Algorithm 2 (MBIE-EB) uses value iteration for model-based policy control (but it estimates the reward and dynamics models). It also implements an exploration bonus (or reward bonus)." 232 | ] 233 | }, 234 | { 235 | "cell_type": "markdown", 236 | "id": "0f657f6b", 237 | "metadata": {}, 238 | "source": [ 239 | "# 6. Resource" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "id": "32e04a27", 245 | "metadata": {}, 246 | "source": [ 247 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 248 | "\n", 249 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 250 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 251 | "\n", 252 | "This is a series of 15 lectures provided by Stanford.\n" 253 | ] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": null, 258 | "id": "f5c15c5c", 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [] 262 | } 263 | ], 264 | "metadata": { 265 | "kernelspec": { 266 | "display_name": "Python 3", 267 | "language": "python", 268 | "name": "python3" 269 | }, 270 | "language_info": { 271 | "codemirror_mode": { 272 | "name": "ipython", 273 | "version": 3 274 | }, 275 | "file_extension": ".py", 276 | "mimetype": "text/x-python", 277 | "name": "python", 278 | "nbconvert_exporter": "python", 279 | "pygments_lexer": "ipython3", 280 | "version": "3.8.8" 281 | } 282 | }, 283 | "nbformat": 4, 284 | "nbformat_minor": 5 285 | } 286 | -------------------------------------------------------------------------------- /Lecture 13 - Fast RL III.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 13 - Fast RL III\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 30 | "
" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "id": "a8837110", 36 | "metadata": {}, 37 | "source": [ 38 | "# 1. Introduction" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "id": "24456ebc", 44 | "metadata": {}, 45 | "source": [ 46 | "We defined (PAC) as a framework last lecture. But, we can also classify algorithms as PAC." 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "id": "5a88cd62", 52 | "metadata": {}, 53 | "source": [ 54 | "# 2. PAC Criteria" 55 | ] 56 | }, 57 | { 58 | "cell_type": "markdown", 59 | "id": "3f1ff285", 60 | "metadata": {}, 61 | "source": [ 62 | "For an algorithm to be PAC, it must match these 3 criteria:\n", 63 | "* optimism\n", 64 | " * must have optimism (assume unexplored states yield good reward)\n", 65 | "* accuracy\n", 66 | " * must balance between being optimal and being optimistic\n", 67 | " * $V^{\\pi_{t}}(s_{t}) - V^{\\pi_{t}}_{\\mu}(s_{t}) \\le \\epsilon$ where $V^{\\pi_{t}}_{\\mu}(s_{t})$ is a hybrid value function between the optimal and optimistic policy\n", 68 | "* bounded learning complexity (bounded by $\\epsilon, \\delta$)\n", 69 | " * total \\# of Q updates is updated\n", 70 | " * \\# of times visited unknown state-action pair\n", 71 | " \n", 72 | "I won't cover the following section in the lecture: a proof of how MBIE-EB is PAC." 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "id": "ec8c1afb", 78 | "metadata": {}, 79 | "source": [ 80 | "# 3. Bayesian Model-Based RL" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "id": "5eab8947", 86 | "metadata": {}, 87 | "source": [ 88 | "We know model-based RL is where the agent has a _model_ of the real world (transition model and reward model). For Bayesian Model-Based RL, we maintain a posterior distribution over MDP models and estimate transition and rewards. Our posterior guides our exploration." 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "id": "45b52b59", 94 | "metadata": {}, 95 | "source": [ 96 | "Initialize prior over the dynamics and reward models for each $(s, a)$, $p(R_{sa}), p(T(s'~|~s, a))$
\n", 97 | "Initialize state $s_{0}$.
\n", 98 | "loop
\n", 99 | "$\\quad$ Sample a MDP M: for each $(s, a)$ pair, sample a dynamics model $T(s'~|~s, a)$ and reward model $R(s, a)$
\n", 100 | "$\\quad$ Compute $Q^{*}_{M}$, optimal value for MDP $M$
\n", 101 | "$\\quad$ $a_{t} = \\underset{a \\in A}{argmax} Q^{*}_{M}(s_{t}, a)$
\n", 102 | "$\\quad$ Observe reward $r_{t}$ and next state $s_{t + 1}$
\n", 103 | "$\\quad$ Update posterior $p(R_{a_{t}, s_{t}}~|~r_{t})$, $p(T(s'~|~s, a)~|~s_{t + 1})$
\n", 104 | "$\\quad$ t += 1
\n", 105 | "
\n", 106 | "\n", 107 | "_Algorithm 1. Thompson Sampling for MDPs._" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "id": "62bc8c9e", 113 | "metadata": {}, 114 | "source": [ 115 | "# 4. Generalization and Exploration" 116 | ] 117 | }, 118 | { 119 | "cell_type": "markdown", 120 | "id": "11913c8a", 121 | "metadata": {}, 122 | "source": [ 123 | "How do we do everything we just did for large state and action spaces? In large, how do we generalize or scale up?\n", 124 | "\n", 125 | "With VFA (value function approximation) for model-free settings, we can add a reward bonus term for updating our weights. \n", 126 | "\n", 127 | "Some people in class suggested embeddings or some type of way to model how similar states or actions are (maybe clustering them or reducing this high dimensionality?)." 128 | ] 129 | }, 130 | { 131 | "cell_type": "markdown", 132 | "id": "0f657f6b", 133 | "metadata": {}, 134 | "source": [ 135 | "# 5. Resource" 136 | ] 137 | }, 138 | { 139 | "cell_type": "markdown", 140 | "id": "32e04a27", 141 | "metadata": {}, 142 | "source": [ 143 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 144 | "\n", 145 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 146 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 147 | "\n", 148 | "This is a series of 15 lectures provided by Stanford.\n" 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": null, 154 | "id": "f5c15c5c", 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [] 158 | } 159 | ], 160 | "metadata": { 161 | "kernelspec": { 162 | "display_name": "Python 3", 163 | "language": "python", 164 | "name": "python3" 165 | }, 166 | "language_info": { 167 | "codemirror_mode": { 168 | "name": "ipython", 169 | "version": 3 170 | }, 171 | "file_extension": ".py", 172 | "mimetype": "text/x-python", 173 | "name": "python", 174 | "nbconvert_exporter": "python", 175 | "pygments_lexer": "ipython3", 176 | "version": "3.8.8" 177 | } 178 | }, 179 | "nbformat": 4, 180 | "nbformat_minor": 5 181 | } 182 | -------------------------------------------------------------------------------- /Lecture 3 - Model-Free Policy Evaluation.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Model-Free Policy Evaluation\n", 8 | "\n", 9 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 10 | "\n", 11 | "---" 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "
\n", 19 | "Table of Contents:
\n", 20 | " \n", 21 | "\n", 28 | "
" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "metadata": {}, 34 | "source": [ 35 | "# 1. Introduction" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "Last time we covered policy evaluation & control with a true model of the world (environment). This lecture will cover policy evaluation (no control!) without known dynamics & reward models. Next time, we will cover control for this case." 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "Here are our definitions:\n", 50 | "\n", 51 | "$$\n", 52 | "G_{t} = r_{t} + \\gamma r_{t + 1} + ... \\\\\n", 53 | "V^{\\pi}(s) = \\mathbb{E}_{\\pi}[G_{t}~|~s_{t} = s] = \\mathbb{E}_{\\pi}[r_{t} + \\gamma r_{t + 1} + ...~|~s_{t} = s] \\\\\n", 54 | "Q^{\\pi}(s, a) = \\mathbb{E}_{\\pi}[G_{t}~|~s_{t} = s, a_{t} = a] = \\mathbb{E}_{\\pi}[r_{t} + \\gamma r_{t + 1} + ...~|~s_{t} = s, a_{t} = a]\n", 55 | "$$\n", 56 | "\n", 57 | "Remember the dynamic programming approach to evaluating a policy for a finite MDP." 58 | ] 59 | }, 60 | { 61 | "attachments": { 62 | "dp_mdp.PNG": { 63 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjUAAACdCAYAAABMxYkyAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAEnQAABJ0Ad5mH3gAAH7ASURBVHhe7V0HmBTF1pWMIklQkaSiqChGeKAiIMkEGEDBBIigz4AimCPm8KuImJ45+1TM+AyoqBgRAyKoYEIQFURyEFi9f59bfWbuFj2zswHYhTrfnu2uqlu3QndX3anQvdHff/8tgYGBgYGli//8808ik2RLA8tCHpO4atWqRP/Asslg1AQGBgaWQlojwTJJtjSwLOQxcP1nMGoCAwMDSyGtkVAWDIWylNfA9ZfBqCkEwzBlYGBgYGYGoyZwXTMYNYGB6wFDZxIYGBgYjJr1kuzgQieXO1lXZbHemOeylu/AwMDAkmYwatZDhk6u8GRdlcV6s3n3wwIDAwM3JBbbqGEnUFIN6rpct4K0SfqxXL5/cZiUTnG4NjuzNZnW2iyHJdK1aSed22tV2OuWqVyZ/EuSayONwMDAwNLCUm/UrM1GmYZGUqeVyb+oLEl9a7OO1kRavH+Kq7s4Omy8TOekf+2Yri9LmSQdBfkn6SuIvnxR9QQGBgaWVRbLqElqNNd0I1pQR1FS9DuuNUWmUZT0bBzW+5qu/4JYUNpJefTdRWWSXt8f9UW3H0435TJdD+rw/QHfL0l3QbRxCopnw33ZbDoA39+/l2xYYGBgYFlgqTNq/IbV0g9nHMtMHVEmZpKHf0G6/PBc4iTRL2Ou9OVzTbtgubx8XLlypXaCQLK8I8sAMG8EdPhyzAfdjFNUWh3QzfN8/nmrJC86gn443dloZZh/vxwMB6wf82RJWZ82rCBZ/3pZIk3mD6Qu5i2/nnR4UlhgYGBgaWeJGTUl0QhaoBOETsCmYRtoS/gTlE2SI6084OuFO1NatpO3eSLghoyvw7qZR5+UtbQ6LJkWzwlrQPiEPJE5PXSI6XPgySeflMsuu0zP/TwnEZg7d66ceuqpMnHiRHXbNEg/nvW3ctY/iVaG9Ux/G77KXDsAbivLuCn5BDeOPuDHvCSR8a3b+mejr8MS14fXi6C/9Vu5cvV7iGWzZWRamdNbXU9gYGBgaWGx19SABTWEuRD47bff5IorrpDXXntN3YCvO1OjCnzzzTfy8ccfy19//aVuhiV18gA63ffff19++eUXddtw29D7/sALL7wgd9xxh+YLgP/nn38ukyZNyidrdeCcebHlsqSsH8fqISFPwOi4/vrrtUyAlbFxAJT54osvloULF6rbhv/99z9RehidQWfp5N95Z5xstNFG8vxzL6gbebF5TiKBdKpWrZrKVyaDi/Gs24bTLxOtHOsMYDyM0KQQlfHLL76Q+X/Oiz2iMkXhiEdafST8iRdffFG6d+8uBx54oDz22GOxb2aDL0lfEpNks+tI1/WoUc/KnXf+R/7O4z35t9xzz31RXkerG0AcWz6/zDYd6x8YGBhYFlhMo8YNcf/zDxpDHB3Tw99JcVYnGk7gnXfe0c6zZ8+e6gYQbhv1pMaXaNWqlcb/7rvv1G3DANtAA0899ZTKn3POOepmWDYyr507d9a48AMWLFggFStWlDp16sjixYvVz6aXlP8kUj5XEuhgkZ8xY8aom7qYHvONY+XKlWXvvfdW9+ppYuQJho3TvWzpX1Jtk+py1llD1Y1f/Da/2Ui0bt1aSdh8+XGYD3sOUt6nH4+EPP15Djz7zDNSc9PqUrVSZa2vMwYNUn8AciT1ktBDXH755Rq3Vat/SZcuXWToUFc3gB+PcW2+QOtn5aw85ZL0pJmu51b/2lvKl6+UunYrVuRJxQqVpe1+7dUNQE9SHkiEE5SzzJ6XwMDAwHXLYhg1zpiJmkk9qjFTBMOGjSyAEQR0Fscff7y6AcoBbEzZuOLIMGDYsGHStWtX+f3339WNsBUrVsjs2bNTcUlgwoQJagg9/vjj6qa+bGReDz30UKlevbqeA8uXL9d8Dxw4MN8oQRKRl0zMJJ/JHwRuuOEGrbvbbrtN3S6OuwaUAU477TSV4+jU6iMnUbw8xNdgOemkU6RSxSpR+VaoG7/+mR/WF/Phk3X16aefapowIgGGgX4cn8wX5ekmfTnrb8+BJ554QvNx8AEHyqfjx8uVw5xxctxxx2k4wDgAdZPAV199pXFGjrxV3YQvS9o8gEllYLjvR1k/jq2LKFoK7dt3kNq16+r1A2Dc1Ky5mRx8cDfnEcGP75P5ZH6Y52xxAgMDA0sLC2XUuIaOHSX8cIxbS4O/tVHEYszcDBs0lgCNmj59+qgbQBjSBSALsNFlI8sjwTBg3Lhx0rBhQ52WAmwcC/qThCtz/vwCMGpq1Kih5wDzCEDGxmMYjvacYTwn6M6VwFtvvaV117dvX3XDn3o4HUfjYsQtI9T9d
15+PU4edefyOGfOXJUfOjQeyYo6S4RTL+rMr0fA6oRRCeyxxx6y66676jlAHThmItPg0b9GoJXNRADHLbbYQtru11bdxB23365lfC+69wCkYfMAUD/w8MMPq/yMGTPUbUF5wuohCesH3TxasryUs0jLp/07dOgkdTbb3Bg1eWrkHHTQIc4jgtXvk+kkpZWp/gMDAwNLE4swUoOGbWVksERH0/b98utf8kd6iYJgOYbbZVI8o6YgcKSBOgDqBT788EPV+dlnn6mbYGdLwG11+GAa1AujpmbNmnoO0B9AnrLpIpj3TECYT+YBtB3Qzz//rOWE8UBgbQXItTHbbtNEGjVsrOdAXtTpqYzqc9cJ6lascHl/4IGHVOfHH3+ibhg7OpITp5+tjMwnRrCAW265RXVNmzZN3cx7Eq1+S/qn85qWzUYARgvSf+G559W9Mr7+CIc/RrAAXBPqZX6YPvDMM8+o/OzZc9QN2DpOAvObCcwn08M5y8t0k+DyldZLo4ZGKY2agw/uqm6AaWVitvSS5AMDAwNLE4tk1OgUU9zQvf7ml3LAIadI8z2Okr1a95WTT/8/+XHmMg1bEf1izEt1gpmNGzakSUbN/Pnz5dJLL5UffvhBz8866yw55phj5JFHHoklXGMLYFTm0Ucf1XN0pphWGjRokOo8++yz5c4779TFvVj/AsyZM0f9uAYHHQrwxhtvyLHHHqvrQLBWBYYRwA4HOOyww/IZNcDzzz8vL730Uuxy7htvvFGnhEaMGKEd+7333it33313Sifx+uuv69TZPvvsk5pCAlA37OyYvq031h2OWM+z8cYb64JrIKryyEBxP9vvu+9BrYd3xo5T9z8rI10rIh0wbCJdND5RBegMgV69jpZatTaTRYuWqBv6EA45dP7AsmXLdDEw8o11OgMGDJB3331XwwDmj6NEd911l7ptmXwWBYiHtFg3JPXhOiP9r76cpO5/8tLGwO67755a82PzYePjHNcW9wX0XHbZMLn99jtlypRvNRyAMTFs2BXSpk07HR154on/xiEOf/zxh9YV7mXgnnvu0R1lNLBtuiTxyiuvRNejl+bzqKOOSu0owxonIpNRk22khmmCvKZ4zvC8tGnTRqdn+/Xrp88mkJTHwMDAwNLCQhk17t0eruMDbr/rRam7VTvpdMgF0u3I/5M99zlbNq17gDTb40iZMs3trMEPSRg0Ll5yg8iOL2lNzddff61+PXr0kJYtW2oju+WWW6ofGl4AOgEulgXQ2e6555668wZ+devW1emiRo0apaYOPvjgAw27/fbb1Q3AWIIfOmksAG3evLm6YaAAyC8Ao8ZOPwHbbrttvimWI488UuPWqbOZcpdddpGKFSuo37XXXhtLie74Ypow2HDOOrD15dcd6o11B7Rt21bj0mBi5wbDBv49ehylbkn3lSlgytC9wwXXyPk1abK9tGjxL+eIAKMGIzUwgIBff/s1kmkitTerLYMHnyn9+/fXdHCdCOQTwE6rcuXKycknn6xugPcDSVlMl0EeC67BJUuXKPV8ydKIy/T6Wn9/xxtJwJhA+j/9+KO67dZudN5bb711Pnmco26RLwDH/fffP3X9Nt98i+je2iS6X9zOp7lz50f3ZX0NO+bo4+SAAw5ydd4zrvMIn0z4RP0efOhBadeunZ536tw5XxokDTRg+PDhKtuhQwc16lHncNPQIDp06KxGDI1SjLjlatTwPvr999/0Xq23VT0544wz1KBBWri3CMgjLvMYGBgYWFpYCKMGv+TTa2i++HK21KzbVgafP0omThP5z2O/S7/TX5TWHa6QKjU7S7cjzpF4E4bGz2TU2AY1yajBKEqVKlXUHyMoBKZZ4MdFwUDv3r119GTJEjeyAIwePVrlMFJAIF3go48+0jCMnhCYpuL6G6JZs2ZSqVIl7UAJf6EwgF/86CABlMsHRoiQXvv26d0o2BoOP4zeENiaDj+EAX6dkbY+gfPPP1/jcYsxs9C37wnqzxEXYtRTz8hFF14kn4wfH/uI5MXTGUsWL5NNNq4mvXsdrW5AjZo8lyYAI7JWzVp6TqDuaTQiX7y2AIxRW3bmnyQwModRn04dO6p8x6gz79ypc+TuFLFzdH6AdIyOHTp01F1oe+21l46IAbZ+WC/A4MGDpXz5CjIzzlueyRcMDOSNU2WMy04bR+bvif+6xcZTv3XTaMS222yno1pcUA28N84ZzWcNdrujsMh4s9qbqR/uX0wZEn5+rRsjXxyZIerVq6cGur+mpm7dLaL8Or9sRg3KY48E6hPGuQUMTObVxglGTWBgYGljVqPGNlr/YNop+iWPTg245PIHZKttesj1t34ur44TGfnQbzJgyBhpe/AtsvWOA6TBNgfLJ5/NVFm3bgMGkTumdEZukB1fklHz/fffqx9GRiyuueYa9Z88eXLsIzrK4Rs1GGGBHAwYAmkDMF4QhmkAgMPvPrgexBpGR2D6yRupwSgNjZokbLnlVlK5cpXU9BeAzg2dE4ARB0xRABiZslvb2ZFYsv5YHu7ugXFDcNrnzjvuiH0cYJRVrbqxdD2kq4Zfe/U16v9PfH1/m/W7VChXQc447Qx1A+gsYdgQO+2wk2xceWOZ+s3U2CcNTKkwj8QOO+ygBghhy0JCHtOCP/7wg3w/7bsUv5uKI/wifv9jZOz+EPN7NXxZb9CB+8kaIsAZZwyWihUqycyZ6XcSETCc6tevv9poD/XQDWB6E/X1xRdpI+OZZ55Tv1dffV3dSJb1dPKAkzXsr2UrZPqP0/V8q+g+YD0D0M3riKMlnw0feA0BdM2c6Z4xAEZe3brp6adsRo0tF0jgHm3YsHFkmLp6suAaMObV1xEYGBi4rpm7URMZNMq4LR546g1Sf9ve0u+0Z+Tsy8fLoIs+lF4Dn5c2Bw6XHXYbJHW36iwvvOQWmOovfOiIG8OUTq/hzmbU+FNN2L4Nf/z6JWDU1KpVK59R8/TTT6sc1tsALBPgGzV58dtziW++/lr+9/LLclhkAEDOGkYwampVz27UMB2gVy83rfTOO+n1JvgFjHUw4L777qvxYeRgig3GGV7sRvh15xPAi/+QxsEHH6xuYLfdd1NjwuKmm26U8uXLpwyB0S+8qPG+mTxF3cAvP/8i5TYqJ2edcZbziDvqf1BFcbFuuWmExgO3b7K9vr+Fox2sZ14voEWLllFedoxdrn4sESffS/IKiZSOmDbtCy7A9FNF+TEyLADIErhmjRs3VkMMQBjvS1/XQw+5tUmffppeeI5t7/CbPdvVJ4yKlfFi61FPjtKwt998W2b/OlvPL47yosjDc+WMfZDpshw8J+CeMmWKPPvss3LAAQeoLjvag+mpJKMmaaEw9PPcpnH99TeqXrBZs13kyiuvToUzDvMVGBgYWNpY6OknjnbfPOJZqVH3QGl74PVySO9HpOvRT0jH7nfLHvteJlvvcII03q6bfPKZ+7UXtYFKZ9zkbxBtw801LklGzZAhQ9QNeQALLOGPRp5Imn4aNcp1Kly8yoYZ4PQTjRoAO4KGDB4itWvWki0231xa7rWXNKzfQOUmmGma
w7sfuppRY6efLG68ya2JuPnmW2Ifh0WLFsmmm24qhx9+uL5FGSMteDMwFjjDGOM0GMpcUEcCwEiCsbLddtupmy8X/OKLL9RNtIjKNKD/ibHLYafI8Lng3HNjl8ic3+ZIpQqV5d8n/dt5oNqjZNSoMYbN2DFj5YhDe0iVSm6KsFu3/O9EYd6AvfZqGXWUO8eudDiJa0NgemjhvPkpLvhznjvOWyDz5y1UzvtzQVSHi1PxqMPqjG8XXdTr6oILbNOjck2bNlVDErD3h9XDNB544AHV8/nn6Trt29etJfrll1/VjfscC7CBxx95XMNef+V1+T02akaOiBeCY/1SZNRAt3994cc0ccTnJqpVq6Zrw1q0aKEjS1gU/mO8Rgjo2HH1kZpateqsNlLD9PKlaUaORo9+JYrTVY1A5DfpZZipeIGBgYGliIUwatxCYRD4YfoSadrscKm3dS/Zde+LZK/9rpRd/3WBbL/LSVJz887S58QrZXncb6BjAfWXPo5GJ8jGm9uv7cvQfKOG4EiNnX5KMmr8kRo0yASNGruepUuHLlI56swfeuAhmfGT+1V/y003q9z4D9MjNTBq8GZaC4y0YMQFYDoYmUHcIw5PdwycmgF23nnnfIswffh1lYkEOmiM/GBaAh1gvxP6qT93+6yK0q5fr54Mv/EmdROHdeuuL6Ujli5eJtU2ribHHRMbmChOpAJGjZ06IdApYqQAZcXLDtXPy1uDBo2kdWv3JmPA5t/KjRw5Uo7s0VOO7tVLeh95VP5jr966K+uoo3rLkUf2iu6V49UovP/++zUudbHj5Zbnd991o4CPPOJ2x3GqCaNV8D/99NPVTX8A9yX18R6lUWNHam67zb3rZsyYt9QNY2JFfPOfefqZGoaRr1kzZun5LTfFxq0aNU63TYfnAO4VLDDHgncYvFyvxPVT06e7exSAUYMFzDRqsGC4Zs3aGY0apuO4+jXFGqyOHTtpOrymiLd63MDAwMDSwUIZNWzMoh+Xiqee+UDqNeok1Wp1kLpbHSa1t+gq1WruJwceMkgmfhV3bFFb6esB2XhTJ5BtpIavooc8wB1D/vQTdiRhbQrB94pwkTHSZcfF6SduM545faa6r7r8KnUTp/zbTS/gLbQEjAAdqYnzA1ijBsAi5o2rbiytWqY/EeADnTF0863GBNaVvP3223qeqRNBWViPOALo4DH6g51Xm2++uY7eAFwYu3jBQjXG7hiZ3jYO9Dyip7Tbr13scmiwVcPIL72wlyM0eGkfFlTjaHHQQQdJ7dq1U9eI+QSw9bhKlaq6HZlgOIl4IEYfvpw4USbFnPj55yl+HqWLtMGJE7/U64+RKL4hmXWSrjOXF3zuAVMxmAKzwFZ71D9eXgjYKSifwEMPuXf32PVV6PDh17z57rGPw7dTvlX/ju07qvu7b79T9/Abh6tbR7tQnwlpgQCnFK3hDWD0BP52+gmLfGHU4McDAKMGIzUHHpiejkS98J6xxDOB+vSB1ww0aNAgdX/x2qbrNzAwMLD0sFBGTYrRr3J25R989J30PeES6dTlJDm46+ly8SV3ycxf4l0kkZB7T008dRXRbwxVLj6OHTtWG2osYiWmTp2qftwKzEb1vPPOU3/7Uj2uM8C0DoFw+GFdCQwhvF2YH8zElBTCsGWW2H1Xt6vq3KHnyX133yc9os4ebvCD99JbaPdv67bkEsgXGn+MvBCYJoBMh3YdZOjgodL7yN7S66heuuYF23oJjExBDu/nwU4s/ArH95nsiFVSJ8IOHOecTsFHLZlfvP0WQFyO1CyNDD4YYyOjzlwRX0hMIe3fdn/ncKJyUJeDpFH9Rjolp4hHaLDQGSNi22y7jY6YYSH1fm330zQxmgAwv8zXtGnuOl533XXqtuUgGaeoQFzUB9KELlc3kc443/fcc6/mAe97mT79p5RBaafMtK7ifFjCD+D7bri+ikbQyy//T/3rb9VAbrrpZjl7qFvIW3ezzeXXeFpq4mcT1e+qK5zRzPVJflo2PaxRwlQT4l100UWaPoxHuEEY/QTvN3776a+/Vqq7det91A2k6yWdFoCRPWxXh2GO5wTPBKZSER87CAHmK6l+AgMDA0sDC23UpBq1qM21sxDYLWx3DP8TtZXunSdsBN2up6QGEY0sgKkkjHRg+oHAi+RgBHB6gcD6E3xM0K4pwPePMFpjpxAArL/BWhMYNHiXCofwsXUaQ/b2q+A/TPtBWuzhOoemTXbQ9TVYc4MRkKnfpnf5XHvV1XLc0cdELX26EvBeDxhbwJ/z/tQdW4d2O1S6H9Jd2u3bTo2b9m3ba8dx0035p38wYoBdUJhmwKJVLIyeN8+9ohl1Bvj1Zsk6xKgFdvPgfSYEwmnU/BPJNarfQIZdcom6iY7tO0VGl9u+nRe/0O3W4SO1HlILiGMd0PdQZDD9q9W/dGQMi7OxwJn1yPzgyGtxzz13q65PP52gbnsfpO+RtBsjS3+jTHqjRX6xG/7U7ccj/U571SpnTAMwCvC+GuQFxMv0CMgm6SMBjOhgRATGNkB/AAZ0mzb76dbx2rU2kxP7D5SlS5yBD/w8faZ0je6F5597Ud14RpAtP78sF68pRqNglOODqXi9AI0bjNZwWgiAf//+J6pOAMZNnz4nyDVXO0MSgM6k9GCcYcQSH+nENQXxoj+OcEIO8Zg3Gz8wMDCwtLBIRg2A/39HrTLX2Fi4bwrBiEEjmL/zsQ2i9WMDTiT5MV5BsnBTNwG3hV0oCqyKOvK/zSvn7bkFOtU8GzfSC11+Wn56SUAcPx/85U+gXJBjeSyZDsP8egEoF/2LfUS6HdJV/hX9qiew3bhKpapy911uikONiQg/fPejdvw3x++AwaJWlF/1xcB5pusE4iV5ANZmNGqU/kQD84wjyTh6DkMG9xHKb6n3Vn7auEn+yJ+tZxha48ePT715GeA1zKQL9MuJawU/EOGEfw0xXct1LgT8+KJDxgeRPgm3zbd/rwCQg7/Nm75CAaOpJkkYdn4adPvloj9BWRwJygQGBgaWJhbJqHHnUQcXk1NL6od32Rg3G9B0vPy6/DDChhNofP0GvKC4foNN+cSGHB1mxExgOiCRyS8XIA+Ma8sJ2HJZGfrz3MpBH9yEL8cwrl3C1NGfc//UaTu8FG7FCtdpMh6AqZl6W26p54BNy4dND+fshDEyhvRG3soviDsC1GfJ+DpaE/upMWncuZB6kFeS5bJAPiln4yeRgB5fHm4LJBU9AkoYNdaw8XcBZuPqetPX0uaBwDnLTlCG/tSBc+igHh+MZ2nj+WGBgYGB65JFW1OTojVo0MCB1i8ty0aUDSjIxpFkI0lZhvOc8ejHRpe6rAzlshHy1g0DzX0qwBlrOLfhJNPCKJV1+3I2r9RnddJAs/GT9JCUs4Q/0/HL7MsCHFl79dVX9XX7eB0+Ps/A7xFBDjJ8XwwXat9tXlBo07BpMh0bBmANC17qRkRVEcmlO2fGZTwS+fDpy5Reomwoa1SuiNiFxXOlyqz+nJQM7TOYPw3/GgUGBgauTyymUQMmNaBgkqwjOrtsnSHdBTW+SXFw7nf
umejrpw5rfNCPspp3E+bLWCLMyvlkPjPFs8xFJomZ4gFwc30RQHkbF7j55pv1UxVYgwQw31oXMf24IIBdXTCKPv7Y7RyLVeoR8pY2buCaYajvwMDA9ZklYNQUnuwEQdvI+rTytvO0cXl0HWX2htqG49yXp1+SnPUDkRdfzp778plI2WyknB/X0soW5A838k8w3MrynMAHOPFqfoAyJK8L4zAe1tNgazm/YRXZulGYZVre6lt/6Ax8V77cjf7SyKTnL3DNM9R5YGDhuMaNmqTGkG7bqVlaWcpz2oPhVpbndGciwv20/bz5pByYFOafJ8nbc59WPhOtnB8f+bcyNixJnv4Ajjwn6EdCnsBurKT68uuUxIJcvicH0y+Rl8f86ayvXB/KV9BzErhmGOo9MLBwXCcjNbYzS2JSHDAX2SS/bLTyOEcjYhsSv1GhO1P6SWGZ5CxtHJ5bP1+mOLS6cyWB+Kwj0tdLGcZzfhF1PYlbZ4IgypPUU9Zp6yQTbd2tb+UPDAwMXFdcK0aN33FZt/XPxCR5n1bOj5+Nucr7ckyrsOnlyiT9dCd1mlauIFJPUUgd7JT9vFiZbLT6fN0bAm3dbYjlDwwMDFwTXONGDRtsy0wyvj+JMCBTmE+/o/VJuYLCeA748gzz/YtK5tl2dDzaNDKll8nf15eNTMuXTfKzdVxQfZNOR/oDjkl6yyJtWYpaHsaz16uougIDAwM3VK7VkZqkMMdMiylXX1Dp6+K575+N2WStP+WS5JP8SoJJBgLTyZRetrzYTtIP80k9BclaOSubLV4mWd+/LJJlKGpZ/HjF0RUYGBi4IXONGTUl3Sj7nb1t+C2tTCYmyRdkTFhZG7Y2iTST6oHnCMtWDp6TViYbk3QS1s/X6bt9PXbxN2V9maS0M7EwsiVJ5h/wy1xYlpSeNUGWszTmLTAwMBBc50ZNJKLk7piCAJ3ovKjfZ1IaPguKx84R/oQNZ5jvtzZoO+6kvCfRyvHc+hXEJGOBwHmSLuvHI/RQF+HL2rSsfEGkbK7yJUnmn2UoLktKT0nSlrE05i8wMDAQLDGjxjV06emiTI2f9XPnMFKiY17cy+UAv+NiWknpJdGX57nVyzALhhWHTIv66WdlMtHKIa8E3b5ekv4F0cr6OmyYJf0zydp8AhydISDDNyuz/qnP+uXCwsiWJFlWMkmmKCwpXUXRY8vD+L47MDAwsLSxRIwa19DhCLczbApq/NLhzqgBpk39Xs444yw5+eRT5dhjjpc+ffpF7Buxjxx//PHSvXt3efTRR1UWQHzbETI9HrMxSd76gcCkSZP0C934UjJhZQpLpmHTylUfymqBDycif1YGuvzO3aZHZgsDbfwkOT/MnttrAuDjkfjS+Zw5c9QN3H333XLiiSfKokWL1A15G4/6kuiXr7As6c8uIL9kUnhhyXyVlL6i6PHLVNw6DwwMDFwbLIZRw4W8GCHAQl80hPRHA8hwP55jusFMGzVjx76jr9QvX76itG69j7Rp007atWsfHdvIvvvuKzvttJPccsstKgsk6QTYEGdjOn3XYNsOFcToAXDNNddonqZMmaJuhAFWVyZSv3X7TJLLRACGDL7HNGrUKNl9993V0CMo53dANi26k8J9uWwyNg2cW7A+6Q+jEHX48ssvqxvo2bOn+v3xxx/qtvoKYibZwugAbVkListwG4du0vpZmaKyJPUUVpcfJ1P9lFQeAwMDA0uCxTRq8PHHqLGLjBr3EcjIPzJs4Kf+sVHjN5D5z6POOJ56ev+9D7Wj69fvROeRAWhgOWUBUBd1++nR35ex4X4YdR988MGyZcJXqnMh9eAcR+q29OMkEXLEFVdcoXVUp04dPSJ/RFLcJCIvzFcmMr9Wb6Y8Axhx+fTTT/VIGegA8PHMCy64QH7++Wd1Axh5q1q1asqoSdKbiQXlnUzMb17kj6+xo3zREV/+xr3EaTEwqeygLZf1IxmPcpniWKa+MxbJ6fOj+Yuem4ipZyqmr9uyoDoBMsVHXMb3w5PcoF9WntMdGBgYuC5YRKMmPQqzKr08IoVVUcO2KgpDg5zU0OX3QwPplLz/3gfaUWPaiYjEUrI4AtSDhtUC4WxsKUMkxaefdZNEgwYN5MADD4xdmfX4zKTXEjKW9PPl6A9gGuyjjz6SJUuWSMOGDaVjx47qDyTFSyLTsx056OtgmgDDAfjbjhAYM2aMXrtx48apGzII868R3TBqNt54Y5k7d666mWZB9FGQjMtr2jiAISMoVnTfRZ5OKAbLQx0sg9WpOiJ/Wwd0W1AXdRBWH8JgVIH54qths3ocG5fIFMZ8EThnXvxyWkCGH2zFEfdIvrxFoB6fNo3AwMDAdcEiGDXOoNHdSnEjt2ylyDffz5fps/6KfWDYuH4Dv0ARDw03dfgNHzod4P3339eO8bjj+qgbQDqMk26MXR4I+FvYX95JgDwbfdsQ05/xfvnlF80PvlINJKXjl4V6rNumk4mUs/GSaIHRmg4dOsSudGfjp09q+aKOikBHz47LAlNcvl8SbH1MmDBB64pf8gaQH3z/yYJxsE4KRg2+J0X4+XUjFdG9ZvIMJOWXZSbgZ7EqD3qieybKUzYg3mpxVTfv+3RecO7nxQ/371UL1M+qHOoZQD1Y2HRxjjrC9cwG1mtSvpPcfr1bINyVLf81CwwMDFyXLKJRk24877r3Vdl59yNli0adZcvGnaXHMRfIxK//1LCVkZj+MNbONt0A+p03O4L3339PO8b+J2SefoIBhDgAOtIOHTpKrVq1pVGjxvolaQt8TPG4446T1157Td3nn3++jrr8/vvv6mZeSOTDdUROP+IhP++++666AYQPHjxYrrvuuthn9c6Y5SkKGN/XyTwS8Ktbt67sv//+sc/q+fBJjLx1pFx++eX5dP7yyyw57bTTZMyYN9QNfPHFF9LjiB46bfTiiy9Kq1atlFdffXVKH+r4oosu0mkw1BXWz5x66qly8sknp0ZhXn/9dTn99NNTboBGjfXz84upmLzY4AWefPJJ2XPPPbXcGKXq27efri+yeHn0aNlrr71ks9q1dQ3WXXfeGYdAf578tXy5DDr1NHnmqaflt1mz5Kgjj5TmzZvrffLjjz/GkqILm88+232VHMB9x7oaNeoZvQeWLFmqbuCuu/4jTZo00Xuxd++jUx/yJC688MKUcXzfffdJ2/3aygcffKBu4NqrrpY9d9tdOnfsLG+89oY8dP/D0rF9R+mwf0eZOXNmLCXy3LPPSbNmzaRGjRrSuVNn+ebrr9WfeRs58lYZMmSIGh2DB58pu+yyixxwwAGptHBv0YCBwTlo0CCpV6+eGsg77biT3Ph/N+YzRHENzhh0hmyxxZZK3DfEatcrMDAwcB2zUEYNGk41TuJ+5pLLHpA6W3aR4068W/qc9JC0an+BVK3dWerW7yRj3/tBZVZG/TvjZeqsaQTQqOncubNOs6AhHjfuPTUq3nnnHZk8ebLKAY8++rjK7rlni6iTvi3qRE9xcTt1ifS5Dhe7beDXr18/ad26tZ5jgardcZNE5uecc87ROOx4Z0WdIDoAdMYwqA
Abz5XTpY21IjAKsDsJZUk6ghMnTlT3559/rqMcVo8l/EgAfujc7UiNH8evb2K3qBMvF5VrlRmR+PzziVrWoUPPjn1Enn/+efVr9a/W0rJlK63jpk13UD90hgA672OOOUZ2jAwI+MPowJQYFjBzt9PQoUM17Ou4Awa4psaO1LB8ml98/BLXMc72OUPctWi7Xzu5/vr/izrus6VihUqRnvSo3g033KAyXaLOfuSIW+XoXr3UfUzvo2IJkcULF0jtmjVl2623kb1230N6RzKHHnqoym2//fayZOkSlYPhB78XXxyt7pUr3U2Pe6tmzdqRsbCrugHee0cd1VtuuulmDW/UaGtZsSJt3O68086yd2QQ9ouMOcju3Wpv+fknt8boqB5Hqt8F554vRxx6hJ433KqhnPbv0+SsM4dExtdvKnfDta58B3Q+QG6L7vldmjVX908/uGcN6Nmzh/q1a9dWDU3sMIMbxD1G4J7GWjH4n3LKKXLdtddJzyN6SsXyFeXNMW+qzML5i2SH7dz1vujCiyMj71w95/SwXqO/3fXCvZb0fAeuO9o2Iyk8MHB9ZCGMmvhTBlFDBnzw0U9So3Y7ufDyMTI5+oF7/5ML5YRBo2X/Q26WqjUPkv06nqLTUoB7qNKNnn3IeA7AiGEDXL16dalcubKySpUq6sdFsTNmuGmh44/vq27i88+/VP9zzjlf3QsWLIh+XW6hfnvvvXfKmAEKaoAB7LjaYYcd9Pyzzz5TPS1atFA3AB1JDQaAnT5Is3379trJ44hRFRghcNsjiBEQGF/Lli3T+L5OpAPwiLT96SfKZiob0X6/trJl3c0lz0w5fDVpipbvssuGxT4ib731pvptvvmWUaeejr/ddtvpNbG/6DEaA1lreDLNyy67TMOmTZumbqAgo+Yf3Gdxki89/5LGv/Tiy5xHDOyam/qt0/lebBBfe/U16iZefulF9b/9VrdrbuniRbLLTs3Ub/QLL6ofcMnFF6sfygGgmitVrBLdc93UzfK//vobKvfqq07ulf+50bzHHvuvuolaterIlVem83LQgQep3Ga1asmUSenXA4x+YbT6v/XG2NhHpMfhPWT3XfeIXQ5T4utz5bCrYh+HvSODs1fPI2OXyJmDzlC5QdGR4IgjFmsT3bp1Uz97TYBZM36VhfPcKFPf4/qqzNzZ6dG0r75y+fh0whfqdoZNMGpKI/k82ecqMHB9Z85GjduyHT0Y8SjNsCsekIbb9JBrhn8qz42Jftk++IcMGPKmdOh2uzRpdorUa3yIvPtB+hek2+adpNc9bADX1MB4+e6773Q3Dd5xAuJX5vTp01UORgvk/vjDTXPZX8RHHXW0lC9fKdIrsnz5cu04wXxD6l7aPgGsq8CIDDpkjBQhvd69e2sYgCH8pPjwoz9GMUAYU5a+P84XL16sBk2mjoF6QSDJqGG6BRk17RKMmi+/nKxlHDYsPb3Axb9XXOE6Ul57GCTwt1NHzz7zjPpNiK4VwbUpw4YN0zBcUyKbUaNlRHZdUaVzxy5SuUJl5xch6WWNGG3ZpNomsSuSMaNQuzXfRRo3bKDnKHPjBg1l+22bqJsYO/YtzeNjjz0e+4icdZYbYZozJ13O3r2PlS23rB+7RA444KDoPtlUfvzxZ5ky5ZvoXv1U5v25QAYMOCkygP8VS4l06thZdY3/MF0/AEdfli9N358PP/iIVI4MqmnfpuvrjNPOVLmJn30pM6fPlE8/+Ux+nfmr3PR/N8sWdbaIyuUq5KQTT1K5GTPSU1bIP/z69++vbkzZwc3RNoWpb+DPOX+qzFE9jlIj5/NPv9DRvF9/nS1Nm+4k553rDKRg1JRe2ucpKTwwcH1kgUYNGyoYNa4R04Ocetr/Sf2te0qfk/8rZ13ygZx63vvSa+CL0ubAEdJ0tzOldr0u8swLn6ise6hyN2owZJ4N++67nzRs2DiVF3RykRrFtdderzq+//4H1Y3zo2NjxC1uXD1t3w+YOnWqxsXIDI4g1yWgs7YNOBt0EPG5ZqEoQHyrm4Q/CUAGRo3d/cRwPy5J0KixHT9/gVuj5vXXnVHzUNTJAnyfEKabKlWqJPPnz1c3MOrpp1X2o/ddHWFhOI0arMNAWGFGamjA/BV19vXrNZD928ZrhxAUXXeSwIicXV+EVercQXTKyf+WShUqyry5zgiGUYM6sHjl5f9pHp988qnYJz0ld8/d96kbxgreoYSpL6Jp0x3Vb6cdd5bGjbfRaadtt9lOateuK+3apfPTJrpnGzdoHFVM5EA1xvfr/0a7dK2xA8Nks5qbybIly2MfkcO7u3f97LTDTrLt1k1kmygtsO5mm0fHbWXJQre+54Q+/VXut1lu3Rjwc2QEwW/AgAHqxhZ7uJ+OrhnAe0rrPK73yZOmSIVyFbTud4zKuHWUxtZRuihb1aqbyMUXX6pyYfqpdLOgNiEwcH1j7tNP0YOBGQFsIAGGj3hWqtbYX9p2uUa69npQuvZ+TPbvepfsts8wqdekjzRueqhM/MqtqUCbaRcKW6Y6sQg0avr2TU8r8aEEiX32aRN1YvVUL4DONi9eR3PDddepjmnfTtVOrUK5clGndrKG+UZNJgJYGAs91apV08YfneY+++yjYQBlkxoMAKMvGFnC4lPwhx9+0F/IPhmGI3ZbJXUM9LP1QKPGH6lhfnw9jAckGTVff/2tlveyy9JGzWuvJRs1GLHyjZpnR41S2fEffqRu1D2Nu0xGjb9Q2OafneuKZStkqy23kjZ7t1H3P7Exg/vQGjX1ovuhbZvYUEF0sx5nYP+BUn6j8pFR4wwoGBcd2qXrDXjl5Vc0j0/+13X0xPbb7yA779xcz++6626VmfRleooN92HnTgfI7Nl/qGE4efLXevzmm6kyK14LA+zTel/Zrfnucd6iekxfDi0f9N526+1y6kmn6vkdI+/QMC1HhP323VeaNtlOZv82W76d8q18/dXXOiWF44zIaKG+AScM0Pi/zZrtPCLM/NlN19KoGT3aTXk988wz6tb6joC6ZZ1NifRCZmSUp1+jcnw58Sudovzqq8lR2b6NDDxXl7xegYGBgaWBWY0a28nkRb/I0L7GP35l+owVst1Oh8vmDXvIrq3Ol91aXyLN9jxHtt5pgFSptb+cPuTWVLsdqVA93Haq56ajxhGgUYOdMQTzABLnn3+hyqEjBlaZ9R7t99tPNq5cJWqc/5FlixdLxY3KyckDBmpYksHgk+lgtw7S4JQJdjvBzZ1Q6LCT9DH+s88+q6M8WEsDtm3bVtmuXTt14wjCDyMM2LFz7LHHrramBvpwRFq2DnCebfqJpB6QaNumTWQEbJG+mBG+/tqNTA0bdkXsg+knNyXzwAMPq5vrSmDUVKhYQdcsEaOefEplP5/waeyTRpJRk7T7CflnGdi5AocceIjGX7LQLeJFNYAWPQ9zbyhevMDJqOEQo3GDrWX7bbfX81V/5eki3NTITwyujXniCTdSQ/23336n+s+dO19HCFu2bO0CYvTqdbRU26R67MoMGDW77ryrG6kxxgN2OrXcs6Wce/a5OiKy8447yyP3P+QCI
+RhpX2ESy66SPOxbIm7PzKhf78TVW72r+lPUvziGTUwoOFebUTU1OnSJcuj61MtMj7zr1uzSLr/AwMDA9clCxypYSeDYWb0gdEh9cK9F1+eKI237ybVNussNTbvKjU3P1iq19lfDjvqfPlpphsF0F/UiB/pWWmMGuoHeORC4UxGDQj88stvKrf7bnumOgfgwfvvV/9rrnJrQBbOm69GzSknnaTuXEZqCGybtW/rxfqcihUrqkEC2DJYMo9IC+tl0PFbcj1N0jkMGuiwnQXAtOBvgd1PXbp0iV0OlCOph7oAGDU1N83fEV900aVad5i+I15/3S0UplHDkZpevXqpUWNHal56/gWVHfOq2z7/Z2Ss0GDhmpoko8affsqX1/jw3jvO2O13/AnOI8YPP0yX++65X8+x1gQyRx91jLqJyy91BhW2SAPLo84aRo5v1Iwe7UZqaNSwrAsXRoZxhcq6vgaLf++88271ZziNwf4nOIOBwBobjnABKaMGZUJUF11HZOptXs854GdGnxRxHSyK6hrptN+vnfOIgWmne/9zryxd5IwdjNSUi+752b+akZp4+mngQGfcA1hfA7833khv4QewcPndsc5wv+46t96HO8CIt99+W1555X96bu+vwMDAwHXN3KefsCAwor6wLSKmooCp3y+SCy+7Vw4/6lw5+vhL5M57XpbF8Y9JGECQx9QTXh6WZNTgnJ01Gks0oocddpi6Acowzsr4l+ubb45V2SqVquoahH1b76Pu4485VsOBP//4Q/2OPdp1dH4n7xNpANi6jXj4JAGAMICjNVyLwDjMG/TzvKhAXEvmmXnDu1patmypozvIC3aH4T0k2ErNbeaIQ1IP4wPnn3teql7uuP0O6datu9TfqoH62d1P6Mzgd+ed/1E3tzXznTT2A5WzZrqOs0ZkLB3V80ipUL68jlYBZ599tobx+1lAjx491M9+JoF1l8orDYAI2MIM+UYNG+si3E6duqgbW/iJRx9+TP22qLul/Pukf+v6E7gxCkIsWrBYNt1kUx0dsXguNsoeeMCNkiALzMaJJw7UsDqb1ZX589zoFJ4F1scTTzyp4U2abC/nnXt+dP+6bdmHHnq4hgPY0t2wfiPRXV34i40X5KfaxtVUHgt+a1WvJZtU2UT+1eJf8twzz6kMPukAfPCeM+5qRjJDBg+VPvHupB22dzv0gKN6um3s3AYOTP9xuvphJNCCrznAuqwB/QfIdttur+6nzBRcnz4nqN+BBx6krzjAvQf3yJEjNZz3V2BgYGBpYO5GTUR2OjBMwGxwL2+FvFtLo8aQZ9TYjhfAS94wJI6Om2AY45AAXqI3cOBJOqqyf7v2+kI1C4x84EVk+PgjYDv5TASwFgbTT1gsDHBtCD5NcNJJJ8nDD7tf/ZBHvmx5fDfTtP50J/nbOJaQAbAjCe/aQT3hhW4wGLAGCX58Dwx12LTpD2An2LlRB7VD06Y6BYYXzc2d+6e+nJAvKgRQfkxZ4LMMAA1KfCkdsqgPgG/FxQ4oTKN1ijrJe++9NxWObdJ4F8rs2enRg8cff1zzTxlbDzxXt5kim/DJBC0nXijXsWMnTQMyAI/YUg4ZvEgO78qx5YEMRtywo+2uO++KfR1gcCGPX3wxUd2adpQX4Mcff5JjIgPwiSfctm3648hzxO/Vq7e+8A+jZ7hHmCfg1hG3qlGsek2ZrrjiajUSDu12mJzY70Tpd3w/3XFUu5b7ttfw4W4rOhZeA7/O+lVHWVA+jBrecP0NqSlL4InHn5CzzjpLRwkJrH8584wzU8Y472fgjjvu0FcP7LHHnnJ072Plww8/Vv88M3331FOjorT213rH1CPeGQWgLEn3amBgYOC6Yk5GDRt4vqvGudGBRx1n3kpnqESNnI7MrIrkokY7/cXu5AXClmwcLbhmBWEA0yVtw5wPkTgWqbo8pBtmGFRMKxMR7ucDbtBPD35+GaiD55Tx/RmnMGTcgsB6S4qL84z1ZkAdFk5nXr741EmDNQlJnxywuqEDbptHP9/QnUk/gDDkK1PZGO7rodvPI2SZtq+T9eDy6u7vTOkClCfwXPCTBz/GIyjXXJN+O7UFFsTXqFFTz5PyYuGXDUjy03qI8pRNl66fi55j/RRKhlsO1wd6XPnctbLXLjAwMHBdsJBGje0g0aCjQUOHhLA8yYsaUPdxPsg5P8eoIY8bv8KQ6aLhBJi2z6Qwq4PIFE4Z+tHfupOYJEN9Ngzw/a2bfvTnuc+kMD8u6cvQzz+nTKbOiW6G++cpv+ja4wj4YbkQ6fhx4ZdKP9Zv7wUra2WsLnbsoHboRobxkmjTLsiNPLFOUukm1JGNA2CkUUdpzDQVsXjxMn1twQ477KRu5psEbH4KYqb8+vkm+TkSy6T0qDcpLDAwMHBts1DTT8mk4YLGEaQ7f6OY1MhnImRtQ2kbzoKIUQNOcYF+fJuOpR+eq7yV4zk7CFsOypCsD/rnUj9Wl6/PD6cf82Bl6E8/Sxs/lzxlY1J8q9/3o3+SDAmdfpl8Ml1fJpveTKS8jeeXK0kG9POBoz0HXnrJrVvCe22wdgc7+/CCv402Kq/bxSdPduuQCipzQWTaVgfPodvl1T67PlfXCSbpDQwMDFxXLEGjxmeSbG70G3DbcBZEP45187wg5iJL/QDl6cdzdmoF0Y9P+mEE/SytnCX8cs0HWNR4PjPFtfqTwpLO6c4WTr9sMpnqLht9+aT49MumG2E2nPc4sGDBIrn0kmHSvfth+lkGjNzccYdbnA1geoyyVmdRyDzY/CAvRb3W1ENdgYGBgeuSJWDUrL+0DX2mhpt+NpzndBfEguIl+ZHskGw45bPF8/3p5rEonRzzkUl3YWnLVlQdpZ9pYzUJSdNARbk2gYGBgRsCCzRq1t/OpGCyM/WZJGtZFPkkf7IgPez46aa8Hw/nBRkePBa246ReIJPuggg5S/jZshU2T6WRKEvSNcC5hZPDl+2DURMYGBiYK7MaNX7DuyGQZbZlt+cgOpVMHYsfd03S5oPpZUs/lzzlIpNExCOoA3nDeTadhZFd35gub3o3lbue6Wlce40DAwMDA7MzjNR4ZEdj6cuwo0nqbLLFy8bCxrFp+/nIpIuwfoVNNxOpJ0lfJv3wB1iXlKOOTPHWB/pl5JF1YWV9d2BgYGBgMoNRY5jU0WQqPzsaP5xxMsXLxKLEAW2cbDoYBvhxACtbFLIzpm6m58sxLNs544JWL8OtDN0gr0lZpF8WS1uvgYGBgYGZWWijZn1tXFEuyyQZkB0nj75sUTvWgtLNlUl6rNsPT3Lz3JL+Vt6XZRg7YRtu3dbfd1OOhF9Sndo4IGRI6x+Yv06TwgMDAwPXFwajJqZt+C2tDDtr6+ezqJ1qUnqFJdImrL/Vmy2NTHmgP8N8d5IfYMNzpdXjx4ebgJt1DX+cF7Xu13fa+iST5HJhceMHBgYGrkkWyahZXxo1lsXShvGcHSb8CgJkitK5Mn2bblJ4UpglgTzYV+EnxbU6eZ5NzobzHOlYsOxWjnp8tx/mM+nzC/Y7RwDKCFmkW5R631CIuvaZJFcQixM3MDAw
cE1zgzZqQFuWbGUD8DHERx55RIYPH64fJ7zmmmuU1157rVx++eVy/fXXy/z581U2SUcuzJR+QXkjbr/99tSHC4GxY8fKjTfemDJw/HjUSf2Z0sgUDsybN09eeeUV+e0392VoK2PjZPLPRH4oE7h1xAhps++++hHO7bbbTj8sOm3aNA1LyQejJitZ57nUfWBgYGBZZIFGTVlnugFPb5N1zN/J5pfNrwME5s6dK3Xq1NXX2jdvvpt+2RjcddddZYcddpB99tlHZs6cqbKMh47W6s2kH6RckkwmP9CiVq1a0rlz59gl+rVx5JcjHEnxk/ysP90AjnZUBHjxxRc1DRhUBOPbuIB1Uz+pevGdJuiOv2S9cP4C2SmqW+gfOmSI/Oeuu/Qr1HAffPDBKuPrCcxM1n+msCT/wMDAwLLCDcSowTk64fxMf3izYAIYkdh00+rSrt3+6gbQP2fqKODnf0CRsqSVTfK34b6fS9/JE40aNZLu3bvHLpGzzz47yvOmOsoE+DqsXp5Tp0/Kgdao+eGHH9TQ+OKLL9QNUN7GJxhGHSSNGpwTrVq0kPpbbinLlyyJfRwwSvPxxx/rua8nsGi01yopPDAwMLC0c702amwjnRcZMavyVsTHlXr85x+89Cw5LskGHoBRU716DenYsYu6kwB5HW3w1oIAWCNCvQTTINBBM02fVo6An/Xfeuut5bDDDotdIuedd57UqFFD/vrrL3VbAwBgWtRjdRH0ZzxLfJfIIqncFjYdxOdRDZo4b/ADXhntPvb40fsfqPtvTEf9Dfk8dQNcUxNYfPrXJjAwMLCscQMwaqJ+MOoD/a56VdSn47s76UbcTUml46WJzhbA9JMaNR06qdvC6fhb8iJZkJj4xRfy7TffxK7IsIk7bnTGHD0BlixZIosXL07psWT6wKJFi+TDDz+U7777LvZxhhEBo+bQQw+NXSLnnHNORqPG6rc6kIcJEyboSAjzCJmkvNmyAtTD9IAlUbn++OOP2OVkoMvq893A8JtuVqPm1f+9om5FJLNypctv0icEAotOXgN7HQIDAwPLEsu8UcMG2G+MnRudcdwZRpg0ZY68+sYX8tOMpbEPwhHHjdggDuHrA2DUbFqtunRKMGqIv+ME335rrDTYqr5UrVJFKlesJM2b7SzfT0sbIljM27x5c11ce/LJJ0vVqlV1US/B9G2ehg4dKlUifQ0aNNDO/phjjolD0sZEJqMm0/QTyXSwELpOnTo6jYV4lSpVSi08phwMHDtN9MP338vOOzWTD+MRFYymtG/bTq68/HK54brrpMam1TW/mBZbsGCBikCPn751AzAIEQ+84447ZNq0qeoPwFD9Ow9x8scLDAwMDNxwuV4YNT7pz1mKpVF/fvKp/yfb7XSEbN30MGmy46Fy04jnXGCEvHhKw9ejOuJ0AIw2bFWvvmxWq46cPODfMrD/QBlwwolyYr9+0bG/fD7hU5V7d+zb2hHDf9aMmTJl0lfSumVL2bhyFZkx/WeVefjhh6V8+fLSrFkzqV27thoT/hZsEsDU14knnijPPPOM7rB6/fXXNY0hQ4ZoOAGjxq6p4fRTtjU1TOOtt96ScuXKyV133aUG3NSpU+Wyyy5LGVs2njVqJn72ueblNY6oROp2iYwc1gHCb7j+enVfcMEFTiaC1eeTuPlmN1pD7rVXS3n77XFxKA2bZB2BgYGBgRsW10Ojhn5xrxfh9DNHSOu2Z8jDT86UQee8JLu2PFMqbry33P/QOxoe2TQRXXy7M8q53TmAjr7BVg2kdo3NpN/xJ0jf4/rK8cccJ0f36iWHdz9UPvtkgsrt1LSpdOrQQc8talWvIccdc6yewzhBJ43RGjtVk1ym9BSYxaBBg9QgssbQtttuW+iRGuL444/XPCVBR2diIo6uG4rz9NWXkzTeG5GhRcCoqV2zZuxyQFn33nvv2JU8FUZSN/D999/LddddLy1btkoZN+PHOwNy1ar09FlgYGBg4IbNMm3UoLMHbOefHnHRIPnu+yXSaNtucvGVY+XDz0WuGf6dHNb7Idmi0dHSocuZ8lc8mgOjxulNj9j46ehC4WrVpUunzAuFJ0/6SjvdJ594wnlgFCjuoAefcYZssfnmev7888+r3DvvOMOKC16ZtiX8LTBSM336dDVCsIWbUzqAP/2E3U/Vq1fPatQgDQCjR8gTRpCge9y49IgI8wHCiLAjNZMmTtR4Y159Td0Yqdl+2yZq6Fnsv//+0rp169iVNpSYB+q3ZN6IBx54SNPq16+/uhEcFgsHBgYGBoJl3qhZ/dwZJZiWAD6Z8IvUa3iQHN7rDjn74o/llLPelQMPu08abddfWrTuL/MWqpjk+fE9/QCMmhqRgdCxfXpLt4/3IkMAne7Lo0er28YfNmyYbq+G3hdeeEHlfv7ZTUdRDmEkwCMwZswYadu2rTRt2lTfiVO/fn1p2LChLh4m/OknGDV2oTDT8Uk8ERljLVu21LyBI0aMiENcXB5zMWp6H3mUc8fYd999lYTq8YwaugGeWz+g2ibV5KCD3DtqAIYHBgYGBm7YXG+MmjSdUULM/VNkx50Pl4bbHiOdDxkh+x84QnZvdZFsUrOL9Ox9se6KAleZ+FYf01Bdc+dKzchA6NCuvboVUbjKxGn++ttv2sHDgAEQxvzsFxkkWIALPPXUUyqHqRXApsl0bVysuYE81qR8E++muuKKK9RIWrgwtswiwKjp1q1b7MptpAZGgQ/sxILxtP3228c++eNao2a16acoy02bbCdH9uiZcgNt2rRRY4xA2aCDR+TDTqUBOs0VkVvHf//d1e9tt41UN/MTGBgYGBhYZo0adviZ/C3++/QHUqHKnlKr7iGyRYMjpUr1jlJ/64Pkg49naPiKvH/UqMmkD0cARk3tWrXyr5f5O218oPMF+vbtqx0vXkpHcLqJIx+PP/64ujMZNaA1Nrp27aq7kiwwRVSzZk01QIgmTZqstqYGMklGDfMNTJo0KfU2ZAKLmK0umy+eA5NzNGratWunhg3BurV6AeTl8MMPl1mzZqnbAm9urlevXirfNn5gYGBgYNGI9pdMCi8rLNNGTSZ/Ett9iedf/Fi6Hz5Y2rQ7QY7te5l8+Imb9sEsFcTwYrx0vPz6cATmzJmjnfe/WrRQN8A4JICpnlat3KJWvAivS5cueo7dS8R9992nfl9//bW6bZqgvcGA1157TeWx2BY7nqizWrVq+UZqMHID44E45ZRTVC7TZxKI4447TuUwyjNgwAAdUWrcuHHKsOCNjjg4YkSF8T//9DON+794yg0VWqd2benSKf25BmCXXXaRnXfeOXaljRqWk/ref/991Qd2PaSrXH3V1alPI2y++eYyY4YzRhk/MDAwMLB4ZDucFFaWuF4bNSqDLb9x343DvAXuCMBUoUFj4ybpBDDacdP/3ShPPPaYugHGoQzkiXvuuUc6duyohg3W0FhMmTJFp6gyfQCTNxhIPPvss7rYtmfPnjodhc4fadjdU3fffXe+D1piIfINN9yQmtaBbptn5nv27Nk6ikQj7Pzzz0/lzd7okLfnwO+//y7DLr1
MfuLIVFSpd91+hzw7apQ6+YI+jE498MADeg4gPstInSzvT9Onq/GGD1jifT/NI4MI28tZDhsnMDAwMDAQLNNragom1sekSaArXpkX+eETCfEW8OT4jgy3gF8msmP2gTDo8cNtWpmIuJmQpBfn1s18UQ+OSfF8IMzmg/Kp83jKjcBOr3/ij1EC+oblSA4krK4krpYfr+hJeQoMDAwMDCy0UcOOsOyQRg06dHeep4tco2MqLClefrLcNAYsk/zpR/gydKODppuEX1LHDTnC6mGYjcejL2f96G9lCT9OQaR8QfEQTiaFW0LGIkkmMDAwMDCQLJRRU5gOqXSRxguIDhzMzaDxy2rrgMzkn4m+Xj8siTZeNrlcaXXAqPH1FiUNjshY2nCmkZSWdVtipAcjP3+DCeGBgYGBgYHkBmLUgNawyc2gAf2y2jooCn09vm4/vCDa+LkwU3zfzw/3WVB4Eq28je/r8t2BgYGBgYG5sNhGTeiAMtPWV0nVka/T0pez7uLQTk/RLynNNUU74rO20gwMDAwMLHsMRk0Zo70GPn0ZG6+49NNZE2kk0aZblDSLGq800y+T7w4MDAzcUFmkhcK28QyN6don69ynH27jFIc2DZ9J8iXJ4qZXnLillbZMmZgULzAwMHB950acWsiVfoMZGtG1T9a5Tz/cxikKre5sTIpbUkxKK9c0/XhFpY1fXF0lQVuuTEyKF1gyLGybGRgYuPZY6JEan6ERXTdknbP+/WtQEtfE6s7EpHglyaQ0wSRZn0WJk0Qbtzh6Soq2TJmYFC+w+IRBE4yawMDSy2IbNYHrjmu6M0vSb5kUpySZlCaYJJuNRY1H2rjF0VNSZHnIJJlsLGq8wMDAwNLOYNQEFkjbga7tDrG46RYnLmnjFkdPSdGWqSj5KU7cwMDAwNLMYNQEFkjbCZa1zrAk8mzjFkVPcdP3WVx9jF+SeQoMDFx/WJanWYNRkyPDXLpjWesIS0MHvq7T92nrpDTlKzAwcN2TfV0wajYABsOm7LE0dNylzXhgfkpTngJLN0O7F1hWGIyaQjAYNWWPpaEDL20GRGmok8Ci0792a+M6hnYvsKwwGDWFYDBqyibZCayNxr8sMNRH2Wa4doGBmRmMmhwZDJqyy9CJ52eoj9LHwlyLpGuHtilcz8DAYNTkzGDUlF2yEwiNfuC6YmHaj6Lcq8GoCQx0DEZNjgxGTelkLh0AZUKjH1gWCBT2Xi0rRk14DgPXNINRUwgGo6b0MZdGkjK5yK4Jros0A9ce/fsq6XoX5h7w9aWZl5HBqAkMdAxGTWCZZi4NJBvSddGglkS6Nu7azn9gwbTXl+f+dSrM6EtRfjytWpWXs/41yYLynlQ3gYElyWDUBObMstwgrYt8s76Kk7avozC6wshi8ZhrXdtrRPrhOK5puOuNkZv8+VvbzHTf0d+vHxsWuPZo71cySa5g2lHDpPD8ZDpFTy87g1ETmDPtzW+ZJFtUFlZnSadfkixu/TB+aS5jWWJh6zFXeXudSNtJww3MmTNHHn/8cXnsscfk0Ucfldtvv12GDx8ut9xyi3LEiBF6pB+P8B95621y28g7ojh3yt3/uVceevAReeqpUcpHH31cfv99tqaB9LIZD4WhLY91+3KFodXlh5Ukmdc1nU5Zpq2ftV1fmdKmX3EYjJrAnOnffGSS7NpitjwUJ2/oGErDr0eWIVs5LXOV21Bo6yOpbpL8SoK+TmDatGmy0UYbpXj44YfLoEGD5Pjjj5c+ffrkI/zof9xxx0mvXr3loIMOkRYtWkm9eg3y6QFPPfV0TQOw6RaFrBOWIdO5pe9HOT5HNp7vTorn+xeW1FMSutYnZqqPotcXrnF0zLP3Xv6Rm0ztKNOyaRc+/dUZjJocyYczKWxDoX/zlcQNmMRc9WeTY8Np/ZJkk+KCma43H9xc7oUk3ZnSy0ab74LStbJ02/B1ycLmxS+Lz1z0+Tropp91+9ecMgWR8ZPCSOL6669PGSK9e/WKfQuPP/74U1599XXp2eOolL45c/7UsKT0C0OWh2XC0dfrl5du1N+KFStUHoA/1vtQhnKZzq2bfrk8a5lo9fq0aeTKgvQl+ReGxSmrz6KUD2S8XOLCoFm5Mn1/R1H0ehcU34bz3PoVh8GoyZF+g7chsqRvPp/USyTJ4BrYPET/5J88Fzcb/HhwWyRdWytPP8Lqs3HApHg2zPezTIqbyZ3NP5fyrS0WJS82ji2jHwZk0mfrI5O/T+iiPrhtvEz0dcDP6iGJgw8+OGWI/F9k5KQAEdzG8a3sdLkOIhvef+9DqVJlEznv3AvUbdMsCm05LGx5GJ7kBr76arJccMFFsny5M3DQ0RGUB6GT53T7+jKRcbL5Z5Mhkq6VT4RbZAvLpispL5aM68tlg5WzzFT2TLR1AhRUJ44uzkcfjZfLLrs8igM96pWClWeebN6S/IrDYNTkSFzg3C7y+s+SuPF8QqePTPW9cuXKWCJG/GAtW7ZMZsyYId9//73MnDlTpk+frkP+ixcvzvfQQC+wYMECGTlyZPSr9w91r8rL36GBTJO499575YsvvtBzK4N4dCeRekjIZyofmCkO/Ikk/TwHFi5cKDfffLP88ssv6qYOyhaUh5Ig9ANLliyR2267Ta8LkCRrCbz77ru6BoVgWQFcW6w1Wb58ubqTykFZxqMfj75/EqkriUnyZFLdsi5w39XbcsuUYfPmmDHq//eqKG5Eifr/f/JiHdE9mdIR+WGY35KYMeMXad++g/z888/qTkrfkmHIK44Az+lP3H///fLBBx/oOcMtASsPHHP0sVq2BQsWxT4iU6dOlXHjxuk55MFMebR5KSypm0wKB1588UV5/vnn9ZxhmeoN+Pzzz/X5J2wY2pCbbrpJ5s2bp260UTY+5QjmgWHMK9JmGAA3gbYN9z2uMfj111/L3LlzNYw6rD6eWz/rb8tq07zvvvu0rABl7TnIuMSBBx4s5cpVSI3avPXm27r2i0Acpm9p0ydtOkVhMGpypL0BAkueuJmBKVOmyKWXXqoPLIAwW+98kBYtWiQ33HCDfBPLPfbwI9KlU2dp2aKF7L///vKvVq2kTZs20rp1a/nss89UhjpAoEuXLnLYYYfpOYCGiJ2I/4ARWNy57bbbyu+//65uyiFffhzfbcn7yZaN8pnigAXJwJ/o1q2blpFIkrW0eQH9/BWFRM+ePaVjx44pgzRJlmQZ0AluGXX+jzzyiLptGPLVvn17OfbYY9UNJOmx50Wh1efTl/PdSXK4v4DXX3stZdTU22ILWThvgfrnrYjqzBg1Vo8ywbBhR/Lqq6/m66ST4jNvDIMboD9JwHDceuut5ccff1Q3y+HLUg8A3dU22VTOOmuouml83X333dKvXz89hwzi4GjzSf1+GoWh1eHrQVrA2LFjZfPNN5e3335b3ZRjfmyeQGDWrFn67F955ZXqpgzRq1evjM8bMH/+fLnmmmtk9OjR6gasDOWAu+66K5U3HPEs77vvvrL33ntL27ZtpVXUvuH84YcfVhmAOlhGvwx+ndhyEtdee600btxYZs9OLzxnOM9Bxg
Vmz/5DKlWsEsVNjzr++OPPUn+rhnLnnXepG3Fs+n4erF9xGYyaHMnKTwoLLD5xQwNvvPGGNvSXXnqJuv0bnXInn3yyyi1bskQmTZwol196mUz+cpKcMWiQ/soYPHiw/tLHyMBff/2lcRCfnSoapho1aqTced615agNHzacE5g+2GeffWJXOo+UtXqS3LnQxrHMJIv84pyd5h133CGbVKumxh/Ae5fyNj5p/VVnFKc49zyBvKCuMWIG5KKT1+Xpp5/W6/zDDz+o24bh1/Gmm26qHQDg6wBYFpBlI4mkMJDxmN+i1gd0MR2QuOaqq1OGTe8jzfqayAjIw+JLo4PTUI4oV34SuN427/Y8E30ZlBGYGD1X5cuXl7feekvdWCuTSR/8CfcMl9PRI4BBGOU46aST9JxxSKRp3aCfRiYmXRerz/oDqKPq1avrjyIA/pRN0gXCD8CIFa7XhAkT1E15ol69ejJs2DA9Z9o4AjBqELd58+bqBpguCTz11FMqN27ce7JkyVI597zz9FogvzBc8cMKBiL0YTTW14E0AetOoi//6aefaroffvihuvGcUQZIigs8/fQzUqXKxjLvT2eYr8JoY4RX/vda5F9Vf5wAlGdc6MlU38VhMGpy4Jqo+MD85A2PzgsPVvfu3dUN5EUNOsgOGw84ZB579FF1z4uHYH/+abpceIFbV3DqqafqlAfhrp/7yYjOEPFTnWEUputyIv4dMS/KjzVqVAbpR27gu+++0/hoYAB7b/BhzebOhZC1enHO9H051g3rB50PjIj/u/FGdbM81AVaHUm0MjYeyfwk+fMIIC/oQLBAFmBD6cezRFzI0HjZcacdpXfv3noOf6YBXH31VVJ7s9qp9HBkuJ9/ukEAo4G2wfXJeMgHjlZ3QfTzAdhwov1+++m9BN5/z/3OM7pNMVJj5UE/fyTAcJ77cXx3EilDdOnSWTp37hS7HCjjE/GJgQMHStdDuun5ihUwxvR0NaMGYNqoK56TfhqZ6F8XutN63E4cLmA+77xzpU6dzfQcsJ23r8uSOOigg3QUGLCyAMoIQ9BOC1EGgEGD54HhbNtAAHKbVqsuffueoO4FCxbqEcBOOeQVoyk0PADmnYSOJDeOlj722msv6WxGmthmUI8rh6tLAH5Ar17HyDFHH6fnGDWMvRVt92snXbt2jV0ONi/Z6ruoDEZNDmTFl3TlB+YnsGLFSmnQoIHUr19f3QAMDZDYaaedpPW/WsWuCPFDdOXll8stw4errnbt2ukvGQBu91C6h/GqK6+SCuUr6BocQI2aiFHL4s7tUxmDDyKBIWBMfwDs9CBjH1jfbf0KImST7jcfeWYBJs8fuP8B7ST/ZMMZ6bEjUdCfBIYxfZ7T7RP+SWAcAPPz5cqVSw1n2zLhPEkH/FAWjrBBB8oza9av6qYO4Ldff9Owhx58SN0rV+Z/VqGL6ZFME1MB6CAA+Fn6cXwSVtbGteW0/jxfGXews3/7XTbdpJqWoWrljeX7aW5EalVUDsZneRg3iZT1yTAbn4C/hZNz99DkyZM1T5zOsuWhviRgjROmaN59162dcdt9XZpYm3PKKafoOfMDXTZPFjY90pbH9/dJUAbAMw+jgqMpuM9AqysJNuy1eOpw0qRJ6kZaDIN+GDXXXXeduuGPvNJAxxZ9xMWoCIB2zcYfMmSoVKhQWRYtWqpuFuOVV/6nW/sBTOVy9IxlRRp+nVgyDcrTH3GAL+MfivyhZuOQSVi0aLE0bbqjfPaZW2e4ciUNHj3IqFHPqF6OtPo61wSDUZMDecNku2kCi8voZo8bP7yPAw/C7N9cR6jrC1a4h+/BuMP+buo0dcMIARbOmy9Nm2wnk750DU3lypXlpZde0nO3DiGtf8/d95SuBx+i51hkDB1gXtzwAIsWLop0fZUaQgdsA33bbbdrPubNS2+jBfyHlo0ImQsox/sNbquTsFtnly5eGpXBhXXu2Dky+lrrOaBli8gGDMAQPBplLDwk7C9WpmndNN4KA/yqbdmypZ4z/9DFshFY0P3NN9+kGn/UM6478MuMWVK+XIXIuHlA3YgLHUTLlq3k4IPdr0H33oz0s8q8M20bF1OQXB9BOV/eZxKSwlEOmweGW/24JgAWrOJeAnfZZVf1AxiH5WHcbGScbPRB4xGAUQhcfvmVsvHG1eTPP93CV6TP689rBGBR/ldffZWaWkSHf+ihh+o5DCSQaT7wwAP5jBo/P7j+IO9rhpMsI+uD/rmAz+0LL7yk9fzF5xPVjWcGpGFDoFxY3+fnBUBZN954Y137B7DecQTw/iGMehDwZ53dMvwWTf/pJ0epGzve8mJD4KcfftKwe+91I3acxgH223dfue3WkXreuWNHOfXfrh6Zrr0+oK0fS8gyv5Bh+S6+6CKpXLGSrOS9ED1HGLnmDyUA7d/kyV/Ld985AwUYPfplOfLII2NXOj8ggEXxFStWzDfVt6YZjJocyIco040SWAKMGp14VFMuvPBifbjHvjFW3bojJMLyJcul/Ebl5LxzzlE3Rmjw4AHjP/pIzj5riJ4DgwefGTVgL+g59MbPmMz8+RcpF+m4dfgI54FfSjEJ6KlcsbJUrep+QW+33Q6rrQ/45BM3//z666+rG2WwDQZZFFBX0v0GoPFCo4lFgh998JHUrrGZHNbtcA0Dqm+yqQw+40znQLm88l2EBiwy+kBXvu20MwGYti0Hz9kwv/POO9qQDRgwQPr3768viTv53yfrsUePHvomXAANJhZjXhBPCSI+ywSdABZNYlQOnQTyAuJtu4p0eyq7Nt9dDjusR+xCPtOBWJBavXrNqLOJR94i/bbu/PKAQFFGaoCXX35ZF2m2aNFC6wJAegDWc6EzBJL0WTenC4HTTx+UKv9ppw2KfV1Z/LhWh+9v5Rlm3QB2oXXteog89NCD0ratm/7aYostUjucgL33biO777Zn7HLl4/UD0Onvvvvu2mFhJA46zomeSxhIdkcaiDwAvlFDf6zBweJU6GIdYNcegXxDnuXhOY4AjCosQAZxP2JEA2vuMCpy2GGHy7BhV6gcgLrdZONNZcVydy+jbQH5g2D8+PGy/fbbS6VKlVJ5YYdsgZFgbEIgbHlw/yOe3UzA+3Xsm2M17JILnUGk2/hd0tKqZWtpvrMxauNb/M8//pBTTjpJ/ojfGv3fxx+XYZddpuesExyTmBTm+wFtIqMJU6Ep5EWJo82I8/bII49LjRq1dUGwq5cKkTH+soYtWeqm+Tk1Bf0ksOeee6YMXfjx+q0pBqMmB+IikEnhgSVA/DJ3z4CMGvWsPjh3jLzDecQPVv9+J8pmtWo7R4R8Iyzxtm4A14mAXvziYQPx8UefqO4xrzpjJNXhx/HPHDRIatesKZ9GRgsWvr399jh9W6tv1MyYMVP1cJg5ac0KgOH7Cy+8UK6++mrtREm4SYwWQM/FF1+sDTzAew16rF4AHQoa3aZNm0rdzerKKSefKlMmuV1gP//0c2S0bSQPPfCgurVcpnyXRY1hlSpVtAPDryhsscU6B+5usWmBbJxYp3jFP4wQrHHZY489tHNH54FOqUfPn
rqImkYNtp2ijtCZATByUE+sm/fff1/Dbx15q8ybP0+mTpsqZ599tjz3/HMazusOdO96qOywQ7PYhevg1j4B+GULPdOmfq9uvwy2HEwbgEHD60dQJkkHgM62atWqOgK16667arqYqgHQoZ933nl6fSwy6QNpKAJNm+6k+sCHHnS7Wqws82bph1u3T6Jdu7aaBq4hRlauuOIKdWNtEtFgq4Zy/LF99BxxoZv3AKZ1a0bPCIwHbGHGVmOsmcKWZsJvKwFMP8HYAKgLBiCma/CMYNEr6g736BNPPKHhANNO0omdQZiOPuqoo7Tz3GWXXeTEE0+URo0a6bq8bt266/tTiC6dD1TjgVCDBjvOIuAZgKEP4wxlxCgmRmOQbwJ5AfDMYJE63SwP8Morr2h9fhT90ALY2QN4PhHW/ZB4zWB8WZ7+r1sQP/UbNwL9D8SjsL8xUgIDI4YdTfbrg3VEf+QNtDKWzPuyZUulZo0aUTviro1O56tR49Ll6NYjjzwWXe+F0TX7NmoTB+nLHwnfmLH6YWziGrGO/HyUNINRkwP9myVwzTB+BuSbb6bqQ3TaKelXv0/8zM35vhz/OsDDDmMERo0ijutPs3Bai8PPL70wWvV88vF4ddtRGuykqlqpsgzsf6K6feThV13cCGEuGY3a4DMHq3v1srj0YNSgo0NDDaKR5JFEgw7DBp2L/eVPPWwkQICNJkZYfpmZnh4DUC6EvRGPIPmjNHXq1Mm3jd2C6TEt6wbQ4cCA4S9QdPAwjDBEfYN9iVwM7qZIrcuIGmTqAjDSg7VNSViFIXmIxtf13yedIls33sY5Iug6q1jX88+56ZsPPmAnks6/vRd83HjjjYm/wgn7vANYnIlf6OjECXybCXUC4Lrcc889eg7w2vl1aok0aNhgXQLKAW66aXWdkgOsHsskfZkIeQCGV9WqG0cGzdHqJk4//XRN9/tpziCrUa2GXBC/zM+mB2CLvcp6xhsAGZTJ1h3TtkYNdcGQhy5Og1hYg8/mgXFhUKHuubYFu+w44nj55WlDxqL5LrvK4d0Pd44oWzAeOP2DHxvIC+5pH7YcAPKNHwcwxACUl+G87zGtCMCfa5UwpVq/XgPZftvtNX3nKdGPtc3kjNPOcG4UD+KumE4uasfwHONZYzq2ji3pDzlL+PvxAEwxIr+XXRJPp6G9MG0G3kGz887pHVsWnLbz0wKpf+jQodru2DWOa5LBqInJC40jacOsOzB3Fqbu8CAAeGFXxQqVpc2+6eHQvVvtIx3adXCO6FnR9Rbx6AN2P61Ytlzd/oJYPlwg8OjDj+oD/NWXX6kbRlGq04/UHRL9Akd4yxYtZcSIW2XxYje0ik4WDaD+goJ7xSqpU7tO1Nn+W91qOMXpWBYFvg6/kWBHwPUwKPuqFa4DGBcZRQh779131c3GkMC7NBCOqQOMqHDLt03HpkcCeIkfXy42bdrUVGcOQ/GcoWfrOQwXdkYcicE7WQAdWYvyAt0AO8faNWvL4DMGy5dffKn+6GRSde1EZejgoVJv83qy6q/41x46ifj6v/Gaew3AG/F0pZ9v5BmdHBpXjAThCGMSBgoWfGPRKPxtGMvJexfAzhYu8LSdMH7Z41pgtKKgBZFJzwNlgZG3jtSygBgNIqw8afXwnNcuicDkyVNU9zPPuNEwrJMA8GFM+H/z9bdR3f4jlaPn77JL0luTMcKGNAC8fJJ5xK9wjPYAkGN+mCeQaVujhvcIDXSMkJx22mny8ccfqz/A8vigH3Yx8sWS+Fho3759daEuRiExWgNg4wEXr6LdaNqkqfTqGW+fj9woK36sAGPGjNG8YBoM+eQOI78cAK415OwCeIZj5A6jT9iaDfBHFaN3bN9RqlSsIrN/m6NuGBO6nmV5/OoJjB7FeVq0YLG2h4jrmL6etq4zkbIWrjxY7+SuPd4zg3Jff038wwTiCIqj3XzjTRq+daPGcnH0A+zbr53hmIf1VyY/PpE+gPYKo5u4RoCfx5JmMGpi4kLzSNow3y+wYBau3jh8qfe9NG++u66TADgd9cN3bopEf1m550VmzZgpHfffX7dzA+jAoQ9TE+mHK71lctTTrvHG9BKQMmriDnLR/AVy1pmDpdomm6hcw4YN5eef48W0UMH2MWp0Nt9scxnY321RTdqCC6Lzw69jHtGYg6wX+IP0RxyA8dk44Ejg5YIgoKMfkR4OFWNtEfI99k23O4IjNVyUujRq9LHGBaNMkMNaCruehukDNg/IqwW2w58Rr9u5dfhIOW/ouXrO+gcw/I40XnnZja5h6FzDUd8xHn7wYWnezE3jgI886F60pw0qihQX+5wh58iWdbeU5UvjhYzGqHVGTbnVjBrWGRZ2Yh0MOhms18EUGtbyYPoFL+9Dx4o3FyMM0x74hc2Fr9SF8nOtiNUNQBdeUudPUzCuJa873ZAjiQMPPFDr4sknn4x90vnIROq0+nwCjzzijPoffoiflzhZvDgN/osXLYmM5BWyceWN5cLzLtQwxKVRgzQAGA54VxOvGwwSwuYJZNq+UUP/UaNGSbNmzVK6MIJGMD3I2vJBrwXWz3G9Dq4h1tYAbnG/niqaN2suRxx6hHNEySNMZ1riewmbCzitCNo1V8wLgKk2TAFz1JJlBtyUWoXonnJTaEgDReXC33PPPk91fzP5G1mycLGe/+fOOzUM97Ua82CkDt/2+t//aDSS+a8r085EYO7cP3XEFMY9DD2AxtycOXOjPJSTqy6/Wt3atkbkDzjgzttuk0b10x9SHfWkM9j0R5OXH0sARg3WzNGogX9SPkuKwaiJyIfPZ5KM9fOZq0yS//pE1oNlklx+OqOGDxzee4CH5/ff58iWW24V/Ro+X/11QR86xbhjfPq/T8q2W2+tDxcBPWiEYdjwHS00at4Y437Vvz/ufXVrhx93tj7QqUCW2z91lCAWWzR/kQ7RnzOEnfnqDypwZ9RY4W2g6DxBrEPAglrrph/kuJaAOtgAsIFAZ1utWjWdsgKwNdgZNS69KZO+0jyPfsENfcMf4TRqLJ59zhmL1AXjygLXjelqnRod++yzrxqbwLlDz5Oz47fHWqOG24LZADKfdl0AMeOnGbLdNtvJHs33iH0iQE18WQf0HyiNG22Tmv7Tkbq4zM8985yU26i8jB/vDFXec9ZITAI+kYHpo0xg+XkNCJyTADp4vO0V7y8CbHhBpH7WLd5fAkOT65KYNo9JtGFWHgTsdRswYKB+5dtHnchA79DBvZNm2ZLlskWdLeTMQc5opU6rh8BUDe5bXGf7OQ4/D4A1avx7DYCBgDUx2HKdBF5PS+rHwnU8awAMEb7XyO2GS+dh73/tLQd2PtA5cAtFYTacwAgM1oxh7RDSAdBGgQA6aowu0fjFGi+G4TtIqI+XXoqnyiNvZJNvfn7s4cc0/KP3P5KTThwoe+1u7nkaNRFmTJ8pW0XXatJXbuE5wVEf1q9fJyCuFY4ARrROPvnf0Q+0xtG9VU/9AKaD0aAKG1XQHw6KKBrCOFpk8c3kKbJJ1arSYo/0InJ7nf08AXgRat26dVNTdTZ/a4LBqImICk5i
Jhnrb5lruJXJJl8WactomSTrEw8Dt5TecMON+i0RbNVt0mT71Fo5nb6AI5IHLjjvfDmsW/pFfQAazCTdwKSJk7RBQUeogDGEhi+K8+knE1zHGwMNG2RpaOgIUZyPn7532y//c8d/1J3JqMHL3fBKdizIxXQMCbfvB7lsC3YBvnjwmWeeUTdGatQgi42632bNkvJR+K3srOGvdfa3vouCow0A3mMDXfZ7NlgngcWz/AWKa8dGiusCPvvsc20c+QZRrAU4rJvb3RAJqyzw22+/Sfly5WTkCLfTzBqP30cGAL9TROyx2x7Svau5lk5U0WH/jrLXXm5rOKCjPXGZh984XI0aLub27zvWHcvB/GEKAesoACvPI+PaeOm6cDIARoE4coZwGz8TqZf6CIwMYq0R4cfzaXXwaP1JonnzXfN1bMCRR7opyanfOqMMwDRN13ibPACdAK4pdhtZDBkyRNdM2JEsPw+ANWoAjBDy5YcEPjOAqVEChhKuEdcXQa+tXwBrnPAmX35aBaMRMI4IGC00hnsc3kN2bLqjc0TAm5uBn3+ekTJKCSx633nnnWNXug4A/AhBmQmUkeF4wy7q84sv3FofGiGc7hn/4fjoGS0vR/Y4Urasu4W+ER1A24N2hMbGq/97TXbaoZlbXxbjr78iwy5yQp+tX9aHJesdxsScOX/If+66Rw2b2PZKtWU4bt1wazk2foEeQINm8qTJ+i4li/pRXR979DHOYfJgadPHe3VgIBIwTnN5RorKYNRERAUnMZuMDfNlrNsPSyJl1gcmlQ9MkvUJOdzwwBtvvKUNA/jaa+6DfytWYMQBL8uC4eEemAM6d5Hrr3FDxBid6NenjyyJ14mkOvuI7EwxwlJt42pyBXdExB0j1uVUr7apfofn3yedJAOjjgVp450TCqiKHnQ+7NyaOWH8J+qG/qTyFAX+Q8+GAuB2Ua6nQTjqg+EAynD8MfE3keLyLV6wUBrWbyB1ateWAf1P1PJViAyOrlHDrYjjY1gd+s8/342MIS+ukUrrP+CAg6RTpwNil8gtN90sFfEW1dnp4WWi2Y47yckDB+q5HaE5JmoUkQ5+YZ911lnameMX+k8//qThuhA4vmYAwk86yXWIWh8mDLviGjVoHNUZ6snVia23JAL4RY/dZwD8GM+S8jjHNfHdAD7lwBEf+vNIwm1p9RDYIm7fb0JZq6cgXTjauNYPoyr4JhO25eIaDh16jo7Q4DqMHv2KynBK4tCuh8o2ZmE2n0u8oh/yMD7wCxwvoISb74RiPpguCfhGDdYwIS4WruNtuTBmMKXDD8YCDz30kMpw5AWgftQFgJ1I9tMDmEZEHC5kRifOjvzSiy/TVzos+NMtWuV9dPHFl2gc7JhCXtAJY80MP+wIgz4uhmLPPVuk3prsGy3YQl65clWZP4+LiN1iWhCYN/fP6Bl1HzS9KB4l5VpAa9RcfMHFaoQBP/7wk/TqdbR88gm/Y5e+3tRt3X7dAw898HDUBjRK6WebBmBKbleznZw/KA7s4qZCMZp85hln6M5HcO4f7iPANm1Lmy5exggjEGD+1iSDURPRbzTIgmRsuJWxbj8siZRZn1iUMlIWWLhwkVx6ybCoIXRvigXQaaHh4IgBRnWabt9Upv/4ozzy4EOyy07N5BMuNERjBaIzRgNgRlkO6nKQ7NNqb+cA8PxFfH/cON3W2LZNG2nftp024IS+4RUNQPywX3zBRVKjWrVIp1OKxjHTA8vGxTIpDG7Wla0zhgMYFUBnTLfKmqk24MQT+uuLCFOIDRLUzVlRw9R+v7bSLirjbfEICoARHwAjNNiFhV/fvBYub04HMGTI2TrETkz79ls5/ZRTZX78ojbmDegbGZk6PQjgOsSdyE+RUQaDAh+5bN26la7JwDA5wOlCdqRfTvpSG1Z2nMiXjtbFwKgCpqcA3BN+vbn8569HAL/o/e/0+LTxeU0AhgFYo8Q3vFo5ew2TaPOCTwvgTdqEr8PPg0/mkzrpT3kA30Jz9fiy3BX9asfIF76mzZepISpHBR649wGpVKGyzJjudnrxWmBhOd7wjPeO4KWKmELl7iOmSfp5Qjxr1GCqDUY6FkTjA40YoeJX3BkHMpgmxO4s7kqifpQNwDXkDjsAo0moz/QXy/9JjfR+HE8NvRH/UOJ9hFFZPO+YSkO5sACaPxyQjl2bM2vW76oDdQjAP86uonXrfaIyuVEu+LtrkJ66Ah6MDLxLL75Ylsafcvk7EsR9DwOJU64HH3CwPPXEU7r2psk228nDD8XvbyoEWFfAvf+5Vxpu1TDyVKcaN3nxlNitw2/VMv05x71MlDI//Thd6xfPKdZQ4V7ne2nSP3jS19wS4Gdv/I/S+rIlyWDURHQ37eosSMaGFySTFEZaHesbC1/G9DeWCNcwRA9C9BCSwLfffifNd2muow11a9d2O6Ai4JMH+tmDqBPl5w/+iRolvsTviUf/qw/ar1wDgPx5aVrow4vGJn7Qge22aSJ9j3O/Ptywcf5y2MYcoJv05cCkerJh7FgA+jEO5QB0sCjft/EC4NRi4gz4Ow7nOgd8M4vvHGEa1I+0CN1BYfLk1iik8wPwa9Rff+Xe5YJwNN6ZgDKysWR+sH4Bozj8lhfy40brRL6KpxPHvP6muvkLloSeTPWENRg0XBGOdG3cTKQuAHHQEfqLrUmk5fv59zi2xleqVDG15iBp+pRp4mjPrQxBf5aV5cVoEupq5sxZ6rZgmnqfR5j962zZpEo1ufMOt04FYdCbCagHPz9++taogXwm8DrY9GD8+Iu3fRmc+3mkMWLbje233yEy/N3uKBg1fhyL1LWI6oVrYvBDy9Yjiof2CYAfwh59lGvj+Fy7NYN638aGPaHGTBSGdFj/MDb2abWPbj/Hou3PJrgRo7gq9TMUMNzw48MnRpoQxnuSdX13ZIQ1qt8oUq7OKK3oX3w+57c5mu/77r5P3e6FhHFiCeA1ykRec9zbWHvE3YRJsiXNYNRE5MPgM9fwTHLZwixtuI2zoRIPBEcG8GygQclPDUq9n6TP8cdHv2S2kYnxULEaGFFd8ujOo8Y1Ig2TLepuIWdEv/4AK8d4SJgNMvKEeGxwXo5+6SLdad/GH0NEfNArAx9snoNJ1xj+ONqGwsr551ZfEgEMx3NbK4DGFPnUsqFuYfDFbi03zuO46Hz4jg2mS91snHFUxjqoh0YNR42A3ZrvKv369I1dTqfTYfW6dECWl8B2UBg2AGQRTvQ8oofsvmt6DUYUvJoee870AKwBgaGUJJ+NNn+fffaZGjVAQQ29JUcb8Zp53EvpLyM73UgjU91Y0h9AB8x1ISwjjwDWiOD9N4Q+S1Fc5Fs71vh68hkZetbZ0rhRY+eIwLRScUz6SWT+mQdr1CSVI+kcwM4mvD0bYLgl9Cf5OdkoLConjBtMXwMPPOCMEo4sWp3ML86ZB3ceNzoRYBTxg5P2xxYwaNCZurEhLrKmb/U5zyiQlyVOL8W4jZk53b3cs9sh3aXZjs3k/tQnQtzFmTbtO92xh7V12D0G4hxTodjlhx1+dl0ccO/d90rjBunr6RY
lp/OC0c4m0Y81AmH+e2hsnWS7BwgsesdrEgDET5ItaQajphQx0w2yodHe/P6DADeBbZtYhAZg+qKV+cillUdHqw9n1GBjGgl48403tdHgt6KcTP76t9cD8QHowPoOrANRt5G3RLpkUjhp08h2/a0e6iVsGHQAn0yYIBUqVky998OWzc8TwwAMueNFgFY/4/jxciHwyfgJUV2Xlw/edy/Hy0UXy4FfnXZaBg0pCPAdJzAsAHQ8vt6ktOhHIC36+cymB8AnB/gZCGzZt+Hg6tcUv/jdvfRN/JJJftPKdb5u5MDmgTqYPo/0J7AVGp0ZQBkbjtf/2+8yMZx6cORzAsDow2cuuK2ZMpTPhcgHcffdd+u7ZAB2iNl0AZgWwjSh/94gy1zy5Oo1bZi0aNFSjuB6uQhW1q9f+gEYwcTLCxct4qghro8Lw3eRcD3TX+/PbxBYPfRbjbFRg1cb7LKTW+wMo67aJtVSL6+z5SgIMFb5vNx1+x2yRZ26ou+XAfSHSPp6L12yVEdER9wSL+yP829p6zqpzuFH4H1PWH/DEVdfdk0xGDWBpZh4qNLEQ2+BuXiOKGC+vUmTJrqDCIAs15mk6XRw+gNzxZgn5q4N+FMW6duHljj3vHN1MSHBOO6XbjrvSMe6rb/fGMAvkzxkKU8Zylv6cQB0clirwK+RQ47lQ35JlhfAGga+kC+Tfp/UST356a7Z8OEjZNddd9f3ZQCry+UngGsJA5KjD+wIAeRzt91317fIAi6PqzfC2cpAf8ZPopXF0V47AC/j4wv3GIajJeUdXXrYOVaz5mZy8cXuGz6A/uqPwp2cu1eZLkA36AOLY7Hjx9+BhPTphvFnf7379cW8Qj/uCwDfQsLnOFJflTbyzIv1s6Q+AiML9jtgVsbSxkNHbg2apDTh7+uhW8sSh4Ps4KEX9fXEf900UbZygMBXX03Sz4Hg0wwAjFOOnGAUqO1+7aI2xW1A4BZvxme+s1HTiatryOChulCYwKctrr76Kj2HrC2fnwbdIDBr1iz9AGa7/fbTd80c2rWrPPyg+4yK6orbAeC999/TZw7r2ACrK1cCGHlEe4zdmgCvN4n823KUJINRE1imyAcXDwm/ogs3gG2ffIU95ZJ08AEG0HH++ttveg7/RPnowQNw/smET1K/PJCHTGnQP+mhzfYg+/qsrA3L5M8wAiM1XGCZL37ckPllJnz9fhq50v6qHDfuvdRC0CRZSwDb27nFHflhngAsKuYIDeDHJ5PybnUx3MpYP/rjaOOBOCcowzArY88JvFyyT59+savowDocfIYDIwSZFj0jXfgRfgdj5XBkfD4neK6SXjUAuaQyWzIcgCFh3ypr68bqob+tL4YlEXJWF/1wTJUlliEBdPb8+KiNm0QA9+7Uqd/qOcvO+xtvHh8f74QE/PiU92nDeQRgyNMABXC/cyt9Np0MszIwdD94730Z/+FHurbt9VdelaneGjC2AwB2n/F7ZqzHXIn0AOjgFnv4Q4+fv6T8lgSDURNYJsmGCbANFcBG2z40OKecLw/ATVmQbsaDThsnU8dg6euwtDLW7fsnhZHZZOBnAT/kg3WSLa4NZ7mTZCwL0mWRqz4iKZxAeXLNYyYirmWSDJirnCXlcCR6HHGkfktn2VJ80XqFzJ07L+J8Hb2Z9yeO85QwAkAYpZiGgWGBzg5vasZWZ2yzxcvhYNDgtfz+y+9yzaMl7w/rJnx9CIOfTytDwp/A9bLp4Jy0fjwHs+lOImV9PZZEUphPm3+6MRLDfFnQz9dBMtzK2TqgMQkktTuMQ9q41p/MBKTDOFYnQX8/PZJhvgzOCSvvM1N+i8tg1ASWWdoHybp5tA87CH+S4TwnfT8/ni9v5Xw/X7YgHZaZZK1fUjj9k86RR9IPs6ReMknGZzbZpLohc9WfRKaZq45sctl0+X5023JlKiNliSFDhqoRgk+A4B0xtWptpkewbt0tlHU2q6svdsNbWLGmpXbt2lKuXDmNl4lcKwPYdAvLpHJk04UwyyQZMtt956db2Hz4tLI8tzptXnLVS1mrx+rOVU8SEd8ySSYTk+Il+ZFYQ+eH2bzzvKB7Cf5kpjDf3zIpbyXBYNQElmnm+vDk+gBRjjrh9vWvqYfRZ6ay0T9TeCayHsgkmcIwKX26GeaHF4YlmddstHn182vdmWR82vxSFsAUwlFH9ZLu3Q+LjJDD9W3ZXQ/pno/duh7q2C1yd43Cux6iR+ywwkvqQLywEC+jw8dJQazxwhupAT9dW38F5bs4TKoXmzbPfbd/XlJcE2X1y+jnOSk817JlkrP+PBa1bFjzRn1Wb3HolzmJCLfplUS62RiMmsAyzVweqqQHONuDhTDqtOc23LrXJJPKxjInhWUj841jLmUorH6fxY3PfOaS17XJXMtl841zxFuTYHrMH93rqv6Qrp82/ax/klxxiPL71yjXa5aNvl4/z36YpZVLYpKcH59H5sOmVxj6erMxF5mCiHza9EpCZzYGoyZwgyUfNNs4+I3Fmn4Ai0o/nyDz6vuTtixro1yZ8lFY2rwmlbssEGUA1xSon+mVpjryr5nNq5/vkmDSPZKpPjL5Z2MueaZMLrJgQTKFyWeusoXJV2HSt3pxnks6JcmN8vLyJDBwQyQfAiy+s35WJpMfSR1JYWuayHdS3q2fpc3n2shzpnzkQluvNq9+mUsjmXebb54z727rNt6h5Oi78/LSZffJOrCELDoPHEkbti6ZlA+Ug0eelxQLW+7C1lEueaZMLrK5MJsOP/+FLU9BZH3mqtfm1Za/JOohF5YKoyapwnKtwMDAkqR9CK2fdVuurQe1sOQzZZ+t0ppXS9Y/85pLO2DLWNpoy5KfkV9kvOBdJ/w1iy9Gp4gXo5m6sDr88uKcOlK6Yn8ck9Nfe/TzuyEw6bqtbbLeN7T6L5VGzYZ2EQJLNzM1TLxHS+O9ymfIcm00sEgnF79MLGxnYMuXFL6umbkskV88IoP3HuGdJDBGrCzPrbECP8izzPTHu2TwbadrrrlG399D2XR6q3Nt1dvaSqc0EXVPJoWvabLOLZPk1kduhLmydU08pKA9pzswsKRZmPuLD0pSWFm7RzOVo7TRr/Oyku8k+mVJMy+6f1znd8stI3SHk+0EIUO3T4Th3qMbXzvHFvBDDjlE2rdvL5tuuql+1BRh9h7171e4S8s9zLyUlvwUh+tTWcoiS51Rk+QODCxJFuZ+y9wpuXhJ/oHFp63zbNegtJN5x71CI8QnPh9x3XXX67nKrnSyON5///1yzrnn6rfN8OXlBx98MF9cfGUcr6P/9ddfU354OR9e1Idz/z63eYN7Xd7DTD+JSfJlgetLOcoyN0oaplrbxIfgwEzuwMCS5vp+v61vZUJjleRf2gkjBdeBxg2+iYPX3dM97t33Zeedd025/1q+XI+TIxm8m+aqq67Sd9BgSgnGytSpU1Oy4F577aXvqrF+JF6Pb/Ni7wfeH+v6HknKkw0vi1wfylCWWeqMGp6HGyNwbXJ9u9/CM1R6CAMDxki7du2kbdu2Uq9ePTnrrCHqj08mjLhlpJ6vWuFklyxaLG
3b7Cf/G/2yun1avTB68EbhTp06ycMPP6zfgoI/DRqcwyD074VwbwSurwwjNYGBEcP9FrgmiPsKhkWzZs1k0KBBeo5vOU2aNEmWLFkmV1xxdeRe5IyPWPaeu/4j5w4dqudJZPtI98033yyNGjVS46ZKlSr6IUH4+/mwx9JIlispLHD9Zkle9438OcB1QfyqADO5AwPXBHmfhXstcE0RjSxGSvDxSY6ojBs3Tv2WLl2mR3DF8r8io8atwbj6iiul5V57ycD+J0qfY49zPP54/b7T9Tdc7+RX/CWLFy/WNKjj559/lmrVqulnE+hn7/HSeK+X9vwFrh2W5LUPRk1gYGDgGiLaMRoYzz33nHTo0EGNmy+//NIZHX+hnftLVkQGzcrYqBl+001y8sCBMjMyUsZ/9LF88vF4+WT8eF2Pg2ksyEA39VoedNBBSrrZllr6eVzXLO35C1zzLMnrvxGH/Moqly1blmJSeGFYUnoCA8syw3NQMkQDi5GaN954Q15++WU9f+edd6RmzZry6aefqtuPAz9MH2GLN9bHwJ3EuXPnSf/+AyJD6XmZP3++yl577bW6tfvzzz9XGejjtbT001zXLO35Wx+YS7361yGXOKWRwagxLMsXMjCwpFiY5yA8M9kJ4+L555+X5s2bS+vWrXW3EkZs4L906VKV4S/M5dH5suXLNOzRxx6Vgw85RMaMGRMZMHP1C9+LFi2SxYuXaDh4+ulnyqabVpc99thDdtxxB9lll11kwoQJGpZ0TUrzdQr30JplQfWL8CQmyZZ2BqPGsCxfyMDAkmKuzwHlwjOTmagbGBlLlizRxcF29IXhKdnIoAFxjnAYKH369tH1OGecMUj69Okjd999Tyo++Oef82Ts2LHy8ccfp/zCdQn0melesPdKJibFK83cyM5llUXi1w6ZFF4YlpSewMCyzFyeA8oUJFuQnvWVbGB5ztEY0IbZ+rF1aePMnj1bvv/+e11PM2fOH+q3dClk0jpJxqXOwEAw6Tnk/VYQ/XilncGoMSyrFzEwsCRZ0HPAcMskuQ2d1njJ5ufT1qlvtIDYNeXHCdywWNhnL0nO6shGP15p50ZJw01liRjWJZPCC8OS0hMYWJZZ0HPAcMskuQ2dbGQLWz8F1+lSw6TwwPWdJfHcUUdBTIpbmhmMGsOyehEDA0uSBT0HDLdMkgssGlmnNIrob88DN2wW9tlLkrU6stHGKQsMRo1hWb2IgYElyYKeA4ZbJskFFo2s08IYNb5s4PpN++yBSTKWvpyNWxCtnrLAjZLm0MoS8VZNMim8sCwpPYGBZZUFPU8Mt0ySCywaWZ9spP3wJBZGNrDss7DPni9v3QXR11XauU6NmpKotLJc+YGBpY32ecr0TGWS4bkfTlIu0HFDqZtw/Uuehb13kuStXzZaPWWBGyUNNxWXeEkUmBRmmatcNlJHcfUEBgbmf54yPVOZZHjuh5OUC3TcUOolXPuSJ++dXOs2Sd76ZaPVUxa4RowasqDKyeRfGGbTHxgYWDja5ynTM+XLUM4ek2h1bIhMqhMwSXZ94oZQxrVJ//7JpX6T5Gz8bLRx1gaLm2YwagxLSk9gYHG4ru9BPgeZ8mHDrZw9JtHqKGssiTJYHTwvrs7ADY/23sn1/kmStTqy0cZZGyxuuhslzaGVFPFKcMtM4b5/YZhNf2FZUnoCSzdL23W2ecklb5TJRbawLEivDffl/HOfDCuLLOlylLS+0swNpZxri0W5d5LiWL9stHrWBoub7kbWIitpLliwIB8zhfv+hWE2/YVlSekJLN0sbdfZ5iVbvphvn0myRWVBem14NjkwV7mywJIuR1mvj8KwJOstsGj3oo1TGCbpWtMsXtqL5P8Bg8+uoDXWZpQAAAAASUVORK5CYII=" 64 | } 65 | }, 66 | "cell_type": "markdown", 67 | "metadata": {}, 68 | "source": [ 69 | "![dp_mdp.PNG](attachment:dp_mdp.PNG)
\n", 70 | "_Figure 1. Dynamic Programming for evaluation._" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "metadata": {}, 76 | "source": [ 77 | "We can think of $V^{\\pi}_{k}(s)$ as an exact value of the k-horizon value of state $s$ under policy $\\pi$. And we can say it's an estimate of the infinite horizon for state $s$ under policy $\\pi$.\n", 78 | "\n", 79 | "$$\n", 80 | "V^{\\pi}(s) = \\mathbb{E}_{\\pi}[G_{t}~|~s_{t} = s] \\approx \\mathbb{E}_{\\pi}[r_{t} + \\gamma V_{k - 1}~|~s_{t} = s]\n", 81 | "$$\n", 82 | "\n", 83 | "This is formalized in the above equation." 84 | ] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": {}, 89 | "source": [ 90 | "
\"drawing\"/
\n", 91 | "_Figure 2. DP tree for evaluation._" 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "metadata": {}, 97 | "source": [ 98 | "We can think of the DP approach for evaluation as a tree where a state is followed by an action which can lead to a variable number of other states (which also have their corresponding actions). The point here is that we are __bootstrapping__. This means we are replacing the expected return by its estimate. " 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "metadata": {}, 104 | "source": [ 105 | "# 2. Monte Carlo (MC) Policy Evaluation" 106 | ] 107 | }, 108 | { 109 | "cell_type": "markdown", 110 | "metadata": {}, 111 | "source": [ 112 | "Okay, notation is about to get a little nuanced. But this isn't the end of the road! Most notations can be omitted. They are there to highlight something. With that, let's take a look at the value function under a certain policy $\\pi$. \n", 113 | "\n", 114 | "$$\n", 115 | "V^{\\pi}(s) = \\mathbb{E}_{T \\sim \\pi}[G_{t}~|~s_{t} = s] \\hspace{1em} (Eq.~1)\\\\\n", 116 | "$$\n", 117 | "\n", 118 | "It is the same as before, but now we specify that we sample a trajectory following policy $\\pi$.\n", 119 | "\n", 120 | "> __Monte Carlo Policy Evaluation__ : a model-free (model here means the ground truth dynamics model of the environment) policy evaluation method\n", 121 | "\n", 122 | "Requirements for MC Policy Evaluation:\n", 123 | "\n", 124 | "* trajectories/episodes need to be finite (need to be episodic)\n", 125 | "* no bootstrapping (just sampling)\n", 126 | "* does not assume state is markov\n", 127 | "\n", 128 | "Let's take a look at 2 different monte carlo methods for model-free policy evaluation.\n", 129 | "\n", 130 | "> __First-Visit MC__ : a version of monte carlo policy evaluation that updates the value function with the first time $t$ that state $s$ is visited in episode $i$\n", 131 | "\n", 132 | "> __Every-Visit MC__ : a version of monte carlo policy evaluation that updates the value function with the with every time $t$ that state $s$ is visited in episode $i$\n", 133 | "\n", 134 | "Below is the algorithm for First-Visit:\n", 135 | "\n", 136 | "Initialize $N(s) = 0, G(s) = 0 ~~ \\forall s \\in S$
\n", 137 | "Loop
\n", 138 | "$\\quad$ Sample episode $i = s_{i, 1}, a_{i, 1}, r_{i, 1}, s_{i, 2}, a_{i, 2}, r_{i, 2}, ..., s_{i, T_{i}}$\n", 139 | "
\n", 140 | "$\\quad$ Define $G_{i, t} = r_{i, t} + \\gamma r_{i, t + 1} + \\gamma^{2} r_{i, t + 2} + ... \\gamma^{T_{i} - 1} r_{i, T_{i}}$ for the $ith$ episode at time step $t$
\n", 141 | "$\\quad$ For each state $s$ visited in episode $i$
\n", 142 | "$\\quad\\quad$ For __first__ time $t$ that state $s$ is visited in episode $i$
\n", 143 | "$\\quad\\quad\\quad$ Increment counter of total first visits: $N(s) = N(s) + 1$
\n", 144 | "$\\quad\\quad\\quad$ Increment total return $G(s) = G(s) + G_{i, t}$
\n", 145 | "$\\quad\\quad\\quad$ Update estimate $V^{\\pi}(s) = G(s)/N(s)$
\n", 146 | "
\n", 147 | "_Algorithm 1. First-Visit Monte Carlo_" 148 | ] 149 | }, 150 | { 151 | "attachments": {}, 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "
\"drawing\"/
\n", 156 | "\n", 157 | "_Figure 1. Bias and variance and MSE._" 158 | ] 159 | }, 160 | { 161 | "cell_type": "markdown", 162 | "metadata": {}, 163 | "source": [ 164 | "Simply put, the bias is the difference between the expected value of the statistic $\\hat{\\theta}$ and the true statistic $\\theta$. The variance is the expected squared difference between the $\\hat{\\theta}$ and the expected $\\hat{\\theta}$. The MSE is simply a combination of these 2.\n", 165 | "\n", 166 | "* $V^{\\pi}$ is an _unbiased_ estimator of true $\\mathbb{E}_{\\pi}[G_{t}~|~s_{t}=s]$.
\n", 167 | "* By the law of large numbers, as the count for First-Visit MC approaches infinity, the value estimates would also approach the expected value estimates under that same policy: $N(s) \\rightarrow \\infty, V^{\\pi}(s) \\rightarrow \\mathbb{E}_{\\pi}[G_{t}~|~s_{t} = s]$." 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "Initialize $N(s) = 0, G(s) = 0 ~~ \\forall s \\in S$
\n", 175 | "Loop
\n", 176 | "$\\quad$ Sample episode $i = s_{i, 1}, a_{i, 1}, r_{i, 1}, s_{i, 2}, a_{i, 2}, r_{i, 2}, ..., s_{i, T_{i}}$\n", 177 | "
\n", 178 | "$\\quad$ Define $G_{i, t} = r_{i, t} + \\gamma r_{i, t + 1} + \\gamma^{2} r_{i, t + 2} + ... \\gamma^{T_{i} - 1} r_{i, T_{i}}$ for the $ith$ episode at time step $t$
\n", 179 | "$\\quad$ For each state $s$ visited in episode $i$
\n", 180 | "$\\quad\\quad$ For __every__ time $t$ that state $s$ is visited in episode $i$
\n", 181 | "$\\quad\\quad\\quad$ Increment counter of total first visits: $N(s) = N(s) + 1$
\n", 182 | "$\\quad\\quad\\quad$ Increment total return $G(s) = G(s) + G_{i, t}$
\n", 183 | "$\\quad\\quad\\quad$ Update estimate $V^{\\pi}(s) = G(s)/N(s)$
\n", 184 | "
\n", 185 | "_Algorithm 2. Every-Visit Monte Carlo_" 186 | ] 187 | }, 188 | { 189 | "cell_type": "markdown", 190 | "metadata": {}, 191 | "source": [ 192 | "This is the exact same as the First Visit MC except it is done for every visit. In this case:\n", 193 | "\n", 194 | "* $V^{\\pi}$ for every-visit MC is _biased_ because now a state in one episode that occurs too often will be given a lot more priority. \n", 195 | "*It often has better MSE than first-visit and is a consistent estimator." 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": {}, 201 | "source": [ 202 | "> __Incremental MC__ : an approach that can be layered on top of the previous MC methods to incrementally update the value estimate function\n", 203 | "\n", 204 | "After each episode $i$
\n", 205 | "$\\quad$ Define $G_{i, t}$
\n", 206 | "$\\quad$ For each state $s$ visited in episode $i$
\n", 207 | "$\\quad\\quad$ For a time $t$ that state $s$ is visited in episode $i$
\n", 208 | "$\\quad\\quad\\quad$ Increment counter of total first visits: $N(s) = N(s) + 1$
\n", 209 | "$\\quad\\quad\\quad$ Update estimate $V^{\\pi}(s) = V^{\\pi}(s) + \\alpha(G_{i, t} - V^{\\pi}(s))$
\n", 210 | "
\n", 211 | "_Algorithm 3. Incremental Monte Carlo._" 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "metadata": {}, 217 | "source": [ 218 | "* $\\alpha = \\frac{1}{N(s)}$: every-visit MC\n", 219 | "* $\\alpha > \\frac{1}{N(s)}$: forget older data, good for non-stationary domains (when the MDP is constantly changing)" 220 | ] 221 | }, 222 | { 223 | "cell_type": "markdown", 224 | "metadata": {}, 225 | "source": [ 226 | "The general limitations of MC methods:\n", 227 | "* they require episodes to terminate at some point\n", 228 | "* they generally have high variance\n", 229 | "\n", 230 | "MC methods:\n", 231 | "* don't bootstrap and instead, samples\n", 232 | " * converges to true value under some assumptions" 233 | ] 234 | }, 235 | { 236 | "cell_type": "markdown", 237 | "metadata": {}, 238 | "source": [ 239 | "# 3. Temporal Difference (TD) Learning" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "metadata": {}, 245 | "source": [ 246 | "> __Temporal Difference (TD) Learning__ : combines MC methods (sampling) and dynamic programming methods (bootstrapping)\n", 247 | "\n", 248 | "* the TD family of methods are model-free (much like the MC methods)\n", 249 | "* they bootstrap and sample (so they do what dynamic programming and MC do)\n", 250 | "* can be used in both episodic and infinite-horizon settings (unlike MC which can be used only in episodic settings)\n", 251 | "* updates for eery state, action, reward, next_state tuple" 252 | ] 253 | }, 254 | { 255 | "cell_type": "markdown", 256 | "metadata": {}, 257 | "source": [ 258 | "Set $\\alpha$
\n", 259 | "Initialize $V^{\\pi}(s) = 0, \\forall s \\in S$
\n", 260 | "Loop
\n", 261 | "$\\quad$ Sample __tuple__ $(s_{t}, a_{t}, r_{t}, s_{t + 1})$
\n", 262 | "$\\quad$ $V^{\\pi}(s_{t}) = V^{\\pi}(s_{t}) + \\alpha([r_{t} + \\gamma V^{\\pi}(s_{t + 1})] - V^{\\pi}(s_{t}))$
\n", 263 | "\n", 264 | "_Algorithm 4. TD Learning TD(0)._" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "We call it TD(0) because we take the initial reward (via sampling every-visit if $\\alpha = \\frac{1}{N(s)}$) and we bootstrap. Notice how similar it is to the dynamic programming approach and the MC approach. \n", 272 | "\n", 273 | "The __TD error__ is:\n", 274 | "\n", 275 | "$$\n", 276 | "\\delta_{t} = [r_{t} + \\gamma V^{\\pi}(s_{t + 1})] - V^{\\pi}(s_{t})\n", 277 | "$$" 278 | ] 279 | }, 280 | { 281 | "cell_type": "markdown", 282 | "metadata": {}, 283 | "source": [ 284 | "# 4. Comparing Approaches" 285 | ] 286 | }, 287 | { 288 | "attachments": {}, 289 | "cell_type": "markdown", 290 | "metadata": {}, 291 | "source": [ 292 | "
\"drawing\"/
\n", 293 | "\n", 294 | "_Figure 2. Comparing different approaches._" 295 | ] 296 | }, 297 | { 298 | "cell_type": "markdown", 299 | "metadata": {}, 300 | "source": [ 301 | "We pick different model-free policy evaluation algorithms based on:\n", 302 | "\n", 303 | "* bias/variance trade-offs\n", 304 | "* data efficiency\n", 305 | "* computational efficiency\n", 306 | "\n", 307 | "MC is:\n", 308 | "* unbiased\n", 309 | "* high variance\n", 310 | "* converges to true even with function approximation\n", 311 | "\n", 312 | "TD is:\n", 313 | "* moderate bias\n", 314 | "* lower variance\n", 315 | "* converges if tabular representation" 316 | ] 317 | }, 318 | { 319 | "attachments": {}, 320 | "cell_type": "markdown", 321 | "metadata": {}, 322 | "source": [ 323 | "
\"drawing\"/
\n", 324 | "\n", 325 | "_Figure 3. Data & Computational efficiency of model-free policy evaluation algorithms._" 326 | ] 327 | }, 328 | { 329 | "cell_type": "markdown", 330 | "metadata": {}, 331 | "source": [ 332 | "# 5. Resource" 333 | ] 334 | }, 335 | { 336 | "cell_type": "markdown", 337 | "metadata": {}, 338 | "source": [ 339 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 340 | "\n", 341 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 342 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 343 | "\n", 344 | "This is a series of 15 lectures provided by Stanford.\n" 345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": null, 350 | "metadata": {}, 351 | "outputs": [], 352 | "source": [] 353 | } 354 | ], 355 | "metadata": { 356 | "kernelspec": { 357 | "display_name": "Python 3", 358 | "language": "python", 359 | "name": "python3" 360 | }, 361 | "language_info": { 362 | "codemirror_mode": { 363 | "name": "ipython", 364 | "version": 3 365 | }, 366 | "file_extension": ".py", 367 | "mimetype": "text/x-python", 368 | "name": "python", 369 | "nbconvert_exporter": "python", 370 | "pygments_lexer": "ipython3", 371 | "version": "3.8.8" 372 | } 373 | }, 374 | "nbformat": 4, 375 | "nbformat_minor": 4 376 | } 377 | -------------------------------------------------------------------------------- /Lecture 7 - Imitation Learning.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 7 - Imitation Learning\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 31 | "
" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "id": "a8837110", 37 | "metadata": {}, 38 | "source": [ 39 | "# 1. Introduction" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "id": "0677a008", 45 | "metadata": {}, 46 | "source": [ 47 | "Some environments may have sparse rewards and even a DQN wouldn't be able to succeed in the environment. An example is Montezuma's Revenge which is a game where a character navigates a 2D world in order find a key and open a door.\n", 48 | "\n", 49 | "RL is good for simple and cheap data and parallelization is easy. However, it wouldn't be practical for cases where executing actions is slow, expensive to fail, or safety is priority. \n", 50 | "\n", 51 | "Problems with RL:\n", 52 | "* needs lots of data\n", 53 | "* needs lot of time\n", 54 | "* sparse rewards\n", 55 | "* hard to learn \n", 56 | "* execution of actions is slow\n", 57 | "* very expensive to fail\n", 58 | "* not safe \n", 59 | "\n", 60 | "__Imitation Learning__:\n", 61 | "* learn from imitating behavior\n", 62 | "* rewards are dense in time to closely guide the agent" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "id": "3296d6f9", 68 | "metadata": {}, 69 | "source": [ 70 | "# 2. Problem Setup" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "id": "796fc797", 76 | "metadata": {}, 77 | "source": [ 78 | "Input:\n", 79 | "* state space, action space\n", 80 | "* Transition model $P(s' ~|~ s, a)$\n", 81 | "* No reward function $R$\n", 82 | "* Set of one or more teacher's demonstrations $(s_{0}, a_{0}, s_{1}, a_{1}, s_{2})$\n", 83 | "\n", 84 | "__Behavioral Cloning__:\n", 85 | "* Can we directly learn the teacher's policy using supervised learning?\n", 86 | "\n", 87 | "__Inverse RL__:\n", 88 | "* Can we recover $R$?\n", 89 | "\n", 90 | "__Apprenticeship Learning via Inverse RL__:\n", 91 | "* Can we use the R we find in Inverse RL to generate a good policy?" 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "id": "2a1b5fd9", 97 | "metadata": {}, 98 | "source": [ 99 | "# 3. Behavioral Cloning" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "id": "96d0cab4", 105 | "metadata": {}, 106 | "source": [ 107 | "Behavioral Cloning:\n", 108 | "* the second your model deviates from the teacher behavior, it will have no idea what to do\n", 109 | "* fine so long as your data covers all possible states encountered" 110 | ] 111 | }, 112 | { 113 | "cell_type": "markdown", 114 | "id": "04d185e6", 115 | "metadata": {}, 116 | "source": [ 117 | "Initialize $D \\leftarrow \\emptyset$
\n", 118 | "Initialize $\\hat{\\pi}_{1}$ to any policy in $\\Pi$
\n", 119 | "for $i = 1$ to $N$ do
\n", 120 | "$\\quad$ Let $\\pi_{i} = \\beta_{i}\\pi^{*} + (1 - \\beta_{i})\\hat{\\pi}_{i}$
\n", 121 | "$\\quad$ Sample $T$-step trajectories using $\\pi_{i}$
\n", 122 | "$\\quad$ Get dataset $D_{i} = \\{(s, \\pi^{*}(s))\\}$ of visited states by $\\pi_{i}$ and actions given by expert
\n", 123 | "$\\quad$ Aggregate datasets: $D \\rightarrow D \\cup D_{i}$
\n", 124 | "$\\quad$ Train classifier $\\hat{\\pi}_{i + 1}$ on $D$.
\n", 125 | "\n", 126 | "Return best $\\hat{\\pi}_{i}$ on validation \n", 127 | "

\n", 128 | "\n", 129 | "_Algorithm 1. DAGGER: Dataset Aggregation._" 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "id": "c0b57242", 135 | "metadata": {}, 136 | "source": [ 137 | "The basic principle behind __DAGGER__ for behavior cloning is that you continually build up the dataset, train, and repeat. " 138 | ] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "id": "2dc5c8ee", 143 | "metadata": {}, 144 | "source": [ 145 | "# 4. Inverse RL" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "id": "4ef0adf3", 151 | "metadata": {}, 152 | "source": [ 153 | "We have to estimate the $R$ reward function. There is no unique $R$ for a given set of data. \n", 154 | "\n", 155 | "$R(s) = \\textbf{w}^{T}x(s)$ where $w \\in \\mathbb{R}^{n}$, $x : S \\rightarrow \\mathbb{R}^{n} \\hspace{1em} (Eq.~1)$\n", 156 | "\n", 157 | "The value function for a policy $\\pi$ is now:\n", 158 | "\n", 159 | "$$\n", 160 | "\\begin{equation}\n", 161 | "\t\\begin{split}\n", 162 | "V^{\\pi} & \\underset{s \\thicksim \\pi}{=} \\mathbb{E}[\\sum_{t = 0}^{\\infty}\\gamma^{t}R(s_{t}) ~|~ \\pi]\\\\\n", 163 | " & = \\mathbb{E}[\\sum_{t = 0}^{\\infty}\\gamma^{t}\\textbf{w}^{T}x(s_{t}) ~|~ \\pi]\\\\\n", 164 | " & = \\textbf{w}^{T} \\mathbb{E}[\\sum_{t = 0}^{\\infty}\\gamma^{t}x(s_{t}) ~|~ \\pi]\\\\\n", 165 | " & = \\textbf{w}^{T} \\mu(\\pi)\\\\\n", 166 | " \\end{split}\n", 167 | "\\end{equation}\n", 168 | "\\hspace{1em} (Eq.~2)\\\\\n", 169 | "$$\n", 170 | "\n", 171 | "$\\mu(\\pi)(s)$ is the discounted weighted frequency of state features under policy $\\pi$." 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "id": "e9997dd8", 177 | "metadata": {}, 178 | "source": [ 179 | "# 5. Apprenticeship Learning" 180 | ] 181 | }, 182 | { 183 | "cell_type": "markdown", 184 | "id": "ad8e70e7", 185 | "metadata": {}, 186 | "source": [ 187 | "$$\n", 188 | "V^{\\pi} = \\textbf{w}^{T} \\mu(\\pi)\n", 189 | "$$\n", 190 | "\n", 191 | "$$\n", 192 | "\\mathbb{E}[\\sum_{t = 0}^{\\infty}\\gamma^{t}R^{*}(s_{t}) ~|~ \\pi^{*}] = V^{*} \\ge V^{\\pi} = \\mathbb{E}[\\sum_{t = 0}^{\\infty}\\gamma^{t}R^{*}(s_{t}) ~|~ \\pi] \\hspace{1em} (Eq.~3)\\\\\n", 193 | "w^{*^{T}} \\mu(\\pi^{*}) \\ge w^{*^{T}} \\mu(\\pi), \\forall ~ \\pi \\ne \\pi^{*} \\hspace{1em} (Eq.~4)\\\\\n", 194 | "$$\n", 195 | "\n", 196 | "If:\n", 197 | "\n", 198 | "$$\n", 199 | "||\\mu(\\pi) - \\mu(\\pi^{*})||_{1} \\le \\epsilon \\hspace{1em} (Eq.~5)\\\\\n", 200 | "$$\n", 201 | "\n", 202 | "then for all $w$ with $||w||_{\\infty} \\le 1$:

\n", 203 | "$$\n", 204 | "|w^{T}\\mu(\\pi) - w^{T}\\mu(\\pi^{*})| \\le \\epsilon \\hspace{1em} (Eq.~6)\n", 205 | "$$" 206 | ] 207 | }, 208 | { 209 | "cell_type": "markdown", 210 | "id": "abba7e7e", 211 | "metadata": {}, 212 | "source": [ 213 | "Assumption: $R(s) = w^{T}x(s)$
\n", 214 | "Initialize policy $\\pi_{0}$\n", 215 | "for $i = 1, 2, ...$
\n", 216 | "$\\quad$ Find a reward function ($\\textbf{w}$) such that the teacher maximally outperforms all previous controllers:\n", 217 | "\n", 218 | "$$\n", 219 | "\\underset{\\textbf{w}}{argmax} ~ \\underset{\\gamma}{max} ~ s.t. ~~ w^{T} \\mu(\\pi^{*}) \\ge w^{T}\\mu(\\pi) + \\gamma ~~~ \\forall \\pi \\in \\{\\pi_{0}, \\pi_{1}, ..., \\pi_{i - 1}\\} ~~ s.t. ~~ ||w||_{2} \\le 1 \\hspace{1em} (Eq.~7)\\\\\n", 220 | "$$\n", 221 | "\n", 222 | "$\\quad$ Find optimal control policy $\\pi_{i}$ for the current $\\textbf{w}$
\n", 223 | "$\\quad$ Exit if $\\gamma \\le \\frac{\\epsilon}{2}$
\n", 224 | "\n", 225 | "_Algorithm 2. Apprenticeship Learning._" 226 | ] 227 | }, 228 | { 229 | "cell_type": "markdown", 230 | "id": "0f657f6b", 231 | "metadata": {}, 232 | "source": [ 233 | "# 6. Resource" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "id": "32e04a27", 239 | "metadata": {}, 240 | "source": [ 241 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 242 | "\n", 243 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 244 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 245 | "\n", 246 | "This is a series of 15 lectures provided by Stanford.\n" 247 | ] 248 | }, 249 | { 250 | "cell_type": "code", 251 | "execution_count": null, 252 | "id": "f5c15c5c", 253 | "metadata": {}, 254 | "outputs": [], 255 | "source": [] 256 | } 257 | ], 258 | "metadata": { 259 | "kernelspec": { 260 | "display_name": "Python 3", 261 | "language": "python", 262 | "name": "python3" 263 | }, 264 | "language_info": { 265 | "codemirror_mode": { 266 | "name": "ipython", 267 | "version": 3 268 | }, 269 | "file_extension": ".py", 270 | "mimetype": "text/x-python", 271 | "name": "python", 272 | "nbconvert_exporter": "python", 273 | "pygments_lexer": "ipython3", 274 | "version": "3.8.8" 275 | } 276 | }, 277 | "nbformat": 4, 278 | "nbformat_minor": 5 279 | } 280 | -------------------------------------------------------------------------------- /Lecture 8 - Policy Gradient I.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 8 - Policy Gradient I\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 28 | "
" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "id": "a8837110", 34 | "metadata": {}, 35 | "source": [ 36 | "# 1. Introduction" 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "id": "803854f9", 42 | "metadata": {}, 43 | "source": [ 44 | "In the previous lecture, we tried to approximate the value or action-value functions using parameters $\\theta$:\n", 45 | "\n", 46 | "$$\n", 47 | "V_{\\theta}(s) \\approx V^{\\pi}(s)\\\\\n", 48 | "Q_{\\theta}(s, a) \\approx Q^{\\pi}(s, a)\n", 49 | "$$\n", 50 | "\n", 51 | "The policy will usually be $\\epsilon$-greedy applied on top of these value functions. Today we will directly parameterize the policy:\n", 52 | "\n", 53 | "$$\n", 54 | "\\pi_{\\theta}(s, a) = \\mathbb{P}[a~|~s; \\theta] \\hspace{1em} (Eq.~1)\\\\\n", 55 | "$$" 56 | ] 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "id": "9abfa2c7", 61 | "metadata": {}, 62 | "source": [ 63 | "* Value Based\n", 64 | " * Learnt value function\n", 65 | " * implicit policy (e.g. $\\epsilon$-greedy)\n", 66 | "* Policy Based\n", 67 | " * no value function\n", 68 | " * learnt policy\n", 69 | "* Actor-Critic\n", 70 | " * learnt value function\n", 71 | " * learnt policy" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "id": "f1190870", 77 | "metadata": {}, 78 | "source": [ 79 | "Advantages of Policy-Based RL:\n", 80 | "* better convergence properties\n", 81 | "* effective in high-dimensional/continuous action spaces\n", 82 | "Disadvantages:\n", 83 | "* usually converge to local rather than global optimum\n", 84 | "* evaluating policy is inefficient" 85 | ] 86 | }, 87 | { 88 | "cell_type": "markdown", 89 | "id": "13807f37", 90 | "metadata": {}, 91 | "source": [ 92 | "* Goal: given a policy $\\pi_{\\theta}(s, a)$ with parameters $\\theta$, find best $\\theta$\n", 93 | "* we measure the quality of the policy (policy evaluation)\n", 94 | "* in __episodic environments__, we can use the start value of the policy:\n", 95 | "\n", 96 | "$$\n", 97 | "J_{1}(\\theta) = V^{\\pi_{\\theta}}(s_{1}) \\hspace{1em} (Eq.~2)\\\\\n", 98 | "J_{avV}(\\theta) = \\sum_{s} d^{\\pi_{\\theta}}(s) V^{\\pi_{\\theta}}(s) \\hspace{1em} (Eq.~3)\\\\\n", 99 | "J_{avR}(\\theta) = \\sum_{s} d^{\\pi_{\\theta}}(s) \\sum_{a} \\pi_{\\theta}(s, a) R(a, s) \\hspace{1em} (Eq.~4)\\\\\n", 100 | "$$\n", 101 | "\n", 102 | "$d^{\\pi_{\\theta}}(s)$ is the stationary distribution of states under $\\pi_{\\theta}$.\n", 103 | "\n", 104 | "Eq. 2: in episodic environments we can use the start value of the policy state $s_{1}$. Eq. 3: in continuing environments we can use the average value. Eq. 4: in continuing environments we can also use the average reward per time-step." 105 | ] 106 | }, 107 | { 108 | "cell_type": "markdown", 109 | "id": "12637991", 110 | "metadata": {}, 111 | "source": [ 112 | "# 2. Policy Optimization" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "id": "2f70e469", 118 | "metadata": {}, 119 | "source": [ 120 | "Policy-based RL (we've been doing model-based/model-free value-based RL) is an optimization problem. There are gradient-free methods for optimization:\n", 121 | "* hill climbing\n", 122 | "* genetic algorithms\n", 123 | "\n", 124 | "Non-gradient optimization methods are good baselines, but they are sample inefficient." 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "id": "d1d07646", 130 | "metadata": {}, 131 | "source": [ 132 | "Policy gradient algorithms search for a _local_ maximum in $V(\\theta)$. 
There are many different PG algorithms.\n", 133 | "\n", 134 | "$$\n", 135 | "V(\\theta) = V^{\\pi_{\\theta}}\\\\\n", 136 | "\\Delta \\theta = \\alpha \\nabla_{\\theta}V(\\theta)\\\\\n", 137 | "\\nabla_{\\theta}V(\\theta) = s_{t} = \\begin{pmatrix}\n", 138 | " \\frac{\\partial V(\\theta)}{\\partial \\theta_{1}} \\\\\n", 139 | " \\vdots \\\\\n", 140 | "\t\\frac{\\partial V(\\theta)}{\\partial \\theta_{n}}\n", 141 | "\\end{pmatrix}\n", 142 | "$$\n", 143 | "\n", 144 | "$\\alpha$ is a step-size parameter." 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "id": "ca2ef4d8", 150 | "metadata": {}, 151 | "source": [ 152 | "__PG by Finite Differences__ is simple, noisy, and inefficient, but can sometimes be good." 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "id": "2694da25", 158 | "metadata": {}, 159 | "source": [ 160 | "To evaluate policy gradient of $\\pi_{\\theta}(s, a)$
\n", 161 | "For each dimension $k \\in [1, n]$
\n", 162 | "$\\quad$ Estimate $k$-th partial derivative of objective function w.r.t. $\\theta$
\n", 163 | "$\\quad$ Perturb $\\theta$ by small amount $\\epsilon$ in $k$-th dimension\n", 164 | "\n", 165 | "$$\n", 166 | "\\frac{\\partial V(\\theta)}{\\partial \\theta_{k}} \\approx \\frac{V(\\theta + \\epsilon u_{k}) - V(\\theta)}{\\epsilon}\n", 167 | "$$\n", 168 | "\n", 169 | "$u_{k}$ is a unit vector with 1 in $k$-th component, 0 elsewhere.
\n", 170 | "\n", 171 | "_Algorithm 1. PG by Finite Differences._" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "id": "4a7ad3a5", 177 | "metadata": {}, 178 | "source": [ 179 | "__Likelihood Ratio Policies__\n", 180 | "\n", 181 | "Define a state-action trajectory: $\\tau = (s_{0}, a_{0}, r_{0}, ..., s_{T - 1}, r_{T - 1}, s_{T})$
\n", 182 | "Let $R(\\tau) = \\sum_{t=0}^{T}R(s_{t}, a_{t})$ be the sum of rewards for a trajectory $\\tau$.
\n", 183 | "The policy value is:\n", 184 | "\n", 185 | "$$\n", 186 | "V(\\theta) = \\sum_{\\tau} P(\\tau; \\theta) R(\\tau) \\hspace{1em} (Eq.~5)\\\\\n", 187 | "\\underset{\\theta}{argmax} V(\\theta) = \\underset{\\theta}{argmax} \\sum_{\\tau} P(\\tau; \\theta)R(\\tau) \\hspace{1em} (Eq.~6)\\\\\n", 188 | "\\begin{equation}\n", 189 | " \\begin{split}\n", 190 | " \\nabla_{\\theta}V(\\theta) & = \\sum_{\\tau} P(\\tau; \\theta) R(\\tau) \\nabla_{\\theta} log ~ P(\\tau;\\theta)\\\\\n", 191 | " & = \\mathbb{E}_{\\tau}[\\sum_{t = 0}^{T - 1} \\nabla_{\\theta} log \\pi_{\\theta}(a_{t}~|~s_{t}) G_{t}^{(i)}]\\\\\n", 192 | " & \\approx \\hat{g} = (\\frac{1}{m}) \\sum_{i = 1}^{m} R(\\tau^{(i)}) \\nabla_{\\theta} log ~ P(\\tau;\\theta)\\\\\n", 193 | " & = \\frac{1}{m} \\sum_{i = 1}^{m} R(\\tau^{(i)}) \\sum_{t = 0}^{T_{i}} \\nabla_{\\theta} log \\pi_{\\theta}(a_{t}~|~s_{t})\\\\\n", 194 | " & = \\frac{1}{m} \\sum_{i = 1}^{m} \\sum_{t = 0}^{T - 1} \\nabla_{\\theta} log \\pi_{\\theta}(a_{t}~|~s_{t}) G_{t}^{(i)}\\\\\n", 195 | " \\end{split}\n", 196 | "\\end{equation} \\hspace{1em} (Eq.~7)\\\\\n", 197 | "$$\n", 198 | "\n", 199 | "$P(\\tau; \\theta)$ is the probability over trajectories when executing policy $\\pi_{\\theta}$.\n", 200 | "\n", 201 | "In likelihood ratio policies, we often see value functions to be modeled like Eq. 5. Eq. 6 is a mathematical formulation for how we can optimize for a policy. Eq. 7 is the actual gradient of the value function w.r.t. the parameters of the policy $\\theta$. Notice how policy gradient algorithms directly optimize for the policy (and in this case, it optimizes via using gradients). The last 2 equations in Eq. 7 is the most important as those are the crux of REINFORCE, one classic policy gradient algorithm!\n", 202 | "\n", 203 | "Note: $log \\pi(a_{t}~|~s_{t}; \\theta)$ is the same as $log \\pi_{\\theta}(a_{t}~|~s_{t})$." 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "id": "e3658639", 209 | "metadata": {}, 210 | "source": [ 211 | "The rest of the lecture highlights the REINFORCE algorithm, one common policy gradient method.\n", 212 | "\n", 213 | "For an implementation of the algorithm I'd recommend this: https://github.com/ageron/handson-ml2/blob/master/18_reinforcement_learning.ipynb.\n", 214 | "\n", 215 | "For understanding it, I recommend this: https://medium.com/intro-to-artificial-intelligence/reinforce-a-policy-gradient-based-reinforcement-learning-algorithm-84bde440c816." 216 | ] 217 | }, 218 | { 219 | "cell_type": "markdown", 220 | "id": "0f657f6b", 221 | "metadata": {}, 222 | "source": [ 223 | "# 3. 
Resource" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "id": "32e04a27", 229 | "metadata": {}, 230 | "source": [ 231 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 232 | "\n", 233 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 234 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 235 | "\n", 236 | "This is a series of 15 lectures provided by Stanford.\n" 237 | ] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": null, 242 | "id": "f5c15c5c", 243 | "metadata": {}, 244 | "outputs": [], 245 | "source": [] 246 | } 247 | ], 248 | "metadata": { 249 | "kernelspec": { 250 | "display_name": "Python 3", 251 | "language": "python", 252 | "name": "python3" 253 | }, 254 | "language_info": { 255 | "codemirror_mode": { 256 | "name": "ipython", 257 | "version": 3 258 | }, 259 | "file_extension": ".py", 260 | "mimetype": "text/x-python", 261 | "name": "python", 262 | "nbconvert_exporter": "python", 263 | "pygments_lexer": "ipython3", 264 | "version": "3.8.8" 265 | } 266 | }, 267 | "nbformat": 4, 268 | "nbformat_minor": 5 269 | } 270 | -------------------------------------------------------------------------------- /Lecture 9 - Policy Gradient II.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df9564", 6 | "metadata": {}, 7 | "source": [ 8 | "# Lecture 9 - Policy Gradient II\n", 9 | "\n", 10 | "provided by [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 11 | "\n", 12 | "---" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "618c06cf", 18 | "metadata": {}, 19 | "source": [ 20 | "
\n", 21 | "Table of Contents:
\n", 22 | " \n", 23 | "\n", 33 | "
" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "id": "a8837110", 39 | "metadata": {}, 40 | "source": [ 41 | "# 1. Introduction" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "id": "3cdb5196", 47 | "metadata": {}, 48 | "source": [ 49 | "For Policy Gradient algorithms, we want to converge as fast as possible to the local optima as well as have monotonic improvement.\n", 50 | "\n", 51 | "Last lecture, we focused on policy-based methods. This lecture we focus on policy and value-based methods which are commonly referred to as __actor-critic__ methods." 52 | ] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "id": "672dcc72", 57 | "metadata": {}, 58 | "source": [ 59 | "# 2. \"Vanilla\" Policy Gradient Algorithm" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "id": "d8f1ce68", 65 | "metadata": {}, 66 | "source": [ 67 | "Initialize policy parameter $\\theta$, baseline $b$
\n", 68 | "for iteration=1, 2, ..., do
\n", 69 | "$\\quad$ Collect a set of trajectories by executing the current policy
\n", 70 | "$\\quad$ At each timestep $t$ in each trajectory $\\tau^{i}$
\n", 71 | "$\\quad\\quad$ Compute Return $G_{t}^{i} = \\sum_{t' = t}^{T - 1}r_{t}^{i}$, and
\n", 72 | "$\\quad\\quad$ Advantage estimate $\\hat{A}_{t}^{i} = G_{t}^{i} - b(s_{t})$.
\n", 73 | "$\\quad$ Re-fit the baseline, by minimizing $\\sum_{i}\\sum_{t}||b(s_{t}) - G_{t}^{i}||^{2}$,
\n", 74 | "$\\quad$ Update the policy, using a policy gradient estimate $\\hat{g}$,
\n", 75 | "$\\quad\\quad$ Which is a sum of terms $\\nabla_{\\theta} log \\pi(a_{t}~|~s_{t}, \\theta) \\hat{A}_{t}$.
\n", 76 | "$\\quad\\quad$ Plug $\\hat{g}$ into SGD or ADAM
\n", 77 | "
\n", 78 | "\n", 79 | "_Algorithm 1. \"Vanilla\" Policy Gradient Algorithm._" 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "id": "11e7af78", 85 | "metadata": {}, 86 | "source": [ 87 | "The __\"Vanilla\" Policy Gradient__ algorithm is a general skeleton or framework for many different PG methods. REINFORCE is a prime example of this template. Notice that this algorithm uses the equations we defined in understanding likelihood policies. The only new idea introduced here is the baseline.\n", 88 | "\n", 89 | "$b(s_{t})$ is simply a function (e.g. deep/shallow neural network, etc) that takes in a state and outputs an expected return. As this \"vanilla\" PG algorithm iterates, the baselineshould be continuously re-fit to perfectly match the expected return (undiscounted).\n", 90 | "\n", 91 | "We introduce a baseline into our standard PG algorithm template because it reduces variance.\n", 92 | "\n", 93 | "$$\n", 94 | "\\begin{equation}\n", 95 | " \\begin{split}\n", 96 | "\\nabla_{\\theta} V(\\theta) & = \\frac{1}{m} \\sum_{i = 1}^{m} R(\\tau^{(i)}) \\sum_{t = 0}^{T_{i}} \\nabla_{\\theta} log \\pi_{\\theta}(a_{t}~|~s_{t}) \\hspace{1em} (Eq.~1)\\\\\n", 97 | " & = \\mathbb{E}_{\\tau}[\\sum_{t = 0}^{T - 1} \\nabla_{\\theta} log \\pi_{\\theta}(a_{t}~|~s_{t}) G_{t}^{(i)}]\\\\\n", 98 | " \\end{split}\n", 99 | "\\end{equation}\n", 100 | "$$\n", 101 | "\n", 102 | "Eq. 1 are our standard gradient formulas from the previous lecture. \n", 103 | "\n", 104 | "$$\n", 105 | "\\hat{A}_{t}^{i} = G_{t}^{i} - b(s_{t})\\\\\n", 106 | "\\nabla_{\\theta} V(\\theta) = \\mathbb{E}_{\\tau}[\\sum_{t = 0}^{T - 1} \\nabla_{\\theta} log \\pi_{\\theta}(a_{t}~|~s_{t}) \\hat{A}_{t}^{i}] \\hspace{1em} (Eq.~2)\\\\\n", 107 | "$$\n", 108 | "\n", 109 | "Eq. 2 is the same equation written with the baseline included.\n", 110 | "\n", 111 | "Now notice we can substitute $V$ or $Q$ into the advantage function. Additionally, we can have a method that learns this value function. We call this a __critic__. Instead of just the sum of future rewards $G_{t}^{i}$, we can use the $Q$ function. We can use TD or MC methods to compute that reward.\n", 112 | "\n", 113 | "$$\n", 114 | "\\hat{A}_{t}^{i} = Q(s_{t}, w) - b(s_{t})\\\\\n", 115 | "$$" 116 | ] 117 | }, 118 | { 119 | "cell_type": "markdown", 120 | "id": "0952fdff", 121 | "metadata": {}, 122 | "source": [ 123 | "But wait! Keep in mind this algorithm so far is policy-based. To make it both policy and value-based, we can parameterize $R(\\tau^{(i)})$." 124 | ] 125 | }, 126 | { 127 | "cell_type": "markdown", 128 | "id": "ac9f36cf", 129 | "metadata": {}, 130 | "source": [ 131 | "# 3. Need for Automatic Step Size Tuning" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "id": "cf0862ec", 137 | "metadata": {}, 138 | "source": [ 139 | "At each iteration of our \"vanilla\" PG algorithm, we want the value function for the new updated policy to be better than the previous iteration's policy: $V^{\\pi'} \\ge V^{pi}$.\n", 140 | "\n", 141 | "Why is the step size important in this scenario? Well, the step size affects how we converge and how fast we do it. If we have a bad step size, our policy will be updated a certain way, and consequently, it will collect data in a biased way." 
142 | ] 143 | }, 144 | { 145 | "cell_type": "markdown", 146 | "id": "14f67ca1", 147 | "metadata": {}, 148 | "source": [ 149 | "So what are some ways to account for this issue?\n", 150 | "* simple step size with line search\n", 151 | " * simple but expensive \n", 152 | " * naive\n", 153 | "* auto-step-size selection\n", 154 | " * can we ensure the current policy's value function is greater than or equal to the previous iteration's policy's value function?" 155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "id": "74e89e1d", 160 | "metadata": {}, 161 | "source": [ 162 | "$$\n", 163 | "V(\\theta) = \\mathbb{E}_{\\pi_{\\theta}}[\\sum_{t = 0}^{\\infty}\\gamma^{t} R(s_{t}, a_{t}); \\pi_{\\theta}] \\hspace{1em} (Eq.~3)\\\\\n", 164 | "$$\n", 165 | "\n", 166 | "Eq. 3 is the objective we want to maximize: the expected discounted return of the policy $\\pi_{\\theta}$ in the infinite-horizon setting.\n", 167 | "\n", 168 | "We can decompose this function into parts: \n", 169 | "\n", 170 | "$$\n", 171 | "\\begin{equation}\n", 172 | " \\begin{split}\n", 173 | " L_{\\pi}(\\tilde{\\pi}) = V(\\tilde{\\theta}) & = V(\\theta) + \\mathbb{E}_{\\pi_{\\tilde{\\theta}}}[\\sum_{t = 0}^{\\infty} \\gamma^{t} A_{\\pi}(s_{t}, a_{t})]\\\\\n", 174 | " & = V(\\theta) + \\sum_{s} \\mu_{\\tilde{\\pi}}(s) \\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a)\\\\\n", 175 | " \\mu_{\\tilde{\\pi}}(s) & = \\mathbb{E}_{\\tilde{\\pi}}[\\sum_{t = 0}^{\\infty} \\gamma^{t} I(s_{t} = s)]\n", 176 | " \\end{split}\n", 177 | "\\end{equation} \\hspace{1em} (Eq.~4)\\\\\n", 178 | "$$\n", 179 | "\n", 180 | "Notice that the first and second lines of Eq. 4 are the same quantity, just written in two different ways. \n", 181 | "\n", 182 | "The only new idea in these equations is the tilde (~). $\\tilde{\\pi}$ is the new policy (at iteration $i + 1$) and the same goes for $\\tilde{\\theta}$. \n", 183 | "\n", 184 | "So we understand this is for automatic step size tuning, but we don't know what $\\mu_{\\tilde{\\pi}}$ is. To be more specific, we can't compute it just yet, because it requires the new policy at iteration $i + 1$, which is exactly what we are trying to find. How do we fix this?\n", 185 | "\n", 186 | "There are a few approaches to fixing this issue:\n", 187 | "* __local approximation__\n", 188 | "* __trust regions__\n", 189 | "* __TRPO algorithm__" 190 | ] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "id": "d454d7b9", 195 | "metadata": {}, 196 | "source": [ 197 | "## 3.1. Local Approximation" 198 | ] 199 | }, 200 | { 201 | "cell_type": "markdown", 202 | "id": "c975292d", 203 | "metadata": {}, 204 | "source": [ 205 | "We can slightly rewrite Eq. 4 so that we have a substitute for $\\mu_{\\tilde{\\pi}}$:\n", 206 | "\n", 207 | "$$\n", 208 | "L_{\\pi}(\\tilde{\\pi}) = V(\\theta) + \\sum_{s} \\mu_{\\pi}(s) \\sum_{a} \\tilde{\\pi}(a~|~s) A_{\\pi}(s, a) \\hspace{1em} (Eq.~5)\\\\\n", 209 | "$$\n", 210 | "\n", 211 | "Eq. 5, instead of using $\\mu_{\\tilde{\\pi}}$, the discounted weighted frequency of state $s$ under the new policy $\\tilde{\\pi}$, uses $\\mu_{\\pi}$, the discounted weighted frequency of state $s$ under the current policy $\\pi$." 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "id": "8d10d428", 217 | "metadata": {}, 218 | "source": [ 219 | "This raises the question: how do Eq. 3 and Eq. 4 fit into our current understanding of policy gradients? Over Lecture 8 and Lecture 9, we have seen a lot of formulas involving value functions.\n", 220 | "\n", 221 | "For now, I'm still not too sure. 
Let's give it some time.\n", 222 | "\n", 223 | "My conclusion is that we formulate our objective function like this (there are many other ways to do it) because we want to find a sure-fire way to have monotonic improvement in gradient-based policy search." 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "id": "0f657f6b", 229 | "metadata": {}, 230 | "source": [ 231 | "# 4. Resource" 232 | ] 233 | }, 234 | { 235 | "cell_type": "markdown", 236 | "id": "32e04a27", 237 | "metadata": {}, 238 | "source": [ 239 | "If you missed the link right below the title, I'm providing the resource here again along with the course website.\n", 240 | "\n", 241 | "- [Stanford CS234](https://www.youtube.com/watch?v=FgzM3zpZ55o)\n", 242 | "- [Course Website](http://web.stanford.edu/class/cs234/index.html)\n", 243 | "\n", 244 | "This is a series of 15 lectures provided by Stanford.\n" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "f5c15c5c", 251 | "metadata": {}, 252 | "outputs": [], 253 | "source": [] 254 | } 255 | ], 256 | "metadata": { 257 | "kernelspec": { 258 | "display_name": "Python 3", 259 | "language": "python", 260 | "name": "python3" 261 | }, 262 | "language_info": { 263 | "codemirror_mode": { 264 | "name": "ipython", 265 | "version": 3 266 | }, 267 | "file_extension": ".py", 268 | "mimetype": "text/x-python", 269 | "name": "python", 270 | "nbconvert_exporter": "python", 271 | "pygments_lexer": "ipython3", 272 | "version": "3.8.8" 273 | } 274 | }, 275 | "nbformat": 4, 276 | "nbformat_minor": 5 277 | } 278 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Stanford-CS234-RL---Lecture-Notes 2 | My lecture notes on the RL series provided by Stanford. 3 | 4 | ## Table of Contents: 5 | - [1. Description](https://github.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/blob/main/README.md#1-description) 6 | - [2. Difficulties](https://github.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/blob/main/README.md#2-difficulties) 7 | - [3. Author Info](https://github.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/blob/main/README.md#3-author-info) 8 | - [4. Thank You](https://github.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/blob/main/README.md#4-thank-you) 9 | 10 | ## 1. Description 11 | 12 | All lectures (and the images I took) are located in this repo. 
Below is a list of the corresponding notebooks on Kaggle: 13 | * [Lecture 1](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-1) 14 | * [Lecture 2](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-2) 15 | * [Lecture 3](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-3) 16 | * [Lecture 4](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-4) 17 | * [Lecture 5](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-5) 18 | * [Lecture 6](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-6) 19 | * [Lecture 7](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-7) 20 | * [Lecture 8](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-8) 21 | * [Lecture 9](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-9) 22 | * [Lecture 10](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-10) 23 | * [Lecture 11](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-11) 24 | * [Lecture 12](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-12) 25 | * [Lecture 13](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-13) 26 | * [Lecture 14](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-14) 27 | * [Lecture 15](https://www.kaggle.com/code/vincenttu/stanford-cs234-rl-lecture-15) 28 | 29 | ## 2. Difficulties 30 | 31 | A few difficulties! Besides juggling school, the main one was that these lectures were very dense! Information was packed into the last few minutes of class too. I thought this was great (the more material, the better). I'm still a newbie in RL, but I think this lecture series, more than anything, gave me a view of the landscape of modern RL and insights into a number of its nooks and crannies. This lecture series definitely helped solidify my fundamental understanding of RL. I remember that before, I'd always mix up the different flavors of the Bellman equation. Why does one equation have states and actions while the other has only states? These small things would always stump me! These lectures certainly clarified all of that for me. I also think a lot of RL is theoretical! The math is definitely heavy, which made some parts of the lectures a bit denser than usual. I ended up spending a lot of time going through these lectures, but it was all worth it! 32 | 33 | ## 3. Author Info 34 | 35 | - Vincent Tu: [LinkedIn](https://www.linkedin.com/in/vincent-tu-422b18208/) | [Kaggle](https://www.kaggle.com/vincenttu) 36 | 37 | ## 4. Thank You 38 | 39 | This series of lecture notes took a while to compile (maybe a few months). These notes have helped me a ton in reviewing and understanding RL. I hope they also help you (do still watch the lectures!). I knew I was in for a very insightful look into RL when I noticed how much the first lecture covered. From model-based and model-free policy evaluation and control to value function approximation, deep learning, imitation learning, policy gradients, and fast and batch RL, I found the lectures to be informative and clear. I'd like to thank Professor Brunskill for making this series possible! I'd also like to thank everyone involved in helping with this lecture series, and you, the reader! Thank you. 
-------------------------------------------------------------------------------- /img/CUT.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/CUT.PNG -------------------------------------------------------------------------------- /img/LVFA.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/LVFA.PNG -------------------------------------------------------------------------------- /img/OPE.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/OPE.PNG -------------------------------------------------------------------------------- /img/SARSA_theorem.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/SARSA_theorem.PNG -------------------------------------------------------------------------------- /img/SPI.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/SPI.PNG -------------------------------------------------------------------------------- /img/VFA.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/VFA.PNG -------------------------------------------------------------------------------- /img/bias_variance.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/bias_variance.PNG -------------------------------------------------------------------------------- /img/convergence_VFA.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/convergence_VFA.PNG -------------------------------------------------------------------------------- /img/diagram.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/diagram.PNG -------------------------------------------------------------------------------- /img/dp_mc_td.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/dp_mc_td.PNG -------------------------------------------------------------------------------- /img/dp_mdp.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/dp_mdp.PNG -------------------------------------------------------------------------------- /img/dp_mrp.PNG: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/dp_mrp.PNG -------------------------------------------------------------------------------- /img/dp_tree.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/dp_tree.PNG -------------------------------------------------------------------------------- /img/dueling_dqn.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/dueling_dqn.PNG -------------------------------------------------------------------------------- /img/experience_replay.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/experience_replay.PNG -------------------------------------------------------------------------------- /img/forward_search.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/forward_search.PNG -------------------------------------------------------------------------------- /img/mc_td.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/mc_td.PNG -------------------------------------------------------------------------------- /img/monotonic_improvement.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/monotonic_improvement.PNG -------------------------------------------------------------------------------- /img/policy_improvement.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/policy_improvement.PNG -------------------------------------------------------------------------------- /img/policy_iteration.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/policy_iteration.PNG -------------------------------------------------------------------------------- /img/prove_monotonic.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/prove_monotonic.PNG -------------------------------------------------------------------------------- /img/rl_agent_types.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/rl_agent_types.PNG 
-------------------------------------------------------------------------------- /img/search_tree_path.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/search_tree_path.PNG -------------------------------------------------------------------------------- /img/value_iteration.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alckasoc/Stanford-CS234-RL---Lecture-Notes/3ea1b85cb7bff6b659c512bbf57f519e2365ce7a/img/value_iteration.PNG --------------------------------------------------------------------------------