├── README.md
├── blackjack_dqn.ipynb
├── dmc_doudizhu.ipynb
├── leduc_holdem_cfr.ipynb
├── leduc_holdem_pretrained.ipynb
├── random.ipynb
└── requirements.txt
/README.md:
--------------------------------------------------------------------------------
1 | # RLCard Tutorial
2 | This is an official tutorial for [RLCard: A Toolkit for Reinforcement Learning in Card Games](https://github.com/datamllab/rlcard). We provide step-by-step instructions and running examples as Jupyter Notebooks in Python 3. The tutorials are also available on Colab, where you can run the experiments interactively in the cloud.
3 | * Official Website: [http://www.rlcard.org](http://www.rlcard.org)
4 | * Paper: [https://arxiv.org/abs/1910.04376](https://arxiv.org/abs/1910.04376)
5 | * Resources: [Awesome-Game-AI](https://github.com/datamllab/awesome-game-ai)
6 |
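7 | ## Quick Start
8 | All tutorials assume RLCard is installed with the PyTorch extras, i.e., `pip install rlcard[torch]`. Below is a minimal sketch of the workflow the notebooks walk through, here letting a random agent play one game of Blackjack:
9 |
10 | ```python
11 | import rlcard
12 | from rlcard.agents import RandomAgent
13 |
14 | # Create a Blackjack environment and attach a random agent
15 | env = rlcard.make('blackjack')
16 | env.set_agents([RandomAgent(num_actions=env.num_actions)])
17 |
18 | # Play one game and print the payoff
19 | trajectories, payoffs = env.run()
20 | print(payoffs)
21 | ```
22 |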
7 | ## For Python
8 | ### Tutorials in Jupyter Notebook
9 | * [Playing with Random Agents](https://github.com/datamllab/rlcard-tutorial/blob/master/random.ipynb)
10 | * [Training DQN on Blackjack](https://github.com/datamllab/rlcard-tutorial/blob/master/blackjack_dqn.ipynb)
11 | * [Training CFR on Leduc Hold'em](https://github.com/datamllab/rlcard-tutorial/blob/master/leduc_holdem_cfr.ipynb)
12 | * [Having Fun with Pretrained Leduc Model](https://github.com/datamllab/rlcard-tutorial/blob/master/leduc_holdem_pretrained.ipynb)
13 | * [Training DMC on Dou Dizhu](https://github.com/datamllab/rlcard-tutorial/blob/master/dmc_doudizhu.ipynb)
14 |
15 | ### Links to Colab
16 | * [Playing with Random Agents](https://colab.research.google.com/github/mia1996/rlcard-tutoirial/blob/master/random.ipynb)
17 | * [Training DQN on Blackjack](https://colab.research.google.com/github/mia1996/rlcard-tutoirial/blob/master/blackjack_dqn.ipynb)
18 | * [Training CFR on Leduc Hold'em](https://colab.research.google.com/github/mia1996/rlcard-tutoirial/blob/master/leduc_holdem_cfr.ipynb)
19 | * [Having Fun with Pretrained Leduc Model](https://colab.research.google.com/github/mia1996/rlcard-tutoirial/blob/master/leduc_holdem_pretrained.ipynb)
20 | * [Training DMC on Dou Dizhu](https://colab.research.google.com/github/mia1996/rlcard-tutoirial/blob/master/dmc_doudizhu.ipynb)
21 |
22 | ## Contributing
23 | Contributions to this project are greatly appreciated! Please create an issue or pull request for feedback or to add more tutorials.
24 |
25 | ## Cite this work
26 | If you find this repo useful, you may cite:
27 |
28 | Zha, Daochen, et al. "RLCard: A Platform for Reinforcement Learning in Card Games." IJCAI. 2020.
29 | ```bibtex
30 | @inproceedings{zha2020rlcard,
31 | title={RLCard: A Platform for Reinforcement Learning in Card Games},
32 | author={Zha, Daochen and Lai, Kwei-Herng and Huang, Songyi and Cao, Yuanpu and Reddy, Keerthana and Vargas, Juan and Nguyen, Alex and Wei, Ruzhe and Guo, Junyu and Hu, Xia},
33 | booktitle={IJCAI},
34 | year={2020}
35 | }
36 | ```
37 |
--------------------------------------------------------------------------------
/blackjack_dqn.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Copy of blackjack_dqn.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | "
"
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "miBl4S8JARzX"
31 | },
32 | "source": [
33 | "\n",
34 | "\n",
35 | "#
\n",
36 | "\n",
37 | "## **Deep-Q Learning on Blackjack**\n",
38 | "This example is to use Deep-Q learning to train an agent on Blackjack. We aim to use this example to show how reinforcement learning algorithms can be developed and applied in our toolkit. To be self-contained, we first install RLCard."
39 | ]
40 | },
41 | {
42 | "cell_type": "code",
43 | "metadata": {
44 | "id": "zQ8CiXAJjQGi",
45 | "outputId": "bda614ff-35d4-4ba3-d171-374c8142af0d",
46 | "colab": {
47 | "base_uri": "https://localhost:8080/"
48 | }
49 | },
50 | "source": [
51 | "!pip install rlcard[torch]"
52 | ],
53 | "execution_count": null,
54 | "outputs": [
55 | {
56 | "output_type": "stream",
57 | "name": "stdout",
58 | "text": [
59 | "Collecting rlcard[torch]\n",
60 | " Downloading rlcard-1.0.7.tar.gz (268 kB)\n",
61 | "\u001b[?25l\r\u001b[K |█▏ | 10 kB 16.1 MB/s eta 0:00:01\r\u001b[K |██▍ | 20 kB 18.6 MB/s eta 0:00:01\r\u001b[K |███▋ | 30 kB 21.5 MB/s eta 0:00:01\r\u001b[K |████▉ | 40 kB 20.1 MB/s eta 0:00:01\r\u001b[K |██████ | 51 kB 16.5 MB/s eta 0:00:01\r\u001b[K |███████▎ | 61 kB 18.6 MB/s eta 0:00:01\r\u001b[K |████████▌ | 71 kB 16.1 MB/s eta 0:00:01\r\u001b[K |█████████▊ | 81 kB 7.0 MB/s eta 0:00:01\r\u001b[K |███████████ | 92 kB 7.6 MB/s eta 0:00:01\r\u001b[K |████████████▏ | 102 kB 8.3 MB/s eta 0:00:01\r\u001b[K |█████████████▍ | 112 kB 8.3 MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 122 kB 8.3 MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 133 kB 8.3 MB/s eta 0:00:01\r\u001b[K |█████████████████ | 143 kB 8.3 MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 153 kB 8.3 MB/s eta 0:00:01\r\u001b[K |███████████████████▌ | 163 kB 8.3 MB/s eta 0:00:01\r\u001b[K |████████████████████▊ | 174 kB 8.3 MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 184 kB 8.3 MB/s eta 0:00:01\r\u001b[K |███████████████████████▏ | 194 kB 8.3 MB/s eta 0:00:01\r\u001b[K |████████████████████████▍ | 204 kB 8.3 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 215 kB 8.3 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▉ | 225 kB 8.3 MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 235 kB 8.3 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▎ | 245 kB 8.3 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▌ | 256 kB 8.3 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 266 kB 8.3 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 268 kB 8.3 MB/s \n",
62 | "\u001b[?25hRequirement already satisfied: numpy>=1.16.3 in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.21.5)\n",
63 | "Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.1.0)\n",
64 | "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.10.0+cu111)\n",
65 | "Collecting GitPython\n",
66 | " Downloading GitPython-3.1.27-py3-none-any.whl (181 kB)\n",
67 | "\u001b[K |████████████████████████████████| 181 kB 41.9 MB/s \n",
68 | "\u001b[?25hCollecting gitdb2\n",
69 | " Downloading gitdb2-4.0.2-py3-none-any.whl (1.1 kB)\n",
70 | "Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (3.2.2)\n",
71 | "Collecting gitdb>=4.0.1\n",
72 | " Downloading gitdb-4.0.9-py3-none-any.whl (63 kB)\n",
73 | "\u001b[K |████████████████████████████████| 63 kB 922 kB/s \n",
74 | "\u001b[?25hCollecting smmap<6,>=3.0.1\n",
75 | " Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\n",
76 | "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from GitPython->rlcard[torch]) (3.10.0.2)\n",
77 | "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (3.0.7)\n",
78 | "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (0.11.0)\n",
79 | "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (1.4.0)\n",
80 | "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (2.8.2)\n",
81 | "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->rlcard[torch]) (1.15.0)\n",
82 | "Building wheels for collected packages: rlcard\n",
83 | " Building wheel for rlcard (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
84 | " Created wheel for rlcard: filename=rlcard-1.0.7-py3-none-any.whl size=325373 sha256=dd6ec050c7312061d299e960ea393b346a9278484233bb6c92c0f87c7b1d4d90\n",
85 | " Stored in directory: /root/.cache/pip/wheels/8a/90/bd/bc402a48ca90970c9a7c2c4387dcb885fdf6073ec231a605ad\n",
86 | "Successfully built rlcard\n",
87 | "Installing collected packages: smmap, gitdb, rlcard, GitPython, gitdb2\n",
88 | "Successfully installed GitPython-3.1.27 gitdb-4.0.9 gitdb2-4.0.2 rlcard-1.0.7 smmap-5.0.0\n"
89 | ]
90 | }
91 | ]
92 | },
93 | {
94 | "cell_type": "markdown",
95 | "source": [
96 | "Now we import `rlcard` and `DQNAgent`. The `DQNAgent` will learn how to win the game."
97 | ],
98 | "metadata": {
99 | "id": "2bt-JVoXyTwM"
100 | }
101 | },
102 | {
103 | "cell_type": "code",
104 | "metadata": {
105 | "id": "4_A_Br3Jj0xW"
106 | },
107 | "source": [
108 | "import rlcard\n",
109 | "from rlcard.agents import DQNAgent"
110 | ],
111 | "execution_count": null,
112 | "outputs": []
113 | },
114 | {
115 | "cell_type": "markdown",
116 | "metadata": {
117 | "id": "N2ltkfinYmiU"
118 | },
119 | "source": [
120 | "Let's create the Blackjack environment and take a look at it."
121 | ]
122 | },
123 | {
124 | "cell_type": "code",
125 | "metadata": {
126 | "id": "l_8Kuf47kghG",
127 | "colab": {
128 | "base_uri": "https://localhost:8080/"
129 | },
130 | "outputId": "177c5020-2885-465b-9648-40c1ddcd6c06"
131 | },
132 | "source": [
133 | "env = rlcard.make(\"blackjack\")\n",
134 | "print(\"Number of actions:\", env.num_actions)\n",
135 | "print(\"Number of players:\", env.num_players)\n",
136 | "print(\"Shape of state:\", env.state_shape)\n",
137 | "print(\"Shape of action:\", env.action_shape)"
138 | ],
139 | "execution_count": null,
140 | "outputs": [
141 | {
142 | "output_type": "stream",
143 | "name": "stdout",
144 | "text": [
145 | "Number of actions: 2\n",
146 | "Number of players: 1\n",
147 | "Shape of state: [[2]]\n",
148 | "Shape of action: [None]\n"
149 | ]
150 | }
151 | ]
152 | },
153 | {
154 | "cell_type": "markdown",
155 | "source": [
156 | "Blackjack is a very simple game with only two possible actions. There is only one player. It's time for us to train our DQN to master this game! We first create a DQNAgent."
157 | ],
158 | "metadata": {
159 | "id": "W2vJYyyqzVj3"
160 | }
161 | },
162 | {
163 | "cell_type": "code",
164 | "metadata": {
165 | "id": "8bqfMncnJTYU"
166 | },
167 | "source": [
168 | "agent = DQNAgent(\n",
169 | " num_actions=env.num_actions,\n",
170 | " state_shape=env.state_shape[0],\n",
171 | " mlp_layers=[64,64],\n",
172 | ")"
173 | ],
174 | "execution_count": null,
175 | "outputs": []
176 | },
177 | {
178 | "cell_type": "markdown",
179 | "source": [
180 | "Here, we use a 64-64 deep neural network to learn. Then we pass the DQNAgent to the environment."
181 | ],
182 | "metadata": {
183 | "id": "tJVGcqYhzvyG"
184 | }
185 | },
186 | {
187 | "cell_type": "code",
188 | "source": [
189 | "env.set_agents([agent])"
190 | ],
191 | "metadata": {
192 | "id": "5JEwJb_Oztut"
193 | },
194 | "execution_count": null,
195 | "outputs": []
196 | },
197 | {
198 | "cell_type": "markdown",
199 | "source": [
200 | "Now we are ready to train! We first import some useful classes and functions for training."
201 | ],
202 | "metadata": {
203 | "id": "zy_sHLKf0TVl"
204 | }
205 | },
206 | {
207 | "cell_type": "code",
208 | "metadata": {
209 | "id": "GhgGYyick13x"
210 | },
211 | "source": [
212 | "from rlcard.utils import (\n",
213 | " tournament,\n",
214 | " reorganize,\n",
215 | " Logger,\n",
216 | " plot_curve,\n",
217 | ")"
218 | ],
219 | "execution_count": null,
220 | "outputs": []
221 | },
222 | {
223 | "cell_type": "markdown",
224 | "source": [
225 | "Then start the training and log the performance with our `Logger`. The script below will train DQN for 1000 epochs (i.e., 1000 games). Usually, the agent will become stronger if trained longer."
226 | ],
227 | "metadata": {
228 | "id": "w2bT6ETj04Hq"
229 | }
230 | },
231 | {
232 | "cell_type": "code",
233 | "metadata": {
234 | "id": "rYAj8Q22k5e2",
235 | "outputId": "e43b29b2-553a-4c17-e3b3-166910af9c42",
236 | "colab": {
237 | "base_uri": "https://localhost:8080/"
238 | }
239 | },
240 | "source": [
241 | "with Logger(\"experiments/leduc_holdem_dqn_result/\") as logger:\n",
242 | " for episode in range(1000):\n",
243 | "\n",
244 | " # Generate data from the environment\n",
245 | " trajectories, payoffs = env.run(is_training=True)\n",
246 | "\n",
247 | " # Reorganaize the data to be state, action, reward, next_state, done\n",
248 | " trajectories = reorganize(trajectories, payoffs)\n",
249 | "\n",
250 | " # Feed transitions into agent memory, and train the agent\n",
251 | " for ts in trajectories[0]:\n",
252 | " agent.feed(ts)\n",
253 | "\n",
254 | " # Evaluate the performance.\n",
255 | " if episode % 50 == 0:\n",
256 | " logger.log_performance(\n",
257 | " env.timestep,\n",
258 | " tournament(\n",
259 | " env,\n",
260 | " 10000,\n",
261 | " )[0]\n",
262 | " )\n",
263 | "\n",
264 | " # Get the paths\n",
265 | " csv_path, fig_path = logger.csv_path, logger.fig_path"
266 | ],
267 | "execution_count": null,
268 | "outputs": [
269 | {
270 | "output_type": "stream",
271 | "name": "stdout",
272 | "text": [
273 | "\n",
274 | "----------------------------------------\n",
275 | " timestep | 552308\n",
276 | " reward | -1.0\n",
277 | "----------------------------------------\n",
278 | "\n",
279 | "----------------------------------------\n",
280 | " timestep | 571353\n",
281 | " reward | -1.0\n",
282 | "----------------------------------------\n",
283 | "INFO - Step 100, rl-loss: 1.7699662446975708\n",
284 | "INFO - Copied model parameters to target network.\n",
285 | "INFO - Step 142, rl-loss: 1.4145885705947876\n",
286 | "----------------------------------------\n",
287 | " timestep | 590632\n",
288 | " reward | -0.8554\n",
289 | "----------------------------------------\n",
290 | "INFO - Step 210, rl-loss: 1.2130974531173706\n",
291 | "----------------------------------------\n",
292 | " timestep | 608418\n",
293 | " reward | -0.7016\n",
294 | "----------------------------------------\n",
295 | "INFO - Step 280, rl-loss: 1.1429738998413086\n",
296 | "----------------------------------------\n",
297 | " timestep | 625811\n",
298 | " reward | -0.6066\n",
299 | "----------------------------------------\n",
300 | "INFO - Step 348, rl-loss: 0.9004282355308533\n",
301 | "----------------------------------------\n",
302 | " timestep | 642940\n",
303 | " reward | -0.4844\n",
304 | "----------------------------------------\n",
305 | "INFO - Step 413, rl-loss: 0.6521245837211609\n",
306 | "----------------------------------------\n",
307 | " timestep | 659822\n",
308 | " reward | -0.365\n",
309 | "----------------------------------------\n",
310 | "INFO - Step 480, rl-loss: 0.840827465057373\n",
311 | "----------------------------------------\n",
312 | " timestep | 676163\n",
313 | " reward | -0.085\n",
314 | "----------------------------------------\n",
315 | "INFO - Step 553, rl-loss: 0.5684747099876404\n",
316 | "----------------------------------------\n",
317 | " timestep | 691722\n",
318 | " reward | -0.0584\n",
319 | "----------------------------------------\n",
320 | "INFO - Step 624, rl-loss: 0.8044297099113464\n",
321 | "----------------------------------------\n",
322 | " timestep | 706931\n",
323 | " reward | -0.0619\n",
324 | "----------------------------------------\n",
325 | "INFO - Step 691, rl-loss: 0.5927526354789734\n",
326 | "----------------------------------------\n",
327 | " timestep | 721525\n",
328 | " reward | -0.0763\n",
329 | "----------------------------------------\n",
330 | "INFO - Step 757, rl-loss: 0.8437865376472473\n",
331 | "----------------------------------------\n",
332 | " timestep | 736157\n",
333 | " reward | -0.0601\n",
334 | "----------------------------------------\n",
335 | "INFO - Step 828, rl-loss: 0.561453640460968\n",
336 | "----------------------------------------\n",
337 | " timestep | 750661\n",
338 | " reward | -0.0781\n",
339 | "----------------------------------------\n",
340 | "INFO - Step 899, rl-loss: 0.7222794890403748\n",
341 | "----------------------------------------\n",
342 | " timestep | 764648\n",
343 | " reward | -0.0693\n",
344 | "----------------------------------------\n",
345 | "INFO - Step 972, rl-loss: 0.4520031809806824\n",
346 | "----------------------------------------\n",
347 | " timestep | 778987\n",
348 | " reward | -0.0774\n",
349 | "----------------------------------------\n",
350 | "INFO - Step 1040, rl-loss: 0.5117819309234619\n",
351 | "----------------------------------------\n",
352 | " timestep | 793088\n",
353 | " reward | -0.0543\n",
354 | "----------------------------------------\n",
355 | "INFO - Step 1100, rl-loss: 0.4985390901565552\n",
356 | "INFO - Copied model parameters to target network.\n",
357 | "INFO - Step 1116, rl-loss: 0.6727216243743896\n",
358 | "----------------------------------------\n",
359 | " timestep | 807093\n",
360 | " reward | -0.0829\n",
361 | "----------------------------------------\n",
362 | "INFO - Step 1183, rl-loss: 0.681004524230957\n",
363 | "----------------------------------------\n",
364 | " timestep | 820958\n",
365 | " reward | -0.0615\n",
366 | "----------------------------------------\n",
367 | "INFO - Step 1256, rl-loss: 0.6855602264404297\n",
368 | "----------------------------------------\n",
369 | " timestep | 834715\n",
370 | " reward | -0.0823\n",
371 | "----------------------------------------\n",
372 | "INFO - Step 1329, rl-loss: 0.5815291404724121\n",
373 | "----------------------------------------\n",
374 | " timestep | 848487\n",
375 | " reward | -0.0985\n",
376 | "----------------------------------------\n",
377 | "INFO - Step 1393, rl-loss: 0.6741513013839722\n",
378 | "Logs saved in experiments/leduc_holdem_dqn_result/\n"
379 | ]
380 | }
381 | ]
382 | },
383 | {
384 | "cell_type": "markdown",
385 | "source": [
386 | "Now we plot the learning curves to monitor how the agent gets improved!"
387 | ],
388 | "metadata": {
389 | "id": "rbIiQS3K2R0u"
390 | }
391 | },
392 | {
393 | "cell_type": "code",
394 | "metadata": {
395 | "id": "06n2QSTDIqb0",
396 | "outputId": "4536c8af-e4ca-489e-c7fd-4d32c839f81c",
397 | "colab": {
398 | "base_uri": "https://localhost:8080/",
399 | "height": 279
400 | }
401 | },
402 | "source": [
403 | "plot_curve(csv_path, fig_path, \"DQN\")"
404 | ],
405 | "execution_count": null,
406 | "outputs": [
407 | {
408 | "output_type": "display_data",
409 | "data": {
410 | "text/plain": [
411 | ""
412 | ],
413 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAY8AAAEGCAYAAACdJRn3AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3deXxU9b3/8dcne0JCNkgI+yKrgGAAwbYKdbfWrda23rrUWmqtXbRabW0t3va2aqu9ba+9rbfXSvX+LnVpK2hbVCTVuiGx7BBA1rBkIxACZP/+/pgDN9AAmZDJmTnzfj4e88iZM+fM+Xw4w3zmnO/3fI855xAREQlHgt8BiIhI7FHxEBGRsKl4iIhI2FQ8REQkbCoeIiIStiS/A+huffr0cUOHDg1rnQMHDtCrV6/IBOSDIOWjXKJXkPIJUi7QtXxKS0urnXN9O7t84IrH0KFDWbp0aVjrlJSUMHPmzMgE5IMg5aNcoleQ8glSLtC1fMxsazjL67SViIiETcVDRETCpuIhIiJhC1ybR0eam5spLy+noaGhw9ezs7NZu3ZtD0fV/dLS0hg4cKDfYYhIHIiL4lFeXk5WVhZDhw7FzP7p9f3795OVleVDZN3HOUdNTQ3l5eV+hyIicSAuTls1NDSQn5/fYeEICjMjPz//uEdXIiLdKS6KBxDownFYPOQoItEhLk5biUj8qm9sYdHaCppa2rh0QhG9UvW11x30r9hDEhMTmTBhAs3NzSQlJXHDDTdwxx13kJAQOvj7+9//zp133kldXR3OOb72ta9x2223ATBnzhwefvhhtmzZQkFBAQCZmZnU19f7lo+EOOdYtaOOF1fupHp/E71SE8lISaJXSiIZqUf/XV/TSl75XjJSkshISaRXShIZqYkkJ/pzAqD2QBPrdu9nREEvCrLSfIkhUg4XjJdW7KJkfRVNLW0APLBgDZ84cwCfnT6EkYWx3c7pNxWPHpKens6yZcsAqKys5LrrrqOuro4HHniA3bt3c9111/GnP/2JM888k+rqai666CKKioq46qqrAOjTpw+PPPIIDz30kJ9piGdjZT3zl+9kwfKdbK4+QHKiUZCVxsGmFg40tR75svon7735T7NSEhPISktiZGEmEwZkM35ANhMH5jAkL4OEhO45FdnW5thYVU/p1lpKt9by/rZaNlUdOPL6iL69mDEinxnD+3DW8Dz6ZKZ2y3Z7UkcFo7B3KtdNG8zHJhaRYMbT72zlf5dsZ+7bW5kxPJ/rZwzhgnGF3RbD/oZmSsqqeHlNBa+vr2Jgbjrnjy3kgnGFnN6/d6BOLat4+KCgoIDHH3+cqVOnMmfOHB577DFuuukmzjzzTCBUKB5++GG++93vHikeN998M08++ST33HMPeXl5foYft3bsPcSC5TuZv2wna3bVYQYzhufzxXOGc/H4fuRkpBxZtrm1jYNNraFi0hj6++a7pYwaN54DTa0cbGw56u++Q02s2bWfuW9vPVJ4slKTGD8gmwkDvYIyIJsh+Rmd+gLa39DMsu17eX/rXkq31fKPbbXsb2gBIDcjmeIhuVxTPJCx/XqzvmI/b2+q4Y/v7+Dpd7YBMKowk+nD85kxPJ+zhueT1yvlRJs7Keccew82s732IOW1h0hKMPrnpNM/J53cjOQuf6merGAUD849qgAXD8nlOx8byzNLy3n6na3c9j/vU9g7lRkFbYw9s4HC3uEfgVXtb+TVtRUsXL2btzbW0NTaRn6vFM4bU8C2PQf5+Wsb+NmiDQzISef8sQWcP66Qs4blk5IU203OcVc8HliwmjU7646a19raSmJiYpffc1z/3nzv46eHtc7w4cNpbW2lsrKS1atXc+ONNx71+pQpU1izZs2R55mZmdx888387Gc/44EHHuhyrBKe6vpG/rxyF/OX7WTp1loAJg3K4f7LxnHZxCIKjvNlk5yYQHZ6AtnpyUfm7dmYyMyxJ/6V29zaxvqK/azasY+VO/axsnwfT765haZWr6CkJTG+fzYTvYIyYUA2g/My2LbnYOioYlst72+tpaxiP86BGYwuzOKyif0pHpJL8ZBchh5TgGaNKeCL546gpbWNlTv28c6mPby9qYbnSsv53duh4Y7G9Mti+vB875F3VKE8rL6xhe17DoYetYcorz3I9j2hv+W1h6hvbOkw57TkBPpnp1OUk+b9TWdAThpF2elegUkjIyXpqO2EUzCOlZ+ZypdmjmD2OcMpKavkqXe28kJZFQsefI2LTi/k+ulDmT4874QFbUv1AV5es5uXV1dQuq0W52BQXjo3zBjChaf3o3hILoleDNX1jby2tpKX11Tw+6Who56s1CRmjing/LEFzBxdcNTnJFbEXfGIZV/96leZNGkSd911l9+hBFpdQzMLV+1m/vKdvPVBDa1tjtGFWdx90Wg+PrE/g/MzIrbt5MQETu+fzen9s/nU1NC8ppZjCsqOffy2XUFJTjSaWx0QOlqZNDiHi8f348zBuUwanEPvtM59MSUlJjB5cC6TB+fypZkjaG5tY0X5Pt7ZVMM7m2qY9942nnxrC2Ywtl9vChIbeWZHKdv3HGJ77UH2Hmw+6v16pSQyKC+DgbkZTB+ez6C8DAblpjMwN4OWtjZ27m1g595D7Np3KDS97xCvb6iicn8jzh0dW3Z6Mv1z0slOT+L9bXvDLhgdSUwwzhtbyHljC3nmz6+xkSKeWbqdP6/czciCTK6fMYSrJg8gKy0Z5xyrd9axcHWoYJRV7AdgXFFvvn7eKC48vZAx/bI6LDh9MlO5duogrp06iENNrfx9YzWvrNnNorWVLFi+k6QEY/rwfM4fW8AFp/djQE56WHn4Je6KR0dHCH5cJLhp0yYSExMpKChg3LhxlJaWcsUVVxx5vbS0lClTphy1Tk5ODtdddx2PPfZYj8YaDxqaW3ltXSXzl+3ktbJKmlraGJibzhfPGc7lk/ozpl9v32JLSUpgvNcW8mlv3uGCsnLHPj6orGd430yKh+RyWkHmkV+8pyo5MeHI0cqXZ51GU0sby8v38s4HNby9qYb3trVQ2LifgbkZTByY7RWHDAblhQrEyU5HTTzOYAjNrW1U1DWwc28Du/YdYsfeQ+zyCk11feMpFYzjKchI4NqZY7nzglEsWL6Tp9/Zyv0vrObBv6zj3FF9WVG+jx17D5FgMHVoHvdfNo4LxhUyKC+8HxLpKYlcMC7UBtLa5li2vZaX11TwypoK5ixYw5wFaxhX1JtzRvVl2rBciofkRe1RSdwVj2hQVVXFrbfeyu23346Z8eUvf5mzzjqLq6++mkmTJlFTU8N9993Hgw8++E/r3nnnnUydOpWWlo5PAUjXfOF3S3ljQzV9MkO/Zi+f1J/Jg3KitoGzfUHpyW1OHZrH1KF5fOW8kREbxjw5MYGBuaEjlp6WlpzIJ6cM4pNTBrF8+16eemcrJWVVTBqUw9fPH8l5YwtPuf3nsMQEo3hIHsVD8vjWJWP5oKqeV9dU8OraCn7zxiZ+9TeHGYzp15tpQ3OZOiyPaUP
zjnuqtKepePSQQ4cOMWnSpCNdda+//nruvPNOAIqKinj66aeZPXs2+/btY8uWLTz55JOce+65//Q+ffr04aqrruKnP/1pT6cQWNX1jbyxoZpbPjyMb106ttt+uUtsO2NQDmcMyumx7Y3om8mIczP54rkjONTUyj+217Jk8x7e27KHZ5aWM9drfxqanxEq4l4x6Wwniu6m4tFDWltbT/j6Oeecw5IlSwD45S9/yQ9/+EMuvvhicnNzmTNnzlHLPvroozz66KORCjXuvL6+CoArJw9Q4ZCokJ6SyNkj+nD2iD5A6FTe6p11LNlcw5LNtbyytoJnS0Pj2BVkpR4pJGcNz+uxU6wqHlHotttuO3KBoETe4rIq+mSmMq7Iv3YNkRNJTkxg0qAcJg3KYfY5/3fdzrub9/Ced3Ty0opdTBiQzYKvfLhHYlLxkLjW2uZ4fX0V548t7LbGV5FIS0gwRhVmMaowi+unD8E5R3ntIWoPNvVYDHFTPJxzUdv42V3csf0b5aSWba9l36FmZo3p63coIl1mZqHebmH2/joVsX2JYyelpaVRU1MT6C/Xw/fzSEuLjp4YsaKkrIoEg4+cpuIhEo64OPIYOHAg5eXlVFVVdfh6Q0NDIL50D99JcOvWrX6HEjNKyqooHpJLdkZ09qUXiVZxUTySk5MZNmzYcV8vKSlh8uTJPRiRRIPK/Q2s3LGPuy8a7XcoIjEnLk5biXTk9fXVAMwcrVNWIuFS8ZC4tbiskoIsddEV6QoVD4lLLa1tvLG+ipmj+wa+F55IJKh4SFz6x/a91DW0MHN0gd+hiMQkFQ+JSyVllSQmGB86rY/foYjEJBUPiUuL13lddKN0uGuRaKfiIXGnoq6BNbvq1MtK5BSoeEjc+VtZ6GLRWWrvEOkyFQ+JOyXrK+nXO40x/Xr27pEiQaLiIXGlubWNN9ZXq4uuyClS8ZC48v7WWvY3tqi9Q+QUqXhIXFlcVkWSuuiKnDIVD4krJWWVTBmaS1aauuiKnApfioeZ5ZnZK2a2wfub28Eyk8zsbTNbbWYrzOxTfsQqwbFr3yHW7d6vXlYi3cCvI497gUXOuZHAIu/5sQ4CNzjnTgcuBv7dzHJ6MEYJmMNddDUkicip86t4XAHM9abnAlceu4Bzbr1zboM3vROoBNTKKV1WUlZFUXYaowoz/Q5FJOaZH7dmNbO9zrkcb9qA2sPPj7P8NEJF5nTnXFsHr88GZgMUFhYWz5s3L6x46uvrycwMzhdKkPLprlxa2hy3LzrI9KIkbhqf2g2RhS9I+wWClU+QcoGu5TNr1qxS59yUTq/gnIvIA3gVWNXB4wpg7zHL1p7gfYqAMmB6Z7ZbXFzswrV48eKw14lmQcqnu3J5a2O1G3LPi+6vq3Z1y/t1RZD2i3PByidIuTjXtXyApS6M7/iI3YbWOXf+8V4zswozK3LO7TKzIkKnpDparjfwEnCfc+6dCIUqcaCkrJLkRHXRFekufrV5zAdu9KZvBF44dgEzSwH+CPzOOfdcD8YmAVRSVsXUoXlkpkbs95JIXPGreDwIXGBmG4DzveeY2RQz+423zLXAOcBNZrbMe0zyJ1yJZTv3HqKsQl10RbqTLz/DnHM1wHkdzF8K3OJNPw083cOhSQCVHOmiq856It1FV5hL4C0uq2RATjqnFQSnN42I31Q8JNAaW1p5a6NG0RXpbioeEmhLt9RyoKlVV5WLdDMVDwm0krJKUhITOHtEvt+hiASKiocE2uKyKqYNy6OXuuiKdCsVDwms7XsOsrGyXr2sRCJAxUMCq2S9RtEViRQVDwmsv5VVMigvnRF9e/kdikjgqHhIIDW2tPLmxhpmjipQF12RCFDxkEBasnkPh5pbmTVG7R0ikaDiIYFUUlZFSlICM4ZrFF2RSFDxkEBaXFbJWcPySE9J9DsUkUBS8ZDA2VZzkE1VBzSKrkgEqXhI4JSsD91bTNd3iESOiocETklZFUPyMxjWR110RSJFxUMCpaG5lbc+qGbmKI2iKxJJKh4SKO9u3kNDcxszx6i9QySSVDwkUBavqyQ1KYEZwzWKrkgkqXhIoPxtfRUzRuSTlqwuuiKRpOIhgbGl+gCbqw8wc5R6WYlEmoqHBEZJ2eEuumrvEIk0FQ8JjMVlVQzr04uh6qIrEnEqHhIIDc2tvLOphnN1ykqkR6h4SCC8vamGxpY2ZqmLrkiPUPGQQChZV0lacgJnDcvzOxSRuKDiITHPOcfisirOHtFHXXRFeoiKh8S8zdUH2LbnoAZCFOlBKh4S8xaXVQEwc5TaO0R6ioqHxLSa+kb+s+QDJg7MZnB+ht/hiMSNJL8DEOkq5xzf/uNK6g4189Tnp/kdjkhc0ZGHxKzn39/BwtUVfOPCUYwt6u13OCJxRcVDYtL2PQeZM38104blcctHhvsdjkjc8aV4mFmemb1iZhu8v7knWLa3mZWb2X/0ZIwSvVrbHN94ZjkAj3zyDBITdNMnkZ7m15HHvcAi59xIYJH3/Hi+D7zeI1FJTPjNG5tYsmUPcy4/nUF5aiQX8YNfxeMKYK43PRe4sqOFzKwYKARe7qG4JMqt2VnHT14u4+LT+/GJMwf4HY5I3PKreBQ653Z507sJFYijmFkC8AhwV08GJtGrobmVO59ZRnZ6Cj+8eoLuUS7iI3POReaNzV4F+nXw0n3AXOdcTrtla51zR7V7mNntQIZz7mEzuwmY4py7/Tjbmg3MBigsLCyeN29eWLHW19eTmZkZ1jrRLEj5tM9l3rom/rqlmTuKUzmjb+z1Mg/SfoFg5ROkXKBr+cyaNavUOTel0ys453r8AZQBRd50EVDWwTL/A2wDtgDVQB3w4Mneu7i42IVr8eLFYa8TzYKUz+Fc3tpY7Ybe+6L79h9W+BvQKQjSfnEuWPkEKRfnupYPsNSF8T3u18+3+cCNwIPe3xeOXcA59y+Hp9sdeZyoYV0Cqq6hmbueXc6QvAzu+9hYv8MREfxr83gQuMDMNgDne88xsylm9hufYpIoNWf+anbXNfDTT00iIyX2TleJBJEv/xOdczXAeR3MXwrc0sH8J4EnIx6YRJ33drfwh2U7+Op5I5k8+LiXA4lID9MV5hK1KusamLu6kYkDs/nKR0/zOxwRaUfFQ6KSc45vPr+Cplb46acmkZyoj6pINNH/SIlK//PuNkrKqrh2dAoj+ganC6VIUKh4SNTZVFXPv720lo+M7MN5g9VALhKNVDwkqrS0tnHHM8tJSUrgx9ecoavIRaKUiodElccWf8Dy7Xv5t6vG0y87ze9wROQ4VDwkaizfvpefv7aBKyf157KJ/f0OR0RO4IQnlM1sAXDcwa+cc5d3e0QSlw41tXLH75dRkJXKA1eM9zscETmJk7VG/sT7ezWhQQ6f9p5/BqiIVFASf370l7Vsqj7A/7vlLLLTk/0OR0RO4oTFwzn3NwAze8QdPdriAjNbGtHIJG6UlFXyu7e38vkPD+Ps0/r4HY6IdEJn2zx6mdmRG0Wb2TCgV2RCknhSe6CJbz63gpEFmdx90Wi/wxGRTupsJ/qvAyVmtgkwYAje/TNEus
o5x3f+tIrag008cdNU0pIT/Q5JRDrppMXDu6NfNjASGOPNXueca4xkYBJ8LyzbyUsrd3H3RaMZPyDb73BEJAwnPW3lnGsDvumca3TOLfceKhxySnbtO8R3X1hF8ZBcbj13hN/hiEiYOtvm8aqZ3WVmg8ws7/AjopFJYLW1Oe5+dgWtbY5Hrz2DxARdRS4Sazrb5vEp7++X281zwPAOlhU5oafe2crfN1bzb1eNZ0i++l2IxKJOFQ/n3LBIByLx4YOqen70l7XMHN2X66YN9jscEemiTg9ZambjgXHAkQGHnHO/i0RQEkwtrW3c+cxy0pITefgTEzXooUgM61TxMLPvATMJFY8/A5cAfwdUPKTTflkSGvTwP66bTEFvDXooEss622B+DaF7ju92zn0OOINQ912RTllZvo+fL9rAFRr0UCQQOls8DnlddlvMrDdQCQyKXFgSJA3NrdzxzDL6ZKbyr5dr0EORIOhsm8dSM8sB/gsoBeqBtyMWlQTKjxeWsbGynqc+P43sDA16KBIEne1tdZs3+Ssz+yvQ2zm3InJhSVC89UE1//33zdwwYwgfGdnX73BEpJt0tsH8KeB14A3n3LrIhiRBUdfQzN3PrmBYn17ce8mYk68gIjGjs20eTwBFwC/MbJOZPW9mX4tgXBIA/7pgDbv2HeLRa88gI6XTvcJFJAZ09rTVYjN7HZgKzAJuBU4HfhbB2CSGLVy9m+dKy/nKR09j8uBcv8MRkW7W2dNWiwjdv+Nt4A1gqnOuMpKBSeyqrm/k239YyfgBvfnKR0f6HY6IREBnT1utAJqA8cBEYLyZpUcsKolZzjnufX4l+xtbePTaSaQkdfYjJiKxpLOnre4AMLMs4Cbgt4TuaZ4ascgkJj1bWs6rayv4zsfGMqowy+9wRCRCOnva6nbgI0AxsIVQA/obkQtLYtH2PQf51wVrOGtYHjd/SGNpigRZZ7vApAGPAqXOuZYIxiMxqq3NcdezywF45NozSNA9OkQCrVMnpJ1zPwGSgesBzKyvmemnpRzxxJubeXfzHr738XEMzM3wOxwRibBOFQ9vVN17gG95s5KBpyMVlMSW9RX7eXhhGReMK+Sa4oF+hyMiPaCzXWGuAi4HDgA453YCXW4N9W5j+4qZbfD+dnghgJkNNrOXzWytma0xs6Fd3aZERlNLG3f8fhlZqUn86OoJukeHSJzobPFocs45QreexcxO9d6h9wKLnHMjgUXe8478Dvixc24sMI3QaL4SRX7x2gZW76zjR1dPoE+mOt+JxIuTFg8L/ZR80cx+DeSY2ReAVwmNsNtVVwBzvem5wJUdbHcckOScewXAOVfvnDt4CtuUbvb+tloeW7yRa4oHcuHp/fwOR0R6kIUOKE6ykNlK4E7gQsCAhYe/1Lu0UbO9zrkcb9qA2sPP2y1zJXALoYsThxEqWPc651o7eL/ZwGyAwsLC4nnz5oUVT319PZmZmV1JJSr1RD6NLY773zpESxv84MPppCdF5nRVkPZNkHKBYOUTpFyga/nMmjWr1Dk3pdMrOOdO+iB0dDC1M8u2W+dVYFUHjyuAvccsW9vB+tcA+4DhhLoUPw98/mTbLS4uduFavHhx2OtEs0jn09ra5r72v++7ofe+6N7aWB3RbQVp3wQpF+eClU+QcnGua/kAS10Y3/Gdvc7jLOBfzGwrXqO5V3gmnqAonX+818yswsyKnHO7zKyIjtsyyoFlzrlN3jp/AqYD/93JmCVCHvrrOv60bCd3XzSaGSPy/Q5HRHzQ2eJxUTdvdz5wI/Cg9/eFDpZ5j1AbS1/nXBXwUWBpN8chYfrNG5v49eubuGHGEG6bOcLvcETEJ50d22prN2/3QeAZM/s8sBW4FsDMpgC3Ouducc61mtldwCKvXaSUU2ukl1P0wrId/OCltVw6oR/f+/jp6pYrEsd8uUOPc64GOK+D+UsJNZIffv4KoVF8xWdvbKjirmeXc9awPB69dhKJGn5EJK5pvGw5qVU79nHrU6WM6JvJ4zdMIS050e+QRMRnKh5yQltrDnDTb5eQk5HC3JunkZ2e7HdIIhIFdGNpOa7q+kZufGIJLW2OeTdPo7B3mt8hiUiU0JGHdOhAYwuf++177K5r4ImbpnJaQXAuoBKRU6cjD/knTS1t3Pp0KWt21fH49cWcObjDcStFJI7pyEOO0tbm+OZzy3ljQzU/unoC540t9DskEYlCKh5ylAfbXT1+7ZRBfocjIlFKxUOO+M0bm3hcV4+LSCeoeAigq8dFJDwqHqKrx0UkbCoecW5lua4eF5HwqXjEsa01B/jck7p6XETCp+IRp6r2N3LDE0tobXPM1dXjIhImXSQYh+obW7j5yfeoqGvg/31huq4eF5GwqXjEmaaWNr7kXT3+Xzfo6nER6Rqdtoozv3htw5Grxz86RlePi0jXqHjEkfLagzz++iYuP6O/rh4XkVOi4hFHHvprGQD3XDLG50hEJNapeMSJ0q17WLB8J7PPGc6AnHS/wxGRGKfiEQfa2hz/+uJaCrJSufVcjVklIqdOxSMOzF++k+Xb9/LNi8fQK1Ud7ETk1Kl4BNyhplYe+us6JgzI5urJA/wOR0QCQsUj4B5/fRO79jXw3cvGkaABD0Wkm6h4BNjufQ386m8fcOmEfkwblud3OCISICoeAfbwwnW0tjm+dclYv0MRkYBR8Qio5dv38of3d3Dzh4cxKC/D73BEJGBUPALIOcf3X1xDn8wUvjxLXXNFpPupeATQe7tbWbq1lm9cOJqsNN2jQ0S6n4pHwDQ0t/L7sibG9MvS+FUiEjG6Yixg/vvvm6lpcPzi4+N0L3IRiRgdeQRI5f4Gfrl4I5MLEjl7RB+/wxGRAFPxCJBHFq6nqbWNT41O8TsUEQk4FY+AWLVjH8+UbufGGUPp10u7VUQiy5dvGTPLM7NXzGyD97fDe6Ga2cNmttrM1prZz81MJ/E7cLhrbk56Ml85b6Tf4YhIHPDrJ+q9wCLn3Ehgkff8KGZ2NvAhYCIwHpgKnNuTQcaKhasreHfzHu68YBTZ6eqaKyKR51fxuAKY603PBa7sYBkHpAEpQCqQDFT0SHQxpLGllR/9ZS0jCzL5zLTBfocjInHCnHM9v1Gzvc65HG/agNrDz49Z7ifALYAB/+Gcu+847zcbmA1QWFhYPG/evLDiqa+vJzMzM7wkosRfNjfz+7ImvlGcyoS+oZ7XsZzPsZRL9ApSPkHKBbqWz6xZs0qdc1M6vYJzLiIP4FVgVQePK4C9xyxb28H6pwEvAZne423gIyfbbnFxsQvX4sWLw14nGlTvb3Dj7/+ru+mJd4+aH6v5dES5RK8g5ROkXJzrWj7AUhfGd3zELhJ0zp1/vNfMrMLMipxzu8ysCKjsYLGrgHecc/XeOn8BZgBvRCTgGPToK+s52NzKfR8b53coIhJn/GrzmA/c6E3fCLzQwTLbgHPNLMnMkgk1lq/tofiiXtnu/fzvkm1cP30IpxUE53BbRGKDX8XjQeACM9sAnO89x8ymmNlvvGWeAz4AVgLLgeXOuQV+BBttnHP84KU1ZKUl8zV1z
RURH/gytpVzrgY4r4P5Swk1kOOcawW+2MOhxYTFZZW8saGa+y8bR24vXU0uIj1PlyLHmObWNn7w4lqG9+3F9TOG+B2OiMQpFY8Y89TbW9lUfYD7Lh1LcqJ2n4j4Q98+MaT2QBM/W7SBj4zsw0fHFPgdjojEMRWPGPKzRRvY39DMdz42Dg3zJSJ+UvGIERsr9/PUO1v5zLTBjO6X5Xc4IhLnVDxigHOOBxasISM5kTsvGOV3OCIiKh6x4NnSct7YUM3dF48mPzPV73BERFQ8ol1FXQPff3EN04bl8dmz1DVXRKKDikcUc85x3x9X0dTSxkOfmEhCghrJRSQ6qHhEsQUrdvHq2gruunA0w/r08jscEZEjVDyiVE19I3Pmr+aMQTnc/OFhfocjInIUFY8o9b35q9nf0MyPr5lIok5XiUiUUfGIQgtX7+bFFbv46kdHMqpQ13SISPRR8Ygy+w42850/rYPxGNUAAAtcSURBVGJcUW9unTnC73BERDrky5Dscnzff2kNew408dubpmrgQxGJWvp2iiIlZZU8V1rOl84dwfgB2X6HIyJyXCoeUWJ/QzPf/sNKTivI5CvnneZ3OCIiJ6TTVlHiob+uY1ddA89/6WxSkxL9DkdE5IR05BEF3v6ghqff2cbnPzSMMwfn+h2OiMhJqXj47GBTC/c8v4Ih+Rl848LRfocjItIpOm3ls0deXs+2PQeZN3s66Sk6XSUisUFHHj56f1stT7y5mc9OH8z04fl+hyMi0mkqHj5paG7lm8+toH92OvdeMtbvcEREwqLTVj75xWsb2FhZz9ybp5GZqt0gIrFFRx4+WLVjH7/62yY+WTyQc0f19TscEZGwqXj0sObWNu5+bgX5vVL4zsfG+R2OiEiX6HxJD/tVyQes3VXH49cXk52R7Hc4IiJdoiOPHrS+Yj8/f20DHz+jPxee3s/vcEREukzFo4e0tLZx97PLyUpLZs7HdbpKRGKbTlv1kCfe3Mzy8n384jOTyc9M9TscEZFToiOPHrC5+gCPvLyeC8YVctnEIr/DERE5ZSoeEdbW5rjnuRWkJiXwgyvHY6b7kYtI7FPxiLCn393Kki17+O5l4yjsneZ3OCIi3cKX4mFmnzSz1WbWZmZTTrDcxWZWZmYbzezenoyxO2zfc5AH/7KOc0b15ZrigX6HIyLSbfw68lgFXA28frwFzCwReAy4BBgHfMbMYqabknOOb/1hJQb86OoJOl0lIoHiS28r59xa4GRfqNOAjc65Td6y84ArgDWRiGnvwSY++au3u+39Wtocm6sP8P0rxzMgJ73b3ldEJBqYc86/jZuVAHc555Z28No1wMXOuVu859cDZznnbu9g2dnAbIDCwsLiefPmhRVHfX09Cam9eGJVY/hJnMCAzASuOC2ZhB4+6qivryczM7NHtxkpyiV6BSmfIOUCXctn1qxZpc654zYjHCtiRx5m9irQ0WXU9znnXujObTnnHgceB5gyZYqbOXNmWOuXlJQwc+ZMLr2gO6Pyz+F8gkC5RK8g5ROkXKBn8olY8XDOnX+Kb7EDGNTu+UBvnoiI+Cyau+q+B4w0s2FmlgJ8Gpjvc0wiIoJ/XXWvMrNyYAbwkpkt9Ob3N7M/AzjnWoDbgYXAWuAZ59xqP+IVEZGj+dXb6o/AHzuYvxO4tN3zPwN/7sHQRESkE6L5tJWIiEQpFQ8REQmbioeIiIRNxUNERMLm6xXmkWBmVcDWMFfrA1RHIBy/BCkf5RK9gpRPkHKBruUzxDnXt7MLB654dIWZLQ3nsvxoF6R8lEv0ClI+QcoFeiYfnbYSEZGwqXiIiEjYVDxCHvc7gG4WpHyUS/QKUj5BygV6IB+1eYiISNh05CEiImFT8RARkfA552L6AWwBVgLLgKXevDmE7v2xzHtc2m75bwEbgTLgonbzL/bmbQTubTd/GPCuN//3QIo3P9V7vtF7fWg35ZMDPAesIzSa8AwgD3gF2OD9zfWWNeDnXgwrgDPbvc+N3vIbgBvbzS/2/r02eusePnXZ4TYikEvM7RtgdLt4lwF1wNdjeL8cL5+Y2zfee94BrAZWAf8LpHVl+92VYwRyeRLY3G6/TIqGz9kpf9n5/SBUPPocM28OodvbHrvsOGC59wEaBnwAJHqPD4DhQIq3zDhvnWeAT3vTvwK+5E3fBvzKm/408PtuymcucIs3nULoC/jhwx9a4F7gIW/6UuAv3odoOvBuuw/CJu9vrjd9+Ittibeseete4s3vcBsRyCVm9433fonAbmBIrO6XE+QTc/sGGEDoizW93XZvCnf73ZljBHJ5Erimg+V9/Zx124fQrwfhFY9vAd9q93whoV/DM4CFxy7n/QNXA0ne/CPLHV7Xm07ylrNTzCXb+/DYMfPLgCJvuggo86Z/DXzm2OWAzwC/bjf/1968ImBdu/lHljveNiKQS0zum3bbvxB4M1b3y0nyibl9Q+gLdzuhL8ok4EXgonC33505dnMuF3L84uHr5ywIbR4OeNnMSs1sdrv5t5vZCjN7wsxyvXmHd85h5d68483PB/a60I2p2s8/6r281/d5y5+KYUAV8Fsz+4eZ/cbMegGFzrld3jK7gcIu5jPAmz52PifYRnfnArG5bw77NKHTCRCb++VY7fOBGNs3zrkdwE+AbcAu7/1Ku7D97syx23Jxzr3svfxv3n75qZmlHptLJ2Pu1s9ZEIrHh51zZwKXAF82s3OA/wRGAJMI7YRHfIwvHEnAmcB/OucmAwcIHUIe4UI/DVwkg+imbRwvl1jdN3i3Q74cePbY12JovxzRQT4xt2+8AncFoR8r/YFehNooYk5HuZjZZwkd6YwBphI6KrknknF09nMW88XDq9Y45yoJ3Z1wmnOuwjnX6pxrA/4LmOYtvgMY1G71gd68482vAXLMLOmY+Ue9l/d6trf8qSgHyp1z73rPnyP0BVxhZkXetoqAyi7ms8ObPnY+J9hGt+YSw/sGQj9Q3nfOVXjPY3G/tHdUPjG6b84HNjvnqpxzzcAfgA91YfvdmWN35nK2c26XC2kEfkvX90u3fs5iuniYWS8zyzo8Tej84KrD/wieqwj1XACYD3zazFLNbBgwklAD0nvASDMb5v0a+zQw36vAi4FrvPVvBF5o9143etPXAK95y3eZc243sN3MRnuzzgPWHLOtY2O4wUKmEzrM3UXofO2FZpbr/Zq5kND52F1AnZlNNzMDbjhOPu230a25xOq+8XyGo0/xxNx+OVE+MbpvtgHTzSzD+7c7/H8m3O13Z47dmcvadl/qBlzJ0fvFv8/ZqTTw+P0g1ANiufdYDdznzX+KUHe0Fd4/SlG7de4j1HuiDK+ngTf/UmC999p9x2xjCaGubc8Cqd78NO/5Ru/14d2U0yRgqRf7nwj1lsgHFhHqRvcqkOcta8BjXswrgSnt3udmL7aNwOfazZ/iffg+AP6D/+uq1+E2IpBLTO4bQqdDaoDsdvNicr+cIJ9Y3TcPEOoOvsrLIbUr2++uHCOQy2veflkFPA1kRsPnTMOTiIhI2GL6tJWIiPhDxUNERMKm4iEiImFT8RAR
kbCpeIiISNhUPEQ6YGY5ZnabN93fzJ6L4LYmmdmlkXp/kUhQ8RDpWA6hEVhxzu10zl1zkuVPxSRC1xKIxAxd5yHSATObR2icoTJCF06Ndc6NN7ObCF3l24vQVcg/ITRU9/VAI6F7YOwxsxGELuDqCxwEvuCcW2dmnwS+B7QSGpTvfEIXcqUTGiriR4RGU/0FMB5IBuY4517wtn0VoSE1BgBPO+ceiPA/hUiHkk6+iEhcuhcY75ybZGZDCX2hHzYemEzoauWNwD3Ouclm9lNCQz78O/A4cKtzboOZnQX8EvgocD+hGw3tMLMc51yTmd1P6Org2wHM7IeEhs242cxygCVm9qq37Wne9g8C75nZS865pZH8hxDpiIqHSPgWO+f2A/vNbB+wwJu/EphoZpnA2cCzoSGEgNAwEwBvAk+a2TOEBr7ryIXA5WZ2l/c8DRjsTb/inKsBMLM/AB8mNASMSI9S8RAJX2O76bZ2z9sI/Z9KIHSvh0nHruicu9U7EvkYUGpmxR28vwGfcM6VHTUztN6x55l13ll8oQZzkY7tB7K6sqJzrg7Y7LVv4I16eoY3PcI5965z7n5CN8sa1MG2FgJf8UY+xcwmt3vtAjPLM7N0Qm0vb3YlRpFTpeIh0gHv1NCbZrYK+HEX3uJfgM+b2eERn6/w5v/YzFZ67/sWoRGhFwPjzGyZmX0K+D6hhvIVZrbae37YEuB5QiPfPq/2DvGLeluJxAivt9WRhnURP+nIQ0REwqYjDxERCZuOPEREJGwqHiIiEjYVDxERCZuKh4iIhE3FQ0REwvb/AbJ7vmKJ8Rs4AAAAAElFTkSuQmCC\n"
414 | },
415 | "metadata": {
416 | "needs_background": "light"
417 | }
418 | }
419 | ]
420 | },
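421 |     {
422 |       "cell_type": "markdown",
423 |       "metadata": {},
424 |       "source": [
425 |         "Finally, we can save the trained agent so it can be reused without retraining. This is a minimal sketch that pickles the whole agent object with `torch.save`, mirroring RLCard's example scripts; the file name `model.pth` is our own choice."
426 |       ]
427 |     },
428 |     {
429 |       "cell_type": "code",
430 |       "metadata": {},
431 |       "source": [
432 |         "import os\n",
433 |         "import torch\n",
434 |         "\n",
435 |         "# Persist the whole agent object to disk (the directory was created by the Logger)\n",
436 |         "save_path = os.path.join(\"experiments/blackjack_dqn_result/\", \"model.pth\")\n",
437 |         "torch.save(agent, save_path)\n",
438 |         "\n",
439 |         "# Later, load it back and plug it into an environment\n",
440 |         "agent = torch.load(save_path)\n",
441 |         "env.set_agents([agent])"
442 |       ],
443 |       "execution_count": null,
444 |       "outputs": []
445 |     },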
421 | {
422 | "cell_type": "markdown",
423 | "metadata": {
424 | "id": "rtJ7xdSIaPw3"
425 | },
426 | "source": [
427 | "You can find that the agent achieves an increasingly better performance with more training. All the logs and the model weights are saved in `experiments/leduc_holdem_dqn_result/`. Now you have your trained DQN agent on Blackjack!"
428 | ]
429 | }
430 | ]
431 | }
--------------------------------------------------------------------------------
/dmc_doudizhu.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Copy of blackjack_dqn.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | "
"
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "miBl4S8JARzX"
31 | },
32 | "source": [
33 | "\n",
34 | "\n",
35 | "#
\n",
36 | "\n",
37 | "## **Deep Monte-Carlo on DouDizhu**\n",
38 | "Now let's try some large-scale games. We will train a DMC agent developed in [DouZero](https://github.com/kwai/DouZero) to train strong agents on Dou Dizhu. We first install RLCard and PyTorch."
39 | ]
40 | },
41 | {
42 | "cell_type": "code",
43 | "metadata": {
44 | "id": "zQ8CiXAJjQGi",
45 | "outputId": "011a1e5d-d1a3-44cb-c987-64f709b4d3f2",
46 | "colab": {
47 | "base_uri": "https://localhost:8080/"
48 | }
49 | },
50 | "source": [
51 | "!pip install rlcard[torch]"
52 | ],
53 | "execution_count": null,
54 | "outputs": [
55 | {
56 | "output_type": "stream",
57 | "name": "stdout",
58 | "text": [
59 | "Collecting rlcard[torch]\n",
60 | " Downloading rlcard-1.0.7.tar.gz (268 kB)\n",
61 | "\u001b[?25l\r\u001b[K |█▏ | 10 kB 13.6 MB/s eta 0:00:01\r\u001b[K |██▍ | 20 kB 8.0 MB/s eta 0:00:01\r\u001b[K |███▋ | 30 kB 7.1 MB/s eta 0:00:01\r\u001b[K |████▉ | 40 kB 4.4 MB/s eta 0:00:01\r\u001b[K |██████ | 51 kB 4.1 MB/s eta 0:00:01\r\u001b[K |███████▎ | 61 kB 4.8 MB/s eta 0:00:01\r\u001b[K |████████▌ | 71 kB 5.0 MB/s eta 0:00:01\r\u001b[K |█████████▊ | 81 kB 5.1 MB/s eta 0:00:01\r\u001b[K |███████████ | 92 kB 5.7 MB/s eta 0:00:01\r\u001b[K |████████████▏ | 102 kB 5.0 MB/s eta 0:00:01\r\u001b[K |█████████████▍ | 112 kB 5.0 MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 122 kB 5.0 MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 133 kB 5.0 MB/s eta 0:00:01\r\u001b[K |█████████████████ | 143 kB 5.0 MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 153 kB 5.0 MB/s eta 0:00:01\r\u001b[K |███████████████████▌ | 163 kB 5.0 MB/s eta 0:00:01\r\u001b[K |████████████████████▊ | 174 kB 5.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 184 kB 5.0 MB/s eta 0:00:01\r\u001b[K |███████████████████████▏ | 194 kB 5.0 MB/s eta 0:00:01\r\u001b[K |████████████████████████▍ | 204 kB 5.0 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 215 kB 5.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▉ | 225 kB 5.0 MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 235 kB 5.0 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▎ | 245 kB 5.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▌ | 256 kB 5.0 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 266 kB 5.0 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 268 kB 5.0 MB/s \n",
62 | "\u001b[?25hRequirement already satisfied: numpy>=1.16.3 in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.21.5)\n",
63 | "Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.1.0)\n",
64 | "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.10.0+cu111)\n",
65 | "Collecting GitPython\n",
66 | " Downloading GitPython-3.1.27-py3-none-any.whl (181 kB)\n",
67 | "\u001b[K |████████████████████████████████| 181 kB 38.2 MB/s \n",
68 | "\u001b[?25hCollecting gitdb2\n",
69 | " Downloading gitdb2-4.0.2-py3-none-any.whl (1.1 kB)\n",
70 | "Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (3.2.2)\n",
71 | "Collecting gitdb>=4.0.1\n",
72 | " Downloading gitdb-4.0.9-py3-none-any.whl (63 kB)\n",
73 | "\u001b[K |████████████████████████████████| 63 kB 577 kB/s \n",
74 | "\u001b[?25hCollecting smmap<6,>=3.0.1\n",
75 | " Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\n",
76 | "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from GitPython->rlcard[torch]) (3.10.0.2)\n",
77 | "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (1.4.0)\n",
78 | "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (3.0.7)\n",
79 | "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (0.11.0)\n",
80 | "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (2.8.2)\n",
81 | "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->rlcard[torch]) (1.15.0)\n",
82 | "Building wheels for collected packages: rlcard\n",
83 | " Building wheel for rlcard (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
84 | " Created wheel for rlcard: filename=rlcard-1.0.7-py3-none-any.whl size=325373 sha256=6a7c5207b3ee173eb2719d488058723f29aed159377030c6bfd098a5cf603789\n",
85 | " Stored in directory: /root/.cache/pip/wheels/8a/90/bd/bc402a48ca90970c9a7c2c4387dcb885fdf6073ec231a605ad\n",
86 | "Successfully built rlcard\n",
87 | "Installing collected packages: smmap, gitdb, rlcard, GitPython, gitdb2\n",
88 | "Successfully installed GitPython-3.1.27 gitdb-4.0.9 gitdb2-4.0.2 rlcard-1.0.7 smmap-5.0.0\n"
89 | ]
90 | }
91 | ]
92 | },
93 | {
94 | "cell_type": "markdown",
95 | "source": [
96 | "We import RLCard and the DMC trainer."
97 | ],
98 | "metadata": {
99 | "id": "2bt-JVoXyTwM"
100 | }
101 | },
102 | {
103 | "cell_type": "code",
104 | "metadata": {
105 | "id": "4_A_Br3Jj0xW"
106 | },
107 | "source": [
108 | "import rlcard\n",
109 | "from rlcard.agents.dmc_agent import DMCTrainer"
110 | ],
111 | "execution_count": null,
112 | "outputs": []
113 | },
114 | {
115 | "cell_type": "markdown",
116 | "metadata": {
117 | "id": "N2ltkfinYmiU"
118 | },
119 | "source": [
120 | "Let's create the Dou Dizhu environment and take a look at it."
121 | ]
122 | },
123 | {
124 | "cell_type": "code",
125 | "metadata": {
126 | "id": "l_8Kuf47kghG",
127 | "colab": {
128 | "base_uri": "https://localhost:8080/"
129 | },
130 | "outputId": "cb85d944-d4f4-4d04-c099-39e574c86abc"
131 | },
132 | "source": [
133 | "env = rlcard.make(\"doudizhu\")\n",
134 | "print(\"Number of actions:\", env.num_actions)\n",
135 | "print(\"Number of players:\", env.num_players)\n",
136 | "print(\"Shape of state:\", env.state_shape)\n",
137 | "print(\"Shape of action:\", env.action_shape)"
138 | ],
139 | "execution_count": null,
140 | "outputs": [
141 | {
142 | "output_type": "stream",
143 | "name": "stdout",
144 | "text": [
145 | "Number of actions: 27472\n",
146 | "Number of players: 3\n",
147 | "Shape of state: [[790], [901], [901]]\n",
148 | "Shape of action: [[54], [54], [54]]\n"
149 | ]
150 | }
151 | ]
152 | },
153 | {
154 | "cell_type": "markdown",
155 | "source": [
156 | "It looks really difficult. Let's solve it with DMC. This will run five processes to train on CPU. It will take a lot of time (hours to days). So we will not pause it in the tutorial."
157 | ],
158 | "metadata": {
159 | "id": "W2vJYyyqzVj3"
160 | }
161 | },
162 | {
163 | "cell_type": "code",
164 | "metadata": {
165 | "id": "8bqfMncnJTYU",
166 | "outputId": "a7a3f51c-a1dc-46b7-b3b1-f4638e2400eb",
167 | "colab": {
168 | "base_uri": "https://localhost:8080/"
169 | }
170 | },
171 | "source": [
172 | "# Initialize the DMC trainer\n",
173 | "trainer = DMCTrainer(\n",
174 | " env,\n",
175 | " cuda=\"\",\n",
176 | " xpid=\"doudizhu\",\n",
177 | " savedir=\"experiments/dmc_result\",\n",
178 | " save_interval=1,\n",
179 | ")\n",
180 | "\n",
181 | "# Train DMC Agents\n",
182 | "trainer.start()"
183 | ],
184 | "execution_count": null,
185 | "outputs": [
186 | {
187 | "output_type": "stream",
188 | "name": "stderr",
189 | "text": [
190 | "Creating log directory: experiments/dmc_result/doudizhu\n",
191 | "Saving arguments to experiments/dmc_result/doudizhu/meta.json\n",
192 | "Saving messages to experiments/dmc_result/doudizhu/out.log\n",
193 | "Saving logs data to experiments/dmc_result/doudizhu/logs.csv\n",
194 | "Saving logs' fields to experiments/dmc_result/doudizhu/fields.csv\n",
195 | "[INFO:60 trainer:335 2022-03-24 03:53:20,270] Saving checkpoint to experiments/dmc_result/doudizhu/model.tar\n",
196 | "[INFO:60 trainer:371 2022-03-24 03:53:20,701] After 0 frames: @ 0.0 fps Stats:\n",
197 | "{'loss_0': 0,\n",
198 | " 'loss_1': 0,\n",
199 | " 'loss_2': 0,\n",
200 | " 'mean_episode_return_0': 0,\n",
201 | " 'mean_episode_return_1': 0,\n",
202 | " 'mean_episode_return_2': 0}\n",
203 | "[INFO:60 trainer:371 2022-03-24 03:53:25,720] After 0 frames: @ 0.0 fps Stats:\n",
204 | "{'loss_0': 0,\n",
205 | " 'loss_1': 0,\n",
206 | " 'loss_2': 0,\n",
207 | " 'mean_episode_return_0': 0,\n",
208 | " 'mean_episode_return_1': 0,\n",
209 | " 'mean_episode_return_2': 0}\n",
210 | "[INFO:60 trainer:371 2022-03-24 03:53:30,754] After 0 frames: @ 0.0 fps Stats:\n",
211 | "{'loss_0': 0,\n",
212 | " 'loss_1': 0,\n",
213 | " 'loss_2': 0,\n",
214 | " 'mean_episode_return_0': 0,\n",
215 | " 'mean_episode_return_1': 0,\n",
216 | " 'mean_episode_return_2': 0}\n",
217 | "Updated log fields: ['_tick', '_time', 'frames', 'mean_episode_return_0', 'loss_0', 'mean_episode_return_1', 'loss_1', 'mean_episode_return_2', 'loss_2']\n",
218 | "[INFO:60 trainer:371 2022-03-24 03:53:35,768] After 3200 frames: @ 639.3 fps Stats:\n",
219 | "{'loss_0': 0.4258865416049957,\n",
220 | " 'loss_1': 0,\n",
221 | " 'loss_2': 0,\n",
222 | " 'mean_episode_return_0': 0.39743590354919434,\n",
223 | " 'mean_episode_return_1': 0,\n",
224 | " 'mean_episode_return_2': 0}\n",
225 | "[INFO:60 trainer:371 2022-03-24 03:53:40,775] After 9600 frames: @ 1278.9 fps Stats:\n",
226 | "{'loss_0': 0.4258865416049957,\n",
227 | " 'loss_1': 0.6387953758239746,\n",
228 | " 'loss_2': 0.6459171175956726,\n",
229 | " 'mean_episode_return_0': 0.39743590354919434,\n",
230 | " 'mean_episode_return_1': 0.597484290599823,\n",
231 | " 'mean_episode_return_2': 0.5987654328346252}\n",
232 | "[INFO:60 trainer:371 2022-03-24 03:53:45,785] After 9600 frames: @ 0.0 fps Stats:\n",
233 | "{'loss_0': 0.4258865416049957,\n",
234 | " 'loss_1': 0.6387953758239746,\n",
235 | " 'loss_2': 0.6459171175956726,\n",
236 | " 'mean_episode_return_0': 0.39743590354919434,\n",
237 | " 'mean_episode_return_1': 0.597484290599823,\n",
238 | " 'mean_episode_return_2': 0.5987654328346252}\n",
239 | "[INFO:60 trainer:371 2022-03-24 03:53:50,797] After 9600 frames: @ 0.0 fps Stats:\n",
240 | "{'loss_0': 0.4258865416049957,\n",
241 | " 'loss_1': 0.6387953758239746,\n",
242 | " 'loss_2': 0.6459171175956726,\n",
243 | " 'mean_episode_return_0': 0.39743590354919434,\n",
244 | " 'mean_episode_return_1': 0.597484290599823,\n",
245 | " 'mean_episode_return_2': 0.5987654328346252}\n",
246 | "[INFO:60 trainer:371 2022-03-24 03:53:55,809] After 19200 frames: @ 1918.3 fps Stats:\n",
247 | "{'loss_0': 0.3745853304862976,\n",
248 | " 'loss_1': 0.5792326927185059,\n",
249 | " 'loss_2': 0.5598975419998169,\n",
250 | " 'mean_episode_return_0': 0.4082987904548645,\n",
251 | " 'mean_episode_return_1': 0.5882158279418945,\n",
252 | " 'mean_episode_return_2': 0.589612603187561}\n",
253 | "[INFO:60 trainer:371 2022-03-24 03:54:00,821] After 19200 frames: @ 0.0 fps Stats:\n",
254 | "{'loss_0': 0.3745853304862976,\n",
255 | " 'loss_1': 0.5792326927185059,\n",
256 | " 'loss_2': 0.5598975419998169,\n",
257 | " 'mean_episode_return_0': 0.4082987904548645,\n",
258 | " 'mean_episode_return_1': 0.5882158279418945,\n",
259 | " 'mean_episode_return_2': 0.589612603187561}\n",
260 | "[INFO:60 trainer:371 2022-03-24 03:54:05,835] After 19200 frames: @ 0.0 fps Stats:\n",
261 | "{'loss_0': 0.3745853304862976,\n",
262 | " 'loss_1': 0.5792326927185059,\n",
263 | " 'loss_2': 0.5598975419998169,\n",
264 | " 'mean_episode_return_0': 0.4082987904548645,\n",
265 | " 'mean_episode_return_1': 0.5882158279418945,\n",
266 | " 'mean_episode_return_2': 0.589612603187561}\n",
267 | "[INFO:60 trainer:371 2022-03-24 03:54:10,847] After 19200 frames: @ 0.0 fps Stats:\n",
268 | "{'loss_0': 0.3745853304862976,\n",
269 | " 'loss_1': 0.5792326927185059,\n",
270 | " 'loss_2': 0.5598975419998169,\n",
271 | " 'mean_episode_return_0': 0.4082987904548645,\n",
272 | " 'mean_episode_return_1': 0.5882158279418945,\n",
273 | " 'mean_episode_return_2': 0.589612603187561}\n",
274 | "[INFO:60 trainer:371 2022-03-24 03:54:15,859] After 28800 frames: @ 1918.1 fps Stats:\n",
275 | "{'loss_0': 0.4179472327232361,\n",
276 | " 'loss_1': 0.4089127480983734,\n",
277 | " 'loss_2': 0.40309226512908936,\n",
278 | " 'mean_episode_return_0': 0.4541811943054199,\n",
279 | " 'mean_episode_return_1': 0.5466577410697937,\n",
280 | " 'mean_episode_return_2': 0.5452118515968323}\n",
281 | "[INFO:60 trainer:335 2022-03-24 03:54:20,868] Saving checkpoint to experiments/dmc_result/doudizhu/model.tar\n",
282 | "[INFO:60 trainer:371 2022-03-24 03:54:21,523] After 28800 frames: @ 0.0 fps Stats:\n",
283 | "{'loss_0': 0.4179472327232361,\n",
284 | " 'loss_1': 0.4089127480983734,\n",
285 | " 'loss_2': 0.40309226512908936,\n",
286 | " 'mean_episode_return_0': 0.4541811943054199,\n",
287 | " 'mean_episode_return_1': 0.5466577410697937,\n",
288 | " 'mean_episode_return_2': 0.5452118515968323}\n",
289 | "[INFO:60 trainer:371 2022-03-24 03:54:26,530] After 28800 frames: @ 0.0 fps Stats:\n",
290 | "{'loss_0': 0.4179472327232361,\n",
291 | " 'loss_1': 0.4089127480983734,\n",
292 | " 'loss_2': 0.40309226512908936,\n",
293 | " 'mean_episode_return_0': 0.4541811943054199,\n",
294 | " 'mean_episode_return_1': 0.5466577410697937,\n",
295 | " 'mean_episode_return_2': 0.5452118515968323}\n",
296 | "[INFO:60 trainer:371 2022-03-24 03:54:31,539] After 28800 frames: @ 0.0 fps Stats:\n",
297 | "{'loss_0': 0.4179472327232361,\n",
298 | " 'loss_1': 0.4089127480983734,\n",
299 | " 'loss_2': 0.40309226512908936,\n",
300 | " 'mean_episode_return_0': 0.4541811943054199,\n",
301 | " 'mean_episode_return_1': 0.5466577410697937,\n",
302 | " 'mean_episode_return_2': 0.5452118515968323}\n",
303 | "[INFO:60 trainer:371 2022-03-24 03:54:36,553] After 38400 frames: @ 1918.3 fps Stats:\n",
304 | "{'loss_0': 0.31496214866638184,\n",
305 | " 'loss_1': 0.3514341115951538,\n",
306 | " 'loss_2': 0.3326067328453064,\n",
307 | " 'mean_episode_return_0': 0.4701230823993683,\n",
308 | " 'mean_episode_return_1': 0.5306400656700134,\n",
309 | " 'mean_episode_return_2': 0.5302324295043945}\n"
310 | ]
311 | }
312 | ]
313 | }
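314 |     ,
315 |     {
316 |       "cell_type": "markdown",
317 |       "metadata": {},
318 |       "source": [
319 |         "The trainer periodically writes checkpoints to `experiments/dmc_result/doudizhu/model.tar` (see the `Saving checkpoint` lines above). As a minimal sketch, assuming the checkpoint is the usual dictionary of model and optimizer state, we can load it with `torch.load` and inspect it:"
320 |       ]
321 |     },
322 |     {
323 |       "cell_type": "code",
324 |       "metadata": {},
325 |       "source": [
326 |         "import torch\n",
327 |         "\n",
328 |         "# Load the checkpoint written by the trainer (assumed to be a dict of state and stats)\n",
329 |         "checkpoint = torch.load(\"experiments/dmc_result/doudizhu/model.tar\", map_location=\"cpu\")\n",
330 |         "print(list(checkpoint.keys()))"
331 |       ],
332 |       "execution_count": null,
333 |       "outputs": []
334 |     }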
314 | ]
315 | }
--------------------------------------------------------------------------------
/leduc_holdem_cfr.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "leduc_holdem_cfr.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | "
"
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "miBl4S8JARzX"
31 | },
32 | "source": [
33 | "\n",
34 | "\n",
35 | "#
\n",
36 | "\n",
37 | "## **Training CFR on Leduc Hold'em**\n",
38 | "In this tutorial, we will showcase a more advanced algorithm CFR, which uses `step` and `step_back` to traverse the game tree."
39 | ]
40 | },
41 | {
42 | "cell_type": "markdown",
43 | "metadata": {
44 | "id": "DvSQRtFHfQde"
45 | },
46 | "source": [
47 | "First, we install RLcard and PyTorch."
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "metadata": {
53 | "id": "zQ8CiXAJjQGi",
54 | "outputId": "f62f9be2-2b1e-4d34-a254-a5ddef2f5962",
55 | "colab": {
56 | "base_uri": "https://localhost:8080/"
57 | }
58 | },
59 | "source": [
60 | "!pip install rlcard[torch]"
61 | ],
62 | "execution_count": null,
63 | "outputs": [
64 | {
65 | "output_type": "stream",
66 | "name": "stdout",
67 | "text": [
68 | "Collecting rlcard[torch]\n",
69 | " Downloading rlcard-1.0.7.tar.gz (268 kB)\n",
70 | "\u001b[?25l\r\u001b[K |█▏ | 10 kB 13.5 MB/s eta 0:00:01\r\u001b[K |██▍ | 20 kB 14.3 MB/s eta 0:00:01\r\u001b[K |███▋ | 30 kB 9.7 MB/s eta 0:00:01\r\u001b[K |████▉ | 40 kB 5.0 MB/s eta 0:00:01\r\u001b[K |██████ | 51 kB 4.7 MB/s eta 0:00:01\r\u001b[K |███████▎ | 61 kB 5.5 MB/s eta 0:00:01\r\u001b[K |████████▌ | 71 kB 5.5 MB/s eta 0:00:01\r\u001b[K |█████████▊ | 81 kB 5.3 MB/s eta 0:00:01\r\u001b[K |███████████ | 92 kB 5.9 MB/s eta 0:00:01\r\u001b[K |████████████▏ | 102 kB 3.0 MB/s eta 0:00:01\r\u001b[K |█████████████▍ | 112 kB 3.0 MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 122 kB 3.0 MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 133 kB 3.0 MB/s eta 0:00:01\r\u001b[K |█████████████████ | 143 kB 3.0 MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 153 kB 3.0 MB/s eta 0:00:01\r\u001b[K |███████████████████▌ | 163 kB 3.0 MB/s eta 0:00:01\r\u001b[K |████████████████████▊ | 174 kB 3.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 184 kB 3.0 MB/s eta 0:00:01\r\u001b[K |███████████████████████▏ | 194 kB 3.0 MB/s eta 0:00:01\r\u001b[K |████████████████████████▍ | 204 kB 3.0 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 215 kB 3.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▉ | 225 kB 3.0 MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 235 kB 3.0 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▎ | 245 kB 3.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▌ | 256 kB 3.0 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 266 kB 3.0 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 268 kB 3.0 MB/s \n",
71 | "\u001b[?25hRequirement already satisfied: numpy>=1.16.3 in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.21.5)\n",
72 | "Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.1.0)\n",
73 | "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.10.0+cu111)\n",
74 | "Collecting GitPython\n",
75 | " Downloading GitPython-3.1.27-py3-none-any.whl (181 kB)\n",
76 | "\u001b[K |████████████████████████████████| 181 kB 45.4 MB/s \n",
77 | "\u001b[?25hCollecting gitdb2\n",
78 | " Downloading gitdb2-4.0.2-py3-none-any.whl (1.1 kB)\n",
79 | "Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (3.2.2)\n",
80 | "Collecting gitdb>=4.0.1\n",
81 | " Downloading gitdb-4.0.9-py3-none-any.whl (63 kB)\n",
82 | "\u001b[K |████████████████████████████████| 63 kB 1.4 MB/s \n",
83 | "\u001b[?25hCollecting smmap<6,>=3.0.1\n",
84 | " Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\n",
85 | "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from GitPython->rlcard[torch]) (3.10.0.2)\n",
86 | "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (1.4.0)\n",
87 | "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (2.8.2)\n",
88 | "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (0.11.0)\n",
89 | "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (3.0.7)\n",
90 | "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->rlcard[torch]) (1.15.0)\n",
91 | "Building wheels for collected packages: rlcard\n",
92 | " Building wheel for rlcard (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
93 | " Created wheel for rlcard: filename=rlcard-1.0.7-py3-none-any.whl size=325373 sha256=e59625c8a510c5eecf50ecc571cc57be45f37daf110a3cd1d59996d1ff298f64\n",
94 | " Stored in directory: /root/.cache/pip/wheels/8a/90/bd/bc402a48ca90970c9a7c2c4387dcb885fdf6073ec231a605ad\n",
95 | "Successfully built rlcard\n",
96 | "Installing collected packages: smmap, gitdb, rlcard, GitPython, gitdb2\n",
97 | "Successfully installed GitPython-3.1.27 gitdb-4.0.9 gitdb2-4.0.2 rlcard-1.0.7 smmap-5.0.0\n"
98 | ]
99 | }
100 | ]
101 | },
102 | {
103 | "cell_type": "markdown",
104 | "source": [
105 | "Then we import all the classes and functions we need."
106 | ],
107 | "metadata": {
108 | "id": "0ToWVZr881JH"
109 | }
110 | },
111 | {
112 | "cell_type": "code",
113 | "source": [
114 | "import rlcard\n",
115 | "from rlcard.agents import (\n",
116 | " CFRAgent,\n",
117 | " RandomAgent,\n",
118 | ")\n",
119 | "from rlcard.utils import (\n",
120 | " tournament,\n",
121 | " Logger,\n",
122 | " plot_curve,\n",
123 | ")"
124 | ],
125 | "metadata": {
126 | "id": "pfmdahQ_86_W"
127 | },
128 | "execution_count": null,
129 | "outputs": []
130 | },
131 | {
132 | "cell_type": "markdown",
133 | "source": [
134 | "We make two environments, where one allows `step_back` so that CFR can traverse the tree, and the other for evaluation only."
135 | ],
136 | "metadata": {
137 | "id": "dRXGan5w9BLT"
138 | }
139 | },
140 | {
141 | "cell_type": "code",
142 | "source": [
143 | "env = rlcard.make(\n",
144 | " 'leduc-holdem',\n",
145 | " config={\n",
146 | " 'allow_step_back': True,\n",
147 | " }\n",
148 | " )\n",
149 | "eval_env = rlcard.make(\n",
150 | " 'leduc-holdem',\n",
151 | ")"
152 | ],
153 | "metadata": {
154 | "id": "oN7k8icR89iB"
155 | },
156 | "execution_count": null,
157 | "outputs": []
158 | },
159 | {
160 | "cell_type": "markdown",
161 | "source": [
162 | "We create the CFR agent."
163 | ],
164 | "metadata": {
165 | "id": "qd5fe5W_9Ukz"
166 | }
167 | },
168 | {
169 | "cell_type": "code",
170 | "metadata": {
171 | "id": "4_A_Br3Jj0xW"
172 | },
173 | "source": [
174 | "agent = CFRAgent(\n",
175 | " env,\n",
176 | " \"experiments/leduc_holdem_cfr_result/cfr_model\",\n",
177 | ")"
178 | ],
179 | "execution_count": null,
180 | "outputs": []
181 | },
182 | {
183 | "cell_type": "markdown",
184 | "source": [
185 |     "Here, `experiments/leduc_holdem_cfr_result/cfr_model` is the path where the trained model will be saved. Then we use a random agent as the opponent for evaluation."
186 | ],
187 | "metadata": {
188 | "id": "yYvb4MDz9qH6"
189 | }
190 | },
191 | {
192 | "cell_type": "code",
193 | "source": [
194 | "eval_env.set_agents([\n",
195 | " agent,\n",
196 | " RandomAgent(num_actions=env.num_actions),\n",
197 | "])"
198 | ],
199 | "metadata": {
200 | "id": "C4cq5mA-90W_"
201 | },
202 | "execution_count": null,
203 | "outputs": []
204 | },
205 | {
206 | "cell_type": "markdown",
207 | "metadata": {
208 | "id": "QOMi0W94fSH7"
209 | },
210 | "source": [
211 |     "Now we start training for `1000` iterations. In each iteration, CFR traverses the game tree and updates its policy."
212 | ]
213 | },
214 | {
215 | "cell_type": "code",
216 | "source": [
217 | "with Logger(\"experiments/leduc_holdem_cfr_result\") as logger:\n",
218 | " for episode in range(1000):\n",
219 | " agent.train()\n",
220 | " print('\\rIteration {}'.format(episode), end='')\n",
221 |     " # Evaluate the performance by playing against random agents.\n",
222 | " if episode % 50 == 0:\n",
223 | " logger.log_performance(\n",
224 | " env.timestep,\n",
225 | " tournament(\n",
226 | " eval_env,\n",
227 | " 10000,\n",
228 | " )[0]\n",
229 | " )\n",
230 | "\n",
231 | " # Get the paths\n",
232 | " csv_path, fig_path = logger.csv_path, logger.fig_path"
233 | ],
234 | "metadata": {
235 | "id": "MTbACbO4-D_c",
236 | "outputId": "f3841a04-9990-405c-a162-846f97abf816",
237 | "colab": {
238 | "base_uri": "https://localhost:8080/"
239 | }
240 | },
241 | "execution_count": null,
242 | "outputs": [
243 | {
244 | "output_type": "stream",
245 | "name": "stdout",
246 | "text": [
247 | "Iteration 0\n",
248 | "----------------------------------------\n",
249 | " timestep | 192\n",
250 | " reward | 5e-05\n",
251 | "----------------------------------------\n",
252 | "Iteration 50\n",
253 | "----------------------------------------\n",
254 | " timestep | 9792\n",
255 | " reward | 0.3098\n",
256 | "----------------------------------------\n",
257 | "Iteration 100\n",
258 | "----------------------------------------\n",
259 | " timestep | 19392\n",
260 | " reward | 0.43155\n",
261 | "----------------------------------------\n",
262 | "Iteration 150\n",
263 | "----------------------------------------\n",
264 | " timestep | 28992\n",
265 | " reward | 0.5922\n",
266 | "----------------------------------------\n",
267 | "Iteration 200\n",
268 | "----------------------------------------\n",
269 | " timestep | 38592\n",
270 | " reward | 0.5821\n",
271 | "----------------------------------------\n",
272 | "Iteration 250\n",
273 | "----------------------------------------\n",
274 | " timestep | 48192\n",
275 | " reward | 0.62535\n",
276 | "----------------------------------------\n",
277 | "Iteration 300\n",
278 | "----------------------------------------\n",
279 | " timestep | 57792\n",
280 | " reward | 0.70355\n",
281 | "----------------------------------------\n",
282 | "Iteration 350\n",
283 | "----------------------------------------\n",
284 | " timestep | 67392\n",
285 | " reward | 0.7055\n",
286 | "----------------------------------------\n",
287 | "Iteration 400\n",
288 | "----------------------------------------\n",
289 | " timestep | 76992\n",
290 | " reward | 0.7063\n",
291 | "----------------------------------------\n",
292 | "Iteration 450\n",
293 | "----------------------------------------\n",
294 | " timestep | 86592\n",
295 | " reward | 0.7075\n",
296 | "----------------------------------------\n",
297 | "Iteration 500\n",
298 | "----------------------------------------\n",
299 | " timestep | 96192\n",
300 | " reward | 0.70215\n",
301 | "----------------------------------------\n",
302 | "Iteration 550\n",
303 | "----------------------------------------\n",
304 | " timestep | 105792\n",
305 | " reward | 0.71865\n",
306 | "----------------------------------------\n",
307 | "Iteration 600\n",
308 | "----------------------------------------\n",
309 | " timestep | 115392\n",
310 | " reward | 0.74505\n",
311 | "----------------------------------------\n",
312 | "Iteration 650\n",
313 | "----------------------------------------\n",
314 | " timestep | 124992\n",
315 | " reward | 0.72085\n",
316 | "----------------------------------------\n",
317 | "Iteration 700\n",
318 | "----------------------------------------\n",
319 | " timestep | 134592\n",
320 | " reward | 0.7117\n",
321 | "----------------------------------------\n",
322 | "Iteration 750\n",
323 | "----------------------------------------\n",
324 | " timestep | 144192\n",
325 | " reward | 0.71705\n",
326 | "----------------------------------------\n",
327 | "Iteration 800\n",
328 | "----------------------------------------\n",
329 | " timestep | 153792\n",
330 | " reward | 0.7401\n",
331 | "----------------------------------------\n",
332 | "Iteration 850\n",
333 | "----------------------------------------\n",
334 | " timestep | 163392\n",
335 | " reward | 0.7587\n",
336 | "----------------------------------------\n",
337 | "Iteration 900\n",
338 | "----------------------------------------\n",
339 | " timestep | 172992\n",
340 | " reward | 0.75715\n",
341 | "----------------------------------------\n",
342 | "Iteration 950\n",
343 | "----------------------------------------\n",
344 | " timestep | 182592\n",
345 | " reward | 0.7278\n",
346 | "----------------------------------------\n",
347 | "Iteration 999\n",
348 | "Logs saved in experiments/leduc_holdem_cfr_result\n"
349 | ]
350 | }
351 | ]
352 | },
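{
  "cell_type": "markdown",
  "source": [
    "If you want to keep the learned policy, here is a minimal sketch for persisting and restoring it (assuming the `save` and `load` methods of `CFRAgent`, which use the model path given above):"
  ],
  "metadata": {}
},
{
  "cell_type": "code",
  "source": [
    "# Write the current policy to experiments/leduc_holdem_cfr_result/cfr_model\n",
    "agent.save()\n",
    "\n",
    "# Later, rebuild an agent and restore the policy from the same path\n",
    "new_agent = CFRAgent(\n",
    "    env,\n",
    "    'experiments/leduc_holdem_cfr_result/cfr_model',\n",
    ")\n",
    "new_agent.load()"
  ],
  "metadata": {},
  "execution_count": null,
  "outputs": []
},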
353 | {
354 | "cell_type": "markdown",
355 | "source": [
356 |     "We can plot the learning curve."
357 | ],
358 | "metadata": {
359 | "id": "d-qU4hS4-hI5"
360 | }
361 | },
362 | {
363 | "cell_type": "code",
364 | "metadata": {
365 | "id": "l_8Kuf47kghG",
366 | "colab": {
367 | "base_uri": "https://localhost:8080/",
368 | "height": 279
369 | },
370 | "outputId": "f6bc9709-e500-492c-9e88-348532ed5450"
371 | },
372 | "source": [
373 | "plot_curve(csv_path, fig_path, 'cfr')"
374 | ],
375 | "execution_count": null,
376 | "outputs": [
377 | {
378 | "output_type": "display_data",
379 | "data": {
380 | "text/plain": [
381 | ""
382 | ],
383 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEGCAYAAABo25JHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3de3RU9bn/8feTyQ0yXJMQJOF+UQFRTAS1osRbsUehVrB4Wg62WnoRW+uxVX+uRT2ent85rXp6qtJa689Wa9sIWltqOaWKiVqvBAQhICbcJOGSCxAygVzn+f0xO3QISUiG7LlkntdaszJ77+/s+WQH5pn93Xt/t6gqxhhj4ldCpAMYY4yJLCsExhgT56wQGGNMnLNCYIwxcc4KgTHGxLnESAfoqYyMDB0zZkxIr62vryctLa13A7kgFnLGQkawnL0tFnLGQkYIf87169dXq2pmhwtVNaYeubm5GqrCwsKQXxtOsZAzFjKqWs7eFgs5YyGjavhzAsXayeeqdQ0ZY0ycs0JgjDFxzgqBMcbEuZg7WNyR5uZmysvLaWho6LLdoEGD2LZtW5hSdV9qaio5OTkkJSVFOooxJg71iUJQXl7OgAEDGDNmDCLSabu6ujoGDBgQxmSnp6rU1NRQXl7O2LFjIx3HGBOH+kTXUENDA+np6V0WgWglIqSnp592b8YYY9zSJwoBEJNFoE0sZzfGxL4+0TVkjDFu8fuVLftqebushuZWP0meBJITE0j2CMmJCSem/zE/aNqTQHKikOzxkJqUQOaAlKj84meFwEVVVVVcf/31NDU18dhjjzFr1qxIRzLGdENDcytvl1Xz2rZK1m47SGVdY6+s98JRg7kjfwJXnjOsV9bXW6wQuGjt2rWcd955PP3006csa21txePxRCCVMaYjlXUNvL6tkte2VfL3sioamv14UxK5YlImV507jPyzhzEgNZHmVqWpxU9Ta+DR3Pa85eTp5hPzAu2r6hp5/r093PZsMecMH8DsrBZm+RVPQuT3EKwQ9KLnnnuORx55BBFh1KhRbNq0iePHj1NcXMy7775LZmYmX//613nttddYvnw5l112WaQjGxO3VJWPD9SxdttBXt1Wyaa9RwDIHtyPL+aN5OrJWcwcm05y4smHUhM90C85tC9xt88ay6qN+/hZURlPbmrkr+VFfHP2eG6cnnPK+4RTnysE//bnErbuO9rhslC/hU8eMZAf3DClyzYlJSX88Ic/5J133iEjI4NDhw6xatUqiouLeeKJJ4DAIFMzZ87k0Ucf7XEGY0JVVunj2Xd28/rHlVw8Lp1bZowkd/SQqOyrdltjSyvv7zzE2m0HeW1bJRVHjgNw/sjB3HPtJK46N4tzhg9wbdskeRK4KTeHG6dn8+iKtbxRlci9L23mf14r5WuzxrFwxkj6J4f/Y7nPFYJIef3111mwYAEZGRkADB069JQ2Ho+Hm266KdzRTBzy+5WiTyr51du7eau0mmRPApeMT+evW/bz0oZyJgzzsvCikXzhwhyGpiVHOq6rmlv9vLG9ipc3VvDG9ip8jS2kJiVw2YRM7rwy0F8/bGBqWDMlJAgXDU/kni9expul1SwvLOOhV7byRGEZt102li9fPJpB/cJ3gWmfKwRdfXOP9AVlqampdlzAuOpoQzMvFpfz3Lu72V1zjKyBKdxz7SQWzhhFhjeF+sYW/vLRfn6/7lN++Jdt/Piv27l2Sha3zBjFJePSSYiC/ure8vGBo7xYXM4fN1ZQ7WsiPS2ZG84/i6vPzeIzEzJITYr8/0UR4YpJmVwxKZN1uw/xs8IyHl6znSeLdrDoktF89bKxZHhTXM/R5wpBpFx55ZXceOON3H333aSnp3Po0KFIRzJxZEeVj+fe2c2L68upb2old/QQ/vXas5kzdThJnn/0PaelJHLzRSO5+aKRfHzgKAUf7OXlDyt45aP9jBrany9eNJIFuTlh/4bcWw7VN/HqnmYeefwttlQcJckjXHVOFvNzc7ji7MyTtkW0uWjMUH71lRlsqajl50U7+PkbO3jm7V0svGgUSy4fx4jB/Vx7bysEvWTKlCk88MADXHHFFXg8HqZPn87s2bMjHcv0YX6/8kZpFb9+ezdvfFJFsieB688/i1svHcO0nMGnff05wwfy4Nwp3HfdOawpOcDvP/iUh9ds579f/YT8s4dxy4yRXDEpk8Qo/vCEf3T9vLi+nLUfH6S5VZmancqDN0xm7gXZMdf1NTV7EMu/dCE7qnw8WbSD59/bw2/f38ON07P55uwJjM3o/ZvZWCHoRYsXL2bx4sUnzbv11ltPPPf5fGFOZPqiuoZmXlpfzrPv7mFXdT3DBqRw9zWTuGXGKDIH9LwbITXJw7wLspl3QTa7quspWPcpL60v57VtBxk+MJWb83JYkDeSkUP7u/DbhK5910+GN5nFl4xhtB5g0Q2xf83O+EwvDy84n7uumcQv39zJ7z/4lNzRQ6wQGBPPdlXX86zT/eNrbOHCUYP57i3TmTNleK+dejg2I437rzuXf73mbNZuO0jBur08XljG44VlfGZ8BkP8TRwaWM7YjDTGZXgZ1D+8I+Yeqm9i1cYKXtxQ3mnXT1FRZVgzuS17cD8enDuFO/InuHYA2QqBMVHK19jCB7tqeKu0mr+XVlNa6SPJI9wwbQSLLx3D+SNP3/0TquTEBK477yyuO+8syg8fY0VxOa98tI93qpv5885NJ9oNTUtmbEbaice4jDTGZXoZnd4/pIOxx5paqPE1Ue1rpNrXRI2vkZr6wPSnNcd4s7TK6foZGLNdP6EKZW+vu/pMIVDVmD0vOnA7URPvWlr9bCqv5e2ywAf/hk8P0+JXUhITmDF2KAvycvj89GyGDQjvgdycIf25+5pJ3H3NJF57vZCx513Erqp6dlb72FVdz86qet78JNBH30YERgzqx7jMfxSJkUP6U+980NfUN1Jd5/wMmj7e3NphhgEpiWQOSGHxJWO4KTeHc88aGK5fPy64WghEZA7wU8ADPK2q/9Vu+U+AfGeyPzBMVXv8NSc1NZWampqYHIq67X4EqamxeZZGX9LqV5pb/bT4lZZWP82tSovfT0trYL5fYUj/JAb3T+6VYQFUlV3V9fy9rJqXNzRwZ+Gr1DW2IAJTRwzia5eP47IJGeSOHhIVpzoCJCYI4zO9jM/0AlknLfM1trC7up6d1fXsqqpnl1MoXt5QQV1jy0ltPQlCeloy6d4UMryBvYq26XRvMpnOz3RvCulpyVHz+/dVrhUCEfEAy4FrgHJgnYisUtWtbW1U9btB7e8EpofyXjk5OZSXl1NVVdVlu4aGhqj8wG27Q1lft+/IcY42NNPY7KexxU9jS+vJz1v8NDQHfgbmt57Ursn5kG5tVVpV8fuVFr/iV6W17bk/sKzVH3gcrTtOvw/fOLGs/Yd74ENfafb76e6OWYLA0LTAB1i6N5n0tBQynA+uDG+y8zzwAZbhTTlpOIIaXyNv76jh76VVvF1Wc+LK1ox+wvXn5/CZCRlcOj4jJrs
7vCmJTM0exNTsQSfNV1Vq6pvYe+gYA1ITSU9LYVC/pD51zUKsc3OPYAZQpqo7AUSkAJgHbO2k/S3AD0J5o6SkpG7d3auoqIjp00OqNeYM/fLNnfzH6p7dJjQxQUhN8pCSmEBKYgJJiQkkJgieBCFBhESP4BEhIUFIdOYlJybgcdp4REhuqWd4ppcEZzrRIyQlJAR+egLrS/QkkOQREk/MDzxP8gSWJSYE2orA4fqmf3RlOH3YGw8docbXSH1Tx90aacke0r0pJHmEHVX1AAxMTeTS8Rl8Y/Z4Zk3IYNfmD8jPn3bG2zkaiQgZ3pSwXBhlQuNmIcgG9gZNlwMzO2ooIqOBscDrLuYxEfJ2WTX/+b/buPrcLL5wYbbzwR4Ynz0l0UNKUsKJeSmJCaQkBcZx743z14uKipg9O7cXfovTO97UelKBqPE1UeX8rKlvpL6xhRunZ/OZCRlMyxl8UvfS7hjr0jR9i7h1oFJE5gNzVPV2Z3oRMFNVl3bQ9l4gR1Xv7GRdS4AlAFlZWbkFBQUhZfL5fHi93pBeG06xkLO7GWuO+3nwneMMSBGWXdyP1MTwfuDFwrYEy9mbYiEjhD9nfn7+elXN63ChqrryAC4B1gRN3w/c30nbD4FLu7Pe3NxcDVVhYWHIrw2nWMjZnYzHm1p07uNv6dRlf9Wyyjr3Q3UgFralquXsTbGQUTX8OYFi7eRz1c1rx9cBE0VkrIgkAwuBVe0bicg5wBDgXRezmAh4cFUJm8prefTm852zTIwx0ci1QqCqLcBSYA2wDVihqiUi8pCIzA1quhAocCqW6SMKPviUgnV7uSN/PNdOGR7pOMaYLrh6HYGqrgZWt5u3rN30g25mMOG3ce8Rlv2phFkTM7j7mrMjHccYcxrRPaygiTk1vka+9fx6Mgek8NjC6VFxP1ZjTNf6zBATJvJaWv3c+fsPqalv4qVvXsqQGLwoyph4ZIXA9JqH/7add3bU8PD8aadcXWqMiV7WNWR6xerN+/nFGzv58sWjWJA3MtJxjDE9YIXAnLGyyjq+t3IT00cNZtn1nd8z2hgTnawQmDNS19DMkt+sp1+yh59/KbfXbpBijAkfO0ZgQqaq3LNyE3tqjvHb22cyfFD0jexqjDk9+/pmQvbzN3awpuQg9193DhePS490HGNMiGyPwISkpLqVR9dv5/ppZ3HbZacfAtwYE71sj8D0WPnhY/x8UwMThnn50U3TYu6ucMaYk1khMD3S0NzKN5/fQKvCLxblkZZiO5XGxDorBKbbVJVlf9rC5opalkxLYWxGWqQjGWN6gRUC022//2AvK4rLufPKCUwfZnsCxvQVVghMt6wpOcCDq0q4fFImd109KdJxjDG9yL7WmS7VN7bw769spWDdXqZmD+SxhRfYiKLG9DFWCEynPvz0MN99YSN7Dh3jm7PH892rJ9mVw8b0QVYIzClaWv0sL9zBY6+XMnxgKgVfu5iZdsGYMX2Wq1/vRGSOiGwXkTIRua+TNjeLyFYRKRGR37mZx5zenpp6FvziXX7y2ifMPX8E/3vXLCsCxvRxru0RiIgHWA5cA5QD60RklapuDWozEbgf+IyqHhaRYW7lMV1TVVYWl/Pgn0tITBAeu2U6c88fEelYxpgwcLNraAZQpqo7AUSkAJgHbA1q8zVguaoeBlDVShfzmE4cqm/i/j98xJqSg1wyLp1Hbz6fEYP7RTqWMSZM3CwE2cDeoOlyYGa7NpMARORtwAM8qKp/dTGTaeeNT6r43spNHD7WxP/53Dncftk4EuysIGPiiqiqOysWmQ/MUdXbnelFwExVXRrU5hWgGbgZyAHeBM5T1SPt1rUEWAKQlZWVW1BQEFImn8+H1+sN6bXh1FFOVaW+GdKS6JWxfZpalZWfNPHqnhayvcLXp6UwaqDnjDJGI8vZu2IhZyxkhPDnzM/PX6+qeR0tc3OPoAIIvmdhjjMvWDnwvqo2A7tE5BNgIrAuuJGqPgU8BZCXl6ezZ88OKVBRURGhvjacOsq5ongv33/xI7wpiUwY5mXiMC8Ts7xMHDaACcO8ZA/u1+1v8iX7armrYCOllS185TNjuHfOOaQmdb8IdJYxGlnO3hULOWMhI0RXTjcLwTpgooiMJVAAFgL/3K7NH4FbgF+JSAaBrqKdLmaKWb99bw+j0/sze1ImpZU+ij6pYuX68hPL+yV5ThSICU6BmDjMy8ih/U9cANbqV55+ayeP/G07Q/on89xXZ3D5pMxI/UrGmCjhWiFQ1RYRWQqsIdD//4yqlojIQ0Cxqq5yll0rIluBVuB7qlrjVqZY9fGBo2wqr2XZ9ZP5atDY/0eONVFW6aO00kfpQR+llXW8u7OGP3z4jx2vlMQExmcG9h72HTnOut2HmTNlOP/5hfMYkpYciV/HGBNlXL2gTFVXA6vbzVsW9FyBu52H6cTK4nKSPMLnp2efNH9w/2Tyxgwlb8zQk+YfbWimrNJHmVMcSit9FO8+zLGmFn48fxoLcnPsHgLGmBPsyuIo19Ti5+UPK7hmchZDu/kNfmBqEheOGsKFo4a4nM4Y0xfYwDFR7vWPD3KovokFeSNP39gYY0JghSDKrSguZ/jAVC6faAd1jTHusEIQxQ4ebaBoeyU35Wbb0M/GGNdYIYhiL20ox6+wINe6hYwx7rFCEKXaBoGbMXYoY+zewMYYF1khiFLFew6zq7qem+0gsTHGZVYIotSKdXtJS/bwufOGRzqKMaaPs0IQhY63KH/ZvJ8bzh9B/2S71MMY4y4rBFFo3YEWjjW12rUDxpiwsEIQhd4qb2F8ZhoXjhoc6SjGmDhghSDK7KjyUXrEz4K8kTYekDEmLKwQRJmVxeUkCHyh3QBzxhjjFisEUaSl1c9LG8qZluFh2MDUSMcxxsQJKwRR5I1Pqqiqa2RWjp0pZIwJHysEUWRF8V4yvMmcn9mz20YaY8yZsEIQJap9jazdVsmN07NJtAHmjDFh5GohEJE5IrJdRMpE5L4Olt8qIlUistF53O5mnmj2xw8raPGrXTtgjAk71wqBiHiA5cB1wGTgFhGZ3EHTF1T1AufxtFt5opmq8sK6vVwwcjCTsgZEOo4xJs64uUcwAyhT1Z2q2gQUAPNcfL+Ytam8ltJKnw0wZ4yJCDcLQTawN2i63JnX3k0i8pGIvCgicflJuLJ4L6lJCVx//lmRjmKMiUOiqu6sWGQ+MEdVb3emFwEzVXVpUJt0wKeqjSLydeCLqnplB+taAiwByMrKyi0oKAgpk8/nw+v1hvRatzS2KncVHmP6sESWTEsBojNne7GQESxnb4uFnLGQEcKfMz8/f72q5nW4UFVdeQCXAGuCpu8H7u+ivQeoPd16c3NzNVSFhYUhv9YtL28o19H3vqLvlFWfmBeNOduLhYyqlrO3xULOWMioGv6cQLF28rnqZtfQOmCiiIwVkWRgIbAquIGIBPeFzAW2uZgnKq0o3suoof2ZOXZopKMYY+KUa5ewqmqLiCwF1hD4tv+MqpaIyEMEKtMq4NsiMhdoAQ4Bt7qVJxrtPXSMd3bU8K/XTCLBrh0wxkSIq2MZqO
pqYHW7ecuCnt9PoMsoLq1cX44I3JSbE+koxpg4ZlcWR0irX3mxeC+zJmYyYnC/SMcxxsQxKwQR8s6OavbVNnBznu0NGGMiywpBhKwoLmdw/ySumZwV6SjGmDhnhSACjhxrYk3JAT5/QTYpiTbSqDEmsqwQRMCqTftoavGzwLqFjDFRwApBBKwo3suUEQOZMmJQpKMYY4wVgnAr2VfLloqjNsCcMSZqWCEIs5XF5SR7Eph3wYhIRzHGGMAKQVg1trTyx40VXDsli8H9kyMdxxhjACsEYfXa1kqOHGu2biFjTFSxQhBGK4r3MmJQKp+ZkBHpKMYYc4IVgjDZd+Q4b5ZWMT83B48NMGeMiSJdDjonIn8GOr1zjarO7fVEfdTz7+0BYH6udQsZY6LL6UYffcT5+QVgOPC8M30LcNCtUH3Nofomnn1nN/903lmMSu8f6TjGGHOSLguBqr4BICKP6sm3OPuziBS7mqwPeerNnRxrbuU7V02MdBRjjDlFd48RpInIuLYJERkLpLkTqW+p8TXy3Lu7uWHaCCZmDYh0HGOMOUV3b0xzF1AkIjsBAUbj3EzedO2pN3fS0NzKt21vwBgTpU5bCEQkARgETATOcWZ/rKqNbgbrC6rqGnn23d3MuyCbCcO8kY5jjDEdOm3XkKr6ge+raqOqbnIe3SoCIjJHRLaLSJmI3NdFu5tEREUkr7M2segXb+ygqcXPnVdOiHQUY4zpVHePEbwmIveIyEgRGdr26OoFIuIBlgPXAZOBW0RkcgftBgDfAd7vYfaoVlnXwPPv7+Hz07MZl2l7A8aY6NXdYwRfdH7eETRPgXEdtG0zAyhT1Z0AIlIAzAO2tmv378CPgO91M0tMeLJoJ82tyrevtGMDxpjoJqqdXi92ZisWmQ/MUdXbnelFwExVXRrU5kLgAVW9SUSKgHtU9ZTTUkVkCc7B6aysrNyCgoKQMvl8Prxe97+dH27w8/03j3PxWYncdl5Kj18frpxnIhYyguXsbbGQMxYyQvhz5ufnr293GcAJ3d0jQESmEujiSW2bp6rPhRrKOQj938Ctp2urqk8BTwHk5eXp7NmzQ3rPoqIiQn1tTzy4qgRlD//xz7NCuoAsXDnPRCxkBMvZ22IhZyxkhOjK2a1CICI/AGYTKASrCfT7/x3oqhBUAMHjKeQ489oMAKYSOC0VAlcurxKRuR3tFcSKA7UN/O6DT7npwhy7itgYExO6e7B4PnAVcEBVvwKcT+CU0q6sAyaKyFgRSQYWAqvaFqpqrapmqOoYVR0DvAfEdBEA+FlRGX6/stTOFDLGxIjuFoLjzmmkLSIyEKjk5G/7p1DVFmApsAbYBqxQ1RIReUhE+uRgdfuOHKfgg70syBvJyKG2N2CMiQ3dPUZQLCKDgV8C6wEf8O7pXqSqqwl0JQXPW9ZJ29ndzBK1lheWodjegDEmtnSrEKjqt5ynT4rIX4GBqvqRe7FiT/nhY6wo3svNeSPJHtwv0nGMMabbunuw+DfAm8Bbqvqxu5Fi0/LCHQjCHfm2N2CMiS3dPUbwDHAW8LiI7BSRl0TkOy7miil7Dx1jZfFeFs4YyQjbGzDGxJjudg0VisibwEVAPvANYArwUxezxYwnXi8jIUH41mzbGzDGxJ7udg2tJXD/gXeBt4CLVLXSzWCx4tOaY7y4oZxFF49m+KDU07/AGGOiTHe7hj4CmghcADYNmCoi1gcCPP56KYkJwrdmj490FGOMCUl3u4a+CydGCr0V+BWBK4F7PpBOH7K7up4/fFjB4kvGMGyg7Q0YY2JTd7uGlgKzgFxgN4GDx2+5Fys2PPZ6KUke4RuzuxqE1Rhjolt3LyhLJTBA3HrniuG4t7PKxx8/rOC2y8YybIDtDRhjYle3jhGo6iNAErAIQEQynRvYx63HXy8jJdHD16+wYwPGmNjWrULgjD56L3C/MysJeN6tUNGurNLHnzZW8C+XjCbDG9eHSYwxfUB3zxq6EZgL1AOo6j4Cw0jHpcfWlpKa5GHJ5XZswBgT+7pbCJo0cCszBRCRNPciRbfSg3X8+aN9LL50DOm2N2CM6QNOWwgkcNeYV0TkF8BgEfka8BqBkUjjzk/XltI/ycOSWbY3YIzpG0571pCqqogsAO4GjgJnA8tU9VW3w0Wb7Qfq+Mvm/Xxr9niGpCVHOo4xxvSK7p4+ugE4oqrfczNMtPvp2k9IS07ka7Y3YIzpQ7pbCGYCXxKRPTgHjAFUdZorqaLQxweOsnrzAb595QQG97e9AWNM39HdQvDZUFYuInMIjFDqAZ5W1f9qt/wbwB1AK4G7ni1R1a2hvJfb/vLRfjwJwm2X2d6AMaZv6e5YQ3t6umIR8QDLgWuAcmCdiKxq90H/O1V90mk/l8DVy3N6+l7hsLmilonDvAzqnxTpKMYY06u6e/poKGYAZaq6U1WbgAJgXnADVT0aNJmGc3pqtFFVtlTUMjV7UKSjGGNMr5PA5QEurFhkPjBHVW93phcBM1V1abt2dxA4IykZuFJVSztY1xJgCUBWVlZuQUFBSJl8Ph9er7fHrzvU4OfuouN8+dxkrh7t/h5BqDnDKRYyguXsbbGQMxYyQvhz5ufnr1fVvA4XqqorD2A+geMCbdOLgCe6aP/PwLOnW29ubq6GqrCwMKTX/a3kgI6+9xUt3n0o5PfuiVBzhlMsZFS1nL0tFnLGQkbV8OcEirWTz1U3u4YqgJFB0znOvM4UAJ93MU/INlfUkiAw+ayBkY5ijDG9zs1CsA6YKCJjRSQZWAisCm4gIhODJv8JOKVbKBpsqahlwjAv/ZI9kY5ijDG9rrunj/aYqrY4N7RZQ+D00WdUtUREHiKwi7IKWCoiVwPNwGFgsVt5zsTmilpmTcyIdAxjjHGFa4UAQFVXA6vbzVsW9Pw7br5/bzh4tIGqukbOszOGjDF9lJtdQ33C5vJaACsExpg+ywrBaWzZV4sITB5hB4qNMX2TFYLT2FJRy/hML/2TXe1FM8aYiLFCcBqbK2qtW8gY06dZIehCZV0DB4822tASxpg+zQpBF7ZU2IFiY0zfZ4WgC1sqjtqBYmNMn2eFoAubK2oZm5GGN8UOFBtj+i4rBF3YYgeKjTFxwApBJ6p9jeyvbbBCYIzp86wQdGKzc6DYzhgyxvR1Vgg6scUZWmKKHSg2xvRxVgg6sWVf4EDxgFS7R7Expm+zQtCJLRVHrVvIGBMXrBB04FB9ExVHjnNetnULGWP6PisEHbADxcaYeGKFoANbrBAYY+KIq4VAROaIyHYRKROR+zpYfreIbBWRj0RkrYiMdjNPd22pqGVMen8G2oFiY0wccK0QiIgHWA5cB0wGbhGRye2afQjkqeo04EXgx27l6YnNFbVMsb0BY0yccHOPYAZQpqo7VbUJKADmBTdQ1UJVPeZMvgfkuJinWw7XN1F++LhdUWyMiRuiqu6sWGQ+MEdVb3emFwEzVXVpJ+2fAA6o6g87WLYEWAKQlZWVW1BQEFImn8+H1+vtss2W6lYeKW7g+xelMjndE9L7nKnu5Iy0WMgIlrO3xULOWMgI4c+Zn5+/XlXzO
lyoqq48gPnA00HTi4AnOmn7ZQJ7BCmnW29ubq6GqrCw8LRtlheW6uh7X9Ej9U0hv8+Z6k7OSIuFjKqWs7fFQs5YyKga/pxAsXbyuerm+MoVwMig6Rxn3klE5GrgAeAKVW10MU+3bKmoZdTQ/gzqbweKjTHxwc1jBOuAiSIyVkSSgYXAquAGIjId+AUwV1UrXczSbYEriu1CMmNM/HCtEKhqC7AUWANsA1aoaomIPCQic51mDwNeYKWIbBSRVZ2sLixqjzXz6aFjdv2AMSauuHrrLVVdDaxuN29Z0POr3Xz/ntqyz+5RbIyJP3ZlcZATQ0uMsEJgjIkfVgiCbK6oJWdIP4akJUc6ijHGhI0VgiAlFbW2N2CMiTtWCBxHG5rZXXOM83KsEBhj4osVAoeNOGqMiVdWCBxthcDOGDLGxBsrBI7NFUfJHtyPoXag2BgTZ6wQOLZU1DJlhF1RbIyJP1YIgDeoW7cAAAzUSURBVLqGZnZV11u3kDEmLlkhAEr2HQVgqp0xZIyJQ1YIsAPFxpj4ZoWAwBXFZw1KJcObEukoxhgTdlYICBQCu37AGBOv4r4Q+Bpb2FVdb0NLGGPiVtwXgq37jqIK5+XYqaPGmPgU94Vgsw0tYYyJc3FfCLZU1JI1MIVhA1IjHcUYYyLC1UIgInNEZLuIlInIfR0sv1xENohIi4jMdzNLZzZX1Nppo8aYuOZaIRARD7AcuA6YDNwiIpPbNfsUuBX4nVs5ulLf2MKOKh9T7ECxMSaOuXnP4hlAmaruBBCRAmAesLWtgarudpb5XczRqW37nQPFtkdgjIljoqrurDjQ1TNHVW93phcBM1V1aQdtfw28oqovdrKuJcASgKysrNyCgoKQMvl8Prxe74npV3c389uPm/jJ7H4MSY2ewyXtc0ajWMgIlrO3xULOWMgI4c+Zn5+/XlXzOlrm5h5Br1HVp4CnAPLy8nT27NkhraeoqIjg165asZHMAdXcOOfKXkjZe9rnjEaxkBEsZ2+LhZyxkBGiK6ebX4MrgJFB0znOvKixxQ4UG2OMq4VgHTBRRMaKSDKwEFjl4vv1yLGmFsoqfUy1exAYY+Kca4VAVVuApcAaYBuwQlVLROQhEZkLICIXiUg5sAD4hYiUuJWnvW376/CrXUhmjDGuHiNQ1dXA6nbzlgU9X0egyyjsTgw9bfcgMMbEueg5VSbMNlfUkuFNZvhAu6LYGBPf4rYQbHGGnhaRSEcxxpiIistC0NDcSmmlz4aeNsYY4rQQbN1/lFa/2oFiY4whTgtBiR0oNsaYE+KyEGyuqGVoWjIjBtmBYmOMidNCcNQOFBtjjCPuCkFDcyulB+vsimJjjHHEXSH4+EAdLX61MYaMMcYRd4Vgi92j2BhjThKXhWBw/yRyhvSLdBRjjIkKcVcI2u5RbAeKjTEmIK4KQbNf+eRgnXULGWNMkLgqBOV1fppb1YaWMMaYIHFVCHbX+gG7Wb0xxgSLq0Kw56ifQf2SGDnUDhQbY0ybuCoEu4/6mZo90A4UG2NMEFcLgYjMEZHtIlImIvd1sDxFRF5wlr8vImPcytLU4qe8zm8Hio0xph3XCoGIeIDlwHXAZOAWEZncrtltwGFVnQD8BPiRW3k+OVhHi2IHio0xph039whmAGWqulNVm4ACYF67NvOAZ53nLwJXiUv9Npvbhp62PQJjjDmJqKo7KxaZD8xR1dud6UXATFVdGtRmi9Om3Jne4bSpbreuJcASgKysrNyCgoIe59lwsIWiPQ1896K0qD9G4PP58Hq9kY7RpVjICJazt8VCzljICOHPmZ+fv15V8zpalhi2FGdAVZ8CngLIy8vT2bNn93gds4ELi4oI5bXhVhQDOWMhI1jO3hYLOWMhI0RXTje7hiqAkUHTOc68DtuISCIwCKhxMZMxxph23CwE64CJIjJWRJKBhcCqdm1WAYud5/OB19WtvipjjDEdcq1rSFVbRGQpsAbwAM+oaomIPAQUq+oq4P8BvxGRMuAQgWJhjDEmjFw9RqCqq4HV7eYtC3reACxwM4MxxpiuxdWVxcYYY05lhcAYY+KcFQJjjIlzVgiMMSbOuXZlsVtEpArYE+LLM4Dq07aKvFjIGQsZwXL2tljIGQsZIfw5R6tqZkcLYq4QnAkRKe7sEutoEgs5YyEjWM7eFgs5YyEjRFdO6xoyxpg4Z4XAGGPiXLwVgqciHaCbYiFnLGQEy9nbYiFnLGSEKMoZV8cIjDHGnCre9giMMca0Y4XAGGPiXFwUAhGZIyLbRaRMRO4L03uOFJFCEdkqIiUi8h1n/oMiUiEiG53H54Jec7+TcbuIfPZ0+Z0hvt935r/gDPcdStbdIrLZyVPszBsqIq+KSKnzc4gzX0TkMec9PxKRC4PWs9hpXyoii4Pm5zrrL3Ne26NbxInI2UHba6OIHBWRu6JhW4rIMyJS6dxtr22e69uus/foYc6HReRjJ8vLIjLYmT9GRI4HbdcnQ83T1e/cg5yu/51FJMWZLnOWj+lhxheC8u0WkY2R3pY9oqp9+kFgCOwdwDggGdgETA7D+54FXOg8HwB8AkwGHgTu6aD9ZCdbCjDWyezpKj+wAljoPH8S+GaIWXcDGe3m/Ri4z3l+H/Aj5/nngP8FBLgYeN+ZPxTY6fwc4jwf4iz7wGkrzmuvO8O/5wFgdDRsS+By4EJgSzi3XWfv0cOc1wKJzvMfBeUcE9yu3Xp6lKez37mHOV3/OwPfAp50ni8EXuhJxnbLHwWWRXpb9uQRD3sEM4AyVd2pqk1AATDP7TdV1f2qusF5XgdsA7K7eMk8oEBVG1V1F1BGIHuH+Z1vD1cCLzqvfxb4fC/+CvOcdbZf9zzgOQ14DxgsImcBnwVeVdVDqnoYeBWY4ywbqKrvaeBf83NnmPMqYIeqdnV1edi2paq+SeBeGu3f3+1t19l7dDunqv5NVVucyfcI3EWwUyHm6ex37nbOLvTm3zk4/4vAVW3f0HuS0XnNzcDvuwoejm3ZE/FQCLKBvUHT5XT9gdzrnN3M6cD7zqylzq7dM0G79J3l7Gx+OnAk6D/ymfxeCvxNRNaLyBJnXpaq7neeHwCyQsyZ7TxvPz9UCzn5P1m0bUsIz7br7D1C9VUC3zbbjBWRD0XkDRGZFZS/p3l66/+f23/nE69xltc67XtqFnBQVUuD5kXbtjxFPBSCiBIRL/AScJeqHgV+DowHLgD2E9iNjLTLVPVC4DrgDhG5PHih840l4ucZO/25c4GVzqxo3JYnCce2O9P3EJEHgBbgt86s/cAoVZ0O3A38TkQGhitPB6L+7xzkFk7+ohJt27JD8VAIKoCRQdM5zjzXiUgSgSLwW1X9A4CqHlTVVlX1A78ksBvbVc7O5tcQ2DVMbDe/x1S1wvlZCbzsZDrYttvp/KwMMWcFJ3c5nMn2vw7YoKoHnbxRty0d4dh2nb1Hj4jIrcD1wJecDx2crpYa5/l6Av3tk0LMc8b//8L0dz7xGmf5IKd9tzmv+wLwQlD2qNqWnYmHQrAOmOicLZBMoGthldtv6vQV/j9gm6r+d9D84D69G4G2Mw9W
AQudsxfGAhMJHEzqML/zn7YQmO+8fjHwpxByponIgLbnBA4gbnHytJ29ErzuVcC/OGcwXAzUOruxa4BrRWSIs+t+LbDGWXZURC52tsm/hJLTcdK3rWjblkHCse06e49uE5E5wPeBuap6LGh+poh4nOfjCGy/nSHm6ex37knOcPydg/PPB15vK4w9cDXwsaqe6PKJtm3ZqfZHj/vig8DR9k8IVOMHwvSelxHYpfsI2Og8Pgf8BtjszF8FnBX0mgecjNsJOrOms/wEzor4gMBBspVASgg5xxE4q2ITUNK2fgL9o2uBUuA1YKgzX4DlTpbNQF7Qur7qZCkDvhI0P4/Af94dwBM4V7T3MGcagW9og4LmRXxbEihM+4FmAn22t4Vj23X2Hj3MWUagz7nt32fbWTM3Of8WNgIbgBtCzdPV79yDnK7/nYFUZ7rMWT6uJxmd+b8GvtGubcS2ZU8eNsSEMcbEuXjoGjLGGNMFKwTGGBPnrBAYY0ycs0JgjDFxzgqBMcbEOSsEJi6JyGAR+ZbzfISIvHi615zBe10gQSNmGhNtrBCYeDWYwIiTqOo+VZ1/mvZn4gIC57UbE5XsOgITl0SkbRTa7QQu3DlXVac6Qy58nsAFbBOBRwgMZbwIaAQ+p6qHRGQ8gYt7MoFjwNdU9WMRWQD8AGglMHDZ1QQuUupHYDiA/wReAR4HpgJJwIOq+ifnvW8kMLxBNvC8qv6by5vCGBJP38SYPuk+YKqqXuCMDvtK0LKpBEaLTSXwIX6vqk4XkZ8QGArgfwjcePwbqloqIjOBnxEY4ngZ8FlVrRCRwaraJCLLCFwFuhRARP4vgSEMviqBm8F8ICKvOe89w3n/Y8A6EfmLqha7uSGMsUJgzKkKNXAPiToRqQX+7MzfDExzRpS9FFgp/xiyPsX5+TbwaxFZAfyhk/VfC8wVkXuc6VRglPP8VXUGKRORPxAYqsQKgXGVFQJjTtUY9NwfNO0n8H8mgcC49he0f6GqfsPZQ/gnYL2I5HawfgFuUtXtJ80MvK59X6313RrX2cFiE6/qCNxCtMc0cF+JXc7xgLZ7yZ7vPB+vqu+r6jKgisCwwe3faw1wpzPqJCIyPWjZNRK4Z20/Ascq3g4lozE9YYXAxCWn++VtCdyA/OEQVvEl4DYRaRu1te32pw9L4IbkW4B3CIzqWghMlsDNy78I/DuBg8QfiUiJM93mAwL3sPgIeMmOD5hwsLOGjIkSzllDJw4qGxMutkdgjDFxzvYIjDEmztkegTHGxDkrBMYYE+esEBhjTJyzQmCMMXHOCoExxsS5/w/AKKzNu4Fb8wAAAABJRU5ErkJggg==\n"
384 | },
385 | "metadata": {
386 | "needs_background": "light"
387 | }
388 | }
389 | ]
390 | },
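{
  "cell_type": "markdown",
  "source": [
    "The raw numbers behind the curve are also on disk. A quick sketch for inspecting them with pandas (assuming the logged CSV has the `timestep` and `reward` columns that `plot_curve` reads):"
  ],
  "metadata": {}
},
{
  "cell_type": "code",
  "source": [
    "import pandas as pd\n",
    "\n",
    "df = pd.read_csv(csv_path)\n",
    "print(df.tail())  # the last few evaluation points"
  ],
  "metadata": {},
  "execution_count": null,
  "outputs": []
},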
391 | {
392 | "cell_type": "markdown",
393 | "metadata": {
394 | "id": "1QRAbZ3vfwtW"
395 | },
396 | "source": [
397 | "Good job! Now you have your trained CFR agent on Leduc Hold'em!"
398 | ]
399 | }
400 | ]
401 | }
--------------------------------------------------------------------------------
/leduc_holdem_pretrained.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "leduc_holdem_pretrained.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 |     "<a href=\"https://colab.research.google.com/github/datamllab/rlcard-tutorial/blob/master/leduc_holdem_pretrained.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "miBl4S8JARzX"
31 | },
32 | "source": [
33 | "\n",
34 | "\n",
35 |     "# RLCard Tutorial\n",
36 | "\n",
37 | "## **Having Fun with Pretrained Leduc Model**\n",
38 |     "We have designed a simple human interface to play against the pre-trained model of Leduc Hold'em. Let's go through the process and have some fun!"
39 | ]
40 | },
41 | {
42 | "cell_type": "markdown",
43 | "metadata": {
44 | "id": "y8USmiveSRO5"
45 | },
46 | "source": [
47 | "First, we install RLCard and PyTorch."
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "metadata": {
53 | "id": "zQ8CiXAJjQGi",
54 | "outputId": "a0b7fab7-bb43-4d97-9a36-5bbb89b164a0",
55 | "colab": {
56 | "base_uri": "https://localhost:8080/"
57 | }
58 | },
59 | "source": [
60 | "!pip install rlcard[torch]"
61 | ],
62 | "execution_count": null,
63 | "outputs": [
64 | {
65 | "output_type": "stream",
66 | "name": "stdout",
67 | "text": [
68 | "Collecting rlcard[torch]\n",
69 | " Downloading rlcard-1.0.7.tar.gz (268 kB)\n",
70 | "\u001b[?25l\r\u001b[K |█▏ | 10 kB 26.9 MB/s eta 0:00:01\r\u001b[K |██▍ | 20 kB 16.5 MB/s eta 0:00:01\r\u001b[K |███▋ | 30 kB 11.7 MB/s eta 0:00:01\r\u001b[K |████▉ | 40 kB 1.1 MB/s eta 0:00:01\r\u001b[K |██████ | 51 kB 1.3 MB/s eta 0:00:01\r\u001b[K |███████▎ | 61 kB 1.5 MB/s eta 0:00:01\r\u001b[K |████████▌ | 71 kB 1.5 MB/s eta 0:00:01\r\u001b[K |█████████▊ | 81 kB 1.7 MB/s eta 0:00:01\r\u001b[K |███████████ | 92 kB 1.9 MB/s eta 0:00:01\r\u001b[K |████████████▏ | 102 kB 2.1 MB/s eta 0:00:01\r\u001b[K |█████████████▍ | 112 kB 2.1 MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 122 kB 2.1 MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 133 kB 2.1 MB/s eta 0:00:01\r\u001b[K |█████████████████ | 143 kB 2.1 MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 153 kB 2.1 MB/s eta 0:00:01\r\u001b[K |███████████████████▌ | 163 kB 2.1 MB/s eta 0:00:01\r\u001b[K |████████████████████▊ | 174 kB 2.1 MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 184 kB 2.1 MB/s eta 0:00:01\r\u001b[K |███████████████████████▏ | 194 kB 2.1 MB/s eta 0:00:01\r\u001b[K |████████████████████████▍ | 204 kB 2.1 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 215 kB 2.1 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▉ | 225 kB 2.1 MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 235 kB 2.1 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▎ | 245 kB 2.1 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▌ | 256 kB 2.1 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 266 kB 2.1 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 268 kB 2.1 MB/s \n",
71 | "\u001b[?25hRequirement already satisfied: numpy>=1.16.3 in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.21.5)\n",
72 | "Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.1.0)\n",
73 | "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.10.0+cu111)\n",
74 | "Collecting GitPython\n",
75 | " Downloading GitPython-3.1.27-py3-none-any.whl (181 kB)\n",
76 | "\u001b[K |████████████████████████████████| 181 kB 65.3 MB/s \n",
77 | "\u001b[?25hCollecting gitdb2\n",
78 | " Downloading gitdb2-4.0.2-py3-none-any.whl (1.1 kB)\n",
79 | "Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (3.2.2)\n",
80 | "Collecting gitdb>=4.0.1\n",
81 | " Downloading gitdb-4.0.9-py3-none-any.whl (63 kB)\n",
82 | "\u001b[K |████████████████████████████████| 63 kB 786 kB/s \n",
83 | "\u001b[?25hCollecting smmap<6,>=3.0.1\n",
84 | " Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\n",
85 | "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from GitPython->rlcard[torch]) (3.10.0.2)\n",
86 | "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (1.4.0)\n",
87 | "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (0.11.0)\n",
88 | "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (2.8.2)\n",
89 | "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (3.0.7)\n",
90 | "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->rlcard[torch]) (1.15.0)\n",
91 | "Building wheels for collected packages: rlcard\n",
92 | " Building wheel for rlcard (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
93 | " Created wheel for rlcard: filename=rlcard-1.0.7-py3-none-any.whl size=325373 sha256=56a06885c7d49be2ca8e7f3b52863880542503cae61a78df1bb3d75e929442a1\n",
94 | " Stored in directory: /root/.cache/pip/wheels/8a/90/bd/bc402a48ca90970c9a7c2c4387dcb885fdf6073ec231a605ad\n",
95 | "Successfully built rlcard\n",
96 | "Installing collected packages: smmap, gitdb, rlcard, GitPython, gitdb2\n",
97 | "Successfully installed GitPython-3.1.27 gitdb-4.0.9 gitdb2-4.0.2 rlcard-1.0.7 smmap-5.0.0\n"
98 | ]
99 | }
100 | ]
101 | },
102 | {
103 | "cell_type": "markdown",
104 | "source": [
105 |     "Then we import RLCard, the model zoo in RLCard, and the human agent."
106 | ],
107 | "metadata": {
108 | "id": "sJOXjmEp_TTg"
109 | }
110 | },
111 | {
112 | "cell_type": "code",
113 | "source": [
114 | "import rlcard\n",
115 | "from rlcard import models\n",
116 | "from rlcard.agents import LeducholdemHumanAgent as HumanAgent\n",
117 | "from rlcard.utils import print_card"
118 | ],
119 | "metadata": {
120 | "id": "hgD5JUeM_h3N"
121 | },
122 | "execution_count": null,
123 | "outputs": []
124 | },
125 | {
126 | "cell_type": "markdown",
127 | "source": [
128 |     "We create the environment and set the pre-trained CFR model as the opponent."
129 | ],
130 | "metadata": {
131 | "id": "1GmR_nGC_hNl"
132 | }
133 | },
134 | {
135 | "cell_type": "code",
136 | "metadata": {
137 | "id": "4_A_Br3Jj0xW"
138 | },
139 | "source": [
140 | "env = rlcard.make('leduc-holdem')\n",
141 | "human_agent = HumanAgent(env.num_actions)\n",
142 | "cfr_agent = models.load('leduc-holdem-cfr').agents[0]\n",
143 | "env.set_agents([\n",
144 | " human_agent,\n",
145 | " cfr_agent,\n",
146 | "])"
147 | ],
148 | "execution_count": null,
149 | "outputs": []
150 | },
151 | {
152 | "cell_type": "markdown",
153 | "metadata": {
154 | "id": "HEg4DyV0TPmN"
155 | },
156 | "source": [
157 | "We can start now!\n"
158 | ]
159 | },
160 | {
161 | "cell_type": "code",
162 | "metadata": {
163 | "id": "l_8Kuf47kghG",
164 | "colab": {
165 | "base_uri": "https://localhost:8080/"
166 | },
167 | "outputId": "5ec4ce81-f813-4def-89cd-644a8f5e4243"
168 | },
169 | "source": [
170 | "print(\">> Leduc Hold'em pre-trained model\")\n",
171 | "\n",
172 | "while (True):\n",
173 | " print(\">> Start a new game\")\n",
174 | "\n",
175 | " trajectories, payoffs = env.run(is_training=False)\n",
176 | " # If the human does not take the final action, we need to\n",
177 |     " # print the other players' actions\n",
178 | " final_state = trajectories[0][-1]\n",
179 | " action_record = final_state['action_record']\n",
180 | " state = final_state['raw_obs']\n",
181 | " _action_list = []\n",
182 | " for i in range(1, len(action_record)+1):\n",
183 | " if action_record[-i][0] == state['current_player']:\n",
184 | " break\n",
185 | " _action_list.insert(0, action_record[-i])\n",
186 | " for pair in _action_list:\n",
187 | " print('>> Player', pair[0], 'chooses', pair[1])\n",
188 | "\n",
189 |     " # Let's take a look at the agent's card\n",
190 | " print('=============== CFR Agent ===============')\n",
191 | " print_card(env.get_perfect_information()['hand_cards'][1])\n",
192 | "\n",
193 | " print('=============== Result ===============')\n",
194 | " if payoffs[0] > 0:\n",
195 | " print('You win {} chips!'.format(payoffs[0]))\n",
196 | " elif payoffs[0] == 0:\n",
197 | " print('It is a tie.')\n",
198 | " else:\n",
199 | " print('You lose {} chips!'.format(-payoffs[0]))\n",
200 | " print('')\n",
201 | "\n",
202 | " inputs = input(\"Press any key to continue, Q to exit\\n\")\n",
203 | " if inputs.lower() == \"q\":\n",
204 | " break"
205 | ],
206 | "execution_count": null,
207 | "outputs": [
208 | {
209 | "name": "stdout",
210 | "output_type": "stream",
211 | "text": [
212 | ">> Leduc Hold'em pre-trained model\n",
213 | ">> Start a new game\n",
214 | "\n",
215 | "=============== Community Card ===============\n",
216 | "┌─────────┐\n",
217 | "│░░░░░░░░░│\n",
218 | "│░░░░░░░░░│\n",
219 | "│░░░░░░░░░│\n",
220 | "│░░░░░░░░░│\n",
221 | "│░░░░░░░░░│\n",
222 | "│░░░░░░░░░│\n",
223 | "│░░░░░░░░░│\n",
224 | "└─────────┘\n",
225 | "=============== Your Hand ===============\n",
226 | "┌─────────┐\n",
227 | "│K │\n",
228 | "│ │\n",
229 | "│ │\n",
230 | "│ ♠ │\n",
231 | "│ │\n",
232 | "│ │\n",
233 | "│ K│\n",
234 | "└─────────┘\n",
235 | "=============== Chips ===============\n",
236 | "Yours: +\n",
237 | "Agent 1: ++\n",
238 | "=========== Actions You Can Choose ===========\n",
239 | "0: call, 1: raise, 2: fold\n",
240 | "\n",
241 | ">> You choose action (integer): 0\n",
242 | ">> Player 1 chooses check\n",
243 | "\n",
244 | "=============== Community Card ===============\n",
245 | "┌─────────┐\n",
246 | "│J │\n",
247 | "│ │\n",
248 | "│ │\n",
249 | "│ ♥ │\n",
250 | "│ │\n",
251 | "│ │\n",
252 | "│ J│\n",
253 | "└─────────┘\n",
254 | "=============== Your Hand ===============\n",
255 | "┌─────────┐\n",
256 | "│K │\n",
257 | "│ │\n",
258 | "│ │\n",
259 | "│ ♠ │\n",
260 | "│ │\n",
261 | "│ │\n",
262 | "│ K│\n",
263 | "└─────────┘\n",
264 | "=============== Chips ===============\n",
265 | "Yours: ++\n",
266 | "Agent 1: ++\n",
267 | "=========== Actions You Can Choose ===========\n",
268 | "0: raise, 1: fold, 2: check\n",
269 | "\n",
270 | ">> You choose action (integer): 0\n",
271 | ">> Player 1 chooses raise\n",
272 | "\n",
273 | "=============== Community Card ===============\n",
274 | "┌─────────┐\n",
275 | "│J │\n",
276 | "│ │\n",
277 | "│ │\n",
278 | "│ ♥ │\n",
279 | "│ │\n",
280 | "│ │\n",
281 | "│ J│\n",
282 | "└─────────┘\n",
283 | "=============== Your Hand ===============\n",
284 | "┌─────────┐\n",
285 | "│K │\n",
286 | "│ │\n",
287 | "│ │\n",
288 | "│ ♠ │\n",
289 | "│ │\n",
290 | "│ │\n",
291 | "│ K│\n",
292 | "└─────────┘\n",
293 | "=============== Chips ===============\n",
294 | "Yours: ++++++\n",
295 | "Agent 1: ++++++++++\n",
296 | "=========== Actions You Can Choose ===========\n",
297 | "0: call, 1: fold\n",
298 | "\n",
299 | ">> You choose action (integer): 0\n",
300 | ">> Player 0 chooses call\n",
301 | "=============== CFR Agent ===============\n",
302 | "┌─────────┐\n",
303 | "│J │\n",
304 | "│ │\n",
305 | "│ │\n",
306 | "│ ♠ │\n",
307 | "│ │\n",
308 | "│ │\n",
309 | "│ J│\n",
310 | "└─────────┘\n",
311 | "=============== Result ===============\n",
312 | "You lose 5.0 chips!\n",
313 | "\n",
314 | "Press any key to continue, Q to exit\n",
315 | "Q\n"
316 | ]
317 | }
318 | ]
319 | },
320 | {
321 | "cell_type": "markdown",
322 | "source": [
323 |     "Sad... we lose... (Open in Colab to see more)"
324 | ],
325 | "metadata": {
326 | "id": "h7ckmk5BAASI"
327 | }
328 | }
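,
{
  "cell_type": "markdown",
  "source": [
    "A single hand does not tell us much about strength, though. For a quantitative picture, here is a minimal sketch (reusing the `tournament` utility from the CFR tutorial) that pits the pre-trained agent against a random agent over many games:"
  ],
  "metadata": {}
},
{
  "cell_type": "code",
  "source": [
    "from rlcard.agents import RandomAgent\n",
    "from rlcard.utils import tournament\n",
    "\n",
    "eval_env = rlcard.make('leduc-holdem')\n",
    "eval_env.set_agents([\n",
    "    cfr_agent,\n",
    "    RandomAgent(num_actions=eval_env.num_actions),\n",
    "])\n",
    "# Average payoff of the pre-trained agent over 10000 games\n",
    "print(tournament(eval_env, 10000)[0])"
  ],
  "metadata": {},
  "execution_count": null,
  "outputs": []
}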
329 | ]
330 | }
--------------------------------------------------------------------------------
/random.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "random.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "view-in-github",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 |     "<a href=\"https://colab.research.google.com/github/datamllab/rlcard-tutorial/blob/master/random.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "miBl4S8JARzX"
31 | },
32 | "source": [
33 | "\n",
34 | "\n",
35 |     "# RLCard Tutorial\n",
36 | "\n",
37 | "## **Playing with Random Agents**\n",
38 |     "This example shows how to make an RLCard environment and interact with it. We use the Leduc Hold'em environment as an example to walk through the process of installing RLCard and making an environment."
39 | ]
40 | },
41 | {
42 | "cell_type": "markdown",
43 | "metadata": {
44 | "id": "zZbqww8VXSOx"
45 | },
46 | "source": [
47 | "First, we install RLCard and PyTorch."
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "metadata": {
53 | "id": "zQ8CiXAJjQGi",
54 | "outputId": "8315136d-261c-43a1-a25e-26914e3e048d",
55 | "colab": {
56 | "base_uri": "https://localhost:8080/"
57 | }
58 | },
59 | "source": [
60 | "!pip install rlcard[torch]"
61 | ],
62 | "execution_count": null,
63 | "outputs": [
64 | {
65 | "output_type": "stream",
66 | "name": "stdout",
67 | "text": [
68 | "Requirement already satisfied: rlcard[torch] in /usr/local/lib/python3.7/dist-packages (1.0.7)\n",
69 | "Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.1.0)\n",
70 | "Requirement already satisfied: numpy>=1.16.3 in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.21.5)\n",
71 | "Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (3.2.2)\n",
72 | "Requirement already satisfied: GitPython in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (3.1.27)\n",
73 | "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (1.10.0+cu111)\n",
74 | "Requirement already satisfied: gitdb2 in /usr/local/lib/python3.7/dist-packages (from rlcard[torch]) (4.0.2)\n",
75 | "Requirement already satisfied: gitdb>=4.0.1 in /usr/local/lib/python3.7/dist-packages (from gitdb2->rlcard[torch]) (4.0.9)\n",
76 | "Requirement already satisfied: smmap<6,>=3.0.1 in /usr/local/lib/python3.7/dist-packages (from gitdb>=4.0.1->gitdb2->rlcard[torch]) (5.0.0)\n",
77 | "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from GitPython->rlcard[torch]) (3.10.0.2)\n",
78 | "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (2.8.2)\n",
79 | "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (0.11.0)\n",
80 | "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (1.4.0)\n",
81 | "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->rlcard[torch]) (3.0.7)\n",
82 | "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->rlcard[torch]) (1.15.0)\n"
83 | ]
84 | }
85 | ]
86 | },
87 | {
88 | "cell_type": "markdown",
89 | "source": [
90 |     "Then we import RLCard and a random agent, which selects actions uniformly at random, to interact with the environment."
91 | ],
92 | "metadata": {
93 | "id": "ZgY-v-eKrVSb"
94 | }
95 | },
96 | {
97 | "cell_type": "code",
98 | "metadata": {
99 | "id": "4_A_Br3Jj0xW"
100 | },
101 | "source": [
102 | "import rlcard\n",
103 | "from rlcard.agents import RandomAgent"
104 | ],
105 | "execution_count": null,
106 | "outputs": []
107 | },
108 | {
109 | "cell_type": "markdown",
110 | "metadata": {
111 | "id": "N2ltkfinYmiU"
112 | },
113 | "source": [
114 |     "Now we can create a Leduc Hold'em environment by simply passing `leduc-holdem` to RLCard's `make` function."
115 | ]
116 | },
117 | {
118 | "cell_type": "code",
119 | "metadata": {
120 | "id": "l_8Kuf47kghG"
121 | },
122 | "source": [
123 | "env = rlcard.make(\"leduc-holdem\")"
124 | ],
125 | "execution_count": null,
126 | "outputs": []
127 | },
128 | {
129 | "cell_type": "markdown",
130 | "source": [
131 | "Everything about Leduc Hold'em is wrapped in `env`. Now let's take a look at what we have in the environment."
132 | ],
133 | "metadata": {
134 | "id": "c-fBQP6MsZBB"
135 | }
136 | },
137 | {
138 | "cell_type": "code",
139 | "metadata": {
140 | "id": "8bqfMncnJTYU",
141 | "colab": {
142 | "base_uri": "https://localhost:8080/"
143 | },
144 | "outputId": "9d0866a7-8199-4d4f-d38e-a2145f7504ac"
145 | },
146 | "source": [
147 | "print(\"Number of actions:\", env.num_actions)\n",
148 | "print(\"Number of players:\", env.num_players)\n",
149 | "print(\"Shape of state:\", env.state_shape)\n",
150 | "print(\"Shape of action:\", env.action_shape)"
151 | ],
152 | "execution_count": null,
153 | "outputs": [
154 | {
155 | "output_type": "stream",
156 | "name": "stdout",
157 | "text": [
158 | "Number of actions: 4\n",
159 | "Number of players: 2\n",
160 | "Shape of state: [[36], [36]]\n",
161 | "Shape of action: [None, None]\n"
162 | ]
163 | }
164 | ]
165 | },
166 | {
167 | "cell_type": "markdown",
168 | "source": [
169 |     "We can see that the Leduc Hold'em environment is a 2-player game with 4 possible actions. The state (i.e., all the information a player can observe at a specific step) is a vector of size 36 for each player. There is no action feature. Different environments have different characteristics. You can try other environments as well. For example, you can try Dou Dizhu."
170 | ],
171 | "metadata": {
172 | "id": "0tq4Y7bfs7Zh"
173 | }
174 | },
175 | {
176 | "cell_type": "code",
177 | "metadata": {
178 | "id": "GhgGYyick13x",
179 | "colab": {
180 | "base_uri": "https://localhost:8080/"
181 | },
182 | "outputId": "5eb95e0f-b0cd-4913-f61a-c9f92e275ab8"
183 | },
184 | "source": [
185 | "env_doudizhu = rlcard.make(\"doudizhu\")\n",
186 | "print(\"Number of actions:\", env_doudizhu.num_actions)\n",
187 | "print(\"Number of players:\", env_doudizhu.num_players)\n",
188 | "print(\"Shape of state:\", env_doudizhu.state_shape)\n",
189 | "print(\"Shape of action:\", env_doudizhu.action_shape)"
190 | ],
191 | "execution_count": null,
192 | "outputs": [
193 | {
194 | "output_type": "stream",
195 | "name": "stdout",
196 | "text": [
197 | "Number of actions: 27472\n",
198 | "Number of players: 3\n",
199 | "Shape of state: [[790], [901], [901]]\n",
200 | "Shape of action: [[54], [54], [54]]\n"
201 | ]
202 | }
203 | ]
204 | },
205 | {
206 | "cell_type": "markdown",
207 | "source": [
208 |     "This environment is much more complex, with a very large number of possible actions and much larger state and action spaces. Despite these challenges, RLCard implements the [DMC](https://github.com/kwai/DouZero) algorithm from [DouZero](https://arxiv.org/abs/2106.06135), which leads to strong agents. We will show how to train a strong Dou Dizhu AI with RLCard in later tutorials."
209 | ],
210 | "metadata": {
211 | "id": "YmHHx63OtveO"
212 | }
213 | },
214 | {
215 | "cell_type": "markdown",
216 | "source": [
217 |     "Now let's interact with the environment using random agents. We first tell the environment which agents to use by calling `set_agents`."
218 | ],
219 | "metadata": {
220 | "id": "FNw8nykwuqp9"
221 | }
222 | },
223 | {
224 | "cell_type": "code",
225 | "source": [
226 | "agent = RandomAgent(num_actions=env.num_actions)\n",
227 | "env.set_agents([agent for _ in range(env.num_players)])"
228 | ],
229 | "metadata": {
230 | "id": "vNx-cm-Gu_OU"
231 | },
232 | "execution_count": null,
233 | "outputs": []
234 | },
235 | {
236 | "cell_type": "markdown",
237 | "source": [
238 |     "Now we are ready to interact with the environment via `run`."
239 | ],
240 | "metadata": {
241 | "id": "1x4z9QS9vJOo"
242 | }
243 | },
244 | {
245 | "cell_type": "code",
246 | "source": [
247 | "trajectories, player_wins = env.run(is_training=False)"
248 | ],
249 | "metadata": {
250 | "id": "GQUMYyObvTlc"
251 | },
252 | "execution_count": null,
253 | "outputs": []
254 | },
255 | {
256 | "cell_type": "markdown",
257 | "source": [
258 |     "Here, `is_training=False` indicates that the environment uses the `eval_step` method of the random agent to interact. If `is_training` is `True`, it will use `step` instead. This interface is designed for reinforcement learning algorithms that behave differently during training and evaluation. `trajectories` holds the collected data. Before printing it, let's look at the two calls directly."
259 | ],
260 | "metadata": {
261 | "id": "X51bJQg7vYqV"
262 | }
263 | },
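{
  "cell_type": "markdown",
  "source": [
    "A minimal sketch of the two calls (illustration only, since `env.run` makes them for us; for the `RandomAgent`, `eval_step` additionally returns an info dictionary with action probabilities):"
  ],
  "metadata": {}
},
{
  "cell_type": "code",
  "source": [
    "state, player_id = env.reset()\n",
    "action = agent.step(state)                  # used when is_training=True\n",
    "eval_action, info = agent.eval_step(state)  # used when is_training=False\n",
    "print('step() chose', action)\n",
    "print('eval_step() chose', eval_action, 'with info', info)"
  ],
  "metadata": {},
  "execution_count": null,
  "outputs": []
},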
264 | {
265 | "cell_type": "code",
266 | "source": [
267 | "print(trajectories)"
268 | ],
269 | "metadata": {
270 | "colab": {
271 | "base_uri": "https://localhost:8080/"
272 | },
273 | "id": "jyR-66zJv7by",
274 | "outputId": "c33892f3-1b73-4dda-f134-6b4ef9a02a6c"
275 | },
276 | "execution_count": null,
277 | "outputs": [
278 | {
279 | "output_type": "stream",
280 | "name": "stdout",
281 | "text": [
282 | "[[{'legal_actions': OrderedDict([(0, None), (1, None), (2, None)]), 'obs': array([0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
283 | " 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
284 | " 0., 0.]), 'raw_obs': {'hand': 'HK', 'public_card': None, 'all_chips': [2, 4], 'my_chips': 2, 'legal_actions': ['call', 'raise', 'fold'], 'current_player': 0}, 'raw_legal_actions': ['call', 'raise', 'fold'], 'action_record': [(1, 'raise'), (0, 'fold')]}, 2, {'legal_actions': OrderedDict([(1, None), (2, None), (3, None)]), 'obs': array([0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
285 | " 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
286 | " 0., 0.]), 'raw_obs': {'hand': 'HK', 'public_card': None, 'all_chips': [2, 4], 'my_chips': 2, 'legal_actions': ['raise', 'fold', 'check'], 'current_player': 1}, 'raw_legal_actions': ['raise', 'fold', 'check'], 'action_record': [(1, 'raise'), (0, 'fold')]}], [{'legal_actions': OrderedDict([(0, None), (1, None), (2, None)]), 'obs': array([1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
287 | " 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
288 | " 0., 0.]), 'raw_obs': {'hand': 'SJ', 'public_card': None, 'all_chips': [2, 1], 'my_chips': 1, 'legal_actions': ['call', 'raise', 'fold'], 'current_player': 1}, 'raw_legal_actions': ['call', 'raise', 'fold'], 'action_record': [(1, 'raise'), (0, 'fold')]}, 1, {'legal_actions': OrderedDict([(1, None), (2, None), (3, None)]), 'obs': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,\n",
289 | " 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
290 | " 0., 0.]), 'raw_obs': {'hand': 'SJ', 'public_card': None, 'all_chips': [2, 4], 'my_chips': 4, 'legal_actions': ['raise', 'fold', 'check'], 'current_player': 1}, 'raw_legal_actions': ['raise', 'fold', 'check'], 'action_record': [(1, 'raise'), (0, 'fold')]}]]\n"
291 | ]
292 | }
293 | ]
294 | },
295 | {
296 | "cell_type": "markdown",
297 | "source": [
298 | "These consist of observations and actions of the two players. They can be used for training reinforcement learning agents. Let's print out `player_wins` as well."
299 | ],
300 | "metadata": {
301 | "id": "zOYUhs_nwCRX"
302 | }
303 | },
304 | {
305 | "cell_type": "code",
306 | "source": [
307 | "print(player_wins)"
308 | ],
309 | "metadata": {
310 | "colab": {
311 | "base_uri": "https://localhost:8080/"
312 | },
313 | "id": "lSfpMIhFwf_O",
314 | "outputId": "d69070b5-24e4-4d62-d8a7-b895beda5a27"
315 | },
316 | "execution_count": null,
317 | "outputs": [
318 | {
319 | "output_type": "stream",
320 | "name": "stdout",
321 | "text": [
322 | "[-1. 1.]\n"
323 | ]
324 | }
325 | ]
326 | },
327 | {
328 | "cell_type": "markdown",
329 | "source": [
330 |     "This means the second player wins the game and receives a payoff of 1, while the first player loses 1. These are all the interfaces we need to know to train most reinforcement learning agents! We will show you how to do this in later tutorials."
331 | ],
332 | "metadata": {
333 | "id": "2DANqImrw3Nb"
334 | }
335 | }
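,
{
  "cell_type": "markdown",
  "source": [
    "As a final sanity check: a single game is very noisy, so averaging payoffs over many games gives a better picture. A quick sketch (two random agents should average out near zero):"
  ],
  "metadata": {}
},
{
  "cell_type": "code",
  "source": [
    "import numpy as np\n",
    "\n",
    "num_games = 1000\n",
    "total_payoffs = np.zeros(env.num_players)\n",
    "for _ in range(num_games):\n",
    "    _, player_wins = env.run(is_training=False)\n",
    "    total_payoffs += player_wins\n",
    "print('Average payoffs:', total_payoffs / num_games)"
  ],
  "metadata": {},
  "execution_count": null,
  "outputs": []
}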
336 | ]
337 | }
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | rlcard
2 |
--------------------------------------------------------------------------------